Single-level mediation with neuroimaging data
This tutorial demonstrates the use of mediation_brain to run a single-level mediation analysis on a sample dataset (an emotion regulation dataset from Wager et al. 2008, Neuron).
Mediation analysis: The basics
Mediation is a useful tool for connecting variables into pathways. It extends the idea of a simple association between two variables (let's call them x and y) to a pathway involving three (or more) variables. Let's write the association x predicts y like this: [ x -> y ]. This relationship is shown graphically in the diagram below:
Suppose we hypothesize that the effect of x on y is transmitted via its influence on an intermediate variable, m. With mediation analysis, you can build a model of associations among three variables, i.e., [ x -> m -> y ]. m is a mediator, a variable that explains some or all of the relationship between x and y.
Mediation models are especially useful for developing models in which an association between two variables is transmitted via a third variable. For example, the effects of a psychological stressor (x) on heart rate (y) may be mediated by activity in brain systems responsible for threat. The stressor activates the brain, and the brain in turn controls the heart.
In mediation analysis, x is called the initial variable. It will often be an independent variable (IV) that is experimentally manipulated, but it doesn't have to be. y is called the outcome variable (or dependent variable), and it is usually the endpoint that one is interested in explaining. m is the mediator, a variable that is thought to be important for transmitting the effect of x on y, or to lie on the "causal path" from x to y. It is also possible to include multiple initial variables, multiple mediators, and other covariates in such models. The broader family to which the [ x -> m -> y ] model belongs is called path models.
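The standard single-level model boils down to three regressions, which the sketch below illustrates with simulated data (base MATLAB only; this is for intuition, and the toolbox adds significance testing, bootstrapping, and robust options on top of this):

```matlab
% Illustration with simulated data: the three regressions behind
% the [ x -> m -> y ] model
rng(1)
n = 30;
x = randn(n, 1);                       % initial variable
m = 0.5*x + randn(n, 1);               % mediator, partly driven by x
y = 0.4*m + 0.2*x + randn(n, 1);       % outcome

c      = [ones(n,1) x] \ y;            % total effect of x on y (path c)
a      = [ones(n,1) x] \ m;            % effect of x on m (path a)
bcprim = [ones(n,1) x m] \ y;          % y on x and m: c' and path b

ab = a(2) * bcprim(3);                 % indirect (mediated) effect a*b
% For OLS with a single mediator, c = c' + a*b holds exactly:
% c(2) - (bcprim(2) + ab) is zero up to rounding error
```

The key quantity is the product a*b, the portion of the x -> y effect transmitted through m; the remainder, c', is the direct effect of x on y controlling for m.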
Brain mediators and mediation effect parametric mapping (MEPM)
Mediation is a natural fit for neuroimaging data because it lets us examine the relationships between experimental variables (e.g., task conditions), brain responses (e.g., contrast values), and outcomes (e.g., behavior or experience) in a single model. These relationships would often be tested piecemeal in standard GLM analyses. The mediation analysis allows us to identify brain regions with effect sizes for the task -> brain and brain -> outcome relationships that are jointly large enough to explain task effects on the outcome. Other use cases include:
- In a Randomized Controlled Trial, testing whether brain responses mediate the effects of treatment assignment on clinical outcomes
- Testing whether brain activity mediates the effects of clinical status (does the person have fibromyalgia or not) on symptoms (pain or fatigue)
- Testing whether brain activity in one area (e.g., nucleus accumbens) mediates the relationship between another brain area (e.g., prefrontal cortex) and a behavior (e.g., emotion regulation success).
Mediation analyses can be done at two levels. In single-level mediation, there is one observation per person. Each person has a value for x, m, and y, and the mediation model tests correlations between individual differences in scores on each variable. This is the most common type of mediation analysis. In multi-level mediation, we manipulate tasks and observe both brain activity and behavioral responses (e.g., ratings or performance) within-person across time (or trials within a task).
The Multilevel Mediation and Moderation (M3) Matlab Toolbox will perform both single- and multilevel mediation. The function mediation.m can operate on any type of input data (behavior, psychosocial measures, physiology, brain, genetics, etc.), and will run both single and multilevel mediation depending on the input data.
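For a simple behavioral (non-brain) analysis, a call to mediation.m might look like the sketch below. The option names shown ('boot', 'plots', 'verbose') are common toolbox options, but check help mediation for the full interface:

```matlab
% Sketch: single-level mediation on three behavioral vectors
% (simulated data here; in practice these would be real measures)
n = 30;
x = randn(n, 1);
m = 0.5*x + randn(n, 1);
y = 0.4*m + randn(n, 1);

% 'boot' requests bootstrap significance tests; 'plots' shows path diagrams
[paths, stats] = mediation(x, y, m, 'boot', 'plots', 'verbose');
```

The paths output contains the estimated a, b, c', c, and a*b coefficients, and stats contains the associated inferential statistics.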
The toolbox also has special functions for running mediation on each voxel in a neuroimaging dataset (single or multilevel) and saving maps of mediation effects:
mediation_brain % for single-level mediation
mediation_brain_multilevel % for multilevel mediation
Here, we'll explore single-level mediation, and we'll deal with the multi-level case in a later tutorial. So we'll run mediation_brain. This function takes a series of brain images (i.e., one image per person) as one of its inputs (either x, m, or y). The other inputs are vectors, with one observation per person. mediation_brain will load the images from (a) a series of 3-D .nii or .img files [one per person] or (b) a 4-D .nii file with one volume per person. It will run mediation analysis with multiple options (e.g., bias-corrected, accelerated bootstrap tests and robust regression are desirable) on each voxel individually, saving the path a, path b, and a*b mediation effects in image files.
The output is a set of brain images with the Path a slopes ("effects") and P-values, Path b slopes and P-values, and a*b slopes and P-values. Special functions like mediation_brain_results help to load these images and get, show, and make tables of results. Scripts including mediation_brain_results_all_script generate a whole set of results, tables, and figures for multiple effects. publish_mediation_report generates a series of results figures and tables and saves them in a date-stamped HTML report.
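A results call might look like the following sketch, run from the directory containing the saved mediation images. The option names and thresholds here are assumptions; check help mediation_brain_results for the actual interface:

```matlab
% Hypothetical usage sketch -- thresholds and option names are assumptions
% 'all'    : show results for all paths (a, b, a*b)
% 'thresh' : a set of voxel-wise p-value thresholds
% 'size'   : corresponding cluster extent thresholds
% 'prune'  : keep lower-threshold voxels only if contiguous with
%            higher-threshold ones
mediation_brain_results('all', 'thresh', [.005 .01 .05], ...
    'size', [3 1 1], 'prune');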
The brain images can play the role of x, m, or y in the model -- but most often m, if we're testing brain activity as a mediator of something else.
About the dataset
"Wager_et_al_2008_Neuron_EmotionReg" The dataset is a series of contrast images from N = 30 participants. Each image is a contrast image for [reappraise neg vs. look neg]
These data were published in: Wager, T. D., Davidson, M. L., Hughes, B. L., Lindquist, M. A., & Ochsner, K. N. (2008). Prefrontal-subcortical pathways mediating successful emotion regulation. Neuron, 59, 1037-50.
In this study, each participant performed three tasks:
(1) Viewing affectively neutral images (Look Neu)
(2) Viewing aversive images (Look Neg)
(3) Viewing aversive images and reappraising them to make them less aversive (Reapp Neg)
For example, when reappraising the spider image below, you might notice the egg sacs on her back and realize that this is a cute momma spider.
Negative affect reports are shown for the three conditions below. Reappraising caused a significant drop in self-reported affect (red vs. blue bars). The difference scores for this [Reappraise Negative - Look Negative] contrast are defined as "Reappraisal success" for a person.
In this tutorial, we'll identify regions that mediate effects of a pre-defined lateral prefrontal (PFC) region of interest on Reappraisal Success. We'll treat Reappraisal Success as the outcome (Y). The X values will be scores on the [Reappraise Negative - Look Negative] for each of 30 participants averaged over a right PFC region. For the mediator (M), we'll enter the name of a 4-D .nii image containing 30 [Reappraise Negative - Look Negative] images, one for each participant.
mediation_brain will test the PFC -> [Voxel] -> Success pathway for each voxel in the brain, saving maps of each relevant effect.
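The eventual call will look something like the sketch below. The variable names (pfc_scores, success_scores, img_names) are placeholders, and the option names should be checked against help mediation_brain:

```matlab
% Hypothetical sketch -- placeholder variable names; check
% 'help mediation_brain' for exact options
% pfc_scores     : 30 x 1 vector of ROI-averaged contrast values (X)
% success_scores : 30 x 1 vector of reappraisal success scores (Y)
% img_names      : name of the 4-D .nii file with 30 contrast images (M)
results = mediation_brain(pfc_scores, success_scores, img_names, ...
    'names', {'PFC' 'Success' 'Brain mediator'}, ...
    'mask', which('gray_matter_mask.img'));
```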
The focus of the publication was on subcortical systems, particularly nucleus accumbens and amygdala. The published analyses used robust bootstrapped analyses with 10,000 samples, which take a long time to estimate. The current version of this tutorial uses more "quick and dirty" methods and so produces different results -- but it illustrates the concept.
Step 1: Make a new analysis directory to save results, and go there
andir = 'Test_mediation';
mkdir(andir); cd(andir)
Step 2: Load image data and behavioral variables
dinf = what('Wager_et_al_2008_Neuron_EmotionReg');
%imgs = filenames(fullfile(dinf.path,'con_*img'), 'char', 'absolute');
imgs = fullfile(dinf(1).path, 'Wager_2008_emo_reg_vs_look_neg_contrast_images.nii.gz');
behav_dat = importdata(fullfile(dinf(1).path,'Wager_2008_emotionreg_behavioral_data.txt'))
Note: an alternative, convenient way to load the imaging (but not behavioral) data into an object is below. This allows you to explore the dataset with other CANlab tools. There are a number of other datasets and brain patterns that can be loaded this way.
[data_obj, subject_names, image_names] = load_image_set('emotionreg', 'noverbose');
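Once loaded as an object, the data can be inspected with the object's own methods. One reasonable first look (method names assumed from CANlab's fmri_data object; run methods(data_obj) to see what is available):

```matlab
% Quick diagnostic look at the loaded image set -- 'plot' is assumed to
% be available as an fmri_data method in CanlabCore
plot(data_obj)
```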
Step 3: Load and display mask
The mask determines which voxels are analyzed. The standard mask is in the CanlabCore Tools repository, so you need the folder containing it (and other CanlabCore folders) on your path.
mask = which('gray_matter_mask.img')
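To actually display the mask and check its coverage, one option is to load it into an fmri_data object and use orthviews. This is a sketch using standard CanlabCore tools, shown here as one reasonable way to visualize it:

```matlab
% Load the mask file into an object and display it on orthogonal slices
mask_obj = fmri_data(mask, 'noverbose');
orthviews(mask_obj)
```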