Multilevel mediation in fMRI

Mediation analysis: The basics

Mediation is a useful tool for connecting variables into pathways. It extends the idea of a simple association between two variables (let's call them x and y) to a pathway involving three (or more) variables. Let's write the association x predicts y like this: [ x -> y ]. This relationship is shown graphically in the diagram below:
basic_two_var.png
Suppose we hypothesize that the effect of x on y is transmitted via its influence on an intermediate variable, m. With mediation analysis, you can build a model of associations among three variables, i.e., [ x -> m -> y ]. m is a mediator, a variable that explains some or all of the relationship between x and y.
basic_mediation.png
Mediation models are especially useful when an association between two variables is thought to be transmitted via a third variable. For example, the effects of a psychological stressor (x) on heart rate (y) may be mediated by activity in brain systems responsible for threat. The stressor activates the brain, and the brain in turn controls the heart.
In mediation analysis, x is called the initial variable. It will often be an independent variable (IV) that is experimentally manipulated, but it doesn't have to be. y is called the outcome variable (or dependent variable), and it is usually the endpoint that one is interested in explaining. m is the mediator, a variable thought to be important for transmitting the effect of x on y, i.e., to lie on the "causal path" from x to y. It is also possible to include multiple initial variables, multiple mediators, and other covariates in such models. The broader family to which the [ x -> m -> y ] model belongs is called path models.
Here are some other examples. Exercise may improve memory via neurogenesis: exercise causes new neurons to be generated in the brain, which helps the brain form new memories more efficiently. Here, x is exercise, m is neurogenesis, and y is memory. Similarly, smoking (x) may lead to premature death (y) via increased rates of cancer (m).

Estimating path models and statistical inference

Testing for mediation involves fitting two linear regression equations. The slopes (coefficients) of these equations are called paths, and the estimated regression coefficients are path coefficients.
There are three path coefficients in the model above. The path coefficient a captures the effect of x on m (e.g., smoking on cancer in the example above) and is estimated by the first equation. The path coefficient b captures the effect of m on y (e.g., cancer on mortality in the example above), and the coefficient c' captures the direct effect of x on y after controlling for m; both are estimated by the second equation. The path through m is called the indirect pathway, and the mediation via m is the indirect effect. Statistical inference on this pathway is made by testing the product of the a and b path coefficients.
The equations are:
  m = i1 + a*x + e1
  y = i2 + b*m + c'*x + e2
where i1 and i2 are intercepts and e1 and e2 are error terms.
To test mediation, we estimate both of these equations, calculate the a*b product, and turn this into an inferential statistic (e.g., a t statistic) by dividing by an estimate of its standard error. The Sobel test is one way of doing this using a formula, but it is only asymptotically accurate in large samples and depends strongly on assumptions of normality. In practice, it is typically overconservative. Mediation is more commonly tested using bootstrapping, as described in Shrout and Bolger (2002).
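To make the logic concrete, here is a minimal single-level sketch in plain Matlab, using regress and bootstrp from the Statistics Toolbox rather than the M3 toolbox's own implementation. The vectors x, m, and y are hypothetical column vectors with one observation per person.

slope = @(B) B(2);                                          % pull the slope of interest out of a coefficient vector
a_path  = @(x, m)    slope(regress(m, [ones(size(x)) x]));  % a: effect of x on m
b_path  = @(x, m, y) slope(regress(y, [ones(size(x)) m x]));% b: effect of m on y, controlling for x
ab_path = @(x, m, y) a_path(x, m) .* b_path(x, m, y);       % indirect effect a*b

ab_hat  = ab_path(x, m, y);                                 % point estimate of a*b
boot_ab = bootstrp(10000, ab_path, x, m, y);                % bootstrap distribution of a*b
ci95    = prctile(boot_ab, [2.5 97.5]);                     % 95% percentile confidence interval
p_boot  = 2 * min(mean(boot_ab <= 0), mean(boot_ab >= 0));  % approximate two-tailed bootstrap P-value

If 0 falls outside ci95, the indirect effect is significant at P < 0.05; this is the same logic as the toolbox's bootstrapped distribution shown next.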
Here is an example of a bootstrapped distribution for the indirect effect from the M3 Mediation Toolbox. This distribution provides confidence intervals and P-values for the a*b effect and other path coefficients:
bootstrap_hist.png
The dark shading shows the top and bottom 2.5% of the distribution, and thus the 95% confidence interval for a*b. If 0 falls outside of this (in the dark gray zone), the effect is significant at P < 0.05.
In whole-brain mediation analysis, we bootstrap the path coefficients and the mediation effect at each brain voxel and save maps of P-values. These maps are then subjected to voxel-wise False Discovery Rate (FDR) correction for multiple comparisons.
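As an illustration of the thresholding step (not the toolbox's own results code), a Benjamini-Hochberg FDR cutoff for a hypothetical vector pvals of voxel-wise a*b P-values can be computed in a few lines of Matlab:

q = 0.05;                                        % desired false discovery rate
p_sorted = sort(pvals(:));                       % sort P-values, smallest first
k = (1:numel(p_sorted))' / numel(p_sorted);      % rank / number of tests
cutoff = max([p_sorted(p_sorted <= k*q); 0]);    % largest P passing the BH criterion (0 if none pass)
sig_voxels = pvals <= cutoff & cutoff > 0;       % voxels significant at FDR q < .05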

Counterfactual interpretation of mediation

A way of understanding the mediation effect is in terms of a counterfactual: What x -> y association would we have observed if we had prevented m from varying?
We can't directly observe the counterfactual case, but we can compare the simple x -> y effect when we don't control for the mediator (Path c in the figure above) with the x -> y effect when we do control for the mediator (Path c'). Thus, we want to statistically test whether c - c' = 0.
It turns out that, with a little algebraic manipulation, we can show that c - c' = a*b.
So a statistical test of a*b is the same as comparing the effect of x on y across two models: One in which the mediator can vary, and one in which it's statistically adjusted for -- the regression equivalent of holding it constant.
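You can verify this identity numerically with simulated data (plain Matlab; the values below are arbitrary):

rng(1)
n = 200;
x = randn(n, 1);
m = 0.5*x + randn(n, 1);                 % true a = 0.5
y = 0.4*m + 0.2*x + randn(n, 1);         % true b = 0.4, c' = 0.2
Ba = regress(m, [ones(n,1) x]);          % a  = Ba(2)
Bb = regress(y, [ones(n,1) x m]);        % c' = Bb(2), b = Bb(3)
Bc = regress(y, [ones(n,1) x]);          % c  = Bc(2), the total effect
disp([Bc(2) - Bb(2), Ba(2)*Bb(3)])       % c - c' and a*b: identical (up to rounding)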

The M3 Mediation Toolbox

The multilevel mediation and moderation (M3) toolbox is a Matlab toolbox designed to permit tests of both single-level and multi-level models.

A note on causality

Mediation models of the type we explore here (and path models more generally) are built on linear regression, with the same assumptions and limitations as other linear models. The model is directional: as with a standard multiple regression model, we are explaining y as a function of x and m. And, as with standard regression, the presence of a statistical association does not provide direct information about causality. The observed relationships could, in principle, be caused by other variables that are not included in the model (i.e., lurking variables or common causes), and/or the direction of causality could be the reverse of how you've formulated it in the model.
Experimental manipulation of variables can provide stronger evidence for causal associations, and sometimes both x and m are experimentally manipulated. This is a major strength of experimental methods across all disciplines! However, when variables are observed rather than manipulated, statistical associations very rarely provide strong information about causality. In particular, when x and m are both observed (not manipulated), reversing the mediation (i.e., swapping x and m) is not a good way to determine what the form of the model should be. This is because differences in relative error (or reliability) could make one model fit better than another, irrespective of the true causality.
lurking.png
An alternative to the linear model approach we describe here is the causal mediation framework, which makes the assumptions required for causal inference more explicit and provides tests for mediation based on conditional probability (essentially, stratification). Entry points to this literature include Robins and Greenland (1992) and Pearl (2014). However, there is no "magic bullet" for assuring causality if the assumptions underlying causal inference might not be valid, so it's worth taking a closer look at those assumptions. To infer causality, we need more than just the variables we're measuring: we need solid information about the broader system in which the variables are embedded.
causal_assumptions.png
The image shown above is from the Columbia causal mediation website.

Further reading

More details on basic principles of mediation, with references, can be found on David Kenny's excellent web page, and in a number of papers and books (e.g., MacKinnon 2008).

Multilevel mediation

In the standard mediation model, each of X, M, and Y is a vector with one observation per person; the individual participant is the unit of observation. When this is the case, we'll call it single-level mediation. Roughly 99% of the mediation analyses in the literature are single-level models. For example, X might code whether each individual is a smoker (coded as 1) or non-smoker (coded as -1), with the incidence of cancer (M) and survival time (Y) measured for each person.
What happens if we have repeated observations of X, Y, and M within a person? We can then estimate the Path a, Path b, and a*b effects within each person, allowing each person to serve as their own control and eliminating a number of person-level confounds and sources of noise. For example, in the cancer example above, people will likely vary in their diet, and diet could be correlated with smoking, introducing a potential confound (or "3rd variable") and, at the very least, an additional source of uncontrolled noise. In a multilevel model, we estimate the effects within-person and test the significance of the path coefficients across individuals, treating participant as a random effect. This reduces the chance that the mediation effect is driven by a 3rd variable and allows us to generalize the effect to a population with a range of diets (and other person-level characteristics). This can often dramatically increase the power and validity of a mediation model.
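As a conceptual sketch only (this is the simplest "summary statistics" version, not the weighted, bootstrapped multilevel estimation the M3 toolbox actually uses), the two-stage logic looks like this. Here X, M, and Y are cell arrays with one cell per subject, and each cell holds a column vector of within-person observations; M is assumed to hold a numeric mediator such as an ROI average, not image names.

nsubj = numel(X);
ab = zeros(nsubj, 1);
for i = 1:nsubj
    xi = X{i};  mi = M{i};  yi = Y{i};
    Ba = regress(mi, [ones(size(xi)) xi]);        % within-person a path
    Bb = regress(yi, [ones(size(xi)) xi mi]);     % within-person c' (element 2) and b (element 3) paths
    ab(i) = Ba(2) * Bb(3);                        % within-person indirect effect a*b
end
[~, p, ~, stats] = ttest(ab);                     % treat subject as a random effect: test a*b across people
fprintf('Mean a*b = %.3f, t(%d) = %.2f, P = %.4f\n', mean(ab), stats.df, stats.tstat, p)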
In addition, multilevel mediation lets us include person-level covariates in the model as well. For example, we can test whether diet moderates (influences) the effect of smoking on cancer (Path a), the effect of cancer on mortality (Path b), or the mediation (a*b) effect. This is called moderated mediation. Thus, multilevel mediation allows two kinds of variables in the same model: within-person variables measured repeatedly for each person, and between-person (or person-level) variables measured once per person. We can make inferences on the within-person mediation effect and test whether it generalizes over, or is moderated by, person-level characteristics.
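Continuing the sketch above, a person-level moderator analysis amounts to regressing the within-person a*b estimates on a person-level covariate. This is again an illustration only; the toolbox has its own options for second-level moderators, and both variables here are simulated.

nsubj = 33;
ab = 0.3 + 0.1*randn(nsubj, 1);                         % within-person a*b estimates (simulated here)
covariate = randn(nsubj, 1);                            % hypothetical person-level score (e.g., a diet measure)
Bmod = regress(ab, [ones(nsubj,1) zscore(covariate)]);
% Bmod(1): average mediation effect; Bmod(2): change in a*b per SD of the person-level moderator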
Single-level (standard) and multilevel mediation with all of these variations is implemented in the Multilevel Mediation and Moderation (M3) Matlab Toolbox.

Brain mediators and mediation effect parametric mapping (MEPM)

Multilevel mediation models are fairly rare in the social sciences, but they are a natural fit for fMRI studies of cognitive and affective tasks. We often manipulate tasks and observe both brain activity and behavioral responses (e.g., ratings or performance) within-person across time, so multilevel mediation lets us analyze experimental effects on the brain and their relationships with behavior in a single model. We can even add person-level characteristics like patient status or personality to the model as well.
The mediation model we'll explore in this tutorial looks like this:
mediation1.png
X represents the task condition -- e.g., in our example below, the temperature of a hot stimulus applied to the skin. It is often coded with 1 or -1 ("effects coding") when there are two conditions, but can be coded with linear contrast codes or other numerical values when there are more than two conditions. M represents fMRI activity. Y represents a behavioral outcome of interest.
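For example, contrast codes for X for one subject might look like this (illustrative values only; the six temperatures match the dataset used below):

X_two_conditions = [1; -1; 1; -1];     % effects coding for two conditions
temps = (44.3:49.3)';                  % six ordered temperature levels (deg C)
X_six_levels = temps - mean(temps);    % centered linear contrast codes for six levels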
Here, for our neuroimaging multilevel mediation tutorial, we'll extend this in two ways:
  1. We'll search over brain voxels, testing each voxel in an fMRI study as a mediator one at a time. We'll then save maps of how strong the mediation effect is.
  2. We'll change the unit of observation from the participant to conditions within-participant. This is possible because in fMRI we typically vary X within-person by including different experimental tasks that might impact M and Y, and we can also observe fMRI activity (M) and behavior (Y) as they vary within-person. When we model these within-person relationships and perform statistical tests to generalize the associations to a population of individuals, we are performing multilevel mediation.
The Multilevel Mediation and Moderation (M3) Matlab Toolbox will perform both single- and multilevel mediation. The function mediation.m can operate on any type of input data (behavior, psychosocial measures, physiology, brain, genetics, etc.), and will run both single and multilevel mediation depending on the input data.
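For example, a single-level analysis on three numeric vectors might look like the call below. The option names ('boot', 'plots') and output arguments are written from memory and should be checked against help mediation; treat this as a sketch of the interface rather than a verbatim example.

% x, y, m: column vectors, one observation per person (hypothetical data)
% Option names below are assumptions -- see help mediation for the exact interface
[paths, stats] = mediation(x, y, m, 'boot', 'plots');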
The toolbox also has special functions for running mediation on each voxel in a neuroimaging dataset (single or multilevel) and saving maps of mediation effects:
mediation_brain % for single-level mediation
mediation_brain_multilevel % for multilevel mediation

About the dataset

This dataset includes 33 participants, each with brain responses to six levels of heat (44.3 to 49.3 degrees C in one-degree increments, which generally span the range from non-painful to painful). For each participant, a .nii image is included with 6 brain volumes (one per intensity level, in ascending order). These model the average activity during painful stimulation, assuming the standard canonical HRF, and were estimated using first-level models in SPM8. A metadata file includes the temperature and average pain rating for each subject at each temperature.
Aspects of this dataset appear in these papers, among others:
Wager, T.D., Atlas, L.Y., Lindquist, M.A., Roy, M., Woo, C.-W., & Kross, E. (2013). An fMRI-Based Neurologic Signature of Physical Pain. The New England Journal of Medicine, 368:1388-1397. (The current dataset is Study 2 of this paper.)
Woo, C.-W., Roy, M., Buhle, J.T., & Wager, T.D. (2015). Distinct brain systems mediate the effects of nociceptive input and self-regulation on pain. PLOS Biology, 13(1):e1002036. doi:10.1371/journal.pbio.1002036
Lindquist, M.A., Krishnan, A., López-Solà, M., Jepma, M., Woo, C.-W., Koban, L., Roy, M., et al. (2017). Group-Regularized Individual Prediction: Theory and Application to Pain. NeuroImage, 145:274-287.
Geuter, S., Losin, E.A.R., Roy, M., Atlas, L.Y., Schmidt, L., Krishnan, A., Koban, L., Wager, T.D., & Lindquist, M.A. (2020). Multiple Brain Networks Mediating Stimulus–Pain Relationships in Humans. Cerebral Cortex, 30(7):4204-4219.
The dataset is available in CANlab Core toolbox under "Sample_Datasets/Woo_2015_PlosBio_BMRK3_pain_6levels". BMRK3 is the CANlab study code for this study. It was collected by Mathieu Roy, Jason Buhle, and research assistants in the CANlab.

Using the dataset to demonstrate multilevel mediation

This is a lightweight, minimal dataset for demonstrating multilevel mediation. Usually, a multilevel mediation dataset would include more than 6 images per person -- for example, trial-level data (as in Atlas et al. 2010, 2014; Koban et al. 2019) or full time series data (as in Wager et al. 2009).
Here, the variable temperatures will serve as the initial variable (X), ratings will serve as the outcome variable (Y), and brain images, whose names are stored in a variable called single_trial_image_names, will serve as the mediator (M). Thus, the analysis will search over brain voxels and test whether each voxel's response during pain mediates the effects of temperature on pain ratings:
[temp (X) -> brain activity during pain (M) -> pain ratings (Y)]
The function mediation_brain_multilevel will take in a set of image names in place of either X, Y, or M.
The format is that each of X, Y, and M is a cell array with one cell per subject. Each cell should contain either a column vector of values (for numeric variables) or a list of image names or a 4-D image file (for brain image inputs).
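A minimal sketch of that format (all names and values here are hypothetical placeholders; in this tutorial the real inputs come from the metadata file loaded below):

nsubj = 33;
temps = (44.3:49.3)';                              % six stimulus temperatures (deg C)
X = cell(1, nsubj);  Y = cell(1, nsubj);  M = cell(1, nsubj);
for i = 1:nsubj
    X{i} = temps;                                  % column vector of X values for subject i
    Y{i} = nan(6, 1);                              % placeholder for subject i's six pain ratings
    M{i} = sprintf('sub%02d_6levels.nii', i);      % hypothetical 4-D image name (or list of image names)
end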

Load the dataset

Locally or from the web
metadata_file = which('bmrk3_6levels_metadata.mat');
metadata_file
metadata_file = '/Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/Sample_datasets/Woo_2015_PlosBio_BMRK3_pain_6levels/bmrk3_6levels_metadata.mat'
if isempty(metadata_file)
    disp('You need the dataset Woo_2015_PlosBio_BMRK3_pain_6levels on your Matlab path.')
    disp('This is in the CANlab Core toolbox.')
    error('Check path and files.')
end
load(metadata_file);
% Re-find local path names for brain images
M = single_trial_image_names;
dir_old = fileparts(M{1});
dir_new = fileparts(metadata_file);
for i = 1:length(M), M{i} = strrep(M{i}, dir_old, dir_new); end
single_trial_image_names = M;
X = temperatures;
Y = ratings;
whos X Y M
  Name      Size      Bytes  Class    Attributes

  M         1x33      12342  cell
  X         1x33       5016  cell
  Y         1x33       5016  cell
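As an optional sanity check on the loaded variables, you can plot the mean rating at each temperature. This assumes each cell of X and Y holds six values (one per temperature level), as in this dataset:

Ycols = cellfun(@(y) y(:), Y, 'UniformOutput', false);   % force each subject's ratings into a column
ratings_by_temp = cat(2, Ycols{:});                      % 6 temperatures x 33 subjects
figure; plot(X{1}, mean(ratings_by_temp, 2), 'o-')
xlabel('Temperature (deg C)'); ylabel('Mean pain rating')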

Set up and run multilevel mediation

Most of the heavy lifting is done here. There are three steps:
  1. Create a mediation directory to save output, and go there
  2. Configure setup options in a SETUP structure
  3. Run mediation (this will save files to disk)
Note: The mask will likely have to be resliced (resampled and interpolated) to match the origin and dimensions of the brain dataset.
mkdir mediation_results
cd mediation_results
SETUP.mask = which('gray_matter_mask.nii');  % analyze only voxels within this gray-matter mask
SETUP.preprocX = 0;                          % turn off preprocessing of X
SETUP.preprocY = 0;                          % turn off preprocessing of Y
SETUP.preprocM = 0;                          % turn off preprocessing of M (the brain images)
% mediation_brain_multilevel(X, Y, M, [other inputs])
mediation_brain_multilevel(temperatures, ratings, single_trial_image_names, SETUP);
Search for mediators
First-level covariates: None
Second-level moderators: None
Found mask: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/gray_matter_mask.nii
Preparing data. Mask info: Dimensions: 79 95 68 Voxels in mask: 195612
Mapping all input data volumes to memory
Subject 1, 1 images. Subject 2, 1 images. [... through Subject 33, 1 images.]
Preprocessing is Off
...Creating: X-M_effect ...Creating: M-Y_effect [... 25 output image files created in total; see the list below ...]
Outputs: 25 images
X-M_effect.img - 1 volume(s)
M-Y_effect.img - 1 volume(s)
X-Y_direct_effect.img - 1 volume(s)
X-Y_total_effect.img - 1 volume(s)
X-M-Y_effect.img - 1 volume(s)
X-M_ste.img - 1 volume(s)
M-Y_ste.img - 1 volume(s)
X-Y_direct_ste.img - 1 volume(s)
X-Y_total_ste.img - 1 volume(s)
X-M-Y_ste.img - 1 volume(s)
X-M_pvals.img - 1 volume(s)
M-Y_pvals.img - 1 volume(s)
X-Y_direct_pvals.img - 1 volume(s)
X-Y_total_pvals.img - 1 volume(s)
X-M-Y_pvals.img - 1 volume(s)
X-M_indiv_effect.img - 33 volume(s)
M-Y_indiv_effect.img - 33 volume(s)
X-Y_direct_indiv_effect.img - 33 volume(s)
X-Y_total_indiv_effect.img - 33 volume(s)
X-M-Y_indiv_effect.img - 33 volume(s)
X-M_indiv_ste.img - 1 volume(s)
M-Y_indiv_ste.img - 1 volume(s)
X-Y_direct_indiv_ste.img - 1 volume(s)
X-Y_total_indiv_ste.img - 1 volume(s)
X-M-Y_indiv_ste.img - 1 volume(s)
This will be evaluated at each voxel: [out1,out2,out3,out4,out5,out6,out7,out8,out9,out10,out11,out12,out13,out14,out15,out16,out17,out18,out19,out20,out21,out22,out23,out24,out25] = fhandle(Y);
Slice 1, 1005 in-mask voxels
Loading and preprocessing for this slice for subject:
Summary of missing or bad voxels:
Dataset 1: 0 voxels have no data, and 0 voxels have missing data.
[... the same summary line repeats for Datasets 2-33 ...]
Elapsed: 0 s Analysis100% Elapsed: 35 s
Writing slice 1 in output images.
[... analogous per-slice output (in-mask voxel count, per-dataset summary, timing) repeats for each subsequent slice; the captured log is truncated partway through slice 20 ...]
Dataset 14: 0 voxels have no data, and 0 voxels have missing data. Dataset 15: 0 voxels have no data, and 0 voxels have missing data. Dataset 16: 0 voxels have no data, and 0 voxels have missing data. Dataset 17: 0 voxels have no data, and 0 voxels have missing data. Dataset 18: 0 voxels have no data, and 0 voxels have missing data. Dataset 19: 0 voxels have no data, and 0 voxels have missing data. Dataset 20: 0 voxels have no data, and 0 voxels have missing data. Dataset 21: 0 voxels have no data, and 0 voxels have missing data. Dataset 22: 0 voxels have no data, and 0 voxels have missing data. Dataset 23: 0 voxels have no data, and 0 voxels have missing data. Dataset 24: 0 voxels have no data, and 0 voxels have missing data. Dataset 25: 0 voxels have no data, and 0 voxels have missing data. Dataset 26: 0 voxels have no data, and 0 voxels have missing data. Dataset 27: 0 voxels have no data, and 0 voxels have missing data. Dataset 28: 0 voxels have no data, and 0 voxels have missing data. Dataset 29: 0 voxels have no data, and 0 voxels have missing data. Dataset 30: 0 voxels have no data, and 0 voxels have missing data. Dataset 31: 0 voxels have no data, and 0 voxels have missing data. Dataset 32: 0 voxels have no data, and 0 voxels have missing data. Dataset 33: 0 voxels have no data, and 0 voxels have missing data. Elapsed: 0 s Analysis100% Elapsed: 156 s Writing slice 20 in output images. Slice 21, 4603 in-mask voxels Loading and preprocessing for this slice for subject: Summary of missing or bad voxels: Dataset 1: 0 voxels have no data, and 0 voxels have missing data. Dataset 2: 0 voxels have no data, and 0 voxels have missing data. Dataset 3: 0 voxels have no data, and 0 voxels have missing data. Dataset 4: 0 voxels have no data, and 0 voxels have missing data. Dataset 5: 0 voxels have no data, and 0 voxels have missing data. Dataset 6: 0 voxels have no data, and 0 voxels have missing data. Dataset 7: 0 voxels have no data, and 0 voxels have missing data. Dataset 8: 0 voxels have no data, and 0 voxels have missing data. Dataset 9: 0 voxels have no data, and 0 voxels have missing data. Dataset 10: 0 voxels have no data, and 0 voxels have missing data. Dataset 11: 0 voxels have no data, and 0 voxels have missing data. Dataset 12: 0 voxels have no data, and 0 voxels have missing data. Dataset 13: 0 voxels have no data, and 0 voxels have missing data. Dataset 14: 0 voxels have no data, and 0 voxels have missing data. Dataset 15: 0 voxels have no data, and 0 voxels have missing data. Dataset 16: 0 voxels have no data, and 0 voxels have missing data. Dataset 17: 0 voxels have no data, and 0 voxels have missing data. Dataset 18: 0 voxels have no data, and 0 voxels have missing data. Dataset 19: 0 voxels have no data, and 0 voxels have missing data. Dataset 20: 0 voxels have no data, and 0 voxels have missing data. Dataset 21: 0 voxels have no data, and 0 voxels have missing data. Dataset 22: 0 voxels have no data, and 0 voxels have missing data. Dataset 23: 0 voxels have no data, and 0 voxels have missing data. Dataset 24: 0 voxels have no data, and 0 voxels have missing data. Dataset 25: 0 voxels have no data, and 0 voxels have missing data. Dataset 26: 0 voxels have no data, and 0 voxels have missing data. Dataset 27: 0 voxels have no data, and 0 voxels have missing data. Dataset 28: 0 voxels have no data, and 0 voxels have missing data. Dataset 29: 0 voxels have no data, and 0 voxels have missing data. Dataset 30: 0 voxels have no data, and 0 voxels have missing data. 
Dataset 31: 0 voxels have no data, and 0 voxels have missing data. Dataset 32: 0 voxels have no data, and 0 voxels have missing data. Dataset 33: 0 voxels have no data, and 0 voxels have missing data. Elapsed: 0 s Analysis100% Elapsed: 148 s Writing slice 21 in output images. Slice 22, 4497 in-mask voxels Loading and preprocessing for this slice for subject: Summary of missing or bad voxels: Dataset 1: 0 voxels have no data, and 0 voxels have missing data. Dataset 2: 0 voxels have no data, and 0 voxels have missing data. Dataset 3: 0 voxels have no data, and 0 voxels have missing data. Dataset 4: 0 voxels have no data, and 0 voxels have missing data. Dataset 5: 0 voxels have no data, and 0 voxels have missing data. Dataset 6: 0 voxels have no data, and 0 voxels have missing data. Dataset 7: 0 voxels have no data, and 0 voxels have missing data. Dataset 8: 0 voxels have no data, and 0 voxels have missing data. Dataset 9: 0 voxels have no data, and 0 voxels have missing data. Dataset 10: 0 voxels have no data, and 0 voxels have missing data. Dataset 11: 0 voxels have no data, and 0 voxels have missing data. Dataset 12: 0 voxels have no data, and 0 voxels have missing data. Dataset 13: 0 voxels have no data, and 0 voxels have missing data. Dataset 14: 0 voxels have no data, and 0 voxels have missing data. Dataset 15: 0 voxels have no data, and 0 voxels have missing data. Dataset 16: 0 voxels have no data, and 0 voxels have missing data. Dataset 17: 0 voxels have no data, and 0 voxels have missing data. Dataset 18: 0 voxels have no data, and 0 voxels have missing data. Dataset 19: 0 voxels have no data, and 0 voxels have missing data. Dataset 20: 0 voxels have no data, and 0 voxels have missing data. Dataset 21: 0 voxels have no data, and 0 voxels have missing data. Dataset 22: 0 voxels have no data, and 0 voxels have missing data. Dataset 23: 0 voxels have no data, and 0 voxels have missing data. Dataset 24: 0 voxels have no data, and 0 voxels have missing data. Dataset 25: 0 voxels have no data, and 0 voxels have missing data. Dataset 26: 0 voxels have no data, and 0 voxels have missing data. Dataset 27: 0 voxels have no data, and 0 voxels have missing data. Dataset 28: 0 voxels have no data, and 0 voxels have missing data. Dataset 29: 0 voxels have no data, and 0 voxels have missing data. Dataset 30: 0 voxels have no data, and 0 voxels have missing data. Dataset 31: 0 voxels have no data, and 0 voxels have missing data. Dataset 32: 0 voxels have no data, and 0 voxels have missing data. Dataset 33: 0 voxels have no data, and 0 voxels have missing data. Elapsed: 0 s Analysis100% Elapsed: 140 s Writing slice 22 in output images. Slice 23, 4387 in-mask voxels Loading and preprocessing for this slice for subject: Summary of missing or bad voxels: Dataset 1: 0 voxels have no data, and 0 voxels have missing data. Dataset 2: 0 voxels have no data...
mediation_brain_multilevel saves a SETUP file with the input variables, along with estimated results maps for all the relevant effects. These files stay in the results directory, and you can use them to make tables and figures of the results at any later time (see the short example after the file list below). Here are some of the key files:
X-M_effect.img Map of average slope coefficients for Path a
M-Y_effect.img Map of average slope coefficients for Path b
X-M-Y_effect.img Map of average slope coefficients for Path a*b (mediation effect)
X-M_indiv_effect.img 4D image with individual slope coefficients for Path a (one image per participant)
M-Y_indiv_effect.img 4D image with individual slope coefficients for Path b (one image per participant)
X-M-Y_indiv_effect.img 4D image with individual slope coefficients for Path a*b (one image per participant)
X-M_ste.img Standard error for Path a (across participants)
M-Y_ste, X-M-Y_ste.img Standard errors for Path b and a*b effects (across participants)
X-M_indiv_ste.img 4D image with individual standard errors for Path a (one image per participant)
M-Y_indiv_ste.img, X-M-Y... Individual standard errors for Path b and a*b (one image per participant)
X-M_pvals.img Map of P-values for Path a
M-Y_pvals.img, X-M-Y... Maps of P-values for Paths b and a*b
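Once these images exist, you can inspect any of them with standard neuroimaging tools. Here is a minimal sketch using CANlab Core functions (fmri_data and orthviews), assuming CANlab Core is on your Matlab path and you are in the mediation results directory; the filename is taken from the list above.
% Minimal sketch: view the average mediation (a*b) effect map.
% Assumes CANlab Core is on the path and the current directory is the
% mediation results directory created above.
ab_map = fmri_data('X-M-Y_effect.img');    % load the Path a*b slope map
orthviews(ab_map);                         % display it on the standard orthviews overlay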
Once results scripts have been run (e.g., with mediation_brain_results; see below), additional output will be generated, including printouts of slice montages (.png graphics) and tables with thresholded results. Examples of these files look like this:
M-Y_pvals_005_k5_noprune_log.txt
M-Y_pvals_005_k5_noprune_results.txt

Multilevel mediation results: Viewing and saving

Once you've completed a successful run of mediation_brain_multilevel, you can go to the results directory and examine the results. The goal of this section is to construct and save thresholded maps, tables, and figures of significant regions for each of the Path a, Path b, and Path a*b (mediation effect) maps.
For a single-level mediation, the thresholded Path a*b map (showing significant voxels) will always be a subset of the voxels that are significant in the Path a and Path b maps.
For a multilevel mediation, this is not necessarily the case. This is because the significance of the a*b mediation effect is based on the strength of the average a effect, the average b effect, and the covariance between them. This is explained in Kenny, Korchmaros, and Bolger (2003), a seminal paper on multilevel mediation. It's discussed in relation to brain maps in Atlas et al. (2010).
This means that you might be interested in looking at each of the maps, as each provides a different and complementary piece of information.
Note: You don't need to have run the mediation in the same Matlab session to use these results functions. You just need to be in a valid mediation results directory with saved maps.
Note: To extract data from significant clusters (regions), save them in clusters structures (see below), and generate the most complete tables of mediation results, you do need access to the original data files used to run the mediation. Their names are stored (for this example, with brain images as the mediator) in SETUP.data.M. If you move the images or the results directory, you may need to replace this field with the updated file names to restore full functionality.
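If you do need to update these names, a hedged sketch is below. It assumes the saved SETUP .mat file is named mediation_SETUP.mat (check your results directory for the exact name) and uses a placeholder folder and wildcard that you would replace with your own paths; filenames is a CANlab Core utility.
% Hedged sketch: update stored mediator image names after moving files.
% The SETUP .mat filename and the image folder/wildcard below are assumptions;
% substitute the actual names from your own analysis.
load mediation_SETUP                                                                     % loads the SETUP structure
SETUP.data.M = filenames(fullfile('new_image_folder', 'sub*_mediator.img'), 'char');     % updated file list
save mediation_SETUP SETUP                                                               % save the updated SETUP back to disk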

Choosing a threshold

We'll need to specify a significance threshold, entered as a P-value. The function below chooses a threshold that satisfies FDR correction (q < 0.05) on average across the Path a, b, and a*b maps.
There are other thresholding options as well. We could also choose an uncorrected P-value threshold to be entered into the mediation_brain_results function below.
SETUP = mediation_brain_corrected_threshold('fdr');
Calculating FDR threshold across family of tests in these images:
X-M_pvals.img
M-Y_pvals.img
X-M-Y_pvals.img
Total p-values: 586836
FDR threshold is 0.003372
Saving in SETUP.fdr_p_thresh
This gives us the P-value threshold equivalent to FDR correction at q < 0.05 on average across the 3 maps, stored in SETUP.fdr_p_thresh. It will be Inf if there is not enough signal to meet the threshold.
Note: P-values are always two-tailed, unlike SPM P-values, which are one-tailed. So P < 0.002 here is equivalent to P < 0.001 using SPM.
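To make the conversion concrete, here is a trivial illustration using the values mentioned above:
% Two-tailed vs. one-tailed thresholds: for a directional hypothesis,
% halving the two-tailed p-value gives the equivalent one-tailed value.
p_two_tailed = 0.002;
p_one_tailed = p_two_tailed / 2    % = 0.001, the SPM-style one-tailed equivalent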
Multi-thresholding and 'pruned' significant regions
We can choose a series of P-value thresholds if we want to save clusters ('blobs') in which at least one voxel meets the most stringent threshold, but show contiguous voxels in the cluster down to the most liberal threshold. We call these 'pruned' clusters, because we're thresholding at a liberal display threshold but 'pruning' the set of significant regions so that each region contains at least one voxel at the most stringent (often corrected) threshold. This can be useful for display, and in particular for showing the extent of regions around the most stringent value. For example, entering the options below into mediation_brain_results will show us clusters in which at least one voxel is FDR-corrected at q < 0.05 (map-wise), but we'll see the extent of each blob down to p < 0.005 as long as there are 3 contiguous voxels at that threshold (that are also contiguous with the FDR-corrected results), and down to p < 0.01 uncorrected as long as there are at least 10 contiguous voxels. A complete example call is sketched after the template.
mediation_brain_results(..., 'thresh', [Inf .005 .01], 'size', [1 3 10], 'fdrthresh', .05, ...)
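For completeness, here is a hedged sketch of how a full call might look for the Path a map, combining the pruning options above with the display, table, and save keywords used later in this walkthrough; the '...' in the template above stands for arguments like these.
% Hedged example of a complete multi-threshold ('pruned') call for Path a.
% The effect keyword and display/save options are taken from the calls used
% later in this walkthrough; adjust them for your own analysis.
mediation_brain_results('a', 'thresh', [Inf .005 .01], 'size', [1 3 10], ...
    'fdrthresh', .05, 'slices', 'tables', 'names', 'save');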
Other preliminary stuff
Let's define some variables that we can use to print nice headers in the report:
dashes = '----------------------------------------------';
printstr = @(str) disp(str);                                                                    % print a single line of text
printhdr = @(str) fprintf('\n\n%s\n%s\n%s\n%s\n%s\n\n', dashes, dashes, str, dashes, dashes);   % print a banner-style header

Results for Path a

The code below generates results for Path a, the effects of stimulus temperature on brain activity, within-person, estimated across the 6 temperatures and 6 brain images per participant. Results and P-values reflect population inference, treating participant as a random effect.
mediation_brain_results accepts various input options. As with all functions, help <function_name> (e.g., help mediation_brain_results) will explain the full set of options. Here, we'll use the average FDR-corrected threshold from above, along with keywords that generate slice displays and tables and save the results output. The output will be shown in figures on screen, printed to the command window, and saved in files in the mediation results directory. The output variables clpos, etc., are clusters structures that can be used in interactive follow-up analyses and plots (we'll explain these below).
printhdr('Path a: Temperature to Brain Response')
----------------------------------------------
----------------------------------------------
Path a: Temperature to Brain Response
----------------------------------------------
----------------------------------------------
mediation_brain_results('a', 'thresh', ...
SETUP.fdr_p_thresh, 'size', 5, ...
'slices', 'tables', 'names', 'save');
Will show slices. Will print tables. Saving log file: X-M_pvals_003_k5_noprune_log.txt Other key results in: X-M_pvals_003_k5_noprune_results.txt Will also save figures of slice montages if 'slices' requested. iimg_multi_threshold viewer ===================================================== Showing positive or negative: pos Overlay is: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/SPM8_colin27T1_seg.img No mask specified in iimg_multi_threshold Entered p-value image? : Yes Height thresholds:0.0034 Extent thresholds: 5 Show only contiguous with seed regions: No Warning: p-values in cl.Z will not give valid spm_max subclusters. log(1/p) saved in output cl.Z field.
iimg_multi_threshold viewer ===================================================== Showing positive or negative: neg Overlay is: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/SPM8_colin27T1_seg.img No mask specified in iimg_multi_threshold Entered p-value image? : Yes Height thresholds:0.0034 Extent thresholds: 5 Show only contiguous with seed regions: No Warning: p-values in cl.Z will not give valid spm_max subclusters. log(1/p) saved in output cl.Z field.
Extracting image data for significant clusters. Extracting data for multilevel mediation 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 Extracting image data for significant clusters. Extracting data for multilevel mediation 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033
Warning: cl2region:no comparable field name
Warning: cl2region:no comparable field imnames
Warning: cl2region:no comparable field snr_avgts
Warning: cl2region:no comparable field snr
Warning: cl2region:no comparable field numpos
Warning: cl2region:no comparable field power80
Warning: cl2region:no comparable field name
Warning: cl2region:no comparable field imnames
Warning: cl2region:no comparable field snr_avgts
Warning: cl2region:no comparable field snr
Warning: cl2region:no comparable field numpos
Warning: cl2region:no comparable field power80
Summary of output images: X-M_pvals.img Results clusters clpos and clneg are returned for the LAST image in this set. Printing Tables. Z field contains: Mediation a effect (shown in maxstat) Name index x y z corr voxels volume_mm3 maxstat snr_avgts(d) minsnr maxsnr numpos power80 partialr_with_y partialr_p Za_max Pa_max Zb_max Pb_max Zab_max Pab_max num_sig_voxels Bstem_Med_R 1 2 -38 -44 NaN 35 280 7.84 0.03 -0.85 0.69 2 19846 NaN NaN 3.54 0.0004 1.74 0.08 1.06 0.29 Multiple regions 2 -10 -58 -24 NaN 5051 40408 19.60 0.35 -3.87 2.44 3 129 NaN NaN 5.93 0.0000 2.81 0.0050 2.02 0.04 Multiple regions 3 6 -14 -4 NaN 1280 10240 17.31 1.21 -2.26 3.58 6 12 NaN NaN 5.54 0.0000 2.29 0.02 2.03 0.04 Multiple regions 4 50 -8 12 NaN 6552 52416 23.83 2.29 -1.64 4.84 6 5 NaN NaN 6.59 0.0000 1.55 0.12 1.32 0.19 Multiple regions 5 -52 -12 10 NaN 4392 35136 21.51 1.04 -3.82 7.12 5 16 NaN NaN 6.23 0.0000 2.22 0.03 2.93 0.0034 Thal_Hythal 6 -12 -2 -14 NaN 9 72 7.59 0.89 0.72 0.97 5 22 NaN NaN 3.48 0.0005 2.79 0.0052 1.56 0.12 Caudate_Ca_L 7 -10 8 2 NaN 14 112 7.72 0.18 -0.03 1.04 5 522 NaN NaN 3.51 0.0004 0.85 0.40 1.48 0.14 Ctx_V6_L 8 -14 -74 24 NaN 12 96 6.97 -0.10 -0.33 0.23 1 1573 NaN NaN 3.31 0.0009 1.85 0.06 0.61 0.54 Multiple regions 9 10 -4 52 NaN 7225 57800 27.06 1.25 -3.37 8.88 5 12 NaN NaN 7.05 0.0000 2.26 0.02 1.69 0.09 Ctx_4_R 10 40 -16 38 NaN 27 216 7.64 0.23 -0.03 0.51 4 296 NaN NaN 3.49 0.0005 1.32 0.19 0.57 0.57 Ctx_7AL_L 11 -22 -48 70 NaN 243 1944 11.07 0.32 -0.56 1.14 3 157 NaN NaN 4.32 0.0000 0.05 0.96 -0.2962 0.77 Z field contains: Mediation a effect (shown in maxstat) Name index x y z corr voxels volume_mm3 maxstat snr_avgts(d) minsnr maxsnr numpos power80 partialr_with_y partialr_p Za_max Pa_max Zb_max Pb_max Zab_max Pab_max num_sig_voxels Multiple regions 1 -54 -10 -20 NaN 1210 9680 13.06 -0.16 -1.84 2.15 4 613 NaN NaN -4.7418 0.0000 -0.2160 0.83 0.74 0.46 Ctx_TE1a_R 2 60 -8 -20 NaN 573 4584 12.74 -0.36 -5.75 0.94 2 127 NaN NaN -4.6759 0.0000 -1.8298 0.07 1.62 0.11 Ctx_PeEc_R 3 28 -24 -24 NaN 553 4424 13.79 -3.51 -5.76 0.14 0 3 NaN NaN -4.8868 0.0000 -1.3782 0.17 1.36 0.17 Ctx_EC_L 4 -24 -12 -26 NaN 301 2408 14.79 -1.25 -1.86 0.48 0 12 NaN NaN -5.0799 0.0000 -1.6085 0.11 0.97 0.33 Ctx_PHA3_L 5 -32 -34 -20 NaN 179 1432 12.26 0.13 -1.04 2.26 3 894 NaN NaN -4.5756 0.0000 -0.7757 0.44 1.16 0.25 Ctx_OFC_R 6 6 24 -22 NaN 15 120 7.05 -0.29 -0.38 -0.15 4 191 NaN NaN -3.3314 0.0009 -0.8806 0.38 1.50 0.13 Ctx_V2_L 7 -22 -88 -14 NaN 56 448 8.22 -0.33 -1.17 0.52 3 145 NaN NaN -3.6425 0.0003 -0.2919 0.77 0.33 0.75 Ctx_a47r_L 8 -42 50 -10 NaN 356 2848 10.47 -0.59 -1.69 1.93 1 47 NaN NaN -4.1866 0.0000 -1.2592 0.21 0.77 0.44 Multiple regions 9 -6 50 4 NaN 3211 25688 16.38 -0.65 -2.35 1.27 2 40 NaN NaN -5.3737 0.0000 -1.4417 0.15 1.57 0.12 Ctx_PH_L 10 -42 -70 -6 NaN 183 1464 9.98 1.55 0.14 2.74 6 8 NaN NaN -4.0737 0.0000 -1.2684 0.20 1.34 0.18 Multiple regions 11 44 -70 28 NaN 2027 16216 15.26 -0.75 -5.03 1.64 2 30 NaN NaN -5.1683 0.0000 -2.0574 0.04 1.74 0.08 Cau_R 12 14 26 -4 NaN 13 104 6.16 -0.12 -0.26 0.32 4 1087 NaN NaN -3.0725 0.0021 -0.1474 0.88 0.85 0.39 Caudate_Ca_R 13 14 22 10 NaN 56 448 7.89 -0.31 -0.47 -0.13 4 171 NaN NaN -3.5587 0.0004 -1.1738 0.24 1.58 0.11 Multiple regions 14 2 -52 30 NaN 2308 18464 14.17 -0.49 -2.44 1.92 2 67 NaN NaN -4.9607 0.0000 0.31 0.76 0.52 0.60 V_Striatum_L 15 -10 16 12 NaN 33 264 7.87 -0.26 -0.38 -0.02 3 241 NaN NaN -3.5523 0.0004 0.11 0.92 1.06 0.29 Ctx_V4_L 16 -28 -84 16 NaN 159 1272 9.60 -0.18 -1.48 0.89 2 513 NaN NaN -3.9843 0.0001 -1.1468 0.25 0.54 0.59 
Ctx_PGs_L 17 -44 -68 30 NaN 1147 9176 13.84 -1.01 -1.85 0.80 1 17 NaN NaN -4.8972 0.0000 -1.2226 0.22 0.48 0.63 Ctx_1_L 18 -66 -6 28 NaN 8 64 6.58 -0.97 -1.22 -0.71 1 19 NaN NaN -3.1985 0.0014 -0.6720 0.50 1.19 0.23 Ctx_IPS1_L 19 -26 -70 28 NaN 6 48 6.07 -1.38 -1.80 -0.79 0 10 NaN NaN -3.0459 0.0023 0.14 0.89 0.66 0.51 Ctx_3b_L 20 -38 -26 52 NaN 931 7448 14.59 -1.00 -3.42 0.20 1 18 NaN NaN -5.0417 0.0000 -1.3769 0.17 1.45 0.15 Ctx_8Av_R 21 30 26 50 NaN 691 5528 12.79 -0.85 -2.86 0.57 1 24 NaN NaN -4.6865 0.0000 -2.2102 0.03 1.45 0.15 Ctx_LIPd_R 22 28 -58 42 NaN 18 144 6.40 0.33 -0.26 0.79 4 146 NaN NaN -3.1458 0.0017 -0.0682 0.95 0.38 0.71 Ctx_PGs_L 23 -30 -74 46 NaN 7 56 6.38 -0.94 -1.07 -0.79 2 20 NaN NaN -3.1382 0.0017 -1.5494 0.12 0.88 0.38 Saving clusters in cl_X-M_pvals_003_k5_noprune
Saving image of slices: orth_X-M_pvals_003_k5_noprune_coronal.png
Saving image of slices: orth_X-M_pvals_003_k5_noprune_sagittal.png
Saving image of slices: orth_X-M_pvals_003_k5_noprune_axial.png

Results for Path b

Generate results for effects of brain responses on pain reports, controlling for stimulus temperature.
printhdr('Path b: Brain Response to Pain, Adjusting for Temperature')
----------------------------------------------
----------------------------------------------
Path b: Brain Response to Pain, Adjusting for Temperature
----------------------------------------------
----------------------------------------------
mediation_brain_results('b', 'thresh', ...
SETUP.fdr_p_thresh, 'size', 5, ...
'slices', 'tables', 'names', 'save');
Will show slices.
Will print tables. Saving log file: M-Y_pvals_003_k5_noprune_log.txt Other key results in: M-Y_pvals_003_k5_noprune_results.txt Will also save figures of slice montages if 'slices' requested. iimg_multi_threshold viewer ===================================================== Showing positive or negative: pos Overlay is: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/SPM8_colin27T1_seg.img No mask specified in iimg_multi_threshold Entered p-value image? : Yes Height thresholds:0.0034 Extent thresholds: 5 Show only contiguous with seed regions: No Warning: p-values in cl.Z will not give valid spm_max subclusters. log(1/p) saved in output cl.Z field.
iimg_multi_threshold viewer ===================================================== Showing positive or negative: neg Overlay is: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/SPM8_colin27T1_seg.img No mask specified in iimg_multi_threshold Entered p-value image? : Yes Height thresholds:0.0034 Extent thresholds: 5 Show only contiguous with seed regions: No Warning: p-values in cl.Z will not give valid spm_max subclusters. log(1/p) saved in output cl.Z field.
Extracting image data for significant clusters. Extracting data for multilevel mediation 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 Extracting image data for significant clusters. Extracting data for multilevel mediation 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033
Warning: cl2region:no comparable field name
Warning: cl2region:no comparable field imnames
Warning: cl2region:no comparable field snr_avgts
Warning: cl2region:no comparable field snr
Warning: cl2region:no comparable field numpos
Warning: cl2region:no comparable field power80
Warning: cl2region:no comparable field name
Warning: cl2region:no comparable field imnames
Warning: cl2region:no comparable field snr_avgts
Warning: cl2region:no comparable field snr
Warning: cl2region:no comparable field numpos
Warning: cl2region:no comparable field power80
Summary of output images: M-Y_pvals.img Results clusters clpos and clneg are returned for the LAST image in this set. Printing Tables. Z field contains: Mediation b effect (shown in maxstat) Name index x y z corr voxels volume_mm3 maxstat snr_avgts(d) minsnr maxsnr numpos power80 partialr_with_y partialr_p Za_max Pa_max Zb_max Pb_max Zab_max Pab_max num_sig_voxels Cblm_X_L 1 -24 -38 -38 NaN 5 40 6.78 -0.55 -0.77 -0.24 2 55 NaN NaN 2.61 0.0089 3.25 0.0011 1.54 0.12 Cblm_VI_L 2 -20 -60 -28 NaN 196 1568 8.40 0.86 -0.92 1.82 6 23 NaN NaN 5.24 0.0000 3.69 0.0002 2.82 0.0048 Cblm_V_L 3 -2 -62 -14 NaN 10 80 6.72 0.41 0.35 0.45 4 98 NaN NaN 4.38 0.0000 3.24 0.0012 2.70 0.0069 Amygdala_CM_ 4 20 -2 -14 NaN 6 48 6.80 1.22 0.97 1.33 6 12 NaN NaN 4.94 0.0000 3.26 0.0011 1.78 0.07 Ctx_43_L 5 -64 2 8 NaN 14 112 6.47 1.01 0.95 1.11 6 17 NaN NaN 4.24 0.0000 3.17 0.0015 2.60 0.0092 Ctx_Ig_R 6 36 -16 16 NaN 12 96 7.36 1.20 0.67 1.55 5 13 NaN NaN 4.87 0.0000 3.41 0.0006 1.86 0.06 Ctx_PFcm_R 7 38 -30 18 NaN 5 40 6.31 0.45 0.32 0.50 5 80 NaN NaN 3.32 0.0009 3.12 0.0018 1.08 0.28 Ctx_OP4_R 8 48 -18 18 NaN 17 136 6.79 1.18 0.70 1.56 5 13 NaN NaN 5.33 0.0000 3.26 0.0011 2.07 0.04 Ctx_PSL_L 9 -60 -44 24 NaN 61 488 9.38 -0.68 -1.50 0.17 2 36 NaN NaN 3.51 0.0005 3.93 0.0001 1.52 0.13 Ctx_p32pr_L 10 -8 14 30 NaN 14 112 7.02 0.64 0.49 0.92 5 40 NaN NaN 5.40 0.0000 3.32 0.0009 2.09 0.04 Ctx_p32pr_R 11 6 18 32 NaN 7 56 6.39 0.81 0.71 0.92 5 26 NaN NaN 4.75 0.0000 3.14 0.0017 2.39 0.02 Ctx_6mp_R 12 20 -16 58 NaN 6 48 6.69 -0.33 -0.50 0.01 3 150 NaN NaN 3.50 0.0005 3.23 0.0012 1.94 0.05 Z field contains: Mediation b effect (shown in maxstat) Name index x y z corr voxels volume_mm3 maxstat snr_avgts(d) minsnr maxsnr numpos power80 partialr_with_y partialr_p Za_max Pa_max Zb_max Pb_max Zab_max Pab_max num_sig_voxels Ctx_PHA3_R 1 32 -34 -20 NaN 71 568 8.70 -1.51 -2.88 -0.71 0 9 NaN NaN -3.2236 0.0013 -3.7659 0.0002 1.95 0.05 Ctx_STSva_L 2 -52 -8 -20 NaN 6 48 6.80 -0.24 -0.36 -0.12 3 288 NaN NaN -3.6330 0.0003 -3.2590 0.0011 2.17 0.03 Ctx_V4t_R 3 44 -76 -2 NaN 33 264 8.24 0.67 -0.40 1.14 4 38 NaN NaN -3.9319 0.0001 -3.6472 0.0003 1.30 0.19 Ctx_8C_L 4 -48 24 38 NaN 5 40 6.17 -0.40 -0.49 -0.30 3 100 NaN NaN -2.8994 0.0037 -3.0757 0.0021 1.39 0.16 Ctx_8C_L 5 -38 20 40 NaN 5 40 6.09 -0.61 -0.63 -0.55 1 45 NaN NaN -2.4114 0.02 -3.0519 0.0023 0.92 0.36 Ctx_8Av_L 6 -36 24 50 NaN 67 536 8.35 -0.69 -0.99 -0.52 0 35 NaN NaN -3.5284 0.0004 -3.6776 0.0002 1.84 0.07 Ctx_8BL_R 7 2 48 48 NaN 12 96 7.27 -0.52 -0.64 -0.04 2 60 NaN NaN -1.9130 0.06 -3.3908 0.0007 0.81 0.42 Saving clusters in cl_M-Y_pvals_003_k5_noprune
Saving image of slices: orth_M-Y_pvals_003_k5_noprune_coronal.png
Saving image of slices: orth_M-Y_pvals_003_k5_noprune_sagittal.png
Saving image of slices: orth_M-Y_pvals_003_k5_noprune_axial.png

Results for Path a*b

Generate results for the mediation effect. Here, we'll return some clusters structures with results to the workspace as output so we can examine them later. (We can also load them from disk).
printhdr('Path a*b: Brain Mediators of Temperature Effects on Pain')
----------------------------------------------
----------------------------------------------
Path a*b: Brain Mediators of Temperature Effects on Pain
----------------------------------------------
----------------------------------------------
[clpos, clneg, clpos_data, clneg_data, clpos_data2, clneg_data2] = mediation_brain_results('ab', 'thresh', ...
SETUP.fdr_p_thresh, 'size', 5, ...
'slices', 'tables', 'names', 'save');
Will show slices.
Will print tables. Saving log file: X-M-Y_pvals_003_k5_noprune_log.txt Other key results in: X-M-Y_pvals_003_k5_noprune_results.txt Will also save figures of slice montages if 'slices' requested. iimg_multi_threshold viewer ===================================================== Showing positive or negative: pos Overlay is: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/SPM8_colin27T1_seg.img No mask specified in iimg_multi_threshold Entered p-value image? : Yes Height thresholds:0.0034 Extent thresholds: 5 Show only contiguous with seed regions: No Warning: p-values in cl.Z will not give valid spm_max subclusters. log(1/p) saved in output cl.Z field.
iimg_multi_threshold viewer ===================================================== Showing positive or negative: neg Overlay is: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/SPM8_colin27T1_seg.img No mask specified in iimg_multi_threshold Entered p-value image? : Yes Height thresholds:0.0034 Extent thresholds: 5 Show only contiguous with seed regions: No Warning: p-values in cl.Z will not give valid spm_max subclusters. log(1/p) saved in output cl.Z field.
Extracting image data for significant clusters. Extracting data for multilevel mediation 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 Extracting image data for significant clusters. No clusters to extract.
Warning: cl2region:no comparable field name
Warning: cl2region:no comparable field imnames
Warning: cl2region:no comparable field snr_avgts
Warning: cl2region:no comparable field snr
Warning: cl2region:no comparable field numpos
Warning: cl2region:no comparable field power80
Summary of output images: X-M-Y_pvals.img Results clusters clpos and clneg are returned for the LAST image in this set. Printing Tables. Full tables can only be printed if valid data are extracted. I did not find valid images to extract from, so I'm printing abbreviated tables. Positive effects Z field contains: Mediation ab effect (shown in maxstat) Name index x y z corr voxels volume_mm3 maxstat snr_avgts(d) minsnr maxsnr numpos power80 Ctx_TE1m_L 1 -62 -26 -20 NaN 6 48 6.92 0.23 0.15 0.28 3 298 Ctx_FOP1_L 2 -58 4 4 NaN 13 104 6.99 1.10 1.01 1.22 6 15 Ctx_FOP4_R 3 38 10 6 NaN 25 200 8.93 0.40 -0.15 1.91 5 103 Ctx_43_R 4 60 2 10 NaN 14 112 7.35 1.75 1.39 2.25 6 7 Ctx_6a_L 5 -22 14 48 NaN 12 96 7.64 -0.23 -0.32 -0.13 5 294 Negative effects No results to print. Saving clusters in cl_X-M-Y_pvals_003_k5_noprune
Saving image of slices: orth_X-M-Y_pvals_003_k5_noprune_coronal.png
Saving image of slices: orth_X-M-Y_pvals_003_k5_noprune_sagittal.png
Saving image of slices: orth_X-M-Y_pvals_003_k5_noprune_axial.png

Interactive data analysis: Using extracted clusters

The mediation toolbox saves output in standard formats that you can use in scripted or interactive analyses using other tools, including the CANlab Core toolbox. CANlab Core is an interactive, object-oriented toolbox written in Matlab. For more information on these tools and walkthroughs, see canlab.github.io.
A sample "road map" showing some of the things you can do with simple commands in the CANlab Core toolbox is below. These include surfaces, slices, and other visualization; tables of neuroimaging results labeled with reference atlases; data diagnostic plots; and statistical and machine learning analyses.
canlab_core_displaytools_2021-09-03.png
The mediation toolbox returns two kinds of outputs that you can use for a variety of analyses. The first is statistical image files (.img/.nii) saved to disk, which you can read and analyze using a variety of neuroimaging software applications. The second is "clusters" structures that return output to the workspace (and disk) that you can use for further visualization and analysis.
The "clusters" structure, and its newer object-oriented counterpart in the CANlab Core toolbox (the region class object), is a data format designed to store information about contiguous suprathreshold (significant) regions ('blobs'). The cluster or region-class variable is a vector, with one element per brain region. Within each element, a struct-format data structure describes the region, including its voxel locations, the mapping from voxel to "world" (mm brain) space, statistics associated with each voxel and with the region average, and potentially data for voxels or the region average that have been extracted from images and attached.
By default (if output variables are requested and it can find valid data input images), mediation_brain_results returns clusters for both positive and negative effects, with image data extracted for each region and averaged over in-region voxels. This allows you to examine and plot the data, and re-run a descriptive mediation analysis and mediation plots for any given region.
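For example, here is a hedged sketch of re-running a descriptive (non-voxelwise) multilevel mediation on one region's averaged data. It assumes that SETUP.data.X and SETUP.data.Y hold the per-subject predictor and outcome used in the brain analysis, and that region_avg is a cell array (one cell per participant) of region-averaged mediator data assembled from the extracted clusters; these variable and field names are assumptions, not fixed toolbox conventions.
% Hedged sketch: descriptive multilevel mediation for one region's averaged data.
% SETUP.data.X, SETUP.data.Y, and region_avg are assumed variables; replace them
% with the predictor, outcome, and extracted region data from your own analysis.
[paths, stats] = mediation(SETUP.data.X, SETUP.data.Y, region_avg, ...
    'boot', 'plots', 'verbose', 'names', {'Temperature' 'Brain region' 'Pain report'});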
More generally, once you have a clusters structure, you can do lots of things with it (a brief sketch of a few of these follows the list):
1. Extract data from any images you want and do ROI analysis (e.g., extract_contrast_data)
2. Plot blobs on orthviews (sections) or slices, with many options for display control, using cluster_orthviews
3. Show them on a slice montage with montage_clusters
4. Make a table of coordinates, etc. with cluster_table or mediation_brain_print_tables
5. Make a surface image plot with cluster_surf
6. Plot on flexible, custom surfaces by combining use of addbrain and cluster_surf
7. Convert to a region object with cluster2region, and then use newer region methods to do all the above and more (see methods(region))
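Here is a brief, hedged sketch of a few of these (items 3, 4, and 7), using the clpos_data2 clusters returned by the Path a*b call above; option syntax may vary slightly across CANlab Core versions.
% Hedged sketch of a few common operations on a clusters structure.
% clpos_data2 comes from the mediation_brain_results call above.
montage_clusters([], clpos_data2);   % item 3: slice montage (empty first argument = default overlay)
cluster_table(clpos_data2);          % item 4: table of coordinates and cluster statistics
r = cluster2region(clpos_data2);     % item 7: convert to a region object
methods(region)                      % list the methods available for region objects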
Clusters are defined relative to a threshold and are produced by mediation_brain_results. When we ran the results above, it generated the cluster structure variables clpos, clneg, clpos_data, clneg_data, clpos_data2, and clneg_data2, which were returned as outputs (and also saved to disk, as described below).
Within each cluster, the field .shorttitle contains an autolabeled name and .mm_center contains the coordinates of the region center. For clpos_data, clneg_data, clpos_data2, and clneg_data2 only, .timeseries contains extracted data averaged across the region, and .all_data contains the extracted data from each voxel for a given participant. The fields .a_effect_Z, .a_effect_p, and so on give Z-values and P-values for each effect for each voxel in the region (for the group; saved in the first subject's cell only).
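As a hedged illustration of how these fields are accessed (variable and field names follow the description above; the exact contents depend on your analysis):
% Hedged illustration: inspect fields of the first positive-effect cluster.
cl1 = clpos_data2(1);
disp(cl1.shorttitle)     % autolabeled region name
disp(cl1.mm_center)      % region center, in mm (world) coordinates
% Region-averaged extracted data are in cl1.timeseries; voxel-wise extracted
% data for a participant are in cl1.all_data.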
The clusters with extracted data are also saved to the hard drive when the 'save' option is specified. Because we entered the 'save' option above, mediation_brain_results has saved files like this one:
load('cl_M-Y_pvals_003_k5_noprune.mat')
whos cl*
Name            Size     Bytes      Class     Attributes
clfilename      1x27     54         char
clneg           1x1      25232      cell
clneg_data      1x33     1455826    cell
clneg_data2     1x7      103738     struct
clneg_extent    1x7      25128      struct
clpos           1x1      43296      cell
clpos_data      1x33     2500778    cell
clpos_data2     1x12     177362     struct
clpos_extent    1x12     43192      struct
In the filename, M-Y indicates the Path b effect, 003 indicates a P < 0.003 threshold, and k5 indicates a 5-contiguous-voxel minimum extent threshold. The 'noprune' tag indicates that we did not ask for multi-threshold pruning, but instead entered a single threshold (see above). Loading this file yields the clpos, clneg, etc. variables described above.
Here is another way of printing a basic table with region labels and region-level stats, using CANlab object-oriented tools. We first convert the clusters structure (clpos_data2) to a region object, and then we can use region-class methods like table().
The region object cannot store all the fields we've added to the cl structure, so we'll get warnings that it's not transferring some information. We'll turn those off here.
warning off
pos_regions = cluster2region(clpos_data2);
warning on
table(pos_regions, 'nolegend');
____________________________________________________________________________________________________________________________________________ Positive Effects Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index ________________ ______ _________________ ______ _____________________________ _____________________ _____________________ ____________ {'Amygdala_CM_'} 216 20 -2 -14 6.8032 {'Amygdala' } 33 0 4 {'Cblm_X_L' } 184 -24 -38 -38 6.7805 {'Cerebellum' } 52 0 1 {'Cblm_VI_L' } 3472 -20 -60 -28 7.0345 {'Cerebellum' } 72 0 2 {'Cblm_V_L' } 320 -2 -62 -14 6.7182 {'Cerebellum' } 65 0 3 {'Ctx_6mp_R' } 208 20 -16 58 6.6858 {'Cortex_SomatomotorA' } 27 0 12 {'Ctx_43_L' } 392 -64 2 8 6.4727 {'Cortex_SomatomotorB' } 49 0 5 {'Ctx_Ig_R' } 464 36 -16 16 7.0345 {'Cortex_SomatomotorB' } 52 1 6 {'Ctx_PFcm_R' } 224 38 -30 18 6.3051 {'Cortex_SomatomotorB' } 50 0 7 {'Ctx_OP4_R' } 696 48 -18 18 6.7912 {'Cortex_SomatomotorB' } 48 0 8 {'Ctx_PSL_L' } 1168 -60 -44 24 7.0345 {'Cortex_Ventral_AttentionA'} 64 1 9 {'Ctx_p32pr_L' } 400 -8 14 30 7.0235 {'Cortex_Ventral_AttentionA'} 32 0 10 {'Ctx_p32pr_R' } 232 6 18 32 6.3916 {'Cortex_Ventral_AttentionB'} 59 0 11 Negative Effects No regions to display
There are many other region methods and options as well. For example, slice montages:
montage(pos_regions);
montage(pos_regions, 'regioncenters', 'colormap');