CANlab Robust Regression Walkthrough
This script demonstrates how to use the CANlab robust regression toolbox to run robust 2nd-level analyses.
Robust regression is a popular approach for principled down-weighting of outliers in a General Linear Model analysis. It has particular advantages when conducting many tests across a map, as in many neuroimaging analyses; in such cases, outliers cannot easily be checked and dealt with for each individual test. The application of robust regression to neuroimaging is described in: Wager, T. D., Keller, M. C., Lacey, S. C., & Jonides, J. (2005). Increased sensitivity in neuroimaging analyses using robust regression. NeuroImage, 26, 99-113.
Theory
• Outliers can violate assumptions and have very large effects on regression coefficients
• High-variance observations tend to dominate if their observed values are extreme
• When assumptions cannot be checked at each voxel, automatic procedures for weighting observations based on outlier status are advantageous
• Robust regression: an automatic procedure for identifying cases that are potential outliers and down-weighting them (illustrated below)
• Iterative GLS estimation procedure, but weights are based on residual values rather than variances
• A potential way to deal with heteroscedastic variances
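To make the down-weighting concrete, here is a minimal sketch using Matlab's robustfit (Statistics Toolbox) on simulated data with a single outlier. The data and variable names are illustrative only; robfit applies the same iteratively reweighted least squares (IRLS) machinery at each voxel.
% Illustrative sketch: IRLS down-weighting of an outlier with robustfit
rng(1);                                 % reproducible simulated data
x = (1:20)';                            % predictor
y = 0.5 * x + randn(20, 1);             % outcome with true slope 0.5
y(end) = y(end) + 15;                   % make the last observation an outlier
b_ols = [ones(20, 1) x] \ y;            % ordinary least squares estimates [intercept; slope]
[b_rob, stats] = robustfit(x, y);       % robust (IRLS, bisquare) estimates [intercept; slope]
disp([b_ols b_rob])                     % the robust slope is less affected by the outlier
disp(stats.w(end))                      % IRLS weight for the outlier is near zero (strongly down-weighted)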
Installation and code
Toolbox installation
In brief, you will need SPM software (for image reading/writing only) and the Robust toolbox on your Matlab path. For full functionality, you'll also need the CANlab Core Tools repository, including subfolders, on your path. Download or "clone" (e.g., via GitHub Desktop) these two toolboxes and add each to your Matlab path with subfolders.
In addition, you will need Statistical Parametric Mapping (SPM; SPM2, SPM5, or SPM12) installed and on your Matlab path. This is used for image reading and writing, but not for statistics.
If you are using robfit_parcelwise (see below), you will also need:
Installation. To install these, place the source code folders in a folder on your hard drive, and type
>> pathtool
at the Matlab command prompt.
Select each of the folders named above, and select “add with subfolders”
SPM and Matlab signal processing toolboxes required
The robust regression toolbox is not an SPM toolbox per se. It uses SPM image manipulation (data I/O) functions, but does not rely on any SPM functions for statistics. It does, however, use the Matlab Statistics Toolbox, so you will need that as well.
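A quick way to check that the required software is installed is sketched below (a hedged sketch; the license feature name and function names shown are assumptions you should verify for your setup):
% Quick sanity checks (sketch): each line should return a non-empty path / a value of 1
which spm                               % SPM (image reading/writing)
which robfit                            % robust regression toolbox
which fmri_data                         % CANlab Core Tools (object-oriented utilities)
license('test', 'Statistics_Toolbox')   % Matlab Statistics Toolbox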
Main functions in the robust regression toolbox
The main functions used in the toolbox are:
robfit.m
Runs robust regression for a series of group analyses, creating a directory and saving results for each
robfit_parcelwise.m
Runs robust regression for a single group analysis on each of a series of "parcels", regions of interest that are larger than voxels and generally defined to conform to pre-defined anatomical or functional boundaries.
robustfit.m
Matlab's internal function, which is run for each test conducted across brain voxels or parcels.
robseed.m
Robust regression correlation analysis, correlating each voxel with a seed region you specify.
robust_results_batch.m
Threshold maps and generate and save plots and tables of results.
publish_robust_regression_report.m
Thresholds maps and generates an HTML report with results
Note: If you are using object-oriented tools in CANlab Core, the regress( ) method for fmri_data objects also has a robust regression option. This works independently of the robust regression toolbox.
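As a hedged sketch of that object-oriented route (dat and X are placeholder names for an fmri_data object and an images-by-covariates design matrix; see help fmri_data.regress for the exact options and outputs in your version of CANlab Core):
% Sketch only: robust second-level regression with the fmri_data regress( ) method
dat.X = X;                        % [n images x k] covariate matrix; an intercept is typically added for you
out = regress(dat, 'robust');     % IRLS robust regression at each voxel
% out contains statistic images (t- and p-values) for each regressor; see the method help for details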
Sample datasets
A sample dataset is in the "Sample_datasets" folder in CANlab_Core_Tools.
See canlab.github.io for more sample datasets, and in particular the function load_image_set( ) for some easy, pre-packaged datasets, atlases, and maps. This walkthrough will use emotion regulation data in the "Sample_datasets" folder, in this subfolder: "Wager_et_al_2008_Neuron_EmotionReg"
The dataset is a series of contrast images from N = 30 participants. Each image is a contrast image for [reappraise neg vs. look neg] for one participant.
These data were published in:
Wager, T. D., Davidson, M. L., Hughes, B. L., Lindquist, M. A., & Ochsner, K. N. (2008). Prefrontal-subcortical pathways mediating successful emotion regulation. Neuron, 59, 1037-50.
We will load it with load_image_set( ).
Robust regression with robfit.m
Robfit runs a series of analyses. Each analysis is a univariate robust regression analysis, regressing data from each voxel in a series of images on a GLM design matrix.
- The typical use case for a single analysis is to enter a series of contrast images, one for each of n participants.
- robfit can handle a series of contrast images, with one set of n images per contrast. It creates directories called robust0001, robust0002, etc., one per analysis (i.e., per contrast).
- A SETUP.mat file for each analysis contains information about the input images and design matrix, along with some other meta-data.
Covariates
You can run robfit to test the group mean (intercept) only or you can enter one or more covariates. Each covariate is entered as a predictor in the design matrix.
As with any multiple regression, the intercept will be tested (with associated t- and p-values) at the zero-point of all covariates, i.e., when they are zero. Therefore, if you mean-center covariates and use effects codes (1, -1) for categorical variables, the intercept will be interpreted as the group effect for the average participant, assuming input images correspond to participants.
Robfit will save t-maps and P-maps for each regressor in your design matrix, with the intercept first (rob_tmap_0001.nii) followed by covariate effects if covariates are entered (rob_tmap_0002.nii, etc.)
Robfit automatically mean-centers continuous covariates, but does not mean-center categorical covariates, and instead codes them with effects codes of 1 and -1.
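For example, a hedged sketch of preparing covariates before adding them to the design (success and group are hypothetical per-subject variables; as noted above, robfit also centers and scales continuous covariates itself):
% Sketch: one continuous and one two-level categorical covariate (hypothetical variables)
success_c = success - mean(success);                    % mean-center the continuous covariate
group_ec = zeros(size(group));                          % effects codes for a two-group factor
group_ec(group == 1) = 1;  group_ec(group == 2) = -1;   % coded 1 / -1, so the intercept reflects the grand mean
EXPT.cov = [success_c group_ec];                        % one row per subject; do not include an intercept column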
robfit inputs
robfit uses a data structure with a specific format that stores information about image files and the second-level experimental design. This structure has a special variable name here, by convention: EXPT.
The main 2nd-level analysis function is called robfit.m. Once you create an EXPT structure, you will pass it into robfit.m.
Here is a list of the fields in EXPT required to ultimately run analyses with robfit:
EXPT.subjects List of subject directory names, in a cell array: {‘subj1’ ‘subj2’ … etc.}
EXPT.SNPM.P Cell array; each cell contains names of a set of images (one per subject) to be subjected to a group analysis. Each image list is a string matrix of image file names. Thus, images for multiple 2nd level analyses can be specified at once by entering images for each analysis into separate cells in EXPT.SNPM.P
EXPT.SNPM.connames String matrix of names, whose rows correspond to names for the analyses. Each row (name) corresponds to the cell with the same index in EXPT.SNPM.P.
E.g.,
['Faces-Houses'
'Houses-Faces']
EXPT.SNPM.connums A vector of integers numbering the analyses, and corresponding to cells in EXPT.SNPM.P. These numbers determine output directory names, which will be called:
robust0001
robust0002
…and so forth (one directory per 2nd-level analysis)
EXPT.mask Name of mask image containing in-analysis voxels. This need not be in the same space as the images you’re analyzing.
EXPT.cov Matrix of subjects x covariates for analysis of between subjects effects. One row per subject, one column per predictor. Do not include an intercept here. It will be added to the front end of the design matrix, as the first predictor.
The output images (e.g., rob_tmap_0001.nii) are numbered so that 0001 is the intercept (overall activation) and 0002 onward are the maps for the covariates, in order.
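Putting these fields together, a minimal intercept-only (one-sample) setup might look like the sketch below; the subject labels and image filenames are placeholders, and a complete, runnable example with a covariate follows in the next sections.
% Minimal sketch: group-mean (intercept-only) analysis with placeholder filenames
EXPT.subjects = {'subj1' 'subj2' 'subj3'};
EXPT.SNPM.P{1} = char('subj1_con1.nii', 'subj2_con1.nii', 'subj3_con1.nii');  % string matrix, one row per image
EXPT.SNPM.connames = 'My_contrast';
EXPT.SNPM.connums = 1;
EXPT.cov = [];                                 % empty: test the group mean only
EXPT.mask = which('gray_matter_mask.img');     % optional analysis mask
EXPT = robfit(EXPT);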
Load the dataset
% This script uses object-oriented tools, which we cover in another tutorial.
% For now, it's just a convenient way to get a set of image filenames.
[data_obj, subject_names, image_names] = load_image_set('emotionreg');
Direct calls to spm_defauts are deprecated.
Please use spm('Defaults',modality) or spm_get_defaults instead.
Defaults settings have been modified by file(s):
/Users/f003vz1/Dropbox (Dartmouth College)/Matlab_code_external/spm12/spm_my_defaults.m
Modified fields: stats
Loaded images:
/Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/Sample_datasets/Wager_et_al_2008_Neuron_EmotionReg/Wager_2008_emo_reg_vs_look_neg_contrast_images.nii
(the same 4-D file name is listed 30 times, once per volume/subject; duplicate lines omitted)
% data_obj is an fmri_data object, with its own methods and properties
Load the behavioral data
beh_data_file = which('Wager_2008_emotionreg_behavioral_data.txt')
if isempty(beh_data_file), error('Sample behavioral data not found. '); end
beh = readtable(beh_data_file);
Define the structure with input variables for robfit
Here we fill in fields of a structure, called EXPT, that contain what we need to run the analysis. We'll do this for only a single contrast, though we could add more cells to these fields to include other contrasts as well.
% The fields of EXPT should be:
% EXPT.SNPM.P Cell vector. Each cell specifies images for one analysis. Each cell contains a string matrix with image names for the analysis. Image files can be 3-D or 4-D images.
% EXPT.SNPM.connames String matrix of analysis names (e.g., contrast names), one row per analysis
% EXPT.SNPM.connums Vector of contrast numbers. Determines folder names (e.g., 1 = robust0001)
% EXPT.cov [n x k] matrix of covariates with n observations (rows must match the number of images); leave empty for a 1-sample t-test
% EXPT.mask Optional mask file image name, for voxels to include
EXPT.subjects = subject_names;
EXPT.SNPM.P{1} = image_names{1}; % string matrix of image names
% Note: In the sample data file, 30 images for 30 subjects are saved in 1
% 4-D .nii file. So we only need to list the name of this one file.
% If you had 3-D image files, this would be a string matrix with 30 rows
EXPT.SNPM.connames = 'Reapp_vs_Look';
EXPT.mask = which('gray_matter_mask.img');
EXPT.cov = beh.Y_Reappraisal_Success;
% Because we have one column in EXPT.cov, our 2nd-level design matrix
% will contain two images (e.g., two t-maps/p-maps):
% One for the intercept (always first) and one for the covariate entered.
Let's look at the input structure we have created:
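One simple way to inspect it at the command line (output omitted here):
EXPT
EXPT.SNPM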
Create a directory for the results and go to it
This directory will contain subdirectories called robust00XX for each cell containing images entered in EXPT.SNPM.P.
Before you run this, make sure you are starting in a directory in which you can create a folder for the sample results.
basedir = fullfile(pwd, 'Robust_regression_sample_results_dir');
if ~exist(basedir, 'dir'), mkdir(basedir), end
% Go to the results directory and save the EXPT setup file for later reference:
cd(basedir)
save EXPT EXPT
Run robfit
EXPT = robfit(EXPT);
Robfit.m
____________________________________________________________
Preparing covariates:
Cov 1: Centering and scaling to unit variance. Will be saved in rob_tmap_0002
____________________________________________________________
robfit: Will write to robust???? directories in: /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir
Robfit.m - working on Reapp_vs_Look
Running OLS and IRLS comparison (slower) - to turn this off, use 0 as 3rd input argument.
____________________________________________________________
Using mask image:
/Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/gray_matter_mask.img
Saving results in : robust0001 (Creating directory)
Minimum allowed observations per voxel:25
Image name for first image:
/Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/Sample_datasets/Wager_et_al_2008_Neuron_EmotionReg/Wager_2008_emo_reg_vs_look_neg_contrast_images.nii
Number of images: 1
31078 voxels, 30 planes in analysis
Done: 100% done. writing volumes.
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/nsubjects.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/mask.nii
Writing 4-D weight file: /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/weights.nii
Writing t- and p-images for univariate effects in model.
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/rob_beta_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/rob_tmap_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/rob_p_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/ols_beta_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/ols_tmap_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/ols_p_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/irls-ols_z_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/irls-ols_p_0001.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/rob_beta_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/rob_tmap_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/rob_p_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/ols_beta_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/ols_tmap_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/ols_p_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/irls-ols_z_0002.nii
Writing /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/irls-ols_p_0002.nii
Creating HTML report for results in:
/Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001
Saved HTML report:
/Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/published_output/robust_regression_report_09-Aug-2022_18_58/robust_results_batch.html
% Try help robfit for more options.
% e.g., you can specify that you want to run only some of the contrasts
% in EXPT.SNPM.P. You can specify that you want to create OLS images
% side by side with robust images to compare.
% And you can specify a mask filename.
% The mask image should contain 1’s and 0’s, with 1’s in voxels you want to analyze.
Note: The robfit command will also attempt to run publish_robust_regression_report for each set of images (each analysis) it runs. This generates an HTML report in the results directory and opens it. Thus, if this command runs correctly, it will bring up a report (or multiple reports).
Examine the output files
ls -lt
total 8
drwxr-xr-x 23 f003vz1 staff 736 Aug 9 18:58 robust0001
-rw-r--r-- 1 f003vz1 staff 824 Aug 9 18:57 EXPT.mat
published_output A folder with HTML reports from publish_robust_regression_report
irls-ols_p_0002.nii P-value image for the covariate, difference between robust and OLS
irls-ols_z_0002.nii Z-value image for the covariate, difference between robust and OLS
ols_p_0002.nii P-value image for the covariate, OLS regression
ols_tmap_0002.nii t-value image for the covariate, OLS regression
ols_beta_0002.nii beta (slope) image for the covariate, OLS regression
rob_p_0002.nii P-value image for the covariate, robust regression
rob_tmap_0002.nii t-value image for the covariate, robust regression
rob_beta_0002.nii beta (slope) image for the covariate, robust regression
irls-ols_p_0001.nii P-value image for the intercept, difference between robust and OLS
irls-ols_z_0001.nii (This and the images below are the same as those above, but for the intercept)
ols_p_0001.nii
ols_tmap_0001.nii
ols_beta_0001.nii
rob_p_0001.nii
rob_tmap_0001.nii
rob_beta_0001.nii
weights.nii Robust regression weights for each image (i.e., participant)
mask.nii mask of voxels included in the analysis
nsubjects.nii Image with integer values for number of subjects with valid data
SETUP.mat Metadata file
- If you are using CANlab object-oriented tools to threshold and view the results maps, you can load and combine the t- and P-maps into t-statistic image objects for each regressor.
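For example, a hedged sketch run from inside the robust0001 directory; the exact constructor syntax may differ across CANlab Core versions (check help statistic_image), and the error degrees of freedom (28 = 30 images minus 2 regressors) is an assumption based on this dataset:
% Sketch: load and threshold the robust t-map for the intercept (group average)
t = statistic_image('image_names', 'rob_tmap_0001.nii', 'type', 'T', 'dfe', 28);  % dfe assumed: 30 - 2
t = threshold(t, .05, 'fdr');                  % FDR q < .05 across in-mask voxels
orthviews(t)                                   % or: montage(t), or canlab_results_fmridisplay(region(t))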
Get batch results
Results should have been saved by publish_robust_regression_report as
HTML. To re-run batch results, go to the robust regression folder and
run the batch script.
You can edit this script to customize it as well.
robust_results_batch
----------------------------------------------
Robust regression
----------------------------------------------
Loading files from /Users/f003vz1/Downloads/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001/Robust_regression_sample_results_dir/robust0001
robust0001
Reapp_vs_Look
Reapp_vs_Look
robust0001
----------------------------------------------
Design Matrix
----------------------------------------------
First regressor (image) is intercept: Yes
Regressors mean-centered (intercept reflects group mean): Yes
Number of regressors (including intercept): 2
Setting up fmridisplay objects
Grouping contiguous voxels: 1 regions
sagittal montage: 1058 voxels displayed, 29070 not displayed on these slices
sagittal montage: 1032 voxels displayed, 29096 not displayed on these slices
sagittal montage: 985 voxels displayed, 29143 not displayed on these slices
axial montage: 9831 voxels displayed, 20297 not displayed on these slices
axial montage: 10506 voxels displayed, 19622 not displayed on these slices
Grouping contiguous voxels: 1 regions
sagittal montage: 1058 voxels displayed, 29070 not displayed on these slices
sagittal montage: 1032 voxels displayed, 29096 not displayed on these slices
sagittal montage: 985 voxels displayed, 29143 not displayed on these slices
axial montage: 9831 voxels displayed, 20297 not displayed on these slices
axial montage: 10506 voxels displayed, 19622 not displayed on these slices
No variability in mapped values. Not plotting legend.
Modal value for number of participants: 30
______________________________________________________________
Outlier analysis
______________________________________________________________
global mean | global mean to var | spatial MAD | Missing values | 0 images
Retained 4 components for mahalanobis distance
Expected 50% of points within 50% normal ellipsoid, found 20.00%
Expected 1.50 outside 95% ellipsoid, found 3
Potential outliers based on mahalanobis distance:
Bonferroni corrected: 3 images Cases 6 12 16
Uncorrected: 3 images Cases 6 12 16
Retained 8 components for mahalanobis distance
Expected 50% of points within 50% normal ellipsoid, found 26.67%
Expected 1.50 outside 95% ellipsoid, found 7
Potential outliers based on mahalanobis distance:
Bonferroni corrected: 3 images Cases 6 12 16
Uncorrected: 7 images Cases 6 11 12 15 16 18 27
Mahalanobis (cov and corr, q<0.05 corrected):
3 images
Outlier_count Percentage
_____________ __________
global_mean 2 6.6667
global_mean_to_variance 2 6.6667
missing_values 0 0
rmssd_dvars 0 0
spatial_variability 2 6.6667
mahal_cov_uncor 3 10
mahal_cov_corrected 3 10
mahal_corr_uncor 7 23.333
mahal_corr_corrected 3 10
Overall_uncorrected 7 23.333
Overall_corrected 3 10
SPM12: spm_check_registration (v7759) 18:59:50 - 09/08/2022
========================================================================
Display /Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/keuken_2014_enhanced_for_underlay.img,1
(all) /Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/keuken_2014_enhanced_for_underlay.img,1
/Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/keuken_2014_enhanced_for_underlay.img,1
Grouping contiguous voxels: 1 regions
Grouping contiguous voxels: 1 regions
Grouping contiguous voxels: 1 regions
Setting up fmridisplay objects
Grouping contiguous voxels: 1 regions
sagittal montage: 2054 voxels displayed, 29024 not displayed on these slices
axial montage: 14136 voxels displayed, 16942 not displayed on these slices
Grouping contiguous voxels: 1 regions
sagittal montage: 2054 voxels displayed, 29024 not displayed on these slices
axial montage: 14136 voxels displayed, 16942 not displayed on these slices
Warning: resample_space returned illegal length for removed voxels. Fixing...
Image 1 FDR q < 0.050 threshold is 0.005171
Image 1
48 contig. clusters, sizes 1 to 1880
Positive effect: 3063 voxels, min p-value: 0.00000000
Negative effect: 103 voxels, min p-value: 0.00009374
Grouping contiguous voxels: 48 regions
sagittal montage: 182 voxels displayed, 2984 not displayed on these slices
axial montage: 1669 voxels displayed, 1497 not displayed on these slices
Image 1 FDR q < 0.050 threshold is 0.000486
Image 1
49 contig. clusters, sizes 1 to 71
Positive effect: 286 voxels, min p-value: 0.00000000
Negative effect: 10 voxels, min p-value: 0.00002587
Grouping contiguous voxels: 49 regions
sagittal montage: 50 voxels displayed, 246 not displayed on these slices
axial montage: 158 voxels displayed, 138 not displayed on these slices
----------------------------------------------
FDR-corrected: Group average (Intercept)
----------------------------------------------
Image 1 FDR q < 0.050 threshold is 0.005171
Image 1
48 contig. clusters, sizes 1 to 1880
Positive effect: 3063 voxels, min p-value: 0.00000000
Negative effect: 103 voxels, min p-value: 0.00009374
Grouping contiguous voxels: 48 regions
____________________________________________________________________________________________________________________________________________
Positive Effects
Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index
____________________ __________ _________________ ______ _____________________________ _____________________ _____________________ ____________
{'GPe_R' } 992 21 0 5 3.3699 {'Basal_ganglia' } 58 0 28
{'rn_L' } 480 -7 -21 -5 3.0773 {'Brainstem' } 47 1 25
{'Cblm_CrusII_R' } 288 48 -58 -50 3.2078 {'Cerebellum' } 100 0 1
{'Cblm_CrusII_R' } 216 41 -55 -50 3.2339 {'Cerebellum' } 52 0 2
{'Cblm_VIIb_R' } 816 38 -45 -50 3.3519 {'Cerebellum' } 56 0 3
{'Cblm_CrusI_L' } 1160 -34 -62 -36 3.5414 {'Cerebellum' } 71 0 6
{'Cblm_CrusI_R' } 3064 55 -48 -27 4.2095 {'Cerebellum' } 60 0 11
{'Cblm_CrusI_L' } 480 -41 -65 -27 4.1625 {'Cerebellum' } 100 0 14
{'Multiple regions'} 78352 52 -48 9 7.0345 {'Cortex_Default_ModeA' } 6 17 4
{'Ctx_31pv_L' } 1112 -10 -48 32 3.3184 {'Cortex_Default_ModeA' } 47 1 31
{'Ctx_31pv_R' } 1624 7 -48 36 3.2956 {'Cortex_Default_ModeA' } 37 2 32
{'Ctx_23d_L' } 960 0 -17 36 3.3513 {'Cortex_Default_ModeA' } 42 1 33
{'Ctx_TE1a_L' } 12744 -62 -7 -23 5.3782 {'Cortex_Default_ModeB' } 30 2 10
{'Multiple regions'} 1.7784e+05 21 28 36 7.0345 {'Cortex_Default_ModeB' } 3 56 18
{'Ctx_TGd_L' } 384 -55 21 -23 3.2466 {'Cortex_Default_ModeB' } 31 0 20
{'Ctx_STSda_L' } 480 -55 -14 -14 3.3903 {'Cortex_Default_ModeB' } 23 0 21
{'Ctx_47m_R' } 672 34 34 -14 3.4052 {'Cortex_Default_ModeB' } 57 1 23
{'Ctx_TPOJ2_L' } 1016 -55 -58 14 3.166 {'Cortex_Dorsal_AttentionA' } 44 1 29
{'Ctx_7Am_R' } 512 14 -72 63 3.2133 {'Cortex_Dorsal_AttentionB' } 13 0 36
{'Multiple regions'} 28056 -48 21 27 7.0345 {'Cortex_Fronto_ParietalA' } 10 9 22
{'Ctx_PFm_L' } 3440 -55 -55 32 3.7149 {'Cortex_Fronto_ParietalB' } 44 0 30
{'Ctx_PFm_L' } 800 -45 -62 50 3.2547 {'Cortex_Fronto_ParietalB' } 84 0 34
{'Ctx_PFm_L' } 384 -52 -55 50 3.0434 {'Cortex_Fronto_ParietalB' } 88 0 35
{'Ctx_TF_L' } 672 -34 -17 -41 3.4264 {'Cortex_Limbic' } 1 0 7
{'Ctx_TGv_R' } 864 24 -3 -36 3.1976 {'Cortex_Limbic' } 52 0 8
{'Ctx_PeEc_R' } 360 17 -3 -36 3.2398 {'Cortex_Limbic' } 58 0 9
{'Ctx_TE2a_R' } 640 52 -21 -32 3.6776 {'Cortex_Limbic' } 57 0 12
{'Ctx_TGd_R' } 480 14 10 -32 3.3273 {'Cortex_Limbic' } 13 0 13
{'Ctx_TE2a_R' } 512 62 -34 -27 3.0362 {'Cortex_Limbic' } 17 0 15
{'Ctx_TE2a_R' } 1536 65 -21 -23 5.0197 {'Cortex_Limbic' } 19 0 16
{'Ctx_OFC_R' } 480 7 21 -27 3.6338 {'Cortex_Limbic' } 30 0 17
{'Ctx_TF_R' } 288 45 -17 -23 3.4574 {'Cortex_Limbic' } 61 0 19
{'Ctx_AAIC_L' } 1016 -45 21 -5 3.1489 {'Cortex_Ventral_AttentionB'} 22 0 24
{'Ctx_V1_L' } 288 -7 -93 5 3.1008 {'Cortex_Visual_Peripheral' } 97 0 26
{'Thal_VL' } 1936 14 -10 9 3.3238 {'Diencephalon' } 53 2 27
{'No label' } 640 14 0 -45 3.206 {'No description' } 0 0 5
Negative Effects
Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index
________________ ______ _________________ _______ _____________________________ _____________________ _____________________ ____________
{'Cau_L' } 1728 -24 -34 9 -3.3178 {'Basal_ganglia' } 42 0 43
{'Bstem_Med_R' } 472 3 -28 -50 -3.4083 {'Brainstem' } 39 0 37
{'Bstem_Pons_L'} 736 -7 -24 -45 -3.3761 {'Brainstem' } 43 0 38
{'Cblm_I_IV_R' } 1024 14 -38 -27 -3.4009 {'Cerebellum' } 84 0 39
{'Cblm_V_L' } 7584 -17 -41 -18 -4.5542 {'Cerebellum' } 48 2 40
{'Ctx_PFop_L' } 1152 -65 -21 32 -3.5726 {'Cortex_Dorsal_AttentionB' } 58 1 47
{'Ctx_3b_L' } 480 -58 -10 45 -3.0345 {'Cortex_SomatomotorA' } 40 0 48
{'Ctx_OP4_R' } 480 52 -17 14 -3.194 {'Cortex_SomatomotorB' } 53 0 44
{'Ctx_OP2_3_L' } 992 -45 -10 14 -3.9815 {'Cortex_SomatomotorB' } 48 1 45
{'Ctx_PF_R' } 3720 58 -28 27 -4.2128 {'Cortex_Ventral_AttentionA'} 22 1 46
{'Ctx_ProS_R' } 2104 28 -52 9 -3.4088 {'Cortex_Visual_Peripheral' } 15 0 42
{'Thal_Hythal' } 480 -10 0 -18 -3.1673 {'Diencephalon' } 17 0 41
Anatomical image: Adapted from the 7T high-resolution atlas of Keuken, Forstmann et al.
Keuken et al. (2014). Quantifying inter-individual anatomical variability in the subcortex using 7T structural MRI.
Forstmann, Birte U., Max C. Keuken, Andreas Schafer, Pierre-Louis Bazin, Anneke Alkemade, and Robert Turner. 2014. "Multi-Modal Ultra-High Resolution Structural 7-Tesla MRI Data Repository." Scientific Data 1 (December): 140050.
Setting up fmridisplay objects
48
sagittal montage: 649 voxels displayed, 2414 not displayed on these slices
coronal montage: 1002 voxels displayed, 2061 not displayed on these slices
axial montage: 1617 voxels displayed, 1446 not displayed on these slices
axial montage: 1558 voxels displayed, 1505 not displayed on these slices
----------------------------------------------
FDR-corrected: Regressor 1
----------------------------------------------
Image 1 FDR q < 0.050 threshold is 0.000486
Image 1
49 contig. clusters, sizes 1 to 71
Positive effect: 286 voxels, min p-value: 0.00000000
Negative effect: 10 voxels, min p-value: 0.00002587
Grouping contiguous voxels: 49 regions
____________________________________________________________________________________________________________________________________________
Positive Effects
Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index
____________________ ______ _________________ ______ _____________________________ _____________________ _____________________ ____________
{'Multiple regions'} 11056 0 17 -5 7.0345 {'Basal_ganglia' } 21 6 19
{'Cblm_CrusII_R' } 480 38 -69 -45 4.23 {'Cerebellum' } 92 0 1
{'Cblm_CrusII_L' } 720 -7 -89 -32 5.1441 {'Cerebellum' } 100 0 4
{'Cblm_V_L' } 768 -28 -38 -23 4.4743 {'Cerebellum' } 52 0 9
{'Cblm_Vermis_VI' } 480 0 -79 -14 4.3021 {'Cerebellum' } 73 0 15
{'Cblm_V_L' } 480 -21 -48 -14 3.9608 {'Cerebellum' } 72 0 16
{'Cblm_I_IV_R' } 480 7 -38 -9 4.7269 {'Cerebellum' } 82 0 18
{'Cblm_I_IV_L' } 368 -3 -41 5 4.7524 {'Cerebellum' } 9 0 29
{'Ctx_10d_R' } 3448 3 65 9 5.0434 {'Cortex_Default_ModeA' } 48 1 24
{'Ctx_v23ab_R' } 2872 3 -45 9 5.3391 {'Cortex_Default_ModeA' } 18 2 28
{'Ctx_10d_R' } 360 17 62 9 4.4654 {'Cortex_Default_ModeA' } 22 0 31
{'Ctx_s6_8_R' } 1456 28 24 50 4.7632 {'Cortex_Default_ModeA' } 34 1 39
{'Ctx_47m_R' } 4272 38 31 -18 5.9969 {'Cortex_Default_ModeB' } 26 2 8
{'Ctx_47l_L' } 288 -48 34 -9 4.2199 {'Cortex_Default_ModeB' } 83 0 20
{'Ctx_STSvp_R' } 360 69 -17 -5 4.4587 {'Cortex_Default_ModeB' } 20 0 21
{'Ctx_9p_L' } 576 -24 45 36 4.911 {'Cortex_Default_ModeB' } 51 0 38
{'Ctx_8BL_L' } 3696 -14 38 50 6.0215 {'Cortex_Default_ModeB' } 48 1 40
{'Ctx_PHA1_L' } 1296 -28 -31 -14 4.407 {'Cortex_Default_ModeC' } 20 0 13
{'Ctx_TE2p_L' } 512 -52 -34 -27 4.05 {'Cortex_Dorsal_AttentionA' } 39 0 7
{'Ctx_MT_R' } 480 48 -69 0 4.6327 {'Cortex_Dorsal_AttentionA' } 28 0 22
{'Ctx_PFop_L' } 1120 -65 -21 32 4.8802 {'Cortex_Dorsal_AttentionB' } 72 1 35
{'Ctx_FST_L' } 288 -48 -58 5 4.149 {'Cortex_Fronto_ParietalA' } 50 0 27
{'Ctx_TE1m_L' } 384 -69 -31 -23 4.5284 {'Cortex_Fronto_ParietalB' } 23 0 10
{'Ctx_PFm_R' } 960 48 -38 32 4.1841 {'Cortex_Fronto_ParietalB' } 25 0 36
{'Ctx_TE2a_R' } 288 55 -7 -41 4.1262 {'Cortex_Limbic' } 33 0 2
{'Ctx_TGv_L' } 512 -52 0 -41 4.4133 {'Cortex_Limbic' } 66 0 3
{'Ctx_TE2a_R' } 640 52 -21 -32 7.0329 {'Cortex_Limbic' } 57 0 5
{'Ctx_TGd_R' } 600 28 10 -32 4.4219 {'Cortex_Limbic' } 97 0 6
{'Ctx_TF_L' } 288 -38 -14 -23 5.1359 {'Cortex_Limbic' } 25 0 11
{'Ctx_pOFC_R' } 864 14 14 -23 4.0596 {'Cortex_Limbic' } 61 0 12
{'Ctx_10v_L' } 1176 -3 58 -14 4.8067 {'Cortex_Limbic' } 41 0 14
{'Ctx_10pp_L' } 1712 -10 69 -14 5.9465 {'Cortex_Limbic' } 49 1 17
{'Ctx_4_L' } 640 -28 -21 59 4.1054 {'Cortex_SomatomotorA' } 31 0 41
{'Ctx_6d_R' } 384 38 -3 63 3.9564 {'Cortex_SomatomotorA' } 83 0 43
{'Ctx_6r_R' } 360 45 10 14 5.2747 {'Cortex_Ventral_AttentionA'} 62 0 32
{'Ctx_PF_R' } 360 55 -28 32 4.0339 {'Cortex_Ventral_AttentionA'} 89 0 37
{'Ctx_6ma_R' } 1280 14 3 63 4.598 {'Cortex_Ventral_AttentionA'} 28 0 44
{'Ctx_V4t_L' } 288 -45 -79 5 4.0273 {'Cortex_Visual_Central' } 42 0 26
{'Ctx_V2_R' } 288 7 -79 23 4.3585 {'Cortex_Visual_Central' } 25 0 34
{'Ctx_V1_L' } 3352 -7 -96 9 5.9386 {'Cortex_Visual_Peripheral' } 28 0 25
{'Thal_Pulv' } 840 -17 -34 0 4.6048 {'Diencephalon' } 33 0 23
{'Multiple regions'} 9960 0 -21 14 4.9579 {'Diencephalon' } 22 9 30
{'Thal_Pulv' } 480 -3 -24 18 3.9773 {'Diencephalon' } 2 0 33
{'No label' } 360 17 -7 59 4.2722 {'No description' } 0 0 42
Negative Effects
Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index
____________________ ______ _________________ _______ ____________________________ _____________________ _____________________ ____________
{'Ctx_1_R' } 1272 69 -7 18 -5.0255 {'Cortex_SomatomotorA' } 28 0 47
{'Ctx_43_L' } 480 -58 3 14 -4.1029 {'Cortex_SomatomotorB' } 40 0 48
{'Ctx_A5_R' } 512 62 -24 5 -4.214 {'Cortex_Temporal_Parietal'} 50 0 46
{'Ctx_STV_R' } 384 48 -45 18 -4.4834 {'Cortex_Temporal_Parietal'} 71 0 49
{'CA1_Hippocampus_'} 480 41 -21 -14 -4.4023 {'Hippocampus' } 23 0 45
Anatomical image: Adapted from the 7T high-resolution atlas of Keuken, Forstmann et al.
Keuken et al. (2014). Quantifying inter-individual anatomical variability in the subcortex using 7T structural MRI.
Forstmann, Birte U., Max C. Keuken, Andreas Schafer, Pierre-Louis Bazin, Anneke Alkemade, and Robert Turner. 2014. "Multi-Modal Ultra-High Resolution Structural 7-Tesla MRI Data Repository." Scientific Data 1 (December): 140050.
Setting up fmridisplay objects
48
sagittal montage: 169 voxels displayed, 117 not displayed on these slices
coronal montage: 102 voxels displayed, 184 not displayed on these slices
axial montage: 152 voxels displayed, 134 not displayed on these slices
axial montage: 148 voxels displayed, 138 not displayed on these slices
Or, run publish_robust_regression_report( ) to create a report and save an HTML file in the robust0001 folder.
See the object-oriented help for more details on how to run robust analyses and how to extract and plot data from significant regions.
robfit_parcelwise
Voxel-wise results are often under-powered with moderate samples, and it may be advantageous to test averages across a series of parcels instead. If the parcel boundaries match the underlying boundaries of activated regions, this could substantially increase power.
- robfit_parcelwise extracts mean values for each of a series of regions -- with a default CANlab atlas parcellation that covers the brain -- and runs robust regressions on each parcel
- Corrections (i.e., False Discovery Rate) are performed across parcels rather than voxels
- For convenience in viewing results, parcel-by-parcel results are returned in voxel-wise maps where all values in the parcel have the same statistic values.
- robfit_parcelwise uses the object-oriented tools, so you need images in an fmri_data object to run it. We have created one above, in the variable data_obj. See canlab.github.io for more details.
OUT = robfit_parcelwise(data_obj);
Loading atlas: CANlab_combined_atlas_object_2018_2mm.mat
Initializing nodes to match regions.
Updating node response data.
Updating obj.connectivity.nodes.
Updating obj.connectivity.nodes.
Updating region averages.
Updating obj.connectivity.regions.
Updating obj.connectivity.regions.
__________________________________________________________________
Input image diagnostic information
__________________________________________________________________
Retained 6 components for mahalanobis distance
Expected 50% of points within 50% normal ellipsoid, found 56.67%
Expected 1.50 outside 95% ellipsoid, found 0
Potential outliers based on mahalanobis distance:
Bonferroni corrected: 0 images Cases
Uncorrected: 0 images Cases
Retained 3 components for mahalanobis distance
Expected 50% of points within 50% normal ellipsoid, found 26.67%
Expected 1.50 outside 95% ellipsoid, found 3
Potential outliers based on mahalanobis distance:
Bonferroni corrected: 0 images Cases
Uncorrected: 3 images Cases 11 12 16
Extracting from gray_matter_mask_sparse.img.
Extracting from canonical_white_matter.img.
Extracting from canonical_ventricles.img.
mean_gray_matter_coverage: 1
global_d_ventricles: -0.2514
global_logp_ventricles: 0.5741
global_d_wm: 0.0734
global_logp_wm: 0.1237
gm_explained_by_csf_pvalue: 4.3515e-08
r2_explained_by_csf: 0.6633
gm_l2norm_explained_by_csf_pvalue: 5.3318e-09
r2_l2norm_explained_by_csf: 0.7095
csf_to_gm_signal_ratio: 1.1197
gm_scale_inhom: 0.5110
csf_scale_inhom: 0.5282
warnings: {1×4 cell}
Warning: Gray-matter individual diffs significantly correlated with mean CSF value.
- Var explained (r^2) = 66.33%
Warning: Gray-matter scale (L2 norm) significantly correlated with mean CSF L2 norm.
- Var explained (r^2) = 70.95%
Warning: Strong non-zero signal in CSF relative to gray matter.
- Ratio is = 1.12
Warning: High individual diffs in image scaling estimated from CSF.
- CV is = 0.53
Number of unique values in dataset: 1 Bit rate: 0.00 bits
Warning: Number of unique values in dataset is low, indicating possible restriction of bit rate. For comparison, Int16 has 65,536 unique values
Number of unique values in dataset: 1 Bit rate: 0.00 bits
Warning: Number of unique values in dataset is low, indicating possible restriction of bit rate. For comparison, Int16 has 65,536 unique values
Number of unique values in dataset: 1 Bit rate: 0.00 bits
Warning: Number of unique values in dataset is low, indicating possible restriction of bit rate. For comparison, Int16 has 65,536 unique values
__________________________________________________________________
Parcel-wise robust regression
__________________________________________________________________
maxT minP sig05 sig005 sig001 sigFDR05 p_thr_FDR05 min_d_FDR05
______ __________ _____ ______ ______ ________ ___________ ___________
Intercept (Group avg) 7.6363 2.0279e-08 125 64 38 83 0.015578 0.42061
sig*: Significant parcels at given threshold (p < 0.05 two-tailed, q < 0.05 FDR, etc.)
p_thr_FDR05: P-value threshold to achieve q < 0.05 FDR-corrected for each predictor
min_d_FDR05: Min Cohen's d detectable at FDR q < 0.05
__________________________________________________________________
Tables of regions at q < 0.05 FDR
__________________________________________________________________
__________________________________________________________________
Predictor 1: Intercept (Group avg)
__________________________________________________________________
Grouping contiguous voxels: 10 regions
48
____________________________________________________________________________________________________________________________________________
Positive Effects
Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index
____________________ __________ _________________ ______ ___________________________ _____________________ _____________________ ____________
{'rn_L' } 408 -4 -20 -8 2.9126 {'Brainstem' } 100 1 4
{'Multiple regions'} 1.6813e+05 6 28 30 7.0345 {'Cortex_Default_ModeA' } 4 64 3
{'Ctx_TE1a_L' } 9440 -60 -16 -26 3.3937 {'Cortex_Default_ModeB' } 52 2 2
{'Ctx_8C_L' } 8 -42 10 42 4.7034 {'Cortex_Fronto_ParietalA'} 100 0 9
{'Ctx_8C_L' } 8 -48 10 42 4.7034 {'Cortex_Fronto_ParietalA'} 100 0 10
{'Multiple regions'} 49608 58 -44 6 7.0345 {'Cortex_Fronto_ParietalB'} 13 14 1
{'Ctx_PFm_L' } 6680 -50 -58 42 3.0366 {'Cortex_Fronto_ParietalB'} 100 1 7
{'Ctx_PFm_L' } 32 -56 -48 32 3.0366 {'Cortex_Fronto_ParietalB'} 100 0 8
{'Thal_VM' } 136 -10 -12 0 2.6526 {'Diencephalon' } 100 1 5
{'Thal_VM' } 136 10 -12 0 2.6526 {'Diencephalon' } 100 1 6
Negative Effects
No regions to display
______________________________________________________________
Outlier analysis
______________________________________________________________
global mean | global mean to var | spatial MAD | Missing values | 0 images
Retained 3 components for mahalanobis distance
Expected 50% of points within 50% normal ellipsoid, found 26.67%
Expected 1.50 outside 95% ellipsoid, found 3
Potential outliers based on mahalanobis distance:
Bonferroni corrected: 0 images Cases
Uncorrected: 3 images Cases 11 12 16
Retained 6 components for mahalanobis distance
Expected 50% of points within 50% normal ellipsoid, found 56.67%
Expected 1.50 outside 95% ellipsoid, found 0
Potential outliers based on mahalanobis distance:
Bonferroni corrected: 0 images Cases
Uncorrected: 0 images Cases
Mahalanobis (cov and corr, q<0.05 corrected):
0 images
Outlier_count Percentage
_____________ __________
global_mean 2 6.6667
global_mean_to_variance 1 3.3333
missing_values 0 0
rmssd_dvars 0 0
spatial_variability 2 6.6667
mahal_cov_uncor 3 10
mahal_cov_corrected 0 0
mahal_corr_uncor 0 0
mahal_corr_corrected 0 0
Overall_uncorrected 4 13.333
Overall_corrected 2 6.6667
SPM12: spm_check_registration (v7759) 19:02:19 - 09/08/2022
========================================================================
Display /Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/keuken_2014_enhanced_for_underlay.img,1
(all) /Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/keuken_2014_enhanced_for_underlay.img,1
/Users/f003vz1/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/keuken_2014_enhanced_for_underlay.img,1
Grouping contiguous voxels: 1 regions
Grouping contiguous voxels: 1 regions
Grouping contiguous voxels: 1 regions
When you run robfit_parcelwise, it generates additional plots of the mask, data, and weights by default.
It also saves a structure with t-statistic objects, results tables, region-class objects with extracted data, and more:
OUT
OUT =
betas: [489×1 double]
tscores: [489×1 double]
pvalues: [489×1 double]
nsubjects: [489×1 double]
maskvol: [489×1 double]
weights: [489×30 double]
dfe: [489×1 double]
datmatrix: [30×489 double]
pthr_FDRq05: 0.0156
pthr_FDRq05_descrip: 'Highest P-value significant at FDR q < 0.05; threshold at p <= pthr'
sig_q05: [489×1 logical]
cohens_d_fdr05: 0.4206
cohens_d_fdr05_descrip: 'Min Cohen's d detectable at FDR q < 0.05'
group_metrics: [1×1 struct]
individual_metrics: [1×1 struct]
global_gm_wm_csf_values: [30×3 double]
outliers_corr: [30×1 logical]
outliers_uncorr: [30×1 logical]
ind_quality_dat: [30×9 table]
t_obj: [1×1 statistic_image]
beta_obj: [1×1 fmri_data]
mask: [1×1 fmri_data]
nsubjects_obj: [1×1 fmri_data]
resultstable: [1×8 table]
region_objects: {[1×10 region]}
contrast_tables_FDR05: {[10×8 table]}
Parcel-by-parcel betas (slopes), t-values, P-values, image counts:
betas: [489×1 double]
tscores: [489×1 double]
pvalues: [489×1 double]
nsubjects: [489×1 double]
maskvol: [489×1 double]
Parcel-by-parcel robust regression weights, degrees of freedom, and data (parcel averages)
weights: [489×30 double]
dfe: [489×1 double]
datmatrix: [30×489 double]
False discovery rate threshold and significant parcels
pthr_FDRq05: 0.0156
pthr_FDRq05_descrip: 'Highest P-value significant at FDR q < 0.05; threshold at p <= pthr'
sig_q05: [489×1 logical]
cohens_d_fdr05: 0.4206
cohens_d_fdr05_descrip: 'Min Cohen's d detectable at FDR q < 0.05'
Quality control metrics and outliers for input images:
group_metrics: [1×1 struct]
individual_metrics: [1×1 struct]
global_gm_wm_csf_values: [30×3 double]
outliers_corr: [30×1 logical]
outliers_uncorr: [30×1 logical]
ind_quality_dat: [30×9 table]
t-maps, slopes, mask, and subject counts in voxel-wise fmri_data object format:
t_obj: [1×1 statistic_image]
beta_obj: [1×1 fmri_data]
mask: [1×1 fmri_data]
nsubjects_obj: [1×1 fmri_data]
resultstable: [1×8 table]
Region-class objects with extracted data:
region_objects: {[1×10 region]}
Table objects with significant effects (FDR):
contrast_tables_FDR05: {[10×8 table]}
CANlab object-oriented tools, or other toolboxes, can be used to visualize the results.
Let's view the results and do some post-run interactive visualization:
o2 = canlab_results_fmridisplay(OUT.t_obj, 'montagetype', 'full', 'noverbose');
Grouping contiguous voxels: 10 regions
48
disp(OUT.contrast_tables_FDR05{1})
Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index
____________________ __________ _________________ ______ ___________________________ _____________________ _____________________ ____________
{'Multiple regions'} 49608 58 -44 6 7.0345 {'Cortex_Fronto_ParietalB'} 13 14 1
{'Ctx_TE1a_L' } 9440 -60 -16 -26 3.3937 {'Cortex_Default_ModeB' } 52 2 2
{'Multiple regions'} 1.6813e+05 6 28 30 7.0345 {'Cortex_Default_ModeA' } 4 64 3
{'rn_L' } 408 -4 -20 -8 2.9126 {'Brainstem' } 100 1 4
{'Thal_VM' } 136 -10 -12 0 2.6526 {'Diencephalon' } 100 1 5
{'Thal_VM' } 136 10 -12 0 2.6526 {'Diencephalon' } 100 1 6
{'Ctx_PFm_L' } 6680 -50 -58 42 3.0366 {'Cortex_Fronto_ParietalB'} 100 1 7
{'Ctx_PFm_L' } 32 -56 -48 32 3.0366 {'Cortex_Fronto_ParietalB'} 100 0 8
{'Ctx_8C_L' } 8 -42 10 42 4.7034 {'Cortex_Fronto_ParietalA'} 100 0 9
{'Ctx_8C_L' } 8 -48 10 42 4.7034 {'Cortex_Fronto_ParietalA'} 100 0 10
Extract data from all significant regions to make a plot of individual subject post-hoc values. Note that this induces a voxel-selection bias ("circularity") that inflates the apparent effect sizes in these regions. But it can be useful for visualizing individual subjects' data and distributions. You can also use the same code for regions of interest defined a priori, which are free of this bias.
- Region-class object stored in OUT.region_objects{1} is saved in the "r" variable for convenience.
- The extract_data( ) method extracts the data from the original contrast images.
- Regions are already auto-labeled with a standard atlas, and names stored in the .shorttitle field.
r = extract_data(OUT.region_objects{1}, data_obj);
dat_matrix = cat(2, r.dat);
create_figure('individual subjects');
barplot_columns(dat_matrix, 'names', {r.shorttitle});
Col 1: Multiple regions Col 2: Ctx_TE1a_L Col 3: Multiple regions Col 4: rn_L Col 5: Thal_VM Col 6: Thal_VM Col 7: Ctx_PFm_L Col 8: Ctx_PFm_L Col 9: Ctx_8C_L Col 10: Ctx_8C_L
---------------------------------------------
Tests of column means against zero
---------------------------------------------
Name Mean_Value Std_Error T P Cohens_d
____________________ __________ _________ ______ __________ ________
{'Multiple regions'} 0.57633 0.14597 3.9483 0.00046023 0.72086
{'Ctx_TE1a_L' } 0.3994 0.11588 3.4467 0.001753 0.62928
{'Multiple regions'} 0.72529 0.19569 3.7063 0.00088252 0.67667
{'rn_L' } 0.28925 0.18699 1.5469 0.13274 0.28242
{'Thal_VM' } 0.27101 0.2135 1.2694 0.21439 0.23176
{'Thal_VM' } 0.33726 0.1672 2.0171 0.053022 0.36828
{'Ctx_PFm_L' } 0.39549 0.21476 1.8416 0.075786 0.33623
{'Ctx_PFm_L' } 0.52108 0.22808 2.2846 0.029838 0.41711
{'Ctx_8C_L' } 0.85501 0.23438 3.6479 0.001031 0.66602
{'Ctx_8C_L' } 1.2 0.2629 4.5643 8.4992e-05 0.83333
Adjust for global intensity and global components and re-run:
imgs2 = data_obj.rescale('l2norm_images');   % data_obj is the fmri_data object loaded above
OUT = robfit_parcelwise(imgs2);
Run adding global WM and CSF signal covariates:
OUT = robfit_parcelwise(imgs2, 'csf_wm_covs');
Run with global covariates, removing outliers based on Mahalanobis distance:
OUT = robfit_parcelwise(imgs2, 'csf_wm_covs', 'remove_outliers');