First-level model with CANlab regress( )

The CANlab fmri_data object has a regress( ) method that will run a basic GLM. It returns a data structure with a series of statistic_image objects containing regression parameter estimates ("beta images") and t maps with P values included. These are statistically thresholded for display by default, but can be re-thresholded after running regress( ).
This lab walks you through the steps.

Setting up a first-level analysis: Ingredients

We're setting up for the first-level analysis. We need five things (a brief sketch of these ingredients follows the list):
1) Brain data. The functional fMRI data, preprocessed and ready for analysis (usually, realigned, warped to MNI space, and smoothed)
2) Task information. Onset times, durations, and names for each event type (condition) of interest
3) Nuisance covariates. Any custom nuisance covariates we want to add (e.g., head motion / realignment parameters and other noise regressors)
4) Filter. An intended HP filter cutoff in sec
- SPM will add the high-pass filter when it does the analysis
5) Contrast weights for effects (comparisons across conditions) we care about
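Here is a small illustration of the kinds of variables these ingredients correspond to. All values below are hypothetical placeholders (including the variable hp_cutoff), just to show the expected formats; the real variables for this dataset are loaded from a prepared .mat file later in the lab.
TR = 2;                                      % repetition time in sec
onsets = {[0 30 60]' [15 45 75]'};           % one vector of event onset times (sec) per condition
names  = {'condition_A' 'condition_B'};      % one name per condition
R = randn(150, 6);                           % nuisance covariates, e.g., 6 realignment parameters
hp_cutoff = 128;                             % high-pass filter cutoff in sec (applied by SPM)
C.names   = {'A_minus_B'};                   % contrast names
C.weights = [1 -1]';                         % one column of contrast weights per contrast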

Find and examine the brain data

Navigate to files

It's helpful to start in a folder with data in it, so you don't have to navigate extensively through the SPM file selection browser.
pwd % where am i now? (in terms of working directory, at least)
ans = '/Users/torwager/Dropbox (Dartmouth College)/COURSES/Courses_Dartmouth/2021_3_Spring_fMRI_Class/Labs_and_Lectures_for_Instructors/Labs_and_assignments/Lab_5/Pinel_data_sample_subject_INSTRUCTOR'
Use cd( ) to go to your sample subject directory. Drag and drop, or copy and paste the path from Terminal.
cd('/Users/torwager/Dropbox (Dartmouth College)/COURSES/Courses_Dartmouth/2021_3_Spring_fMRI_Class/Labs_and_Lectures_for_Instructors/Labs_and_assignments/Lab_5/Pinel_data_sample_subject_INSTRUCTOR')
ls % list files here
SPM_onsets_and_nuisance_regressors.mat c1sub-sid001567_acq-MPRAGE_T1w.nii c2sub-sid001567_acq-MPRAGE_T1w.nii c3sub-sid001567_acq-MPRAGE_T1w.nii c4sub-sid001567_acq-MPRAGE_T1w.nii c5sub-sid001567_acq-MPRAGE_T1w.nii canlab_robust_reg_model1 mwc1sub-sid001567_acq-MPRAGE_T1w.nii mwc2sub-sid001567_acq-MPRAGE_T1w.nii mwc3sub-sid001567_acq-MPRAGE_T1w.nii rp_sub-sid001567_task-pinel_acq-s1p2_run-03_bold.txt rsub-sid001567_task-pinel_acq-s1p2_run-03_bold.mat rsub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii rsub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii.gz rsub-sid001567_task-pinel_acq-s1p2_run-03_bold_reorient.mat spm_model1 sub-sid001567_acq-MPRAGE_T1w.nii sub-sid001567_acq-MPRAGE_T1w.nii.gz sub-sid001567_acq-MPRAGE_T1w_reorient.mat sub-sid001567_acq-MPRAGE_T1w_seg8.mat sub-sid001567_task-pinel_acq-s1p2_run-03_bold.json sub-sid001567_task-pinel_acq-s1p2_run-03_bold.mat sub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii sub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii.gz sub-sid001567_task-pinel_acq-s1p2_run-03_events.tsv swrsub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii wrsub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii wrsub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii.gz y_sub-sid001567_acq-MPRAGE_T1w.nii

Find the functional run and examine the data

It's always important to look at your data! Check: Do the brains look like brains? Is the range of data values what you expected? Are there crazy outliers? Does the image space match the template space? Are the images relatively homogeneous, or do some appear to be really different (e.g., not on the same scale)?
We'll use CANlab object-oriented tools, specifically the fmri_data object, to load the preprocessed data and make a summary plot. Here, we want "swr*" images, which are Smoothed, Warped, and Realigned...and hopefully thus ready for analysis.
preprocessed_image_name = 'swrsub-sid001567_task-pinel_acq-s1p2_run-03_bold.nii';
% Load the images into an fmri_data object
dat = fmri_data(preprocessed_image_name);
Using default mask: /Users/torwager/Documents/GitHub/CanlabCore/CanlabCore/canlab_canonical_brains/Canonical_brains_surfaces/brainmask_canlab.nii loading mask. mapping volumes. checking that dimensions and voxel sizes of volumes are the same. Pre-allocating data array. Needed: 145720800 bytes Loading image number: 150 Elapsed time is 2.552240 seconds. Image names entered, but fullpath attribute is empty. Getting path info. .fullpath should have image name for each image column in .dat Attempting to expand image filenames in case image list is unexpanded 4-D images Number of unique values in dataset: 493 Bit rate: 8.95 bits Warning: Number of unique values in dataset is low, indicating possible restriction of bit rate. For comparison, Int16 has 65,536 unique values
% Plot the images. This will also flag potential outliers, but these data have been pre-prepped here.
plot(dat);
Calculating mahalanobis distances to identify extreme-valued images ...based on union of corr...Retained 2 components for mahalanobis distance Expected 50% of points within 50% normal ellipsoid, found 22.67% Expected 7.50 outside 95% ellipsoid, found 6 Potential outliers based on mahalanobis distance: Bonferroni corrected: 1 images Cases 1 Uncorrected: 6 images Cases 1 2 3 4 5 6 ...and cov...Retained 2 components for mahalanobis distance Expected 50% of points within 50% normal ellipsoid, found 18.67% Expected 7.50 outside 95% ellipsoid, found 6 Potential outliers based on mahalanobis distance: Bonferroni corrected: 1 images Cases 1 Uncorrected: 6 images Cases 1 2 3 4 5 6 done. Outliers: Outliers after p-value correction: Image numbers: 1 Image numbers, uncorrected: 1 2 3 4 5 6
Grouping contiguous voxels: 1 regions
Grouping contiguous voxels: 1 regions
Grouping contiguous voxels: 1 regions
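You can also check the range and distribution of data values directly. Here is a brief sketch using two other fmri_data methods (assuming descriptives( ) and histogram( ) are available in your CanlabCore version):
% Summary statistics for the data values
descriptives(dat);
% Histogram of voxel values across all images
histogram(dat);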

Load setup file with other variables we need

The file SPM_onsets_and_nuisance_regressors should have everything we need to build the design in SPM! It's always a good idea to save prepared variables before they are entered into SPM, etc.
This .mat file was created by the first_level_spm12 lab, so go back to that if needed. It has the key variables we'll use below: onsets (event onset times per condition), names (condition names), R (nuisance covariate matrix), and C (contrast weights and names):
load SPM_onsets_and_nuisance_regressors
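To double-check what the file contains, you can list its variables with standard MATLAB:
whos('-file', 'SPM_onsets_and_nuisance_regressors.mat') % list variables stored in the .mat file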

Construct design matrix

Now, we've extracted the information in a format we can use to construct a design matrix. Let's try it.
Note that we need to know the TR for this! It's 2 sec. In BIDS formatted data, this can be found in the ".json sidecar" file, in the "RepetitionTime" field. We also need the scan length in sec, which in this case is 300 sec or 5 min.
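As a sketch, the TR and scan length can also be obtained programmatically rather than hard-coded (using base MATLAB; the filename matches the .json sidecar in this folder):
% Read the repetition time from the BIDS .json sidecar
meta = jsondecode(fileread('sub-sid001567_task-pinel_acq-s1p2_run-03_bold.json'));
TR = meta.RepetitionTime;          % 2 sec
% Scan length = number of volumes x TR
n_vols = 150;                      % this run has 150 volumes
scan_length_in_sec = n_vols * TR;  % 300 sec = 5 min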
TR = 2;
X = onsets2fmridesign(onsets, TR, 300);
% let's plot it
plotDesign(onsets, [], TR);
The onsets2fmridesign( ) call returns the design matrix, X.
We can look at the Variance Inflation Factors (VIFs) to see whether there are multicollinearity issues. We'll use the 'plot' option and save the handle to this figure so we can re-activate it and add things later:
fig_han = create_figure('vifs'); % Get handle to current figure
vifs = getvif(X, 0, 'plot');
Variance inflation: 1 (black line) = minimum possible (best) Successive lines indicate doublings of variance inflation factor. Red boxes have extremely high VIFs, perfect multicolinearity

Add nuisance covariates and check VIFs again

Finally, let's add the nuisance covariates (R) to the design, re-calculate the VIFs, and add the VIFs for the task regressors to our original plot:
X3 = [X R];
figure(fig_han) % re-activate the VIF figure
vifs3 = getvif(X3);
han = plot(vifs3(1:10), 'bo', 'MarkerSize', 8, 'MarkerFaceColor', [.5 .5 1]);
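As a quick check, you could also flag any task regressors whose VIFs exceed a rule-of-thumb cutoff (the regress( ) method warns when VIFs are above 4):
% Indices of task regressors (first 10 columns) with VIF > 4
high_vif_regressors = find(vifs3(1:10) > 4)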

Attach design matrix to dat object

The CANlab regress( ) method will add an intercept automatically, so we remove the intercept column from our design matrix before attaching it to the data object:
X3 = intercept(X3, 'remove');
dat.X = X3;
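It's worth taking one more look at the full design matrix (task plus nuisance regressors) before fitting the model; a simple sketch using base MATLAB:
% Visualize the full design matrix: rows = time points (TRs), columns = regressors
figure; imagesc(dat.X); colormap('gray'); colorbar
xlabel('Regressors (columns of dat.X)'); ylabel('Time (TRs)')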

Pad C with zeros so that we have an entry in C for each regressor

C.weights = [C.weights; zeros(size(X3, 2) - size(C.weights, 1), size(C.weights, 2))];
% add one for the intercept, which will be added automatically in regress().
C.weights(end+1, :) = 0;
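A quick sanity check, so the contrast matrix lines up with the final design: C.weights should now have one row for every column of X3, plus one for the intercept that regress( ) appends.
% Check: rows of C.weights = columns of X3 + 1 (intercept added by regress)
assert(size(C.weights, 1) == size(X3, 2) + 1, 'Contrast weights do not match design matrix size')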

Run regression

We will add some optional inputs to save names in the output structure.
regression_results_ols = regress(dat, 'variable_names', names, ...
'analysis_name', 'Pinel localizer 1st-level GLM', ...
'C', C.weights, 'contrast_names', C.names);
Analysis: Pinel localizer 1st-level GLM ---------------------------------- Design matrix warnings: ---------------------------------- Number of unique values in dataset is low (8.945444e+00, 493.00 bits), indicating possible restriction of bit rate. For comparison, Int16 has 65,536 unique values No intercept detected, adding intercept to last column of design matrix Warning: Predictors are not centered -- intercept is not interpretable as stats for average subject Warning!!! Design multicolinearity. Some regressors have variance inflation factors > 4. Check out.diagnostics Warning!!! Some observations have high leverage values relative to others, regression may be unstable. abs(z(leverage)) > 3 Warning!!! Too few variable names entered, less than size(X, 2). Names may be inaccurate. ______________________________________________________ Running regression: 242868 voxels. Design: 150 obs, 41 regressors, intercept is last Predicting exogenous variable(s) in dat.X using brain data as predictors, mass univariate Running in OLS Mode Model run in 0 minutes and 4.27 seconds Summary of conditions and contrasts ______________________________________________________ Audio_vs_video Checkerboard_vs_other Right_vs_left_hand Computation_vs_Motor Language_vs_Motor Horiz_vs_Vert_checkerboard ______________ _____________________ __________________ ____________________ _________________ __________________________ audio_computation 1 0 0 2 0 0 audio_left_hand 1 0 -1 -1 -1 0 audio_right_hand 1 0 1 -1 -1 0 audio_sentence 1 0 0 0 2 0 horizontal_checkerboard 0 1 0 0 0 1 vertical_checkerboard 0 1 0 0 0 -1 video_computation -1 0 0 2 0 0 video_left_hand -1 0 -1 -1 -1 0 video_right_hand -1 0 1 -1 -1 0 video_sentence -1 0 0 0 2 0 Intercept 0 0 0 0 0 0 R12 0 0 0 0 0 0 R13 0 0 0 0 0 0 R14 0 0 0 0 0 0 R15 0 0 0 0 0 0 R16 0 0 0 0 0 0 R17 0 0 0 0 0 0 R18 0 0 0 0 0 0 R19 0 0 0 0 0 0 R20 0 0 0 0 0 0 R21 0 0 0 0 0 0 R22 0 0 0 0 0 0 R23 0 0 0 0 0 0 R24 0 0 0 0 0 0 R25 0 0 0 0 0 0 R26 0 0 0 0 0 0 R27 0 0 0 0 0 0 R28 0 0 0 0 0 0 R29 0 0 0 0 0 0 R30 0 0 0 0 0 0 R31 0 0 0 0 0 0 R32 0 0 0 0 0 0 R33 0 0 0 0 0 0 R34 0 0 0 0 0 0 R35 0 0 0 0 0 0 R36 0 0 0 0 0 0 R37 0 0 0 0 0 0 R38 0 0 0 0 0 0 R39 0 0 0 0 0 0 R40 0 0 0 0 0 0 R41 0 0 0 0 0 0 Creating beta and t images, thresholding t images ______________________________________________________ Thresholding t images at 0.001000 uncorrected Image 1 184 contig. clusters, sizes 1 to 5131 Positive effect: 10428 voxels, min p-value: 0.00000000 Negative effect: 1792 voxels, min p-value: 0.00000048 Image 2 184 contig. clusters, sizes 1 to 5131 Positive effect: 28362 voxels, min p-value: 0.00000000 Negative effect: 662 voxels, min p-value: 0.00000005 Image 3 184 contig. clusters, sizes 1 to 5131 Positive effect: 34720 voxels, min p-value: 0.00000000 Negative effect: 200 voxels, min p-value: 0.00000037 Image 4 184 contig. clusters, sizes 1 to 5131 Positive effect: 11999 voxels, min p-value: 0.00000000 Negative effect: 11435 voxels, min p-value: 0.00000000 Image 5 184 contig. clusters, sizes 1 to 5131 Positive effect: 907 voxels, min p-value: 0.00000000 Negative effect: 328 voxels, min p-value: 0.00000838 Image 6 184 contig. clusters, sizes 1 to 5131 Positive effect: 7668 voxels, min p-value: 0.00000000 Negative effect: 3360 voxels, min p-value: 0.00000000 Image 7 184 contig. clusters, sizes 1 to 5131 Positive effect: 11455 voxels, min p-value: 0.00000000 Negative effect: 747 voxels, min p-value: 0.00000072 Image 8 184 contig. 
clusters, sizes 1 to 5131 Positive effect: 47265 voxels, min p-value: 0.00000000 Negative effect: 590 voxels, min p-value: 0.00000001 Image 9 184 contig. clusters, sizes 1 to 5131 Positive effect: 18133 voxels, min p-value: 0.00000000 Negative effect: 236 voxels, min p-value: 0.00000113 Image 10 184 contig. clusters, sizes 1 to 5131 Positive effect: 12746 voxels, min p-value: 0.00000000 Negative effect: 1761 voxels, min p-value: 0.00000118 Image 11 184 contig. clusters, sizes 1 to 5131 Positive effect: 58209 voxels, min p-value: 0.00000000 Negative effect: 86767 voxels, min p-value: 0.00000000 Image 12 184 contig. clusters, sizes 1 to 5131 Positive effect: 563 voxels, min p-value: 0.00000000 Negative effect: 1543 voxels, min p-value: 0.00000000 Image 13 184 contig. clusters, sizes 1 to 5131 Positive effect: 1025 voxels, min p-value: 0.00000002 Negative effect: 396 voxels, min p-value: 0.00000000 Image 14 184 contig. clusters, sizes 1 to 5131 Positive effect: 2459 voxels, min p-value: 0.00000000 Negative effect: 847 voxels, min p-value: 0.00000000 Image 15 184 contig. clusters, sizes 1 to 5131 Positive effect: 608 voxels, min p-value: 0.00000000 Negative effect: 337 voxels, min p-value: 0.00000000 Image 16 184 contig. clusters, sizes 1 to 5131 Positive effect: 536 voxels, min p-value: 0.00000000 Negative effect: 198 voxels, min p-value: 0.00000000 Image 17 184 contig. clusters, sizes 1 to 5131 Positive effect: 1325 voxels, min p-value: 0.00000009 Negative effect: 4050 voxels, min p-value: 0.00000000 Image 18 184 contig. clusters, sizes 1 to 5131 Positive effect: 3510 voxels, min p-value: 0.00000000 Negative effect: 8306 voxels, min p-value: 0.00000000 Image 19 184 contig. clusters, sizes 1 to 5131 Positive effect: 1319 voxels, min p-value: 0.00000000 Negative effect: 20418 voxels, min p-value: 0.00000000 Image 20 184 contig. clusters, sizes 1 to 5131 Positive effect: 5788 voxels, min p-value: 0.00000001 Negative effect: 14286 voxels, min p-value: 0.00000000 Image 21 184 contig. clusters, sizes 1 to 5131 Positive effect: 3749 voxels, min p-value: 0.00000000 Negative effect: 7085 voxels, min p-value: 0.00000000 Image 22 184 contig. clusters, sizes 1 to 5131 Positive effect: 7071 voxels, min p-value: 0.00000000 Negative effect: 5293 voxels, min p-value: 0.00000000 Image 23 184 contig. clusters, sizes 1 to 5131 Positive effect: 336 voxels, min p-value: 0.00000480 Negative effect: 100 voxels, min p-value: 0.00003304 Image 24 184 contig. clusters, sizes 1 to 5131 Positive effect: 3264 voxels, min p-value: 0.00000001 Negative effect: 2968 voxels, min p-value: 0.00000002 Image 25 184 contig. clusters, sizes 1 to 5131 Positive effect: 2242 voxels, min p-value: 0.00000012 Negative effect: 1021 voxels, min p-value: 0.00000057 Image 26 184 contig. clusters, sizes 1 to 5131 Positive effect: 153 voxels, min p-value: 0.00002752 Negative effect: 1076 voxels, min p-value: 0.00000404 Image 27 184 contig. clusters, sizes 1 to 5131 Positive effect: 2429 voxels, min p-value: 0.00000012 Negative effect: 29 voxels, min p-value: 0.00011542 Image 28 184 contig. clusters, sizes 1 to 5131 Positive effect: 230 voxels, min p-value: 0.00000644 Negative effect: 333 voxels, min p-value: 0.00000011 Image 29 184 contig. clusters, sizes 1 to 5131 Positive effect: 4701 voxels, min p-value: 0.00000000 Negative effect: 15204 voxels, min p-value: 0.00000000 Image 30 184 contig. clusters, sizes 1 to 5131 Positive effect: 22 voxels, min p-value: 0.00003713 Negative effect: 683 voxels, min p-value: 0.00000025 Image 31 184 contig. 
clusters, sizes 1 to 5131 Positive effect: 3000 voxels, min p-value: 0.00000000 Negative effect: 1602 voxels, min p-value: 0.00000000 Image 32 184 contig. clusters, sizes 1 to 5131 Positive effect: 9873 voxels, min p-value: 0.00000000 Negative effect: 2060 voxels, min p-value: 0.00000000 Image 33 184 contig. clusters, sizes 1 to 5131 Positive effect: 1716 voxels, min p-value: 0.00000000 Negative effect: 27518 voxels, min p-value: 0.00000000 Image 34 184 contig. clusters, sizes 1 to 5131 Positive effect: 9486 voxels, min p-value: 0.00000000 Negative effect: 995 voxels, min p-value: 0.00000000 Image 35 184 contig. clusters, sizes 1 to 5131 Positive effect: 9 voxels, min p-value: 0.00045142 Negative effect: 161 voxels, min p-value: 0.00003017 Image 36 184 contig. clusters, sizes 1 to 5131 Positive effect: 56 voxels, min p-value: 0.00000533 Negative effect: 76 voxels, min p-value: 0.00000209 Image 37 184 contig. clusters, sizes 1 to 5131 Positive effect: 235 voxels, min p-value: 0.00000218 Negative effect: 74 voxels, min p-value: 0.00000000 Image 38 184 contig. clusters, sizes 1 to 5131 Positive effect: 5 voxels, min p-value: 0.00000362 Negative effect: 1548 voxels, min p-value: 0.00000001 Image 39 184 contig. clusters, sizes 1 to 5131 Positive effect: 432 voxels, min p-value: 0.00000084 Negative effect: 92 voxels, min p-value: 0.00000100 Image 40 184 contig. clusters, sizes 1 to 5131 Positive effect: 243 voxels, min p-value: 0.00000000 Negative effect: 116 voxels, min p-value: 0.00000000 Image 41 184 contig. clusters, sizes 1 to 5131 Positive effect: 227636 voxels, min p-value: 0.00000000 Negative effect: 0 voxels, min p-value: 0.02566992 Creating contrast and t images and thresholding t images ______________________________________________________ Thresholding t images at 0.001000 uncorrected Image 1 156 contig. clusters, sizes 1 to 1427 Positive effect: 1934 voxels, min p-value: 0.00000000 Negative effect: 3004 voxels, min p-value: 0.00000000 Image 2 156 contig. clusters, sizes 1 to 1427 Positive effect: 5752 voxels, min p-value: 0.00000000 Negative effect: 1456 voxels, min p-value: 0.00000103 Image 3 156 contig. clusters, sizes 1 to 1427 Positive effect: 349 voxels, min p-value: 0.00000776 Negative effect: 412 voxels, min p-value: 0.00000089 Image 4 156 contig. clusters, sizes 1 to 1427 Positive effect: 661 voxels, min p-value: 0.00000004 Negative effect: 37953 voxels, min p-value: 0.00000000 Image 5 156 contig. clusters, sizes 1 to 1427 Positive effect: 400 voxels, min p-value: 0.00000004 Negative effect: 37280 voxels, min p-value: 0.00000000 Image 6 156 contig. clusters, sizes 1 to 1427 Positive effect: 490 voxels, min p-value: 0.00000000 Negative effect: 3016 voxels, min p-value: 0.00000000


Show some results for selected individual conditions

% Thresholded t map for the first condition
t = get_wh_image(regression_results_ols.t, 1);
create_figure('montage'); axis off
display_obj = montage(t);
Setting up fmridisplay objects
sagittal montage: 56 voxels displayed, 12164 not displayed on these slices
sagittal montage: 136 voxels displayed, 12084 not displayed on these slices
sagittal montage: 56 voxels displayed, 12164 not displayed on these slices
axial montage: 2007 voxels displayed, 10213 not displayed on these slices
axial montage: 2080 voxels displayed, 10140 not displayed on these slices
figure; surface(t);
ans =
Light with properties: Color: [1 1 1] Style: 'infinite' Position: [1.4142 -8.6596e-17 0] Visible: on Show all properties
Anatomical image: Adapted from the 7T high-resolution atlas of Keuken, Forstmann et al. Keuken et al. (2014). Quantifying inter-individual anatomical variability in the subcortex using 7T structural MRI. Forstmann, Birte U., Max C. Keuken, Andreas Schafer, Pierre-Louis Bazin, Anneke Alkemade, and Robert Turner. 2014. "Multi-Modal Ultra-High Resolution Structural 7-Tesla MRI Data Repository." Scientific Data 1 (December): 140050.
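As noted above, the t maps are thresholded at p < .001 uncorrected by default, but you can re-threshold any statistic_image after running regress( ); for example, at q < .05 FDR-corrected:
% Re-threshold the first condition's t map at FDR q < .05 and display it
t_fdr = threshold(get_wh_image(regression_results_ols.t, 1), .05, 'fdr');
orthviews(t_fdr);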
% Montages for thresholded t images, condition [2 4 6 8]
t = get_wh_image(regression_results_ols.t, [2 4 6 8]);
create_figure('montage2'); axis off
display_obj = montage(t);
Setting up fmridisplay objects
sagittal montage: 566 voxels displayed, 28458 not displayed on these slices
axial montage: 6274 voxels displayed, 22750 not displayed on these slices
sagittal montage: 804 voxels displayed, 22630 not displayed on these slices
axial montage: 5242 voxels displayed, 18192 not displayed on these slices
sagittal montage: 19 voxels displayed, 11009 not displayed on these slices
axial montage: 2608 voxels displayed, 8420 not displayed on these slices
sagittal montage: 1588 voxels displayed, 46267 not displayed on these slices
axial montage: 10510 voxels displayed, 37345 not displayed on these slices

Show results for contrasts

t = get_wh_image(regression_results_ols.con_t, 1:3);
create_figure('montage3'); axis off
display_obj = montage(t);
Setting up fmridisplay objects
sagittal montage: 2 voxels displayed, 4936 not displayed on these slices
axial montage: 1205 voxels displayed, 3733 not displayed on these slices
sagittal montage: 43 voxels displayed, 7165 not displayed on these slices
axial montage: 1777 voxels displayed, 5431 not displayed on these slices
sagittal montage: 27 voxels displayed, 734 not displayed on these slices
axial montage: 158 voxels displayed, 603 not displayed on these slices
t = get_wh_image(regression_results_ols.con_t, 4:6);
create_figure('montage4'); axis off
display_obj = montage(t);
Setting up fmridisplay objects
sagittal montage: 1164 voxels displayed, 37450 not displayed on these slices
axial montage: 8218 voxels displayed, 30396 not displayed on these slices
sagittal montage: 1304 voxels displayed, 36376 not displayed on these slices
axial montage: 8362 voxels displayed, 29318 not displayed on these slices
sagittal montage: 74 voxels displayed, 3432 not displayed on these slices
axial montage: 863 voxels displayed, 2643 not displayed on these slices
% Create and print a table for the first contrast
% at p < 0.001 and k = 50 contiguous voxels
t = get_wh_image(regression_results_ols.con_t, 1);
t = threshold(t, .001, 'unc', 'k', 50);
Image 1 14 contig. clusters, sizes 51 to 1427 Positive effect: 1582 voxels, min p-value: 0.00000000 Negative effect: 2786 voxels, min p-value: 0.00000000
table(t);
Grouping contiguous voxels: 14 regions ____________________________________________________________________________________________________________________________________________ Positive Effects Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index ________________ ______ _________________ ______ _____________________________ _____________________ _____________________ ____________ {'Cblm_CrusI_R'} 13312 28 -70 -20 6.8615 {'Cerebellum' } 40 2 1 {'Cblm_VI_L' } 3232 -28 -62 -20 5.022 {'Cerebellum' } 68 0 2 {'Ctx_7PC_R' } 1672 36 -40 64 4.8256 {'Cortex_Dorsal_AttentionA' } 39 0 8 {'Ctx_6a_L' } 1432 -22 0 52 4.2571 {'Cortex_Dorsal_AttentionB' } 69 1 6 {'Ctx_6ma_R' } 1696 24 4 60 4.5423 {'Cortex_Ventral_AttentionA'} 51 0 7 {'Ctx_6ma_R' } 1232 12 8 74 4.2662 {'Cortex_Ventral_AttentionA'} 28 0 9 {'Ctx_9_46d_R' } 1096 32 52 30 4.2008 {'Cortex_Ventral_AttentionB'} 27 0 5 {'Ctx_LO2_L' } 1384 -44 -76 -12 4.1893 {'Cortex_Visual_Central' } 39 2 3 {'Ctx_V2_L' } 1240 -8 -84 -8 4.9863 {'Cortex_Visual_Peripheral' } 56 0 4 Negative Effects Region Volume XYZ maxZ modal_label_descriptions Perc_covered_by_label Atlas_regions_covered region_index ____________________ ______ _________________ _______ ____________________________ _____________________ _____________________ ____________ {'Ctx_8BL_R' } 2344 18 40 50 -4.3515 {'Cortex_Default_ModeB' } 38 0 14 {'Ctx_TE1p_R' } 1912 66 -34 -2 -5.2823 {'Cortex_Fronto_ParietalB' } 28 0 12 {'Ctx_PFm_R' } 1888 58 -50 28 -4.1509 {'Cortex_Fronto_ParietalB' } 57 0 13 {'Multiple regions'} 18848 62 -4 -10 -9.6404 {'Cortex_Temporal_Parietal'} 11 9 10 {'Multiple regions'} 14864 -62 -6 -8 -9.1316 {'Cortex_Temporal_Parietal'} 19 8 11 ____________________________________________________________________________________________________________________________________________ Regions labeled by reference atlas CANlab_2018_combined Volume: Volume of contiguous region in cubic mm. MaxZ: Signed max over T Atlas_regions_covered: Number of reference atlas regions covered at least 25% by the region. This relates to whether the region covers multiple reference atlas regions Region: Best reference atlas label, defined as reference region with highest number of in-region voxels. Regions covering >25% of >5 regions labeled as "Multiple regions" Perc_covered_by_label: Percentage of the region covered by the label. Ref_region_perc: Percentage of the label region within the target region. modal_atlas_index: Index number of label region in reference atlas all_regions_covered: All regions covered >5% in descending order of importance For example, if a region is labeled 'TE1a' and Perc_covered_by_label = 8, Ref_region_perc = 38, and Atlas_regions_covered = 17, this means that 8% of the region's voxels are labeled TE1a, which is the highest percentage among reference label regions. 38% of the region TE1a is covered by the region. However, the region covers at least 25% of 17 distinct labeled reference regions. References for atlases: Beliveau, Vincent, Claus Svarer, Vibe G. Frokjaer, Gitte M. Knudsen, Douglas N. Greve, and Patrick M. Fisher. 2015. “Functional Connectivity of the Dorsal and Median Raphe Nuclei at Rest.” NeuroImage 116 (August): 187–95. Bär, Karl-Jürgen, Feliberto de la Cruz, Andy Schumann, Stefanie Koehler, Heinrich Sauer, Hugo Critchley, and Gerd Wagner. 2016. ?Functional Connectivity and Network Analysis of Midbrain and Brainstem Nuclei.? NeuroImage 134 (July):53?63. Diedrichsen, Jörn, Joshua H. 
Balsters, Jonathan Flavell, Emma Cussans, and Narender Ramnani. 2009. A Probabilistic MR Atlas of the Human Cerebellum. NeuroImage 46 (1): 39?46. Fairhurst, Merle, Katja Wiech, Paul Dunckley, and Irene Tracey. 2007. ?Anticipatory Brainstem Activity Predicts Neural Processing of Pain in Humans.? Pain 128 (1-2):101?10. Fan 2016 Cerebral Cortex; doi:10.1093/cercor/bhw157 Glasser, Matthew F., Timothy S. Coalson, Emma C. Robinson, Carl D. Hacker, John Harwell, Essa Yacoub, Kamil Ugurbil, et al. 2016. A Multi-Modal Parcellation of Human Cerebral Cortex. Nature 536 (7615): 171?78. Keren, Noam I., Carl T. Lozar, Kelly C. Harris, Paul S. Morgan, and Mark A. Eckert. 2009. “In Vivo Mapping of the Human Locus Coeruleus.” NeuroImage 47 (4): 1261–67. Keuken, M. C., P-L Bazin, L. Crown, J. Hootsmans, A. Laufer, C. Müller-Axt, R. Sier, et al. 2014. “Quantifying Inter-Individual Anatomical Variability in the Subcortex Using 7 T Structural MRI.” NeuroImage 94 (July): 40–46. Krauth A, Blanc R, Poveda A, Jeanmonod D, Morel A, Székely G. (2010) A mean three-dimensional atlas of the human thalamus: generation from multiple histological data. Neuroimage. 2010 Feb 1;49(3):2053-62. Jakab A, Blanc R, Berényi EL, Székely G. (2012) Generation of Individualized Thalamus Target Maps by Using Statistical Shape Models and Thalamocortical Tractography. AJNR Am J Neuroradiol. 33: 2110-2116, doi: 10.3174/ajnr.A3140 Nash, Paul G., Vaughan G. Macefield, Iven J. Klineberg, Greg M. Murray, and Luke A. Henderson. 2009. ?Differential Activation of the Human Trigeminal Nuclear Complex by Noxious and Non-Noxious Orofacial Stimulation.? Human Brain Mapping 30 (11):3772?82. Pauli 2018 Bioarxiv: CIT168 from Human Connectome Project data Pauli, Wolfgang M., Amanda N. Nili, and J. Michael Tyszka. 2018. ?A High-Resolution Probabilistic in Vivo Atlas of Human Subcortical Brain Nuclei.? Scientific Data 5 (April): 180063. Pauli, Wolfgang M., Randall C. O?Reilly, Tal Yarkoni, and Tor D. Wager. 2016. ?Regional Specialization within the Human Striatum for Diverse Psychological Functions.? Proceedings of the National Academy of Sciences of the United States of America 113 (7): 1907?12. Sclocco, Roberta, Florian Beissner, Gaelle Desbordes, Jonathan R. Polimeni, Lawrence L. Wald, Norman W. Kettner, Jieun Kim, et al. 2016. ?Neuroimaging Brainstem Circuitry Supporting Cardiovagal Response to Pain: A Combined Heart Rate Variability/ultrahigh-Field (7 T) Functional Magnetic Resonance Imaging Study.? Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 374 (2067). rsta.royalsocietypublishing.org. https://doi.org/10.1098/rsta.2015.0189. Shen, X., F. Tokoglu, X. Papademetris, and R. T. Constable. 2013. “Groupwise Whole-Brain Parcellation from Resting-State fMRI Data for Network Node Identification.” NeuroImage 82 (November): 403–15. Zambreanu, L., R. G. Wise, J. C. W. Brooks, G. D. Iannetti, and I. Tracey. 2005. ?A Role for the Brainstem in Central Sensitisation in Humans. Evidence from Functional Magnetic Resonance Imaging.? Pain 114 (3):397?407. Note: Region object r(i).title contains full list of reference atlas regions covered by each cluster. ____________________________________________________________________________________________________________________________________________

All the CANlab display tools are available

Here is a diagram showing some examples:

Save the results structure for later use

Now we can save the results. You should do this in the Matlab Command Window, for your particular path, as it depends on where your local files are stored. Be careful, because save may overwrite existing files!
cd('/Users/torwager/Dropbox (Dartmouth College)/COURSES/Courses_Dartmouth/2021_3_Spring_fMRI_Class/Labs_and_Lectures_for_Instructors/Labs_and_assignments/Lab_5/Pinel_data_sample_subject_INSTRUCTOR/')
save canlab_1st_level_ols_regression_results regression_results_ols

Run robust regression

This runs robust regression at each voxel. It will take a while!
regression_results_robust = regress(dat, 'variable_names', names, ...
'analysis_name', 'Pinel localizer 1st-level robust GLM', 'robust', ...
'C', C.weights, 'contrast_names', C.names);
Analysis: Pinel localizer 1st-level robust GLM ---------------------------------- Design matrix warnings: ---------------------------------- Number of unique values in dataset is low (8.945444e+00, 493.00 bits), indicating possible restriction of bit rate. For comparison, Int16 has 65,536 unique values No intercept detected, adding intercept to last column of design matrix Warning: Predictors are not centered -- intercept is not interpretable as stats for average subject Warning!!! Design multicolinearity. Some regressors have variance inflation factors > 4. Check out.diagnostics Warning!!! Some observations have high leverage values relative to others, regression may be unstable. abs(z(leverage)) > 3 Warning!!! Too few variable names entered, less than size(X, 2). Names may be inaccurate. ______________________________________________________ Running regression: 242868 voxels. Design: 150 obs, 41 regressors, intercept is last Predicting exogenous variable(s) in dat.X using brain data as predictors, mass univariate Running in Robust Mode 000%
Note: Here it may tell us that we have design multicollinearity, but we should consider whether this is because of collinearity among nuisance regressors only (which is fine, as long as we don't want to interpret their individual coefficients), or whether it involves multicollinearity with the regressors of interest. Only the latter is a problem.

Show results for contrasts

t = get_wh_image(regression_results_robust.con_t, 1:3);
create_figure('montage3'); axis off
display_obj = montage(t);
t = get_wh_image(regression_results_robust.con_t, 4:6);
create_figure('montage4'); axis off
display_obj = montage(t);

Save the results structure for later use

Now we can save the results. You should do this in the Matlab Command Window, for your particular path, as it depends on where your local files are stored. Be careful, because save may overwrite existing files!
cd('/Users/torwager/Dropbox (Dartmouth College)/COURSES/Courses_Dartmouth/2021_3_Spring_fMRI_Class/Labs_and_Lectures_for_Instructors/Labs_and_assignments/Lab_5/Pinel_data_sample_subject_INSTRUCTOR/')
save canlab_1st_level_robust_regression_results regression_results_robust

Key ideas: From 1st-level to 2nd-level (group) analysis

If you're running 1st-level models for multiple subjects and are going to use the output for group analysis, here is a workflow and some things to keep in mind (a sketch of steps 2 and 3 follows the list):
1. Create a script that loops through all subjects and runs the models
2. Create a script that loops through all subjects and aggregates the contrast images
objcat = image_math(obj1, obj2, 'cat'); % concatenate the images in two objects into one object
3. Create a script that loops through contrasts and performs 2nd-level (group) analysis
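Here is a minimal sketch of steps 2 and 3 for a single contrast. All folder and file names below are hypothetical placeholders (adapt them to your own study), and it assumes you have already written each subject's contrast estimate image to a NIfTI file (e.g., with the write( ) method) as part of step 1.
% --- Sketch of steps 2-3, with hypothetical folder/file names ---
subject_dirs = {'sub-01' 'sub-02' 'sub-03'};   % hypothetical subject folders
con_file = 'con_Audio_vs_video.nii';           % hypothetical 1st-level contrast image per subject

group_obj = [];
for i = 1:numel(subject_dirs)
    % Load one subject's contrast estimate image
    subj_obj = fmri_data(fullfile(subject_dirs{i}, con_file));
    if isempty(group_obj)
        group_obj = subj_obj;
    else
        group_obj = image_math(group_obj, subj_obj, 'cat');   % stack subjects (step 2)
    end
end

% Step 3: one-sample t-test across subjects at each voxel (2nd-level analysis),
% thresholded at FDR q < .05
group_t = ttest(group_obj, .05, 'fdr');
orthviews(group_t);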