Second-level fMRI model: two-sample test, unpaired and paired
Full step-by-step example of fitting a GLM to perform a second-level analysis on experimental data and visualizing the results.
More specifically:
1. A sample of n=16 visual activity fMRI maps is downloaded.
2. An unpaired, two-sample t-test is applied to the brain maps to assess the effect of the contrast difference across subjects.
3. A paired, two-sample t-test is applied to the brain maps to assess the same effect while accounting for subject intercepts.
The contrast is between responses to retinotopically distinct vertical versus horizontal checkerboards. At the individual level, these stimuli are sometimes used to map the borders of primary visual areas. At the group level, such a mapping is not possible. Yet, we may observe some significant effects in these areas.
import pandas as pd
from nilearn import plotting
from nilearn.datasets import fetch_localizer_contrasts
Fetch dataset
We download a list of vertical vs horizontal checkerboard contrasts from a localizer dataset.
n_subjects = 16
sample_vertical = fetch_localizer_contrasts(
["vertical checkerboard"],
n_subjects,
legacy_format=False,
)
sample_horizontal = fetch_localizer_contrasts(
["horizontal checkerboard"],
n_subjects,
legacy_format=False,
)
# Implicitly, there is a one-to-one correspondence between the two samples:
# the first image of both samples comes from subject S1,
# the second from subject S2 etc.
[get_dataset_dir] Dataset found in
/home/runner/nilearn_data/brainomics_localizer
(contrast maps for both samples are fetched from osf.io; download logs omitted)
Estimate second level models
We define the input maps and the design matrix for the second level model and fit it.
second_level_input = sample_vertical["cmaps"] + sample_horizontal["cmaps"]
Next, we model the effect of conditions (sample 1 vs sample 2).
import numpy as np
condition_effect = np.hstack(([1] * n_subjects, [0] * n_subjects))
The design matrix for the unpaired test needs an intercept. For the paired test, we include one intercept per subject.
subject_effect = np.vstack((np.eye(n_subjects), np.eye(n_subjects)))
subjects = [f"S{i:02d}" for i in range(1, n_subjects + 1)]
We then assemble those into design matrices
unpaired_design_matrix = pd.DataFrame(
{
"vertical vs horizontal": condition_effect,
"intercept": 1,
}
)
paired_design_matrix = pd.DataFrame(
np.hstack((condition_effect[:, np.newaxis], subject_effect)),
columns=["vertical vs horizontal", *subjects],
)
and plot the designs.
import matplotlib.pyplot as plt
_, (ax_unpaired, ax_paired) = plt.subplots(
1,
2,
gridspec_kw={"width_ratios": [1, 17]},
constrained_layout=True,
)
plotting.plot_design_matrix(
unpaired_design_matrix, rescale=False, axes=ax_unpaired
)
plotting.plot_design_matrix(
paired_design_matrix, rescale=False, axes=ax_paired
)
ax_unpaired.set_title("unpaired design", fontsize=12)
ax_paired.set_title("paired design", fontsize=12)
plotting.show()
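For intuition, the same construction can be sketched at toy scale, using a hypothetical n=3 subjects instead of 16: the unpaired design has two columns (condition plus a common intercept), while the paired design has one condition column plus one intercept column per subject.

```python
import numpy as np
import pandas as pd

# Toy illustration with n=3 subjects (the example above uses n=16).
n = 3
condition = np.hstack(([1] * n, [0] * n))
subjects = [f"S{i:02d}" for i in range(1, n + 1)]

# Unpaired: condition column plus a single common intercept.
unpaired = pd.DataFrame({"vertical vs horizontal": condition, "intercept": 1})
# Paired: condition column plus one intercept column per subject.
paired = pd.DataFrame(
    np.hstack((condition[:, np.newaxis], np.vstack((np.eye(n), np.eye(n))))),
    columns=["vertical vs horizontal", *subjects],
)

print(unpaired.shape)  # (6, 2): condition + common intercept
print(paired.shape)    # (6, 4): condition + one intercept per subject
```

With 2n rows (each subject contributes one map per sample), the paired design grows to n + 1 columns, which is why it is plotted much wider above.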
We specify the analysis models and fit them.
from nilearn.glm.second_level import SecondLevelModel
second_level_model_unpaired = SecondLevelModel(n_jobs=2).fit(
second_level_input, design_matrix=unpaired_design_matrix
)
second_level_model_paired = SecondLevelModel(n_jobs=2).fit(
second_level_input, design_matrix=paired_design_matrix
)
Estimating the contrast is simple: we provide the column name of the design matrix. The output_type argument is set to "all" to return every available output, so that we can compare differences in the effect size, variance, and z-score.
stat_maps_unpaired = second_level_model_unpaired.compute_contrast(
"vertical vs horizontal", output_type="all"
)
stat_maps_paired = second_level_model_paired.compute_contrast(
"vertical vs horizontal", output_type="all"
)
Plot the results
The two ‘effect_size’ images are essentially identical.
(
stat_maps_unpaired["effect_size"].get_fdata()
- stat_maps_paired["effect_size"].get_fdata()
).max()
2.220446049250313e-15
But the variance in the unpaired image is larger.
plotting.plot_glass_brain(
stat_maps_unpaired["effect_variance"],
colorbar=True,
vmin=0,
vmax=6,
title="vertical vs horizontal effect variance, unpaired",
)
plotting.plot_glass_brain(
stat_maps_paired["effect_variance"],
colorbar=True,
vmin=0,
vmax=6,
title="vertical vs horizontal effect variance, paired",
)
plotting.show()
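The variance reduction can be illustrated with a small hypothetical simulation (the numbers below are illustrative, not from the dataset): when each subject carries an idiosyncratic offset, the subject intercepts of the paired design absorb that between-subject variability, shrinking the residual variance and hence the variance of the condition contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Hypothetical data: per-subject offsets (between-subject variability),
# a condition effect of 1.0 in the first sample, and measurement noise.
subject_offset = rng.normal(0.0, 2.0, n)
noise = rng.normal(0.0, 0.5, 2 * n)
y = np.concatenate((subject_offset + 1.0, subject_offset)) + noise

condition = np.hstack(([1.0] * n, [0.0] * n))
X_unpaired = np.column_stack((condition, np.ones(2 * n)))
X_paired = np.column_stack((condition, np.vstack((np.eye(n), np.eye(n)))))


def contrast_variance(X, y):
    """OLS variance estimate of the first regressor's coefficient."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof
    return sigma2 * np.linalg.pinv(X.T @ X)[0, 0]


var_unpaired = contrast_variance(X_unpaired, y)
var_paired = contrast_variance(X_paired, y)
# The subject intercepts absorb between-subject variance, so the paired
# design yields a smaller variance for the condition contrast.
print(var_unpaired > var_paired)
```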
Together, this makes the z-scores from the paired test larger. We threshold the second-level contrast and plot it.
threshold = 3.1 # corresponds to p < .001, uncorrected
display = plotting.plot_glass_brain(
stat_maps_unpaired["z_score"],
threshold=threshold,
colorbar=True,
plot_abs=False,
title="vertical vs horizontal, unpaired (unc p<0.001)",
vmin=0,
vmax=6,
)
display = plotting.plot_glass_brain(
stat_maps_paired["z_score"],
threshold=threshold,
colorbar=True,
plot_abs=False,
title="vertical vs horizontal, paired (unc p<0.001)",
vmin=0,
vmax=6,
)
plotting.show()
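The threshold of 3.1 is the rounded one-sided z-value corresponding to p < .001, which can be sanity-checked with scipy (assumed available here, as nilearn itself depends on it):

```python
from scipy.stats import norm

# Inverse survival function: the z-value whose upper-tail
# probability is 0.001, commonly rounded up to 3.1.
z_threshold = norm.isf(0.001)
print(round(z_threshold, 2))  # → 3.09
```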
Unsurprisingly, we see activity in the primary visual cortex, both positive and negative.
Total running time of the script: 1 minute 39.681 seconds
Estimated memory usage: 149 MB