Single-subject data (two runs) in native space
The example shows the analysis of an SPM dataset with two conditions: viewing a face image or a scrambled face image.
This example takes a lot of time because the inputs are lists of 3D images sampled at different positions (encoded by different affine functions).
See also
For more information see the dataset description.
Fetch and inspect the data
Fetch the SPM multimodal_faces data.
from nilearn.datasets import fetch_spm_multimodal_fmri
subject_data = fetch_spm_multimodal_fmri()
[wrapper] Dataset created in /home/runner/nilearn_data/spm_multimodal_fmri
[wrapper] Missing 390 functional scans for session 1.
[wrapper] Data absent, downloading...
[wrapper] Downloading data from
https://www.fil.ion.ucl.ac.uk/spm/download/data/mmfaces/multimodal_fmri.zip ...
[wrapper] ...done. (25 seconds, 0 min)
[wrapper] Extracting data from
/home/runner/nilearn_data/spm_multimodal_fmri/sub001/multimodal_fmri.zip...
[wrapper] .. done.
[wrapper] Downloading data from
https://www.fil.ion.ucl.ac.uk/spm/download/data/mmfaces/multimodal_smri.zip ...
[wrapper] ...done. (3 seconds, 0 min)
[wrapper] Extracting data from
/home/runner/nilearn_data/spm_multimodal_fmri/sub001/multimodal_smri.zip...
[wrapper] .. done.
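The fetcher returns a dictionary-like Bunch. The quick check below is optional and assumes, as the download log suggests, that func1 and func2 are lists of per-volume image paths; only the attributes used later in this example are printed.
# Optional inspection of the downloaded data (not part of the original script).
print(sorted(subject_data.keys()))
print(f"Repetition time: {subject_data.t_r} s")
print(f"Scans per run: {len(subject_data.func1)}, {len(subject_data.func2)}")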
Let’s inspect one of the event files before using them.
import pandas as pd
events = [subject_data.events1, subject_data.events2]
events_dataframe = pd.read_csv(events[0], sep="\t")
events_dataframe["trial_type"].value_counts()
trial_type
scrambled 86
faces 64
Name: count, dtype: int64
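For a quick look at the layout of the file (the GLM only needs the onset, duration and trial_type columns, assuming the usual BIDS-like event format), we can display the first few rows.
# Optional peek at the first events of the first run.
print(events_dataframe.head())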
We can confirm there are only 2 conditions in the dataset.
from nilearn.plotting import plot_event, show
plot_event(events)
show()

Concatenate and resample the images: within each run this is achieved by the concat_imgs function of Nilearn (with auto_resample=True); the second run is then resampled to the grid of the first run with resample_img.
import warnings

from nilearn.image import concat_imgs, mean_img, resample_img

# Avoid getting too many warnings due to resampling
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fmri_img = [
        concat_imgs(subject_data.func1, auto_resample=True),
        concat_imgs(subject_data.func2, auto_resample=True),
    ]

affine, shape = fmri_img[0].affine, fmri_img[0].shape
print("Resampling the second image (this takes time)...")
fmri_img[1] = resample_img(
    fmri_img[1], affine, shape[:3], copy_header=True, force_resample=True
)
Resampling the second image (this takes time)...
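As a sanity check (not in the original example), we can verify that the two runs now share the same affine and spatial shape.
import numpy as np

# Both runs should now be defined on the same grid.
print(np.allclose(fmri_img[0].affine, fmri_img[1].affine))
print(fmri_img[0].shape[:3] == fmri_img[1].shape[:3])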
Let’s create a mean image for display purposes.
mean_image = mean_img(fmri_img, copy_header=True)
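Optionally, we can take a quick look at this mean image before using it as a background; this relies on the standard plot_epi function and is not part of the original script.
from nilearn.plotting import plot_epi

# Quick display of the mean functional image.
plot_epi(mean_image, title="Mean functional image")
show()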
Fit the model
Fit the GLM for the 2 runs by specifying a FirstLevelModel and then fitting it.
# Sample at the beginning of each acquisition.
slice_time_ref = 0.0

# We use a discrete cosine transform to model signal drifts.
drift_model = "cosine"

# The cutoff for the drift model is 0.01 Hz.
high_pass = 0.01

# The hemodynamic response function.
hrf_model = "spm + derivative"

from nilearn.glm.first_level import FirstLevelModel

print("Fitting a GLM")
fmri_glm = FirstLevelModel(
    smoothing_fwhm=None,
    t_r=subject_data.t_r,
    slice_time_ref=slice_time_ref,
    hrf_model=hrf_model,
    drift_model=drift_model,
    high_pass=high_pass,
)
fmri_glm = fmri_glm.fit(fmri_img, events=events)
Fitting a GLM
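Before turning to contrasts, it can be useful to inspect the design matrix of the first run (condition regressors, their temporal derivatives, and the cosine drift terms). This optional check is not part of the original script; it relies on the standard plot_design_matrix helper and the design_matrices_ attribute of the fitted model.
from nilearn.plotting import plot_design_matrix

# Display the design matrix estimated for the first run.
plot_design_matrix(fmri_glm.design_matrices_[0])
show()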
View the results
Now we can compute contrast-related statistical maps (in z-scale), and plot them.
from nilearn.plotting import plot_stat_map
print("Computing contrasts")
Computing contrasts
We actually want more interesting contrasts. The simplest contrast just makes the difference between the two main conditions. We define the two opposite versions to run one-tailed t-tests.
contrasts = ["faces - scrambled", "scrambled - faces"]
Let’s store common parameters for all plots.
We plot the contrast values overlaid on the mean fMRI image and use the z-score values as transparency: any voxel with |Z-score| > 3 is fully opaque, and any voxel with 0 < |Z-score| < 1.96 is partly transparent.
plot_param = {
    "vmin": 0,
    "display_mode": "z",
    "cut_coords": 3,
    "black_bg": True,
    "bg_img": mean_image,
    "cmap": "inferno",
    "transparency_range": [0, 3],
}
# Iterate on contrasts to compute and plot them.
for contrast_id in contrasts:
    print(f"\tcontrast id: {contrast_id}")

    results = fmri_glm.compute_contrast(contrast_id, output_type="all")

    plot_stat_map(
        results["stat"],
        title=contrast_id,
        transparency=results["z_score"],
        **plot_param,
    )
contrast id: faces - scrambled
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:135: UserWarning: One contrast given, assuming it for all 2 runs
results = fmri_glm.compute_contrast(contrast_id, output_type="all")
contrast id: scrambled - faces
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:135: UserWarning: One contrast given, assuming it for all 2 runs
results = fmri_glm.compute_contrast(contrast_id, output_type="all")
We also define the effects of interest contrast, a two-dimensional contrast spanning the two conditions.
import numpy as np

contrasts = np.eye(2)

results = fmri_glm.compute_contrast(contrasts, output_type="all")

plot_stat_map(
    results["stat"],
    title="effects of interest",
    transparency=results["z_score"],
    **plot_param,
)
show()

/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:152: UserWarning: One contrast given, assuming it for all 2 runs
results = fmri_glm.compute_contrast(contrasts, output_type="all")
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:152: UserWarning: F contrasts should have 20 columns, but it has only 2. The rest of the contrast was padded with zeros.
results = fmri_glm.compute_contrast(contrasts, output_type="all")
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:152: UserWarning: Running approximate fixed effects on F statistics.
results = fmri_glm.compute_contrast(contrasts, output_type="all")
Based on the resulting maps, we observe widespread activity for the ‘effects of interest’ contrast, showing the involvement of large portions of the visual cortex in both conditions. By contrast, the differential effect between “faces” and “scrambled” involves sparser, more anterior and lateral regions. It also displays some responses in the frontal lobe.
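To go further, one could threshold a z-map before reporting it. The sketch below is not part of the original example; it assumes the standard threshold_stats_img function with a false discovery rate correction.
from nilearn.glm import threshold_stats_img

# Recompute the z-map for one contrast and derive an FDR-corrected threshold.
z_map = fmri_glm.compute_contrast("faces - scrambled", output_type="z_score")
_, threshold = threshold_stats_img(z_map, alpha=0.05, height_control="fdr")
print(f"FDR = .05 threshold: {threshold:.3g}")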
Total running time of the script: (2 minutes 13.149 seconds)
Estimated memory usage: 976 MB