Single-subject data (two runs) in native space

This example shows the analysis of an SPM dataset studying face perception. The analysis is performed in native space: realignment parameters are provided with the input images, but the images have not been resampled to a common space.

The experimental paradigm is simple, with two conditions: viewing a face image or a scrambled face image, supposedly with the same low-level statistical properties. The goal is to find face-specific responses.

For details on the data, please see Henson et al.[1].

This example takes a long time to run because the inputs are lists of 3D images sampled at different positions (encoded by different affine transforms).

Fetch the SPM multimodal_faces data.

[get_dataset_dir] Dataset created in
/home/runner/nilearn_data/spm_multimodal_fmri
[_glob_spm_multimodal_fmri_data] Missing 390 functional scans for session 1.
[fetch_spm_multimodal_fmri] Data absent, downloading...
[fetch_single_file] Downloading data from
https://www.fil.ion.ucl.ac.uk/spm/download/data/mmfaces/multimodal_fmri.zip ...
[fetch_single_file]  ...done. (34 seconds, 0 min)

[uncompress_file] Extracting data from
/home/runner/nilearn_data/spm_multimodal_fmri/sub001/multimodal_fmri.zip...
[uncompress_file] .. done.

[fetch_single_file] Downloading data from
https://www.fil.ion.ucl.ac.uk/spm/download/data/mmfaces/multimodal_smri.zip ...
[fetch_single_file]  ...done. (2 seconds, 0 min)

[uncompress_file] Extracting data from
/home/runner/nilearn_data/spm_multimodal_fmri/sub001/multimodal_smri.zip...
[uncompress_file] .. done.

Specify timing and design matrix parameters.

# repetition time, in seconds
t_r = 2.0
# Sample at the beginning of each acquisition.
slice_time_ref = 0.0
# We use a discrete cosine transform to model signal drifts.
drift_model = "Cosine"
# The cutoff for the drift model is 0.01 Hz.
high_pass = 0.01
# The hemodynamic response function: SPM's canonical HRF plus its time derivative
hrf_model = "spm + derivative"
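
To see what the cosine drift model amounts to, here is an illustrative sketch (not nilearn's exact implementation): with a cutoff of 0.01 Hz, the number of cosine regressors is roughly twice the run duration times the cutoff frequency.

```python
import numpy as np

# Illustrative values: 390 scans at t_r = 2.0 s, cutoff 0.01 Hz
t_r, n_scans, high_pass = 2.0, 390, 0.01
frame_times = np.arange(n_scans) * t_r
duration = n_scans * t_r

# Number of cosine regressors with frequency below the cutoff
order = int(np.floor(2 * duration * high_pass))

# One cosine regressor per retained frequency
drifts = np.array(
    [np.cos(np.pi * k * frame_times / duration) for k in range(1, order + 1)]
).T
print(drifts.shape)  # → (390, 15)
```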

Resample the images.

Each run's 3D images are first concatenated into a single 4D image with Nilearn's concat_imgs function; the second run is then resampled onto the grid of the first with resample_img.

import warnings

from nilearn.image import concat_imgs, mean_img, resample_img

# Avoid getting too many warnings due to resampling
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fmri_img = [
        concat_imgs(subject_data.func1, auto_resample=True),
        concat_imgs(subject_data.func2, auto_resample=True),
    ]
affine, shape = fmri_img[0].affine, fmri_img[0].shape
print("Resampling the second image (this takes time)...")
fmri_img[1] = resample_img(
    fmri_img[1], affine, shape[:3], copy_header=True, force_resample=True
)
Resampling the second image (this takes time)...

Let’s create a mean image for display purposes.

mean_image = mean_img(fmri_img, copy_header=True)

Make the design matrices.

import numpy as np
import pandas as pd

from nilearn.glm.first_level import make_first_level_design_matrix

design_matrices = []

Loop over the two runs.

for idx, img in enumerate(fmri_img, start=1):
    # Build experimental paradigm
    n_scans = img.shape[-1]
    events = pd.read_table(subject_data[f"events{idx}"])
    # Define the sampling times for the design matrix
    frame_times = np.arange(n_scans) * t_r
    # Build design matrix with the previously defined parameters
    design_matrix = make_first_level_design_matrix(
        frame_times,
        events,
        hrf_model=hrf_model,
        drift_model=drift_model,
        high_pass=high_pass,
    )

    # put the design matrices in a list
    design_matrices.append(design_matrix)

We can specify basic contrasts (to get beta maps). We start by specifying canonical contrasts that isolate each design matrix column.
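
These canonical contrasts are simply the rows of an identity matrix, one per design matrix column. A sketch with hypothetical column names (in the example they come from the design matrices built above):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for one of the design matrices built above
design_matrix = pd.DataFrame(
    np.zeros((4, 5)),
    columns=["faces", "scrambled", "drift_1", "drift_2", "constant"],
)
contrast_matrix = np.eye(design_matrix.shape[1])
basic_contrasts = {
    column: contrast_matrix[i]
    for i, column in enumerate(design_matrix.columns)
}
print(basic_contrasts["faces"])  # → [1. 0. 0. 0. 0.]
```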

We actually want more interesting contrasts. The simplest contrast is just the difference between the two main conditions. We define both opposite versions so as to run one-tailed t-tests. We also define an effects-of-interest contrast, a two-dimensional contrast spanning the two conditions.

contrasts = {
    "faces-scrambled": basic_contrasts["faces"] - basic_contrasts["scrambled"],
    "scrambled-faces": -basic_contrasts["faces"]
    + basic_contrasts["scrambled"],
    "effects_of_interest": np.vstack(
        (basic_contrasts["faces"], basic_contrasts["scrambled"])
    ),
}

Fit the GLM for the 2 runs by specifying a FirstLevelModel and then fitting it.

from nilearn.glm.first_level import FirstLevelModel

print("Fitting a GLM")
fmri_glm = FirstLevelModel()
fmri_glm = fmri_glm.fit(fmri_img, design_matrices=design_matrices)
Fitting a GLM

Now we can compute contrast-related statistical maps (in z-scale), and plot them.

from nilearn import plotting

print("Computing contrasts")

# Iterate on contrasts
for contrast_id, contrast_val in contrasts.items():
    print(f"\tcontrast id: {contrast_id}")
    # compute the contrasts
    z_map = fmri_glm.compute_contrast(contrast_val, output_type="z_score")
    # plot the contrasts as soon as they're generated
    # the display is overlaid on the mean fMRI image
    # a threshold of 3.0 is used, more sophisticated choices are possible
    plotting.plot_stat_map(
        z_map,
        bg_img=mean_image,
        threshold=3.0,
        display_mode="z",
        cut_coords=3,
        black_bg=True,
        title=contrast_id,
    )
    plotting.show()
[Figures: z-maps for faces-scrambled, scrambled-faces, and effects_of_interest, overlaid on the mean fMRI image]
Computing contrasts
        contrast id: faces-scrambled
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:142: UserWarning: One contrast given, assuming it for all 2 runs
  z_map = fmri_glm.compute_contrast(contrast_val, output_type="z_score")
        contrast id: scrambled-faces
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:142: UserWarning: One contrast given, assuming it for all 2 runs
  z_map = fmri_glm.compute_contrast(contrast_val, output_type="z_score")
        contrast id: effects_of_interest
/home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_spm_multimodal_faces.py:142: UserWarning: One contrast given, assuming it for all 2 runs
  z_map = fmri_glm.compute_contrast(contrast_val, output_type="z_score")
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/glm/contrasts.py:166: UserWarning: Running approximate fixed effects on F statistics.
  contrast = contrast_ if contrast is None else contrast + contrast_

Based on the resulting maps, we observe that the analysis yields widespread activity for the ‘effects of interest’ contrast, showing the involvement of large portions of the visual cortex in both conditions. By contrast, the differential effect between “faces” and “scrambled” involves sparser, more anterior and lateral regions, and also shows some responses in the frontal lobe.

References

[1] Henson, R.N. et al. (2003). Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cerebral Cortex, 13(7), 793-805.

Total running time of the script: (2 minutes 19.171 seconds)

Estimated memory usage: 974 MB
