Surface-based first and second level analysis of a dataset

Full step-by-step example of fitting a GLM (first and second level analysis) on a 10-subject dataset and visualizing the results.

More specifically:

  1. Download an fMRI BIDS dataset with two language conditions to contrast.

  2. Project the data to a standard mesh, fsaverage5, that is, the FreeSurfer template mesh downsampled to about 10k vertices per hemisphere.

  3. Create and fit a first level model for each subject.

  4. Fit a second level model on the resulting subject-level contrast maps.

Notice that in this case the preprocessed BOLD images were already normalized to the same MNI space.

Fetch example BIDS dataset

We download a simplified BIDS dataset made available for illustrative purposes. It contains only the information needed to run a statistical analysis with Nilearn. The raw-data subject folders contain only bold.json and events.tsv files, while the derivatives folder includes the preprocessed files (desc-preproc_bold.nii.gz) and the confounds.tsv files.
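The fetch call itself is not shown above, although its log output is. A minimal sketch of it, assuming the current nilearn API where fetch_language_localizer_demo_dataset returns an object exposing data_dir (the data variable name is taken from the later code):

from nilearn.datasets import fetch_language_localizer_demo_dataset

# Download the demo dataset (or reuse a cached copy, as in the log below);
# data.data_dir is used further down to locate files on disk.
data = fetch_language_localizer_demo_dataset()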

[fetch_language_localizer_demo_dataset] Dataset found in
/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset

Here is the location of the dataset on disk.

'/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset'

Subject level models

From the dataset directory we automatically obtain the FirstLevelModel objects, with their subject_id filled in from the BIDS dataset. Along with them, we also obtain:

  • a list of the Nifti images associated with each run,

  • a list of events read from the events.tsv files in the BIDS dataset,

  • a list of confound (motion) regressors, since a confounds.tsv file is available in this BIDS dataset.

To get the first level models, we only have to provide the dataset directory and the task_label as it appears in the file names.

from nilearn.glm.first_level import first_level_from_bids

models, run_imgs, events, confounds = first_level_from_bids(
    dataset_path=data.data_dir,
    task_label="languagelocalizer",
    space_label="",
    img_filters=[("desc", "preproc")],
    n_jobs=2,
)
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: RuntimeWarning: 'StartTime' not found in file /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json.
  models, run_imgs, events, confounds = first_level_from_bids(
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: UserWarning: 'slice_time_ref' not provided and cannot be inferred from metadata.
It will be assumed that the slice timing reference is 0.0 percent of the repetition time.
If it is not the case it will need to be set manually in the generated list of models.
  models, run_imgs, events, confounds = first_level_from_bids(
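The second warning notes that slice_time_ref could not be inferred and defaults to 0. If that were wrong for your data, it could be set manually on the returned models before fitting; a sketch, with 0.5 as a purely illustrative value:

# Hypothetical correction: set the slice timing reference on each model.
# The value 0.5 (middle of the repetition time) is illustrative only.
for model in models:
    model.slice_time_ref = 0.5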

Project fMRI data to the surface, fit the GLM and compute contrasts

The projection function simply takes the fMRI data and the mesh. Note that these correspond spatially, as they are both in the same space.

Warning

Note that here we pass ALL the confounds when we fit the model. In this case we can do this because our regressors only include the motion realignment parameters. For most preprocessed BIDS datasets, you would have to choose carefully which confounds to include.

When working with a typical BIDS derivative dataset generated by fmriprep, the first_level_from_bids function allows you to indirectly pass arguments to load_confounds, so you can selectively load specific subsets of confounds to implement certain denoising strategies.
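As a minimal sketch of that second option (not run on this dataset; the file name is hypothetical and the strategy values are just one possible choice), load_confounds can also be called directly on an fmriprep output to select only motion and drift regressors:

from nilearn.interfaces.fmriprep import load_confounds

# Hypothetical fmriprep-style output file, for illustration only.
selected_confounds, sample_mask = load_confounds(
    "sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz",
    strategy=("motion", "high_pass"),  # motion + slow-drift regressors only
    motion="basic",  # the 6 basic realignment parameters
)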

from pathlib import Path

from nilearn.datasets import load_fsaverage, load_fsaverage_data
from nilearn.surface import SurfaceImage

fsaverage5 = load_fsaverage()

# let's get the fsaverage curvature data image
# to use as background for the GLM report.
curvature = load_fsaverage_data(mesh_type="inflated", data_type="curvature")

# Uncorrected threshold: z = 1.96 corresponds to a two-sided p < 0.05.
threshold = 1.96

# Empty lists in which we are going to store activation values.
z_scores = []
z_scores_left = []
z_scores_right = []
for i, (first_level_glm, fmri_img, confound, event) in enumerate(
    zip(models, run_imgs, confounds, events, strict=False)
):
    print(f"Running GLM on {Path(fmri_img[0]).relative_to(data.data_dir)}")

    image = SurfaceImage.from_volume(
        mesh=fsaverage5["pial"],
        volume_img=fmri_img[0],
    )

    # Fit GLM.
    # Pass events and all confounds
    first_level_glm.fit(
        run_imgs=image,
        events=event[0],
        confounds=confound[0],
    )

    # Compute contrast between 'language' and 'string' events
    z_scores.append(
        first_level_glm.compute_contrast(
            "language-string", stat_type="t", output_type="z_score"
        )
    )

    # Let's only generate a report for a single subject (here the
    # second one, sub-02, matching the report shown below)
    if i == 1:
        report_flm = first_level_glm.generate_report(
            contrasts="language-string",
            threshold=threshold,
            height_control=None,
            alpha=0.001,
            bg_img=curvature,
            title="surface based subject-level model",
        )
Running GLM on derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-02/func/sub-02_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-03/func/sub-03_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-04/func/sub-04_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-05/func/sub-05_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-06/func/sub-06_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-07/func/sub-07_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-08/func/sub-08_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-09/func/sub-09_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-10/func/sub-10_task-languagelocalizer_desc-preproc_bold.nii.gz

View the GLM report of the second subject (sub-02)

Note

The generated report can be:

  • displayed in a Notebook,

  • opened in a browser using the .open_in_browser() method,

  • or saved to a file using the .save_as_html(output_filepath) method, as sketched below.
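As a sketch of the last option (the output file name is arbitrary):

# Save the HTML report of the subject-level model to disk.
report_flm.save_as_html("sub-02_language-string_report.html")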

Statistical Report - First Level Model

surface based subject-level model

Implement the General Linear Model for single run fMRI data.

Description

Data were analyzed using Nilearn (version= 0.12.2.dev196+g329d9011f; RRID:SCR_001362).

At the subject level, a mass univariate analysis was performed with a linear regression at each voxel of the brain, using generalized least squares with a global ar1 noise model to account for temporal auto-correlation and a cosine drift model (high pass filter=0.01 Hz).

Regressors were entered into run-specific design matrices and onsets were convolved with a glover canonical hemodynamic response function for the following conditions:

  • string
  • language

The following contrasts were computed using a fixed-effect approach across runs:

  • language-string

Model details

Parameter            Value
drift_model          cosine
high_pass (Hertz)    0.01
hrf_model            glover
noise_model          ar1
signal_scaling       0
slice_time_ref       0.0
standardize          False
subject_label        02
t_r (seconds)        1.5

Mask

[Mask image]

The mask includes 20484 vertices (100.0 %) of the image.

Statistical Maps

language-string

[Stat map plot for the contrast: language-string]

Cluster Table (height control: None; threshold Z = 1.96)

Cluster ID  Hemisphere  Peak Stat  Cluster Size (vertices)
1 left 6.51 70
2 left 2.83 6
3 left 2.44 2
4 left 9.08 465
5 left 3.59 33
6 left 10.39 153
7 left 3.61 14
8 left 4.77 98
9 left 3.54 9
10 left 2.63 11
11 left 6.09 81
12 left 2.07 2
13 left 3.52 19
14 left 2.65 6
15 left 3.67 48
16 left 2.18 3
17 left 4.76 62
18 left 2.92 4
19 left 4.24 80
20 left 8.35 49
21 left 5.74 30
22 left 2.67 6
23 left 5.39 23
24 left 8.01 68
25 left 3.88 24
26 left 3.10 15
27 left 2.80 10
28 left 3.06 12
29 left 2.36 1
30 left 2.88 1
31 left 2.90 6
32 left 2.97 9
33 left 2.42 6
34 left 4.68 27
35 left 2.78 3
36 left 2.14 3
37 left 3.75 20
38 left 9.96 41
39 left 2.77 16
40 left 3.98 25
41 left 3.93 13
42 left 3.47 22
43 left 2.33 3
44 left 2.21 4
45 left 3.20 8
46 left 2.78 3
47 left 3.79 10
48 left 2.18 4
49 left 2.44 3
50 left 2.34 4
51 left 3.47 4
52 left 2.75 4
53 left 2.68 6
54 left 2.57 2
55 left 2.72 5
56 left 2.56 1
57 left 3.45 5
58 left 2.78 5
59 left 3.62 6
60 left 2.06 1
61 left 2.68 8
62 left 3.09 7
63 left 2.19 4
64 left 2.72 3
65 left 2.56 1
66 left 2.46 2
67 left 2.11 1
68 left 3.11 12
69 left 2.10 1
70 left 2.92 4
71 left 2.00 1
72 left 2.58 3
73 left 2.01 1
74 left 2.40 5
75 left 2.21 2
76 left 2.03 3
77 left 2.25 1
78 left 2.11 2
79 left 2.11 2
80 left 2.15 2
81 left 1.97 1
82 left 2.98 2
83 left 1.99 2
84 left 2.21 1
85 left 2.62 2
86 left 2.01 1
87 left 2.18 1
88 left 1.98 1
89 left 2.06 1
90 left 2.16 1
91 left 2.40 1
92 left 2.21 1
93 left 2.06 1
94 left 2.32 1
95 left 2.21 1
96 left 2.12 1
97 left 1.98 2
98 right 8.51 169
99 right 2.05 2
100 right 4.08 33
101 right 6.09 169
102 right 3.76 38
103 right 8.54 161
104 right 3.03 25
105 right 4.50 30
106 right 2.80 4
107 right 2.87 8
108 right 4.41 20
109 right 3.13 3
110 right 7.86 100
111 right 3.33 33
112 right 2.38 1
113 right 3.45 30
114 right 2.85 10
115 right 4.57 19
116 right 3.35 14
117 right 5.10 16
118 right 2.59 9
119 right 6.10 27
120 right 2.79 8
121 right 2.97 7
122 right 4.41 24
123 right 2.93 9
124 right 2.03 3
125 right 2.48 5
126 right 2.89 10
127 right 2.23 3
128 right 2.16 2
129 right 2.86 5
130 right 2.20 3
131 right 3.25 5
132 right 2.45 5
133 right 2.94 9
134 right 3.14 3
135 right 2.86 4
136 right 2.23 6
137 right 2.53 7
138 right 4.98 24
139 right 2.19 2
140 right 2.68 1
141 right 2.89 6
142 right 3.81 9
143 right 3.12 22
144 right 2.35 4
145 right 2.12 1
146 right 3.06 6
147 right 3.30 7
148 right 3.01 8
149 right 3.26 13
150 right 3.22 9
151 right 2.58 8
152 right 2.29 5
153 right 2.53 6
154 right 2.78 4
155 right 2.72 4
156 right 2.04 2
157 right 3.55 5
158 right 2.79 4
159 right 2.72 4
160 right 2.31 2
161 right 2.10 4
162 right 2.81 4
163 right 2.68 3
164 right 3.41 8
165 right 2.61 2
166 right 2.52 1
167 right 3.05 3
168 right 2.16 2
169 right 1.98 1
170 right 1.97 1
171 right 1.98 1
172 right 2.03 1
173 right 2.54 1
174 right 2.08 1
175 right 1.98 1
176 right 2.01 1
177 right 2.00 1
178 right 3.13 3
179 right 2.05 1
180 right 2.15 1
181 right 2.34 2
182 right 2.03 1
183 right 2.12 1
184 right 1.96 1
185 right 2.06 1
186 right 2.07 1
187 right 2.26 1
188 right 2.45 2
189 right 2.06 1
190 right 2.06 1
191 right 2.53 1
192 right 2.14 1
193 right 2.03 1
194 right 2.01 1
195 right 2.12 1
196 right 2.46 1
197 right 2.00 1
198 right 2.18 1


Group level model

Individual activation maps have been accumulated in the z_scores list. We can now use them as input to a one-sample t-test at the group level, by passing them to SecondLevelModel.

import pandas as pd

from nilearn.glm.second_level import SecondLevelModel

second_level_glm = SecondLevelModel()
design_matrix = pd.DataFrame([1] * len(z_scores), columns=["intercept"])
second_level_glm.fit(second_level_input=z_scores, design_matrix=design_matrix)

report_slm = second_level_glm.generate_report(
    contrasts=["intercept"],
    threshold=threshold,
    height_control=None,
    alpha=0.001,
    bg_img=curvature,
    title="surface based group-level model",
)
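Besides the report, the group-level map itself can be computed with compute_contrast; a short sketch (the variable name is ours):

# Compute the group-level z-map for the intercept (one-sample t-test).
z_map_group = second_level_glm.compute_contrast(
    second_level_contrast="intercept", output_type="z_score"
)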

View the GLM report at the group level.

Note

The generated report can be:

  • displayed in a Notebook,

  • opened in a browser using the .open_in_browser() method,

  • or saved to a file using the .save_as_html(output_filepath) method.

Statistical Report - Second Level Model

surface based group-level model

Implement the General Linear Model for multiple subject fMRI data.

Description

Data were analyzed using Nilearn (version= 0.12.2.dev196+g329d9011f; RRID:SCR_001362).

At the group level, a mass univariate analysis was performed with a linear regression at each voxel of the brain.

The following contrasts were computed:

  • intercept

Model details

Mask

[Mask image]

The mask includes 20484 vertices (100.0 %) of the image.

Statistical Maps

intercept

[Stat map plot for the contrast: intercept]

Cluster Table (height control: None; threshold Z = 1.96)

Cluster ID  Hemisphere  Peak Stat  Cluster Size (vertices)
1 left 4.07 151
2 left 4.15 151
3 left 3.12 9
4 left 3.34 35
5 left 3.66 13
6 left 3.04 15
7 left 3.49 41
8 left 2.94 15
9 left 3.10 15
10 left 3.13 7
11 left 2.88 9
12 left 2.22 3
13 left 3.45 29
14 left 2.10 1
15 left 2.36 4
16 left 2.19 1
17 left 2.28 1
18 left 2.77 14
19 left 2.58 2
20 left 2.45 4
21 left 2.17 2
22 left 3.30 6
23 left 2.05 1
24 left 2.02 1
25 left 2.10 1
26 left 2.18 2
27 left 2.09 1
28 left 2.27 3
29 left 1.97 1
30 left 2.09 1
31 left 2.04 2
32 left 2.11 1
33 left 2.08 1
34 left 2.53 1
35 left 2.03 1
36 left 2.00 1
37 left 2.18 1
38 left 2.47 1
39 left 2.09 1
40 left 1.97 1
41 left 2.11 1
42 left 2.53 1
43 left 2.02 1
44 left 2.19 1
45 left 2.24 1
46 left 2.13 1
47 left 2.07 2
48 left 2.49 1
49 left 2.46 1
50 left 2.35 1
51 right 4.10 90
52 right 2.05 1
53 right 2.24 1
54 right 4.09 101
55 right 2.36 2
56 right 2.06 1
57 right 2.15 5
58 right 2.07 1
59 right 3.36 9
60 right 2.11 3
61 right 2.06 1
62 right 2.25 2
63 right 2.56 9
64 right 2.42 1
65 right 2.79 7
66 right 2.99 3
67 right 2.45 7
68 right 2.12 2
69 right 2.60 3
70 right 2.22 2
71 right 2.71 2
72 right 2.39 1
73 right 2.63 2
74 right 2.28 1
75 right 2.09 1
76 right 1.98 1
77 right 3.03 3
78 right 2.79 3
79 right 2.00 1
80 right 3.17 4
81 right 2.05 2
82 right 2.21 3
83 right 1.97 1
84 right 1.98 1
85 right 2.21 1
86 right 2.01 1
87 right 2.13 2
88 right 2.35 2
89 right 1.99 1
90 right 2.14 1
91 right 2.17 1
92 right 2.05 1
93 right 2.00 1
94 right 1.98 1
95 right 1.99 1
96 right 2.25 1



Total running time of the script: (2 minutes 8.526 seconds)

Estimated memory usage: 749 MB
