Surface-based first and second level analysis of a dataset

Full step-by-step example of fitting a GLM (first and second level analysis) to a 10-subject dataset and visualizing the results.

More specifically:

  1. Download an fMRI BIDS dataset with two language conditions to contrast.

  2. Project the data onto a standard mesh, fsaverage5, i.e. the FreeSurfer template mesh downsampled to about 10k nodes per hemisphere.

  3. Fit a first level model for each subject.

  4. Fit a second level model on the fitted first level models.

Note that in this case the preprocessed BOLD images have already been normalized to the same MNI space.

Fetch example BIDS dataset

We download a simplified BIDS dataset made available for illustrative purposes. It contains only the information needed to run a statistical analysis using Nilearn. The raw data subject folders contain only bold.json and events.tsv files, while the derivatives folder includes the preprocessed preproc.nii images and the confounds.tsv files.

from nilearn.datasets import fetch_language_localizer_demo_dataset

data = fetch_language_localizer_demo_dataset(legacy_output=False)
[get_dataset_dir] Dataset found in
/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset

Here is the location of the dataset on disk.

'/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset'
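To get a feel for the layout described above, we can list a few of the files that were downloaded. This is a small optional sketch that relies only on the data_dir returned by the fetcher; the exact file names may differ slightly.

from pathlib import Path

# Show a handful of files from the derivatives folder of the first subject:
# the preprocessed bold image, its sidecar json and the confounds file.
derivatives_dir = Path(data.data_dir) / "derivatives"
for filename in sorted(derivatives_dir.glob("sub-01/func/*"))[:5]:
    print(filename.name)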

Subject level models

From the dataset directory we automatically obtain the FirstLevelModel objects with their subject_id filled from the BIDS dataset. Along with them, we also obtain:

  • a list with the Nifti image associated with each run

  • a list of events read from events.tsv in the BIDS dataset

  • a list of confound (motion) regressors, since in this case a confounds.tsv file is available in the BIDS dataset.

To get the first level models we only have to specify the dataset directory and the task_label as it appears in the file names.

from nilearn.glm.first_level import first_level_from_bids

models, run_imgs, events, confounds = first_level_from_bids(
    dataset_path=data.data_dir,
    task_label="languagelocalizer",
    space_label="",
    img_filters=[("desc", "preproc")],
    n_jobs=2,
)
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: DeprecationWarning: Starting in version 0.12, slice_time_ref will default to None.
  models, run_imgs, events, confounds = first_level_from_bids(
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: UserWarning: 'StartTime' not found in file /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json.
  models, run_imgs, events, confounds = first_level_from_bids(
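Before fitting, it can be useful to peek at what first_level_from_bids returned for the first subject. A small optional check; the events and confounds entries are typically one pandas DataFrame per run, but inspect them on your own data.

# Quick look at the objects returned for the first subject.
print(models[0].subject_label)   # subject id filled from the BIDS layout
print(run_imgs[0])               # preprocessed bold image(s) for this subject
print(type(events[0][0]))        # events of the first run (typically a DataFrame)
print(type(confounds[0][0]))     # motion confound regressors of the first run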

Project fMRI data to the surface, fit the GLM and compute contrasts

The projection function simply takes the fMRI data and the mesh. Note that the two correspond spatially, as they are both in the same space.

Warning

Note that here we pass ALL the confounds when we fit the model. In this case we can do this because our regressors only include the motion realignment parameters. For most preprocessed BIDS datasets, you would have to carefully choose which confounds to include.

When working with a typical BIDS derivative dataset generated by fmriprep, the first_level_from_bids function allows you to indirectly pass arguments to load_confounds, so you can selectively load specific subsets of confounds to implement certain denoising strategies.
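For illustration, here is a hedged sketch of such a call on a hypothetical fmriprep derivative (not run here): keyword arguments prefixed with confounds_ are forwarded to nilearn.interfaces.fmriprep.load_confounds, so only the requested subsets of confounds are loaded. The dataset path and space label below are placeholders.

# Hypothetical call for an fmriprep-style dataset (commented out on purpose):
# models, run_imgs, events, confounds = first_level_from_bids(
#     dataset_path="/path/to/fmriprep/derivatives/parent",
#     task_label="languagelocalizer",
#     space_label="MNI152NLin2009cAsym",
#     img_filters=[("desc", "preproc")],
#     confounds_strategy=("motion", "high_pass"),  # forwarded to load_confounds
#     confounds_motion="basic",                    # keep only basic motion params
# )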

from pathlib import Path

from nilearn.datasets import load_fsaverage, load_fsaverage_data
from nilearn.surface import SurfaceImage

fsaverage5 = load_fsaverage()

# let's get the fsaverage curvature data image
# to use as background for the GLM report.
curvature = load_fsaverage_data(mesh_type="inflated", data_type="curvature")

threshold = 1.96

# Empty lists in which we are going to store activation values.
z_scores = []
z_scores_left = []
z_scores_right = []
for i, (first_level_glm, fmri_img, confound, event) in enumerate(
    zip(models, run_imgs, confounds, events)
):
    print(f"Running GLM on {Path(fmri_img[0]).relative_to(data.data_dir)}")

    image = SurfaceImage.from_volume(
        mesh=fsaverage5["pial"],
        volume_img=fmri_img[0],
    )

    # Fit GLM.
    # Pass events and all confounds
    first_level_glm.fit(
        run_imgs=image,
        events=event[0],
        confounds=confound[0],
    )

    # Compute contrast between 'language' and 'string' events
    z_scores.append(
        first_level_glm.compute_contrast(
            "language-string", stat_type="t", output_type="z_score"
        )
    )

    # Let's only generate a report for a single subject (here index 1, i.e. sub-02)
    if i == 1:
        report_flm = first_level_glm.generate_report(
            contrasts="language-string",
            threshold=threshold,
            height_control=None,
            alpha=0.001,
            bg_img=curvature,
            title="surface based subject-level model",
        )
Running GLM on derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-02/func/sub-02_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-03/func/sub-03_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-04/func/sub-04_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-05/func/sub-05_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-06/func/sub-06_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-07/func/sub-07_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-08/func/sub-08_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-09/func/sub-09_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-10/func/sub-10_task-languagelocalizer_desc-preproc_bold.nii.gz

View the GLM report of this subject

Statistical Report - First Level Model
surface based subject-level model
Implement the General Linear Model for single-run fMRI data.

Description

Data were analyzed using Nilearn (version= 0.11.2.dev244+g4f00d0a18; RRID:SCR_001362).

At the subject level, a mass univariate analysis was performed with a linear regression at each voxel of the brain, using generalized least squares with a global ar1 noise model to account for temporal auto-correlation and a cosine drift model (high pass filter=0.01 Hz).

Regressors were entered into run-specific design matrices and onsets were convolved with a glover canonical hemodynamic response function for the following conditions:

  • string
  • language

The following contrasts were computed using a fixed-effect approach across runs:

  • language-string

Model details

Parameter         Value
drift_model       cosine
high_pass         0.01 Hz
hrf_model         glover
noise_model       ar1
signal_scaling    0
slice_time_ref    0.0
standardize       False
subject_label     02
t_r               1.5 s

Design Matrix

[Plot of the design matrix for run 0 and of its correlation matrix.]

Contrasts

[Plot of the contrast language-string (run 0).]

Mask

[Mask image.]

Statistical Maps

language-string

[Stat map plot for the contrast language-string; cluster table computed with no height control at a z threshold of 1.96.]


Or view it in a separate browser window:

report_flm.open_in_browser()

Save the report to disk

output_dir = Path.cwd() / "results" / "plot_surface_bids_analysis"
output_dir.mkdir(exist_ok=True, parents=True)
report_flm.save_as_html(output_dir / "report_flm.html")
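As an alternative to the HTML report, a subject-level z-map can be drawn directly on the inflated mesh. A minimal sketch, assuming that plot_surf_stat_map accepts the SurfaceImage objects produced above (as in recent nilearn versions); adapt if your version differs.

from nilearn.plotting import plot_surf_stat_map, show

# Plot the left hemisphere of the first fitted subject's z-map on the
# inflated fsaverage5 mesh, with the curvature map as background and the
# same z threshold as used in the report.
plot_surf_stat_map(
    surf_mesh=fsaverage5["inflated"],
    stat_map=z_scores[0],
    bg_map=curvature,
    hemi="left",
    threshold=threshold,
    title="first subject: language - string (z)",
)
show()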

Group level model

Individual activation maps have been accumulated in the z_scores list. We can now use them in a one-sample t-test at the group level by passing them as input to SecondLevelModel.

import pandas as pd

from nilearn.glm.second_level import SecondLevelModel

second_level_glm = SecondLevelModel()
design_matrix = pd.DataFrame([1] * len(z_scores), columns=["intercept"])
second_level_glm.fit(second_level_input=z_scores, design_matrix=design_matrix)

report_slm = second_level_glm.generate_report(
    contrasts=["intercept"],
    threshold=threshold,
    height_control=None,
    alpha=0.001,
    bg_img=curvature,
    title="surface based group-level model",
)
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/reporting/utils.py:31: UserWarning: constrained_layout not applied.  At least one axes collapsed to zero width or height.
  fig.savefig(

View the GLM report at the group level.

Statistical Report - Second Level Model
surface based group-level model
Implement the General Linear Model for multiple-subject fMRI data.

Description

Data were analyzed using Nilearn (version= 0.11.2.dev244+g4f00d0a18; RRID:SCR_001362).

At the group level, a mass univariate analysis was performed with a linear regression at each voxel of the brain.

The following contrasts were computed:

  • intercept

Model details

Design Matrix

[Plot of the group-level design matrix.]

Contrasts

[Plot of the contrast intercept.]

Mask

[Mask image.]

Statistical Maps

intercept

[Stat map plot for the contrast intercept; cluster table computed with no height control at a z threshold of 1.96.]



Or view it in a separate browser window:

report_slm.open_in_browser()

Save it as an html file.

report_slm.save_as_html(output_dir / "report_slm.html")
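Beyond the report, the group-level map itself can be computed and inspected programmatically. A small sketch, assuming the SurfaceImage returned by compute_contrast exposes its per-hemisphere arrays through data.parts (as in recent nilearn versions).

import numpy as np

# Compute the group-level z-map for the intercept (one-sample t-test).
z_map_group = second_level_glm.compute_contrast(
    second_level_contrast="intercept", output_type="z_score"
)

# Count, for each hemisphere, the vertices above the display threshold.
for hemi, values in z_map_group.data.parts.items():
    n_supra = int(np.sum(np.asarray(values) > threshold))
    print(f"{hemi}: {n_supra} vertices with z > {threshold}")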

Total running time of the script: (1 minutes 48.011 seconds)

Estimated memory usage: 903 MB
