Surface-based first and second level analysis of a dataset

Full step-by-step example of fitting a GLM (first and second level analyses) to a 10-subject dataset and visualizing the results.

More specifically:

  1. Download an fMRI BIDS dataset with two language conditions to contrast.

  2. Project the data onto a standard mesh, fsaverage5, i.e. the FreeSurfer template mesh downsampled to about 10k nodes per hemisphere (see the sketch after this list).

  3. Fit a first level model for each subject.

  4. Fit a second level model on the fitted first level models.
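
The fsaverage5 mesh from step 2 can also be loaded and inspected on its own. A minimal sketch, assuming the mesh objects expose per-hemisphere coordinate arrays as in recent nilearn versions:

# Load fsaverage5 and check its resolution (a sketch; attribute names
# assume the recent nilearn surface API).
from nilearn.datasets import load_fsaverage

fsavg = load_fsaverage(mesh="fsaverage5")
left_pial = fsavg["pial"].parts["left"]
print(left_pial.coordinates.shape)  # roughly 10k vertices per hemisphere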

Notice that in this case the preprocessed BOLD images were already normalized to the same MNI space.

Fetch example BIDS dataset

We download a simplified BIDS dataset made available for illustrative purposes. It contains only the information necessary to run a statistical analysis using Nilearn. The subject folders of the raw data contain only bold.json and events.tsv files, while the derivatives folder includes the preprocessed preproc.nii images and the confounds.tsv files.

from nilearn.datasets import fetch_language_localizer_demo_dataset

data = fetch_language_localizer_demo_dataset(legacy_output=False)
[get_dataset_dir] Dataset found in
/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset

Here is the location of the dataset on disk.

data.data_dir
'/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset'
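
If you want to check the layout for yourself, a quick directory walk does the trick; a minimal sketch using only the standard library (not part of the original example):

# Peek at the BIDS layout of the first subject (a sketch, standard
# library only).
from pathlib import Path

root = Path(data.data_dir)
for f in sorted(root.rglob("sub-01*"))[:8]:
    print(f.relative_to(root))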

Subject level models

From the dataset directory we automatically obtain FirstLevelModel objects, with their subject_label filled in from the BIDS dataset. Along with them, we also obtain:

  • a list with the Nifti image associated with each run

  • a list of events read from events.tsv in the BIDS dataset

  • a list of confound (motion) regressors, since in this case a confounds.tsv file is available in the BIDS dataset.

To get the first level models we only have to specify the dataset directory and the task_label as specified in the file names.

from nilearn.glm.first_level import first_level_from_bids

models, run_imgs, events, confounds = first_level_from_bids(
    dataset_path=data.data_dir,
    task_label="languagelocalizer",
    space_label="",
    img_filters=[("desc", "preproc")],
    n_jobs=2,
)
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: UserWarning: 'StartTime' not found in file /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json.
  models, run_imgs, events, confounds = first_level_from_bids(
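
Before fitting anything, it can help to check what came back. A small sanity-check sketch (the indexing assumes one run per subject, as in this dataset, and that events and confounds are returned as pandas DataFrames):

# Sanity check on the objects returned by first_level_from_bids
# (a sketch; this dataset has one run per subject, hence the [0]).
print(f"{len(models)} first level models")
print(f"first subject label: {models[0].subject_label}")
print(events[0][0].head())               # onset, duration, trial_type
print(confounds[0][0].columns.tolist())  # motion regressors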

Project fMRI data to the surface, fit the GLM and compute contrasts

The projection function simply takes the fMRI data and the mesh. Note that those correspond spatially, as they are both in the same space.

Warning

Note that here we pass ALL the confounds when we fit the model. In this case we can do this because our regressors only include the motion realignment parameters. For most preprocessed BIDS datasets, you would have to carefully choose which confounds to include.

When working with a typical BIDS derivatives dataset generated by fmriprep, the first_level_from_bids function allows you to indirectly pass arguments to load_confounds, so you can selectively load specific subsets of confounds to implement certain denoising strategies.
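
For instance, with fmriprep derivatives one could select a standard denoising strategy along the following lines. This is only a sketch: the file path is hypothetical, and this demo dataset's minimal confounds.tsv is not an fmriprep output, so the call would not work here as-is.

# A sketch, not part of this example: selecting confounds from fmriprep
# derivatives with nilearn's load_confounds helper. The path below is
# hypothetical.
from nilearn.interfaces.fmriprep import load_confounds

selected_confounds, sample_mask = load_confounds(
    "sub-01_task-languagelocalizer_space-MNI152NLin2009cAsym"
    "_desc-preproc_bold.nii.gz",
    strategy=("motion", "high_pass", "wm_csf"),
    motion="basic",  # the 6 realignment parameters only
    wm_csf="basic",  # mean white-matter and CSF signals
)
# selected_confounds is a DataFrame ready for FirstLevelModel.fit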

from pathlib import Path

from nilearn.datasets import load_fsaverage, load_fsaverage_data
from nilearn.surface import SurfaceImage

fsaverage5 = load_fsaverage()

# let's get the fsaverage curvature data image
# to use as background for the GLM report.
curvature = load_fsaverage_data(mesh_type="inflated", data_type="curvature")

# Empty lists in which we are going to store activation values.
z_scores = []
z_scores_left = []
z_scores_right = []
for i, (first_level_glm, fmri_img, confound, event) in enumerate(
    zip(models, run_imgs, confounds, events)
):
    print(f"Running GLM on {Path(fmri_img[0]).relative_to(data.data_dir)}")

    image = SurfaceImage.from_volume(
        mesh=fsaverage5["pial"],
        volume_img=fmri_img[0],
    )

    # Fit GLM.
    # Pass events and all confounds
    first_level_glm.fit(
        run_imgs=image,
        events=event[0],
        confounds=confound[0],
    )

    # Compute contrast between 'language' and 'string' events
    z_scores.append(
        first_level_glm.compute_contrast(
            "language-string", stat_type="t", output_type="z_score"
        )
    )

    # Generate a report for a single subject only
    # (index 1 here, i.e. the second subject, sub-02).
    if i == 1:
        report_flm = first_level_glm.generate_report(
            contrasts="language-string",
            threshold=1.96,
            alpha=0.001,
            bg_img=curvature,
        )

# View the GLM report of that subject (sub-02)
report_flm
Running GLM on derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-02/func/sub-02_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-03/func/sub-03_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-04/func/sub-04_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-05/func/sub-05_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-06/func/sub-06_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-07/func/sub-07_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-08/func/sub-08_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-09/func/sub-09_task-languagelocalizer_desc-preproc_bold.nii.gz
Running GLM on derivatives/sub-10/func/sub-10_task-languagelocalizer_desc-preproc_bold.nii.gz
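
Each entry of z_scores is a SurfaceImage holding one z-value per mesh vertex. Its per-hemisphere arrays can be pulled out directly; a sketch assuming the .data.parts layout of recent nilearn versions:

# Inspect one subject's surface z-map (a sketch; assumes the
# SurfaceImage.data.parts layout of recent nilearn versions).
import numpy as np

z_img = z_scores[0]
for hemi in ["left", "right"]:
    values = np.asarray(z_img.data.parts[hemi]).ravel()
    print(f"{hemi}: {values.size} vertices, max z = {values.max():.2f}")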

Statistical Report - First Level Model (GLM for single-run fMRI data)

Model details

Parameter        Value
drift_model      cosine
high_pass        0.01 Hz
hrf_model        glover
noise_model      ar1
signal_scaling   0
slice_time_ref   0.0
smoothing_fwhm   None
standardize      False
subject_label    02
t_r              1.5 s

[Report figures: the design matrix for run 1, the mask image, and the
contrast plot and stat map for the contrast "language-string". No cluster
table was requested.]


Group level model

Individual activation maps have been accumulated in the z_scores list. We can now use them in a one-sample t-test at the group level by passing them as input to SecondLevelModel. Since the design matrix is a single column of ones (an intercept), testing the "intercept" contrast against zero amounts to a one-sample t-test across subjects.

import pandas as pd

from nilearn.glm.second_level import SecondLevelModel

second_level_glm = SecondLevelModel()
design_matrix = pd.DataFrame([1] * len(z_scores), columns=["intercept"])
second_level_glm.fit(second_level_input=z_scores, design_matrix=design_matrix)

results = second_level_glm.compute_contrast("intercept", output_type="z_score")

report_slm = second_level_glm.generate_report(
    contrasts="intercept", threshold=1.96, alpha=0.001, bg_img=curvature
)

# View the GLM report at the group level
report_slm
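
As a rough guide to a stricter display threshold than z=1.96, one can Bonferroni-correct across vertices. A sketch, assuming scipy is available and the same .data.parts layout as above:

# Bonferroni-corrected two-sided z threshold over all vertices
# (a sketch; not part of the original example).
import numpy as np
from scipy.stats import norm

n_vertices = sum(
    np.asarray(results.data.parts[h]).size for h in ["left", "right"]
)
z_bonf = norm.isf(0.05 / n_vertices / 2)  # two-sided, alpha = 0.05
print(f"Bonferroni z threshold for {n_vertices} vertices: {z_bonf:.2f}")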

Statistical Report - Second Level Model (GLM for multi-subject fMRI data)

Model details

Parameter        Value
smoothing_fwhm   None

[Report figures: the group design matrix, the mask image, and the contrast
plot and stat map for the contrast "intercept". No cluster table was
requested.]


Visualization

We can now plot the computed group-level maps for the left and right hemispheres.

from nilearn.plotting import plot_surf_stat_map, show

fsaverage_data = load_fsaverage_data(data_type="sulcal")

for hemi in ["left", "right"]:
    plot_surf_stat_map(
        surf_mesh=fsaverage5["inflated"],
        stat_map=results,
        hemi=hemi,
        title=f"(language-string), {hemi} hemisphere",
        colorbar=True,
        threshold=1.96,
        bg_map=fsaverage_data,
    )

show()
[Figures: thresholded group-level z-map for the contrast (language-string),
left and right hemispheres.]
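
To quantify what the figures show, one can count suprathreshold vertices per hemisphere (same SurfaceImage.data.parts assumption as above):

# Fraction of vertices above the display threshold, per hemisphere
# (a sketch; same .data.parts assumption as above).
import numpy as np

for hemi in ["left", "right"]:
    z = np.asarray(results.data.parts[hemi]).ravel()
    frac = (np.abs(z) > 1.96).mean()
    print(f"{hemi}: {frac:.1%} of vertices with |z| > 1.96")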

Total running time of the script: (1 minute 40.783 seconds)

Estimated memory usage: 963 MB
