.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/07_advanced/plot_surface_bids_analysis.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_07_advanced_plot_surface_bids_analysis.py>`
        to download the full example code,
        or to run this example in your browser via Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_07_advanced_plot_surface_bids_analysis.py:

Surface-based first and second level analysis of a BIDS dataset
================================================================

Full step-by-step example of fitting a :term:`GLM`
(first and second level analyses) to a 10-subject dataset
and visualizing the results.

More specifically:

#. Download an :term:`fMRI` :term:`BIDS` dataset
   with two language conditions to contrast.

#. Project the data onto a standard mesh, fsaverage5,
   also known as the FreeSurfer template :term:`mesh`
   downsampled to about 10k nodes per hemisphere.

#. Fit a first level model for every subject.

#. Fit a second level model on the fitted first level models.

Notice that in this case the preprocessed :term:`bold` images
have already been normalized to the same :term:`MNI` space.

.. GENERATED FROM PYTHON SOURCE LINES 27-36

Fetch example :term:`BIDS` dataset
----------------------------------
We download a simplified :term:`BIDS` dataset
made available for illustrative purposes.
It contains only the necessary information
to run a statistical analysis using Nilearn.
The raw data subject folders contain only bold.json and events.tsv files,
while the derivatives folder includes the preprocessed files preproc.nii
and the confounds.tsv files.

.. GENERATED FROM PYTHON SOURCE LINES 36-40

.. code-block:: Python

    from nilearn.datasets import fetch_language_localizer_demo_dataset

    data = fetch_language_localizer_demo_dataset(legacy_output=False)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [fetch_language_localizer_demo_dataset] Dataset found in /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset

.. GENERATED FROM PYTHON SOURCE LINES 41-42

Here is the location of the dataset on disk.

.. GENERATED FROM PYTHON SOURCE LINES 42-44

.. code-block:: Python

    data.data_dir

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    '/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset'

.. GENERATED FROM PYTHON SOURCE LINES 45-64

Subject level models
--------------------
From the dataset directory we automatically obtain the FirstLevelModel objects
with their subject_id filled from the :term:`BIDS` dataset.
Along with the models, we also obtain:

- a list of the Nifti images associated with each run,

- a list of events read from events.tsv in the :term:`BIDS` dataset,

- a list of confound regressors (here, motion realignment parameters),
  since a confounds.tsv file is available in the :term:`BIDS` dataset
  (a sketch after the next code block shows how you might select
  only a subset of confounds from a full fmriprep derivative).

To get the first level models we only have to specify the dataset directory
and the ``task_label`` used in the file names.

.. GENERATED FROM PYTHON SOURCE LINES 64-74

.. code-block:: Python

    from nilearn.glm.first_level import first_level_from_bids

    models, run_imgs, events, confounds = first_level_from_bids(
        dataset_path=data.data_dir,
        task_label="languagelocalizer",
        space_label="",
        img_filters=[("desc", "preproc")],
        n_jobs=2,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: UserWarning: 'StartTime' not found in file /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json.
      models, run_imgs, events, confounds = first_level_from_bids(
    /home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:66: UserWarning: 'slice_time_ref' not provided and cannot be inferred from metadata. It will be assumed that the slice timing reference is 0.0 percent of the repetition time. If it is not the case it will need to be set manually in the generated list of models.
      models, run_imgs, events, confounds = first_level_from_bids(

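Before going further, a word about the ``confounds`` list we just obtained:
it contains every column of each run's confounds.tsv file,
which for this simplified dataset is only the motion realignment parameters,
so passing them all to the model (as done below) is acceptable.
With a full fmriprep derivative you would normally keep only a subset,
either by forwarding arguments to
:obj:`~nilearn.interfaces.fmriprep.load_confounds`
through :obj:`~nilearn.glm.first_level.first_level_from_bids`
(see the warning further down),
or by calling ``load_confounds`` directly.
The following sketch is purely illustrative and is not run on this dataset:
the file name is hypothetical and stands in for a real fmriprep output.

.. code-block:: Python

    # Hypothetical sketch (not part of this example's dataset):
    # select a basic motion + high-pass + WM/CSF denoising strategy
    # from the confound files that fmriprep writes next to the preprocessed image.
    from nilearn.interfaces.fmriprep import load_confounds

    selected_confounds, sample_mask = load_confounds(
        "sub-01_task-languagelocalizer_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz",
        strategy=("motion", "high_pass", "wm_csf"),
        motion="basic",  # the 6 realignment parameters, no derivatives or squares
        wm_csf="basic",  # mean white matter and CSF signals
    )
    # selected_confounds is a pandas DataFrame that could then be passed to
    # FirstLevelModel.fit(..., confounds=selected_confounds).
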
.. GENERATED FROM PYTHON SOURCE LINES 75-96

Project :term:`fMRI` data to the surface, fit the GLM and compute contrasts.

The projection function simply takes the :term:`fMRI` data and the mesh.
Note that those correspond spatially, as they are both in the same space.

.. warning::

    Note that here we pass ALL the confounds when we fit the model.
    In this case we can do this
    because our regressors only include the motion realignment parameters.
    For most preprocessed BIDS datasets,
    you would have to carefully choose which confounds to include.

    When working with a typical BIDS derivative dataset generated by fmriprep,
    the :obj:`~nilearn.glm.first_level.first_level_from_bids` function
    allows you to indirectly pass arguments to
    :obj:`~nilearn.interfaces.fmriprep.load_confounds`,
    so you can selectively load specific subsets of confounds
    to implement certain denoising strategies.

.. GENERATED FROM PYTHON SOURCE LINES 96-149

.. code-block:: Python

    from pathlib import Path

    from nilearn.datasets import load_fsaverage, load_fsaverage_data
    from nilearn.surface import SurfaceImage

    fsaverage5 = load_fsaverage()

    # let's get the fsaverage curvature data image
    # to use as background for the GLM report.
    curvature = load_fsaverage_data(mesh_type="inflated", data_type="curvature")

    threshold = 1.96

    # Empty lists in which we are going to store activation values.
    z_scores = []
    z_scores_left = []
    z_scores_right = []
    for i, (first_level_glm, fmri_img, confound, event) in enumerate(
        zip(models, run_imgs, confounds, events)
    ):
        print(f"Running GLM on {Path(fmri_img[0]).relative_to(data.data_dir)}")

        image = SurfaceImage.from_volume(
            mesh=fsaverage5["pial"],
            volume_img=fmri_img[0],
        )

        # Fit the GLM.
        # Pass the events and all the confounds.
        first_level_glm.fit(
            run_imgs=image,
            events=event[0],
            confounds=confound[0],
        )

        # Compute the contrast between the 'language' and 'string' events.
        z_scores.append(
            first_level_glm.compute_contrast(
                "language-string", stat_type="t", output_type="z_score"
            )
        )

        # Only generate a report for a single subject
        # (here the second one, i.e. i == 1).
        if i == 1:
            report_flm = first_level_glm.generate_report(
                contrasts="language-string",
                threshold=threshold,
                height_control=None,
                alpha=0.001,
                bg_img=curvature,
                title="surface based subject-level model",
            )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Running GLM on derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-02/func/sub-02_task-languagelocalizer_desc-preproc_bold.nii.gz
    /home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:140: UserWarning: Meshes are not identical but have compatible number of vertices.
      report_flm = first_level_glm.generate_report(
    Running GLM on derivatives/sub-03/func/sub-03_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-04/func/sub-04_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-05/func/sub-05_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-06/func/sub-06_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-07/func/sub-07_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-08/func/sub-08_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-09/func/sub-09_task-languagelocalizer_desc-preproc_bold.nii.gz
    Running GLM on derivatives/sub-10/func/sub-10_task-languagelocalizer_desc-preproc_bold.nii.gz

.. GENERATED FROM PYTHON SOURCE LINES 150-151

View the GLM report of the second subject
(the only subject for which a report was generated in the loop above).

.. GENERATED FROM PYTHON SOURCE LINES 151-153

.. code-block:: Python

    report_flm

The embedded HTML report ("Statistical Report - First Level Model",
surface based subject-level model) summarizes the analysis for subject 02:
a mass univariate linear regression at each vertex,
using generalized least squares with an ar1 noise model,
a cosine drift model (high pass filter of 0.01 Hz),
a glover hemodynamic response function
and a repetition time of 1.5 seconds,
with the conditions ``language`` and ``string``
entering the fixed-effects contrast ``language-string``.
The mask covers all 20484 vertices (100 %) of the fsaverage5 mesh,
and the statistical map is displayed at an uncorrected threshold of Z > 1.96
(the cluster table is not available for surface data).

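The values behind the report can also be inspected directly
from the :class:`~nilearn.surface.SurfaceImage` objects accumulated in ``z_scores``.
The short sketch below is illustrative and not part of the original example;
it assumes the loop above has run and that each hemisphere's data is exposed as
``data.parts["left"]`` / ``data.parts["right"]``, as in recent Nilearn versions.

.. code-block:: Python

    import numpy as np

    # Rough, illustrative check: fraction of fsaverage5 vertices whose |Z|
    # exceeds the 1.96 threshold in the first subject's contrast map.
    z_map = z_scores[0]
    z_values = np.concatenate(
        [np.ravel(z_map.data.parts["left"]), np.ravel(z_map.data.parts["right"])]
    )
    print(f"{z_values.size} vertices in total")  # 20484 on fsaverage5
    print(f"{np.mean(np.abs(z_values) > threshold):.1%} of vertices with |Z| > 1.96")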


.. GENERATED FROM PYTHON SOURCE LINES 154-156

The subject-level report can also be opened in a separate browser window
with ``report_flm.open_in_browser()``.

.. GENERATED FROM PYTHON SOURCE LINES 159-160

Save the report to disk.

.. GENERATED FROM PYTHON SOURCE LINES 160-165

.. code-block:: Python

    output_dir = Path.cwd() / "results" / "plot_surface_bids_analysis"
    output_dir.mkdir(exist_ok=True, parents=True)
    report_flm.save_as_html(output_dir / "report_flm.html")

.. GENERATED FROM PYTHON SOURCE LINES 166-174

Group level model
-----------------
Individual activation maps have been accumulated in the ``z_scores`` list.
We can now use them in a one-sample t-test at the group level
by passing them as input to
:class:`~nilearn.glm.second_level.SecondLevelModel`.

.. GENERATED FROM PYTHON SOURCE LINES 174-191

.. code-block:: Python

    import pandas as pd

    from nilearn.glm.second_level import SecondLevelModel

    second_level_glm = SecondLevelModel()

    design_matrix = pd.DataFrame([1] * len(z_scores), columns=["intercept"])

    second_level_glm.fit(second_level_input=z_scores, design_matrix=design_matrix)

    report_slm = second_level_glm.generate_report(
        contrasts=["intercept"],
        threshold=threshold,
        height_control=None,
        alpha=0.001,
        bg_img=curvature,
        title="surface based group-level model",
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/nilearn/nilearn/examples/07_advanced/plot_surface_bids_analysis.py:182: UserWarning: Meshes are not identical but have compatible number of vertices.
      report_slm = second_level_glm.generate_report(
    /home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/reporting/utils.py:31: UserWarning: constrained_layout not applied. At least one axes collapsed to zero width or height.
      fig.savefig(

.. GENERATED FROM PYTHON SOURCE LINES 192-193

View the GLM report at the group level.

.. GENERATED FROM PYTHON SOURCE LINES 193-195

.. code-block:: Python

    report_slm

The embedded HTML report ("Statistical Report - Second Level Model",
surface based group-level model) summarizes the group analysis:
a mass univariate linear regression at each vertex
for the single contrast ``intercept``
(the one-sample t-test across the ten subjects).
The mask covers all 20484 vertices (100 %) of the fsaverage5 mesh,
and the statistical map is displayed at an uncorrected threshold of Z > 1.96
(the cluster table is not available for surface data).

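Besides the report, the group-level map can be displayed directly
on the inflated fsaverage5 surface.
The sketch below is illustrative rather than part of the original example:
it assumes a recent Nilearn version in which
:func:`~nilearn.plotting.plot_surf_stat_map` accepts
:class:`~nilearn.surface.SurfaceImage` objects,
and it reuses the ``fsaverage5`` meshes, the ``curvature`` background
and the ``threshold`` defined earlier.

.. code-block:: Python

    from nilearn.plotting import plot_surf_stat_map, show

    # Group-level z-map of the one-sample t-test (the 'intercept' contrast).
    z_map_group = second_level_glm.compute_contrast(
        second_level_contrast="intercept", output_type="z_score"
    )

    # Illustrative plot of the left hemisphere on the inflated mesh,
    # thresholded at |Z| > 1.96 (uncorrected), with the curvature as background.
    plot_surf_stat_map(
        stat_map=z_map_group,
        surf_mesh=fsaverage5["inflated"],
        hemi="left",
        bg_map=curvature,
        threshold=threshold,
        title="language - string (group level, left hemisphere)",
    )
    show()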


.. GENERATED FROM PYTHON SOURCE LINES 196-198

The group-level report can also be opened in a separate browser window
with ``report_slm.open_in_browser()``.

.. GENERATED FROM PYTHON SOURCE LINES 200-201

Save it as an HTML file.

.. GENERATED FROM PYTHON SOURCE LINES 201-202

.. code-block:: Python

    report_slm.save_as_html(output_dir / "report_slm.html")

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (1 minutes 46.689 seconds)

**Estimated memory usage:** 920 MB

.. _sphx_glr_download_auto_examples_07_advanced_plot_surface_bids_analysis.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn/0.12.0?urlpath=lab/tree/notebooks/auto_examples/07_advanced/plot_surface_bids_analysis.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_surface_bids_analysis.ipynb <plot_surface_bids_analysis.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_surface_bids_analysis.py <plot_surface_bids_analysis.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_surface_bids_analysis.zip <plot_surface_bids_analysis.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_