.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/07_advanced/plot_bids_analysis.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_07_advanced_plot_bids_analysis.py>`
        to download the full example code, or to run this example in your
        browser via Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_07_advanced_plot_bids_analysis.py:

BIDS dataset first and second level analysis
============================================

Full step-by-step example of fitting a :term:`GLM` to perform a first and
second level analysis in a :term:`BIDS` dataset and visualizing the results.
Details about the :term:`BIDS` standard are available at
`https://bids.neuroimaging.io/ <https://bids.neuroimaging.io/>`_.

More specifically:

1. Download an :term:`fMRI` :term:`BIDS` dataset with two language conditions to contrast.
2. Extract first level model objects automatically from the :term:`BIDS` dataset.
3. Fit a second level model on the fitted first level models.

Notice that in this case the preprocessed :term:`bold` images were already
normalized to the same :term:`MNI` space.

.. GENERATED FROM PYTHON SOURCE LINES 21-24

.. code-block:: Python

    from nilearn import plotting

.. GENERATED FROM PYTHON SOURCE LINES 25-33

Fetch example :term:`BIDS` dataset
----------------------------------

We download a simplified :term:`BIDS` dataset made available for illustrative
purposes. It contains only the information needed to run a statistical
analysis with Nilearn. The raw data subject folders contain only ``bold.json``
and ``events.tsv`` files, while the derivatives folder includes the
preprocessed ``preproc.nii`` and ``confounds.tsv`` files.

.. GENERATED FROM PYTHON SOURCE LINES 33-37

.. code-block:: Python

    from nilearn.datasets import fetch_language_localizer_demo_dataset

    data = fetch_language_localizer_demo_dataset(legacy_output=False)

.. rst-class:: sphx-glr-script-out
.. code-block:: none

    [fetch_language_localizer_demo_dataset] Dataset created in /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset
    [fetch_language_localizer_demo_dataset] Downloading data from https://osf.io/3dj2a/download ...
    [fetch_language_localizer_demo_dataset] ...done. (35 seconds, 0 min)
    [fetch_language_localizer_demo_dataset] Extracting data from /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/fMRI-language-localizer-demo-dataset.zip...
    [fetch_language_localizer_demo_dataset] .. done.
.. GENERATED FROM PYTHON SOURCE LINES 38-39

Here is the location of the dataset on disk.

.. GENERATED FROM PYTHON SOURCE LINES 39-41

.. code-block:: Python

    print(data.data_dir)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset

.. GENERATED FROM PYTHON SOURCE LINES 42-53

Automatically obtain FirstLevelModel objects and fit arguments
--------------------------------------------------------------

From the dataset directory we automatically obtain FirstLevelModel objects
with their ``subject_id`` filled in from the :term:`BIDS` dataset. Moreover,
for each model we obtain a dictionary with ``run_imgs``, ``events`` and
confound regressors, since a ``confounds.tsv`` file is available in this
:term:`BIDS` dataset. To get the first level models we only have to specify
the dataset directory and the ``task_label`` as it appears in the file names.

.. GENERATED FROM PYTHON SOURCE LINES 53-70

.. code-block:: Python

    from nilearn.glm.first_level import first_level_from_bids

    task_label = "languagelocalizer"
    (
        models,
        models_run_imgs,
        models_events,
        models_confounds,
    ) = first_level_from_bids(
        data.data_dir,
        task_label,
        img_filters=[("desc", "preproc")],
        n_jobs=2,
        space_label="",
        smoothing_fwhm=8,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/nilearn/nilearn/examples/07_advanced/plot_bids_analysis.py:61: UserWarning: 'StartTime' not found in file /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json.
      ) = first_level_from_bids(
    /home/runner/work/nilearn/nilearn/examples/07_advanced/plot_bids_analysis.py:61: UserWarning: 'slice_time_ref' not provided and cannot be inferred from metadata. It will be assumed that the slice timing reference is 0.0 percent of the repetition time. If it is not the case it will need to be set manually in the generated list of models.
      ) = first_level_from_bids(
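As a side note on how the ``task_label`` and ``img_filters`` arguments relate
to file names: :term:`BIDS` file names are underscore-separated lists of
``key-value`` entities followed by a suffix. The helper below is a purely
illustrative sketch (it is not part of nilearn) showing how such a name
decomposes into the entities that the query filters on.

```python
# Illustrative sketch (not a nilearn function): decompose a BIDS-style file
# name into its key-value entities plus the trailing suffix.
def parse_bids_entities(filename: str) -> dict:
    """Split a BIDS file name into an entity dictionary plus its suffix."""
    stem = filename.split(".")[0]      # drop .nii.gz / .tsv extensions
    *pairs, suffix = stem.split("_")   # last chunk is the suffix (bold, events, ...)
    entities = dict(pair.split("-", 1) for pair in pairs)
    entities["suffix"] = suffix
    return entities


print(parse_bids_entities("sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz"))
# → {'sub': '01', 'task': 'languagelocalizer', 'desc': 'preproc', 'suffix': 'bold'}
```

The ``task-languagelocalizer`` and ``desc-preproc`` entities are exactly what
``task_label`` and ``img_filters=[("desc", "preproc")]`` select on.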
.. GENERATED FROM PYTHON SOURCE LINES 71-75

Quick sanity check on fit arguments
-----------------------------------

Additional checks or information extraction from preprocessed data can be
made here.

.. GENERATED FROM PYTHON SOURCE LINES 77-78

We expect only one ``run_img`` per subject.

.. GENERATED FROM PYTHON SOURCE LINES 78-82

.. code-block:: Python

    from pathlib import Path

    print([Path(run).name for run in models_run_imgs[0]])

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ['sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz']

.. GENERATED FROM PYTHON SOURCE LINES 83-86

The only confounds stored are regressors obtained from motion correction, as
we can verify from the column headers of the confounds table corresponding to
the only ``run_img`` present.

.. GENERATED FROM PYTHON SOURCE LINES 86-88

.. code-block:: Python

    print(models_confounds[0][0].columns)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Index(['RotX', 'RotY', 'RotZ', 'X', 'Y', 'Z'], dtype='object')

.. GENERATED FROM PYTHON SOURCE LINES 89-92

During this acquisition the subject read blocks of sentences and consonant
strings, so these are our only two conditions in events. We verify there are
12 blocks for each condition.

.. GENERATED FROM PYTHON SOURCE LINES 92-94

.. code-block:: Python

    print(models_events[0][0]["trial_type"].value_counts())

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    trial_type
    language    12
    string      12
    Name: count, dtype: int64

.. GENERATED FROM PYTHON SOURCE LINES 95-102

First level model estimation
----------------------------

Now we simply fit each first level model and, for each subject, plot the
:term:`contrast` that reveals the language network (language - string).
Notice that we can define a :term:`contrast` using the names of the
conditions specified in the events dataframe. Sum, subtraction and scalar
multiplication are allowed.

.. GENERATED FROM PYTHON SOURCE LINES 104-105

Set the threshold as the z-value with an uncorrected p-value of 0.001.
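This p-value to z-threshold conversion can be sanity-checked on its own:
``norm.isf`` (the inverse survival function) maps an uncorrected one-sided
p-value to the corresponding z-value, and ``norm.sf`` maps back.

```python
# Sanity check of the p-to-z conversion used in this example:
# isf (inverse survival function) and sf (survival function) are inverses.
from scipy.stats import norm

p001_unc = norm.isf(0.001)
print(round(p001_unc, 4))   # → 3.0902 (z threshold for one-sided p < 0.001)
print(norm.sf(p001_unc))    # recovers the p-value, 0.001
```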
.. GENERATED FROM PYTHON SOURCE LINES 105-109

.. code-block:: Python

    from scipy.stats import norm

    p001_unc = norm.isf(0.001)

.. GENERATED FROM PYTHON SOURCE LINES 110-111

Prepare a figure for the concurrent plot of individual maps.

.. GENERATED FROM PYTHON SOURCE LINES 111-141

.. code-block:: Python

    from math import ceil

    import matplotlib.pyplot as plt
    import numpy as np

    ncols = 2
    nrows = ceil(len(models) / ncols)

    fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 12))
    axes = np.atleast_2d(axes)

    model_and_args = zip(models, models_run_imgs, models_events, models_confounds)
    for midx, (model, imgs, events, confounds) in enumerate(model_and_args):
        # fit the GLM
        model.fit(imgs, events, confounds)
        # compute the contrast of interest
        zmap = model.compute_contrast("language-string")
        plotting.plot_glass_brain(
            zmap,
            threshold=p001_unc,
            title=f"sub-{model.subject_label}",
            axes=axes[int(midx / ncols), int(midx % ncols)],
            plot_abs=False,
            colorbar=True,
            display_mode="x",
            vmin=-12,
            vmax=12,
        )
    fig.suptitle("subjects z_map language network (unc p<0.001)")
    plotting.show()

.. image-sg:: /auto_examples/07_advanced/images/sphx_glr_plot_bids_analysis_001.png
   :alt: subjects z_map language network (unc p<0.001)
   :srcset: /auto_examples/07_advanced/images/sphx_glr_plot_bids_analysis_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 142-148

Second level model estimation
-----------------------------

We just have to provide the list of fitted FirstLevelModel objects to the
SecondLevelModel object for estimation. We can do this because all subjects
share a similar design matrix (the same variables reflected in column names).

.. GENERATED FROM PYTHON SOURCE LINES 148-152

.. code-block:: Python

    from nilearn.glm.second_level import SecondLevelModel

    second_level_input = models

.. GENERATED FROM PYTHON SOURCE LINES 153-154

Note that we apply a smoothing of 8 mm.

.. GENERATED FROM PYTHON SOURCE LINES 154-157
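The contrast string ``"language-string"`` passed to ``compute_contrast``
above is resolved against the design-matrix column names. Conceptually, each
named condition contributes its weight to a vector over the design columns.
The sketch below illustrates this with plain NumPy; the column names other
than the two conditions are hypothetical placeholders, not the actual
columns of the fitted models.

```python
import numpy as np

# Hypothetical design-matrix columns (conditions first, then nuisance
# regressors); the real columns come from the fitted FirstLevelModel.
columns = ["language", "string", "drift_1", "RotX"]


def contrast_vector(weights: dict, columns: list) -> np.ndarray:
    """Expand a {condition: weight} mapping into a vector over design columns."""
    return np.array([weights.get(col, 0.0) for col in columns])


# "language - string": +1 on language, -1 on string, 0 elsewhere
print(contrast_vector({"language": 1.0, "string": -1.0}, columns))
# Scalar multiplication and sums are equally valid, e.g. the mean of both:
print(contrast_vector({"language": 0.5, "string": 0.5}, columns))
```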
.. code-block:: Python

    second_level_model = SecondLevelModel(smoothing_fwhm=8.0, n_jobs=2)
    second_level_model = second_level_model.fit(second_level_input)

.. GENERATED FROM PYTHON SOURCE LINES 158-162

Computing contrasts at the second level is as simple as at the first level.
Since we are not providing confounds, we are performing a one-sample test at
the second level with the images determined by the specified first level
contrast.

.. GENERATED FROM PYTHON SOURCE LINES 162-166

.. code-block:: Python

    zmap = second_level_model.compute_contrast(
        first_level_contrast="language-string"
    )

.. GENERATED FROM PYTHON SOURCE LINES 167-169

The group level :term:`contrast` reveals a left lateralized fronto-temporal
language network.

.. GENERATED FROM PYTHON SOURCE LINES 169-179

.. code-block:: Python

    plotting.plot_glass_brain(
        zmap,
        threshold=p001_unc,
        title="Group language network (unc p<0.001)",
        plot_abs=False,
        display_mode="x",
        figure=plt.figure(figsize=(5, 4)),
    )
    plotting.show()

.. image-sg:: /auto_examples/07_advanced/images/sphx_glr_plot_bids_analysis_002.png
   :alt: plot bids analysis
   :srcset: /auto_examples/07_advanced/images/sphx_glr_plot_bids_analysis_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 180-181

Generate and save the GLM report at the group level.

.. GENERATED FROM PYTHON SOURCE LINES 181-188

.. code-block:: Python

    report_slm = second_level_model.generate_report(
        contrasts="intercept",
        first_level_contrast="language-string",
        threshold=p001_unc,
        display_mode="x",
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/reporting/utils.py:31: UserWarning: constrained_layout not applied. At least one axes collapsed to zero width or height.
      fig.savefig(

.. GENERATED FROM PYTHON SOURCE LINES 189-190

View the GLM report at the group level.

.. GENERATED FROM PYTHON SOURCE LINES 190-192

.. code-block:: Python

    report_slm
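The one-sample test mentioned above can be sketched voxelwise with plain
NumPy: stack the subjects' first level effect maps, then divide the
cross-subject mean by the standard error of the mean at every voxel. This is
a conceptual illustration with random data and assumed array shapes, not
nilearn's actual implementation.

```python
import numpy as np

# Illustrative one-sample t-test across subjects' effect maps, voxelwise.
# Shapes and data are assumed for the sketch: 10 subjects, 1000 voxels.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 10, 1000
effect_maps = rng.normal(loc=0.5, scale=1.0, size=(n_subjects, n_voxels))

mean = effect_maps.mean(axis=0)
# ddof=1: sample standard deviation across subjects
sem = effect_maps.std(axis=0, ddof=1) / np.sqrt(n_subjects)
t_map = mean / sem  # one t-value per voxel

print(t_map.shape)  # → (1000,)
```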

The rendered report reads:

.. rst-class:: sphx-glr-script-out

**Statistical Report - Second Level Model.** Implements the
:term:`General Linear Model<GLM>` for multiple subject :term:`fMRI` data.

**Description.** Data were analyzed using Nilearn (version 0.12.0;
RRID:SCR_001362). At the group level, a mass univariate analysis was
performed with a linear regression at each voxel of the brain. Input images
were smoothed with a Gaussian kernel (full-width at half maximum = 8.0 mm).
The following contrasts were computed:

* intercept

**Model details.**

=================== =====
Parameter           Value
=================== =====
smoothing_fwhm (mm) 8.0
=================== =====

**Mask.** The mask includes 23640 voxels (23.1 %) of the image.

**Statistical Maps: intercept.** Stat map plot for the contrast
``intercept``, with cluster table (height control: fpr, α = 0.001; computed
threshold = 3.291; cluster size threshold = 0 voxels; minimum distance =
8.0 mm):

========== ====== ====== ====== ========= ==================
Cluster ID X      Y      Z      Peak Stat Cluster Size (mm3)
========== ====== ====== ====== ========= ==================
1          -57.5  -48.5  13.5   4.43      5103
1a         -71.0  -53.0  18.0   3.64
2          -62.0  -8.0   49.5   4.32      455
2a         -53.0  -8.0   45.0   4.17
3          -48.5  -30.5  -22.5  3.98      364
4          46.0   5.5    -27.0  3.91      2369
4a         50.5   23.5   -27.0  3.80
4b         55.0   14.5   -18.0  3.79
5          -71.0  -17.0  -4.5   3.89      6287
5a         -53.0  -8.0   -9.0   3.82
5b         -66.5  1.0    -4.5   3.82
5c         -48.5  19.0   -18.0  3.69
6          -39.5  -3.5   -40.5  3.60      91
7          50.5   -12.5  -9.0   3.51      364
8          -48.5  14.5   18.0   3.43      91
9          -53.0  19.0   22.5   3.34      91
10         -75.5  -35.0  4.5    3.32      91
11         55.0   10.0   -13.5  3.30      91
========== ====== ====== ====== ========= ==================

**About.**

* Date preprocessed:

.. GENERATED FROM PYTHON SOURCE LINES 193-195

Or view it in a separate browser window with ``report_slm.open_in_browser()``.

.. GENERATED FROM PYTHON SOURCE LINES 197-198

Save the report to disk.

.. GENERATED FROM PYTHON SOURCE LINES 198-201

.. code-block:: Python

    output_dir = Path.cwd() / "results" / "plot_bids_analysis"
    output_dir.mkdir(exist_ok=True, parents=True)
    report_slm.save_as_html(output_dir / "report_slm.html")

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (1 minutes 47.289 seconds)

**Estimated memory usage:** 1306 MB

.. _sphx_glr_download_auto_examples_07_advanced_plot_bids_analysis.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: binder-badge

            .. image:: images/binder_badge_logo.svg
                :target: https://mybinder.org/v2/gh/nilearn/nilearn/0.12.0?urlpath=lab/tree/notebooks/auto_examples/07_advanced/plot_bids_analysis.ipynb
                :alt: Launch binder
                :width: 150 px

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_bids_analysis.ipynb <plot_bids_analysis.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_bids_analysis.py <plot_bids_analysis.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_bids_analysis.zip <plot_bids_analysis.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_