.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/04_glm_first_level/plot_localizer_surface_analysis.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py>`
        to download the full example code or to run this example in your
        browser via Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py:


Example of surface-based first-level analysis
=============================================

A full step-by-step example of fitting a :term:`GLM` to experimental data
sampled on the cortical surface and visualizing the results.

More specifically:

1. A sequence of :term:`fMRI` volumes is loaded.

2. :term:`fMRI` data are projected onto a reference cortical surface
   (the FreeSurfer template, fsaverage).

3. A :term:`GLM` is applied to the dataset
   (effect/covariance, then contrast estimation).

4. The GLM reports are inspected and the results are saved to disk.

The results of the analysis are statistical maps that are defined on the
brain mesh. We display them using Nilearn capabilities.

The projection of :term:`fMRI` data onto a given brain :term:`mesh` requires
that both are initially defined in the same space.

* The functional data should be coregistered to the anatomy from which the
  mesh was obtained.

* Another possibility, used here, is to project the normalized :term:`fMRI`
  data to an :term:`MNI`-coregistered mesh, such as fsaverage.

The advantage of this second approach is that it makes it easy to run
second-level analyses on the surface. On the other hand, it is obviously less
accurate than using a subject-tailored mesh.

.. GENERATED FROM PYTHON SOURCE LINES 39-42

Prepare data and analysis parameters
------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 44-45

Fetch the data.

.. GENERATED FROM PYTHON SOURCE LINES 45-51

.. code-block:: Python

    from nilearn.datasets import fetch_localizer_first_level

    data = fetch_localizer_first_level()
    t_r = data.t_r
    slice_time_ref = data.slice_time_ref

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [fetch_localizer_first_level] Dataset found in /home/runner/nilearn_data/localizer_first_level

.. GENERATED FROM PYTHON SOURCE LINES 52-66

Project the :term:`fMRI` image to the surface
---------------------------------------------

For this we need to get a :term:`mesh` representing the geometry of the
surface. We could use an individual :term:`mesh`, but here we resort to a
standard :term:`mesh`, the so-called fsaverage5 template from the FreeSurfer
software.

We use :class:`~nilearn.surface.SurfaceImage` to create a surface object
instance that contains both the mesh (here we use the one from the fsaverage5
templates) and the BOLD data that we project on the surface.

.. GENERATED FROM PYTHON SOURCE LINES 66-76

.. code-block:: Python

    from nilearn.datasets import load_fsaverage
    from nilearn.surface import SurfaceImage

    fsaverage5 = load_fsaverage()
    surface_image = SurfaceImage.from_volume(
        mesh=fsaverage5["pial"],
        volume_img=data.epi_img,
    )

.. GENERATED FROM PYTHON SOURCE LINES 77-87

Perform first level analysis
----------------------------

We can now simply run a GLM by directly passing our
:class:`~nilearn.surface.SurfaceImage` instance as input to
FirstLevelModel.fit.

Here we use an :term:`HRF` model containing the Glover model and its time
derivative. The drift model is implicitly a cosine basis with a period cutoff
at 128s.

.. GENERATED FROM PYTHON SOURCE LINES 87-97

.. code-block:: Python

    from nilearn.glm.first_level import FirstLevelModel

    glm = FirstLevelModel(
        t_r=t_r,
        slice_time_ref=slice_time_ref,
        hrf_model="glover + derivative",
        minimize_memory=False,
        verbose=1,
    ).fit(run_imgs=surface_image, events=data.events)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [FirstLevelModel.fit] Computing mask
    [FirstLevelModel.fit] Finished fit
    [FirstLevelModel.fit] Computing run 1 out of 1 runs (go take a coffee, a big one).
    [FirstLevelModel.fit] Performing mask computation.
    [FirstLevelModel.fit] Extracting region signals
    [FirstLevelModel.fit] Cleaning extracted signals
    [FirstLevelModel.fit] Masking took 0 seconds.
    [FirstLevelModel.fit] Performing GLM computation.
    [FirstLevelModel.fit] GLM took 0 seconds.
    [FirstLevelModel.fit] Computation of 1 runs done in 0 seconds.

.. GENERATED FROM PYTHON SOURCE LINES 98-105

Estimate contrasts
------------------

Specify the contrasts.

For practical purposes, we first generate an identity matrix whose size is
the number of columns of the design matrix.

.. GENERATED FROM PYTHON SOURCE LINES 105-110

.. code-block:: Python

    import numpy as np

    design_matrix = glm.design_matrices_[0]
    contrast_matrix = np.eye(design_matrix.shape[1])

.. GENERATED FROM PYTHON SOURCE LINES 111-112

At first, we create basic contrasts.

.. GENERATED FROM PYTHON SOURCE LINES 112-117

.. code-block:: Python

    basic_contrasts = {
        column: contrast_matrix[i]
        for i, column in enumerate(design_matrix.columns)
    }

.. GENERATED FROM PYTHON SOURCE LINES 118-120

Next, we add some intermediate contrasts and one :term:`contrast` adding all
conditions with some auditory parts.

.. GENERATED FROM PYTHON SOURCE LINES 120-127

.. code-block:: Python

    basic_contrasts["audio"] = (
        basic_contrasts["audio_left_hand_button_press"]
        + basic_contrasts["audio_right_hand_button_press"]
        + basic_contrasts["audio_computation"]
        + basic_contrasts["sentence_listening"]
    )

.. GENERATED FROM PYTHON SOURCE LINES 128-129

One contrast adding all conditions involving instruction reading:

.. GENERATED FROM PYTHON SOURCE LINES 129-136

.. code-block:: Python

    basic_contrasts["visual"] = (
        basic_contrasts["visual_left_hand_button_press"]
        + basic_contrasts["visual_right_hand_button_press"]
        + basic_contrasts["visual_computation"]
        + basic_contrasts["sentence_reading"]
    )

.. GENERATED FROM PYTHON SOURCE LINES 137-138

One contrast adding all conditions involving computation:

.. GENERATED FROM PYTHON SOURCE LINES 138-143

.. code-block:: Python

    basic_contrasts["computation"] = (
        basic_contrasts["visual_computation"]
        + basic_contrasts["audio_computation"]
    )

.. GENERATED FROM PYTHON SOURCE LINES 144-145

One contrast adding all conditions involving sentences:

.. GENERATED FROM PYTHON SOURCE LINES 145-149

.. code-block:: Python

    basic_contrasts["sentences"] = (
        basic_contrasts["sentence_listening"]
        + basic_contrasts["sentence_reading"]
    )

.. GENERATED FROM PYTHON SOURCE LINES 150-162

Finally, we create a dictionary of more relevant contrasts:

* ``'left - right button press'``: probes motor activity in left versus right
  button presses.

* ``'audio - visual'``: probes the difference of activity between listening
  to some content or reading the same type of content
  (instructions, stories).

* ``'computation - sentences'``: looks at the activity when performing a
  mental computation task versus simply reading sentences.

Of course, we could define other contrasts, but we keep only these three for
simplicity.

.. GENERATED FROM PYTHON SOURCE LINES 162-176

.. code-block:: Python

    contrasts = {
        "(left - right) button press": (
            basic_contrasts["audio_left_hand_button_press"]
            - basic_contrasts["audio_right_hand_button_press"]
            + basic_contrasts["visual_left_hand_button_press"]
            - basic_contrasts["visual_right_hand_button_press"]
        ),
        "audio - visual": basic_contrasts["audio"] - basic_contrasts["visual"],
        "computation - sentences": (
            basic_contrasts["computation"] - basic_contrasts["sentences"]
        ),
    }

.. GENERATED FROM PYTHON SOURCE LINES 177-188

Let's estimate the t-contrasts, and extract a table of clusters that survive
our thresholds.
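It is worth noting where the numeric threshold comes from: an uncorrected
p < 0.001 is converted to a z value with the inverse survival function of the
standard normal distribution, which is what the script does with
``scipy.stats.norm.isf``. A minimal stand-alone check:

.. code-block:: Python

    from scipy.stats import norm

    # isf(p) returns the z value that a standard normal variable exceeds
    # with probability p (the inverse survival function).
    p_val = 0.001
    z_threshold = norm.isf(p_val)
    print(round(z_threshold, 2))  # 3.09

This is the "Threshold Z 3.09" value that appears in the cluster tables of
the generated report.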
We use the same threshold (uncorrected p < 0.001) for all contrasts.

We plot each contrast map on the inflated fsaverage mesh, together with a
suitable background to give an impression of the cortex folding.

.. GENERATED FROM PYTHON SOURCE LINES 188-231

.. code-block:: Python

    from scipy.stats import norm

    from nilearn.datasets import load_fsaverage_data
    from nilearn.plotting import plot_surf_stat_map, show
    from nilearn.reporting import get_clusters_table

    p_val = 0.001
    threshold = norm.isf(p_val)
    cluster_threshold = 20
    two_sided = True

    fsaverage_data = load_fsaverage_data(data_type="sulcal")

    results = {}
    for contrast_id, contrast_val in contrasts.items():
        results[contrast_id] = glm.compute_contrast(contrast_val, stat_type="t")

        table = get_clusters_table(
            results[contrast_id],
            stat_threshold=threshold,
            cluster_threshold=cluster_threshold,
            two_sided=two_sided,
        )
        print(f"\n{contrast_id=}")
        print(table)

    for contrast_id, z_score in results.items():
        hemi = "left"
        if contrast_id == "(left - right) button press":
            hemi = "both"
        plot_surf_stat_map(
            surf_mesh=fsaverage5["inflated"],
            stat_map=z_score,
            hemi=hemi,
            title=contrast_id,
            threshold=threshold,
            bg_map=fsaverage_data,
        )

    show()

.. rst-class:: sphx-glr-horizontal

    * .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_001.png
          :alt: (left - right) button press
          :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_001.png
          :class: sphx-glr-multi-img

    * .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_002.png
          :alt: audio - visual
          :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_002.png
          :class: sphx-glr-multi-img

    * .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_003.png
          :alt: computation - sentences
          :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_003.png
          :class: sphx-glr-multi-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [FirstLevelModel.compute_contrast] Computing image from signals

    contrast_id='(left - right) button press'
       Cluster ID Hemisphere  Peak Stat  Cluster Size (vertices)
    0           1      right   7.363035                      133
    1           2       left  -3.095616                      223
    2           3       left  -3.090853                       20
    3           4       left  -3.157886                       21
    [FirstLevelModel.compute_contrast] Computing image from signals

    contrast_id='audio - visual'
       Cluster ID Hemisphere  Peak Stat  Cluster Size (vertices)
    0           1       left   7.589528                      188
    1           2      right   7.194872                      149
    2           3      right   6.364164                       75
    3           4      right   4.300838                       20
    4           5       left  -3.102627                      239
    5           6       left  -3.100104                       61
    6           7      right  -3.098115                      136
    7           8      right  -3.200971                       27
    8           9      right  -3.103647                      270
    9          10      right  -3.096202                       52
    [FirstLevelModel.compute_contrast] Computing image from signals

    contrast_id='computation - sentences'
       Cluster ID Hemisphere  Peak Stat  Cluster Size (vertices)
    0           1      right   3.803730                       35
    1           2       left  -3.100574                       47
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:229: UserWarning: You are using the 'agg' matplotlib backend that is non-interactive. No figure will be plotted when calling matplotlib.pyplot.show() or nilearn.plotting.show(). You can fix this by installing a different backend: for example via pip install PyQt6

.. GENERATED FROM PYTHON SOURCE LINES 232-240

Cluster-level inference
-----------------------

We can also perform cluster-level inference (aka "All resolution Inference")
for a given contrast. This gives a high-probability lower bound on the
proportion of true discoveries in each cluster.

.. GENERATED FROM PYTHON SOURCE LINES 240-263

.. code-block:: Python

    from nilearn.glm import cluster_level_inference
    from nilearn.surface.surface import get_data as get_surf_data

    proportion_true_discoveries_img = cluster_level_inference(
        results["audio - visual"], threshold=[2.5, 3.5, 4.5], alpha=0.05
    )

    data = get_surf_data(proportion_true_discoveries_img)
    unique_vals = np.unique(data.ravel())
    print(unique_vals)

    plot_surf_stat_map(
        surf_mesh=fsaverage5["inflated"],
        stat_map=proportion_true_discoveries_img,
        hemi="left",
        cmap="inferno",
        title="audio - visual, proportion true positives",
        bg_map=fsaverage_data,
        avg_method="max",
    )

    show()

.. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_004.png
    :alt: audio - visual, proportion true positives
    :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_004.png
    :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [0.         0.16666667 0.2        0.25       0.33333333 0.36363636
     0.4        0.42857143 0.46153846 0.5        0.55555556 0.57142857
     0.58333333 0.6        0.61538462 0.625      0.65217391 0.65384615
     0.66666667 0.7        0.71428571 0.75       0.77777778 0.8
     0.83333333 0.875      0.9        0.90909091 1.        ]
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:261: UserWarning: You are using the 'agg' matplotlib backend that is non-interactive. No figure will be plotted when calling matplotlib.pyplot.show() or nilearn.plotting.show(). You can fix this by installing a different backend: for example via pip install PyQt6

.. GENERATED FROM PYTHON SOURCE LINES 264-267

Generate a report for the GLM
-----------------------------

.. GENERATED FROM PYTHON SOURCE LINES 267-276

.. code-block:: Python

    report = glm.generate_report(
        contrasts,
        threshold=threshold,
        bg_img=load_fsaverage_data(data_type="sulcal", mesh_type="inflated"),
        height_control=None,
        cluster_threshold=cluster_threshold,
        two_sided=two_sided,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [FirstLevelModel.generate_report] Computing image from signals
    [FirstLevelModel.generate_report] Computing image from signals
    [FirstLevelModel.generate_report] Computing image from signals
    [FirstLevelModel.generate_report] Generating contrast-level figures...
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:267: RuntimeWarning: Meshes are not identical but have compatible number of vertices.
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:267: RuntimeWarning: Meshes are not identical but have compatible number of vertices.
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:267: RuntimeWarning: Meshes are not identical but have compatible number of vertices.
    [FirstLevelModel.generate_report] Generating design matrices figures...
    [FirstLevelModel.generate_report] Generating contrast matrices figures...

.. GENERATED FROM PYTHON SOURCE LINES 277-279

.. include:: ../../../examples/report_note.rst

.. GENERATED FROM PYTHON SOURCE LINES 280-281

.. code-block:: Python

    report

.. raw:: html

Statistical Report - First Level Model Implement the General Linear Model for single run :term:`fMRI` data.

Description

Data were analyzed using Nilearn (version 0.13.1; RRID:SCR_001362).

At the subject level, a mass univariate analysis was performed with a linear regression at each voxel of the brain, using generalized least squares with a global ar1 noise model to account for temporal auto-correlation and a cosine drift model (high pass filter=0.01 Hz).

Regressors were entered into run-specific design matrices and onsets were convolved with a glover + derivative canonical hemodynamic response function for the following conditions:

  • sentence_reading
  • audio_right_hand_button_press
  • vertical_checkerboard
  • audio_computation
  • sentence_listening
  • audio_left_hand_button_press
  • visual_right_hand_button_press
  • visual_computation
  • visual_left_hand_button_press
  • horizontal_checkerboard

Model details

First Level Model

Mask

Mask image

The mask includes 20484 voxels (100.0 %) of the image.

Statistical Maps

(left - right) button press

Stat map plot for the contrast: (left - right) button press
Cluster Table
Height control None
Threshold Z 3.09
Cluster ID Hemisphere Peak Stat Cluster Size (vertices)
1 right 7.36 133
2 left -3.10 223
3 left -3.09 20
4 left -3.16 21

audio - visual

Stat map plot for the contrast: audio - visual
Cluster Table
Height control None
Threshold Z 3.09
Cluster ID Hemisphere Peak Stat Cluster Size (vertices)
1 left 7.59 188
2 right 7.19 149
3 right 6.36 75
4 right 4.30 20
5 left -3.10 239
6 left -3.10 61
7 right -3.10 136
8 right -3.20 27
9 right -3.10 270
10 right -3.10 52

computation - sentences

Stat map plot for the contrast: computation - sentences
Cluster Table
Height control None
Threshold Z 3.09
Cluster ID Hemisphere Peak Stat Cluster Size (vertices)
1 right 3.8 35
2 left -3.1 47

About

  • Date preprocessed:


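To make the cluster-level inference maps shown earlier concrete: a vertex
value of, say, 0.75 means that, with probability at least 1 - alpha, at
least 75% of the vertices in that vertex's cluster are truly active. The
implied count of true discoveries is simple arithmetic; the numbers below are
illustrative, not taken from this analysis:

.. code-block:: Python

    import math

    # Illustrative values (not from the analysis above): a cluster of
    # 40 vertices whose proportion-of-true-discoveries value is 0.75.
    cluster_size = 40
    proportion_true = 0.75

    # Lower bound on the number of truly active vertices in that cluster:
    # at least 75% of 40 vertices must be true activations.
    min_true_discoveries = math.ceil(cluster_size * proportion_true)
    print(min_true_discoveries)  # 30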
.. rst-class:: sphx-glr-timing

    **Total running time of the script:** (0 minutes 55.986 seconds)

**Estimated memory usage:** 251 MB

.. _sphx_glr_download_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: binder-badge

            .. image:: images/binder_badge_logo.svg
                :target: https://mybinder.org/v2/gh/nilearn/nilearn/0.13.1?urlpath=lab/tree/notebooks/auto_examples/04_glm_first_level/plot_localizer_surface_analysis.ipynb
                :alt: Launch binder
                :width: 150 px

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_localizer_surface_analysis.ipynb <plot_localizer_surface_analysis.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_localizer_surface_analysis.py <plot_localizer_surface_analysis.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_localizer_surface_analysis.zip <plot_localizer_surface_analysis.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_