.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/04_glm_first_level/plot_localizer_surface_analysis.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py>`
        to download the full example code, or to run this example in your
        browser via Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py:

Example of surface-based first-level analysis
=============================================

A full step-by-step example of fitting a :term:`GLM` to experimental data
sampled on the cortical surface and visualizing the results.
More specifically:

1. A sequence of :term:`fMRI` volumes is loaded.
2. :term:`fMRI` data are projected onto a reference cortical surface
   (the FreeSurfer template, fsaverage).
3. A :term:`GLM` is applied to the dataset
   (effect/covariance estimation, then contrast estimation).

The results of the analysis are statistical maps defined on the brain mesh.
We display them using Nilearn's plotting capabilities.

The projection of :term:`fMRI` data onto a given brain :term:`mesh` requires
that both are initially defined in the same space.

* The functional data should be coregistered to the anatomy from which the
  mesh was obtained.
* Another possibility, used here, is to project the normalized :term:`fMRI`
  data onto an :term:`MNI`-coregistered mesh, such as fsaverage.

The advantage of this second approach is that it makes it easy to run
second-level analyses on the surface. On the other hand, it is obviously
less accurate than using a subject-tailored mesh.

.. GENERATED FROM PYTHON SOURCE LINES 38-42

Prepare data and analysis parameters
------------------------------------

Prepare the timing parameters.

.. GENERATED FROM PYTHON SOURCE LINES 42-45

.. code-block:: Python

    t_r = 2.4
    slice_time_ref = 0.5

.. GENERATED FROM PYTHON SOURCE LINES 46-47

Fetch the data.

.. GENERATED FROM PYTHON SOURCE LINES 47-51

.. code-block:: Python

    from nilearn.datasets import fetch_localizer_first_level

    data = fetch_localizer_first_level()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [fetch_localizer_first_level] Dataset found in /home/runner/nilearn_data/localizer_first_level

.. GENERATED FROM PYTHON SOURCE LINES 52-66

Project the :term:`fMRI` image to the surface
---------------------------------------------

For this we need a :term:`mesh` representing the geometry of the surface.
We could use an individual :term:`mesh`, but here we resort to a standard
:term:`mesh`, the so-called fsaverage5 template from the FreeSurfer software.

We use the :class:`~nilearn.surface.SurfaceImage` class to create a surface
object instance that contains both the mesh (here we use the one from the
fsaverage5 template) and the BOLD data that we project onto the surface.

.. GENERATED FROM PYTHON SOURCE LINES 66-76

.. code-block:: Python

    from nilearn.datasets import load_fsaverage
    from nilearn.surface import SurfaceImage

    fsaverage5 = load_fsaverage()
    surface_image = SurfaceImage.from_volume(
        mesh=fsaverage5["pial"],
        volume_img=data.epi_img,
    )

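As a quick sanity check of the projection (this snippet is not part of the
original example), we can inspect the resulting object: assuming the API of
recent Nilearn versions, a :class:`~nilearn.surface.SurfaceImage` keeps one
data array per hemisphere, with one row per vertex and one column per scan.

.. code-block:: Python

    # Minimal sketch, assuming the objects created above.
    # fsaverage5 has 10242 vertices per hemisphere.
    print(surface_image.shape)  # (number of vertices, number of scans)
    for hemi, array in surface_image.data.parts.items():
        print(f"{hemi}: {array.shape}")
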
.. GENERATED FROM PYTHON SOURCE LINES 77-87

Perform first level analysis
----------------------------

We can now run a GLM by directly passing our
:class:`~nilearn.surface.SurfaceImage` instance as input to
:meth:`~nilearn.glm.first_level.FirstLevelModel.fit`.
Here we use an :term:`HRF` model containing the Glover model and its time
derivative. The drift model is implicitly a cosine basis with a period
cutoff at 128 s.

.. GENERATED FROM PYTHON SOURCE LINES 87-96

.. code-block:: Python

    from nilearn.glm.first_level import FirstLevelModel

    glm = FirstLevelModel(
        t_r=t_r,
        slice_time_ref=slice_time_ref,
        hrf_model="glover + derivative",
        minimize_memory=False,
    ).fit(run_imgs=surface_image, events=data.events)

.. GENERATED FROM PYTHON SOURCE LINES 97-104

Estimate contrasts
------------------

Specify the contrasts.

For practical purposes, we first generate an identity matrix whose size is
the number of columns of the design matrix.

.. GENERATED FROM PYTHON SOURCE LINES 104-109

.. code-block:: Python

    import numpy as np

    design_matrix = glm.design_matrices_[0]
    contrast_matrix = np.eye(design_matrix.shape[1])

.. GENERATED FROM PYTHON SOURCE LINES 110-111

First, we create basic contrasts.

.. GENERATED FROM PYTHON SOURCE LINES 111-116

.. code-block:: Python

    basic_contrasts = {
        column: contrast_matrix[i]
        for i, column in enumerate(design_matrix.columns)
    }

.. GENERATED FROM PYTHON SOURCE LINES 117-119

Next, we add some intermediate contrasts, starting with one :term:`contrast`
summing all conditions with an auditory component.

.. GENERATED FROM PYTHON SOURCE LINES 119-145

.. code-block:: Python

    basic_contrasts["audio"] = (
        basic_contrasts["audio_left_hand_button_press"]
        + basic_contrasts["audio_right_hand_button_press"]
        + basic_contrasts["audio_computation"]
        + basic_contrasts["sentence_listening"]
    )

    # one contrast summing all conditions involving instruction reading
    basic_contrasts["visual"] = (
        basic_contrasts["visual_left_hand_button_press"]
        + basic_contrasts["visual_right_hand_button_press"]
        + basic_contrasts["visual_computation"]
        + basic_contrasts["sentence_reading"]
    )

    # one contrast summing all conditions involving computation
    basic_contrasts["computation"] = (
        basic_contrasts["visual_computation"]
        + basic_contrasts["audio_computation"]
    )

    # one contrast summing all conditions involving sentences
    basic_contrasts["sentences"] = (
        basic_contrasts["sentence_listening"]
        + basic_contrasts["sentence_reading"]
    )

.. GENERATED FROM PYTHON SOURCE LINES 146-158

Finally, we create a dictionary of more relevant contrasts:

* ``'left - right button press'``: probes motor activity in left versus
  right button presses.
* ``'audio - visual'``: probes the difference of activity between listening
  to some content and reading the same type of content
  (instructions, stories).
* ``'computation - sentences'``: looks at the activity when performing a
  mental computation task versus simply reading sentences.

Of course, we could define other contrasts, but for simplicity we keep only
these three.

.. GENERATED FROM PYTHON SOURCE LINES 158-172

.. code-block:: Python

    contrasts = {
        "(left - right) button press": (
            basic_contrasts["audio_left_hand_button_press"]
            - basic_contrasts["audio_right_hand_button_press"]
            + basic_contrasts["visual_left_hand_button_press"]
            - basic_contrasts["visual_right_hand_button_press"]
        ),
        "audio - visual": basic_contrasts["audio"] - basic_contrasts["visual"],
        "computation - sentences": (
            basic_contrasts["computation"] - basic_contrasts["sentences"]
        ),
    }

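Before estimating anything, it can help to check that each contrast vector
picks out the intended columns of the design matrix. A minimal sketch, not
part of the original example, using
:func:`~nilearn.plotting.plot_contrast_matrix`:

.. code-block:: Python

    from nilearn.plotting import plot_contrast_matrix, show

    # Display each contrast vector against the design matrix columns.
    for contrast_id, contrast_val in contrasts.items():
        plot_contrast_matrix(contrast_val, design_matrix=design_matrix)
    show()
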
.. GENERATED FROM PYTHON SOURCE LINES 173-174

Let's estimate the contrasts by iterating over them.

.. GENERATED FROM PYTHON SOURCE LINES 174-205

.. code-block:: Python

    from nilearn.datasets import load_fsaverage_data
    from nilearn.plotting import plot_surf_stat_map, show

    # let's make sure we use the same threshold
    threshold = 3.0

    fsaverage_data = load_fsaverage_data(data_type="sulcal")

    for contrast_id, contrast_val in contrasts.items():
        # compute contrast-related statistics
        z_score = glm.compute_contrast(contrast_val, stat_type="t")

        hemi = "left"
        if contrast_id == "(left - right) button press":
            hemi = "both"

        # we plot it on the surface, on the inflated fsaverage mesh,
        # together with a suitable background to give an impression
        # of the cortex folding.
        plot_surf_stat_map(
            surf_mesh=fsaverage5["inflated"],
            stat_map=z_score,
            hemi=hemi,
            title=contrast_id,
            threshold=threshold,
            bg_map=fsaverage_data,
            darkness=None,
        )
        show()

.. rst-class:: sphx-glr-horizontal

    * .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_001.png
          :alt: (left - right) button press
          :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_001.png
          :class: sphx-glr-multi-img

    * .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_002.png
          :alt: audio - visual
          :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_002.png
          :class: sphx-glr-multi-img

    * .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_003.png
          :alt: computation - sentences
          :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_003.png
          :class: sphx-glr-multi-img

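Beyond the plots, we can extract a quick numerical summary from a surface
map with plain NumPy (a minimal sketch, not part of the original example;
it assumes the ``data.parts`` per-hemisphere layout of recent Nilearn
versions):

.. code-block:: Python

    # Report the peak |z| vertex per hemisphere for one contrast.
    z_map = glm.compute_contrast(contrasts["audio - visual"], stat_type="t")
    for hemi, values in z_map.data.parts.items():
        values = np.asarray(values).ravel()
        peak = int(np.abs(values).argmax())
        print(f"{hemi}: peak |z| = {abs(values[peak]):.2f} at vertex {peak}")
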
.. GENERATED FROM PYTHON SOURCE LINES 206-207

We can also save the outputs to disk, in a BIDS-like layout, together with
an HTML report.

.. GENERATED FROM PYTHON SOURCE LINES 207-231

.. code-block:: Python

    from pathlib import Path

    from nilearn.interfaces.bids import save_glm_to_bids

    output_dir = Path.cwd() / "results" / "plot_localizer_surface_analysis"
    output_dir.mkdir(exist_ok=True, parents=True)

    save_glm_to_bids(
        glm,
        contrasts=contrasts,
        threshold=threshold,
        bg_img=load_fsaverage_data(data_type="sulcal", mesh_type="inflated"),
        height_control=None,
        prefix="sub-01",
        out_dir=output_dir,
    )

    report = glm.generate_report(
        contrasts,
        threshold=threshold,
        bg_img=load_fsaverage_data(data_type="sulcal", mesh_type="inflated"),
        height_control=None,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:214: UserWarning: Contrast name "(left - right) button press" changed to "leftMinusRightButtonPress"
      save_glm_to_bids(
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:214: UserWarning: Contrast name "audio - visual" changed to "audioMinusVisual"
      save_glm_to_bids(
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:214: UserWarning: Contrast name "computation - sentences" changed to "computationMinusSentences"
      save_glm_to_bids(
    /home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/_utils/plotting.py:154: UserWarning: constrained_layout not applied. At least one axes collapsed to zero width or height.
      contrast_plot.figure.savefig(output["dir"] / contrast_fig)
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:214: UserWarning: Meshes are not identical but have compatible number of vertices.
      save_glm_to_bids(
    /home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/reporting/utils.py:31: UserWarning: constrained_layout not applied. At least one axes collapsed to zero width or height.
      fig.savefig(
    /home/runner/work/nilearn/nilearn/examples/04_glm_first_level/plot_localizer_surface_analysis.py:224: UserWarning: Meshes are not identical but have compatible number of vertices.
      report = glm.generate_report(
    /home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/reporting/utils.py:31: UserWarning: constrained_layout not applied. At least one axes collapsed to zero width or height.
      fig.savefig(

.. GENERATED FROM PYTHON SOURCE LINES 232-233

This report can be viewed in a notebook.

.. GENERATED FROM PYTHON SOURCE LINES 233-235

.. code-block:: Python

    report

.. Output omitted: the rendered HTML "Statistical Report - First Level
   Model". It lists the model details (drift_model: cosine, high_pass:
   0.01 Hz, hrf_model: glover + derivative, noise_model: ar1,
   signal_scaling: 0, slice_time_ref: 0.5, standardize: False,
   t_r: 2.4 s), the ten conditions convolved with the HRF (including the
   horizontal and vertical checkerboard conditions), the mask
   (20484 vertices, 100.0 % of the image), and one thresholded statistical
   map per contrast ("(left - right) button press", "audio - visual",
   "computation - sentences"; height control: None, threshold Z = 3.0).
   Results tables are not available for surface data.


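Before opening the report elsewhere, you may want to check what
``save_glm_to_bids`` actually wrote to ``output_dir`` (a minimal sketch, not
part of the original example; the exact file names depend on the Nilearn
version):

.. code-block:: Python

    # List the BIDS-style outputs written by save_glm_to_bids.
    for path in sorted(output_dir.rglob("*")):
        if path.is_file():
            print(path.relative_to(output_dir))
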
.. GENERATED FROM PYTHON SOURCE LINES 236-238

The report can also be opened in a separate browser window via
``report.open_in_browser()``.

.. GENERATED FROM PYTHON SOURCE LINES 238-241

.. code-block:: Python

    report.save_as_html(output_dir / "report.html")

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 38.423 seconds)

**Estimated memory usage:** 208 MB

.. _sphx_glr_download_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: binder-badge

            .. image:: images/binder_badge_logo.svg
                :target: https://mybinder.org/v2/gh/nilearn/nilearn/0.12.0?urlpath=lab/tree/notebooks/auto_examples/04_glm_first_level/plot_localizer_surface_analysis.ipynb
                :alt: Launch binder
                :width: 150 px

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_localizer_surface_analysis.ipynb <plot_localizer_surface_analysis.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_localizer_surface_analysis.py <plot_localizer_surface_analysis.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_localizer_surface_analysis.zip <plot_localizer_surface_analysis.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_