.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/04_glm_first_level/plot_localizer_surface_analysis.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code or to run this
        example in your browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py:


Example of surface-based first-level analysis
=============================================

A full step-by-step example of fitting a GLM to experimental data sampled
on the cortical surface and visualizing the results.

More specifically:

1. A sequence of fMRI volumes is loaded.
2. fMRI data are projected onto a reference cortical surface (the
   FreeSurfer template, fsaverage).
3. A design matrix describing all the effects related to the data is
   computed.
4. A GLM is applied to the dataset (effect/variance estimation, then
   contrast estimation).

The results of the analysis are statistical maps defined on the brain
mesh. We display them using Nilearn's plotting capabilities.

The projection of fMRI data onto a given brain mesh requires that both
are initially defined in the same space.

* The functional data should be coregistered to the anatomy from which
  the mesh was obtained.
* Another possibility, used here, is to project the normalized fMRI data
  onto an MNI-coregistered mesh, such as fsaverage.

The advantage of this second approach is that it makes it easy to run
second-level analyses on the surface. On the other hand, it is obviously
less accurate than using a subject-tailored mesh.

.. GENERATED FROM PYTHON SOURCE LINES 34-37

Prepare data and analysis parameters
------------------------------------
Prepare the timing parameters.

.. GENERATED FROM PYTHON SOURCE LINES 37-40

.. code-block:: default


    t_r = 2.4
    slice_time_ref = 0.5

.. GENERATED FROM PYTHON SOURCE LINES 41-43

Prepare the data.
First, the volume-based fMRI data.

.. GENERATED FROM PYTHON SOURCE LINES 43-47

.. code-block:: default


    from nilearn.datasets import fetch_localizer_first_level

    data = fetch_localizer_first_level()
    fmri_img = data.epi_img

.. GENERATED FROM PYTHON SOURCE LINES 48-49

Second, the experimental paradigm.

.. GENERATED FROM PYTHON SOURCE LINES 49-53

.. code-block:: default


    import pandas as pd

    events_file = data.events
    events = pd.read_table(events_file)

.. GENERATED FROM PYTHON SOURCE LINES 54-60

Project the fMRI image to the surface
-------------------------------------

For this, we need a mesh representing the geometry of the surface. We
could use an individual mesh, but here we resort to a standard one: the
so-called fsaverage5 template from the FreeSurfer software.

.. GENERATED FROM PYTHON SOURCE LINES 60-64

.. code-block:: default


    from nilearn.datasets import fetch_surf_fsaverage

    fsaverage = fetch_surf_fsaverage()

.. GENERATED FROM PYTHON SOURCE LINES 65-67

The projection function simply takes the fMRI data and the mesh. Note
that these must correspond spatially, as they are both in MNI space.

.. GENERATED FROM PYTHON SOURCE LINES 67-70

.. code-block:: default


    from nilearn import surface

    texture = surface.vol_to_surf(fmri_img, fsaverage.pial_right)
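The projection returns a 2D array with one row per mesh vertex and one
column per scan. As a quick sanity check, we can inspect its shape (a
minimal sketch, not part of the original example; the vertex count of
10242 per hemisphere assumes the default fsaverage5 mesh):

.. code-block:: default


    # Sketch: the projected texture should have shape (n_vertices, n_scans);
    # fsaverage5 has 10242 vertices per hemisphere.
    print(texture.shape)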
.. GENERATED FROM PYTHON SOURCE LINES 71-76

Perform first level analysis
----------------------------
This involves computing the design matrix and fitting the model. We
start by specifying the timing of fMRI frames.

.. GENERATED FROM PYTHON SOURCE LINES 76-81

.. code-block:: default


    import numpy as np

    n_scans = texture.shape[1]
    # sample the time course at the middle of each scan (cf. slice_time_ref)
    frame_times = t_r * (np.arange(n_scans) + .5)

.. GENERATED FROM PYTHON SOURCE LINES 82-86

Create the design matrix. We specify an hrf model containing the Glover
model and its time derivative. The drift model is implicitly a cosine
basis with a period cutoff of 128 s.

.. GENERATED FROM PYTHON SOURCE LINES 86-92

.. code-block:: default


    from nilearn.glm.first_level import make_first_level_design_matrix

    design_matrix = make_first_level_design_matrix(
        frame_times, events=events, hrf_model='glover + derivative')

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    /home/nicolas/GitRepos/nilearn-fork/nilearn/glm/first_level/experimental_paradigm.py:89: UserWarning: Unexpected column `Unnamed: 0.1` in events data will be ignored.
    /home/nicolas/GitRepos/nilearn-fork/nilearn/glm/first_level/experimental_paradigm.py:89: UserWarning: Unexpected column `Unnamed: 0` in events data will be ignored.
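Before fitting, it is worth checking the design visually. Here is a
minimal sketch (not part of the original example) using Nilearn's
design-matrix plotting helper:

.. code-block:: default


    # Sketch: display the design matrix to verify the condition regressors,
    # their time derivatives, and the cosine drift terms.
    from nilearn.plotting import plot_design_matrix

    plot_design_matrix(design_matrix)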
.. GENERATED FROM PYTHON SOURCE LINES 93-99

Set up and fit the GLM. Note that the output consists of two variables:
`labels` and `estimates`. `labels` tags each vertex according to its
noise autocorrelation model, and `estimates` contains the corresponding
parameter estimates. We keep both for later contrast computation.

.. GENERATED FROM PYTHON SOURCE LINES 99-103

.. code-block:: default


    from nilearn.glm.first_level import run_glm

    labels, estimates = run_glm(texture.T, design_matrix.values)

.. GENERATED FROM PYTHON SOURCE LINES 104-110

Estimate contrasts
------------------
Specify the contrasts. For practical purposes, we first generate an
identity matrix whose size is the number of columns of the design
matrix.

.. GENERATED FROM PYTHON SOURCE LINES 110-112

.. code-block:: default


    contrast_matrix = np.eye(design_matrix.shape[1])

.. GENERATED FROM PYTHON SOURCE LINES 113-114

First, we create basic contrasts, one per column of the design matrix.

.. GENERATED FROM PYTHON SOURCE LINES 114-117

.. code-block:: default


    basic_contrasts = {column: contrast_matrix[i]
                       for i, column in enumerate(design_matrix.columns)}

.. GENERATED FROM PYTHON SOURCE LINES 118-120

Next, we add some intermediate contrasts: one summing all conditions
with an auditory component, and similar ones for the visual,
computation, and sentence conditions.

.. GENERATED FROM PYTHON SOURCE LINES 120-141

.. code-block:: default


    # one contrast adding all conditions with an auditory component
    basic_contrasts['audio'] = (
        basic_contrasts['audio_left_hand_button_press']
        + basic_contrasts['audio_right_hand_button_press']
        + basic_contrasts['audio_computation']
        + basic_contrasts['sentence_listening'])

    # one contrast adding all conditions involving instructions reading
    basic_contrasts['visual'] = (
        basic_contrasts['visual_left_hand_button_press']
        + basic_contrasts['visual_right_hand_button_press']
        + basic_contrasts['visual_computation']
        + basic_contrasts['sentence_reading'])

    # one contrast adding all conditions involving computation
    basic_contrasts['computation'] = (basic_contrasts['visual_computation']
                                      + basic_contrasts['audio_computation'])

    # one contrast adding all conditions involving sentences
    basic_contrasts['sentences'] = (basic_contrasts['sentence_listening']
                                    + basic_contrasts['sentence_reading'])

.. GENERATED FROM PYTHON SOURCE LINES 142-149

Finally, we create a dictionary of more relevant contrasts:

* 'left - right button press': probes motor activity in left versus
  right button presses.
* 'audio - visual': probes the difference in activity between listening
  to some content and reading the same type of content (instructions,
  stories).
* 'computation - sentences': looks at the activity when performing a
  mental computation task versus simply reading sentences.

Of course, we could define other contrasts, but we keep only these
three for simplicity.

.. GENERATED FROM PYTHON SOURCE LINES 149-163

.. code-block:: default


    contrasts = {
        'left - right button press': (
            basic_contrasts['audio_left_hand_button_press']
            - basic_contrasts['audio_right_hand_button_press']
            + basic_contrasts['visual_left_hand_button_press']
            - basic_contrasts['visual_right_hand_button_press']
        ),
        'audio - visual': basic_contrasts['audio'] - basic_contrasts['visual'],
        'computation - sentences': (
            basic_contrasts['computation']
            - basic_contrasts['sentences']
        )
    }

.. GENERATED FROM PYTHON SOURCE LINES 164-165

Let's estimate the contrasts by iterating over them.

.. GENERATED FROM PYTHON SOURCE LINES 165-184

.. code-block:: default


    from nilearn.glm.contrasts import compute_contrast
    from nilearn import plotting

    for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
        print(' Contrast % i out of %i: %s, right hemisphere' %
              (index + 1, len(contrasts), contrast_id))
        # compute contrast-related statistics
        contrast = compute_contrast(labels, estimates, contrast_val,
                                    contrast_type='t')
        # we present the Z-transform of the t map
        z_score = contrast.z_score()
        # we plot it on the surface, on the inflated fsaverage mesh,
        # together with a suitable background to give an impression
        # of the cortex folding.
        plotting.plot_surf_stat_map(
            fsaverage.infl_right, z_score, hemi='right',
            title=contrast_id, colorbar=True,
            threshold=3., bg_map=fsaverage.sulc_right)

.. rst-class:: sphx-glr-horizontal


    *

      .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_001.png
         :alt: left - right button press
         :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_001.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_002.png
         :alt: audio - visual
         :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_002.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_003.png
         :alt: computation - sentences
         :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_003.png
         :class: sphx-glr-multi-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

     Contrast  1 out of 3: left - right button press, right hemisphere
     Contrast  2 out of 3: audio - visual, right hemisphere
     Contrast  3 out of 3: computation - sentences, right hemisphere
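The threshold of 3.0 used above is an arbitrary choice. As an
alternative, here is a minimal sketch (not part of the original example;
the `alpha` level is an assumption made for illustration) deriving a
data-driven threshold that controls the false discovery rate:

.. code-block:: default


    # Sketch: compute an FDR-corrected threshold for the z-values of the
    # last contrast estimated above, instead of the fixed value of 3.
    from nilearn.glm import fdr_threshold

    fdr_thr = fdr_threshold(z_score, alpha=.05)  # alpha chosen for illustration
    print('FDR=.05 threshold: %.3f' % fdr_thr)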
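For interactive inspection in a browser, one possible follow-up (again a
sketch, not part of the original example, reusing the last computed
z-map) is Nilearn's interactive surface viewer:

.. code-block:: default


    # Sketch: open an interactive 3D view of the z-map on the inflated
    # right-hemisphere mesh; in a notebook, display `view` directly
    # instead of opening a browser window.
    view = plotting.view_surf(
        fsaverage.infl_right, z_score,
        threshold='90%', bg_map=fsaverage.sulc_right)
    view.open_in_browser()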
.. GENERATED FROM PYTHON SOURCE LINES 185-190

Analysing the left hemisphere
-----------------------------

Note that re-creating the above analysis for the left hemisphere
requires little additional code!

.. GENERATED FROM PYTHON SOURCE LINES 192-193

We project the fMRI data to the mesh.

.. GENERATED FROM PYTHON SOURCE LINES 193-195

.. code-block:: default


    texture = surface.vol_to_surf(fmri_img, fsaverage.pial_left)

.. GENERATED FROM PYTHON SOURCE LINES 196-197

Then we estimate the General Linear Model.

.. GENERATED FROM PYTHON SOURCE LINES 197-199

.. code-block:: default


    labels, estimates = run_glm(texture.T, design_matrix.values)

.. GENERATED FROM PYTHON SOURCE LINES 200-201

Finally, we create contrast-specific maps and plot them.

.. GENERATED FROM PYTHON SOURCE LINES 201-215

.. code-block:: default


    for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
        print(' Contrast % i out of %i: %s, left hemisphere' %
              (index + 1, len(contrasts), contrast_id))
        # compute contrasts
        contrast = compute_contrast(labels, estimates, contrast_val,
                                    contrast_type='t')
        z_score = contrast.z_score()
        # plot the result
        plotting.plot_surf_stat_map(
            fsaverage.infl_left, z_score, hemi='left',
            title=contrast_id, colorbar=True,
            threshold=3., bg_map=fsaverage.sulc_left)

    plotting.show()

.. rst-class:: sphx-glr-horizontal


    *

      .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_004.png
         :alt: left - right button press
         :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_004.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_005.png
         :alt: audio - visual
         :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_005.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_006.png
         :alt: computation - sentences
         :srcset: /auto_examples/04_glm_first_level/images/sphx_glr_plot_localizer_surface_analysis_006.png
         :class: sphx-glr-multi-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

     Contrast  1 out of 3: left - right button press, left hemisphere
     Contrast  2 out of 3: audio - visual, left hemisphere
     Contrast  3 out of 3: computation - sentences, left hemisphere

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 15.818 seconds)

**Estimated memory usage:** 220 MB

.. _sphx_glr_download_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn.github.io/main?filepath=examples/auto_examples/04_glm_first_level/plot_localizer_surface_analysis.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_localizer_surface_analysis.py `

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_localizer_surface_analysis.ipynb `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_