.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/07_advanced/plot_surface_bids_analysis.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_auto_examples_07_advanced_plot_surface_bids_analysis.py>`
        to download the full example code or to run this example in your
        browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_07_advanced_plot_surface_bids_analysis.py:


Surface-based first and second level analysis of a dataset
==========================================================

Full step-by-step example of fitting a :term:`GLM` (first and second level
analysis) on a 10-subject dataset and visualizing the results.

More specifically:

1. Download an :term:`fMRI` :term:`BIDS` dataset with two language conditions
   to contrast.
2. Project the data onto a standard mesh, fsaverage5, i.e. the FreeSurfer
   template mesh downsampled to about 10k nodes per hemisphere.
3. Fit the first level model objects.
4. Fit a second level model on the fitted first level models.

Notice that in this case the preprocessed :term:`bold` images were already
normalized to the same :term:`MNI` space.

To run this example, you must launch IPython via ``ipython --matplotlib``
in a terminal, or use the Jupyter notebook.

.. contents:: **Contents**
    :local:
    :depth: 1

.. GENERATED FROM PYTHON SOURCE LINES 27-35

Fetch example BIDS dataset
--------------------------

We download a simplified :term:`BIDS` dataset made available for illustrative
purposes. It contains only the necessary information to run a statistical
analysis using Nilearn. The raw-data subject folders contain only
``bold.json`` and ``events.tsv`` files, while the derivatives folder includes
the preprocessed files ``preproc.nii`` and ``confounds.tsv``.

.. GENERATED FROM PYTHON SOURCE LINES 35-38

.. code-block:: default

    from nilearn.datasets import fetch_language_localizer_demo_dataset
    data_dir, _ = fetch_language_localizer_demo_dataset()

.. GENERATED FROM PYTHON SOURCE LINES 39-40

Here is the location of the dataset on disk.

.. GENERATED FROM PYTHON SOURCE LINES 40-42

.. code-block:: default

    print(data_dir)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    /home/nicolas/nilearn_data/fMRI-language-localizer-demo-dataset

.. GENERATED FROM PYTHON SOURCE LINES 43-51

Automatically obtain FirstLevelModel objects and fit arguments
--------------------------------------------------------------

From the dataset directory we automatically obtain FirstLevelModel objects
with their subject_id filled in from the :term:`BIDS` dataset. Moreover, we
obtain for each model a dictionary with run_imgs, events and confound
regressors, since a confounds.tsv file is available in this :term:`BIDS`
dataset. To get the first level models we only have to specify the dataset
directory and the task_label as specified in the file names.

.. GENERATED FROM PYTHON SOURCE LINES 51-58

.. code-block:: default

    from nilearn.glm.first_level import first_level_from_bids
    task_label = 'languagelocalizer'
    _, models_run_imgs, models_events, models_confounds = \
        first_level_from_bids(
            data_dir, task_label, img_filters=[('desc', 'preproc')])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    /home/nicolas/GitRepos/nilearn-fork/nilearn/glm/first_level/first_level.py:901: UserWarning: SliceTimingRef not found in file /home/nicolas/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json. It will be assumed that the slice timing reference is 0.0 percent of the repetition time. If it is not the case it will need to be set manually in the generated list of models

.. GENERATED FROM PYTHON SOURCE LINES 59-61

We also need to get the TR information.
For that, we use the json sidecar file of the dataset's functional images.

.. GENERATED FROM PYTHON SOURCE LINES 61-68

.. code-block:: default

    import os
    json_file = os.path.join(data_dir, 'derivatives', 'sub-01', 'func',
                             'sub-01_task-languagelocalizer_desc-preproc_bold.json')
    import json
    with open(json_file, 'r') as f:
        t_r = json.load(f)['RepetitionTime']

.. GENERATED FROM PYTHON SOURCE LINES 69-70

Project fMRI data to the surface: first get fsaverage5.

.. GENERATED FROM PYTHON SOURCE LINES 70-73

.. code-block:: default

    from nilearn.datasets import fetch_surf_fsaverage
    fsaverage = fetch_surf_fsaverage(mesh='fsaverage5')

.. GENERATED FROM PYTHON SOURCE LINES 74-76

The projection function simply takes the fMRI data and the mesh.
Note that those correspond spatially, as they are both in MNI space.

.. GENERATED FROM PYTHON SOURCE LINES 76-82

.. code-block:: default

    import numpy as np
    from nilearn import surface
    from nilearn.glm.first_level import make_first_level_design_matrix
    from nilearn.glm.first_level import run_glm
    from nilearn.glm.contrasts import compute_contrast

.. GENERATED FROM PYTHON SOURCE LINES 83-84

Empty lists in which we are going to store activation values.

.. GENERATED FROM PYTHON SOURCE LINES 84-123

.. code-block:: default

    z_scores_right = []
    z_scores_left = []
    for (fmri_img, confound, events) in zip(
            models_run_imgs, models_confounds, models_events):
        texture = surface.vol_to_surf(fmri_img[0], fsaverage.pial_right)
        n_scans = texture.shape[1]
        frame_times = t_r * (np.arange(n_scans) + .5)

        # Create the design matrix
        #
        # We specify an hrf model containing the Glover model and its time
        # derivative. The drift model is implicitly a cosine basis with a
        # period cutoff of 128s.
        design_matrix = make_first_level_design_matrix(
            frame_times, events=events[0], hrf_model='glover + derivative',
            add_regs=confound[0])

        # Contrast specification
        contrast_values = (design_matrix.columns == 'language') * 1.0 -\
                          (design_matrix.columns == 'string')

        # Setup and fit GLM.
        # Note that the output consists of two variables: `labels` and
        # `estimates`.
        # `labels` tags voxels according to noise autocorrelation.
        # `estimates` contains the parameter estimates.
        # We input them for contrast computation.
        labels, estimates = run_glm(texture.T, design_matrix.values)
        contrast = compute_contrast(labels, estimates, contrast_values,
                                    contrast_type='t')
        # We present the Z-transform of the t map.
        z_score = contrast.z_score()
        z_scores_right.append(z_score)

        # Do the left hemisphere exactly the same way.
        texture = surface.vol_to_surf(fmri_img[0], fsaverage.pial_left)
        labels, estimates = run_glm(texture.T, design_matrix.values)
        contrast = compute_contrast(labels, estimates, contrast_values,
                                    contrast_type='t')
        z_scores_left.append(contrast.z_score())

.. GENERATED FROM PYTHON SOURCE LINES 124-127

Individual activation maps have been accumulated in the z_scores_left and
z_scores_right lists respectively. We can now use them in a group study
(one-sample study).

.. GENERATED FROM PYTHON SOURCE LINES 129-135

Group study
-----------

Compute population-level maps for the left and right hemispheres.
We do that directly on the value arrays.

.. GENERATED FROM PYTHON SOURCE LINES 135-139

.. code-block:: default

    from scipy.stats import ttest_1samp, norm
    t_left, pval_left = ttest_1samp(np.array(z_scores_left), 0)
    t_right, pval_right = ttest_1samp(np.array(z_scores_right), 0)

.. GENERATED FROM PYTHON SOURCE LINES 140-141

What we have so far are p-values: we convert them to z-values for plotting.

.. GENERATED FROM PYTHON SOURCE LINES 141-144

.. code-block:: default

    z_val_left = norm.isf(pval_left)
    z_val_right = norm.isf(pval_right)

.. GENERATED FROM PYTHON SOURCE LINES 145-146

Plot the resulting maps, first on the left hemisphere.

.. GENERATED FROM PYTHON SOURCE LINES 146-151

.. code-block:: default

    from nilearn import plotting
    plotting.plot_surf_stat_map(
        fsaverage.infl_left, z_val_left, hemi='left',
        title="language-string, left hemisphere", colorbar=True,
        threshold=3., bg_map=fsaverage.sulc_left)

.. image-sg:: /auto_examples/07_advanced/images/sphx_glr_plot_surface_bids_analysis_001.png
   :alt: language-string, left hemisphere
   :srcset: /auto_examples/07_advanced/images/sphx_glr_plot_surface_bids_analysis_001.png
   :class: sphx-glr-single-img
.. GENERATED FROM PYTHON SOURCE LINES 152-153

Next, on the right hemisphere.

.. GENERATED FROM PYTHON SOURCE LINES 153-159

.. code-block:: default

    plotting.plot_surf_stat_map(
        fsaverage.infl_right, z_val_right, hemi='right',
        title="language-string, right hemisphere", colorbar=True,
        threshold=3., bg_map=fsaverage.sulc_right)
    plotting.show()

.. image-sg:: /auto_examples/07_advanced/images/sphx_glr_plot_surface_bids_analysis_002.png
   :alt: language-string, right hemisphere
   :srcset: /auto_examples/07_advanced/images/sphx_glr_plot_surface_bids_analysis_002.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 2 minutes  5.580 seconds)

**Estimated memory usage:** 456 MB


.. _sphx_glr_download_auto_examples_07_advanced_plot_surface_bids_analysis.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn.github.io/main?filepath=examples/auto_examples/07_advanced/plot_surface_bids_analysis.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_surface_bids_analysis.py <plot_surface_bids_analysis.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_surface_bids_analysis.ipynb <plot_surface_bids_analysis.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
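A closing note on thresholding: the group maps above are displayed with an
arbitrary cut-off of z = 3, which does not correct for the roughly 10k nodes
tested per hemisphere. As a possible extension (not part of the generated
script), the sketch below applies a Bonferroni correction to a group z-map
built exactly as in the example. The ``z_scores_left`` array is filled with
random values purely so the snippet is self-contained; with real data it would
hold the per-subject z-maps accumulated in the loop above.

```python
import numpy as np
from scipy.stats import norm, ttest_1samp

# Hypothetical stand-in for the per-subject z-maps: 10 subjects by
# 10242 fsaverage5 nodes, random values only to make this runnable.
rng = np.random.default_rng(0)
z_scores_left = rng.standard_normal((10, 10242))

# Same group-level one-sample t-test as in the example, then conversion
# of p-values to z-values.
t_left, pval_left = ttest_1samp(np.array(z_scores_left), 0)
z_val_left = norm.isf(pval_left)

# Bonferroni correction: control the family-wise error rate at alpha
# across all nodes tested on this hemisphere.
alpha = 0.05
n_nodes = z_scores_left.shape[1]
z_threshold = norm.isf(alpha / n_nodes)  # roughly 4.4 for ~10k nodes
n_significant = int(np.sum(z_val_left > z_threshold))
print(f"Bonferroni-corrected z threshold: {z_threshold:.2f} "
      f"({n_significant} nodes survive)")
```

The corrected threshold (around 4.4 rather than 3) could then be passed as
``threshold=z_threshold`` to the ``plot_surf_stat_map`` calls above; less
conservative alternatives such as false-discovery-rate control would follow
the same pattern.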