.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_single_subject_single_run.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code or to run this
        example in your browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_single_subject_single_run.py:


Intro to GLM Analysis: a single-session, single-subject fMRI dataset
=====================================================================

In this tutorial, we use a General Linear Model (:term:`GLM`) to compare the
:term:`fMRI` signal during periods of auditory stimulation versus periods of
rest.

.. contents:: **Contents**
    :local:
    :depth: 1

The analysis described here is performed in the native space, directly on the
original :term:`EPI` scans, without any spatial or temporal preprocessing.
(More sensitive results would likely be obtained on corrected, spatially
normalized and smoothed images.)

The data
---------

The dataset comes from an experiment conducted at the FIL by Geraint Rees
under the direction of Karl Friston. It is provided by the FIL methods
group, which develops the SPM software.

According to the SPM documentation, 96 scans were acquired (repetition time
:term:`TR` = 7s) in one session. The paradigm consisted of alternating
periods of stimulation and rest, lasting 42s each (that is, 6 scans). The
session started with a rest block. Auditory stimulation consisted of
bi-syllabic words presented binaurally at a rate of 60 per minute. The
functional data start at scan number 4, that is, the image file
``fM00223_004``.

The whole-brain :term:`BOLD`/:term:`EPI` images were acquired on a 2T Siemens
MAGNETOM Vision system. Each scan consisted of 64 contiguous slices
(64x64x64 :term:`voxels` of 3mm x 3mm x 3mm). Acquisition of one scan took
6.05s, with the scan-to-scan repeat time (:term:`TR`) set arbitrarily to 7s.

To run this example, you must launch IPython via ``ipython --matplotlib`` in
a terminal, or use ``jupyter-notebook``.

.. GENERATED FROM PYTHON SOURCE LINES 42-50

Retrieving the data
-------------------

.. note:: In this tutorial, we load the data using a data downloading
          function. To input your own data, you will need to provide
          a list of paths to your own files in the ``subject_data`` variable.
          These should abide by the Brain Imaging Data Structure
          (:term:`BIDS`) organization.
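For instance, with your own data, such a list of paths could be built as
follows (a minimal sketch; the directory and file pattern below are
hypothetical):

.. code-block:: default

    from pathlib import Path

    # Hypothetical layout: adapt the root directory and the glob pattern
    # to your own files.
    my_func_files = sorted(
        str(p) for p in Path('/data/my_study/sub-01/func').glob('*.nii.gz'))

Here, we instead download and use the tutorial dataset: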
.. GENERATED FROM PYTHON SOURCE LINES 50-55

.. code-block:: default

    from nilearn.datasets import fetch_spm_auditory
    subject_data = fetch_spm_auditory()
    subject_data.func  # print the list of names of functional images

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    ['/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_004.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_005.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_006.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_007.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_008.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_009.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_010.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_011.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_012.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_013.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_014.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_015.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_016.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_017.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_018.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_019.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_020.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_021.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_022.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_023.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_024.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_025.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_026.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_027.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_028.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_029.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_030.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_031.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_032.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_033.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_034.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_035.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_036.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_037.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_038.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_039.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_040.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_041.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_042.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_043.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_044.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_045.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_046.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_047.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_048.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_049.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_050.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_051.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_052.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_053.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_054.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_055.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_056.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_057.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_058.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_059.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_060.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_061.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_062.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_063.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_064.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_065.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_066.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_067.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_068.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_069.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_070.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_071.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_072.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_073.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_074.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_075.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_076.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_077.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_078.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_079.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_080.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_081.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_082.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_083.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_084.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_085.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_086.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_087.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_088.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_089.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_090.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_091.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_092.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_093.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_094.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_095.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_096.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_097.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_098.img',
     '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_099.img']

.. GENERATED FROM PYTHON SOURCE LINES 56-57

We can display the first functional image and the subject's anatomy:

.. GENERATED FROM PYTHON SOURCE LINES 57-61

.. code-block:: default

    from nilearn.plotting import plot_stat_map, plot_anat, plot_img
    plot_img(subject_data.func[0], colorbar=True, cbar_tick_format="%i")
    plot_anat(subject_data.anat, colorbar=True, cbar_tick_format="%i")

.. rst-class:: sphx-glr-horizontal

    *

      .. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_001.png
         :alt: plot single subject single run
         :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_001.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_002.png
         :alt: plot single subject single run
         :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_002.png
         :class: sphx-glr-multi-img

.. GENERATED FROM PYTHON SOURCE LINES 62-65

Next, we concatenate all the 3D :term:`EPI` images into a single 4D image,
then we average them to create a background image that will be used to
display the activations:

.. GENERATED FROM PYTHON SOURCE LINES 65-70

.. code-block:: default

    from nilearn.image import concat_imgs, mean_img
    fmri_img = concat_imgs(subject_data.func)
    # Store the average under a new name, to avoid shadowing the
    # mean_img function.
    mean_image = mean_img(fmri_img)
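As a quick sanity check, the 4D image should contain the 96 scans described
above, each of shape 64 x 64 x 64 (a minimal sketch using the objects just
defined):

.. code-block:: default

    # Expected output: (64, 64, 64, 96)
    print(fmri_img.shape)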
.. GENERATED FROM PYTHON SOURCE LINES 71-78

Specifying the experimental paradigm
------------------------------------

We must now provide a description of the experiment, that is, define the
timing of the auditory stimulation and rest periods. This is typically
provided in an events.tsv file. The path of this file is provided in the
dataset.

.. GENERATED FROM PYTHON SOURCE LINES 78-82

.. code-block:: default

    import pandas as pd
    events = pd.read_table(subject_data['events'])
    events

======  ==========  ============
onset   duration    trial_type
======  ==========  ============
0.0     42.0        rest
42.0    42.0        active
84.0    42.0        rest
126.0   42.0        active
168.0   42.0        rest
210.0   42.0        active
252.0   42.0        rest
294.0   42.0        active
336.0   42.0        rest
378.0   42.0        active
420.0   42.0        rest
462.0   42.0        active
504.0   42.0        rest
546.0   42.0        active
588.0   42.0        rest
630.0   42.0        active
======  ==========  ============


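As a sanity check, this table can be reconstructed from the paradigm
description given above: 16 alternating blocks of 42s each, starting with
rest (a minimal sketch; the variable name is illustrative):

.. code-block:: default

    import numpy as np

    # 16 alternating 42s blocks of rest and active, starting with rest.
    reconstructed_events = pd.DataFrame({
        'onset': 42.0 * np.arange(16),
        'duration': 42.0,
        'trial_type': ['rest', 'active'] * 8,
    })
    # This should match the 'events' table loaded above.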
.. GENERATED FROM PYTHON SOURCE LINES 83-87

Performing the GLM analysis
---------------------------

It is now time to create and estimate a ``FirstLevelModel`` object that will
generate the *design matrix* using the information provided by the ``events``
object.

.. GENERATED FROM PYTHON SOURCE LINES 87-90

.. code-block:: default

    from nilearn.glm.first_level import FirstLevelModel

.. GENERATED FROM PYTHON SOURCE LINES 91-99

Parameters of the first-level model:

* t_r=7(s) is the repetition time of the acquisitions
* noise_model='ar1' specifies the noise covariance model: a lag-1 dependence
* standardize=False means that we do not want to rescale the time series to
  mean 0, variance 1
* hrf_model='spm' means that we rely on the SPM "canonical hrf" model
  (without time or dispersion derivatives)
* drift_model='cosine' means that we model the signal drifts as slowly
  oscillating time functions
* high_pass=0.01(Hz) defines the cutoff frequency (inverse of the time
  period)

.. GENERATED FROM PYTHON SOURCE LINES 99-106

.. code-block:: default

    fmri_glm = FirstLevelModel(t_r=7,
                               noise_model='ar1',
                               standardize=False,
                               hrf_model='spm',
                               drift_model='cosine',
                               high_pass=.01)

.. GENERATED FROM PYTHON SOURCE LINES 107-108

Now that we have specified the model, we can run it on the :term:`fMRI` image.

.. GENERATED FROM PYTHON SOURCE LINES 108-110

.. code-block:: default

    fmri_glm = fmri_glm.fit(fmri_img, events)

.. GENERATED FROM PYTHON SOURCE LINES 111-113

One can inspect the design matrix (rows represent time, and columns contain
the predictors).

.. GENERATED FROM PYTHON SOURCE LINES 113-115

.. code-block:: default

    design_matrix = fmri_glm.design_matrices_[0]

.. GENERATED FROM PYTHON SOURCE LINES 116-118

Formally, we have taken the first design matrix, because the model is
implicitly meant to handle multiple runs.

.. GENERATED FROM PYTHON SOURCE LINES 118-123

.. code-block:: default

    from nilearn.plotting import plot_design_matrix
    plot_design_matrix(design_matrix)
    import matplotlib.pyplot as plt
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_003.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 124-126

Save the design matrix image to disk. First, create a directory where you
want to write the images.

.. GENERATED FROM PYTHON SOURCE LINES 126-136

.. code-block:: default

    import os
    outdir = 'results'
    if not os.path.exists(outdir):
        os.mkdir(outdir)

    from os.path import join
    plot_design_matrix(
        design_matrix, output_file=join(outdir, 'design_matrix.png'))

.. GENERATED FROM PYTHON SOURCE LINES 137-140

The first column contains the expected response profile of regions which are
sensitive to the auditory stimulation. Let's plot this first column.

.. GENERATED FROM PYTHON SOURCE LINES 140-146

.. code-block:: default

    plt.plot(design_matrix['active'])
    plt.xlabel('scan')
    plt.title('Expected Auditory Response')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_004.png
   :alt: Expected Auditory Response
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_004.png
   :class: sphx-glr-single-img
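Before moving on to contrasts, it is useful to look at the columns of this
design matrix: the contrast vectors in the next section need one weight per
column (the contrast arrays below have 16 entries, one per regressor). A
quick inspection (a minimal sketch reusing the objects above):

.. code-block:: default

    # One row per scan, one column per regressor: the two conditions,
    # the cosine drift regressors, and a constant.
    print(design_matrix.shape)
    print(list(design_matrix.columns))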
.. GENERATED FROM PYTHON SOURCE LINES 147-158

Detecting voxels with significant effects
-----------------------------------------

To access the estimated coefficients (the betas of the :term:`GLM`), we
create one :term:`contrast` per condition, with a single '1' in the
corresponding column. The role of a :term:`contrast` is to select some
columns of the model (and potentially weight them) to study the associated
statistics. So, in a nutshell, a contrast is a weighted combination of the
estimated effects.

Here we first define canonical contrasts that consider the two effects in
isolation (let's call them "conditions"), then a :term:`contrast` that
computes the difference between these conditions.

.. GENERATED FROM PYTHON SOURCE LINES 158-167

.. code-block:: default

    from numpy import array
    # One weight per column of the design matrix.
    conditions = {
        'active': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
                         0., 0., 0.]),
        'rest':   array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
                         0., 0., 0.]),
    }

.. GENERATED FROM PYTHON SOURCE LINES 168-170

We can then compare the two conditions 'active' and 'rest' by defining the
corresponding :term:`contrast`:

.. GENERATED FROM PYTHON SOURCE LINES 170-173

.. code-block:: default

    active_minus_rest = conditions['active'] - conditions['rest']

.. GENERATED FROM PYTHON SOURCE LINES 174-176

Let's look at it: plot the coefficients of the :term:`contrast`, indexed by
the names of the columns of the design matrix.

.. GENERATED FROM PYTHON SOURCE LINES 176-180

.. code-block:: default

    from nilearn.plotting import plot_contrast_matrix
    plot_contrast_matrix(active_minus_rest, design_matrix=design_matrix)

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_005.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 181-184

Below, we compute the estimated effect. It is in :term:`BOLD` signal units,
but it comes with no statistical guarantee, because it does not take the
associated variance into account.

.. GENERATED FROM PYTHON SOURCE LINES 184-188

.. code-block:: default

    eff_map = fmri_glm.compute_contrast(active_minus_rest,
                                        output_type='effect_size')

.. GENERATED FROM PYTHON SOURCE LINES 189-193

To assess statistical significance, we form a t-statistic and directly
convert it to the z-scale: the values are rescaled so that, across voxels,
they would follow a standard Gaussian distribution (mean=0, variance=1) if
there were no effects in the data.

.. GENERATED FROM PYTHON SOURCE LINES 193-197

.. code-block:: default

    z_map = fmri_glm.compute_contrast(active_minus_rest,
                                      output_type='z_score')
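As an aside, instead of spelling out the weight vector, ``compute_contrast``
also accepts a formula-like string built from the design-matrix column names
(a sketch; assumes a reasonably recent nilearn version):

.. code-block:: default

    # Equivalent to the array-based contrast defined above.
    z_map_from_string = fmri_glm.compute_contrast('active - rest',
                                                  output_type='z_score')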
.. GENERATED FROM PYTHON SOURCE LINES 198-206

Plot thresholded z scores map
------------------------------

We display it on top of the average functional image of the series (this
could also be the subject's anatomical image). We arbitrarily use a
threshold of 3.0 in z-scale; we will see later how to use corrected
thresholds. We show 3 axial views, with display_mode='z' and cut_coords=3.

.. GENERATED FROM PYTHON SOURCE LINES 206-212

.. code-block:: default

    plot_stat_map(z_map, bg_img=mean_image, threshold=3.0,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (Z>3)')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_006.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_006.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 213-220

Statistical significance testing. One should worry about the statistical
validity of the procedure: here we used an arbitrary threshold of 3.0, but
the threshold should provide some guarantee on the risk of false detections
(aka type-1 errors in statistics). One option is to control the false
positive rate (:term:`fpr`, denoted by alpha) at a given level, e.g. 0.001:
this means that there is a 0.1% chance of declaring an inactive
:term:`voxel` active.

.. GENERATED FROM PYTHON SOURCE LINES 220-229

.. code-block:: default

    from nilearn.glm import threshold_stats_img
    _, threshold = threshold_stats_img(z_map, alpha=.001, height_control='fpr')
    print('Uncorrected p<0.001 threshold: %.3f' % threshold)
    plot_stat_map(z_map, bg_img=mean_image, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (p<0.001)')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_007.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_007.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Uncorrected p<0.001 threshold: 3.291

.. GENERATED FROM PYTHON SOURCE LINES 230-235

The problem is that, with this procedure, about 0.001 * n_voxels voxels are
expected to be declared active while they are not, i.e. tens to hundreds of
voxels. A more conservative solution is to control the family-wise error
rate, i.e. the probability of making one or more false detections, say at
5%. For that we use the so-called Bonferroni correction.

.. GENERATED FROM PYTHON SOURCE LINES 235-244

.. code-block:: default

    _, threshold = threshold_stats_img(
        z_map, alpha=.05, height_control='bonferroni')
    print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold)
    plot_stat_map(z_map, bg_img=mean_image, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (p<0.05, corrected)')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_008.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_008.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Bonferroni-corrected, p<0.05 threshold: 4.934

.. GENERATED FROM PYTHON SOURCE LINES 245-249

This is quite conservative indeed! A popular alternative is to control the
expected proportion of false discoveries among detections. This is called
the false discovery rate.

.. GENERATED FROM PYTHON SOURCE LINES 249-257

.. code-block:: default

    _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
    print('False Discovery rate = 0.05 threshold: %.3f' % threshold)
    plot_stat_map(z_map, bg_img=mean_image, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (fdr=0.05)')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_009.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_009.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    False Discovery rate = 0.05 threshold: 2.904
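To compare the three strategies side by side, one can simply loop over the
``height_control`` options (a minimal sketch reusing the objects above):

.. code-block:: default

    # Print the z threshold obtained with each control strategy.
    for height_control, alpha in [('fpr', .001),
                                  ('bonferroni', .05),
                                  ('fdr', .05)]:
        _, thr = threshold_stats_img(z_map, alpha=alpha,
                                     height_control=height_control)
        print('%-10s alpha=%.3f -> z threshold %.3f'
              % (height_control, alpha, thr))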
.. GENERATED FROM PYTHON SOURCE LINES 258-263

Finally, one often wants to discard isolated voxels (aka "small clusters")
from these images. It is possible to generate a thresholded map with small
clusters removed by providing a cluster_threshold argument; here, clusters
smaller than 10 voxels will be discarded.

.. GENERATED FROM PYTHON SOURCE LINES 263-273

.. code-block:: default

    clean_map, threshold = threshold_stats_img(
        z_map, alpha=.05, height_control='fdr', cluster_threshold=10)
    plot_stat_map(clean_map, bg_img=mean_image, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (fdr=0.05), clusters > 10 voxels')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_010.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_010.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 274-275

We can save the effect and z-score maps to disk.

.. GENERATED FROM PYTHON SOURCE LINES 275-278

.. code-block:: default

    z_map.to_filename(join(outdir, 'active_vs_rest_z_map.nii.gz'))
    eff_map.to_filename(join(outdir, 'active_vs_rest_eff_map.nii.gz'))

.. GENERATED FROM PYTHON SOURCE LINES 279-280

We can furthermore extract the activation peaks and report them in a table.

.. GENERATED FROM PYTHON SOURCE LINES 280-286

.. code-block:: default

    from nilearn.reporting import get_clusters_table
    table = get_clusters_table(z_map, stat_threshold=threshold,
                               cluster_threshold=20)
    table
==========  =====  =====  ====  =========  ==================
Cluster ID  X      Y      Z     Peak Stat  Cluster Size (mm3)
==========  =====  =====  ====  =========  ==================
1           -60.0  -6.0   42.0  9.811979   4050
1a          -63.0  6.0    36.0  8.601922
1b          -63.0  0.0    42.0  8.435063
1c          -48.0  -15.0  39.0  8.364058
2           60.0   0.0    36.0  9.605128   1512
2a          45.0   -12.0  42.0  7.590200
3           63.0   12.0   27.0  8.253889   972
3a          51.0   3.0    30.0  6.968355
3b          54.0   9.0    39.0  3.565609
4           36.0   -3.0   15.0  8.087451   1188
5           -63.0  -18.0  27.0  5.807510   594
5a          -63.0  -21.0  42.0  5.646352
5b          -60.0  -21.0  33.0  5.416271
6           45.0   -18.0  57.0  5.710963   702
6a          36.0   -12.0  57.0  5.633746
6b          30.0   -9.0   66.0  4.796135
6c          36.0   -15.0  69.0  4.254544
7           -12.0  -15.0  93.0  5.522477   621
7a          -6.0   -15.0  99.0  4.713852
7b          -3.0   -18.0  90.0  4.270733
7c          -18.0  -12.0  96.0  4.085568
8           -24.0  -24.0  90.0  5.331806   648
8a          -36.0  -24.0  90.0  4.700088
8b          -12.0  -24.0  90.0  4.037845
8c          -36.0  -24.0  84.0  3.527477
9           -15.0  -60.0  66.0  4.835099   837
9a          -15.0  -60.0  57.0  4.615642
9b          -6.0   -63.0  63.0  4.091568
==========  =====  =====  ====  =========  ==================


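The table can also be used programmatically; for instance, to center a
display on the strongest peak (a hypothetical usage sketch; ``table``,
``z_map``, ``mean_image`` and ``threshold`` are defined above):

.. code-block:: default

    # Row with the highest peak statistic (sub-peaks appear as '1a', '1b', ...).
    main_peak = table.loc[table['Peak Stat'].idxmax()]
    plot_stat_map(z_map, bg_img=mean_image, threshold=threshold,
                  cut_coords=tuple(main_peak[['X', 'Y', 'Z']]),
                  title='Strongest peak')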
.. GENERATED FROM PYTHON SOURCE LINES 287-288

This table can be saved for future use.

.. GENERATED FROM PYTHON SOURCE LINES 288-291

.. code-block:: default

    table.to_csv(join(outdir, 'table.csv'))

.. GENERATED FROM PYTHON SOURCE LINES 292-301

Performing an F-test
---------------------

"Active vs rest" is a typical t-test: condition versus baseline. Another
popular type of test is an F-test, in which one seeks whether a certain
combination of conditions (possibly two-, three- or higher-dimensional)
explains a significant proportion of the signal. Here one might, for
instance, test which voxels are well explained by the combination of the
active and rest conditions.

.. GENERATED FROM PYTHON SOURCE LINES 303-306

Specify the contrast and compute the corresponding map. Actually, the
contrast specification is done exactly the same way as for t-contrasts.

.. GENERATED FROM PYTHON SOURCE LINES 306-315

.. code-block:: default

    import numpy as np
    effects_of_interest = np.vstack((conditions['active'], conditions['rest']))
    plot_contrast_matrix(effects_of_interest, design_matrix)
    plt.show()

    z_map = fmri_glm.compute_contrast(effects_of_interest,
                                      output_type='z_score')

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_011.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_011.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 316-318

Note that the statistic has been converted to a z-variable, which makes it
easier to represent.

.. GENERATED FROM PYTHON SOURCE LINES 318-326

.. code-block:: default

    clean_map, threshold = threshold_stats_img(
        z_map, alpha=.05, height_control='fdr', cluster_threshold=10)
    plot_stat_map(clean_map, bg_img=mean_image, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Effects of interest (fdr=0.05), clusters > 10 voxels')
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_012.png
   :alt: plot single subject single run
   :srcset: /auto_examples/images/sphx_glr_plot_single_subject_single_run_012.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 327-328

Oops, there is a lot of non-neural signal in there (ventricles, arteries)...


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 15.411 seconds)

**Estimated memory usage:** 313 MB


.. _sphx_glr_download_auto_examples_plot_single_subject_single_run.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn.github.io/main?filepath=examples/auto_examples/plot_single_subject_single_run.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_single_subject_single_run.py `

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_single_subject_single_run.ipynb `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_