.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_auto_examples_plot_single_subject_single_run.py>` to download the full example code or to run this example in your browser via Binder
    .. rst-class:: sphx-glr-example-title

    .. _sphx_glr_auto_examples_plot_single_subject_single_run.py:

Intro to GLM Analysis: a single-session, single-subject fMRI dataset
=====================================================================

In this tutorial, we use a General Linear Model (:term:`GLM`) to compare the
:term:`fMRI` signal during periods of auditory stimulation versus periods of rest.

.. contents:: **Contents**
    :local:
    :depth: 1

The analysis described here is performed in the native space, directly on the
original :term:`EPI` scans without any spatial or temporal preprocessing.
(More sensitive results would likely be obtained on the corrected,
spatially normalized and smoothed images.)

The data
---------

The dataset comes from an experiment conducted at the FIL by Geraint Rees
under the direction of Karl Friston. It is provided by the FIL methods
group, which develops the SPM software.

According to the SPM documentation, 96 scans were acquired (repetition time
:term:`TR` = 7s) in one session. The paradigm consisted of alternating periods
of stimulation and rest, lasting 42s each (that is, 6 scans). The session
started with a rest block. Auditory stimulation consisted of bi-syllabic words
presented binaurally at a rate of 60 per minute. The functional data start at
scan number 4, that is, with the image file ``fM00223_004``.
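
As a concrete illustration (a minimal sketch, not part of the original
script), the 96 scans of 7s make up 16 blocks of 42s, so the paradigm can
be written down by hand as an events table:

.. code-block:: python

    import pandas as pd

    n_blocks, block_duration = 16, 42.0  # 8 rest and 8 active blocks
    paradigm = pd.DataFrame({
        'onset': [i * block_duration for i in range(n_blocks)],
        'duration': [block_duration] * n_blocks,
        # the session starts with a rest block
        'trial_type': ['rest', 'active'] * (n_blocks // 2),
    })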

The whole-brain :term:`BOLD`/:term:`EPI` images were acquired on a 2T Siemens
MAGNETOM Vision system. Each scan consisted of 64 contiguous slices
(a 64 x 64 x 64 grid of 3mm x 3mm x 3mm :term:`voxels<voxel>`). Acquisition of
one scan took 6.05s, with the scan-to-scan repeat time (:term:`TR`) set
arbitrarily to 7s.


To run this example, you must launch IPython via ``ipython
--matplotlib`` in a terminal, or use ``jupyter-notebook``.

Retrieving the data
-------------------

.. note:: In this tutorial, we load the data using a data downloading
          function. To input your own data, you will need to provide
          a list of paths to your own files in the ``subject_data``
          variable, as sketched below. These should abide by the
          Brain Imaging Data Structure (:term:`BIDS`) organization.
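
A minimal sketch of the "own data" case (the path and file pattern below
are purely illustrative):

.. code-block:: python

    from pathlib import Path

    # collect your own functional volumes, sorted by acquisition order
    subject_func_files = sorted(
        Path('/path/to/bids/sub-01/func').glob('*_bold.nii.gz'))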


.. code-block:: default


    from nilearn.datasets import fetch_spm_auditory
    subject_data = fetch_spm_auditory()
    subject_data.func  # print the list of names of functional images





.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none


    ['/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_004.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_005.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_006.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_007.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_008.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_009.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_010.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_011.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_012.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_013.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_014.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_015.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_016.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_017.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_018.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_019.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_020.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_021.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_022.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_023.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_024.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_025.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_026.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_027.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_028.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_029.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_030.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_031.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_032.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_033.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_034.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_035.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_036.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_037.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_038.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_039.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_040.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_041.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_042.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_043.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_044.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_045.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_046.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_047.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_048.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_049.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_050.img', 
'/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_051.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_052.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_053.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_054.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_055.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_056.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_057.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_058.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_059.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_060.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_061.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_062.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_063.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_064.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_065.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_066.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_067.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_068.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_069.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_070.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_071.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_072.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_073.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_074.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_075.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_076.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_077.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_078.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_079.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_080.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_081.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_082.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_083.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_084.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_085.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_086.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_087.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_088.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_089.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_090.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_091.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_092.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_093.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_094.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_095.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_096.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_097.img', '/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_098.img', 
'/home/nicolas/nilearn_data/spm_auditory/sub001/fM00223/fM00223_099.img']



We can display the first functional image and the subject's anatomy:


.. code-block:: default

    from nilearn.plotting import plot_stat_map, plot_anat, plot_img
    plot_img(subject_data.func[0])
    plot_anat(subject_data.anat)




.. rst-class:: sphx-glr-horizontal


    *

      .. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_001.png
          :alt: plot single subject single run
          :class: sphx-glr-multi-img

    *

      .. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_002.png
          :alt: plot single subject single run
          :class: sphx-glr-multi-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none


    <nilearn.plotting.displays.OrthoSlicer object at 0x7fbf06edde50>



Next, we concatenate all the 3D :term:`EPI` images into a single 4D image,
then we average them in order to create a background
image that will be used to display the activations:


.. code-block:: default


    from nilearn.image import concat_imgs, mean_img
    fmri_img = concat_imgs(subject_data.func)
    # note: this rebinds the name ``mean_img`` from the function to the image
    mean_img = mean_img(fmri_img)








Specifying the experimental paradigm
------------------------------------

We must now provide a description of the experiment, that is, define the
timing of the auditory stimulation and rest periods. This is typically
provided in an ``events.tsv`` file. The path of this file is
provided in the dataset.


.. code-block:: default

    import pandas as pd
    events = pd.read_table(subject_data['events'])
    events






.. raw:: html

    <div>
    <style scoped>
        .dataframe tbody tr th:only-of-type {
            vertical-align: middle;
        }

        .dataframe tbody tr th {
            vertical-align: top;
        }

        .dataframe thead th {
            text-align: right;
        }
    </style>
    <table border="1" class="dataframe">
      <thead>
        <tr style="text-align: right;">
          <th></th>
          <th>onset</th>
          <th>duration</th>
          <th>trial_type</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <th>0</th>
          <td>0.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>1</th>
          <td>42.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>2</th>
          <td>84.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>3</th>
          <td>126.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>4</th>
          <td>168.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>5</th>
          <td>210.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>6</th>
          <td>252.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>7</th>
          <td>294.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>8</th>
          <td>336.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>9</th>
          <td>378.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>10</th>
          <td>420.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>11</th>
          <td>462.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>12</th>
          <td>504.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>13</th>
          <td>546.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
        <tr>
          <th>14</th>
          <td>588.0</td>
          <td>42.0</td>
          <td>rest</td>
        </tr>
        <tr>
          <th>15</th>
          <td>630.0</td>
          <td>42.0</td>
          <td>active</td>
        </tr>
      </tbody>
    </table>
    </div>
    <br />
    <br />

Performing the GLM analysis
---------------------------

It is now time to create and estimate a ``FirstLevelModel`` object, which
will generate the *design matrix* using the information provided by the
``events`` object.


.. code-block:: default


    from nilearn.glm.first_level import FirstLevelModel








Parameters of the first-level model:

* ``t_r=7`` (s) is the repetition time of the acquisitions
* ``noise_model='ar1'`` specifies the noise covariance model: a lag-1 autoregressive dependence
* ``standardize=False`` means that we do not rescale the time series to zero mean and unit variance
* ``hrf_model='spm'`` means that we rely on the SPM "canonical hrf" model (without time or dispersion derivatives)
* ``drift_model='cosine'`` means that we model the signal drifts as slowly oscillating functions
* ``high_pass=0.01`` (Hz) defines the cutoff frequency (the inverse of the drift period)


.. code-block:: default

    fmri_glm = FirstLevelModel(t_r=7,
                               noise_model='ar1',
                               standardize=False,
                               hrf_model='spm',
                               drift_model='cosine',
                               high_pass=.01)








Now that we have specified the model, we can run it on the :term:`fMRI` image


.. code-block:: default

    fmri_glm = fmri_glm.fit(fmri_img, events)








One can inspect the design matrix (rows represent time, and
columns contain the predictors).


.. code-block:: default

    design_matrix = fmri_glm.design_matrices_[0]








Formally, we have taken the first design matrix, because the model is
implicitly meant to handle multiple runs.
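
For instance (a quick check, assuming the single-run fit above):

.. code-block:: python

    # ``design_matrices_`` holds one design matrix per run
    print(len(fmri_glm.design_matrices_))  # -> 1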


.. code-block:: default

    from nilearn.plotting import plot_design_matrix
    plot_design_matrix(design_matrix)
    import matplotlib.pyplot as plt
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_003.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img





Save the design matrix image to disk.
First create a directory where you want to write the images:


.. code-block:: default


    import os
    outdir = 'results'
    if not os.path.exists(outdir):
        os.mkdir(outdir)

    from os.path import join
    plot_design_matrix(
        design_matrix, output_file=join(outdir, 'design_matrix.png'))








The first column contains the expected response profile of regions that
are sensitive to the auditory stimulation.
Let's plot this first column:


.. code-block:: default


    plt.plot(design_matrix['active'])
    plt.xlabel('scan')
    plt.title('Expected Auditory Response')
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_004.png
    :alt: Expected Auditory Response
    :class: sphx-glr-single-img





Detecting voxels with significant effects
-----------------------------------------

To access the estimated coefficients (the betas of the :term:`GLM`), we
create one :term:`contrast` per condition, with a single '1' in the
corresponding column: the role of a :term:`contrast` is to select some
columns of the model, and potentially weight them, in order to study the
associated statistics. In a nutshell, a contrast is a weighted
combination of the estimated effects. Here we first define canonical
contrasts that consider the two effects in isolation (let's call them
"conditions"), then a :term:`contrast` that computes the difference
between these conditions.


.. code-block:: default


    from numpy import array
    # one coefficient per column of the design matrix (16 columns here)
    conditions = {
        'active': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
                         0.]),
        'rest':   array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
                         0.]),
    }
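
Writing these vectors by hand is error-prone. A more robust alternative
(a sketch, not the tutorial's method) builds one basic contrast per
column directly from the design matrix:

.. code-block:: python

    import numpy as np

    # an identity matrix has a single '1' per design-matrix column
    contrast_matrix = np.eye(design_matrix.shape[1])
    basic_contrasts = {column: contrast_matrix[i]
                       for i, column in enumerate(design_matrix.columns)}
    # basic_contrasts['active'] matches conditions['active'] above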








We can then compare the two conditions 'active' and 'rest' by
defining the corresponding :term:`contrast`:


.. code-block:: default


    active_minus_rest = conditions['active'] - conditions['rest']








Let's look at it: plot the coefficients of the :term:`contrast`, indexed by
the names of the columns of the design matrix.


.. code-block:: default


    from nilearn.plotting import plot_contrast_matrix
    plot_contrast_matrix(active_minus_rest, design_matrix=design_matrix)




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_005.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none


    <AxesSubplot:label='conditions'>



Below, we compute the estimated effect. It is in :term:`BOLD` signal units,
but it comes with no statistical guarantees, because it does not take
the associated variance into account.


.. code-block:: default


    eff_map = fmri_glm.compute_contrast(active_minus_rest,
                                        output_type='effect_size')








In order to get statistical significance, we form a t-statistic and
directly convert it to z-scale. The z-scale means that the values
are rescaled so that, if there were no effect in the data, they
would follow a standard Gaussian distribution (mean=0, variance=1)
across voxels.


.. code-block:: default


    z_map = fmri_glm.compute_contrast(active_minus_rest,
                                      output_type='z_score')








Plot the thresholded z-score map
---------------------------------

We display it on top of the average
functional image of the series (this could also be the subject's
anatomical image). We arbitrarily use a threshold of 3.0 in z-scale.
We'll see later how to use corrected thresholds. We show 3
axial views, with ``display_mode='z'`` and ``cut_coords=3``.


.. code-block:: default


    plot_stat_map(z_map, bg_img=mean_img, threshold=3.0,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (Z>3)')
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_006.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img





Statistical significance testing. One should worry about the
statistical validity of the procedure: here we used an arbitrary
threshold of 3.0, but the threshold should provide some guarantees on
the risk of false detections (aka type-1 errors in statistics).
One suggestion is to control the false positive rate (:term:`fpr<FPR correction>`, denoted by
alpha) at a certain level, e.g. 0.001: this means that there is a 0.1% chance
of declaring an inactive :term:`voxel` active.


.. code-block:: default


    from nilearn.glm import threshold_stats_img
    _, threshold = threshold_stats_img(z_map, alpha=.001, height_control='fpr')
    print('Uncorrected p<0.001 threshold: %.3f' % threshold)
    plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (p<0.001)')
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_007.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Uncorrected p<0.001 threshold: 3.291
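
As a sanity check (a sketch, assuming nilearn's 'fpr' control is two-sided,
which the value above suggests), this threshold is simply the
standard-normal quantile for alpha / 2:

.. code-block:: python

    from scipy.stats import norm

    # two-sided control at alpha=0.001 puts alpha / 2 in each tail
    print(norm.isf(0.001 / 2))  # ~3.291, matching the threshold above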




The problem is that with this you expect 0.001 * n_voxels voxels to show
up while they're not active, i.e. tens to hundreds of voxels. A more
conservative solution is to control the family-wise error rate,
i.e. the probability of making at least one false detection, say at
5%. For that we use the so-called Bonferroni correction.
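
The arithmetic behind Bonferroni is easy to sketch (the voxel count below
is an illustrative order of magnitude, not the actual mask size):

.. code-block:: python

    from scipy.stats import norm

    n_voxels = 50000  # illustrative number of in-brain voxels
    # each voxel is tested at alpha / n_voxels (two-sided, hence / 2)
    print(norm.isf(0.05 / n_voxels / 2))  # z threshold around 4.9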


.. code-block:: default


    _, threshold = threshold_stats_img(
        z_map, alpha=.05, height_control='bonferroni')
    print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold)
    plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (p<0.05, corrected)')
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_008.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Bonferroni-corrected, p<0.05 threshold: 4.934




This is quite conservative indeed! A popular alternative is to
control the expected proportion of false discoveries among
detections: this is called the false discovery rate (FDR).
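
The classical way to achieve this control is the Benjamini-Hochberg
procedure; here is an illustrative sketch (not nilearn's internal
implementation):

.. code-block:: python

    import numpy as np
    from scipy.stats import norm

    def bh_threshold(z_values, alpha=0.05):
        """Largest p-value p_(k) such that p_(k) <= alpha * k / m."""
        p_sorted = np.sort(norm.sf(np.abs(z_values)) * 2)  # two-sided p
        m = len(p_sorted)
        below = p_sorted <= alpha * np.arange(1, m + 1) / m
        return p_sorted[below].max() if below.any() else None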


.. code-block:: default


    _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr')
    print('False Discovery rate = 0.05 threshold: %.3f' % threshold)
    plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (fdr=0.05)')
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_009.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    False Discovery rate = 0.05 threshold: 2.904




Finally, people like to discard isolated voxels (aka "small
clusters") from these images. It is possible to generate a
thresholded map with small clusters removed by providing a
``cluster_threshold`` argument. Here, clusters smaller than 10 voxels
will be discarded.


.. code-block:: default


    clean_map, threshold = threshold_stats_img(
        z_map, alpha=.05, height_control='fdr', cluster_threshold=10)
    plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Active minus Rest (fdr=0.05), clusters > 10 voxels')
    plt.show()






.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_010.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img





We can save the effect and z-score maps to disk.


.. code-block:: default

    z_map.to_filename(join(outdir, 'active_vs_rest_z_map.nii.gz'))
    eff_map.to_filename(join(outdir, 'active_vs_rest_eff_map.nii.gz'))








We can furthermore extract and report the positions of the detected
activation peaks in a table.


.. code-block:: default


    from nilearn.reporting import get_clusters_table
    table = get_clusters_table(z_map, stat_threshold=threshold,
                               cluster_threshold=20)
    table






.. raw:: html

    <div>
    <style scoped>
        .dataframe tbody tr th:only-of-type {
            vertical-align: middle;
        }

        .dataframe tbody tr th {
            vertical-align: top;
        }

        .dataframe thead th {
            text-align: right;
        }
    </style>
    <table border="1" class="dataframe">
      <thead>
        <tr style="text-align: right;">
          <th></th>
          <th>Cluster ID</th>
          <th>X</th>
          <th>Y</th>
          <th>Z</th>
          <th>Peak Stat</th>
          <th>Cluster Size (mm3)</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <th>0</th>
          <td>1</td>
          <td>-60.0</td>
          <td>-6.0</td>
          <td>42.0</td>
          <td>9.811979</td>
          <td>4050</td>
        </tr>
        <tr>
          <th>1</th>
          <td>1a</td>
          <td>-63.0</td>
          <td>6.0</td>
          <td>36.0</td>
          <td>8.601922</td>
          <td></td>
        </tr>
        <tr>
          <th>2</th>
          <td>1b</td>
          <td>-63.0</td>
          <td>0.0</td>
          <td>42.0</td>
          <td>8.435063</td>
          <td></td>
        </tr>
        <tr>
          <th>3</th>
          <td>1c</td>
          <td>-48.0</td>
          <td>-15.0</td>
          <td>39.0</td>
          <td>8.364058</td>
          <td></td>
        </tr>
        <tr>
          <th>4</th>
          <td>2</td>
          <td>60.0</td>
          <td>0.0</td>
          <td>36.0</td>
          <td>9.605128</td>
          <td>1512</td>
        </tr>
        <tr>
          <th>5</th>
          <td>2a</td>
          <td>45.0</td>
          <td>-12.0</td>
          <td>42.0</td>
          <td>7.590200</td>
          <td></td>
        </tr>
        <tr>
          <th>6</th>
          <td>3</td>
          <td>63.0</td>
          <td>12.0</td>
          <td>27.0</td>
          <td>8.253889</td>
          <td>972</td>
        </tr>
        <tr>
          <th>7</th>
          <td>3a</td>
          <td>51.0</td>
          <td>3.0</td>
          <td>30.0</td>
          <td>6.968355</td>
          <td></td>
        </tr>
        <tr>
          <th>8</th>
          <td>3b</td>
          <td>54.0</td>
          <td>9.0</td>
          <td>39.0</td>
          <td>3.565609</td>
          <td></td>
        </tr>
        <tr>
          <th>9</th>
          <td>4</td>
          <td>36.0</td>
          <td>-3.0</td>
          <td>15.0</td>
          <td>8.087451</td>
          <td>1188</td>
        </tr>
        <tr>
          <th>10</th>
          <td>5</td>
          <td>-63.0</td>
          <td>-18.0</td>
          <td>27.0</td>
          <td>5.807510</td>
          <td>594</td>
        </tr>
        <tr>
          <th>11</th>
          <td>5a</td>
          <td>-63.0</td>
          <td>-21.0</td>
          <td>42.0</td>
          <td>5.646352</td>
          <td></td>
        </tr>
        <tr>
          <th>12</th>
          <td>5b</td>
          <td>-60.0</td>
          <td>-21.0</td>
          <td>33.0</td>
          <td>5.416271</td>
          <td></td>
        </tr>
        <tr>
          <th>13</th>
          <td>6</td>
          <td>45.0</td>
          <td>-18.0</td>
          <td>57.0</td>
          <td>5.710963</td>
          <td>702</td>
        </tr>
        <tr>
          <th>14</th>
          <td>6a</td>
          <td>36.0</td>
          <td>-12.0</td>
          <td>57.0</td>
          <td>5.633746</td>
          <td></td>
        </tr>
        <tr>
          <th>15</th>
          <td>6b</td>
          <td>30.0</td>
          <td>-9.0</td>
          <td>66.0</td>
          <td>4.796135</td>
          <td></td>
        </tr>
        <tr>
          <th>16</th>
          <td>6c</td>
          <td>36.0</td>
          <td>-15.0</td>
          <td>69.0</td>
          <td>4.254544</td>
          <td></td>
        </tr>
        <tr>
          <th>17</th>
          <td>7</td>
          <td>-12.0</td>
          <td>-15.0</td>
          <td>93.0</td>
          <td>5.522477</td>
          <td>621</td>
        </tr>
        <tr>
          <th>18</th>
          <td>7a</td>
          <td>-6.0</td>
          <td>-15.0</td>
          <td>99.0</td>
          <td>4.713852</td>
          <td></td>
        </tr>
        <tr>
          <th>19</th>
          <td>7b</td>
          <td>-3.0</td>
          <td>-18.0</td>
          <td>90.0</td>
          <td>4.270733</td>
          <td></td>
        </tr>
        <tr>
          <th>20</th>
          <td>7c</td>
          <td>-18.0</td>
          <td>-12.0</td>
          <td>96.0</td>
          <td>4.085568</td>
          <td></td>
        </tr>
        <tr>
          <th>21</th>
          <td>8</td>
          <td>-24.0</td>
          <td>-24.0</td>
          <td>90.0</td>
          <td>5.331806</td>
          <td>648</td>
        </tr>
        <tr>
          <th>22</th>
          <td>8a</td>
          <td>-36.0</td>
          <td>-24.0</td>
          <td>90.0</td>
          <td>4.700088</td>
          <td></td>
        </tr>
        <tr>
          <th>23</th>
          <td>8b</td>
          <td>-12.0</td>
          <td>-24.0</td>
          <td>90.0</td>
          <td>4.037845</td>
          <td></td>
        </tr>
        <tr>
          <th>24</th>
          <td>8c</td>
          <td>-36.0</td>
          <td>-24.0</td>
          <td>84.0</td>
          <td>3.527477</td>
          <td></td>
        </tr>
        <tr>
          <th>25</th>
          <td>9</td>
          <td>-15.0</td>
          <td>-60.0</td>
          <td>66.0</td>
          <td>4.835099</td>
          <td>837</td>
        </tr>
        <tr>
          <th>26</th>
          <td>9a</td>
          <td>-15.0</td>
          <td>-60.0</td>
          <td>57.0</td>
          <td>4.615642</td>
          <td></td>
        </tr>
        <tr>
          <th>27</th>
          <td>9b</td>
          <td>-6.0</td>
          <td>-63.0</td>
          <td>63.0</td>
          <td>4.091568</td>
          <td></td>
        </tr>
      </tbody>
    </table>
    </div>
    <br />
    <br />

This table can be saved for future use.


.. code-block:: default


    table.to_csv(join(outdir, 'table.csv'))








Performing an F-test
---------------------

"active vs rest" is a typical t test: condition versus
baseline. Another popular type of test is an F test in which one
seeks whether a certain combination of conditions (possibly two-,
three- or higher-dimensional) explains a significant proportion of
the signal.  Here one might for instance test which voxels are well
explained by the combination of the active and rest condition.

Specify the contrast and compute the corresponding map. Actually, the
contrast specification is done exactly the same way as for t-
contrasts.


.. code-block:: default


    import numpy as np
    effects_of_interest = np.vstack((conditions['active'], conditions['rest']))
    plot_contrast_matrix(effects_of_interest, design_matrix)
    plt.show()

    z_map = fmri_glm.compute_contrast(effects_of_interest,
                                      output_type='z_score')




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_011.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img





Note that the statistic has been converted to a z-variable, which
makes it easier to represent.


.. code-block:: default


    clean_map, threshold = threshold_stats_img(
        z_map, alpha=.05, height_control='fdr', cluster_threshold=10)
    plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,
                  display_mode='z', cut_coords=3, black_bg=True,
                  title='Effects of interest (fdr=0.05), clusters > 10 voxels')
    plt.show()




.. image:: /auto_examples/images/sphx_glr_plot_single_subject_single_run_012.png
    :alt: plot single subject single run
    :class: sphx-glr-single-img





Oops, there is a lot of non-neural signal in there (ventricles, arteries)...


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes  12.819 seconds)


.. _sphx_glr_download_auto_examples_plot_single_subject_single_run.py:


.. only:: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example


  .. container:: binder-badge

    .. image:: images/binder_badge_logo.svg
      :target: https://mybinder.org/v2/gh/nilearn/nilearn.github.io/main?filepath=examples/auto_examples/plot_single_subject_single_run.ipynb
      :alt: Launch binder
      :width: 150 px


  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: plot_single_subject_single_run.py <plot_single_subject_single_run.py>`



  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: plot_single_subject_single_run.ipynb <plot_single_subject_single_run.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_