Intro to GLM Analysis: a single-run, single-subject fMRI dataset¶
In this tutorial, we use a General Linear Model (GLM) to compare the fMRI signal during periods of auditory stimulation versus periods of rest.
Warning
The analysis described here is performed in the native space, directly on the original EPI scans without any spatial or temporal preprocessing. More sensitive results would likely be obtained on the corrected, spatially normalized and smoothed images.
Retrieving the data¶
Note
In this tutorial, we load the data using a data downloading function. To input your own data, you will need to provide a list of paths to your own files in the subject_data variable. These should follow the Brain Imaging Data Structure (BIDS) organization.
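As an illustrative sketch, a manual specification could look like this (the paths below are hypothetical placeholders, not files shipped with this tutorial):
from pathlib import Path
# Hypothetical BIDS dataset root; replace with your own location.
bids_root = Path("/path/to/your_bids_dataset")
# Functional run and events file, following the BIDS naming scheme.
my_func = [bids_root / "sub-01" / "func" / "sub-01_task-auditory_bold.nii.gz"]
my_events = bids_root / "sub-01" / "func" / "sub-01_task-auditory_events.tsv"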
from nilearn.datasets import fetch_spm_auditory
subject_data = fetch_spm_auditory()
[get_dataset_dir] Dataset created in /home/runner/nilearn_data/spm_auditory
[fetch_spm_auditory] Data absent, downloading...
[fetch_single_file] Downloading data from
https://www.fil.ion.ucl.ac.uk/spm/download/data/MoAEpilot/MoAEpilot.bids.zip ...
[fetch_single_file] ...done. (8 seconds, 0 min)
[uncompress_file] Extracting data from
/home/runner/nilearn_data/spm_auditory/MoAEpilot.bids.zip...
[uncompress_file] .. done.
Inspecting the dataset¶
# Print the dataset description
print(subject_data.description)
.. _spm_auditory_dataset:
SPM auditory dataset
====================
Access
------
See :func:`nilearn.datasets.fetch_spm_auditory`.
Notes
-----
These whole brain BOLD/EPI images were acquired on a modified 2T Siemens MAGNETOM Vision system.
Each acquisition consisted of 64 contiguous slices (64x64x64 3mm x 3mm x 3mm voxels).
Acquisition took 6.05 seconds, with the scan to scan repeat time (RT) set arbitrarily to 7 seconds.
96 acquisitions were made (RT= 7 seconds), in blocks of 6, giving 16 blocks of 42 seconds.
The condition for successive blocks alternated between rest and auditory stimulation,
starting with rest.
Auditory stimulation was bi-syllabic words presented binaurally at a rate of 60 per minute.
A structural image was also acquired.
.. warning::
This dataset is a raw BIDS dataset.
The data are in the native space
and no spatial or temporal preprocessing has been performed.
This experiment was conducted by Geraint Rees
under the direction of Karl Friston and the FIL methods group.
See :footcite:t:`spm_auditory`.
Content
-------
:'func': Paths to functional images
:'anat': Path to anat image
:'events': Path to events.tsv
:'description': Data description
References
----------
.. footbibliography::
License
-------
The purpose was to explore new equipment and techniques.
As such it has not been formally written up,
and is freely available for personal education and evaluation purposes.
Those wishing to use these data for other purposes,
including published evaluation of methods,
should contact the methods group at the Wellcome Department of Cognitive Neurology.
# Print the path of the functional image
subject_data.func[0]
'/home/runner/nilearn_data/spm_auditory/MoAEpilot/sub-01/func/sub-01_task-auditory_bold.nii'
We can display the mean functional image and the subject’s anatomy:
from nilearn.image import mean_img
from nilearn.plotting import plot_anat, plot_img, plot_stat_map, show
fmri_img = subject_data.func
mean_epi_img = mean_img(subject_data.func[0], copy_header=True)
plot_img(mean_epi_img, colorbar=True, cbar_tick_format="%i", cmap="gray")
plot_anat(subject_data.anat, colorbar=True, cbar_tick_format="%i")
show()
Specifying the experimental paradigm¶
We must now provide a description of the experiment, that is, define the timing of the auditory stimulation and rest periods. This is typically provided in an events.tsv file. The path of this file is provided in the dataset.
import pandas as pd
events = pd.read_table(subject_data.events)
events
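For BIDS events files, nilearn expects at least the columns onset, duration, and trial_type (times in seconds). As an illustrative sketch, an equivalent table for the alternating 42-second blocks described above could be built by hand (values derived from the dataset description, not read from the file):
# Listening blocks start after an initial rest block and alternate with rest.
onsets = [42.0 * i for i in range(1, 16, 2)]
events_manual = pd.DataFrame(
    {
        "onset": onsets,
        "duration": [42.0] * len(onsets),
        "trial_type": ["listening"] * len(onsets),
    }
)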
Performing the GLM analysis¶
It is now time to create and estimate a FirstLevelModel object, which will generate the design matrix using the information provided by the events object.
from nilearn.glm.first_level import FirstLevelModel
Parameters of the first-level model:
- t_r=7 (s) is the repetition time of the acquisitions
- noise_model='ar1' specifies the noise covariance model: a lag-1 dependence
- standardize=False means that we do not want to rescale the time series to mean 0, variance 1
- hrf_model='spm' means that we rely on the SPM “canonical hrf” model (without time or dispersion derivatives)
- drift_model='cosine' means that we model the signal drifts as slowly oscillating time functions
- high_pass=0.01 (Hz) defines the cutoff frequency (inverse of the time period)
fmri_glm = FirstLevelModel(
t_r=7,
noise_model="ar1",
standardize=False,
hrf_model="spm",
drift_model="cosine",
high_pass=0.01,
)
Now that we have specified the model, we can run it on the fMRI image:
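# Fit the GLM to the fMRI run; the design matrix is built from the events.
fmri_glm = fmri_glm.fit(fmri_img, events)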
One can inspect the design matrix (rows represent time, and columns contain the predictors). Formally, we take the first design matrix, because the model is implicitly designed to handle multiple runs:
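# The fitted model stores one design matrix per run; take the first (only) one.
design_matrix = fmri_glm.design_matrices_[0]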
from nilearn.plotting import plot_design_matrix
plot_design_matrix(design_matrix)
show()
Save the design matrix image to disk. First create a directory where you want to write the images:
from pathlib import Path
output_dir = Path.cwd() / "results" / "plot_single_subject_single_run"
output_dir.mkdir(exist_ok=True, parents=True)
print(f"Output will be saved to: {output_dir}")
plot_design_matrix(design_matrix, output_file=output_dir / "design_matrix.png")
Output will be saved to: /home/runner/work/nilearn/nilearn/examples/00_tutorials/results/plot_single_subject_single_run
The first column contains the expected response profile of regions that are sensitive to the auditory stimulation. Let's plot this first column:
import matplotlib.pyplot as plt
plt.plot(design_matrix["listening"])
plt.xlabel("scan")
plt.title("Expected Auditory Response")
show()
Detecting voxels with significant effects¶
To access the estimated coefficients (the betas of the GLM), we define a contrast with a single ‘1’ in the column of the condition of interest. The role of the contrast is to select some columns of the model (and potentially weight them) and study the associated statistics. In a nutshell, a contrast is a weighted combination of the estimated effects. Here we define a canonical contrast that considers the effect of the stimulation in isolation.
Note
Here the baseline is implicit, so passing a value of 1 for the first column gives the contrast: listening > rest
import numpy as np
n_regressors = design_matrix.shape[1]
activation = np.zeros(n_regressors)
activation[0] = 1
Let’s look at it: plot the coefficients of the contrast, indexed by the names of the columns of the design matrix.
from nilearn.plotting import plot_contrast_matrix
plot_contrast_matrix(contrast_def=activation, design_matrix=design_matrix)
Below, we compute the ‘estimated effect’. It is expressed in BOLD signal units, but it carries no statistical guarantee, because it does not take the associated variance into account.
eff_map = fmri_glm.compute_contrast(activation, output_type="effect_size")
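Note that compute_contrast also accepts a contrast written as a string over the design-matrix column names; the array form above just makes the weights explicit. An equivalent call would look like this:
# Same contrast written as a string over column names:
# eff_map = fmri_glm.compute_contrast("listening", output_type="effect_size")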
To assess statistical significance, we form a t-statistic and directly convert it to the z-scale. The z-scale means that the values are scaled so that, if there were no effect in the data, they would match a standard Gaussian distribution (mean 0, variance 1) across voxels.
z_map = fmri_glm.compute_contrast(activation, output_type="z_score")
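Under the hood, this amounts to mapping each t-value's p-value onto the corresponding standard-normal quantile. A minimal sketch of that conversion (an illustration of the idea, not nilearn's internal code):
from scipy import stats

def t_to_z(t_values, dof):
    # One-sided p-value of the t statistic, then the matching normal quantile.
    p_values = stats.t.sf(t_values, dof)
    return stats.norm.isf(p_values)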
Plot thresholded z-score map¶
We display it on top of the average functional image of the series (this could also be the subject's anatomical image). We arbitrarily use a threshold of 3.0 in z-scale; we will see later how to use corrected thresholds. We show 3 axial views, with display_mode='z' and cut_coords=3.
plotting_config = {
"bg_img": mean_img,
"display_mode": "z",
"cut_coords": 3,
"black_bg": True,
}
plot_stat_map(
z_map,
threshold=3,
title="listening > rest (|Z|>3)",
figure=plt.figure(figsize=(10, 4)),
**plotting_config,
)
show()
Note
Notice how the visualization above shows both ‘activated’ voxels with Z > 3 and ‘deactivated’ voxels with Z < -3. In the rest of this example, we will show only the activated voxels, by using one-sided tests.
Statistical significance testing¶
One should worry about the statistical validity of the procedure: here we used an arbitrary threshold of 3.0, but the threshold should provide some guarantee on the risk of false detections (also known as type-1 errors in statistics). One option is to control the false positive rate (FPR, denoted by alpha) at a certain level, e.g. 0.001: this means that there is a 0.1% chance of declaring an inactive voxel active.
from nilearn.glm import threshold_stats_img
clean_map, threshold = threshold_stats_img(
z_map,
alpha=0.001,
height_control="fpr",
two_sided=False, # using a one-sided test
)
# Let's use a sequential colormap as we will only display positive values.
plotting_config["cmap"] = "black_red"
plot_stat_map(
clean_map,
threshold=threshold,
title=(
"listening > rest (Uncorrected p<0.001; "
f"threshold: {threshold:.3f})"
),
figure=plt.figure(figsize=(10, 4)),
**plotting_config,
)
show()
The problem is that, with this threshold, we expect about 0.001 * n_voxels voxels to be declared active even though they are not, i.e. tens to hundreds of voxels. A more conservative solution is to control the family-wise error rate, i.e. the probability of making one or more false detections, say at 5%. For that we use the so-called Bonferroni correction.
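The Bonferroni threshold itself is straightforward: divide alpha by the number of tests and take the corresponding one-sided normal quantile. A small sketch of the computation (the voxel count below is illustrative, not this dataset's actual count):
from scipy.stats import norm

alpha = 0.05
n_voxels = 50000  # illustrative brain-mask size
z_bonferroni = norm.isf(alpha / n_voxels)  # about 4.75 here
In practice, threshold_stats_img performs this correction for us: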
clean_map, threshold = threshold_stats_img(
z_map, alpha=0.05, height_control="bonferroni", two_sided=False
)
plot_stat_map(
clean_map,
threshold=threshold,
title=(
"listening > rest (p<0.05 Bonferroni-corrected, "
f"threshold: {threshold:.3f})"
),
figure=plt.figure(figsize=(10, 4)),
**plotting_config,
)
show()
This is quite conservative indeed! A popular alternative is to control the expected proportion of false discoveries among detections: the false discovery rate (FDR).
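FDR control classically follows the Benjamini-Hochberg procedure: sort the p-values and keep those below a line of slope alpha / n. A minimal sketch of the idea (not nilearn's exact implementation):
import numpy as np

def benjamini_hochberg_cutoff(p_values, alpha=0.05):
    # Find the largest sorted p-value p_(k) satisfying p_(k) <= alpha * k / n.
    p_sorted = np.sort(p_values)
    n = p_sorted.size
    passing = p_sorted <= alpha * np.arange(1, n + 1) / n
    return p_sorted[passing].max() if passing.any() else None
Again, threshold_stats_img handles this for us: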
clean_map, threshold = threshold_stats_img(
z_map, alpha=0.05, height_control="fdr", two_sided=False
)
plot_stat_map(
clean_map,
threshold=threshold,
title=(
"listening > rest (p<0.05 FDR-corrected; "
f"threshold: {threshold:.3f})"
),
figure=plt.figure(figsize=(10, 4)),
**plotting_config,
)
show()
Finally, one often discards isolated voxels (aka “small clusters”) from these images. A thresholded map with small clusters removed can be generated by providing a cluster_threshold argument; here, clusters smaller than 10 voxels will be discarded.
clean_map, threshold = threshold_stats_img(
z_map,
alpha=0.05,
height_control="fdr",
cluster_threshold=10,
two_sided=False,
)
plot_stat_map(
clean_map,
threshold=threshold,
title=(
"listening > rest "
f"(p<0.05 FDR-corrected; threshold: {threshold:.3f}; "
"clusters > 10 voxels)"
),
figure=plt.figure(figsize=(10, 4)),
**plotting_config,
)
show()
We can save the effect and z-score maps to disk.
z_map.to_filename(output_dir / "listening_gt_rest_z_map.nii.gz")
eff_map.to_filename(output_dir / "listening_gt_rest_eff_map.nii.gz")
We can furthermore extract and report the positions of the detected clusters in a table.
from nilearn.reporting import get_clusters_table
table = get_clusters_table(
z_map, stat_threshold=threshold, cluster_threshold=20
)
table
This table can be saved for future use.
table.to_csv(output_dir / "table.csv")
Performing an F-test¶
“listening > rest” is a typical t-test: condition versus baseline. Another popular type of test is the F-test, in which one asks whether a certain combination of conditions (possibly two-, three- or higher-dimensional) explains a significant proportion of the signal. Here, one might for instance test which voxels are well explained by the combination of being more or less active than during rest.
Note
As opposed to t-tests, the images produced by F-tests only contain positive values.
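For a model with several conditions, an F contrast is naturally specified as a 2D array with one row per effect of interest. A hypothetical sketch (this dataset has a single condition, so the second row is purely illustrative):
# Hypothetical F contrast spanning two condition columns at once.
effects_of_interest = np.zeros((2, n_regressors))
effects_of_interest[0, 0] = 1  # first condition column
effects_of_interest[1, 1] = 1  # a second condition column, if there were one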
Specify the contrast and compute the corresponding map. The contrast specification is done exactly the same way as for t-contrasts:
z_map = fmri_glm.compute_contrast(
activation,
output_type="z_score",
stat_type="F", # set stat_type to 'F' to perform an F test
)
Note that the statistic has been converted to a z-variable, which makes it easier to represent.
clean_map, threshold = threshold_stats_img(
z_map,
alpha=0.05,
height_control="fdr",
cluster_threshold=10,
two_sided=False,
)
plot_stat_map(
clean_map,
threshold=threshold,
title="Effects of interest (fdr=0.05), clusters > 10 voxels",
figure=plt.figure(figsize=(10, 4)),
**plotting_config,
)
show()