Analysis of an fMRI dataset with a Finite Impulse Response (FIR) model

FIR models are used to estimate the hemodynamic response non-parametrically. The example below shows that they support statistical inference even on fast event-related fMRI datasets.
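To make the idea concrete, here is a minimal, self-contained sketch (with synthetic onsets, not the Localizer data) of how FIR regressors are built: each delay gets its own "stick" regressor, so the fitted coefficients sample the response curve directly instead of assuming a fixed HRF shape.

```python
import numpy as np

n_scans = 20
onset_scans = np.array([2, 9, 14])  # scans at which the condition occurs (made up)
fir_delays = [1, 2, 3]

# One column per delay: 1 at (onset scan + delay), 0 elsewhere.
regressors = np.zeros((n_scans, len(fir_delays)))
for j, delay in enumerate(fir_delays):
    regressors[onset_scans + delay, j] = 1.0

# Each column models the signal "delay" scans after stimulus onset;
# the fitted coefficients trace the hemodynamic response sample by sample.
print(regressors[:6])
```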

Here, we demonstrate the use of a FIR model with 3 lags, computing 4 contrasts from a single-subject dataset, the “Neurospin Localizer”. It is a fast event-related design: during 5 minutes, 80 events of the following types are presented: [‘audio_computation’, ‘audio_left_hand_button_press’, ‘audio_right_hand_button_press’, ‘horizontal_checkerboard’, ‘sentence_listening’, ‘sentence_reading’, ‘vertical_checkerboard’, ‘visual_computation’, ‘visual_left_hand_button_press’, ‘visual_right_hand_button_press’]

First, we fetch the localizer data.

import pandas as pd

from nilearn.datasets import func

data = func.fetch_localizer_first_level()
fmri_img = data.epi_img
t_r = 2.4  # repetition time of the acquisition, in seconds
events_file = data["events"]
events = pd.read_table(events_file)

We now turn to Finite Impulse Response (FIR) models: the hrf is treated as an arbitrary function sampled at a few lags after stimulus onset. In the present case, given that the number of conditions is high, we should use a simple FIR model.

Concretely, we set hrf_model to ‘fir’ and fir_delays to [1, 2, 3] (scans), corresponding to a 3-step function on the [1 * t_r, 4 * t_r] seconds interval.

(Figure: the FIR design matrix, with conditions on the x-axis and scan number on the y-axis.)

We have to adapt the contrast specification: the BOLD response of each condition is characterized by the sum across its three time lags. It’s a bit hairy, sorry, but this is the price to pay for flexibility…

import numpy as np

# One elementary contrast per design-matrix column.
contrast_matrix = np.eye(design_matrix.shape[1])
contrasts = {
    column: contrast_matrix[i]
    for i, column in enumerate(design_matrix.columns)
}

# For each condition, sum the contrasts of its FIR-delay columns
# (their names all start with the condition name).
conditions = events.trial_type.unique()
for condition in conditions:
    contrasts[condition] = np.sum(
        [
            contrasts[name]
            for name in design_matrix.columns
            if name[: len(condition)] == condition
        ],
        0,
    )

# Group the conditions by stimulation modality.
contrasts["audio"] = np.sum(
    [
        contrasts[name]
        for name in [
            "audio_left_hand_button_press",
            "audio_right_hand_button_press",
            "audio_computation",
            "sentence_listening",
        ]
    ],
    0,
)
contrasts["video"] = np.sum(
    [
        contrasts[name]
        for name in [
            "visual_left_hand_button_press",
            "visual_right_hand_button_press",
            "visual_computation",
            "sentence_reading",
        ]
    ],
    0,
)

contrasts["computation"] = (
    contrasts["audio_computation"] + contrasts["visual_computation"]
)
contrasts["sentences"] = (
    contrasts["sentence_listening"] + contrasts["sentence_reading"]
)

# Finally, keep only the four contrasts of interest.
contrasts = {
    "left-right": (
        contrasts["visual_left_hand_button_press"]
        + contrasts["audio_left_hand_button_press"]
        - contrasts["visual_right_hand_button_press"]
        - contrasts["audio_right_hand_button_press"]
    ),
    "H-V": (
        contrasts["horizontal_checkerboard"]
        - contrasts["vertical_checkerboard"]
    ),
    "audio-video": contrasts["audio"] - contrasts["video"],
    "sentences-computation": (
        contrasts["sentences"] - contrasts["computation"]
    ),
}

Take a look at the contrasts.

(Figure: the four contrast vectors, plotted against the design-matrix conditions.)

Take a breath.

We can now proceed by estimating the contrasts and displaying them.

import matplotlib.pyplot as plt

from nilearn.plotting import plot_stat_map

fig = plt.figure(figsize=(11, 3))
for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
    ax = plt.subplot(1, len(contrasts), 1 + index)
    z_map = first_level_model.compute_contrast(
        contrast_val, output_type="z_score"
    )
    plot_stat_map(
        z_map,
        display_mode="z",
        threshold=3.0,
        title=contrast_id,
        axes=ax,
    )
plt.show()
(Figure: thresholded z-maps for the four contrasts.)

The result is acceptable. Note that we are asking many questions of a small dataset, which nevertheless has a relatively large number of experimental conditions.

Total running time of the script: (0 minutes 21.523 seconds)

Estimated memory usage: 194 MB

Gallery generated by Sphinx-Gallery