Searchlight analysis of face vs house recognition#

Searchlight analysis requires fitting a classifier a large number of times, so it is an intrinsically slow method. To speed up computation, in this example the Searchlight is run on only one slice of the fMRI data (see the generated figures).
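The searchlight idea itself is simple to state: for every voxel kept in the process mask, collect the voxels within a given radius and cross-validate a classifier on just those features. A minimal toy sketch of that idea (not nilearn's implementation: a 1D "brain" with a window instead of a sphere, and random data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_toy = rng.normal(size=(40, 10))    # 40 samples, 10 "voxels"
y_toy = rng.integers(0, 2, size=40)  # toy binary labels
radius = 1                           # neighbors kept on each side

sl_scores = []
for center in range(X_toy.shape[1]):
    lo, hi = max(0, center - radius), min(X_toy.shape[1], center + radius + 1)
    sphere = X_toy[:, lo:hi]         # features inside the "sphere"
    sl_scores.append(
        cross_val_score(LogisticRegression(), sphere, y_toy, cv=4).mean()
    )

print(len(sl_scores))  # one mean cross-validated score per voxel -> 10
```

This also shows why the method is slow: one full cross-validated fit per voxel.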

Note

If you are using a Nilearn version older than 0.9.0, you should either upgrade it or import maskers from the input_data module instead of the maskers module.

That is, in the following example you should manually replace all occurrences of:

from nilearn.maskers import NiftiMasker

with:

from nilearn.input_data import NiftiMasker

Load Haxby dataset#

import pandas as pd
from nilearn import datasets
from nilearn.image import get_data, load_img, new_img_like

# We fetch the 2nd subject from the haxby datasets (which is the default)
haxby_dataset = datasets.fetch_haxby()

# print basic information on the dataset
print(f"Anatomical nifti image (3D) is located at: {haxby_dataset.mask}")
print(f"Functional nifti image (4D) is located at: {haxby_dataset.func[0]}")

fmri_filename = haxby_dataset.func[0]
fmri_img = load_img(fmri_filename)
labels = pd.read_csv(haxby_dataset.session_target[0], sep=" ")
y = labels["labels"]
session = labels["chunks"]
Anatomical nifti image (3D) is located at: /home/runner/work/nilearn/nilearn/nilearn_data/haxby2001/mask.nii.gz
Functional nifti image (4D) is located at: /home/runner/work/nilearn/nilearn/nilearn_data/haxby2001/subj2/bold.nii.gz

Restrict to faces and houses#
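No code for this step is shown on this page. In the full nilearn example, a boolean condition mask is built from the labels, the 4D image is restricted to the matching volumes with nilearn.image.index_img, and the targets are masked accordingly. The masking idea, sketched on toy labels (hypothetical values; the real column also contains conditions such as "rest" and "scissors"):

```python
import pandas as pd

# Toy labels standing in for the Haxby "labels" column (hypothetical values)
y_toy = pd.Series(["rest", "face", "house", "scissors", "face", "house"])

# Boolean mask keeping only the two conditions of interest
condition_mask = y_toy.isin(["face", "house"])
print(y_toy[condition_mask].tolist())  # ['face', 'house', 'face', 'house']

# On the real data, the equivalent step would be:
#   condition_mask = y.isin(["face", "house"])
#   fmri_img = index_img(fmri_img, condition_mask)
#   y, session = y[condition_mask], session[condition_mask]
```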

Prepare masks#

  • mask_img is the original mask

  • process_mask_img is a subset of mask_img: it contains the voxels that should be processed (we keep only one axial slice, z index 29, and the back of the brain, to speed up computation)

import numpy as np

mask_img = load_img(haxby_dataset.mask)

# .astype() makes a copy; use int32 so the mask stays compatible with tools
# such as FSL and SPM, which cannot read int64 Nifti images.
process_mask = get_data(mask_img).astype(np.int32)
picked_slice = 29
process_mask[..., (picked_slice + 1) :] = 0
process_mask[..., :picked_slice] = 0
process_mask[:, 30:] = 0
process_mask_img = new_img_like(mask_img, process_mask)
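The effect of the three slicing assignments above can be checked on a toy volume (a hypothetical 4x4x4 all-ones mask):

```python
import numpy as np

m = np.ones((4, 4, 4), dtype=np.int32)  # toy mask, axes (x, y, z)
zslice = 2
m[..., zslice + 1 :] = 0  # zero out slices above the picked one
m[..., :zslice] = 0       # zero out slices below it
m[:, 3:] = 0              # zero out the front of the volume along y
print(int(m.sum()))       # 4 x 3 voxels remain in the single kept slice -> 12
```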

Searchlight computation#

# Set n_jobs > 1 to run the processing in parallel
# /!\ As each worker prints its progress, n_jobs > 1 can interleave the
#     information output.
n_jobs = 1

# Define the cross-validation scheme used for validation.
# Here we use a KFold cross-validation: the samples are split into 4 folds,
# and 4 runs are made, using each fold once as a test set and the remaining
# folds as training sets.
from sklearn.model_selection import KFold

cv = KFold(n_splits=4)
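What this splitter does, shown on a toy sample of 8 items (each fold serves once as the test set):

```python
from sklearn.model_selection import KFold

X_toy = list(range(8))
for train_idx, test_idx in KFold(n_splits=4).split(X_toy):
    print(test_idx.tolist())  # [0, 1] then [2, 3] then [4, 5] then [6, 7]
```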

import nilearn.decoding

# The radius (in mm) of the Searchlight sphere that will scan the volume
searchlight = nilearn.decoding.SearchLight(
    mask_img,
    process_mask_img=process_mask_img,
    radius=5.6,
    n_jobs=n_jobs,
    verbose=1,
    cv=cv,
)
searchlight.fit(fmri_img, y)
/usr/share/miniconda3/envs/testenv/lib/python3.9/site-packages/nilearn/image/resampling.py:494: UserWarning:

The provided image has no sform in its header. Please check the provided file. Results may not be as expected.

[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
Job #1, processed 0/739 voxels (0.00%, 374.1192817687988 seconds remaining)
Job #1, processed 10/739 voxels (1.35%, 31.6189194255405 seconds remaining)
...
Job #1, processed 730/739 voxels (98.78%, 0.472875642158599 seconds remaining)
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:   38.6s finished
SearchLight(cv=KFold(n_splits=4, random_state=None, shuffle=False),
            mask_img=<nibabel.nifti1.Nifti1Image object at 0x7f3171772670>,
            process_mask_img=<nibabel.nifti1.Nifti1Image object at 0x7f31785f18e0>,
            radius=5.6, verbose=1)


F-scores computation#

from nilearn.maskers import NiftiMasker

# For decoding, standardizing is often very important
nifti_masker = NiftiMasker(
    mask_img=mask_img,
    runs=session,
    standardize="zscore_sample",
    memory="nilearn_cache",
    memory_level=1,
)
fmri_masked = nifti_masker.fit_transform(fmri_img)

from sklearn.feature_selection import f_classif

_, p_values = f_classif(fmri_masked, y)
p_values = -np.log10(p_values)
p_values[p_values > 10] = 10
p_unmasked = get_data(nifti_masker.inverse_transform(p_values))
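The -log10 transform and the capping above, checked on toy p-values (hypothetical numbers):

```python
import numpy as np

p_toy = np.array([0.05, 1e-4, 1e-15])
scores_toy = -np.log10(p_toy)      # small p-values become large scores
scores_toy[scores_toy > 10] = 10   # cap extreme values, as above
print(scores_toy.round(2).tolist())  # [1.3, 4.0, 10.0]
```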

Visualization#

Use the fMRI mean image as a surrogate for anatomical data

from nilearn import image

mean_fmri = image.mean_img(fmri_img)

from nilearn.plotting import plot_img, plot_stat_map, show

searchlight_img = new_img_like(mean_fmri, searchlight.scores_)

# Because the scores are not zero-centered test statistics, we cannot use
# plot_stat_map
plot_img(
    searchlight_img,
    bg_img=mean_fmri,
    title="Searchlight",
    display_mode="z",
    cut_coords=[-9],
    vmin=0.42,
    cmap="hot",
    threshold=0.2,
    black_bg=True,
)

# F_score results
p_ma = np.ma.array(p_unmasked, mask=np.logical_not(process_mask))
f_score_img = new_img_like(mean_fmri, p_ma)
plot_stat_map(
    f_score_img,
    mean_fmri,
    title="F-scores",
    display_mode="z",
    cut_coords=[-9],
    colorbar=False,
)

show()
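The masked array built above hides voxels outside the process mask from the plot; on toy values (hypothetical):

```python
import numpy as np

vals = np.array([1.0, 2.0, 3.0])
inside = np.array([1, 0, 1])  # 1 = voxel inside the process mask
ma = np.ma.array(vals, mask=np.logical_not(inside))
print(ma.compressed().tolist())  # only the unmasked values -> [1.0, 3.0]
```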

Total running time of the script: ( 0 minutes 50.117 seconds)

Estimated memory usage: 916 MB

Gallery generated by Sphinx-Gallery