Searchlight analysis of face vs house recognition#
Searchlight analysis requires fitting a classifier a large number of times, so it is an intrinsically slow method. To speed up computation, in this example Searchlight is run only on one slice of the fMRI volume (see the generated figures).
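The principle behind a searchlight can be illustrated with a small toy sketch using plain NumPy and scikit-learn (made-up 1D data and an index-distance "radius"; nilearn's actual implementation works on 3D spheres in voxel space and is considerably more efficient):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy data: 40 samples, voxels laid out on a 1D line for simplicity
n_samples, n_voxels = 40, 20
X = rng.normal(size=(n_samples, n_voxels))
y = rng.integers(0, 2, size=n_samples)
radius = 2  # voxels within this index distance form the "sphere"

# For each voxel, fit and cross-validate a classifier on its neighborhood
scores = []
for center in range(n_voxels):
    sphere = [v for v in range(n_voxels) if abs(v - center) <= radius]
    score = cross_val_score(SVC(), X[:, sphere], y, cv=4).mean()
    scores.append(score)

print(len(scores))  # one cross-validated score per voxel
```

On random data the scores hover around chance level; on real data, voxels whose neighborhood carries discriminative signal stand out.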
Note
If you are using Nilearn with a version older than 0.9.0, then you should either upgrade your version or import maskers from the input_data module instead of the maskers module. That is, you should manually replace in the following example all occurrences of:
from nilearn.maskers import NiftiMasker
with:
from nilearn.input_data import NiftiMasker
Load Haxby dataset#
import pandas as pd
from nilearn import datasets
from nilearn.image import get_data, load_img, new_img_like
# We fetch the second subject of the Haxby dataset (the default)
haxby_dataset = datasets.fetch_haxby()
# print basic information on the dataset
print(f"Anatomical nifti image (3D) is located at: {haxby_dataset.mask}")
print(f"Functional nifti image (4D) is located at: {haxby_dataset.func[0]}")
fmri_filename = haxby_dataset.func[0]
labels = pd.read_csv(haxby_dataset.session_target[0], sep=" ")
y = labels["labels"]
session = labels["chunks"]
Anatomical nifti image (3D) is located at: /home/runner/work/nilearn/nilearn/nilearn_data/haxby2001/mask.nii.gz
Functional nifti image (4D) is located at: /home/runner/work/nilearn/nilearn/nilearn_data/haxby2001/subj2/bold.nii.gz
Restrict to faces and houses#
from nilearn.image import index_img
condition_mask = y.isin(["face", "house"])
fmri_img = index_img(fmri_filename, condition_mask)
y, session = y[condition_mask], session[condition_mask]
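The boolean condition mask built with isin can be checked on toy labels (made-up values, for illustration only):

```python
import pandas as pd

# Toy label series mimicking the Haxby conditions
labels = pd.Series(["rest", "face", "house", "cat", "face"])
condition_mask = labels.isin(["face", "house"])

# Only the samples labeled "face" or "house" survive the mask
print(list(labels[condition_mask]))  # ['face', 'house', 'face']
```

The same mask indexes both the labels and, via index_img, the corresponding volumes of the 4D fMRI image.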
Prepare masks#
mask_img is the original mask.
process_mask_img is a subset of mask_img: it contains only the voxels that should be processed (we keep only the slice z = 29 and the back of the brain, to speed up computation).
import numpy as np
mask_img = load_img(haxby_dataset.mask)
# .astype() makes a copy.
process_mask = get_data(mask_img).astype(int)
picked_slice = 29
process_mask[..., (picked_slice + 1) :] = 0
process_mask[..., :picked_slice] = 0
process_mask[:, 30:] = 0
process_mask_img = new_img_like(mask_img, process_mask)
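The slicing logic above can be checked on a small toy array (a made-up 4x4x4 mask, keeping only the z = 2 slice and half of the y axis):

```python
import numpy as np

# Toy 4x4x4 "mask": keep only the slice z == 2 and the first half in y
mask = np.ones((4, 4, 4), dtype=int)
picked_slice = 2
mask[..., (picked_slice + 1):] = 0   # zero out slices above the picked one
mask[..., :picked_slice] = 0         # zero out slices below the picked one
mask[:, 2:] = 0                      # zero out half of the y axis

print(int(mask.sum()))  # 8 voxels remain (4 in x, 2 in y, 1 in z)
```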
Searchlight computation#
# Make the processing parallel.
# /!\ Since each job prints its own progress, n_jobs > 1 can interleave and
# garble the printed output.
n_jobs = 1
# Define the cross-validation scheme used for validation.
# Here we use a KFold cross-validation on the sessions, which corresponds to
# splitting the samples into 4 folds and making 4 runs, using each fold as a
# test set once and the others as the training set.
from sklearn.model_selection import KFold
cv = KFold(n_splits=4)
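On toy data, this scheme splits the samples as follows (an illustration with 8 made-up samples; without shuffling, KFold yields contiguous folds):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(8).reshape(8, 1)  # 8 toy samples
cv = KFold(n_splits=4)

# Collect the test indices of each of the 4 folds
folds = [test for _, test in cv.split(X)]
print([list(f) for f in folds])  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

Each sample lands in exactly one test fold, so every sample is scored exactly once.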
import nilearn.decoding
# The radius of the Searchlight sphere that will scan the volume
searchlight = nilearn.decoding.SearchLight(
mask_img,
process_mask_img=process_mask_img,
radius=5.6,
n_jobs=n_jobs,
verbose=1,
cv=cv,
)
searchlight.fit(fmri_img, y)
Job #1, processed 0/739 voxels (0.00%, 461.49492263793945 seconds remaining)
Job #1, processed 10/739 voxels (1.35%, 39.59393921604863 seconds remaining)
...
Job #1, processed 730/739 voxels (98.78%, 0.5937313234113343 seconds remaining)
F-scores computation#
from nilearn.maskers import NiftiMasker
# For decoding, standardizing is often very important
nifti_masker = NiftiMasker(
mask_img=mask_img,
runs=session,
standardize="zscore_sample",
memory="nilearn_cache",
memory_level=1,
)
fmri_masked = nifti_masker.fit_transform(fmri_img)
from sklearn.feature_selection import f_classif
_, p_values = f_classif(fmri_masked, y)
p_values = -np.log10(p_values)
p_values[p_values > 10] = 10
p_unmasked = get_data(nifti_masker.inverse_transform(p_values))
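The -log10 transform and the capping at 10 can be checked on a few made-up p-values:

```python
import numpy as np

# Toy p-values: very significant, significant, borderline, not significant
p_values = np.array([1e-15, 1e-4, 0.05, 0.5])
neg_log_p = -np.log10(p_values)
neg_log_p[neg_log_p > 10] = 10  # cap extreme values for display

print(neg_log_p)  # smaller p-values map to larger scores, capped at 10
```

The capping keeps a few extremely significant voxels from dominating the color scale of the plot.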
Visualization#
Use the mean fMRI image as a surrogate for anatomical data
from nilearn import image
mean_fmri = image.mean_img(fmri_img)
from nilearn.plotting import plot_img, plot_stat_map, show
searchlight_img = new_img_like(mean_fmri, searchlight.scores_)
# Because the scores are not zero-centered test statistics, we cannot use
# plot_stat_map
plot_img(
searchlight_img,
bg_img=mean_fmri,
title="Searchlight",
display_mode="z",
cut_coords=[-9],
vmin=0.42,
cmap="hot",
threshold=0.2,
black_bg=True,
)
# F-score results
p_ma = np.ma.array(p_unmasked, mask=np.logical_not(process_mask))
f_score_img = new_img_like(mean_fmri, p_ma)
plot_stat_map(
f_score_img,
mean_fmri,
title="F-scores",
display_mode="z",
cut_coords=[-9],
colorbar=False,
)
show()
Total running time of the script: (1 minutes 1.544 seconds)
Estimated memory usage: 916 MB