This chapter introduces the maskers: objects that transform neuroimaging volumes, on disk or in memory, into data matrices, e.g., of time series.
In any analysis, the first step is to load the data. It is often convenient to apply some basic data transformations and to turn the data into a 2D (samples x features) matrix, where the samples could be different time points, and the features derived from different voxels (e.g., restrict analysis to the ventral visual stream), regions of interest (e.g., extract local signals from spheres/cubes), or pre-specified networks (e.g., look at data from all voxels of a set of network nodes). Think of masker objects as Swiss-army knives for shaping the raw neuroimaging data in 3D space into the units of observation relevant for the research questions at hand.
“Masker” objects (found in the module nilearn.input_data) simplify these “data folding” steps that often precede the statistical analysis.
Note that the masker objects may not cover all the image transformations needed for specific tasks. Users who want to perform specific processing may have to call dedicated functions (in the modules nilearn.signal and nilearn.masking).
Advanced: Design philosophy of “Maskers”
The design of these classes is similar to scikit-learn‘s transformers. First, objects are initialized with some parameters guiding the transformation (unrelated to the data). Then the fit() method should be called, possibly specifying some data-related information (such as number of images to process), to perform some initial computation (e.g., fitting a mask based on the data). Finally, transform() can be called, with the data as argument, to perform some computation on data themselves (e.g., extracting time series from images).
NiftiMasker is a powerful tool to load images and extract voxel signals in the area defined by the mask. It applies some basic preprocessing steps with commonly used parameters as defaults. But it is very important to look at your data to see the effects of the preprocessing and to validate it.
Advanced: scikit-learn Pipelines
Suppose we want to restrict a dataset to its first 100 frames. Below, we load a resting-state dataset with fetch_adhd(), restrict it to 100 frames, and build a new niimg object that we can give to the masker. Although possible, there is no need to save your data to a file to pass it to a NiftiMasker: simply use nilearn.image.index_img to apply a slice and create a Niimg in memory:
from nilearn import datasets
dataset = datasets.fetch_adhd(n_subjects=1)
epi_filename = dataset.func[0]

# Restrict to 100 frames to speed up computation
from nilearn.image import index_img
epi_img = index_img(epi_filename, slice(0, 100))
In this section, we show how the masker object can compute a mask automatically for subsequent statistical analysis. On some datasets, the default algorithm may however perform poorly. This is why it is very important to always look at your data before and after feature engineering using masker objects.
If a mask is not specified as an argument, NiftiMasker will try to compute one from the provided neuroimaging data. It is very important to verify the quality of the generated mask by visualizing it, to see whether it is suitable for your data and intended analyses. Alternatively, the mask computation parameters can be modified. See the NiftiMasker documentation for a complete list of mask computation parameters.
As a first example, we will now automatically build a mask from a dataset. We use the Haxby dataset here because it provides the original mask, against which we can compare the data-derived mask.
Generate a mask with default parameters and visualize it (it is stored in the mask_img_ attribute of the masker):
# We need to specify an 'epi' mask_strategy, as this is raw EPI data
masker = NiftiMasker(mask_strategy='epi')
masker.fit(epi_img)
plot_roi(masker.mask_img_, mean_img, title='EPI automatic mask')
We can then fine-tune the outline of the mask by increasing the number of opening steps (opening=10) using the mask_args argument of the NiftiMasker. This effectively performs erosion and dilation operations on the outer voxel layers of the mask, which can for example remove remaining skull parts in the image.
masker = NiftiMasker(mask_strategy='epi', mask_args=dict(opening=10))
masker.fit(epi_img)
plot_roi(masker.mask_img_, mean_img, title='EPI Mask with strong opening')
Looking at the nilearn.masking.compute_epi_mask called by the NiftiMasker object, we see two interesting parameters: lower_cutoff and upper_cutoff. These set the grey-value bounds in which the masking algorithm will search for its threshold (0 being the minimum of the image and 1 the maximum). We will here increase the lower cutoff to enforce selection of those voxels that appear as bright in the EPI image.
masker = NiftiMasker(mask_strategy='epi',
                     mask_args=dict(upper_cutoff=.9, lower_cutoff=.8,
                                    opening=False))
masker.fit(epi_img)
plot_roi(masker.mask_img_, mean_img, title='EPI Mask: high lower_cutoff')
NiftiMasker comes with many parameters that enable data preparation:
>>> from nilearn import input_data
>>> masker = input_data.NiftiMasker()
>>> masker
NiftiMasker(detrend=False, high_pass=None, low_pass=None, mask_args=None,
       mask_img=None, mask_strategy='background',
       memory=Memory(cachedir=None), memory_level=1, sample_mask=None,
       sessions=None, smoothing_fwhm=None, standardize=False, t_r=None,
       target_affine=None, target_shape=None, verbose=0)
NiftiMasker can apply Gaussian spatial smoothing to the neuroimaging data, useful to fight noise or for inter-individual differences in neuroanatomy. It is achieved by specifying the full-width half maximum (FWHM; in millimeter scale) with the smoothing_fwhm parameter. Anisotropic filtering is also possible by passing 3 scalars (x, y, z), the FWHM along the x, y, and z direction.
The underlying function properly handles non-cubic voxels by scaling the given widths appropriately.
NiftiMasker can also improve the temporal properties of the extracted voxel signals, for example by detrending, standardizing, or temporally filtering them.
You can, more as training than as an exercise, try playing with the parameters in A introduction tutorial to fMRI decoding. Try enabling detrending and re-running the script: does it have a big impact on the result?
NiftiMasker and many similar classes enable resampling (recasting of images into different resolutions and transformations of brain voxel data). Two parameters control resampling: target_affine to resample (reslice) images into a different coordinate system, and target_shape to impose a particular shape on the images (number of voxels along each dimension).
How to combine these parameters to obtain the desired resampling is explained in detail in Resampling images.
Once voxel signals have been processed, the result can be visualized as images after unmasking (masked-reduced data transformed back into the original whole-brain space). This step is present in almost all the examples provided in nilearn. Below you will find an excerpt of the example performing Anova-SVM on the Haxby data:
coef = svc.coef_
# reverse feature selection
coef = feature_selection.inverse_transform(coef)
# reverse masking
weight_img = nifti_masker.inverse_transform(coef)
Examples to better understand the NiftiMasker
The purpose of NiftiLabelsMasker and NiftiMapsMasker is to compute signals from regions containing many voxels. They make it easy to get these signals once you have an atlas or a parcellation into brain regions.
Their usage is illustrated in the section Extracting times series to build a functional connectome.
The background_label keyword of NiftiLabelsMasker deserves some explanation. The voxels that correspond to the brain or a region of interest in an fMRI image do not fill the entire image. Consequently, in the labels image, there must be a label value that corresponds to “outside” the brain (for which no signal should be extracted). By default, this label is set to zero in nilearn (referred to as “background”). Should the background be encoded with some other value, it is possible to change it with the background_label keyword.
This atlas defines its regions using maps. The path to the corresponding file is given in the maps_img argument.
One important thing that happens transparently during the execution of NiftiMasker.fit_transform is resampling. Initially, the images and the atlas typically do not have the same shape nor the same affine. Casting them into the same format is required for successful signal extraction. The keyword argument resampling_target specifies which format (i.e., dimensions and affine) the data should be resampled to. See the reference documentation for NiftiMapsMasker for every possible option.
The purpose of NiftiSpheresMasker is to compute signals from spheres of voxels centered on seed coordinates. It makes it easy to get these signals once you have a list of coordinates. A single seed is a sphere defined by its radius (in millimeters) and the coordinates (typically MNI or TAL) of its center.
Using NiftiSpheresMasker requires defining a list of coordinates. The seeds argument takes a list of 3D coordinates (tuples) of the sphere centers; they should be in the same space as the images. Seeds can overlap spatially and are represented with a binary present/absent coding (no weighting). Below is an example of a coordinates list of four seeds from the default mode network:
>>> dmn_coords = [(0, -52, 18), (-46, -68, 32), (46, -68, 32), (0, 50, -5)]
radius is an optional argument that takes a real value in millimeters. If no value is given for the radius argument, only the single voxel at the given seed position is used.