Note
This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the user guide for the big picture.
8.6.3. nilearn.input_data.NiftiLabelsMasker
class nilearn.input_data.NiftiLabelsMasker(labels_img, background_label=0, mask_img=None, smoothing_fwhm=None, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, dtype=None, resampling_target='data', memory=Memory(location=None), memory_level=1, verbose=0, strategy='mean')

Class for masking of Niimg-like objects.
NiftiLabelsMasker is useful when data from non-overlapping volumes should be extracted (in contrast to NiftiMapsMasker). Use case: summarizing brain signals from clusters obtained by a prior K-means or Ward clustering. A usage sketch follows the parameter list below.
Parameters: labels_img: Niimg-like object
Region definitions, as one image of labels. See http://nilearn.github.io/manipulating_images/input_output.html.
background_label: number, optional
Label used in labels_img to represent background.
mask_img: Niimg-like object, optional
Mask to apply to regions before extracting signals. See http://nilearn.github.io/manipulating_images/input_output.html.
smoothing_fwhm: float, optional
If smoothing_fwhm is not None, it gives the full-width half maximum in millimeters of the spatial smoothing to apply to the signal.
standardize: {'zscore', 'psc', True, False}, optional (default False)
Strategy to standardize the signal. 'zscore': the signal is z-scored; timeseries are shifted to zero mean and scaled to unit variance. 'psc': timeseries are shifted to zero mean and scaled to percent signal change (relative to the original mean signal). True: same as 'zscore'. False: do not standardize the data.
detrend: boolean, optional
This parameter is passed to signal.clean. Please see the related documentation for details.
low_pass: None or float, optional
This parameter is passed to signal.clean. Please see the related documentation for details.
high_pass: None or float, optional
This parameter is passed to signal.clean. Please see the related documentation for details.
t_r: float, optional
This parameter is passed to signal.clean. Please see the related documentation for details.
dtype: {dtype, “auto”}
Data type toward which the data should be converted. If "auto", the data will be converted to int32 if it is discrete and float32 if it is continuous.
resampling_target: {"data", "labels", None}, optional
Gives which image determines the final shape/size. For example, if resampling_target is "data", the atlas is resampled to the shape of the data if needed. If it is "labels", then mask_img and the images provided to fit() are resampled to the shape and affine of labels_img. None means no resampling: if shapes and affines do not match, a ValueError is raised. Defaults to "data".
memory: joblib.Memory or str, optional
Used to cache the region extraction process. By default, no caching is done. If a string is given, it is the path to the caching directory.
memory_level: int, optional
Aggressiveness of memory caching. The higher the number, the higher the number of functions that will be cached. Zero means no caching.
verbose: integer, optional
Indicate the level of verbosity. By default, nothing is printed.
strategy: str, optional
The name of a valid function to reduce each region with. Must be one of: sum, mean, median, minimum, maximum, variance, standard_deviation. Defaults to 'mean'.
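A minimal usage sketch, assuming a Harvard-Oxford atlas fetched with nilearn.datasets and a hypothetical 4D functional image 'func.nii.gz' on disk (both are illustrative choices, not requirements of the class):

from nilearn import datasets
from nilearn.input_data import NiftiLabelsMasker

# Deterministic atlas: one integer label per voxel, 0 marks background.
atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')

masker = NiftiLabelsMasker(labels_img=atlas.maps,
                           standardize=True,
                           smoothing_fwhm=6,
                           detrend=True,
                           t_r=2.0)

# 'func.nii.gz' is a placeholder for any 4D functional image.
region_signals = masker.fit_transform('func.nii.gz')
# region_signals has shape (number of scans, number of labels).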
__init__(labels_img, background_label=0, mask_img=None, smoothing_fwhm=None, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, dtype=None, resampling_target='data', memory=Memory(location=None), memory_level=1, verbose=0, strategy='mean')
Initialize self. See help(type(self)) for accurate signature.
fit(X=None, y=None)
Prepare signal extraction from regions.
All parameters are unused; they are kept for scikit-learn compatibility.
fit_transform(imgs, confounds=None)
Prepare and perform signal extraction from regions.
get_params(deep=True)
Get parameters for this estimator.
Parameters: deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params: mapping of string to any
Parameter names mapped to their values.
inverse_transform(signals)
Compute voxel signals from region signals.
Any mask given at initialization is taken into account.
Parameters: signals: 2D numpy.ndarray
Signal for each region. Shape: (number of scans, number of regions)
Returns: voxel_signals: Nifti1Image
Signal for each voxel. Shape: (number of scans, number of voxels)
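Continuing the sketch above (with the same placeholder image), region signals can be projected back into voxel space, for example to visualize a per-region statistic:

region_signals = masker.fit_transform('func.nii.gz')
# One value per region, kept 2D as (1, number of regions).
mean_per_region = region_signals.mean(axis=0, keepdims=True)
# Each voxel of a region receives that region's value; the result is a
# 4D Nifti1Image with one volume per row of the input array.
mean_img = masker.inverse_transform(mean_per_region)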
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters: **params: dict
Estimator parameters.
Returns: self: object
Estimator instance.
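As with any scikit-learn estimator, parameters can be inspected and updated after construction; a short sketch reusing the masker defined earlier:

# Returns a dict such as {'labels_img': ..., 'smoothing_fwhm': 6, ...}.
params = masker.get_params()
# set_params returns the estimator itself, so calls can be chained.
masker.set_params(smoothing_fwhm=8, standardize=False)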
transform(imgs, confounds=None)
Apply mask, spatial and temporal preprocessing.
Parameters: imgs: 3D/4D Niimg-like object
Images to process. They must boil down to a 4D image with the number of scans as the last dimension. See http://nilearn.github.io/manipulating_images/input_output.html.
confounds: CSV file or array-like, optional
This parameter is passed to signal.clean. Please see the related documentation for details. Shape: (number of scans, number of confounds)
Returns: region_signals: 2D numpy.ndarray
Signal for each element. Shape: (number of scans, number of elements)
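A sketch of extracting signals from a second run while regressing out confounds; 'run2.nii.gz' and 'confounds.csv' are placeholders, and the CSV is assumed to hold one row per scan and one column per confound:

masker.fit()  # labels_img was given at construction, so no arguments are needed
cleaned_signals = masker.transform('run2.nii.gz', confounds='confounds.csv')
# cleaned_signals has shape (number of scans, number of labels).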
transform_single_imgs(imgs, confounds=None)
Extract signals from a single 4D niimg.
Parameters: imgs: 3D/4D Niimg-like object
Images to process. They must boil down to a 4D image with the number of scans as the last dimension. See http://nilearn.github.io/manipulating_images/input_output.html.
confounds: CSV file or array-like or pandas DataFrame, optional
This parameter is passed to signal.clean. Please see the related documentation for details. Shape: (number of scans, number of confounds)
Returns: region_signals: 2D numpy.ndarray
Signal for each label. Shape: (number of scans, number of labels)