Note

This page is reference documentation: it only describes the class signature, not how to use it. Please refer to the user guide for the big picture.

7.6.5. nilearn.input_data.NiftiSpheresMasker

class nilearn.input_data.NiftiSpheresMasker(seeds, radius=None, mask_img=None, allow_overlap=False, smoothing_fwhm=None, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, dtype=None, memory=Memory(location=None), memory_level=1, verbose=0)

Class for masking of Niimg-like objects using seeds.

NiftiSpheresMasker is useful when data from given seeds should be extracted. Use case: Summarize brain signals from seeds that were obtained from prior knowledge.

Parameters
seeds: List of triplets of coordinates in native space

Seed definitions. List of coordinates of the seeds in the same space as the images (typically MNI or TAL).

radius: float, optional

Indicates, in millimeters, the radius of the sphere around each seed. Default is None (the signal is extracted from a single voxel).

mask_img: Niimg-like object, optional

Mask to apply to regions before extracting signals. See http://nilearn.github.io/manipulating_images/input_output.html

allow_overlap: boolean, optional

If False, an error is raised if the spheres overlap (i.e., at least two spheres have a non-zero value for the same voxel). Default is False.

smoothing_fwhm: float, optional

If smoothing_fwhm is not None, it gives the full-width half maximum in millimeters of the spatial smoothing to apply to the signal.

standardize: {‘zscore’, ‘psc’, True, False}, optional

Strategy to standardize the signal. ‘zscore’: the signal is z-scored; timeseries are shifted to zero mean and scaled to unit variance. ‘psc’: timeseries are shifted to zero mean and scaled to percent signal change (relative to the original mean signal). True: same as ‘zscore’. False: do not standardize the data. Default is False.

detrend: boolean, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

low_pass: None or float, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

high_pass: None or float, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

t_r: float, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

dtype: {dtype, “auto”}

Data type toward which the data should be converted. If “auto”, the data will be converted to int32 if the data is discrete and to float32 if it is continuous.

memory: joblib.Memory or str, optional

Used to cache the region extraction process. By default, no caching is done. If a string is given, it is the path to the caching directory.

memory_level: int, optional

Aggressiveness of memory caching. The higher the number, the higher the number of functions that will be cached. Zero means no caching.

verbose: integer, optional

Indicate the level of verbosity. By default, nothing is printed.

__init__(self, seeds, radius=None, mask_img=None, allow_overlap=False, smoothing_fwhm=None, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, dtype=None, memory=Memory(location=None), memory_level=1, verbose=0)

Initialize self. See help(type(self)) for accurate signature.

fit(self, X=None, y=None)

Prepare signal extraction from regions.

All parameters are unused; they are there for scikit-learn compatibility.

fit_transform(self, imgs, confounds=None)

Prepare and perform signal extraction.

get_params(self, deep=True)

Get parameters for this estimator.

Parameters
deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: mapping of string to any

Parameter names mapped to their values.

inverse_transform(self, X)

Transform the 2D data matrix back to an image in brain space.

set_params(self, **params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params: dict

Estimator parameters.

Returns
self: object

Estimator instance.

transform(self, imgs, confounds=None)

Apply the mask, then spatial and temporal preprocessing.

Parameters
imgs: 3D/4D Niimg-like object

Images to process; must boil down to a 4D image with the number of scans as the last dimension. See http://nilearn.github.io/manipulating_images/input_output.html

confounds: CSV file or array-like, optional

This parameter is passed to signal.clean. Please see the related documentation for details. shape: (number of scans, number of confounds)

Returns
region_signals: 2D numpy.ndarray

Signal for each element. shape: (number of scans, number of elements)
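Conceptually, each element's signal is the average time series of all voxels whose world-space coordinates fall within `radius` millimeters of the seed. A pure-numpy sketch of that reduction (`extract_sphere` is a hypothetical helper for illustration, not nilearn's implementation):

```python
import numpy as np

def extract_sphere(data, affine, seed, radius):
    """Average the time series of all voxels within `radius` mm of `seed`."""
    # All voxel indices of the 3D grid, as a (3, n_voxels) array
    ijk = np.indices(data.shape[:3]).reshape(3, -1)
    # Map voxel indices to world-space coordinates via the affine
    xyz = affine[:3, :3] @ ijk + affine[:3, 3:4]
    # Keep voxels within `radius` mm of the seed and average their signals
    dist = np.linalg.norm(xyz - np.asarray(seed, dtype=float).reshape(3, 1), axis=0)
    mask = dist <= radius
    return data.reshape(-1, data.shape[3])[mask].mean(axis=0)

# Toy volume: only the voxel at (2, 2, 2) carries signal
data = np.zeros((5, 5, 5, 3))
data[2, 2, 2] = [1.0, 2.0, 3.0]
sig = extract_sphere(data, np.eye(4), (2, 2, 2), radius=0.5)
print(sig)  # [1. 2. 3.] — only the seed voxel lies inside a 0.5 mm sphere
```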

transform_single_imgs(self, imgs, confounds=None)

Extract signals from a single 4D niimg.

Parameters
imgs: 3D/4D Niimg-like object

Images to process; must boil down to a 4D image with the number of scans as the last dimension. See http://nilearn.github.io/manipulating_images/input_output.html

confounds: CSV file or array-like, optional

This parameter is passed to signal.clean. Please see the related documentation for details. shape: (number of scans, number of confounds)

Returns
region_signals: 2D numpy.ndarray

Signal for each sphere. shape: (number of scans, number of spheres)