Note

This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the user guide for the big picture.

7.8.8. nilearn.regions.Parcellations

class nilearn.regions.Parcellations(method, n_parcels=50, random_state=0, mask=None, smoothing_fwhm=4.0, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, target_affine=None, target_shape=None, mask_strategy='epi', mask_args=None, memory=Memory(location=None), memory_level=0, n_jobs=1, verbose=1)

Learn parcellations on fMRI images.

Four clustering methods can be used: kmeans, ward, complete and average. Kmeans calls MiniBatchKMeans, whereas ward, complete and average are used within AgglomerativeClustering. All methods are leveraged from scikit-learn.

New in version 0.4.1.
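For instance, the estimator can be constructed as follows (a minimal sketch; the parameter values shown are purely illustrative):

    from nilearn.regions import Parcellations

    # Ward clustering into 100 spatially contiguous parcels
    ward = Parcellations(method='ward', n_parcels=100,
                         smoothing_fwhm=2.0, standardize=False,
                         memory='nilearn_cache', memory_level=1)

    # MiniBatchKMeans-based parcellation with the same number of parcels
    kmeans = Parcellations(method='kmeans', n_parcels=100,
                           smoothing_fwhm=10.0, standardize=True)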

Parameters:

method : str, {‘kmeans’, ‘ward’, ‘complete’, ‘average’}

The clustering method to use for building the brain parcellations.

n_parcels : int, default=50

Number of parcels to divide the brain data into.

random_state : int or RandomState

Pseudo-random number generator state used for random sampling.

mask : Niimg-like object or NiftiMasker, MultiNiftiMasker instance

Mask or masker used for masking the data. If a mask image is provided, it will be used in the MultiNiftiMasker. If an instance of MultiNiftiMasker is provided, its parameters will be used for masking the data, overriding the default masker parameters. If None, the mask will be computed automatically by a MultiNiftiMasker with default parameters.

smoothing_fwhm : float, optional, default=4.0

If smoothing_fwhm is not None, it gives the full width at half maximum, in millimeters, of the spatial smoothing to apply to the signal.

standardize : boolean, optional

If standardize is True, the time series are centered and normed: their mean is set to 0 and their variance to 1 in the time dimension.

detrend : boolean, optional

Whether to detrend signals or not. This parameter is passed to signal.clean. Please see the related documentation for details.

low_pass : None or float, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

high_pass : None or float, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

t_r : float, optional

This parameter is passed to signal.clean. Please see the related documentation for details.

target_affine : 3x3 or 4x4 matrix, optional

This parameter is passed to image.resample_img. Please see the related documentation for details. The given affine is considered the same for all images in the list.

target_shape : 3-tuple of integers, optional

This parameter is passed to image.resample_img. Please see the related documentation for details.

memory : instance of joblib.Memory or str

Used to cache the masking process. By default, no caching is done. If a string is given, it is the path to the caching directory.

memory_level : integer, optional

Rough estimator of the amount of memory used by caching. Higher value means more memory for caching.

n_jobs : integer, optional

The number of CPUs to use to do the computation. -1 means ‘all CPUs’, -2 ‘all CPUs but one’, and so on.

verbose : integer, optional

Indicate the level of verbosity. By default, nothing is printed.

Attributes:

labels_img_ : Nifti1Image

Labels image of the parcellation learned on the fMRI images.

masker_ : instance of NiftiMasker or MultiNiftiMasker

The masker used to mask the data.

connectivity_ : numpy.ndarray

Voxel-to-voxel connectivity matrix computed from the mask. Note that this attribute is only set when the selected method is of agglomerative clustering type: ‘ward’, ‘complete’ or ‘average’.

Notes

  • Transforming a list of Nifti images to a data matrix takes a few steps: the data dimensionality is first reduced using a randomized SVD, then the brain parcellations are built using KMeans or one of the agglomerative methods.
  • This object uses a spatially-constrained AgglomerativeClustering for method=’ward’, ‘complete’ or ‘average’. The spatial connectivity matrix (voxel-to-voxel) is built into the object, so there is no need to provide it explicitly (see the sketch below).
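As a rough illustration of what a built-in connectivity matrix means, the sketch below shows how a voxel-to-voxel adjacency matrix could be derived from a mask with scikit-learn's grid_to_graph; a step of this kind is performed internally, so the user never has to supply such a matrix (the mask array here is a placeholder, not real data):

    import numpy as np
    from sklearn.feature_extraction.image import grid_to_graph

    # Placeholder boolean brain mask (illustrative only)
    mask_data = np.ones((10, 10, 10), dtype=bool)

    # Sparse adjacency matrix linking neighbouring voxels inside the mask;
    # this is what spatially constrains the agglomerative clustering.
    connectivity = grid_to_graph(n_x=mask_data.shape[0],
                                 n_y=mask_data.shape[1],
                                 n_z=mask_data.shape[2],
                                 mask=mask_data)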
__init__(method, n_parcels=50, random_state=0, mask=None, smoothing_fwhm=4.0, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, target_affine=None, target_shape=None, mask_strategy='epi', mask_args=None, memory=Memory(location=None), memory_level=0, n_jobs=1, verbose=1)

Initialize self. See help(type(self)) for accurate signature.

fit(imgs, y=None, confounds=None)

Compute the mask and the parcellations across subjects.

Parameters:

imgs : list of Niimg-like objects

See http://nilearn.github.io/manipulating_images/input_output.html. Data on which the mask is calculated. If this is a list, the affine is considered the same for all images.

confounds : list of CSV file paths or 2D matrices

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details. Should match the list of imgs given.

Returns:

self : object

Returns the instance itself. Contains attributes listed at the object level.
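A minimal fitting sketch (the fetch_adhd dataset fetcher is used here only as one convenient, illustrative way to obtain a list of functional images):

    from nilearn import datasets
    from nilearn.regions import Parcellations

    adhd = datasets.fetch_adhd(n_subjects=1)       # list of 4D fMRI images
    parcellation = Parcellations(method='ward', n_parcels=50)
    parcellation.fit(adhd.func)

    labels_img = parcellation.labels_img_          # Nifti1Image of parcel labels
    masker = parcellation.masker_                  # fitted masker instance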

fit_transform(imgs, confounds=None)

Fit the images to parcellations and then transform them.

Parameters:

imgs : List of Nifti-like images

See http://nilearn.github.io/manipulating_images/input_output.html. Images to process for fitting as well as to transform into signals.

confounds : list of CSV files or array-like, optional

Each file or numpy array in the list should have shape (number of scans, number of confounds). This parameter is passed to signal.clean. If given as a list, the confounds should have the same length as imgs.

Note: the same confounds will be used for cleaning the signals before learning the parcellations.

Returns:

region_signals : list of 2D numpy.ndarray, or 2D numpy.ndarray

Signals extracted for each label, for each image. For example, for a single image the shape will be (number of scans, number of labels).
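Reusing the estimator and data from the fit sketch above (assumed inputs; any list of 4D Niimg-like objects works), fit_transform chains both steps:

    # One 2D array per input image; confounds are optional
    signals = parcellation.fit_transform(adhd.func, confounds=adhd.confounds)

    # For a single image the extracted signals have shape
    # (number of scans, number of labels)
    print(signals[0].shape)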

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
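For example, continuing the sketch above, get_params exposes the constructor arguments as a plain dictionary (standard scikit-learn behaviour):

    params = parcellation.get_params()
    print(params['method'], params['n_parcels'])   # e.g. 'ward' 50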

inverse_transform(signals)

Transform signals extracted from parcellations back to brain images.

Uses labels_img_ (parcellations) built at fit() level.

Parameters:

signals : List of 2D numpy.ndarray

Each 2D array has shape (number of scans, number of regions).

Returns:

imgs : list of Nifti-like images, or a single Nifti-like image

Brain image(s).
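A round-trip sketch, again reusing the fitted estimator from the examples above: signals extracted by transform or fit_transform can be projected back into image space:

    signals = parcellation.fit_transform(adhd.func)
    imgs_back = parcellation.inverse_transform(signals)
    # Each returned image assigns every voxel of a parcel that parcel's signal.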

score(imgs, confounds=None)

Score function based on explained variance on imgs.

Should only be used by DecompositionEstimator-derived classes.

Parameters:

imgs : iterable of Niimg-like objects

confounds : CSV file path or 2D matrix

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

Returns:

score : float

Holds the score for each subject. The score is two-dimensional if per_component is True. The first dimension is squeezed if the number of subjects is one.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
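For example, hyper-parameters can be updated in place before re-fitting (the values shown are illustrative):

    parcellation.set_params(n_parcels=100, smoothing_fwhm=6.0)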
transform(imgs, confounds=None)

Extract signals from parcellations learned on fmri images.

Parameters:

imgs : List of Nifti-like images

confounds : list of CSV files or array-like, optional

Each file or numpy array in the list should have shape (number of scans, number of confounds). This parameter is passed to signal.clean. Please see the related documentation for details. Must be of the same length as imgs.

Returns:

region_signals : list of 2D numpy.ndarray, or 2D numpy.ndarray

Signals extracted for each label, for each image. For example, for a single image the shape will be (number of scans, number of labels).
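A sketch of transforming images that were not used for fitting (fetching a second subject here is just an illustrative assumption; any list of 4D Niimg-like objects can be passed):

    new_data = datasets.fetch_adhd(n_subjects=2)
    new_signals = parcellation.transform(new_data.func,
                                         confounds=new_data.confounds)
    # One 2D array per image, each of shape (number of scans, number of parcels)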

7.8.8.1. Examples using nilearn.regions.Parcellations