Note

This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the user guide for the big picture.

nilearn.regions.Parcellations

class nilearn.regions.Parcellations(method, n_parcels=50, random_state=0, mask=None, smoothing_fwhm=4.0, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, target_affine=None, target_shape=None, mask_strategy='epi', mask_args=None, scaling=False, n_iter=10, memory=Memory(location=None), memory_level=0, n_jobs=1, verbose=1)[source]

Learn parcellations on fMRI images.

Six different clustering methods can be used: kmeans, hierarchical_kmeans, ward, complete, average, and rena. kmeans calls MiniBatchKMeans, while ward, complete, and average are used within AgglomerativeClustering, and rena calls ReNA. kmeans, ward, complete, and average are leveraged from scikit-learn; rena and hierarchical_kmeans are built into nilearn.

New in version 0.4.1.
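
For orientation, a minimal usage sketch in doctest style. func_img is a hypothetical placeholder for any 4D Niimg-like object; it is not shipped with nilearn:

>>> from nilearn.regions import Parcellations
>>> # func_img: placeholder for your own 4D fMRI image
>>> parcellation = Parcellations(method='ward', n_parcels=100)
>>> parcellation = parcellation.fit(func_img)  # fit returns self
>>> labels_img = parcellation.labels_img_  # Nifti1Image of parcel labels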

Parameters:
method : {‘kmeans’, ‘ward’, ‘complete’, ‘average’, ‘rena’, ‘hierarchical_kmeans’}

The clustering method to use for brain parcellation. For a small number of parcels, kmeans is usually advisable. For a large number of parcels (several hundred or thousands), ward and rena are the best options. Ward gives higher-quality parcels, but with increased computation time. ReNA is most useful as a fast data-reduction step, typically dividing the signal size by ten.

n_parcels : int, default=50

Number of parcels to divide the data into.

random_state : int or RandomState, optional

Pseudo-random number generator state used for random sampling. Default=0.

mask : Niimg-like object, nilearn.maskers.NiftiMasker, or nilearn.maskers.MultiNiftiMasker, optional

Mask/masker used for masking the data. If a mask image is provided, it will be used in the MultiNiftiMasker. If an instance of MultiNiftiMasker is provided, its parameters will be used for masking the data, overriding the default masker parameters. If None, the mask will be computed automatically by a MultiNiftiMasker with default parameters.

smoothing_fwhm : float, optional

If smoothing_fwhm is not None, it gives the full-width at half maximum in millimeters of the spatial smoothing to apply to the signal. Default=4.0.

standardize : bool, default=False

If standardize is True, the data are centered and normed: their mean is set to 0 and their variance to 1 in the time dimension.

detrend : bool, optional

Whether to detrend signals or not.

Note

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

Default=False.

low_pass : float or None, default=None

Low cutoff frequency in Hertz. If specified, signals above this frequency will be filtered out. If None, no low-pass filtering will be performed.

Note

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

high_pass : float or None, default=None

High cutoff frequency in Hertz. If specified, signals below this frequency will be filtered out. If None, no high-pass filtering will be performed.

Note

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

t_r : float or None, default=None

Repetition time, in seconds (sampling period). Set to None if not provided.

Note

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

target_affine : numpy.ndarray, default=None

If specified, the image is resampled to match this new affine. target_affine can be a 3x3 or a 4x4 matrix.

Note

This parameter is passed to nilearn.image.resample_img. Please see the related documentation for details.

Note

The given affine is assumed to be the same for all images in the list.

target_shape : tuple or list, default=None

If specified, the image will be resized to match this new shape. len(target_shape) must be equal to 3.

Note

If target_shape is specified, a target_affine of shape (4, 4) must also be given.

Note

This parameter is passed to nilearn.image.resample_img. Please see the related documentation for details.

mask_strategy : {“background”, “epi”, “whole-brain-template”, “gm-template”, “wm-template”}, optional

The strategy used to compute the mask:

  • “background”: Use this option if your images present a clear homogeneous background.

  • “epi”: Use this option if your images are raw EPI images.

  • “whole-brain-template”: This will extract the whole-brain part of your data by resampling the MNI152 brain mask for your data’s field of view.

Note

This option is equivalent to the previous ‘template’ option, which is now deprecated.

  • “gm-template”: This will extract the gray matter part of your data by resampling the corresponding MNI152 template for your data’s field of view.

    New in version 0.8.1.

  • “wm-template”: This will extract the white matter part of your data by resampling the corresponding MNI152 template for your data’s field of view.

    New in version 0.8.1.

Note

Depending on this value, the mask will be computed from nilearn.masking.compute_background_mask, nilearn.masking.compute_epi_mask, or nilearn.masking.compute_brain_mask.

Default=’epi’.

mask_args : dict, optional

If mask is None, these are additional parameters passed to nilearn.masking.compute_background_mask, nilearn.masking.compute_epi_mask, or nilearn.masking.compute_brain_mask to fine-tune mask computation. Please see the related documentation for details.

scaling : bool, default=False

Used only when the method selected is ‘rena’. If scaling is True, each cluster is scaled by the square root of its size, preserving the l2-norm of the image.

n_iter : int, default=10

Used only when the method selected is ‘rena’. Number of iterations of the recursive neighbor agglomeration.

memory : instance of joblib.Memory, str, or pathlib.Path

Used to cache the masking process. By default, no caching is done. If a str is given, it is the path to the caching directory.

memory_level : int, default=0

Rough estimator of the amount of memory used by caching. Higher value means more memory for caching. Zero means no caching.

n_jobs : int, default=1

The number of CPUs to use to do the computation. -1 means ‘all CPUs’.

verbose : int, default=1

Verbosity level (0 means no message).

Notes

  • Transforming a list of Nifti images to a data matrix takes a few steps: the data dimensionality is reduced using randomized SVD, then brain parcellations are built using KMeans or one of the agglomerative methods.

  • This object uses spatially-constrained AgglomerativeClustering for method=’ward’, ‘complete’ or ‘average’, and spatially-constrained ReNA clustering for method=’rena’. The voxel-to-voxel spatial connectivity matrix is built into the object, so there is no need to supply it explicitly.

Attributes:
labels_img_ : nibabel.nifti1.Nifti1Image

Labels image of the parcellation learned on the fMRI images.

masker_ : nilearn.maskers.NiftiMasker or nilearn.maskers.MultiNiftiMasker

The masker used to mask the data.

connectivity_ : numpy.ndarray

Voxel-to-voxel connectivity matrix computed from the mask. Note that this attribute is only present when the selected method is an agglomerative clustering type: ‘ward’, ‘complete’, or ‘average’.

__init__(method, n_parcels=50, random_state=0, mask=None, smoothing_fwhm=4.0, standardize=False, detrend=False, low_pass=None, high_pass=None, t_r=None, target_affine=None, target_shape=None, mask_strategy='epi', mask_args=None, scaling=False, n_iter=10, memory=Memory(location=None), memory_level=0, n_jobs=1, verbose=1)[source]
VALID_METHODS = ['kmeans', 'ward', 'complete', 'average', 'rena', 'hierarchical_kmeans']
transform(imgs, confounds=None)[source]

Extract signals from parcellations learned on fMRI images.

Parameters:
imgs : list of Niimg-like objects

See Input and output: neuroimaging data representation. Images to process.

confounds : list of CSV files, array-likes, or pandas.DataFrame, optional

Each file or numpy array in the list should have shape (number of scans, number of confounds). Must be of the same length as imgs.

Note

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

Returns:
region_signals : 2D numpy.ndarray or list of 2D numpy.ndarray

Signals extracted for each label for each image. For example, for a single image the shape will be (number of scans, number of labels).
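
A hedged sketch of signal extraction, reusing the fitted estimator from the sketch above; func_imgs is a hypothetical list of 4D images:

>>> signals = parcellation.transform(func_imgs)
>>> # for a single image, the extracted signals have shape
>>> # (number of scans, number of labels)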

fit_transform(imgs, confounds=None)[source]

Fit the images to parcellations and then transform them.

Parameters:
imgs : list of Niimg-like objects

See Input and output: neuroimaging data representation. Images to process: used both for fitting the parcellation and for transforming to signals.

confounds : list of CSV files, array-likes, or pandas.DataFrame, optional

Each file or numpy array in the list should have shape (number of scans, number of confounds). If given as a list, confounds must have the same length as imgs.

Note

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

Note

Confounds will be used for cleaning signals before learning parcellations.

Returns:
region_signals : 2D numpy.ndarray or list of 2D numpy.ndarray

Signals extracted for each label for each image. For example, for a single image the shape will be (number of scans, number of labels).
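
A hedged sketch; func_imgs and confounds_list (one confound array per image) are hypothetical placeholders:

>>> parcellation = Parcellations(method='kmeans', n_parcels=50)
>>> signals = parcellation.fit_transform(func_imgs,
...                                      confounds=confounds_list)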

inverse_transform(signals)[source]

Transform signals extracted from parcellations back to brain images.

Uses labels_img_ (the parcellation) built during fit().

Parameters:
signals : list of 2D numpy.ndarray

Each 2D array with shape (number of scans, number of regions).

Returns:
imgs : list of Niimg-like objects

See Input and output: neuroimaging data representation. Brain image(s).
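
A hedged round-trip sketch with the hypothetical placeholder func_imgs:

>>> signals = parcellation.fit_transform(func_imgs)
>>> compressed_imgs = parcellation.inverse_transform(signals)
>>> # each voxel in compressed_imgs carries the signal of its parcel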

fit(imgs, y=None, confounds=None)[source]

Compute the mask and the components across subjects.

Parameters:
imgs : list of Niimg-like objects

See Input and output: neuroimaging data representation. Data on which the mask is calculated. If this is a list, the affine is considered the same for all.

confounds : list of CSV file paths, numpy.ndarrays, or pandas.DataFrames, optional

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details. Should match the list of imgs given.

Returns:
self : object

Returns the instance itself. Contains attributes listed at the object level.
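
A hedged sketch of fitting alone, with the hypothetical placeholder func_imgs:

>>> parcellation = Parcellations(method='rena', n_parcels=500)
>>> parcellation = parcellation.fit(func_imgs)
>>> parcellation.masker_      # masker fitted on the data
>>> parcellation.labels_img_  # learned parcellation image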

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.
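
A short sketch; the value shown assumes the default n_parcels:

>>> parcellation = Parcellations(method='ward')
>>> parcellation.get_params()['n_parcels']
50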

score(imgs, confounds=None, per_component=False)[source]

Score function based on explained variance on imgs.

Should only be used by DecompositionEstimator-derived classes.

Parameters:
imgs : iterable of Niimg-like objects

See Input and output: neuroimaging data representation. Data to be scored.

confounds : CSV file path, numpy.ndarray, or pandas.DataFrame, optional

This parameter is passed to nilearn.signal.clean. Please see the related documentation for details.

per_component : bool, default=False

Specify whether the explained variance ratio is desired for each map or for the global set of components.

Returns:
score : float

Holds the score for each subject. The score is two-dimensional if per_component is True. The first dimension is squeezed if the number of subjects is one.
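
A hedged sketch, assuming a fitted estimator and the hypothetical placeholder func_imgs:

>>> parcellation = Parcellations(method='ward').fit(func_imgs)
>>> ev = parcellation.score(func_imgs, per_component=False)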

set_fit_request(*, confounds='$UNCHANGED$', imgs='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
confounds : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for confounds parameter in fit.

imgs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for imgs parameter in fit.

Returns:
self : object

The updated object.
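
A hedged sketch of enabling routing and requesting confounds; whether a given meta-estimator forwards the metadata depends on your scikit-learn setup:

>>> import sklearn
>>> sklearn.set_config(enable_metadata_routing=True)
>>> parcellation = Parcellations(method='kmeans').set_fit_request(
...     confounds=True)
>>> # a meta-estimator (e.g. a Pipeline) will now route confounds to fit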

set_inverse_transform_request(*, signals='$UNCHANGED$')

Request metadata passed to the inverse_transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to inverse_transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to inverse_transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
signals : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for signals parameter in inverse_transform.

Returns:
self : object

The updated object.

set_output(*, transform=None)

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform : {“default”, “pandas”, “polars”}, default=None

Configure output of transform and fit_transform.

  • “default”: Default output format of a transformer

  • “pandas”: DataFrame output

  • “polars”: Polars output

  • None: Transform configuration is unchanged

New in version 1.4: “polars” option was added.

Returns:
self : estimator instance

Estimator instance.
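
A hedged sketch; whether the extracted signals are actually returned as DataFrames depends on the transformer honoring the set_output API:

>>> parcellation = parcellation.set_output(transform='pandas')
>>> # subsequent transform / fit_transform calls request pandas output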

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
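
A short sketch of updating parameters in place:

>>> parcellation = Parcellations(method='kmeans')
>>> parcellation = parcellation.set_params(method='ward', n_parcels=200)
>>> parcellation.get_params()['method']
'ward'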

set_score_request(*, confounds='$UNCHANGED$', imgs='$UNCHANGED$', per_component='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
confounds : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for confounds parameter in score.

imgs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for imgs parameter in score.

per_component : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for per_component parameter in score.

Returns:
self : object

The updated object.

set_transform_request(*, confounds='$UNCHANGED$', imgs='$UNCHANGED$')

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
confounds : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for confounds parameter in transform.

imgs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for imgs parameter in transform.

Returns:
self : object

The updated object.

Examples using nilearn.regions.Parcellations

Clustering methods to learn a brain parcellation from fMRI