Note

This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the user guide for the big picture.

nilearn.decoding.Decoder

class nilearn.decoding.Decoder(estimator='svc', mask=None, cv=10, param_grid=None, screening_percentile=20, scoring='roc_auc', smoothing_fwhm=None, standardize=True, target_affine=None, target_shape=None, mask_strategy='background', low_pass=None, high_pass=None, t_r=None, memory=None, memory_level=0, n_jobs=1, verbose=0)[source]

A wrapper for popular classification strategies in neuroimaging.

The Decoder object supports classification methods. It implements a model selection scheme that averages the best models within a cross-validation loop. The resulting average model is the one used as a classifier. This object also leverages the `NiftiMasker` objects to provide a direct interface with the Nifti files on disk.
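
The sketch below gives a minimal, illustrative sense of the typical workflow; the image paths and labels are placeholders, and real data would need enough samples per class for the default 10-fold cross-validation.

    from nilearn.decoding import Decoder

    # Placeholder inputs: one 3D image and one condition label per sample.
    func_imgs = ["trial_01.nii.gz", "trial_02.nii.gz"]  # hypothetical paths
    labels = ["face", "house"]                          # hypothetical labels

    decoder = Decoder(estimator="svc", smoothing_fwhm=4, standardize=True)
    decoder.fit(func_imgs, labels)
    y_pred = decoder.predict(func_imgs)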

Parameters:
estimator : str, default='svc'

The estimator to choose among:

svc = LinearSVC(penalty="l2", max_iter=1e4)

svc_l2 = LinearSVC(penalty="l2", max_iter=1e4)

Note

Same as option svc.

svc_l1 = LinearSVC(penalty="l1", dual=False, max_iter=1e4)

logistic = LogisticRegressionCV(penalty="l2", solver="liblinear")

logistic_l1 = LogisticRegressionCV(penalty="l1", solver="liblinear")

logistic_l2 = LogisticRegressionCV(penalty="l2", solver="liblinear")

Note

Same as option logistic.

ridge_classifier = RidgeClassifierCV()

dummy = DummyClassifier(strategy="stratified", random_state=0)
mask : filename, Nifti1Image, NiftiMasker, MultiNiftiMasker, SurfaceImage or SurfaceMasker, default=None

Mask to be used on the data. If an instance of a masker is passed, its mask and parameters will be used. If no mask is given, the mask will be computed automatically from the provided images by an inbuilt masker with default parameters. Refer to NiftiMasker, MultiNiftiMasker or SurfaceMasker for the default parameters.

cv : cross-validation generator or int, default=10

A cross-validation generator. See: https://scikit-learn.org/stable/modules/cross_validation.html. The default 10 refers to K = 10 folds of StratifiedKFold when groups is None in the fit method of this class. If groups is specified but cv is not set to a custom CV splitter, the default is LeaveOneGroupOut.
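
As a sketch, a splitter from scikit-learn can be passed instead of the integer default; the particular splitter and its settings below are only an example.

    from sklearn.model_selection import StratifiedKFold
    from nilearn.decoding import Decoder

    # Explicit 5-fold stratified cross-validation instead of the default cv=10.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    decoder = Decoder(estimator="svc", cv=cv)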

param_grid : dict of str to sequence, or sequence of such, default=None

The parameter grid to explore, as a dictionary mapping estimator parameters to sequences of allowed values.

None or an empty dict signifies default parameters.

A sequence of dicts signifies a sequence of grids to search, and is useful to avoid exploring parameter combinations that make no sense or have no effect. See scikit-learn documentation for more information, for example: https://scikit-learn.org/stable/modules/grid_search.html

For DummyClassifier, the parameter grid defaults to an empty dictionary; class predictions are estimated using the default strategy.
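
As an illustration, assuming the default 'svc' option (a LinearSVC), its regularization parameter C could be explored as follows; the grid values are arbitrary.

    from nilearn.decoding import Decoder

    # Single grid over the LinearSVC regularization parameter C.
    decoder = Decoder(estimator="svc", param_grid={"C": [0.1, 1.0, 10.0]})

    # A sequence of dicts defines several grids searched one after the other.
    decoder = Decoder(estimator="svc", param_grid=[{"C": [0.1, 1.0]}, {"C": [10.0, 100.0]}])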

screening_percentile : int or float in the closed interval [0, 100], default=20

The percentage of brain volume that will be kept with respect to a full MNI template. In particular, if it is lower than 100, a univariate feature selection based on the Anova F-value for the input data will be performed, keeping the given percentile of the highest-scoring features.

scoring : str, callable or None, default='roc_auc'

The scoring strategy to use. See the scoring parameter documentation in scikit-learn (https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules). If callable, it takes as arguments the fitted estimator, the test data (X_test) and the test target (y_test) if y is not None, e.g. scorer(estimator, X_test, y_test).

For classification, valid entries are: ‘accuracy’, ‘f1’, ‘precision’, ‘recall’ or ‘roc_auc’.
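
A callable following the scorer(estimator, X_test, y_test) convention can also be passed; the metric below is only an example.

    from sklearn.metrics import balanced_accuracy_score
    from nilearn.decoding import Decoder

    def balanced_accuracy(estimator, X_test, y_test):
        # Follows the scorer(estimator, X_test, y_test) convention described above.
        return balanced_accuracy_score(y_test, estimator.predict(X_test))

    decoder = Decoder(estimator="svc", scoring=balanced_accuracy)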

smoothing_fwhm : float, optional

If smoothing_fwhm is not None, it gives the full-width at half maximum in millimeters of the spatial smoothing to apply to the signal.

standardize : bool, default=True

If standardize is True, the data are centered and normed: their mean is put to 0 and their variance is put to 1 in the time dimension.

target_affine : numpy.ndarray, default=None

If specified, the image is resampled corresponding to this new affine. target_affine can be a 3x3 or a 4x4 matrix.

target_shape : tuple or list, default=None

If specified, the image will be resized to match this new shape. len(target_shape) must be equal to 3.

Note

If target_shape is specified, a target_affine of shape (4, 4) must also be given.

low_pass : float or None, default=None

Low cutoff frequency in Hertz. If specified, signals above this frequency will be filtered out. If None, no low-pass filtering will be performed.

high_pass : float, default=None

High cutoff frequency in Hertz. If specified, signals below this frequency will be filtered out.

t_r : float or None, default=None

Repetition time, in seconds (sampling period). Set to None if not provided.

mask_strategy : {"background", "epi", "whole-brain-template", "gm-template", "wm-template"}, optional

The strategy used to compute the mask:

  • "background": Use this option if your images present a clear homogeneous background.

  • "epi": Use this option if your images are raw EPI images

  • "whole-brain-template": This will extract the whole-brain part of your data by resampling the MNI152 brain mask for your data’s field of view.

    Note

    This option is equivalent to the previous ‘template’ option which is now deprecated.

  • "gm-template": This will extract the gray matter part of your data by resampling the corresponding MNI152 template for your data’s field of view.

    Added in version 0.8.1.

  • "wm-template": This will extract the white matter part of your data by resampling the corresponding MNI152 template for your data’s field of view.

    Added in version 0.8.1.

Note

This parameter will be ignored if a mask image is provided.

Note

Depending on this value, the mask will be computed from nilearn.masking.compute_background_mask, nilearn.masking.compute_epi_mask, or nilearn.masking.compute_brain_mask.

Default=’background’.
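
For illustration, the gray-matter strategy could be combined with resampling; the 3 mm resolution below is arbitrary.

    import numpy as np
    from nilearn.decoding import Decoder

    # Compute a gray-matter mask and resample the data to a 3 mm grid.
    decoder = Decoder(estimator="svc", mask_strategy="gm-template", target_affine=np.diag((3, 3, 3)))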

memory : None, instance of joblib.Memory, str, or pathlib.Path, default=None

Used to cache the masking process. By default, no caching is done. If a str is given, it is the path to the caching directory.

memory_level : int, default=0

Rough estimator of the amount of memory used by caching. Higher value means more memory for caching. Zero means no caching.
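
As a brief sketch, the masking step can be cached in a local directory; the directory name here is a placeholder.

    from joblib import Memory
    from nilearn.decoding import Decoder

    # Cache the masking step; a plain string or pathlib.Path works as well.
    decoder = Decoder(estimator="svc", memory=Memory("nilearn_cache"), memory_level=1)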

n_jobs : int, default=1

The number of CPUs to use to do the computation. -1 means ‘all CPUs’.

verbose : int, default=0

Verbosity level (0 means no message).

See also

nilearn.decoding.DecoderRegressor

Regression strategies for neuroimaging.

nilearn.decoding.FREMClassifier

State-of-the-art classification pipeline for neuroimaging.

nilearn.decoding.SpaceNetClassifier

Graph-Net and TV-L1 priors/penalties

__init__(estimator='svc', mask=None, cv=10, param_grid=None, screening_percentile=20, scoring='roc_auc', smoothing_fwhm=None, standardize=True, target_affine=None, target_shape=None, mask_strategy='background', low_pass=None, high_pass=None, t_r=None, memory=None, memory_level=0, n_jobs=1, verbose=0)[source]
decision_function(X)[source]

Predict confidence scores for samples in X.

Parameters:
X : Niimg-like, or a list of Niimg-like objects, str or path-like

See Input and output: neuroimaging data representation. Data on which the prediction is to be made. If this is a list, the affine is considered the same for all.

Returns:
scores : numpy.ndarray, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)

Confidence scores per (sample, class) combination. In the binary case, the confidence score for self.classes_[1], where a value greater than 0 means this class would be predicted.

fit(X, y, groups=None)[source]

Fit the decoder (learner).

Parameters:
X : list of Niimg-like or SurfaceImage objects

See Input and output: neuroimaging data representation. Data on which the model is to be fitted. If this is a list, the affine is considered the same for all.

y : numpy.ndarray of shape (n_samples,) or list of length n_samples

The dependent variable (age, sex, IQ, yes/no, etc.). Target variable to predict. Must have exactly as many elements as 3D images in niimg.

groups : array-like of shape (n_samples,) or None, default=None

Group labels for the samples used while splitting the dataset into train/test set.

Note that this parameter must be specified in some scikit-learn cross-validation generators to calculate the number of splits, e.g. sklearn.model_selection.LeaveOneGroupOut or sklearn.model_selection.LeavePGroupsOut.

For more details see https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators-for-grouped-data
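
For example, run-wise (leave-one-group-out) cross-validation could be set up as follows; func_imgs, labels, and runs are placeholders for an actual dataset.

    from sklearn.model_selection import LeaveOneGroupOut
    from nilearn.decoding import Decoder

    decoder = Decoder(estimator="svc", cv=LeaveOneGroupOut())
    # runs: one group label (e.g. session or run index) per image in func_imgs.
    decoder.fit(func_imgs, labels, groups=runs)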

Attributes:
masker_ : instance of NiftiMasker, MultiNiftiMasker, or SurfaceMasker

The masker used to mask the data.

mask_img_ : Nifti1Image or SurfaceImage

Mask computed by the masker object.

classes_ : numpy.ndarray

Classes to predict. For classification only.

screening_percentile_ : float

Screening percentile corrected according to volume of mask, relative to the volume of standard brain.

coef_ : numpy.ndarray, shape=(n_classes, n_features)

Contains the mean of the model weight vectors across folds for each class. Returns None for Dummy estimators.

coef_img_ : dict of Nifti1Image

Dictionary containing coef_ with class names as keys, and coef_ transformed into Nifti1Image objects as values. In the case of a regression, it contains a single Nifti1Image at the key 'beta'. Ignored if Dummy estimators are provided.

intercept_ : ndarray, shape (n_classes,)

Intercept (a.k.a. bias) added to the decision function. Ignored if Dummy estimators are provided.

cv_ : list of pairs of lists

List of the (n_folds,) folds. Each pair is composed of two lists of indices, one for the train samples and one for the test samples of the corresponding fold.

std_coef_ : numpy.ndarray, shape=(n_classes, n_features)

Contains the standard deviation of the model weight vectors across folds for each class. Note that folds are not independent, see https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators-for-grouped-data. Ignored if Dummy estimators are provided.

std_coef_img_ : dict of Nifti1Image

Dictionary containing std_coef_ with class names as keys, and std_coef_ transformed into Nifti1Image objects as values. In the case of a regression, it contains a single Nifti1Image at the key 'beta'. Ignored if Dummy estimators are provided.

cv_params_ : dict of lists

Best point in the parameter grid for each tested fold in the inner cross validation loop. The grid is empty when Dummy estimators are provided. Note: if the estimator used its built-in cross-validation, this will include an additional key for the single best value estimated by the built-in cross-validation (‘best_C’ for LogisticRegressionCV and ‘best_alpha’ for RidgeCV/RidgeClassifierCV/LassoCV), in addition to the input list of values.

scorer_ : function

Scorer function used on the held out data to choose the best parameters for the model.

cv_scores_ : dict, (classes, n_folds)

Scores (misclassification) for each parameter and on each fold.

n_outputs_ : int

Number of outputs (column-wise).

dummy_output_ : ndarray, shape=(n_classes, 2) or shape=(1, 1) for regression

Contains dummy estimator attributes after class predictions using strategies of sklearn.dummy.DummyClassifier (class_prior) and sklearn.dummy.DummyRegressor (constant) from scikit-learn. This attribute is necessary for estimating class predictions after fit. Returns None if non-dummy estimators are provided.
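
As a sketch, once fit has been called the attributes above can be inspected; decoder and the class name 'face' below are placeholders.

    # Assumes decoder.fit(...) has already been called on a dataset with a 'face' class.
    print(decoder.classes_)            # classes seen during fit
    print(decoder.cv_scores_["face"])  # per-fold scores for that class
    decoder.coef_img_["face"].to_filename("face_weights.nii.gz")  # save the weight map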

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.
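
For illustration, parameters can be read and updated in the usual scikit-learn way (see also set_params below); decoder is a placeholder for an existing instance.

    params = decoder.get_params()
    print(params["screening_percentile"])
    decoder.set_params(screening_percentile=10, n_jobs=2)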

predict(X)[source]

Predict a label for all X vectors indexed by the first axis.

Parameters:
X : Niimg-like, or a list of Niimg-like objects, str or path-like

See Input and output: neuroimaging data representation. Data on which prediction is to be made.

Returns:
y_pred : numpy.ndarray, shape=(n_samples,)

Predicted class label per sample.
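
As a short sketch, predict returns hard labels while decision_function returns continuous scores; decoder and test_imgs are placeholders for a fitted model and held-out images.

    y_pred = decoder.predict(test_imgs)            # class labels, shape (n_samples,)
    scores = decoder.decision_function(test_imgs)  # signed confidence scores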

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) w.r.t. y.

set_fit_request(*, groups='$UNCHANGED$')

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
groups : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for groups parameter in fit.

Returns:
self : object

The updated object.
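
For illustration, with metadata routing enabled in scikit-learn (>= 1.3), groups can be requested for fit when the decoder is nested in a meta-estimator.

    import sklearn
    from nilearn.decoding import Decoder

    sklearn.set_config(enable_metadata_routing=True)
    decoder = Decoder(estimator="svc").set_fit_request(groups=True)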

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_score_request(*, sample_weight='$UNCHANGED$')

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.

Examples using nilearn.decoding.Decoder

An introduction tutorial to fMRI decoding

Decoding with ANOVA + SVM: face vs house in the Haxby dataset

Decoding of a dataset after GLM fit for signal extraction

ROI-based decoding analysis in Haxby et al. dataset

Setting a parameter by cross-validation

Different classifiers in decoding the Haxby dataset

Understanding nilearn.decoding.Decoder

A short demo of the surface images & maskers

Advanced decoding using scikit learn