Note

This page is reference documentation: it only describes the class signature, not how to use it. Please refer to the user guide for the big picture.

nilearn.decoding.FREMClassifier#

class nilearn.decoding.FREMClassifier(estimator='svc', mask=None, cv=30, param_grid=None, clustering_percentile=10, screening_percentile=20, scoring='roc_auc', smoothing_fwhm=None, standardize=True, target_affine=None, target_shape=None, mask_strategy='background', low_pass=None, high_pass=None, t_r=None, memory=None, memory_level=0, n_jobs=1, verbose=0)[source]#

State-of-the-art decoding scheme applied to standard classifiers.

FREM uses an implicit spatial regularization through fast clustering and aggregates a high number of estimators trained on various splits of the training set, thus returning a very robust decoder at a lower computational cost than other spatially regularized methods [1].
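
A minimal usage sketch is given below; the tiny random 4D image, the all-ones mask, and the labels are purely illustrative stand-ins for real fMRI data and a real brain mask.

# Minimal, illustrative sketch (synthetic data only; not a real analysis).
import numpy as np
import nibabel as nib
from nilearn.decoding import FREMClassifier

rng = np.random.RandomState(0)
n_samples = 40
# Tiny synthetic 4D "fMRI" image: 8x8x8 voxels, one volume per sample.
imgs = nib.Nifti1Image(rng.normal(size=(8, 8, 8, n_samples)), np.eye(4))
mask = nib.Nifti1Image(np.ones((8, 8, 8), dtype=np.int8), np.eye(4))
y = np.tile(["face", "house"], n_samples // 2)

clf = FREMClassifier(estimator="svc", mask=mask, cv=10,
                     clustering_percentile=10, screening_percentile=20)
clf.fit(imgs, y)             # clustering, screening, and an ensemble over 10 splits
y_pred = clf.predict(imgs)   # predicted class label per volume
print(clf.score(imgs, y))    # roc_auc by default

In practice, separate training and test images would be used instead of scoring on the training data.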

Parameters:
estimator : str, default='svc'

The estimator to choose among:

svc = LinearSVC(penalty="l2", max_iter=1e4)
svc_l2 = LinearSVC(penalty="l2", max_iter=1e4)

Note

Same as option svc.

svc_l1 = LinearSVC(penalty="l1", dual=False, max_iter=1e4)
logistic = LogisticRegression(penalty="l2", solver="liblinear")
logistic_l1 = LogisticRegression(penalty="l1", solver="liblinear")
logistic_l2 = LogisticRegression(penalty="l2", solver="liblinear")

Note

Same as option logistic.

ridge_classifier = RidgeClassifierCV()
dummy = DummyClassifier(strategy="stratified", random_state=0)
mask : filename, Nifti1Image, NiftiMasker, or MultiNiftiMasker, default=None

Mask to be used on the data. If an instance of a masker is passed, its mask and parameters will be used. If no mask is given, the mask will be computed automatically from the provided images by a built-in masker with default parameters. Refer to NiftiMasker or MultiNiftiMasker to check the default parameters.

cv : int or cross-validation generator, default=30

If an int, the number of stratified shuffled splits to use; this is usually the right way to train many different classifiers. A good trade-off between stability of the aggregated model and computation time is 50 splits. Shuffled splits are seeded by default for reproducibility. A cross-validation generator can also be passed.

param_grid : dict of str to sequence, or sequence of such, default=None

The parameter grid to explore, as a dictionary mapping estimator parameters to sequences of allowed values.

None or an empty dict signifies default parameters.

A sequence of dicts signifies a sequence of grids to search, and is useful to avoid exploring parameter combinations that make no sense or have no effect. See scikit-learn documentation for more information, for example: https://scikit-learn.org/stable/modules/grid_search.html
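
For illustration (the values below are arbitrary and assume the default LinearSVC estimator), a grid can be a single dict or a sequence of dicts:

# A single grid over LinearSVC's regularization strength.
param_grid = {"C": [0.1, 1.0, 10.0]}

# Or a sequence of grids, explored one after the other.
param_grid = [
    {"penalty": ["l2"], "C": [0.1, 1.0, 10.0]},
    {"penalty": ["l1"], "dual": [False], "C": [1.0, 10.0]},
]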

clustering_percentile : int or float in closed interval [0, 100], default=10

Used to perform a fast ReNA clustering on input data as a first step of fit. It agglomerates similar features together to reduce their number down to this percentile. ReNA is typically efficient for clustering_percentile equal to 10.

screening_percentile : int or float in closed interval [0, 100], default=20

The percentage of brain volume that will be kept, with respect to a full MNI template. In particular, if it is lower than 100, a univariate feature selection based on the ANOVA F-value of the input data will be performed, keeping this percentile of the highest-scoring features.

scoring : str, callable, or None, default='roc_auc'

The scoring strategy to use. See the scikit-learn documentation for details: https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules

If callable, it takes as arguments the fitted estimator, the test data (X_test), and the test target (y_test) if y is not None, e.g. scorer(estimator, X_test, y_test).

For classification, valid entries are: ‘accuracy’, ‘f1’, ‘precision’, ‘recall’ or ‘roc_auc’. (default ‘roc_auc’).

smoothing_fwhm : float, optional

If smoothing_fwhm is not None, it gives the full-width at half maximum in millimeters of the spatial smoothing to apply to the signal.

standardize : bool, default=True

If standardize is True, the data are centered and normalized: their mean is set to 0 and their variance to 1 in the time dimension.

target_affine : numpy.ndarray, default=None

If specified, the image is resampled corresponding to this new affine. target_affine can be a 3x3 or a 4x4 matrix.

target_shape : tuple or list, default=None

If specified, the image will be resized to match this new shape. len(target_shape) must be equal to 3.

Note

If target_shape is specified, a target_affine of shape (4, 4) must also be given.
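
For example (the resampling values below are arbitrary illustrations):

import numpy as np
from nilearn.decoding import FREMClassifier

# Resample the input images to 3 mm isotropic voxels; a 3x3 affine is
# enough when no target_shape is requested.
clf = FREMClassifier(target_affine=np.diag((3, 3, 3)))

# When a target_shape is also given, a full (4, 4) affine is required.
clf = FREMClassifier(target_affine=np.diag((3.0, 3.0, 3.0, 1.0)),
                     target_shape=(50, 60, 50))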

low_pass : float or None, default=None

Low cutoff frequency in Hertz. If specified, signals above this frequency will be filtered out. If None, no low-pass filtering will be performed.

high_pass : float or None, default=None

High cutoff frequency in Hertz. If specified, signals below this frequency will be filtered out.

t_r : float or None, default=None

Repetition time, in seconds (sampling period). Set to None if not provided.

mask_strategy : {"background", "epi", "whole-brain-template", "gm-template", "wm-template"}, default="background"

The strategy used to compute the mask:

  • “background”: Use this option if your images present a clear homogeneous background.

  • “epi”: Use this option if your images are raw EPI images.

  • “whole-brain-template”: This will extract the whole-brain part of your data by resampling the MNI152 brain mask for your data’s field of view.

Note

This option is equivalent to the previous ‘template’ option which is now deprecated.

  • “gm-template”: This will extract the gray matter part of your data by resampling the corresponding MNI152 template for your data’s field of view.

    New in version 0.8.1.

  • “wm-template”: This will extract the white matter part of your data by resampling the corresponding MNI152 template for your data’s field of view.

    New in version 0.8.1.

Note

This parameter will be ignored if a mask image is provided.

Note

Depending on this value, the mask will be computed from nilearn.masking.compute_background_mask, nilearn.masking.compute_epi_mask, or nilearn.masking.compute_brain_mask.

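
For instance (the file name below is hypothetical), the strategy is only used when no mask is supplied:

from nilearn.decoding import FREMClassifier
from nilearn.maskers import NiftiMasker

# No mask given: a mask is computed from the images, here as for raw EPI data.
clf = FREMClassifier(mask_strategy="epi")

# A masker is given: its mask and parameters are used and mask_strategy is ignored.
masker = NiftiMasker(mask_img="mask.nii.gz", smoothing_fwhm=4)  # hypothetical path
clf = FREMClassifier(mask=masker)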

memory : instance of joblib.Memory, str, or pathlib.Path, default=None

Used to cache the masking process. By default, no caching is done. If a str is given, it is the path to the caching directory.

memory_level : int, default=0

Rough estimator of the amount of memory used by caching. Higher value means more memory for caching. Zero means no caching.

n_jobs : int, default=1

The number of CPUs to use to do the computation. -1 means ‘all CPUs’.

verbose : int, default=0

Verbosity level (0 means no message).

See also

nilearn.decoding.Decoder

Classification strategies for Neuroimaging.

nilearn.decoding.FREMRegressor

State-of-the-art regression pipeline for Neuroimaging.

References

__init__(estimator='svc', mask=None, cv=30, param_grid=None, clustering_percentile=10, screening_percentile=20, scoring='roc_auc', smoothing_fwhm=None, standardize=True, target_affine=None, target_shape=None, mask_strategy='background', low_pass=None, high_pass=None, t_r=None, memory=None, memory_level=0, n_jobs=1, verbose=0)[source]#
decision_function(X)[source]#

Predict confidence scores for samples in X.

Parameters:
X : Niimg-like, or list of Niimg-like objects, str, or path-like objects

See Input and output: neuroimaging data representation. Data on which prediction is to be made. If this is a list, the affine is considered the same for all.

Returns:
scores : numpy.ndarray, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)

Confidence scores per (sample, class) combination. In the binary case, the confidence score for self.classes_[1], where a value greater than 0 means this class would be predicted.

fit(X, y, groups=None)[source]#

Fit the decoder (learner).

Parameters:
X: list of Niimg-like or SurfaceImage objects

See Input and output: neuroimaging data representation. Data on which model is to be fitted. If this is a list, the affine is considered the same for all.

y : numpy.ndarray of shape=(n_samples,) or list of length n_samples

The dependent variable (age, sex, IQ, yes/no, etc.). Target variable to predict. Must have exactly as many elements as 3D images in niimg.

groups : array-like of shape (n_samples,) or None, default=None

Group labels for the samples used while splitting the dataset into train/test set. Default None.

Note that this parameter must be specified in some scikit-learn cross-validation generators to calculate the number of splits, e.g. sklearn.model_selection.LeaveOneGroupOut or sklearn.model_selection.LeavePGroupsOut.

For more details see https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators-for-grouped-data
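
A minimal sketch of passing groups with a grouped cross-validation generator; the synthetic image, labels, and the four acquisition "runs" below are made up for illustration:

import numpy as np
import nibabel as nib
from sklearn.model_selection import LeaveOneGroupOut
from nilearn.decoding import FREMClassifier

rng = np.random.RandomState(42)
n_trials = 40
imgs = nib.Nifti1Image(rng.normal(size=(8, 8, 8, n_trials)), np.eye(4))
mask = nib.Nifti1Image(np.ones((8, 8, 8), dtype=np.int8), np.eye(4))
y = np.tile(["face", "house"], n_trials // 2)
runs = np.repeat([1, 2, 3, 4], n_trials // 4)  # acquisition run of each trial

# Each fold leaves one run out: train on three runs, test on the fourth.
clf = FREMClassifier(estimator="svc", mask=mask, cv=LeaveOneGroupOut())
clf.fit(imgs, y, groups=runs)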

Attributes:
masker_ : instance of NiftiMasker, MultiNiftiMasker, or SurfaceMasker

The masker used to mask the data.

mask_img_ : Nifti1Image or SurfaceImage

Mask computed by the masker object.

classes_ : numpy.ndarray

Classes to predict. For classification only.

screening_percentile_ : float

Screening percentile corrected according to volume of mask, relative to the volume of standard brain.

coef_ : numpy.ndarray, shape=(n_classes, n_features)

Contains the mean of the models' weight vectors across folds for each class. Returns None for Dummy estimators.

coef_img_ : dict of Nifti1Image

Dictionary containing coef_ with class names as keys, and coef_ transformed in Nifti1Images as values. In the case of a regression, it contains a single Nifti1Image at the key ‘beta’. Ignored if Dummy estimators are provided.

intercept_ : ndarray, shape=(n_classes,)

Intercept (a.k.a. bias) added to the decision function. Ignored if Dummy estimators are provided.

cv_ : list of pairs of lists

List of the (n_folds,) folds. For the corresponding fold, each pair is composed of two lists of indices, one for the train samples and one for the test samples.

std_coef_ : numpy.ndarray, shape=(n_classes, n_features)

Contains the standard deviation of the models' weight vectors across folds for each class. Note that folds are not independent; see https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators-for-grouped-data for details. Ignored if Dummy estimators are provided.

std_coef_img_ : dict of Nifti1Image

Dictionary containing std_coef_ with class names as keys, and std_coef_ transformed into Nifti1Image as values. In the case of a regression, it contains a single Nifti1Image at the key ‘beta’. Ignored if Dummy estimators are provided.

cv_params_ : dict of lists

Best point in the parameter grid for each tested fold in the inner cross validation loop. The grid is empty when Dummy estimators are provided. Note: if the estimator used its built-in cross-validation, this will include an additional key for the single best value estimated by the built-in cross-validation (‘best_C’ for LogisticRegressionCV and ‘best_alpha’ for RidgeCV/RidgeClassifierCV/LassoCV), in addition to the input list of values.

scorer_ : function

Scorer function used on the held out data to choose the best parameters for the model.

cv_scores_ : dict, (classes, n_folds)

Scores (misclassification) for each parameter and each fold.

n_outputs_ : int

Number of outputs (column-wise).

dummy_output_ : ndarray, shape=(n_classes, 2), or shape=(1, 1) for regression

Contains dummy estimator attributes after class predictions using strategies of DummyClassifier (class_prior) and DummyRegressor (constant) from scikit-learn. This attribute is necessary for estimating class predictions after fit. Returns None if non-dummy estimators are provided.
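
A brief sketch of inspecting these attributes; it assumes a FREMClassifier clf has already been fitted (for instance as in the sketches above) and that 'face' is one of its classes:

from nilearn import plotting

print(clf.classes_)      # class labels, e.g. ['face', 'house']
print(clf.cv_params_)    # best grid point per class, one entry per fold
print(clf.cv_scores_)    # per-class scores across folds

# One weight image per class; usable with the usual plotting and saving tools.
weight_img = clf.coef_img_["face"]          # 'face' assumed to be a class
weight_img.to_filename("frem_face_weights.nii.gz")
plotting.plot_stat_map(weight_img, title="FREM weights: face")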

get_metadata_routing()#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)#

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

predict(X)[source]#

Predict a label for all X vectors indexed by the first axis.

Parameters:
X : Niimg-like, or list of Niimg-like objects, str, or path-like objects

See Input and output: neuroimaging data representation. Data on which prediction is to be made.

Returns:
y_pred : numpy.ndarray, shape=(n_samples,)

Predicted class label per sample.

score(X, y, *args)[source]#

Compute the prediction score using the scoring metric defined by the scoring attribute.

Parameters:
X : Niimg-like, or list of Niimg-like objects, str, or path-like objects

See Input and output: neuroimaging data representation. Data on which prediction is to be made.

y : numpy.ndarray

Target values.

*args : optional arguments

Optional arguments that can be passed to the scoring metric, for example sample_weight.

Returns:
score : float

Prediction score.
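
For example (X_test and y_test stand for a held-out set of images and labels, not defined here):

# Evaluate on held-out data with the metric given by `scoring`
# (roc_auc by default for FREMClassifier).
test_score = clf.score(X_test, y_test)
print(f"held-out score: {test_score:.3f}")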

set_fit_request(*, groups='$UNCHANGED$')#

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
groups : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for groups parameter in fit.

Returns:
self : object

The updated object.
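
A brief sketch; this only matters when metadata routing is enabled and the classifier is wrapped in a meta-estimator:

import sklearn
from nilearn.decoding import FREMClassifier

sklearn.set_config(enable_metadata_routing=True)

# Ask meta-estimators to forward `groups` to this estimator's fit method.
clf = FREMClassifier().set_fit_request(groups=True)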

set_params(**params)#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
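
For example:

from nilearn.decoding import FREMClassifier

clf = FREMClassifier()
# Change hyper-parameters after construction and before fitting.
clf.set_params(estimator="logistic_l1", cv=10, screening_percentile=10)
print(clf.get_params()["cv"])   # 10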

Examples using nilearn.decoding.FREMClassifier#

Decoding with FREM: face vs house vs chair object recognition
