This page is reference documentation: it describes only the function signature, not how to use it. Please refer to the user guide for the big picture.

7.5.7. nilearn.image.high_variance_confounds

nilearn.image.high_variance_confounds(imgs, n_confounds=5, percentile=2.0, detrend=True, mask_img=None)

Return confound signals extracted from the input signals with highest variance.

imgs: Niimg-like object

4D image from which the confound signals are extracted.

mask_img: Niimg-like object

If provided, confounds are extracted only from voxels inside the mask. If not provided, all voxels are used.

n_confounds: int

Number of confound signals to return.

percentile: float

Percentile of highest-variance signals to keep before computing the singular value decomposition; 0 <= percentile <= 100. mask_img.sum() * percentile / 100 must be greater than n_confounds.
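
As a quick sanity check of that constraint, with hypothetical numbers (`mask_size` stands in for `mask_img.sum()` and is not a nilearn variable):

```python
# Illustrative arithmetic only; mask_size is a made-up stand-in
# for mask_img.sum() (number of voxels inside the mask).
mask_size = 10_000
percentile = 2.0
n_confounds = 5

# Number of voxels retained before the SVD.
n_kept = mask_size * percentile / 100.0
assert n_kept > n_confounds  # constraint satisfied: 200.0 > 5
```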

detrend: bool

If True, detrend signals before processing.

v: numpy.ndarray

Highest-variance confounds. Shape: (number of scans, n_confounds).


This method is related to what has been published in the literature as 'CompCor' (Behzadi et al., NeuroImage 2007).

The implemented algorithm does the following:

  • compute the sum of squares for each signal (no mean removal)

  • keep the given percentile of signals with highest variance (percentile)

  • compute an SVD of the retained signals

  • return the given number (n_confounds) of signals from the SVD with the highest singular values
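
The steps above can be sketched in plain NumPy. The helper below is illustrative only (its name and exact numerics are not nilearn's implementation), but it follows the same variance-selection-plus-SVD recipe:

```python
import numpy as np

def high_variance_confounds_sketch(series, n_confounds=5, percentile=2.0,
                                   detrend=True):
    """Illustrative sketch of the algorithm; not nilearn's implementation.

    series: 2D array of shape (n_scans, n_voxels).
    """
    series = np.asarray(series, dtype=float)
    if detrend:
        # Remove a per-voxel linear trend via least squares.
        n_scans = series.shape[0]
        design = np.vstack([np.arange(n_scans), np.ones(n_scans)]).T
        coef, *_ = np.linalg.lstsq(design, series, rcond=None)
        series = series - design @ coef
    # Sum of squares of each signal (no mean removal).
    var = (series ** 2).sum(axis=0)
    # Keep the `percentile` percent of signals with highest variance.
    threshold = np.percentile(var, 100.0 - percentile)
    kept = series[:, var >= threshold]
    # SVD of the retained signals; columns of u come sorted by singular value.
    u, _s, _vt = np.linalg.svd(kept, full_matrices=False)
    # Return the n_confounds components with the highest singular values.
    return u[:, :n_confounds]

# Usage on synthetic data: 40 scans x 1000 voxels.
rng = np.random.default_rng(0)
confounds = high_variance_confounds_sketch(rng.standard_normal((40, 1000)))
print(confounds.shape)  # (40, 5)
```

In practice you would call nilearn.image.high_variance_confounds on a 4D Niimg-like image and pass the returned array as the confounds argument of signal-cleaning functions.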