3.4. Region Extraction for better brain parcellations

This section shows how to use RegionExtractor to split the connected components of brain maps into separate activation regions, and how to learn the functional connectivity interactions between those regions.

3.4.1. Fetching resting state functional datasets

We use resting-state functional connectivity datasets from 20 subjects of the ADHD sample, which are already preprocessed and publicly available. The fetch_adhd utility implemented in nilearn fetches these datasets automatically.

from nilearn import datasets

adhd_dataset = datasets.fetch_adhd(n_subjects=20)
func_filenames = adhd_dataset.func
confounds = adhd_dataset.confounds

3.4.2. Brain maps using Dictionary Learning

Here, we use the DictLearning object, a multi-subject model that decomposes multi-subject fMRI datasets into functionally defined maps. We set the parameters and call fit directly on the dataset filenames, without first converting each file to a Nifti1Image object.

from nilearn.decomposition import DictLearning

# Initialize DictLearning object
dict_learn = DictLearning(n_components=5, smoothing_fwhm=6.,
                          memory="nilearn_cache", memory_level=2)
# Fit to the data
dict_learn.fit(func_filenames)
# Resting state networks/maps are stored in components_
components_img = dict_learn.masker_.inverse_transform(dict_learn.components_)

3.4.3. Visualization of Dictionary Learning maps

The maps stored in components_img can be shown using nilearn plotting utilities. Here, we use plot_prob_atlas, which makes it easy to visualize 4D atlas maps over the standard anatomical template. Each map is displayed in a different, automatically chosen color.

from nilearn import plotting

plotting.plot_prob_atlas(components_img, view_type='filled_contours',
                         title='Dictionary Learning maps')

3.4.4. Region Extraction with Dictionary Learning maps

We use the RegionExtractor object to extract connected brain regions from the dictionary maps into separate activation regions, with the automatic thresholding strategy thresholding_strategy='ratio_n_voxels'. The thresholding strategy first isolates the foreground information in the maps; robust region extraction is then performed on that foreground using the random walker algorithm, selected with extractor='local_regions'.

Foreground extraction is controlled by the parameter threshold=.5, which represents the expected proportion of voxels included in the regions (i.e., with a non-zero value in one of the maps). If you need to keep a larger proportion of voxels, tweak the threshold according to the maps data.

The parameter min_region_size=1350 mm^3 sets the minimum size of the extracted regions, so that small spurious regions are discarded. Specifying the size in physical units (mm^3) rather than in voxel counts adapts well to the resolution of the image. Please see the documentation of nilearn.regions.connected_regions for more details.
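As a quick sanity check, the mm^3 threshold can be converted into a voxel count from the image affine. The 3 mm isotropic affine below is an assumption for illustration; inspect img.affine for the real values.

```python
import numpy as np

# Hypothetical 3 mm isotropic affine, for illustration only
# (check the actual image affine of your data).
affine = np.diag([3., 3., 3., 1.])

# Volume of one voxel in mm^3 is the absolute determinant of the
# 3x3 spatial part of the affine.
voxel_volume = np.abs(np.linalg.det(affine[:3, :3]))

# Number of voxels a region must span to survive min_region_size=1350.
min_voxels = int(1350 / voxel_volume)
print(min_voxels)  # 50
```

So at 3 mm resolution, min_region_size=1350 mm^3 corresponds to regions of at least 50 voxels.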

from nilearn.regions import RegionExtractor

extractor = RegionExtractor(components_img, threshold=0.5,
                            standardize=True, min_region_size=1350)
# Just call fit() to execute region extraction
extractor.fit()
# Extracted regions are stored in regions_img_
regions_extracted_img = extractor.regions_img_
# Each region index is stored in index_
regions_index = extractor.index_
# Total number of regions extracted
n_regions_extracted = regions_extracted_img.shape[-1]
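The index_ attribute maps each extracted region back to the component it was split from. A toy numpy illustration of how to summarize it, using a hypothetical index array rather than the real extractor output:

```python
import numpy as np

# Hypothetical index_ array: for each extracted region, the index of the
# dictionary-learning component it was split from (illustration only).
regions_index = np.array([0, 0, 1, 1, 1, 2, 3, 3, 4])

# Count how many regions each of the 5 original components was split into.
counts = np.bincount(regions_index, minlength=5)
print(counts.tolist())  # [2, 3, 1, 2, 1]

# The total matches the number of extracted regions.
print(regions_index.size)  # 9
```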

3.4.5. Visualization of Region Extraction results

The same plot_prob_atlas function is used to visualize the extracted regions on a standard template. Each extracted brain region is assigned a color; as you can see, the visual cortex area is split nicely into one region per hemisphere.

title = ('%d regions are extracted from %d components.'
         '\nEach separate color indicates an extracted region'
         % (n_regions_extracted, 5))
plotting.plot_prob_atlas(regions_extracted_img, view_type='filled_contours',
                         title=title)

3.4.6. Computing functional connectivity matrices

Here, we use the ConnectivityMeasure object to compute the functional connectivity between each pair of extracted brain regions. Many different kinds of measures exist in nilearn, such as "correlation", "partial correlation", "tangent", "covariance", and "precision". Here, we compute only correlations, by initializing the object with kind='correlation'.

The first step is to extract subject-specific time series signals from the functional data stored in func_filenames; the second step is to call fit_transform() on those time series. Here, each subject's time series has shape=(176, 23), where 176 is the number of time points and 23 is the number of extracted regions; there are 20 such subject-specific time series in total. In the third step, we compute the mean correlation across all subjects.

from nilearn.connectome import ConnectivityMeasure

import numpy as np

correlations = []
# Initializing ConnectivityMeasure object with kind='correlation'
connectome_measure = ConnectivityMeasure(kind='correlation')
for filename, confound in zip(func_filenames, confounds):
    # call transform from RegionExtractor object to extract timeseries signals
    timeseries_each_subject = extractor.transform(filename, confounds=confound)
    # call fit_transform from ConnectivityMeasure object
    correlation = connectome_measure.fit_transform([timeseries_each_subject])
    # saving each subject correlation to correlations
    correlations.append(correlation)

# Mean of all correlations
mean_correlations = np.mean(correlations, axis=0).reshape(n_regions_extracted,
                                                          n_regions_extracted)
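To see what kind='correlation' computes, here is a numpy-only sketch on synthetic data with the shapes reported above (176 time points, 23 regions, two fake subjects); np.corrcoef stands in for ConnectivityMeasure, which is not a claim about its exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic subjects, each with 176 time points over 23 regions.
subjects = [rng.standard_normal((176, 23)) for _ in range(2)]

# The "correlation" kind yields, per subject, a region-by-region Pearson
# correlation matrix; np.corrcoef expects variables in rows, so transpose.
correlations = [np.corrcoef(ts.T) for ts in subjects]

# Mean connectivity across subjects, with unit diagonal.
mean_corr = np.mean(correlations, axis=0)
print(mean_corr.shape)  # (23, 23)
```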

3.4.7. Visualization of functional connectivity matrices

Here we show the mean of the correlation matrices computed between each pair of extracted brain regions. We use nilearn image and plotting utilities to automatically find the coordinates required for plotting the connectome. The left image shows the correlations as a matrix; the right image shows the connectivity relations between brain regions, plotted with plot_connectome.

import matplotlib.pyplot as plt
from nilearn import image

regions_imgs = image.iter_img(regions_extracted_img)
coords_connectome = [plotting.find_xyz_cut_coords(img) for img in regions_imgs]
title = 'Correlation interactions between %d regions' % n_regions_extracted
plt.imshow(mean_correlations, interpolation="nearest",
           vmax=1, vmin=-1)
plt.colorbar()
plt.title(title)
plotting.plot_connectome(mean_correlations, coords_connectome,
                         edge_threshold='90%', title=title)


3.4.8. Validating results

We validate the results by showing the regions of one specific network before and after region extraction.

The left image displays the regions of one specific resting-state network without region extraction; the right image displays the same regions split apart after region extraction. Here, we can check that the connected regions are nicely separated, with each extracted region identified by a different color.

# First, we plot a network of index=4 without region extraction (left plot)
img = image.index_img(components_img, 4)
coords = plotting.find_xyz_cut_coords(img)
display = plotting.plot_stat_map(img, cut_coords=coords,
                                 colorbar=False, title='Showing one specific network')

# Now, we plot (right side) the same network after region extraction, to show
# that connected regions are nicely separated.
# Each extracted brain region is identified with a separate color.

# For this, we take the indices of all the extracted regions related to the
# original network of index 4.
regions_indices_of_map3 = np.where(np.array(regions_index) == 4)

display = plotting.plot_anat(cut_coords=coords,
                             title='Extracted regions in one specific network')

# Now add as an overlay by looping over all the regions of index 4
# color list is random (you can choose your own color)
color_list = [[0., 1., 0.29, 1.], [0., 1., 0.54, 1.],
              [0., 1., 0.78, 1.], [0., 0.96, 1., 1.],
              [0., 0.73, 1., 1.], [0., 0.47, 1., 1.],
              [0., 0.22, 1., 1.], [0.01, 0., 1., 1.],
              [0.26, 0., 1., 1.]]
for each_index_of_map3, color in zip(regions_indices_of_map3[0], color_list):
    display.add_overlay(image.index_img(regions_extracted_img, each_index_of_map3),
                        cmap=plotting.cm.alpha_cmap(color))


See also

The full code can be found as an example: Regions extraction using Dictionary Learning and functional connectomes