6.3. Extracting functional brain networks: ICA and related

Page summary

This page demonstrates the use of multi-subject decomposition models to extract brain networks from fMRI data in a data-driven way. Specifically, we will apply Independent Component Analysis (ICA), which implements a multivariate random effects model across subjects. We will then compare ICA to a newer technique based on dictionary learning.

6.3.1. Multi-subject ICA: CanICA

References

  • G. Varoquaux et al., “A group model for stable multi-subject ICA on fMRI datasets”, NeuroImage, Vol. 51 (2010), pp. 288-299

6.3.1.1. Objective

ICA is a useful approach for finding independent sources in fMRI images. ICA and similar techniques can therefore be used to define regions or networks that share a similar BOLD signal across time. CanICA incorporates information both within and across subjects to arrive at consensus components.

Nilearn data for examples

Nilearn provides easy-to-analyze data to explore functional connectivity and resting state: the brain development dataset, which has been preprocessed using fMRIPrep and Nilearn. We use nilearn functions to fetch the data from the Internet and get the filenames (more on data loading).
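
A minimal sketch of fetching these data (the number of subjects here is an arbitrary choice for illustration, not a recommendation):

from nilearn.datasets import fetch_development_fmri

# Download (or reuse a cached copy of) the brain development dataset
rest_dataset = fetch_development_fmri(n_subjects=30)
func_filenames = rest_dataset.func  # list of 4D functional Nifti filenames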

6.3.1.2. Fitting CanICA model with nilearn

CanICA is a ready-to-use object that can be applied to multi-subject Nifti data, for instance given as filenames, and will perform a multi-subject ICA decomposition following the CanICA model. As with every object in nilearn, we give its parameters at construction and then fit it on the data. For a full example of this process, see: Deriving spatial maps from group fMRI data using ICA and Dictionary Learning.

Once an ICA object has been fit to an fMRI dataset, the individual components can be accessed as a 4D Nifti object using the components_img_ attribute.
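
A minimal sketch of this construct-then-fit pattern, continuing from the func_filenames obtained above (the number of components, smoothing, caching and other parameter values are illustrative choices, not prescribed settings):

from nilearn.decomposition import CanICA

canica = CanICA(
    n_components=20,     # number of group-level components to extract
    smoothing_fwhm=6.0,  # spatial smoothing applied before decomposition
    random_state=0,      # for reproducibility
    memory="nilearn_cache",
    memory_level=2,
    n_jobs=2,
)
canica.fit(func_filenames)

# The group-level components, as a single 4D Nifti image
canica_components_img = canica.components_img_
canica_components_img.to_filename("canica_resting_state.nii.gz")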

6.3.1.3. Visualizing results

We can visualize each component outlined over the brain:
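
For instance, with nilearn.plotting.plot_prob_atlas (a sketch, reusing canica_components_img from above; the title is arbitrary):

from nilearn import plotting

# Overlay the outlines of all components on an anatomical background
plotting.plot_prob_atlas(canica_components_img, title="All CanICA components")
plotting.show()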

[Figure: all CanICA components outlined over the brain]

We can also plot the map for different components separately:
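
For example, by iterating over the 4D components image (a sketch; showing only the first four components is an arbitrary choice for display):

from nilearn.image import iter_img
from nilearn.plotting import plot_stat_map, show

for i, component_img in enumerate(iter_img(canica_components_img)):
    if i >= 4:
        break
    # One axial cut per component, without a colorbar, to keep the display compact
    plot_stat_map(component_img, display_mode="z", cut_coords=1,
                  title=f"IC {i}", colorbar=False)
show()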

[Figures: maps of four individual CanICA components]

See also

The full code can be found as an example: Deriving spatial maps from group fMRI data using ICA and Dictionary Learning

Note

Note that, as the ICA components are not ordered, the components displayed on your computer might not match those in the documentation. For a fair comparison, you should display all components and investigate which ones resemble those displayed above.

6.3.1.4. Interpreting such components

ICA and related algorithms extract patterns that co-activate in the signal. As a result, they find functional networks, but also patterns of non-neural activity, i.e. confounding signals. Both are visible in the plots of the components.

6.3.2. An alternative to ICA: Dictionary learning

Recent work has shown that dictionary-learning-based techniques outperform ICA in terms of stability and constitute a better first step in a statistical analysis pipeline. Dictionary learning in neuroimaging seeks to extract a few representative temporal elements along with their sparse spatial loadings, which constitute good extracted maps.

References

  • Arthur Mensch et al. Compressed online dictionary learning for fast resting-state fMRI decomposition, ISBI 2016, Lecture Notes in Computer Science

DictLearning is a ready-to-use class with the same interface as CanICA. The sparsity of the output maps is controlled by the parameter alpha: a larger alpha yields sparser maps.
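
A sketch of fitting DictLearning on the same data as above (the parameter values, including alpha, are illustrative, not recommended defaults):

from nilearn.decomposition import DictLearning

dict_learning = DictLearning(
    n_components=20,   # same number of components as the CanICA model above
    alpha=10,          # larger alpha -> sparser maps
    smoothing_fwhm=6.0,
    random_state=0,
    n_jobs=2,
)
dict_learning.fit(func_filenames)
dictlearning_components_img = dict_learning.components_img_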

We can fit both estimators to compare them. 4D plotting (using nilearn.plotting.plot_prob_atlas) offers an efficient way to compare the two resulting sets of maps.
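
For instance (a sketch reusing the two components images fitted above):

from nilearn import plotting

for img, name in [(canica_components_img, "CanICA"),
                  (dictlearning_components_img, "Dictionary learning")]:
    # One probabilistic-atlas view per method, for side-by-side comparison
    plotting.plot_prob_atlas(img, title=f"{name} maps")
plotting.show()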

[Figures: probabilistic-atlas views of the dictionary learning maps and the CanICA maps]

Maps obtained with dictionary learning are often easier to exploit, as they have higher contrast than ICA maps and their blobs are usually better defined. Typically, less smoothing is needed than with ICA.

[Figures: maps of four individual dictionary learning components]

While the computation time of dictionary learning is comparable to that of CanICA, the resulting atlases have been shown to outperform those of ICA in a variety of classification tasks.

See also

The full code can be found as an example: Deriving spatial maps from group fMRI data using ICA and Dictionary Learning

See also

Learn how to extract fMRI data from regions created with Dictionary learning with this example: Regions extraction using dictionary learning and functional connectomes
