This is the development documentation of nilearn (0.11.2.dev300+gab9eab132).


6.3. Extracting functional brain networks: ICA and related

Page summary

This page demonstrates the use of multi-subject decomposition models to extract brain networks from fMRI data in a data-driven way. Specifically, we will apply Independent Component Analysis (ICA), which implements a multivariate random effects model across subjects. We will then compare ICA to a more recent technique based on dictionary learning.

6.3.1. Multi-subject ICA: CanICA

References

  • A group model for stable multi-subject ICA on fMRI datasets [1]

6.3.1.1. Objective

ICA is a useful approach for finding independent sources in fMRI images. ICA and similar techniques can therefore be used to define regions or networks that share similar BOLD signal across time. CanICA incorporates information both within and across subjects to arrive at consensus components.

Nilearn data for examples

Nilearn provides easy-to-analyze data to explore functional connectivity and resting state: the brain development dataset, which has been preprocessed using fMRIPrep and Nilearn. We use nilearn functions to fetch data from the Internet and get the filenames (more on data loading).

6.3.1.2. Fitting CanICA model with nilearn

CanICA is a ready-to-use object that can be applied to multi-subject Nifti data, for instance given as filenames, and performs a multi-subject ICA decomposition following the CanICA model. As with every object in nilearn, its parameters are given at construction, and the object is then fit on the data. For an example of this process, see: Deriving spatial maps from group fMRI data using ICA and Dictionary Learning

Once an ICA object has been fit to an fMRI dataset, the individual components can be accessed as a 4D Nifti object using the components_img_ attribute.

6.3.1.3. Visualizing results

We can visualize each component outlined over the brain:

[Figure: all extracted components outlined over the brain]

We can also plot the map for different components separately:

[Figures: maps of individual ICA components ic1–ic4]

See also

The full code can be found as an example: Deriving spatial maps from group fMRI data using ICA and Dictionary Learning

Note

As the ICA components are not ordered, the components displayed on your computer may not match those shown in the documentation. For a fair comparison, you should display all components and investigate which ones resemble those displayed above.

6.3.1.4. Interpreting such components

ICA and related algorithms extract patterns that co-activate in the signal. As a result, they find functional networks, but also patterns of non-neural activity, i.e. confounding signals. Both are visible in the plots of the components.

6.3.2. An alternative to ICA: Dictionary learning

Recent work has shown that dictionary learning based techniques outperform ICA in terms of stability and constitute a better first step in a statistical analysis pipeline. Dictionary learning in neuroimaging seeks to extract a few representative temporal elements along with their sparse spatial loadings, which yield good extracted maps.

References

  • Compressed online dictionary learning for fast resting-state fMRI decomposition [2]

DictLearning is a ready-to-use class with the same interface as CanICA. Sparsity of the output maps is controlled by the parameter alpha: a larger alpha yields sparser maps.

We can fit both estimators to compare them. 4D plotting (using nilearn.plotting.plot_prob_atlas) offers an efficient way to compare the two resulting outputs.

[Figures: probabilistic atlas views of the dictionary learning and ICA outputs]

Maps obtained with dictionary learning are often easier to exploit, as they are more contrasted than ICA maps, with usually better-defined blobs. Typically, less smoothing is needed than with ICA.

[Figures: maps of individual dictionary learning components dl1–dl4]

While the computation time of dictionary learning is comparable to that of CanICA, the atlases obtained have been shown to outperform those from ICA in a variety of classification tasks.

See also

The full code can be found as an example: Deriving spatial maps from group fMRI data using ICA and Dictionary Learning

See also

Learn how to extract fMRI data from regions created with Dictionary learning with this example: Regions extraction using dictionary learning and functional connectomes

6.3.2.1. References

[1]

Gael Varoquaux, Sepideh Sadaghiani, Philippe Pinel, Andreas Kleinschmidt, Jean-Baptiste Poline, and Bertrand Thirion. A group model for stable multi-subject ICA on fMRI datasets. NeuroImage, 51(1):288–299, 2010. URL: https://pubmed.ncbi.nlm.nih.gov/20153834, doi:10.1016/j.neuroimage.2010.02.010.

[2]

Arthur Mensch, Gael Varoquaux, and Bertrand Thirion. Compressed online dictionary learning for fast resting-state fMRI decomposition. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pages 1282–1285. 2016. doi:10.1109/ISBI.2016.7493501.

Copyright © The nilearn developers - Code and documentation distributed under BSD license.