.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_auto_examples_02_decoding_plot_oasis_vbm_space_net.py>` to download the full example code or to run this example in your browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_02_decoding_plot_oasis_vbm_space_net.py:


Voxel-Based Morphometry on Oasis dataset with Space-Net prior
==============================================================

Predicting age from gray-matter concentration maps from the OASIS dataset.
Note that age is a continuous variable: we use the regressor here, not the
classification object.

See also the documentation: :ref:`space_net`.


Load the Oasis VBM dataset
---------------------------

.. code-block:: default


    import numpy as np
    from nilearn import datasets

    n_subjects = 200  # increase this number if you have more RAM on your box
    dataset_files = datasets.fetch_oasis_vbm(n_subjects=n_subjects)
    age = dataset_files.ext_vars['age'].astype(float)
    age = np.array(age)
    gm_imgs = np.array(dataset_files.gray_matter_maps)

    # Split data into training set and test set
    from sklearn.utils import check_random_state
    from sklearn.model_selection import train_test_split

    rng = check_random_state(42)
    gm_imgs_train, gm_imgs_test, age_train, age_test = train_test_split(
        gm_imgs, age, train_size=.6, random_state=rng)

    # Sort test data for better visualization (trend, etc.)
    perm = np.argsort(age_test)[::-1]
    age_test = age_test[perm]
    gm_imgs_test = gm_imgs_test[perm]


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    /usr/lib/python3/dist-packages/numpy/lib/npyio.py:2358: VisibleDeprecationWarning: Reading unicode strings without specifying the encoding argument is deprecated. Set the encoding, use None for the system default.
      output = genfromtxt(fname, **kwargs)
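As stated in the introduction, age is a continuous target, which is why the
regressor is used below. If the target were a discrete label instead, the
classifier counterpart would apply. The following sketch is purely
illustrative and is not executed in this example; the ``labels_train`` array
is hypothetical and not part of the OASIS data loaded above.

.. code-block:: python

    # Illustrative sketch only -- not executed in this example.
    # For a categorical target one would use the classifier counterpart;
    # 'labels_train' is a hypothetical array of discrete labels, not part
    # of the OASIS data loaded above.
    from nilearn.decoding import SpaceNetClassifier

    classifier = SpaceNetClassifier(penalty="graph-net",
                                    screening_percentile=5.)
    # classifier.fit(gm_imgs_train, labels_train)
    # predicted_labels = classifier.predict(gm_imgs_test)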
Fit the SpaceNet and predict with it
-------------------------------------

To save time (because these are anat images with many voxels), we include
only the 5 percent of voxels most correlated with the age variable in the
fit. Also, we set memory_level=2 so that more of the intermediate
computations are cached. We use a graph-net penalty here, but more
structured results can be obtained using the TV-l1 penalty, at the expense
of longer runtimes. You may also pass an ``n_jobs`` value to the
SpaceNetRegressor class to take advantage of a multi-core system.


.. code-block:: default


    from nilearn.decoding import SpaceNetRegressor

    decoder = SpaceNetRegressor(memory="nilearn_cache", penalty="graph-net",
                                screening_percentile=5., memory_level=2)
    decoder.fit(gm_imgs_train, age_train)  # fit
    coef_img = decoder.coef_img_
    y_pred = decoder.predict(gm_imgs_test).ravel()  # predict
    mae = np.mean(np.abs(age_test - y_pred))
    print('Mean absolute error (MAE) on the predicted age: %.2f' % mae)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [NiftiMasker.fit] Loading data from [/home/varoquau/nilearn_data/oasis1/OAS1_0003_MR1/mwrc1OAS1_0003_MR1_mpr_anon_fslswapdim_bet.nii.gz,
     /home/varoquau/nilearn_data/oasis1/OAS1_0086_MR1/mwrc1OAS1_0086_MR1_mpr_anon_fslswapdim_bet.nii.gz,
    [NiftiMasker.fit] Computing the mask
    [NiftiMasker.fit] Resampling mask
    [NiftiMasker.transform_single_imgs] Loading data from Nifti1Image(
    shape=(91, 109, 91, 120),
    affine=array([[  -2.,    0.,    0.,   90.],
           [   0.,    2.,    0., -126.],
           [   0.,    0.,    2.,  -72.],
           [   0.,    0.,    0.,    1.]])
    )
    [NiftiMasker.transform_single_imgs] Extracting region signals
    [NiftiMasker.transform_single_imgs] Cleaning extracted signals
    /home/varoquau/dev/nilearn/nilearn/decoding/space_net.py:835: UserWarning: Brain mask is bigger than the volume of a standard human brain. This object is probably not tuned to be used on such data.
      self.screening_percentile_ = _adjust_screening_percentile(
    [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
    ________________________________________________________________________________
    [Memory] Calling nilearn.decoding.space_net.path_scores...
    path_scores(, array([[0., ..., 0.], ..., [0., ..., 0.]]), array([73., ..., 62.]), array([[[ True, ..., True], ..., [ True, ..., True]], ..., [[ True, ..., True], ..., [ True, ..., True]]]), None, [0.5], array([ 15, ..., 119]), array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14]), {'max_iter': 200, 'tol': 0.0001}, n_alphas=10, eps=0.001, is_classif=False, key=(0, 0), debias=False, verbose=1, screening_percentile=1.2652228933482088)
    ........./home/varoquau/dev/joblib/joblib/parallel.py:262: UserWarning: Persisting input arguments took 1.40s to run. If this happens often in your code, it can cause performance problems (results will be correct in all cases). The reason for this is probably some large input arguments for a wrapped function (e.g. large strings). THIS IS A JOBLIB ISSUE. If you can, kindly provide the joblib's team with an example so that they can fix the problem.
      return [func(*args, **kwargs)
    _____________________________________________________path_scores - 33.3s, 0.6min
    [Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:   36.0s remaining:    0.0s
    [... similar path_scores output for the remaining seven internal cross-validation folds elided ...]
    [Parallel(n_jobs=1)]: Done   8 out of   8 | elapsed:  4.7min finished
    Time Elapsed: 304.144 seconds, 5 minutes.
    [NiftiMasker.transform_single_imgs] Loading data from Nifti1Image(
    shape=(91, 109, 91, 80),
    affine=array([[  -2.,    0.,    0.,   90.],
           [   0.,    2.,    0., -126.],
           [   0.,    0.,    2.,  -72.],
           [   0.,    0.,    0.,    1.]])
    )
    [NiftiMasker.transform_single_imgs] Extracting region signals
    [NiftiMasker.transform_single_imgs] Cleaning extracted signals
    Mean absolute error (MAE) on the predicted age: 12.33
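As mentioned above, the TV-l1 penalty tends to produce more structured weight
maps, and ``n_jobs`` lets the internal cross-validation run on several cores.
The following sketch, which is not executed as part of this example, shows
what such a variant could look like; the ``n_jobs=4`` value is only an
illustration.

.. code-block:: python

    # Illustrative sketch only -- not executed in this example.
    # Same decoder as above, but with the TV-l1 penalty and the internal
    # cross-validation distributed over 4 CPU cores (n_jobs=4 is arbitrary).
    from nilearn.decoding import SpaceNetRegressor

    tvl1_decoder = SpaceNetRegressor(memory="nilearn_cache", penalty="tv-l1",
                                     screening_percentile=5., memory_level=2,
                                     n_jobs=4)
    # tvl1_decoder.fit(gm_imgs_train, age_train)
    # tvl1_pred = tvl1_decoder.predict(gm_imgs_test).ravel()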
Visualize the decoding maps and quality of predictions
-------------------------------------------------------

Visualize the resulting maps


.. code-block:: default


    from nilearn.plotting import plot_stat_map, show

    # weights map
    background_img = gm_imgs[0]
    plot_stat_map(coef_img, background_img, title="graph-net weights",
                  display_mode="z", cut_coords=1)

    # Plot the prediction errors.
    import matplotlib.pyplot as plt

    plt.figure()
    plt.suptitle("graph-net: Mean Absolute Error %.2f years" % mae)
    linewidth = 3
    ax1 = plt.subplot(211)
    ax1.plot(age_test, label="True age", linewidth=linewidth)
    ax1.plot(y_pred, '--', c="g", label="Predicted age", linewidth=linewidth)
    ax1.set_ylabel("age")
    plt.legend(loc="best")
    ax2 = plt.subplot(212)
    ax2.plot(age_test - y_pred, label="True age - predicted age",
             linewidth=linewidth)
    ax2.set_xlabel("subject")
    plt.legend(loc="best")
    show()


.. rst-class:: sphx-glr-horizontal


    *

      .. image:: /auto_examples/02_decoding/images/sphx_glr_plot_oasis_vbm_space_net_001.png
          :alt: plot oasis vbm space net
          :class: sphx-glr-multi-img

    *

      .. image:: /auto_examples/02_decoding/images/sphx_glr_plot_oasis_vbm_space_net_002.png
          :alt: graph-net: Mean Absolute Error 12.33 years
          :class: sphx-glr-multi-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    /home/varoquau/dev/nilearn/nilearn/plotting/displays.py:1608: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
      ax = fh.add_axes([fraction * index * (x1 - x0) + x0, y0,


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 5 minutes 12.326 seconds)


.. _sphx_glr_download_auto_examples_02_decoding_plot_oasis_vbm_space_net.py:


.. only :: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example


  .. container:: binder-badge

    .. image:: https://mybinder.org/badge_logo.svg
      :target: https://mybinder.org/v2/gh/nilearn/nilearn.github.io/master?filepath=examples/auto_examples/02_decoding/plot_oasis_vbm_space_net.ipynb
      :width: 150 px


  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: plot_oasis_vbm_space_net.py <plot_oasis_vbm_space_net.py>`


  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: plot_oasis_vbm_space_net.ipynb <plot_oasis_vbm_space_net.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_