.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/02_decoding/plot_haxby_frem.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_02_decoding_plot_haxby_frem.py>`
        to download the full example code, or to run this example in your browser via Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_02_decoding_plot_haxby_frem.py:


Decoding with FREM: face vs house vs chair object recognition
==============================================================

This example uses fast ensembling of regularized models (FREM) to decode
a face vs house vs chair discrimination task from the :footcite:t:`Haxby2001` study.

:term:`FREM` uses an implicit spatial regularization through fast clustering and
aggregates a large number of estimators trained on various splits of the
training set, thus returning a very robust decoder at a lower computational
cost than other spatially regularized methods.

For more details, see :ref:`frem`.

.. GENERATED FROM PYTHON SOURCE LINES 18-20

Load the Haxby dataset
----------------------

.. GENERATED FROM PYTHON SOURCE LINES 20-24

.. code-block:: Python

    from nilearn.datasets import fetch_haxby

    data_files = fetch_haxby()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [fetch_haxby] Dataset found in /home/runner/nilearn_data/haxby2001

.. GENERATED FROM PYTHON SOURCE LINES 25-26

Load the behavioral data

.. GENERATED FROM PYTHON SOURCE LINES 26-30

.. code-block:: Python

    import pandas as pd

    behavioral = pd.read_csv(data_files.session_target[0], sep=" ")

.. GENERATED FROM PYTHON SOURCE LINES 31-32

Restrict to the face, house, and chair conditions

.. GENERATED FROM PYTHON SOURCE LINES 32-35

.. code-block:: Python

    conditions = behavioral["labels"]
    condition_mask = conditions.isin(["face", "house", "chair"])

.. GENERATED FROM PYTHON SOURCE LINES 36-37

Split the data into train and test samples, using the chunks

.. GENERATED FROM PYTHON SOURCE LINES 37-40

.. code-block:: Python

    condition_mask_train = (condition_mask) & (behavioral["chunks"] <= 6)
    condition_mask_test = (condition_mask) & (behavioral["chunks"] > 6)

.. GENERATED FROM PYTHON SOURCE LINES 41-44

Apply this sample mask to X (fMRI data) and y (behavioral labels).
Because the data come as one single large 4D image, we use index_img
to perform the split easily.

.. GENERATED FROM PYTHON SOURCE LINES 44-53

.. code-block:: Python

    from nilearn.image import index_img

    func_filenames = data_files.func[0]
    X_train = index_img(func_filenames, condition_mask_train)
    X_test = index_img(func_filenames, condition_mask_test)
    y_train = conditions[condition_mask_train].to_numpy()
    y_test = conditions[condition_mask_test].to_numpy()

.. GENERATED FROM PYTHON SOURCE LINES 54-55

Compute the mean EPI to be used as the background for plotting

.. GENERATED FROM PYTHON SOURCE LINES 55-59

.. code-block:: Python

    from nilearn.image import mean_img

    background_img = mean_img(func_filenames, copy_header=True)

.. GENERATED FROM PYTHON SOURCE LINES 60-62

Fit FREM
--------

.. GENERATED FROM PYTHON SOURCE LINES 62-64

.. code-block:: Python

    from nilearn.decoding import FREMClassifier

.. GENERATED FROM PYTHON SOURCE LINES 65-66

Restrict the analysis to voxels within the brain mask

.. GENERATED FROM PYTHON SOURCE LINES 66-72

.. code-block:: Python

    mask = data_files.mask
    decoder = FREMClassifier(
        mask=mask, cv=10, standardize="zscore_sample", n_jobs=2, verbose=1
    )
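The decoder above relies on nilearn's defaults for the FREM-specific settings.
The following snippet is only an illustrative sketch, not part of the generated
example: the ``clustering_percentile`` and ``screening_percentile`` keywords are
assumptions about the ``FREMClassifier`` signature and should be checked against
the nilearn version you are running.

.. code-block:: Python

    # Illustrative sketch only: ``clustering_percentile`` and
    # ``screening_percentile`` are assumed FREMClassifier keywords; verify
    # them against your nilearn version before relying on them.
    tuned_decoder = FREMClassifier(
        mask=mask,
        cv=10,
        standardize="zscore_sample",
        clustering_percentile=10,  # how strongly the fast clustering reduces dimension
        screening_percentile=20,   # fraction of voxels kept by univariate screening
        n_jobs=2,
        verbose=1,
    )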
.. GENERATED FROM PYTHON SOURCE LINES 73-74

Fit the model on the train data and predict on the test data

.. GENERATED FROM PYTHON SOURCE LINES 74-79

.. code-block:: Python

    decoder.fit(X_train, y_train)
    y_pred = decoder.predict(X_test)
    accuracy = (y_pred == y_test).mean() * 100.0
    print(f"FREM classification accuracy : {accuracy:g}%")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [FREMClassifier.fit] Loading data from None
    [FREMClassifier.fit] loading mask from /home/runner/nilearn_data/haxby2001/mask.nii.gz
    [FREMClassifier.fit] Resampling mask
    [FREMClassifier.fit] Finished fit
    [FREMClassifier.fit] Loading data from Nifti1Image(
        shape=(40, 64, 64, 189),
        affine=array([[  -3.5  ,    0.   ,    0.   ,   68.25 ],
                      [   0.   ,    3.75 ,    0.   , -118.125],
                      [   0.   ,    0.   ,    3.75 , -118.125],
                      [   0.   ,    0.   ,    0.   ,    1.   ]])
    )
    [FREMClassifier.fit] Extracting region signals
    [FREMClassifier.fit] Cleaning extracted signals
    [Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.
    [Parallel(n_jobs=2)]: Done 30 out of 30 | elapsed: 20.9s finished
    [FREMClassifier.predict] Loading data from Nifti1Image(
        shape=(40, 64, 64, 135),
        affine=array([[  -3.5  ,    0.   ,    0.   ,   68.25 ],
                      [   0.   ,    3.75 ,    0.   , -118.125],
                      [   0.   ,    0.   ,    3.75 , -118.125],
                      [   0.   ,    0.   ,    0.   ,    1.   ]])
    )
    [FREMClassifier.predict] Extracting region signals
    [FREMClassifier.predict] Cleaning extracted signals
    FREM classification accuracy : 60.7407%

.. GENERATED FROM PYTHON SOURCE LINES 80-82

Plot the confusion matrix
-------------------------

.. GENERATED FROM PYTHON SOURCE LINES 82-88

.. code-block:: Python

    import numpy as np
    from sklearn.metrics import confusion_matrix

    from nilearn.plotting import plot_matrix, plot_stat_map, show

.. GENERATED FROM PYTHON SOURCE LINES 89-90

Calculate the confusion matrix

.. GENERATED FROM PYTHON SOURCE LINES 90-96

.. code-block:: Python

    matrix = confusion_matrix(
        y_test,
        y_pred,
        normalize="true",
    )

.. GENERATED FROM PYTHON SOURCE LINES 97-98

Plot the confusion matrix

.. GENERATED FROM PYTHON SOURCE LINES 98-112

.. code-block:: Python

    im = plot_matrix(
        matrix,
        labels=sorted(np.unique(y_test)),
        vmin=0,
        cmap="inferno",
    )

    # Add x/y-axis labels
    ax = im.axes
    ax.set_ylabel("True label")
    ax.set_xlabel("Predicted label")

    show()

.. image-sg:: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_001.png
   :alt: plot haxby frem
   :srcset: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 113-115

Visualization of :term:`FREM` weights
-------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 115-123

.. code-block:: Python

    plot_stat_map(
        decoder.coef_img_["face"],
        background_img,
        title=f"FREM: accuracy {accuracy:g}%, 'face coefs'",
        cut_coords=(-50, -4),
        display_mode="yz",
    )
    show()

.. image-sg:: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_002.png
   :alt: plot haxby frem
   :srcset: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 124-131

The :term:`FREM` ensembling procedure yields a substantial improvement in
decoding accuracy on this simple example compared to fitting only one model
per fold, and the clustering mechanism keeps its computational cost reasonable
even on heavier examples.
Here we ensembled several instances of an l2-penalized SVC, but FREMClassifier
also works with ridge or logistic estimators. A FREMRegressor object is also
available to solve regression problems.
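Those variants are not part of the generated example above, but a minimal,
hedged sketch of what they could look like follows. The ``estimator="logistic"``
string and the ``FREMRegressor`` call are assumptions about the nilearn API;
check the documentation of the nilearn version you are running before using them.

.. code-block:: Python

    # Hedged sketch, not executed by this example.
    from nilearn.decoding import FREMClassifier, FREMRegressor

    # Ensemble logistic regressions instead of l2-penalized SVCs
    # (``estimator="logistic"`` is an assumed keyword value).
    logistic_decoder = FREMClassifier(
        estimator="logistic", mask=mask, cv=10, standardize="zscore_sample"
    )
    logistic_decoder.fit(X_train, y_train)

    # For continuous targets (e.g. a per-volume behavioral score),
    # FREMRegressor exposes the same fit/predict interface.
    # ``scores_train`` below is a hypothetical 1D array of floats.
    # regressor = FREMRegressor(mask=mask, cv=10, standardize="zscore_sample")
    # regressor.fit(X_train, scores_train)
    # predicted_scores = regressor.predict(X_test)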
.. GENERATED FROM PYTHON SOURCE LINES 133-137

References
----------

.. footbibliography::


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 34.753 seconds)

**Estimated memory usage:** 1014 MB


.. _sphx_glr_download_auto_examples_02_decoding_plot_haxby_frem.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn/0.12.0?urlpath=lab/tree/notebooks/auto_examples/02_decoding/plot_haxby_frem.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_haxby_frem.ipynb <plot_haxby_frem.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_haxby_frem.py <plot_haxby_frem.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_haxby_frem.zip <plot_haxby_frem.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_