.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/02_decoding/plot_haxby_frem.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_02_decoding_plot_haxby_frem.py>`
        to download the full example code or to run this example in your browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_02_decoding_plot_haxby_frem.py:


Decoding with FREM: face vs house vs chair object recognition
==============================================================

This example uses fast ensembling of regularized models (FREM) to decode
a face vs house vs chair discrimination task from the :footcite:t:`Haxby2001`
study. :term:`FREM` uses implicit spatial regularization through fast
clustering and aggregates a large number of estimators trained on different
splits of the training set, yielding a very robust decoder at a lower
computational cost than other spatially regularized methods.

For more details, see :ref:`frem`.

.. GENERATED FROM PYTHON SOURCE LINES 18-20

Load the Haxby dataset
----------------------

.. GENERATED FROM PYTHON SOURCE LINES 20-54

.. code-block:: Python

    from nilearn.datasets import fetch_haxby

    data_files = fetch_haxby()

    # Load behavioral data
    import pandas as pd

    behavioral = pd.read_csv(data_files.session_target[0], sep=" ")

    # Restrict to face, house, and chair conditions
    conditions = behavioral["labels"]
    condition_mask = conditions.isin(["face", "house", "chair"])

    # Split data into train and test samples, using the chunks
    condition_mask_train = (condition_mask) & (behavioral["chunks"] <= 6)
    condition_mask_test = (condition_mask) & (behavioral["chunks"] > 6)

    # Apply this sample mask to X (fMRI data) and y (behavioral labels)
    # Because the data is in one single large 4D image, we need to use
    # index_img to do the split easily
    from nilearn.image import index_img

    func_filenames = data_files.func[0]
    X_train = index_img(func_filenames, condition_mask_train)
    X_test = index_img(func_filenames, condition_mask_test)
    y_train = conditions[condition_mask_train].values
    y_test = conditions[condition_mask_test].values

    # Compute the mean EPI to be used for the background of the plotting
    from nilearn.image import mean_img

    background_img = mean_img(func_filenames)

.. GENERATED FROM PYTHON SOURCE LINES 55-57

Fit FREM
--------

.. GENERATED FROM PYTHON SOURCE LINES 57-66

.. code-block:: Python

    from nilearn.decoding import FREMClassifier

    decoder = FREMClassifier(cv=10, standardize="zscore_sample", n_jobs=2)
    # Fit model on train data and predict on test data
    decoder.fit(X_train, y_train)
    y_pred = decoder.predict(X_test)
    accuracy = (y_pred == y_test).mean() * 100.0
    print(f"FREM classification accuracy : {accuracy:g}%")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/himanshu/.local/miniconda3/envs/nilearnpy/lib/python3.12/site-packages/nilearn/decoding/decoder.py:742: UserWarning: Brain mask is bigger than the volume of a standard human brain. This object is probably not tuned to be used on such data.
    FREM classification accuracy : 57.7778%

.. GENERATED FROM PYTHON SOURCE LINES 67-69

Plot confusion matrix
---------------------

.. GENERATED FROM PYTHON SOURCE LINES 69-100

.. code-block:: Python

    import numpy as np
    from sklearn.metrics import confusion_matrix

    from nilearn import plotting

    # Calculate the confusion matrix
    matrix = confusion_matrix(
        y_test,
        y_pred,
        normalize="true",
    )

    # Plot the confusion matrix
    im = plotting.plot_matrix(
        matrix,
        labels=sorted(np.unique(y_test)),
        vmin=0,
        cmap="hot_r",
    )

    # Add x/y-axis labels
    ax = im.axes
    ax.set_ylabel("True label")
    ax.set_xlabel("Predicted label")

    # Adjust figure to make labels fit
    ax.get_figure().tight_layout()

    plotting.show()

.. image-sg:: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_001.png
   :alt: plot haxby frem
   :srcset: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 101-103

Visualization of :term:`FREM` weights
-------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 103-113

.. code-block:: Python

    from nilearn import plotting

    plotting.plot_stat_map(
        decoder.coef_img_["face"],
        background_img,
        title=f"FREM: accuracy {accuracy:g}%, 'face coefs'",
        cut_coords=(-50, -4),
        display_mode="yz",
    )
    plotting.show()

.. image-sg:: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_002.png
   :alt: plot haxby frem
   :srcset: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_frem_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 114-121

The :term:`FREM` ensembling procedure yields a marked improvement in decoding
accuracy on this simple example compared to fitting only one model per fold,
and the clustering mechanism keeps its computational cost reasonable even on
heavier examples. Here we ensembled several instances of an l2-penalized SVC,
but FREMClassifier also works with ridge or logistic regression estimators.
A FREMRegressor object is also available to solve regression problems.
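The snippet below sketches how these variants could look; it is not part of
the generated example. The ``estimator="logistic_l2"`` value is assumed to be
one of the accepted estimator aliases of the decoder objects, and
``y_continuous`` stands for a hypothetical continuous target for the
regression case.

.. code-block:: Python

    # Sketch only (not in the original script): reuses X_train, y_train,
    # X_test, and y_test defined in the cells above.
    from nilearn.decoding import FREMClassifier, FREMRegressor

    # Same ensembling and clustering, but with an l2-penalized logistic
    # regression as base estimator ("logistic_l2" is assumed to be an
    # accepted estimator alias).
    logistic_decoder = FREMClassifier(
        estimator="logistic_l2",
        cv=10,
        standardize="zscore_sample",
        n_jobs=2,
    )
    logistic_decoder.fit(X_train, y_train)
    logistic_accuracy = (logistic_decoder.predict(X_test) == y_test).mean() * 100.0
    print(f"FREM (logistic) accuracy : {logistic_accuracy:g}%")

    # For continuous targets, FREMRegressor offers the same machinery
    # (y_continuous is a hypothetical target array):
    # regressor = FREMRegressor(cv=10, standardize="zscore_sample")
    # regressor.fit(X_train, y_continuous)

Apart from the choice of base estimator, the fit/predict workflow is the same
as for the SVC-based decoder used above.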
.. GENERATED FROM PYTHON SOURCE LINES 123-127

References
----------

.. footbibliography::


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (3 minutes 37.566 seconds)

**Estimated memory usage:** 916 MB


.. _sphx_glr_download_auto_examples_02_decoding_plot_haxby_frem.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn/main?urlpath=lab/tree/notebooks/auto_examples/02_decoding/plot_haxby_frem.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_haxby_frem.ipynb <plot_haxby_frem.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_haxby_frem.py <plot_haxby_frem.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_