.. _sphx_glr_auto_examples_02_decoding_plot_haxby_anova_svm.py:

Decoding with ANOVA + SVM: face vs house in the Haxby dataset
===============================================================

This example performs a simple but efficient decoding on the Haxby dataset:
a univariate feature selection (ANOVA), followed by an SVM.

Retrieve the files of the Haxby dataset
----------------------------------------

.. code-block:: default

    from nilearn import datasets

    # By default the 2nd subject will be fetched
    haxby_dataset = datasets.fetch_haxby()
    func_img = haxby_dataset.func[0]

    # Print basic information on the dataset
    print('Mask nifti image (3D) is located at: %s' % haxby_dataset.mask)
    print('Functional nifti image (4D) is located at: %s' % func_img)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Mask nifti image (3D) is located at: /home/varoquau/nilearn_data/haxby2001/mask.nii.gz
    Functional nifti image (4D) is located at: /home/varoquau/nilearn_data/haxby2001/subj2/bold.nii.gz

Load the behavioral data
-------------------------

.. code-block:: default

    import pandas as pd

    # Load target information as string and give a numerical identifier to each
    behavioral = pd.read_csv(haxby_dataset.session_target[0], sep=" ")
    conditions = behavioral['labels']

    # Restrict the analysis to faces and houses
    from nilearn.image import index_img
    condition_mask = behavioral['labels'].isin(['face', 'house'])
    conditions = conditions[condition_mask]
    func_img = index_img(func_img, condition_mask)

    # Confirm that we now have 2 conditions
    print(conditions.unique())

    # The number of the session is stored in the CSV file giving the
    # behavioral data. We have to apply our session mask, to select only
    # faces and houses.
    session_label = behavioral['chunks'][condition_mask]

.. rst-class:: sphx-glr-script-out

 Out:
 .. code-block:: none

    ['face' 'house']

ANOVA pipeline with :class:`nilearn.decoding.Decoder` object
------------------------------------------------------------

The Nilearn :class:`~nilearn.decoding.Decoder` object aims to provide a
smooth user experience by acting as a pipeline of several tasks:
preprocessing with a NiftiMasker, reducing dimensionality by selecting only
relevant features with ANOVA (a classical univariate feature selection
based on an F-test), and then decoding with different types of estimators
(in this example, a Support Vector Machine with a linear kernel), using
nested cross-validation.

.. code-block:: default

    from nilearn.decoding import Decoder
    # Here screening_percentile is set to 5 percent
    mask_img = haxby_dataset.mask
    decoder = Decoder(estimator='svc', mask=mask_img, smoothing_fwhm=4,
                      standardize=True, screening_percentile=5,
                      scoring='accuracy')

Fit the decoder and predict
----------------------------

.. code-block:: default

    decoder.fit(func_img, conditions)
    y_pred = decoder.predict(func_img)

Obtain prediction scores via cross-validation
-----------------------------------------------

Define the cross-validation scheme used for validation. Here we use a
LeaveOneGroupOut cross-validation on the session group, which corresponds
to a leave-one-session-out scheme. We then pass the cross-validator object
to the ``cv`` parameter of the decoder.

.. code-block:: default

    from sklearn.model_selection import LeaveOneGroupOut
    cv = LeaveOneGroupOut()

    decoder = Decoder(estimator='svc', mask=mask_img, standardize=True,
                      screening_percentile=5, scoring='accuracy', cv=cv)
    # Compute the prediction accuracy for the different folds (i.e. sessions)
    decoder.fit(func_img, conditions, groups=session_label)

    # Print the CV scores
    print(decoder.cv_scores_['face'])

.. rst-class:: sphx-glr-script-out

 Out:
 .. code-block:: none

    [1.0, 0.9444444444444444, 1.0, 0.9444444444444444, 1.0, 1.0, 0.9444444444444444, 1.0, 0.6111111111111112, 1.0, 1.0, 1.0]

Visualize the results
----------------------

Look at the SVC's discriminating weights using
:class:`nilearn.plotting.plot_stat_map`.

.. code-block:: default

    weight_img = decoder.coef_img_['face']
    from nilearn.plotting import plot_stat_map, show
    plot_stat_map(weight_img, bg_img=haxby_dataset.anat[0],
                  title='SVM weights')

    show()

.. image:: /auto_examples/02_decoding/images/sphx_glr_plot_haxby_anova_svm_001.png
    :alt: plot haxby anova svm
    :class: sphx-glr-single-img

Or we can plot the weights using :class:`nilearn.plotting.view_img` as a
dynamic html viewer.

.. code-block:: default

    from nilearn.plotting import view_img
    view_img(weight_img, bg_img=haxby_dataset.anat[0],
             title="SVM weights", dim=-1)
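Looking back at the cross-validation section, the per-session accuracies
printed by ``decoder.cv_scores_['face']`` can be summarized with NumPy. A
minimal sketch, reusing the twelve scores shown above:

```python
import numpy as np

# Per-session accuracies printed by decoder.cv_scores_['face'] above
cv_scores = [1.0, 0.9444444444444444, 1.0, 0.9444444444444444, 1.0, 1.0,
             0.9444444444444444, 1.0, 0.6111111111111112, 1.0, 1.0, 1.0]

# Mean and standard deviation across the 12 left-out sessions
print('Classification accuracy: %.3f +/- %.3f'
      % (np.mean(cv_scores), np.std(cv_scores)))
# -> Classification accuracy: 0.954 +/- 0.106
```

The one low fold (0.611) pulls the mean down slightly, which is why reporting
the spread alongside the mean is informative here.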

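The leave-one-session-out scheme used for validation earlier can be
illustrated with scikit-learn alone. A minimal sketch on toy data (the
arrays below are made up for illustration, not taken from the Haxby
dataset):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy data: 6 samples belonging to 3 "sessions" (groups)
X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 2, 2])

# Each fold holds out one entire session for testing
cv = LeaveOneGroupOut()
for train_index, test_index in cv.split(X, y, groups=groups):
    print('train sessions:', sorted(set(groups[train_index].tolist())),
          '| test session:', sorted(set(groups[test_index].tolist())))
```

Because whole sessions are held out, the test data are never drawn from the
same acquisition run as the training data, which avoids optimistic bias from
within-session temporal correlations.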
Saving the results as a Nifti file may also be important.

.. code-block:: default

    weight_img.to_filename('haxby_face_vs_house.nii')

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 22.505 seconds)

.. _sphx_glr_download_auto_examples_02_decoding_plot_haxby_anova_svm.py:

.. rst-class:: sphx-glr-signature

   Gallery generated by Sphinx-Gallery