.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_decoding_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code or to run
        this example in your browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_decoding_tutorial.py:


An introduction tutorial to fMRI decoding
==========================================

Here is a simple tutorial on decoding with nilearn. It reproduces the
Haxby 2001 study on a face vs cat discrimination task in a mask of the
ventral stream.

    * J.V. Haxby et al. "Distributed and Overlapping Representations of
      Faces and Objects in Ventral Temporal Cortex", Science vol 293
      (2001), pp. 2425-2430.

This tutorial is meant as an introduction to the various steps of a
decoding analysis using the Nilearn meta-estimator:
:class:`nilearn.decoding.Decoder`.

It is not a minimalistic example, as it strives to be didactic. It is not
meant to be copied to analyze new data: many of the steps are unnecessary.

.. contents:: **Contents**
    :local:
    :depth: 1

.. GENERATED FROM PYTHON SOURCE LINES 26-35

Retrieve and load the fMRI data from the Haxby study
------------------------------------------------------

First download the data
........................

The :func:`nilearn.datasets.fetch_haxby` function will download the Haxby
dataset into the nilearn data directory, if it is not already present on
disk. Downloading the roughly 310 MB of data from the Internet can take a
while.

.. GENERATED FROM PYTHON SOURCE LINES 35-45

.. code-block:: default

    from nilearn import datasets
    # By default, the 2nd subject's data will be fetched
    haxby_dataset = datasets.fetch_haxby()

    # 'func' is a list of filenames: one for each subject
    fmri_filename = haxby_dataset.func[0]

    # print basic information on the dataset
    print('First subject functional nifti images (4D) are at: %s' %
          fmri_filename)  # 4D data

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    First subject functional nifti images (4D) are at: /home/nicolas/nilearn_data/haxby2001/subj2/bold.nii.gz

.. GENERATED FROM PYTHON SOURCE LINES 46-58

Visualizing the fMRI volume
............................

One way to visualize a :term:`fmri` volume is to use
:func:`nilearn.plotting.plot_epi`. We will visualize the previously
fetched :term:`fmri` data from the Haxby dataset.

Because :term:`fmri` data are 4D (they consist of many 3D EPI images), we
cannot plot them directly using :func:`nilearn.plotting.plot_epi` (which
accepts just 3D input). Here we use :func:`nilearn.image.mean_img` to
extract a single 3D EPI image from the :term:`fmri` data.

.. GENERATED FROM PYTHON SOURCE LINES 58-62

.. code-block:: default

    from nilearn import plotting
    from nilearn.image import mean_img
    plotting.view_img(mean_img(fmri_filename), threshold=None)

.. interactive view of the mean EPI image (rendered in the HTML version of the docs)
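As a quick sanity check, we can confirm that the data really is 4D by
loading it and inspecting its shape. This is a minimal sketch: the last
axis indexes time, with one entry per row of the behavioral CSV we load
below.

.. code-block:: default

    # Load the image and inspect its shape: three spatial dimensions
    # plus one time dimension.
    from nilearn.image import load_img
    print(load_img(fmri_filename).shape)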
.. GENERATED FROM PYTHON SOURCE LINES 63-74

Feature extraction: from fMRI volumes to a data matrix
.......................................................

These are some really lovely images, but for machine learning we need
matrices to work with the actual data. Fortunately, the
:class:`nilearn.decoding.Decoder` object we will use later on can
automatically transform Nifti images into matrices. All we have to do for
now is define a mask filename.

A mask of the Ventral Temporal (VT) cortex coming from the Haxby study is
available:

.. GENERATED FROM PYTHON SOURCE LINES 74-81

.. code-block:: default

    mask_filename = haxby_dataset.mask_vt[0]

    # Let's visualize it, using the subject's anatomical image as a
    # background
    plotting.plot_roi(mask_filename, bg_img=haxby_dataset.anat[0],
                      cmap='Paired')

.. image-sg:: /auto_examples/images/sphx_glr_plot_decoding_tutorial_001.png
   :alt: plot decoding tutorial
   :srcset: /auto_examples/images/sphx_glr_plot_decoding_tutorial_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 82-91

Load the behavioral labels
...........................

Now that we know how the brain images will be converted to a data matrix,
we can apply machine-learning to them, for instance to predict the task
that the subject was doing. The behavioral labels are stored in a CSV
file, separated by spaces. We use pandas to load them into an array.

.. GENERATED FROM PYTHON SOURCE LINES 91-96

.. code-block:: default

    import pandas as pd
    # Load behavioral information
    behavioral = pd.read_csv(haxby_dataset.session_target[0], delimiter=' ')
    print(behavioral)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

         labels  chunks
    0      rest       0
    1      rest       0
    2      rest       0
    3      rest       0
    4      rest       0
    ...     ...     ...
    1447   rest      11
    1448   rest      11
    1449   rest      11
    1450   rest      11
    1451   rest      11

    [1452 rows x 2 columns]

.. GENERATED FROM PYTHON SOURCE LINES 97-100

The task was a visual-recognition task, and the labels denote the
experimental condition: the type of object that was presented to the
subject. This is what we are going to try to predict.

.. GENERATED FROM PYTHON SOURCE LINES 100-103

.. code-block:: default

    conditions = behavioral['labels']
    print(conditions)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    0       rest
    1       rest
    2       rest
    3       rest
    4       rest
            ...
    1447    rest
    1448    rest
    1449    rest
    1450    rest
    1451    rest
    Name: labels, Length: 1452, dtype: object

.. GENERATED FROM PYTHON SOURCE LINES 104-115

Restrict the analysis to cats and faces
........................................

As we can see from the targets above, the experiment contains many
conditions. As a consequence, the data is quite big. Not all of this data
is of interest to us for decoding, so we will keep only the :term:`fmri`
signals corresponding to faces or cats. We create a mask of the samples
belonging to these conditions; this mask is then applied to the
:term:`fmri` data to restrict the classification to the face vs cat
discrimination. The input data will become much smaller (i.e. the
:term:`fmri` signal will be shorter):

.. GENERATED FROM PYTHON SOURCE LINES 115-117

.. code-block:: default

    condition_mask = conditions.isin(['face', 'cat'])

.. GENERATED FROM PYTHON SOURCE LINES 118-120

Because the data is in one single large 4D image, we need to use
:func:`nilearn.image.index_img` to do the split easily.

.. GENERATED FROM PYTHON SOURCE LINES 120-123

.. code-block:: default

    from nilearn.image import index_img
    fmri_niimgs = index_img(fmri_filename, condition_mask)
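Although the :class:`nilearn.decoding.Decoder` will handle the masking for
us, it can be instructive to see this feature-extraction step done by
hand. The following is a minimal sketch, assuming a nilearn version that
exposes ``NiftiMasker`` under ``nilearn.input_data`` (newer releases expose
it under ``nilearn.maskers``):

.. code-block:: default

    # A hand-rolled version of the masking the Decoder performs
    # internally: extract, for each time point, the signal of the voxels
    # inside the VT mask.
    from nilearn.input_data import NiftiMasker

    masker = NiftiMasker(mask_img=mask_filename, standardize=True)
    # The result is a 2D array: (n_time_points, n_voxels_in_mask)
    X = masker.fit_transform(fmri_niimgs)
    print(X.shape)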
.. GENERATED FROM PYTHON SOURCE LINES 124-125

We apply the same mask to the targets:

.. GENERATED FROM PYTHON SOURCE LINES 125-130

.. code-block:: default

    conditions = conditions[condition_mask]
    # Convert to numpy array
    conditions = conditions.values
    print(conditions.shape)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    (216,)

.. GENERATED FROM PYTHON SOURCE LINES 131-136

Decoding with Support Vector Machine
------------------------------------

As a decoder, we use a Support Vector Classifier with a linear kernel. We
first create it using :class:`nilearn.decoding.Decoder`.

.. GENERATED FROM PYTHON SOURCE LINES 136-139

.. code-block:: default

    from nilearn.decoding import Decoder
    decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True)

.. GENERATED FROM PYTHON SOURCE LINES 140-144

The decoder is an object that can be fit (or trained) on data with
labels, and can then predict labels for data without them.

We first fit it on the data:

.. GENERATED FROM PYTHON SOURCE LINES 144-146

.. code-block:: default

    decoder.fit(fmri_niimgs, conditions)

.. GENERATED FROM PYTHON SOURCE LINES 147-148

We can then predict the labels from the data:

.. GENERATED FROM PYTHON SOURCE LINES 148-151

.. code-block:: default

    prediction = decoder.predict(fmri_niimgs)
    print(prediction)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    ['face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'cat'
     'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'face' 'face' 'face'
     ...
     'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat']

.. GENERATED FROM PYTHON SOURCE LINES 152-157

Note that for this classification task both classes contain the same
number of samples (the problem is balanced), so we can use accuracy to
measure the performance of the decoder. This is done by setting
``accuracy`` as the ``scoring`` parameter. Let's measure the prediction
accuracy:

.. GENERATED FROM PYTHON SOURCE LINES 157-159

.. code-block:: default

    print((prediction == conditions).sum() / float(len(conditions)))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    1.0

.. GENERATED FROM PYTHON SOURCE LINES 160-161

This prediction accuracy score is meaningless. Why?
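Before answering, a brief aside: the manual accuracy computation above is
just the fraction of correct predictions, which can equivalently be
written with scikit-learn's ``accuracy_score`` helper. A minimal sketch:

.. code-block:: default

    # Same quantity as the manual computation above: the fraction of
    # samples for which the predicted label matches the true label.
    from sklearn.metrics import accuracy_score
    print(accuracy_score(conditions, prediction))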
.. GENERATED FROM PYTHON SOURCE LINES 163-174

Measuring prediction scores using cross-validation
---------------------------------------------------

The proper way to measure error rates or prediction accuracy is via
cross-validation: leaving out some data and testing on it.

Manually leaving out data
..........................

Let's leave out the last 30 data points during training, and test the
prediction on these last 30 points:

.. GENERATED FROM PYTHON SOURCE LINES 174-191

.. code-block:: default

    fmri_niimgs_train = index_img(fmri_niimgs, slice(0, -30))
    fmri_niimgs_test = index_img(fmri_niimgs, slice(-30, None))
    conditions_train = conditions[:-30]
    conditions_test = conditions[-30:]

    decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True)
    decoder.fit(fmri_niimgs_train, conditions_train)

    prediction = decoder.predict(fmri_niimgs_test)

    # The prediction accuracy is calculated on the test data: this is the
    # accuracy of our model on examples it hasn't seen, which tells us how
    # well the model performs in general.
    print("Prediction Accuracy: {:.3f}".format(
        (prediction == conditions_test).sum() / float(len(conditions_test))))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Prediction Accuracy: 0.767

.. GENERATED FROM PYTHON SOURCE LINES 192-197

Implementing a KFold loop
..........................

We can manually split the data into train and test sets repeatedly, in a
``KFold`` strategy, using scikit-learn's cross-validation object:

.. GENERATED FROM PYTHON SOURCE LINES 197-214

.. code-block:: default

    from sklearn.model_selection import KFold

    cv = KFold(n_splits=5)

    # The "cv" object's split method can now accept data and create a
    # generator which can yield the splits.
    fold = 0
    for train, test in cv.split(conditions):
        fold += 1
        decoder = Decoder(estimator='svc', mask=mask_filename,
                          standardize=True)
        decoder.fit(index_img(fmri_niimgs, train), conditions[train])
        prediction = decoder.predict(index_img(fmri_niimgs, test))
        print(
            "CV Fold {:01d} | Prediction Accuracy: {:.3f}".format(
                fold,
                (prediction == conditions[test]).sum() / float(len(
                    conditions[test]))))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    CV Fold 1 | Prediction Accuracy: 0.886
    CV Fold 2 | Prediction Accuracy: 0.767
    CV Fold 3 | Prediction Accuracy: 0.767
    CV Fold 4 | Prediction Accuracy: 0.698
    CV Fold 5 | Prediction Accuracy: 0.744

.. GENERATED FROM PYTHON SOURCE LINES 215-222

Cross-validation with the decoder
...................................

The decoder also implements a cross-validation loop by default, and
stores one score per fold and per class. We can use accuracy to measure
its performance by setting ``accuracy`` as the ``scoring`` parameter.

.. GENERATED FROM PYTHON SOURCE LINES 222-230

.. code-block:: default

    n_folds = 5
    decoder = Decoder(
        estimator='svc', mask=mask_filename,
        standardize=True, cv=n_folds,
        scoring='accuracy'
    )
    decoder.fit(fmri_niimgs, conditions)

.. GENERATED FROM PYTHON SOURCE LINES 231-236

A cross-validation pipeline can also be implemented manually. More
details can be found on the `scikit-learn website `_.

We can then check the best-performing parameters per fold:

.. GENERATED FROM PYTHON SOURCE LINES 236-238

.. code-block:: default

    print(decoder.cv_params_['face'])

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    {'C': [100.0, 100.0, 100.0, 100.0, 100.0]}

.. GENERATED FROM PYTHON SOURCE LINES 239-250

.. note::
    We can speed things up by using all the CPUs of our computer with the
    ``n_jobs`` parameter.
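For instance, here is a minimal sketch (``n_jobs=-1`` asks joblib to use
all available cores; the actual speed-up depends on your machine):

.. code-block:: default

    # Run the internal cross-validation folds in parallel on all CPUs.
    decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True,
                      cv=n_folds, scoring='accuracy', n_jobs=-1)
    decoder.fit(fmri_niimgs, conditions)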
The best way to do cross-validation is to respect the structure of the
experiment, for instance by leaving out full sessions of acquisition.

The number of the session is stored in the CSV file giving the behavioral
data. We apply our condition mask to it, to keep only the cat and face
samples:

.. GENERATED FROM PYTHON SOURCE LINES 250-252

.. code-block:: default

    session_label = behavioral['chunks'][condition_mask]

.. GENERATED FROM PYTHON SOURCE LINES 253-257

The :term:`fMRI` data is acquired by sessions, and the noise is
autocorrelated within a given session. Hence, it is better to predict
across sessions when doing cross-validation. To leave a session out, pass
the cross-validator object to the ``cv`` parameter of the decoder:

.. GENERATED FROM PYTHON SOURCE LINES 257-266

.. code-block:: default

    from sklearn.model_selection import LeaveOneGroupOut
    cv = LeaveOneGroupOut()

    decoder = Decoder(estimator='svc', mask=mask_filename,
                      standardize=True, cv=cv)

    decoder.fit(fmri_niimgs, conditions, groups=session_label)

    print(decoder.cv_scores_)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    {'cat': [1.0, 1.0, 1.0, 1.0, 0.9629629629629629, 0.8518518518518519, 0.9753086419753086, 0.40740740740740744, 0.9876543209876543, 1.0, 0.9259259259259259, 0.8765432098765432], 'face': [1.0, 1.0, 1.0, 1.0, 0.9629629629629629, 0.8518518518518519, 0.9753086419753086, 0.40740740740740744, 0.9876543209876543, 1.0, 0.9259259259259259, 0.8765432098765432]}
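Each entry of ``cv_scores_`` holds one score per left-out session. To
summarize the classifier with a single number, we can average these
scores over folds. A minimal sketch:

.. code-block:: default

    # Average the per-fold accuracies; in this binary task the scores
    # reported for 'cat' and 'face' are identical.
    import numpy as np
    print(np.mean(decoder.cv_scores_['face']))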
.. GENERATED FROM PYTHON SOURCE LINES 267-276

Inspecting the model weights
-----------------------------

Finally, it may be useful to inspect and display the model weights.

Turning the weights into a nifti image
.......................................

We retrieve the SVC discriminating weights:

.. GENERATED FROM PYTHON SOURCE LINES 276-279

.. code-block:: default

    coef_ = decoder.coef_
    print(coef_)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [[-3.88471474e-02 -1.86752698e-02 -3.22275065e-02 -2.88102991e-02
       4.17748810e-02  1.10473660e-02  1.69628257e-02 -5.49688654e-02
      ...
       2.59815388e-02  1.50062168e-02  1.27146432e-02 -2.28689929e-02
      -1.06426763e-02  9.79331908e-03 -4.76402098e-02  1.63869458e-02]]

.. GENERATED FROM PYTHON SOURCE LINES 280-281

It's a numpy array with only one coefficient per voxel:

.. GENERATED FROM PYTHON SOURCE LINES 281-283

.. code-block:: default

    print(coef_.shape)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    (1, 464)

.. GENERATED FROM PYTHON SOURCE LINES 284-286

To get the Nifti image of these coefficients, we only need to retrieve
the ``coef_img_`` attribute of the decoder and select the class:

.. GENERATED FROM PYTHON SOURCE LINES 286-289

.. code-block:: default

    coef_img = decoder.coef_img_['face']

.. GENERATED FROM PYTHON SOURCE LINES 290-291

coef_img is now a Nifti image. We can save the coefficients as a nii.gz
file:

.. GENERATED FROM PYTHON SOURCE LINES 291-293

.. code-block:: default

    decoder.coef_img_['face'].to_filename('haxby_svc_weights.nii.gz')

.. GENERATED FROM PYTHON SOURCE LINES 294-298

Plotting the SVM weights
.........................

We can plot the weights, using the subject's anatomical image as a
background:

.. GENERATED FROM PYTHON SOURCE LINES 298-303

.. code-block:: default

    plotting.view_img(
        decoder.coef_img_['face'], bg_img=haxby_dataset.anat[0],
        title="SVM weights", dim=-1
    )

.. interactive view of the SVM weights (rendered in the HTML version of the docs)
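The interactive widget above is handy for exploration, but a static
figure is sometimes preferable (for instance, in a printed report). As a
minimal sketch, the same weights can be rendered with
:func:`nilearn.plotting.plot_stat_map`:

.. code-block:: default

    # Static counterpart of the interactive view: overlay the weight map
    # on the subject's anatomical image.
    plotting.plot_stat_map(
        decoder.coef_img_['face'], bg_img=haxby_dataset.anat[0],
        title="SVM weights", dim=-1
    )
    plotting.show()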
.. GENERATED FROM PYTHON SOURCE LINES 304-311

What is the chance level accuracy?
----------------------------------

Does the model above perform better than chance? To answer this question,
we measure the score obtained by chance, using simple strategies
implemented in the :class:`nilearn.decoding.Decoder` object. This gives a
baseline against which to compare the decoding performance.

.. GENERATED FROM PYTHON SOURCE LINES 313-315

Let's define a Decoder with a dummy classifier replacing 'svc' in this
classification setting. This object initializes the estimator with the
default dummy strategy.

.. GENERATED FROM PYTHON SOURCE LINES 315-322

.. code-block:: default

    dummy_decoder = Decoder(estimator='dummy_classifier',
                            mask=mask_filename, cv=cv)
    dummy_decoder.fit(fmri_niimgs, conditions, groups=session_label)

    # Now, we can compare these scores by simply taking a mean over folds
    print(dummy_decoder.cv_scores_)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    {'cat': [0.38888888888888895, 0.38888888888888895, 0.38888888888888895, 0.6111111111111112, 0.38888888888888895, 0.6111111111111112, 0.38888888888888895, 0.38888888888888895, 0.38888888888888895, 0.38888888888888895, 0.6111111111111112, 0.38888888888888895], 'face': [0.38888888888888895, 0.38888888888888895, 0.38888888888888895, 0.6111111111111112, 0.38888888888888895, 0.6111111111111112, 0.38888888888888895, 0.38888888888888895, 0.38888888888888895, 0.38888888888888895, 0.6111111111111112, 0.38888888888888895]}

.. GENERATED FROM PYTHON SOURCE LINES 323-336

Further reading
----------------

* The :ref:`section of the documentation on decoding `

* :ref:`sphx_glr_auto_examples_02_decoding_plot_haxby_anova_svm.py`

For decoding without a precomputed mask:

* :ref:`frem`

* :ref:`space_net`

______________

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 34.622 seconds)

**Estimated memory usage:** 936 MB

.. _sphx_glr_download_auto_examples_plot_decoding_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn.github.io/main?filepath=examples/auto_examples/plot_decoding_tutorial.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_decoding_tutorial.py `

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_decoding_tutorial.ipynb `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_