.. rst-class:: sphx-glr-example-title
.. _sphx_glr_auto_examples_plot_decoding_tutorial.py:
An introduction tutorial to fMRI decoding
==========================================
Here is a simple tutorial on decoding with nilearn. It reproduces the
Haxby 2001 study on a face vs cat discrimination task in a mask of the
ventral stream.
* J.V. Haxby et al. "Distributed and Overlapping Representations of Faces
and Objects in Ventral Temporal Cortex", Science vol 293 (2001), pp. 2425-2430.
This tutorial is meant as an introduction to the various steps of a decoding
analysis using the Nilearn meta-estimator :class:`nilearn.decoding.Decoder`.
It is not a minimalistic example, as it strives to be didactic. It is not
meant to be copied to analyze new data: many of the steps are unnecessary.
.. contents:: **Contents**
:local:
:depth: 1
Retrieve and load the fMRI data from the Haxby study
------------------------------------------------------
First download the data
........................
The :func:`nilearn.datasets.fetch_haxby` function will download the
Haxby dataset if not present on the disk, in the nilearn data directory.
It can take a while to download about 310 MB of data from the Internet.
.. code-block:: default
from nilearn import datasets
# By default the 2nd subject will be fetched
haxby_dataset = datasets.fetch_haxby()
# 'func' is a list of filenames: one for each subject
fmri_filename = haxby_dataset.func[0]
# print basic information on the dataset
print('First subject functional nifti images (4D) are at: %s' %
fmri_filename) # 4D data
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
First subject functional nifti images (4D) are at: /home/varoquau/nilearn_data/haxby2001/subj2/bold.nii.gz
Visualizing the fMRI volume
............................
One way to visualize an fMRI volume is
to use :func:`nilearn.plotting.plot_epi`.
We will visualize the previously fetched fMRI data from the Haxby dataset.
Because fMRI data are 4D (they consist of many 3D EPI images), we cannot
plot them directly using :func:`nilearn.plotting.plot_epi` (which accepts
just 3D input). Here we use :func:`nilearn.image.mean_img` to average the
fMRI volumes over time into a single 3D EPI image, and display it
interactively with :func:`nilearn.plotting.view_img`.
.. code-block:: default
from nilearn import plotting
from nilearn.image import mean_img
plotting.view_img(mean_img(fmri_filename), threshold=None)
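As a complement, the static :func:`nilearn.plotting.plot_epi` mentioned
above can render the same mean image (a quick sketch, reusing the imports
from the previous block):
.. code-block:: default
# Static counterpart of the interactive view above (sketch)
plotting.plot_epi(mean_img(fmri_filename))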
Feature extraction: from fMRI volumes to a data matrix
.......................................................
These are some really lovely images, but for machine learning
we need matrices to work with the actual data. Fortunately, the
:class:`nilearn.decoding.Decoder` object we will use later on can
automatically transform Nifti images into matrices.
All we have to do for now is define a mask filename.
A mask of the Ventral Temporal (VT) cortex coming from the
Haxby study is available:
.. code-block:: default
mask_filename = haxby_dataset.mask_vt[0]
# Let's visualize it, using the subject's anatomical image as a
# background
plotting.plot_roi(mask_filename, bg_img=haxby_dataset.anat[0],
cmap='Paired')
.. image:: /auto_examples/images/sphx_glr_plot_decoding_tutorial_001.png
:alt: plot decoding tutorial
:class: sphx-glr-single-img
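For illustration only, here is a minimal sketch of the kind of
transformation the decoder performs internally, using
:class:`nilearn.input_data.NiftiMasker` (the Decoder's actual internal
pipeline may differ in its details):
.. code-block:: default
from nilearn.input_data import NiftiMasker
# Sketch: turn the 4D image into a (n_samples, n_voxels) data matrix
masker = NiftiMasker(mask_img=mask_filename, standardize=True)
X = masker.fit_transform(fmri_filename)
# One row per volume, one column per voxel inside the mask
print(X.shape)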
Load the behavioral labels
...........................
Now that the brain images are converted to a data matrix, we can apply
machine-learning to them, for instance to predict the task that the subject
was doing. The behavioral labels are stored in a CSV file, separated by
spaces.
We use pandas to load them into a dataframe.
.. code-block:: default
import pandas as pd
# Load behavioral information
behavioral = pd.read_csv(haxby_dataset.session_target[0], delimiter=' ')
print(behavioral)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
labels chunks
0 rest 0
1 rest 0
2 rest 0
3 rest 0
4 rest 0
... ... ...
1447 rest 11
1448 rest 11
1449 rest 11
1450 rest 11
1451 rest 11
[1452 rows x 2 columns]
The task was a visual-recognition task, and the labels denote the
experimental condition: the type of object that was presented to the
subject. This is what we are going to try to predict.
.. code-block:: default
conditions = behavioral['labels']
print(conditions)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
0 rest
1 rest
2 rest
3 rest
4 rest
...
1447 rest
1448 rest
1449 rest
1450 rest
1451 rest
Name: labels, Length: 1452, dtype: object
Restrict the analysis to cats and faces
........................................
As we can see from the targets above, the experiment contains many
conditions. As a consequence, the data is quite big. Not all of this data
is of interest to us for decoding, so we will keep only the fMRI signals
corresponding to faces or cats. We create a mask of the samples belonging to
these conditions; this mask is then applied to the fMRI data to restrict the
classification to the face vs cat discrimination.
The input data will become much smaller (i.e. the fMRI signal will be shorter):
.. code-block:: default
condition_mask = conditions.isin(['face', 'cat'])
Because the data is in a single large 4D image, we need to use
:func:`nilearn.image.index_img` to do the split easily.
.. code-block:: default
from nilearn.image import index_img
fmri_niimgs = index_img(fmri_filename, condition_mask)
We apply the same mask to the targets:
.. code-block:: default
conditions = conditions[condition_mask]
# Convert to numpy array
conditions = conditions.values
print(conditions.shape)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
(216,)
Decoding with Support Vector Machine
------------------------------------
As a decoder, we use a Support Vector Classifier with a linear kernel. We
first create it using :class:`nilearn.decoding.Decoder`.
.. code-block:: default
from nilearn.decoding import Decoder
decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True)
The decoder is an object that can be fit (or trained) on data with
labels, and then used to predict labels on new data.
We first fit it on the data:
.. code-block:: default
decoder.fit(fmri_niimgs, conditions)
We can then predict the labels from the data:
.. code-block:: default
prediction = decoder.predict(fmri_niimgs)
print(prediction)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
['face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'cat'
'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'face' 'face' 'face'
'face' 'face' 'face' 'face' 'face' 'face' 'cat' 'cat' 'cat' 'cat' 'cat'
'cat' 'cat' 'cat' 'cat' 'face' 'face' 'face' 'face' 'face' 'face' 'face'
'face' 'face' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat'
'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'face' 'face' 'face'
'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face'
'face' 'face' 'face' 'face' 'face' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat'
'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat'
'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face'
'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'cat' 'cat' 'cat'
'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'face' 'face' 'face' 'face' 'face'
'face' 'face' 'face' 'face' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat'
'cat' 'cat' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face'
'face' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'face'
'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'cat' 'cat' 'cat'
'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat'
'cat' 'cat' 'cat' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face'
'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face' 'face'
'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat' 'cat']
Note that for this classification task both classes contain the same number
of samples (the problem is balanced), so accuracy is a reasonable measure
of the decoder's performance. It can be requested by setting `accuracy`
as the `scoring` parameter.
Let's measure the prediction accuracy:
.. code-block:: default
print((prediction == conditions).sum() / float(len(conditions)))
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
1.0
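As a side note, scikit-learn's :func:`sklearn.metrics.accuracy_score` would
compute the same number (a sketch, reusing the arrays defined above):
.. code-block:: default
from sklearn.metrics import accuracy_score
# Same accuracy as the manual computation above
print(accuracy_score(conditions, prediction))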
This prediction accuracy score is meaningless. Why?
Measuring prediction scores using cross-validation
---------------------------------------------------
The proper way to measure error rates or prediction accuracy is via
cross-validation: leaving out some data and testing on it.
Manually leaving out data
..........................
Let's leave out the last 30 data points during training, and test the
prediction on these last 30 points:
.. code-block:: default
fmri_niimgs_train = index_img(fmri_niimgs, slice(0, -30))
fmri_niimgs_test = index_img(fmri_niimgs, slice(-30, None))
conditions_train = conditions[:-30]
conditions_test = conditions[-30:]
decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True)
decoder.fit(fmri_niimgs_train, conditions_train)
prediction = decoder.predict(fmri_niimgs_test)
# The prediction accuracy is calculated on the test data: this is the accuracy
# of our model on examples it hasn't seen, to examine how well the model
# performs in general.
print("Prediction Accuracy: {:.3f}".format(
(prediction == conditions_test).sum() / float(len(conditions_test))))
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
Prediction Accuracy: 0.767
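Instead of slicing by hand, we could also draw a shuffled split with
scikit-learn's :func:`sklearn.model_selection.train_test_split` and select
volumes with `index_img` (a sketch; the `random_state` value is arbitrary):
.. code-block:: default
import numpy as np
from sklearn.model_selection import train_test_split
# Split indices rather than images; index_img then picks the volumes
indices = np.arange(len(conditions))
train_idx, test_idx = train_test_split(indices, test_size=30, random_state=0)
fmri_niimgs_train = index_img(fmri_niimgs, train_idx)
fmri_niimgs_test = index_img(fmri_niimgs, test_idx)
conditions_train = conditions[train_idx]
conditions_test = conditions[test_idx]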
Implementing a KFold loop
..........................
We can manually split the data into train and test sets in a repeated
`KFold` strategy by importing scikit-learn's object:
.. code-block:: default
from sklearn.model_selection import KFold
cv = KFold(n_splits=5)
# The "cv" object's split method can now accept data and create a
# generator which can yield the splits.
fold = 0
for train, test in cv.split(conditions):
fold += 1
decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True)
decoder.fit(index_img(fmri_niimgs, train), conditions[train])
prediction = decoder.predict(index_img(fmri_niimgs, test))
print(
"CV Fold {:01d} | Prediction Accuracy: {:.3f}".format(
fold,
(prediction == conditions[test]).sum() / float(len(
conditions[test]))))
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
CV Fold 1 | Prediction Accuracy: 0.886
CV Fold 2 | Prediction Accuracy: 0.767
CV Fold 3 | Prediction Accuracy: 0.767
CV Fold 4 | Prediction Accuracy: 0.698
CV Fold 5 | Prediction Accuracy: 0.744
Cross-validation with the decoder
...................................
The decoder also implements a cross-validation loop by default and returns
an array of shape (cross-validation parameters, `n_folds`). We can use
accuracy score to measure its performance by defining `accuracy` as the
`scoring` parameter.
.. code-block:: default
n_folds = 5
decoder = Decoder(
estimator='svc', mask=mask_filename,
standardize=True, cv=n_folds,
scoring='accuracy'
)
decoder.fit(fmri_niimgs, conditions)
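The fitted decoder stores its per-fold scores in the `cv_scores_` attribute,
a dictionary keyed by class label; a quick sketch of summarizing them:
.. code-block:: default
import numpy as np
# Mean cross-validated accuracy for each class
for class_label, scores in decoder.cv_scores_.items():
    print(class_label, np.mean(scores))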
A cross-validation pipeline can also be implemented manually. More details
can be found in the `scikit-learn documentation
<https://scikit-learn.org/stable/modules/cross_validation.html>`_.
Then we can check the best performing parameters per fold.
.. code-block:: default
print(decoder.cv_params_['face'])
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
{'C': [100.0, 100.0, 100.0, 100.0, 100.0]}
.. note::
We can speed things up by using all the CPUs of our computer with the
`n_jobs` parameter.
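For instance (a sketch; `n_jobs=-1` requests all available CPU cores):
.. code-block:: default
# Sketch: parallelize the decoder's internal cross-validation loop
decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True,
                  n_jobs=-1)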
The best way to do cross-validation is to respect the structure of
the experiment, for instance by leaving out full sessions of
acquisition.
The number of the session is stored in the CSV file giving the
behavioral data. We have to apply our condition mask to it, to select only
cats and faces.
.. code-block:: default
session_label = behavioral['chunks'][condition_mask]
The fMRI data is acquired by sessions, and the noise is autocorrelated in a
given session. Hence, it is better to predict across sessions when doing
cross-validation. To leave a session out, pass the cross-validator object
to the `cv` parameter of the decoder.
.. code-block:: default
from sklearn.model_selection import LeaveOneGroupOut
cv = LeaveOneGroupOut()
decoder = Decoder(estimator='svc', mask=mask_filename, standardize=True,
cv=cv)
decoder.fit(fmri_niimgs, conditions, groups=session_label)
print(decoder.cv_scores_)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
{'cat': [1.0, 1.0, 1.0, 1.0, 0.9629629629629629, 0.8518518518518519, 0.9753086419753086, 0.40740740740740744, 0.9876543209876543, 1.0, 0.9259259259259259, 0.8765432098765432], 'face': [1.0, 1.0, 1.0, 1.0, 0.9629629629629629, 0.8518518518518519, 0.9753086419753086, 0.40740740740740744, 0.9876543209876543, 1.0, 0.9259259259259259, 0.8765432098765432]}
Inspecting the model weights
-----------------------------
Finally, it may be useful to inspect and display the model weights.
Turning the weights into a nifti image
.......................................
We retrieve the SVC discriminating weights
.. code-block:: default
coef_ = decoder.coef_
print(coef_)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
[[-3.88470496e-02 -1.86752301e-02 -3.22274216e-02 -2.88102959e-02
4.17749325e-02 1.10475230e-02 1.69629496e-02 -5.49689746e-02
-1.93774964e-02 -3.50417282e-02 1.08279420e-02 -1.28500904e-02
-1.54318246e-02 -3.78044766e-02 -3.68278843e-02 2.27559486e-02
6.55001569e-03 -7.64035810e-03 1.66730533e-02 -8.00465290e-03
5.28260841e-02 -8.15726427e-02 -6.35569770e-02 2.40756228e-02
4.58822986e-02 -2.22076987e-02 -1.76884867e-02 2.21688791e-02
-9.51060955e-03 5.74705437e-02 2.13813970e-02 -9.12121553e-02
4.02834813e-03 -2.88625303e-02 -3.88125004e-02 -3.34318339e-02
2.20849786e-03 8.71166910e-03 -3.36652654e-02 -2.40699164e-02
-6.80080915e-02 1.65046603e-02 2.70137797e-02 -6.55172891e-03
-1.21378785e-02 5.46414482e-02 8.11539605e-03 3.60127949e-02
-1.52387259e-02 7.01266307e-02 1.28209707e-03 2.07523141e-02
-4.09182233e-03 3.71552557e-02 -3.76543826e-02 -1.03610833e-02
-2.37698340e-02 -5.47603290e-02 4.41999453e-02 -1.47077322e-01
-2.33517585e-02 1.86654146e-02 6.64352621e-02 -9.05549619e-02
-1.21725698e-02 -2.94679392e-03 3.21326085e-02 -3.03306030e-02
6.13950625e-02 1.12012184e-02 1.93353725e-02 -1.30258438e-02
4.41940442e-02 -2.22555559e-02 6.86548103e-02 1.69012021e-02
1.78552870e-02 1.00063616e-02 2.98480821e-02 -2.51580472e-02
1.05947293e-02 -6.30480956e-03 2.21078703e-03 -2.22817281e-02
1.42260127e-02 -1.52785552e-02 -1.97786728e-02 -4.31616469e-02
-4.54072427e-02 3.40811516e-02 -2.78551502e-02 -2.80247240e-02
-3.69302239e-02 -5.70139068e-02 -6.97323201e-02 3.19241723e-03
-8.33510335e-03 -3.36854771e-02 3.03555194e-02 8.66308758e-03
6.17999734e-03 5.92798917e-02 9.05111104e-03 -1.48580755e-02
1.43214948e-02 -1.08762837e-02 2.67057762e-02 4.72688937e-02
-2.95716194e-02 3.08742349e-02 1.57552956e-02 -3.16005309e-02
-3.99190412e-02 -5.38977754e-02 2.81973107e-02 -1.11834233e-02
-5.44116841e-02 6.30731455e-02 -1.49667423e-02 2.47161026e-03
-4.55571956e-02 -1.83424158e-02 1.19705427e-02 -3.71288818e-02
-2.24789274e-03 4.57569977e-02 4.78066089e-02 2.51037931e-03
-4.30721546e-02 -5.33655494e-03 5.75657772e-02 7.39176239e-03
-3.19811241e-02 4.34825118e-03 1.67904380e-02 -2.91879779e-02
-2.23727315e-03 -8.28199549e-03 -9.97902833e-03 2.16656229e-02
-1.92063443e-03 -1.32898766e-02 -2.79642711e-02 -1.74906015e-02
-9.15528100e-03 -7.08025319e-03 -1.42698520e-02 5.05702430e-02
-1.84396745e-02 -4.70413129e-02 1.72190631e-02 -4.75572704e-02
-9.07173538e-04 3.99828969e-02 7.52267793e-02 7.24263299e-03
4.81497704e-02 4.49523379e-02 3.60351787e-02 -8.14533608e-03
1.94961186e-02 3.57077647e-02 4.88176479e-02 3.82080231e-02
6.22475993e-02 6.12260882e-02 -1.68378582e-02 1.66159384e-02
3.34741207e-02 -1.79793491e-02 4.45397278e-02 -3.52426285e-02
-3.66473886e-02 -4.61409223e-03 4.85716305e-02 3.38888095e-02
6.20125212e-03 1.73238804e-02 2.01273546e-02 2.16579963e-02
2.90731527e-02 2.37270512e-02 4.83611676e-02 -9.20501127e-03
-2.81969339e-02 -2.13305250e-02 1.80406458e-03 4.78568230e-02
-9.76550106e-03 1.11160941e-02 -1.64705732e-02 -2.88446602e-02
2.42268357e-02 -1.22079300e-02 -2.92193305e-02 -2.89203731e-02
-3.38761598e-02 -3.64215343e-03 2.64728367e-02 4.57032443e-02
-5.92020175e-02 -2.13146935e-02 -3.08698354e-02 5.48930352e-02
-3.38041466e-02 6.11185692e-03 1.41178873e-02 1.09946985e-02
5.32574663e-02 -2.11838562e-02 6.35988176e-03 -1.12818106e-02
-2.63615194e-02 -2.21910538e-02 -5.30672549e-02 -3.97748526e-02
-1.29431073e-01 -3.27318992e-02 -2.89007278e-02 -9.11560565e-03
-7.26992350e-03 -3.70177539e-02 -6.33422851e-02 2.04153743e-03
-8.24863181e-02 -6.69635553e-02 -2.28586757e-03 -2.32903052e-02
1.77469490e-02 -8.72663964e-02 -2.75697050e-03 -4.37286444e-02
-1.27746226e-02 2.77375396e-02 -4.31668490e-02 -3.21909835e-02
-2.27495813e-02 -2.56845772e-02 2.03155890e-02 -9.88056652e-03
-3.14295368e-02 -1.81001958e-02 -1.11810948e-03 -4.16456522e-02
-6.22036331e-02 2.55427685e-04 -6.72172777e-02 6.52438849e-02
1.06279181e-02 2.21492706e-02 -1.98227544e-02 -1.85107108e-02
4.04761053e-02 -3.02130001e-02 -8.08212852e-02 -7.40773301e-02
-4.92687632e-02 -1.01544870e-02 1.09172301e-02 -4.48197846e-02
2.92093404e-02 7.03567732e-03 5.06297975e-03 -4.82924061e-03
2.48271160e-03 2.99976042e-02 -2.62546227e-03 4.63550320e-03
7.88406440e-02 1.04606760e-02 1.67696076e-02 -4.35721181e-02
-1.08621517e-02 2.09745496e-02 -4.40930298e-02 3.15757738e-03
6.97069224e-02 8.59640338e-02 4.95096261e-02 6.02632267e-03
5.55187662e-02 -2.98208880e-02 4.11946592e-03 -3.21184072e-02
-3.14239366e-02 -5.30016029e-02 2.66641737e-02 3.13671460e-02
6.65645366e-03 -1.28393653e-02 2.19674896e-02 5.67231369e-02
2.25086109e-02 -2.04145898e-02 5.09075541e-03 2.84696658e-02
-1.81224202e-02 -8.46496915e-03 -3.18112018e-02 -1.18216117e-02
-4.09899129e-02 3.11041173e-02 9.61315440e-03 -8.24098277e-03
-3.11481432e-02 8.55845882e-03 -9.67734188e-03 1.32032237e-02
4.05486533e-02 8.21010036e-03 -3.26566678e-02 -4.32627041e-03
-1.75124000e-02 6.87123713e-03 3.44346675e-02 7.01687243e-02
2.16269413e-02 5.30865354e-03 8.15657716e-02 6.38543194e-02
-2.30760087e-03 -1.17255317e-02 1.75482671e-01 3.17386365e-02
-3.15194631e-02 3.33275442e-02 2.22248333e-02 9.99729448e-03
-4.73823357e-02 -2.12285309e-02 -3.97808585e-02 -6.02664877e-02
-4.63979431e-02 1.02755643e-02 -3.05362102e-04 1.80352494e-02
-1.75047995e-02 -8.70569631e-02 1.00429828e-01 4.45197303e-03
7.45126483e-02 -6.11978837e-02 2.81053600e-02 -1.40642871e-02
3.13916044e-02 -1.63457985e-02 3.65700295e-02 -5.14604248e-03
1.44762348e-02 6.34381595e-02 2.34023563e-02 8.79030548e-02
6.13933626e-02 -1.39020262e-02 2.06770969e-02 -3.14576941e-03
5.14253734e-02 -2.88097191e-02 1.59905296e-02 2.09224028e-02
-3.28411153e-02 -2.58827170e-02 -5.59115227e-02 -3.63771077e-02
1.12592690e-02 2.16792660e-02 -1.51322002e-02 -7.81031897e-03
2.42020438e-02 9.44817325e-02 -2.62430992e-02 1.16214592e-04
-5.23300137e-03 4.17021198e-02 8.83630746e-02 6.22090665e-03
1.86171395e-02 1.54275525e-02 3.49538821e-03 6.19235901e-03
-1.19520351e-02 1.59130990e-02 7.10295111e-03 -8.91137635e-02
-3.53347077e-03 1.23196842e-02 3.03209946e-02 -2.36741571e-02
-3.81955289e-02 -4.97580907e-02 4.65828711e-02 -1.23004149e-02
-1.10093378e-02 2.17590868e-02 2.18215021e-02 2.62921086e-02
1.05029259e-02 1.84180740e-02 8.31863689e-04 -6.63456331e-03
3.48613706e-02 1.48989047e-02 -1.11347730e-02 6.67404896e-03
-1.99598221e-02 -3.98118744e-02 3.01188722e-02 -1.09588144e-02
-4.10855524e-02 2.71430985e-02 1.16142877e-02 -1.55127583e-02
3.26914658e-02 3.94583199e-02 8.47078829e-03 2.19426825e-02
-9.86320779e-03 -3.60583580e-02 -4.75927887e-02 1.89644982e-02
-5.57029385e-02 -3.31010398e-02 -2.24391037e-02 -3.35395732e-02
-4.06399133e-02 1.08606942e-02 1.12545843e-02 7.61359079e-02
4.03814581e-03 3.06311858e-02 2.88500098e-02 4.70360031e-03
5.12218779e-02 -4.09447951e-02 1.22994825e-03 -2.49838000e-02
5.84588157e-02 -1.04721987e-01 -4.40682531e-02 1.18250700e-02
-5.81868755e-02 -4.81118336e-02 9.15446344e-03 1.03017952e-02
-5.07928493e-03 -3.22632790e-02 -3.18664812e-02 -1.53435935e-02
-5.19995850e-02 1.55243814e-02 2.92794304e-02 -1.92077667e-02
1.76309598e-02 2.67380860e-02 5.75230403e-02 -1.37856317e-02
2.59815284e-02 1.50061350e-02 1.27145217e-02 -2.28689359e-02
-1.06428882e-02 9.79310377e-03 -4.76402200e-02 1.63869589e-02]]
It's a numpy array with only one coefficient per voxel:
.. code-block:: default
print(coef_.shape)
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
(1, 464)
To get the Nifti image of these coefficients, we only need to retrieve the
`coef_img_` attribute of the decoder and select the class:
.. code-block:: default
coef_img = decoder.coef_img_['face']
`coef_img` is now a Nifti1Image. We can save the coefficients as a .nii.gz file:
.. code-block:: default
decoder.coef_img_['face'].to_filename('haxby_svc_weights.nii.gz')
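The saved file can later be reloaded as a Nifti image, for instance with
:func:`nilearn.image.load_img` (a quick sketch):
.. code-block:: default
from nilearn.image import load_img
# Reload the weight map that was just written to disk
weights_img = load_img('haxby_svc_weights.nii.gz')
print(weights_img.shape)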
Plotting the SVM weights
.........................
We can plot the weights, using the subject's anatomical image as a background:
.. code-block:: default
plotting.view_img(
decoder.coef_img_['face'], bg_img=haxby_dataset.anat[0],
title="SVM weights", dim=-1
)
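For a static figure instead of the interactive view,
:func:`nilearn.plotting.plot_stat_map` could be used (a sketch with the same
weights and background):
.. code-block:: default
# Static counterpart of the interactive view above
plotting.plot_stat_map(decoder.coef_img_['face'],
                       bg_img=haxby_dataset.anat[0],
                       title="SVM weights")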