.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/05_glm_second_level/plot_second_level_one_sample_test.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end `
        to download the full example code or to run this example in your
        browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_05_glm_second_level_plot_second_level_one_sample_test.py:


Second-level fMRI model: one sample test
========================================

Full step-by-step example of fitting a :term:`GLM` to perform a second-level
analysis (one-sample test) and visualizing the results.

More specifically:

1. A sequence of subject :term:`fMRI` button press contrasts is downloaded.

2. A mask of the useful brain volume is computed.

3. A one-sample t-test is applied to the brain maps.

We focus on a given contrast of the localizer dataset: the motor response to
left versus right button press. Both at the individual and group level, this
is expected to elicit activity in the motor cortex (positive in the right
hemisphere, negative in the left hemisphere).

.. GENERATED FROM PYTHON SOURCE LINES 20-23

.. code-block:: Python

    from nilearn import plotting

.. GENERATED FROM PYTHON SOURCE LINES 24-30

Fetch dataset
-------------

We download a list of left vs right button press :term:`contrasts` from a
localizer dataset. Note that we fetch individual t-maps that represent the
:term:`BOLD` activity estimate divided by the uncertainty about this estimate.

.. GENERATED FROM PYTHON SOURCE LINES 30-40

.. code-block:: Python

    from nilearn.datasets import fetch_localizer_contrasts

    n_subjects = 16
    data = fetch_localizer_contrasts(
        ["left vs right button press"],
        n_subjects,
        get_tmaps=True,
        legacy_format=False,
    )

.. GENERATED FROM PYTHON SOURCE LINES 41-46

Display subject t_maps
----------------------

We plot a grid with all the subjects' t-maps thresholded at t = 2 for simple
visualization purposes. The button press effect is visible in all subjects.

.. GENERATED FROM PYTHON SOURCE LINES 46-63

.. code-block:: Python

    import matplotlib.pyplot as plt

    subjects = data["ext_vars"]["participant_id"].tolist()
    fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8))
    for cidx, tmap in enumerate(data["tmaps"]):
        plotting.plot_glass_brain(
            tmap,
            colorbar=False,
            threshold=2.0,
            title=subjects[cidx],
            axes=axes[int(cidx / 4), int(cidx % 4)],
            plot_abs=False,
            display_mode="z",
        )
    fig.suptitle("subjects t_map left-right button press")
    plt.show()

.. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_one_sample_test_001.png
   :alt: subjects t_map left-right button press
   :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_one_sample_test_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 64-71

Estimate second level model
---------------------------

We wish to perform a one-sample test. In order to do so, we need to create a
design matrix that determines how the analysis will be performed. For a
one-sample test, all we need to include in the design matrix is a single
column of ones, corresponding to the model intercept.

.. GENERATED FROM PYTHON SOURCE LINES 71-79

.. code-block:: Python

    import pandas as pd

    second_level_input = data["cmaps"]
    design_matrix = pd.DataFrame(
        [1] * len(second_level_input),
        columns=["intercept"],
    )

.. GENERATED FROM PYTHON SOURCE LINES 80-81

Next, we specify the model and fit it.

.. GENERATED FROM PYTHON SOURCE LINES 81-89

.. code-block:: Python

    from nilearn.glm.second_level import SecondLevelModel

    second_level_model = SecondLevelModel(smoothing_fwhm=8.0, n_jobs=2)
    second_level_model = second_level_model.fit(
        second_level_input,
        design_matrix=design_matrix,
    )

.. GENERATED FROM PYTHON SOURCE LINES 90-92

Estimating the :term:`contrast` is very simple: we can just provide the
column name of the design matrix.

.. GENERATED FROM PYTHON SOURCE LINES 92-97

.. code-block:: Python

    z_map = second_level_model.compute_contrast(
        second_level_contrast="intercept",
        output_type="z_score",
    )

.. GENERATED FROM PYTHON SOURCE LINES 98-100

We threshold the second level :term:`contrast` at uncorrected p < 0.001 and
plot it.

.. GENERATED FROM PYTHON SOURCE LINES 100-115

.. code-block:: Python

    from scipy.stats import norm

    p_val = 0.001
    p001_unc = norm.isf(p_val)
    display = plotting.plot_glass_brain(
        z_map,
        threshold=p001_unc,
        colorbar=True,
        display_mode="z",
        plot_abs=False,
        title="group left-right button press (unc p<0.001)",
        figure=plt.figure(figsize=(5, 5)),
    )
    plotting.show()

.. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_one_sample_test_002.png
   :alt: plot second level one sample test
   :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_one_sample_test_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 116-117

As expected, we find the motor cortex.

.. GENERATED FROM PYTHON SOURCE LINES 119-121

Next, we compute the (corrected) p-values with a parametric test to compare
them with the results from a nonparametric test.

.. GENERATED FROM PYTHON SOURCE LINES 121-133

.. code-block:: Python

    import numpy as np

    from nilearn.image import get_data, math_img

    p_val = second_level_model.compute_contrast(output_type="p_value")
    n_voxels = np.sum(get_data(second_level_model.masker_.mask_img_))
    # Correcting the p-values for multiple testing and taking negative logarithm
    neg_log_pval = math_img(
        f"-np.log10(np.minimum(1, img * {str(n_voxels)}))",
        img=p_val,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    <string>:1: RuntimeWarning: divide by zero encountered in log10

.. GENERATED FROM PYTHON SOURCE LINES 134-161

Now, we compute the (corrected) p-values with a permutation test.

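
As a brief aside, the Bonferroni-style correction applied in the parametric
step above boils down to simple arithmetic. Here is a minimal sketch on
made-up p-values and an assumed voxel count (the numbers are purely
illustrative, not taken from this analysis):

.. code-block:: Python

    import numpy as np

    # Hypothetical uncorrected p-values for four voxels
    p_values = np.array([1e-7, 1e-4, 0.01, 0.5])
    n_voxels = 10_000  # assumed number of in-mask voxels

    # Multiply by the number of tests, cap at 1, then take -log10,
    # mirroring the math_img expression used above
    neg_log_p = -np.log10(np.minimum(1, p_values * n_voxels))
    # Only the first voxel survives: [3. 0. 0. 0.]

A value of 3 corresponds to a corrected p-value of 0.001; voxels whose
corrected p-value is capped at 1 map to 0.
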
We will use :func:`~nilearn.glm.second_level.non_parametric_inference` for
this step, although :func:`~nilearn.mass_univariate.permuted_ols` could be
used as well (pending additional steps to mask and reformat the inputs).

.. important::

    One key difference between
    :obj:`~nilearn.glm.second_level.SecondLevelModel` and
    :func:`~nilearn.glm.second_level.non_parametric_inference`/
    :func:`~nilearn.mass_univariate.permuted_ols`
    is that the one-sample test in non_parametric_inference/permuted_ols
    assumes that the distribution is symmetric about 0, which is weaker than
    the SecondLevelModel's assumption that the null distribution is Gaussian
    and centered about 0.

.. important::

    In this example, ``threshold`` is set to 0.001, which enables
    cluster-level inference. Performing cluster-level inference will increase
    the computation time of the permutation procedure. Increasing the number
    of parallel jobs (``n_jobs``) can reduce the time cost.

.. hint::

    If you wish to run only voxel-level correction, set ``threshold`` to
    None (the default).

.. GENERATED FROM PYTHON SOURCE LINES 161-174

.. code-block:: Python

    from nilearn.glm.second_level import non_parametric_inference

    out_dict = non_parametric_inference(
        second_level_input,
        design_matrix=design_matrix,
        model_intercept=True,
        n_perm=500,  # 500 for the sake of time. Ideally, this should be 10,000.
        two_sided_test=False,
        smoothing_fwhm=8.0,
        n_jobs=2,
        threshold=0.001,
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/himanshu/Desktop/nilearn_work/nilearn/nilearn/mass_univariate/permuted_least_squares.py:986: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
      image.new_img_like(masker.mask_img_, metric_map),
    /home/himanshu/Desktop/nilearn_work/nilearn/nilearn/masking.py:980: UserWarning: Data array used to create a new image contains 64-bit ints. This is likely due to creating the array with numpy and passing `int` as the `dtype`. Many tools such as FSL and SPM cannot deal with int64 in Nifti images, so for compatibility the data has been converted to int32.
      return new_img_like(mask_img, unmasked, affine)

.. GENERATED FROM PYTHON SOURCE LINES 175-187

Let us plot the (corrected) negative log p-values for both tests.

We will use a negative log10 p threshold of 1, which corresponds to p < 0.1.
This threshold indicates that there is less than a 10% probability of making
a single false discovery (i.e., a 90% chance of making no false discovery at
all). This threshold is much more conservative than an uncorrected threshold,
but is still more liberal than a typical corrected threshold for this kind of
analysis, which tends to be ~0.05.

We will also cap the negative log10 p-values at 2.69, because this is the
maximum observable value for the nonparametric tests, which were run with
only 500 permutations.

.. GENERATED FROM PYTHON SOURCE LINES 187-227

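The 2.69 cap comes directly from the number of permutations: with ``n_perm``
permutations, the smallest p-value the test can report is 1 / ``n_perm``, so
the largest observable negative log10 p-value is -log10(1/500) ≈ 2.69. A
minimal check (note, as an aside, that some permutation schemes use
1 / (n_perm + 1) as the smallest reportable p-value, which would shift this
bound slightly):

.. code-block:: Python

    import numpy as np

    n_perm = 500
    # Smallest reportable p-value is 1 / n_perm, hence the cap on -log10(p)
    max_neg_log_p = -np.log10(1 / n_perm)
    # max_neg_log_p is approximately 2.69897
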
.. code-block:: Python

    import itertools

    threshold = 1  # p < 0.1
    vmax = 2.69  # ~= -np.log10(1 / 500)
    cut_coords = [0]
    IMAGES = [
        neg_log_pval,
        out_dict["logp_max_t"],
        out_dict["logp_max_size"],
        out_dict["logp_max_mass"],
    ]
    TITLES = [
        "Parametric Test",
        "Permutation Test\n(Voxel-Level Error Control)",
        "Permutation Test\n(Cluster-Size Error Control)",
        "Permutation Test\n(Cluster-Mass Error Control)",
    ]
    fig, axes = plt.subplots(figsize=(8, 8), nrows=2, ncols=2)
    for img_counter, (i_row, j_col) in enumerate(
        itertools.product(range(2), range(2))
    ):
        ax = axes[i_row, j_col]
        plotting.plot_glass_brain(
            IMAGES[img_counter],
            colorbar=True,
            vmax=vmax,
            display_mode="z",
            plot_abs=False,
            cut_coords=cut_coords,
            threshold=threshold,
            figure=fig,
            axes=ax,
        )
        ax.set_title(TITLES[img_counter])
    fig.suptitle("Group left-right button press\n(negative log10 p-values)")
    plt.show()

.. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_one_sample_test_003.png
   :alt: Group left-right button press (negative log10 p-values), Parametric Test, Permutation Test (Voxel-Level Error Control), Permutation Test (Cluster-Size Error Control), Permutation Test (Cluster-Mass Error Control)
   :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_one_sample_test_003.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/himanshu/Desktop/nilearn_work/nilearn/nilearn/plotting/img_plotting.py:1471: UserWarning: Non-finite values detected. These values will be replaced with zeros.
      safe_get_data(stat_map_img, ensure_finite=True),

.. GENERATED FROM PYTHON SOURCE LINES 228-234

The nonparametric test yields many more discoveries and is more powerful than
the usual parametric procedure. Even within the nonparametric test, the
different correction metrics produce different results. The voxel-level
correction is more conservative than the cluster-size or cluster-mass
corrections, which are very similar to one another.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 51.227 seconds)

**Estimated memory usage:** 9 MB


.. _sphx_glr_download_auto_examples_05_glm_second_level_plot_second_level_one_sample_test.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/nilearn/nilearn/0.10.4?urlpath=lab/tree/notebooks/auto_examples/05_glm_second_level/plot_second_level_one_sample_test.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_second_level_one_sample_test.ipynb `

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_second_level_one_sample_test.py `

.. only:: html

  .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_