.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/05_glm_second_level/plot_second_level_two_sample_test.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_05_glm_second_level_plot_second_level_two_sample_test.py:


Second-level fMRI model: two-sample test, unpaired and paired
==============================================================

Full step-by-step example of fitting a :term:`GLM` to perform a second-level
analysis of experimental data and visualizing the results.

More specifically:

1. A sample of n=16 visual activity fMRIs is downloaded.

2. An unpaired, two-sample t-test is applied to the brain maps
   in order to see the effect of the contrast difference across subjects.

3. A paired, two-sample t-test is applied to the brain maps
   in order to see the effect of the contrast difference across subjects,
   accounting for subject intercepts.

The contrast is between responses to retinotopically distinct
vertical versus horizontal checkerboards. At the individual level,
these stimuli are sometimes used to map the borders of primary visual areas.
At the group level, such a mapping is not possible.
Yet, we may observe some significant effects in these areas.

.. GENERATED FROM PYTHON SOURCE LINES 29-34

.. code-block:: Python

    import pandas as pd

    from nilearn import plotting
    from nilearn.datasets import fetch_localizer_contrasts

.. GENERATED FROM PYTHON SOURCE LINES 35-39

Fetch dataset
-------------

We download contrasts for vertical- and horizontal-checkerboard stimuli
from a localizer dataset.

.. GENERATED FROM PYTHON SOURCE LINES 39-55

.. code-block:: Python

    n_subjects = 16
    sample_vertical = fetch_localizer_contrasts(
        ["vertical checkerboard"],
        n_subjects,
        legacy_format=False,
    )
    sample_horizontal = fetch_localizer_contrasts(
        ["horizontal checkerboard"],
        n_subjects,
        legacy_format=False,
    )

    # Implicitly, there is a one-to-one correspondence between the two samples:
    # the first image of both samples comes from subject S1,
    # the second from subject S2, etc.

.. GENERATED FROM PYTHON SOURCE LINES 56-60

Estimate second level models
----------------------------

We define the input maps and the design matrix for the second level model
and fit it.

.. GENERATED FROM PYTHON SOURCE LINES 60-62

.. code-block:: Python

    second_level_input = sample_vertical["cmaps"] + sample_horizontal["cmaps"]

.. GENERATED FROM PYTHON SOURCE LINES 63-64

Next, we model the effect of condition (sample 1 vs sample 2).

.. GENERATED FROM PYTHON SOURCE LINES 64-68

.. code-block:: Python

    import numpy as np

    condition_effect = np.hstack(([1] * n_subjects, [-1] * n_subjects))

.. GENERATED FROM PYTHON SOURCE LINES 69-71

The design matrix for the unpaired test doesn't need any more columns.
For the paired test, we include an intercept for each subject.

.. GENERATED FROM PYTHON SOURCE LINES 71-74

.. code-block:: Python

    subject_effect = np.vstack((np.eye(n_subjects), np.eye(n_subjects)))
    subjects = [f"S{i:02d}" for i in range(1, n_subjects + 1)]
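As a quick sanity check (an addition to this example, not part of the
original script, reusing the variables defined above), we can verify that the
regressors line up with the stacked input maps: one row per map, and one
intercept per subject marking exactly two maps.

.. code-block:: Python

    # Added check: the condition regressor and the per-subject intercepts
    # must have one row per input map (2 * n_subjects), and each subject
    # column must flag exactly two maps (one vertical, one horizontal).
    assert condition_effect.shape == (2 * n_subjects,)
    assert subject_effect.shape == (2 * n_subjects, n_subjects)
    assert (subject_effect.sum(axis=0) == 2).all()
    assert len(second_level_input) == 2 * n_subjects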
.. GENERATED FROM PYTHON SOURCE LINES 75-76

We then assemble those into design matrices

.. GENERATED FROM PYTHON SOURCE LINES 76-85

.. code-block:: Python

    unpaired_design_matrix = pd.DataFrame(
        condition_effect[:, np.newaxis], columns=["vertical vs horizontal"]
    )

    paired_design_matrix = pd.DataFrame(
        np.hstack((condition_effect[:, np.newaxis], subject_effect)),
        columns=["vertical vs horizontal"] + subjects,
    )

.. GENERATED FROM PYTHON SOURCE LINES 86-87

and plot the designs.

.. GENERATED FROM PYTHON SOURCE LINES 87-101

.. code-block:: Python

    import matplotlib.pyplot as plt

    _, (ax_unpaired, ax_paired) = plt.subplots(
        1, 2, gridspec_kw={"width_ratios": [1, 17]}
    )
    plotting.plot_design_matrix(
        unpaired_design_matrix, rescale=False, ax=ax_unpaired
    )
    plotting.plot_design_matrix(paired_design_matrix, rescale=False, ax=ax_paired)
    ax_unpaired.set_title("unpaired design", fontsize=12)
    ax_paired.set_title("paired design", fontsize=12)
    plt.tight_layout()
    plotting.show()

.. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_001.png
   :alt: unpaired design, paired design
   :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 102-103

We specify the analysis models and fit them.

.. GENERATED FROM PYTHON SOURCE LINES 103-113

.. code-block:: Python

    from nilearn.glm.second_level import SecondLevelModel

    second_level_model_unpaired = SecondLevelModel(n_jobs=2).fit(
        second_level_input, design_matrix=unpaired_design_matrix
    )

    second_level_model_paired = SecondLevelModel(n_jobs=2).fit(
        second_level_input, design_matrix=paired_design_matrix
    )

.. GENERATED FROM PYTHON SOURCE LINES 114-118

Estimating the :term:`contrast` is simple. To do so, we provide the column
name of the design matrix. The argument 'output_type' is set to return all
available outputs so that we can compare differences in the effect size,
variance, and z-score.

.. GENERATED FROM PYTHON SOURCE LINES 118-126

.. code-block:: Python

    stat_maps_unpaired = second_level_model_unpaired.compute_contrast(
        "vertical vs horizontal", output_type="all"
    )

    stat_maps_paired = second_level_model_paired.compute_contrast(
        "vertical vs horizontal", output_type="all"
    )

.. GENERATED FROM PYTHON SOURCE LINES 127-131

Plot the results
----------------

The two :term:`'effect_size'` images are essentially identical.

.. GENERATED FROM PYTHON SOURCE LINES 131-136

.. code-block:: Python

    (
        stat_maps_unpaired["effect_size"].get_fdata()
        - stat_maps_paired["effect_size"].get_fdata()
    ).max()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    8.881784197001252e-16

.. GENERATED FROM PYTHON SOURCE LINES 137-138

But the variance in the unpaired image is larger.

.. GENERATED FROM PYTHON SOURCE LINES 138-156

.. code-block:: Python

    plotting.plot_glass_brain(
        stat_maps_unpaired["effect_variance"],
        colorbar=True,
        vmin=0,
        vmax=6,
        title="vertical vs horizontal effect variance, unpaired",
    )

    plotting.plot_glass_brain(
        stat_maps_paired["effect_variance"],
        colorbar=True,
        vmin=0,
        vmax=6,
        title="vertical vs horizontal effect variance, paired",
    )

    plotting.show()

.. rst-class:: sphx-glr-horizontal

    *

      .. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_002.png
         :alt: plot second level two sample test
         :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_002.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_003.png
         :alt: plot second level two sample test
         :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_003.png
         :class: sphx-glr-multi-img
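To put a number on that difference (this snippet is an addition to the
original example and simply reuses the ``stat_maps_unpaired`` and
``stat_maps_paired`` dictionaries computed above), we can compare the two
variance maps voxel-wise.

.. code-block:: Python

    # Added check: summarize the variance reduction numerically rather than
    # only visually. Both maps share the same grid, so a voxel-wise ratio of
    # the non-zero variance estimates is meaningful.
    var_unpaired = stat_maps_unpaired["effect_variance"].get_fdata()
    var_paired = stat_maps_paired["effect_variance"].get_fdata()
    nonzero = var_paired > 0
    print(
        "median variance ratio (unpaired / paired):",
        np.median(var_unpaired[nonzero] / var_paired[nonzero]),
    )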
.. GENERATED FROM PYTHON SOURCE LINES 157-159

Together, this makes the z-scores from the paired test larger.
We threshold the second level :term:`contrast` and plot it.

.. GENERATED FROM PYTHON SOURCE LINES 159-182

.. code-block:: Python

    threshold = 3.1  # corresponds to p < .001, uncorrected
    display = plotting.plot_glass_brain(
        stat_maps_unpaired["z_score"],
        threshold=threshold,
        colorbar=True,
        plot_abs=False,
        title="vertical vs horizontal (unc p<0.001), unpaired",
        vmin=0,
        vmax=6,
    )

    display = plotting.plot_glass_brain(
        stat_maps_paired["z_score"],
        threshold=threshold,
        colorbar=True,
        plot_abs=False,
        title="vertical vs horizontal (unc p<0.001), paired",
        vmin=0,
        vmax=6,
    )

    plotting.show()

.. rst-class:: sphx-glr-horizontal

    *

      .. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_004.png
         :alt: plot second level two sample test
         :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_004.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_005.png
         :alt: plot second level two sample test
         :srcset: /auto_examples/05_glm_second_level/images/sphx_glr_plot_second_level_two_sample_test_005.png
         :class: sphx-glr-multi-img

.. GENERATED FROM PYTHON SOURCE LINES 183-185

Unsurprisingly, we see activity in the primary visual cortex, both positive
and negative.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 16.958 seconds)

**Estimated memory usage:** 9 MB
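As a possible extension (not part of the original example), the uncorrected
z=3.1 cutoff used above could be replaced by a corrected threshold, for
instance a false discovery rate correction computed with
:func:`nilearn.glm.threshold_stats_img`; the sketch below applies it to the
paired z-map computed earlier.

.. code-block:: Python

    from nilearn.glm import threshold_stats_img

    # Added sketch: FDR-corrected threshold instead of the uncorrected z=3.1.
    thresholded_map, fdr_threshold = threshold_stats_img(
        stat_maps_paired["z_score"], alpha=0.05, height_control="fdr"
    )
    plotting.plot_glass_brain(
        thresholded_map,
        colorbar=True,
        plot_abs=False,
        title=f"vertical vs horizontal, paired (FDR<.05), z>{fdr_threshold:.2f}",
    )
    plotting.show()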