9.8.3. BIDS dataset first and second level analysis
Full step-by-step example of fitting a GLM to perform a first and second level analysis in a BIDS dataset and visualizing the results. Details about the BIDS standard can be consulted at http://bids.neuroimaging.io/.
Extract first level model objects automatically from the BIDS dataset.
To run this example, you must launch IPython via ipython --matplotlib in a terminal, or use the Jupyter notebook.
We download a simplified BIDS dataset made available for illustrative purposes. It contains only the necessary information to run a statistical analysis using Nilearn. The raw data subject folders only contain bold.json and events.tsv files, while the derivatives folder includes the preprocessed files preproc.nii and the confounds.tsv files.
Here is the location of the dataset on disk.
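A minimal sketch of these two steps, assuming the fetch_language_localizer_demo_dataset fetcher from nilearn.datasets, which in the version used here returns the dataset path as its first value:

from nilearn.datasets import fetch_language_localizer_demo_dataset

# Download the demo dataset (or reuse a cached copy) and keep its path.
data_dir, _ = fetch_language_localizer_demo_dataset()
print(data_dir)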
From the dataset directory we automatically obtain the FirstLevelModel objects, with their subject_id filled in from the BIDS dataset. Moreover, for each model we obtain the corresponding run_imgs, events and confound regressors, since in this case a confounds.tsv file is available in the BIDS dataset. To get the first level models we only have to specify the dataset directory and the task_label as it appears in the file names.
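In code, this step looks roughly as follows; a sketch assuming the first_level_from_bids helper from nilearn.glm.first_level, with a task_label and desc-preproc image filter matching the file names in this dataset:

from nilearn.glm.first_level import first_level_from_bids

task_label = 'languagelocalizer'
# Build one FirstLevelModel per subject, together with the matching
# run images, events and confounds extracted from the BIDS layout.
models, models_run_imgs, models_events, models_confounds = first_level_from_bids(
    data_dir, task_label, img_filters=[('desc', 'preproc')])

On this dataset, constructing the models emits a warning about the missing slice timing reference: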
/home/nicolas/GitRepos/nilearn-fork/nilearn/glm/first_level/first_level.py:942: UserWarning: SliceTimingRef not found in file /home/nicolas/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json. It will be assumed that the slice timing reference is 0.0 percent of the repetition time. If it is not the case it will need to be set manually in the generated list of models
Additional checks or information extraction from pre-processed data can be made here.
We just expect one run_img per subject.
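A quick check along those lines, assuming models_run_imgs holds one list of run images per subject:

import os

# Each entry of models_run_imgs is the list of runs for one subject;
# here we expect a single preprocessed run per subject.
print([os.path.basename(run) for run in models_run_imgs[0]])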
The only confounds stored are regressors obtained from motion correction, as we can verify from the column headers of the confounds table corresponding to the only run_img present.
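For the first subject this can be checked with, for instance:

# Column names of the confounds table for the first (and only) run
# of the first subject.
print(models_confounds[0][0].columns)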
Index(['RotX', 'RotY', 'RotZ', 'X', 'Y', 'Z'], dtype='object')
During this acquisition the subject read blocks of sentences and consonant strings, so these are the only two conditions in the events files. We verify that there are 12 blocks of each condition.
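For the first subject, for instance:

# Count the number of events of each condition in the first run.
print(models_events[0][0]['trial_type'].value_counts())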
language    12
string      12
Name: trial_type, dtype: int64
Now we simply fit each first level model and plot for each subject the contrast that reveals the language network (language - string). Notice that we can define a contrast using the names of the conditions specified in the events dataframe. Sum, subtraction and scalar multiplication are allowed.
Set the threshold to the z-value corresponding to an uncorrected p-value of 0.001.
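One way to compute it, using scipy's inverse survival function:

from scipy.stats import norm

# z-value corresponding to a one-sided uncorrected p-value of 0.001
p001_unc = norm.isf(0.001)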
Prepare figure for concurrent plot of individual maps.
from nilearn import plotting
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(8, 4.5))
model_and_args = zip(models, models_run_imgs, models_events, models_confounds)
for midx, (model, imgs, events, confounds) in enumerate(model_and_args):
    # fit the GLM
    model.fit(imgs, events, confounds)
    # compute the contrast of interest
    zmap = model.compute_contrast('language-string')
    plotting.plot_glass_brain(zmap, colorbar=False, threshold=p001_unc,
                              title=('sub-' + model.subject_label),
                              axes=axes[int(midx / 5), int(midx % 5)],
                              plot_abs=False, display_mode='x')
fig.suptitle('subjects z_map language network (unc p<0.001)')
plotting.show()
We just have to provide the list of fitted FirstLevelModel objects to the SecondLevelModel object for estimation. We can do this because all subjects share a similar design matrix (same variables reflected in column names).
Note that we apply a smoothing of 8 mm.
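A sketch of the estimation step, assuming SecondLevelModel from nilearn.glm.second_level accepts the list of fitted first level models as input:

from nilearn.glm.second_level import SecondLevelModel

# The fitted FirstLevelModel objects serve directly as second level input.
second_level_input = models
second_level_model = SecondLevelModel(smoothing_fwhm=8.0)
second_level_model = second_level_model.fit(second_level_input)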
Computing contrasts at the second level is as simple as at the first level. Since we are not providing confounds, we are performing a one-sample test at the second level, with the input images determined by the specified first level contrast.
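For example:

# One-sample test on the subject-level 'language - string' z-maps.
zmap = second_level_model.compute_contrast(
    first_level_contrast='language-string')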
The group level contrast reveals a left lateralized fronto-temporal language network.
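The group map can be displayed with the same glass brain view used for the individual subjects:

plotting.plot_glass_brain(zmap, colorbar=True, threshold=p001_unc,
                          title='Group language network (unc p<0.001)',
                          plot_abs=False, display_mode='x')
plotting.show()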
Total running time of the script: ( 0 minutes 49.682 seconds)