BIDS dataset first and second level analysis

Full step-by-step example of fitting a GLM to perform a first and second level analysis in a BIDS dataset and visualizing the results. Details about the BIDS standard can be consulted at https://bids.neuroimaging.io/.

More specifically:

  1. Download an fMRI BIDS dataset with two language conditions to contrast.

  2. Extract first level model objects automatically from the BIDS dataset.

  3. Fit a second level model on the fitted first level models. Notice that in this case the preprocessed BOLD images were already normalized to the same MNI space.

from nilearn import plotting

Fetch example BIDS dataset

We download a simplified BIDS dataset made available for illustrative purposes. It contains only the information needed to run a statistical analysis with Nilearn. The raw-data subject folders contain only bold.json and events.tsv files, while the derivatives folder includes the preprocessed preproc.nii files and the confounds.tsv files.

from nilearn.datasets import fetch_language_localizer_demo_dataset

data = fetch_language_localizer_demo_dataset()

[fetch_language_localizer_demo_dataset] Dataset created in
/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset
[fetch_language_localizer_demo_dataset] Downloading data from
https://osf.io/3dj2a/download ...
[fetch_language_localizer_demo_dataset]  ...done. (134 seconds, 2 min)

[fetch_language_localizer_demo_dataset] Extracting data from /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/fMRI-language-localizer-demo-dataset.zip...
[fetch_language_localizer_demo_dataset] .. done.

Here is the location of the dataset on disk.

/home/runner/nilearn_data/fMRI-language-localizer-demo-dataset

Automatically obtain FirstLevelModel objects and fit arguments

From the dataset directory we automatically obtain FirstLevelModel objects with their subject_id filled in from the BIDS dataset. For each model we also obtain a dictionary with run_imgs, events and confound regressors, since a confounds.tsv file is available in this BIDS dataset. To get the first level models we only have to specify the dataset directory and the task_label as it appears in the file names.

from nilearn.glm.first_level import first_level_from_bids

task_label = "languagelocalizer"
(
    models,
    models_run_imgs,
    models_events,
    models_confounds,
) = first_level_from_bids(
    data.data_dir,
    task_label,
    img_filters=[("desc", "preproc")],
    n_jobs=2,
    space_label="",
    smoothing_fwhm=8,
)
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_bids_analysis.py:61: RuntimeWarning:

'StartTime' not found in file /home/runner/nilearn_data/fMRI-language-localizer-demo-dataset/derivatives/sub-01/func/sub-01_task-languagelocalizer_desc-preproc_bold.json.

/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_bids_analysis.py:61: UserWarning:

'slice_time_ref' not provided and cannot be inferred from metadata.
It will be assumed that the slice timing reference is 0.0 percent of the repetition time.
If it is not the case it will need to be set manually in the generated list of models.

Quick sanity check on fit arguments

Additional checks or information extraction from pre-processed data can be made here.

We expect just one run_img per subject.

from pathlib import Path

print([Path(run).name for run in models_run_imgs[0]])
['sub-01_task-languagelocalizer_desc-preproc_bold.nii.gz']

The only confounds stored are the motion-correction regressors, as we can verify from the column headers of the confounds table corresponding to the only run_img present.

print(models_confounds[0][0].columns)
Index(['RotX', 'RotY', 'RotZ', 'X', 'Y', 'Z'], dtype='object')

During this acquisition the subject read blocks of sentences and consonant strings, so these are the only two conditions in events. We verify that there are 12 blocks of each condition.

print(models_events[0][0]["trial_type"].value_counts())
trial_type
language    12
string      12
Name: count, dtype: int64

First level model estimation

Now we simply fit each first level model and, for each subject, plot the contrast that reveals the language network (language - string). Notice that we can define a contrast using the names of the conditions specified in the events dataframe. Sums, differences and scalar multiples are allowed.
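Under the hood, a contrast expression such as "language - string" is translated into a weight vector over the design-matrix columns. A minimal NumPy sketch of that mapping (the column names here are illustrative, not taken from the actual design matrix):

```python
import numpy as np

# Illustrative design-matrix column names (not the actual ones).
columns = ["language", "string", "drift_1", "constant"]

# "language - string" becomes +1 on "language", -1 on "string", 0 elsewhere.
weights = {"language": 1.0, "string": -1.0}
contrast = np.array([weights.get(c, 0.0) for c in columns])
print(contrast)  # [ 1. -1.  0.  0.]
```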

Set the threshold as the z-value corresponding to an uncorrected p-value of 0.001.

from scipy.stats import norm

p001_unc = norm.isf(0.001)
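As a sanity check, isf is the inverse of the survival function, so applying sf to the threshold recovers the p-value; the threshold itself is about 3.09, the familiar cutoff for a one-sided uncorrected p < 0.001:

```python
from scipy.stats import norm

p001_unc = norm.isf(0.001)          # ~3.09
print(round(norm.sf(p001_unc), 6))  # 0.001
```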

Prepare figure for concurrent plot of individual maps.

from math import ceil

import matplotlib.pyplot as plt
import numpy as np

ncols = 2
nrows = ceil(len(models) / ncols)

fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(10, 12))
axes = np.atleast_2d(axes)
model_and_args = zip(
    models, models_run_imgs, models_events, models_confounds, strict=False
)
for midx, (model, imgs, events, confounds) in enumerate(model_and_args):
    # fit the GLM
    model.fit(imgs, events, confounds)
    # compute the contrast of interest
    zmap = model.compute_contrast("language-string")
    plotting.plot_glass_brain(
        zmap,
        threshold=p001_unc,
        title=f"sub-{model.subject_label}",
        axes=axes[midx // ncols, midx % ncols],
        plot_abs=False,
        colorbar=True,
        display_mode="x",
        vmin=-12,
        vmax=12,
    )
fig.suptitle("subjects z_map language network (unc p<0.001)")
plotting.show()
[Figure: subjects z_map language network (unc p<0.001)]

Second level model estimation

We only have to provide the list of fitted FirstLevelModel objects to the SecondLevelModel object for estimation. We can do this because all subjects share a similar design matrix (the same variables, reflected in the column names).

from nilearn.glm.second_level import SecondLevelModel

second_level_input = models

Note that we apply a smoothing of 8mm.

second_level_model = SecondLevelModel(smoothing_fwhm=8.0)
second_level_model = second_level_model.fit(second_level_input)

Computing contrasts at the second level is as simple as at the first level. Since we are not providing confounds, we perform a one-sample test at the second level, with the input images determined by the specified first level contrast.

zmap = second_level_model.compute_contrast(
    first_level_contrast="language-string"
)
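Conceptually, the implicit group-level design matrix here is a single column of ones (an intercept), so the test asks whether the mean first-level effect differs from zero. A minimal NumPy sketch with made-up per-subject contrast values:

```python
import numpy as np

# Hypothetical first-level contrast values, one per subject.
subject_effects = np.array([2.1, 1.4, 3.0, 2.5, 1.8])

# One-sample design: a single intercept column of ones.
design = np.ones((len(subject_effects), 1))

# The OLS estimate of the intercept is the group mean effect.
beta, *_ = np.linalg.lstsq(design, subject_effects, rcond=None)
print(round(float(beta[0]), 2))  # 2.16, i.e. subject_effects.mean()
```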

The group level contrast reveals a left lateralized fronto-temporal language network.

plotting.plot_glass_brain(
    zmap,
    threshold=p001_unc,
    title="Group language network (unc p<0.001)",
    plot_abs=False,
    display_mode="x",
    figure=plt.figure(figsize=(5, 4)),
)
plotting.show()
[Figure: group-level glass-brain map of the language network]

Generate and save the GLM report at the group level.

report_slm = second_level_model.generate_report(
    contrasts="intercept",
    first_level_contrast="language-string",
    threshold=p001_unc,
    display_mode="x",
)
/home/runner/work/nilearn/nilearn/examples/07_advanced/plot_bids_analysis.py:183: UserWarning:


'threshold=3.090232306167813' is not used with 'height_control='fpr''.
'threshold' is only used when 'height_control=None'.
'threshold' was set to 'None'.

View the GLM report at the group level.

Note

The generated report can be:

  • displayed in a Notebook,

  • opened in a browser using the .open_in_browser() method,

  • or saved to a file using the .save_as_html(output_filepath) method.

Statistical Report - Second Level Model

Implements the General Linear Model for multiple-subject fMRI data.

WARNING

  • 'threshold=3.090232306167813' is not used with 'height_control='fpr''. 'threshold' is only used when 'height_control=None'. 'threshold' was set to 'None'.

Description

Data were analyzed using Nilearn (version= 0.13.2.dev93+gf9740f67e; RRID:SCR_001362).

At the group level, a mass univariate analysis was performed with a linear regression at each voxel of the brain.

Input images were smoothed with gaussian kernel (full-width at half maximum=8.0 mm).

The following contrasts were computed:

  • intercept

Model details

Parameter            Value
smoothing_fwhm (mm)  8.0

Mask

Mask image

The mask includes 23640 voxels (23.1 %) of the image.

Statistical Maps

intercept

Stat map plot for the contrast: intercept
Cluster Table
Height control fpr
α 0.001
Threshold (computed) 3.09
Cluster size threshold (voxels) 0
Minimum distance (mm) 8.0
Cluster ID X Y Z Peak Stat Cluster Size (mm3)
1 -57.5 -48.5 13.5 4.43 6652
1a -71.0 -53.0 18.0 3.64
2 -62.0 -8.0 49.5 4.32 1184
2a -53.0 -8.0 45.0 4.17
2b -48.5 -3.5 54.0 3.16
3 -48.5 -30.5 -22.5 3.98 637
4 46.0 5.5 -27.0 3.91 3371
4a 50.5 23.5 -27.0 3.80
4b 55.0 14.5 -18.0 3.79
5 -71.0 -17.0 -4.5 3.89 9841
5a -53.0 -8.0 -9.0 3.82
5b -66.5 1.0 -4.5 3.82
5c -48.5 19.0 -18.0 3.69
6 -39.5 -3.5 -40.5 3.60 455
7 50.5 -12.5 -9.0 3.51 364
8 -48.5 14.5 18.0 3.43 364
9 -75.5 -35.0 4.5 3.32 91
10 32.5 -17.0 -27.0 3.26 91
11 -44.0 -17.0 -31.5 3.13 182

About

  • Date preprocessed:


Total running time of the script: (2 minutes 56.174 seconds)

Estimated memory usage: 386 MB
