9.4.1. Nilearn#

Custom objects adapted from nilearn.

class junifer.external.nilearn.JuniferNiftiSpheresMasker(seeds, radius=None, mask_img=None, agg_func=<function mean>, allow_overlap=False, dtype=None, **kwargs)#

Class for custom NiftiSpheresMasker.

Differs from nilearn.maskers.NiftiSpheresMasker in the following ways:

  • it allows passing any callable as the agg_func parameter.

  • empty spheres do not raise an error; instead, agg_func is applied to an empty array and the result is passed on (illustrated below).
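
As a quick illustration of the second point (plain NumPy, not junifer-specific code): with the default agg_func, an empty sphere yields the mean of an empty array, i.e. NaN, rather than an exception:

>>> import numpy as np
>>> float(np.mean(np.array([])))  # what an empty sphere produces by default
nan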

Parameters:
seeds : list of float

Seed definitions. List of coordinates of the seeds in the same space as the images (typically MNI or TAL).

radius : float, optional

Indicates, in millimeters, the radius for the sphere around the seed. If None, signal is extracted on a single voxel (default None).

mask_img : Niimg-like object, optional

Mask to apply to regions before extracting signals (default None).

agg_func : callable, optional

The function used to aggregate signals (default numpy.mean).

allow_overlap : bool, optional

If False, an error is raised if the spheres overlap (default False).

dtype : numpy.dtype or “auto”, optional

The dtype for the extraction. If “auto”, the data will be converted to int32 if dtype is discrete and float32 if it is continuous (default None).

**kwargs

Keyword arguments passed to nilearn.maskers.NiftiSpheresMasker.
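
A minimal usage sketch on synthetic data; the seed coordinates, radius, and use of numpy.median are illustrative assumptions, not prescribed values:

>>> import numpy as np
>>> import nibabel as nib
>>> from junifer.external.nilearn import JuniferNiftiSpheresMasker
>>> # Tiny synthetic 4D image (10 x 10 x 10 voxels, 5 scans) with an
>>> # identity affine, so seed coordinates are in voxel units.
>>> rng = np.random.default_rng(0)
>>> img = nib.Nifti1Image(
...     rng.standard_normal((10, 10, 10, 5)).astype("float32"), np.eye(4)
... )
>>> masker = JuniferNiftiSpheresMasker(
...     seeds=[(5, 5, 5), (2, 2, 2)],  # one coordinate triplet per seed
...     radius=2,                      # 2 mm sphere around each seed
...     agg_func=np.median,            # any callable, not just the mean
... )
>>> masker.fit_transform(img).shape    # (number of scans, number of spheres)
(5, 2)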

inverse_transform(region_signals)#

Compute voxel signals from spheres signals.

Parameters:
region_signals : 1D/2D numpy.ndarray

Signal for each region. If a 1D array is provided, then the shape should be (number of elements,), and a 3D img will be returned. If a 2D array is provided, then the shape should be (number of scans, number of elements), and a 4D img will be returned.

Returns:
voxel_signals : nibabel.nifti1.Nifti1Image

Signal for each sphere, written back into voxel space. shape: (shape of mask_img, number of scans).
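
A hedged round-trip sketch, continuing the synthetic example above; a mask_img is assumed here because the inverse mapping needs a voxel grid to write into:

>>> mask = nib.Nifti1Image(np.ones((10, 10, 10), dtype="uint8"), np.eye(4))
>>> masker = JuniferNiftiSpheresMasker(
...     seeds=[(5, 5, 5), (2, 2, 2)], radius=2, mask_img=mask
... )
>>> signals = masker.fit_transform(img)        # 2D: (5 scans, 2 spheres)
>>> masker.inverse_transform(signals).shape    # 4D: voxel grid plus scans
(10, 10, 10, 5)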

set_inverse_transform_request(*, region_signals='$UNCHANGED$')#

Request metadata passed to the inverse_transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to inverse_transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to inverse_transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
region_signals : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for region_signals parameter in inverse_transform.

Returns:
self : object

The updated object.

set_transform_request(*, confounds='$UNCHANGED$', imgs='$UNCHANGED$', sample_mask='$UNCHANGED$')#

Request metadata passed to the transform method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
confounds : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for confounds parameter in transform.

imgs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for imgs parameter in transform.

sample_mask : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_mask parameter in transform.

Returns:
self : object

The updated object.
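
A hedged sketch of how these request methods are meant to be used, assuming scikit-learn >= 1.3 with metadata routing enabled; set_inverse_transform_request above works the same way for its region_signals parameter:

>>> import sklearn
>>> sklearn.set_config(enable_metadata_routing=True)
>>> masker = JuniferNiftiSpheresMasker(seeds=[(5, 5, 5)], radius=2)
>>> # Opt in: a wrapping meta-estimator (e.g. a Pipeline) should route
>>> # the `confounds` metadata through to this masker's transform.
>>> masker = masker.set_transform_request(confounds=True)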

transform_single_imgs(imgs, confounds=None, sample_mask=None)#

Extract signals from a single 4D niimg.

Parameters:
imgs : 3D/4D Niimg-like object

Images to process. If a 3D niimg is provided, a singleton dimension will be added to the output to represent the single scan in the niimg.

confounds : pandas.DataFrame, optional

This parameter is passed to nilearn.signal.clean(). Please see the related documentation for details. shape: (number of scans, number of confounds)

sample_mask : numpy.ndarray, list or tuple, optional

Masks the niimgs along time/fourth dimension to perform scrubbing (remove volumes with high motion) and/or non-steady-state volumes. This parameter is passed to nilearn.signal.clean(). shape: (number of scans - number of volumes removed, )

Returns:
region_signals : 2D numpy.ndarray

Signal for each sphere. shape: (number of scans, number of spheres)

Warns:
DeprecationWarning

If a 3D niimg input is provided, the current behavior (adding a singleton dimension to produce a 2D array) is deprecated. Starting in version 0.12, a 1D array will be returned for 3D inputs.
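
A hedged sketch continuing the synthetic example above; the confound names and the scrubbed volume are illustrative assumptions:

>>> import pandas as pd
>>> masker = JuniferNiftiSpheresMasker(seeds=[(5, 5, 5), (2, 2, 2)], radius=2)
>>> masker = masker.fit()
>>> confounds = pd.DataFrame(
...     rng.standard_normal((5, 2)), columns=["motion_x", "motion_y"]
... )  # hypothetical confounds: (number of scans, number of confounds)
>>> sample_mask = np.array([0, 1, 2, 4])  # keep 4 of 5 scans (drop scan 3)
>>> masker.transform_single_imgs(
...     img, confounds=confounds, sample_mask=sample_mask
... ).shape  # (number of scans kept, number of spheres)
(4, 2)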