9.1.4. Markers#

Provide imports for the markers sub-package.

class junifer.markers.ALFFParcels(parcellation, fractional, highpass=0.01, lowpass=0.1, tr=None, use_afni=None, masks=None, method='mean', method_params=None, name=None)#

Class for computing fALFF/ALFF on parcels.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

fractional : bool

Whether to compute fractional ALFF.

highpass : positive float, optional

The highpass cutoff frequency for the bandpass filter. If 0, it will not apply a highpass filter (default 0.01).

lowpass : positive float, optional

The lowpass cutoff frequency for the bandpass filter (default 0.1).

tr : positive float, optional

The Repetition Time of the BOLD data. If None, will extract the TR from the NIFTI header (default None).

use_afni : bool, optional

Whether to use AFNI for computing. If None, will use AFNI only if available (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

method : str, optional

The method to use for aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name().

name : str, optional

The name of the marker. If None, will use the class name (default None).

Notes

The tr parameter is crucial for the correctness of fALFF/ALFF computation. If a dataset is correctly preprocessed, the TR should be extracted from the NIFTI without any issue. However, it has been reported that some preprocessed data might not have the correct TR in the NIFTI header.

ALFF/fALFF are computed using a bandpass Butterworth filter. See scipy.signal.butter() and scipy.signal.filtfilt() for more details.
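
For orientation, a minimal construction sketch follows. It assumes junifer is installed and that the parcellation name used here ("Schaefer100x17") is one of the options reported by list_parcellations(); adjust the name, TR, and filter band to your data.

    from junifer.markers import ALFFParcels

    # Amplitude of low-frequency fluctuations, aggregated per parcel.
    # "Schaefer100x17" is an example parcellation name; check
    # list_parcellations() for the options available in your installation.
    alff = ALFFParcels(
        parcellation="Schaefer100x17",
        fractional=False,        # set True for fALFF instead of ALFF
        highpass=0.01,
        lowpass=0.1,
        tr=2.0,                  # override if the NIFTI header TR is unreliable
        method="mean",
        name="ALFF_Schaefer100x17",
    )
    # The marker is typically run via a MarkerCollection or the junifer
    # queue/run machinery rather than called directly.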

class junifer.markers.ALFFSpheres(coords, fractional, radius=None, allow_overlap=False, highpass=0.01, lowpass=0.1, tr=None, use_afni=None, masks=None, method='mean', method_params=None, name=None)#

Class for computing fALFF/ALFF on spheres.

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in mm. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

fractional : bool

Whether to compute fractional ALFF.

highpass : positive float, optional

The highpass cutoff frequency for the bandpass filter. If 0, it will not apply a highpass filter (default 0.01).

lowpass : positive float, optional

The lowpass cutoff frequency for the bandpass filter (default 0.1).

tr : positive float, optional

The Repetition Time of the BOLD data. If None, will extract the TR from the NIFTI header (default None).

use_afni : bool, optional

Whether to use AFNI for computing. If None, will use AFNI only if available (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

method : str, optional

The method to use for aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name().

name : str, optional

The name of the marker. If None, will use the class name (default None).

Notes

The tr parameter is crucial for the correctness of fALFF/ALFF computation. If a dataset is correctly preprocessed, the TR should be extracted from the NIFTI without any issue. However, it has been reported that some preprocessed data might not have the correct TR in the NIFTI header.

ALFF/fALFF are computed using a bandpass Butterworth filter. See scipy.signal.butter() and scipy.signal.filtfilt() for more details.
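
As with the parcel variant, a brief construction sketch is given below; the coordinates name ("DMNBuckner") is an example and should be one of the options returned by list_coordinates().

    from junifer.markers import ALFFSpheres

    # Fractional ALFF extracted from 8 mm spheres around a named set of
    # coordinates. "DMNBuckner" is an example coordinates list; check
    # list_coordinates() for valid names.
    falff = ALFFSpheres(
        coords="DMNBuckner",
        fractional=True,
        radius=8.0,
        allow_overlap=False,
        highpass=0.01,
        lowpass=0.1,
        name="fALFF_DMN",
    )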

class junifer.markers.BaseMarker(on=None, name=None)#

Abstract base class for all markers.

Parameters:
on : str or list of str

The kind of data to apply the marker to. By default, will work on all available data.

name : str, optional

The name of the marker. By default, it will use the class name as the name of the marker (default None).

abstract compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter.

abstract get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

abstract get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.

store(type_, out, storage)#

Store.

Parameters:
type_ : str

The data type to store.

out : dict

The computed result as a dictionary to store.

storage : storage-like

The storage class, for example, SQLiteFeatureStorage.

validate_input(input)#

Validate input.

Parameters:
input : list of str

The input to the pipeline step. The list must contain the available Junifer Data dictionary keys.

Returns:
list of str

The actual elements of the input that will be processed by this pipeline step.

Raises:
ValueError

If the input does not have the required data.
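
To illustrate the contract these abstract methods define, here is a minimal, hypothetical subclass; the data type name ("BOLD") and the storage type string ("vector") are assumptions for illustration, not a prescription of junifer's internals.

    import numpy as np

    from junifer.markers import BaseMarker


    class MeanSignalMarker(BaseMarker):
        """Toy marker that computes the mean BOLD signal (illustrative only)."""

        def get_valid_inputs(self):
            # Data types this marker accepts; "BOLD" is an assumed example.
            return ["BOLD"]

        def get_output_type(self, input_type):
            # Storage type for the output; "vector" is an assumed example.
            return "vector"

        def compute(self, input, extra_input=None):
            # `input["data"]` is assumed to hold a 4D niimg-like object here;
            # a real marker would use nilearn maskers or junifer aggregation.
            data = input["data"].get_fdata()
            return {
                "data": np.atleast_2d(data.mean()),
                "col_names": ["mean_signal"],
            }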

class junifer.markers.CrossParcellationFC(parcellation_one, parcellation_two, aggregation_method='mean', correlation_method='pearson', masks=None, name=None)#

Class for calculating parcel-wise correlations with 2 parcellations.

Parameters:
parcellation_one : str

The name of the first parcellation.

parcellation_two : str

The name of the second parcellation.

aggregation_method : str, optional

The aggregation method (default “mean”).

correlation_method : str, optional

Any method that can be passed to pandas.DataFrame.corr (default “pearson”).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, will use the class name (default None).

compute(input, extra_input=None)#

Compute.

Take a timeseries, parcellate it with two different parcellation schemes, and compute parcel-wise correlations between the two parcellated time series. The shape of the output matrix corresponds to the number of ROIs in (parcellation_two, parcellation_one).

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the correlation values between the two parcellations as a numpy.ndarray

  • col_names : the ROIs for first parcellation as a list

  • row_names : the ROIs for second parcellation as a list

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.
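
A construction sketch, assuming both parcellation names are valid entries from list_parcellations(); the names used here are examples only.

    from junifer.markers import CrossParcellationFC

    # Correlate parcel-averaged time series between two different atlases.
    # Both parcellation names are examples; verify them via list_parcellations().
    cross_fc = CrossParcellationFC(
        parcellation_one="Schaefer100x17",
        parcellation_two="Schaefer200x17",
        aggregation_method="mean",
        correlation_method="spearman",   # anything pandas.DataFrame.corr accepts
    )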

class junifer.markers.EdgeCentricFCParcels(parcellation, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for edge-centric FC using parcellations.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method to use to aggregate the BOLD time series. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

cor_method : str, optional

The method to use for correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, will use the class name (default None).

References

[1]

Jo et al. (2021). Subject identification using edge-centric functional connectivity. https://doi.org/10.1016/j.neuroimage.2021.118204

aggregate(input, extra_input=None)#

Perform parcel aggregation and ETS computation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list
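
The following sketch shows one way to configure this marker; the parcellation name is an example, and the correlation kind must be one accepted by nilearn.connectome.ConnectivityMeasure.

    from junifer.markers import EdgeCentricFCParcels

    # Edge-centric FC: the connectivity measure is computed on the edge
    # time series (ETS) derived from the parcel-aggregated BOLD signal.
    edge_fc = EdgeCentricFCParcels(
        parcellation="Schaefer200x17",     # example name; see list_parcellations()
        agg_method="mean",
        cor_method="correlation",          # any nilearn ConnectivityMeasure kind
        name="EdgeCentricFC_Schaefer200",
    )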

class junifer.markers.EdgeCentricFCSpheres(coords, radius=None, allow_overlap=False, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for edge-centric FC using coordinates (spheres).

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in mm. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

cor_method : str, optional

The method to use for correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. By default, it will use KIND_EdgeCentricFCSpheres where KIND is the kind of data it was applied to (default None).

References

[1]

Jo et al. (2021). Subject identification using edge-centric functional connectivity. https://doi.org/10.1016/j.neuroimage.2021.118204

aggregate(input, extra_input=None)#

Perform sphere aggregation and ETS computation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list

class junifer.markers.FunctionalConnectivityParcels(parcellation, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for functional connectivity using parcellations.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method to use for aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

cor_method : str, optional

The method to use for correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, will use the class name (default None).

aggregate(input, extra_input=None)#

Perform parcel aggregation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list

class junifer.markers.FunctionalConnectivitySpheres(coords, radius=None, allow_overlap=False, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for functional connectivity using coordinates (spheres).

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in mm. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

cor_method : str, optional

The method to use for correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. By default, it will use KIND_FunctionalConnectivitySpheres where KIND is the kind of data it was applied to (default None).

aggregate(input, extra_input=None)#

Perform sphere aggregation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list
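
A brief construction sketch; the coordinates name is an example entry from list_coordinates(), and the 5 mm radius is arbitrary.

    from junifer.markers import FunctionalConnectivitySpheres

    # Sphere-based functional connectivity with Pearson correlation.
    fc_spheres = FunctionalConnectivitySpheres(
        coords="Power",            # example coordinates list; see list_coordinates()
        radius=5.0,
        allow_overlap=False,
        agg_method="mean",
        cor_method="correlation",
    )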

class junifer.markers.MarkerCollection(markers, datareader=None, preprocessing=None, storage=None)#

Class for marker collection.

Parameters:
markers : list of marker-like

The markers to compute.

datareader : DataReader-like object, optional

The DataReader to use (default None).

preprocessing : preprocessing-like, optional

The preprocessing steps to apply (default None).

storage : storage-like, optional

The storage to use (default None).

fit(input)#

Fit the pipeline.

Parameters:
input : dict

The input data to fit the pipeline on. Should be the output of indexing the DataGrabber with one element.

Returns:
dict or None

The output of the pipeline. Each key represents a marker name and the values are the computed marker values. If the pipeline has a storage configured, then the output will be None.

validate(datagrabber)#

Validate the pipeline.

Without doing any computation, check whether the marker collection can be fitted without problems, that is, whether the data required for each marker is present and streamed down the steps. Also, if a storage is configured, check that the storage can handle the markers' output.

Parameters:
datagrabber : DataGrabber-like

The DataGrabber to validate.
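
Putting the pieces together, a typical collection might look like the sketch below. It assumes SQLiteFeatureStorage is importable from junifer.storage with a uri keyword and that the parcellation name is valid; these are examples rather than guaranteed names.

    from junifer.markers import (
        FunctionalConnectivityParcels,
        MarkerCollection,
        ParcelAggregation,
    )
    from junifer.storage import SQLiteFeatureStorage  # assumed import path

    markers = [
        ParcelAggregation(
            parcellation="Schaefer100x17",  # example name; see list_parcellations()
            method="mean",
            on="VBM_GM",
            name="GMD_Schaefer100x17_mean",
        ),
        FunctionalConnectivityParcels(
            parcellation="Schaefer100x17",
            cor_method="correlation",
            name="FC_Schaefer100x17",
        ),
    ]

    storage = SQLiteFeatureStorage(uri="example_features.sqlite")  # assumed kwarg
    collection = MarkerCollection(markers=markers, storage=storage)

    # collection.validate(datagrabber)  # check compatibility without computing
    # out = collection.fit(element)     # element = one indexed DataGrabber item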

class junifer.markers.ParcelAggregation(parcellation, method, method_params=None, time_method=None, time_method_params=None, masks=None, on=None, name=None)#

Class for parcel aggregation.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

method : str

The method to use for aggregation. Check valid options in get_aggfunc_by_name().

method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name().

time_method : str, optional

The method to use to aggregate the time series over the time points, after applying method (only applicable to BOLD data). If None, it will not operate on the time dimension (default None).

time_method_params : dict, optional

The parameters to pass to the time aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

on : {“T1w”, “BOLD”, “VBM_GM”, “VBM_WM”, “fALFF”, “GCOR”, “LCOR”} or list of the options, optional

The data types to apply the marker to. If None, will work on all available data (default None).

name : str, optional

The name of the marker. If None, will use the class name (default None).

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.
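
Below is a sketch of aggregating voxelwise gray-matter values per parcel; the parcellation name is an example, and "std" is used simply as one of the aggregation functions that get_aggfunc_by_name() is expected to know about.

    from junifer.markers import ParcelAggregation

    # Standard deviation of VBM gray-matter values within each parcel.
    gmd_std = ParcelAggregation(
        parcellation="Schaefer100x17",  # example name; see list_parcellations()
        method="std",                   # assumed to be a valid aggregation name
        on="VBM_GM",
        name="GMD_std",
    )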

class junifer.markers.RSSETSMarker(parcellation, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for root sum of squares of edgewise timeseries.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method to use for aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, will use the class name (default None).

compute(input, extra_input=None)#

Compute.

Take a timeseries of brain areas and calculate a timeseries for each edge according to the method outlined in [1]. For more information, check https://github.com/brain-networks/edge-ts/blob/master/main.m

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list

References

[1]

Zamani Esfahlani et al. (2020). High-amplitude cofluctuations in cortical activity drive functional connectivity. https://doi.org/10.1073/pnas.2005531117

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.
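
A construction sketch for the RSS of edge time series; the parcellation name is again an example.

    from junifer.markers import RSSETSMarker

    # Root sum of squares over the edge time series (ETS) of parcel signals.
    rss_ets = RSSETSMarker(
        parcellation="Schaefer100x17",  # example name; see list_parcellations()
        agg_method="mean",
    )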

class junifer.markers.ReHoParcels(parcellation, use_afni=None, reho_params=None, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for regional homogeneity on parcels.

Parameters:
parcellation : str

The name of the parcellation. Check valid options by calling list_parcellations().

use_afni : bool, optional

Whether to use AFNI for computing. If None, will use AFNI only if available (default None).

reho_params : dict, optional

Extra parameters for computing the ReHo map as a dictionary (default None). If use_afni = True, then the valid keys are:

  • nneigh : {7, 19, 27}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

  • neigh_rad : positive float, optional

    The radius of a desired neighbourhood (default None).

  • neigh_x : positive float, optional

    The semi-radius for the x-axis of ellipsoidal volumes (default None).

  • neigh_y : positive float, optional

    The semi-radius for the y-axis of ellipsoidal volumes (default None).

  • neigh_z : positive float, optional

    The semi-radius for the z-axis of ellipsoidal volumes (default None).

  • box_rad : positive int, optional

    The number of voxels outward in a given cardinal direction for a cubic box centered on a given voxel (default None).

  • box_x : positive int, optional

    The number of voxels for the +/- x-axis of cuboidal volumes (default None).

  • box_y : positive int, optional

    The number of voxels for the +/- y-axis of cuboidal volumes (default None).

  • box_z : positive int, optional

    The number of voxels for the +/- z-axis of cuboidal volumes (default None).

If use_afni = False, then the valid keys are:

  • nneigh : {7, 19, 27, 125}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

    • 125 : for a 5x5x5 cuboidal volume

agg_method : str, optional

The method to use for aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, it will use the class name (default None).

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a 1D numpy.ndarray

  • col_names : the column labels for the parcels as a list
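
A sketch of configuring regional homogeneity per parcel; the parcellation name is an example, and the nneigh value is one of the documented options.

    from junifer.markers import ReHoParcels

    # Regional homogeneity (ReHo), averaged within each parcel.
    reho = ReHoParcels(
        parcellation="Schaefer100x17",   # example name; see list_parcellations()
        use_afni=False,                  # use the Python implementation
        reho_params={"nneigh": 27},      # face-, edge-, and node-wise neighbours
        agg_method="mean",
    )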

class junifer.markers.ReHoSpheres(coords, radius=None, allow_overlap=False, use_afni=None, reho_params=None, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for regional homogeneity on spheres.

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in millimeters. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

use_afni : bool, optional

Whether to use AFNI for computing. If None, will use AFNI only if available (default None).

reho_params : dict, optional

Extra parameters for computing the ReHo map as a dictionary (default None). If use_afni = True, then the valid keys are:

  • nneigh : {7, 19, 27}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

  • neigh_rad : positive float, optional

    The radius of a desired neighbourhood (default None).

  • neigh_x : positive float, optional

    The semi-radius for the x-axis of ellipsoidal volumes (default None).

  • neigh_y : positive float, optional

    The semi-radius for the y-axis of ellipsoidal volumes (default None).

  • neigh_z : positive float, optional

    The semi-radius for the z-axis of ellipsoidal volumes (default None).

  • box_rad : positive int, optional

    The number of voxels outward in a given cardinal direction for a cubic box centered on a given voxel (default None).

  • box_x : positive int, optional

    The number of voxels for the +/- x-axis of cuboidal volumes (default None).

  • box_y : positive int, optional

    The number of voxels for the +/- y-axis of cuboidal volumes (default None).

  • box_z : positive int, optional

    The number of voxels for the +/- z-axis of cuboidal volumes (default None).

If use_afni = False, then the valid keys are:

  • nneigh : {7, 19, 27, 125}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

    • 125 : for a 5x5x5 cuboidal volume

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, it will use the class name (default None).

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a 1D numpy.ndarray

  • col_names : the column labels for the spheres as a list

class junifer.markers.SphereAggregation(coords, radius=None, allow_overlap=False, method='mean', method_params=None, time_method=None, time_method_params=None, masks=None, on=None, name=None)#

Class for sphere aggregation.

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in millimeters. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

method_params : dict, optional

The parameters to pass to the aggregation method (default None).

time_method : str, optional

The method to use to aggregate the time series over the time points, after applying method (only applicable to BOLD data). If None, it will not operate on the time dimension (default None).

time_method_params : dict, optional

The parameters to pass to the time aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

on : {“T1w”, “BOLD”, “VBM_GM”, “VBM_WM”, “fALFF”, “GCOR”, “LCOR”} or list of the options, optional

The data types to apply the marker to. If None, will work on all available data (default None).

name : str, optional

The name of the marker. By default, it will use KIND_SphereAggregation where KIND is the kind of data it was applied to (default None).

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.
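
As a sketch, sphere-based aggregation of BOLD data could be set up as follows; the coordinates name is an example entry from list_coordinates().

    from junifer.markers import SphereAggregation

    # Mean BOLD signal within 6 mm spheres around a named set of coordinates.
    sphere_agg = SphereAggregation(
        coords="DMNBuckner",   # example coordinates list; see list_coordinates()
        radius=6.0,
        method="mean",
        on="BOLD",
        name="BOLD_DMN_mean",
    )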

class junifer.markers.TemporalSNRParcels(parcellation, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for temporal signal-to-noise ratio using parcellations.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method to use for aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. If None, will use the class name (default None).

aggregate(input, extra_input=None)#

Perform parcel aggregation.

Parameters:
input : dict

A single input from the pipeline data object in which the data is the voxelwise temporal SNR map.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : ROI-wise temporal SNR as a numpy.ndarray

  • col_names : the ROI labels for the computed values as list
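
Finally, a construction sketch for parcel-wise temporal SNR; as above, the parcellation name is an example.

    from junifer.markers import TemporalSNRParcels

    # Temporal signal-to-noise ratio aggregated per parcel.
    tsnr = TemporalSNRParcels(
        parcellation="Schaefer100x17",  # example name; see list_parcellations()
        agg_method="mean",
    )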

class junifer.markers.TemporalSNRSpheres(coords, radius=None, allow_overlap=False, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for temporal signal-to-noise ratio using coordinates (spheres).

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in mm. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

name : str, optional

The name of the marker. By default, it will use KIND_TemporalSNRSpheres where KIND is the kind of data it was applied to (default None).

aggregate(input, extra_input=None)#

Perform sphere aggregation.

Parameters:
input : dict

A single input from the pipeline data object in which the data is the voxelwise temporal SNR map.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : VOI-wise temporal SNR as a numpy.ndarray

  • col_names : the VOI labels for the computed values as list