9.1.4. Markers#

Markers for feature extraction.

class junifer.markers.ALFFParcels(parcellation, fractional, using, highpass=0.01, lowpass=0.1, tr=None, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for ALFF / fALFF on parcels.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

fractional : bool

Whether to compute fractional ALFF.

using : {“junifer”, “afni”}

Implementation to use for computing ALFF:

  • “junifer” : Use junifer’s own ALFF implementation

  • “afni” : Use AFNI’s 3dRSFC

highpass : positive float, optional

The highpass cutoff frequency for the bandpass filter. If 0, no highpass filter is applied (default 0.01).

lowpass : positive float, optional

The lowpass cutoff frequency for the bandpass filter (default 0.1).

tr : positive float, optional

The Repetition Time of the BOLD data. If None, the TR is extracted from the NIfTI header (default None).

agg_method : str, optional

The method used to perform aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

Notes

The tr parameter is crucial for the correctness of fALFF/ALFF computation. If a dataset is correctly preprocessed, the tr should be extracted from the NIfTI header without any issue. However, it has been reported that some preprocessed data might not have the correct tr in the NIfTI header.

ALFF/fALFF are computed using a bandpass Butterworth filter. See scipy.signal.butter() and scipy.signal.filtfilt() for more details.
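The filtering and amplitude computation described in these notes can be illustrated with a minimal, self-contained sketch. This is not junifer’s actual implementation; the signal length, TR, and filter order are illustrative assumptions, and only the standard scipy calls named above are used:

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

rng = np.random.default_rng(0)
tr = 2.0                      # repetition time in seconds (assumed)
fs = 1.0 / tr                 # sampling frequency
ts = rng.standard_normal(200) # a BOLD-like 1D timeseries (synthetic)

# Band-pass the signal with a Butterworth filter, as in the Notes above.
b, a = butter(N=4, Wn=[0.01, 0.1], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, ts)

# ALFF: mean amplitude of the filtered signal's spectrum inside the band.
freqs, power = periodogram(filtered, fs=fs)
band = (freqs >= 0.01) & (freqs <= 0.1)
alff = np.sqrt(power)[band].mean()

# fALFF: band-limited amplitude of the raw signal over its total amplitude.
freqs_all, power_all = periodogram(ts, fs=fs)
amp_all = np.sqrt(power_all)
band_all = (freqs_all >= 0.01) & (freqs_all <= 0.1)
falff = amp_all[band_all].sum() / amp_all.sum()
```

By construction fALFF is a ratio in (0, 1), which is why the fractional variant is less sensitive to the overall scaling of the signal.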

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list

class junifer.markers.ALFFSpheres(coords, fractional, using, radius=None, allow_overlap=False, highpass=0.01, lowpass=0.1, tr=None, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for computing ALFF / fALFF on spheres.

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

fractional : bool

Whether to compute fractional ALFF.

using : {“junifer”, “afni”}

Implementation to use for computing ALFF:

  • “junifer” : Use junifer’s own ALFF implementation

  • “afni” : Use AFNI’s 3dRSFC

radius : float, optional

The radius of the sphere in mm. If None, the signal is extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

highpass : positive float, optional

The highpass cutoff frequency for the bandpass filter. If 0, no highpass filter is applied (default 0.01).

lowpass : positive float, optional

The lowpass cutoff frequency for the bandpass filter (default 0.1).

tr : positive float, optional

The Repetition Time of the BOLD data. If None, the TR is extracted from the NIfTI header (default None).

agg_method : str, optional

The method used to perform aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

Notes

The tr parameter is crucial for the correctness of fALFF/ALFF computation. If a dataset is correctly preprocessed, the tr should be extracted from the NIfTI header without any issue. However, it has been reported that some preprocessed data might not have the correct tr in the NIfTI header.

ALFF/fALFF are computed using a bandpass Butterworth filter. See scipy.signal.butter() and scipy.signal.filtfilt() for more details.

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list

class junifer.markers.BaseMarker(on=None, name=None)#

Abstract base class for all markers.

Parameters:
on : str or list of str, optional

The kind of data to apply the marker to. If None, the marker is applied to all available data (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

Raises:
ValueError

If the required input data type(s) are not found.

abstract compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter.

abstract get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

abstract get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.

store(type_, out, storage)#

Store.

Parameters:
type_ : str

The data type to store.

out : dict

The computed result as a dictionary to store.

storage : storage-like

The storage class, for example, SQLiteFeatureStorage.

validate_input(input)#

Validate input.

Parameters:
input : list of str

The input to the pipeline step. The list must contain the available Junifer Data dictionary keys.

Returns:
list of str

The actual elements of the input that will be processed by this pipeline step.

Raises:
ValueError

If the input does not have the required data.
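The abstract interface above can be mirrored with a toy sketch of the subclassing pattern. This is a hypothetical stand-in, not the real junifer.markers.BaseMarker (which also handles validation and storage); it only shows that a concrete marker must implement compute(), get_output_type(), and get_valid_inputs():

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for the documented abstract interface.
class ToyBaseMarker(ABC):
    def __init__(self, on=None, name=None):
        self.on = on
        self.name = name or self.__class__.__name__  # class name by default

    @abstractmethod
    def get_valid_inputs(self): ...

    @abstractmethod
    def get_output_type(self, input_type): ...

    @abstractmethod
    def compute(self, input, extra_input=None): ...

# A concrete marker: per-column mean of the input data.
class MeanMarker(ToyBaseMarker):
    def get_valid_inputs(self):
        return ["BOLD"]

    def get_output_type(self, input_type):
        return "vector"

    def compute(self, input, extra_input=None):
        data = input["data"]
        return {"data": [sum(col) / len(col) for col in zip(*data)],
                "col_names": input["col_names"]}

marker = MeanMarker()
out = marker.compute({"data": [[1, 2], [3, 4]], "col_names": ["a", "b"]})
```

The returned dictionary follows the data / col_names convention used by the markers documented in this section.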

class junifer.markers.CrossParcellationFC(parcellation_one, parcellation_two, aggregation_method='mean', correlation_method='pearson', masks=None, name=None)#

Class for calculating parcel-wise correlations with 2 parcellations.

Parameters:
parcellation_one : str

The name of the first parcellation.

parcellation_two : str

The name of the second parcellation.

aggregation_method : str, optional

The aggregation method (default “mean”).

correlation_method : str, optional

Any method that can be passed to pandas.DataFrame.corr (default “pearson”).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

compute(input, extra_input=None)#

Compute.

Take a timeseries, parcellate it with two different parcellation schemes, and compute parcel-wise correlations between the two parcellated timeseries. The shape of the output matrix corresponds to the number of ROIs in (parcellation_two, parcellation_one).

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the correlation values between the two parcellations as a numpy.ndarray

  • col_names : the ROIs of the first parcellation as a list

  • row_names : the ROIs of the second parcellation as a list
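The cross-parcellation correlation above can be sketched with plain NumPy. This is an independent illustration, not junifer’s implementation; the timeseries are synthetic and the ROI counts are arbitrary assumptions. It shows why the output shape is (parcellation_two, parcellation_one):

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_one, n_two = 100, 5, 3               # timepoints, ROIs (assumed sizes)
ts_one = rng.standard_normal((n_t, n_one))  # parcellation_one timeseries
ts_two = rng.standard_normal((n_t, n_two))  # parcellation_two timeseries

# Pearson correlation of every ROI in parcellation_two with every ROI in
# parcellation_one: take the cross block of the joint correlation matrix.
full = np.corrcoef(ts_one.T, ts_two.T)      # (n_one + n_two) square matrix
cross = full[n_one:, :n_one]                # shape (n_two, n_one)
```

Each row of `cross` corresponds to one ROI of the second parcellation, each column to one ROI of the first, matching the row_names / col_names keys above.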

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.

class junifer.markers.EdgeCentricFCParcels(parcellation, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for edge-centric FC using parcellations.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method used to aggregate the BOLD time series. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

cor_method : str, optional

The method used to perform correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

References

[1]

Jo et al. (2021) Subject identification using edge-centric functional connectivity doi: https://doi.org/10.1016/j.neuroimage.2021.118204

aggregate(input, extra_input=None)#

Perform parcel aggregation and ETS computation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list
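The edge time series (ETS) construction behind this marker, following the general recipe of Jo et al. (2021), can be sketched with NumPy. This is an illustrative reimplementation, not junifer’s code; the parcel count and timeseries are assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 4))   # timepoints x parcels (synthetic)

# z-score each parcel's timeseries, then form the edge time series:
# the elementwise product of every pair of z-scored parcel signals.
z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
pairs = list(combinations(range(ts.shape[1]), 2))
ets = np.stack([z[:, i] * z[:, j] for i, j in pairs], axis=1)  # 120 x 6

# Edge-centric FC: covariance between edge time series (cf. cor_method).
efc = np.cov(ets, rowvar=False)      # 6 x 6 edge-by-edge matrix
```

With 4 parcels there are 4·3/2 = 6 edges, so the edge-centric FC matrix is edge-by-edge rather than parcel-by-parcel.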

class junifer.markers.EdgeCentricFCSpheres(coords, radius=None, allow_overlap=False, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for edge-centric FC using coordinates (spheres).

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in mm. If None, the signal is extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

cor_method : str, optional

The method used to perform correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. By default, it will use KIND_EdgeCentricFCSpheres where KIND is the kind of data it was applied to (default None).

References

[1]

Jo et al. (2021) Subject identification using edge-centric functional connectivity doi: https://doi.org/10.1016/j.neuroimage.2021.118204

aggregate(input, extra_input=None)#

Perform sphere aggregation and ETS computation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list

class junifer.markers.FunctionalConnectivityParcels(parcellation, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for functional connectivity using parcellations.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method used to perform aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

cor_method : str, optional

The method used to perform correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

aggregate(input, extra_input=None)#

Perform parcel aggregation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list
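In a junifer YAML pipeline specification, a marker of this kind is typically declared under the markers section. The snippet below is a hedged sketch: the marker name and the parcellation name ("Schaefer100x17") are placeholder assumptions, and the exact schema should be checked against the junifer documentation:

```yaml
markers:
  - name: Schaefer100x17_FC          # user-chosen marker name (placeholder)
    kind: FunctionalConnectivityParcels
    parcellation: Schaefer100x17     # must be a valid list_parcellations() entry
    agg_method: mean
    cor_method: covariance
```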

class junifer.markers.FunctionalConnectivitySpheres(coords, radius=None, allow_overlap=False, agg_method='mean', agg_method_params=None, cor_method='covariance', cor_method_params=None, masks=None, name=None)#

Class for functional connectivity using coordinates (spheres).

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in mm. If None, the signal is extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

cor_method : str, optional

The method used to perform correlation. Check valid options in nilearn.connectome.ConnectivityMeasure (default “covariance”).

cor_method_params : dict, optional

Parameters to pass to the correlation function. Check valid options in nilearn.connectome.ConnectivityMeasure (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. By default, it will use KIND_FunctionalConnectivitySpheres where KIND is the kind of data it was applied to (default None).

aggregate(input, extra_input=None)#

Perform sphere aggregation.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list

class junifer.markers.MarkerCollection(markers, datareader=None, preprocessors=None, storage=None)#

Class for marker collection.

Parameters:
markers : list of marker-like

The markers to compute.

datareader : DataReader-like object, optional

The DataReader to use (default None).

preprocessors : list of preprocessing-like, optional

The preprocessors to apply (default None).

storage : storage-like, optional

The storage to use (default None).

Raises:
ValueError

If two markers have the same name.

fit(input)#

Fit the pipeline.

Parameters:
input : dict

The input data to fit the pipeline on. Should be the output of indexing the DataGrabber with one element.

Returns:
dict or None

The output of the pipeline. Each key represents a marker name and the values are the computed marker values. If the pipeline has a storage configured, the output will be None.

validate(datagrabber)#

Validate the pipeline.

Without doing any computation, check whether the marker collection can be fitted without problems, i.e., the data required for each marker is present and streamed down the steps. Also, if a storage is configured, check that the storage can handle the markers’ output.

Parameters:
datagrabber : DataGrabber-like

The DataGrabber to validate.
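The collection pattern described above (unique marker names, fit running each marker over one input) can be mirrored with a toy sketch. This is not the real MarkerCollection; the dict-based "markers" and the callable field are hypothetical simplifications used only to show the name-uniqueness check and the fan-out of fit():

```python
# Toy sketch of the collection pattern: markers must have distinct names
# (cf. the documented ValueError), and fit() returns one result per marker.
class ToyCollection:
    def __init__(self, markers):
        names = [m["name"] for m in markers]
        if len(set(names)) != len(names):
            raise ValueError("Markers must have distinct names.")
        self.markers = markers

    def fit(self, input):
        # Each key of the output is a marker name, as documented for fit().
        return {m["name"]: m["func"](input) for m in self.markers}

coll = ToyCollection([
    {"name": "mean", "func": lambda d: sum(d) / len(d)},
    {"name": "max", "func": max},
])
result = coll.fit([1.0, 2.0, 3.0])
```

Passing two markers with the same name to the constructor raises ValueError, mirroring the behaviour documented for MarkerCollection.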

class junifer.markers.ParcelAggregation(parcellation, method, method_params=None, time_method=None, time_method_params=None, masks=None, on=None, name=None)#

Class for parcel aggregation.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

method : str

The method used to perform aggregation. Check valid options in get_aggfunc_by_name().

method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

time_method : str, optional

The method used to aggregate the time series over the time points, after applying method (only applicable to BOLD data). If None, it will not operate on the time dimension (default None).

time_method_params : dict, optional

The parameters to pass to the time aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

on : {“T1w”, “T2w”, “BOLD”, “VBM_GM”, “VBM_WM”, “VBM_CSF”, “fALFF”, “GCOR”, “LCOR”} or list of the options, optional

The data types to apply the marker to. If None, will work on all available data (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

Raises:
ValueError

If time_method is specified for non-BOLD data or if time_method_params is not None when time_method is None.

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

A single input from the pipeline data object in which to compute the marker.

extra_input : dict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds that need to be used in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as a dictionary. This will either be returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list

Warns:
RuntimeWarning

If time aggregation is required but only one time point is available.

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

Raises:
ValueError

If the input_type is invalid.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.
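Parcel aggregation itself reduces to applying the aggregation function within each parcel label. The following is a minimal NumPy sketch of that idea with "mean" as the method; the voxel values and labels are synthetic, and real parcellations are 3D label images rather than flat arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
voxels = rng.standard_normal(60)     # flattened voxel values (synthetic)
labels = np.repeat([1, 2, 3], 20)    # parcel label per voxel (3 parcels)

# Aggregate with the "mean" function within each parcel label.
parcel_ids = np.unique(labels)
means = np.array([voxels[labels == p].mean() for p in parcel_ids])
```

The result has one value per parcel, which is what ends up in the data array with the parcel names as col_names.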

class junifer.markers.RSSETSMarker(parcellation, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for root sum of squares of edgewise timeseries.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_method : str, optional

The method used to perform aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

compute(input, extra_input=None)#

Compute.

Take a timeseries of brain areas and calculate a timeseries for each edge according to the method outlined in [1]. For more information, check https://github.com/brain-networks/edge-ts/blob/master/main.m

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as a list

References

[1]

Zamani Esfahlani et al. (2020) High-amplitude cofluctuations in cortical activity drive functional connectivity doi: 10.1073/pnas.2005531117
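The root sum of squares (RSS) of the edge time series, following the general recipe of [1], can be sketched with NumPy. This is an illustrative reimplementation rather than junifer’s code; the area count and timeseries are assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 5))   # timepoints x brain areas (synthetic)

# Edge time series: products of z-scored area signals for every area pair,
# then the root sum of squares across edges, one value per timepoint.
z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
ets = np.stack([z[:, i] * z[:, j]
                for i, j in combinations(range(ts.shape[1]), 2)], axis=1)
rss = np.sqrt((ets ** 2).sum(axis=1))
```

Peaks of the RSS timeseries correspond to the high-amplitude cofluctuation events studied in [1].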

get_output_type(input_type)#

Get output type.

Parameters:
input_type : str

The data type input to the marker.

Returns:
str

The storage type output by the marker.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.

class junifer.markers.ReHoParcels(parcellation, using, reho_params=None, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for regional homogeneity on parcels.

Parameters:
parcellation : str or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

using : {“junifer”, “afni”}

Implementation to use for computing ReHo:

  • “junifer” : Use junifer’s own ReHo implementation

  • “afni” : Use AFNI’s 3dReHo

reho_params : dict, optional

Extra parameters for computing the ReHo map as a dictionary (default None). If using="afni", the valid keys are:

  • nneigh : {7, 19, 27}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

  • neigh_rad : positive float, optional

    The radius of a desired neighbourhood (default None).

  • neigh_x : positive float, optional

    The semi-radius for the x-axis of ellipsoidal volumes (default None).

  • neigh_y : positive float, optional

    The semi-radius for the y-axis of ellipsoidal volumes (default None).

  • neigh_z : positive float, optional

    The semi-radius for the z-axis of ellipsoidal volumes (default None).

  • box_rad : positive int, optional

    The number of voxels outward in a given cardinal direction for a cubic box centered on a given voxel (default None).

  • box_x : positive int, optional

    The number of voxels for the +/- x-axis of cuboidal volumes (default None).

  • box_y : positive int, optional

    The number of voxels for the +/- y-axis of cuboidal volumes (default None).

  • box_z : positive int, optional

    The number of voxels for the +/- z-axis of cuboidal volumes (default None).

If using="junifer", the valid keys are:

  • nneigh : {7, 19, 27, 125}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

    • 125 : for a 5×5×5 cuboidal volume

agg_method : str, optional

The method used to perform aggregation. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_params : dict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a 1D numpy.ndarray

  • col_names : the column labels for the parcels as a list
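The quantity underlying ReHo is Kendall’s coefficient of concordance (W) over the time series of a voxel neighbourhood. The sketch below is an independent illustration, not junifer’s or AFNI’s implementation; the neighbourhood size of 27 voxels matches the default nneigh above, and the data are synthetic:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
neigh = rng.standard_normal((50, 27))  # timepoints x voxels in neighbourhood

# Kendall's W: rank each voxel's time series over time, then measure
# agreement of the ranks across the voxels of the neighbourhood.
n_t, k = neigh.shape
ranks = np.apply_along_axis(rankdata, 0, neigh)  # ranks per voxel column
row_sums = ranks.sum(axis=1)                     # summed ranks per timepoint
s = ((row_sums - row_sums.mean()) ** 2).sum()
w = 12.0 * s / (k ** 2 * (n_t ** 3 - n_t))       # ReHo value in [0, 1]
```

W approaches 1 when all voxels in the neighbourhood fluctuate in lockstep and 0 when their rankings are unrelated, which is why ReHo is read as local synchrony.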

class junifer.markers.ReHoSpheres(coords, using, radius=None, allow_overlap=False, reho_params=None, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for regional homogeneity on spheres.

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

using : {“junifer”, “afni”}

Implementation to use for computing ReHo:

  • “junifer” : Use junifer’s own ReHo implementation

  • “afni” : Use AFNI’s 3dReHo

radius : float, optional

The radius of the sphere in millimeters. If None, the signal is extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

reho_params : dict, optional

Extra parameters for computing the ReHo map as a dictionary (default None). If using="afni", the valid keys are:

  • nneigh : {7, 19, 27}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

  • neigh_rad : positive float, optional

    The radius of a desired neighbourhood (default None).

  • neigh_x : positive float, optional

    The semi-radius for the x-axis of ellipsoidal volumes (default None).

  • neigh_y : positive float, optional

    The semi-radius for the y-axis of ellipsoidal volumes (default None).

  • neigh_z : positive float, optional

    The semi-radius for the z-axis of ellipsoidal volumes (default None).

  • box_rad : positive int, optional

    The number of voxels outward in a given cardinal direction for a cubic box centered on a given voxel (default None).

  • box_x : positive int, optional

    The number of voxels for the +/- x-axis of cuboidal volumes (default None).

  • box_y : positive int, optional

    The number of voxels for the +/- y-axis of cuboidal volumes (default None).

  • box_z : positive int, optional

    The number of voxels for the +/- z-axis of cuboidal volumes (default None).

If using="junifer", the valid keys are:

  • nneigh : {7, 19, 27, 125}, optional (default 27)

    Number of voxels in the neighbourhood, inclusive. Can be:

    • 7 : for facewise neighbours only

    • 19 : for face- and edge-wise neighbours

    • 27 : for face-, edge-, and node-wise neighbours

    • 125 : for a 5×5×5 cuboidal volume

agg_method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_params : dict, optional

The parameters to pass to the aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

name : str, optional

The name of the marker. If None, the class name is used (default None).

compute(input, extra_input=None)#

Compute.

Parameters:
input : dict

The BOLD data as a dictionary.

extra_input : dict, optional

The other fields in the pipeline data object (default None).

Returns:
dict

The computed result as a dictionary. The dictionary has the following keys:

  • data : the actual computed values as a 1D numpy.ndarray

  • col_names : the column labels for the spheres as a list

class junifer.markers.SphereAggregation(coords, radius=None, allow_overlap=False, method='mean', method_params=None, time_method=None, time_method_params=None, masks=None, on=None, name=None)#

Class for sphere aggregation.

Parameters:
coords : str

The name of the coordinates list to use. See list_coordinates() for options.

radius : float, optional

The radius of the sphere in millimeters. If None, the signal is extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlap : bool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

method : str, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

method_params : dict, optional

The parameters to pass to the aggregation method (default None).

time_method : str, optional

The method used to aggregate the time series over the time points, after applying method (only applicable to BOLD data). If None, it will not operate on the time dimension (default None).

time_method_params : dict, optional

The parameters to pass to the time aggregation method (default None).

masks : str, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, no mask is applied (default None).

on : {“T1w”, “T2w”, “BOLD”, “VBM_GM”, “VBM_WM”, “VBM_CSF”, “fALFF”, “GCOR”, “LCOR”} or list of the options, optional

The data types to apply the marker to. If None, will work on all available data (default None).

name : str, optional

The name of the marker. By default, it will use KIND_SphereAggregation where KIND is the kind of data it was applied to (default None).

Raises:
ValueError

If time_method is specified for non-BOLD data or if time_method_params is not None when time_method is None.
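In a junifer YAML pipeline this marker is configured under markers. A hedged sketch (the coordinates name DMNBuckner is an assumption for illustration; check list_coordinates() for what is actually available):

```yaml
markers:
  - name: DMN_mean
    kind: SphereAggregation
    coords: DMNBuckner   # assumed coordinates list name; see list_coordinates()
    radius: 8            # sphere radius in millimeters
    method: mean
    time_method: mean    # also collapse the time axis (BOLD only)
```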

compute(input, extra_input=None)#

Compute.

Parameters:
inputdict

A single input from the pipeline data object in which to compute the marker.

extra_inputdict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds needed in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as dictionary. This will be either returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : the actual computed values as a numpy.ndarray

  • col_names : the column labels for the computed values as list

Warns:
RuntimeWarning

If time aggregation is required but only one time point is available.

get_output_type(input_type)#

Get output type.

Parameters:
input_typestr

The data type input to the marker.

Returns:
str

The storage type output by the marker.

Raises:
ValueError

If the input_type is invalid.

get_valid_inputs()#

Get valid data types for input.

Returns:
list of str

The list of data types that can be used as input for this marker.

class junifer.markers.TemporalSNRParcels(parcellation, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for temporal signal-to-noise ratio using parcellations.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

namestr, optional

The name of the marker. If None, will use the class name (default None).

aggregate(input, extra_input=None)#

Perform parcel aggregation.

Parameters:
inputdict

A single input from the pipeline data object in which the data is the voxelwise temporal SNR map.

extra_inputdict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds needed in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as dictionary. This will be either returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : ROI-wise temporal SNR as a numpy.ndarray

  • col_names : the ROI labels for the computed values as list
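Temporal SNR is the region-wise mean of the signal over time divided by its standard deviation over time. A minimal numpy sketch on already-aggregated data (an illustration, not junifer's actual code):

```python
import numpy as np

def temporal_snr(bold):
    """Region-wise temporal SNR: mean over time / std over time.

    `bold` has shape (n_timepoints, n_regions), e.g. the output of
    parcel aggregation. Regions with zero variance get a tSNR of 0.
    """
    mean = bold.mean(axis=0)
    std = bold.std(axis=0)
    safe = np.where(std > 0, std, 1.0)   # avoid division by zero
    return np.where(std > 0, mean / safe, 0.0)
```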

class junifer.markers.TemporalSNRSpheres(coords, radius=None, allow_overlap=False, agg_method='mean', agg_method_params=None, masks=None, name=None)#

Class for temporal signal-to-noise ratio using coordinates (spheres).

Parameters:
coordsstr

The name of the coordinates list to use. See list_coordinates() for options.

radiusfloat, optional

The radius of the sphere in mm. If None, the signal will be extracted from a single voxel. See nilearn.maskers.NiftiSpheresMasker for more information (default None).

allow_overlapbool, optional

Whether to allow overlapping spheres. If False, an error is raised if the spheres overlap (default False).

agg_methodstr, optional

The aggregation method to use. See get_aggfunc_by_name() for more information (default “mean”).

agg_method_paramsdict, optional

The parameters to pass to the aggregation method (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

namestr, optional

The name of the marker. By default, it will use KIND_TemporalSNRSpheres where KIND is the kind of data it was applied to (default None).

aggregate(input, extra_input=None)#

Perform sphere aggregation.

Parameters:
inputdict

A single input from the pipeline data object in which the data is the voxelwise temporal SNR map.

extra_inputdict, optional

The other fields in the pipeline data object. Useful for accessing other data kinds needed in the computation. For example, the functional connectivity markers can make use of the confounds if available (default None).

Returns:
dict

The computed result as dictionary. This will be either returned to the user or stored in the storage by calling the store method with this as a parameter. The dictionary has the following keys:

  • data : VOI-wise temporal SNR as a numpy.ndarray

  • col_names : the VOI labels for the computed values as list

Complexity#

Provide imports for complexity sub-package.

class junifer.markers.complexity.HurstExponent(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for Hurst exponent of a time series.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the Hurst exponent calculation function. For more information, check out junifer.markers.utils._hurst_exponent. If None, value is set to {“method”: “dfa”} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import HurstExponent or in the YAML by with: junifer.markers.complexity.
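The explicit-import requirement looks like this in a YAML pipeline (the parcellation name below is an assumption for illustration; check list_parcellations() for valid options):

```yaml
with:
  - junifer.markers.complexity     # makes HurstExponent resolvable

markers:
  - name: hurst
    kind: HurstExponent
    parcellation: Schaefer100x17   # assumed parcellation name
    params:
      method: dfa                  # the documented default
```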

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, and calculate Hurst exponent using the detrended fluctuation analysis method assuming the data is monofractal [1].

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.fractal_dfa

References

[1]

Peng, C.; Havlin, S.; Stanley, H.E.; Goldberger, A.L. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos Interdiscip. J. Nonlinear Sci., 5, 82-87, 1995.

class junifer.markers.complexity.MultiscaleEntropyAUC(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for AUC of multiscale entropy of a time series.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the AUC of multiscale entropy calculation function. For more information, check out junifer.markers.utils._multiscale_entropy_auc. If None, value is set to {“m”: 2, “tol”: 0.5, “scale”: 10} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import MultiscaleEntropyAUC or in the YAML by with: junifer.markers.complexity.

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, calculate multiscale entropy for each region and calculate the AUC of the entropy curves leading to a region-wise map of the brain [1].

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.entropy_multiscale

References

[1]

Costa, M., Goldberger, A. L., & Peng, C. K. Multiscale entropy analysis of complex physiologic time series. Physical review letters, 89(6), 068102, 2002.

class junifer.markers.complexity.PermEntropy(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for permutation entropy of a time series.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the permutation entropy calculation function. For more information, check out junifer.markers.utils._perm_entropy. If None, value is set to {“m”: 2, “delay”: 1} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import PermEntropy or in the YAML by with: junifer.markers.complexity.

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, and calculate permutation entropy according to the method outlined in [1].

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.entropy_permutation

References

[1]

Bandt, C., & Pompe, B. (2002) Permutation entropy: a natural complexity measure for time series. Physical review letters, 88(17), 174102.
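The underlying computation (delegated to neurokit2.entropy_permutation in practice) counts ordinal patterns of embedded vectors and takes their Shannon entropy. A self-contained numpy sketch using the documented defaults {"m": 2, "delay": 1} (an illustration, not junifer's or neurokit2's code):

```python
import math
from collections import Counter

import numpy as np

def perm_entropy(ts, m=2, delay=1):
    """Normalized permutation entropy (Bandt & Pompe, 2002) sketch."""
    n = len(ts) - (m - 1) * delay
    # Count the ordinal pattern (argsort) of each embedded vector.
    patterns = Counter(
        tuple(np.argsort(ts[i : i + m * delay : delay])) for i in range(n)
    )
    probs = np.array(list(patterns.values()), dtype=float) / n
    h = -np.sum(probs * np.log(probs))          # Shannon entropy of patterns
    return h / math.log(math.factorial(m))      # normalize to [0, 1]
```

A monotonic series produces a single pattern (entropy 0), while white noise approaches the maximum of 1.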

class junifer.markers.complexity.RangeEntropy(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for range entropy of a time series.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the range entropy calculation function. For more information, check out junifer.markers.utils._range_entropy. If None, value is set to {“m”: 2, “tol”: 0.5, “delay”: 1} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import RangeEntropy or in the YAML by with: junifer.markers.complexity.

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, and calculate range entropy according to the method outlined in [1].

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.entropy_range

References

[1]

Omidvarnia, A. et al. Range Entropy: A Bridge between Signal Complexity and Self-Similarity. Entropy, vol. 20, no. 12, p. 962, 2018.

class junifer.markers.complexity.RangeEntropyAUC(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for AUC of range entropy values of a time series over r = 0 to 1.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the range entropy calculation function. For more information, check out junifer.markers.utils._range_entropy. If None, value is set to {“m”: 2, “delay”: 1, “n_r”: 10} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import RangeEntropyAUC or in the YAML by with: junifer.markers.complexity.

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, calculate range entropy according to the method outlined in [1] across the range of tolerance value r from 0 to 1, and compute its area under the curve.

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.entropy_range

References

[1]

Omidvarnia, A. et al. Range Entropy: A Bridge between Signal Complexity and Self-Similarity. Entropy, vol. 20, no. 12, p. 962, 2018.

class junifer.markers.complexity.SampleEntropy(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for sample entropy of a time series.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the sample entropy calculation function. For more information, check out junifer.markers.utils._sample_entropy. If None, value is set to {“m”: 2, “delay”: 1, “tol”: 0.5} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import SampleEntropy or in the YAML by with: junifer.markers.complexity.

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, and calculate sample entropy [1].

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.entropy_sample

References

[1]

Richman, J., Moorman, J. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol., 278(6), pp. H2039-H2049, 2000.
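Sample entropy compares how often templates of length m remain similar (within a tolerance) when extended to length m + 1. A numpy sketch using the documented defaults {"m": 2, "delay": 1, "tol": 0.5}, with tol taken as a fraction of the series' standard deviation (an illustration; junifer itself delegates to neurokit2.entropy_sample):

```python
import numpy as np

def sample_entropy(ts, m=2, delay=1, tol=0.5):
    """Sample entropy sketch (Richman & Moorman, 2000)."""
    x = np.asarray(ts, dtype=float)
    r = tol * np.std(x)   # tolerance as a fraction of the std

    def matches(k):
        # Embed the series into overlapping templates of length k.
        n = len(x) - (k - 1) * delay
        emb = np.array([x[i : i + k * delay : delay] for i in range(n)])
        # Chebyshev distance between every template pair (i < j).
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        return np.sum(d[np.triu_indices(n, k=1)] < r)

    b = matches(m)       # similar pairs at length m
    a = matches(m + 1)   # pairs still similar at length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Regular signals (e.g. a sinusoid) yield lower sample entropy than white noise.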

class junifer.markers.complexity.WeightedPermEntropy(parcellation, agg_method='mean', agg_method_params=None, masks=None, params=None, name=None)#

Class for weighted permutation entropy of a time series.

Parameters:
parcellationstr or list of str

The name(s) of the parcellation(s). Check valid options by calling junifer.data.parcellations.list_parcellations().

agg_methodstr, optional

The method to perform aggregation using. Check valid options in junifer.stats.get_aggfunc_by_name() (default “mean”).

agg_method_paramsdict, optional

Parameters to pass to the aggregation function. Check valid options in junifer.stats.get_aggfunc_by_name() (default None).

masksstr, dict or list of dict or str, optional

The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).

paramsdict, optional

Parameters to pass to the weighted permutation entropy calculation function. For more information, check out junifer.markers.utils._weighted_perm_entropy. If None, value is set to {“m”: 2, “delay”: 1} (default None).

namestr, optional

The name of the marker. If None, it will use the class name (default None).

Warning

This class is not automatically imported by junifer and requires you to import it explicitly. You can do it programmatically by from junifer.markers.complexity import WeightedPermEntropy or in the YAML by with: junifer.markers.complexity.

compute_complexity(extracted_bold_values)#

Compute complexity measure.

Take a timeseries of brain areas, and calculate weighted permutation entropy according to the method outlined in [1].

Parameters:
extracted_bold_valuesnumpy.ndarray

The BOLD values extracted via parcel aggregation.

Returns:
numpy.ndarray

The values after computing complexity measure.

See also

neurokit2.entropy_permutation

References

[1]

Fadlallah, B., Chen, B., Keil, A., & Principe, J. (2013) Weighted-permutation entropy: A complexity measure for time series incorporating amplitude information. Physical Review E, 87(2), 022911.

junifer.markers.complexity.find_spec(name, package=None)#

Return the spec for the specified module.

First, sys.modules is checked to see if the module was already imported. If so, then sys.modules[name].__spec__ is returned. If that happens to be set to None, then ValueError is raised. If the module is not in sys.modules, then sys.meta_path is searched for a suitable spec with the value of ‘path’ given to the finders. None is returned if no spec could be found.

If the name is for a submodule (contains a dot), the parent module is automatically imported.

The name and package arguments work the same as importlib.import_module(). In other words, relative module names (with leading dots) work.

junifer.markers.complexity.raise_error(msg, klass=<class 'ValueError'>, exception=None)#

Raise error, but first log it.

Parameters:
msgstr

The message for the exception.

klasssubclass of Exception, optional

The subclass of Exception to raise using (default ValueError).

exceptionException, optional

The original exception to follow up on (default None).
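The documented behaviour ("raise error, but first log it") can be sketched as follows. This is an illustrative equivalent, not junifer's actual source:

```python
import logging

logger = logging.getLogger("junifer")

def raise_error(msg, klass=ValueError, exception=None):
    """Log the message at ERROR level, then raise it as `klass`."""
    logger.error(msg)
    if exception is not None:
        # Chain onto the original exception for a fuller traceback.
        raise klass(msg) from exception
    raise klass(msg)
```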