9.1.3. Pre-processing¶
Preprocessors that transform data before feature extraction.
- pydantic model junifer.preprocess.BasePreprocessor¶
Abstract base class for preprocessors.
For every preprocessor, one needs to provide a concrete implementation of this abstract class.
- Parameters:
  - on : list of DataType or None, optional
    The data type(s) to apply the preprocessor on. If None, will work on all available data types. Check DataType for valid values (default None).
  - required_data_types : list of DataType or None, optional
    The data type(s) needed for computation. If None, will be equal to on. Check DataType for valid values (default None).
- Attributes:
  - valid_inputs
    Valid data types to operate on.
- Raises:
  - AttributeError
    If the preprocessor does not have the _VALID_DATA_TYPES attribute.
  - ValueError
    If the required input data type(s) are not found.
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
Show JSON schema
{ "title": "BasePreprocessor", "description": "Abstract base class for preprocessor.\n\nFor every preprocessor, one needs to provide a concrete\nimplementation of this abstract class.\n\nParameters\n----------\non : list of :enum:`.DataType` or None, optional\n The data type(s) to apply the preprocessor on.\n If None, will work on all available data types.\n Check :enum:`.DataType` for valid values (default None).\nrequired_data_types : list of :enum:`.DataType` or None, optional\n The data type(s) needed for computation.\n If None, will be equal to ``on``.\n Check :enum:`.DataType` for valid values (default None).\n\nAttributes\n----------\nvalid_inputs\n\nRaises\n------\nAttributeError\n If the preprocessor does not have ``_VALID_DATA_TYPES`` attribute.\nValueError\n If required input data type(s) is(are) not found.", "type": "object", "properties": { "on": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "On" }, "required_data_types": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Required Data Types" } }, "$defs": { "DataType": { "description": "Accepted data type.", "enum": [ "T1w", "T2w", "BOLD", "Warp", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR", "DWI", "FreeSurfer" ], "title": "DataType", "type": "string" } } }
- Config:
use_enum_values: bool = True
- Fields:
  - on (list[junifer.datagrabber.base.DataType] | None)
  - required_data_types (list[junifer.datagrabber.base.DataType] | None)
- model_post_init(context)¶
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- abstract preprocess(input, extra_input=None)¶
Preprocess.
- Parameters:
  - input : dict
    A single input from the Junifer Data object to preprocess.
  - extra_input : dict, optional
    The other fields in the Junifer Data object. Useful for accessing other data types needed in the computation. For example, the confound removers can make use of the confounds if available (default None).
- Returns:
  - dict
    The computed result as a dictionary.
- validate_input(input)¶
Validate input.
- Parameters:
- Returns:
- Raises:
  - ValueError
    If the input does not have the required data.
- validate_preprocessor_params()¶
Run extra logical validation for preprocessor.
Subclasses can override to provide validation.
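The subclassing pattern above can be sketched with a simplified stand-in. Note that `BasePreprocessorSketch` and `DemeanPreprocessor` below are hypothetical illustrations, not junifer classes; the real `BasePreprocessor` is a pydantic model with additional validation (`valid_inputs`, `_VALID_DATA_TYPES`, etc.):

```python
from abc import ABC, abstractmethod

# Simplified, hypothetical stand-in for junifer.preprocess.BasePreprocessor,
# for illustration only.
class BasePreprocessorSketch(ABC):
    def __init__(self, on=None):
        # ``on`` restricts the data types the preprocessor operates on;
        # None means "all available data types".
        self.on = on

    @abstractmethod
    def preprocess(self, input, extra_input=None):
        """Return the computed result as a dictionary."""
        ...

class DemeanPreprocessor(BasePreprocessorSketch):
    """Toy concrete preprocessor: subtract the mean from a 1D series."""

    def preprocess(self, input, extra_input=None):
        series = input["data"]
        mean = sum(series) / len(series)
        return {"data": [v - mean for v in series]}

result = DemeanPreprocessor(on=["BOLD"]).preprocess({"data": [1.0, 2.0, 3.0]})
```

A concrete junifer preprocessor follows the same shape: implement `preprocess` and return the result as a dictionary.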
- enum junifer.preprocess.Confounds(value)¶
Accepted confounds.
- Basic : only the confounding time series
- Power2 : signal + quadratic term
- Derivatives : signal + derivatives
- Full : signal + deriv. + quadratic terms + power2
- Member Type: str
Valid values are as follows:
- Basic = <Confounds.Basic: 'basic'>¶
- Power2 = <Confounds.Power2: 'power2'>¶
- Derivatives = <Confounds.Derivatives: 'derivatives'>¶
- Full = <Confounds.Full: 'full'>¶
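The four levels can be made concrete by expanding a base confound column into the derived columns each level implies. The suffixes below follow fMRIPrep's naming convention (`_derivative1`, `_power2`); the `expand_confound` helper itself is hypothetical, not part of junifer's API:

```python
# Hypothetical sketch: which derived columns each Confounds level selects
# for one base confound, using fMRIPrep-style column suffixes.
SUFFIXES = {
    "basic": [""],
    "power2": ["", "_power2"],
    "derivatives": ["", "_derivative1"],
    "full": ["", "_derivative1", "_power2", "_derivative1_power2"],
}

def expand_confound(base, level):
    """Return the confound column names implied by a Confounds level."""
    return [base + suffix for suffix in SUFFIXES[level]]

cols = expand_confound("trans_x", "full")
```

For a single motion parameter, `basic` keeps one column while `full` keeps four.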
- pydantic model junifer.preprocess.Smoothing¶
Class for smoothing.
- Parameters:
  - using : SmoothingImpl
  - on : list of {DataType.T1w, DataType.T2w, DataType.BOLD}
    The data type(s) to apply smoothing to.
  - smoothing_params : dict, optional
    Extra parameters for smoothing as a dictionary (default None).
    If using=SmoothingImpl.nilearn, then the valid keys are:
    - fwhm : scalar, numpy.ndarray, tuple or list of scalar, "fast" or None
      Smoothing strength, as a full-width at half maximum, in millimeters:
      - If a nonzero scalar, width is identical in all 3 directions.
      - If numpy.ndarray, tuple, or list, it must have 3 elements, giving the FWHM along each axis. If any of the elements is 0 or None, smoothing is not performed along that axis.
      - If "fast", a fast smoothing will be performed with a filter [0.2, 1, 0.2] in each direction and a normalisation to preserve the local average value.
      - If None, no filtering is performed (useful when only removal of non-finite values is needed).
    else if using=SmoothingImpl.afni, then the valid keys are:
    - fwhm : int or float
      Smooth until the value. AFNI estimates the smoothing and then applies smoothing to reach fwhm.
    else if using=SmoothingImpl.fsl, then the valid keys are:
    - brightness_threshold : float
      Threshold to discriminate between noise and the underlying image. The value should be set greater than the noise level and less than the contrast of the underlying image.
    - fwhm : float
      Spatial extent of smoothing.
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
Show JSON schema
{ "title": "Smoothing", "description": "Class for smoothing.\n\nParameters\n----------\nusing : :enum:`.SmoothingImpl`\non : list of {``DataType.T1w``, ``DataType.T2w``, ``DataType.BOLD``}\n The data type(s) to apply smoothing to.\nsmoothing_params : dict, optional\n Extra parameters for smoothing as a dictionary (default None).\n If ``using=SmoothingImpl.nilearn``, then the valid keys are:\n\n * ``fmhw`` : scalar, ``numpy.ndarray``, tuple or list of scalar, \"fast\" or None\n Smoothing strength, as a full-width at half maximum, in\n millimeters:\n\n - If nonzero scalar, width is identical in all 3 directions.\n - If ``numpy.ndarray``, tuple, or list, it must have 3 elements,\n giving the FWHM along each axis. If any of the elements is 0 or\n None, smoothing is not performed along that axis.\n - If ``\"fast\"``, a fast smoothing will be performed with a filter\n ``[0.2, 1, 0.2]`` in each direction and a normalisation to\n preserve the local average value.\n - If None, no filtering is performed (useful when just removal of\n non-finite values is needed).\n\n else if ``using=SmoothingImpl.afni``, then the valid keys are:\n\n * ``fwhm`` : int or float\n Smooth until the value. 
AFNI estimates the smoothing and then\n applies smoothing to reach ``fwhm``.\n\n else if ``using=SmoothingImpl.fsl``, then the valid keys are:\n\n * ``brightness_threshold`` : float\n Threshold to discriminate between noise and the underlying image.\n The value should be set greater than the noise level and less than\n the contrast of the underlying image.\n * ``fwhm`` : float\n Spatial extent of smoothing.", "type": "object", "properties": { "on": { "items": { "enum": [ "T1w", "T2w", "BOLD" ], "type": "string" }, "title": "On", "type": "array" }, "required_data_types": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Required Data Types" }, "using": { "$ref": "#/$defs/SmoothingImpl" }, "smoothing_params": { "anyOf": [ { "additionalProperties": true, "type": "object" }, { "type": "null" } ], "default": null, "title": "Smoothing Params" } }, "$defs": { "DataType": { "description": "Accepted data type.", "enum": [ "T1w", "T2w", "BOLD", "Warp", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR", "DWI", "FreeSurfer" ], "title": "DataType", "type": "string" }, "SmoothingImpl": { "description": "Accepted smoothing implementations.\n\n* ``nilearn`` : :func:`nilearn.image.smooth_img`\n* ``afni`` : AFNI's ``3dBlurToFWHM``\n* ``fsl`` : FSL SUSAN's ``susan``", "enum": [ "nilearn", "afni", "fsl" ], "title": "SmoothingImpl", "type": "string" } }, "required": [ "on", "using" ] }
- Config:
use_enum_values: bool = True
- Fields:
  - on (list[Literal[junifer.datagrabber.base.DataType.T1w, junifer.datagrabber.base.DataType.T2w, junifer.datagrabber.base.DataType.BOLD]])
  - smoothing_params (dict | None)
  - using (junifer.preprocess.smoothing.smoothing.SmoothingImpl)
- field using: SmoothingImpl [Required]¶
- preprocess(input, extra_input=None)¶
Preprocess.
- validate_preprocessor_params()¶
Run extra logical validation for preprocessor.
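The `fwhm` parameter is specified as a full-width at half maximum in millimeters; smoothing tools convert it internally to the Gaussian kernel's standard deviation via the standard relation sigma = fwhm / (2 * sqrt(2 * ln 2)) ≈ fwhm / 2.3548. A small sketch (not junifer code) to make the parameter concrete:

```python
import math

def fwhm_to_sigma(fwhm):
    """Convert a Gaussian FWHM (mm) to the kernel standard deviation (mm)."""
    # Standard relation for a Gaussian: fwhm = 2 * sqrt(2 * ln 2) * sigma.
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma = fwhm_to_sigma(6.0)  # 6 mm is a common smoothing choice for BOLD data
```

So `smoothing_params={"fwhm": 6}` corresponds to a kernel with sigma of roughly 2.55 mm.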
- enum junifer.preprocess.SmoothingImpl(value)¶
Accepted smoothing implementations.
- nilearn : nilearn.image.smooth_img()
- afni : AFNI’s 3dBlurToFWHM
- fsl : FSL SUSAN’s susan
- Member Type: str
Valid values are as follows:
- nilearn = <SmoothingImpl.nilearn: 'nilearn'>¶
- afni = <SmoothingImpl.afni: 'afni'>¶
- fsl = <SmoothingImpl.fsl: 'fsl'>¶
- pydantic model junifer.preprocess.SpaceWarper¶
Class for warping data to other template spaces.
- Parameters:
  - using : SpaceWarpingImpl
  - reference : str
    The data type to use as reference for warping, can be either a data type like "T1w" or a template space like "MNI152NLin2009cAsym". Use "T1w" for native space warping and named templates for template space warping.
  - on : list of {DataType.T1w, DataType.T2w, DataType.BOLD, DataType.VBM_GM, DataType.VBM_WM, DataType.VBM_CSF, DataType.FALFF, DataType.GCOR, DataType.LCOR}
    The data type(s) to warp.
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
Show JSON schema
{ "title": "SpaceWarper", "description": "Class for warping data to other template spaces.\n\nParameters\n----------\nusing : :enum:`.SpaceWarpingImpl`\nreference : str\n The data type to use as reference for warping, can be either a data\n type like ``\"T1w\"`` or a template space like ``\"MNI152NLin2009cAsym\"``.\n Use ``\"T1w\"`` for native space warping and named templates for\n template space warping.\non : list of {``DataType.T1w``, ``DataType.T2w``, ``DataType.BOLD``, ``DataType.VBM_GM``, ``DataType.VBM_WM``, ``DataType.VBM_CSF``, ``DataType.FALFF``, ``DataType.GCOR``, ``DataType.LCOR``}\n The data type(s) to warp.", "type": "object", "properties": { "on": { "items": { "enum": [ "T1w", "T2w", "BOLD", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR" ], "type": "string" }, "title": "On", "type": "array" }, "required_data_types": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Required Data Types" }, "using": { "$ref": "#/$defs/SpaceWarpingImpl" }, "reference": { "title": "Reference", "type": "string" } }, "$defs": { "DataType": { "description": "Accepted data type.", "enum": [ "T1w", "T2w", "BOLD", "Warp", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR", "DWI", "FreeSurfer" ], "title": "DataType", "type": "string" }, "SpaceWarpingImpl": { "description": "Accepted space warping implementations.\n\n* ``fsl`` : FSL's ``applywarp``\n* ``ants`` : ANTs' ``antsApplyTransforms``\n* ``auto`` : Auto-select tool when ``reference=\"T1w\"``", "enum": [ "fsl", "ants", "auto" ], "title": "SpaceWarpingImpl", "type": "string" } }, "required": [ "on", "using", "reference" ] }
- Config:
use_enum_values: bool = True
- Fields:
  - on (list[Literal[junifer.datagrabber.base.DataType.T1w, junifer.datagrabber.base.DataType.T2w, junifer.datagrabber.base.DataType.BOLD, junifer.datagrabber.base.DataType.VBM_GM, junifer.datagrabber.base.DataType.VBM_WM, junifer.datagrabber.base.DataType.VBM_CSF, junifer.datagrabber.base.DataType.FALFF, junifer.datagrabber.base.DataType.GCOR, junifer.datagrabber.base.DataType.LCOR]])
  - reference (str)
  - using (junifer.preprocess.warping.space_warper.SpaceWarpingImpl)
- field on: list[Literal[DataType.T1w, DataType.T2w, DataType.BOLD, DataType.VBM_GM, DataType.VBM_WM, DataType.VBM_CSF, DataType.FALFF, DataType.GCOR, DataType.LCOR]] [Required]¶
- field using: SpaceWarpingImpl [Required]¶
- preprocess(input, extra_input=None)¶
Preprocess.
- Parameters:
- Returns:
  - dict
    The computed result as a dictionary.
- Raises:
  - ValueError
    If extra_input is None when transforming to native space, i.e., using "T1w" as reference.
  - RuntimeError
    If the warper could not be found in extra_input when using="auto" or when converting from native space, or if the data is already in the correct space and does not require warping, or if FSL is used when reference="T1w".
- validate_preprocessor_params()¶
Run extra logical validation for preprocessor.
- enum junifer.preprocess.SpaceWarpingImpl(value)¶
Accepted space warping implementations.
- fsl : FSL’s applywarp
- ants : ANTs’ antsApplyTransforms
- auto : auto-select tool when reference="T1w"
- Member Type: str
Valid values are as follows:
- fsl = <SpaceWarpingImpl.fsl: 'fsl'>¶
- ants = <SpaceWarpingImpl.ants: 'ants'>¶
- auto = <SpaceWarpingImpl.auto: 'auto'>¶
- class junifer.preprocess.Strategy¶
Accepted confound removal strategy.
- Fields:
  - motion (Confounds)
  - wm_csf (Confounds)
  - global_signal (Confounds)
  - scrubbing (bool)
- pydantic model junifer.preprocess.TemporalFilter¶
Class for temporal filtering.
Temporal filtering is based on nilearn.image.clean_img().
- Parameters:
  - detrend : bool, optional
    If True, detrending will be applied on timeseries (default True).
  - standardize : bool, optional
    If True, returned signals are set to unit variance (default True).
  - low_pass : float, optional
    Low cutoff frequency, in Hertz. If None, no filtering is applied (default None).
  - high_pass : float, optional
    High cutoff frequency, in Hertz. If None, no filtering is applied (default None).
  - t_r : float, optional
    Repetition time, in seconds (sampling period). If None, it will use t_r from the NIfTI header (default None).
  - masks : list of dict or str, or None, optional
    The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
Show JSON schema
{ "title": "TemporalFilter", "description": "Class for temporal filtering.\n\nTemporal filtering is based on :func:`nilearn.image.clean_img`.\n\nParameters\n----------\ndetrend : bool, optional\n If True, detrending will be applied on timeseries (default True).\nstandardize : bool, optional\n If True, returned signals are set to unit variance (default True).\nlow_pass : float, optional\n Low cutoff frequencies, in Hertz. If None, no filtering is applied\n (default None).\nhigh_pass : float, optional\n High cutoff frequencies, in Hertz. If None, no filtering is\n applied (default None).\nt_r : float, optional\n Repetition time, in second (sampling period).\n If None, it will use t_r from nifti header (default None).\nmasks : list of dict or str, or None, optional\n The specification of the masks to apply to regions before extracting\n signals. Check :ref:`Using Masks <using_masks>` for more details.\n If None, will not apply any mask (default None).", "type": "object", "properties": { "on": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "On" }, "required_data_types": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Required Data Types" }, "detrend": { "default": true, "title": "Detrend", "type": "boolean" }, "standardize": { "default": true, "title": "Standardize", "type": "boolean" }, "low_pass": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Low Pass" }, "high_pass": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "High Pass" }, "t_r": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "T R" }, "masks": { "anyOf": [ { "items": { "anyOf": [ { "additionalProperties": true, "type": "object" }, { "type": "string" } ] }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Masks" } }, "$defs": { 
"DataType": { "description": "Accepted data type.", "enum": [ "T1w", "T2w", "BOLD", "Warp", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR", "DWI", "FreeSurfer" ], "title": "DataType", "type": "string" } } }
- Config:
use_enum_values: bool = True
- Fields:
  - detrend (bool)
  - high_pass (float | None)
  - low_pass (float | None)
  - masks (list[dict | str] | None)
  - standardize (bool)
  - t_r (float | None)
- preprocess(input, extra_input=None)¶
Preprocess.
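Whatever the chosen cutoffs, they must lie below the Nyquist frequency implied by the repetition time, 1 / (2 * t_r). The following sanity check is a hypothetical sketch, not part of junifer:

```python
# Hypothetical sanity check for band-pass settings: every cutoff must be
# below the Nyquist frequency, and high_pass below low_pass for a band-pass.
def check_cutoffs(low_pass, high_pass, t_r):
    nyquist = 1.0 / (2.0 * t_r)  # highest representable frequency, in Hz
    for name, freq in (("low_pass", low_pass), ("high_pass", high_pass)):
        if freq is not None and freq >= nyquist:
            raise ValueError(f"{name}={freq} Hz is >= Nyquist ({nyquist} Hz)")
    if low_pass is not None and high_pass is not None and high_pass >= low_pass:
        raise ValueError("high_pass must be below low_pass for a band-pass")

# A typical resting-state band for t_r = 2.0 s (Nyquist = 0.25 Hz):
check_cutoffs(low_pass=0.08, high_pass=0.01, t_r=2.0)
```

With t_r = 2.0 s, for example, a `low_pass` of 0.3 Hz would exceed the 0.25 Hz Nyquist limit and should be rejected.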
- pydantic model junifer.preprocess.TemporalSlicer¶
Class for temporal slicing.
- Parameters:
  - start : zero or positive float
    Starting time point, in seconds.
  - stop : float or None
    Ending time point, in seconds. If None, stops at the last time point. Negative indexing is also supported, with the same meaning as standard Python slicing, except that it represents time points.
  - duration : float or None, optional
    Time duration to add to start, in seconds. If None, stop is respected, else an error is raised (default None).
  - t_r : float or None, optional
    Repetition time, in seconds (sampling period). If None, it will use t_r from the NIfTI header (default None).
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
Show JSON schema
{ "title": "TemporalSlicer", "description": "Class for temporal slicing.\n\nParameters\n----------\nstart : ``zero`` or positive float\n Starting time point, in second.\nstop : float or None\n Ending time point, in second. If None, stops at the last time point.\n Can also do negative indexing and has the same meaning as standard\n Python slicing except it represents time points.\nduration : float or None, optional\n Time duration to add to ``start``, in second. If None, ``stop`` is\n respected, else error is raised (default None).\nt_r : float or None, optional\n Repetition time, in second (sampling period).\n If None, it will use t_r from nifti header (default None).", "type": "object", "properties": { "on": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "On" }, "required_data_types": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Required Data Types" }, "start": { "anyOf": [ { "const": 0, "type": "integer" }, { "exclusiveMinimum": 0, "type": "number" } ], "title": "Start" }, "stop": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "title": "Stop" }, "duration": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Duration" }, "t_r": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "T R" } }, "$defs": { "DataType": { "description": "Accepted data type.", "enum": [ "T1w", "T2w", "BOLD", "Warp", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR", "DWI", "FreeSurfer" ], "title": "DataType", "type": "string" } }, "required": [ "start", "stop" ] }
- Config:
use_enum_values: bool = True
- Fields:
  - duration (float | None)
  - start (Literal[0] | Annotated[float, annotated_types.Gt(gt=0)])
  - stop (float | None)
  - t_r (float | None)
- preprocess(input, extra_input=None)¶
Preprocess.
- Parameters:
- Returns:
  - dict
    The computed result as a dictionary.
- Raises:
  - RuntimeError
    If no time slicing will be performed, or if stop is not None when duration is provided, or if the calculated stop index is greater than the allowed value.
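The arithmetic behind the start/stop/duration semantics can be sketched as follows. This is an illustrative approximation, not junifer's implementation, which may differ in rounding and validation details:

```python
# Illustrative index arithmetic for temporal slicing: convert time points
# (seconds) into volume indices given the repetition time t_r.
def slice_indices(n_vols, t_r, start, stop=None, duration=None):
    if duration is not None:
        if stop is not None:
            raise RuntimeError("stop must be None when duration is provided")
        stop = start + duration
    start_idx = int(start / t_r)
    if stop is None:
        stop_idx = n_vols  # slice to the last time point
    elif stop < 0:
        stop_idx = n_vols + int(stop / t_r)  # negative stop counts from the end
    else:
        stop_idx = int(stop / t_r)
    if stop_idx > n_vols:
        raise RuntimeError("calculated stop index exceeds the number of volumes")
    return start_idx, stop_idx

# Keep the first 120 s of a t_r = 2.0 s scan with 200 volumes:
idx = slice_indices(n_vols=200, t_r=2.0, start=0, duration=120.0)
```

Here `start=0` with `duration=120.0` selects the first 60 volumes, and a negative `stop` trims time points from the end, mirroring Python slice semantics.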
- pydantic model junifer.preprocess.fMRIPrepConfoundRemover¶
Class for confound removal using fMRIPrep confounds format.
Read confound files and select columns according to a pre-defined strategy.
Confound removal is based on nilearn.image.clean_img().
- Parameters:
  - strategy : Strategy or None, optional
    The strategy to use for each component. If None, will use the full strategy for all components except "scrubbing", which will be set to False (default None). The keys of the dictionary should correspond to names of noise components (Strategy) to include and the values should correspond to types of confounds (Confounds) extracted from each signal.
  - spike : float, optional
    If None, no spike regressor is added. If spike is a float, it will add a spike regressor for every point at which framewise displacement exceeds the specified float (default None).
  - scrub : int, optional
    After accounting for time frames with excessive motion, further remove segments shorter than the given number. When the value is 0, remove time frames based on excessive framewise displacement and DVARS only. If None and no "scrubbing" in strategy, no scrubbing is performed, else the default value is 0. The default value is referred to as full scrubbing (default None).
  - fd_threshold : float, optional
    Framewise displacement threshold for scrub, in mm. If None and no "scrubbing" in strategy, no scrubbing is performed, else the default value is 0.5 (default None).
  - std_dvars_threshold : float, optional
    Standardized DVARS threshold for scrub. DVARS is defined as the root mean squared intensity difference of volume N to volume N+1. D refers to the temporal derivative of timecourses, VARS to the root mean squared variance over voxels. If None and no "scrubbing" in strategy, no scrubbing is performed, else the default value is 1.5 (default None).
  - detrend : bool, optional
    If True, detrending will be applied on timeseries, before confound removal (default True).
  - standardize : bool, optional
    If True, returned signals are set to unit variance (default True).
  - low_pass : float, optional
    Low cutoff frequency, in Hertz. If None, no filtering is applied (default None).
  - high_pass : float, optional
    High cutoff frequency, in Hertz. If None, no filtering is applied (default None).
  - t_r : float, optional
    Repetition time, in seconds (sampling period). If None, it will use t_r from the NIfTI header (default None).
  - masks : list of dict or str, or None, optional
    The specification of the masks to apply to regions before extracting signals. Check Using Masks for more details. If None, will not apply any mask (default None).
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
Show JSON schema
{ "title": "fMRIPrepConfoundRemover", "description": "Class for confound removal using fMRIPrep confounds format.\n\nRead confound files and select columns according to\na pre-defined strategy.\n\nConfound removal is based on :func:`nilearn.image.clean_img`.\n\nParameters\n----------\nstrategy : :class:`.Strategy` or None, optional\n The strategy to use for each component. If None, will use the *full*\n strategy for all components except ``\"scrubbing\"`` which will be set\n to False (default None).\n The keys of the dictionary should correspond to names of noise\n components (Strategy) to include and the values should correspond to\n types of confounds (Confounds) extracted from each signal.\nspike : float, optional\n If None, no spike regressor is added. If spike is a float, it will\n add a spike regressor for every point at which framewise displacement\n exceeds the specified float (default None).\nscrub : int, optional\n After accounting for time frames with excessive motion, further remove\n segments shorter than the given number. When the value is 0, remove\n time frames based on excessive framewise displacement and DVARS only.\n If None and no ``\"scrubbing\"`` in ``strategy``, no scrubbing is\n performed, else the default value is 0. The default value is referred\n as full scrubbing (default None).\nfd_threshold : float, optional\n Framewise displacement threshold for scrub in mm. If None no\n ``\"scrubbing\"`` in ``strategy``, no scrubbing is performed, else the\n default value is 0.5 (default None).\nstd_dvars_threshold : float, optional\n Standardized DVARS threshold for scrub. DVARs is defined as root mean\n squared intensity difference of volume N to volume N+1. D referring to\n temporal derivative of timecourses, VARS referring to root mean squared\n variance over voxels. 
If None and no ``\"scrubbing\"`` in ``strategy``,\n no scrubbing is performed, else the default value is 1.5\n (default None).\ndetrend : bool, optional\n If True, detrending will be applied on timeseries, before confound\n removal (default True).\nstandardize : bool, optional\n If True, returned signals are set to unit variance (default True).\nlow_pass : float, optional\n Low cutoff frequencies, in Hertz. If None, no filtering is applied\n (default None).\nhigh_pass : float, optional\n High cutoff frequencies, in Hertz. If None, no filtering is\n applied (default None).\nt_r : float, optional\n Repetition time, in second (sampling period).\n If None, it will use t_r from nifti header (default None).\nmasks : list of dict or str, or None, optional\n The specification of the masks to apply to regions before extracting\n signals. Check :ref:`Using Masks <using_masks>` for more details.\n If None, will not apply any mask (default None).", "type": "object", "properties": { "on": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "On" }, "required_data_types": { "anyOf": [ { "items": { "$ref": "#/$defs/DataType" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Required Data Types" }, "strategy": { "anyOf": [ { "$ref": "#/$defs/Strategy" }, { "type": "null" } ], "default": null }, "spike": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Spike" }, "scrub": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Scrub" }, "fd_threshold": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Fd Threshold" }, "std_dvars_threshold": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Std Dvars Threshold" }, "detrend": { "default": true, "title": "Detrend", "type": "boolean" }, "standardize": { "default": true, "title": "Standardize", "type": "boolean" }, 
"low_pass": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Low Pass" }, "high_pass": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "High Pass" }, "t_r": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "T R" }, "masks": { "anyOf": [ { "items": { "anyOf": [ { "additionalProperties": true, "type": "object" }, { "type": "string" } ] }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Masks" } }, "$defs": { "Confounds": { "description": "Accepted confounds.\n\n* ``Basic`` : only the confounding time series\n* ``Power2`` : signal + quadratic term\n* ``Derivatives`` : signal + derivatives\n* ``Full`` : signal + deriv. + quadratic terms + power2", "enum": [ "basic", "power2", "derivatives", "full" ], "title": "Confounds", "type": "string" }, "DataType": { "description": "Accepted data type.", "enum": [ "T1w", "T2w", "BOLD", "Warp", "VBM_GM", "VBM_WM", "VBM_CSF", "fALFF", "GCOR", "LCOR", "DWI", "FreeSurfer" ], "title": "DataType", "type": "string" }, "Strategy": { "description": "Accepted confound removal strategy.", "properties": { "motion": { "$ref": "#/$defs/Confounds" }, "wm_csf": { "$ref": "#/$defs/Confounds" }, "global_signal": { "$ref": "#/$defs/Confounds" }, "scrubbing": { "title": "Scrubbing", "type": "boolean" } }, "title": "Strategy", "type": "object" } } }
- Config:
use_enum_values: bool = True
- Fields:
  - detrend (bool)
  - fd_threshold (float | None)
  - high_pass (float | None)
  - low_pass (float | None)
  - masks (list[dict | str] | None)
  - scrub (int | None)
  - spike (float | None)
  - standardize (bool)
  - std_dvars_threshold (float | None)
  - strategy (junifer.preprocess.confounds.fmriprep_confound_remover.Strategy | None)
  - t_r (float | None)
- preprocess(input, extra_input=None)¶
Preprocess.
- validate_preprocessor_params()¶
Run extra logical validation for preprocessor.
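The scrubbing behaviour described by scrub and fd_threshold can be sketched as a two-pass mask. This is an illustrative approximation only; junifer delegates the real computation to nilearn's confound loading, and the actual implementation also considers standardized DVARS:

```python
# Illustrative two-pass scrubbing mask: (1) flag frames whose framewise
# displacement exceeds fd_threshold, (2) drop surviving contiguous
# segments shorter than ``scrub`` frames.
def scrub_mask(fd, fd_threshold=0.5, scrub=5):
    keep = [d <= fd_threshold for d in fd]
    i, n = 0, len(keep)
    while i < n:
        if keep[i]:
            j = i
            while j < n and keep[j]:
                j += 1
            if j - i < scrub:  # surviving segment too short: drop it too
                for k in range(i, j):
                    keep[k] = False
            i = j
        else:
            i += 1
    return keep

# Frame 2 exceeds the FD threshold; the 2-frame segment before it is
# shorter than scrub=3, so it is dropped as well.
mask = scrub_mask([0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1], scrub=3)
```

With scrub=0, only the frames exceeding the thresholds themselves would be removed, matching the "remove time frames based on excessive framewise displacement and DVARS only" behaviour described above.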