asldro.filters package¶
Submodules¶
asldro.filters.acquire_mri_image_filter module¶
AcquireMriImageFilter Class
-
class
asldro.filters.acquire_mri_image_filter.
AcquireMriImageFilter
¶ Bases:
asldro.filters.filter_block.FilterBlock
A filter block that simulates the acquisition of an MRI image based on ground truth inputs.
Combines:
Returns: AddComplexNoiseFilter
Inputs
Input Parameters are all keyword arguments for the
AcquireMriImageFilter.add_inputs()
member function. They are also accessible via class constants, for example AcquireMriImageFilter.KEY_T1.
- Parameters
't1' (BaseImageContainer) – Longitudinal relaxation time in seconds
't2' (BaseImageContainer) – Transverse relaxation time in seconds
't2_star' (BaseImageContainer) – Transverse relaxation time including time-invariant magnetic field inhomogeneities.
'm0' (BaseImageContainer) – Equilibrium magnetisation
'mag_enc' – Added to M0 before relaxation is calculated, provides a means to encode another signal into the MRI signal (non-complex data).
'acq_contrast' (str) – Determines which signal model to use: "ge" (case insensitive) for Gradient Echo, "se" (case insensitive) for Spin Echo, "ir" (case insensitive) for Inversion Recovery.
'echo_time' (float) – The echo time in seconds
'repetition_time' (float) – The repetition time in seconds
'excitation_flip_angle' (float) – Excitation pulse flip angle in degrees. Only used when "acq_contrast" is "ge" or "ir".
'inversion_flip_angle' (float, optional) – Inversion pulse flip angle in degrees. Only used when acq_contrast is "ir".
'inversion_time' – The inversion time in seconds. Only used when acq_contrast is "ir".
'image_flavour' (str, optional) – sets the metadata image_flavour in the output image to this.
'translation' – \([\Delta x,\Delta y,\Delta z]\) amount to translate along the x, y and z axes.
'rotation' (Tuple[float, float, float], optional) – \([\theta_x,\theta_y,\theta_z]\) angles to rotate about the x, y and z axes in degrees (-180 to 180 degrees inclusive).
'rotation_origin' (Tuple[float, float, float], optional) – \([x_r,y_r,z_r]\) coordinates of the point to perform rotations about.
'target_shape' (Tuple[int, int, int], optional) – \([L_t,M_t,N_t]\) target shape for the acquired image
'snr' (float) – the desired signal-to-noise ratio (>= 0). A value of zero means that no noise is added to the input image.
'reference_image' (BaseImageContainer, optional) – The reference image that is used to calculate the amplitude of the random noise to add to ‘image’. The shape of this must match the shape of ‘image’. If this is not supplied then ‘image’ will be used for calculating the noise amplitude.
Outputs
- Parameters
'image' (BaseImageContainer) – Synthesised MRI image.
-
KEY_ACQ_CONTRAST
= 'acq_contrast'¶
-
KEY_ECHO_TIME
= 'echo_time'¶
-
KEY_EXCITATION_FLIP_ANGLE
= 'excitation_flip_angle'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_IMAGE_FLAVOUR
= 'image_flavour'¶
-
KEY_INVERSION_FLIP_ANGLE
= 'inversion_flip_angle'¶
-
KEY_INVERSION_TIME
= 'inversion_time'¶
-
KEY_M0
= 'm0'¶
-
KEY_MAG_ENC
= 'mag_enc'¶
-
KEY_REF_IMAGE
= 'reference_image'¶
-
KEY_REPETITION_TIME
= 'repetition_time'¶
-
KEY_ROTATION
= 'rotation'¶
-
KEY_ROTATION_ORIGIN
= 'rotation_origin'¶
-
KEY_SNR
= 'snr'¶
-
KEY_T1
= 't1'¶
-
KEY_T2
= 't2'¶
-
KEY_T2_STAR
= 't2_star'¶
-
KEY_TARGET_SHAPE
= 'target_shape'¶
-
KEY_TRANSLATION
= 'translation'¶
asldro.filters.add_complex_noise_filter module¶
Add complex noise filter block
-
class
asldro.filters.add_complex_noise_filter.
AddComplexNoiseFilter
¶ Bases:
asldro.filters.filter_block.FilterBlock
A filter that adds normally distributed random noise to the real and imaginary parts of the fourier transform of the input image.
Inputs
Input parameters are all keyword arguments for the
AddComplexNoiseFilter.add_inputs()
member function. They are also accessible via class constants, for example AddComplexNoiseFilter.KEY_SNR.
- Parameters
'image' (BaseImageContainer) – An input image which noise will be added to. Can be either scalar or complex. If it is complex, normally distributed random noise will be added to both real and imaginary parts.
'snr' (float) – the desired signal-to-noise ratio (>= 0). A value of zero means that no noise is added to the input image.
'reference_image' (BaseImageContainer, optional) – The reference image that is used to calculate the amplitude of the random noise to add to ‘image’. The shape of this must match the shape of ‘image’. If this is not supplied then ‘image’ will be used for calculating the noise amplitude.
Outputs
- Parameters
'image' (BaseImageContainer) – The input image with complex noise added.
The noise is added pseudo-randomly based on the state of numpy.random. This should be appropriately controlled prior to running the filter.
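The behaviour described above can be sketched with numpy. The function below is illustrative, not the asldro API, and the noise-amplitude formula (mean absolute value of the reference divided by the SNR, scaled by N because the noise is added in the Fourier domain) is an assumption based on the 'snr' and 'reference_image' descriptions:

```python
import numpy as np

def add_complex_noise(image: np.ndarray, snr: float,
                      reference: np.ndarray = None) -> np.ndarray:
    """Sketch: add normally distributed noise to the real and imaginary
    parts of the Fourier transform of `image`, then transform back."""
    if snr == 0:
        return image  # snr == 0 means no noise is added
    if reference is None:
        reference = image  # 'image' is used for the noise amplitude if no reference
    kdata = np.fft.fftn(image)
    # amplitude from the spatial-domain reference, scaled by N because the
    # noise is added in the inverse (Fourier) domain (assumed formula)
    amplitude = np.mean(np.abs(reference)) / snr * reference.size
    rng = np.random.default_rng(0)  # illustrative fixed seed; control numpy.random as needed
    noise = (rng.normal(0, amplitude, kdata.shape)
             + 1j * rng.normal(0, amplitude, kdata.shape))
    return np.fft.ifftn(kdata + noise)
```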
-
KEY_IMAGE
= 'image'¶
-
KEY_REF_IMAGE
= 'reference_image'¶
-
KEY_SNR
= 'snr'¶
asldro.filters.add_noise_filter module¶
Add noise filter
-
class
asldro.filters.add_noise_filter.
AddNoiseFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that adds normally distributed random noise to an input image.
Inputs
Input parameters are all keyword arguments for the
AddNoiseFilter.add_inputs()
member function. They are also accessible via class constants, for example AddNoiseFilter.KEY_SNR.
- Parameters
'image' (BaseImageContainer) – An input image which noise will be added to. Can be either scalar or complex. If it is complex, normally distributed random noise will be added to both real and imaginary parts.
'snr' (float) – the desired signal-to-noise ratio (>= 0). A value of zero means that no noise is added to the input image.
'reference_image' (BaseImageContainer, optional) – The reference image that is used to calculate the amplitude of the random noise to add to ‘image’. The shape of this must match the shape of ‘image’. If this is not supplied then ‘image’ will be used for calculating the noise amplitude.
Outputs
- Parameters
'image' (BaseImageContainer) – The input image with noise added.
‘reference_image’ can be in a different data domain to the ‘image’. For example, ‘image’ might be in the inverse domain (i.e. fourier transformed) whereas ‘reference_image’ is in the spatial domain. Where data domains differ the following scaling is applied to the noise amplitude:
‘image’ is SPATIAL_DOMAIN and ‘reference_image’ is INVERSE_DOMAIN: 1/N
‘image’ is INVERSE_DOMAIN and ‘reference_image’ is SPATIAL_DOMAIN: N
Where N is reference_image.image.size
The noise is added pseudo-randomly based on the state of numpy.random. This should be appropriately controlled prior to running the filter.
Note that the actual SNR (as calculated using “A comparison of two methods for measuring the signal to noise ratio on MR images”, PMB, vol 44, no. 12, pp.N261-N264 (1999)) will not match the desired SNR under the following circumstances:
‘image’ is SPATIAL_DOMAIN and ‘reference_image’ is INVERSE_DOMAIN
‘image’ is INVERSE_DOMAIN and ‘reference_image’ is SPATIAL_DOMAIN
In the second case, performing an inverse Fourier transform on the output image with noise results in a spatial domain image where the calculated SNR matches the desired SNR. This is how the AddNoiseFilter is used within the AddComplexNoiseFilter.
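The domain-dependent amplitude scaling listed above can be sketched as follows. The base amplitude (mean absolute reference value divided by SNR) is an assumption; only the 1/N and N factors are stated by the documentation:

```python
import numpy as np

SPATIAL_DOMAIN = "SPATIAL_DOMAIN"
INVERSE_DOMAIN = "INVERSE_DOMAIN"

def noise_amplitude(reference: np.ndarray, snr: float,
                    image_domain: str, reference_domain: str) -> float:
    """Sketch of the noise amplitude calculation with the documented
    domain scaling, where N is reference.size."""
    amplitude = np.mean(np.abs(reference)) / snr  # assumed base formula
    n = reference.size
    if image_domain == SPATIAL_DOMAIN and reference_domain == INVERSE_DOMAIN:
        amplitude /= n   # documented factor 1/N
    elif image_domain == INVERSE_DOMAIN and reference_domain == SPATIAL_DOMAIN:
        amplitude *= n   # documented factor N
    return amplitude
```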
-
KEY_IMAGE
= 'image'¶
-
KEY_REF_IMAGE
= 'reference_image'¶
-
KEY_SNR
= 'snr'¶
asldro.filters.affine_matrix_filter module¶
Affine Matrix Filter
-
class
asldro.filters.affine_matrix_filter.
AffineMatrixFilter
(name: str = 'Unknown')¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that creates an affine transformation matrix based on input parameters for rotation, translation, and scaling.
Conventions are for RAS+ coordinate systems only
Inputs
Input Parameters are all keyword arguments for the
AffineMatrixFilter.add_inputs()
member function. They are also accessible via class constants, for example AffineMatrixFilter.KEY_ROTATION.
- Parameters
'rotation' (Tuple[float, float, float], optional) – [\(\theta_x\), \(\theta_y\), \(\theta_z\)] angles to rotate about the x, y and z axes in degrees (-180 to 180 degrees inclusive), defaults to (0, 0, 0)
'rotation_origin' (Tuple[float, float, float], optional) – [\(x_r\), \(y_r\), \(z_r\)] coordinates of the point to perform rotations about, defaults to (0, 0, 0)
'translation' (Tuple[float, float, float], optional) – [\(\Delta x\), \(\Delta y\), \(\Delta z\)] amount to translate along the x, y and z axes, defaults to (0, 0, 0)
'scale' (Tuple[float, float, float], optional) – [\(s_x\), \(s_y\), \(s_z\)] scaling factors along each axis, defaults to (1, 1, 1)
'affine' (np.ndarray(4), optional) – 4x4 affine matrix to apply transformation to, defaults to numpy.eye(4)
'affine_last' (np.ndarray(4), optional) – input 4x4 affine matrix that is applied last, defaults to numpy.eye(4)
Outputs
Once run, the filter will populate the dictionary
AffineMatrixFilter.outputs
with the following entries:
- Parameters
'affine' (np.ndarray(4)) – 4x4 affine matrix with all transformations combined.
'affine_inverse' (np.ndarray(4)) – 4x4 affine matrix that is the inverse of ‘affine’
The output affine matrix is calculated as follows:
\[\begin{split}&\mathbf{M} = \mathbf{B}\mathbf{S}\mathbf{T}\mathbf{T_{r}}\mathbf{R_z} \mathbf{R_y}\mathbf{R_x}\mathbf{T_{r}}^{-1}\mathbf{M_\text{in}}\\ \\ \text{where,}&\\ &\mathbf{M_\text{in}} = \text{Existing affine matrix}\\ &\mathbf{B} = \text{Affine matrix to combine last}\\ &\mathbf{S} = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0& 0 & 0& 1 \end{pmatrix}=\text{scaling matrix}\\ &\mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & \Delta x \\ 0 & 1& 0 & \Delta y \\ 0 & 0 & 1& \Delta z \\ 0& 0 & 0& 1 \end{pmatrix}=\text{translation matrix}\\ &\mathbf{T_r} = \begin{pmatrix} 1 & 0 & 0 & x_r \\ 0 & 1& 0 & y_r \\ 0 & 0 & 1& z_r \\ 0& 0 & 0& 1 \end{pmatrix}= \text{translation to rotation centre matrix}\\ &\mathbf{R_x} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & \cos{\theta_x}& -\sin{\theta_x} & 0\\ 0 & \sin{\theta_x} & \cos{\theta_x}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about x matrix}\\ &\mathbf{R_y} = \begin{pmatrix} \cos{\theta_y} & 0 & \sin{\theta_y} & 0\\ 0 & 1 & 0 & 0\\ -\sin{\theta_y} & 0 & \cos{\theta_y}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about y matrix}\\ &\mathbf{R_z} = \begin{pmatrix} \cos{\theta_z}& -\sin{\theta_z} & 0 & 0\\ \sin{\theta_z} & \cos{\theta_z}& 0 &0\\ 0& 0& 1 & 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about z matrix}\\\end{split}\]
-
KEY_AFFINE
= 'affine'¶
-
KEY_AFFINE_INVERSE
= 'affine_inverse'¶
-
KEY_AFFINE_LAST
= 'affine_last'¶
-
KEY_ROTATION
= 'rotation'¶
-
KEY_ROTATION_ORIGIN
= 'rotation_origin'¶
-
KEY_SCALE
= 'scale'¶
-
KEY_TRANSLATION
= 'translation'¶
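The matrix composition above can be sketched with numpy. This is an illustrative transcription of the documented equation M = B S T Tr Rz Ry Rx Tr⁻¹ M_in, not the asldro implementation:

```python
import numpy as np

def affine_matrix(rotation=(0.0, 0.0, 0.0), rotation_origin=(0.0, 0.0, 0.0),
                  translation=(0.0, 0.0, 0.0), scale=(1.0, 1.0, 1.0),
                  affine=np.eye(4), affine_last=np.eye(4)) -> np.ndarray:
    """Sketch of M = B @ S @ T @ Tr @ Rz @ Ry @ Rx @ inv(Tr) @ M_in."""
    tx, ty, tz = np.radians(rotation)
    # rotation matrices about x, y and z (RAS+ conventions)
    rx = np.array([[1, 0, 0, 0], [0, np.cos(tx), -np.sin(tx), 0],
                   [0, np.sin(tx), np.cos(tx), 0], [0, 0, 0, 1]])
    ry = np.array([[np.cos(ty), 0, np.sin(ty), 0], [0, 1, 0, 0],
                   [-np.sin(ty), 0, np.cos(ty), 0], [0, 0, 0, 1]])
    rz = np.array([[np.cos(tz), -np.sin(tz), 0, 0], [np.sin(tz), np.cos(tz), 0, 0],
                   [0, 0, 1, 0], [0, 0, 0, 1]])
    t = np.eye(4); t[:3, 3] = translation        # translation matrix T
    tr = np.eye(4); tr[:3, 3] = rotation_origin  # translation to rotation centre Tr
    s = np.diag([*scale, 1.0])                   # scaling matrix S
    return affine_last @ s @ t @ tr @ rz @ ry @ rx @ np.linalg.inv(tr) @ affine
```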
asldro.filters.append_metadata_filter module¶
AppendMetadataFilter
-
class
asldro.filters.append_metadata_filter.
AppendMetadataFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that can add key-value pairs to the metadata dictionary property of an image container. If the supplied key already exists the old value will be overwritten with the new value. The input image container is modified and a reference passed to the output, i.e. no copy is made.
Inputs
Input Parameters are all keyword arguments for the
AppendMetadataFilter.add_inputs()
member function. They are also accessible via class constants, for example AppendMetadataFilter.KEY_METADATA.
- Parameters
'image' (BaseImageContainer) – The input image to append the metadata to
'metadata' (dict) – dictionary of key-value pairs to append to the metadata property of the input image.
Outputs
Once run, the filter will populate the dictionary
AppendMetadataFilter.outputs
with the following entries:
- Parameters
'image' – The input image, with the input metadata merged.
-
KEY_IMAGE
= 'image'¶
-
KEY_METADATA
= 'metadata'¶
asldro.filters.basefilter module¶
BaseFilter classes and exception handling
-
class
asldro.filters.basefilter.
BaseFilter
(name: str = 'Unknown')¶ Bases:
abc.ABC
An abstract base class for filters. All filters should inherit this class
-
add_child_filter
(child: asldro.filters.basefilter.BaseFilter, io_map: Mapping[str, str] = None)¶ See documentation for add_parent_filter
-
add_input
(key: str, value)¶ Adds an input with a given key and value. If the key is already in the inputs, a RuntimeError is raised
-
add_inputs
(input_dict: Mapping[str, Any], io_map: Mapping[str, str] = None, io_map_optional: bool = False)¶ Adds multiple inputs via a dictionary. Optionally, maps the dictionary onto different input keys using an io_map.
:param input_dict: The input dictionary
:param io_map: The dictionary used to perform the mapping. All keys and values must be strings. For example: {"one": "two", "three": "four"} will map input keys of "one" to "two" AND "three" to "four". If io_map is None, no mapping will be performed.
:param io_map_optional: If this is False, a KeyError will be raised if the keys in the io_map are not found in the input_dict.
:raises KeyError: if keys required in the mapping are not found in the input_dict
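The io_map remapping described above can be sketched as a plain function. This is illustrative, not the asldro implementation:

```python
def apply_io_map(input_dict: dict, io_map: dict = None,
                 io_map_optional: bool = False) -> dict:
    """Sketch of add_inputs' io_map behaviour: rename keys of input_dict
    according to io_map, or pass the dictionary through unchanged."""
    if io_map is None:
        return dict(input_dict)  # no mapping performed
    if not io_map_optional:
        # strict mode: every io_map key must be present in input_dict
        missing = [k for k in io_map if k not in input_dict]
        if missing:
            raise KeyError(f"io_map keys not found in input_dict: {missing}")
    return {new_key: input_dict[old_key]
            for old_key, new_key in io_map.items() if old_key in input_dict}
```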
-
add_parent_filter
(parent: asldro.filters.basefilter.BaseFilter, io_map: Mapping[str, str] = None)¶ Add a parent filter (the inputs of this filter will be connected to the outputs of the parent). By default, ALL outputs of the parent will be directly mapped to the inputs of this filter using the same KEY. This can be overridden by supplying io_map. e.g. io_map = {"output_key1": "input_key1", "output_key2": "input_key2", ...} will map the output of the parent filter with a key of "output_key1" to the input of this filter with a key of "input_key1", etc. If io_map is defined, ONLY those keys which are explicitly listed are mapped (the others are ignored).
-
run
(history=None)¶ Calls the run method on all parents (recursively) to make sure they are up-to-date. Then maps the parents’ outputs to inputs for this filter. Then calls the _run method on this filter.
-
-
exception
asldro.filters.basefilter.
BaseFilterException
(msg: str)¶ Bases:
Exception
Exceptions for this module
-
exception
asldro.filters.basefilter.
FilterInputKeyError
¶ Bases:
Exception
Used to show an error with a filter’s input keys e.g. multiple values have been assigned to the same input
-
exception
asldro.filters.basefilter.
FilterInputValidationError
¶ Bases:
Exception
Used to show an error when running the validation on the filter’s inputs i.e. when running _validate_inputs()
-
exception
asldro.filters.basefilter.
FilterLoopError
¶ Bases:
Exception
Used when a loop is detected in the filter chain
asldro.filters.bids_output_filter module¶
BidsOutputFilter
-
class
asldro.filters.bids_output_filter.
BidsOutputFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that will output an input image container in Brain Imaging Data Structure (BIDS) format.
BIDS comprises a NIFTI image file and an accompanying .json sidecar that contains additional parameters. More information on BIDS can be found at https://bids.neuroimaging.io/
Inputs
Input Parameters are all keyword arguments for the
BidsOutputFilter.add_inputs()
member function. They are also accessible via class constants, for example BidsOutputFilter.KEY_IMAGE.
- Parameters
'image' (BaseImageContainer) – the image to save in BIDS format
'output_directory' (str) – The root directory to save to
'filename_prefix' (str, optional) – string to prefix the filename with.
Outputs
Once run, the filter will populate the dictionary
BidsOutputFilter.outputs
with the following entries:
- Parameters
'filename' (str) – the filename of the saved file
'sidecar' (dict) – the fields that make up the output *.json file.
Files will be saved in subdirectories corresponding to the metadata entry
series_type:
‘structural’ will be saved in the subdirectory ‘anat’
‘asl’ will be saved in the subdirectory ‘asl’
‘ground_truth’ will be saved in the subdirectory ‘ground_truth’
Filenames will be given by: <series_number>_<filename_prefix>_<modality_label>.<ext>, where:
<series_number> is given by the metadata field series_number, which is an integer and will be prefixed by zeros so that it is 3 characters long, for example 003, 010, 243.
<filename_prefix> is the string supplied by the input filename_prefix.
<modality_label> is determined based on series_type:
‘structural’: it is given by the metadata field modality.
‘asl’: it is determined by asl_context. If asl_context only contains entries that match ‘m0scan’ then it will be set to ‘m0scan’, otherwise ‘asl’.
‘ground_truth’: it will be a concatenation of ‘ground_truth_’ + the metadata field quantity, e.g. ‘ground_truth_t1’.
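The filename rule above can be sketched directly. The function and its `ext` default are illustrative; the documentation leaves the extension as `<ext>`:

```python
def bids_filename(series_number: int, filename_prefix: str,
                  modality_label: str, ext: str = "nii.gz") -> str:
    """Sketch of <series_number>_<filename_prefix>_<modality_label>.<ext>,
    with the series number zero-padded to 3 characters (e.g. 003, 010, 243)."""
    return f"{series_number:03d}_{filename_prefix}_{modality_label}.{ext}"
```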
Image Metadata
The input image must have certain metadata fields present, these being dependent on the series_type.
- Parameters
'series_type' (str) – Describes the type of series. Either ‘asl’, ‘structural’ or ‘ground_truth’.
'modality' (string) – modality of the image series, only required by ‘structural’.
'series_number' (int) – number to identify the image series by. If multiple image series are being saved with similar parameters, such that their filenames and BIDS fields would be identical, providing a unique series number will address this.
'quantity' (str) – (‘ground_truth’ only) name of the quantity that the image is a map of.
'units' (str) – (‘ground_truth’ only) units the quantity is in.
If series_type and modality_label are both ‘asl’ then the following metadata entries are required:
- Parameters
'label_type' – describes the type of ASL labelling.
'label_duration' (float) – duration of the labelling pulse in seconds.
'post_label_delay' – delay time following the labelling pulse before the acquisition in seconds.
'label_efficiency' (float) – the degree of inversion of the magnetisation (between 0 and 1)
'image_flavour' (str) – a string that is used as the third entry in the BIDS field ImageType (corresponding with the DICOM tag (0008,0008)). For ASL images this should be ‘PERFUSION’.
-
ACQ_CONTRAST_MAPPING
= {'ge': 'GR', 'ir': 'IR', 'se': 'SE'}¶
-
ACQ_DATE_TIME
= 'AcquisitionDateTime'¶
-
BIDS_MAPPING
= {'acq_contrast': 'ScanningSequence', 'echo_time': 'EchoTime', 'excitation_flip_angle': 'FlipAngle', 'inversion_time': 'InversionTime', 'label_duration': 'LabelingDuration', 'label_efficiency': 'LabelingEfficiency', 'label_type': 'LabelingType', 'magnetic_field_strength': 'MagneticFieldStrength', 'mr_acq_type': 'MrAcquisitionType', 'post_label_delay': 'PostLabelingDelay', 'quantity': 'Quantity', 'repetition_time': 'RepetitionTime', 'segmentation': 'LabelMap', 'series_description': 'SeriesDescription', 'series_number': 'SeriesNumber', 'units': 'Units', 'voxel_size': 'AcquisitionVoxelSize'}¶
-
COMPLEX_IMAGE_COMPONENT_MAPPING
= {'COMPLEX_IMAGE_TYPE': 'COMPLEX', 'IMAGINARY_IMAGE_TYPE': 'IMAGINARY', 'MAGNITUDE_IMAGE_TYPE': 'MAGNITUDE', 'PHASE_IMAGE_TYPE': 'PHASE', 'REAL_IMAGE_TYPE': 'REAL'}¶
-
DRO_SOFTWARE
= 'DROSoftware'¶
-
DRO_SOFTWARE_URL
= 'DROSoftwareUrl'¶
-
DRO_SOFTWARE_VERSION
= 'DROSoftwareVersion'¶
-
IMAGE_TYPE
= 'ImageType'¶
-
KEY_FILENAME
= 'filename'¶
-
KEY_FILENAME_PREFIX
= 'filename_prefix'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_OUTPUT_DIRECTORY
= 'output_directory'¶
-
KEY_SIDECAR
= 'sidecar'¶
-
LABEL_MAP_MAPPING
= {'background': 'BG', 'csf': 'CSF', 'grey_matter': 'GM', 'lesion': 'L', 'vascular': 'VS', 'white_matter': 'WM'}¶
-
SERIES_DESCRIPTION
= 'series_description'¶
-
SERIES_NUMBER
= 'series_number'¶
-
SERIES_TYPE
= 'series_type'¶
-
static
determine_asl_modality_label
(asl_context: Union[str, List[str]]) → str¶ Function that determines the modality_label for asl image types based on an input asl_context list
- Parameters
asl_context (Union[str, List[str]]) – either a single string or list of asl context strings, e.g. [“m0scan”, “control”, “label”]
- Returns
a string determining the asl context, either “asl” or “m0scan”
- Return type
str
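The rule this function applies can be sketched as follows (illustrative, not the asldro source; a single string is treated here as a one-element list):

```python
from typing import List, Union

def determine_asl_modality_label(asl_context: Union[str, List[str]]) -> str:
    """Sketch: return 'm0scan' only when every entry in asl_context is
    'm0scan', otherwise return 'asl'."""
    if isinstance(asl_context, str):
        asl_context = [asl_context]  # assumed handling of the single-string case
    return "m0scan" if all(c == "m0scan" for c in asl_context) else "asl"
```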
asldro.filters.combine_time_series_filter module¶
Combine Time Series Filter
-
class
asldro.filters.combine_time_series_filter.
CombineTimeSeriesFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that takes, as input, a set of ImageContainers. These should each represent a single time point in a time series acquisition. As an output, these ImageContainers will be concatenated across the 4th (time) dimension and their metadata combined with the following rules: if all values of a given field are the same across the time series, use that value in the output metadata; else, concatenate the values in a list. Instance variables of the BaseImageContainer, such as image_flavour, will all be checked for consistency and copied across to the output image.
Inputs
Input Parameters are all keyword arguments for the
CombineTimeSeriesFilter.add_inputs()
member function. They are also accessible via class constants, for example CombineTimeSeriesFilter.KEY_IMAGE.
- Parameters
'image_NNNNN' (BaseImageContainer) – A time-series image. The order of these time series will be determined by the NNNNN component, which shall be a positive integer. Any number of digits can be used in NNNNN. For example, the sequence image_0000, image_1, image_002, image_03 is valid. NOTE: the indices MUST start from 0 and increment by 1, with no missing or duplicate indices. This is to help prevent accidentally missing/adding an index value.
Outputs
Once run, the filter will populate the dictionary
CombineTimeSeriesFilter.outputs
with the following entries:
- Parameters
'image' (BaseImageContainer) – A 4D image of the combined time series.
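The index-ordering and metadata-combination rules above can be sketched using the documented regular expression. The helper names are illustrative, not the asldro implementation:

```python
import re

# pattern documented as INPUT_IMAGE_REGEX_OBJ
INPUT_IMAGE_REGEX = re.compile(r"^image_(?P<index>[0-9]+)$")

def ordered_image_keys(inputs: dict) -> list:
    """Sketch: collect 'image_NNNNN' keys and order them by integer index,
    enforcing indices 0, 1, 2, ... with no gaps or duplicates."""
    pairs = sorted((int(m.group("index")), key) for key in inputs
                   if (m := INPUT_IMAGE_REGEX.match(key)))
    indices = [i for i, _ in pairs]
    if indices != list(range(len(indices))):
        raise ValueError("image indices must start at 0 and increment by 1")
    return [key for _, key in pairs]

def combine_metadata(metadata_list: list) -> dict:
    """Sketch of the metadata rule: keep a value if it is identical at every
    time point, otherwise concatenate the values into a list.
    Assumes every time point carries the same metadata keys."""
    combined = {}
    for key in metadata_list[0]:
        values = [m[key] for m in metadata_list]
        combined[key] = values[0] if all(v == values[0] for v in values) else values
    return combined
```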
-
INPUT_IMAGE_REGEX_OBJ
= re.compile('^image_(?P<index>[0-9]+)$')¶
-
KEY_IMAGE
= 'image'¶
asldro.filters.filter_block module¶
FilterBlock class
-
class
asldro.filters.filter_block.
FilterBlock
(name: str = 'Unknown')¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter made from multiple, chained filters. Used when the same configuration of filters is used multiple times, or needs to be tested as a whole.
-
run
(history=None)¶ Calls the BaseFilter’s run method to make sure all of the inputs of this FilterBlock are up-to-date and valid. Then runs this FilterBlock’s output filter, and populates the outputs to this FilterBlock.
-
asldro.filters.fourier_filter module¶
Fourier Transform filter
-
class
asldro.filters.fourier_filter.
FftFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for performing an n-dimensional fast Fourier transform of the input. Input is either a NumpyImageContainer or NiftiImageContainer. Output is a complex numpy array of the discrete Fourier transform, named ‘kdata’
-
KEY_IMAGE
= 'image'¶
-
-
class
asldro.filters.fourier_filter.
IfftFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for performing an n-dimensional inverse fast Fourier transform of the input. Input is a numpy array named ‘kdata’. Output is a complex numpy array of the inverse discrete Fourier transform, named ‘image’
-
KEY_IMAGE
= 'image'¶
-
asldro.filters.gkm_filter module¶
General Kinetic Model Filter
-
class
asldro.filters.gkm_filter.
GkmFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that generates the ASL signal using the General Kinetic Model. From: Buxton et. al, ‘A general kinetic model for quantitative perfusion imaging with arterial spin labeling’, Magnetic Resonance in Medicine, vol. 40, no. 3, pp. 383-396, 1998. https://doi.org/10.1002/mrm.1910400308
Inputs
Input Parameters are all keyword arguments for the
GkmFilter.add_inputs()
member function. They are also accessible via class constants, for example GkmFilter.KEY_PERFUSION_RATE.
- Parameters
'perfusion_rate' (BaseImageContainer) – Map of perfusion rate, in ml/100g/min (>=0)
'transit_time' – Map of the time taken for the labelled bolus to reach the voxel, seconds (>=0).
'm0' – The tissue equilibrium magnetisation, can be a map or single value (>=0).
'label_type' (str) – Determines which GKM equations to use: “casl” OR “pcasl” (case insensitive) for the continuous model; “pasl” (case insensitive) for the pulsed model
'label_duration' (float) – The length of the labelling pulse, seconds (0 to 100 inclusive)
'signal_time' (float) – The time after labelling commences to generate signal, seconds (0 to 100 inclusive)
'label_efficiency' (float) – The degree of inversion of the labelling (0 to 1 inclusive)
'lambda_blood_brain' (float) – The blood-brain-partition-coefficient (0 to 1 inclusive)
't1_arterial_blood' (float) – Longitudinal relaxation time of arterial blood, seconds (0 exclusive to 100 inclusive)
't1_tissue' (BaseImageContainer) – Longitudinal relaxation time of the tissue, seconds (0 to 100 inclusive; however, voxels with t1 = 0 will have delta_m = 0)
Outputs
Once run, the filter will populate the dictionary
GkmFilter.outputs
with the following entries:
- Parameters
'delta_m' (BaseImageContainer) – An image with synthetic ASL perfusion contrast. This will be the same class as the input ‘perfusion_rate’
The following parameters are added to
GkmFilter.outputs["delta_m"].metadata:
label_type
label_duration
post_label_delay
label_efficiency
lambda_blood_brain
t1_arterial_blood
post_label_delay
is calculated as signal_time - label_duration.
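A single-voxel sketch of the continuous-labelling (CASL/pCASL) case can be written from the Buxton reference cited above. The piecewise form below is the standard published solution, not lifted from the asldro source; the unit conversion (ml/100g/min → ml/g/s via /6000) and taking the arterial equilibrium magnetisation as m0 / lambda_blood_brain are assumptions:

```python
from math import exp

def gkm_casl_delta_m(perfusion_rate, transit_time, m0, label_duration,
                     signal_time, label_efficiency, lambda_blood_brain,
                     t1_arterial_blood, t1_tissue):
    """Sketch of the Buxton general kinetic model, continuous labelling."""
    f = perfusion_rate / 6000.0  # ml/100g/min -> ml/g/s (assumed conversion)
    if f == 0.0 or t1_tissue == 0.0:
        return 0.0  # voxels with t1 = 0 have delta_m = 0
    # apparent tissue relaxation: 1/T1' = 1/T1 + f/lambda
    t1_prime = 1.0 / (1.0 / t1_tissue + f / lambda_blood_brain)
    m0_blood = m0 / lambda_blood_brain  # assumed arterial M0
    t, tau, dt = signal_time, label_duration, transit_time
    if t < dt:
        return 0.0  # bolus has not yet arrived
    q = (2.0 * m0_blood * f * t1_prime * label_efficiency
         * exp(-dt / t1_arterial_blood))
    if t < dt + tau:
        # during bolus arrival
        return q * (1.0 - exp(-(t - dt) / t1_prime))
    # after the bolus has completely arrived
    return q * exp(-(t - dt - tau) / t1_prime) * (1.0 - exp(-tau / t1_prime))
```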
-
CASL
= 'casl'¶
-
KEY_DELTA_M
= 'delta_m'¶
-
KEY_LABEL_DURATION
= 'label_duration'¶
-
KEY_LABEL_EFFICIENCY
= 'label_efficiency'¶
-
KEY_LABEL_TYPE
= 'label_type'¶
-
KEY_LAMBDA_BLOOD_BRAIN
= 'lambda_blood_brain'¶
-
KEY_M0
= 'm0'¶
-
KEY_PERFUSION_RATE
= 'perfusion_rate'¶
-
KEY_POST_LABEL_DELAY
= 'post_label_delay'¶
-
KEY_SIGNAL_TIME
= 'signal_time'¶
-
KEY_T1_ARTERIAL_BLOOD
= 't1_arterial_blood'¶
-
KEY_T1_TISSUE
= 't1_tissue'¶
-
KEY_TRANSIT_TIME
= 'transit_time'¶
-
PASL
= 'pasl'¶
-
PCASL
= 'pcasl'¶
asldro.filters.ground_truth_loader module¶
Ground truth loader filter
-
class
asldro.filters.ground_truth_loader.
GroundTruthLoaderFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for loading ground truth NIFTI/JSON file pairs.
Inputs
Input Parameters are all keyword arguments for the
GroundTruthLoaderFilter.add_inputs()
member function. They are also accessible via class constants, for example GroundTruthLoaderFilter.KEY_IMAGE.
- Parameters
'image' (NiftiImageContainer) – ground truth image; must be 5D, with the 5th dimension having the same length as the number of quantities.
'quantities' (list[str]) – list of quantity names
'units' (list[str]) – list of units corresponding to the quantities, must be the same length as quantities
'parameters' (dict) – dictionary containing keys “t1_arterial_blood”, “lambda_blood_brain” and “magnetic_field_strength”.
'segmentation' – dictionary containing key-value pairs corresponding to tissue type and label value in the “seg_label” volume.
'image_override' (dict, optional) – dictionary containing single-value override values for any of the ‘image’s that are loaded. The keys must match the quantity name defined in ‘quantities’.
'parameter_override' (dict, optional) – dictionary containing single-value override values for any of the ‘parameters’ that are loaded. The keys must match the key defined in ‘parameters’.
'ground_truth_modulate' (dict, optional) – dictionary with keys corresponding with quantity names. The possible dictionary values (both optional) are: {“scale”: N, “offset”: M}. Any corresponding images will have the corresponding scale and offset applied before being output. See ScaleOffsetFilter for more details.
Outputs
Once run, the filter will populate the dictionary
GroundTruthLoaderFilter.outputs
with output fields based on the input ‘quantities’. Each key in ‘quantities’ will result in a NiftiImageContainer corresponding to a 3D/4D subset of the nifti input (split along the 5th dimension). The data types of images will be the same as those input EXCEPT for a quantity labelled “seg_label”, which will be converted to a uint16 data type. If ‘image_override’ is defined, the corresponding ‘image’ will be set to the overriding value before being output. If ‘parameter_override’ is defined, the corresponding parameter will be set to the overriding value before being output. If ‘ground_truth_modulate’ is defined, the corresponding ‘image’(s) will be scaled and/or offset by the corresponding values. The key-value pairs in the input ‘parameters’ will also be destructured and piped through to the output, for example:
- Parameters
't1' (NiftiImageContainer) – volume of T1 relaxation times
'seg_label' (NiftiImageContainer, uint16 data type) – segmentation label mask corresponding to different tissue types.
'magnetic_field_strength' (float) – the magnetic field strength in Tesla.
't1_arterial_blood' (float) – the T1 relaxation time of arterial blood.
'lambda_blood_brain' (float) – the blood-brain-partition-coefficient.
A metadata field will be created in each image container, with the following fields:
magnetic_field_strength: corresponds to the value in the “parameters” object.
quantity: corresponds to the entry in the “quantities” array.
units: corresponds with the entry in the “units” array.
The “segmentation” object from the JSON file will also be piped through to the metadata entry of the “seg_label” image container.
-
KEY_GROUND_TRUTH_MODULATE
= 'ground_truth_modulate'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_IMAGE_OVERRIDE
= 'image_override'¶
-
KEY_MAG_STRENGTH
= 'magnetic_field_strength'¶
-
KEY_PARAMETERS
= 'parameters'¶
-
KEY_PARAMETER_OVERRIDE
= 'parameter_override'¶
-
KEY_QUANTITIES
= 'quantities'¶
-
KEY_QUANTITY
= 'quantity'¶
-
KEY_SEGMENTATION
= 'segmentation'¶
-
KEY_UNITS
= 'units'¶
asldro.filters.invert_image_filter module¶
Invert image filter
-
class
asldro.filters.invert_image_filter.
InvertImageFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter which simply inverts the input image.
Must have one input named ‘image’. This corresponds with a derivative of BaseImageContainer.
Creates a single output named ‘image’.
asldro.filters.json_loader module¶
JSON file loader filter
-
class
asldro.filters.json_loader.
JsonLoaderFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for loading a JSON file.
Inputs
Input parameters are all keyword arguments for the
JsonLoaderFilter.add_inputs()
member function. They are also accessible via class constants, for example JsonLoaderFilter.KEY_FILENAME.
- Parameters
'filename' (str) – The path to the JSON file to load
'schema' (dict, optional) – The schema to validate against (in python dict format). Some schemas can be found in asldro.validators.schemas, or one can simply be input here.
Outputs
Creates multiple outputs, based on the root key-value pairs in the JSON file. For example, { “foo”: 1, “bar”: “test” } will create two outputs named “foo” and “bar”, with integer and string values respectively. The outputs may also be nested, i.e. objects or arrays.
asldro.filters.mri_signal_filter module¶
MRI Signal Filter
-
class
asldro.filters.mri_signal_filter.
MriSignalFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that generates either the Gradient Echo, Spin Echo or Inversion Recovery MRI signal.
Gradient echo is with arbitrary excitation flip angle.
Spin echo assumes perfect 90° excitation and 180° refocusing pulses.
Inversion recovery can have arbitrary inversion pulse and excitation pulse flip angles.
Inputs
Input Parameters are all keyword arguments for the
MriSignalFilter.add_inputs()
member function. They are also accessible via class constants, for example MriSignalFilter.KEY_T1.
- Parameters
't1' (BaseImageContainer) – Longitudinal relaxation time in seconds (>=0, non-complex data)
't2' (BaseImageContainer) – Transverse relaxation time in seconds (>=0, non-complex data)
't2_star' (BaseImageContainer) – Transverse relaxation time including time-invariant magnetic field inhomogeneities, only required for gradient echo (>=0, non-complex data)
'm0' (BaseImageContainer) – Equilibrium magnetisation (>=0, non-complex data)
'mag_enc' – Added to M0 before relaxation is calculated, provides a means to encode another signal into the MRI signal (non-complex data)
'acq_contrast' (str) – Determines which signal model to use: "ge" (case insensitive) for Gradient Echo, "se" (case insensitive) for Spin Echo, "ir" (case insensitive) for Inversion Recovery.
'echo_time' (float) – The echo time in seconds (>=0)
'repetition_time' (float) – The repetition time in seconds (>=0)
'excitation_flip_angle' (float, optional) – Excitation pulse flip angle in degrees. Only used when
"acq_contrast"
is"ge"
or"ir"
. Defaults to 90.0'inversion_flip_angle' (float, optional) – Inversion pulse flip angle in degrees. Only used when
acq_contrast
is"ir"
. Defaults to 180.0'inversion_time' – The inversion time in seconds. Only used when
acq_contrast
is"ir"
. Defaults to 1.0.'image_flavour' (str) – sets the metadata
image_flavour
in the output image to this.
Outputs
Once run, the filter will populate the dictionary MriSignalFilter.outputs with the following entries
- Parameters
'image' (BaseImageContainer) – An image of the generated MRI signal. Will be of the same class as the input t1.
The following parameters are added to MriSignalFilter.outputs["image"].metadata:
acq_contrast
echo_time
excitation_flip_angle
image_flavour
inversion_time
inversion_flip_angle
mr_acq_type = "3D"
image_flavour is obtained (in order of precedence):
If present, from the input image_flavour
If present, derived from the metadata in the input mag_enc
Otherwise, set to "OTHER"
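The precedence rules above can be sketched as a small helper (a hypothetical function for illustration only; it assumes the mag_enc metadata is available as a plain dict):

```python
def resolve_image_flavour(input_image_flavour=None, mag_enc_metadata=None):
    """Illustrative sketch of the image_flavour precedence described above."""
    # 1. An explicitly supplied 'image_flavour' input takes precedence.
    if input_image_flavour is not None:
        return input_image_flavour
    # 2. Otherwise, derive it from the metadata of the 'mag_enc' input.
    if mag_enc_metadata and "image_flavour" in mag_enc_metadata:
        return mag_enc_metadata["image_flavour"]
    # 3. Finally, fall back to "OTHER".
    return "OTHER"
```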
The following equations are used to compute the MRI signal:
Gradient Echo
\[S(\text{TE},\text{TR}, \theta_1) = \sin\theta_1\cdot(\frac{M_0 \cdot(1-e^{-\frac{TR}{T_{1}}})} {1-\cos\theta_1 e^{-\frac{TR}{T_{1}}}-e^{-\frac{TR}{T_{2}}}\cdot \left(e^{-\frac{TR}{T_{1}}}-\cos\theta_1\right)} + M_{\text{enc}}) \cdot e^{-\frac{\text{TE}}{T^{*}_2}}\]
Spin Echo
\[S(\text{TE},\text{TR}) = (M_0 \cdot (1-e^{-\frac{\text{TR}}{T_1}}) + M_{\text{enc}}) \cdot e^{-\frac{\text{TE}}{T_2}}\]
Inversion Recovery
\[\begin{split}&S(\text{TE},\text{TR}, \text{TI}, \theta_1, \theta_2) = \sin\theta_1 \cdot (\frac{M_0(1-\left(1-\cos\theta_{2}\right) e^{-\frac{TI}{T_{1}}}-\cos\theta_{2}e^{-\frac{TR}{T_{1}}})} {1-\cos\theta_{1}\cos\theta_{2}e^{-\frac{TR}{T_{1}}}}+ M_\text{enc}) \cdot e^{-\frac{TE}{T_{2}}}\\ &\theta_1 = \text{excitation pulse flip angle}\\ &\theta_2 = \text{inversion pulse flip angle}\end{split}\]
-
CONTRAST_GE
= 'ge'¶
-
CONTRAST_IR
= 'ir'¶
-
CONTRAST_SE
= 'se'¶
-
KEY_ACQ_CONTRAST
= 'acq_contrast'¶
-
KEY_ACQ_TYPE
= 'mr_acq_type'¶
-
KEY_ECHO_TIME
= 'echo_time'¶
-
KEY_EXCITATION_FLIP_ANGLE
= 'excitation_flip_angle'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_IMAGE_FLAVOUR
= 'image_flavour'¶
-
KEY_INVERSION_FLIP_ANGLE
= 'inversion_flip_angle'¶
-
KEY_INVERSION_TIME
= 'inversion_time'¶
-
KEY_M0
= 'm0'¶
-
KEY_MAG_ENC
= 'mag_enc'¶
-
KEY_REPETITION_TIME
= 'repetition_time'¶
-
KEY_T1
= 't1'¶
-
KEY_T2
= 't2'¶
-
KEY_T2_STAR
= 't2_star'¶
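The gradient echo and spin echo equations above can be evaluated directly with NumPy; a minimal sketch (not the filter's internal code; argument names mirror the symbols in the equations):

```python
import numpy as np

def spin_echo_signal(m0, t1, t2, echo_time, repetition_time, mag_enc=0.0):
    """Spin echo: S(TE, TR), assuming perfect 90/180 degree pulses."""
    return (m0 * (1 - np.exp(-repetition_time / t1)) + mag_enc) * np.exp(
        -echo_time / t2
    )

def gradient_echo_signal(m0, t1, t2, t2_star, echo_time, repetition_time,
                         excitation_flip_angle, mag_enc=0.0):
    """Gradient echo: S(TE, TR, theta_1) with arbitrary flip angle (degrees)."""
    theta = np.radians(excitation_flip_angle)
    e1 = np.exp(-repetition_time / t1)
    e2 = np.exp(-repetition_time / t2)
    # Steady-state longitudinal term from the denominator in the equation above.
    longitudinal = m0 * (1 - e1) / (
        1 - np.cos(theta) * e1 - e2 * (e1 - np.cos(theta))
    )
    return np.sin(theta) * (longitudinal + mag_enc) * np.exp(-echo_time / t2_star)
```

As a sanity check, with TR much longer than T1 and TE = 0 both models reduce to M0 (plus any encoded magnetisation) for a 90° excitation.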
asldro.filters.nifti_loader module¶
NIFTI file loader filter
-
class
asldro.filters.nifti_loader.
NiftiLoaderFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for loading a NIFTI image from a file.
Must have a single string input named ‘filename’.
Creates a single image container as an output named ‘image’.
asldro.filters.phase_magnitude_filter module¶
PhaseMagnitudeFilter Class
-
class
asldro.filters.phase_magnitude_filter.
PhaseMagnitudeFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter block that will take image data and convert it into its phase and magnitude components. Typically this will be used after an AcquireMriImageFilter, which contains real and imaginary components; however it may also be used with image data of the following types:
REAL_IMAGE_TYPE: the phase is 0° where the image value is positive, and 180° where it is negative.
IMAGINARY_IMAGE_TYPE: the phase is 90° where the image value is positive, and 270° where it is negative.
MAGNITUDE_IMAGE_TYPE: the phase cannot be defined, so the output phase image is set to None.
Inputs
Input Parameters are all keyword arguments for the
PhaseMagnitudeFilter.add_inputs()
member function. They are also accessible via class constants, for example PhaseMagnitudeFilter.KEY_IMAGE
- Parameters
'image' (BaseImageContainer) – The input data image, cannot be a phase image
Outputs
- Parameters
'phase' (BaseImageContainer) – Phase image (will have image_type==PHASE_IMAGE_TYPE)
'magnitude' (BaseImageContainer) – Magnitude image (will have image_type==MAGNITUDE_IMAGE_TYPE)
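For complex input data, the decomposition the filter performs corresponds to the standard NumPy magnitude and phase operations; a minimal sketch:

```python
import numpy as np

# A toy complex-valued voxel array, e.g. real + imaginary acquisition output.
complex_data = np.array([1 + 1j, -2 + 0j, 0 - 3j])

magnitude = np.abs(complex_data)   # |z| per voxel
phase = np.angle(complex_data)     # arg(z) per voxel, radians in (-pi, pi]
```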
-
KEY_IMAGE
= 'image'¶
-
KEY_MAGNITUDE
= 'magnitude'¶
-
KEY_PHASE
= 'phase'¶
asldro.filters.resample_filter module¶
Resample Filter
-
class
asldro.filters.resample_filter.
ResampleFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that can resample an image based on a target shape and affine. Note that nilearn actually applies the inverse of the target affine.
Inputs
Input Parameters are all keyword arguments for the
ResampleFilter.add_inputs()
member function. They are also accessible via class constants, for example ResampleFilter.KEY_AFFINE
- Parameters
'image' (BaseImageContainer) – Image to resample
'affine' (np.ndarray(4)) – Image is resampled according to this 4x4 affine matrix
'shape' (Tuple[int, int, int]) – Image is resampled according to this new shape.
Outputs
Once run, the filter will populate the dictionary ResampleFilter.outputs with the following entries:
- Parameters
'image' (BaseImageContainer) – The input image, resampled in accordance with the input shape and affine.
The metadata property of ResampleFilter.outputs["image"] is updated with the field voxel_size, corresponding to the size of each voxel.
-
KEY_AFFINE
= 'affine'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_SHAPE
= 'shape'¶
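The voxel_size metadata field can be derived from a 4x4 affine as the Euclidean norm of the columns of its upper-left 3x3 block; a minimal sketch (assuming the standard NIFTI-style RAS+ affine convention, not the filter's actual code):

```python
import numpy as np

def voxel_size_from_affine(affine):
    """Voxel dimensions implied by a 4x4 affine: the Euclidean norm of
    each column of its upper-left 3x3 rotation/scaling block."""
    return np.linalg.norm(np.asarray(affine)[:3, :3], axis=0)

# A 2 mm isotropic affine with an arbitrary world-space origin.
affine = np.diag([2.0, 2.0, 2.0, 1.0])
affine[:3, 3] = [-90.0, -126.0, -72.0]
```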
asldro.filters.scale_offset_filter module¶
ScaleOffsetFilter Class
-
class
asldro.filters.scale_offset_filter.
ScaleOffsetFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that will take image data and apply a scale and/or offset according to the equation:
\[I_{output} = I_{input} \cdot m + b\]
where ‘m’ is the scale and ‘b’ is the offset (the scale is applied first, then the offset)
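The scale-and-offset operation is elementwise and can be sketched with NumPy (illustrative only):

```python
import numpy as np

image_data = np.array([0.0, 1.0, 2.0])
scale, offset = 10.0, 1.0  # 'm' and 'b' in the equation above

# Scale first, then offset: I_output = I_input * m + b
output_data = image_data * scale + offset
```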
Inputs
Input Parameters are all keyword arguments for the
ScaleOffsetFilter.add_inputs()
member function. They are also accessible via class constants, for example ScaleOffsetFilter.KEY_IMAGE
- Parameters
'image' (BaseImageContainer) – The input image
'scale' (float or int, optional) – A scale to apply
'offset' (float or int, optional) – An offset to apply
Outputs
- Parameters
'image' (BaseImageContainer) – The output image
-
KEY_IMAGE
= 'image'¶
-
KEY_OFFSET
= 'offset'¶
-
KEY_SCALE
= 'scale'¶
asldro.filters.transform_resample_image_filter module¶
Transform resample image filter
-
class
asldro.filters.transform_resample_image_filter.
TransformResampleImageFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that transforms and resamples an image in world space. The field of view (FOV) of the resampled image is the same as the FOV of the input image.
Conventions are for RAS+ coordinate systems only
Inputs
Input Parameters are all keyword arguments for the
TransformResampleImageFilter.add_inputs()
member function. They are also accessible via class constants, for example TransformResampleImageFilter.KEY_ROTATION
- Parameters
'image' (BaseImageContainer) – The input image
'translation' (Tuple[float, float, float], optional) – \([\Delta r_x,\Delta r_y,\Delta r_z]\) amount to translate along the x, y and z axes, defaults to (0, 0, 0)
'rotation' (Tuple[float, float, float], optional) – \([\theta_x,\theta_y,\theta_z]\) angles to rotate about the x, y and z axes in degrees (-180 to 180 degrees inclusive), defaults to (0, 0, 0)
'rotation_origin' (Tuple[float, float, float], optional) – \([x_r,y_r,z_r]\) coordinates of the point to perform rotations about, defaults to (0, 0, 0)
'target_shape' (Tuple[int, int, int]) – \([L_t,M_t,N_t]\) target shape for the resampled image
Outputs
Once run, the filter will populate the dictionary
TransformResampleImageFilter.outputs
with the following entries
- Parameters
'image' (BaseImageContainer) – The input image, resampled in accordance with the specified shape and applied world-space transformation.
The metadata property of TransformResampleImageFilter.outputs["image"] is updated with the field voxel_size, corresponding to the size of each voxel.
The output image is resampled according to the target affine:
\[\begin{split}&\mathbf{A}=(\mathbf{T(\Delta r_{\text{im}})}\mathbf{S}\mathbf{T(\Delta r)} \mathbf{T(r_0)}\mathbf{R}\mathbf{T(r_0)}^{-1})^{-1}\\ \text{where,}&\\ & \mathbf{T(r_0)} = \mathbf{T}(x_r, y_r, z_r)= \text{Affine for translation to rotation centre}\\ & \mathbf{T(\Delta r)} = \mathbf{T}(\Delta r_x, \Delta r_y, \Delta r_z)= \text{Affine for translation of image in world space}\\ & \mathbf{T(\Delta r_{\text{im}})} = \mathbf{T}(x_0/s_x,y_0/s_y,z_0/s_z)^{-1} =\text{Affine for translation to the input image origin} \\ &\mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & \Delta x \\ 0 & 1& 0 & \Delta y \\ 0 & 0 & 1& \Delta z \\ 0& 0 & 0& 1 \end{pmatrix}=\text{translation matrix}\\ &\mathbf{S} = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0& 1 \end{pmatrix}=\text{scaling matrix}\\ & [s_x, s_y, s_z] = \frac{[L_t,M_t,N_t]}{[L_i,M_i,N_i]}\\ & [L_i, M_i, N_i] = \text{shape of the input image}\\ & [x_0, y_0, z_0] = \text{input image origin coordinates (vector part of input image's affine)}\\ &\mathbf{R} = \mathbf{R_z} \mathbf{R_y} \mathbf{R_x} = \text{Affine for rotation of image in world space}\\ &\mathbf{R_x} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & \cos{\theta_x}& -\sin{\theta_x} & 0\\ 0 & \sin{\theta_x} & \cos{\theta_x}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about x matrix}\\ &\mathbf{R_y} = \begin{pmatrix} \cos{\theta_y} & 0 & \sin{\theta_y} & 0\\ 0 & 1 & 0 & 0\\ -\sin{\theta_y} & 0 & \cos{\theta_y}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about y matrix}\\ &\mathbf{R_z} = \begin{pmatrix} \cos{\theta_z}& -\sin{\theta_z} & 0 & 0\\ \sin{\theta_z} & \cos{\theta_z}& 0 &0\\ 0& 0& 1 & 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about z matrix}\\\end{split}\]
After resampling the output image’s affine is modified to only contain the scaling:
\[\mathbf{A_{\text{new}}} = (\mathbf{T(\Delta r_{\text{im}})}\mathbf{S})^{-1}\]
-
KEY_IMAGE
= 'image'¶
-
KEY_ROTATION
= 'rotation'¶
-
KEY_ROTATION_ORIGIN
= 'rotation_origin'¶
-
KEY_TARGET_SHAPE
= 'target_shape'¶
-
KEY_TRANSLATION
= 'translation'¶
-
VOXEL_SIZE
= 'voxel_size'¶
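The translation, scaling and rotation affines defined above can be constructed with NumPy; a minimal sketch of the matrix definitions (illustrative, not the filter's implementation):

```python
import numpy as np

def translation(dx, dy, dz):
    """T: 4x4 homogeneous translation affine."""
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t

def scaling(sx, sy, sz):
    """S: 4x4 homogeneous scaling affine."""
    return np.diag([sx, sy, sz, 1.0])

def rotation_x(theta_deg):
    """R_x: rotation about the x axis, angle in degrees."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, c, -s, 0.0],
                     [0.0, s, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Rotation about an arbitrary origin r0 = (x_r, y_r, z_r) composes as
# T(r0) R T(r0)^-1, matching the corresponding factor in the affine above.
r0 = (10.0, 0.0, 0.0)
rotate_about_r0 = translation(*r0) @ rotation_x(90.0) @ np.linalg.inv(translation(*r0))
```

For example, rotating the point (10, 1, 0) by 90° about the x axis through (10, 0, 0) maps it onto (10, 0, 1).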