asldro.filters package

Submodules
asldro.filters.acquire_mri_image_filter module

AcquireMriImageFilter Class

class asldro.filters.acquire_mri_image_filter.AcquireMriImageFilter

Bases: asldro.filters.filter_block.FilterBlock

A filter block that simulates the acquisition of an MRI image based on ground truth inputs, combining multiple filters.

Returns: AddComplexNoiseFilter

Inputs

Input parameters are all keyword arguments for the AcquireMriImageFilter.add_inputs() member function. They are also accessible via class constants, for example AcquireMriImageFilter.KEY_T1.
Parameters:

- 't1' (BaseImageContainer) – Longitudinal relaxation time in seconds.
- 't2' (BaseImageContainer) – Transverse relaxation time in seconds.
- 't2_star' (BaseImageContainer) – Transverse relaxation time including time-invariant magnetic field inhomogeneities.
- 'm0' (BaseImageContainer) – Equilibrium magnetisation.
- 'mag_enc' – Added to m0 before relaxation is calculated; provides a means to encode another signal into the MRI signal (non-complex data).
- 'acq_contrast' (str) – Determines which signal model to use: "ge" (case insensitive) for Gradient Echo, "se" (case insensitive) for Spin Echo, "ir" (case insensitive) for Inversion Recovery.
- 'echo_time' (float) – The echo time in seconds.
- 'repetition_time' (float) – The repetition time in seconds.
- 'excitation_flip_angle' (float) – Excitation pulse flip angle in degrees. Only used when 'acq_contrast' is "ge" or "ir".
- 'inversion_flip_angle' (float, optional) – Inversion pulse flip angle in degrees. Only used when 'acq_contrast' is "ir".
- 'inversion_time' – The inversion time in seconds. Only used when 'acq_contrast' is "ir".
- 'image_flavour' (str, optional) – Sets the metadata entry image_flavour in the output image to this value.
- 'translation' – \([\Delta x,\Delta y,\Delta z]\) amount to translate along the x, y and z axes.
- 'rotation' (Tuple[float, float, float], optional) – \([\theta_x,\theta_y,\theta_z]\) angles to rotate about the x, y and z axes in degrees (-180 to 180 degrees inclusive).
- 'rotation_origin' (Tuple[float, float, float], optional) – \([x_r,y_r,z_r]\) coordinates of the point to perform rotations about.
- 'target_shape' (Tuple[int, int, int], optional) – \([L_t,M_t,N_t]\) target shape for the acquired image.
- 'interpolation' (str, optional) – Defines the interpolation method for the resampling: 'continuous' (order-3 spline interpolation, the default method for ResampleFilter), 'linear' (order-1 linear interpolation), or 'nearest' (nearest neighbour interpolation).
- 'snr' (float or int) – The desired signal-to-noise ratio (>= 0). A value of zero means that no noise is added to the input image.
- 'reference_image' (BaseImageContainer, optional) – The reference image used to calculate the amplitude of the random noise to add to 'image'. Its shape must match the shape of 'image'. If not supplied, 'image' will be used for calculating the noise amplitude.
Outputs

Parameters:

- 'image' (BaseImageContainer) – Synthesised MRI image.
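For the "ge" contrast, the textbook spoiled gradient echo steady-state signal can be sketched as below. This is an illustration only; the filter's exact signal model may differ, and the function name is hypothetical.

```python
import math

def spoiled_ge_signal(m0, t1, t2_star, tr, te, flip_angle_deg):
    """Textbook spoiled gradient echo steady-state signal.

    Generic illustration of a "ge" contrast model; not necessarily
    the exact equation implemented by AcquireMriImageFilter.
    """
    alpha = math.radians(flip_angle_deg)
    e1 = math.exp(-tr / t1)
    # Longitudinal steady state after repeated excitation,
    # followed by T2* decay at the echo time.
    return (m0 * math.sin(alpha) * (1.0 - e1)
            / (1.0 - math.cos(alpha) * e1)
            * math.exp(-te / t2_star))
```

With a very long repetition time and a 90 degree flip angle the signal reduces to \(M_0 e^{-TE/T_2^*}\), which is a useful sanity check.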
KEY_ACQ_CONTRAST = 'acq_contrast'
KEY_ECHO_TIME = 'echo_time'
KEY_EXCITATION_FLIP_ANGLE = 'excitation_flip_angle'
KEY_IMAGE = 'image'
KEY_IMAGE_FLAVOUR = 'image_flavour'
KEY_INTERPOLATION = 'interpolation'
KEY_INVERSION_FLIP_ANGLE = 'inversion_flip_angle'
KEY_INVERSION_TIME = 'inversion_time'
KEY_M0 = 'm0'
KEY_MAG_ENC = 'mag_enc'
KEY_REF_IMAGE = 'reference_image'
KEY_REPETITION_TIME = 'repetition_time'
KEY_ROTATION = 'rotation'
KEY_ROTATION_ORIGIN = 'rotation_origin'
KEY_SNR = 'snr'
KEY_T1 = 't1'
KEY_T2 = 't2'
KEY_T2_STAR = 't2_star'
KEY_TARGET_SHAPE = 'target_shape'
KEY_TRANSLATION = 'translation'
asldro.filters.add_complex_noise_filter module

Add complex noise filter block

class asldro.filters.add_complex_noise_filter.AddComplexNoiseFilter

Bases: asldro.filters.filter_block.FilterBlock

A filter that adds normally distributed random noise to the real and imaginary parts of the fourier transform of the input image.

Inputs

Input parameters are all keyword arguments for the AddComplexNoiseFilter.add_inputs() member function. They are also accessible via class constants, for example AddComplexNoiseFilter.KEY_SNR.

Parameters:

- 'image' (BaseImageContainer) – An input image which noise will be added to. Can be either scalar or complex. If it is complex, normally distributed random noise will be added to both real and imaginary parts.
- 'snr' (float or int) – The desired signal-to-noise ratio (>= 0). A value of zero means that no noise is added to the input image.
- 'reference_image' (BaseImageContainer, optional) – The reference image used to calculate the amplitude of the random noise to add to 'image'. Its shape must match the shape of 'image'. If not supplied, 'image' will be used for calculating the noise amplitude.
Outputs

Parameters:

- 'image' (BaseImageContainer) – The input image with complex noise added.

The noise is added pseudo-randomly based on the state of numpy.random. This should be appropriately controlled prior to running the filter.
KEY_IMAGE = 'image'
KEY_REF_IMAGE = 'reference_image'
KEY_SNR = 'snr'
asldro.filters.add_noise_filter module

Add noise filter

class asldro.filters.add_noise_filter.AddNoiseFilter

Bases: asldro.filters.basefilter.BaseFilter

A filter that adds normally distributed random noise to an input image.

Inputs

Input parameters are all keyword arguments for the AddNoiseFilter.add_inputs() member function. They are also accessible via class constants, for example AddNoiseFilter.KEY_SNR.

Parameters:

- 'image' (BaseImageContainer) – An input image which noise will be added to. Can be either scalar or complex. If it is complex, normally distributed random noise will be added to both real and imaginary parts.
- 'snr' (float or int) – The desired signal-to-noise ratio (>= 0). A value of zero means that no noise is added to the input image.
- 'reference_image' (BaseImageContainer, optional) – The reference image used to calculate the amplitude of the random noise to add to 'image'. Its shape must match the shape of 'image'. If not supplied, 'image' will be used for calculating the noise amplitude.
Outputs

Parameters:

- 'image' (BaseImageContainer) – The input image with noise added.

'reference_image' can be in a different data domain to 'image'. For example, 'image' might be in the inverse domain (i.e. fourier transformed) whereas 'reference_image' is in the spatial domain. Where the data domains differ, the following scaling is applied to the noise amplitude:

- 'image' is SPATIAL_DOMAIN and 'reference_image' is INVERSE_DOMAIN: 1/N
- 'image' is INVERSE_DOMAIN and 'reference_image' is SPATIAL_DOMAIN: N

where N is reference_image.image.size.

The noise is added pseudo-randomly based on the state of numpy.random. This should be appropriately controlled prior to running the filter.

Note that the actual SNR (as calculated using "A comparison of two methods for measuring the signal to noise ratio on MR images", PMB, vol 44, no. 12, pp. N261-N264 (1999)) will not match the desired SNR under the following circumstances:

- 'image' is SPATIAL_DOMAIN and 'reference_image' is INVERSE_DOMAIN
- 'image' is INVERSE_DOMAIN and 'reference_image' is SPATIAL_DOMAIN
In the second case, performing an inverse fourier transform on the output image with noise results in a spatial domain image where the calculated SNR matches the desired SNR. This is how the AddNoiseFilter is used within the AddComplexNoiseFilter.
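The domain-dependent amplitude scaling described above can be sketched as follows. The helper name and string domain labels are illustrative, not the library's internals.

```python
def noise_scaling(image_domain: str, reference_domain: str, n_voxels: int) -> float:
    """Scaling applied to the noise amplitude when the data domains of
    'image' and 'reference_image' differ, as described above.

    n_voxels corresponds to N = reference_image.image.size.
    """
    if image_domain == "SPATIAL_DOMAIN" and reference_domain == "INVERSE_DOMAIN":
        return 1.0 / n_voxels
    if image_domain == "INVERSE_DOMAIN" and reference_domain == "SPATIAL_DOMAIN":
        return float(n_voxels)
    return 1.0  # same domain: no scaling applied
```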
KEY_IMAGE = 'image'
KEY_REF_IMAGE = 'reference_image'
KEY_SNR = 'snr'
asldro.filters.affine_matrix_filter module

Affine Matrix Filter

class asldro.filters.affine_matrix_filter.AffineMatrixFilter(name: str = 'Unknown')

Bases: asldro.filters.basefilter.BaseFilter

A filter that creates an affine transformation matrix based on input parameters for rotation, translation, and scaling.

Conventions are for RAS+ coordinate systems only.

Inputs

Input parameters are all keyword arguments for the AffineMatrixFilter.add_inputs() member function. They are also accessible via class constants, for example AffineMatrixFilter.KEY_ROTATION.
Parameters:

- 'rotation' (Tuple[float, float, float], optional) – [\(\theta_x\), \(\theta_y\), \(\theta_z\)] angles to rotate about the x, y and z axes in degrees (-180 to 180 degrees inclusive), defaults to (0, 0, 0).
- 'rotation_origin' (Tuple[float, float, float], optional) – [\(x_r\), \(y_r\), \(z_r\)] coordinates of the point to perform rotations about, defaults to (0, 0, 0).
- 'translation' (Tuple[float, float, float], optional) – [\(\Delta x\), \(\Delta y\), \(\Delta z\)] amount to translate along the x, y and z axes, defaults to (0, 0, 0).
- 'scale' (Tuple[float, float, float], optional) – [\(s_x\), \(s_y\), \(s_z\)] scaling factors along each axis, defaults to (1, 1, 1).
- 'affine' (np.ndarray(4), optional) – 4x4 affine matrix to apply the transformation to, defaults to numpy.eye(4).
- 'affine_last' (np.ndarray(4), optional) – input 4x4 affine matrix that is applied last, defaults to numpy.eye(4).
Outputs

Once run, the filter will populate the dictionary AffineMatrixFilter.outputs with the following entries:

Parameters:

- 'affine' (np.ndarray(4)) – 4x4 affine matrix with all transformations combined.
- 'affine_inverse' (np.ndarray(4)) – 4x4 affine matrix that is the inverse of 'affine'.
The output affine matrix is calculated as follows:

\[\begin{split}&\mathbf{M} = \mathbf{B}\mathbf{S}\mathbf{T}\mathbf{T_{r}}\mathbf{R_z} \mathbf{R_y}\mathbf{R_x}\mathbf{T_{r}}^{-1}\mathbf{A}\\ \\ \text{where,}&\\ &\mathbf{A} = \text{Existing affine matrix}\\ &\mathbf{B} = \text{Affine matrix to combine last}\\ &\mathbf{S} = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0& 0 & 0& 1 \end{pmatrix}=\text{scaling matrix}\\ &\mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & \Delta x \\ 0 & 1& 0 & \Delta y \\ 0 & 0 & 1& \Delta z \\ 0& 0 & 0& 1 \end{pmatrix}=\text{translation matrix}\\ &\mathbf{T_r} = \begin{pmatrix} 1 & 0 & 0 & x_r \\ 0 & 1& 0 & y_r \\ 0 & 0 & 1& z_r \\ 0& 0 & 0& 1 \end{pmatrix}= \text{translation to rotation centre matrix}\\ &\mathbf{R_x} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & \cos{\theta_x}& -\sin{\theta_x} & 0\\ 0 & \sin{\theta_x} & \cos{\theta_x}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about x matrix}\\ &\mathbf{R_y} = \begin{pmatrix} \cos{\theta_y} & 0 & \sin{\theta_y} & 0\\ 0 & 1 & 0 & 0\\ -\sin{\theta_y} & 0 & \cos{\theta_y}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about y matrix}\\ &\mathbf{R_z} = \begin{pmatrix} \cos{\theta_z}& -\sin{\theta_z} & 0 & 0\\ \sin{\theta_z} & \cos{\theta_z}& 0 &0\\ 0& 0& 1 & 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about z matrix}\\\end{split}\]
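The composition above can be sketched in pure Python as follows, taking the input and 'last' affines as identity. The helper names are illustrative, not the filter's internals.

```python
import math

def matmul4(a, b):
    """4x4 matrix product of nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def affine_from_params(rotation=(0, 0, 0), rotation_origin=(0, 0, 0),
                       translation=(0, 0, 0), scale=(1, 1, 1)):
    """Compose M = S.T.T_r.R_z.R_y.R_x.T_r^-1 (identity input/last affines)."""
    tx, ty, tz = translation
    xr, yr, zr = rotation_origin
    sx, sy, sz = scale
    ax, ay, az = (math.radians(d) for d in rotation)

    def eye():
        return [[float(i == j) for j in range(4)] for i in range(4)]

    s = eye(); s[0][0], s[1][1], s[2][2] = sx, sy, sz
    t = eye(); t[0][3], t[1][3], t[2][3] = tx, ty, tz
    tr = eye(); tr[0][3], tr[1][3], tr[2][3] = xr, yr, zr
    tr_inv = eye(); tr_inv[0][3], tr_inv[1][3], tr_inv[2][3] = -xr, -yr, -zr
    rx = eye(); rx[1][1], rx[1][2], rx[2][1], rx[2][2] = (
        math.cos(ax), -math.sin(ax), math.sin(ax), math.cos(ax))
    ry = eye(); ry[0][0], ry[0][2], ry[2][0], ry[2][2] = (
        math.cos(ay), math.sin(ay), -math.sin(ay), math.cos(ay))
    rz = eye(); rz[0][0], rz[0][1], rz[1][0], rz[1][1] = (
        math.cos(az), -math.sin(az), math.sin(az), math.cos(az))

    # Apply factors right-to-left: T_r^-1 first, then R_x, R_y, R_z, T_r, T, S
    m = eye()
    for factor in (tr_inv, rx, ry, rz, tr, t, s):
        m = matmul4(factor, m)
    return m
```

With all defaults the result is the identity matrix; a 90 degree rotation about z maps the point (1, 0, 0) to (0, 1, 0), matching the RAS+ convention of the \(\mathbf{R_z}\) matrix above.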
KEY_AFFINE = 'affine'
KEY_AFFINE_INVERSE = 'affine_inverse'
KEY_AFFINE_LAST = 'affine_last'
KEY_ROTATION = 'rotation'
KEY_ROTATION_ORIGIN = 'rotation_origin'
KEY_SCALE = 'scale'
KEY_TRANSLATION = 'translation'
asldro.filters.append_metadata_filter module

AppendMetadataFilter

class asldro.filters.append_metadata_filter.AppendMetadataFilter

Bases: asldro.filters.basefilter.BaseFilter

A filter that can add key-value pairs to the metadata dictionary property of an image container. If the supplied key already exists the old value will be overwritten with the new value. The input image container is modified and a reference passed to the output, i.e. no copy is made.

Inputs

Input parameters are all keyword arguments for the AppendMetadataFilter.add_inputs() member function. They are also accessible via class constants, for example AppendMetadataFilter.KEY_METADATA.

Parameters:

- 'image' (BaseImageContainer) – The input image to append the metadata to.
- 'metadata' (dict) – Dictionary of key-value pairs to append to the metadata property of the input image.
Outputs

Once run, the filter will populate the dictionary AppendMetadataFilter.outputs with the following entries:

Parameters:

- 'image' – The input image, with the input metadata merged.
KEY_IMAGE = 'image'
KEY_METADATA = 'metadata'
asldro.filters.asl_quantification_filter module

ASL quantification filter class

class asldro.filters.asl_quantification_filter.AslQuantificationFilter

Bases: asldro.filters.basefilter.BaseFilter

A filter that calculates the perfusion rate for arterial spin labelling data.

Inputs

Input parameters are all keyword arguments for the AslQuantificationFilter.add_input() member function. They are also accessible via class constants, for example AslQuantificationFilter.KEY_CONTROL.
Parameters:

- 'control' (BaseImageContainer) – The control image (3D or 4D timeseries).
- 'label' (BaseImageContainer) – The label image (3D or 4D timeseries).
- 'm0' (BaseImageContainer) – Equilibrium magnetisation image.
- 'label_type' (str) – The type of labelling used: "pasl" for pulsed ASL; "pcasl" or "casl" for continuous ASL.
- 'lambda_blood_brain' (float) – The blood-brain partition coefficient (0 to 1 inclusive).
- 'label_duration' (float) – The temporal duration of the labelled bolus, in seconds (0 or greater). For PASL this is equivalent to \(\text{TI}_1\).
- 'post_label_delay' (float or List[float]) – The duration between the end of the labelling pulse and the imaging excitation pulse, in seconds (0 or greater). For PASL this is equivalent to \(\text{TI}\). If 'model'=='full' then this must be a list whose length matches the number of unique entries in 'multiphase_index'.
- 'label_efficiency' (float) – The degree of inversion of the labelling (0 to 1 inclusive).
- 't1_arterial_blood' (float) – Longitudinal relaxation time of arterial blood, in seconds (greater than 0).
- 't1_tissue' (float or BaseImageContainer) – Longitudinal relaxation time of the tissue, in seconds (greater than 0). Required if 'model'=='full'.
- 'model' (str) – Defines which model to use: 'whitepaper' uses the single-subtraction white paper equation; 'full' uses least squares fitting to the full GKM.
- 'multiphase_index' – A list the same length as the fourth dimension of the label image that defines which phase each image belongs to, and is also the corresponding index in the 'post_label_delay' list. Required if 'model'=='full'.
Outputs

Parameters:

- 'perfusion_rate' (BaseImageContainer) – Map of the calculated perfusion rate.

If 'model'=='full' the following are also output:

Parameters:

- 'transit_time' (BaseImageContainer) – The estimated transit time in seconds.
- 'std_error' (BaseImageContainer) – The standard error of the estimate of the fit.
- 'perfusion_rate_err' (BaseImageContainer) – One standard deviation error in the fitted perfusion rate.
- 'transit_time_err' – One standard deviation error in the fitted transit time.
Quantification Model

The following equations are used to calculate the perfusion rate, depending on the input 'model':

- 'whitepaper': simplified single subtraction equations [1]. For pCASL/CASL see AslQuantificationFilter.asl_quant_wp_casl; for PASL see AslQuantificationFilter.asl_quant_wp_pasl.
- 'full': least squares fitting to the full General Kinetic Model [2]. See AslQuantificationFilter.asl_quant_lsq_gkm.
ESTIMATION_ALGORITHM = {'full': 'Least Squares fit to the General Kinetic Model for\nArterial Spin Labelling:\n\nBuxton et. al., A general\nkinetic model for quantitative perfusion imaging with arterial\nspin labeling. Magnetic Resonance in Medicine, 40(3):383–396,\nsep 1998. doi:10.1002/mrm.1910400308.', 'whitepaper': 'Calculated using the single subtraction simplified model for\nCBF quantification from the ASL White Paper:\n\nAlsop et. al., Recommended implementation of arterial\nspin-labeled perfusion MRI for clinical applications:\na consensus of the ISMRM perfusion study group and the\neuropean consortium for ASL in dementia. Magnetic Resonance\nin Medicine, 73(1):102–116, apr 2014. doi:10.1002/mrm.25197\n'}

FIT_IMAGE_NAME = {'perfusion_rate_err': 'RCBFErr', 'std_error': 'FITErr', 'transit_time': 'ATT', 'transit_time_err': 'ATTErr'}

FIT_IMAGE_UNITS = {'perfusion_rate_err': 'ml/100g/min', 'std_error': 'a.u.', 'transit_time': 's', 'transit_time_err': 's'}

FULL = 'full'
KEY_CONTROL = 'control'
KEY_LABEL = 'label'
KEY_LABEL_DURATION = 'label_duration'
KEY_LABEL_EFFICIENCY = 'label_efficiency'
KEY_LABEL_TYPE = 'label_type'
KEY_LAMBDA_BLOOD_BRAIN = 'lambda_blood_brain'
KEY_M0 = 'm0'
KEY_MODEL = 'model'
KEY_MULTIPHASE_INDEX = 'multiphase_index'
KEY_PERFUSION_RATE = 'perfusion_rate'
KEY_PERFUSION_RATE_ERR = 'perfusion_rate_err'
KEY_POST_LABEL_DELAY = 'post_label_delay'
KEY_STD_ERROR = 'std_error'
KEY_T1_ARTERIAL_BLOOD = 't1_arterial_blood'
KEY_T1_TISSUE = 't1_tissue'
KEY_TRANSIT_TIME = 'transit_time'
KEY_TRANSIT_TIME_ERR = 'transit_time_err'
M0_TOL = 1e-06
WHITEPAPER = 'whitepaper'
static asl_quant_lsq_gkm(control: numpy.ndarray, label: numpy.ndarray, m0_tissue: numpy.ndarray, lambda_blood_brain: numpy.ndarray, label_duration: float, post_label_delay: List[float], label_efficiency: float, t1_arterial_blood: float, t1_tissue: numpy.ndarray, label_type: str) → dict

Calculates the perfusion rate and transit time by least-squares fitting to the ASL General Kinetic Model [2].

Fitting is performed using scipy.optimize.curve_fit. See GkmFilter and GkmFilter.calculate_delta_m_gkm for implementation details of the GKM function.

Parameters:

- control (np.ndarray) – Control signal; must be 4D with the signal for each post labelling delay on the 4th axis. Must have the same dimensions as label.
- label (np.ndarray) – Label signal; must be 4D with the signal for each post labelling delay on the 4th axis. Must have the same dimensions as control.
- m0_tissue (np.ndarray) – Equilibrium magnetisation of the tissue.
- lambda_blood_brain (np.ndarray) – Tissue partition coefficient in g/ml.
- label_duration (float) – Duration of the labelling pulse in seconds.
- post_label_delay (np.ndarray) – Array of post label delays; must be equal in length to the number of 3D volumes in control and label.
- label_efficiency (float) – The degree of inversion of the labelling pulse.
- t1_arterial_blood (float) – Longitudinal relaxation time of the arterial blood in seconds.
- t1_tissue (np.ndarray) – Longitudinal relaxation time of the tissue in seconds.
- label_type (str) – The type of labelling: pulsed ('pasl') or continuous ('casl' or 'pcasl').
Returns: a dictionary containing the following np.ndarrays:

- 'perfusion_rate': the estimated perfusion rate in ml/100g/min.
- 'transit_time': the estimated transit time in seconds.
- 'std_error': the standard error of the estimate of the fit.
- 'perfusion_rate_err': one standard deviation error in the fitted perfusion rate.
- 'transit_time_err': one standard deviation error in the fitted transit time.

Return type: dict

control, label, m0_tissue, t1_tissue and lambda_blood_brain must all have the same shape in their first 3 dimensions.
static asl_quant_wp_casl(control: numpy.ndarray, label: numpy.ndarray, m0: numpy.ndarray, lambda_blood_brain: float, label_duration: float, post_label_delay: float, label_efficiency: float, t1_arterial_blood: float) → numpy.ndarray

Performs ASL quantification using the White Paper equation for (pseudo)continuous ASL [1].

\[\begin{split}&f = \frac{6000 \cdot\ \lambda \cdot (\text{SI}_{\text{control}} - \text{SI}_{\text{label}}) \cdot e^{\frac{\text{PLD}}{T_{1,b}}}}{2 \cdot \alpha \cdot T_{1,b} \cdot \text{SI}_{\text{M0}} \cdot (1-e^{-\frac{\tau}{T_{1,b}}})}\\ \text{where,}\\ &f = \text{perfusion rate in ml/100g/min}\\ &\text{SI}_{\text{control}} = \text{control image signal}\\ &\text{SI}_{\text{label}} = \text{label image signal}\\ &\text{SI}_{\text{M0}} = \text{equilibrium magnetisation signal}\\ &\tau = \text{label duration}\\ &\text{PLD} = \text{Post Label Delay}\\ &T_{1,b} = \text{longitudinal relaxation time of arterial blood}\\ &\alpha = \text{labelling efficiency}\\ &\lambda = \text{blood-brain partition coefficient}\\\end{split}\]

Parameters:
- control (np.ndarray) – control image, \(\text{SI}_{\text{control}}\)
- label (np.ndarray) – label image, \(\text{SI}_{\text{label}}\)
- m0 (np.ndarray) – equilibrium magnetisation image, \(\text{SI}_{\text{M0}}\)
- lambda_blood_brain (float) – blood-brain partition coefficient in ml/g, \(\lambda\)
- label_duration (float) – label duration in seconds, \(\tau\)
- post_label_delay (float) – duration between the end of the label pulse and the start of the image acquisition in seconds, \(\text{PLD}\)
- label_efficiency (float) – labelling efficiency, \(\alpha\)
- t1_arterial_blood (float) – longitudinal relaxation time of arterial blood in seconds, \(T_{1,b}\)

Returns: the perfusion rate in ml/100g/min, \(f\)

Return type: np.ndarray
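The CASL white paper equation above can be sketched for scalar inputs as follows; this is a simplified stand-in for the array implementation, with an illustrative function name.

```python
import math

def asl_quant_wp_casl_scalar(control, label, m0, lambda_blood_brain,
                             label_duration, post_label_delay,
                             label_efficiency, t1_arterial_blood):
    """Scalar form of the white paper (p)CASL perfusion equation."""
    numerator = (6000.0 * lambda_blood_brain * (control - label)
                 * math.exp(post_label_delay / t1_arterial_blood))
    denominator = (2.0 * label_efficiency * t1_arterial_blood * m0
                   * (1.0 - math.exp(-label_duration / t1_arterial_blood)))
    return numerator / denominator
```

Note the perfusion rate is linear in the control-label difference, so doubling the difference signal doubles \(f\).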
static asl_quant_wp_pasl(control: numpy.ndarray, label: numpy.ndarray, m0: numpy.ndarray, lambda_blood_brain: float, bolus_duration: float, inversion_time: float, label_efficiency: float, t1_arterial_blood: float) → numpy.ndarray

Performs ASL quantification using the White Paper equation for pulsed ASL [1].

\[\begin{split}&f = \frac{6000 \cdot\ \lambda \cdot (\text{SI}_{\text{control}} - \text{SI}_{\text{label}}) \cdot e^{\frac{\text{TI}}{T_{1,b}}}}{2 \cdot \alpha \cdot \text{TI}_1 \cdot \text{SI}_{\text{M0}}}\\ \text{where,}\\ &f = \text{perfusion rate in ml/100g/min}\\ &\text{SI}_{\text{control}} = \text{control image signal}\\ &\text{SI}_{\text{label}} = \text{label image signal}\\ &\text{SI}_{\text{M0}} = \text{equilibrium magnetisation signal}\\ &\text{TI} = \text{inversion time}\\ &\text{TI}_1 = \text{bolus duration}\\ &T_{1,b} = \text{longitudinal relaxation time of arterial blood}\\ &\alpha = \text{labelling efficiency}\\ &\lambda = \text{blood-brain partition coefficient}\\\end{split}\]

Parameters:
- control (np.ndarray) – control image, \(\text{SI}_{\text{control}}\)
- label (np.ndarray) – label image, \(\text{SI}_{\text{label}}\)
- m0 (np.ndarray) – equilibrium magnetisation image, \(\text{SI}_{\text{M0}}\)
- lambda_blood_brain (float) – blood-brain partition coefficient in ml/g, \(\lambda\)
- inversion_time (float) – time between the inversion pulse and the start of the image acquisition in seconds, \(\text{TI}\)
- bolus_duration (float) – temporal duration of the labelled bolus in seconds, defined as the duration between the inversion pulse and the start of the bolus cutoff pulses (QUIPPSS, Q2-TIPS etc), \(\text{TI}_1\)
- label_efficiency (float) – labelling efficiency, \(\alpha\)
- t1_arterial_blood (float) – longitudinal relaxation time of arterial blood in seconds, \(T_{1,b}\)

Returns: the perfusion rate in ml/100g/min, \(f\)

Return type: np.ndarray
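The PASL white paper equation can likewise be sketched for scalar inputs; the function name is illustrative, not the library's API.

```python
import math

def asl_quant_wp_pasl_scalar(control, label, m0, lambda_blood_brain,
                             bolus_duration, inversion_time,
                             label_efficiency, t1_arterial_blood):
    """Scalar form of the white paper PASL perfusion equation."""
    numerator = (6000.0 * lambda_blood_brain * (control - label)
                 * math.exp(inversion_time / t1_arterial_blood))
    denominator = 2.0 * label_efficiency * bolus_duration * m0
    return numerator / denominator
```

Compared with the CASL form, the denominator uses the bolus duration \(\text{TI}_1\) directly rather than the \(T_{1,b}\)-weighted label duration term.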
asldro.filters.background_suppression_filter module

Background Suppression Filter

class asldro.filters.background_suppression_filter.BackgroundSuppressionFilter

Bases: asldro.filters.basefilter.BaseFilter

A filter that simulates a background suppression pulse sequence on longitudinal magnetisation. It can either use explicitly supplied pulse timings, or calculate optimised pulse timings for specified T1s.

Inputs

Input parameters are all keyword arguments for the BackgroundSuppressionFilter.add_inputs() member function. They are also accessible via class constants, for example BackgroundSuppressionFilter.KEY_T1.

Parameters:
- 'mag_z' (BaseImageContainer) – Image of the initial longitudinal magnetisation. Image data must not be a complex data type.
- 't1' (BaseImageContainer) – Image of the longitudinal relaxation time. Image data must be greater than 0 and non-complex. Its shape should match the shape of 'mag_z'.
- 'sat_pulse_time' (float) – The time, in seconds, between the saturation pulse and the imaging excitation pulse. Must be greater than 0.
- 'inv_pulse_times' (list[float], optional) – The inversion times for each inversion pulse, defined as the spacing between the inversion pulse and the imaging excitation pulse. Each must be greater than 0. If omitted, optimal inversion times will be calculated for 'num_inv_pulses' pulses and the T1 times given by 't1_opt'.
- 't1_opt' (list[float]) – T1 times, in seconds, to optimise the pulse inversion times for. Each must be greater than 0. If omitted, the unique values in the input 't1' will be used.
- 'mag_time' (float) – The time, in seconds, after the saturation pulse at which to sample the longitudinal magnetisation. The output magnetisation will only reflect the pulses that have run by this time. Must be greater than 0. If omitted, defaults to the same value as 'sat_pulse_time'. If 'mag_time' is longer than 'sat_pulse_time', then the difference will be added to both 'sat_pulse_time' and 'inv_pulse_times' (regardless of whether these were supplied as inputs or calculated as optimised values). If the pulse timings already include an added delay to ensure the magnetisation is positive then this parameter should be omitted.
- 'num_inv_pulses' – The number of inversion pulses to calculate optimised timings for. Must be greater than 0. This parameter must be present if 'inv_pulse_times' is omitted.
- 'pulse_efficiency' (str or float) – Defines the efficiency of the inversion pulses. Can take the values:
  - 'realistic': pulse efficiencies are calculated according to a model based on the T1. See BackgroundSuppressionFilter.calculate_pulse_efficiency for details on implementation.
  - 'ideal': inversion pulses are 100% efficient.
  - -1 to 0: the efficiency is defined explicitly, with -1 being full inversion and 0 no inversion.
Outputs

Once run, the filter will populate the dictionary BackgroundSuppressionFilter.outputs with the following entries:

Parameters:

- 'mag_z' (BaseImageContainer) – The longitudinal magnetisation at t = 'mag_time'.
- 'inv_pulse_times' (list[float]) – The inversion pulse timings.

Metadata

The output 'mag_z' inherits metadata from the input 'mag_z', and then has the following entries appended:

- background_suppression: True
- background_suppression_inv_pulse_timing: 'inv_pulse_times'
- background_suppression_sat_pulse_timing: 'sat_pulse_time'
- background_suppression_num_pulses: the number of inversion pulses.
Background Suppression Model

Details of the model implemented can be found in BackgroundSuppressionFilter.calculate_mz. Details of how the pulse timings are optimised can be found in BackgroundSuppressionFilter.optimise_inv_pulse_times.
EFF_IDEAL = 'ideal'
EFF_REALISTIC = 'realistic'
KEY_INV_PULSE_TIMES = 'inv_pulse_times'
KEY_MAG_TIME = 'mag_time'
KEY_MAG_Z = 'mag_z'
KEY_NUM_INV_PULSES = 'num_inv_pulses'
KEY_PULSE_EFFICIENCY = 'pulse_efficiency'
KEY_SAT_PULSE_TIME = 'sat_pulse_time'
KEY_T1 = 't1'
KEY_T1_OPT = 't1_opt'
M_BACKGROUND_SUPPRESSION = 'background_suppression'
M_BSUP_INV_PULSE_TIMING = 'background_suppression_inv_pulse_timing'
M_BSUP_NUM_PULSES = 'background_suppression_num_pulses'
M_BSUP_SAT_PULSE_TIMING = 'background_suppression_sat_pulse_timing'
static calculate_mz(initial_mz: numpy.ndarray, t1: numpy.ndarray, inv_pulse_times: list, sat_pulse_time: float, mag_time: float, inv_eff: numpy.ndarray, sat_eff: numpy.ndarray = 1.0) → numpy.ndarray

Calculates the longitudinal magnetisation after a sequence of background suppression pulses [4].

Parameters:

- initial_mz (np.ndarray) – The initial longitudinal magnetisation, \(M_z(t=0)\).
- t1 (np.ndarray) – The longitudinal relaxation time, \(T_1\).
- inv_pulse_times (list[float]) – Inversion pulse times, with respect to the imaging excitation pulse, \(\{ \tau_i, \tau_{i+1}... \tau_{M-1}, \tau_M \}\).
- mag_time (float) – The time at which to calculate the longitudinal magnetisation, \(t\); cannot be greater than sat_pulse_time.
- sat_pulse_time (float) – The time between the saturation pulse and the imaging excitation pulse, \(Q\).
- inv_eff (np.ndarray) – The efficiency of the inversion pulses, \(\chi\); -1 is complete inversion.
- sat_eff (np.ndarray) – The efficiency of the saturation pulses, \(\psi\); 1 is full saturation.
Returns: the longitudinal magnetisation after the background suppression sequence

Return type: np.ndarray

Equation

The longitudinal magnetisation at time \(t\) after the start of a background suppression sequence is calculated using the equation below. Only pulses that have run by time \(t\) contribute to the calculated magnetisation.
\[\begin{split}\begin{align} &M_z(t)= M_z(t=0)\cdot (1 + ((1-\psi)-1)\chi^n e^{-\frac{t}{T_1} }+ \sum \limits_{m=1}^n(\chi^m - \chi^{m-1}) e^{-\frac{\tau_m}{T_1}})\\ &\text{where}\\ &M_z(t)=\text{longitudinal magnetisation at time t}\\ &Q=\text{the delay between the saturation pulse and imaging excitation pulse}\\ &\psi=\text{saturation pulse efficiency}, 0 \leq \psi \leq 1\\ &\chi=\text{inversion pulse efficiency}, -1 \leq \chi \leq 0\\ &\tau_m = \text{inversion time of the }m^\text{th}\text{ pulse}\\ &T_1=\text{longitudinal relaxation time}\\ \end{align}\end{split}\]
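A scalar pure-Python sketch of this equation follows. It assumes all supplied inversion pulses have run by time \(t\), and the pulse indexing convention (which \(\tau\) is \(m=1\)) is an assumption here; the filter itself operates on arrays.

```python
import math

def mz_after_bsup(initial_mz, t1, inv_pulse_times, t, inv_eff, sat_eff=1.0):
    """Scalar form of the background suppression magnetisation equation.

    initial_mz: M_z(t=0); t1: T_1; inv_pulse_times: tau_m values;
    t: evaluation time; inv_eff: chi (-1 full inversion);
    sat_eff: psi (1 full saturation).
    """
    n = len(inv_pulse_times)
    # Saturation and relaxation term, scaled by chi^n for n inversions
    total = 1.0 + ((1.0 - sat_eff) - 1.0) * inv_eff ** n * math.exp(-t / t1)
    # Contribution of each inversion pulse at its inversion time tau_m
    for m, tau in enumerate(inv_pulse_times, start=1):
        total += (inv_eff ** m - inv_eff ** (m - 1)) * math.exp(-tau / t1)
    return initial_mz * total
```

With no inversion pulses and full saturation this reduces to the familiar saturation recovery curve \(M_z(t) = M_z(0)(1 - e^{-t/T_1})\).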
static calculate_pulse_efficiency(t1: numpy.ndarray) → numpy.ndarray

Calculates the pulse efficiency per T1 based on a polynomial fit [3].

Parameters:

- t1 (np.ndarray) – T1 times to calculate the pulse efficiencies for, seconds.

Returns: the pulse efficiencies, \(\chi\)

Return type: np.ndarray

Equation
\[\begin{split}\newcommand{\sn}[2]{#1 {\times} 10 ^ {#2}} \chi= \begin{cases} -0.998 & 250 \leq T_1 <450\\ - \left ( \begin{align} \sn{-2.245}{-15}T_1^4 \\ + \sn{2.378}{-11}T_1^3 \\ - \sn{8.987}{-8}T_1^2\\ + \sn{1.442}{-4}T_1\\ + \sn{9.1555}{-1} \end{align}\right ) & 450 \leq T_1 < 2000\\ -0.998 & 2000 \leq T_1 < 4200 \end{cases}\end{split}\]
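The piecewise fit can be sketched directly in Python. Note the 250-4200 range in the published fit suggests \(T_1\) is expressed in milliseconds here; the return of None outside the fitted range is an illustrative choice, not the filter's behaviour.

```python
def pulse_efficiency(t1):
    """Piecewise polynomial fit for inversion pulse efficiency, chi.

    t1 is in the units of the published fit (the 250-4200 range
    suggests milliseconds). Returns None outside the fitted range.
    """
    if 250 <= t1 < 450 or 2000 <= t1 < 4200:
        return -0.998
    if 450 <= t1 < 2000:
        # Fourth-order polynomial fit, negated to give an inversion efficiency
        return -(-2.245e-15 * t1 ** 4 + 2.378e-11 * t1 ** 3
                 - 8.987e-8 * t1 ** 2 + 1.442e-4 * t1 + 9.1555e-1)
    return None
```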
static optimise_inv_pulse_times(sat_time: float, t1: numpy.ndarray, pulse_eff: numpy.ndarray, num_pulses: int, method: str = 'Nelder-Mead') → scipy.optimize.OptimizeResult

Calculates optimised inversion pulse times for a background suppression pulse sequence.

Parameters:

- sat_time (float) – The time, in seconds, between the saturation pulse and the imaging excitation pulse, \(Q\).
- t1 (np.ndarray) – The longitudinal relaxation times to optimise the pulses for, \(T_1\).
- pulse_eff (np.ndarray) – The inversion pulse efficiency, \(\chi\), corresponding to each t1 entry.
- num_pulses (int) – The number of inversion pulses to optimise times for, \(N\). Must be greater than 0.
- method (str, optional) – The optimisation method to use; see scipy.optimize.minimize for more details. Defaults to "Nelder-Mead".

Raises: ValueError – if the number of pulses is less than 1.

Returns: the result from the optimisation

Return type: OptimizeResult

Equation

A set of optimal inversion times, \(\{ \tau_i, \tau_{i+1}... \tau_{M-1}, \tau_M \}\), is calculated by minimising the sum-of-squares of the magnetisation of all the T1 species in t1_opt:

\[\begin{split}\begin{align} &\min \left (\sum\limits_i^N M_z^2(t=Q, T_{1,i},\chi, \psi, \tau) + \sum\limits_i^N \begin{cases} 1 & M_z(t=Q, T_{1,i},\chi, \psi, \tau) < 0\\ 0 & M_z(t=Q, T_{1,i},\chi, \psi, \tau) \geq 0 \end{cases} \right) \\ &\text{where}\\ &N = \text{The number of $T_1$ species to optimise for}\\ \end{align}\end{split}\]
asldro.filters.basefilter module

BaseFilter classes and exception handling

class asldro.filters.basefilter.BaseFilter(name: str = 'Unknown')

Bases: abc.ABC

An abstract base class for filters. All filters should inherit this class.

add_child_filter(child: asldro.filters.basefilter.BaseFilter, io_map: Mapping[str, str] = None)

See documentation for add_parent_filter.
add_input(key: str, value)

Adds an input with a given key and value. If the key is already in the inputs, a RuntimeError is raised.
add_inputs(input_dict: Mapping[str, Any], io_map: Mapping[str, str] = None, io_map_optional: bool = False)

Adds multiple inputs via a dictionary. Optionally, maps the dictionary onto different input keys using an io_map.

Parameters:

- input_dict – The input dictionary.
- io_map – The dictionary used to perform the mapping. All keys and values must be strings. For example, {"one": "two", "three": "four"} will map input keys of "one" to "two" AND "three" to "four". If io_map is None, no mapping will be performed.
- io_map_optional – If this is False, a KeyError will be raised if the keys in the io_map are not found in the input_dict.

Raises: KeyError – if keys required in the mapping are not found in the input_dict.
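The io_map remapping described above amounts to a dictionary key translation, which can be sketched as follows. This is an illustrative helper, not the library's internals, and it assumes unmapped keys are dropped when an io_map is supplied (as the add_parent_filter documentation below describes for its io_map).

```python
from typing import Any, Mapping

def remap_inputs(input_dict: Mapping[str, Any],
                 io_map: Mapping[str, str] = None,
                 io_map_optional: bool = False) -> dict:
    """Translate input_dict keys through io_map, in the style of add_inputs."""
    if io_map is None:
        return dict(input_dict)  # no mapping performed
    remapped = {}
    for src_key, dst_key in io_map.items():
        if src_key not in input_dict:
            if io_map_optional:
                continue  # silently skip missing keys when optional
            raise KeyError(f"{src_key} not found in input_dict")
        remapped[dst_key] = input_dict[src_key]
    return remapped
```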
-
add_parent_filter
(parent: asldro.filters.basefilter.BaseFilter, io_map: Mapping[str, str] = None)¶ Add a parent filter (the inputs of this filter will be connected to the outputs of the parent). By default, ALL of the outputs of the parent will be directly mapped to the inputs of this filter using the same KEY. This can be overridden by supplying io_map. e.g. io_map = {
“output_key1”:”input_key1”, “output_key2”:”input_key2”, … }
will map the output of the parent filter with a key of “output_key1” to the input of this filter with a key of “input_key1” etc. If io_map is defined ONLY those keys which are explicitly listed are mapped (the others are ignored)
-
run
(history=None)¶ Calls the _run class on all parents (recursively) to make sure they are up-to-date. Then maps the parents’ outputs to inputs for this filter. Then calls the _run method on this filter.
-
-
exception
asldro.filters.basefilter.
BaseFilterException
(msg: str)¶ Bases:
Exception
Exceptions for this module
-
exception
asldro.filters.basefilter.
FilterInputKeyError
¶ Bases:
Exception
Used to show an error with a filter’s input keys e.g. multiple values have been assigned to the same input
-
exception
asldro.filters.basefilter.
FilterInputValidationError
¶ Bases:
Exception
Used to show an error when running the validation on the filter’s inputs i.e. when running _validate_inputs()
-
exception
asldro.filters.basefilter.
FilterLoopError
¶ Bases:
Exception
Used when a loop is detected in the filter chain
asldro.filters.bids_output_filter module¶
asldro.filters.combine_fuzzy_masks_filter module¶
Combined Fuzzy Masks Filter
-
class
asldro.filters.combine_fuzzy_masks_filter.
CombineFuzzyMasksFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for creating a segmentation mask based on one or more ‘fuzzy’ masks.
Inputs
Input Parameters are all keyword arguments for the
CombineFuzzyMasksFilter.add_input()
member function. They are also accessible via class constants, for exampleCombineFuzzyMasksFilter.KEY_THRESHOLD
- Parameters
'fuzzy_mask' (BaseImageContainer or list[BaseImageContainer]) – Fuzzy mask images to combine. Each mask should have voxel values between 0 and 1, defining the fraction of that region in each voxel.
'region_values' (int or list[int]) – A list of values to assign to each region in the output ‘seg_mask’. The order corresponds with the order of the masks in ‘fuzzy_mask’.
'region_priority' (list[int] or int) – A list of priority order for the regions, 1 being the highest priority. The order corresponds with the order of the masks in ‘fuzzy_mask’, and all values must be unique. If ‘fuzzy_mask’ is a single image then this input can be omitted.
'threshold' (float, optional) – The threshold value, below which a region’s contributions to a voxel are ignored. Must be between 0 and 1.0. Defaults to 0.05.
Outputs
Once run, the filter will populate the dictionary
CombineFuzzyMasksFilter.outputs
with the following entries- Parameters
'seg_mask' (BaseImageContainer) – A segmentation mask image constructed from the inputs, defining exclusive regions (one region per voxel). The image data type is numpy.int16.
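The combination rule can be sketched with NumPy. This is a hypothetical re-implementation, not the library's code; in particular, the tie-breaking behaviour (largest thresholded fraction wins, with 'region_priority' breaking ties) is an assumption:

```python
import numpy as np

def combine_fuzzy_masks(masks, region_values, region_priority, threshold=0.05):
    """Illustrative sketch of CombineFuzzyMasksFilter's combination rule."""
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    stack[stack < threshold] = 0.0            # ignore sub-threshold contributions
    order = np.argsort(region_priority)       # priority 1 (highest) first
    stack, values = stack[order], np.asarray(region_values)[order]
    winner = np.argmax(stack, axis=0)         # argmax ties favour higher priority
    seg_mask = values[winner].astype(np.int16)
    seg_mask[stack.sum(axis=0) == 0] = 0      # no region contributes -> 0
    return seg_mask
```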
-
KEY_FUZZY_MASK
= 'fuzzy_mask'¶
-
KEY_REGION_PRIORITY
= 'region_priority'¶
-
KEY_REGION_VALUES
= 'region_values'¶
-
KEY_SEG_MASK
= 'seg_mask'¶
-
KEY_THRESHOLD
= 'threshold'¶
asldro.filters.combine_time_series_filter module¶
Combine Time Series Filter
-
class
asldro.filters.combine_time_series_filter.
CombineTimeSeriesFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that takes, as input, a set of ImageContainers. These should each represent a single time point in a time series acquisition. As an output, these ImageContainers will be concatenated across the 4th (time) dimension and their metadata combined with the following rules:
if all values of a given field are the same, for all time-series, use that value in the output metadata; else,
concatenate the values in a list.
Instance variables of the BaseImageContainer such as
image_flavour
, will all be checked for consistency and copied across to the output image.Inputs
Input Parameters are all keyword arguments for the
CombineTimeSeriesFilter.add_inputs()
member function. They are also accessible via class constants, for example CombineTimeSeriesFilter.KEY_IMAGE
.- Parameters
'image_NNNNN' – A time-series image. The order of these time series will be determined by the NNNNN component, which shall be a positive integer. Any number of digits can be used in NNNNN. For example, as a sequence, image_0000, image_1, image_002, image_03 is valid.
Note
the indices MUST start from 0 and increment by 1, and have no missing or duplicate indices. This is to help prevent accidentally missing/adding an index value.
Note
If the image data type is complex, it is likely that most NIFTI viewers will have problems displaying 4D complex data correctly.
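The index-ordering rule can be sketched as follows (an illustrative helper, not part of the library, using the same regular expression as INPUT_IMAGE_REGEX_OBJ):

```python
import re

INPUT_IMAGE_REGEX = re.compile(r"^image_(?P<index>[0-9]+)$")

def ordered_image_keys(keys):
    """Order 'image_NNNNN' keys by index, enforcing 0, 1, 2, ... (illustrative)."""
    indexed = {}
    for key in keys:
        match = INPUT_IMAGE_REGEX.match(key)
        if match:
            indexed[int(match["index"])] = key
    if sorted(indexed) != list(range(len(indexed))):
        raise ValueError("indices must start at 0 and increment by 1")
    return [indexed[i] for i in sorted(indexed)]
```

For example, `ordered_image_keys(["image_1", "image_0000", "image_002"])` returns the keys in time order, while a sequence with a gap (e.g. only `image_1`) is rejected.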
Outputs
Once run, the filter will populate the dictionary
CombineTimeSeriesFilter.outputs
with the following entries- Parameters
'image' (BaseImageContainer) – A 4D image of the combined time series.
-
INPUT_IMAGE_REGEX_OBJ
= re.compile('^image_(?P<index>[0-9]+)$')¶
-
KEY_IMAGE
= 'image'¶
asldro.filters.create_volumes_from_seg_mask module¶
Create volumes from segmentation mask filter
-
class
asldro.filters.create_volumes_from_seg_mask.
CreateVolumesFromSegMask
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for assigning values to regions defined by a segmentation mask, then concatenating these images into a 5D image
Inputs
Input Parameters are all keyword arguments for the
CreateVolumesFromSegMask.add_input()
member function. They are also accessible via class constants, for exampleCreateVolumesFromSegMask.KEY_IMAGE
- Parameters
'seg_mask' (BaseImageContainer) – segmentation mask image comprising integer values for each region. dtype of the image data must be an unsigned or signed integer type.
'label_values' (list[int]) – List of the integer values in
'seg_mask'
, must not have any duplicate values. The order of the integers in this list must be matched by the input'label_names'
, and the lists with quantity values for each quantity in the input'quantities'
.'label_names' (list[str]) – List of strings defining the names of each region defined in
'seg_mask'
. The order matches the order given in'label_values'
'quantities' (dict) – Dictionary containing key/value pairs where the key name defines a quantity, and the value is an array of floats that define the value to assign to each region. The order of these floats matches the order given in
'label_values'
.'units' (list[str]) – List of strings defining the units that correspond with each quantity given in the dictionary
'quantities'
, as given by the order defined in that dictionary.
Outputs
Once run, the filter will populate the dictionary
CreateVolumesFromSegMask.outputs
with the following entries- Parameters
'image' (BaseImageContainer) – The combined 5D image, with volumes where the values for each quantity have been assigned to the regions defined in
'seg_mask'
. The final entry in the 5th dimension is a copy of the image'seg_mask'
'image_info' (dict) – A dictionary describing the regions, quantities and units in the output
'image'
. This is of the same format as the ground truth JSON file, however there is no ‘parameters’ object. See Making an input ground truth for more information on this format.
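The assignment of quantity values to regions can be sketched as follows (an illustrative helper, not the library's implementation; it produces a single 3D volume and ignores the 5D concatenation step):

```python
import numpy as np

def volume_from_seg_mask(seg_mask, label_values, quantity_values):
    """Assign quantity_values[i] to voxels where seg_mask == label_values[i]."""
    volume = np.zeros(np.asarray(seg_mask).shape, dtype=np.float64)
    for label, value in zip(label_values, quantity_values):
        volume[np.asarray(seg_mask) == label] = value
    return volume
```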
-
KEY_IMAGE
= 'image'¶
-
KEY_IMAGE_INFO
= 'image_info'¶
-
KEY_LABEL_NAMES
= 'label_names'¶
-
KEY_LABEL_VALUES
= 'label_values'¶
-
KEY_QUANTITIES
= 'quantities'¶
-
KEY_SEG_MASK
= 'seg_mask'¶
-
KEY_UNITS
= 'units'¶
asldro.filters.filter_block module¶
FilterBlock class
-
class
asldro.filters.filter_block.
FilterBlock
(name: str = 'Unknown')¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter made from multiple, chained filters. Used for when the same configuration of filters is used multiple times, or needs to be tested as a whole
-
run
(history=None)¶ Calls the BaseFilter’s run method to make sure all of the inputs of this FilterBlock are up-to-date and valid. Then runs this FilterBlock’s output filter, and populates the outputs to this FilterBlock.
-
asldro.filters.fourier_filter module¶
Fourier Transform filter
-
class
asldro.filters.fourier_filter.
FftFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for performing an n-dimensional fast Fourier transform of the input. Input is either a NumpyImageContainer or NiftiImageContainer. Output is a complex numpy array of the discrete Fourier transform named ‘kdata’
-
KEY_IMAGE
= 'image'¶
-
-
class
asldro.filters.fourier_filter.
IfftFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for performing an n-dimensional inverse fast Fourier transform of the input. Input is a numpy array named ‘kdata’. Output is a complex numpy array of the inverse discrete Fourier transform named ‘image’
-
KEY_IMAGE
= 'image'¶
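The forward/inverse pair can be illustrated with plain NumPy (the filters wrap the equivalent of NumPy's n-dimensional FFT routines; the normalisation used by the library is assumed to be NumPy's default):

```python
import numpy as np

# FftFilter: 'image' -> complex k-space 'kdata'; IfftFilter reverses it
image = np.random.default_rng(0).standard_normal((4, 4, 4))
kdata = np.fft.fftn(image)        # discrete Fourier transform (complex)
recovered = np.fft.ifftn(kdata)   # inverse transform recovers the image
assert np.allclose(recovered.real, image)
```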
-
asldro.filters.gkm_filter module¶
General Kinetic Model Filter
-
class
asldro.filters.gkm_filter.
GkmFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that generates the ASL signal using the General Kinetic Model.
Inputs
Input Parameters are all keyword arguments for the
GkmFilter.add_inputs()
member function. They are also accessible via class constants, for exampleGkmFilter.KEY_PERFUSION_RATE
- Parameters
'perfusion_rate' (BaseImageContainer) – Map of perfusion rate, in ml/100g/min (>=0)
'transit_time' – Map of the time taken for the labelled bolus to reach the voxel, seconds (>=0).
'm0' – The tissue equilibrium magnetisation, can be a map or single value (>=0).
'label_type' (str) –
Determines which GKM equations to use:
”casl” OR “pcasl” (case insensitive) for the continuous model
”pasl” (case insensitive) for the pulsed model
'label_duration' (float) – The length of the labelling pulse, seconds (0 to 100 inclusive)
'signal_time' (float) – The time after labelling commences to generate signal, seconds (0 to 100 inclusive)
'label_efficiency' (float) – The degree of inversion of the labelling (0 to 1 inclusive)
'lambda_blood_brain' (float or BaseImageContainer) – The blood-brain-partition-coefficient (0 to 1 inclusive)
't1_arterial_blood' (float) – Longitudinal relaxation time of arterial blood, seconds (0 exclusive to 100 inclusive)
't1_tissue' (BaseImageContainer) – Longitudinal relaxation time of the tissue, seconds (0 to 100 inclusive, however voxels with
t1 = 0
will havedelta_m = 0
)'model' (str) –
The model to use to generate the perfusion signal:
”full” for the full “Buxton” General Kinetic Model [2]
”whitepaper” for the simplified model, derived from the quantification equations the ASL Whitepaper consensus paper [1].
Defaults to “full”.
Outputs
Once run, the filter will populate the dictionary
GkmFilter.outputs
with the following entries- Parameters
'delta_m' (BaseImageContainer) – An image with synthetic ASL perfusion contrast. This will be the same class as the input ‘perfusion_rate’
Metadata
The following parameters are added to
GkmFilter.outputs["delta_m"].metadata
:label_type
label_duration
(pcasl/casl only)post_label_delay
bolus_cut_off_flag
(pasl only)bolus_cut_off_delay_time
(pasl only)label_efficiency
lambda_blood_brain
(only if a single value is supplied)t1_arterial_blood
m0
(only if a single value is supplied)gkm_model
=model
post_label_delay
is calculated assignal_time - label_duration
bolus_cut_off_delay_time
takes the value of the inputlabel_duration
, this field is used for pasl in line with the BIDS specification.Equations
The general kinetic model [2] is the standard signal model for ASL perfusion measurements. It considers the difference between the control and label conditions to be a deliverable tracer, referred to as \(\Delta M(t)\).
The amount of \(\Delta M(t)\) within a voxel at time \(t\) depends on the history of:
delivery of magnetisation by arterial flow
clearance by venous flow
longitudinal relaxation
These processes can be described by defining three functions of time:
The delivery function \(c(t)\) - the normalised arterial concentration of magnetisation arriving at the voxel at time \(t\).
The residue function \(r(t,t')\) - the fraction of tagged water molecules that arrive at time \(t'\) and are still in the voxel at time \(t\).
The magnetisation relaxation function \(m(t,t')\) is the fraction of the original longitudinal magnetisation tag carried by the water molecules that arrived at time \(t'\) that remains at time \(t\).
Using these definitions \(\Delta M(t)\) can be constructed as the sum over history of delivery of magnetisation to the tissue weighted with the fraction of that magnetisation that remains in the voxel:
\[\begin{split}&\Delta M(t)=2\cdot M_{0,b}\cdot f\cdot\left\{ c(t)\ast\left[r(t)\cdot m(t)\right]\right\}\\ &\text{where}\\ &\ast=\text{convolution operator} \\ &r(t)=\text{residue function}=e^{-\frac{ft}{\lambda}}\\ &m(t)=e^{-\frac{t}{T_{1}}}\\ &c(t)=\text{delivery function, defined as plug flow} = \begin{cases} 0 & 0<t<\Delta t\\ \alpha e^{-\frac{t}{T_{1,b}}}\,\text{(PASL)} & \Delta t<t<\Delta t+\tau\\ \alpha e^{-\frac{\Delta t}{T_{1,b}}}\,\text{(CASL/pCASL)}\\ 0 & t>\Delta t+\tau \end{cases}\\ &\alpha=\text{labelling efficiency} \\ &\tau=\text{label duration} \\ &\Delta t=\text{initial transit delay, ATT} \\ &M_{0,b} = \text{equilibrium magnetisation of arterial blood} = \frac{M_{0,\text{tissue}}}{\lambda} \\ & f = \text{the perfusion rate, CBF}\\ &\lambda = \text{blood brain partition coefficient}\\ &T_{1,b} = \text{longitudinal relaxation time of arterial blood}\\ &T_{1} = \text{longitudinal relaxation time of tissue}\\\end{split}\]Note that all units are in SI, with \(f\) having units \(s^{-1}\). Multiplying by 6000 gives units of \(ml/100g/min\).
Full Model
The full solutions to the GKM [2] are used to calculate \(\Delta M(t)\) when
model=="full"
:(p)CASL:
\[\begin{split}&\Delta M(t)=\begin{cases} 0 & 0<t\leq\Delta t\\ 2M_{0,b}fT'_{1}\alpha e^{-\frac{\Delta t}{T_{1,b}}}q_{ss}(t) & \Delta t<t<\Delta t+\tau\\ 2M_{0,b}fT'_{1}\alpha e^{-\frac{\Delta t}{T_{1,b}}} e^{-\frac{t-\tau-\Delta t}{T'_{1}}}q_{ss}(t) & t\geq\Delta t+\tau \end{cases}\\ &\text{where}\\ &q_{ss}(t)=\begin{cases} 1-e^{-\frac{t-\Delta t}{T'_{1}}} & \Delta t<t <\Delta t+\tau\\ 1-e^{-\frac{\tau}{T'_{1}}} & t\geq\Delta t+\tau \end{cases}\\ &\frac{1}{T'_{1}}=\frac{1}{T_1} + \frac{f}{\lambda}\\\end{split}\]PASL:
\[\begin{split}&\Delta M(t)=\begin{cases} 0 & 0<t\leq\Delta t\\ 2M_{0,b}f(t-\Delta t) \alpha e^{-\frac{t}{T_{1,b}}}q_{p}(t) & \Delta t < t < \Delta t+\tau\\ 2M_{0,b}f\alpha \tau e^{-\frac{t}{T_{1,b}}}q_{p}(t) & t\geq\Delta t+\tau \end{cases}\\ &\text{where}\\ &q_{p}(t)=\begin{cases} \frac{e^{kt}(e^{-k \Delta t}-e^{-kt})}{k(t-\Delta t)} & \Delta t<t<\Delta t+\tau\\ \frac{e^{kt}(e^{-k\Delta t}-e^{-k(\tau + \Delta t)})}{k\tau} & t\geq\Delta t+\tau \end{cases}\\ &\frac{1}{T'_{1}}=\frac{1}{T_1} + \frac{f}{\lambda}\\ &k=\frac{1}{T_{1,b}}-\frac{1}{T'_1}\end{split}\]
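The (p)CASL solution above can be sketched for a single voxel (an illustrative implementation in SI units, with f in 1/s; the function and argument names are hypothetical, not the library's API):

```python
import numpy as np

def delta_m_full_casl(f, att, m0_tissue, tau, t, alpha, lam, t1b, t1):
    """Full GKM (p)CASL delta-M for one voxel; f in 1/s, times in seconds."""
    t1_prime = 1.0 / (1.0 / t1 + f / lam)   # apparent T1, 1/T1' = 1/T1 + f/lambda
    m0_blood = m0_tissue / lam              # M0,b = M0,tissue / lambda
    if t <= att:                            # bolus has not yet arrived
        return 0.0
    if t < att + tau:                       # bolus arriving
        q_ss = 1.0 - np.exp(-(t - att) / t1_prime)
        return 2.0 * m0_blood * f * t1_prime * alpha * np.exp(-att / t1b) * q_ss
    q_ss = 1.0 - np.exp(-tau / t1_prime)    # bolus fully delivered
    return (2.0 * m0_blood * f * t1_prime * alpha * np.exp(-att / t1b)
            * np.exp(-(t - tau - att) / t1_prime) * q_ss)
```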
Simplified Model
The simplified model, derived from the single subtraction quantification equations (see
AslQuantificationFilter
) is used when model=="whitepaper"
:(p)CASL:
\[\begin{split}&\Delta M(t) = \begin{cases} 0 & 0<t\leq\Delta t + \tau\\ {2 M_{0,b} f T_{1,b} \alpha (1-e^{-\frac{\tau}{T_{1,b}}}) e^{-\frac{t-\tau}{T_{1,b}}}} & t > \Delta t + \tau\\ \end{cases}\\\end{split}\]PASL
\[\begin{split}&\Delta M(t) = \begin{cases} 0 & 0<t\leq\Delta t + \tau\\ {2 M_{0,b} f \tau \alpha e^{-\frac{t}{T_{1,b}}}} & t > \Delta t + \tau\\ \end{cases}\end{split}\]
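The simplified (p)CASL case can likewise be sketched for a single voxel (illustrative only; function and argument names are hypothetical and f is in 1/s):

```python
import numpy as np

def delta_m_whitepaper_casl(f, att, m0_tissue, tau, t, alpha, lam, t1b):
    """Simplified (p)CASL model; zero until the bolus is fully delivered."""
    m0_blood = m0_tissue / lam
    dm = (2.0 * m0_blood * f * t1b * alpha
          * (1.0 - np.exp(-tau / t1b)) * np.exp(-(t - tau) / t1b))
    return np.where(t > att + tau, dm, 0.0)
```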
-
CASL
= 'casl'¶
-
KEY_DELTA_M
= 'delta_m'¶
-
KEY_LABEL_DURATION
= 'label_duration'¶
-
KEY_LABEL_EFFICIENCY
= 'label_efficiency'¶
-
KEY_LABEL_TYPE
= 'label_type'¶
-
KEY_LAMBDA_BLOOD_BRAIN
= 'lambda_blood_brain'¶
-
KEY_M0
= 'm0'¶
-
KEY_MODEL
= 'model'¶
-
KEY_PERFUSION_RATE
= 'perfusion_rate'¶
-
KEY_SIGNAL_TIME
= 'signal_time'¶
-
KEY_T1_ARTERIAL_BLOOD
= 't1_arterial_blood'¶
-
KEY_T1_TISSUE
= 't1_tissue'¶
-
KEY_TRANSIT_TIME
= 'transit_time'¶
-
MODEL_FULL
= 'full'¶
-
MODEL_WP
= 'whitepaper'¶
-
M_BOLUS_CUT_OFF_DELAY_TIME
= 'bolus_cut_off_delay_time'¶
-
M_BOLUS_CUT_OFF_FLAG
= 'bolus_cut_off_flag'¶
-
M_GKM_MODEL
= 'gkm_model'¶
-
M_POST_LABEL_DELAY
= 'post_label_delay'¶
-
PASL
= 'pasl'¶
-
PCASL
= 'pcasl'¶
-
static
calculate_delta_m_gkm
(perfusion_rate: numpy.ndarray, transit_time: numpy.ndarray, m0_tissue: numpy.ndarray, label_duration: float, signal_time: float, label_efficiency: float, partition_coefficient: numpy.ndarray, t1_arterial_blood: float, t1_tissue: numpy.ndarray, label_type: str) → numpy.ndarray¶ Calculates the difference in magnetisation between the control and label condition (\(\Delta M\)) using the full solutions to the General Kinetic Model [2].
- Parameters
perfusion_rate (np.ndarray) – Map of perfusion rate
transit_time (np.ndarray) – Map of transit time
m0_tissue (np.ndarray) – The tissue equilibrium magnetisation
label_duration (float) – The length of the labelling pulse
signal_time (float) – The time after the labelling pulse commences to generate signal.
label_efficiency (float) – The degree of inversion of the labelling pulse.
partition_coefficient (np.ndarray) – The tissue-blood partition coefficient
t1_arterial_blood (float) – Longitudinal relaxation time of the arterial blood.
t1_tissue (np.ndarray) – Longitudinal relaxation time of the tissue
label_type (str) – Determines the specific model to use: Pulsed (“pasl”) or (pseudo)Continuous (“pcasl” or “casl”) labelling
- Returns
the difference magnetisation, \(\Delta M\)
- Return type
np.ndarray
-
static
calculate_delta_m_whitepaper
(perfusion_rate: numpy.ndarray, transit_time: numpy.ndarray, m0_tissue: numpy.ndarray, label_duration: float, signal_time: float, label_efficiency: float, partition_coefficient: numpy.ndarray, t1_arterial_blood: float, label_type: str) → numpy.ndarray¶ Calculates the difference in magnetisation between the control and label condition (\(\Delta M\)) using the single subtraction simplification from the ASL Whitepaper consensus paper [1].
- Parameters
perfusion_rate (np.ndarray) – Map of perfusion rate
transit_time (np.ndarray) – Map of transit time
m0_tissue (np.ndarray) – The tissue equilibrium magnetisation
label_duration (float) – The length of the labelling pulse
signal_time (float) – The time after the labelling pulse commences to generate signal.
label_efficiency (float) – The degree of inversion of the labelling pulse.
partition_coefficient (np.ndarray) – The tissue-blood partition coefficient
t1_arterial_blood (float) – Longitudinal relaxation time of the arterial blood.
label_type (str) – Determines the specific model to use: Pulsed (“pasl”) or (pseudo)Continuous (“pcasl” or “casl”) labelling
- Returns
the difference magnetisation, \(\Delta M\)
- Return type
np.ndarray
-
static
check_and_make_image_from_value
(arg: float, shape: tuple, metadata: dict, metadata_key: str) → numpy.ndarray¶ Checks the type of the input parameter to see if it is a float or a BaseImageContainer. If it is an image:
return the image ndarray
check if it has the same value everywhere (i.e. an image override), if it does then place the value into the metadata dict under the metadata_key
If it is a float: * make an ndarray with the same value * place the value into the metadata dict under the metadata_key
This makes calculations more straightforward, as an ndarray can always be expected.
Arguments
- Parameters
arg (float or BaseImageContainer) – The input parameter to check
shape (tuple) – The shape of the image to create
metadata (dict) – metadata dict, which is updated by this function
metadata_key (str) – key to assign the value of arg (if a float or single value image) to
- Returns
image of the parameter
- Return type
np.ndarray
-
static
compute_arrival_state_masks
(transit_time: numpy.ndarray, signal_time: float, label_duration: float) → dict¶ Creates boolean masks for each of the states of the delivery curve
- Parameters
transit_time (np.ndarray) – map of the transit time
signal_time (float) – the time to generate signal at
label_duration (float) – The duration of the labelling pulse
- Returns
a dictionary with three entries, each a ndarray with shape the same as transit_time:
- ”not_arrived”
voxels where the bolus has not reached yet
- ”arriving”
voxels where the bolus has reached but not been completely delivered.
- ”arrived”
voxels where the bolus has been completely delivered
- Return type
dict
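A minimal sketch of these masks (the boundary handling at exactly t = Δt and t = Δt + τ is an assumption):

```python
import numpy as np

def arrival_state_masks(transit_time, signal_time, label_duration):
    """Boolean masks for the three states of the delivery curve (illustrative)."""
    transit_time = np.asarray(transit_time)
    not_arrived = signal_time <= transit_time
    arrived = signal_time >= transit_time + label_duration
    return {"not_arrived": not_arrived,
            "arriving": ~not_arrived & ~arrived,   # reached but not fully delivered
            "arrived": arrived}
```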
asldro.filters.ground_truth_loader module¶
Ground truth loader filter
-
class
asldro.filters.ground_truth_loader.
GroundTruthLoaderFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for loading ground truth NIFTI/JSON file pairs.
Inputs
Input Parameters are all keyword arguments for the
GroundTruthLoaderFilter.add_input()
member function. They are also accessible via class constants, for exampleGroundTruthLoaderFilter.KEY_IMAGE
- Parameters
'image' (NiftiImageContainer) – ground truth image, must be 5D and the 5th dimension have the same length as the number of quantities.
'quantities' (list[str]) – list of quantity names
'units' (list[str]) – list of units corresponding to the quantities, must be the same length as quantities
'parameters' (dict) – dictionary containing keys
't1_arterial_blood'
,'lambda_blood_brain'
and'magnetic_field_strength'
.'segmentation' – dictionary containing key-value pairs corresponding to tissue type and label value in the
'seg_label'
volume.'image_override' (dict) – (optional) dictionary containing single-value override values for any of the
'image'
that are loaded. The keys must match the quantity name defined in'quantities'
.'parameter_override' (dict) – (optional) dictionary containing single-value override values for any of the
'parameters'
that are loaded. The keys must match the key defined in'parameters'
.'ground_truth_modulate' (dict) –
dictionary with keys corresponding with quantity names. The possible dictionary values (both optional) are:
{ "scale": N, "offset": M, }
Any corresponding images will have the corresponding scale and offset applied before being output. See
ScaleOffsetFilter
for more details.
Outputs
Once run, the filter will populate the dictionary
GroundTruthLoaderFilter.outputs
with output fields based on the input'quantities'
.Each key in
'quantities'
will result in a NiftiImageContainer corresponding to a 3D/4D subset of the nifti input (split along the 5th dimension). The data types of images will be the same as those input EXCEPT for a quantity labelled'seg_label'
which will be converted to a uint16 data type.If ‘override_image’ is defined, the corresponding ‘image’ will be set to the overriding value before being output.
If ‘override_parameters’ is defined, the corresponding parameter will be set to the overriding value before being output.
If ‘ground_truth_modulate’ is defined, the corresponding ‘image’(s) will be scaled and/or offset by the corresponding values.
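The modulation can be sketched as follows (a hypothetical helper; the order, scale then offset, is assumed from ScaleOffsetFilter):

```python
def modulate(data, spec):
    """Apply an optional 'scale' then 'offset' from a ground_truth_modulate entry."""
    return data * spec.get("scale", 1.0) + spec.get("offset", 0.0)
```

For example, `{"scale": 2, "offset": 3}` maps a voxel value of 10.0 to 23.0, and an empty spec leaves the data unchanged.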
The keys-value pairs in the input
'parameters'
will also be destructured and piped through to the output, for example:- Parameters
't1' (NiftiImageContainer) – volume of T1 relaxation times
'seg_label' (NiftiImageContainer (uint16 data type)) – segmentation label mask corresponding to different tissue types.
'magnetic_field_strength' (float) – the magnetic field strength in Tesla.
't1_arterial_blood' (float) – the T1 relaxation time of arterial blood
'lambda_blood_brain' (float) – the blood-brain-partition-coefficient
A field metadata will be created in each image container, with the following fields:
- magnetic_field_strength
corresponds to the value in the ‘parameters’ object.
- quantity
corresponds to the entry in the
'quantities'
array.
- units
corresponds with the entry in the
'units'
array.
The 'segmentation'
object from the JSON file will also be piped through to the metadata entry of the'seg_label'
image container.-
KEY_GROUND_TRUTH_MODULATE
= 'ground_truth_modulate'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_IMAGE_OVERRIDE
= 'image_override'¶
-
KEY_MAG_STRENGTH
= 'magnetic_field_strength'¶
-
KEY_PARAMETERS
= 'parameters'¶
-
KEY_PARAMETER_OVERRIDE
= 'parameter_override'¶
-
KEY_QUANTITIES
= 'quantities'¶
-
KEY_QUANTITY
= 'quantity'¶
-
KEY_SEGMENTATION
= 'segmentation'¶
-
KEY_UNITS
= 'units'¶
asldro.filters.image_tools module¶
Filters for basic image container manipulation and maths
-
class
asldro.filters.image_tools.
FloatToIntImageFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter which converts image data from float to integer.
Inputs
Input Parameters are all keyword arguments for the
FloatToIntImageFilter.add_input()
member function. They are also accessible via class constants, for exampleFloatToIntImageFilter.KEY_IMAGE
- Parameters
'image' (BaseImageContainer) – Image to convert from float to integer. The dtype of the image data must be float.
'method' –
Defines which method to use for conversion:
”round”: returns the nearest integer
”floor”: returns the largest integer that is less than the input value.
”ceil”: returns the smallest integer that is greater than the input value.
”truncate”: Removes the decimal portion of the number. This will round down for positive numbers and up for negative.
Outputs
Once run, the filter will populate the dictionary
FloatToIntImageFilter.outputs
with the following entries- Parameters
'image' (BaseImageContainer) – The input image, with the image data as integer type.
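The four conversion methods map naturally onto NumPy rounding functions (an illustrative sketch; the output integer type is assumed to be int16):

```python
import numpy as np

def float_to_int(data, method="round"):
    """Convert float image data to integers using one of the four methods."""
    ops = {"round": np.rint,      # nearest integer
           "floor": np.floor,     # largest integer <= value
           "ceil": np.ceil,       # smallest integer >= value
           "truncate": np.trunc}  # toward zero: down for +ve, up for -ve
    return ops[method](np.asarray(data)).astype(np.int16)
```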
-
CEIL
= 'ceil'¶
-
FLOOR
= 'floor'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_METHOD
= 'method'¶
-
METHODS
= ['round', 'floor', 'ceil', 'truncate']¶
-
ROUND
= 'round'¶
-
TRUNCATE
= 'truncate'¶
asldro.filters.invert_image_filter module¶
Invert image filter
-
class
asldro.filters.invert_image_filter.
InvertImageFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter which simply inverts the input image.
Must have one input named ‘image’, which corresponds with a derivative of BaseImageContainer.
Creates a single output named ‘image’.
asldro.filters.json_loader module¶
JSON file loader filter
-
class
asldro.filters.json_loader.
JsonLoaderFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for loading a JSON file.
Inputs
Input parameters are all keyword arguments for the
JsonLoaderFilter.add_inputs()
member function. They are also accessible via class constants, for exampleJsonLoaderFilter.KEY_FILENAME
.- Parameters
'filename' (str) – The path to the JSON file to load
'schema' (dict) – (optional) The schema to validate against (in python dict format). Some schemas can be found in asldro.validators.schemas, or one can be supplied directly here.
'root_object_name' (str) – Optionally place all of the key-value pairs inside this object
Outputs
Creates multiple outputs, based on the root key-value pairs in the JSON file. For example: { “foo”: 1, “bar”: “test”} will create two outputs named “foo” and “bar” with integer and string values respectively. The outputs may also be nested, i.e. objects or arrays.
If the input parameter
'root_object_name'
is supplied then these outputs will be nested within an object taking the name of the value of'root_object_name'
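The output behaviour can be sketched as follows (an illustrative helper, not the library's code):

```python
import json

def load_json_outputs(text, root_object_name=None):
    """Parse JSON text into filter outputs, optionally nested under a root key."""
    data = json.loads(text)
    return {root_object_name: data} if root_object_name else data
```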
-
KEY_FILENAME
= 'filename'¶
-
KEY_ROOT_OBJECT_NAME
= 'root_object_name'¶
-
KEY_SCHEMA
= 'schema'¶
asldro.filters.load_asl_bids_filter module¶
Load ASL BIDS filter class
-
class
asldro.filters.load_asl_bids_filter.
LoadAslBidsFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that loads in ASL data in BIDS format, comprising a NIFTI image file, a JSON sidecar, and a TSV aslcontext file. After loading the data, image containers are created using the volumes described in aslcontext. For each of these containers, the data in the sidecar is added to the metadata object. In addition, a metadata entry ‘asl_context’ is created, which is a list of the corresponding volumes contained in each container. Any metadata entries that are an array and specific to each volume have only the corresponding values copied.
Inputs
Input Parameters are all keyword arguments for the
LoadAslBidsFilter.add_inputs()
member function. They are also accessible via class constants, for exampleLoadAslBidsFilter.KEY_SIDECAR
- Parameters
'image_filename' (str) – path and filename to the ASL NIFTI image (must end in .nii or .nii.gz)
'sidecar_filename' (str) – path and filename to the JSON sidecar (must end in .json)
'aslcontext_filename' (str) – path and filename to the aslcontext file (must end in .tsv). This must be a tab separated values file, with heading ‘volume_type’ and then entries which are either ‘control’, ‘label’, or ‘m0scan’.
Outputs
Once run, the filter will populate the dictionary
LoadAslBidsFilter.outputs
with the following entries- Parameters
'source' (BaseImageContainer) – the full ASL NIFTI image
'control' (BaseImageContainer) – control volumes (as defined by aslcontext)
'label' (BaseImageContainer) – label volumes (as defined by aslcontext)
'm0' (BaseImageContainer) – m0 volumes (as defined by aslcontext)
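Parsing the aslcontext file into volume groups can be sketched as follows (an illustrative helper; the library's actual parsing may differ):

```python
def volume_indices_by_type(aslcontext_text):
    """Group NIFTI volume indices by their aslcontext volume_type."""
    header, *volume_types = aslcontext_text.strip().splitlines()
    if header != "volume_type":
        raise ValueError("aslcontext must have a 'volume_type' heading")
    groups = {}
    for index, volume_type in enumerate(volume_types):
        groups.setdefault(volume_type, []).append(index)
    return groups
```

For example, an aslcontext with rows m0scan, control, label, control, label groups volume 0 as 'm0', volumes 1 and 3 as 'control', and volumes 2 and 4 as 'label'.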
-
ASL_CONTEXT_MAPPING
= {'control': 'control', 'label': 'label', 'm0': 'm0scan'}¶
-
KEY_ASLCONTEXT_FILENAME
= 'aslcontext_filename'¶
-
KEY_CONTROL
= 'control'¶
-
KEY_IMAGE_FILENAME
= 'image_filename'¶
-
KEY_LABEL
= 'label'¶
-
KEY_M0
= 'm0'¶
-
KEY_SIDECAR
= 'sidecar'¶
-
KEY_SIDECAR_FILENAME
= 'sidecar_filename'¶
-
KEY_SOURCE
= 'source'¶
-
LIST_FIELDS_TO_EXCLUDE
= ['ScanningSequence', 'ComplexImageComponent', 'ImageType', 'AcquisitionVoxelSize']¶
asldro.filters.mri_signal_filter module¶
MRI Signal Filter
-
class
asldro.filters.mri_signal_filter.
MriSignalFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that generates either the Gradient Echo, Spin Echo or Inversion Recovery MRI signal.
Gradient echo is with arbitrary excitation flip angle.
Spin echo assumes perfect 90° excitation and 180° refocusing pulses.
Inversion recovery can have arbitrary inversion pulse and excitation pulse flip angles.
Inputs
Input Parameters are all keyword arguments for the
MriSignalFilter.add_inputs()
member function. They are also accessible via class constants, for exampleMriSignalFilter.KEY_T1
- Parameters
't1' (BaseImageContainer) – Longitudinal relaxation time in seconds (>=0, non-complex data)
't2' (BaseImageContainer) – Transverse relaxation time in seconds (>=0, non-complex data)
't2_star' (BaseImageContainer) – Transverse relaxation time including time-invariant magnetic field inhomogeneities, only required for gradient echo (>=0, non-complex data)
'm0' (BaseImageContainer) – Equilibrium magnetisation (non-complex data)
'mag_enc' – Added to M0 before relaxation is calculated, provides a means to encode another signal into the MRI signal (non-complex data)
'acq_contrast' (str) – Determines which signal model to use:
"ge"
(case insensitive) for Gradient Echo,"se"
(case insensitive) for Spin Echo,"ir"
(case insensitive) for Inversion Recovery.'echo_time' (float) – The echo time in seconds (>=0)
'repetition_time' (float) – The repeat time in seconds (>=0)
'excitation_flip_angle' (float, optional) – Excitation pulse flip angle in degrees. Only used when
'acq_contrast'
is"ge"
or"ir"
. Defaults to 90.0'inversion_flip_angle' (float, optional) – Inversion pulse flip angle in degrees. Only used when
acq_contrast
is"ir"
. Defaults to 180.0'inversion_time' – The inversion time in seconds. Only used when
'acq_contrast'
is"ir"
. Defaults to 1.0.'image_flavour' (str) – sets the metadata
'image_flavour'
in the output image to this.
Outputs
Once run, the filter will populate the dictionary
MriSignalFilter.outputs
with the following entries- Parameters
'image' (BaseImageContainer) – An image of the generated MRI signal. Will be of the same class as the input
'm0'
Output Image Metadata
The metadata in the output image
MriSignalFilter.outputs["image"]
is derived from the input'm0'
. If the input'mag_enc'
is present, its metadata is merged with precedence. In addition, following parameters are added:'acq_contrast'
'echo_time'
'excitation_flip_angle'
'image_flavour'
'inversion_time'
'inversion_flip_angle'
'mr_acq_type' = “3D”
Metadata entries for 'units' and 'quantity' will be removed.
'image_flavour' is obtained (in order of precedence):
If present, from the input 'image_flavour'
If present, derived from the metadata in the input 'mag_enc'
Otherwise set to "OTHER"
Signal Equations
The following equations are used to compute the MRI signal:
Gradient Echo
\[S(\text{TE},\text{TR}, \theta_1) = \sin\theta_1\cdot(\frac{M_0 \cdot(1-e^{-\frac{TR}{T_{1}}})} {1-\cos\theta_1 e^{-\frac{TR}{T_{1}}}-e^{-\frac{TR}{T_{2}}}\cdot \left(e^{-\frac{TR}{T_{1}}}-\cos\theta_1\right)} + M_{\text{enc}}) \cdot e^{-\frac{\text{TE}}{T^{*}_2}}\]Spin Echo (assuming 90° and 180° pulses)
\[S(\text{TE},\text{TR}) = (M_0 \cdot (1-e^{-\frac{\text{TR}}{T_1}}) + M_{\text{enc}}) \cdot e^{-\frac{\text{TE}}{T_2}}\]Inversion Recovery
\[\begin{split}&S(\text{TE},\text{TR}, \text{TI}, \theta_1, \theta_2) = \sin\theta_1 \cdot (\frac{M_0(1-\left(1-\cos\theta_{2}\right) e^{-\frac{TI}{T_{1}}}-\cos\theta_{2}e^{-\frac{TR}{T_{1}}})} {1-\cos\theta_{1}\cos\theta_{2}e^{-\frac{TR}{T_{1}}}}+ M_\text{enc}) \cdot e^{-\frac{TE}{T_{2}}}\\ &\theta_1 = \text{excitation pulse flip angle}\\ &\theta_2 = \text{inversion pulse flip angle}\\ &\text{TI} = \text{inversion time}\\ &\text{TR} = \text{repetition time}\\ &\text{TE} = \text{echo time}\\\end{split}\]-
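The spin echo equation above can be evaluated directly; the following is a minimal numpy sketch of that formula only (the function name, argument defaults, and example tissue values are illustrative, not part of the asldro API):

```python
import numpy as np

def spin_echo_signal(m0, t1, t2, te, tr, mag_enc=0.0):
    """Spin echo signal assuming 90 degree and 180 degree pulses:
    S(TE, TR) = (M0 * (1 - exp(-TR/T1)) + M_enc) * exp(-TE/T2).
    All times are in seconds; array inputs broadcast element-wise."""
    m0 = np.asarray(m0, dtype=float)
    t1 = np.asarray(t1, dtype=float)
    t2 = np.asarray(t2, dtype=float)
    return (m0 * (1.0 - np.exp(-tr / t1)) + mag_enc) * np.exp(-te / t2)

# Illustrative grey-matter-like values: T1 = 1.33 s, T2 = 0.08 s,
# TE = 10 ms, TR = 5 s, equilibrium magnetisation normalised to 1
signal = spin_echo_signal(m0=1.0, t1=1.33, t2=0.08, te=0.01, tr=5.0)
```

With TE = 0 and TR much longer than T1 the signal recovers to M0, which is a quick sanity check on the implementation.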
CONTRAST_GE
= 'ge'¶
-
CONTRAST_IR
= 'ir'¶
-
CONTRAST_SE
= 'se'¶
-
KEY_ACQ_CONTRAST
= 'acq_contrast'¶
-
KEY_ACQ_TYPE
= 'mr_acq_type'¶
-
KEY_BACKGROUND_SUPPRESSION
= 'background_suppression'¶
-
KEY_ECHO_TIME
= 'echo_time'¶
-
KEY_EXCITATION_FLIP_ANGLE
= 'excitation_flip_angle'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_IMAGE_FLAVOUR
= 'image_flavour'¶
-
KEY_INVERSION_FLIP_ANGLE
= 'inversion_flip_angle'¶
-
KEY_INVERSION_TIME
= 'inversion_time'¶
-
KEY_M0
= 'm0'¶
-
KEY_MAG_ENC
= 'mag_enc'¶
-
KEY_REPETITION_TIME
= 'repetition_time'¶
-
KEY_T1
= 't1'¶
-
KEY_T2
= 't2'¶
-
KEY_T2_STAR
= 't2_star'¶
asldro.filters.nifti_loader module¶
NIFTI file loader filter
-
class
asldro.filters.nifti_loader.
NiftiLoaderFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter for loading a NIFTI image from a file.
Must have a single string input named 'filename'.
Creates a single image container as an output named 'image'.
asldro.filters.phase_magnitude_filter module¶
PhaseMagnitudeFilter Class
-
class
asldro.filters.phase_magnitude_filter.
PhaseMagnitudeFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter block that takes image data and converts it into its phase and magnitude components. Typically this will be used after an AcquireMriImageFilter, whose output contains real and imaginary components; however it may also be used with image data of type:
REAL_IMAGE_TYPE: the phase is 0° where the image value is positive, and 180° where it is negative.
IMAGINARY_IMAGE_TYPE: the phase is 90° where the image value is positive, and 270° where it is negative.
MAGNITUDE_IMAGE_TYPE: the phase cannot be defined, so the output phase image is set to None.
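The underlying conversion can be sketched with numpy; this illustrates the mathematics only, not the filter's actual implementation:

```python
import numpy as np

# Complex image data, e.g. the real + imaginary output of an acquisition
data = np.array([1 + 1j, -2 + 0j, 0 - 3j])

magnitude = np.abs(data)   # voxel-wise magnitude
phase = np.angle(data)     # voxel-wise phase in radians, range (-pi, pi]

# For a purely real image the phase is 0 where the value is positive
# and pi (180 degrees) where it is negative
real_phase = np.angle(np.array([2.0, -2.0]))
```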
Inputs
Input Parameters are all keyword arguments for the
PhaseMagnitudeFilter.add_inputs()
member function. They are also accessible via class constants, for examplePhaseMagnitudeFilter.KEY_IMAGE
- Parameters
'image' (BaseImageContainer) – The input data image, cannot be a phase image
Outputs
- Parameters
'phase' (BaseImageContainer) – Phase image (will have
image_type==PHASE_IMAGE_TYPE
)'magnitude' (BaseImageContainer) – Magnitude image (will have
image_type==MAGNITUDE_IMAGE_TYPE
)
-
KEY_IMAGE
= 'image'¶
-
KEY_MAGNITUDE
= 'magnitude'¶
-
KEY_PHASE
= 'phase'¶
asldro.filters.resample_filter module¶
Resample Filter
-
class
asldro.filters.resample_filter.
ResampleFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that can resample an image based on a target shape and affine. Note that nilearn actually applies the inverse of the target affine.
Inputs
Input Parameters are all keyword arguments for the
ResampleFilter.add_inputs()
member function. They are also accessible via class constants, for exampleResampleFilter.KEY_AFFINE
- Parameters
'image' (BaseImageContainer) – Image to resample
'affine' (np.ndarray(4)) – Image is resampled according to this 4x4 affine matrix
'shape' (Tuple[int, int, int]) – Image is resampled according to this new shape.
'interpolation' (str, optional) –
Defines the interpolation method:
'continuous' – order 3 spline interpolation (default)
'linear' – order 1 linear interpolation
'nearest' – nearest neighbour interpolation
Outputs
Once run, the filter will populate the dictionary
ResampleFilter.outputs
with the following entries:- Parameters
'image' (BaseImageContainer) – The input image, resampled in accordance with the input shape and affine.
The metadata property of the
ResampleFilter.outputs["image"]
is updated with the fieldvoxel_size
, corresponding to the size of each voxel.-
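The idea of resampling to a target shape can be sketched with a pure-numpy nearest-neighbour version; this is a conceptual illustration only, since ResampleFilter itself delegates to nilearn and also honours the target affine:

```python
import numpy as np

def resample_nearest(image, target_shape):
    """Nearest-neighbour resample of a 3D array to target_shape.
    Each target index is mapped back to the nearest source index."""
    src = np.asarray(image)
    idx = [
        np.clip(np.round(np.arange(n) * s / n).astype(int), 0, s - 1)
        for n, s in zip(target_shape, src.shape)
    ]
    # np.ix_ builds an open mesh so the three index vectors broadcast
    return src[np.ix_(*idx)]

vol = np.arange(8).reshape(2, 2, 2)
upsampled = resample_nearest(vol, (4, 4, 4))
```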
CONTINUOUS
= 'continuous'¶
-
INTERPOLATION_LIST
= ['continuous', 'linear', 'nearest']¶
-
KEY_AFFINE
= 'affine'¶
-
KEY_IMAGE
= 'image'¶
-
KEY_INTERPOLATION
= 'interpolation'¶
-
KEY_SHAPE
= 'shape'¶
-
LINEAR
= 'linear'¶
-
NEAREST
= 'nearest'¶
asldro.filters.scale_offset_filter module¶
ScaleOffsetFilter Class
-
class
asldro.filters.scale_offset_filter.
ScaleOffsetFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that will take image data and apply a scale and/or offset according to the equation:
\[I_{output} = I_{input} \cdot m + b\]where m is the scale and b is the offset (the scale is applied first, then the offset)
Inputs
Input Parameters are all keyword arguments for the
ScaleOffsetFilter.add_inputs()
member function. They are also accessible via class constants, for exampleScaleOffsetFilter.KEY_IMAGE
- Parameters
'image' (BaseImageContainer) – The input image
'scale' (float / int, optional) – a scale to apply
'offset' (float / int, optional) – an offset to apply
Outputs
- Parameters
'image' (BaseImageContainer) – The output image
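The operation itself is a single element-wise expression; for example, applied to a numpy array rather than a BaseImageContainer:

```python
import numpy as np

# ScaleOffsetFilter applies I_out = I_in * m + b (scale first, then offset)
image = np.array([0.0, 1.0, 2.0])
scale, offset = 10.0, 3.0

output = image * scale + offset
```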
-
KEY_IMAGE
= 'image'¶
-
KEY_OFFSET
= 'offset'¶
-
KEY_SCALE
= 'scale'¶
asldro.filters.transform_resample_image_filter module¶
Transform resample image filter
-
class
asldro.filters.transform_resample_image_filter.
TransformResampleImageFilter
¶ Bases:
asldro.filters.basefilter.BaseFilter
A filter that transforms and resamples an image in world space. The field of view (FOV) of the resampled image is the same as the FOV of the input image.
Conventions are for RAS+ coordinate systems only
Inputs
Input Parameters are all keyword arguments for the
TransformResampleImageFilter.add_inputs()
member function. They are also accessible via class constants, for exampleTransformResampleImageFilter.KEY_ROTATION
- Parameters
'image' (BaseImageContainer) – The input image
'translation' (Tuple[float, float, float], optional) – \([\Delta r_x,\Delta r_y,\Delta r_z]\) amount to translate along the x, y and z axes, defaults to (0, 0, 0)
'rotation' (Tuple[float, float, float], optional) – \([\theta_x,\theta_y,\theta_z]\) angles to rotate about the x, y and z axes in degrees (-180 to 180 degrees inclusive), defaults to (0, 0, 0)
'rotation_origin' (Tuple[float, float, float], optional) – \([x_r,y_r,z_r]\) coordinates of the point to perform rotations about, defaults to (0, 0, 0)
'target_shape' (Tuple[int, int, int]) – \([L_t,M_t,N_t]\) target shape for the resampled image
'interpolation' (str, optional) –
Defines the interpolation method for the resampling:
'continuous' – order 3 spline interpolation (default method for ResampleFilter)
'linear' – order 1 linear interpolation
'nearest' – nearest neighbour interpolation
Outputs
Once run, the filter will populate the dictionary
TransformResampleImageFilter.outputs
with the following entries- Parameters
'image' (BaseImageContainer) – The input image, resampled in accordance with the specified shape and applied world-space transformation.
The metadata property of the
TransformResampleImageFilter.outputs["image"]
is updated with the fieldvoxel_size
, corresponding to the size of each voxel.The output image is resampled according to the target affine:
\[\begin{split}&\mathbf{A}=(\mathbf{T(\Delta r_{\text{im}})}\mathbf{S}\mathbf{T(\Delta r)} \mathbf{T(r_0)}\mathbf{R}\mathbf{T(r_0)}^{-1})^{-1}\\ \text{where,}&\\ & \mathbf{T(r_0)} = \mathbf{T}(x_r, y_r, z_r)= \text{Affine for translation to rotation centre}\\ & \mathbf{T(\Delta r)} = \mathbf{T}(\Delta r_x, \Delta r_y, \Delta r_z)= \text{Affine for translation of image in world space}\\ & \mathbf{T(\Delta r_{\text{im}})} = \mathbf{T}(x_0/s_x,y_0/s_y,z_0/s_z)^{-1} =\text{Affine for translation to the input image origin} \\ &\mathbf{T} = \begin{pmatrix} 1 & 0 & 0 & \Delta x \\ 0 & 1 & 0 & \Delta y \\ 0 & 0 & 1 & \Delta z \\ 0 & 0 & 0 & 1 \end{pmatrix}=\text{translation matrix}\\ &\mathbf{S} = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}=\text{scaling matrix}\\ & [s_x, s_y, s_z] = \frac{[L_t,M_t,N_t]}{[v_x, v_y, v_z]\cdot[L_i,M_i,N_i]}\\ & \text{divisions and multiplications are element-wise (Hadamard)}\\ & [L_i, M_i, N_i] = \text{shape of the input image}\\ & [v_x, v_y, v_z] = \text{voxel dimensions of the input image}\\ & [x_0, y_0, z_0] = \text{input image origin coordinates (vector part of input image's affine)}\\ &\mathbf{R} = \mathbf{R_z} \mathbf{R_y} \mathbf{R_x} = \text{Affine for rotation of image in world space}\\ &\mathbf{R_x} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & \cos{\theta_x}& -\sin{\theta_x} & 0\\ 0 & \sin{\theta_x} & \cos{\theta_x}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about x matrix}\\ &\mathbf{R_y} = \begin{pmatrix} \cos{\theta_y} & 0 & \sin{\theta_y} & 0\\ 0 & 1 & 0 & 0\\ -\sin{\theta_y} & 0 & \cos{\theta_y}& 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about y matrix}\\ &\mathbf{R_z} = \begin{pmatrix} \cos{\theta_z}& -\sin{\theta_z} & 0 & 0\\ \sin{\theta_z} & \cos{\theta_z}& 0 &0\\ 0& 0& 1 & 0\\ 0& 0 & 0& 1 \end{pmatrix}= \text{rotation about z matrix}\\\end{split}\]After resampling, the output image's affine is modified to only contain the scaling:
\[\mathbf{A_{\text{new}}} = (\mathbf{T(\Delta r_{\text{im}})}\mathbf{S})^{-1}\]-
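The building blocks of the target affine, homogeneous translation matrices and a rotation performed about an arbitrary point via \(\mathbf{T(r_0)}\mathbf{R}\mathbf{T(r_0)}^{-1}\), can be sketched in numpy. The helper names below are illustrative, not the filter's internals:

```python
import numpy as np

def translation(dx, dy, dz):
    """4x4 homogeneous translation matrix T(dx, dy, dz)."""
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    return t

def rotation_z(theta_deg):
    """4x4 homogeneous rotation about the z axis, R_z."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    r = np.eye(4)
    r[0, 0], r[0, 1], r[1, 0], r[1, 1] = c, -s, s, c
    return r

# Rotate 90 degrees about the point (1, 0, 0): translate the rotation
# origin to the world origin, rotate, then translate back.
r0 = translation(1, 0, 0)
rot_about_point = r0 @ rotation_z(90.0) @ np.linalg.inv(r0)

# The point (2, 0, 0) lies one unit from the rotation origin,
# so it rotates to (1, 1, 0)
point = np.array([2.0, 0.0, 0.0, 1.0])
moved = rot_about_point @ point
```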
INTERPOLATION_LIST
= ['continuous', 'linear', 'nearest']¶
-
KEY_IMAGE
= 'image'¶
-
KEY_INTERPOLATION
= 'interpolation'¶
-
KEY_ROTATION
= 'rotation'¶
-
KEY_ROTATION_ORIGIN
= 'rotation_origin'¶
-
KEY_TARGET_SHAPE
= 'target_shape'¶
-
KEY_TRANSLATION
= 'translation'¶
-
VOXEL_SIZE
= 'voxel_size'¶