postprocess

A postprocess modifies beamformed data. This includes adaptive beamforming, compounding, and image enhancement techniques.

Input: beamformed_data -> Output: beamformed_data

Compounding

class coherent_compounding

COHERENT_COMPOUNDING Sum complex beamformed data across transmit/receive dimensions.

Coherent compounding sums the complex beamformed data across the specified dimension(s) to improve SNR while preserving phase information. If window_size is set, a sliding-window sum is applied along the transmit dimension.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

dimension      Which dimension(s) to sum over (transmit, receive, or both)
window_size    If set, sliding-window sum along transmit dimension []

Example:

obj = postprocess.coherent_compounding();

See also POSTPROCESS, INCOHERENT_COMPOUNDING, DIMENSION
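As a rough illustration of what the class computes (a sketch on a plain array, not the USTB API; the array sizes and the use of movsum are assumptions for illustration only):

```matlab
% Coherent compounding sums complex beamformed data across the transmit
% dimension, preserving phase information.
data = randn(128, 64, 8) + 1i*randn(128, 64, 8);  % [pixels x receive x transmit]
compounded = sum(data, 3);                        % sum over all transmits

% Sliding-window variant (cf. the window_size property): one partial sum
% per window of 3 consecutive transmits.
win = 3;
sliding = movsum(data, win, 3);                   % moving sum along dim 3
```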

authors: Alfonso Rodriguez-Molares (alfonso.r.molares@ntnu.no)

Ole Marius Hoel Rindal <olemarius@olemarius.net>

Fabrice Prieur

Stefano Fiorentini <stefano.fiorentini@ntnu.no>

$Last updated: 2023/09/01$

Property Summary
dimension

Which “dimension” to sum over

window_size

If specified, do moving sum along transmit dimension

Method Summary
go(h)

check if we can skip calculation

class incoherent_compounding

INCOHERENT_COMPOUNDING Sum magnitude of beamformed data across dimensions.

Incoherent compounding sums the absolute value of beamformed data across the specified dimension(s), reducing speckle at the cost of phase information.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

dimension Which dimension(s) to sum over (transmit, receive, or both)

Example:

obj = postprocess.incoherent_compounding();

See also POSTPROCESS, COHERENT_COMPOUNDING, DIMENSION

authors: Alfonso Rodriguez-Molares (alfonso.r.molares@ntnu.no)

Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/05/11$

Property Summary
dimension

Which “dimension” to sum over

Method Summary
go(h)

check if we can skip calculation

Adaptive Beamforming

class coherence_factor

COHERENCE_FACTOR Mallart-Fink coherence factor adaptive beamforming.

Applies coherence factor weighting to beamformed data to suppress off-axis echoes and improve contrast. Coherence is computed on transmit, receive, or both dimensions.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

CF                         Beamformed data with computed coherence factor
active_element_criterium   Threshold for active element decision []
dimension                  Dimension(s) for coherence (transmit, receive, or both)

Example:

obj = postprocess.coherence_factor();

See also POSTPROCESS, PHASE_COHERENCE_FACTOR, GENERALIZED_COHERENCE_FACTOR, DIMENSION

References:

Mallart and Fink, “Adaptive focusing in scattering media through sound-speed inhomogeneities: The van Cittert Zernike approach and focusing criterion”, J. Acoust. Soc. Am., vol. 96, no. 6, pp. 3721-3732, 1994
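The Mallart-Fink coherence factor is the ratio of coherent to incoherent energy across the aperture. A minimal sketch on a plain [pixels x channels] array (illustration of the formula only, not the USTB implementation; data sizes are arbitrary):

```matlab
% Delayed channel data per pixel (toy data).
s  = randn(100, 64) + 1i*randn(100, 64);            % [pixels x channels]
N  = size(s, 2);
% CF = |sum(s)|^2 / (N * sum(|s|^2)), in [0, 1] by Cauchy-Schwarz.
CF = abs(sum(s, 2)).^2 ./ (N * sum(abs(s).^2, 2));
weighted = CF .* sum(s, 2);                         % CF-weighted DAS output
```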

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

$Last updated: 2017/09/12$

Property Summary
CF

BEAMFORMED_DATA class with the computed coherence factor

active_element_criterium

value to decide whether an element is used or not. This value depends on the SNR so it must be adjusted on a case-by-case basis.

dimension

dimension class that specifies whether the process will run only on transmit, receive, or both.

Method Summary
go(h)

check if we can skip calculation

class generalized_coherence_factor

GENERALIZED_COHERENCE_FACTOR Lin-Li generalized coherence factor beamforming.

Uses frequency-domain low-frequency energy ratio as a coherence factor to weight beamformed data. Computes coherence on transmit, receive, or both.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

GCF                        Beamformed data with computed coherence factor
active_element_criterium   Threshold for active element decision []
dimension                  Dimension(s) for coherence (transmit, receive, or both)
M0                         Low-frequency region size []

Example:

obj = postprocess.generalized_coherence_factor();

See also POSTPROCESS, COHERENCE_FACTOR, GENERALIZED_COHERENCE_FACTOR_OMHR, DIMENSION

References:

Li and Li, “Adaptive imaging using the generalized coherence factor,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 50, no. 2, pp. 128-141, 2003
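The generalized coherence factor is the ratio of low-frequency energy in the aperture-domain spectrum to the total energy. A sketch for a single pixel (illustration of the idea, not the USTB implementation; the M0 value is arbitrary):

```matlab
% Delayed channel data across the aperture for one pixel (toy data).
s   = randn(1, 64) + 1i*randn(1, 64);
S   = fftshift(fft(s));                 % spectrum across the aperture
M0  = 2;                                % half-width of the low-frequency region
c   = ceil((numel(S)+1)/2);             % DC bin after fftshift
% GCF = energy near DC / total energy, in [0, 1].
GCF = sum(abs(S(c-M0:c+M0)).^2) / sum(abs(S).^2);
```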

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

$Last updated: 2017/09/12$

Property Summary
GCF

BEAMFORMED_DATA class with the computed coherence factor

M0

Size of the low-frequency region []

active_element_criterium

value to decide whether an element is used or not. This value depends on the SNR so it must be adjusted on a case-by-case basis.

dimension

dimension class that specifies whether the process will run only on transmit, receive, or both.

Method Summary
go(h)

check if we can skip calculation

class generalized_coherence_factor_OMHR

GENERALIZED_COHERENCE_FACTOR_OMHR Generalized coherence factor (pixel-wise FFT).

Alternative implementation of the generalized coherence factor using pixel-wise FFT over active elements. Requires channel_data for apodization. Use dimension to select transmit or receive.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

M0             Low-frequency region size []
channel_data   Channel data (required for probe/apodization)
GCF            Beamformed data with computed coherence factor
dimension      Dimension(s) for coherence (transmit, receive, or both)

Example:

obj = postprocess.generalized_coherence_factor_OMHR();

See also POSTPROCESS, GENERALIZED_COHERENCE_FACTOR, COHERENCE_FACTOR, DIMENSION

References:

Li and Li, “Adaptive imaging using the generalized coherence factor,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 50, no. 2, pp. 128-141, 2003

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Andreas Austeng <AndreasAusteng@ifi.uio.no>

$Last updated: 2017/05/05$

Property Summary
GCF

BEAMFORMED_DATA class with the computed generalized coherence factor

M0

Low frequency region

channel_data

UFF.CHANNEL_DATA object; needed to access the probe geometry for apodization

dimension

dimension class that specifies whether the process will run only on transmit, receive, or both.

Method Summary
go(h)

check if we can skip calculation

class phase_coherence_factor

PHASE_COHERENCE_FACTOR Camacho-Fritsch phase coherence factor beamforming.

Uses phase statistics to compute a coherence factor that weights beamformed data. Computes coherence on transmit, receive, or both dimensions.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

center_frequency   Center frequency of RF signals [Hz]
sound_speed        Reference sound speed [m/s]
gamma              Mixing ratio []
sigma_0            Reference phase value []
FCA                Absolute phase coherence factor (beamformed_data)
FCC                Complex phase coherence factor (beamformed_data)
dimension          Dimension(s) for coherence (transmit, receive, or both)

Example:

obj = postprocess.phase_coherence_factor();

See also POSTPROCESS, COHERENCE_FACTOR, GENERALIZED_COHERENCE_FACTOR, DIMENSION

References:

Camacho and Fritsch, “Phase coherence imaging of grained materials,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 58, no. 5, pp. 1006-1015, 2011
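The phase coherence factor penalizes the spread of the instantaneous phase across the aperture; sigma_0 = pi/sqrt(3) is the standard deviation of a uniformly distributed phase. A simplified single-pixel sketch (the full method also handles phase wrap-around via an auxiliary phase, which is omitted here; values are arbitrary):

```matlab
% Delayed channel data for one pixel (toy data).
s       = randn(1, 64) + 1i*randn(1, 64);
gamma   = 1;                               % mixing ratio
sigma_0 = pi/sqrt(3);                      % reference phase value
sf      = std(angle(s));                   % phase dispersion across channels
PCF     = max(0, 1 - (gamma/sigma_0)*sf);  % in [0, 1]
```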

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

$Last updated: 2017/09/12$

Property Summary
FCA

BEAMFORMED_DATA class with the computed absolute phase coherence factor

FCC

BEAMFORMED_DATA class with the computed complex phase coherence factor

center_frequency

center frequency of RF signals [Hz]

dimension

Which “dimension” to sum over

gamma

mixing ratio

sigma_0

reference phase value

sound_speed

reference sound speed [m/s]

Method Summary
go(h)

check if we can skip calculation

class capon_minimum_variance

CAPON_MINIMUM_VARIANCE Capon (minimum variance) adaptive beamforming.

Applies Capon minimum-variance beamforming to beamformed data for improved resolution and sidelobe suppression. Requires channel_data and scan. Implementation emphasizes clarity over speed.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

active_element_criterium   Threshold for active element decision []
L_elements                 Subarray size []
K_in_lambda                Temporal averaging factor [lambda]
regCoef                    Regularization (diagonal loading) factor []
doForwardBackward          Forward-backward averaging (0 or 1)
dimension                  Dimension(s) (transmit, receive, or both)
channel_data               Channel data (required)
scan                       Scan geometry (required)

Example:

obj = postprocess.capon_minimum_variance();

See also POSTPROCESS, EIGENSPACE_BASED_MINIMUM_VARIANCE, DIMENSION

References:

Capon, “High-resolution frequency-wavenumber spectrum analysis,” Proc. IEEE, vol. 57, no. 8, pp. 1408-1418, 1969
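Capon beamforming minimizes the output power subject to a distortionless constraint in the steering direction, giving w = R^-1 a / (a' R^-1 a). A single-pixel sketch on toy data (illustration of the math only, not the USTB implementation; the loading factor and sizes are assumptions):

```matlab
% Delayed channel data: [channels x temporal samples] around the pixel.
x  = randn(64, 20) + 1i*randn(64, 20);
R  = (x*x')/size(x,2);                             % sample spatial covariance
R  = R + 1e-2*trace(R)/size(R,1)*eye(size(R,1));   % diagonal loading (cf. regCoef)
a  = ones(size(R,1), 1);                           % steering vector after delays
w  = (R\a) / (a'*(R\a));                           % minimum-variance weights
y  = w' * mean(x, 2);                              % beamformed pixel value
```

The eigenspace-based variant additionally projects w onto the signal subspace of R before applying it.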

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/05/02$

Property Summary
K_in_lambda

temporal averaging factor

L_elements

subarray size

active_element_criterium

value to decide whether an element is used or not

channel_data

Channel data

dimension

dimension class that specifies whether the process will run only on transmit, receive, or both.

doForwardBackward

forward backward averaging

regCoef

regularization factor

Method Summary
go(h)

check if we can skip calculation

class eigenspace_based_minimum_variance

EIGENSPACE_BASED_MINIMUM_VARIANCE Eigenspace-based minimum variance beamforming.

Projects Capon weights onto the signal subspace to improve robustness. Uses gamma to separate signal and noise subspaces. Requires channel_data and scan. Implementation emphasizes clarity over speed.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

active_element_criterium   Threshold for active element decision []
L_elements                 Subarray size []
K_in_lambda                Temporal averaging factor [lambda]
regCoef                    Regularization (diagonal loading) factor []
doForwardBackward          Forward-backward averaging (0 or 1)
dimension                  Dimension(s) (transmit, receive, or both)
gamma                      Signal subspace eigenvalue threshold []
channel_data               Channel data (required)
scan                       Scan geometry (required)

Example:

obj = postprocess.eigenspace_based_minimum_variance();

See also POSTPROCESS, CAPON_MINIMUM_VARIANCE, DIMENSION

References:

Mohammadzadeh Asl, B. and Mahloojifar, A., “Eigenspace-based minimum variance beamforming applied to medical ultrasound imaging,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 57, no. 11, pp. 2381-2390, 2010. https://ieeexplore.ieee.org/document/5611687

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/05/02$

Property Summary
active_element_criterium

value to decide whether an element is used or not

channel_data

Channel data

Method Summary
go(h)

check if we can skip calculation

class delay_multiply_and_sum

DELAY_MULTIPLY_AND_SUM Delay Multiply And Sum beamforming algorithm

Implements the Delay Multiply And Sum (DMAS) beamforming algorithm. Computes coherence on transmit, receive, or both dimensions.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

dimension      Dimension to compute coherence on (transmit/receive/both)
channel_data   uff.channel_data object for parameters
filter_freqs   Optional [4x1] passband/stopband edges [Hz]

Example:

obj = postprocess.delay_multiply_and_sum();

See also POSTPROCESS, SIMPLIFIED_DELAY_MULTIPLY_AND_SUM, DIMENSION

References:

Matrone, G., Savoia, A. S., & Magenes, G. (2015). The Delay Multiply and Sum Beamforming Algorithm in Ultrasound B-Mode Medical Imaging, IEEE TMI, 34(4), 940-949.
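DMAS combinatorially multiplies pairs of delayed channels and sums the signed square roots of the products, keeping the result dimensionally consistent with the input. A single-pixel sketch (illustration of the algorithm, not the USTB implementation; data are arbitrary):

```matlab
% Delayed RF samples for one pixel across the aperture (toy data).
s = randn(1, 16);
N = numel(s);
y = 0;
for i = 1:N-1
    for j = i+1:N
        p = s(i)*s(j);                    % pairwise product
        y = y + sign(p)*sqrt(abs(p));     % signed square root keeps units of s
    end
end
% The DMAS output is then band-pass filtered around twice the center
% frequency (cf. filter_freqs) before envelope detection.
```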

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/09/10$

Property Summary
channel_data

UFF.CHANNEL_DATA class

filter_freqs

optional: four increasing numbers specifying the passband and stopband edges of the bandpass filter

Method Summary
go(h)

check if we can skip calculation

class simplified_delay_multiply_and_sum

SIMPLIFIED_DELAY_MULTIPLY_AND_SUM Simplified Delay Multiply And Sum beamforming

Fast simplified implementation of the Delay Multiply And Sum (DMAS) beamforming algorithm for real-time applications.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

dimension      Dimension to compute coherence on (transmit/receive/both)
channel_data   uff.channel_data object for parameters
filter_freqs   Optional [4x1] passband/stopband edges [Hz]

Example:

obj = postprocess.simplified_delay_multiply_and_sum();

See also POSTPROCESS, DELAY_MULTIPLY_AND_SUM, DIMENSION

References:

Jeon, S., Park, E. Y., Choi, W., et al. (2019). Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans. Photoacoustics, 15, 100136.

implementers: Sufayan Ikabal Mulani <sufayanm@ifi.uio.no> and Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2023/11/08$

Property Summary
channel_data

UFF.CHANNEL_DATA class

filter_freqs

optional: four increasing numbers specifying the passband and stopband edges of the bandpass filter

Method Summary
go(h)

check if we can skip calculation

class short_lag_spatial_coherence

SHORT_LAG_SPATIAL_COHERENCE Short-lag spatial coherence imaging

Implements the Short-Lag Spatial Coherence (SLSC) algorithm for coherence-based ultrasound imaging.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

active_element_criterium   Threshold for element activation
K_in_lambda                Axial kernel length [lambda]
dimension                  Transmit, receive, or both
maxM                       Maximum lag value
slsc_values                Computed SLSC values
channel_data               uff.channel_data object

Example:

obj = postprocess.short_lag_spatial_coherence();

See also POSTPROCESS, DIMENSION, UFF.APODIZATION

References (must be cited when using this process):

Lediju, M. A., Trahey, G. E., Byram, B. C., & Dahl, J. J. (2011). “Short-lag spatial coherence of backscattered echoes: Imaging characteristics.” IEEE TUFFC, 58(7), 1377-1388. https://doi.org/10.1109/TUFFC.2011.1957

For cardiac imaging, also cite: Lediju Bell, M. A., Goswami, R., Kisslo, J. A., Dahl, J. J., & Trahey, G. E. (2013). “Short-Lag Spatial Coherence (SLSC) Imaging of Cardiac Ultrasound Data: Initial Clinical Results.” Ultrasound Med. Biol., 39(10), 1861-1874. https://doi.org/10.1016/j.ultrasmedbio.2013.03.029

Citation policy: http://www.ustb.no/citation/
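SLSC sums the normalized spatial correlation between channel pairs at lags 1 through maxM, computed over a short axial kernel. A simplified sketch for one pixel (the published method normalizes each channel pair separately and averages over pairs; this sketch pools the normalization for brevity and is not the USTB implementation):

```matlab
% Delayed channel data: [axial kernel samples x channels] (toy data).
s = randn(32, 64);
M = 10;                                    % maxM: largest lag included
R = zeros(1, M);
for m = 1:M
    num  = sum(sum(s(:, 1:end-m) .* s(:, m+1:end)));
    den  = sqrt(sum(sum(s(:, 1:end-m).^2)) * sum(sum(s(:, m+1:end).^2)));
    R(m) = num / den;                      % pooled correlation at lag m
end
slsc = sum(R);                             % SLSC pixel value
```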

$Last updated: 2017/09/10$

Property Summary
active_element_criterium

value to decide whether an element is used or not

dimension

dimension class that specifies whether the process will run only on transmit, receive, or both.

Method Summary
go(h)

check if we can skip calculation

makelagmat(h, numrows, numcols, maxlag)

MAKELAGMAT Generates a cell array containing lag information. This function builds a (NUMROWS*NUMCOLS) by (NUMROWS*NUMCOLS) matrix holding the lag number of each element with respect to every other element, then returns a cell array of indices into that matrix grouped by lag: LAG{1} contains the indices of all entries equal to 1, LAG{2} those equal to 2, and so on.

LAG = MAKELAGMAT(NUMROWS, NUMCOLS, MAXLAG) will return a cell array of dimensions (MAXLAG, 1). Each cell will contain the indices from the lag matrix that corresponds to the cell number.

LAG = MAKELAGMAT(NUMROWS, NUMCOLS) will assume that MAXLAG is set to the maximum possible value.

short_lag_spatial_coherence_implementation(h, data_cube)

Calculate the Normalized Spatial Coherence across the receive aperture

Displacement Estimation

class autocorrelation_displacement_estimation

AUTOCORRELATION_DISPLACEMENT_ESTIMATION Estimates tissue displacement via autocorrelation.

Time-domain displacement estimation using the autocorrelation method, adapted from Doppler velocity estimation. Uses fixed center frequency. For depth-dependent center frequency estimation, use modified_autocorrelation_displacement_estimation.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

z_gate         Axial gate size [samples]
x_gate         Lateral gate size [samples]
packet_size    Number of frames per estimation [frames]
channel_data   uff.channel_data for sound speed and center frequency

Example:

obj = postprocess.autocorrelation_displacement_estimation();

See also POSTPROCESS, MODIFIED_AUTOCORRELATION_DISPLACEMENT_ESTIMATION

References:

Barber et al., IEEE Trans. Biomed. Eng., 1985

Kasai et al., IEEE Trans. Sonics Ultrason., 1985

Angelsen & Kristoffersen, IEEE Trans. Biomed. Eng., 1983

Børstad, “Comparison of three ultrasound velocity estimators”, NTNU, 2010
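The autocorrelation (Kasai) method recovers displacement from the phase of the lag-1 slow-time autocorrelation of the IQ signal: a phase shift of 4*pi*f0*d/c per frame corresponds to an axial displacement d. A single-pixel sketch (illustration of the estimator, not the USTB implementation; f0 and the toy phase ramp are assumptions):

```matlab
f0 = 5e6;  c = 1540;                        % center frequency [Hz], sound speed [m/s]
iq = exp(1i*2*pi*0.1*(0:9));                % toy IQ samples over frames, one pixel
R1 = mean(iq(2:end) .* conj(iq(1:end-1)));  % lag-1 autocorrelation (slow time)
displacement = c * angle(R1) / (4*pi*f0);   % axial displacement per frame [m]
```

The modified (Loupas) variant additionally estimates the center frequency at each depth from the fast-time autocorrelation instead of using the fixed f0.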

Authors: Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/08/15$

Method Summary
go(h)

check if we can skip calculation

class modified_autocorrelation_displacement_estimation

MODIFIED_AUTOCORRELATION_DISPLACEMENT_ESTIMATION Displacement estimation with depth-dependent center frequency.

Time-domain displacement estimation using the 2D autocorrelation method (Loupas et al.). Estimates center frequency at each depth to compensate for frequency-dependent attenuation, improving accuracy over the standard autocorrelation method.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

z_gate                       Axial gate size [samples]
x_gate                       Lateral gate size [samples]
packet_size                  Number of frames per estimation [frames]
estimated_center_frequency   Estimated center frequency per pixel [Hz]
channel_data                 uff.channel_data for sound speed

Example:

obj = postprocess.modified_autocorrelation_displacement_estimation();

See also POSTPROCESS, AUTOCORRELATION_DISPLACEMENT_ESTIMATION

References:

Loupas et al., IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 1995

Barber et al., IEEE Trans. Biomed. Eng., 1985

Kasai et al., IEEE Trans. Sonics Ultrason., 1985

Angelsen & Kristoffersen, IEEE Trans. Biomed. Eng., 1983

Børstad, “Comparison of three ultrasound velocity estimators”, NTNU, 2010

Authors: Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/08/15$

Method Summary
go(h)

check if we can skip calculation

Image Enhancement

class non_local_means_filtering

NON_LOCAL_MEANS_FILTERING Non-local means denoising filter

Fast non-local means denoising based on distances in feature space. Supports Gaussian and Rician noise models.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

dimension              Transmit, receive, or both
run_on_logcompressed   Apply to log-compressed data
sigma                  Noise power (std dev)
beta                   Filtering strength (0.8-1.2)
rs                     Search radii [3x1]
rc                     Comparison radii [3x1]
ps                     Preselection threshold
flag                   ‘gaussian’ or ‘rician’
block                  Vector (1) or loop (0) computation

Example:

obj = postprocess.non_local_means_filtering();

See also POSTPROCESS, WIENER, MEDIAN

References:

Tristan-Vega, A., et al. (2012). Efficient and robust nonlocal means denoising of MR data based on salient features matching. CMPB, 105, 131-144.

implementers: Antonio Tristan-Vega <atriveg@lpi.tel.uva.es>

Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2020/01/04$

Property Summary
beta

Filtering parameter. The larger its value, the more aggressive the filtering; the smaller its value, the better details are preserved. Best performance in the range 0.8 to 1.2.

block

Flag selecting whether the weights within the search window are computed with a loop (0, the default) or with vector operations (1).

dimension

Dimension class that specifies whether the process will run on transmit, receive, or both.

flag

Must be either ‘gaussian’ (the default) or ‘rician’. In the Rician case, the weighted average is performed over the squared pixels so that the estimate becomes unbiased.

ps

The preselection threshold. All those pixels in the search window whose normalized distance to the center pixel is larger than this value are removed from the weighted average.

rc

A 3x1 vector with the comparison radii

rs

A 3x1 vector with the search radii

sigma

The noise power in the input image. In the Gaussian case, this is the standard deviation of the noise at each pixel; in the Rician case, the standard deviation of the underlying Gaussian real and imaginary parts.

Method Summary
ComputeLocalFeatures3D(h, I, radii)

Computes the local mean value and the local gradients of a 3D image.

I:       the input image
radii:   a 3x1 vector of integers with the size of the neighborhood used to
         compute the local values. Gaussian windows are generated for each
         dimension as gausswin(2*radii(d)+1). If not provided, [x=1;y=1;z=1]
         will be assumed.
mu:      a 3D image, the same size as I, with the local mean
Gx:      a 3D image, the same size as I, with the gradient in the ‘x’
         direction (dimension 2 in MATLAB)
Gy:      a 3D image, the same size as I, with the gradient in the ‘y’
         direction (dimension 1 in MATLAB)
Gz:      a 3D image, the same size as I, with the gradient in the ‘z’
         direction (dimension 3 in MATLAB)
factors: a 3x1 vector with the factors to be applied to each gradient
         difference to estimate patch distances
hcorr:   the effective reduction in the amount of noise in the distances
         between patches because of the fitting

FastNonLocalMeans3D(h, V, sigma, beta, rs, rc, ps, flag, block)

A fast implementation of the non-local means based on distances in the features space.

NOTE: Some of the computational features described in the paper above cannot be exploited in the matlab implementation. If performance is an issue for you, we strongly encourage you use the C++/ITK implementation available at: http://www.nitrc.org/projects/unlmeans, for which both source code and pre-compiled executables can be downloaded.

V:     the input volume to be filtered (3D) - MANDATORY
sigma: the noise power in the input image. In the Gaussian case, this is the
       standard deviation of the Gaussian noise at each pixel. In the Rician
       case, it is the standard deviation of noise in the original, Gaussian
       distributed, real and imaginary parts of the signal (whose modulus is
       computed to get the Rician variable). - MANDATORY
beta:  the filtering parameter. The larger its value, the more aggressive the
       filtering; the smaller its value, the better details are preserved.
       It should be in the range of 0.8 to 1.2 for best performance
       (Default: 1.0).
rs:    a 3x1 vector with the search radii (Default: 2,2,2)
rc:    a 3x1 vector with the comparison radii (Default: 1,1,1)
ps:    the preselection threshold. All those pixels in the search window
       whose normalized distance to the center pixel is larger than this
       value are automatically removed from the weighted average
       (Default: 2.0).
flag:  must be either ‘gaussian’ (the default) or ‘rician’. In the latter
       case, the weighted average is performed over the squared pixels, and
       the filtered value is computed as sqrt(mu-2*sigma^2) so that the
       estimate becomes unbiased.
block: flag telling the algorithm whether the computation of the weights
       within the search window is done with a loop (0, the default, since
       it seems to be faster for the default search window) or with vector
       operations (1). Choose 0 with small search radii or 1 with larger
       search radii.
out:   the filtered volume

My3DConv(h, I, gx, gy, gz)

Computes a separable 3D convolution

go(h)

check if we can skip calculation

class wiener

WIENER Wiener filter for ultrasound image denoising

Applies 2D Wiener filtering to reduce noise in beamformed images. Works on combined (single-channel) images only.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

m                      Filter window size (rows) [pixels]
n                      Filter window size (columns) [pixels]
sigma                  Noise variance estimate
run_on_logcompressed   Apply to log-compressed data (default: true)

Example:

obj = postprocess.wiener();

See also POSTPROCESS, MEDIAN, NON_LOCAL_MEANS_FILTERING
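A minimal sketch of the underlying operation using MATLAB's Image Processing Toolbox function wiener2 (an illustration, not the USTB implementation; the window size and toy image are assumptions):

```matlab
% Adaptive Wiener denoising, typically applied to the log-compressed
% envelope (cf. run_on_logcompressed).
env    = abs(randn(256, 256) + 1i*randn(256, 256));  % toy envelope image
img_dB = 20*log10(env/max(env(:)));                  % log-compress
den_dB = wiener2(img_dB, [5 5]);                     % [m n] window; noise
                                                     % estimated locally
```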

Method Summary
go(h)

check that the input is a combined image

class median

MEDIAN Median filter for ultrasound image denoising

Applies 2D median filtering to reduce speckle and noise in beamformed images. Works on combined (single-channel) images only.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

m   Filter window size (rows) [pixels]
n   Filter window size (columns) [pixels]

Example:

obj = postprocess.median();

See also POSTPROCESS, WIENER, NON_LOCAL_MEANS_FILTERING

Method Summary
go(h)

check that the input is a combined image

class scan_converter

SCAN_CONVERTER Converts sector-scan beamformed data to a linear scan grid.

Interpolates beamformed data from a sector scan geometry onto a 2D/3D linear scan grid using scattered interpolation.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

scan UFF.SCAN object defining the desired output scan geometry

Example:

obj = postprocess.scan_converter();

See also POSTPROCESS, UFF.BEAMFORMED_DATA, UFF.LINEAR_SCAN, UFF.SECTOR_SCAN
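The core operation can be sketched with MATLAB's scatteredInterpolant: sample the sector-grid pixel values at their Cartesian positions onto a regular linear grid (an illustration only, not the USTB implementation; geometry and values are toy assumptions):

```matlab
% Sector scan geometry: angle x depth grid mapped to Cartesian positions.
theta   = linspace(-pi/6, pi/6, 64);  r = linspace(0.01, 0.08, 128);
[T, Rr] = meshgrid(theta, r);
x = Rr.*sin(T);  z = Rr.*cos(T);           % sector pixel positions [m]
v = hypot(x, z);                           % placeholder pixel values

% Scattered interpolation onto a regular linear scan grid.
F = scatteredInterpolant(x(:), z(:), v(:), 'linear', 'none');
[Xq, Zq] = meshgrid(linspace(-0.04, 0.04, 128), linspace(0.01, 0.08, 128));
vq = F(Xq, Zq);                            % values on the linear scan grid
```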

authors: Stefano Fiorentini (stefano.fiorentini@ntnu.no)

$Date: 2022/12/19$

Property Summary
scan

UFF.SCAN object defining the desired scan for the output beamformed data

Method Summary
go(h)

check if we can skip calculation

Gray Level Transforms

class gray_level_transform

GRAY_LEVEL_TRANSFORM Applies polynomial gray-level mapping for dynamic range stretching.

Transforms beamformed signal amplitudes using a polynomial in log space (p(x) = a*x^3 + b*x^2 + c*x + d) mapped to linear space via cubic spline. Improves apparent image quality through dynamic range stretching prior to log-compression.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

a                Cubic coefficient
b                Quadratic coefficient
c                Linear coefficient
d                Constant offset
plot_functions   Enable debug plotting of transfer functions
scan             uff.scan object (optional)
is_exp           Experimental flag

Example:

obj = postprocess.gray_level_transform();

See also POSTPROCESS, POLYNOMIAL_GRAY_LEVEL_TRANSFORM, SCURVE_GRAY_LEVEL_TRANSFORM

References:

Rindal, O. M. H., Austeng, A., Fatemi, A., Rodriguez-Molares, A., “The effect of dynamic range alterations in the estimation of contrast,” IEEE TUFFC, vol. 66, no. 7, pp. 1198-1208, 2019. https://ieeexplore.ieee.org/document/8691813

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

$Last updated: 2017/09/12$

Method Summary
go(h)

check if we can skip calculation

class polynomial_gray_level_transform

POLYNOMIAL_GRAY_LEVEL_TRANSFORM Polynomial gray-level mapping for dynamic range stretching.

Applies a cubic polynomial (p(x) = a*x^3 + b*x^2 + c*x + d) in log space, mapped to linear space via cubic spline interpolation. Used for dynamic range stretching of beamformed ultrasound images prior to display.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

a                Cubic coefficient
b                Quadratic coefficient
c                Linear coefficient
d                Constant offset
plot_functions   Enable debug plotting of transfer functions
scan             uff.scan object (optional)
is_exp           Experimental flag

Example:

obj = postprocess.polynomial_gray_level_transform();

See also POSTPROCESS, GRAY_LEVEL_TRANSFORM, SCURVE_GRAY_LEVEL_TRANSFORM

References:

Rindal, O. M. H., Austeng, A., Fatemi, A., Rodriguez-Molares, A., “The effect of dynamic range alterations in the estimation of contrast,” IEEE TUFFC, vol. 66, no. 7, pp. 1198-1208, 2019. https://ieeexplore.ieee.org/document/8691813

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

$Last updated: 2017/09/12$

Method Summary
go(h)

check if we can skip calculation

class scurve_gray_level_transform

SCURVE_GRAY_LEVEL_TRANSFORM S-curve (sigmoid) gray-level mapping for dynamic range stretching.

Applies an S-curve transfer function (1/(1+exp(-a*(x_dB-b)))) in log space, mapped to linear space via cubic spline. Provides a sigmoidal compression for dynamic range stretching of beamformed ultrasound images.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

a                Sigmoid steepness parameter
b                Sigmoid center point [dB]
c                Scaling factor
plot_functions   Enable debug plotting of transfer functions
scan             uff.scan object (optional)

Example:

obj = postprocess.scurve_gray_level_transform();

See also POSTPROCESS, GRAY_LEVEL_TRANSFORM, POLYNOMIAL_GRAY_LEVEL_TRANSFORM

References:

Rindal, O. M. H., Austeng, A., Fatemi, A., Rodriguez-Molares, A., “The effect of dynamic range alterations in the estimation of contrast,” IEEE TUFFC, vol. 66, no. 7, pp. 1198-1208, 2019. https://ieeexplore.ieee.org/document/8691813
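The S-curve transfer function from the description above can be sketched in a few lines (an illustration only; the a and b values and the 60 dB display range are arbitrary assumptions, and the class's c scaling factor and spline mapping back to linear amplitude are omitted):

```matlab
% Sigmoid gray-level mapping applied in log space.
img_dB = linspace(-60, 0, 256);            % input levels [dB]
a = 0.12;  b = -30;                        % steepness and center point [dB]
v = 1 ./ (1 + exp(-a*(img_dB - b)));       % S-curve, values in (0, 1)
out_dB = 60*v - 60;                        % stretch back onto a 60 dB range
```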

implementers: Ole Marius Hoel Rindal <olemarius@olemarius.net>

Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

$Last updated: 2017/09/12$

Method Summary
go(h)

check if we can skip calculation

Utilities

class max

MAX Maximum-value compounding of beamformed data

Takes the maximum absolute value across transmit and/or receive dimensions to produce a compounded image.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Properties:

dimension dimension to take max over (transmit/receive/both)

Example:

obj = postprocess.max();

See also POSTPROCESS, STACK, DIMENSION

authors: Alfonso Rodriguez-Molares <alfonso.r.molares@ntnu.no>

Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/09/10$

Property Summary
dimension

Which “dimension” to take the maximum over

Method Summary
go(h)

check if we can skip calculation

class stack

STACK Scanline stacking for multi-angle beamformed data

Stacks scanlines from multiple transmit angles into a single extended image. Supports linear and sector scans.

Input: uff.beamformed_data -> Output: uff.beamformed_data

Example:

obj = postprocess.stack();

See also POSTPROCESS, MAX, UFF.LINEAR_SCAN, UFF.SECTOR_SCAN

authors: Alfonso Rodriguez-Molares (alfonso.r.molares@ntnu.no)

Ole Marius Hoel Rindal <olemarius@olemarius.net>

$Last updated: 2017/09/10$

Method Summary
go(h)

check if we can skip calculation