Our approach utilizes level sets with geodesic active contours [205]. The signed distance function, ψ, was chosen as the embedding function for the zero level set [206]. The movement of the contour is controlled by the generic level-set equation:

∂ψ/∂t = −α A(x)·∇ψ − β P(x)|∇ψ| + γ Z(x) κ |∇ψ|     (24)

where A(x) is an advection term, P(x) is an expansion term, and Z(x) is a spatial modifier term for the mean curvature κ [207]. Scalar constants α, β, and γ are used to weight the relative contributions of the three terms to the curve evolution. The contour is extracted at any time as the zero level set, ψ(x, t) = 0 (25).
The function ψ is solved at each time step in a finite difference scheme to create an image field of function values. The best approximation of the object surface is then obtained by calculating the image positions corresponding to zero crossings in the function values.
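To make the numerics concrete, the following is a minimal sketch of one explicit time step of equation (24) on a 2D grid using central differences, followed by a scan for sub-pixel zero crossings along image rows. It is illustrative only: the field names, parameter values, and the omission of upwind differencing are simplifying assumptions, not a description of the ITK-based implementation discussed later.

```cpp
// Minimal sketch: one explicit finite-difference step of the generic level-set
// equation (24) on a 2D grid, followed by a scan for zero crossings of psi.
// Central differences are used for clarity (no upwinding); all field names and
// parameter values are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

struct Field {
    int nx, ny;
    std::vector<double> v;                        // row-major samples
    double  operator()(int x, int y) const { return v[y * nx + x]; }
    double& operator()(int x, int y)       { return v[y * nx + x]; }
};

// dpsi/dt = -alpha*A.grad(psi) - beta*P*|grad(psi)| + gamma*Z*kappa*|grad(psi)|
void levelSetStep(Field& psi, const Field& advX, const Field& advY,
                  const Field& prop, const Field& mod,
                  double alpha, double beta, double gamma, double dt)
{
    Field next = psi;
    for (int y = 1; y < psi.ny - 1; ++y)
        for (int x = 1; x < psi.nx - 1; ++x) {
            const double px  = 0.5 * (psi(x + 1, y) - psi(x - 1, y));
            const double py  = 0.5 * (psi(x, y + 1) - psi(x, y - 1));
            const double pxx = psi(x + 1, y) - 2.0 * psi(x, y) + psi(x - 1, y);
            const double pyy = psi(x, y + 1) - 2.0 * psi(x, y) + psi(x, y - 1);
            const double pxy = 0.25 * (psi(x + 1, y + 1) - psi(x - 1, y + 1)
                                     - psi(x + 1, y - 1) + psi(x - 1, y - 1));
            const double mag = std::sqrt(px * px + py * py) + 1e-12;
            // mean curvature: kappa = div( grad(psi) / |grad(psi)| )
            const double kappa = (pxx * py * py - 2.0 * px * py * pxy + pyy * px * px)
                                 / (mag * mag * mag);
            const double advect = advX(x, y) * px + advY(x, y) * py;   // A . grad(psi)
            next(x, y) = psi(x, y) + dt * (-alpha * advect
                                           - beta  * prop(x, y) * mag
                                           + gamma * mod(x, y)  * kappa * mag);
        }
    psi = next;
}

// Report sub-pixel zero crossings between horizontally adjacent samples (eq. 25).
void printZeroCrossings(const Field& psi)
{
    for (int y = 0; y < psi.ny; ++y)
        for (int x = 0; x + 1 < psi.nx; ++x) {
            const double a = psi(x, y), b = psi(x + 1, y);
            if (a == 0.0 || a * b < 0.0) {
                const double t = (a == b) ? 0.0 : a / (a - b);  // linear interpolation
                std::printf("contour point near (%.3f, %d)\n", x + t, y);
            }
        }
}

int main()
{
    const int n = 64;
    Field zero{n, n, std::vector<double>(n * n, 0.0)};
    Field psi = zero, advX = zero, advY = zero, prop = zero, mod = zero;
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            psi(x, y)  = std::hypot(x - 32.0, y - 32.0) - 20.0;  // circle of radius 20
            prop(x, y) = 1.0;                                    // uniform expansion
            mod(x, y)  = 1.0;                                    // uniform smoothing
        }
    for (int i = 0; i < 20; ++i)
        levelSetStep(psi, advX, advY, prop, mod, 1.0, 1.0, 0.2, 0.4);
    printZeroCrossings(psi);
    return 0;
}
```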
The geodesic active contours equation in (24) may be extended with an additional shape prior term. We used the formulation of this term presented by Leventon et al.:

λ ( ψ*(t) − ψ(t) )     (26)

where ψ* is the best estimate of the final curve as determined by a maximum a posteriori approach and λ is a scalar weight constant [208].
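As a small illustration, the shape prior term in (26) simply pulls the evolving level set toward the current MAP estimate of the final curve. A hedged sketch of the combined update, treating levelSetTerm as the already evaluated right-hand side of (24), is:

```cpp
// Sketch: the Leventon-style shape influence term of equation (26) added to the
// level-set update. `levelSetTerm` stands for the already evaluated right-hand
// side of equation (24); all names and the explicit time step are illustrative.
#include <cstddef>
#include <vector>

void shapeInfluencedStep(std::vector<double>& psi,                  // current level set
                         const std::vector<double>& psiStar,        // MAP estimate of final curve
                         const std::vector<double>& levelSetTerm,   // RHS of (24)
                         double lambda, double dt)
{
    for (std::size_t i = 0; i < psi.size(); ++i)
        psi[i] += dt * (levelSetTerm[i] + lambda * (psiStar[i] - psi[i]));
}
```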
The estimate of the final curve can be calculated by

⟨α, r⟩ = argmax P(α, r | ψ, ∇I)     (27)

which seeks to maximize the probability of a set of shape parameters, α, and rigid pose parameters, r (both defined later), given the surface ψ at some point in time and the gradient of the image, ∇I. Equation (27) can be expanded using Bayes' rule:

P(α, r | ψ, ∇I) = P(ψ | α, r) · P(∇I | α, r, ψ) · P(α) · P(r) / P(ψ, ∇I)     (28)
where the denominator can be discarded as it does not depend on shape or pose. The first term in the numerator is modeled as a Laplacian density function over Voutside, the volume of the current curve ψ which lies outside the estimated final curve ψ*:

P(ψ | α, r) = exp( −Voutside )     (29)
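A brief sketch of this term, under the assumed convention that both signed distance maps are negative inside their respective curves (the sign convention is not spelled out here), is:

```cpp
// Sketch: the "volume outside" term of equation (29). It accumulates the volume
// of the region inside the current curve (psi < 0) that lies outside the
// estimated final curve (psiStar > 0), assuming a negative-inside convention.
#include <cstddef>
#include <vector>

double insideTermLogProbability(const std::vector<double>& psi,      // current level set
                                const std::vector<double>& psiStar,  // estimated final curve
                                double voxelVolume)
{
    double vOutside = 0.0;
    for (std::size_t i = 0; i < psi.size(); ++i)
        if (psi[i] < 0.0 && psiStar[i] > 0.0)
            vOutside += voxelVolume;
    return -vOutside;   // log P(psi | alpha, r), up to normalization
}
```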
The second term describes the probability of observing a given image gradient given the current and final curves. It can be assumed that when the curve correctly outlines the boundary of an object, its relationship with |∇I| should be Gaussian. Thus this term is modeled as a Laplacian of the goodness of fit of the Gaussian g_ψ* to the samples ⟨ψ*, |∇I|⟩:

P(∇I | α, r, ψ) = exp( −| g_ψ* − |∇I| |² )     (30)
The third term is the probability distribution of the shape prior parameters. The typical strategy for computing a prior on shape variation is to build a shape model given a set of training images. Given n training images, the training set R = {ψ1, ψ2, …, ψn} consists of the signed distance maps for each image. The mean surface, μ, is computed as the mean of the signed distance maps, μ = (1/n) Σi ψi. Principal component analysis (PCA) is used to compute the shape variance. First, the mean shape is subtracted from each ψi to create mean-offset maps, which are then each placed as column vectors in an N×n-dimensional matrix M, where N is the number of samples (pixels) in each distance map. Instead of performing the eigen decomposition on the large N×N covariance matrix M·M^T, we decompose the much smaller n×n inner product matrix M^T·M. The resulting eigenvectors, E, are then multiplied by the matrix M to get the principal component images, U. The object ψ can be estimated by the first k principal components as a k-dimensional vector of shape parameters, α:

α = Uk^T ( ψ − μ )     (31)

The shape prior probability term is thus modeled as a Gaussian with shape variance Σk:

P(α) = 1 / √( (2π)^k |Σk| ) · exp( −½ α^T Σk^-1 α )     (32)

where Σk is a diagonal matrix containing the first k eigenvalues.
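In ITK, the toolkit used for the implementation described later, the class itk::ImagePCAShapeModelEstimator computes a mean image, principal component images, and eigenvalues from a set of training images and is designed for exactly this n ≪ N case. The sketch below shows illustrative wiring only; the file names and the number of retained modes are placeholders.

```cpp
// Sketch: estimating the mean shape, principal component images, and eigenvalues
// from n training distance maps with itk::ImagePCAShapeModelEstimator.
// File names and the number of retained modes (k) are illustrative placeholders.
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImagePCAShapeModelEstimator.h"
#include <string>
#include <vector>

int main()
{
  using ImageType = itk::Image<float, 2>;
  using ReaderType = itk::ImageFileReader<ImageType>;
  using EstimatorType = itk::ImagePCAShapeModelEstimator<ImageType, ImageType>;

  const std::vector<std::string> trainingFiles = {
    "shape01.mha", "shape02.mha", "shape03.mha", "shape04.mha", "shape05.mha"};
  const unsigned int k = 3;   // number of retained principal components

  auto estimator = EstimatorType::New();
  estimator->SetNumberOfTrainingImages(static_cast<unsigned int>(trainingFiles.size()));
  estimator->SetNumberOfPrincipalComponentsRequired(k);

  std::vector<ReaderType::Pointer> readers;
  for (unsigned int i = 0; i < trainingFiles.size(); ++i)
  {
    auto reader = ReaderType::New();
    reader->SetFileName(trainingFiles[i]);
    reader->Update();
    estimator->SetInput(i, reader->GetOutput());
    readers.push_back(reader);   // keep the readers and their outputs alive
  }

  estimator->Update();

  // Output 0 is the mean image mu; outputs 1..k are the principal component images.
  ImageType::Pointer meanImage = estimator->GetOutput(0);
  auto eigenValues = estimator->GetEigenValues();   // shape variances used in eq. (32)
  (void)meanImage;
  (void)eigenValues;
  return 0;
}
```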
The last term, P(r), is the probability of observing a set of pose parameters. We currently do not assume any pose more likely than another, and so merely assume a uniform distribution over these parameters. The approximate signed distance map to the shape prior can be computed as

ψ*(x) = μ( T(x) ) + Σi=1…k σi αi* Ui( T(x) )     (33)

where the σi are the square roots of the eigenvalues and T(x) is a transform which defines the pose of the shape, which we chose to be a rigid transform function with parameters, r, of rotation and translation. The shape prior term in (26) is thus constructed at each iteration of the level set evolution by referring to the PCA description of the object shape and then using an optimizer to solve for a set of shape and pose parameters which maximize the posterior probability in (28).
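In ITK the shape estimate of (33) corresponds closely to itk::PCAShapeSignedDistanceFunction, whose principal component standard deviations play the role of the σi. The following hedged sketch shows how such a function could be configured from the outputs of the shape model estimation step; the variable names and the choice of a 2D Euler transform for the rigid pose are illustrative.

```cpp
// Sketch: building an approximate signed distance function psi* (eq. 33) from a
// PCA shape model with itk::PCAShapeSignedDistanceFunction. The mean image,
// principal component images, and eigenvalues are assumed to come from the
// shape model estimation step; variable names are illustrative.
#include "itkEuler2DTransform.h"
#include "itkImage.h"
#include "itkPCAShapeSignedDistanceFunction.h"
#include <cmath>
#include <vector>

using ImageType = itk::Image<float, 2>;
using ShapeFunctionType = itk::PCAShapeSignedDistanceFunction<double, 2, ImageType>;

ShapeFunctionType::Pointer
buildShapeFunction(ImageType::Pointer meanImage,
                   const std::vector<ImageType::Pointer>& pcImages,
                   const std::vector<double>& eigenValues)
{
  const unsigned int k = static_cast<unsigned int>(pcImages.size());

  auto shapeFunction = ShapeFunctionType::New();
  shapeFunction->SetNumberOfPrincipalComponents(k);
  shapeFunction->SetMeanImage(meanImage);
  shapeFunction->SetPrincipalComponentImages(pcImages);

  // sigma_i in (33): square roots of the PCA eigenvalues
  ShapeFunctionType::ParametersType stdDevs(k);
  for (unsigned int i = 0; i < k; ++i)
    stdDevs[i] = std::sqrt(eigenValues[i]);
  shapeFunction->SetPrincipalComponentStandardDeviations(stdDevs);

  // T(x): rigid (rotation + translation) pose of the shape
  auto transform = itk::Euler2DTransform<double>::New();
  shapeFunction->SetTransform(transform);

  // Parameters = k shape parameters (alpha) followed by the pose parameters (r);
  // during segmentation these are supplied by the MAP optimizer at each iteration.
  return shapeFunction;
}
```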
The segmentation algorithm described above was implemented in C++ primarily using ITK (Kitware Inc., Clifton Park, NY), and the segmentation pipeline is shown in Figure 41.
Figure 41. Ultrasound image processing and segmentation pipeline using the shape model derived from a co‐registered tomogram segmentation.
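A hedged sketch of how such a pipeline can be assembled in ITK is shown below, combining itk::GeodesicActiveContourShapePriorLevelSetImageFilter with itk::ShapePriorMAPCostFunction and a (1+1) evolutionary optimizer. The scalings, iteration counts, random seed, and the assumption that an initial level set and an edge potential (feature) image have already been computed are illustrative placeholders rather than the settings used in our implementation.

```cpp
// Sketch: wiring a shape-prior geodesic active contour segmentation in ITK.
// The initial level set, the edge potential (feature) image, and the shape
// function from the previous sketch are assumed to be available; the numeric
// settings below are illustrative placeholders.
#include "itkGeodesicActiveContourShapePriorLevelSetImageFilter.h"
#include "itkImage.h"
#include "itkNormalVariateGenerator.h"
#include "itkOnePlusOneEvolutionaryOptimizer.h"
#include "itkPCAShapeSignedDistanceFunction.h"
#include "itkShapePriorMAPCostFunction.h"

using ImageType = itk::Image<float, 2>;
using ShapeFunctionType = itk::PCAShapeSignedDistanceFunction<double, 2, ImageType>;

ImageType::Pointer
segmentWithShapePrior(ImageType::Pointer initialLevelSet,
                      ImageType::Pointer edgePotentialImage,
                      ShapeFunctionType::Pointer shapeFunction)
{
  using FilterType =
    itk::GeodesicActiveContourShapePriorLevelSetImageFilter<ImageType, ImageType>;
  auto filter = FilterType::New();

  // Relative weights of the advection, expansion, curvature (eq. 24) and
  // shape prior (eq. 26) contributions.
  filter->SetAdvectionScaling(1.0);
  filter->SetPropagationScaling(1.0);
  filter->SetCurvatureScaling(1.0);
  filter->SetShapePriorScaling(0.5);
  filter->SetMaximumRMSError(0.005);
  filter->SetNumberOfIterations(400);

  // MAP cost function combining the four probability terms of eq. (28).
  using CostFunctionType = itk::ShapePriorMAPCostFunction<ImageType, float>;
  auto costFunction = CostFunctionType::New();
  CostFunctionType::WeightsType weights;
  weights.Fill(1.0);
  costFunction->SetWeights(weights);

  const unsigned int numberOfShapeParameters = shapeFunction->GetNumberOfShapeParameters();
  CostFunctionType::ArrayType means(numberOfShapeParameters);
  means.Fill(0.0);
  costFunction->SetShapeParameterMeans(means);
  CostFunctionType::ArrayType stdDevs(numberOfShapeParameters);
  stdDevs.Fill(1.0);
  costFunction->SetShapeParameterStandardDeviations(stdDevs);

  // (1+1) evolutionary optimizer used to solve the MAP problem of eq. (27)
  // at each iteration of the level-set evolution.
  auto generator = itk::Statistics::NormalVariateGenerator::New();
  generator->Initialize(20020702);
  auto optimizer = itk::OnePlusOneEvolutionaryOptimizer::New();
  optimizer->SetNormalVariateGenerator(generator);

  const unsigned int numberOfParameters = shapeFunction->GetNumberOfParameters();
  itk::OnePlusOneEvolutionaryOptimizer::ScalesType scales(numberOfParameters);
  scales.Fill(1.0);
  optimizer->SetScales(scales);
  optimizer->Initialize(1.05);
  optimizer->SetEpsilon(1.0e-6);
  optimizer->SetMaximumIteration(15);

  // Initial shape and pose parameters (all zero here for illustration).
  ShapeFunctionType::ParametersType parameters(numberOfParameters);
  parameters.Fill(0.0);

  filter->SetShapeFunction(shapeFunction);
  filter->SetCostFunction(costFunction);
  filter->SetOptimizer(optimizer);
  filter->SetInitialParameters(parameters);

  filter->SetInput(initialLevelSet);
  filter->SetFeatureImage(edgePotentialImage);
  filter->Update();

  return filter->GetOutput();
}
```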
The segmentation is performed after soft tissue movement has been corrected with our model-based approach. Each ultrasound image is thus associated with a specific model deformation derived from the position of the ultrasound probe in the tissue. The statistical shape model is then created from intersections of the ultrasound beam with the co-registered tomogram target segmentation, as shown in Figure 42. Typically it is desirable to create a shape model from a very large dataset; since this is not available in practice for irregular, patient-specific structures such as tumors, the approach here was to perturb the ultrasound beam in the elevational direction by several millimeters in each direction and record the varying intersected target borders. This was done for several planes in either direction in order to gather at least five shape images with which to create the shape model.
Figure 42. Creation of statistical shape model from co‐registered tomogram target for the ultrasound segmentation pipeline.
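As a rough illustration of this resampling step, the sketch below intersects a 3D binary target segmentation with the tracked ultrasound image plane shifted along its elevational normal by a series of offsets, producing one binary shape image per offset. The volume layout, plane parameterization, and offset values are illustrative assumptions rather than the geometry used in the actual system.

```cpp
// Sketch: generating training shape images by intersecting a co-registered 3D
// target segmentation with the ultrasound image plane shifted along its
// elevational normal. Volume layout, plane geometry, and offsets are
// illustrative placeholders.
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

using Vec3 = std::array<double, 3>;

struct LabelVolume {                      // 3D binary segmentation, row-major, x fastest
    int nx, ny, nz;
    double spacing;                       // isotropic voxel size (mm)
    std::vector<std::uint8_t> voxels;
    std::uint8_t sample(const Vec3& p) const {    // nearest-neighbour lookup (mm -> index)
        const int i = static_cast<int>(std::lround(p[0] / spacing));
        const int j = static_cast<int>(std::lround(p[1] / spacing));
        const int k = static_cast<int>(std::lround(p[2] / spacing));
        if (i < 0 || j < 0 || k < 0 || i >= nx || j >= ny || k >= nz) return 0;
        return voxels[(static_cast<std::size_t>(k) * ny + j) * nx + i];
    }
};

struct BeamPlane {          // tracked ultrasound image plane in volume coordinates (mm)
    Vec3 origin;            // position of image pixel (0,0)
    Vec3 axial, lateral;    // in-plane unit axes
    Vec3 normal;            // elevational unit direction
};

// One binary shape image per elevational offset (e.g. -4, -2, 0, +2, +4 mm).
std::vector<std::vector<std::uint8_t>>
sampleShapeImages(const LabelVolume& seg, const BeamPlane& plane,
                  int rows, int cols, double pixelSize,
                  const std::vector<double>& offsetsMm)
{
    std::vector<std::vector<std::uint8_t>> shapes;
    for (double d : offsetsMm) {
        std::vector<std::uint8_t> img(static_cast<std::size_t>(rows) * cols, 0);
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c) {
                Vec3 p;
                for (int a = 0; a < 3; ++a)
                    p[a] = plane.origin[a] + r * pixelSize * plane.axial[a]
                         + c * pixelSize * plane.lateral[a] + d * plane.normal[a];
                img[static_cast<std::size_t>(r) * cols + c] = seg.sample(p);
            }
        shapes.push_back(std::move(img));   // later converted to signed distance maps
    }
    return shapes;
}
```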