
A custom-designed stereovision system consisting of two C-mount cameras (Flea2 model FL2G-50S5C-C, Point Grey Research Inc., Richmond, BC, Canada) was rigidly mounted to a Zeiss surgical microscope (OPMI®Pentero™, Carl Zeiss, Inc., Oberkochen, Germany) through a binocular port. Acquisition of stereo image pairs from the two charge-coupled device (CCD) cameras was externally triggered (image size of 768 × 1,024; pixel resolution of approximately 50–100 μm; images in red, green, and blue (RGB)). The position and orientation of the microscope were available from a StealthStation® navigation system via StealthLink (Medtronic, Inc., Louisville, CO) through a rigidly attached tracker.

A sequence of stereo image pairs was acquired for a clinical patient case (an 18-year-old male undergoing resection of epileptic brain tissue in the right frontal region), together with synchronized blood pressure recorded using an in-house LabVIEW (National Instruments Corporation, Austin, TX) software package. These image pairs were first rectified based on a pinhole camera model with correction for radial lens distortion. The technical details of stereovision surface reconstruction based on this model can be found, e.g., in Sun et al. [2], and are not repeated here. The following framework describes the general procedures involved in computing in-plane strain of the cortical surface:

1. Optionally, rigid registration to remove unintentional motion of the microscope, with the region of interest (ROI) for registration limited to the area outside of the craniotomy;

2. Compute cortical surface deformation relative to the first image with optical flow motion tracking;

3. Apply FFT to detect the fundamental frequencies in motion corresponding to patient respiration and blood pressure pulsation (BPP);

4. Generate an average cortical surface profile based on the largest integer multiple of harmonic cycles;

5. Compute cortical surface in-plane strain in reference to the averaged image or reference state.

24.2.1 Two-Frame Optical Flow Computation

Optical flow has been an important research area in computer vision, widely used to track small-scale motion in temporal sequences of images based on the principle of intensity conservation between image frames [6, 7]. In this study, image pixel displacements between the deformed (at time t + 1) and undeformed (at time t) images were computed by optical flow motion tracking using the variational model of [8]. Assuming the image intensity of a pixel or material point, (x, y), does not change due to its displacement, the following gray value constancy constraint holds:

I(p + w) = I(p), (24.1)

170 S. Ji et al.

in which p = (x, y, t) and the underlying flow field, w(p), is given by w(p) = (u(p), v(p), 1), where u(p) and v(p) are the horizontal and vertical components of the flow field, respectively. The global deviations from the gray value constancy assumption are measured by an energy term:

E_Data(u, v) = ∫ Ψ(|I(p + w) − I(p)|²) dp, (24.2)

where Ψ is a robust function [9], Ψ(s²) = √(s² + ε²), used to enable an L1 minimization. The choice of Ψ does not introduce any additional parameters other than a small constant, ε (ε = 0.001), included for numerical purposes.
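As a minimal illustration (not the authors' implementation), the robust penalty above is one line of NumPy, with ε = 0.001 as in the text:

```python
import numpy as np

EPS = 1e-3  # small constant epsilon for numerical purposes


def psi(s_squared):
    """Robust penalty Psi(s^2) = sqrt(s^2 + eps^2): a smooth approximation
    to |s| that turns the quadratic data/smoothness terms into an L1
    minimization while remaining differentiable at zero."""
    return np.sqrt(s_squared + EPS**2)
```

For residuals much larger than ε, Ψ behaves like the absolute value; at zero it stays differentiable, which the iteratively reweighted least-squares solver relies on.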

The gray value constancy constraint only applies to a pixel locally without considering any interaction between neighboring pixels. Because the flow field in a natural scene is typically smooth, an additional piecewise smoothness constraint can be applied to the spatial domain (if there are only two frames available), leading to the following energy term:

E_Smooth(u, v) = ∫ φ(|∇u|² + |∇v|²) dp, (24.3)

where φ is a robust function chosen to be identical to Ψ, and ∇ is the gradient operator: |∇u|² = u_x² + u_y² (u_x = ∂u/∂x, u_y = ∂u/∂y).

Combining the gray value constancy constraint and piecewise smoothness constraint leads to the following objective function in the continuous spatial domain:

E(u, v) = E_Data + α E_Smooth, (24.4)

where α (α > 0; chosen as 0.02 in this study) is a regularization parameter. Computing the optical flow is then transformed into an optimization problem in which a spatially continuous flow field, (u, v), is determined that minimizes the total energy, E. In this study, an iteratively reweighted least squares algorithm [10] was used. A multiscale approach starting from a coarse, smoothed image set was also employed to help avoid local minima. In addition, the formulation was extended for application to multi-channel RGB images, where a vector, I(p), accounts for the image intensities in the red, green, and blue channels. The details of the algorithm are given in [11].
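The full robust multiscale solver is involved, but the structure of minimizing E = E_Data + α E_Smooth can be conveyed with a classic single-scale Horn–Schunck-style sketch: quadratic penalties stand in for Ψ and φ, and a Jacobi-style update stands in for the iteratively reweighted least-squares solver (parameter values are illustrative):

```python
import numpy as np


def optical_flow_hs(I1, I2, alpha=0.02, n_iter=300):
    """Minimal single-scale variational optical flow (Horn-Schunck-style
    sketch, not the chapter's robust L1 multiscale solver)."""
    Ix = np.gradient(I1, axis=1)   # spatial image gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour averages approximate the smoothness term
        u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # closed-form update from the linearized gray value constancy
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha + Ix**2 + Iy**2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v
```

On a pair of grayscale frames with sub-pixel motion, u and v recover the horizontal and vertical flow components; for the RGB extension, the data term would sum over the three channels.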

24.2.2 Averaging Cortical Surface Deformation Over Time

For a typical sequence of stereo images, the magnitude of cortical surface pulsation relative to the first image pair is illustrated in Fig. 24.1, where the displacement magnitude (averaged across the exposed cortical area) is plotted as a function of time for the entire duration of image acquisition (T = 21.45 s), together with the synchronized blood pressure, normalized for visualization.

Fig. 24.1 Cortical surface displacement magnitude averaged over the exposed cortical surface area relative to the first image as a function of image acquisition time, overlaid with the synchronized blood pressure

24 Tracking Cortical Surface Deformation Using Stereovision 171

Evidently, motion of the cortical surface followed a harmonic pattern that was in concert with BPP as well as the controlled patient mechanical ventilation.

To establish an averaged cortical surface profile, it is necessary to detect the underlying frequency components of the deformation signal. To this end, the fast Fourier transform (FFT; [12]) was utilized. A Hamming window (Fig. 24.2a; [13]) was also applied to the deformation signal prior to the FFT in order to minimize spectral leakage due to the finite sampling window and the fact that the sampled window was not necessarily an integer multiple of the underlying signal cycles. The resulting spectral amplitude and frequency relationship is shown in Fig. 24.2b.
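To make the windowing-plus-FFT step concrete, the sketch below applies a Hamming window to a synthetic two-tone signal (the 0.25 Hz and 1.2 Hz components and their amplitudes are hypothetical stand-ins for respiration and BPP, not the patient's values) and reads off the two dominant spectral peaks:

```python
import numpy as np

fsamp, N = 9.324, 200                  # sampling frequency (Hz) and sample count
t = np.arange(N) / fsamp
# synthetic deformation signal: hypothetical respiration and BPP components
sig = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.6 * np.sin(2 * np.pi * 1.2 * t)

windowed = (sig - sig.mean()) * np.hamming(N)   # de-mean, then Hamming window
spec = np.abs(np.fft.rfft(windowed))            # amplitude spectrum
freqs = np.fft.rfftfreq(N, d=1.0 / fsamp)

# local maxima of the amplitude spectrum, strongest first
peaks = [i for i in range(1, len(spec) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
peaks.sort(key=lambda i: spec[i], reverse=True)
f1, f2 = freqs[peaks[0]], freqs[peaks[1]]       # two dominant frequencies
```

With the stronger low-frequency component playing the role of respiration, f1 lands near 0.25 Hz and f2 near 1.2 Hz, to within the frequency resolution f_samp/N.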

Suppose the total number of image frames or deformation samples for the patient was N_samp and the total image duration was T; the sampling frequency, f_samp, is then defined as:

f_samp = N_samp / T. (24.5)

For the patient included, N_samp = 200 and T = 21.45 s, which led to f_samp = 9.324 Hz. After de-meaning the deformation signal, the spectral amplitude vs. frequency relationship was obtained from the FFT analysis, from which the two highest peaks, corresponding to patient respiration (f_res) and BPP (f_BPP), were identified (Fig. 24.2b). Because the spectral amplitude of respiration was significantly (82%) higher than that of BPP (1.163 and 0.6403, respectively), we chose to include the largest integer multiple of respiration cycles available from the acquisitions (i.e., N_res) for this patient case. Formally, N_res was obtained from the following constraint:

N_res / f_res ≤ N_samp / f_samp, (24.6)

which leads to:

N_res = floor( (f_res / f_samp) · N_samp ). (24.7)

The number of image acquisitions to be used in averaging, N_frame, was then obtained with the following equation:

N_frame = round( N_res · f_samp / f_res ) = round( floor( (f_res / f_samp) · N_samp ) · f_samp / f_res ). (24.8)
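Eqs. (24.6)–(24.8) amount to a few lines of arithmetic; the sketch below uses round illustrative numbers (f_res, f_samp, and N_samp here are not the patient's values):

```python
import math


def frames_to_average(f_res, f_samp, n_samp):
    """Largest integer number of respiration cycles available (Eq. 24.7)
    and the corresponding number of image frames to average (Eq. 24.8)."""
    n_res = math.floor(f_res / f_samp * n_samp)   # whole cycles in the record
    n_frame = round(n_res * f_samp / f_res)       # frames spanning those cycles
    return n_res, n_frame


# e.g., 0.25 Hz respiration sampled at 10 Hz for 100 frames
frames_to_average(0.25, 10.0, 100)  # → (2, 80)
```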

Fig. 24.2 (a) Comparison of the average cortical surface displacement magnitude before and after applying a Hamming window, as a function of image frame number; (b) spectral amplitude as a function of frequency obtained from the FFT analysis, where the peaks corresponding to respiration and BPP are identified


For each pixel in the exposed cortical surface area, its displacement components from the first N_frame images were then averaged. To obtain an averaged cortical surface profile, image interpolation was performed using the same gray value constancy assumption as in the optical flow motion-tracking algorithm. Specifically, suppose the resulting average displacement vector relative to the first image is dX; assuming that the image intensity, G, of the first image (i.e., used as the reference state before motion averaging) at pixel location X remains identical after the displacement dX, the functional mapping for the deformed average image, G′(X), was then numerically interpolated based on the following relationship:

G′(X + dX) = G(X). (24.9)
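Eq. (24.9) is a forward mapping: intensities are pushed from reference pixels X to the deformed locations X + dX and resampled onto the pixel grid. The chapter does not spell out its interpolation scheme, so the sketch below uses simple nearest-neighbor splatting as an illustrative stand-in:

```python
import numpy as np


def forward_warp_nearest(G, dX, dY, fill=np.nan):
    """Push each reference pixel G(X) to the rounded target X + dX
    (nearest-neighbor stand-in for the interpolation of Eq. 24.9)."""
    h, w = G.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.rint(xs + dX).astype(int)      # target column indices
    yt = np.rint(ys + dY).astype(int)      # target row indices
    ok = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
    Gp = np.full_like(G, fill, dtype=float)
    Gp[yt[ok], xt[ok]] = G[ys[ok], xs[ok]]  # splat intensities forward
    return Gp
```

Pixels that receive no sample keep the fill value; a practical implementation would instead scatter-interpolate (e.g., linearly) over the target grid.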

24.2.3 Displacement Field Local Smoothing

The same optical flow motion-tracking algorithm was then executed to derive the deformation field of each rectified image with respect to the averaged image. In-plane strain was calculated by numerically differentiating the displacement field.

Because of possible noise in the measured displacements, smoothing prior to differentiation is typical [14], and a point-wise local least-squares fitting scheme was employed for this purpose. For each pixel in the image, a linear plane was used to approximate the displacements (u_x, u_y) from the neighboring pixels within a window of size (2m + 1) × (2m + 1):

u_x(i, j) = a0 + a1 x_{i,j} + a2 y_{i,j}, (24.10)

u_y(i, j) = b0 + b1 x_{i,j} + b2 y_{i,j}, (24.11)

where i and j span from −m to m for a set of grid points; (x_{i,j}, y_{i,j}) is the corresponding pixel location; and (a, b) are the polynomial coefficients to be determined. A large m tends to eliminate local variations in strain, while a small m may not sufficiently smooth the strain estimation in certain regions. In this work, m was empirically chosen to be 4 as a reasonable trade-off.
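A sketch of this point-wise plane fit in NumPy (illustrative; the chapter's exact implementation, e.g., its border handling, is not specified, so borders are simply left unsmoothed here):

```python
import numpy as np


def local_plane_smooth(u, m=4):
    """Point-wise local least-squares plane fit (Eqs. 24.10-24.11): each
    interior pixel is replaced by the value at the center of a first-order
    polynomial fitted over its (2m+1) x (2m+1) neighborhood."""
    h, w = u.shape
    smoothed = u.astype(float).copy()           # borders kept as-is
    ys, xs = np.mgrid[-m:m + 1, -m:m + 1]       # local window coordinates
    A = np.column_stack([np.ones(xs.size), xs.ravel(), ys.ravel()])
    pinv = np.linalg.pinv(A)                    # least-squares solver, reused
    for i in range(m, h - m):
        for j in range(m, w - m):
            vals = u[i - m:i + m + 1, j - m:j + m + 1].ravel()
            a0, a1, a2 = pinv @ vals            # plane coefficients
            smoothed[i, j] = a0                 # fitted value at window center
    return smoothed
```

Because the window coordinates are centered at zero, the constant coefficient a0 is the fitted displacement at the pixel itself; a locally linear field passes through the fit unchanged.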

24.2.4 Cortical Surface Strain Estimation

Cortical surface in-plane strain was estimated using the Eulerian formulation based on the deformation gradient of the locally smoothed displacement field. From the full-field displacements, the displacement gradient is readily computed [15]:

∇u = [ ∂u/∂x   ∂u/∂y
       ∂v/∂x   ∂v/∂y ]. (24.12)

The deformation gradient is then obtained:

F = (I − ∇u)^(−1). (24.13)

Finally, the Eulerian strain tensor is obtained:

e(x) = (1/2) (I − F^(−T) F^(−1)). (24.14)

The maximum principal strain components (denoted as e_max) and their directions were then obtained by computing the eigenvalues and eigenvectors of the strain tensor at each pixel [15]. All data analyses were performed on a Linux computer (2.6 GHz, 8 GB RAM) using MATLAB (R2010b, The MathWorks, Natick, MA).
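The strain pipeline of Eqs. (24.12)–(24.14) translates directly into code; a NumPy sketch (illustrative, not the authors' MATLAB implementation):

```python
import numpy as np


def max_principal_strain(u, v):
    """Eulerian strain from a 2D displacement field (u, v): displacement
    gradient (Eq. 24.12), deformation gradient F = (I - grad u)^-1
    (Eq. 24.13), Eulerian strain e = (1/2)(I - F^-T F^-1) (Eq. 24.14),
    and the maximum principal strain as the largest eigenvalue per pixel."""
    uy, ux = np.gradient(u)          # np.gradient returns d/drow, then d/dcol
    vy, vx = np.gradient(v)
    h, w = u.shape
    e_max = np.zeros((h, w))
    I2 = np.eye(2)
    for i in range(h):
        for j in range(w):
            grad_u = np.array([[ux[i, j], uy[i, j]],
                               [vx[i, j], vy[i, j]]])
            F = np.linalg.inv(I2 - grad_u)           # Eq. 24.13
            Finv = np.linalg.inv(F)
            e = 0.5 * (I2 - Finv.T @ Finv)           # Eq. 24.14
            e_max[i, j] = np.linalg.eigvalsh(e)[-1]  # max principal strain
    return e_max
```

The corresponding eigenvectors of e (via np.linalg.eigh) give the principal strain directions at each pixel.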
