
Towards Fast and High-quality Biomedical Image Reconstruction


For example, in biomedical imaging such as computed tomography (CT) and magnetic resonance imaging (MRI), the term refers to the transformation of possibly complete or undersampled spectral domains (the sinogram for CT and k-space for MRI) into the visible image domain.

Figure 1: Image reconstruction tasks can be varied across modalities.

Motivation

Problem statement

The main objectives of this thesis are therefore to develop faster, higher-quality biomedical image reconstruction methods, organized into three tasks: restoring image signals from undersampled magnetic resonance k-space data, segmenting semantic information (membranes, blob-like cell nuclei), and removing noise from electron microscope images.

Example of image reconstruction problem: Compressed Sensing MRI

This sparsity-based CS-MRI method introduces computational overhead into the reconstruction process because it solves expensive nonlinear ℓ1-minimization problems, which has led to the development of efficient numerical algorithms [51] and the use of parallel computing hardware to accelerate computation [99]. The nuclear norm and low-rank matrix completion techniques have also been used for CS-MRI reconstruction.
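For reference, a generic form of the sparsity-based CS-MRI objective mentioned above can be written as follows; the notation is illustrative and not necessarily the thesis's exact formulation:

```latex
\min_{x} \; \frac{1}{2} \left\lVert F_u x - y \right\rVert_2^2
          + \lambda \left\lVert \Psi x \right\rVert_1
```

Here F_u is the undersampled Fourier operator, y the acquired k-space samples, Ψ a sparsifying transform (e.g., wavelets), and λ balances data fidelity against sparsity; the nonsmooth ℓ1 term is what makes the reconstruction computationally expensive.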

Figure 3: MRI examples. Red line: time axis; blue-green lines: x-y axes; (a) and (b): different types of time-varying dynamic MRI; (c): CS-MRI example.

Inspirations

Research statement

In addition to adversarial training and the introduction of noise-patch concatenation, an improvement over unpaired image-to-image translation using a cycle-consistent loss is presented. It demonstrates that prior knowledge about the noise pattern, such as grid tape film or charge damage, benefits the denoising process compared to direct translation back and forth between the noisy and denoised image domains.

Discrete Wavelet Transform

Wavelet lifting schemes

Wavelet lifting schemes consist of multiple steps (e.g., two for Haar and CDF 5/3, four for CDF 9/7) with an optional normalization step at the end. Both the Haar and CDF 5/3 lifting schemes use one pair of weights (α and β), while the CDF 9/7 scheme requires two pairs of weights, (α and β) and (γ and δ), as listed in Table 1.
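To make the lifting structure concrete, below is a minimal CUDA sketch of a single-level CDF 5/3 row transform using one (α, β) pair. The kernel name, tile mapping, and boundary handling are illustrative assumptions, not the thesis's actual implementation; it would be launched, e.g., as lift53_rows<<<height, 256, width*sizeof(float)>>>(d_img, width, height).

```cuda
#include <cuda_runtime.h>

// Single-level CDF 5/3 lifting over image rows, one row per block held in
// dynamic shared memory. Symmetric extension is approximated at the row ends;
// width is assumed to be even.
__global__ void lift53_rows(float* img, int width, int height)
{
    extern __shared__ float row[];
    const float alpha = -0.5f;   // predict weight
    const float beta  =  0.25f;  // update weight
    const int y = blockIdx.x;
    if (y >= height) return;

    // cooperative load of one row into shared memory
    for (int x = threadIdx.x; x < width; x += blockDim.x)
        row[x] = img[y * width + x];
    __syncthreads();

    const int half = width / 2;

    // predict step: odd samples become detail coefficients
    for (int i = threadIdx.x; i < half; i += blockDim.x) {
        int r = (2 * i + 2 < width) ? 2 * i + 2 : width - 2;  // mirror at the edge
        row[2 * i + 1] += alpha * (row[2 * i] + row[r]);
    }
    __syncthreads();

    // update step: even samples become approximation coefficients
    for (int i = threadIdx.x; i < half; i += blockDim.x) {
        int l = (2 * i - 1 >= 0) ? 2 * i - 1 : 1;             // mirror at the edge
        row[2 * i] += beta * (row[l] + row[2 * i + 1]);
    }
    __syncthreads();

    // write back in place (interleaved, "mixed-band" style layout)
    for (int x = threadIdx.x; x < width; x += blockDim.x)
        img[y * width + x] = row[x];
}
```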

Table 1: Lifting coefficients of selected wavelet families (Haar, CDF 5/3, CDF 9/7).

Mixed-band wavelet transform

Assuming this process runs on the GPU, the input and output pixel arrays can be regarded as global memory and the dashed wavelet transform box as shared memory. As shown on the left of Figure 38, the read and write steps differ in the conventional wavelet transform, whereas they are identical in the mixed-band wavelet transform.

Bit rotation permutation

The advantages of the mixed-band approach include 1) lower memory consumption due to in-place filtering, 2) a combined multi-level transformation in a single kernel call without synchronization through global memory, and 3) reduced index-calculation overhead for relocating the wavelet coefficients.
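As a sketch of the index-permutation idea only (the thesis's exact scheme may differ in direction and multi-level handling): a single-level in-place 1D DWT of length N = 2^n leaves approximation coefficients at even indices and details at odd indices, and rotating the n index bits right by one maps this interleaved index to the conventional band-separated index.

```cuda
// Hypothetical helper: map an interleaved (in-place) coefficient index to the
// band-separated index for a single-level 1D DWT of length N = 2^n.
// Even i -> i/2 (approximation half), odd i -> i/2 + N/2 (detail half).
__host__ __device__ unsigned interleaved_to_banded(unsigned i, int n)
{
    return (i >> 1) | ((i & 1u) << (n - 1));   // rotate the n low bits right by 1
}
```

Multi-level layouts can be derived by applying the same rotation recursively on the approximation half, which is why the mixed-band layout avoids explicit coefficient relocation.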

Multi-GPU Iterative Solver

Multi–GPU parallelization on a shared memory system

However, if we choose the halo size properly (depending on the communication delay and the computation cost) and use multiple streams, we can hide the data communication overhead by overlapping the halo exchange with computation.
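A minimal sketch of this overlap with two CUDA streams is shown below; the kernel names, halo buffers, and peer-copy direction are illustrative assumptions, not the thesis's solver.

```cuda
#include <cuda_runtime.h>

// Dummy stencil kernels standing in for the real solver updates (assumption).
__global__ void interior_kernel(float* u, int nx, int ny) { /* ... */ }
__global__ void boundary_kernel(float* u, const float* halo, int nx, int ny) { /* ... */ }

// Hide the halo exchange between two GPUs behind the interior update by
// issuing the peer copy and the interior kernel on different streams.
void step_with_overlap(float* d_u, float* d_haloSend, float* d_haloRecv,
                       int myDev, int peerDev, size_t haloBytes, int nx, int ny)
{
    cudaStream_t compute, copy;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);

    // 1) start the halo exchange on the copy stream
    cudaMemcpyPeerAsync(d_haloRecv, peerDev, d_haloSend, myDev, haloBytes, copy);

    // 2) meanwhile, update the interior cells that do not depend on the halo
    interior_kernel<<<dim3((nx + 15) / 16, (ny + 15) / 16), dim3(16, 16), 0, compute>>>(d_u, nx, ny);

    // 3) when the halo has arrived, finish the boundary cells
    cudaStreamSynchronize(copy);
    boundary_kernel<<<dim3((nx + 15) / 16, 1), dim3(16, 16), 0, compute>>>(d_u, d_haloRecv, nx, ny);

    cudaStreamSynchronize(compute);
    cudaStreamDestroy(compute);
    cudaStreamDestroy(copy);
}
```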

Figure 9: Data splitting based on the number of GPUs.

Multi–GPU parallelization on a distributed memory system

We can further improve this approach by switching to the non-blocking versions (i.e., MPI_Isend(..) and MPI_Irecv(..)). In this case, the CPU process returns immediately and is ready to send the second halo.
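A sketch of such a non-blocking exchange between left and right neighbor ranks might look as follows; the buffer names and the one-dimensional neighbor layout are assumptions for illustration.

```cuda
#include <mpi.h>

// Non-blocking halo exchange with the left and right neighbor ranks.
void exchange_halos(float* sendLeft, float* sendRight,
                    float* recvLeft, float* recvRight,
                    int count, int left, int right, MPI_Comm comm)
{
    MPI_Request req[4];
    // post the receives first so the sends can complete without extra buffering
    MPI_Irecv(recvLeft,  count, MPI_FLOAT, left,  0, comm, &req[0]);
    MPI_Irecv(recvRight, count, MPI_FLOAT, right, 1, comm, &req[1]);
    MPI_Isend(sendLeft,  count, MPI_FLOAT, left,  1, comm, &req[2]);
    MPI_Isend(sendRight, count, MPI_FLOAT, right, 0, comm, &req[3]);

    // ... interior computation can run here while the messages are in flight ...

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}
```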

Figure 12: GPU direct version 2.0, function interface and the actual data communication flow

Convolutional Sparse Coding Solver

For example, the atoms in the dictionary learned from the LdCT-Chest data represent fine-level details related to bone and the surrounding muscle tissue. They form a data-dependent learned basis, which differs significantly from the universal dictionaries of the Fourier, cosine, and wavelet transforms.

Figure 17: A 64-atom dictionary generated from natural images.

Deep Learning in Bio-medical Image Reconstruction

Deep Learning in Image Processing and Computer Vision

Since their solver operates entirely in the image domain, the linear complexity of convolution affects the performance of their algorithm. They generate a data-dependent learned basis, which differs significantly from the universal dictionaries of the Fourier, cosine, and wavelet transforms.

Figure 19: Volume rendering of hierarchical multi-scale 3D dictionaries. From left to right: three resolutions of 24 atoms (7³, 15³, and 31³).

Deep Learning in Connectomics

Previous automatic segmentation work using deep learning has mainly focused on patch-based pixel-wise classification based on a convolutional neural network (CNN) for affinity map generation [124] and cell membrane probability estimation [31]. A notable recent advance in the machine learning domain is the introduction of a fully convolutional neural network (FCN) [85] for the end-to-end semantic segmentation problem.

Deep Learning for Compressed Sensing MRI Reconstruction

Conventional approaches usually do not use noise-free samples as ground truth to perform the image denoising task. This deep model consists of multiple phases, each of which corresponds to a single iteration of the ADMM algorithm.
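For context, the generic (scaled-form) ADMM iteration that each network phase unrolls has the following structure; the splitting and symbols here are generic, not the specific formulation of that deep model:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\left\lVert Ax + Bz^{k} - c + u^{k} \right\rVert_2^2 \\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\left\lVert Ax^{k+1} + Bz - c + u^{k} \right\rVert_2^2 \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}
```

In the unrolled network, the proximal/regularization steps and penalty parameters of these updates become learnable components of each phase.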

GPU Optimization Strategies

  • Using shared memory
  • Using registers
  • Exploiting instruction-level parallelism (ILP)
  • Exploiting warp shuffles on Kepler GPUs
  • Combining all: hybrid approach
  • Fused multi-level Haar DWT

As shown in the figure, the total number of shared memory transactions (drawn as red arrows) is reduced in the register-based implementation. The shared-memory-based implementation shown in Figure 22(a) (top) requires 6 reads (two sets of three reads, for the lifting updates with α, are performed in parallel), 3 writes to shared memory, and two __syncthreads() calls.
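To illustrate the contrast with the shared-memory variant, here is a minimal register/warp-shuffle sketch of the CDF 5/3 lifting step, written with the modern __shfl_*_sync intrinsics (on Kepler these would be the plain __shfl_up/__shfl_down forms). The layout and boundary handling are simplified assumptions, not the thesis's hybrid kernel.

```cuda
// Each lane keeps one (even, odd) sample pair in registers and fetches its
// neighbors' values with warp shuffles instead of shared memory. Warp and
// block boundaries are left unhandled (shuffles return the lane's own value
// past the warp edge); a real tile needs proper symmetric extension.
__global__ void lift53_warp(const float* in, float* out, int n)
{
    const float alpha = -0.5f, beta = 0.25f;
    const unsigned mask = 0xffffffffu;
    int  pair  = blockIdx.x * blockDim.x + threadIdx.x;
    bool valid = (2 * pair + 1 < n);

    // every lane loads something so all 32 lanes can take part in the shuffles
    float even = valid ? in[2 * pair]     : 0.f;
    float odd  = valid ? in[2 * pair + 1] : 0.f;

    // predict: needs the next lane's even sample
    float evenNext = __shfl_down_sync(mask, even, 1);
    float d = odd + alpha * (even + evenNext);

    // update: needs the previous lane's detail sample
    float dPrev = __shfl_up_sync(mask, d, 1);
    float s = even + beta * (dPrev + d);

    if (valid) {
        out[2 * pair]     = s;  // approximation stays on the even index
        out[2 * pair + 1] = d;  // detail stays on the odd index
    }
}
```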

Figure 21: Regions of reading and writing when using the shared-memory-only strategy.

Result

  • Comparison of various strategies
  • Comparison with different hybrid configurations
  • Comparison with CPU implementations
  • Comparison with other GPU DWT methods
  • Comparison on multi-level GPU DWT

Note that some of the configurations (in gray) will be invalid due to either the number of pixels required along the vertical axis for a valid tile or the hardware limitations (shared memory resource per block). As shown here, our proposed method achieved higher performance compared to the state-of-the-art method [40] measured on an NVIDIA Kepler GPU (K40).

Figure 25: Running times (in msecs) of various strategies, on NVIDIA Kepler GPUs.

Application: 3D CSMRI reconstruction

As shown in Table 10, adding a 3D wavelet term can effectively increase the PSNR of the reconstructed images. It can be seen that the wavelet term helps to reduce noise-like effects and increase the quality of 3D MRI reconstruction.

Table 10: PSNRs of 3D CSMRI reconstruction on 128 × 128 × 128 datasets. Unit: dB

Summary

Method

Compressed Sensing formulation for DCE-MRI

The authors of [51] proposed a closed-form solution that inverts the left-hand side of the given linear system using forward and inverse 3D Fourier transforms, but we cannot use the same method because the k-space data in our formulation is 2D. Since the left-hand side of Equation 14 is not directly invertible, we use iterative methods such as fixed-point iteration or the conjugate gradient method, which converge more slowly than the explicit inversion in the original method.
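A textbook conjugate-gradient loop of the kind referred to here, with the system matrix available only as a matrix-vector product, is sketched below. This host-side version only shows the iteration structure; the actual solver runs on the GPU, and the operator name and stopping criterion are assumptions.

```cuda
#include <vector>
#include <cmath>
#include <functional>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0; for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i]; return s;
}

// Solve A x = b for a symmetric positive-definite operator A given as a callback.
Vec conjugate_gradient(const std::function<Vec(const Vec&)>& A,
                       const Vec& b, int maxIter = 100, double tol = 1e-8)
{
    Vec x(b.size(), 0.0), r = b, p = r;          // x0 = 0  =>  r0 = b
    double rs = dot(r, r);
    for (int k = 0; k < maxIter && std::sqrt(rs) > tol; ++k) {
        Vec Ap = A(p);
        double alpha = rs / dot(p, Ap);          // step length along p
        for (size_t i = 0; i < x.size(); ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rsNew = dot(r, r);
        double beta = rsNew / rs;                // conjugation coefficient
        for (size_t i = 0; i < p.size(); ++i) p[i] = r[i] + beta * p[i];
        rs = rsNew;
    }
    return x;
}
```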

Multi-GPU Implementation

Because the streams run in parallel, stream 0 can continue computing on the interior region during the communication (Fig. 30, yellow). Then each out-of-region data block on the CPU is transferred to the ghost region of its neighboring CPU via MPI send and receive (Fig. 30, magenta).

Result

Image Quality and Running time Evaluation

Finally, the valid ghost data is copied from the host to the device asynchronously to complete a round of ghost communication (Fig. 30, purple). By doing this, we can effectively hide the communication delay by letting the ghost transfers take place on different GPU streams while the main stream continues the computation on the inner region.

Multi-GPU Performance Evaluation

While both methods generate curves similar to those of the full reconstruction, our result is more accurate (i.e., closer to the full-reconstruction curve), which is due to the different numerical characteristics of the algorithms. The results also confirm that our method converges much faster than Bilen's, up to 6.9× in the GPU implementation.

Summary

In this section, we introduce a novel application of convolutional sparse coding to reconstruct undersampled DCE-MRI. To the best of our knowledge, this is the first CS-DCE-MRI reconstruction method based on convolutional sparse coding and GPU acceleration.

Method

Subproblem for convolutional sparse coding

Since we will use the Fourier transform to solve this problem, G should be zero-padded so that its size matches Gf and Xf. The right-hand side of the frequency-domain system is Xf^H Sf + σ(Gf − Hf) (Equation 29); solving this system gives Df, and G can then be recovered by taking the inverse Fourier transform of Df.

Subproblem for Total Variation along t

Result

Convergence evaluation: As shown in Figure 65, the obtained peak signal-to-noise ratios (PSNRs) of the proposed method are significantly higher than those of Caballero et al. Our learning method generates atoms that capture image features in much finer detail than the patch-based learning method (Fig. 38a).

Figure 34: Convergence rate evaluation based on PSNRs.

Summary

Conventional CS-MRI reconstruction methods have exploited the sparsity of the signal by applying universal sparsifying transforms such as the Fourier family (e.g., the discrete Fourier transform (DFT) or the discrete cosine transform (DCT)), Total Variation (TV), and wavelets (e.g., Haar, Daubechies). To the best of our knowledge, this is the first CS-MRI reconstruction method based on GPU-accelerated 3D CSC.

Figure 36: Left: the patch-based dictionary learning method [17]. Right: the proposed method.

Method

Since we will use the Fourier transform to solve this problem, g should be zero-padded so that its size matches gf and xf. Note that Equations 48, 54, and 59 can be solved efficiently via the Sherman-Morrison formula applied to independent linear systems, as shown in [131].
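For reference, the Sherman-Morrison identity used for such rank-one-modified systems reads as follows, in generic notation independent of the thesis's symbols:

```latex
\left(A + u v^{H}\right)^{-1} b
  \;=\; A^{-1} b \;-\; \frac{v^{H} A^{-1} b}{\,1 + v^{H} A^{-1} u\,}\, A^{-1} u
```

When A is diagonal in the Fourier domain, each frequency yields an independent small system, so the whole update reduces to elementwise operations.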

Result

As can be seen, our approach generated fewer errors than the state-of-the-art method of [17] and conventional CS reconstruction using wavelet and TV energies [99]. As shown in this figure, our method requires more iterations (epochs) to converge to the steady state, but its actual running time is much shorter than the others due to GPU acceleration.

Table 12: Reconstruction times of learning-based methods (100 epochs)

Summary

Method

  • Problem Definition and Notations
  • Overview of the Proposed Method
  • Generative Adversarial Loss
  • Cyclic Data Consistency Loss
  • Model Architecture

Instead, it is trained to produce the inverse amplitude of the noise caused by reconstruction from undersampled data. In practice, many iterative methods also use additional steps that take the current result and then attempt to 'correct' its errors.
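In other words, the generator acts as a residual refinement of the undersampled (zero-filled) reconstruction; schematically, with symbols chosen for illustration rather than the thesis's exact notation:

```latex
\hat{x} \;=\; x_u + G(x_u)
```

where x_u is the image reconstructed directly from the undersampled data and G(x_u) estimates the negative of its aliasing/noise component.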

Figure 41: Two learning processes are trained adversarially: generator G learns to achieve better reconstruction while trying to fool discriminator D, which learns to recognize real versus fake MR images.

Result

Results on real-valued MRI data

Running time estimation: Table 13 summarizes the running times of our method and other state-of-the-art learning-based CS-MRI methods. Although approaches based on dictionary learning exploit pre-trained dictionaries, their reconstruction time depends on the numerical methods used.

Figure 46: Radial sampling masks used in our experiments.

Results on complex-valued MRI data

Summary

Data preparation

We take advantage of this fact to enrich the training data for each slice by forming an additional fifteen images and corresponding labels. For 2D enrichment, we generate eight permutations for each slice by rotating the original image by 90◦, 180◦, and 270◦ and then reflecting each case.
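A minimal sketch of this eight-fold enrichment (the four 90° rotations plus their reflections) is given below; the row-major single-channel layout and function names are assumptions, and the same transforms would be applied to the corresponding label slices.

```cuda
#include <vector>

using Image = std::vector<float>;   // n x n, row-major, single channel

static Image rotate90(const Image& src, int n) {            // clockwise rotation
    Image dst(n * n);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
            dst[x * n + (n - 1 - y)] = src[y * n + x];
    return dst;
}

static Image flipH(const Image& src, int n) {                // horizontal mirror
    Image dst(n * n);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
            dst[y * n + (n - 1 - x)] = src[y * n + x];
    return dst;
}

// Produce the eight dihedral variants of a square slice.
std::vector<Image> enrich8(const Image& slice, int n) {
    std::vector<Image> out;
    Image cur = slice;
    for (int r = 0; r < 4; ++r) {                            // 0, 90, 180, 270 degrees
        out.push_back(cur);
        out.push_back(flipH(cur, n));                        // reflected copy
        cur = rotate90(cur, n);
    }
    return out;                                              // 8 variants in total
}
```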

Figure 53: Data enrichment through these eight orientation variations would increase the input training data size by a factor of eight.

Chain of FusionBlocks: W-net architecture

Random noise, boundary expansion, random shuffling, data normalization and cross-validation: During the training phase, we randomly add Gaussian noise to the raw image (mean μ = 0, variance σ = 0.1). Before the training phase, we perform a simple data normalization by scaling the image data to the interval [0, 1].

Result

Experimental Setup

Larval zebrafish ssEM data

We deployed the trained network on the entire set of 16,000 larval zebrafish brain sections imaged at nm³ vx⁻¹ resolution, approximately 1.2 terabytes of data. We also observed that the distribution of features appears to be mirror-symmetric across the midline of the zebrafish larva.

Figure 56: Visual comparison of the larval zebrafish ssEM volume segmentation; (a) input ssEM volume; (b) manual segmentation (ground truth); (c) U-net [105] result; (d) RDN [43] result;

Larval Drosophila ssEM data

We performed a statistical analysis of the morphological properties of the cells, such as volume, surface area and sphericity. By applying LMC post-processing to the W-net²₆₄ output, our method was ranked at the top of the ISBI 2012 EM segmentation challenge table (as of March 2017).

Figure 59: Example results of cellular membrane segmentation on test data from the ISBI 2012 EM segmentation challenge (slice 22/30), illustrating an input EM image (left), the probability prediction from our W-net²₆₄ (middle), and the thinned probability map (right).

Discussion

As shown, our method, like other state-of-the-art methods, can remove non-membrane structures belonging to mitochondria (seen as dark shaded textures) and vesicles (seen as small spheres). In addition, our method can better correct the errors in the MDPM using a concatenated network (i.e., it corrects the errors in the error-corrected results) and adapts better to larger image sizes due to the end-to-end nature of the network.

Summary

Another example is electron-beam charge damage artifacts that are sometimes observed in scanning EM (SEM). We propose a novel semi-supervised learning-based denoising method that learns the sources of image artifacts (i.e., noise) and removes them from unseen images.

Method

Data preparation

Selection of noise samples between types and strains: Images were obtained from larval zebrafish brain tissue 5–7 days post-fertilization using TEM and SEM. Note that these patches are randomly extracted according to the criteria given in the collected-image section.

Overview of the proposed method

There are two different training paths: the upper path encourages the reconstruction of the input clean image and noise pattern after passing through G1 and G2. The discriminators D, comprising D_C, D_N, and D_I for recognizing clean images C, noise patterns N, and noisy images I, respectively, try to distinguish between real instances from the database and the fake results generated by G.

Figure 62: Examples of inter-type and charge-noise images. (a) a clean TEM image, (b) a charge-noise image, and (c) a noisy SEM image.

Loss definition

In practice, various distance metrics, such as the mean squared error (MSE) or the mean absolute error (MAE), can be used to implement Lcyc. The entire framework was implemented using a system-oriented programming wrapper (tensorpack) of the TensorFlow library.
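A generic cycle-consistency term of this kind, written with the two translation paths G1 and G2 and an interchangeable distance d (squared error for MSE, absolute error for MAE), is shown below; the exact weighting and pairing of terms in the proposed framework may differ.

```latex
\mathcal{L}_{cyc}(G_1, G_2)
  \;=\; \mathbb{E}_{x}\!\left[\, d\big(G_2(G_1(x)),\, x\big) \right]
  \;+\; \mathbb{E}_{y}\!\left[\, d\big(G_1(G_2(y)),\, y\big) \right]
```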

Training and Testing specification

This means that the noise pattern can be combined with the clean image to obtain the noisy image and then separated again, keeping the composition and decomposition consistent with each other. The Adam optimizer [76] is used with an initial learning rate of 1e−4, which decreases monotonically over 500 epochs.

Result

Experiment setup

For examples 3 and 5, we do not have real clean images, so we made visual comparisons among the denoised results.

Quantitative and qualitative evaluations

One can also observe subtle differences between our results and CycleGAN's: ours tend to reconstruct a clearer background with sharper edges, which may be due to the separate discriminators for clean and noisy images in the proposed method. Overall, as shown in Figure 70, the proposed method generates fewer errors with respect to the ground truth than the others.

Figure 67: Comparison on TEM ZB dataset in case 1.

Summary

Summary of Dissertation Research

Limitations and Cautions of the current work

Future work


Figure 73: Quadtree decomposition for an agent to solve a landmark detection problem. Several existing solutions can perform this task well, but they lack the ability to correct the segmentation at different scales.
