

3.3 Discussion

(higher noise), our proposed SDA-PURE method significantly outperformed the conventional BM3D+VST method, while SDA-MSE-GT, which was trained with ground truth data, yielded the best results. Figs. 3.4 and 3.5 illustrate the visual quality of the outputs of the denoising methods for ζ = 0.1 and 0.2, respectively. We observed that the BM3D+VST images were considerably blurrier than the results of our SDA-based methods, especially for ζ = 0.2.

It was found that ε in the range of [10^-6, 10^-2] worked well for the PURE approximation, which is similar to the SURE case. At high noise levels ζ > 0.1, the approximation became highly inaccurate, and for ζ > 0.2 the model could not converge, so training was not feasible.

These findings on the accuracy of the PURE approximation are in agreement with [47].
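To make the role of ε concrete, the following is a minimal sketch (in Python/NumPy, since our implementation relies on the Python-based TensorFlow framework [46]) of how the data-weighted divergence term of a PURE-type loss can be approximated with a random perturbation of step size ε, in the spirit of Monte-Carlo SURE [40]. The noise model y = ζ·Poisson(x/ζ), the denoiser handle f, and all variable names are illustrative assumptions rather than the exact implementation used in this work.

    import numpy as np

    def mc_pure_loss(f, y, zeta, eps=1e-4):
        """Monte-Carlo approximation of a first-order PURE-type loss (sketch).

        f    : denoiser mapping a noisy image to a denoised image (assumed callable)
        y    : noisy observation, assumed to follow y = zeta * Poisson(x / zeta)
        zeta : Poisson scaling factor (larger zeta means stronger noise in this model)
        eps  : perturbation step; values of roughly 1e-6 to 1e-2 worked well above
        """
        n = y.size
        b = np.random.randn(*y.shape)      # i.i.d. N(0, 1) probe vector
        f_y = f(y)

        # Finite-difference estimate of sum_i y_i * (d f_i / d y_i):
        # E_b[(b * y)^T (f(y + eps*b) - f(y))] / eps approximates this sum.
        div_term = np.sum((b * y) * (f(y + eps * b) - f_y)) / eps

        # Data-fit term plus the divergence correction; the last term does not
        # depend on f but keeps the value an (approximately) unbiased MSE estimate.
        return (np.sum((y - f_y) ** 2) + 2.0 * zeta * div_term - zeta * np.sum(y)) / n

For large ε the finite difference becomes inaccurate, and for large ζ the first-order PURE approximation itself degrades, which is consistent with the training instability observed above for ζ > 0.2.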

Table 3.3: Results of Poisson noise denoisers for MNIST (performance in dB). Averages of 10 experiments are reported.

Noise level   BM3D+VST   SDA-PURE   SDA-MSE-GT
ζ = 0.01        30.57      30.55       30.55
ζ = 0.1         22.34      25.80       25.99
ζ = 0.2         19.56      23.51       24.27

proposed [54], so that other common noise types in imaging systems can potentially be handled by SURE-based methods. It should be noted that SURE does not require any prior knowledge of the images; thus, it can be applied in the measurement domain for different applications, such as medical imaging. Owing to noise correlation (colored noise) in the image domain (e.g., based on the Radon transform in the case of CT or PET), further investigation will be necessary to apply our proposed method directly in the image domain.

CHAPTER IV

Conclusion

We proposed a SURE-based training method for general deep learning denoisers in an unsupervised manner. Our method trained DNN denoisers with SURE, without noiseless ground truth data, such that they yielded denoising performance comparable to that of the same denoisers trained with noiseless ground truth data and outperformed the conventional state-of-the-art BM3D. Our proposed SURE-based refining method with a noisy test image further improved performance and outperformed conventional BM3D, deep image prior, and often the networks trained with ground truth, for both grayscale and color images. A potential extension of our SURE-based methods to the Poisson noise model was also demonstrated.
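As a concrete illustration of the SURE objective summarized above, a minimal Monte-Carlo SURE sketch for additive Gaussian noise with known standard deviation σ is given below, following [35, 40]; the denoiser handle f and the variable names are assumptions for illustration rather than the exact training code used in this work.

    import numpy as np

    def mc_sure_loss(f, y, sigma, eps=1e-4):
        """Monte-Carlo SURE for additive Gaussian noise of known std sigma (sketch).

        Estimates (1/N)||f(y) - x||^2 without the clean image x via
        SURE = (1/N)||y - f(y)||^2 - sigma^2 + (2 sigma^2 / N) * div f(y),
        where the divergence is approximated with a random perturbation [40].
        """
        n = y.size
        b = np.random.randn(*y.shape)                   # N(0, I) probe vector
        f_y = f(y)
        div = np.sum(b * (f(y + eps * b) - f_y)) / eps  # Monte-Carlo divergence estimate
        return np.sum((y - f_y) ** 2) / n - sigma ** 2 + 2.0 * sigma ** 2 * div / n

In this setting only the noisy images and the noise level σ are needed, so the same expression can serve both for training a denoiser and for refining it on a single noisy test image.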

Since SURE can accommodate a variety of distributions other than Gaussian and Poisson noise, it is possible to extend our work to further research areas. In addition, our work can be applied to a wide variety of network architectures, so the performance of our method can improve with the emergence of more powerful denoisers.

References

[1] A Krizhevsky, I Sutskever, and G E Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS) 25, 2012, pp. 1097–1105. 1

[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep Residual Learning for Image Recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778. 1

[3] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” in AAAI Conference on Artificial Intelligence (AAAI), 2017, pp. 4278–4284. 1

[4] R Girshick, J Donahue, and T Darrell, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580–587. 1

[5] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems (NIPS) 28, 2015, pp. 91–99. 1

[6] Joseph Redmon and Ali Farhadi, “YOLO9000: Better, Faster, Stronger,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1

[7] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille, “Semantic image segmentation with deep convolutional nets and fully connected CRFs,” in International Conference on Learning Representations (ICLR), 2015. 1

[8] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440. 1

[9] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam, “Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation,” in European Conference on Computer Vision (ECCV). 2018, pp. 833–851, Springer International Publishing. 1

[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems (NIPS), 2014, pp. 2672–2680. 1

[11] Diederik P Kingma and Max Welling, “Auto-Encoding Variational Bayes,” in International Conference on Learning Representations (ICLR), 2014. 1

[12] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros, “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” in IEEE International Conference on Computer Vision (ICCV). 2017, pp. 2242–2251, IEEE. 1

[13] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol, “Stacked denoising autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion,” Journal of Machine Learning Research, vol. 11, pp. 3371–3408, Dec. 2010. 1,9,10

[14] Harold C Burger, Christian J Schuler, and Stefan Harmeling, “Image denoising: Can plain neural networks compete with BM3D?,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2392–2399. 1,9

[15] Yi-Qing Wang and Jean-Michel Morel, “Can a Single Image Denoising Neural Network Handle All Levels of Gaussian Noise?,” IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1150–1153, May 2014. 1,9

[16] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, May 2017. 1,9,12,13,14,19,20

[17] Stamatios Lefkimmiatis, “Non-local Color Image Denoising with Convolutional Neural Networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5882–5891. 1,9

[18] Junyuan Xie, Linli Xu, and Enhong Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems (NIPS) 25, 2012, pp. 341–349. 1

[19] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa, “Globally and locally consistent image completion,” ACM Transactions on Graphics, vol. 36, no. 4, pp. 1–14, July 2017. 1

[20] Raymond A Yeh, Chen Chen, Teck Yian Lim, Alexander G Schwing, Mark Hasegawa-Johnson, and Minh N Do, “Semantic image inpainting with deep generative models,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017, pp. 6882–6890, IEEE. 1

[21] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016. 1

[22] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, “Enhanced Deep Residual Networks for Single Image Super-Resolution,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2017, pp. 1132–1140, IEEE. 1

[23] Dongwon Park, Kwanyoung Kim, and Se Young Chun, “Efficient module based single image super resolution for multiple problems,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2018, pp. 995–1003, IEEE. 1

[24] Xiao-Jiao Mao, Chunhua Shen, and Yu-Bin Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in Neural Information Processing Systems (NIPS) 29, 2016, pp. 2810–2818. 1

[25] Ruohan Gao and Kristen Grauman, “On-demand learning for deep image restoration,” in IEEE International Conference on Computer Vision (ICCV), 2017. 1

[26] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang, “Learning Deep CNN Denoiser Prior for Image Restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[27] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, Aug. 2007. 1,9,13,20

[28] Minchao Ye, Yuntao Qian, and Jun Zhou, “Multitask Sparse Nonnegative Matrix Factor- ization for Joint Spectral–Spatial Hyperspectral Imagery Denoising,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2621–2639, Dec. 2014. 2

[29] Hu Chen, Yi Zhang, Mannudeep K Kalra, Feng Lin, Yang Chen, Peixi Liao, Jiliu Zhou, and Ge Wang, “Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network,” IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2524–2535, Nov. 2017. 2

[30] Eunhee Kang, Junhong Min, and Jong Chul Ye, “A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction,” Medical Physics, vol. 44, no. 10, pp. e360–e375, Oct. 2017. 2

[31] Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila, “Noise2Noise: Learning Image Restoration without Clean Data,” in International Conference on Machine Learning (ICML), 2018, pp. 2965–2974. 2

[32] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, “Deep Image Prior,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 9446–9454. 2,21,24

[33] David L Donoho and Iain M Johnstone, “Adapting to unknown smoothness via wavelet shrinkage,” Journal of the American Statistical Association, vol. 90, no. 432, pp. 1200–1224, 1995. 4

[34] Xiao-Ping Zhang and Mita D Desai, “Adaptive denoising based on SURE risk,” IEEE Signal Processing Letters, vol. 5, no. 10, pp. 265–267, 1998. 4

[35] C M Stein, “Estimation of the mean of a multivariate normal distribution,” The Annals of Statistics, vol. 9, no. 6, pp. 1135–1151, Nov. 1981. 4,5

[36] A Buades, B Coll, and J M Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530, Jan. 2005. 4

[37] J Salmon, “On Two Parameters for Denoising With Non-Local Means,” IEEE Signal Processing Letters, vol. 17, no. 3, pp. 269–272, Mar. 2010. 4

[38] D Van De Ville and M Kocher, “SURE-Based Non-Local Means,” IEEE Signal Processing Letters, vol. 16, no. 11, pp. 973–976, Nov. 2009. 4

[39] Minh Phuong Nguyen and Se Young Chun, “Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1637–1649, Feb. 2017. 4

[40] S Ramani, T Blu, and M Unser, “Monte-Carlo SURE: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms,” IEEE Transactions on Image Processing, vol. 17, no. 9, pp. 1540–1554, Aug. 2008. 4,5,6,11,15,24

[41] Charles-Alban Deledalle, Samuel Vaiter, Jalal Fadili, and Gabriel Peyré, “Stein unbiased gradient estimator of the risk (SUGAR) for multiple parameter selection,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 2448–2487, 2014. 4,13

[42] T Blu and F Luisier, “The SURE-LET Approach to Image Denoising,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2778–2786, Oct. 2007. 5

[43] Léon Bottou, “Online Learning and Stochastic Approximations,” in On-line learning in neural networks, pp. 9–42. Cambridge University Press New York, NY, USA, 1998. 6

[44] Yurii Nesterov, “A method of solving a convex programming problem with convergence rate O(1/k^2),” in Soviet Mathematics Doklady, 1983. 6

[45] Diederik P Kingma and Jimmy Ba, “Adam: A Method for Stochastic Optimization,” in International Conference on Learning Representations (ICLR), 2015. 6,10

[46] Martín Abadi et al., “TensorFlow: A system for large-scale machine learning,” in Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, 2016, pp. 265–283. 8

[47] Yoann Le Montagner, Elsa D Angelini, and Jean-Christophe Olivo-Marin, “An unbiased risk estimator for image denoising in the presence of mixed Poisson–Gaussian noise,” IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1255–1268, 2014. 15,18,19,24

[48] Ryan J Tibshirani and Saharon Rosset, “Excess optimism: How biased is the apparent error of an estimator tuned by SURE?,” arXiv preprint arXiv:1612.09415, 2016. 16

[49] Florian Luisier, Thierry Blu, and Michael Unser, “Image denoising in mixed Poisson–Gaussian noise,” IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 696–708, 2011.
