2.6 Turbulent Flows Reconstruction
2.6.2 Deep Learning Methods for Super-Resolution Reconstruction of Turbulent Flows
In [120], two deep learning models, the SCNN and the MTPC, are developed for the super-resolution reconstruction of turbulent flows. Both take low-resolution flow information as input; however, the SCNN takes a single instantaneous snapshot, while the MTPC takes a temporal sequence of snapshots. The MTPC therefore has the advantage of drawing extra temporal information from adjacent frames.
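The full SCNN and MTPC architectures are given in [120]; the sketch below is only a minimal PyTorch illustration of the two input conventions, with placeholder layer widths and kernel sizes and an assumed upscaling factor of 4: a static model sees a single (u, v) snapshot, while a temporal model stacks T consecutive snapshots along the channel axis.

import torch
import torch.nn as nn

def make_sr_net(in_channels, out_channels=2, scale=4):
    # A minimal single-path super-resolution CNN; widths, kernels and the
    # upscaling factor are placeholders, not the SCNN/MTPC settings of [120].
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_channels * scale**2, kernel_size=3, padding=1),
        nn.PixelShuffle(scale),  # rearranges channels onto a 4x finer grid
    )

# SCNN-style input: one instantaneous low-resolution (u, v) snapshot.
scnn_like = make_sr_net(in_channels=2)
single_frame = torch.randn(1, 2, 32, 32)
print(scnn_like(single_frame).shape)       # torch.Size([1, 2, 128, 128])

# MTPC-style input: T consecutive snapshots stacked along the channel axis,
# so the network can draw extra temporal information from adjacent frames.
T = 3
mtpc_like = make_sr_net(in_channels=2 * T)
frame_sequence = torch.randn(1, 2 * T, 32, 32)
print(mtpc_like(frame_sequence).shape)     # torch.Size([1, 2, 128, 128])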
To see whether the deep learning methods are able to reproduce turbulence, two canonical turbulent flow problems were tested. For isotropic turbulence, the energy spectra and the PDF of the normalized velocity gradients reproduced by the MTPC are close to those of the DNS results. For the turbulent channel flow, the deep-learning-based approaches greatly enhance the spatial resolution in the different wall regions and layers. The correlation coefficients between the reconstructed data and the reference data are high in the outer layer and the log-law region, where turbulence dominates. However, the coefficients are lower in the viscosity-dominated region, where the most vigorous turbulent activity occurs. All assessments in both cases show that the MTPC greatly improves the quality of the low-resolution input and outperforms both the SCNN and bicubic interpolation, especially at small scales. The extra temporal information from consecutive snapshots helps the MTPC generate more physically reasonable results. On the other hand, because more input snapshots must be handled, the MTPC spends more time on prediction than static models.
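These diagnostics are standard; as a rough numpy sketch (assuming a square, periodic 2D domain with unit spacing, and synthetic fields standing in for the reconstructed and DNS data), the radially averaged kinetic energy spectrum and the correlation coefficient could be computed as follows.

import numpy as np

def energy_spectrum(u, v):
    # Radially averaged kinetic energy spectrum E(k) of a 2D velocity field,
    # assuming a square, periodic domain sampled on an n x n grid.
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)      # spectral energy density
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    kmag = np.sqrt(k[:, None]**2 + k[None, :]**2)
    bins = np.arange(0.5, n // 2 + 1)              # shells around k = 1, 2, ...
    E_k, _ = np.histogram(kmag, bins=bins, weights=e)
    return 0.5 * (bins[:-1] + bins[1:]), E_k

def correlation(rec, dns):
    # Pearson correlation coefficient between reconstruction and reference.
    return np.corrcoef(rec.ravel(), dns.ravel())[0, 1]

# Synthetic fields stand in for reconstructed and DNS data.
rng = np.random.default_rng(0)
u_dns, v_dns = rng.standard_normal((2, 128, 128))
k, E_k = energy_spectrum(u_dns, v_dns)
print(correlation(u_dns + 0.1 * rng.standard_normal((128, 128)), u_dns))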
All the experiments were performed with 2D snapshots, but the networks used in this study can be easily extended to 3D cases by using 3D convolution kernels.
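Concretely, such an extension would largely amount to replacing the 2D convolutions with their 3D counterparts, e.g. (placeholder channel counts):

import torch.nn as nn

# 2D case: input tensors of shape (batch, channels, h, w).
conv2d = nn.Conv2d(in_channels=2, out_channels=64, kernel_size=3, padding=1)

# 3D case: volumetric snapshots of shape (batch, channels, d, h, w);
# the three velocity components become three input channels.
conv3d = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=3, padding=1)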
Although the deep learning methods achieve great progress, some challenges remain. First, the DL models cannot exactly reproduce kinetic energy that is several orders of magnitude smaller than the total energy, i.e., the very small spatial structures. Second, the DL methods show different performance in different directions in the anisotropic turbulence case, even though a special normalization is used to bring the data in the three directions to the same order of magnitude. These problems may be alleviated by introducing novel machine learning methods such as unsupervised learning, or by including prior physical knowledge in the training process; this may be part of future work.
The success of the DL models in reconstructing subgrid flow variables may inspire the development of subgrid models in CFD. The SR technology can also serve as a post-processing tool to denoise, correct, or enrich data from experiments and numerical simulations. The authors believe that super-resolution technology will have broad practical applications in fluid dynamics as the volume of accessible data increases.
2.6.3 Learning a Deep Convolutional Network for Image Super-Resolution
The sparse-coding-based method [212, 213] is one of the representative methods for external example-based image super-resolution. This method involves several steps in its pipeline. First, overlapping patches are densely extracted from the image and pre-processed (e.g., by subtracting the mean). These patches are then encoded by a low-resolution dictionary. The sparse coefficients are passed to a high-resolution dictionary to reconstruct high-resolution patches. The overlapping reconstructed patches are aggregated (e.g., averaged) to produce the output. Previous SR methods pay particular attention to learning and optimizing the dictionaries [212, 213] or to alternative ways of modeling them [24, 37]. However, the remaining steps of the pipeline have rarely been optimized or considered in a unified optimization framework.
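As a concrete, heavily simplified picture of this pipeline (not the exact algorithm of [212, 213]), the sketch below assumes pre-trained low- and high-resolution dictionaries (random placeholders here) and uses scikit-learn's SparseCoder for the encoding step; patch extraction and the averaging of overlapping reconstructions are written out explicitly.

import numpy as np
from sklearn.decomposition import SparseCoder

def sparse_coding_sr(lr_img, D_lr, D_hr, patch=5, k=3):
    # Simplified sparse-coding SR: encode LR patches over D_lr, reconstruct
    # HR patches with D_hr, then average the overlapping reconstructions.
    # (Same-size output for brevity; real methods work on an upscaled grid.)
    h, w = lr_img.shape
    coder = SparseCoder(dictionary=D_lr, transform_algorithm="omp",
                        transform_n_nonzero_coefs=k)
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            # 1) densely extract an overlapping patch and subtract its mean
            p = lr_img[i:i + patch, j:j + patch].astype(float)
            mean = p.mean()
            # 2) encode the patch over the low-resolution dictionary
            code = coder.transform((p - mean).reshape(1, -1))
            # 3) pass the sparse coefficients through the HR dictionary
            hr_patch = (code @ D_hr).reshape(patch, patch) + mean
            # 4) aggregate (average) the overlapping reconstructions
            out[i:i + patch, j:j + patch] += hr_patch
            weight[i:i + patch, j:j + patch] += 1.0
    return out / weight

# Random stand-ins for dictionaries that would normally be learned [212, 213];
# each row is one atom of length patch*patch = 25.
rng = np.random.default_rng(0)
D_lr = rng.standard_normal((128, 25))
D_lr /= np.linalg.norm(D_lr, axis=1, keepdims=True)   # OMP expects unit-norm atoms
D_hr = rng.standard_normal((128, 25))
sr = sparse_coding_sr(rng.random((32, 32)), D_lr, D_hr)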
In this paper, the authors show that the aforementioned pipeline is equivalent to a deep convolutional neural network [112]. Motivated by this fact, they directly consider a convolutional neural network that learns an end-to-end mapping between low- and high-resolution images. The proposed method differs fundamentally from existing external example-based approaches in that it does not explicitly learn the dictionaries [212, 213] or manifolds [24, 37] for modeling the patch space; these are achieved implicitly by the hidden layers. Furthermore, the patch extraction and aggregation are also formulated as convolutional layers and are thus involved in the optimization. In this method, the entire SR pipeline is obtained fully through learning, with little pre/post-processing.
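The network itself is compact; a PyTorch sketch of the three-layer configuration reported for SRCNN (9-1-5 kernels with 64 and 32 filters, operating on a bicubically upscaled input) might look as follows, with the layers playing the roles of patch extraction/representation, non-linear mapping, and reconstruction. The zero padding that keeps the spatial size fixed is an assumption for convenience.

import torch
import torch.nn as nn

class SRCNN(nn.Module):
    # Three-layer SRCNN (9-1-5 kernels, 64/32 filters): maps a bicubically
    # upscaled low-resolution image to a high-resolution image of the same size.
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Conv2d(channels, 64, kernel_size=9, padding=4)        # patch extraction / representation
        self.mapping = nn.Conv2d(64, 32, kernel_size=1)                          # non-linear mapping
        self.reconstruction = nn.Conv2d(32, channels, kernel_size=5, padding=2)  # reconstruction (patch aggregation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.features(x))
        x = self.relu(self.mapping(x))
        return self.reconstruction(x)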
The authors name the proposed model the Super-Resolution Convolutional Neural Network (SRCNN). The proposed SRCNN has several appealing properties. First, its structure is intentionally designed with simplicity in mind, yet it provides superior accuracy compared with state-of-the-art example-based methods. Second, with moderate numbers of filters and layers, the method achieves fast speed for practical on-line usage, even on a CPU. It is faster than a series of example-based methods because it is fully feed-forward and does not need to solve any optimization problem at usage time. Third, experiments show that the restoration quality of the network can be further improved when (i) larger datasets are available and/or (ii) a larger model is used. In contrast, larger datasets/models can present challenges for existing example-based methods. Overall, the contributions of this work are mainly threefold: 1) The authors present a convolutional neural network for image super-resolution that directly learns an end-to-end mapping between low- and high-resolution images. 2) They establish a relationship between the deep-learning-based SR method and the traditional sparse-coding-based SR methods; this relationship provides guidance for the design of the network structure. 3) They demonstrate that deep learning is useful in the classical computer vision problem of super-resolution, and can achieve good quality and speed.
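Continuing the sketch above, usage is a single forward pass: the low-resolution image is first upscaled to the target size (e.g., by bicubic interpolation) and then refined by the network, with no per-image optimization. The weights and the scale factor below are hypothetical.

import torch
import torch.nn.functional as F

model = SRCNN(channels=1).eval()      # weights would come from training
lr = torch.rand(1, 1, 64, 64)         # low-resolution luminance image

with torch.no_grad():
    # Upscale to the target size, then one feed-forward refinement pass;
    # no optimization problem is solved at usage time.
    upscaled = F.interpolate(lr, scale_factor=3, mode="bicubic",
                             align_corners=False)
    sr = model(upscaled)              # shape (1, 1, 192, 192)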
In conclusion, the authors have presented a novel deep learning approach for single image super-resolution (SR). They show that conventional sparse-coding-based image super-resolution methods can be reformulated as a deep convolutional neural network. The proposed approach, SRCNN, learns an end-to-end mapping between low- and high-resolution images, with little extra pre/post-processing beyond the optimization. With a lightweight structure, the SRCNN achieves performance superior to the state-of-the-art methods. Further gains may be obtained by exploring more hidden layers/filters in the network and different training strategies. Moreover, the proposed structure, with its advantages of simplicity and robustness, could be applied to other low-level vision problems, such as image deblurring or simultaneous SR + denoising. One could also investigate a network that copes with different upscaling factors.
2.6.4 tempoGAN: A Temporally Coherent, Volumetric GAN for Super-Resolution Fluid Flow