This section evaluates the proposed method, which remains robust even in contaminated scenarios, using real-world sensor data collected from a manufacturing company, LG Electronics, in cooperation with our research team.
Data description
This real-world study was carried out at the refrigerator quality inspection station in the LG Smart Park. When a refrigerator enters the quality inspection station, the operator turns on its inspection mode. The refrigerator then transmits sensor data related to various cooling-performance variables to the server via Wi-Fi while quality inspection is performed along the inspection lane. For each refrigerator, sensor data are collected during the entire time it spends in the quality inspection lane.
To compare the proposed model with baseline anomaly detectors for robustness in contaminated scenarios, we use real sensor data from an advanced manufacturing process. The manufacturing equipment has seven sensors, denoted as Y, W1, W2, V1, ..., V4, that record the values of different process variables such as electricity, power, and temperature. Y, W, and V represent the values measured by the product sensors, operation sensors, and environment sensors, respectively.
Figure 12: Actual sensor profiles collected from 673 refrigerators of a specific model. Panels: (a) Y(t), (b) W1(t), (c) V1(t).
Let Y denote the values recorded by the sensors related to the key quality characteristic of the refrigerator compressor. Figure 12 shows Y over 673 samples (refrigerators of a specific model) collected at the inspection station over about one year (May 2021 to April 2022). Due to data confidentiality, the time interval is normalized to [0, 1] in this study. The gray solid lines in the figure represent normal samples, while the red dashed line represents an abnormal sample from a refrigerator later identified as defective. Note that the samples are entirely uncurated and may already contain anomalies. The purpose of this study is to identify these defective refrigerators based on the contaminated sensor data.
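The normalization of each profile's time axis to [0, 1] can be sketched as follows. This is a minimal illustration under our own assumptions; the function name and the example timestamps are ours, not from the data pipeline.

```python
import numpy as np

def normalize_time(timestamps):
    """Rescale the raw timestamps of one inspection run to the unit interval [0, 1]."""
    t = np.asarray(timestamps, dtype=float)
    t0, t1 = t.min(), t.max()
    return (t - t0) / (t1 - t0)  # first sample -> 0.0, last sample -> 1.0

# Example: irregular timestamps from one refrigerator's run through the lane
t_norm = normalize_time([100.0, 130.0, 190.0, 220.0])
```

This hides the absolute inspection duration, which is the confidentiality point made above.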
Figure 13: Visualization of how the anomaly sample (red) from the real world and the different views created by each method (gray, one shade per transformation type) cluster in the semantic space of the encoder: (a) after training the Labeled anomaly-Informed Neural Transformations (NTL(with LOE+)); (b) after training NTL(with LOE).
Figure 14: Anomaly score histogram after training each method: (a) the Labeled anomaly-Informed Neural Transformations (NTL(with LOE+)); (b) NTL(with LOE).
Results
We present the experimental results on the real-world data in Tables 2 and 3, which compare the performance of NTL(with LOE) and the proposed method (NTL(with LOE+)) in the "Multiple Univariate" and "Multivariate" settings. The results show that the proposed method outperforms NTL(with LOE): in the "Multiple Univariate" and "Multivariate" settings, the proposed method detects 7 and 8 out of 12 product defects, respectively, whereas NTL(with LOE) detects 0 and 2 out of 12.
Figure 13 shows the semantic (embedding) space of the encoder Z after training. For visualization purposes, we train NTL(with LOE) and the proposed method on the real-world data. The original samples are transformed by each of the learned transformations Tk(x) = Mk(x) ⊙ x to generate K = 6 different views of each sample. Projecting onto the first three principal components obtained with PCA yields a 3D visualization. In Figure 13 (a), the abnormal sample is well separated from the normal samples in the semantic space because training used the labeled anomaly. In Figure 13 (b), by contrast, the abnormal sample overlaps with the normal samples because training did not use the labeled anomaly. In addition, Figure 14 shows the anomaly score histogram after each method's training is completed; the red dotted line marks the threshold, and the black dotted circle marks a true anomaly.
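The view-generation and PCA-projection steps above can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: the learned masks Mk are replaced by random surrogates, and the encoder Z is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 200 samples of dimension 32 and K = 6 element-wise masks
# (random surrogates for the learned mask networks Mk of the method).
X = rng.normal(size=(200, 32))
K = 6
masks = rng.uniform(0.5, 1.5, size=(K, 32))

# Tk(x) = Mk(x) ⊙ x: each mask produces one view of every sample.
views = np.stack([m * X for m in masks])      # shape (K, 200, 32)

# PCA via SVD: project all views onto the first three principal components.
flat = views.reshape(-1, 32)
centered = flat - flat.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:3].T                  # shape (K * 200, 3), ready for 3D plotting
```

In the actual method, the PCA would be applied to the encoder embeddings of the views rather than to the raw views themselves.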
Figure 14 (a) shows that the anomaly score of the true anomaly is clearly distinguishable from the normal scores when the labeled anomaly is used in training, whereas Figure 14 (b) shows that it is indistinguishable from the normal scores when the labeled anomaly is not used.
Based on these results, we find that using labeled anomalies is helpful, even when their number is extremely small.
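The thresholding in Figure 14 and the pc/pf entries in Tables 2 and 3 can be illustrated with a small sketch. Everything here is an assumption on our part: the (1 − α)-quantile rule for the threshold, the synthetic scores, and the reading of pc as the detection rate among true anomalies and pf as the false-alarm rate among normal samples.

```python
import numpy as np

def flag_anomalies(scores, alpha=0.01):
    """Flag samples whose anomaly score exceeds the (1 - alpha)-quantile
    threshold. This quantile rule is one plausible reading of the red dotted
    threshold line in Figure 14, not necessarily the thesis's exact rule."""
    threshold = np.quantile(scores, 1.0 - alpha)
    return scores > threshold, threshold

# Synthetic scores: 990 normal samples and 10 well-separated true anomalies.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.0, 1.0, 990),
                         rng.normal(8.0, 0.5, 10)])
labels = np.concatenate([np.zeros(990, dtype=bool), np.ones(10, dtype=bool)])

flags, thr = flag_anomalies(scores, alpha=0.01)
pc = flags[labels].mean()    # detection rate among true anomalies
pf = flags[~labels].mean()   # false-alarm rate among normal samples
```

With α = 0.01 and 1000 samples, the rule flags exactly the top 1% of scores; a well-trained detector should concentrate the true anomalies there, as in the synthetic case above.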
Table 2: Performance comparisons in the real-world example when α = 0.01.

Multiple Univariate:
Method            Product 1     Product 2     Product 3     Product 4      Product 5     Product 6
                  pc    pf      pc    pf      pc    pf      pc     pf      pc    pf      pc    pf
NTL(with LOE)     0.0   0.003   0.5   0.002   0.0   0.019   0.667  0.016   0.0   0.021   0.0   0.014
NTL(with LOE+H)   1.0   0.0     0.5   0.002   1.0   0.009   0.667  0.016   1.0   0.0     1.0   0.013
NTL(with LOE+S)   1.0   0.0     0.5   0.002   1.0   0.009   0.667  0.016   1.0   0.0     1.0   0.013

Multivariate:
Method            Product 1     Product 2     Product 3     Product 4      Product 5     Product 6
                  pc    pf      pc    pf      pc    pf      pc     pf      pc    pf      pc    pf
NTL(with LOE)     0.0   0.003   0.0   0.003   0.0   0.019   1.0    0.015   0.0   0.021   0.0   0.014
NTL(with LOE+H)   1.0   0.0     1.0   0.001   1.0   0.009   1.0    0.015   1.0   0.0     1.0   0.013
NTL(with LOE+S)   1.0   0.0     1.0   0.001   1.0   0.009   1.0    0.015   1.0   0.0     1.0   0.013
Table 3: Performance comparisons in the real-world example when α = 0.01.

Multiple Univariate:
Method            Product 7     Product 8     Product 9     Product 10     Product 11    Product 12
                  pc    pf      pc    pf      pc     pf     pc     pf      pc    pf      pc    pf
NTL(with LOE)     0.0   0.006   0.0   0.038   0.0    0.017  0.0    0.015   0.0   0.008   0.0   0.006
NTL(with LOE+H)   0.0   0.006   1.0   0.009   0.333  0.014  0.0    0.015   0.0   0.008   1.0   0.005
NTL(with LOE+S)   0.0   0.006   1.0   0.009   0.666  0.01   0.0    0.015   0.0   0.008   1.0   0.005

Multivariate:
Method            Product 7     Product 8     Product 9     Product 10     Product 11    Product 12
                  pc    pf      pc    pf      pc     pf     pc     pf      pc    pf      pc    pf
NTL(with LOE)     0.0   0.006   0.0   0.038   0.0    0.017  0.0    0.015   0.0   0.008   1.0   0.005
NTL(with LOE+H)   0.0   0.006   1.0   0.009   0.333  0.014  0.0    0.015   0.0   0.008   1.0   0.005
NTL(with LOE+S)   0.0   0.006   1.0   0.009   0.666  0.01   0.0    0.015   0.0   0.008   1.0   0.005
VI Conclusion
This study addressed a real-world problem at the refrigerator quality inspection station through a university-industry collaboration project with the LG H&A division. Although many deep-learning methods for anomaly identification have been developed, their application to modern manufacturing quality inspection processes has been limited. This is because not only is the proportion of anomalies very small, but the differences between anomalous and normal samples are so subtle that they are difficult to identify. A further reason is that real sensor data from an advanced manufacturing process may already be contaminated. This thesis therefore focuses, among other things, on anomaly detection in contaminated scenarios.
To address these challenges, we proposed a semi-supervised contrastive learning method that remains robust even in contaminated scenarios at the refrigerator quality inspection station. With the proposed method, labeled anomaly information can be exploited in contaminated scenarios. The method was validated using simulation data and real quality inspection data collected from refrigerator sensors provided by the LG H&A division.
Experimental results indicate that the proposed method outperforms NTL(with LOE) in detection performance because it reflects the labeled anomaly information. Although this study is limited to the quality inspection station of refrigerators, the proposed method can be applied to the quality inspection stations of other products produced by the LG H&A division.
Acknowledgements
First of all, I would like to thank my advisor, Professor Sung-il Kim, for his excellent guidance over the past two years.
I would also like to thank Professor Sang-jin Kweon and Professor Ji-soo Kim, my defense committee members, for taking the time to evaluate this thesis. There were many hard times during my two years at DA-LAB, but there were even more good ones. In particular, thank you so much to Jong-hwan Moon for leading the LG project. And thank you so much to Kwon-in Yoon, who set me straight when I was lost. If it weren't for Kwon-in Yoon, I wouldn't have graduated! Once again, I want to say thank you. The other members of DA-LAB were also very kind and helped me a great deal when I was struggling. Thank you. Most of all, I am grateful for the support of my family. Without their emotional support, graduate school life would have been difficult to endure. Thank you so much for these two years; I dedicate this honor to my family. Thank you!