One of the main limitations on applying machine learning algorithms in cybersecurity is the lack of training and validation data. This situation is aggravated by other factors, such as the high cost of errors and the lack of functional guarantees when processing unobserved data. Neural networks do not escape these restrictions and, in addition, introduce considerable uncertainty due to their ingrained black-box nature.
Chapter 3 presents an anomaly detection solution based on LSTM models, which demonstrates the potential of such algorithms to detect attacks unnoticed by current enterprise systems. Although the results show the functionality and applicability of the framework, it was not exhaustively tested because of the lack of testing data and ground truth, including a more diverse set of attacks. Therefore, the first step in our future work is the collection of more data logs, including attacks from diverse sources, so as to test the soundness of our approach. Our second step is directed at the automatic definition of the vocabulary of events. The work performed with LADOHD required a manual definition of the vocabulary of events to avoid the situation in which events observed during training do not appear in the testing set and vice versa. Although our goal is not to completely remove human intervention from the process, finding general rules or heuristics for this definition would speed up implementation and evaluation time significantly.
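A minimal sketch of the kind of heuristic discussed above is a frequency threshold over training events, where rare or previously unseen events are collapsed into a single out-of-vocabulary token. The event names and the threshold below are illustrative assumptions, not the rules used in LADOHD:

```python
from collections import Counter

def build_vocabulary(train_events, min_count=5):
    """Keep only events seen at least `min_count` times in training;
    everything else (including events unseen at test time) maps to <UNK>."""
    counts = Counter(train_events)
    vocab = {"<UNK>": 0}
    for event, count in counts.items():
        if count >= min_count:
            vocab[event] = len(vocab)
    return vocab

def encode(events, vocab):
    # Rare or unseen events fall back to the <UNK> index,
    # so training and testing always share the same vocabulary.
    return [vocab.get(e, vocab["<UNK>"]) for e in events]

# Hypothetical event log used only for illustration.
train = ["open", "read", "read", "write", "open", "read",
         "write", "open", "read", "write", "open", "write", "read"]
vocab = build_vocabulary(train, min_count=3)
print(encode(["open", "read", "mmap"], vocab))  # "mmap" never seen in training
```

Such a rule would replace the manual vocabulary curation step: the only remaining human decision is the choice of `min_count`, which trades coverage of rare events against robustness to vocabulary mismatch between training and testing.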