Outdoor flight experiments were conducted to validate the proposed trail navigation with obstacle avoidance algorithm. Figure 40 shows the travel path of the UAV on an actual bike path with obstacles, and Figure 41 shows the autonomous flight together with the camera images captured at that time. The results confirm that the UAV follows the path while avoiding obstacles.

Figure 40: Trajectory of UAV on the experiment

Figure 41: Actual autonomous flight with input camera images

VI Conclusion and Future Research

Among various environments ranging from forests to downtown areas, roads and trails are among those with the fewest unspecified factors such as obstacles. Accordingly, much research has been conducted on UAVs that detect and follow paths. However, while tracking a road, disturbances such as wind can adversely affect the flight, and there is also a risk of collision with various types of obstacles.

In this paper, we proposed three methods for the stable flight of a UAV and introduced an integration of the three algorithms. The first is an algorithm in which the UAV detects the trail and navigates by itself using a convolutional neural network, which can distinguish and follow roads that are difficult to identify with conventional image processing methods. The second is obstacle avoidance based on optical flow predicted by a convolutional neural network. The third quickly returns the UAV to the road when it is pushed off by a disturbance, using data from past time steps. The three algorithms are combined with weighting for the stable autonomous flight of the unmanned vehicle, as sketched below.
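To make the weighted integration concrete, the following is a minimal Python sketch of how the commands of the three modules could be blended. It is an illustration under assumptions rather than the thesis implementation: the function name fuse_commands, the scalar steering-command interface, and the fixed weight values are all hypothetical.

    import numpy as np

    def fuse_commands(trail_cmd, avoid_cmd, recovery_cmd,
                      w_trail=0.5, w_avoid=0.3, w_recovery=0.2):
        """Blend the steering commands of the three modules.

        trail_cmd    -- command from the CNN trail-following network
        avoid_cmd    -- command from optical-flow-based obstacle avoidance
        recovery_cmd -- command from the past-data road-recovery logic

        The weights here are placeholders; in practice they would be
        tuned (or scheduled, e.g. raising w_avoid near obstacles).
        """
        commands = np.array([trail_cmd, avoid_cmd, recovery_cmd])
        weights = np.array([w_trail, w_avoid, w_recovery])
        # A normalized weighted average keeps the fused command in the
        # same range as the individual commands.
        return float(np.dot(weights, commands) / weights.sum())

    # Example: the trail follower steers right while obstacle avoidance
    # pushes left; the fused command is a weighted compromise.
    yaw_rate = fuse_commands(0.4, -0.6, 0.0)

The design choice in this sketch is that each module outputs a command in a common unit (e.g., a yaw rate), so the integration reduces to a convex combination whose weights express how much each behavior is trusted at a given moment.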

Each algorithm was first verified through its accuracy on the test data and through simulations in ROS and Gazebo. The integrated algorithm was then confirmed to be applicable to a real unmanned aircraft through simulations of various situations and experiments with a real UAV.

As future work, we will further develop the algorithms and apply them to forest trails. The speed of the optical flow estimation algorithm is currently low because of the limitations of the onboard computational board; the NVIDIA Jetson Xavier, a board enhanced for machine learning, will be used to improve the performance of obstacle avoidance. In addition, a forest trail is more difficult than a general road, because the trail is harder to distinguish and contains many unstructured obstacles. For stable autonomous navigation on forest trails, we plan to further research and develop the suggested algorithms.


Acknowledgements

I submit this thesis on the research topic that I have been studying during my master's course.

Many people have helped me over the past two years. It was a great fortune to have met them, and I would like to thank them.

I would first like to thank my advisor, Dr. Hyondong Oh, who has always provided help and guidance throughout my studies. I am very grateful for the attention and advice with which he guided me in the right direction. I would also like to thank the members of the Autonomous Systems Laboratory (ASL); thanks to them, I was able to learn a lot and enjoy my time during my studies.

Finally, I would like to thank my family for supporting me throughout my master's course. Thank you very much for your support.
