ҚазҰТЗУ ХАБАРШЫСЫ
ВЕСТНИК КазНИТУ
VESTNIK KazNRTU
No. 4 (128)
ALMATY, AUGUST 2018
MINISTRY OF EDUCATION AND SCIENCE OF THE REPUBLIC OF KAZAKHSTAN
Editor-in-chief: I.K. Beisembetov, Rector
Deputy editor-in-chief: B.K. Kenzhaliyev, Vice-Rector for Science
Executive secretary: N.F. Fedosenko
Editorial board:
S.B. Abdygapparova, B.S. Akhmetov, Z.S. Abisheva (Academician, NAS RK), L.B. Atymtayeva, Zh.Zh. Baigunchekov (Academician, NAS RK), A.B. Baibatsha, A.O. Baikonurova, V.I. Volchikhin (Russia), C. Drebenstedt (Germany), G.Zh. Zholtayev, R.M. Iskakov, S.E. Kudaibergenov, S.E. Kumekov, V.A. Luganov, S.S. Naboychenko (Corresponding Member, RAS), I.G. Milev (Germany), S. Pejovnik (Slovenia), B.R. Rakishev (Academician, NAS RK), M.B. Panfilov (France), N.T. Sailaubekov, A.R. Seitkulov, Fathi Habashi (Canada), Brajendra Mishra (USA), Corby Anderson (USA), V.A. Goltsev (Russia), V.Yu. Korovin (Ukraine), M.G. Mustafin (Russia), Fan Huaan (Sweden), H.P. Zinke (Germany), T.A. Chepushtanova, G.Zh. Yeligbayeva, B.U. Kuspangaliev
Founder:
K.I. Satpayev Kazakh National Research Technical University
Registration:
Ministry of Culture, Information and Public Accord of the Republic of Kazakhstan, certificate No. 951-Zh, November 25, 1999
Founded in August 1994. Published 6 times a year.
Editorial office: Almaty, Satpayev St. 22, office 616, tel. 292-63-46, [email protected]
© K.I. Satpayev KazNRTU, 2018
Техникалық ғылымдар (Technical Sciences)
ҚазҰТЗУ хабаршысы (Bulletin of KazNRTU) No. 4, 2018
UDC 004.93

N.K. Beisov
(Faculty of Information Technology, Al-Farabi Kazakh National University, Almaty, Kazakhstan, e-mail: [email protected])
A REVIEW OF EXISTING GESTURE RECOGNITION TOOLS IN HUMAN-COMPUTER INTERACTION SYSTEMS
Abstract: In recent years society has focused on the development of information technology; tomorrow's computing environments will go beyond the keyboard and mouse and will require automatic capture and interpretation of human movements using a variety of sensors, including video cameras. Gesture recognition has long been used effectively as an interface between people and computers. This article surveys gesture recognition research, describes a number of important technologies and algorithms for gesture recognition, and proposes several observation-based methods for recognizing gestures in human-computer interaction. One method of human-machine interaction is based on the recognition of a person's gestures. Typical gesture recognition systems are unable to handle systematic changes in the input signal and are therefore too fragile for many real-world applications. To address this problem, we recommend recognizing gestures and adapting the gesture model to the situation.
Key words: human-computer interaction, gesture recognition, situation.
Introduction
In recent years society has focused on the development of information technologies; tomorrow's computing environments will go beyond the keyboard and mouse and will require automatic capture and interpretation of human movements using a variety of sensors, including video cameras. These environments will call for new types of human-computer interfaces. Such interfaces are natural and can be used to interact with a computer without any special external equipment. Today the keyboard, mouse, and remote control serve as the main interfaces for transmitting information and commands to computerized equipment. Computer recognition of gestures can provide a natural interface that, for example, allows people to position or rotate a computer model simply by turning their hands. Gestures can be classified into two categories: static (a fixed pose) and dynamic (a movement over time). The use of gestures as a natural interface motivates research on gestures, their representations, and methods of recognition. Surveys conducted in human-computer interaction (HCI) cover a variety of applications that use gestures for effective interaction, for example applications with three-dimensional information such as visualization, computer games, and robot control. In daily life we increasingly use computerized equipment that communicates with people by understanding such visual inputs.
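The static/dynamic distinction above can be made concrete with a small sketch. The following is a hypothetical illustration, not part of the reviewed systems: a tracked hand centroid is classified as a static pose or a dynamic gesture by its total displacement across frames; the threshold value is an assumption chosen for the example.

```python
# Hypothetical sketch: classifying a tracked gesture as static or dynamic
# by the total displacement of the hand centroid across video frames.
# The threshold value is an illustrative assumption, not from the article.

def classify_gesture(centroids, threshold=10.0):
    """centroids: list of (x, y) hand positions, one per video frame.
    Returns 'static' if the hand barely moves, 'dynamic' otherwise."""
    if len(centroids) < 2:
        return "static"
    total = 0.0
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return "dynamic" if total > threshold else "static"

# A hand held still (a pose) vs. a waving motion:
pose = [(100, 100), (101, 100), (100, 101), (100, 100)]
wave = [(100, 100), (120, 100), (100, 100), (120, 100)]
print(classify_gesture(pose))  # static
print(classify_gesture(wave))  # dynamic
```

Real systems would of course use richer features than total displacement, but the same idea of thresholding motion energy underlies many static/dynamic splits.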
General understanding of human-computer interaction and gestures
Human-computer interaction is an interdisciplinary scientific field concerned with improving the methods of designing and deploying interactive computer systems intended for human use. It requires an understanding of the terminology of gestures. A gesture is a sign movement of the hands, feet, or other parts of the human body. The term can be understood in a broad or a narrow sense: broadly, gestures include all movements, poses, and the distance between interlocutors; narrowly, the word refers only to movements of the hands.
In human-computer interaction, gesture recognition operates as a system, a combination of hardware and software designed to identify specific human gestures.
Human-computer interaction studies the methods by which a person interacts with a computer, understood as communication between a person and a machine. One method of human-machine interaction is based on the recognition of a person's gestures.
Gesture recognition and gesture-based interaction are receiving increasing attention in the human-computer interaction field. The hand is used for gesticulation more than other body parts because it is a natural medium of communication between people and is therefore the most suitable tool for human-computer interaction. Interest in gesture recognition has motivated significant research, summarized in several surveys directly or indirectly related to gesture recognition.
The purpose of this article is to review gesture recognition methods in human-computer interaction. Where visual information from computer vision systems is available, the corresponding interfaces can supplement or replace traditional ones based on the keyboard, mouse, remote control, data gloves, or speech. Examples of applications of gesture recognition systems are:
• control of consumer electronics;
• interaction with computer vision systems;
• control of mechanical systems;
• computer games.
Methods of gesture recognition
The main advantage of using a computer vision system is that visual information allows a person to communicate with computer equipment at a distance, without physical contact with the equipment being controlled. Gestures also have advantages in noisy environments, in situations where voice commands would not be accepted, and for conveying quantitative information and spatial relationships.
The user should be able to operate the equipment without special external devices such as a remote control. It should be emphasized that our goal is not to study gestures related to speech or sign language; the aim is to study gestures for various control tasks in human-computer interaction [2]. Intelligent interfaces include recognition of gestures and faces, eye tracking, and speech recognition.
The current level of development of information technologies, algorithms, and methods is giving rise to new interfaces for human-computer interaction. One promising direction is the development and study of human-computer interfaces based on gesture recognition. The developers of such interfaces aim to use natural ways of communicating with a computer: gestures, facial expressions, voice, and so on. A central task in extending human-computer interaction is the development of a general methodology for detecting and recognizing a person's dynamic gestures. Solving it requires an algorithm for gesture recognition built on existing methods of detecting and recognizing human gestures.
One widely used method is the Viola-Jones algorithm, which detects objects in images in real time. It was proposed in 2001 by Paul Viola and Michael Jones.
This method is fundamental to real-time object detection in the majority of existing recognition and identification algorithms [3,4]. It is also among the best in terms of the ratio of detection accuracy to speed. The algorithm shows excellent results and recognizes objects rotated by up to about 30 degrees, under varying lighting conditions.
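Much of the Viola-Jones algorithm's real-time speed comes from the integral image (summed-area table), which lets the sum over any rectangle be read with at most four lookups, so Haar-like features can each be evaluated in constant time. Below is a minimal self-contained sketch of that core trick; the image and the feature geometry are invented for illustration.

```python
# Minimal sketch of the integral-image trick at the core of Viola-Jones.
# An integral image lets the sum of any rectangle be read with 4 lookups,
# so thousands of Haar-like features can be evaluated in constant time each.

def integral_image(img):
    """img: 2D list of pixel intensities. Returns the summed-area table ii,
    where ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of the image over rows top..bottom, cols left..right (inclusive)."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

# A two-rectangle Haar-like feature: bright left half minus dark right half.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1],
       [9, 9, 1, 1],
       [9, 9, 1, 1]]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 3, 1) - rect_sum(ii, 0, 2, 3, 3)
print(feature)  # 72 - 8 = 64
```

In the full algorithm, thousands of such features are combined by AdaBoost into a cascade of classifiers; this sketch shows only the constant-time feature evaluation that makes the cascade fast.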
The appearance of the Kinect depth camera in 2010, with its sensors for gesture recognition, opened new opportunities in this problem area. The Kinect software is designed to determine the pose of the main joints of the human body. However, it provides no mechanism for recognizing dynamic physical movements, and the basic approach requires a large database of all possible human poses for classifier training. In general, the recognition of complex physical gestures is still at an early stage. The variety of human movements is so great that the problem of recognizing them by computer will remain relevant for a long time [5,14].
Kinect's capabilities extend to a wide range of modern industries. The device has been used in many studies as an inexpensive alternative sensor for monitoring. In the future it could be used in health monitoring, for example to help detect Parkinson's disease from gait features such as posture, leg trajectory, and step length, or to track the behavior of patients with Alzheimer's disease.
Kinect also takes a special place in expanding opportunities for the elderly. Augmented reality is a technology that combines real objects with computer-generated graphics from different sources [6,7,12].
Augmented reality using Kinect can be illustrated by the following fields of science and technology:
• training systems;
• design and engineering;
• cartography and tourism industry;
• QR codes.
Kinect allows the user to interact with the virtual environment through voice commands, body movements, objects, and video [8]. The following functions can be performed using the Kinect for Windows SDK software library:
• tracking one or more people in front of the sensor via skeleton recognition;
• measuring the distance from the sensor to an object (depth sensing);
• voice recognition.
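The skeleton-tracking output listed above can be consumed very simply: joints arrive as 3D camera-space coordinates, and many gestures reduce to geometric tests on them. The sketch below is hypothetical (the joint names, coordinates, and the 0.10 m margin are assumptions for illustration, not the real SDK API, which exposes a richer skeleton stream).

```python
# Hypothetical sketch of using skeleton-tracking output: joints arrive as
# (x, y, z) camera-space coordinates in meters. Joint names and values are
# illustrative; the real Kinect SDK exposes a richer skeleton stream.

def hand_raised(joints, margin_m=0.10):
    """True if the right hand is at least `margin_m` above the head
    (y is the vertical axis in this illustrative convention)."""
    hand_y = joints["hand_right"][1]
    head_y = joints["head"][1]
    return hand_y > head_y + margin_m

frame = {
    "head":       (0.02, 0.60, 2.10),
    "hand_right": (0.25, 0.85, 2.00),
}
print(hand_raised(frame))  # True: the hand is 0.25 m above the head
```

Static-pose triggers like this are how many Kinect applications map skeleton data to commands; dynamic gestures additionally track how such joint relations evolve over time.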
Using these functions, Kinect can also be used to integrate a virtual environment with augmented reality. The Kinect sensor consists of an infrared projector, an infrared camera, and an RGB camera. Its inventors describe depth measurement as a triangulation process. To project a constant pattern onto the scene, a laser beam is split by a diffraction grating into many rays; the resulting speckle pattern is captured by the infrared camera and compared with a reference image. The reference pattern is obtained by recording a plane at a known distance and is stored in the sensor's memory. When the pattern falls on an object whose distance to the sensor is greater or less than that of the reference plane, the speckles shift, and the distance to the sensor for each pixel can be obtained from the corresponding disparity [9,10,11].
The resulting point cloud can then be converted to a polygonal model (triangulation). Different approaches are used for triangulating a three-dimensional cloud; they differ in their algorithmic computations, their requirements on how the structure is determined, and the form of the results (triangles). Below we consider various ways of triangulating a three-dimensional cloud and the possibilities of applying them. Figure 1 shows a point-cloud model built with Delaunay triangulation.

Figure 1. Example of triangulation for a three-dimensional point cloud
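The per-pixel depth recovery described above can be sketched with the simplified pinhole-stereo relation Z = f·b/d, where f is the infrared camera's focal length, b the projector-camera baseline, and d the observed speckle disparity. This is a simplified model of the structured-light geometry; the focal length and baseline values below are illustrative assumptions, not calibration data for a real sensor.

```python
# Simplified sketch of depth from structured-light disparity, roughly how a
# Kinect-style sensor recovers per-pixel distance. Real sensors calibrate
# against a reference plane; focal length and baseline here are assumptions.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Pinhole-stereo relation: Z = f * b / d.
    disparity_px: shift (pixels) between observed speckle and reference image.
    focal_px: IR camera focal length in pixels (illustrative value).
    baseline_m: projector-camera baseline in meters (illustrative value)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A larger disparity means the surface is closer to the sensor:
print(round(depth_from_disparity(43.5), 3))  # 1.0 (meters)
print(round(depth_from_disparity(87.0), 3))  # 0.5 (meters)
```

Inverting this relation per pixel is what turns the captured speckle image into the point cloud that the triangulation methods below operate on.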
Triangulation methods applicable to a continuous cloud of any size can be characterized as follows. An exhaustive triangulation algorithm works very quickly, but some applications require additional processing. The topological triangulation method works automatically [16]; however, the higher the required accuracy, the longer the processing takes. The average edge length of the resulting triangles is called the resolution of the mesh.
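The "resolution" measure just mentioned, the average edge length of the triangulated mesh, is easy to compute once a triangulation is available. The sketch below uses a tiny hand-made mesh rather than the output of a real triangulation library, so the vertices and triangle indices are illustrative assumptions.

```python
# Sketch of the "resolution" measure: the average edge length of a
# triangulated point cloud. The vertices and triangles form a tiny
# hand-made mesh, not the output of a real triangulation library.

import math

def mesh_resolution(vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of (i, j, k) index triples.
    Returns the mean length over the unique edges of the mesh."""
    edges = set()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((min(a, b), max(a, b)))
    lengths = [math.dist(vertices[a], vertices[b]) for a, b in edges]
    return sum(lengths) / len(lengths)

# Two triangles sharing the diagonal of a unit square in the z = 0 plane:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
# 5 unique edges: four sides of length 1 and one diagonal of length sqrt(2),
# so the mean edge length is (4 + sqrt(2)) / 5 ≈ 1.0828.
print(round(mesh_resolution(verts, tris), 4))
```

A finer triangulation of the same cloud yields a smaller resolution value, which is why the measure is useful for comparing triangulation settings.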
Conclusion
In this article we discussed how gestures are recognized in human-computer interaction systems. Numerous gesture recognition methods were evaluated against the core technologies proposed for gesture recognition systems. At this stage these technologies have great potential for development, as they address an existing problem of human-computer interaction. Based on these methods, the following work was performed:
• analysis of existing software methods for recognizing gestures in human-computer interaction systems;
• comparative analysis of the most interesting gesture recognition methods;
• analysis and identification of the commands necessary for interaction with the system;
• evaluation of the recognition quality of the proposed methods on the basis of the tests carried out.
The current state of applications of gesture recognition systems indicates that desktop applications are where such systems are most often implemented. Future studies will give researchers an opportunity to create effective systems built on the core enabling technologies for gesture representation and recognition. In future work we intend to use the triangulation method for recognizing gestures in human-computer interaction systems.
REFERENCES
[1] Amit B., Dagan E., Gershom K., Alon L., Jinon O., Yaron Y. Enhanced interactive games by mixing full-body tracking and gesture animation. Proceedings of ACM SIGGRAPH ASIA 2010 Sketches; Seoul, Korea, December 15-18, 2010.
[2] Chang Y.J., Chen S.F., Huang J.D. A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities. Research in Developmental Disabilities. 2011; 32: 2566-2570.
[3] Forsyth D., Ponce J. Computer Vision: A Modern Approach / Transl. from English by A.V. Nazarenko and I.Yu. Doroshenko. Moscow: Williams Publishing House, 2004. 928 p.
[4] Huang T.S. et al. Fast Algorithms in Digital Image Processing. Moscow: Radio i Svyaz, 1984. 224 p.
[5] Wilson A.D. Using a depth camera as a touch sensor. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS 2010); Saarbruecken, Germany, November 7-10, 2010.
[6] Gottfried J.M., Fehr J., Garbe C.S. Computing range flow from multi-modal Kinect data. Proceedings of the 7th International Symposium on Visual Computing (ISVC 2011); Las Vegas, Nevada, USA, September 26-28, 2011; pp. 758-767.
[7] Kinect for Windows SDK, https://msdn.microsoft.com/en-us/library/hh855347.aspx (accessed March 10, 2015).
[8] Stowers J., Hayes M., Bainbridge-Smith A. Altitude control of a quadrotor helicopter using depth map from Microsoft Kinect sensor. Proceedings of the IEEE International Conference on Mechatronics (ICM 2011); Istanbul, Turkey, April 13-15, 2011; pp. 358-362.
[9] Walker B., Caroline P., William D.S. Using depth information to improve face recognition. Proceedings of the 6th International Conference on Human-Robot Interaction; Lausanne, Switzerland, March 6-9, 2011.
[10] Kerimbayev N.N., Nauryzbaeva N.M., Kurmanali M.A. Relationship between human and machine through Kinect // Bulletin of KazNPU. Almaty, 2017, No. 3.
[11] Kerimbayev N.N., Madiyeva B.A. Pan-Tompkins algorithm in the electrocardiograph system "HeartBit" // International Scientific Review of the Problems and Prospects of Modern Science and Education. Boston, USA, April 24-25, 2017.
[12] Karam M. A framework for research and design of gesture-based human-computer interactions. PhD Thesis, University of Southampton, 2006.
[13] Porta M. Vision-based user interfaces: methods and applications. International Journal of Human-Computer Studies, 2002, 57, pp. 27-73.
[14] Catuhe D. Programming with the Kinect for Windows Software Development Kit. Microsoft Press, 2012, p. 3.
[15] [Untitled photograph of Kinect sensors on the human body]. Retrieved March 10, 2013 from: http://gmv.cast.uark.edu/uncategorized/working-with-data-from-the-kinect/attachment/kinect-sensors-on-human-body/.
[16] Nahapetyan V.E. Human-computer multi-touch interaction using depth sensors // Interactive Systems: Problems of Human-Computer Interaction. Collection of scientific papers. Ulyanovsk: UlSTU, 2013.
Beisov N.K.
A review of tools implementing gesture recognition in human-computer interaction systems
Summary. In recent years our society has focused on the development of information technology; tomorrow's computing environments will go beyond the keyboard and mouse and will require automatic capture and interpretation of human movements using various sensors, including video cameras. Gesture recognition has long been used effectively as an interface between people and computers. In this article we propose several observation-based methods of gesture recognition.
Key words: human-computer interaction, gesture recognition, situation.
Beisov N.K.
A review of the implementation of gesture recognition in human-computer interaction
Summary. In recent years our society has paid great attention to the development of information technologies; tomorrow's computing environments will go beyond the keyboard-and-mouse interaction paradigm and will require automatic interpretation of human movement using various sensors, including video cameras. We propose several gesture recognition methods based on observations of people's movements. To solve this problem, we propose recognizing gestures and adapting gesture models to the situation.
Key words: human-computer interaction, gesture recognition, situation.
UDC 539.23:541.145
A.A. Markhabayeva1, Kh.A. Abdullin1,2, Sh.S. Syrym1
(1Al-Farabi Kazakh National University, MES RK, Almaty, Kazakhstan
2National Nanotechnology Laboratory of Open Type, Al-Farabi KazNU, MES RK, Almaty, Kazakhstan, [email protected])
EFFECT OF ANNEALING TEMPERATURE ON THE PARTICLE SIZE AND PHOTOCATALYTIC ACTIVITY OF TUNGSTEN OXIDE NANOPOWDERS
Abstract. A simple method for obtaining tungsten oxide nanopowders using a fibrous matrix in the form of degreased cotton is proposed. Medical cotton and ammonium metatungstate were used as precursors. The morphology of the synthesized nanopowders was studied by scanning electron microscopy, and the structure of the obtained samples was studied by X-ray phase analysis and Raman spectroscopy. Particle sizes were estimated from the X-ray diffraction spectra using the Scherrer formula. Shifts, shape changes, and intensity changes of the Raman lines, as well as broadening of the X-ray reflections, are observed with decreasing synthesis temperature, which is attributed to size effects. A red shift of the Raman spectra is observed depending on the dispersion of the obtained materials. A simple and environmentally friendly technology of nanopowder synthesis has been developed that allows powders of the required dimensions to be obtained.
Key words: tungsten oxide, nanopowders, Raman spectra, photocatalyst.
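The particle-size estimate mentioned in the abstract uses the Scherrer relation, D = Kλ / (β·cos θ), where λ is the X-ray wavelength, β the peak width (FWHM) in radians, θ the Bragg angle, and K a shape factor near 0.9. A minimal sketch follows; the numerical inputs are illustrative values (Cu K-alpha wavelength, invented peak widths), not measurements from this paper.

```python
# Hedged sketch of the Scherrer estimate of crystallite size:
#   D = K * lambda / (beta * cos(theta)).
# The numerical values below are illustrative, not data from the paper.

import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """wavelength_nm: X-ray wavelength (e.g. Cu K-alpha, 0.15406 nm).
    fwhm_deg: peak width (FWHM) in degrees 2-theta.
    two_theta_deg: peak position in degrees 2-theta.
    K: dimensionless shape factor (~0.9)."""
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle is half of 2-theta
    return K * wavelength_nm / (beta * math.cos(theta))

# A broader X-ray reflection implies a smaller crystallite, which is the
# trend the abstract reports for lower synthesis temperatures:
print(round(scherrer_size_nm(0.15406, 0.2, 24.3), 1))  # ~40 nm
print(round(scherrer_size_nm(0.15406, 0.8, 24.3), 1))  # ~10 nm
```

Note that instrumental broadening should be subtracted from the measured FWHM before applying the formula in practice; this sketch omits that correction.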