
3-D Graphical Area Mapping Bilinear Interactive Technology


Academic year: 2023


The Bubblebot will use the scanner designed and built during this thesis project to map the area and determine the optimal size it should take to navigate its environment. By examining RGB-D and Simultaneous Localization and Mapping (SLAM) approaches, three main flaws in many of the current techniques have been identified and corrected. Giving robots the ability to create 3-D images of their surroundings could be an invaluable aid in disasters like the 2010 mine collapse that trapped the Chilean miners.

In early surveying, all collected data was drawn by hand to create a map of the terrain and its contours. Even after the invention of the plane table and alidade, two devices that measure vertical angles, point positions, and heights faster than by hand, the final map was still drawn manually. A later method, known as stereomodeling, replaced the need to draw topographic maps by hand, making them easier and faster to produce and replicate.

With the addition of computers, topographic maps have become much more detailed and can be created much faster than ever before.

Literature Review

The research done by Pedro Loureco et al.5 also used per-pixel cameras together with ultrasonic sensors to build a quadcopter that can measure the distances of objects in an enclosure. The distance data received from the sensors is graphed, as seen in Figure 3. Unlike many others, this method was able to find the distance of the items in its area, but it also had many design flaws.

This method was successful in generating a 3-D map, but as seen in Figure 4, the robot requires large equipment, such as the laser range finder, making the system cumbersome. Larger motors were therefore needed to operate the system, and a larger power supply to power it. In research done at Tsinghua University by Xiang Gao et al.7, an RGB-D based system was used to map an area and successfully avoid all obstacles.

David Droeschel, Max Schwartz and Sven Behnke from the University of Bonn8 combined 3-D laser range-finding and RGB-D methods to create a hybrid system.

Figure 2. RGB-D image from the University of Washington 4.

History of Processing

Ongoing Research

RGB-D

With current technology, the RGB-D method would be the simplest and fastest technique for 3-D imaging and mapping. The Kinect is a small device that creates a per-pixel depth image of the environment using an IR emitter and a depth camera, as shown in Figure 9 12. The main reason for its popularity is the low cost and accessibility of the device.

By distinguishing the color of each pixel in the image, the RGB-D system can create an in-depth 3-D image or map of the surrounding area. Since the RGB-D technique uses a depth camera to create the 3-D image of the area, the system needs a preset database of material textures to correctly render every object in the mapped area. The main problem found using the RGB-D method was that the system had to determine the texture of every object in the scanned area, which demanded significant processing.

The current solution used to avoid this problem is by reducing the resolution of the created image.

Figure 10. Turtlebot 7.

Hardware

  • Microcontrollers
  • Stepper Motor and Controller
  • Scanner
  • Line Laser
  • Ultrasonic Sensor
  • Assembly
  • Stepper Motor Controls
  • Budget

One option was to increase the size of the battery pack, but this would have increased the weight and overall size of the scanner. One of the most important parts of designing and choosing the hardware for the system was selecting the motor to use. A stepper motor makes controlling speed and direction more complicated than any other type of motor.

The webcam was used to record the red light from the line laser and filter out the rest of the scene. The laser was given its own battery supply to prevent it from drawing too much power from the primary supply connected to the Arduinos and Raspberry Pi. Finally, an ultrasonic sensor was used to determine the distances of the obstacles on the 3-D map.

On the right side of the board approx. 1 cm from the edge, place and fasten the line laser. Place an L-bracket approx. 2" x 2" in size on the left side of the board with the vertical side facing the stepper motor shaft. The first step in the electrical assembly of the system was to connect the stepper motor and its power source to the motor controller.

The sensor determines the distance to the object in front of it by taking the time the sound wave takes to travel from the ultrasonic sensor and bounce back, multiplying it by the speed of sound, and dividing by two. If imperial units are required, the speed of sound is 0.0135 in/µs. The Serial Monitor output showed the distances the ultrasonic sensor measured in both inches and centimeters, as shown in Figure 22.

The first issue that arose when attempting ultrasonic 2-D mapping was the inability of the Arduino software to create an image as an output. The Arduino software was used to control the Arduino and all other hardware connected to it, while the Processing software would take all the data and generate a two-dimensional polar-coordinate (R, θ) map of the area.

Figure 12. Raspberry Pi model 3B.

Camera

The approach taken was to add a second servo to the original system and tweak the Processing and Arduino code; even so, it was determined that ultrasonic sensors alone could not create a 3-D map. The ultrasonic sensor mapping approach only worked when the sensor was scanning small objects one at a time, and unless every small feature was magnified, nothing in the image could be distinguished.

Figure 26b, on the other hand, shows what the camera sees after the 3-D image generator code in Processing is activated. In order to fully scan the environment, and for Processing to know where each object was scanned, the system needs to rotate and inform the Processing software of the angle at which the red line laser was fired. The motor angle was monitored via the Serial Monitor and sent to the Processing code over the COM port as before.

Processing finally took all the data from the camera and Arduino to create a text file. A typical text file contains over 800,000 data points, large enough to fill over 200 pages of a standard Word document. When the system finishes its scan, the motor that rotates the camera and line laser resets the scanner to its initial position so that it is ready for the next scan.

Even though the camera was set to see only the red light and ignore all other information, the number of pixels and data points received was still excessive. In most scenarios, there was so much data for the Processing program to handle that the software froze or crashed. The number of pixels used by the camera was reduced by dividing the pixel count by a factor k.

The camera used for this project therefore only used half its pixels. Although the pixel count is reduced to lower the overall processing load, the final image is still clear enough to make out.

Figure 26a. Area scanned.

Stepper Motor

The camera used for this setup captures anywhere from 1.2 to 3.0 megapixels depending on the size of the photo being taken. In some situations, the orientation of the generated image must be changed. If the scanner rotates too slowly, the scan will take too long and time out, producing an incomplete 3-D image of the area.

One of the variable resistors controls the maximum angle the scanner will rotate, and the other controls the speed at which the scanner sweeps. As seen in the two images above, the scanner could mimic the depth of the objects from the light intensity of the laser. Although the scanner could make a functional 3-D image of the scanned objects, it still could not know the objects' distances from its original position.

To solve this, the ultrasonic sensors used at the beginning of the project would be combined with the laser scanning system to both create a map and know the distance of the items in the area. Because the scanner would need to see the environment, it would need to be mounted above all parts of the robot to see past protruding objects. The stand is specially designed for the dimensions of the stepper motor used and the amount of space on the robot.

The final mounting of the scanner on the test robot is shown in Figure 37. The Raspberry Pi's communication problem was due to the limited range of the Wi-Fi network over which the Raspberry Pi held a remote desktop connection. One of the main concerns in almost every previously examined method was the need to drive to the obstacle on the generated map to calculate the distance to the object.

By adding an ultrasonic sensor, the scanner could add data from it to the map and determine the distances of objects in the scanned area. The main purpose of the robot is to expand and shrink to the desired size to fit any space and avoid obstacles seen by the scanner. The Arduino uses this code to control the ultrasonic sensor to determine the distance of objects in the scanned area.

PImage img2; // second image, which is a filtered version of the first image (thin, dotted line)

Figure 30. Blank generated image.

Image Generator

Using the Scanner with a Robot

Ultrasonic Range Finder

Processing Radar Generator

Stepper Motor and Driver Tester

Stepper Motor Code for Scanner

Processing Camera Scanner

Processing 3D Image Generator

Figures

Figure 1. Topographical map of Yellowstone National Park 3.
Figure 2. RGB-D image from the University of Washington 4.
Figure 3. Sensor based obstacles 5.
Figure 4. Hartmut Surmann’s obstacle avoidance robot 6.