Elements of Robotics


Classification of Robots

Industrial Robots

The factory assembly line can operate without the presence of humans in a well-defined environment where the robot must perform tasks in a specified sequence, acting on objects precisely positioned in front of it (Fig. 1.3). The design of industrial robots is simplified because they work in a customized environment to which humans have no access while the robot is working.

Autonomous Mobile Robots

These are extremely difficult to develop due to the highly complex uncertain environment of motorized traffic and strict safety requirements. Better sensors can perceive the details of more complex situations, but to handle these situations, the robot's behavior control must be very flexible and adaptable.

Fig. 1.4 Autonomous mobile robot weeding a field (Courtesy of Ecorobotix)

Humanoid Robots

Much of the research and development in robotics today is focused on making robots more autonomous by improving sensors and enabling more intelligent control of the robot. This involves both sensing and intelligence, but it must also take into account the psychology and sociology of the interactions.

Educational Robots

End effectors can be built with robotics kits or by using additional components with pre-assembled robots, although educational robotic arms exist (Fig.1.6b). It uses event-action pairs: when the event represented by the block on the left occurs, the actions in the following blocks are executed.

Fig. 1.6 a LEGO® Mindstorms EV3 (Courtesy of Adi Shmorak, Intelitek), b Poppy Ergo Jr robotic arms (Courtesy of the Poppy Project)

The Generic Robot

  • Differential Drive
  • Proximity Sensors
  • Ground Sensors
  • Embedded Computer

A horizontal proximity sensor can measure the distance (in centimeters) from the robot to an object and the angle (in degrees) between the front of the robot and the object. The figure shows the robot from above, although the ground sensors are at the bottom of the robot.

Fig. 1.11 a Robot with a rotating sensor (gray dot), b Robot with two ground sensors on the bottom of the robot (gray rectangles)

The Algorithmic Formalism

Timers are also used for polling, an alternative to event handlers: instead of performing a computation when an event occurs, the sensors are read and stored periodically. Strictly speaking, polling is itself an event handler that runs when a timer expires, but software designed around polling can look quite different from event-based software.
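Polling can be sketched as a loop driven by a timer. In the sketch below the clock and sleep functions are injected so the loop can be tested; the sensor-reading function is a stand-in for whatever real sensor API the robot provides.

```python
import time

def poll_sensors(read_sensor, period_s, duration_s,
                 clock=time.monotonic, sleep=time.sleep):
    """Read and store the sensor value each time the polling period expires."""
    samples = []
    deadline = clock() + duration_s
    while clock() < deadline:
        samples.append(read_sensor())  # store the current reading
        sleep(period_s)                # wait for the next timer expiry
    return samples
```

With a real robot the stored samples would then be examined by the control algorithm at its own pace, which is exactly how polling differs from reacting to each event as it happens.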

An Overview of the Content of the Book

This chapter introduces fuzzy logic and shows how it can be used to control a robot approaching an object. This chapter presents an approach to learning: artificial neural networks modeled on neurons in our brains.

Summary

This chapter presents a simplified treatment of the fundamental concepts of robot manipulators (forward and reverse kinematics, rotations, homogeneous transformations) in two dimensions, as well as a taste of three-dimensional rotations. Appendix B Mathematical Derivations and Tutorials This chapter contains tutorials that discuss some of the mathematical concepts used in the book.

Further Reading

When an object is detected to the right of the robot, the robot turns right. When an object is detected to the left of the robot, the robot turns left.

Fig. 2.1 Classification of sensors

Classification of Sensors

Distance Sensors

  • Ultrasound Distance Sensors
  • Infrared Proximity Sensors
  • Optical Distance Sensors
  • Triangulating Sensors
  • Laser Scanners

Light intensity decreases with the square of the distance from the source, and this ratio can be used to measure the approximate distance to the object. Secondly, a laser beam is highly focused so that an accurate measurement of the angle to the object can be made (Fig.2.3).

Fig. 2.2 Measuring distance by transmitting a wave and receiving its reflection

Cameras

Given the position of the camera, what part of the sphere surrounding the camera is captured in the image? Cameras with a wide field of view are used in mobile robots because the image can be analyzed to understand the environment.

Other Sensors

An important feature in designing a camera for a robot is the field of view of its lens. The analysis of the image is used for navigation, to locate objects in the environment and to communicate with humans or other robots using visual properties such as color.

Range, Resolution, Precision, Accuracy

If a distance sensor consistently claims that the distance is 5 cm greater than it actually is, the sensor is not accurate. Place an object at different distances from the robot and measure the distances returned by the sensor.

Nonlinearity

Linear Sensors

Stick a ruler on your table and carefully place the robot so that its front sensor is placed next to the 0 mark of the ruler.

Mapping Nonlinear Sensors

If you look again at the graph in Fig.2.13, you can see that the segments of the curve are roughly linear, although their slopes change according to the curve. Therefore, we can get a good approximation of the distance corresponding to a sensor reading by taking the relative distance on a straight line between two points (Fig.2.14).
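The piecewise-linear approximation described above can be sketched as follows; the calibration table of (reading, distance) pairs is hypothetical and would come from measurements like those in the activity.

```python
def reading_to_distance(reading, calibration):
    """Map a raw sensor reading to a distance by linear interpolation
    between the two nearest calibration points.
    calibration: list of (reading, distance) pairs."""
    points = sorted(calibration)
    lo_reading, lo_distance = points[0]
    if reading <= lo_reading:
        return lo_distance            # below the calibrated range
    for (r1, d1), (r2, d2) in zip(points, points[1:]):
        if reading <= r2:
            # Relative position of the reading on the segment [r1, r2]
            frac = (reading - r1) / (r2 - r1)
            return d1 + frac * (d2 - d1)
    return points[-1][1]              # above the calibrated range
```

For example, with the hypothetical table `[(10, 30), (20, 20), (40, 10)]`, a reading of 30 lies halfway between the points (20, 20) and (40, 10), so the interpolated distance is 15.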

Summary

One solution is to take the closest value, so if the value 27 is returned by the sensor whose mapping is given in Table 2.1, the distance would be 12.

Further Reading

Purely reactive behavior occurs when the action depends only on the occurrence of an event and not on data stored in memory (state). Sections 3.1–3.3 describe Braitenberg vehicles exhibiting reactive behavior; in Chap. 4 we present Braitenberg vehicles whose behavior depends on state.

Braitenberg Vehicles

Calibration is necessary to determine the optimal thresholds for fast, robust movement of the robot. This work was the precursor to the LEGO® Mindstorms robotics kits. This chapter describes an implementation of most of the Braitenberg vehicles from the MIT report.

Reacting to the Detection of an Object

Specification (Dogged (stop)): As in activity 3.3, but when an object is not detected, the robot stops. Specification (Attractive and Repulsive): When an object approaches the robot from behind, the robot runs away until it is out of range.

Reacting and Turning

Specification (Paranoid (Right-Left)): When an object is detected in front of the robot, the robot moves forward. If an object is detected to the left of the robot, turn off the right motor and set the left motor to rotate forward.

Fig. 3.1 a Gentle left turn. b Sharp left turn

Line Following

Line Following with a Pair of Ground Sensors

For now, we specify that the robot stops when neither sensor detects the line. If the turn is too sharp, the robot may run off the other side of the line.
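One decision of the two-sensor line follower can be sketched as below, assuming a hypothetical `set_motors(left_power, right_power)` interface; converting the raw ground-sensor values into on/off detections by thresholding is done elsewhere.

```python
def line_follow_step(left_on_line, right_on_line, set_motors):
    """One decision of the two-ground-sensor line follower:
    both sensors on the line -> drive straight;
    only the left sensor -> turn left (the robot drifted right);
    only the right sensor -> turn right;
    neither sensor -> stop, as specified in the text."""
    if left_on_line and right_on_line:
        set_motors(100, 100)   # straight ahead
        return "straight"
    if left_on_line:
        set_motors(20, 100)    # slow the left wheel: gentle left turn
        return "left"
    if right_on_line:
        set_motors(100, 20)    # slow the right wheel: gentle right turn
        return "right"
    set_motors(0, 0)           # line lost: stop
    return "stop"
```

Making the inner wheel's power closer to the outer wheel's gives the gentle turn of Fig. 3.1a; a larger difference gives the sharp turn of Fig. 3.1b, with the risk of overshooting the line.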

Line Following with only One Ground Sensor

Hint: The only problem is when the robot runs off the black side of the line, because the algorithm cannot distinguish this case from running off the white side. One solution is to use a variable to remember the previous value of the sensor so that the two cases can be distinguished.

Line Following Without a Gradient

Since the sensor values are proportional to the gray level of the line, the approximate distance of the sensor from the center of the line can be computed. Compare the performance of this algorithm with the line-following algorithms that use two sensors, and one sensor with a gradient.

Fig. 3.6 Line following with a single sensor and without a gradient. Above: robot moving over the line, below: plot of sensor value vs.

Braitenberg's Presentation of the Vehicles

The robots in Fig. 3.8a–b are the same as the robots in Fig. 3.7a–b, respectively, except that the sensor values are negated (note the signs on the connections): the more light detected, the slower the wheel turns. Suppose there is a fixed bias applied to the motors so that the wheels turn forward when no light source is detected.

Summary

Use proximity sensors instead of Braitenberg's light sensors and the detection or non-detection of an object instead of stronger or weaker light sources.

Further Reading

The images or other third-party materials in this chapter are included under the Creative Commons license of the chapter, unless otherwise noted in a line of credit accompanying the materials. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by law or exceeds the permitted use, you must obtain permission directly from the copyright holder.

State Machines

Braitenberg vehicles and line-following algorithms (Chap. 3) demonstrate reactive behavior, where the robot's action depends on the current values returned by its sensors, not on events that occurred earlier. The action is not a continuing one; for example, the action turn left means setting the motors so that the robot turns left, but the transition to the next state is taken without waiting for the robot to reach a specific position.

Reactive Behavior with State

Search and Approach

The object has been detected but the search is not at an edge of the sector; in this case, this transition is taken. The search is at an edge of the sector but an object has not been detected; in this case the left-to-right or right-to-left transition is taken.

Implementation of Finite State Machines

The search is at an edge of the sector at exactly the moment an object is detected; in this case an arbitrary transition is taken: the robot can approach the object, or it can change the direction of the search.
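One common way to implement a finite state machine is a transition table mapping (state, event) pairs to a next state and an action. The state and event names below are illustrative labels for the search-and-approach example, not the book's exact identifiers.

```python
# Table-driven finite state machine: each (state, event) pair maps to
# (next_state, action). Undefined pairs leave the machine in place.
TRANSITIONS = {
    ("search-right", "detected"): ("approach",     "move forward"),
    ("search-right", "at-edge"):  ("search-left",  "reverse scan"),
    ("search-left",  "detected"): ("approach",     "move forward"),
    ("search-left",  "at-edge"):  ("search-right", "reverse scan"),
}

def step(state, event):
    """Take the transition for (state, event); stay in the current
    state with no action if no transition is defined."""
    return TRANSITIONS.get((state, event), (state, None))
```

The nondeterministic case described above (object detected exactly at an edge) corresponds to both a "detected" and an "at-edge" event being applicable; an implementation simply picks one, which is why an arbitrary transition is taken.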

Summary

Further Reading

Real-world robots must move to specific locations and may have engineering limitations on how fast or slow they can move or turn. In the simplest implementation, the speed of the robot's wheels is assumed to be proportional to the power used by the motors.

Distance, Velocity and Time

Use a stopwatch to measure the time it takes for the robot to move between the lines. To accurately navigate an environment, a robot must detect objects in its environment, such as walls, floor markings, and objects.

Acceleration as Change in Velocity

When the robot's power setting is fixed, the force on the robot is constant and we expect the acceleration to remain constant, so the speed increases. But at a certain point the acceleration drops to zero, meaning that the speed no longer increases, because the force on the wheels is just sufficient to overcome the friction of the road and the wind resistance.

Fig. 5.1 An accelerating robot: distance increases as the square of time

From Segments to Continuous Motion

When the robot appears to have reached full speed, record the time since the start of the run. Calculate velocities as distances divided by times and compare to v = at (Fig. 5.2a).

Fig. 5.2 a Velocity for constant acceleration. b Distance for constant acceleration

Navigation by Odometry

Linear Odometry

Once the relationship between motor power and speed is determined, the robot can calculate the distance traveled as s = vt. If the robot starts at the origin (0, 0) and moves in a straight line at an angle θ with speed v for time t, the distance moved is s = vt.
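The linear odometry update follows directly from s = vt, decomposing the distance along the heading; angles are taken in degrees here for convenience.

```python
from math import cos, sin, radians

def odometry_step(x, y, theta_deg, v, t):
    """Dead-reckoning position update for straight-line motion:
    the robot moves a distance s = v*t at heading theta."""
    s = v * t
    return (x + s * cos(radians(theta_deg)),
            y + s * sin(radians(theta_deg)))
```

For example, moving at 2 units per second for 3 seconds at heading 0° takes the robot from (0, 0) to (6, 0).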

Fig. 5.3 Position and heading

Odometry with Turns

If the distance is small, the line labeled dc is approximately perpendicular to the radius through the robot's final position. Using similar triangles, we see that θ is the change in the robot's heading (Fig. 5.5).
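For motion along a short arc, a differential-drive odometry update can be sketched from the distances traveled by the two wheels. Replacing the arc by a chord at the mean heading is a common small-motion approximation, not necessarily the book's exact derivation.

```python
from math import cos, sin

def diff_drive_odometry(x, y, theta, d_left, d_right, wheelbase):
    """Pose update for a differential drive over a short motion:
    the center moves dc = (d_left + d_right)/2 and the heading
    changes by dtheta = (d_right - d_left)/wheelbase (radians)."""
    dc = (d_left + d_right) / 2.0
    dtheta = (d_right - d_left) / wheelbase
    # Approximate the short arc by a straight segment at the mean heading.
    mid = theta + dtheta / 2.0
    return x + dc * cos(mid), y + dc * sin(mid), theta + dtheta
```

Equal wheel distances reduce this to straight-line odometry; opposite wheel distances give a pure turn in place, with the heading change proportional to the wheel travel divided by the wheelbase.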

Errors in Odometry

Write a program that causes the robot to move in a straight line 2 m and then turn 360◦. Write a program that causes the robot to turn 360◦ and then move in a straight line 2 m.

Fig. 5.6 Odometry errors

Wheel Encoders

Watch the video and determine the number of revolutions by counting the number of times the mark is at the top of the wheel. Determine the number of revolutions by counting the number of times a mark is at the top of the wheel and dividing by.

Inertial Navigation Systems

Accelerometers

The direction in which the mass moves indicates the acceleration: forward or backward. The magnitude of the force is measured indirectly by measuring the distance the mass moves.

Gyroscopes

Two square masses are attached by flexible beams to anchors mounted on the base of the component. The Coriolis force is a force whose direction is given by the vector cross product of the axis of rotation and the motion of the mass, and whose magnitude is proportional to the linear velocity of the mass and the angular velocity of the gyro.

Figure 5.9 shows a CVG called a tuning fork gyroscope. Two square masses are attached by flexible beams to anchors that are mounted on the base of the component.

Applications

The masses (grey squares) are forced to vibrate at the same frequency as the two legs of a tuning fork. Since the masses move in different directions, the resulting forces will be equal but in opposite directions (solid arrows).

Degrees of Freedom and Numbers of Actuators

A differentially driven robot has two actuators, one for each wheel, even though the robot itself has three DOF. The two-jointed arm in Fig. 5.11 has two motors, one at each rotating joint, so the number of actuators is equal to the number of DOF.

The Relative Number of Actuators and DOF

The mobile arm robot (Fig.5.13b) has three actuators: a motor that moves the robot forward and backward, and motors for each of the two rotary joints. For the mobile arm robot (Fig.5.13b), there are an infinite number of base and arm positions that bring the end effector to a specific reachable position.

Fig. 5.13 a Robot arm: two DOF and four actuators. b Mobile robot and arm: two DOF and three actuators

Holonomic and Non-holonomic Motion

To directly access the third DOF, the robot must be able to move laterally. Two pairs of wheels on opposite sides of the robot can directly move the robot left, right, forward and backward.

Fig. 5.18 a Swedish wheel. b Omnidirectional robot (Courtesy LAMI-EPFL)

Summary

Further Reading

For example, if a robot is to bring an object from a shelf in a warehouse to a delivery truck, it must use sensors to navigate to the correct shelf, detect and grasp the object, and then navigate back to the truck and load the object. In a warehouse there may be obstacles on the way to the shelf, the object will not be placed precisely on the shelf, and the truck is never parked in exactly the same place.

Control Models

Open Loop Control

Closed Loop Control

For example, if the reference value is the position of the robot relative to the shelf, the control value will be the power setting of the motors and the duration of their operation. The variable y represents the output, that is, the actual state of the robot, for example, the distance to the object.

The Period of a Control Algorithm

A 1 ms control period would waste computational power because the robot moves only 0.002 cm (0.02 mm) during each 1 ms cycle of the control algorithm. In the example, we concluded that the optimal period of the control algorithm was on the order of tenths of a second.

On-Off Control

The resulting behavior of the robot is shown in Fig.6.2: the robot will oscillate around the reference distance to the object. It is highly unlikely that the robot will actually stop at or near the reference distance.
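The on-off algorithm can be sketched in a few lines; the power magnitude and the sign convention (positive = drive forward) are illustrative.

```python
def on_off_control(distance, reference, power=100):
    """On-off controller: full power forward when too far from the
    object, full power backward when too close. Because the power
    never tapers off, the robot overshoots and oscillates around
    the reference distance instead of settling there."""
    if distance > reference:
        return power      # too far: drive forward
    if distance < reference:
        return -power     # too close: back up
    return 0              # exactly at the reference (rarely happens)
```

The oscillation in Fig. 6.2 follows directly from this structure: the control value jumps between the two extremes and is almost never zero.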

Fig. 6.2 Behavior of the on-off algorithm

Proportional (P) Controller

In theory, the low power setting should cause the robot to move slowly and eventually reach the reference distance. The P-controller sets the maximum motor power to make the robot move quickly towards the object.
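A minimal proportional controller sketch; the gain value and sign convention are illustrative.

```python
def p_control(distance, reference, kp=0.8):
    """Proportional controller: the motor power is proportional to
    the error, so the robot drives fast when far from the reference
    distance and slows down as it approaches it."""
    error = distance - reference
    return kp * error
```

Because the power shrinks with the error, in practice it can become too small to overcome friction before the error reaches zero, leaving a steady-state error; this is the motivation for adding the integral term of the PI controller.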

Figure 6.3 plots the distance of the robot to the object as a function of time when the robot is controlled by a P controller.

Proportional-Integral (PI) Controller

Apply the proportional control algorithm to make the robot stop at a certain distance from an object. Implement a PI controller that causes the robot to stop at a certain distance from an object.
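A PI controller sketch: the integral term accumulates the error over the control period dt, so a persistent error eventually produces enough power to move the robot even when the proportional term alone is too small. Gains and conventions are illustrative.

```python
def make_pi_controller(kp, ki, dt):
    """Return a PI control function with an internal error integral."""
    integral = 0.0
    def control(distance, reference):
        nonlocal integral
        error = distance - reference
        integral += error * dt      # accumulate error over time
        return kp * error + ki * integral
    return control
```

Note that the integral keeps growing while the error persists; this is what eliminates steady-state error, but it is also the source of the overshoot (integrator windup) visible in the PI behavior of Fig. 6.5.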

Fig. 6.5 Behavior of the PI controller

Proportional-Integral-Derivative (PID) Controller

Compare the behavior of the PI controller with a P controller for the same task by monitoring the variables of the control algorithms over time. What happens if you manually prevent the robot from moving for a short time and then let it go?

Summary

Further Reading

Self-driving car navigation can be divided into two tasks: There is a high-level task of finding the path between the starting position and the destination position. While high-level pathfinding can be done once before a trip (or every few minutes), the low-level task of obstacle avoidance must be done frequently, since the car never knows when a pedestrian will jump into the road or when a car following it will suddenly brake.

Obstacle Avoidance

Wall Following

The robot will never detect the target, so it will move indefinitely around the first obstacle.

Fig. 7.1 Wall following

Wall Following with Direction

After four left turns, its heading is 360◦ (again north, a multiple of 360◦) and it continues to move forward, repeatedly encountering and following the wall. Implement the wall following with direction algorithm and verify that it exhibits the behavior shown in Fig. 7.4.

Fig. 7.4 Why wall following with direction doesn’t always work

The Pledge Algorithm

Following a Line with a Code

We do not need a full localization algorithm like those in Chap. 8; we only need to know the positions on the line that are relevant to the task. Navigation without continuous localization can be done by reading a code placed on the floor next to the line.

Fig. 7.6 A robot following a line using its left sensor and reading a code with its right sensor

Ants Searching for a Food Source

The food source is represented by a dark spot that can be easily detected by a ground sensor on the robot. The robot's proximity sensors are used to detect the walls of the area.

Fig. 7.8 a The ants’ nest and the food source. b Pheromones create a trail

A Probabilistic Model of the Ants' Behavior

Figure 7.11 shows the probability of an ant's location after finding a food source, returning to the nest, and making another random move. Although ants move randomly, their behavior of returning to the nest after finding a food source makes them more likely to be on the diagonal than anywhere else in the environment.

Fig. 7.11 Probabilities for the location of the ant

A Finite State Machine for the Path Finding Algorithm

Therefore, the robot (or its sensor) has to turn until it finds the direction to the nest. When it does, it turns toward the nest and takes the transition to the state goto nest.

Fig. 7.13 State machine for drawing a path between a food source and a nest; see Table 7.1 for explanations of the abbreviations

Summary

Since the nest is next to a wall, this condition is also true when the robot returns to the nest. We specified that the nest can be detected (Activity 7.7), but the robot's sensor is not necessarily facing the direction of the nest.

Further Reading

If the robot sees a high-density (black) marker, it concludes that it has reached the food source and takes the transition to the state food. This state is similar to the following state in that the robot moves forward toward the nest, turning right or left as needed.

Landmarks

For a robot, what does it mean to "count steps" and "open our eyes every now and then"? Every now and then you can open your eyes for a second; this action costs 1 point.

Determining Position from Objects Whose Position Is Known

Determining Position from an Angle

From the known absolute coordinates of the object (x0,y0), the coordinates of the robot can be determined. The distance and the angle are measured from the position where the sensor is mounted, which is not necessarily the center of the robot.

Determining Position by Triangulation

To measure azimuth, line up the robot with one edge of the table and call it north. First, perform an angle measurement at one position and then pick up the robot and move it to a new position for the second measurement, carefully measuring the distance.

Fig. 8.2 Triangulation

Global Positioning System

Probabilistic Localization

Sensing Increases Certainty

As time goes on, the uncertainty decreases: the robot knows with greater certainty where it actually is. In this example, the robot eventually either knows with full certainty that it is at position 6, or it has reduced its uncertainty to one of the two positions 2 and 7.

Uncertainty in Sensing

For example, the 0.19 probability that the robot was in position 1 now becomes the probability that it is in position 2. Similarly, the 0.02 probability that the robot was in position 3 now becomes the probability that it is in position 4.

Uncertainty in Motion

Table 8.2 shows localization with uncertainty in both sensing and motion: sensor = after multiplying by the uncertainty of the sensor, norm = after normalization, right = after moving right one position. The robot is likely at position 6, but we are less sure because the probability is only 0.43 instead of 0.63.
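The two steps of this style of probabilistic localization can be sketched as follows. The cyclic shift for the motion step is a simplification (the book's corridor example is linear), and the likelihood values in the usage note are illustrative.

```python
def sense(belief, likelihood):
    """Sensor update: multiply each position's probability by the
    likelihood of the observation at that position, then normalize
    so the probabilities again sum to 1."""
    updated = [p * l for p, l in zip(belief, likelihood)]
    total = sum(updated)
    return [p / total for p in updated]

def move_right(belief):
    """Motion update: shift the belief one position to the right
    (cyclic world assumed for simplicity)."""
    return belief[-1:] + belief[:-1]
```

Starting from a uniform belief over four positions and observing a feature that exists only at positions 0 and 3 concentrates the belief on those two positions; moving right then shifts that belief along with the robot.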

Summary

Further Reading

Maps are less relevant for a robot vacuum cleaner, because the manufacturer cannot prepare maps of the apartments of each customer. Observations of the sun and stars were used for localization and maps were created as exploration continued.

Discrete and Continuous Maps

In Fig.9.1b, three pairs of numbers represent the triangle with much greater accuracy than the seven pairs of the discrete map. Furthermore, it is easy to calculate whether a point is in the object or not by using analytic geometry.

The Content of the Cells of a Grid Map

Place your robot on a line in front of a series of obstacles that the robot can detect with a side sensor (Figure 9.3). Otherwise, draw regular lines on the ground that the robot can read for localization.

Fig. 9.2 A probabilistic grid map

Creating a Map by Exploration: The Frontier Algorithm

Grid Maps with Occupancy Probabilities

The red lines of small squares in Fig. 9.4 represent the frontier: the boundary between the known cells and the unknown cells of the environment. The unknown cells adjacent to the frontier are the interesting ones that must be explored to extend the current map.

The Frontier Algorithm

The example to which we applied the frontier algorithm is a relatively simple environment consisting of two rooms connected by a door (in the sixth column from the left in Fig. 9.9), but otherwise isolated from the outside environment. Each robot explores the part of the frontier closest to its position.

Fig. 9.6 The robot updates unknown cells adjacent to the frontier

Priority in the Frontier Algorithm

Since each robot explores a different area of the environment, the construction of the map will be much more efficient. You will need to include an algorithm to accurately move from one cell to another and an algorithm to move around obstacles.

Fig. 9.10 Exploration of a labyrinth

Mapping Using Knowledge of the Environment

Measuring over a large area enables the robot to identify features such as walls and corners from a measurement taken at a single location. How large must the landmarks be for the robot to reliably detect them.

Fig. 9.12 A robotic lawnmower mowing an area and returning to its charging station

A Numerical Example for a SLAM Algorithm

The actual position of the robot is shown in Fig. 9.17a, and Fig. 9.17b is the corresponding map. However, because of the odometry error, the robot's actual perception is different.

Fig. 9.15 a A robot near the wall of a room, b The corresponding map

Activities for Demonstrating the SLAM Algorithm

From a series of measured observations, compute their correspondences to the observations of the robot's 15 poses. Compute the perception (d, θ) of the object from this pose and then compute the coordinates (x, y) of the new obstacle.

Fig. 9.23 Configuration for the SLAM algorithm

The Formalization of the SLAM Algorithm

Summary

Further Reading

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Think of a robot used in a hospital to transport medicines and other supplies from storage areas to the doctors and nurses.

Dijkstra's Algorithm for a Grid Map

Dijkstra's Algorithm on a Grid Map with Constant Cost

Figure 10.3a shows the grid map after five iterations and Fig. 10.3b shows the final grid map after nine iterations, when the goal cell is reached. The upper left diagram shows the grid map after three iterations and the upper right diagram shows the map after 19 iterations.

Dijkstra's Algorithm with Variable Costs

Finally, in the lower left diagram, G is found after 25 iterations, and the shortest path is indicated in gray in the lower right diagram. The right diagram shows the shortest path if the cost of moving through a cell of sand is only 2.
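Dijkstra's algorithm on a grid with variable cell costs can be sketched with a priority queue; the cost values (1 for a clear cell, a larger number for sand, None for an obstacle) are illustrative.

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Dijkstra's algorithm on a grid map: cost[r][c] is the cost of
    entering cell (r, c), or None for an obstacle. Returns the
    minimal total path cost from start to goal, or None if the goal
    is unreachable."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    frontier = [(0, start)]            # (cost so far, cell)
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                   # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return None
```

With a sand cell of cost 4 in the middle of a 3×3 grid, the cheapest path from one corner to the opposite corner goes around the sand (total cost 4) rather than through it, matching the behavior described in the text.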

Fig. 10.4 Dijkstra’s algorithm for path planning on a grid map. Four stages in the execution of the algorithm are shown starting in the upper left diagram

Dijkstra's Algorithm for a Continuous Map

Place some obstacles in the grid and enter them on the robot's map. In this case, the shortest path in terms of the number of edges is also the geometrically shortest path.

Fig. 10.6 Segmenting a continuous map by vertical lines and the path through the segments

Path Planning with the A∗ Algorithm

The rest of the figure shows four stages of the algorithm leading to the shortest path to the target. Comparing Figures 10.4 and 10.12, we see that the A∗ algorithm only had to visit 71% of the cells visited by Dijkstra's algorithm.

Fig. 10.10 a Heuristic function. b The first two iterations of the A ∗ algorithm

Path Following and Obstacle Avoidance

Modify your implementation of the line-following algorithm so that the robot behaves correctly even if an obstacle is placed on the line. Modify your implementation of the line-following algorithm so that the robot behaves correctly even if additional robots move randomly in the region of the line.

Summary

Further Reading

Fuzzify: The values of the sensors are converted to values of the linguistic variables, such as far, closing, near, called premises. Defuzzify: The consequents are combined to produce a crisp output, which is a numerical value that controls some aspect of the robot, such as the power supplied to the motors.

Fuzzify

If the value of the sensor is below far_low, then we are absolutely certain that the object is far and the certainty is 1. If the value is between close_low and far_high, then we are somewhat certain that the object is far, but also somewhat certain that it is closing.

Apply Rules

The x-axis is the value returned by the sensor, and the y-axis gives the premise for each variable: the certainty that the linguistic variable is true. For the value v, the certainty of the consequent is much smaller than the certainty obtained from the minimum function.

Defuzzify

Figure 11.3 shows the trapezoids for the resulting cruise with certainty 0.4 and the resulting fast with certainty 0.2. The value is closer to the value associated with cruising than the value associated with fast.
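Defuzzification by weighted average can be sketched as below; the representative crisp value chosen for each consequent (e.g. a motor power for cruise and a higher one for fast) is hypothetical.

```python
def defuzzify(consequents, crisp_values):
    """Combine consequent certainties into one crisp output using
    the weighted average of each consequent's representative value.
    consequents: {name: certainty}, crisp_values: {name: number}."""
    num = sum(cert * crisp_values[name] for name, cert in consequents.items())
    den = sum(consequents.values())
    return num / den
```

With certainties 0.4 for cruise and 0.2 for fast, as in Fig. 11.3, and hypothetical crisp powers 100 and 200, the weighted average lands closer to the cruise value than to the fast value, as the text describes.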

Summary

Set appropriate thresholds for the proximity sensor and appropriate defuzzification values to obtain the crisp motor speed. The disadvantage is that control behavior with fuzzy logic is not as transparent as with classical control algorithms.

Further Reading

In this chapter, we present a flavor of algorithms for digital image processing and describe how they are used in robotic systems. Sections 12.3–12.6 describe image processing algorithms: enhancement with digital filters and histogram manipulation, segmentation (edge detection), and feature recognition (corner and blob detection, identification of multiple features).

Obtaining Images

This results in a one-dimensional array of pixels that can be processed by simplified versions of the algorithms we present. The sensors can measure light of wavelengths outside the range we call visible light: longer-wavelength infrared light and shorter-wavelength ultraviolet light.

An Overview of Digital Image Processing

Infrared imaging is important in robotics because hot objects such as people and cars can be detected as bright infrared light. Section 12.6 describes how to identify blobs, which are areas whose pixels have similar intensity but are not bounded by regular features such as lines and curves.

Image Enhancement

Spatial Filters

A program can make each pixel more like its neighbors by replacing the pixel's intensity with the average of its intensity and the intensities of its neighbors. The filter is not applied to pixels at the border of the image, to avoid exceeding the bounds of the array.
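A one-dimensional box filter (matching the one-dimensional array of ground-sensor samples used in the activities) can be sketched as:

```python
def box_filter_1d(pixels):
    """Smooth a row of pixels: replace each interior pixel by the
    average of itself and its two neighbors. Border pixels are left
    unchanged, since the filter window would extend past the array."""
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        out[i] = (pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3
    return out
```

A single noise spike of intensity 30 in an otherwise dark row is spread into three pixels of intensity 10, which is exactly the smoothing effect the box filter is meant to produce.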

Fig. 12.4 a Smoothing with the box filter. b Smoothing with a weighted filter

Histogram Manipulation

Program the robot to move from left to right across the pattern, sampling the ground sensor output. Modify the program so that it replaces each sample with the average of the intensity of the sample and its two neighbors.

Fig. 12.6 a Binary image without noise. b Binary image with noise

Edge Detection

Examine the histogram to determine a threshold value that will be used to distinguish between the black lines and the background. Calculate the running sum of the contents of the bins until the sum is greater than a fraction (perhaps a third) of the total number of samples.
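This accumulate-until-a-fraction rule can be sketched as follows. The bin count, the 10-bit sensor range, and the sample values are assumptions for illustration; the idea is that once the running sum of the dark bins exceeds the chosen fraction of the samples, the upper boundary of the current bin serves as the threshold:

```python
def histogram_threshold(samples, bins=32, max_value=1024, fraction=1/3):
    """Build a histogram of the samples, accumulate bin counts from
    the dark end, and return the upper boundary of the bin at which
    the running sum first exceeds `fraction` of all samples."""
    width = max_value // bins
    hist = [0] * bins
    for s in samples:
        hist[min(s // width, bins - 1)] += 1
    running = 0
    for i, count in enumerate(hist):
        running += count
        if running > fraction * len(samples):
            return (i + 1) * width  # upper boundary of bin i
    return max_value  # no threshold found

# Hypothetical samples: dark line pixels near 100, background near 900
samples = [100] * 40 + [900] * 60
threshold = histogram_threshold(samples)
```

With these values the threshold falls just above the dark bins, cleanly separating line pixels from background pixels.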

Corner Detection

Recognizing Blobs

Summary

Further Reading

The Biological Neural System

The Arti fi cial Neural Network Model

Implementing a Braintenberg Vehicle with an ANN

Arti fi cial Neural Networks: Topologies

Multilayer Topology

Memory

Spatial Filter

Learning

Categories of Learning Algorithms

The Hebbian Rule for Learning in ANNs

Summary

Further Reading

Distinguishing Between Two Colors

A Discriminant Based on the Means

A Discriminant Based on the Means and Variances

Algorithm for Learning to Distinguish Colors

Linear Discriminant Analysis

Motivation

The Linear Discriminant

Choosing a Point for the Linear Discriminant

Choosing a Slope for the Linear Discriminant

Computation of a Linear Discriminant

Comparing the Quality of the Discriminants

Activities for LDA

Generalization of the Linear Discriminant

Perceptrons

Detecting a Slope

Classi fi cation with Perceptrons

Learning by a Perceptron

Numerical Example

Tuning the Parameters of the Perceptron

Summary

Further Reading

Approaches to Implementing Robot Collaboration

Coordination by Local Exchange of Information

Direct Communications

Indirect Communications

The BeeClust Algorithm

The ASSISIbf Implementation of BeeClust

Swarm Robotics Based on Physical Interactions

Collaborating on a Physical Task

Combining the Forces of Multiple Robots

Occlusion-Based Collective Pushing

Summary

Further Reading

Forward Kinematics

Inverse Kinematics

Rotations

Rotating a Vector

Rotating a Coordinate Frame

Transforming a Vector from One Coordinate Frame

Rotating and Translating a Coordinate Frame

A Taste of Three-Dimensional Rotations

Rotations Around the Three Axes

The Right-Hand Rule

Matrices for Three-Dimensional Rotations

Multiple Rotations

Euler Angles

The Number of Distinct Euler Angle Rotations

Advanced Topics in Three-Dimensional Transforms

Summary

Further Reading

Figures

Fig. 1.3 Robots on an assembly line in a car factory. Source https://commons.wikimedia.org/
Fig. 1.6 a LEGO ® Mindstorms EV3 (Courtesy of Adi Shmorak, Intelitek), b Poppy Ergo Jr robotic arms (Courtesy of the Poppy Project)
Fig. 1.9 Wonder software for the Dash robot. Source https://www.makewonder.com/mediakit by permission of Wonder Workshop
Fig. 1.8 VPL software for the Thymio robot
