
The referenced state


It should be obvious that the navigation scheme we are pursuing is one that requires the robot to know roughly where it is before it can begin looking for and processing navigation features. Very few sensor systems, other than GPS, are global in nature, and very few features are so unique as to absolutely identify the robot’s position.

When a robot is first turned on, it is unreferenced by definition. That is, it does not know its position. To be referenced requires the following:

1. The robot’s position estimate must be close enough to reality that the sensors can image and process the available navigation features.

2. The robot’s uncertainty estimate must be greater than its true position error.
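To make the two conditions concrete, here is a minimal sketch in Python. Every name in it (AxisEstimate, true_error, sensor_capture_range) is invented for the example; note in particular that a running robot can never measure its true error directly, so the second condition is really a design requirement on the uncertainty estimator rather than a test the robot can perform at runtime.

```python
from dataclasses import dataclass

@dataclass
class AxisEstimate:
    value: float        # estimated position along one axis
    uncertainty: float  # the robot's own uncertainty estimate

def can_become_referenced(est: AxisEstimate,
                          true_error: float,
                          sensor_capture_range: float) -> bool:
    # Condition 1: the estimate is close enough to reality that the
    # sensors can image and process the available navigation features.
    close_enough = true_error <= sensor_capture_range
    # Condition 2: the uncertainty estimate exceeds the true position
    # error, so the navigation agents will not reject valid corrections.
    honest = est.uncertainty > true_error
    return close_enough and honest
```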

If our robot is turned on and receives globally unique position data from the GPS or another sensor system, then it may need to move slightly to determine its heading from a second position fix. Once it has accomplished this, it can automatically become referenced.
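The geometry of that second fix is simple, as the sketch below shows. It assumes x is east, y is north, and azimuth is measured in degrees clockwise from north; these conventions are illustrative, not taken from the text.

```python
import math

def heading_from_fixes(x1: float, y1: float,
                       x2: float, y2: float) -> float:
    """Azimuth implied by moving from fix 1 to fix 2, in degrees
    clockwise from north. The fixes must be far enough apart that
    position noise does not dominate the bearing."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360.0

# Example: a robot that moves from (0, 0) to (1, 1) is heading 45
# degrees (northeast).
assert abs(heading_from_fixes(0.0, 0.0, 1.0, 1.0) - 45.0) < 1e-9
```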

If, however, a robot does not have global data available, then it must be told where it is. The proper way to do this is to tell the robot what its coordinates and azimuth are, and to also give it landmarks to check. The following incident will demonstrate the importance of this requirement.

Flashback…

One morning I received an “incident report” in my email with a disturbing photo of a prone robot attached. A guard had stretched out on the floor beside the robot in mock sleep. Apparently, the robot had attempted to drive under a high shelving unit and had struck its head (sensor pod) on the shelf and had fallen over. This was a very serious—and thankfully extremely rare—incident.

Those who follow the popular TV cartoon series “The Simpsons” may remember the episode in which Homer Simpson passed down to his son Bart the three lines that had served him well his whole life. The third of these lines was, “It was that way when I found it, boss.” In the security business, an incident report is an account of a serious event that contains adequate detail to totally exonerate the writer. In short, it always says, “It was that way when I found it, boss.”

This incident report stated that the robot had been sent to its charger and had mysteriously driven under the shelving unit with the aforementioned results. In those days we had a command line interface that the guard at the console used to control the robot. A quick download of the log file showed what had happened.

Instead of entering the command, “Send Charger,” the operator had entered the command “Reference Charger.” This command told the robot its position was in front of the charger, not in a dangerous aisle with overhead shelves. The reference program specified two very close reflectors for navigation. The robot was expected to fine-tune its position from these reflectors, turn off its collision avoidance, and move forward to impale itself on the charging prong. Since there was no provision in the program for what to do if it did not see the reflectors, the robot simply continued with the docking maneuver.

The oversight was quickly corrected, and interlocks were later added to prevent the repetition of such an event, but the damage had been done. Although this accident resulted in no damage to the robot or warehouse, it had a long-lasting impact on the confidence of the customer’s management in the program.

If the robot’s uncertainty estimate is smaller than the true error in position, then the navigation agents will reject any position corrections. If this happens, the robot will continue to become more severely out of position as it moves. At some uncertainty threshold, the robot must be set to an unreferenced state automatically.
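A minimal sketch of this automatic transition is shown below; the threshold, growth rate, and class names are illustrative values invented for the example, not figures from the book.

```python
UNREFERENCE_THRESHOLD = 2.0  # meters; an illustrative value only
GROWTH_PER_METER = 0.05      # hypothetical odometry uncertainty growth rate

class NavigationState:
    def __init__(self) -> None:
        self.referenced = False
        self.uncertainty = 0.0

    def on_move(self, distance: float) -> None:
        # Odometry inflates uncertainty as the robot moves.
        self.uncertainty += GROWTH_PER_METER * abs(distance)
        # Past the threshold, corrections are being rejected and the
        # position error can only grow, so drop to the unreferenced
        # state and let an operator intervene.
        if self.uncertainty > UNREFERENCE_THRESHOLD:
            self.referenced = False
```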

If the robot successfully finds the referencing features, then—and only then—can it be considered referenced and ready for operation. Certain safety-related problems such as servo stalls should also cause a robot to become unreferenced. This is a way of halting automatic operations and asking the operator to ensure it is safe to continue operation. We found that operators ranged from extremely competent and conscientious to untrainable.

Reducing uncertainty

As our robot cavorts about its environment, its odometry is continually increasing its uncertainty. Obviously, it should become less uncertain about an axis when it has received a correction for that axis. But how much less uncertain should it become?

With a little thought, we realize that we can’t reduce an axis uncertainty to zero. As the uncertainty becomes very small, only the tiniest implied corrections will yield a nonzero quality factor (Equations 11.9 and 11.10). In reality, the robot’s uncertainty is never zero, and the uncertainty estimate should reflect this fact. If a zero uncertainty were to be entered into the quality equation, then the denominator of the equation would be zero and a divide-by-zero error would result.

For these reasons we should establish a blackboard variable that specifies the minimum uncertainty level for each axis (separately). By placing these variables in a blackboard, we can tinker with them as we tune the system. At least as importantly, we can change the factors on the fly. There are several reasons we might want to do this.

The environment itself sometimes has an uncertainty; take for instance a cube farm. The walls are not as precise as permanent building walls, and restless denizens of the farm may push at these confines, causing them to move slightly from day to day. When navigating from cubicle walls, we may want to increase the minimum azimuth and lateral uncertainty limits to reflect this fact.
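As a sketch of how such a blackboard might look (the names and values here are invented for illustration), clamping every post-correction uncertainty to a per-axis floor keeps the denominator of the quality equations away from zero, and the floors can be raised on the fly for soft environments like the cube farm:

```python
# Per-axis minimum uncertainty floors, tunable at runtime.
blackboard = {"x": 0.02, "y": 0.02, "azimuth": 0.5}  # units illustrative

def clamp_uncertainty(axis: str, proposed: float) -> float:
    """Never let an axis uncertainty fall below its blackboard floor."""
    return max(proposed, blackboard[axis])

# Navigating from cubicle walls: raise the azimuth and lateral floors
# on the fly to reflect the environment's own uncertainty.
blackboard["azimuth"] = 1.0
blackboard["y"] = 0.05
```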

How much we reduce the uncertainty as the result of a correction is part of the art of fuzzy navigation. Some of the rules for decreasing uncertainty are:

1. After a correction, uncertainty must not be reduced below the magnitude of the correction. If the correction was a mistake, the next cycle must be capable of correcting it (at least partially).

2. After a correction, uncertainty must never be reduced below the value of the untaken implied correction (the full implied correction minus the correction taken). This is the amount of error calculated to be remaining.
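Both rules, together with the blackboard floor, are lower bounds on the new uncertainty. One simple policy, sketched below as an illustration rather than the book’s implementation, is to reduce the axis uncertainty to the largest of these bounds:

```python
def post_correction_uncertainty(current: float,
                                implied: float,
                                taken: float,
                                floor: float) -> float:
    """Reduce one axis's uncertainty after taking a correction.

    implied: the full implied correction for the axis.
    taken:   the portion of it actually applied this cycle.
    floor:   the blackboard minimum for this axis.
    """
    untaken = abs(implied - taken)           # rule 2: error still remaining
    bound = max(floor,                       # never below the minimum
                abs(taken),                  # rule 1: can undo a mistake
                untaken)                     # rule 2
    # Reduce toward the bound, but never raise uncertainty here.
    return min(current, bound)
```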

It should be apparent that we have selected the simple example of column navigation in order to present the principles involved. In reality, features may be treated discretely, or they may be extracted from onboard maps by the robot.

In any event, the principles of fuzzy navigation remain the same. Finally, uncertainty may be manipulated and used in other ways, and these will be discussed in upcoming chapters.

CHAPTER 12

Sensors, Navigation Agents and Arbitration

Different environments offer different types of navigation features, and it is important that a robot be able to adapt to these differences on the fly. There is no reason why a robot cannot switch seamlessly from satellite navigation to lidar navigation, or to sonar navigation, or even use multiple methods simultaneously. Even a single-sensor system may be used in multiple ways to look for different types of features at the same time.
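One plausible way to achieve such seamless switching, sketched here with invented names rather than the architecture this chapter will develop, is to place each sensor behind a common agent interface and let an arbiter take the best-quality correction available on each cycle:

```python
from typing import Optional, Protocol

class NavigationAgent(Protocol):
    def observe(self) -> Optional[tuple[float, float]]:
        """Return (implied_correction, quality), or None when no
        feature of this agent's type is currently visible."""
        ...

def arbitrate(agents: list["NavigationAgent"]) -> Optional[float]:
    """Take the correction from whichever agent reports the best
    quality this cycle; GPS, lidar, and sonar agents can come and
    go without the caller changing."""
    best: Optional[tuple[float, float]] = None
    for agent in agents:
        result = agent.observe()
        if result is not None and (best is None or result[1] > best[1]):
            best = result
    return None if best is None else best[0]
```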
