

Conclusions, Future Directions, and Possible Extensions

7.2 Future Directions and Possible Extensions

7.2.2 Monitoring a Distributed Environment

7.2.2.4 Dealing with Uncertainty on the Model, Random Disturbances, and Measurement Errors

In this section, we consider uncertainty originating from different sources: uncertainty on the model of the discrete state evolution (uncertainty on f_i), random disturbances caused, for example, by unmodeled agents that populate the environment, and measurement errors due to false alarms or missed detections. The arguments are not formal and have the sole purpose of showing possible ways of dealing with uncertainty within this framework.

Uncertainty on the Model. In this case, given a current agent location, either (1) the next location is not uniquely determined and belongs to a known set of possible locations, or (2) only a nominal next location is known and the rest is unmodeled because it is unexpected.

In case (1), the abscissa of an agent looks as depicted in Figure 7.2.

Figure 7.2: Abscissa of one agent with uncertainty on the model of type (1).

This could correspond in practice to the fact that the agent does not behave exactly the same way every day: there might be variations from one day to another that one can model. Also, the condition that decides which way to go is generally not directly observable. In this case, nothing changes in the structure of the estimation algorithm except that, for each agent, the algorithm has to keep track of all possible branchings at the same time. This translates into a lower and an upper bound with dynamic dimension for each agent: the dimension increases when a new branching occurs, and it decreases when a branching becomes inconsistent with the measurements. The drawback is that the algorithm updates more than one variable per agent, with an increased computational cost.
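The branching bookkeeping above can be sketched as follows. This is a minimal illustrative sketch in Python, not the thesis algorithm; the function names, speed bounds, and branch offsets are all hypothetical.

```python
# Minimal sketch of multi-branch interval tracking (all names hypothetical).
# Each agent carries a set of [lower, upper] abscissa hypotheses, one per
# branching that is still consistent with the measurements.

def advance(hypotheses, v_min, v_max, dt):
    """Propagate every interval hypothesis by the assumed speed bounds."""
    return [(lo + v_min * dt, hi + v_max * dt) for lo, hi in hypotheses]

def branch(hypotheses, offsets):
    """At a modeled branching point, fork each hypothesis once per possible
    next location (offsets are hypothetical abscissa shifts)."""
    return [(lo + d, hi + d) for lo, hi in hypotheses for d in offsets]

def prune(hypotheses, firing_abscissa):
    """A sensor firing attributed to this agent keeps only the branches
    whose interval contains the firing location."""
    kept = [(lo, hi) for lo, hi in hypotheses if lo <= firing_abscissa <= hi]
    return kept or hypotheses  # if nothing matches, leave hypotheses untouched

h = [(0.0, 1.0)]                 # initial lower/upper bound for one agent
h = advance(h, 1.0, 1.5, 4.0)    # bounds become [(4.0, 7.0)]
h = branch(h, [0.0, 10.0])       # dimension grows: nominal branch and detour
h = prune(h, 5.5)                # a firing at 5.5 rules out the detour branch
```

Here the hypothesis set plays the role of the bounds with dynamic dimension: branch increases the dimension, while prune decreases it when a branch becomes inconsistent with a measurement.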

In case (2), the abscissa of each agent looks as depicted in Figure 7.3. The unshaded parts are completely unknown, as they correspond to agent behavior that was not expected and thus not modeled. If an agent goes into the unshaded region, he is lost until (if ever) he comes back to the nominal path. This corresponds in practice to the fact that, for example, one day an agent does not enter the building because he is sick or something unexpected happened to him. In this case, the agent is lost. However, robustness of the estimator requires that this does not affect "too much" the estimation error on the other agents.

Figure 7.3: Abscissa of one agent with uncertainty on the model of type (2).

This can be guaranteed under the assumption of interval observability already made. This assumption states that, for each agent i, there is periodically one sensor firing for which the interval [L_i, U_i] is the only one compatible with the abscissa coordinate at which the firing occurs, for the entire time during which [L_i, U_i] is compatible with it. As a consequence, if no firing occurs at any time at which [L_i, U_i] is the only interval compatible with the firing of the sensor, it means that agent A_i did not follow the nominal path. This fact gives an idea of how the algorithm can detect when an agent does something unexpected. Once the inconsistency is detected for one agent, the algorithm can keep estimating the positions of all the other agents as usual. Note that before the inconsistency is detected, the estimation error can increase for all of the agents (this point is illustrated in a later simulation example).
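The inconsistency test suggested by interval observability can be illustrated as follows. This is a hedged sketch: the data layout and all names are assumptions made for the example, not the thesis implementation.

```python
# Sketch of the interval-observability inconsistency test (names assumed).

def uniquely_compatible(intervals, i, sensor_abscissa):
    """True if [L_i, U_i] is the only interval containing the sensor's
    abscissa coordinate."""
    compatible = [j for j, (lo, hi) in enumerate(intervals)
                  if lo <= sensor_abscissa <= hi]
    return compatible == [i]

def off_nominal(intervals, i, sensor_abscissa, firings_in_window):
    """Agent A_i is flagged as off the nominal path if it alone could have
    triggered the sensor over the window, yet no firing was recorded."""
    return uniquely_compatible(intervals, i, sensor_abscissa) and not firings_in_window

intervals = [(2.0, 4.0), (9.0, 11.0)]        # [L_i, U_i] for agents A_0, A_1
flagged = off_nominal(intervals, 0, 3.0, []) # expected firing missing: flagged
fine = off_nominal(intervals, 0, 3.0, [3.1]) # firing observed: not flagged
```

Once off_nominal returns True for one agent, that agent's hypothesis can be dropped and the estimation of all other agents proceeds as usual, matching the argument above.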

Random Disturbances. This paragraph covers the case in which there are people wandering around the building whose identities are not known and whose behaviors are not modeled. They obviously cause the sensors to fire when they stop at locations where sensors are positioned. Robustness of the estimator requires that the estimation error does not diverge due to these random events. This can be obtained, for example, if each agent returns periodically, after a long enough period of time, to the same location where a sensor is placed. This way, if a firing is caused by a random agent, this will be detected at some later time. The basic idea is that when a firing occurs and a random agent could be its cause, the algorithm can keep track of multiple hypotheses: one in which the firing was caused by a monitored agent and one in which it was caused by a random agent.

This causes an increase of the estimation error. At some later time the false hypothesis will be detected as such, and the estimation error decreases again (this is illustrated in a simulation example).
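The two-hypothesis bookkeeping described above can be sketched as follows; all names and quantities are invented for the illustration and stand in for whatever representation the estimator actually uses.

```python
# Sketch of two-hypothesis bookkeeping for random disturbances
# (all names hypothetical).

def fork_on_ambiguous_firing(interval, firing_abscissa):
    """Keep both explanations of an ambiguous firing: the monitored agent
    caused it (interval collapses to the firing point), or a random agent
    caused it (interval is left unchanged)."""
    lo, hi = interval
    hypotheses = [interval]  # random-agent hypothesis
    if lo <= firing_abscissa <= hi:
        hypotheses.insert(0, (firing_abscissa, firing_abscissa))  # agent hypothesis
    return hypotheses

def propagate(hypotheses, v_min, v_max, dt):
    """Propagate every hypothesis by the assumed speed bounds; carrying
    several hypotheses is what temporarily increases the estimation error."""
    return [(lo + v_min * dt, hi + v_max * dt) for lo, hi in hypotheses]

def resolve(hypotheses, firing_abscissa):
    """A later firing that only the monitored agent can explain discards
    the hypotheses that are incompatible with it."""
    return [(lo, hi) for lo, hi in hypotheses if lo <= firing_abscissa <= hi]

h = fork_on_ambiguous_firing((0.0, 6.0), 5.0)  # both explanations kept
h = propagate(h, 1.0, 1.0, 3.0)                # [(8.0, 8.0), (3.0, 9.0)]
h = resolve(h, 4.0)                            # later firing at 4.0: the
                                               # "agent caused it" branch was false
```

After resolve, only the random-agent hypothesis survives, and the estimation error decreases again, as in the argument above.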

Measurement Errors. Measurement errors can be of two kinds: missed detections and false alarms. The case of a missed detection can be treated similarly to uncertainty on the model of type (2). In fact, a missed detection causes an inconsistency to be detected, since a sensor firing was expected but did not occur.

The case of a false alarm is similar to the case of random firings caused by random agents. These "spurious firings" can be detected as such, as explained in the paragraph on random disturbances.
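The mapping of the two error kinds onto the mechanisms above can be summarized in a toy dispatcher; this is purely illustrative, and the names are hypothetical.

```python
# Toy summary of how each (expected, observed) combination is handled
# (purely illustrative; names hypothetical).

def classify_event(firing_expected, firing_observed):
    if firing_expected and not firing_observed:
        # Missed detection: same treatment as model uncertainty of type (2).
        return "inconsistency: flag agent as off the nominal path"
    if firing_observed and not firing_expected:
        # False alarm: same treatment as a firing by a random agent.
        return "spurious firing: fork agent / random-agent hypotheses"
    if firing_expected and firing_observed:
        return "consistent: tighten the interval at the firing location"
    return "no event: propagate the interval bounds"
```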