3.3 DELIBERATIVE APPROACH
Mobile robot control broadly adheres to two philosophies: deliberative and reactive.
Deliberative approaches, or horizontal decomposition, advocate building a centralised world representation from the fusion of sensory data, developing plans from it and then executing the given task, with the whole process repeating cyclically.
To illustrate this point, imagine a simple line-following robot built on deliberative principles.
It has a light sensor to perceive the environment (sense) and a motor to move around (act); however, it also carries a sleek onboard computer to build a map of the line it has to traverse (plan) prior to locomotion. So, unlike Braitenberg's vehicles, sensing and acting are two different physical processes connected by planning. Compared directly with Braitenberg's vehicles, this approach needs an onboard computer, which makes the line follower far more expensive and adds bulk in both the hardware and software domains. If planning is avoided, the design is not only simpler but the execution of the task is also easier, as shown in the flow diagrams in Figure 3.12.
The early robots developed in the 1960s and 1970s, viz. the Stanford Cart, Shakey, Hilare, Alvin etc., were targeted more at getting the robot to move than at attaining higher levels of competence and cognition. They were based on the sense-plan-act model, a top-down approach or functional decomposition. In the sense-plan-act model, the sense system translates the raw sensor data into a world model, the plan system then develops plans to execute a goal in this world model, and finally the act system executes the plan. The intelligence of this model lies in the plan system, or the programmer, and not in the act system.
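As a sketch, this cycle can be written as three decoupled stages. The `sense`, `plan` and `act` functions below, and the dictionary world model they share, are hypothetical illustrations and not taken from any of the robots named above; the key point is that act receives no feedback while the plan runs.

```python
# Hypothetical skeleton of the sense-plan-act cycle. Sensor readings,
# the world-model format and the action names are all invented.

def sense(raw_readings):
    """Translate raw range readings into a world model (here, a dict)."""
    return {"obstacles": [r for r in raw_readings if r < 0.5]}

def plan(world_model, goal):
    """Derive a plan (a list of actions) from the world model."""
    if world_model["obstacles"]:
        return ["turn_left", "forward"]
    return ["forward"]

def act(plan_steps):
    """Execute the plan step by step, with no sensing in between."""
    return [f"executed {step}" for step in plan_steps]

# One full cycle: the world is implicitly assumed static while acting.
readings = [0.9, 0.3, 0.8]
world = sense(readings)
actions = plan(world, goal="line_end")
log = act(actions)
```

Note that the intelligence sits entirely in `plan`; `act` is a blind executor, which is precisely the weakness discussed next.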
3.3.1 Shortcomings of the deliberative approach
By the mid-1980s researchers had found several lacunae in the sense-plan-act model. The inability to respond in real time, frequent failure to attend to emergencies, bottlenecks leading to long latency and lacklustre performance in unknown, unpredictable and noisy environments confirmed the inadequacy of the approach. Since sense-plan-act worked cyclically, adding more sensors or enabling multiple goals meant more data to transfer around the cycle, leading to even worse performance. Horizontal decomposition, shown in Figure 3.13, inherently assumes that the environment remains static between successive plans. Such an assumption correlates very badly with a dynamic world of moving obstacles and changing goal points. Another drawback was the lack of robustness: because the information is processed sequentially, a failure in any single component causes a complete breakdown of the system. Such issues in implementation, motivations from the natural world such as Braitenberg's and Walter's machines, and the obvious need for parallelism in mobile robot control led to the development of the reactive paradigm, or vertical decomposition. Beyond the inadequacies of the deliberative approach, two more reasons contributed towards the nouvelle AI: motivations from the natural world and the differences between robots and computers.
FIGURE 3.12 Sense-plan-act approach vs. reactive approach. In (a) the two parallelograms are separated by a rectangle, the processing of information between input and output. In contrast, (b) lacks planning and things happen on the run, more like the control paradigm for Braitenberg vehicles 1 to 4.
FIGURE 3.13 Deliberative approach or horizontal decomposition. From sensing to action, the modules of perception, modelling, planning, execution and motor control are designed in series. Thus a bottleneck at any module slows down the whole process, and every module from perception to motor control has to work for the control structure to succeed; a failure at any of these modules will lead to a breakdown of the robot. Adapted from Hu and Brady [157].
3.3.2 From animals to robots
Biological science has always been an opulent source of motivation for the development of AI and robotics. Artificial beings with an intelligence of their own should rightfully draw motivation from the living world. One of the earliest pioneers was Da Vinci, who designed futuristic vehicles inspired by birds. Ants and bees have influenced optimisation algorithms, and anemotaxis has motivated navigation methods. Fukuda devised his brachiating robot by studying gibbons. The works of Gibson [121] and later Duchon [99] elucidate the basis of ecology-based robotics and are derived from animal behaviour. An entire discipline, cybernetics, has taken shape to marry the ideas of biology to those of AI. All of swarm robotics is influenced by natural swarms of animals, birds, insects etc.
3.3.3 Robots and computers are fundamentally different
AI was developed in the latter half of the 20th century with a focus on number-crunching systems, and it helped in the development of computers. The sense-plan-act
the final behaviour of the robot is rather different from what is coded in. The sense-plan-act approach, an open-loop plan execution without real-time feedback from the environment, is not sufficient to deal with environmental uncertainty, moving obstacles and emergencies.
3. Opportunism suppresses the maxim of plan: Blindly following plans has proven detrimental in real-world scenarios. Consider a foraging robot, shown schematically in Figure 3.14, built on the sense-plan-act tenet with two types of sensors: a laser for obstacle avoidance and an olfactory sensor for food detection. A simple serial algorithm for it would be:
Algorithm 1 Foraging robot using a serial approach
repeat
    move forward
    if obstacle then
        move left
    else
        use olfactory to detect food item
    end if
until forever
At point A the food item is within the detecting range of the olfactory sensor. Going by serial operation, the robot must first avoid the obstacle and move to point B, where the food item will be out of its range of detection. If, instead of moving left at point A, the robot had taken an opportunistic approach and moved right, it would have tracked down the food item successfully.
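The contrast can be sketched in code. The two step functions below are illustrative only, with boolean flags standing in for the laser and olfactory readings at point A:

```python
# Illustrative comparison of the serial algorithm above with an
# opportunistic variant; sensor flags and action names are invented.

def serial_step(obstacle, food_detected):
    """Algorithm 1: obstacle avoidance always pre-empts foraging."""
    if obstacle:
        return "move_left"           # ignores the food even if detected
    return "grab_food" if food_detected else "move_forward"

def opportunistic_step(obstacle, food_detected):
    """Food within range suppresses the default avoidance plan."""
    if food_detected:
        return "move_right_to_food"  # seize the opportunity at point A
    return "move_left" if obstacle else "move_forward"

# At point A both an obstacle and the food item are within range.
print(serial_step(True, True))         # move_left (food lost at B)
print(opportunistic_step(True, True))  # move_right_to_food
```

The only change is the priority ordering of the two conditions, yet it decides whether the food item is found at all.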
4. Graceful degradation: For robots operating in the real world, things can get difficult in three particular circumstances [164]:
(a) A given command structure behaves differently in a dynamic environment. As an example, a robot following a white-line trail discerns the track from the contrast of the two intensities gathered by its light sensor;
when it enters a heavily lit area, the contrast of the two intensities is much poorer, which makes the robot slow down or lose track of the white line.
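A minimal sketch of this failure mode, assuming a fixed contrast threshold and invented intensity readings:

```python
# Illustration of case (a): a fixed contrast threshold works in normal
# light but fails under heavy ambient lighting. All numbers are made up.

THRESHOLD = 30  # minimum intensity difference to discern the line

def on_line(line_reading, floor_reading):
    """Detect the white line from the contrast of the two intensities."""
    return (line_reading - floor_reading) > THRESHOLD

# Normal lighting: strong contrast, the line is discerned.
print(on_line(200, 100))   # True
# Heavily lit area: both readings saturate and the contrast collapses.
print(on_line(250, 240))   # False, so the robot slows or loses the line
```

The command structure is unchanged; it is the environment that has changed around it.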
(b) The robot's program makes an assumption about the world which proves to be false.
As an example, consider a robot programmed sequentially to move forward 20 metres and fetch a black ball: the tachometers are coded for 20 metres, but the program assumes no friction. With substantial frictional forces, the robot will fail to achieve this task.
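This assumption can be made concrete with a toy dead-reckoning calculation; the wheel geometry, encoder resolution and slip fraction below are invented for illustration:

```python
# Illustration of case (b): distance is commanded open-loop in encoder
# counts, assuming no wheel slip. All parameters are invented.

WHEEL_CIRCUMFERENCE_M = 0.2
COUNTS_PER_REV = 100

def counts_for_distance(distance_m):
    """Open-loop: distance assumed strictly proportional to counts."""
    return int(distance_m / WHEEL_CIRCUMFERENCE_M * COUNTS_PER_REV)

def distance_travelled(counts, slip_fraction):
    """With friction/slip, part of each revolution moves the robot nowhere."""
    return counts / COUNTS_PER_REV * WHEEL_CIRCUMFERENCE_M * (1 - slip_fraction)

target = counts_for_distance(20.0)         # 10000 counts commanded for 20 m
actual = distance_travelled(target, 0.15)  # 15% slip: only 17 m covered
```

The program's world model says 20 m; the world delivers 17 m, and the ball is out of reach.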
(c) Sensor failure: given inaccurate input data, a computer program simply produces an unreliable result; for a robot, however, inaccurate data from the sensors is commonplace. The robot's programming should be designed so that its performance does not break down due to such a failure. Parallel processing over multiple sensors gets around this issue.
FIGURE 3.14 Opportunism suppresses plans. The robot is foraging for food and has two types of sensing, laser for obstacle avoidance and olfactory to detect the food, and is driven by a serial algorithm. At point A, the robot must move right in order to find the food item, which is not possible under serial execution.
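A minimal sketch of such redundancy over parallel sensors, with a hypothetical `fuse` helper that averages whichever sensors still answer:

```python
# Illustration of case (c): each sensor is read independently and
# failures (None) are skipped, so one dead sensor degrades accuracy
# without breaking the robot down. The readings are invented.

def fuse(readings):
    """Average whichever sensors answered; degrade, don't break down."""
    valid = [r for r in readings.values() if r is not None]
    if not valid:
        raise RuntimeError("all sensors failed")
    return sum(valid) / len(valid)

# Three range sensors; the centre one has failed.
readings = {"left": 1.0, "centre": None, "right": 1.4}
estimate = fuse(readings)  # 1.2, computed from the two healthy sensors
```

Contrast this with the serial pipeline of Figure 3.13, where the same failure would halt the entire chain.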
These difficulties of a robot interacting with the real world separate it from number-crunching systems. Efforts to achieve parallelism in sensor-actuator pairs, which sustain the robot through sensor failures, deal effectively with a dynamic real world and permit an opportunistic outlook on a given task, were realised in the reactive approach. This is also termed the nouvelle AI, as it was a paradigm shift from the traditional AI.