
2.6 THREE FORKS IN THE ROAD

performance, computational capacity and the hardware of the machine on which the simulation is run, viz. RAM, data rate, CPU power, clock speed etc. As with the previous method, this approach also fails to account for unknown terrain and dynamic tasks.

Evaluation is easier for social robots, multiple-robot groups, robot swarms etc., as agency is tied to the job at hand, is strongly context related and is not arbitrary. Methods of evaluation therefore focus on the quality of human-robot interaction and of group behaviour, respectively.

convergence of the mind tools of the Gregorian creatures, who can rigorously, collectively and publicly 'Generate-and-Test': the scientific method. It is debatable whether this transition started with the invention of the wheel, with the Copernican revolution in the mid 1500s or with the development of string theory in the late 1980s. However, the ability to scientifically process and structure information to form intelligent action is the 'Generate-and-Test' mechanism of the scientific creature.

FIGURE 2.29 Dennett's Tower. Winfield suggests another layer to Dennett's tower, as shown, and proposes the evolution of artificially engineered Walterian creatures in the near future, founded on the principles of embodied AI and named after W. G. Walter.

Extending the tower upward, Winfield [359] suggests Walterian creatures, where scientific creatures are instrumental in the development of artificial beings. The tools initially made by Gregorian creatures take on a life of their own, independent of the resources and support of the toolmakers. The Walterian creatures are smart tools that have learned to think, grown up, left the toolbox behind and taken on a being of their own. Like Gregorian creatures, they are able to share tools, knowledge and experience. Embodied AI, ANIMAT and Artificial Life have this ability by definition: tools attaining life.

However, as an advancement over the Gregorians, the Walterians are capable of memetic learning. Therefore, if at least one fellow Walterian has learned a skill and is either online or has previously uploaded that skill, then another Walterian can simply download it. The Walterians are intended to be beyond human beings, as creatures that have not only left the toolbox and the toolmaker but also the gene pool, escaping natural selection. They would be creations of Gregorians but would not be bound to the biosphere afforded by Planet Earth, viz. instead of needing oxygen, water and food to survive, the Walterians, after evolving themselves, would eventually need only energy. However, they would still relate to their Gregorian and scientific roots via inheritance and evolutionary processes artificially motivated by biology.

Winfield believes that Walterian creatures would be profoundly different and seemingly unimaginable to Gregorian and scientific creatures. These heirs to Elmer and Elsie are expected to be far superior to the humble fungus eaters, since the Walterian creatures are capable of evolving themselves as per their needs and requirements. We human beings augment and acquaint ourselves with newer tools to compensate for the limits of our sensory and survival capabilities. In stark contrast, the Walterians can effect their own artificial evolution. The ability is not entirely new and is found in mother nature, where lizards, earthworms and a few other lower animals can grow back some of their limbs if they are lost in some unfortunate incident. However, the Walterians will be able to evolve much faster than all known biological lifeforms. As an example, Winfield considered a futuristic scenario in which an intelligent autonomous explorer robot is charting an unknown planet. The robot, a Walterian, has the capability to simulate and foresee aspects of the future and, as per the simulation, re-build parts of itself on the fly using an inbuilt 3D-printer-like facility. Thus, by smartly evolving, it deals with the situation it has encountered in that unknown terrain. Suppose the unknown terrain is a planet that has large waterbodies and the robot falls into one of them; a fully evolved Walterian may be able to 'grow fins' and replicate fish-like swimming ability before it drowns, thus evolving faster than the dynamic environment to ensure survival and well-being.

Walterian creatures are not without parallels. Toda and Braitenberg both predicted the natural evolution of cooperation, competition and interdependence, creating an ecological niche of artificial creatures trending towards the development of their own societies and cultures. The ability to change their own physiology and control their adaptability and evolution is similar to the models of self-replicating machines discussed by Von Neumann in the 1940s. In the real-robot domain, current-day AI research has developed robots with multiple limbs which can continue to function and pursue their goals even when they have lost a limb and some functionality. These robots can 'heal' and 'grow' the lost limb(s) on the fly using an on-board 3D printing facility.

2.6.1 The problem of completeness — Planning is NP-hard

A robot is usually tasked with locomotion: to move to an assigned location in reasonable time, with the least effort and hassle. Motion planning and trajectory generation are often used for these kinds of problems. A good many other sophisticated tasks are extensions of locomotion; therefore navigation and path planning are seen both as basic behaviours and as design tools.

It has been shown that planning gets more complicated with the dimensionality of the planning space, viz. two-dimensional navigation confined to an xy plane is simpler to design and easier to program than three-dimensional navigation in xyz space. The complexity of path planning increases exponentially with the number of dimensions and with an increasing number of obstacles. However, even simple planning-centric jobs such as go-to-goal, search, tracking, mapping etc. may in principle take infinite time and/or become so involved that they are rendered impractical. These types of problems are called NP-hard (non-deterministic polynomial-time hard). For such problems it is rather easy to verify a given solution, but there is no analytical method to find a solution in finite and reasonable time. This renders planning incomplete, not always yielding a solution. Practical designs therefore include stopping conditions, known as 'exception handling', to prevent an unending loop: the robot stops after a few attempts and reports that the job cannot be accomplished.

A good algorithm should guarantee finding a solution when one exists and report if there is none, but issues of completeness can never be 'cured'. They can, however, be suitably avoided or reduced to a minimum. Exhaustive search is an apparent solution but is a compromise in time; other methods are granulating space by slicing it pre-emptively, employing subgoals and hybrid approaches, and employing a second path planner when the first one is rendered redundant. These techniques are navigation centric and are discussed in Chapter 4.
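To make the idea concrete, the following is a minimal sketch, not taken from the text, of a grid-based planner with such a stopping condition; the grid, the expansion budget and the function names are invented for illustration.

    from collections import deque

    def bounded_plan(grid, start, goal, max_expansions=10_000):
        # Breadth-first search over a 2D occupancy grid.
        # grid[r][c] == 0 is free space, 1 is an obstacle.
        # The expansion budget acts as the 'exception handling': if it
        # is exhausted, the planner gives up and reports failure rather
        # than searching without end.
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        parent = {start: None}
        expansions = 0
        while frontier:
            if expansions >= max_expansions:
                return None, "budget exhausted: job cannot be accomplished"
            node = frontier.popleft()
            expansions += 1
            if node == goal:
                path = []                      # walk back through parents
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1], "path found"
            r, c = node
            for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = step
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and step not in parent:
                    parent[step] = node
                    frontier.append(step)
        return None, "no path exists"

    world = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
    print(bounded_plan(world, (0, 0), (2, 0)))

Note the asymmetry characteristic of NP-hard problems: verifying a returned path (each cell free and adjacent to the next) is cheap, whereas finding one in a large, cluttered, high-dimensional space may exhaust any practical budget.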

Completeness may come across as a problem for designing robots, but it is omnipresent and is also tagged to human cognition; at various times we human beings may fail to comprehend a solution which does exist in principle, or we may be stuck in logical conundrums when no solution exists.

2.6.2 The problem of meaning — The symbol grounding problem

In the 1980s Searle proposed a thought experiment, which can be worded as follows: suppose in the near future AI has developed an artificial agency which can understand Chinese and can also pass the Turing Test, rendering output indistinguishable from a human response, at least in text and speech content. Such an AI would be able to convince any Chinese speaker that it too is a Chinese-speaking human being. Searle's inquiry is: does the agency literally 'understand' Chinese (strong AI), or is it merely simulating the ability to understand Chinese (weak AI)? Since the agency produces its output by relating the input to rules of syntax, as given in the program script, without really grasping the semantics (as is true for current-day translating software, shown in Figure 2.30), the conclusion is that a program script cannot lead to 'consciousness' but can at best mimic it, and therefore strong AI cannot exist.

The symbol grounding problem is very closely related to Searle’s Chinese room problem.

The symbol grounding problem is the shortcoming in associating verbal labels, i.e. words and syntax (symbols), with actual aspects of the world. For example, consider a robot which has a ball in front of it. The robot will not be able to connect the word (symbol) to the object (aspect), and on being given a command like “pick up the ball” will not be able to execute the job. Human beings overcome this problem as our cognition is developed by neural learning and groomed by gradual acquaintance with the environment and known social norms.
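As an illustrative sketch, and a hypothetical one rather than anything from the text, the common engineering workaround is a hand-authored lookup from words to perceptual predicates; the grounding then resides with the programmer, not the robot, and any unlisted symbol simply fails.

    # A hypothetical, hand-authored symbol table: each word is tied to a
    # predicate over the robot's percepts. The grounding is supplied by
    # the programmer, not established by the robot itself.
    SYMBOL_TABLE = {
        "ball": lambda obj: obj["shape"] == "sphere",
        "box":  lambda obj: obj["shape"] == "cuboid",
    }

    def find_referent(word, percepts):
        # Return the first perceived object matching the word, if any.
        predicate = SYMBOL_TABLE.get(word)
        if predicate is None:
            return None          # an unlisted symbol has no grounding here
        return next((obj for obj in percepts if predicate(obj)), None)

    scene = [{"id": 1, "shape": "sphere", "colour": "red"},
             {"id": 2, "shape": "cuboid", "colour": "blue"}]
    print(find_referent("ball", scene))    # grounded: finds object 1
    print(find_referent("mango", scene))   # ungrounded symbol: None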

The symbol grounding problem inquires, “Can a robot deal with grounded symbols?”, or, can a robot recognise, represent and communicate information about the world? Engineers and designers have been able to make robots which can associate a word with an image or video by employing intensity heuristics or image processing; however, this is limited to a select, known set of objects, and tends to tailor the robot to a particular job, sacrificing context and therefore its versatility across various other jobs and serendipitous activities such as exploration and discovery. Therefore, a robot can be programmed to interact with and manipulate symbols grounded in reality via its sensorimotor embodiment, but such grounding obviously lacks universality.

FIGURE 2.30 Google Translate and similar translating software are more like fast dictionary searches than attempts to understand the language, its semantics and cultural aspects. A machine learning algorithm will easily translate alphabets, words and sentences by observing patterns. In contrast, a human translator will learn the language rather than merely relate the symbols to form semantics. As Searle argues, a computer only manipulates the symbols and will not understand the semantics. Therefore strong AI does not exist and no artificial means will ever supersede human cognition.

Harnad [142] has suggested that the symbol grounding problem is more compelling than merely developing a robot dictionary of sorts by relating an image to a name, as shown in Figure 2.31; to deal with grounded symbols is the robot's ability to autonomously and ubiquitously establish the semiotic networks relating symbols to the world. Therefore, the phenomenon of symbol grounding sets the precedent for conscious behaviour, where mental states are of vital importance.

2.6.2.1 Solving the symbol grounding problem

Attempts to solve the problem have ranged from approximate solutions, to complete denial of the problem, to accepting it as an inherent problem of human cognition.

1. Approximate solutions: The most obvious method of attacking the symbol grounding problem is supervised learning. Experiments which enable a handshake between the verbal and vision modes have demonstrated gradual improvement of behaviour; a minimal sketch of this idea is given after this list. However, this is only a good approximation, which broadens the horizons without ever solving the problem.

2. Avoided for low-level behaviours: The problem, which seemingly raises serious concerns for computers, is insignificant for robots, or at best remains a philosophical issue which can be safely avoided. For embodied AI, the onus is more on constructing and manipulating symbols than on really grounding them. Since robots are localised, the meaning of objects and actions can be gradually ascertained, or completely neglected if it does not matter to the robot. For example, for a robot with low-level behaviour it will hardly mean much whether an obstacle is a book, a brick or a box; all it needs to do is avoid the obstacle. Therefore, in embodied cognition the symbol grounding problem is more of a technical problem than a problem of cognition, and Vogt [340] calls it the

‘physical symbol grounding problem’.

FIGURE 2.31 A very poor solution to the symbol grounding problem. An apparent brute-force solution to the symbol grounding problem is to label everything. However, that is still insufficient to develop all of the semiotic relationships and give meaning to objects.

3. As a philosophical limitation of human cognition: The need for meaning is higher-level cognition. When attempts at a solution for a robot are made, the semiotic relations, and hence the meanings of the objects, are carefully mapped out and then coded by human programmers; most of these mappings are based on human semantics and culture and are not autonomously established by the artificial agent, as the problem definition demands. Therefore, such a solution to the symbol grounding problem is biased by human knowledge and reasoning, not arbitrated by the autonomous agent.
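The supervised-learning approximation of item 1 can be sketched minimally as follows. This is a hypothetical illustration, with invented features and labels, of a 'handshake' between the vision and verbal modes in which the robot learns one feature centroid per word from examples labelled by a human teacher.

    from collections import defaultdict
    import math

    def train(labelled_percepts):
        # Learn one feature centroid per word from (features, word) pairs
        # supplied by a human teacher.
        sums, counts = {}, defaultdict(int)
        for features, word in labelled_percepts:
            if word not in sums:
                sums[word] = [0.0] * len(features)
            sums[word] = [s + f for s, f in zip(sums[word], features)]
            counts[word] += 1
        return {w: [s / counts[w] for s in sums[w]] for w in sums}

    def name_percept(features, centroids):
        # Ground a new percept by the nearest learned centroid.
        return min(centroids, key=lambda w: math.dist(features, centroids[w]))

    # Toy features (roundness, redness); the labels come from the teacher.
    data = [([0.90, 0.80], "ball"), ([0.95, 0.70], "ball"),
            ([0.10, 0.20], "box"),  ([0.20, 0.10], "box")]
    centroids = train(data)
    print(name_percept([0.85, 0.75], centroids))   # prints 'ball'

Every new labelled example broadens the robot's vocabulary, but the grounding still originates in the teacher's labels rather than being established autonomously, which is why this remains an approximation.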

Touretzky [324] suggests a 'robot version of the Chinese room problem', which poses a counter-argument to Searle by using a robot instead of a computer. Modifying the Chinese room problem, suppose we have a robot which can pass the Turing Test, speaking boldly in idiomatic Chinese, and the question asked is 'walk up to the window and tell me what you see'. Here, visual perception becomes an inquiry into mental imagery, and the answer may not be the desired human response, viz. the robot may not perceive the sky as blue or the sunlight as warm, thereby demonstrating that the Chinese room problem can be overcome with embodied agency. This is a version of the Turing Test, and such methods are further discussed in Chapter 9.

Symbol grounding is hardly a concern for robots with low-level behaviour, but it will be an issue when developing higher cognitive functions to enable human-like behaviour.

The shortcoming in relating symbols to the real world is not a problem unique to robots, but one often faced by us human beings. For example, on encountering fruits and vegetables from a different biome, many of us face the same issue, and we resort to an online search on the Internet.

The thoughts which give meaning to an object and the thoughts which interpret it are often not the same. Therefore, meaning in higher cognitive functions is always contextual and will differ with perception, memory, training, cognitive bias etc. Hence, the symbol grounding problem will never be solved [79, 310], but it can be effectively reduced with (1) compartmentalisation of knowledge, often by localisation and context-driven tasks, viz. for navigation it can be marking the goal point, getting to the red mark, crossing three static obstacles etc., (2) machine learning and (3) since we are in the age of the Internet, use of an online knowledge base to search for the suitable meaning of a given object or instance.

2.6.3 The problem of relevance — The frame problem

Of the three problems, the frame problem has attracted the most controversy [166] and has had the audacity to change the discipline of AI. It is not an issue pertaining to AI or programming; rather it is a lacuna in logic and in our understanding of the world, and is often reworded as 'the problem of relevance'. The problem concerns which changes are relevant to the job at hand. For example, if I am walking down a street and I see a banana skin about six feet ahead of me, I will change my direction a bit to avoid it.

Now consider that, as I am walking, instead of a banana skin there is a typhoon far away in the Siberian tundra. It will apparently not affect my walk, but there are situations where relevance cannot be ascertained this easily.

The problem has been interpreted by many renowned philosophers and AI scientists.

Dennett [89] discusses the frame problem with an illustration where a robot, Robot R1, has to go inside a room to obtain a battery to power itself while avoiding a ticking time bomb, as shown in Figure 2.32. Robot R1 identifies the battery and the bomb and walks out with a wagon which has both the battery and the time bomb on it; soon the bomb explodes, and the robot is blown to bits. Robot R1 knows the bomb is on the wagon but fails to comprehend that bringing the wagon out of the room also brings out the bomb; it fails the task because it cannot deduce the consequences of its own actions. To overcome this seemingly trivial issue, the robot designers develop Robot Deducer R1D1, which is more accomplished than R1, as it is designed to recognise not only the intended implications of its acts but also their side effects. The underlying idea is that R1D1 will not make the mistake of rolling out the battery together with the bomb, because it will be able to deduce the implications (read blunders) of doing so. Put to work, R1D1 tries to deduce various (often unnecessary and extraneous) consequences of its actions. Soon the bomb explodes and, lost in its blissful thinking, R1D1 meets the same fate as its predecessor. A third, improved design, Robot Relevant Deducer R2D1, tries an exhaustive, list-like approach to categorise the consequences of its actions as 'relevant' and 'irrelevant', a thorough approach to finding all possible consequences of its actions. Unfortunately, the bomb goes off and Robot Relevant Deducer R2D1 is also blown to bits. All three robots suffer from the 'frame problem'; none of them has a way of gleaning what is relevant to the problem at hand7.

Across these three robots, as the amount of knowledge increases, so does the number of lines of code needed to make the robot deduce the relevant information to handle the problem at hand. As more lines of code and seemingly more sense of reason are pushed into programming these robots, their efficiency decreases, viz. R1, with the simplest coding and information, was at least able to walk out with a wagon, which was not achieved by R1D1 and R2D1, both of which were blown apart while deducing the various consequences of the action. R1D1 and R2D1 were in a dilemma as to when to stop thinking, or Hamlet's problem8. The frame problem can be seen as Hamlet's problem in the engineering domain.
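A hypothetical sketch, not from Dennett, of why R1D1-style deduction fails: if every action has a handful of side effects and each side effect spawns further consequences, the deduction tree grows exponentially, and a fixed deadline, the ticking bomb, expires long before the enumeration is complete. The branching factor, depth and deadline below are invented for illustration.

    import time

    def consequences(action, depth, branching=5):
        # Each consequence spawns `branching` further consequences, so the
        # tree has branching**depth leaves; a robot that insists on deducing
        # everything before acting can never finish before the fuse runs out.
        if depth == 0:
            yield (action,)
            return
        for i in range(branching):
            for chain in consequences(f"{action}.{i}", depth - 1, branching):
                yield (action,) + chain

    DEADLINE_S = 0.05                      # the bomb's fuse, in seconds
    start = time.perf_counter()
    deduced = 0
    for chain in consequences("pull_wagon", depth=10):
        deduced += 1
        if time.perf_counter() - start > DEADLINE_S:
            print(f"BOOM after {deduced} deductions, still nowhere near done")
            break
    else:
        print(f"finished all {deduced} deductions in time")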

7 The STAR WARS reference is indeed compelling, but it is only a conjecture that what R2D2 would have done is to bring out the battery while avoiding the ticking time bomb, in the process solving the frame problem.

8 Shakespeare's play Hamlet is a tragedy themed on the 'hero-as-fool' genre. The apparent heir to the throne of Denmark, Prince Hamlet, is the protagonist of the play. He is philosophical and contemplative; he does not act on instinct, but rather tries very hard to make sure that every action is premeditated. His

FIGURE 2.32 Dennett’s illustration of the frame problem, where a robot has to go inside a room and obtain a battery to power itself while avoiding a ticking time bomb.

With this illustrative example Dennett demonstrates that the frame problem may seem like an issue of bad design, a glitch, poor hardware or coding errors, but instead of being a technological or design shortcoming, it is an epistemological problem and a consequence of human common-sense reasoning, of 'psychological reality', which, being situated, cannot be fully captured by programming code.

The frame problem arises in an attempt to find a sufficient number of axioms, using symbolic representations [269], for a viable description of a robot's environment. The nomenclature comes from the technique of the 'frame' as used in the development of stage characters or in animation, where the moving parts and characters are superimposed on a 'frame' and the background typically does not change. The description of a frame, i.e. of a robot's environment, takes a large number of statements, yet a great many of them hardly change in most common interactions. Therefore, in terms of relative motion, the frame problem is described in terms of the spatial locations of objects which presumably do not move in the course of the robot's movement. Hence the robot's actions must be structured in relation to all that stays static across the event.
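As a concrete illustration, in standard situation-calculus notation rather than anything from this chapter, an effect axiom states what an action changes, while a frame axiom must be written for each fluent the action does not change. An effect axiom such as
\[ On(x, y, do(\mathit{move}(x, y), s)) \]
states that moving block \(x\) onto \(y\) makes \(On(x, y)\) hold in the resulting situation, whereas a frame axiom such as
\[ On(x, y, s) \rightarrow On(x, y, do(\mathit{paint}(z, c), s)) \]
records that painting some block leaves every \(On\) relation untouched. With \(m\) actions and \(n\) such predicates, on the order of \(m \times n\) frame axioms are needed merely to state that 'everything else stays the same', which is the combinatorial burden the frame problem names.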

In a further illustration of the frame problem, a conversation between two friends, as shown in Figure 2.33, can easily turn bizarre if the non-effects of even the most trivial things are mentioned. However, it is to be noted that seemingly absurd notions, such as Joe's bar continuing to exist past 7:00 PM, the death of one of the two friends in the conversation or an apocalypse by a meteorite hit, can make sense in war, insurgencies

character is further tagged with fruitless attempts to demonstrate provability, and many of his inquiries cannot be answered with certainty. Hamlet's struggle to avenge the murder of his father leads to melancholy and a wrestle with his own sanity. In context, Dennett's third robot, R2D1, is in a Hamlet-like stream of thought before being tragically blown up.