Artificial Intelligence

He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence. He is a member of the American Association for Artificial Intelligence and the Association for Computing Machinery.

1 INTRODUCTION

WHAT IS AI?

  • Acting humanly: The Turing Test approach
  • Thinking humanly: The cognitive modeling approach
  • Acting rationally: The rational agent approach

In most of the book, however, we will adopt the working hypothesis that perfect rationality is a good starting point for analysis. It simplifies the problem and provides the appropriate framework for most of the basic material in the field.

THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

  • Philosophy
  • Mathematics
  • Economics
  • Neuroscience
  • Psychology
  • Computer engineering
  • Linguistics

In 1943, the same group developed the Colossus, a powerful general-purpose machine based on vacuum tubes. The first programmable computer was the Z-3, Konrad Zuse's invention in Germany in 1941. The cyberneticists viewed goal-directed behavior as arising from a regulatory mechanism that tries to minimize "error", the difference between the current state and the goal state.

Figure 1.2 The parts of a nerve cell or neuron. Each neuron consists of a cell body, or soma, that contains a cell nucleus

THE HISTORY OF ARTIFICIAL INTELLIGENCE

  • The gestation of artificial intelligence (1943–1955)
  • The birth of artificial intelligence (1956)
  • Early enthusiasm, great expectations (1952–1969)
  • A dose of reality (1966–1973)
  • Knowledge-based systems: The key to power? (1969–1979)
  • AI becomes an industry (1980–present)
  • The return of neural networks (1986–present)
  • AI adopts the scientific method (1987–present)
  • The emergence of intelligent agents (1995–present)
  • The availability of very large data sets (2001–present)

Later systems also incorporated the main theme of McCarthy's Advice Taker approach: the clean separation of the knowledge (in the form of rules) from the reasoning component. SHRDLU was able to resolve ambiguity and understand pronoun references, but mainly because it was designed specifically for one domain, the blocks world.

Figure 1.4 A scene from the blocks world. SHRDLU (Winograd, 1972) has just completed the command "Find a block which is taller than the one you are holding and put it in the box."

THE STATE OF THE ART

Such work suggests that the "knowledge bottleneck" in AI—the problem of how to express all the knowledge a system needs—can be solved in many applications by learning methods rather than by hand-coded knowledge engineering, provided that the learning algorithms have enough data to proceed (Halevy et al., 2009). Newsweek magazine described the match as "The Last Battle of the Brains." The value of IBM stock increased by $18 billion.

SUMMARY

Machine translation: A computer program automatically translates from Arabic to English, allowing an English speaker to see the headline "Erdogan confirms Turkey won't accept any pressure, urges them to recognize Cyprus." The program uses a statistical model built from examples of Arabic-to-English translations and from examples of English text amounting to two trillion words (Brants et al., 2007). None of the computer scientists on the team speak Arabic, but they understand statistics and machine learning algorithms.

2 INTELLIGENT AGENTS

AGENTS AND ENVIRONMENTS

Internally, the agent function for the artificial agent will be implemented by an agent program. A partial tabulation of this agent function is shown in Figure 2.3, and the agent program that implements it is shown in Figure 2.8 on page 48.
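To make the agent function vs. agent program distinction concrete, here is a minimal sketch of a table-driven agent program in Python. The table maps complete percept sequences to actions, mirroring the tabulation idea; all names here (table_driven_agent_program, the percept tuples) are illustrative, not the book's code.

```python
# A minimal table-driven agent program: the agent function is the table;
# the program looks up the action indexed by the entire percept sequence.

def table_driven_agent_program(table):
    percepts = []  # the percept sequence grows over the agent's lifetime

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the sequence is not tabulated

    return program

# Example: a fragment of the vacuum-world tabulation from Chapter 2.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
}
agent = table_driven_agent_program(table)
print(agent(("A", "Dirty")))  # -> Suck
```

The table grows exponentially with the length of the percept sequence, which is exactly why later agent designs replace the table with a program.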

Figure 2.1 Agents interact with environments through sensors and actuators.
  • Rationality
  • Omniscience, learning, and autonomy

If the geography of the environment is unknown, the agent will have to explore it rather than stick to squares A and B. To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.

THE NATURE OF ENVIRONMENTS

  • Specifying the task environment
  • Properties of task environments

Fully observable environments are convenient because the agent does not need to maintain any internal state to keep track of the world. The "unknown" label refers not to the environment itself, but to the agent's (or designer's) knowledge of the "physical laws" of the environment.

Figure 2.5 Examples of agent types and their PEAS descriptions.

THE STRUCTURE OF AGENTS

  • Agent programs
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents
  • How the components of agent programs work

Let P be the set of possible percepts and let T be the lifetime of the agent (the total number of percepts it will receive); a lookup table for the agent function would then contain ∑_{t=1}^{T} |P|^t entries. For a model-based agent, first we need information about how the world evolves independently of the agent—for example, that an overtaking car will generally be closer behind than it was a moment ago. The box labeled "how the world is now" (Figure 2.11) represents the agent's "best guess" (or sometimes best guesses).
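A minimal sketch of a model-based reflex agent follows, assuming the caller supplies an update_state function (the transition model plus percept interpretation) and a rules_match function (the condition-action rules); both names are my own shorthand for the roles the book's pseudocode describes.

```python
# Model-based reflex agent: internal state is the agent's best guess about
# the current world, updated from the model and the latest percept.

def model_based_reflex_agent(update_state, rules_match):
    state, last_action = None, None

    def program(percept):
        nonlocal state, last_action
        # Combine old state, last action, new percept, and the model of how
        # the world evolves into a new best-guess state.
        state = update_state(state, last_action, percept)
        last_action = rules_match(state)   # condition-action rules pick the move
        return last_action

    return program
```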

The critic is necessary because the percepts themselves provide no indication of the agent's success.

Figure 2.9 Schematic diagram of a simple reflex agent.

SUMMARY

Show that the action of a rational agent can depend not only on the state of the environment but also on the time step it has reached. Discuss possible agent programs for cases in which clean squares can become dirty and the geography of the environment is unknown. How do your answers change if the agent's percepts give it the clean/dirty status of every square in the environment? 2.11 Consider a modified version of the vacuum environment in Exercise 2.8, in which the geography of the environment—its extent, boundaries, and obstacles—is unknown, as is the initial dirt configuration.

How does it affect your agent program if the dirt sensor gives the wrong answer 10% of the time?

3 SOLVING PROBLEMS BY SEARCHING

PROBLEM-SOLVING AGENTS

  • Well-defined problems and solutions: A problem can be defined formally by five components (initial state, actions, transition model, goal test, and path cost; see the sketch after this list)
  • Formulating problems
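The sketch below renders the five formal components as a small Python structure; the field names and callable signatures are my own shorthand for the book's ACTIONS, RESULT, GOAL-TEST, and step-cost functions, not its actual code.

```python
# The five components of a formal problem definition.

from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class Problem:
    initial: Any                                   # initial state
    actions: Callable[[Any], Iterable[Any]]        # ACTIONS(s): applicable actions
    result: Callable[[Any, Any], Any]              # RESULT(s, a): transition model
    goal_test: Callable[[Any], bool]               # GOAL-TEST(s)
    step_cost: Callable[[Any, Any, Any], float]    # c(s, a, s'): step cost
```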

In that case, it makes sense for the agent to adopt the goal of getting to Bucharest. For now, let us assume that the agent will consider actions at the level of driving from one major city to another. We also assume that the environment is observable, so that the agent always knows the current state.

For an agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometers.

Figure 3.1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time.

EXAMPLE PROBLEMS

  • Toy problems
  • Real-world problems

The goal is to reach a specified goal state, such as the one shown on the right of the figure. States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares. The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down, as sketched below.
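Here is a compact rendering of that 8-puzzle formulation, assuming a state is a 9-tuple in row-major order with 0 standing for the blank; the 3x3 indexing and the sample layout are my own illustrative choices.

```python
# 8-puzzle: actions move the blank Left, Right, Up, or Down.

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    i = state.index(0)  # position of the blank
    acts = []
    if i % 3 > 0: acts.append("Left")
    if i % 3 < 2: acts.append("Right")
    if i >= 3:    acts.append("Up")
    if i < 6:     acts.append("Down")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]  # slide the neighboring tile into the blank
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)  # illustrative layout; blank in the center
print(actions(start))                 # ['Left', 'Right', 'Up', 'Down']
```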

Checking a step in the sequence for feasibility may require undoing some of the work already done.

Figure 3.3 The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck.

SEARCHING FOR SOLUTIONS

  • Infrastructure for search algorithms
  • Measuring problem-solving performance

The frontier (white nodes) always separates the explored region of the state space (black nodes) from the unexplored region (gray nodes), so every path from the initial state to an unexplored state has to pass through a state in the frontier. In (c), the remaining successors of the root have been expanded in clockwise order. Search algorithms require a data structure to keep track of the search tree being constructed.

In theoretical computer science, the typical measure is the size of the state-space graph |V|+|E|, where V is the set of vertices (nodes) of the graph and E is the set of edges (links).
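The node bookkeeping can be sketched as below, building on the Problem fields assumed earlier; the class and helper names are illustrative rather than the book's code.

```python
# Each search-tree node records a state, a parent pointer, the generating
# action, and the path cost g(n); child_node builds a successor.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

def child_node(problem, parent, action):
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    # Walk parent pointers back to the root to recover the action sequence.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```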

Figure 3.6 Partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.

UNINFORMED SEARCH STRATEGIES

  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening depth-first search
  • Bidirectional search
  • Comparing uninformed search strategies

For various values of the solution depth d, it gives the time and memory required for breadth-first search with a branching factor of b = 10. The properties of depth-first search depend strongly on whether the graph-search or the tree-search version is used. For example, in Figure 3.16, depth-first search will explore the entire left subtree even if node C is a goal node.

In the remainder of this section, we focus primarily on the tree version of depth-first search.
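A sketch of iterative deepening on that tree version follows: repeated depth-limited searches with a growing limit, combining depth-first search's O(bd) memory with breadth-first completeness. It reuses the Node and child_node helpers sketched earlier; the 'cutoff' sentinel and max_depth bound are my own conventions.

```python
# Iterative deepening depth-first search (tree-search version).

def depth_limited_search(problem, node, limit):
    if problem.goal_test(node.state):
        return node
    if limit == 0:
        return "cutoff"                    # ran into the depth limit
    cutoff_occurred = False
    for action in problem.actions(node.state):
        result = depth_limited_search(problem, child_node(problem, node, action),
                                      limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result                  # found a goal node
    return "cutoff" if cutoff_occurred else None

def iterative_deepening_search(problem, max_depth=50):
    for depth in range(max_depth):
        result = depth_limited_search(problem, Node(problem.initial), depth)
        if result != "cutoff":
            return result                  # a goal Node, or None if exhausted
```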

Figure 3.11 Breadth-first search on a graph.

INFORMED (HEURISTIC) SEARCH STRATEGIES

  • Greedy best-first search
  • A* search: Minimizing the total estimated solution cost
  • Memory-bounded heuristic search
  • Learning to search better

The absolute error of a heuristic is defined as Δ ≡ h* − h, where h* is the actual cost of getting from the root to the goal, and the relative error is defined as ε ≡ (h* − h)/h*. The complexity results depend strongly on the assumptions made about the state space. When the state space has many goal states – especially near-optimal goal states – the search process can be led astray from the optimal path, and there is an extra cost proportional to the number of goals whose costs are within a factor of the optimal cost. As the recursion unwinds, RBFS replaces the f-value of each node along the path with a backed-up value: the best f-value of its children.
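For reference, here is a minimal A* graph-search sketch ordering the frontier by f(n) = g(n) + h(n), using the Node and child_node helpers assumed earlier; the goal test is applied when a node is selected for expansion, which is what makes A* optimal with an admissible, consistent h. The tie-breaking counter and lazy duplicate handling are implementation choices of this sketch.

```python
# A* search: expand nodes in order of f(n) = g(n) + h(n).

import heapq, itertools

def astar_search(problem, h):
    counter = itertools.count()           # tie-breaker for equal f-values
    root = Node(problem.initial)
    frontier = [(h(root.state), next(counter), root)]
    explored = set()
    while frontier:
        f, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):
            return node                   # first goal popped is optimal
        if node.state in explored:
            continue                      # stale duplicate entry
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in explored:
                f_child = child.path_cost + h(child.state)
                heapq.heappush(frontier, (f_child, next(counter), child))
    return None
```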

The f-limit value for each recursive call is shown at the top of each current node, and every node is labeled with its f-cost. (a) The path via Rimnicu Vilcea is followed until the current best leaf (Pitesti) has a value worse than the best alternative path (Fagaras).

Figure 3.22 Values of h_SLD—straight-line distances to Bucharest.

HEURISTIC FUNCTIONS

  • The effect of heuristic accuracy on performance
  • Generating admissible heuristics from relaxed problems
  • Generating admissible heuristics from subproblems: Pattern databases. Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem.
  • Learning heuristics from experience

The answer is "basically yes." From the definitions of the two heuristics, it is easy to see that h2(n) ≥ h1(n) for any node n. Clearly, the cost of the optimal solution to a subproblem is a lower bound on the cost of the complete problem. For disjoint subproblems, it is easy to see that the sum of the two costs is still a lower bound on the cost of solving the entire problem.

Each instance consists of a state from the solution path and the actual cost of the solution from that point.
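The two classic 8-puzzle heuristics being compared can be sketched directly on the tuple states from the earlier formulation: h1 counts misplaced tiles and h2 sums Manhattan distances, with h2(n) ≥ h1(n) everywhere, so h2 dominates. The particular goal layout below is an assumption of the sketch.

```python
# Two admissible 8-puzzle heuristics; h2 dominates h1.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # assumed goal layout, blank first

def h1_misplaced(state):
    # Number of tiles not in their goal square (blank excluded).
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2_manhattan(state):
    # Sum of horizontal and vertical distances of each tile from its goal.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```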

Figure 3.28 A typical instance of the 8-puzzle. The solution is 26 steps long.

SUMMARY

Most of the state-space search problems analyzed in this chapter have a long history in the literature and are less trivial than they might seem. The automation of the relaxation process was successfully implemented by Prieditis (1993), building on earlier work with Mostow (Mostow and Prieditis, 1989). Specify an upper bound on the total size of the state space defined by your formulation.

Which of the following heuristics are admissible for the problem of moving all the vehicles to their destinations?

Figure 3.31 A scene with polygonal obstacles. S and G are the start and goal states.

4 BEYOND CLASSICAL SEARCH

LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS

  • Hill-climbing search
  • Simulated annealing
  • Local beam search
  • Genetic algorithms

The success of hill climbing depends very much on the shape of the state-space landscape: if there are few local maxima and plateaus, random-restart hill climbing will find a good solution very quickly. The inner loop of the simulated-annealing algorithm (Figure 4.5) is very similar to hill climbing, except that it picks a random move rather than the best one; an improving move is always accepted, while a worsening move is accepted with a probability that decreases exponentially with the "badness" of the move, the amount ΔE by which the evaluation is worsened, as in the sketch below.
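A minimal simulated-annealing sketch, assuming the caller supplies a value function to maximize, a neighbors function, and a cooling schedule; all of those interfaces are placeholders of this sketch rather than the book's code.

```python
# Simulated annealing: accept uphill moves always, downhill moves of size
# dE with probability exp(dE / T), and cool T on the given schedule.

import itertools, math, random

def simulated_annealing(start, value, neighbors, schedule):
    current = start
    for t in itertools.count():
        T = schedule(t)
        if T <= 0:
            return current                 # frozen: return the current state
        nxt = random.choice(neighbors(current))
        dE = value(nxt) - value(current)   # > 0 means an uphill (better) move
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
```

A common schedule choice is geometric cooling, e.g. schedule = lambda t: 100 * (0.95 ** t), though the book leaves the schedule as a parameter.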

In effect, the states that generate the best followers are saying to the others, "Come over here, the grass is greener!" The algorithm quickly abandons fruitless searches and shifts its resources to where the most progress is being made.

Figure 4.1 A one-dimensional state-space landscape in which elevation corresponds to the objective function.

LOCAL SEARCH IN CONTINUOUS SPACES

Computing the empirical gradient amounts to steepest-ascent hill climbing in a discretized version of the state space. An optimization problem is constrained if the solutions must satisfy hard constraints on the values of the variables. The difficulty of constrained optimization problems depends on the nature of the constraints and the objective function.

This is a special case of the more general problem of convex optimization, which allows the constraint region to be any convex region and the objective to be any function that is convex within the constraint region.
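The empirical-gradient idea can be sketched as below: estimate each partial derivative by a small finite difference, then step uphill. The step size, perturbation, and iteration count are arbitrary illustrative constants, not values from the book.

```python
# Empirical-gradient ascent in a continuous state space.

def empirical_gradient(f, x, eps=1e-6):
    # Estimate each partial derivative of f at x by a finite difference.
    grad = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grad.append((f(bumped) - f(x)) / eps)
    return grad

def gradient_ascent(f, x, alpha=0.01, steps=1000):
    x = list(x)
    for _ in range(steps):
        g = empirical_gradient(f, x)
        x = [xi + alpha * gi for xi, gi in zip(x, g)]  # step uphill
    return x

# Example: climb toward the peak of f(x, y) = -(x-1)^2 - (y+2)^2.
peak = gradient_ascent(lambda v: -(v[0] - 1)**2 - (v[1] + 2)**2, [0.0, 0.0])
```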

SEARCHING WITH NONDETERMINISTIC ACTIONS

  • The erratic vacuum world
  • Try, try again

Now suppose we introduce nondeterminism in the form of a powerful but erratic vacuum cleaner. For example, in the erratic vacuum world, the Suck action in state 1 leads to a state in the set {5, 7}: the dirt in the right-hand square may or may not be vacuumed up. The solution is shown with bold lines in the figure; it corresponds to the plan given in Equation (4.3).

Consider a slippery vacuum world, which is identical to a regular (non-slippery) vacuum world, except that movement sometimes fails and the agent remains in the same place.
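Solutions to such problems are contingency plans found by AND-OR search; a sketch follows, assuming the nondeterministic transition model is exposed as problem.results(s, a) returning a set of outcome states (an interface assumed here, not the book's code). Note that this acyclic version returns failure on repeated states, so a slippery world that needs a cyclic "try, try again" plan would require the extended version with loop labels.

```python
# AND-OR graph search for nondeterministic problems: OR nodes choose an
# action; AND nodes must supply a subplan for every possible outcome.

def and_or_search(problem):
    def or_search(state, path):
        if problem.goal_test(state):
            return []                      # empty plan: already at a goal
        if state in path:
            return None                    # repeated state: avoid cycles
        for action in problem.actions(state):
            plan = and_search(problem.results(state, action), path + [state])
            if plan is not None:
                return [action, plan]      # conditional plan: act, then branch
        return None

    def and_search(states, path):
        # Every possible outcome state needs its own contingent subplan.
        plans = {}
        for s in states:
            plan = or_search(s, path)
            if plan is None:
                return None
            plans[s] = plan
        return plans

    return or_search(problem.initial, [])
```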

Figure 4.9 The eight possible states of the vacuum world; states 7 and 8 are goal states.

SEARCHING WITH PARTIAL OBSERVATIONS

  • Searching with no observation
  • Searching with observations
  • Solving partially observable problems
  • An agent for partially observable environments

The preceding definitions enable the automatic construction of the belief-state problem formulation from the definition of the underlying physical problem. For example, in the sensorless vacuum world, the initial belief state is the set of all eight states, {1, 2, 3, 4, 5, 6, 7, 8}, and we need to find an action sequence that works in all eight. With local sensing, on the other hand, given [A, Dirty] as the initial percept, the initial belief state for the vacuum world will be {1, 3}.

The observation prediction stage determines the set of percepts o that could be observed in the predicted belief state: POSSIBLE-PERCEPTS(b̂) = {o : o = PERCEPT(s) and s ∈ b̂}.
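The full prediction-observation-update cycle can be sketched over plain Python sets, assuming the underlying physical model is given as a deterministic result(s, a) function and a percept(s) function (both assumed interfaces of this sketch).

```python
# Belief-state maintenance: PREDICT applies the action to every state in
# the belief state, POSSIBLE-PERCEPTS collects the percepts those states
# could produce, and UPDATE keeps states consistent with what was seen.

def predict(belief, action, result):
    return {result(s, action) for s in belief}

def possible_percepts(belief, percept):
    return {percept(s) for s in belief}

def update(belief, observed, percept):
    return {s for s in belief if percept(s) == observed}
```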

Figure 4.13 (a) Predicting the next belief state for the sensorless vacuum world with a deterministic action, Right

ONLINE SEARCH AGENTS AND UNKNOWN ENVIRONMENTS

  • Online search problems
  • Online search agents
  • Online local search
  • Learning in online search

The cost is the total path cost of the path that the agent actually travels. In online depth-first search, backtracking means going back to the state from which the agent most recently entered the current state. The table of cost estimates H(s) starts out with just the heuristic estimate h(s) and is updated as the agent gains experience in the state space, as in the LRTA* sketch below.

In (a), the agent appears to be stuck in a flat local minimum at the shaded state.
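The core of the LRTA* update can be sketched as below: when leaving a state, back up through it the cheapest one-step lookahead cost, valuing untried actions optimistically by h. The dictionaries H and results and the function signatures are assumptions of this sketch, not the book's pseudocode verbatim.

```python
# LRTA* cost estimate and update rule.

def lrta_cost(s, a, s2, H, h, c):
    if s2 is None:
        return h(s)                        # untried action: optimism under uncertainty
    return c(s, a, s2) + H.get(s2, h(s2))  # step cost plus estimate of successor

def lrta_update(prev, action, state, H, h, results, problem):
    results[(prev, action)] = state        # remember what this action did
    # Back up: the estimated cost through prev is its cheapest lookahead.
    H[prev] = min(
        lrta_cost(prev, a, results.get((prev, a)), H, h, problem.step_cost)
        for a in problem.actions(prev)
    )
```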

Figure 4.19 A simple maze problem. The agent starts at S and must reach G but knows nothing of the environment.

SUMMARY

Linear programming was one of the first applications of computers; the simplex algorithm (Dantzig, 1949) is still used despite its worst-case exponential complexity. The observation will be a list of the positions, relative to the agent, of the visible vertices. Modify the environment so that the agent lands at an unintended destination 30% of the time (chosen at random from the other visible vertices, if any; otherwise, no movement at all).

Give an example of the agent successfully overcoming two consecutive movement errors and still reaching the goal.

5 ADVERSARIAL SEARCH

  • GAMES
  • OPTIMAL DECISIONS IN GAMES
    • The minimax algorithm
    • Optimal decisions in multiplayer games
  • ALPHA–BETA PRUNING
    • Move ordering
  • IMPERFECT REAL-TIME DECISIONS
    • Cutting off search
    • Forward pruning
  • STOCHASTIC GAMES
    • Evaluation functions for games of chance
  • PARTIALLY OBSERVABLE GAMES
    • Kriegspiel: Partially observable chess

A node's minimax value is the utility (to MAX) of being in the corresponding state, assuming both players play optimally from there to the end of the game. In other words, the value of the root, and hence the minimax decision, is independent of the values of the pruned leaves x and y. Otherwise, an agent using the evaluation function may make a mistake even if it can look ahead all the way to the end of the game.

For example, suppose that our experience suggests that 72% of the states encountered in the two-pawns vs. one-pawn category lead to a win.
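The minimax computation itself can be sketched compactly, assuming a game object exposing actions, result, terminal, and utility (an interface assumed by this sketch, loosely following the book's TERMINAL-TEST and UTILITY functions), with utility always stated from MAX's point of view.

```python
# Minimax for a two-player, zero-sum game.

def minimax_value(game, state, maximizing):
    if game.terminal(state):
        return game.utility(state)         # utility is from MAX's point of view
    values = (minimax_value(game, game.result(state, a), not maximizing)
              for a in game.actions(state))
    return max(values) if maximizing else min(values)

def minimax_decision(game, state):
    # MAX moves: pick the action whose resulting MIN-node value is highest.
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a), False))
```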

Figure 5.1 A (partial) game tree for the game of tic-tac-toe. The top node is the initial state, and MAX moves first, placing an X in an empty square.

