Artificial Intelligence: A Modern Approach


He is a fellow and former member of the Executive Council of the American Association for Artificial Intelligence, and a member of the Association for Computing Machinery.

1 INTRODUCTION

WHAT IS AI?

  • Acting humanly: The Turing Test approach
  • Thinking humanly: The cognitive modeling approach
  • Thinking rationally: The "laws of thought" approach
  • Acting rationally: The rational agent approach

For most of the book, however, we will accept the working hypothesis that perfect rationality is a good starting point for analysis. This simplifies the problem and provides the appropriate setting for most of the foundational material in the field.

THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

  • Philosophy
  • Mathematics
  • Economics
  • Neuroscience
  • Psychology
  • Computer engineering
  • Control theory and cybernetics
  • Linguistics

The first programmable computer was the Z-3, Konrad Zuse's invention in Germany in 1941. In 1943, Alan Turing's team developed the Colossus, a powerful general-purpose machine based on vacuum tubes. In the cybernetics tradition, Wiener and his colleagues saw goal-directed behavior as arising from a regulatory mechanism that tries to minimize "error" - the difference between the current state and the goal state.

THE HISTORY OF ARTIFICIAL INTELLIGENCE

  • The gestation of artificial intelligence (1943–1955)
  • The birth of artificial intelligence (1956)
  • Early enthusiasm, great expectations (1952–1969)
  • A dose of reality (1966–1973)
  • Knowledge-based systems: The key to power? (1969–1979)
  • AI becomes an industry (1980–present)
  • The return of neural networks (1986–present)
  • AI adopts the scientific method (1987–present)
  • The emergence of intelligent agents (1995–present)
  • The availability of very large data sets (2001–present)

Later systems also incorporated the main theme of McCarthy's Advice Taker approach – the clean separation of the knowledge (in the form of rules) from the reasoning component. It was able to handle ambiguity and understand pronoun references, but that was mainly because it was designed specifically for one area – the blocks world.

THE STATE OF THE ART

Work like this suggests that the "knowledge bottleneck" in AI - the problem of how to express all the knowledge a system needs - can be solved in many applications by learning methods rather than hand-coded knowledge engineering, provided the learning algorithms have enough data to go on (Halevy et al., 2009). Newsweek magazine described the match as "The Brain's Last Stand." The value of IBM's stock increased by $18 billion.

SUMMARY

Machine translation: A computer program automatically translates from Arabic to English, allowing an English speaker to see the headline: “Ardogan confirms Turkey would not accept any pressure and urges them to recognize Cyprus.” The program uses a statistical model built from examples of Arabic-to-English translations and from examples of English text totaling two trillion words (Brants et al., 2007). None of the computer scientists on the team speaks Arabic, but they do understand statistics and machine learning algorithms.

2 INTELLIGENT AGENTS

AGENTS AND ENVIRONMENTS

Internally, the agent function of an artificial agent will be implemented by an agent program. A partial tabulation of this agent function is shown in Figure 2.3, and an agent program that implements it is shown in Figure 2.8 on page 48.
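
As a concrete illustration, here is a minimal sketch (not the book's own code) of an agent program for the two-square vacuum world; the percept is assumed to be a (location, status) pair, and the behavior matches the tabulated agent function for that environment.

```python
# A minimal sketch of a vacuum-world agent program.
# The percept is assumed to be a (location, status) pair such as ("A", "Dirty").
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"
```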

  • Rationality
  • Omniscience, learning, and autonomy

If the geography of the environment is unknown, the agent will have to explore it instead of sticking to squares A and B. To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.

THE NATURE OF ENVIRONMENTS

  • Specifying the task environment
  • Properties of task environments

Fully observable environments are convenient because the agent does not need to maintain any internal state to keep track of the world. The known vs. unknown distinction refers not to the environment itself, but to the agent's (or designer's) state of knowledge about the "laws of physics" of the environment.

THE STRUCTURE OF AGENTS

  • Agent programs
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents
  • How the components of agent programs work

Let P be the set of possible percepts and let T be the lifetime of the agent (the total number of percepts it will receive). First, we need some information about how the world evolves independently of the agent - for example, that an overtaking car generally will be closer behind than it was a moment ago. Instead, the "what the world is like now" box (Figure 2.11) represents the agent's "best guess" (or sometimes best guesses).
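
To make the size of a literal lookup table concrete: with percept set P and lifetime T, the table needs one entry for every possible percept sequence, so it contains

```latex
\sum_{t=1}^{T} |P|^{t} \ \text{entries},
```

which is why a table-driven design becomes infeasible for anything beyond toy problems.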

The critic is necessary because the percepts themselves provide no indication of the agent's success.

SUMMARY

Show that the action of a rational agent depends not only on the state of the environment but also on the time step it has reached. Discuss possible agent designs for the cases in which clean squares can become dirty and the geography of the environment is unknown. How do your answers to (a) and (b) change if the agent's percepts give it the clean/dirty status of every square in the environment? 2.11 Consider a modified version of the vacuum environment of Exercise 2.8, in which the geography of the environment - its extent, boundaries, and obstacles - is unknown, as is the initial dirt configuration.

How is your agent program affected if the dirt sensor gives the wrong answer 10% of the time?

3 SOLVING PROBLEMS BY SEARCHING

PROBLEM-SOLVING AGENTS

  • Well-defined problems and solutions: a problem can be defined formally by five components
  • Formulating problems

In that case, it makes sense for the agent to adopt the goal of getting to Bucharest. For now, let us assume that the agent will consider actions at the level of driving from one major city to another. We assume that the environment is observable, so the agent always knows the current state.

For the agent trying to get to Bucharest, time is of the essence, so the cost of a path may be its length in kilometers.
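
As a sketch of the five-component formulation for this route-finding problem - initial state, actions, transition model, goal test, and step cost - consider the following; the road map is an abbreviated, illustrative fragment and the distances are indicative only.

```python
# A minimal sketch of a route-finding problem in Romania.
# The map is a small illustrative fragment; distances are indicative only.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    def __init__(self, initial, goal, roads=ROADS):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):                     # applicable actions = neighboring cities
        return list(self.roads[state])

    def result(self, state, action):              # transition model: drive to the chosen city
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):   # path cost adds up road distances
        return self.roads[state][action]
```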

EXAMPLE PROBLEMS

  • Toy problems
  • Real-world problems

The goal is to reach a specified goal state, such as the one shown on the right of the figure. States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares. The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down.
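
A minimal sketch of this 8-puzzle formulation, assuming a state is encoded as a 9-tuple listing the tiles row by row with 0 standing for the blank (the goal layout shown is one common choice):

```python
# 8-puzzle sketch: states are 9-tuples, 0 is the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    i = state.index(0)                 # position of the blank
    acts = []
    if i % 3 > 0: acts.append("Left")
    if i % 3 < 2: acts.append("Right")
    if i // 3 > 0: acts.append("Up")
    if i // 3 < 2: acts.append("Down")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]              # square the blank moves into
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```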

Checking a step in the sequence for feasibility is undoing some of the work already done.

SEARCHING FOR SOLUTIONS

  • Infrastructure for search algorithms
  • Measuring problem-solving performance

The frontier (white nodes) always separates the explored region of the state space (black nodes) from the unexplored region (gray nodes). In (c), the remaining successors of the root have been expanded in clockwise order. Every path from the initial state to an unexplored state must pass through a state in the frontier. Search algorithms require a data structure to keep track of the search tree being constructed.

In theoretical computer science, the typical measure is the size of the state space graph, |V|+|E|, where V is the set of vertices (nodes) of the graph and E is the set of edges (links).
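
A minimal sketch of the bookkeeping this implies: a node records its state, parent, action, and path cost, and a graph-search loop maintains a frontier and an explored set (breadth-first flavor here; the problem object is assumed to expose initial, actions, result, and goal_test as in the route-finding sketch).

```python
from collections import deque

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent, self.action, self.path_cost = state, parent, action, path_cost

def solution(node):                        # walk parent pointers back to the root
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

def graph_search(problem):
    frontier = deque([Node(problem.initial)])
    explored = set()
    while frontier:
        node = frontier.popleft()          # FIFO frontier -> breadth-first behavior
        if problem.goal_test(node.state):
            return solution(node)
        explored.add(node.state)
        for action in problem.actions(node.state):
            child_state = problem.result(node.state, action)
            if child_state not in explored and all(n.state != child_state for n in frontier):
                frontier.append(Node(child_state, node, action))
    return None                            # failure: no solution found
```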

UNINFORMED SEARCH STRATEGIES

  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening depth-first search
  • Bidirectional search
  • Comparing uninformed search strategies

It shows, for different values of the solution depth d, the time and memory required for a breadth-first search with branching factor b = 10. The properties of depth-first search depend strongly on whether the graph-search or tree-search version is used. For example, in Figure 3.16, depth-first search will explore the entire left subtree even if node C is a goal node.

For the remainder of this section, we focus primarily on the tree search version of depth-first search.
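
For reference, here is a minimal sketch of depth-limited search and iterative deepening in the tree-search style, reusing the Node and solution helpers from the earlier sketch (the problem interface is the same assumed one).

```python
def depth_limited_search(problem, limit):
    def recursive_dls(node, depth):
        if problem.goal_test(node.state):
            return solution(node)
        if depth == limit:
            return "cutoff"                            # ran out of depth, not a failure yet
        cutoff_occurred = False
        for action in problem.actions(node.state):
            child = Node(problem.result(node.state, action), node, action)
            outcome = recursive_dls(child, depth + 1)
            if outcome == "cutoff":
                cutoff_occurred = True
            elif outcome is not None:
                return outcome
        return "cutoff" if cutoff_occurred else None   # None means true failure

    return recursive_dls(Node(problem.initial), 0)

def iterative_deepening_search(problem):
    depth = 0
    while True:                                        # deepen the limit until a result is found
        result = depth_limited_search(problem, depth)
        if result != "cutoff":
            return result
        depth += 1
```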

INFORMED (HEURISTIC) SEARCH STRATEGIES

  • Greedy best-first search
  • A* search: Minimizing the total estimated solution cost
  • Memory-bounded heuristic search
  • Learning to search better

The absolute error of a heuristic is defined as Δ ≡ h* − h, where h* is the actual cost of getting from the root to the goal, and the relative error is defined as ε ≡ (h* − h)/h*. The complexity results depend very strongly on the assumptions made about the state space. When the state space has many goal states - especially near-optimal goal states - the search process can be led astray from the optimal path, and there is an extra cost proportional to the number of goals whose cost is within a factor ε of the optimal cost. As the recursion unwinds, RBFS replaces the f-value of each node along the path with a backed-up value - the best f-value of its children.

The f-limit value for each recursive call is shown on top of each current node, and every node is labeled with its f-cost. (a) The path via Rimnicu Vilcea is followed until the current best leaf (Pitesti) has a value that is worse than the best alternative path (Fagaras).
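
As a point of reference for this section, here is a minimal A* sketch (not RBFS itself), using the same assumed problem interface plus a heuristic function h, with f(n) = g(n) + h(n); it reuses the Node and solution helpers from earlier.

```python
import heapq

def a_star_search(problem, h):
    start = Node(problem.initial)
    frontier = [(h(start.state), 0, start)]            # entries are (f, tie-breaker, node)
    best_g = {start.state: 0}                          # cheapest known cost to each state
    counter = 1
    while frontier:
        f, _, node = heapq.heappop(frontier)           # expand the node with lowest f
        if problem.goal_test(node.state):
            return solution(node)
        for action in problem.actions(node.state):
            s = problem.result(node.state, action)
            g = node.path_cost + problem.step_cost(node.state, action, s)
            if s not in best_g or g < best_g[s]:       # keep only the cheapest path found so far
                best_g[s] = g
                child = Node(s, node, action, g)
                heapq.heappush(frontier, (g + h(s), counter, child))
                counter += 1
    return None
```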

HEURISTIC FUNCTIONS

  • The effect of heuristic accuracy on performance
  • Generating admissible heuristics from relaxed problems
  • Generating admissible heuristics from subproblems: Pattern databases Admissible heuristics can also be derived from the solution cost of a subproblem of a given
  • Learning heuristics from experience

The answer is "basically yes." From the definitions of the two heuristics, it is easy to see that, for any node n, h2(n) ≥ h1(n). Clearly, the cost of the optimal solution of this subproblem is a lower bound on the cost of the complete problem. Then it is easy to see that the sum of the two costs is still a lower bound on the cost of solving the entire problem.
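
A minimal sketch of the two 8-puzzle heuristics being compared, using the tuple encoding and GOAL constant from the earlier 8-puzzle sketch: h1 counts misplaced tiles and h2 sums Manhattan distances, so h2(n) ≥ h1(n) for every node.

```python
def h1(state, goal=GOAL):
    # number of tiles (excluding the blank) not in their goal position
    return sum(1 for tile, g in zip(state, goal) if tile != 0 and tile != g)

def h2(state, goal=GOAL):
    # sum of Manhattan distances of each tile from its goal position
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```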

Each example consists of a state from the solution path and the actual cost of the solution from that point.

SUMMARY

Most of the state-space search problems analyzed in this chapter have a long history in the literature and are less trivial than they might appear. The automation of the relaxation process was successfully implemented by Prieditis (1993), building on previous work by Mostow (Mostow and Prieditis, 1989). Give an upper bound on the total size of the state space defined by your formulation.

Which of the following heuristics are admissible for the problem of moving all the vehicles to their destinations?

4 BEYOND CLASSICAL SEARCH

LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS

  • Hill-climbing search
  • Simulated annealing
  • Local beam search
  • Genetic algorithms

The success of hill climbing depends very much on the shape of the state-space landscape: if there are few local maxima and plateaus, random-restart hill climbing will find a good solution very quickly. The inner loop of the simulated-annealing algorithm (Figure 4.5) is quite similar to hill climbing. The probability of accepting a bad move decreases exponentially with the "badness" of the move - the amount ΔE by which the evaluation is worsened.
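
A minimal sketch of that acceptance rule, assuming a value function to be maximized, a neighbor generator, and a cooling schedule (all three are supplied by the caller):

```python
import math
import random

def simulated_annealing(initial, value, neighbor, schedule):
    # value() is maximized; schedule(t) returns the temperature at step t.
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = neighbor(current)
        delta_e = value(nxt) - value(current)            # > 0 means the move is uphill
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt                                # bad moves pass with probability e^(dE/T)
        t += 1
```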

In effect, the states that generate the best followers are saying to the others, "Come over here, the grass is greener!" The algorithm quickly abandons fruitless searches and shifts its resources to where the most progress is being made.

LOCAL SEARCH IN CONTINUOUS SPACES

Searching with the empirical gradient is the same as steepest-ascent hill climbing in a discretized version of the state space. An optimization problem is constrained if the solutions must satisfy some hard constraints on the values of the variables. The difficulty of constrained optimization problems depends on the nature of the constraints and the objective function.

It is a special case of the more general problem of convex optimization, which allows the constraint region to be any convex region and the objective to be any function that is convex within that region.
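
When no analytic gradient is available, the empirical gradient mentioned above can be estimated by finite differences; here is a minimal sketch, where the objective f, the step size alpha, and the perturbation delta are illustrative choices.

```python
def empirical_gradient(f, x, delta=1e-4):
    # estimate each partial derivative by a central difference around x
    grad = []
    for i in range(len(x)):
        up = list(x); up[i] += delta
        down = list(x); down[i] -= delta
        grad.append((f(up) - f(down)) / (2 * delta))
    return grad

def gradient_ascent_step(f, x, alpha=0.01):
    g = empirical_gradient(f, x)
    return [xi + alpha * gi for xi, gi in zip(x, g)]     # x <- x + alpha * grad f(x)
```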

SEARCHING WITH NONDETERMINISTIC ACTIONS

  • The erratic vacuum world
  • Try, try again

Now suppose we introduce nondeterminism in the form of a powerful but erratic vacuum cleaner. For example, in the erratic vacuum world, the Suck action in state 1 leads to a state in the set {5, 7} - the dirt in the right-hand square may or may not be vacuumed up. The solution is shown with bold lines in the figure; it corresponds to the plan given in Equation (4.3).
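
A minimal sketch of AND-OR search for such nondeterministic problems, assuming problem.results(state, action) returns the set of possible outcome states: OR nodes pick an action, while AND nodes must supply a subplan for every possible outcome.

```python
def and_or_graph_search(problem):
    return or_search(problem.initial, problem, [])

def or_search(state, problem, path):
    if problem.goal_test(state):
        return []                                         # empty plan: already at a goal
    if state in path:
        return None                                       # cycle on the current path: fail
    for action in problem.actions(state):
        plan = and_search(problem.results(state, action), problem, [state] + path)
        if plan is not None:
            return [action, plan]                         # action followed by a conditional subplan
    return None

def and_search(states, problem, path):
    plan = {}
    for s in states:                                      # every possible outcome needs a subplan
        subplan = or_search(s, problem, path)
        if subplan is None:
            return None
        plan[s] = subplan
    return plan
```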

Consider the slippery vacuum world, which is identical to the ordinary (non-erratic) vacuum world except that movement actions sometimes fail, leaving the agent in the same location.

SEARCHING WITH PARTIAL OBSERVATIONS

  • Searching with no observation
  • Searching with observations
  • Solving partially observable problems
  • An agent for partially observable environments

The preceding definitions enable the automatic construction of the belief-state problem formulation from the definition of the underlying physical problem. For example, in the sensorless vacuum world, the initial belief state is {1, 2, 3, 4, 5, 6, 7, 8}, and we need to find an action sequence that works in all 8 states. Therefore, given this as the initial percept, the initial belief state for the local-sensing vacuum world will be {1, 3}.

The observation-prediction stage determines the set of percepts that could be observed in the predicted belief state.
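
A minimal sketch of the three-stage belief-state update (prediction, observation prediction, update), assuming results(s, a) gives the possible successor states of the underlying physical problem and percept(s) gives the percept received in state s:

```python
def predict(belief, action, results):
    # prediction: union of the states reachable from the belief state by the action
    return {s2 for s in belief for s2 in results(s, action)}

def possible_percepts(belief, percept):
    # observation prediction: percepts that could be observed in the predicted belief state
    return {percept(s) for s in belief}

def update(belief, observed, percept):
    # update: keep only the states consistent with the percept actually received
    return {s for s in belief if percept(s) == observed}
```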

ONLINE SEARCH AGENTS AND UNKNOWN ENVIRONMENTS

  • Online search problems
  • Online search agents
  • Online local search
  • Learning in online search

The cost is the total path cost of the path that the agent actually travels. In depth-first search, this means going back to the state from which the agent last entered the current state. H(s) starts out being just the heuristic estimate h(s) and is updated as the agent gains experience in the state space.
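
A minimal sketch of an LRTA*-style update of H(s), assuming the outcomes of actions from s are already known (result), with step costs c and the heuristic h as the fallback estimate for unvisited states:

```python
def lrta_star_update(s, H, h, actions, result, c):
    # one-step lookahead cost of taking action a from s under the current estimates
    def lookahead(a):
        s2 = result(s, a)
        return c(s, a, s2) + H.get(s2, h(s2))

    H[s] = min(lookahead(a) for a in actions(s))      # revise H(s) to the best lookahead value
    return min(actions(s), key=lookahead)             # act toward the currently best-looking successor
```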

In (a), the agent seems to be stuck in a flat local minimum at the shaded state.

SUMMARY

It was one of the first applications of computers; the simplex algorithm (Dantzig, 1949) is still used despite exponential worst-case complexity. The percept will be a list of the positions, relative to the agent, of the visible vertices. Modify the environment so that 30% of the time the agent ends up at an unintended destination (chosen randomly from the other visible vertices if any; otherwise, no move at all).

The figure shows an example of the agent successfully overcoming two consecutive movement errors and still reaching the goal.

5 ADVERSARIAL SEARCH

  • GAMES
  • OPTIMAL DECISIONS IN GAMES
    • The minimax algorithm
    • Optimal decisions in multiplayer games
  • ALPHA–BETA PRUNING
    • Move ordering
  • IMPERFECT REAL-TIME DECISIONS
    • Cutting off search
    • Forward pruning
  • STOCHASTIC GAMES
    • Evaluation functions for games of chance
  • PARTIALLY OBSERVABLE GAMES
    • Kriegspiel: Partially observable chess

The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. In other words, the value of the root, and hence the minimax decision, is independent of the values of the pruned leaves. Otherwise, an agent using the evaluation function might err even if it can look ahead all the way to the end of the game.
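
A minimal minimax sketch for this two-player, zero-sum setting, assuming a game object exposing actions, result, terminal_test, and utility (with utility taken from MAX's point of view):

```python
def minimax_decision(state, game):
    # MAX picks the action whose resulting state has the highest backed-up value
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))

def max_value(state, game):
    if game.terminal_test(state):
        return game.utility(state)
    return max(min_value(game.result(state, a), game) for a in game.actions(state))

def min_value(state, game):
    if game.terminal_test(state):
        return game.utility(state)
    return min(max_value(game.result(state, a), game) for a in game.actions(state))
```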

For example, suppose that our experience suggests that 72% of the states encountered in two-pawn vs.
