Artificial Intelligence


He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence. He is a member of the American Association for Artificial Intelligence and the Association for Computing Machinery.

1 INTRODUCTION

WHAT IS AI?

  • Acting humanly: The Turing Test approach
  • Thinking humanly: The cognitive modeling approach
  • Thinking rationally: The “laws of thought” approach
  • Acting rationally: The rational agent approach

For most of the book, however, we will accept the working hypothesis that perfect rationality is a good starting point for analysis. It simplifies the problem and provides an adequate setting for most of the foundational material in the field.

THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

  • Philosophy
  • Mathematics
  • Economics
  • Neuroscience
  • Psychology
  • Computer engineering
  • Linguistics

In 1943, the same group developed the Colossus, a powerful general-purpose machine based on vacuum tubes. The first operational programmable computer was the Z-3, invented by Konrad Zuse in Germany in 1941. Early work in cybernetics viewed goal-directed behavior as arising from a regulatory mechanism that tries to minimize "error", the difference between the current state and the goal state.

Figure 1.2 The parts of a nerve cell or neuron. Each neuron consists of a cell body, or soma, that contains a cell nucleus

THE HISTORY OF ARTIFICIAL INTELLIGENCE

  • The gestation of artificial intelligence (1943–1955)
  • The birth of artificial intelligence (1956)
  • Early enthusiasm, great expectations (1952–1969)
  • A dose of reality (1966–1973)
  • Knowledge-based systems: The key to power? (1969–1979)
  • AI becomes an industry (1980–present)
  • The return of neural networks (1986–present)
  • AI adopts the scientific method (1987–present)
  • The emergence of intelligent agents (1995–present)
  • The availability of very large data sets (2001–present)

Later systems also incorporate the central theme of McCarthy's Advice Taker approach: the clean separation of the knowledge (in the form of rules) from the reasoning component. SHRDLU was able to resolve ambiguity and understand pronoun references, but this was mainly because it was designed specifically for one domain: the blocks world.

Figure 1.4 A scene from the blocks world. SHRDLU (Winograd, 1972) has just completed the command “Find a block which is taller than the one you are holding and put it in the box.”

THE STATE OF THE ART

Work like this suggests that the "knowledge bottleneck" in AI—the problem of how to express all the knowledge a system needs—can be solved in many applications by learning methods rather than hand-coded knowledge engineering, provided the learning algorithms have enough data to go on (Halevy et al., 2009). Newsweek magazine described the match as "The brain's last stand." The value of IBM's stock rose by $18 billion.

SUMMARY

None of the computer scientists on the team speak Arabic, but they do understand statistics and machine learning algorithms. An insightful and comprehensive history of AI is provided by Nils Nilsson (2009), one of the early pioneers of the field.

2 INTELLIGENT AGENTS

AGENTS AND ENVIRONMENTS

Internally, the agent function of an artificial agent will be implemented by an agent program. A partial tabulation of this agent function is shown in Figure 2.3, and an agent program that implements it is shown in Figure 2.8 on page 48.
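
To make the distinction concrete, here is a minimal Python sketch (not the book's exact Figures 2.3 and 2.8) of a table-driven agent program for the two-square vacuum world; the table entries and the "NoOp" default are illustrative assumptions.

# A minimal sketch: a table-driven agent program for the two-square vacuum
# world, where the agent function is given explicitly as a lookup table from
# percept sequences to actions.

def make_table_driven_agent(table):
    """Return an agent program that looks up the full percept sequence in a table."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")  # default when the sequence is not tabulated

    return program

# Partial tabulation for percept sequences of length one and two.
vacuum_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = make_table_driven_agent(vacuum_table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right (looked up as the length-2 sequence)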

Figure 2.1 Agents interact with environments through sensors and actuators.
  • Rationality
  • Omniscience, learning, and autonomy

If the geography of the environment is unknown, the agent will have to explore it instead of sticking to squares A and B. To the extent that an agent relies on the prior knowledge of its designer rather than its own perceptions, we say that the agent lacks autonomy.

THE NATURE OF ENVIRONMENTS

  • Specifying the task environment
  • Properties of task environments

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the agent's percepts and actions. The known/unknown distinction refers not to the environment itself but to the agent's (or designer's) knowledge of the "physical laws" of the environment.

Figure 2.5 Examples of agent types and their PEAS descriptions.

THE STRUCTURE OF AGENTS

  • Agent programs
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Learning agents
  • How the components of agent programs work

Instead, the box labeled “how the world is now” (Figure 2.11) represents the agent's “best guess” (or sometimes best guesses). The critic is necessary because the percepts themselves provide no indication of the agent's success.
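
A minimal sketch of this internal-state bookkeeping, loosely following the structure of Figure 2.11; the helper names and the toy vacuum-world rules are illustrative assumptions rather than the book's code.

# A minimal sketch of a model-based reflex agent in the spirit of Figure 2.11.

def rule_match(state, rules):
    """Return the action of the first rule whose condition holds in the current state."""
    for condition, action in rules:
        if condition(state):
            return action
    return "NoOp"

def make_model_based_reflex_agent(update_state, rules):
    state, last_action = {}, None      # internal model: the agent's best guess about the world

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # fold the new percept into the model
        last_action = rule_match(state, rules)
        return last_action

    return program

# Toy usage: track the location and dirt status reported by the latest percept.
def update_state(state, last_action, percept):
    location, status = percept
    return {**state, "loc": location, "status": status}

rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["loc"] == "A", "Right"),
    (lambda s: s["loc"] == "B", "Left"),
]

agent = make_model_based_reflex_agent(update_state, rules)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right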

Figure 2.9 Schematic diagram of a simple reflex agent.

SUMMARY

Discuss possible agent designs for the cases in which clean squares can become dirty and the geography of the environment is unknown. How do your answers change if the agent's percepts give it the clean/dirty status of every square in the environment?
2.11 Consider a modified version of the vacuum environment in Exercise 2.9, in which the geography of the environment (its extent, boundaries, and obstacles) is unknown, as is the initial dirt configuration. How is your agent program affected if the dirt sensor gives the wrong answer 10% of the time?

3 SOLVING PROBLEMS BY SEARCHING

PROBLEM-SOLVING AGENTS

  • Well-defined problems and solutions: A problem can be defined formally by five components.
  • Formulating problems

In that case, it makes sense for the agent to adopt the goal of getting to Bucharest. We will assume that the environment is known, so the agent knows which states are reached by each action. For the agent trying to get to Bucharest, time is of the essence, so the cost of a path can be its length in kilometers.
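
As a hedged illustration of the five-component formulation (initial state, actions, transition model, goal test, path cost), the sketch below encodes a small fragment of the Romania route-finding problem; the class and method names are assumptions, while the road distances follow the book's map.

# A sketch of a search problem as five components, for a fragment of the
# Romania road map (distances in kilometers).

ROADS = {  # state -> {action/neighbor: step cost}
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Fagaras": 99, "Rimnicu Vilcea": 80, "Arad": 140},
    "Fagaras":        {"Bucharest": 211, "Sibiu": 99},
    "Rimnicu Vilcea":  {"Pitesti": 97, "Sibiu": 80},
    "Pitesti":        {"Bucharest": 101, "Rimnicu Vilcea": 97},
    "Bucharest":      {},
}

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):                  # actions available in a state
        return list(self.roads.get(state, {}))

    def result(self, state, action):           # transition model: drive to a neighboring city
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, result):
        return self.roads[state][action]

problem = RouteProblem("Arad", "Bucharest", ROADS)
print(problem.actions("Arad"))                 # -> ['Sibiu', 'Timisoara', 'Zerind']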

Figure 3.1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time.

EXAMPLE PROBLEMS

  • Toy problems
  • Real-world problems

The goal is to reach a specified goal state, such as the one shown on the right of the figure. The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down. Checking a step in the sequence for feasibility may mean undoing some of the work already done.
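
A minimal sketch of this formulation, assuming a state is a 9-tuple read row by row with 0 standing for the blank; the function names are illustrative.

# 8-puzzle formulation: actions move the blank Left, Right, Up, or Down.

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    """Legal blank moves on a 3x3 board."""
    i = state.index(0)
    legal = []
    if i % 3 > 0: legal.append("Left")
    if i % 3 < 2: legal.append("Right")
    if i >= 3:    legal.append("Up")
    if i <= 5:    legal.append("Down")
    return legal

def result(state, action):
    """Swap the blank with the tile it moves onto."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(actions(start))          # -> ['Left', 'Right', 'Up', 'Down']
print(result(start, "Up"))     # -> (7, 0, 4, 5, 2, 6, 8, 3, 1)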

Figure 3.3 The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck.

SEARCHING FOR SOLUTIONS

  • Infrastructure for search algorithms
  • Measuring problem-solving performance

The frontier (white nodes) always separates the explored region of the state space (black nodes) from the unexplored region (gray nodes). In (c), the other successors of the root have been expanded in clockwise order. Every path from the initial state to an unexplored state must pass through a state in the frontier. Search algorithms require a data structure to keep track of the search tree that is being constructed.
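
A hedged sketch of that bookkeeping, assuming the problem interface used in the earlier route-finding sketch: each node records its state, parent, generating action, and path cost, and a FIFO frontier plus an explored set drives a simple graph search.

# A minimal node data structure and graph search (FIFO queue, i.e. breadth-first order).

from collections import deque

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent, self.action, self.path_cost = state, parent, action, path_cost

    def solution(self):
        """Recover the action sequence by walking parent pointers back to the root."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def graph_search(problem):
    frontier = deque([Node(problem.initial)])   # the frontier separates explored from unexplored
    explored = set()
    while frontier:
        node = frontier.popleft()
        if problem.goal_test(node.state):
            return node.solution()
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = problem.result(node.state, action)
            if child not in explored and all(n.state != child for n in frontier):
                cost = node.path_cost + problem.step_cost(node.state, action, child)
                frontier.append(Node(child, node, action, cost))
    return None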

Figure 3.6 Partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.

UNINFORMED SEARCH STRATEGIES

  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening depth-first search
  • Bidirectional search
  • Comparing uninformed search strategies

It shows, for different values of the solution depth d, the time and memory required for a breadth-first search with branching factor b = 10. For example, in Figure 3.16, depth-first search will explore the entire left subtree even if node C is a goal node. For the remainder of this section, we focus primarily on the tree-search version of depth-first search.
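
A small worked sketch of the kind of table being described; the figures of one million nodes generated per second and 1000 bytes per node are assumptions used for illustration.

# Nodes generated by breadth-first search grow roughly as b + b^2 + ... + b^d.

def bfs_cost(b, d, nodes_per_sec=1e6, bytes_per_node=1000):
    nodes = sum(b ** i for i in range(1, d + 1))
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

for d in (2, 4, 6, 8):
    nodes, seconds, mem = bfs_cost(10, d)
    print(f"d={d}: {nodes:.2g} nodes, {seconds:.2g} s, {mem:.2g} bytes")
# At depth 8 this is about 10^8 nodes, on the order of two minutes of time and
# more than 100 gigabytes of memory: memory, not time, is the binding constraint.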

Figure 3.11 Breadth-first search on a graph.

INFORMED (HEURISTIC) SEARCH STRATEGIES

  • Greedy best-first search
  • A* search: Minimizing the total estimated solution cost
  • Memory-bounded heuristic search
  • Learning to search better

The absolute error of a heuristic is defined as Δ ≡ h∗ − h, where h∗ is the actual cost of getting from the root to the goal, and the relative error is defined as ǫ ≡ (h∗ − h)/h∗. The complexity results depend strongly on the assumptions made about the state space. When the state space has many goal states, particularly states close to the optimal goal, the search process can be led astray from the optimal path, and there is an extra cost proportional to the number of goals whose cost is within a factor ǫ of the optimal cost. As the recursion unwinds, RBFS replaces the f-value of each node along the path with the backed-up value: the best f-value of its children.
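
Because this discussion revolves around the f = g + h values that A* maintains, here is a minimal A* sketch, assuming the problem interface from the earlier route-finding example and an admissible heuristic h; it is not the book's pseudocode.

# A* search: expand the frontier node with the lowest f = g + h.

import heapq

def astar(problem, h):
    frontier = [(h(problem.initial), 0, problem.initial, [])]   # (f, g, state, path)
    best_g = {problem.initial: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path, g
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):             # keep only the cheapest known path
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [action]))
    return None, float("inf")

With the Romania fragment above and the straight-line distances of Figure 3.22 as h, this search finds the 418 km route through Rimnicu Vilcea and Pitesti rather than the 450 km route through Fagaras.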

Figure 3.22 Values of hSLD—straight-line distances to Bucharest.

HEURISTIC FUNCTIONS

  • The effect of heuristic accuracy on performance
  • Generating admissible heuristics from relaxed problems
  • Generating admissible heuristics from subproblems: Pattern databases. Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem.
  • Learning heuristics from experience

The answer is: “Basically, yes.” From the definitions of the two heuristics it is easy to see that, for any node n, h2(n) ≥ h1(n). It is then easy to see that the sum of the two costs is still a lower bound on the cost of solving the entire problem. Each example consists of a state from the solution path and the actual cost of the solution from that point.
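
A sketch of the two heuristics being compared, using the 9-tuple encoding assumed in the earlier 8-puzzle sketch: h1 counts misplaced tiles and h2 sums Manhattan (city-block) distances, so h2(n) ≥ h1(n) for every node.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal position."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(start), h2(start))    # -> 8 18  (h2 dominates h1 on this instance)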

Figure 3.28 A typical instance of the 8-puzzle. The solution is 26 steps long.

SUMMARY

Give an upper bound on the total size of the state space defined by your formulation. Write a problem generator for TSP instances where cities are represented by random points in the unit square.
Figure: the first level of the AND–OR search tree for a problem in the local-sensing vacuum world; the first step of the solution is shown.
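
A small sketch of the kind of generator the exercise asks for: n cities as random points in the unit square, together with a helper that measures the length of a closed tour; the function names are illustrative.

import math
import random

def random_tsp_instance(n, seed=None):
    """Generate n cities as random points in the unit square."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def tour_length(cities, tour):
    """Total length of a closed tour visiting the cities in the given order."""
    total = 0.0
    for i in range(len(tour)):
        (x1, y1), (x2, y2) = cities[tour[i]], cities[tour[(i + 1) % len(tour)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

cities = random_tsp_instance(6, seed=0)
print(tour_length(cities, list(range(6))))   # length of the identity-order tour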

Figure 3.31 A scene with polygonal obstacles. S and G are the start and goal states.

An online depth-first search agent stores its map in a table, RESULT[s, a], which records the state resulting from executing action a in state s. Whenever some action from the current state has not yet been tried, the agent tries that action. The difficulty comes when the agent has tried all the actions in a state. In offline depth-first search, the state is simply dropped from the queue; in an online search, the agent has to backtrack physically. In depth-first search, this means going back to the state from which the agent most recently entered the current state. To achieve this, the algorithm keeps a table that lists, for each state, the predecessor states to which the agent has not yet backtracked. Using the maze given in the figure, it is easy to see that in the worst case the agent will end up traversing each link in the state space exactly twice; for exploration, this is optimal. When the environment is partially observable, a belief state represents the set of possible states the agent might be in.
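
A hedged sketch of that exploration scheme, assuming reversible actions (as in a maze): the agent records RESULT[s, a] as it goes, keeps a list of untried actions per state, and physically backtracks when a state is exhausted. The helper names are illustrative, and this is not the book's exact agent.

def make_online_dfs_agent(actions_fn, goal_test):
    result = {}          # (state, action) -> observed successor state
    untried = {}         # state -> actions not yet tried there
    unbacktracked = {}   # state -> predecessor states not yet backtracked to, most recent first
    prev = {"s": None, "a": None}

    def program(s_now):
        if goal_test(s_now):
            return None                                  # stop
        if s_now not in untried:
            untried[s_now] = list(actions_fn(s_now))
        if prev["s"] is not None:
            result[(prev["s"], prev["a"])] = s_now
            unbacktracked.setdefault(s_now, []).insert(0, prev["s"])
        if untried[s_now]:
            action = untried[s_now].pop()
        elif unbacktracked.get(s_now):
            target = unbacktracked[s_now].pop(0)
            # Backtrack: reuse an already-tried action known to lead back to the predecessor
            # (all actions here have been tried, so such a result entry exists if actions reverse).
            action = next(a for (s, a), s2 in result.items() if s == s_now and s2 == target)
        else:
            return None                                  # nothing left to explore
        prev["s"], prev["a"] = s_now, action
        return action

    return program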

5 ADVERSARIAL SEARCH

GAMES

A zero-sum game is (confusingly) defined as one where the total payoff to all players is the same for every instance of the game. But regardless of the size of the game tree, it is MAX's job to search for a good move. We show part of the tree, alternating between moves by MIN (O) and MAX (X), until we eventually reach terminal states, which can be assigned utilities according to the rules of the game.

OPTIMAL DECISIONS IN GAMES

  • The minimax algorithm
  • Optimal decisions in multiplayer games

The △ nodes are “MAX nodes,” in which it is MAX's turn to move, and the ▽ nodes are “MIN nodes.” The terminal nodes show the utility values for MAX; the other nodes are labeled with their minimax values. MAX's best move at the root is a1, because it leads to the state with the highest minimax value, and MIN's best reply is b1, because it leads to the state with the lowest minimax value. The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. The recursion proceeds all the way down to the leaves of the tree, and then the minimax values are backed up through the tree as the recursion unwinds.
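
A minimal sketch of this computation over an explicit game tree given as nested dictionaries, with leaves holding utilities for MAX; the leaf values below follow the two-ply example of Figure 5.2.

def minimax_value(node, maximizing=True):
    if not isinstance(node, dict):          # a terminal node: its utility for MAX
        return node
    values = (minimax_value(child, not maximizing) for child in node.values())
    return max(values) if maximizing else min(values)

def minimax_decision(root):
    """Choose MAX's move leading to the successor with the highest minimax value."""
    return max(root, key=lambda move: minimax_value(root[move], maximizing=False))

# MAX to move at the root; each move leads to a MIN node with three leaves.
tree = {"a1": {"b1": 3, "b2": 12, "b3": 8},
        "a2": {"c1": 2, "c2": 4, "c3": 6},
        "a3": {"d1": 14, "d2": 5, "d3": 2}}
print(minimax_decision(tree))   # -> a1 (the minimax values of a1, a2, a3 are 3, 2, 2)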

Figure 5.2 A two-ply game tree. The △ nodes are “MAX nodes,” in which it is MAX's turn to move, and the ▽ nodes are “MIN nodes.” The terminal nodes show the utility values for MAX; the other nodes are labeled with their minimax values.

ALPHA–BETA PRUNING

  • Move ordering

At each point, we show the range of possible values for each node. (a) The first leaf below B has the value 3. Hence B, which is a MIN node, has a value of at most 3. (b) The second leaf below B has a value of 12; MIN would avoid this move, so the value of B is still at most 3. (c) The third leaf below B has a value of 8; we have now seen all of B's successor states, so the value of B is exactly 3. Now we can infer that the value of the root is at least 3, because MAX has a choice worth 3 at the root. This is an example of alpha-beta pruning. (e) The first leaf below D has the value 14, so D is worth at most 14.
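
A hedged sketch of alpha–beta pruning over the same nested-dictionary tree as in the minimax sketch above; it computes the same minimax value while skipping branches that cannot affect the decision.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, dict):              # terminal node: utility for MAX
        return node
    if maximizing:
        value = float("-inf")
        for child in node.values():
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                   # beta cutoff: MIN will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node.values():
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                   # alpha cutoff: MAX already has something better
                break
        return value

tree = {"a1": {"b1": 3, "b2": 12, "b3": 8},
        "a2": {"c1": 2, "c2": 4, "c3": 6},
        "a3": {"d1": 14, "d2": 5, "d3": 2}}
print(alphabeta(tree))   # -> 3; the last two leaves under a2 are never examined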

Figure 5.5 Stages in the calculation of the optimal decision for the game tree in Figure 5.2.

IMPERFECT REAL-TIME DECISIONS

  • Cutting off search
  • Forward pruning
  • Search versus lookup

For example, suppose our experience suggests that 72% of the states encountered in the two-pawns-versus-one-pawn category lead to a win. In a weighted linear evaluation function, the feature values are simply added up to obtain a score for the position. Most of Black's moves will lead to the bishop eventually being captured and will therefore be labeled as "bad" moves.
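
A sketch of the weighted linear evaluation idea the passage alludes to, Eval(s) = w1 f1(s) + w2 f2(s) + ...; the material values are the conventional chess figures, and the position encoding is an assumption.

# Conventional material values (in pawns) for each piece type.
MATERIAL = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(position):
    """position: dict mapping piece letter -> (white_count, black_count)."""
    return sum(MATERIAL[p] * (w - b) for p, (w, b) in position.items())

def evaluate(position, features, weights):
    """Weighted linear evaluation: sum of weight_i * feature_i(position)."""
    return sum(w * f(position) for f, w in zip(features, weights))

position = {"P": (8, 8), "N": (2, 1), "B": (2, 2), "R": (2, 2), "Q": (1, 1)}
print(evaluate(position, [material_balance], [1.0]))   # -> 3.0 (White is up a knight)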

Figure 5.8 Two chess positions that differ only in the position of the rook at lower right.

STOCHASTIC GAMES

  • Evaluation functions for games of chance

Instead, we can only calculate the expected value of a position: the average over all possible outcomes of the chance nodes. It turns out that, to avoid this sensitivity, the evaluation function should be a positive linear transformation of the probability of winning from a position (or, more generally, of the expected utility of the position). But if we put bounds on the possible values of the utility function, then we can arrive at bounds for the average without looking at every number.
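
A minimal expectiminimax sketch of this expected-value computation: chance nodes average their children's values weighted by probability; the tagged-tuple tree encoding is an illustrative assumption.

def expectiminimax(node):
    kind = node[0]
    if kind == "leaf":                        # ("leaf", utility)
        return node[1]
    if kind == "max":                         # ("max", [child, ...])
        return max(expectiminimax(c) for c in node[1])
    if kind == "min":
        return min(expectiminimax(c) for c in node[1])
    if kind == "chance":                      # ("chance", [(probability, child), ...])
        return sum(p * expectiminimax(c) for p, c in node[1])
    raise ValueError(kind)

# A chance node: 90% chance of reaching a position worth 2, 10% worth -1.
tree = ("max", [("chance", [(0.9, ("leaf", 2)), (0.1, ("leaf", -1))]),
                ("leaf", 1.5)])
print(expectiminimax(tree))   # -> 1.7 = 0.9*2 + 0.1*(-1), which beats the sure 1.5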

Figure 5.11 Schematic game tree for a backgammon position.

PARTIALLY OBSERVABLE GAMES

  • Kriegspiel: Partially observable chess
  • Card games

Sometimes a checkmate strategy works for some of the board states in the current belief state, but not others. Evaluation functions are similar to those of the observable game, but include a component for the size of the belief state—smaller is better. One branch of the fork leads to a bigger pile of gold, but take the wrong fork and you'll get hit by a bus.

Figure 5.13 Part of a guaranteed checkmate in the KRK endgame, shown on a reduced board

STATE-OF-THE-ART GAME PROGRAMS

