KIK614303
Artificial Intelligence
What is AI?
The science of making machines that:
Think like people
Act like people
Think rationally
Act rationally
Purposes of AI
• To build models of (or replicate) human cognition
  • Psychology, neuroscience, cognitive science: the brain is tricky
• To build useful intelligent artifacts
  • Engineering
• To create and understand intelligence as a general property of systems
  • Rationality within computational limitations
Rationality
• Maximally achieving pre-defined goals
• Goals are expressed in terms of the utility of outcomes
• Being rational means maximizing your expected utility
History of AI
1940-50: Early days
  1943: McCulloch & Pitts: Boolean circuit model of the brain
  1950: Turing's “Computing Machinery and Intelligence”
1950-70: Excitement: Look, Ma, no hands!
  1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
  1956: Dartmouth meeting: “Artificial Intelligence” adopted
  1965: Robinson's complete algorithm for logical reasoning
  1966-69: Failure of naïve machine translation and learning methods
1970-90: Knowledge-based approaches
  1969-79: Early development of knowledge-based systems
  1980-88: Expert systems industry booms
  1988-93: Expert systems industry busts: “AI Winter”
1990-: Statistical approaches
  Resurgence of probability, focus on uncertainty
  General increase in technical depth
  Agents and learning systems… “AI Spring”?
Intelligent Agent
• An agent is an entity that perceives and acts.
• A rational agent selects actions that maximize its (expected) utility.
• Characteristics of the percepts, environment, and action space dictate techniques for selecting rational actions.
Intelligent Agents
• What is an agent?
• What makes an agent rational?
Key points:
• Performance measure
• Actions
• Percept sequence
Agents and Environments
• An agent is anything that can perceive its environment through sensors and act upon that environment through actuators.
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
• Robotic agent: cameras and microphones for sensors; various motors for actuators.
[Figure: an agent receives percepts from the environment through its sensors and acts on the environment through its actuators]
Environments
• To design an agent we must specify its task environment.
• PEAS description of the task environment:
  • Performance
  • Environment
  • Actuators
  • Sensors
Environment Types
• Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current state and the action executed by the agent. (If the environment is
deterministic except for the actions of other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
Environment Types
• Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
• Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.
Reflex Agents
• Select actions on the basis of only the current percept.
  – E.g., the vacuum-agent
• Large reduction in possible percept/action situations (next page).
• Implemented through condition-action rules.
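The condition-action idea can be sketched for the two-location vacuum world mentioned above. This is a minimal illustration, not a fixed API; the location names "A"/"B" and the action strings are assumptions.

```python
# A condition-action reflex agent for a two-square vacuum world.
# The agent consults ONLY the current percept (location, status);
# it keeps no history and has no model of the environment.

def reflex_vacuum_agent(location, status):
    """Return an action based solely on the current percept."""
    if status == "Dirty":      # condition: current square is dirty
        return "Suck"          # action: clean it
    if location == "A":        # condition: square A is clean
        return "Right"         # action: move toward square B
    return "Left"              # condition: square B is clean -> move to A

# Example percept-action pairs:
print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("A", "Clean"))  # Right
print(reflex_vacuum_agent("B", "Clean"))  # Left
```

Each `if` branch is one condition-action rule; the whole policy is a lookup on the current percept, which is why reflex agents need no memory.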
Goal-based Agents
• The agent needs a goal to know which situations are desirable.
  – Things become difficult when long sequences of actions are required to find the goal.
• Typically investigated in search and planning research.
Learning Agents
• Learning element: introduces improvements in the performance element.
  – The critic provides feedback on the agent's performance based on a fixed performance standard.
• Performance element: selects actions based on percepts.
  – Corresponds to the previous agent programs.
• Problem generator: suggests actions that will lead to new and informative experiences.
Problem Solving
• It is possible to convert difficult goals into one or more easier-to-achieve subgoals.
• Using the problem-reduction method, you generally recognize goals and convert them into appropriate subgoals.
Problem Reduction Method
Example Problem
∫ x^4 / (1 - x^2)^(5/2) dx

Standard forms:
I. ∫ (1/x) dx = ln x
II. ∫ x^n dx = x^(n+1)/(n+1)
III. ∫ cos x dx = sin x

Algorithmic transformations:
1. ∫ -f(x) dx = -∫ f(x) dx
2. ∫ c·f(x) dx = c·∫ f(x) dx
3. ∫ Σᵢ fᵢ(x) dx = Σᵢ ∫ fᵢ(x) dx
4. ∫ Pₘ(x)/Qₙ(x) dx with m ≥ n: first divide Pₘ(x) by Qₙ(x)

Heuristic transformations:
A. f(sin x, cos x, tan x, cot x, sec x, csc x) ≈ g₁(sin x, cos x) ≈ g₂(tan x, csc x) ≈ g₃(cot x, sec x)
B. ∫ f(tan x) dx = ∫ f(y)/(1 + y^2) dy, substituting y = tan x
C. For integrands containing 1 - x^2: substitute x = sin y [sin^2 y + cos^2 y = 1]
D. For integrands containing 1 + x^2: substitute x = tan y [sec^2 y - tan^2 y = 1]
Depth-First Search
[Figure: DFS expansion order on the example search graph]
Strategy: expand a deepest node first
Implementation: fringe is a LIFO stack
Search Algorithm Properties
• Complete: Guaranteed to find a solution if one exists?
• Optimal: Guaranteed to find the least-cost path?
• Time complexity?
• Space complexity?
• Cartoon of search tree:
  • b is the branching factor
  • m is the maximum depth
  • solutions at various depths
• Number of nodes in the entire tree?
  • 1 + b + b^2 + … + b^m = O(b^m)
[Figure: search tree with 1 node at the root, b nodes at depth 1, b^2 nodes at depth 2, …, b^m nodes at depth m]
Depth-First Search (DFS) Properties
• What nodes does DFS expand?
  • Some left prefix of the tree; it could process the whole tree!
  • If m is finite, takes time O(b^m)
• How much space does the fringe take?
  • Only has siblings on the path to the root, so O(bm)
• Is it complete?
  • m could be infinite, so only if we prevent cycles (more later)
• Is it optimal?
  • No, it finds the “leftmost” solution, regardless of depth or cost
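The LIFO-stack strategy above can be sketched in a few lines. The graph, node names, and path-based cycle check are illustrative assumptions, not part of the slides.

```python
# Depth-first tree search: the fringe is a LIFO stack of partial
# paths, so the deepest (most recently generated) path is expanded
# first.  Cycles are avoided by never revisiting a node on the
# current path.

def depth_first_search(graph, start, goal):
    """Return some path from start to goal, or None if none exists."""
    stack = [[start]]                      # fringe holds partial paths
    while stack:
        path = stack.pop()                 # LIFO: deepest path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in path:          # prevent cycles on this path
                stack.append(path + [child])
    return None

graph = {"S": ["a", "b"], "a": ["c"], "b": ["d"], "c": ["G"], "d": ["G"]}
print(depth_first_search(graph, "S", "G"))  # ['S', 'b', 'd', 'G']
```

Note that DFS returns the first goal path it stumbles on (here via b), not the shallowest or cheapest one, matching the "not optimal" property above.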
Breadth-First Search
[Figure: BFS expansion order, by search tiers, on the example search graph]
Strategy: expand a shallowest node first
Implementation: fringe is a FIFO queue
Breadth-First Search (BFS) Properties
• What nodes does BFS expand?
  • Processes all nodes above the shallowest solution
  • Let the depth of the shallowest solution be s
  • Search takes time O(b^s)
• How much space does the fringe take?
  • Has roughly the last tier, so O(b^s)
• Is it complete?
  • s must be finite if a solution exists, so yes!
• Is it optimal?
  • Only if costs are all 1 (more on costs later)
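The FIFO-queue version differs from DFS only in the fringe discipline. A minimal sketch, with an illustrative graph; the `visited` set is the usual extra bookkeeping that keeps BFS from re-expanding nodes.

```python
from collections import deque

# Breadth-first search: the fringe is a FIFO queue of partial paths,
# so the shallowest path is expanded first and the first goal found
# is a shallowest one.

def breadth_first_search(graph, start, goal):
    """Return a shallowest path from start to goal, or None."""
    frontier = deque([[start]])            # FIFO fringe of partial paths
    visited = {start}                      # never queue a node twice
    while frontier:
        path = frontier.popleft()          # FIFO: shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

graph = {"S": ["a", "b"], "a": ["c", "G"], "b": ["d"], "c": [], "d": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # ['S', 'a', 'G']
```

Here BFS finds the two-step path through a even though a deeper path through b also reaches G, matching the "shallowest first" property above.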
Uniform Cost Search
[Figure: UCS expansion order with cumulative path costs on the example search graph]
Strategy: expand a cheapest node first; the fringe is a priority queue (priority: cumulative cost)
Uniform Cost Search (UCS) Properties
• What nodes does UCS expand?
  • Processes all nodes with cost less than the cheapest solution!
  • If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
  • Takes time O(b^(C*/ε)) (exponential in effective depth)
• How much space does the fringe take?
  • Has roughly the last tier, so O(b^(C*/ε))
• Is it complete?
  • Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
• Is it optimal?
  • Yes! (Proof next lecture via A*)
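The cheapest-first strategy can be sketched with a binary-heap priority queue. The weighted graph below is illustrative; the `best` table implements the standard trick of skipping nodes already expanded at a cheaper cost.

```python
import heapq

# Uniform cost search: the fringe is a priority queue ordered by
# cumulative path cost g(n), so the cheapest partial path is
# expanded first; the first goal popped is therefore optimal.

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) for a cheapest path, or None."""
    fringe = [(0, [start])]                    # (cumulative cost, path)
    best = {}                                  # cheapest expansion per node
    while fringe:
        cost, path = heapq.heappop(fringe)     # cheapest node first
        node = path[-1]
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue                           # already expanded cheaper
        best[node] = cost
        for child, step in graph.get(node, []):
            heapq.heappush(fringe, (cost + step, path + [child]))
    return None

# Edges: S-a costs 1, S-b costs 4, a-b costs 1, b-G costs 1.
graph = {"S": [("a", 1), ("b", 4)], "a": [("b", 1)], "b": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))  # (3, ['S', 'a', 'b', 'G'])
```

UCS takes the cost-3 detour through a rather than the direct cost-4 edge to b, which BFS (counting steps, not costs) would have preferred.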
Route Finding Problem
[Figure: a route-finding map showing the start state, the goal state, and the actions connecting the states]
Flowchart of search algorithms
1. Initialize the queue with the initial state.
2. If the queue is empty, return failure.
3. Otherwise, remove the first node from the queue.
4. If this node is a goal, return it as the solution.
5. Otherwise, generate its children, add them to the queue according to some strategy, and go to step 2.
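The flowchart above can be transcribed almost line for line into code; only the insertion strategy changes between algorithms. A sketch under illustrative assumptions (the graph, the `strategy` flag, and parent-pointer path reconstruction are not from the slides):

```python
from collections import deque

# Generic search loop following the flowchart: the queue discipline
# alone determines the algorithm.  FIFO insertion gives breadth-first
# search; LIFO insertion gives depth-first search.

def generic_search(graph, start, goal, strategy="fifo"):
    """Search graph from start to goal; return a path or None."""
    queue = deque([start])                 # 1. initialize with initial state
    parents = {start: None}                # remembers how each node was reached
    while queue:                           # 2. fail when the queue is empty
        node = queue.popleft()             # 3. remove the first node
        if node == goal:                   # 4. goal test
            path = []
            while node is not None:        # walk parent pointers back to start
                path.append(node)
                node = parents[node]
            return path[::-1]
        for child in graph.get(node, []):  # 5. generate children and add them
            if child not in parents:       #    according to the strategy
                parents[child] = node
                if strategy == "fifo":
                    queue.append(child)      # back of queue  -> BFS
                else:
                    queue.appendleft(child)  # front of queue -> DFS

    return None                            # queue empty: return fail

graph = {"S": ["a", "b"], "a": ["G"], "b": []}
print(generic_search(graph, "S", "G"))  # ['S', 'a', 'G']
```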
Searching with a Search Tree
• Search:
• Expand out potential plans (tree nodes)
• Maintain a frontier of partial plans under consideration
Heuristic Search
Def.: A search heuristic h(n) is an estimate of the cost of the optimal (cheapest) path from node n to a goal node.
[Figure: nodes n1, n2, n3 with estimated costs-to-goal h(n1), h(n2), h(n3)]
• h can be extended to paths: h(⟨n₀, …, nₖ⟩) = h(nₖ)
A* Search
• Idea: avoid expanding paths that are already expensive
• The evaluation function f(n) is the estimated total cost of the path through node n to the goal:
  f(n) = g(n) + h(n)
• g(n): cost so far to reach n (path cost)
• h(n): estimated cost from n to the goal (the heuristic)
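A* is UCS with the fringe ordered by f(n) = g(n) + h(n) instead of g(n) alone. A sketch under illustrative assumptions: the small weighted graph and heuristic table are made up (the heuristic is admissible, i.e. it never overestimates the remaining cost, which is what the optimality claim below relies on).

```python
import heapq

# A* search: order the fringe by f(n) = g(n) + h(n), where g is the
# cost accumulated so far and h is the heuristic estimate to the goal.

def a_star(graph, h, start, goal):
    """Return (cost, path) for a cheapest path, or None."""
    fringe = [(h[start], 0, [start])]          # (f, g, path)
    best_g = {}                                # cheapest g per expanded node
    while fringe:
        f, g, path = heapq.heappop(fringe)     # lowest f = g + h first
        node = path[-1]
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue                           # already expanded cheaper
        best_g[node] = g
        for child, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(fringe, (g2 + h[child], g2, path + [child]))
    return None

# Edges: S-a costs 1, S-b costs 2, a-G costs 4, b-G costs 2.
graph = {"S": [("a", 1), ("b", 2)], "a": [("G", 4)], "b": [("G", 2)]}
h = {"S": 3, "a": 4, "b": 2, "G": 0}           # admissible estimates
print(a_star(graph, h, "S", "G"))  # (4, ['S', 'b', 'G'])
```

Here the heuristic steers A* straight through b (f stays at 4) and the expensive path through a (f = 5) is never expanded, illustrating "avoid expanding paths that are already expensive".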
Properties of A*
• Complete?
  • Yes, unless there are infinitely many nodes with f(n) ≤ C*
• Optimal?
  • Yes
• Time?
  • Number of nodes for which f(n) ≤ C* (exponential)
• Space?
  • Exponential
Route Finding
[Figure: straight-line-distance heuristic values annotated on the route-finding map]
A* search example
Start: Arad
[Figure: successive steps of the A* expansion starting from Arad]
Remarks : Problem solving by search
Toy Problems
• Vacuum world
• The N-Queens problem
• Rubik's cube
• The 8-Puzzle
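The 8-Puzzle above can be cast as a search problem with nothing more than a state encoding and a successor function; the sketch below (state as a 9-tuple with 0 for the blank) is one illustrative encoding that plugs into any of the search routines from earlier slides.

```python
# The 8-puzzle as a search problem: a state is a 3x3 board flattened
# into a 9-tuple, with 0 marking the blank; an action slides the
# blank one step up, down, left, or right.

def successors(state):
    """Yield the states reachable by sliding the blank one step."""
    i = state.index(0)                     # blank position, 0..8
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append(i - 3)        # blank moves up
    if row < 2: moves.append(i + 3)        # blank moves down
    if col > 0: moves.append(i - 1)        # blank moves left
    if col < 2: moves.append(i + 1)        # blank moves right
    for j in moves:
        s = list(state)
        s[i], s[j] = s[j], s[i]            # swap blank with a neighbour tile
        yield tuple(s)

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)       # one slide away from the goal
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(goal in set(successors(start)))      # True
```

States are tuples (hashable), so they can go directly into the `visited` sets and priority queues of the BFS/UCS/A* sketches.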
Real-World Problems
• Touring problems
• Route Finding
• Travelling salesperson
• VLSI Layout
• Robot navigation