
(1)

Chapter 9

Greedy Technique

(2)

Greedy Technique

Constructs a solution to an optimization problem piece by piece through a sequence of choices that are:

- feasible, i.e., satisfying the constraints
- locally optimal (with respect to some neighborhood definition)
- greedy (in terms of some measure), and irrevocable

For some problems, it yields a globally optimal solution for every instance. For most, it does not, but it can be useful for fast approximations. We are mostly interested in the former case in this class.

An optimization problem is defined by an objective function and a set of constraints.

(3)

Applications of the Greedy Strategy

Optimal solutions:
- change making for “normal” coin denominations
- minimum spanning tree (MST)
- single-source shortest paths
- simple scheduling problems
- Huffman codes

Approximations/heuristics:
- traveling salesman problem (TSP)
- knapsack problem
- other combinatorial optimization problems

(4)

Change-Making Problem

Given unlimited amounts of coins of denominations d1 > … > dm, give change for amount n with the least number of coins.

Example: d1 = 25c, d2 = 10c, d3 = 5c, d4 = 1c and n = 48c

Greedy solution: <1, 2, 0, 3>, i.e., one quarter, two dimes, no nickels, and three pennies.

The greedy solution is
- optimal for any amount and a “normal” set of denominations
- possibly not optimal for arbitrary coin denominations: for example, with d1 = 25c, d2 = 10c, d3 = 1c and n = 30c, greedy gives 25c + 5 × 1c (six coins), while 10c + 10c + 10c uses only three.

Ex: Prove the greedy algorithm is optimal for the above denominations.

Q: What are the objective function and constraints?
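
A minimal sketch of the greedy change-maker (the function and variable names here are illustrative, not from the text):

```python
def greedy_change(denominations, amount):
    """Greedy change-making: repeatedly take the largest coin that fits.

    Assumes denominations are sorted in decreasing order, e.g. [25, 10, 5, 1].
    Returns a count per denomination; optimal only for "normal" coin systems.
    """
    counts = []
    for d in denominations:
        counts.append(amount // d)  # take as many of this coin as possible
        amount %= d                 # remainder handled by smaller coins
    return counts

print(greedy_change([25, 10, 5, 1], 48))  # [1, 2, 0, 3] -- optimal here
print(greedy_change([25, 10, 1], 30))     # [1, 0, 5]: 6 coins, but 3 dimes suffice
```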

(5)

Minimum Spanning Tree (MST)

Spanning tree of a connected graph G: a connected acyclic subgraph of G that includes all of G’s vertices.

Minimum spanning tree of a weighted, connected graph G: a spanning tree of G of the minimum total weight.

Example:

[Figure: a weighted graph on vertices a, b, c, d with edge weights 1, 2, 3, 4, 6, shown alongside two of its spanning trees: one of total weight 6 (edge weights 1, 2, 3), which is minimal, and one of total weight 11 (edge weights 1, 4, 6).]

(6)

Prim’s MST algorithm

- Start with tree T1 consisting of one (any) vertex and “grow” the tree one vertex at a time to produce the MST through a series of expanding subtrees T1, T2, …, Tn
- On each iteration, construct Ti+1 from Ti by adding the vertex not in Ti that is closest to those already in Ti (this is the “greedy” step!)
- Stop when all vertices are included (see the sketch after this list)
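
A minimal sketch of Prim’s algorithm with a min-heap of fringe edges, assuming an adjacency-dict graph representation (the sample graph is illustrative):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm: grow the tree by the closest fringe vertex.

    `graph` maps each vertex to a dict of {neighbor: weight}.
    Returns the list of MST edges. Assumes the graph is connected.
    """
    in_tree = {start}
    mst_edges = []
    # heap entries: (weight, tree_vertex, fringe_vertex)
    fringe = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(fringe)
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)
        if v in in_tree:
            continue               # stale entry: v joined via a lighter edge
        in_tree.add(v)             # greedy step: closest fringe vertex
        mst_edges.append((u, v, w))
        for x, wx in graph[v].items():
            if x not in in_tree:
                heapq.heappush(fringe, (wx, v, x))
    return mst_edges

g = {'a': {'b': 3, 'd': 7}, 'b': {'a': 3, 'c': 4, 'd': 2},
     'c': {'b': 4, 'd': 5, 'e': 6}, 'd': {'a': 7, 'b': 2, 'c': 5, 'e': 4},
     'e': {'c': 6, 'd': 4}}
print(prim_mst(g, 'a'))  # [('a','b',3), ('b','d',2), ('b','c',4), ('d','e',4)], weight 13
```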

(7)

Example

[Figure: a step-by-step run of Prim’s algorithm on a weighted graph with vertices a, b, c, d and edge weights 1, 2, 3, 4, 6: five snapshots of the same graph, each highlighting the tree after one more vertex has been added.]

(8)

Notes about Prim’s algorithm

- Proof by induction that this construction actually yields an MST (CLRS, Ch. 23.1). The main property is given on the next slide.
- Needs a priority queue for locating the closest fringe vertex. The detailed algorithm can be found in Levitin, p. 310.
- Efficiency:
  - O(n²) for the weight-matrix representation of the graph and an array implementation of the priority queue
  - O(m log n) for the adjacency-list representation of a graph with n vertices and m edges and a min-heap implementation of the priority queue

(9)

The Crucial Property behind Prim’s Algorithm

Claim: Let G = (V, E) be a weighted graph and (X, Y) be a partition of V (called a cut). Suppose e = (x, y) is an edge of E across the cut, where x is in X and y is in Y, and e has the minimum weight among all such crossing edges (called a light edge). Then there is an MST containing e.

[Figure: the cut (X, Y), with the light edge e = (x, y) crossing from x in X to y in Y.]
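
A sketch of the standard exchange argument behind this claim (paraphrased; not the textbook’s exact proof):

```latex
% Exchange argument (sketch).
Let $T$ be any MST of $G$. If $e = (x,y) \in T$, we are done. Otherwise
$T \cup \{e\}$ contains a cycle through $e$; since $x \in X$ and $y \in Y$,
this cycle must cross the cut $(X,Y)$ at some other edge $e'$. Then
$T' = (T \setminus \{e'\}) \cup \{e\}$ is again a spanning tree, and
\[
  w(T') = w(T) - w(e') + w(e) \le w(T),
\]
because $e$ is a light edge, i.e.\ $w(e) \le w(e')$. Hence $T'$ is an MST
that contains $e$.
```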

(10)

Another greedy algorithm for MST: Kruskal’s

- Sort the edges in nondecreasing order of lengths
- “Grow” the tree one edge at a time to produce the MST through a series of expanding forests F1, F2, …, Fn-1
- On each iteration, add the next edge on the sorted list unless this would create a cycle. (If it would, skip the edge.) A code sketch follows this list.
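
A minimal sketch of Kruskal’s algorithm with a simple union-find for the cycle check (vertex labels and the sample edge list are illustrative):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with union-find (path halving).

    `n` is the number of vertices (labeled 0..n-1); `edges` is a list of
    (weight, u, v) tuples. Returns the list of MST edges.
    """
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    mst_edges = []
    for w, u, v in sorted(edges):          # nondecreasing order of weights
        ru, rv = find(u), find(v)
        if ru != rv:                       # different components: no cycle
            parent[ru] = rv                # union the two forests
            mst_edges.append((u, v, w))
        # else: the edge would create a cycle -- skip it
    return mst_edges

edges = [(3, 0, 1), (7, 0, 3), (4, 1, 2), (2, 1, 3), (5, 2, 3), (6, 2, 4), (4, 3, 4)]
print(kruskal_mst(5, edges))  # [(1, 3, 2), (0, 1, 3), (1, 2, 4), (3, 4, 4)]
```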

(11)

Example

[Figure: a step-by-step run of Kruskal’s algorithm on the same graph (vertices a, b, c, d; edge weights 1, 2, 3, 4, 6): six snapshots showing edges accepted in nondecreasing order of weight, with cycle-creating edges skipped.]

(12)

Notes about Kruskal’s algorithm

- The algorithm looks easier than Prim’s but is harder to implement (checking for cycles!)
- Cycle checking: a cycle is created iff the added edge connects vertices in the same connected component
- Union-find algorithms (see Section 9.2)
- Runs in O(m log m) time, with m = |E|. The time is mostly spent on sorting.

(13)

Minimum spanning tree vs. Steiner tree

[Figure: four vertices a, b, c, d connected by edges of weight 1 each, with a minimum spanning tree shown on the left, contrasted with a Steiner tree that routes the connections through an extra point s on the right.]

In general, a Steiner minimal tree (SMT) can be much shorter than a minimum spanning tree (MST), but SMTs are hard to compute.

(14)

Shortest paths – Dijkstra’s algorithm

Single-Source Shortest Paths Problem: Given a weighted connected (directed) graph G, find shortest paths from a source vertex s to each of the other vertices.

Dijkstra’s algorithm: Similar to Prim’s MST algorithm, with a different way of computing numerical labels: among vertices not already in the tree, it finds the vertex u with the smallest sum

    dv + w(v, u)

where
- v is a vertex for which the shortest path has already been found on preceding iterations (such vertices form a tree rooted at s)
- dv is the length of the shortest path from source s to v
- w(v, u) is the length (weight) of the edge from v to u

A code sketch follows this list.
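
A minimal sketch of Dijkstra’s algorithm with a min-heap and lazy deletion; the sample graph is my reconstruction of the example traced on the next slide:

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra's algorithm: finalize vertices in order of distance from s.

    `graph` maps each vertex to a dict {neighbor: nonnegative weight}.
    Returns (dist, prev): shortest distances from s and predecessor links.
    """
    dist = {s: 0}
    prev = {}
    done = set()
    heap = [(0, s)]                # entries: (d_v, v)
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue               # stale entry
        done.add(v)                # d is now the final shortest distance to v
        for u, w in graph[v].items():
            if u not in done and d + w < dist.get(u, float('inf')):
                dist[u] = d + w    # relax edge (v, u)
                prev[u] = v
                heapq.heappush(heap, (dist[u], u))
    return dist, prev

g = {'a': {'b': 3, 'd': 7}, 'b': {'a': 3, 'c': 4, 'd': 2},
     'c': {'b': 4, 'd': 5, 'e': 6}, 'd': {'a': 7, 'b': 2, 'c': 5, 'e': 4},
     'e': {'c': 6, 'd': 4}}
print(dijkstra(g, 'a')[0])  # {'a': 0, 'b': 3, 'd': 5, 'c': 7, 'e': 9}
```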

(15)

Example

[Figure: the example graph, with edges a–b = 3, a–d = 7, b–c = 4, b–d = 2, c–d = 5, c–e = 6, d–e = 4, redrawn at each step of Dijkstra’s algorithm from source a.]

Tree vertices    Remaining vertices
a(-,0)           b(a,3)    c(-,∞)    d(a,7)    e(-,∞)
b(a,3)           c(b,3+4)  d(b,3+2)  e(-,∞)
d(b,5)           c(b,7)    e(d,5+4)
c(b,7)           e(d,9)
e(d,9)

Shortest paths from a: to b (length 3), to d via b (length 5), to c via b (length 7), to e via d (length 9).

(16)

Notes on Dijkstra’s algorithm

- Correctness can be proven by induction on the number of vertices: we prove the invariants that (i) when a vertex is added to the tree, its correct distance has been calculated, and (ii) that distance is at least those of the previously added vertices.
- Doesn’t work for graphs with negative weights (whereas Floyd’s algorithm does, as long as there is no negative cycle). Can you find a counterexample for Dijkstra’s algorithm?
- Applicable to both undirected and directed graphs
- Efficiency:
  - O(|V|²) for graphs represented by a weight matrix and an array implementation of the priority queue
  - O(|E| log |V|) for graphs represented by adjacency lists and a min-heap implementation of the priority queue
- Don’t mix up Dijkstra’s algorithm with Prim’s algorithm! More details of the algorithm are in the text and reference books.

(17)

Coding Problem

Coding: assignment of bit strings to alphabet characters
Codewords: bit strings assigned to the characters of the alphabet

Two types of codes:
- fixed-length encoding (e.g., ASCII)
- variable-length encoding (e.g., Morse code)

Prefix-free codes (or prefix codes): no codeword is a prefix of another codeword. This allows for efficient (online) decoding! E.g., consider the encoded string (msg) 10010110…

E.g., we can code {a, b, c, d} as {00, 01, 10, 11} or {0, 10, 110, 111} or {0, 01, 10, 101}. The first two are prefix-free; the third is not (0 is a prefix of 01, and 10 is a prefix of 101).

Problem: If the frequencies of the character occurrences are known, what is the best binary prefix-free code? The one with the shortest average code length, which represents, on average, how many bits are required to transmit or store a character.

E.g., if P(a) = 0.4, P(b) = 0.3, P(c) = 0.2, P(d) = 0.1, then the average length of code #2 is 1 × 0.4 + 2 × 0.3 + 3 × 0.2 + 3 × 0.1 = 1.9 bits. A small computation sketch follows.
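
A short sketch computing the average code length for the three example codes above:

```python
# Average code length for the example codes over {a, b, c, d},
# with P(a)=0.4, P(b)=0.3, P(c)=0.2, P(d)=0.1.
probs = {'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1}
codes = [
    {'a': '00', 'b': '01', 'c': '10', 'd': '11'},   # code #1: fixed-length
    {'a': '0', 'b': '10', 'c': '110', 'd': '111'},  # code #2: prefix-free
    {'a': '0', 'b': '01', 'c': '10', 'd': '101'},   # code #3: NOT prefix-free
]
for i, code in enumerate(codes, 1):
    avg = sum(probs[ch] * len(cw) for ch, cw in code.items())
    print(f"code #{i}: average length = {avg:.2f} bits")
# Prints 2.00, 1.90, 1.70. Code #3 looks shortest but is not a prefix
# code: e.g. "010" decodes as both (0)(10) and (01)(0), so it is unusable.
```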

(18)

Huffman codes

Any binary tree with edges labeled with 0’s and 1’s yields a prefix-free code for the characters assigned to its leaves.

[Figure: a small code tree whose edges are labeled 0 and 1; reading the labels from the root to the leaves yields the prefix-free code {00, 011, 1}.]

An optimal binary tree minimizing the average length of a codeword can be constructed as follows:

Huffman’s algorithm
- Initialize n one-node trees with the alphabet characters and set the tree weights to their frequencies.
- Repeat the following step n-1 times: join the two binary trees with the smallest weights into one (as left and right subtrees) and make its weight equal to the sum of the weights of the two trees.
- Mark edges leading to the left and right subtrees with 0’s and 1’s, respectively.

A code sketch follows this list.
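
A minimal sketch of Huffman’s algorithm using a min-heap of weighted trees; tie-breaking (and thus the exact codewords) may differ from the textbook figure, but the average code length is the same:

```python
import heapq

def huffman_codes(freqs):
    """Huffman's algorithm: repeatedly merge the two lightest trees.

    `freqs` maps characters to frequencies. Returns {char: codeword}.
    """
    # Heap of (weight, counter, tree); trees are chars or (left, right) pairs.
    # The counter breaks weight ties so trees are never compared directly.
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)     # two smallest weights
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, code):
        if isinstance(tree, tuple):
            walk(tree[0], code + '0')       # left edge labeled 0
            walk(tree[1], code + '1')       # right edge labeled 1
        else:
            codes[tree] = code or '0'       # single-character corner case
    walk(heap[0][2], '')
    return codes

print(huffman_codes({'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}))
# {'C': '00', 'D': '01', 'B': '100', '_': '101', 'A': '11'}
```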

(19)

Example

character    A     B     C     D     _
frequency    0.35  0.1   0.2   0.2   0.15
codeword     11    100   00    01    101

average bits per character: 2 × 0.35 + 3 × 0.1 + 2 × 0.2 + 2 × 0.2 + 3 × 0.15 = 2.25
for fixed-length encoding: 3
compression ratio: (3 - 2.25)/3 × 100% = 25%

[Figure: the sequence of Huffman merges: B (0.1) and _ (0.15) join into a tree of weight 0.25; C (0.2) and D (0.2) into 0.4; the 0.25 tree and A (0.35) into 0.6; finally 0.4 and 0.6 into the root of weight 1.0, with edges labeled 0 (left) and 1 (right).]
