Figure 2.5 The recursion tree for RECURSIVEFIBONACCI(n). Vertices enclosed in dashed circles represent duplicated effort: the same value had been calculated in another vertex in the tree at a higher level. As the tree grows larger, the number of dashed vertices increases exponentially (2^(i-2) at level i), while the number of regular vertices increases linearly (2 per level).
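The duplicated effort in this tree can be made concrete with a short sketch (our own code, not the book's pseudocode; we assume the convention F(0) = 0, F(1) = 1) that counts how many times each value is recomputed:

```python
from collections import Counter

calls = Counter()

def recursive_fibonacci(n):
    calls[n] += 1          # record every vertex labeled n in the recursion tree
    if n <= 1:
        return n
    return recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2)

recursive_fibonacci(10)
# Every value below n - 1 is computed more than once, and the number of
# times each value is recomputed itself grows like the Fibonacci numbers.
for value in sorted(calls, reverse=True):
    print(value, calls[value])
```

For n = 10, the value 8 is computed twice, 7 three times, 6 five times, and so on: exactly the dashed-circle duplication the figure shows.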
scribe its running time, since this is an attribute of the algorithm, and not an attribute of the computer you happen to be using.
Unfortunately, determining how many operations an algorithm will perform is not always easy. We can see that USCHANGE will always perform 17 operations (one for each assignment, subtraction, multiplication, and division), but this is a very simple algorithm. An algorithm like SELECTIONSORT, on the other hand, will perform a different number of operations depending on what it receives as input: it will take less time to sort a 5-element list than it will to sort a 5000-element list. You might be tempted to think that SELECTIONSORT will take 1000 times longer to sort a 5000-element array than it will to sort a 5-element array. But you would be wrong. As we will see, it actually takes on the order of 1000^2 = 1,000,000 times longer, no matter what kind of computer you use. It is typically the case that the larger the input is, the longer the algorithm takes to process it.
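A sketch of why the factor is roughly 1000^2 rather than 1000: a selection sort instrumented to count its element comparisons (our own implementation, written to match the algorithm's standard behavior):

```python
import random

def selection_sort(a):
    # Returns a sorted copy and the number of element comparisons made.
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons

for n in (5, 50, 500):
    _, c = selection_sort([random.random() for _ in range(n)])
    print(n, c)   # c = n(n-1)/2 regardless of the input's order

# Extrapolating: a 5000-element list needs 5000 * 4999 / 2 = 12,497,500
# comparisons, versus 10 comparisons for a 5-element list -- a factor on
# the order of 1000^2, as the text claims.
```

The comparison count n(n-1)/2 depends only on n, which is why the quadratic growth holds on any computer.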
2.7 Fast versus Slow Algorithms

If we know how to compute the number of basic operations that an algorithm performs, then we have a basis to compare it against a different algorithm that solves the same problem. Rather than tediously count every multiplication and addition, we can perform this comparison by gaining a high-level understanding of the growth of each algorithm's operation count as the size of the input increases. To illustrate this, suppose an algorithm A performs 11n^3 operations on an input of size n, and a different algorithm, B, solves the same problem in 99n^2 + 7 operations. Which algorithm, A or B, is faster? Although A may be faster than B for some small n (e.g., for n between 0 and 9), B will become faster with large n (e.g., for all n ≥ 10). Since n^3 is, in some sense, a "faster-growing" function than n^2 with respect to n, the constants 11, 99, and 7 do not affect the competition between the two algorithms for large n (see figure 2.6). We refer to A as a cubic algorithm and to B as a quadratic algorithm, and say that A is less efficient than B because it performs more operations to solve the same problem when n is large. Thus, we will often be somewhat imprecise when we count operations in algorithms; the behavior of algorithms on small inputs does not matter.
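The crossover point is easy to verify directly, using the same two operation counts:

```python
# Operation counts for the two hypothetical algorithms from the text.
def ops_a(n):
    return 11 * n**3          # cubic algorithm A

def ops_b(n):
    return 99 * n**2 + 7      # quadratic algorithm B

for n in range(12):
    faster = "A" if ops_a(n) < ops_b(n) else "B"
    print(n, ops_a(n), ops_b(n), faster)
# A performs fewer operations for n = 0 through 9 (at n = 9 it is a close
# 8019 vs. 8026), but B wins for every n >= 10 and never loses again.
```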
Let us estimate how long BRUTEFORCECHANGE will take on an input instance of M cents, and denominations (c1, c2, . . . , cd). To calculate the total number of operations in the for loop, we can take the approximate number of operations performed in each iteration and multiply this by the total number of iterations. Since there are roughly (M/c1) · (M/c2) · · · (M/cd) iterations, the for loop performs on the order of d · M^d/(c1 · c2 · · · cd) operations, which dwarfs the other operations in the algorithm.
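A back-of-the-envelope sketch of this estimate (the coin sets and the amount M = 76 below are illustrative choices of ours, not from the text; we use M // c + 1 rather than M/c because each coefficient in the loop ranges over 0 through M/c):

```python
from math import prod

def brute_force_change_ops(M, coins):
    # Rough operation count for BRUTEFORCECHANGE: about d operations per
    # iteration, times (M/c1 + 1)(M/c2 + 1)...(M/cd + 1) iterations.
    d = len(coins)
    iterations = prod(M // c + 1 for c in coins)
    return d * iterations

print(brute_force_change_ops(76, (25, 10, 5, 1)))
print(brute_force_change_ops(76, (25, 20, 10, 5, 1)))
# Adding one more denomination c multiplies the count by another factor of
# roughly M/c, which is why the running time is exponential in d.
```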
This type of algorithm is often referred to as an exponential algorithm in contrast to quadratic, cubic, or other polynomial algorithms. The expression for the running time of exponential algorithms includes a term like M^d, where d is a parameter of the problem (i.e., d may deliberately be made arbitrarily large by changing the input to the algorithm), while the running time of a polynomial algorithm is bounded by a term like M^k, where k is a constant not related to the size of any parameters. For example, an algorithm with running time M^1 (linear), M^2 (quadratic), M^3 (cubic), or even M^2005 is polynomial. Of course, an algorithm with running time M^2005 is not very practical, perhaps less so than some exponential algorithms, and much effort in computer science goes into designing faster and faster polynomial algorithms. Since d may be large when the algorithm is called with a long list of denominations [e.g., c = (1, 2, 3, 4, 5, . . . , 100)], we see that BRUTEFORCECHANGE can take a very long time to execute.
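For the closing example, c = (1, 2, . . . , d), the product c1 · c2 · · · cd is d!, so the estimate becomes d · M^d/d!. A small sketch of how it explodes as d grows (the amount M = 100 and the helper name are our illustrative choices):

```python
from math import factorial

def change_ops_estimate(M, d):
    # d * M^d / (1 * 2 * ... * d): the operation estimate for
    # BRUTEFORCECHANGE with denominations (1, 2, ..., d).
    return d * M**d // factorial(d)

M = 100
for d in range(1, 7):
    print(d, change_ops_estimate(M, d))
# Each extra denomination multiplies the estimate by roughly M/d: the
# count is polynomial in M for any fixed d, but exponential in d itself.
```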
Figure 2.6 A comparison of a logarithmic (h(x) = 6000 log x), a quadratic (f(x) = 99x^2 + 7), and a cubic (g(x) = 11x^3) function. After x = 8, both f(x) and g(x) are larger than h(x). After x = 9, g(x) is larger than f(x), even though for values 0 through 9, f(x) is larger than g(x). The functions that we chose here are irrelevant and arbitrary: any three (positive-valued) functions with leading terms of log x, x^2, and x^3, respectively, would exhibit the same basic behavior, though the crossover points might be different.

We have seen that the running time of an algorithm is often related to the size of its input. However, the running time of an algorithm can also vary among inputs of the same size. For example, suppose SELECTIONSORT first checked to see if its input were already sorted. It would take this modified SELECTIONSORT less time to sort an ordered list of 5000 elements than it would to sort an unordered list of 5000 elements. As we see in the next section, when we speak of the running time of an algorithm as a function of input size, we refer to that one input (or set of inputs) of a particular size that the algorithm will take the longest to process. In the modified SELECTIONSORT, that input would be any not-already-sorted list.
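A sketch of the modification described above (our own code, counting comparisons so the best case and the worst case can be contrasted; 1000 elements are used here to keep the run short, but the 5000-element story is the same):

```python
def modified_selection_sort(a):
    # Selection sort with an up-front sortedness check; returns the sorted
    # list and the number of element comparisons performed.
    a = list(a)
    comparisons = 0
    sorted_already = True
    for i in range(len(a) - 1):        # the extra linear-time check
        comparisons += 1
        if a[i] > a[i + 1]:
            sorted_already = False
            break
    if sorted_already:
        return a, comparisons          # best case: only n - 1 comparisons
    for i in range(len(a) - 1):        # otherwise, plain selection sort
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons

print(modified_selection_sort(list(range(1000)))[1])      # already sorted
print(modified_selection_sort(list(range(1000, 0, -1)))[1])  # reversed input
```

Only the already-sorted input escapes with linear work; any not-already-sorted list still pays the full quadratic n(n-1)/2 comparisons, which is why the worst case defines the running time.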