CHAPTER 6

Implementation Issues

6.2. Exponentiation Algorithms

6.2.1. Binary Exponentiation: To stop this explosion in the size of any intermediate results we use the fact that we are working modulo n. But even here one needs to be careful; a naive algorithm would compute the above example by computing

x = 123,
x^2 = x · x (mod 511) = 310,
x^3 = x · x^2 (mod 511) = 316,
x^4 = x · x^3 (mod 511) = 32,
x^5 = x · x^4 (mod 511) = 359.

This requires four modular multiplications, which seems fine for our small example. But a general exponentiation by a 2048-bit exponent using this method would require around 2^2048 modular multiplications. If each such multiplication could be done in under one millionth of a second, we would still require around 10^600 years to perform this operation.

However, it is easy to see that, even in our small example, we can reduce the number of required multiplications by being a little more clever:

x = 123,
x^2 = x · x (mod 511) = 310,
x^4 = x^2 · x^2 (mod 511) = 32,
x^5 = x · x^4 (mod 511) = 359.

This requires only three modular multiplications rather than the previous four. To understand why, notice that the exponent 5 has binary representation 0b101 and so

• has bit length t = 3,
• has Hamming weight h = 2.

In the above example we required 1 = (h − 1) general multiplications and 2 = (t − 1) squarings.

This fact holds in general, in that a modular exponentiation can be performed using

• (h − 1) multiplications,
• (t − 1) squarings,

where t is the bit length of the exponent and h is the Hamming weight. The average Hamming weight of an integer is t/2, so the number of multiplications and squarings is on average

(t − 1) + (t/2 − 1) = 3t/2 − 2.

For a 2048-bit modulus this means that the number of modular multiplications needed to perform exponentiation by a 2048-bit exponent is at most around 4096 and on average around 3072.
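The cost formula above can be checked directly; here is a small Python sketch (the helper name binary_exp_cost is ours) that counts the squarings and multiplications for a given exponent:

```python
def binary_exp_cost(d):
    """Cost of binary exponentiation on exponent d:
    (t - 1) squarings and (h - 1) multiplications."""
    t = d.bit_length()         # bit length t
    h = bin(d).count("1")      # Hamming weight h
    return t - 1, h - 1        # (squarings, multiplications)

# The exponent 5 = 0b101 from the worked example: 2 squarings, 1 multiplication.
print(binary_exp_cost(5))     # → (2, 1)
```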

Algorithm 6.1: Binary exponentiation: Right-to-left variant
y ← 1.
while d ≠ 0 do
    if (d mod 2) ≠ 0 then
        y ← (y · x) mod n.
        d ← d − 1.
    d ← d/2.
    x ← (x · x) mod n.

The method used to achieve this improvement in performance is called the binary exponentiation method. This is because it works by reading each bit of the binary representation of the exponent in turn, starting with the least significant bit and working up to the most significant bit. Algorithm 6.1 explains the method by computing

y = x^d (mod n).

The above binary exponentiation algorithm has a number of different names: some authors call it the square-and-multiply algorithm, since it proceeds by a sequence of squarings and multiplications; other authors call it the Indian exponentiation algorithm. Algorithm 6.1 is called a right-to-left exponentiation algorithm since it processes the bits of d from the least significant bit (the right one) up to the most significant bit (the left one).
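As a concrete sketch, Algorithm 6.1 translates almost line for line into Python (the function name is our own; in practice Python's built-in pow(x, d, n) performs the same computation):

```python
def binary_exp_rtl(x, d, n):
    """Right-to-left binary exponentiation: returns x^d (mod n)."""
    y = 1
    while d != 0:
        if d % 2 != 0:         # least significant bit of d is 1
            y = (y * x) % n
            d = d - 1
        d = d // 2
        x = (x * x) % n        # square x for the next bit
    return y

# The worked example: 123^5 (mod 511).
print(binary_exp_rtl(123, 5, 511))   # → 359
```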

6.2.2. Window Exponentiation Methods: Most of the time it is faster to perform a squaring operation than a general multiplication. Hence to reduce time even more one tries to reduce the total number of modular multiplications even further. This is done using window techniques, which trade off precomputation (i.e. storage) against the time in the main loop.

To understand window methods better we first examine the binary exponentiation method again. But this time, instead of a right-to-left variant, we process the exponent from the most significant bit first, thus producing a left-to-right binary exponentiation algorithm; see Algorithm 6.2. Again we assume we wish to compute

y = x^d (mod n).

We first give a notation for the binary representation of the exponent

d = Σ_{i=0}^{t} d_i · 2^i,

where d_i ∈ {0, 1}. The algorithm processes a single bit of the exponent on every iteration of the loop. Again the number of squarings is equal to t and the expected number of multiplications is equal to t/2.

Algorithm 6.2: Binary exponentiation: Left-to-right variant
y ← 1.
for i = t downto 0 do
    y ← (y · y) mod n.
    if d_i = 1 then y ← (y · x) mod n.
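Algorithm 6.2 can be sketched the same way; here the bits d_i are read with shifts, from the most significant downwards (the function name is again our own):

```python
def binary_exp_ltr(x, d, n):
    """Left-to-right binary exponentiation: returns x^d (mod n)."""
    y = 1
    for i in range(d.bit_length() - 1, -1, -1):
        y = (y * y) % n                 # always square
        if (d >> i) & 1:                # multiply when bit d_i is set
            y = (y * x) % n
    return y

print(binary_exp_ltr(123, 5, 511))      # → 359
```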

In a window method we process w bits of the exponent at a time, as in Algorithm 6.3. We first precompute a table

x_i = x^i (mod n) for i = 0, ..., 2^w − 1.

Then we write our exponent out, but this time taking w bits at a time,

d = Σ_{i=0}^{t/w} d_i · 2^{i·w}, where d_i ∈ {0, 1, 2, ..., 2^w − 1}.

It is perhaps easier to illustrate this with an example. Suppose we wish to compute

y = x^215 (mod n)

with a window width of w = 3. We compute the d_i as

215 = 3 · 2^6 + 2 · 2^3 + 7.


Algorithm 6.3: Window exponentiation method
y ← 1.
for i = t/w downto 0 do
    for j = 0 to w − 1 do
        y ← (y · y) mod n.
    j ← d_i.
    y ← (y · x_j) mod n.

Hence, our iteration to compute x^215 (mod n) computes in order

y = 1,
y = y · x_3 = x^3,
y = y^8 = x^24,
y = y · x_2 = x^26,
y = y^8 = x^208,
y = y · x_7 = x^215.
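A minimal Python sketch of Algorithm 6.3 (the function name is illustrative): it precomputes the table of powers x_i and then consumes the exponent w bits at a time, most significant window first:

```python
def window_exp(x, d, n, w=3):
    """Window (2^w-ary) exponentiation: returns x^d (mod n)."""
    # Precompute x_i = x^i mod n for i = 0, ..., 2^w - 1.
    table = [1]
    for _ in range((1 << w) - 1):
        table.append((table[-1] * x) % n)
    # Split d into w-bit windows d_i, least significant first.
    windows = []
    while d > 0:
        windows.append(d & ((1 << w) - 1))
        d >>= w
    y = 1
    for di in reversed(windows):        # most significant window first
        for _ in range(w):
            y = (y * y) % n             # w squarings per window
        y = (y * table[di]) % n         # one multiplication by x_{d_i}
    return y

# The example from the text: x^215 with w = 3, windows 3, 2, 7.
print(window_exp(123, 215, 511) == pow(123, 215, 511))   # → True
```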

6.2.3. Sliding Window Method: With a window method as above, we still perform t squarings but the number of multiplications reduces to t/w on average. One can do even better by adopting a sliding window method, where we now encode our exponent as

d = Σ_{i=0}^{l} d_i · 2^{e_i},

where d_i ∈ {1, 3, 5, ..., 2^w − 1} and e_{i+1} − e_i ≥ w. By choosing only odd values for d_i and having a variable window width we achieve both decreased storage for the precomputed values and improved efficiency. After precomputing x_i = x^i (mod n) for i = 1, 3, 5, ..., 2^w − 1, we execute Algorithm 6.4.

Algorithm 6.4: Sliding window exponentiation
y ← 1.
for i = l downto 0 do
    for j = 0 to e_{i+1} − e_i − 1 do
        y ← (y · y) mod n.
    j ← d_i.
    y ← (y · x_j) mod n.
for j = 0 to e_0 − 1 do
    y ← (y · y) mod n.

(Here we adopt the convention e_{l+1} = e_l, so that no squarings are performed on the first iteration of the outer loop.)

The number of squarings remains at t, but now the number of multiplications reduces to l, which is about t/(w + 1) on average. In our example of computing y = x^215 (mod n) we have

215 = 2^7 + 5 · 2^4 + 7,

and so we execute the steps

y = 1,
y = y · x_1 = x,
y = y^8 = x^8,
y = y · x_5 = x^13,
y = y^16 = x^208,
y = y · x_7 = x^215.
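The recoding and main loop of Algorithm 6.4 can be sketched as follows (the helper names are ours; note the table holds only odd powers):

```python
def sliding_window_exp(x, d, n, w=3):
    """Sliding window exponentiation: returns x^d (mod n)."""
    # Precompute x_i = x^i mod n for odd i = 1, 3, ..., 2^w - 1.
    x2 = (x * x) % n
    table = {1: x % n}
    for i in range(3, 1 << w, 2):
        table[i] = (table[i - 2] * x2) % n
    # Recode d as a sum of d_i * 2^{e_i} with d_i odd and d_i < 2^w.
    digits = []                         # pairs (d_i, e_i), least significant first
    e = 0
    while d > 0:
        if d % 2 == 0:
            d //= 2
            e += 1
        else:
            di = d & ((1 << w) - 1)     # w bits starting at an odd position
            digits.append((di, e))
            d >>= w
            e += w
    if not digits:
        return 1 % n
    y = 1
    prev_e = digits[-1][1]              # e_l: no squarings before the first window
    for di, ei in reversed(digits):
        for _ in range(prev_e - ei):
            y = (y * y) % n             # e_{i+1} - e_i squarings
        y = (y * table[di]) % n
        prev_e = ei
    for _ in range(prev_e):             # final e_0 squarings
        y = (y * y) % n
    return y

# 215 recodes as 2^7 + 5*2^4 + 7, exactly as in the text.
print(sliding_window_exp(123, 215, 511) == pow(123, 215, 511))   # → True
```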

6.2.4. Generalizations to Any Group: Notice that all of the above window algorithms apply to exponentiation in any abelian group and not just the integers modulo n. Hence, we can use these algorithms to compute α^d in a finite field or to compute [d]P on an elliptic curve; in the latter case we call this point multiplication rather than exponentiation.

An advantage with elliptic curve variants is that negation comes for free, in that given P it is easy to compute −P. This leads to the use of signed binary and signed window methods. We only present the signed window method. We precompute

P_i = [i]P for i = 1, 3, 5, ..., 2^{w−1} − 1,

which requires only half the storage of the equivalent sliding window method, or one quarter of the storage of the equivalent standard window method. We now write our multiplicand d as

d = Σ_{i=0}^{l} d_i · 2^{e_i},

where d_i ∈ {±1, ±3, ±5, ..., ±(2^{w−1} − 1)}. The signed sliding window method for elliptic curves is then given by Algorithm 6.5.

Algorithm 6.5: Signed sliding window method
Q ← 0.
for i = l downto 0 do
    for j = 0 to e_{i+1} − e_i − 1 do
        Q ← [2]Q.
    j ← d_i.
    if j > 0 then
        Q ← Q + P_j.
    else
        Q ← Q − P_{−j}.
for j = 0 to e_0 − 1 do
    Q ← [2]Q.
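Since full elliptic-curve arithmetic is beyond a short sketch, the following Python illustration runs the signed sliding window method in the additive group Z_m, which shares the key property that negation is free; the recoding into signed odd digits d_i is the part specific to the signed method (all names are ours):

```python
def signed_digits(d, w):
    """Recode d as a sum of d_i * 2^{e_i}, d_i odd with |d_i| <= 2^{w-1} - 1."""
    digits, e = [], 0
    while d > 0:
        if d % 2 == 0:
            d //= 2
            e += 1
        else:
            di = d % (1 << w)
            if di >= (1 << (w - 1)):
                di -= (1 << w)          # pick the negative representative
            digits.append((di, e))
            d = (d - di) >> w
            e += w
    return digits

def signed_sliding_window_mul(d, P, m, w=4):
    """Signed sliding window computation of [d]P in the toy group (Z_m, +)."""
    # Precompute P_i = [i]P for odd i = 1, 3, ..., 2^{w-1} - 1.
    table = {1: P % m}
    P2 = (2 * P) % m
    for i in range(3, 1 << (w - 1), 2):
        table[i] = (table[i - 2] + P2) % m
    digits = signed_digits(d, w)
    if not digits:
        return 0
    Q = 0
    prev_e = digits[-1][1]
    for di, ei in reversed(digits):
        for _ in range(prev_e - ei):
            Q = (2 * Q) % m             # the doubling [2]Q
        if di > 0:
            Q = (Q + table[di]) % m
        else:
            Q = (Q - table[-di]) % m    # negation is free, as on a curve
        prev_e = ei
    for _ in range(prev_e):             # final e_0 doublings
        Q = (2 * Q) % m
    return Q

# 215 recodes as 2^8 - 3*2^4 + 7 with w = 4.
print(signed_sliding_window_mul(215, 1, 10**6))   # → 215
```

In a real elliptic-curve implementation the modular additions and doublings above would be replaced by point addition and point doubling, with the same control flow.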