
Introduction to Public-Key Cryptography

1.2 Complexity

The aim of complexity theory is to define formal models for the processors and algorithms that we use in our everyday computers and to provide a classification of the algorithms with respect to their memory or time consumption.

Surprisingly, all the complex computations carried out with a computer can be simulated by an automaton given by a very simple mathematical structure called a Turing machine. A Turing machine is defined by a finite set of states, an initial state, a finite set of symbols, and a transition function. A Turing machine proceeds step by step following the rules given by the transition function and can write symbols on a memory string. It is then easy to define the execution time of an algorithm as the number of steps between its beginning and end, and the memory consumption as the number of symbols written on the memory string. For convenience, in the course of this book we will use a slightly stronger model of computation, called a Random Access Machine, because it is very close to the behavior of our everyday microprocessors. Determining the execution time of an algorithm then boils down to counting the number of basic operations on machine words needed for its execution.

For more details, the reader should refer to [PAP 1994].

The security of protocols is often linked to the assumed hardness of some problems. In the theory of computation a problem is a set of finite-length questions (strings) with associated finite-length answers (strings). In our context the input will usually consist of mathematical objects like integers or group elements encoded as a string. The problems can be loosely classified into problems to compute something, e.g., a further group element, and problems that ask for a yes or no answer.

Definition 1.1 A problem is called a decision problem if the problem is to decide whether a statement about an input is true or false.

A problem is called a computation problem if it asks to compute an output, possibly more elaborate than true or false, on a certain set of inputs.

One can formulate a computation problem from a decision problem. Many protocols base their security on a decision problem rather than on a computation problem.

Example 1.2 The problem to compute the square root of 16 is a computation problem, whereas the question whether 4 is a square root of 16 is a decision problem. Here, the decision problem can be answered by just computing the square 4^2 = 16 and comparing the answers.

A further decision problem in this context is to answer whether 16 is a square at all. Clearly, this decision problem can be answered by solving the above computation problem.
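To make the distinction concrete, the following small Python sketch (ours, not from the text; the function names are chosen purely for illustration) contrasts the computation problem of Example 1.2 with the two decision problems derived from it.

    from math import isqrt

    def compute_square_root(n):
        """Computation problem: return an integer square root of n, or None if none exists."""
        r = isqrt(n)                 # integer square root (Python 3.8+)
        return r if r * r == n else None

    def is_square_root(x, n):
        """Decision problem: is x a square root of n?"""
        return x * x == n            # one squaring and one comparison suffice

    def is_square(n):
        """Decision problem: is n a square at all?"""
        return compute_square_root(n) is not None   # answered via the computation problem

    print(compute_square_root(16))   # 4
    print(is_square_root(4, 16))     # True
    print(is_square(16))             # True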

Example 1.3 A second important example that we will discuss in the next section is the problem to decide whether a certain integer m is prime. This is related to the computation problem of finding the factorization of m.
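As a toy illustration (ours; plain trial division, not the primality and factoring methods discussed later in the book), note that the decision "is m prime?" may stop at the first divisor found, whereas the computation problem has to produce the complete factorization.

    def is_prime(m):
        """Decision problem: is m prime? (trial division)"""
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False         # a single divisor already settles the decision
            d += 1
        return True

    def factorize(m):
        """Computation problem: return the prime factorization of m as a list of primes."""
        factors, d = [], 2
        while d * d <= m:
            while m % d == 0:
                factors.append(d)
                m //= d
            d += 1
        if m > 1:
            factors.append(m)
        return factors

    print(is_prime(91))      # False
    print(factorize(91))     # [7, 13]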

Given a model of computation, one can attach a certain function f to an algorithm that bounds a certain resource used for the computations given the length of the input, called the complexity parameter. If the resource considered is the execution time (resp. the memory consumption) of the algorithm, f measures its time complexity (resp. space complexity). In fact, in order to state the complexity independently of the specific processor used, it is convenient to express the cost of an algorithm only "up to a constant factor." In other words, what is given is not the exact operation count as a function of the input size, but the growth rate of this count.

The schoolbook multiplication of n-digit integers, for example, is an "n^2 algorithm." By this it is understood that, in order to multiply two n-digit integers, no more than c n^2 single-digit multiplications are necessary, for some real constant c, but we are not interested in the value of c. The "big-O" notation is one way of formalizing this "sloppiness," as [GAGE 1999] put it.
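As a small numerical check (a sketch of ours, not taken from [GAGE 1999]; digits are stored as little-endian lists), one can count the single-digit multiplications performed by the schoolbook method: for two n-digit integers the count is exactly n^2, so the bound c n^2 holds with c = 1.

    def schoolbook_multiply(a_digits, b_digits, base=10):
        """Schoolbook multiplication of little-endian digit lists; also counts digit multiplications."""
        n, m = len(a_digits), len(b_digits)
        result = [0] * (n + m)
        digit_mults = 0
        for i, a in enumerate(a_digits):
            carry = 0
            for j, b in enumerate(b_digits):
                digit_mults += 1                 # one single-digit multiplication
                t = result[i + j] + a * b + carry
                result[i + j] = t % base
                carry = t // base
            result[i + m] += carry
        return result, digit_mults

    # 1234 * 5678, two 4-digit integers: exactly 4^2 = 16 single-digit multiplications.
    digits, count = schoolbook_multiply([4, 3, 2, 1], [8, 7, 6, 5])
    print(int("".join(map(str, reversed(digits)))), count)   # 7006652 16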

Definition 1.4 Let f and g be two real functions of s variables. The function g(N_1, ..., N_s) is of order f(N_1, ..., N_s), denoted by O(f(N_1, ..., N_s)), if for a positive constant c one has

|g(N_1, ..., N_s)| ≤ c f(N_1, ..., N_s),

with N_i > N for some constant N. Sometimes a finite set of values of the tuples (N_1, ..., N_s) is excluded, for example those for which the functions f and g have no meaning or are not defined.

In addition to this "big-O" notation one needs the "small-o" notation.

The function g(N_1, ..., N_s) is of order o(f(N_1, ..., N_s)) if one has

lim_{N_1, ..., N_s → ∞} g(N_1, ..., N_s) / f(N_1, ..., N_s) = 0.

Finally we write f(n) = Õ(g(n)) as a shorthand for f(n) = O(g(n) lg^k g(n)) for some k.

Note that we denote by lg the logarithm to base 2 and by ln the natural logarithm. As these expressions differ only by constants, the big-O expressions always contain the binary logarithm. In case other bases are needed we use log_a b to denote the logarithm of b to base a. This must not be confused with the discrete logarithm introduced in Section 1.5, but the meaning should be clear from the context.


Example 1.5 Consider g(N) = 10N^2 + 30N + 5000. It is of order O(N^2), as for c = 5040 one has g(N) ≤ cN^2 for all N ≥ 1. We may write g(N) = O(N^2). In addition, g(N) is o(N^3).
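A quick numerical check of this example (a small sketch of ours) confirms both claims:

    # Example 1.5, checked numerically: g(N) <= 5040 * N^2 and g(N)/N^3 -> 0.
    g = lambda N: 10 * N**2 + 30 * N + 5000
    assert all(g(N) <= 5040 * N**2 for N in range(1, 100000))
    print([round(g(N) / N**3, 6) for N in (10, 100, 1000, 10000)])   # ratios tend to 0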

Example 1.6 Consider the task of computing the n-fold of some integer m. Instead of computing n × m = m + m + ··· + m (n times) we can do much better, reducing the complexity of scalar multiplication from O(n) to O(lg n). We make the following observation: we have 4m = 2(2m), and a doubling takes about the same time as an addition of two distinct elements. Hence, the number of operations is reduced from 3 to 2. This idea can be extended to other scalars: 5m = 2(2m) + m, needing 3 operations instead of 4. In more generality let n = Σ_{i=0}^{l−1} n_i 2^i, n_i ∈ {0, 1}, be the binary expansion of n with l − 1 = ⌊lg n⌋. Then

n × m = 2(2(··· 2(2(2m + n_{l−2} m) + n_{l−3} m) + ··· + n_2 m) + n_1 m) + n_0 m.

This way of computing n × m needs l − 1 doublings and Σ_{i=0}^{l−1} n_i ≤ l additions. Hence, the algorithm has complexity O(lg n). Furthermore, we can bound the constant c from above by 2.

Algorithms achieving a smaller constant are treated in Chapter 9 together with a general study of scalar multiplication.
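The binary method of Example 1.6 translates directly into a few lines of code. The following Python sketch (ours; a left-to-right double-and-add on integers, with names chosen for illustration) handles the leading bit n_{l−1} = 1 separately and then performs one doubling per remaining bit, plus one addition whenever that bit is set, exactly as in the nested formula above.

    def scalar_multiply(n, m):
        """Compute n * m with l - 1 doublings and at most l - 1 additions (binary method)."""
        assert n > 0
        bits = bin(n)[2:]                # binary expansion of n, leading bit first (always 1)
        result = m                       # n_{l-1} = 1, so start with m
        for bit in bits[1:]:
            result = result + result     # one doubling per remaining bit
            if bit == "1":
                result = result + m      # one addition when the bit is set
        return result

    print(scalar_multiply(5, 7))         # 35, computed as 2(2*7) + 7: two doublings, one addition

With the integer additions replaced by additions in an arbitrary group, the same loop becomes the generic scalar multiplication whose refinements are studied in Chapter 9.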

An algorithm has running time exponential in N if its running time can be bounded from above and below by exp(f(N)) and exp(g(N)) for some polynomials f, g. In particular, its running time is of order O(exp(f(N))). Its running time is polynomial in N if it is of order O(f(N)) for some polynomial f. Algorithms belonging to the first category are computationally hard, those of the second are easy. Note that the involved constants can imply that for a certain chosen small N an exponential-time algorithm may take less time than a polynomial-time one. However, the growth of N needed to achieve a certain increase in the running time is smaller in the case of exponential running time.
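A toy comparison (ours, with arbitrarily chosen bounds 100 N^3 and 2^N) illustrates the remark about small N:

    # Polynomial bound 100 * N^3 versus exponential bound 2^N: the exponential one is
    # smaller for small N and only overtakes the polynomial one around N = 20.
    for N in (5, 10, 19, 20, 30):
        print(f"N = {N:2d}:  100*N^3 = {100 * N**3:>10d}   2^N = {2**N:>12d}")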

Definition 1.7 For the complexity of algorithms depending on N we define the shorthand

L_N(α, c) := exp((c + o(1)) (ln N)^α (ln ln N)^{1−α})

with 0 ≤ α ≤ 1 and c > 0. The o(1) refers to the asymptotic behavior of N. If the second parameter is omitted, it is understood that it equals 1/2.

The parameter α is the more important one. Depending on it, L_N(α, c) interpolates between polynomial complexity for α = 0 and exponential complexity for α = 1. For α < 1 the complexity is said to be subexponential.
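To get a feeling for this interpolation, here is a small Python sketch (ours) that evaluates the base-2 logarithm of L_N(α, c) for a 1024-bit N, with c = 1 and the o(1) term ignored: α = 0 gives about 2^9.5 (polynomial behaviour in the bit length), α = 1 gives the full 2^1024 (exponential), and α = 1/2 lies in between at roughly 2^98.

    from math import log

    def log2_L(N, alpha, c=1.0):
        """Return log_2 of L_N(alpha, c) = exp(c * (ln N)^alpha * (ln ln N)^(1 - alpha)), o(1) ignored."""
        lnN = log(N)
        return c * lnN**alpha * log(lnN)**(1 - alpha) / log(2)

    N = 2**1024                          # an N of cryptographic size
    for alpha in (0.0, 1/3, 1/2, 1.0):
        print(f"alpha = {alpha:.3f}:  L_N is about 2^{log2_L(N, alpha):.1f}")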

One might expect a cryptographic primitive to be efficient while at the same time difficult to break. This is why it is important to classify the hardness of a problem, and to find instances of hard problems. Note that for cryptographic purposes we need problems that are hard on average, i.e., it should be rather easy to construct really hard instances of a given problem. (The classification of problems into P and not P must therefore be considered with care, keeping in mind that a particular problem in NP can be easy to solve in most cases that can be constructed in practice, and there need to be only some hard instances for the problem itself to be in NP.)

In practice, we often measure the hardness of a problem by the complexity of the best known algorithm to solve it. The complexity of an algorithm solving a particular problem can only be an upper bound for the complexity of solving the problem itself; hence, security is always only "to our best knowledge." For some problems it is possible to also give lower bounds showing that any algorithm needs at least a certain number of steps. It is not our purpose to give a detailed treatment of complexity here. The curious reader will find a broader and deeper discussion in [GRKN+ 1994, Chapter 9], [SHP 2003], and [BRBR 1996, Chapter 3].