
*I would like to thank Shu-Heng Chen and two anonymous referees for their invaluable help, and the Evangelisches Studienwerk Haus Villigst for financial support.

E-mail address: [email protected] (T. Riechmann).

Some frequently cited papers are Andreoni and Miller (1995), Arifovic (1994, 1995, 1996), Axelrod (1987), Birchenhall (1995), and Bullard and Duffy (1998).

Journal of Economic Dynamics & Control 25 (2001) 1019–1037

Genetic algorithm learning and evolutionary games

Thomas Riechmann*

Universität Hannover, Institut für Volkswirtschaftslehre, Königsworther Platz 1, 30167 Hannover, Germany

Accepted 12 June 2000

Abstract

This paper links the theory of genetic algorithm (GA) learning to evolutionary game theory. It is shown that economic learning via genetic algorithms can be described as a specific form of an evolutionary game. It will be pointed out that GA learning results in a series of near-Nash equilibria which during the learning process build up to finally approach a neighborhood of an evolutionarily stable state. In order to characterize this kind of dynamics, a concept of evolutionary superiority and evolutionary stability of genetic populations is developed, which allows for a comprehensive analysis of the evolutionary dynamics of the standard GA learning processes. © 2001 Elsevier Science B.V. All rights reserved.

JEL classification: C63; D73; D83

Keywords: Learning; Genetic algorithms; Evolutionary games

1. Introduction

Genetic algorithms (GAs) have frequently been used in economics to characterize a well-defined form of social learning. They have been applied to mainstream economics problems and mathematically analyzed as to their specific dynamic and stochastic properties (Dawid, 1994; Riechmann, 1999). But, although widely seen as conducting a rather evolutionary economic line of thought, up to now there is no piece of work explicitly focusing on what it is that makes GA learning an evolutionary kind of behavior. At the same time, although there is a number of papers employing GA learning in a game theoretic context, there is still a lack of an explicit comparison of the similarities and differences between GA learning and game theory. This paper fills both gaps.

The reader is presumed to have a basic understanding of genetic algorithms, which can be gained from e.g. Goldberg (1989) or Mitchell (1996).

In this paper, it is shown that genetic algorithm learning is an evolutionary process and even more than this: It is an evolutionary game. As will be demonstrated, these conceptual clarifications are of considerable help in explaining the economic dynamics of the GA learning process.

The paper starts with a brief explanation of the genetic algorithm in focus. In the following three sections, three propositions will be given that will be needed in order to carry out a precise analysis of the GA learning process. These propositions are (a) every GA is a dynamic game; (b) every GA is an evolutionary game; and (c) in GA learning processes, populations tend to converge towards a Nash equilibrium. In Section 6, a concept of evolutionary superiority and evolutionary stability is developed, which serves to apply a weak ordering on the space of genetic populations. After this, GA learning can be characterized by means of evolutionary concepts, which is comprehensively done in the final section.

2. The canonical genetic algorithm

As there is a growing number of variants of genetic algorithms in economic research, this paper will mainly deal with the most basic GA, the so-called canonical GA, which is described in detail by Goldberg (1989). As genetic algorithms have been well introduced into economic research, this paper will not explicitly review the specific structure and working principles of GAs.


This means that GA models of e.g. the travelling salesman problem, which surely have an economic subject, are nevertheless not 'economic' GAs in the above meaning. For more information on state dependency see Dawid (1999).

Sometimes, these 'strategies' are so myopic that a game theorist would rather call them 'actions'.

It is important to clarify the following point: a genetic individual is not interpreted as an economic agent, but as an economic strategy used by an economic agent. This interpretation allows for several agents employing the same strategy.

Note that this function of the market has already been described by Hayek (1978).

Definition 1 (Economic genetic algorithm). An economic genetic algorithm is a genetic algorithm with a state-dependent fitness function.

This means that in economic GAs the fitness of an economic agent does not only depend on her own strategy, but also on the strategies of all other agents involved in the model.

The genetic algorithms in focus are algorithms modeling processes of social learning via interaction within a single population of economic agents. The above sentence contains an implicit definition of social learning: Social learning means all kinds of processes where agents learn from one another. Examples for social learning are learning by imitation or learning by communication. Learning by experiment, on the contrary, is no form of social learning. It is a form of isolated individual learning.

The canonical GA is a stochastic process which repeatedly turns one population of bit strings into another. These bit strings are called genetic individuals. In economic research they normally stand for some economic strategy or routine in the sense of Nelson and Winter (1982). These routines are said to be used by economic agents.
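The paper leaves the concrete encoding open, so the following is only a minimal sketch of how a bit string can stand for an economic routine: the length L, the bound q_max and the evenly spaced grid decoding are illustrative assumptions, not taken from the text.

```python
# Hypothetical decoding of a genetic individual (a bit string) into an
# economic strategy, here a supply quantity on an evenly spaced grid.
L = 8           # bit-string length (assumption)
q_max = 25.5    # upper bound of the strategy space (assumption)

def decode(bits):
    """Map a bit string (tuple of 0s and 1s) to a quantity in [0, q_max]."""
    integer = int("".join(map(str, bits)), 2)
    return q_max * integer / (2 ** L - 1)

# Several agents may well use the same routine (strategy), cf. the text:
population = [(0, 0, 0, 0, 1, 1, 0, 1)] * 4
```

With L bits, each genetic individual represents one of 2^L distinct strategy values.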


Fig. 1. Structure of the canonical genetic algorithm.

The first and most famous paper of this type is Axelrod (1987); another is Marks (1992).

The selection operator of the canonical GA (often called roulette-wheel selection) does so by applying a biased stochastic process: Selection for the next generation is done by repeatedly drawing, with replacement, strategies from the pool of the old population to be reproduced into the next one. The chance of each strategy to be drawn is equal to its relative fitness, which is the ratio of its market success to the sum of the market success of all strategies in the population. Thus, the number of different strategies within a population is reduced again. Fig. 1 shows an outline of the genetic algorithm described above.
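As a sketch (not the paper's own code), roulette-wheel selection can be written in a few lines; the strategy labels and market successes below are hypothetical:

```python
import random

def roulette_wheel(population, fitness):
    """Draw the next generation with replacement; each strategy's chance
    of being drawn equals its relative fitness, i.e. its share of the
    population's total market success."""
    total = sum(fitness[s] for s in population)
    weights = [fitness[s] / total for s in population]
    return random.choices(population, weights=weights, k=len(population))

old = ["A", "A", "B", "C"]              # pool of old strategies
fit = {"A": 4.0, "B": 1.0, "C": 1.0}    # hypothetical market successes
new = roulette_wheel(old, fit)          # biased draw, population size preserved
```

With these numbers each copy of A is drawn with probability 0.4, so A tends to crowd out B and C over repeated rounds, which is exactly the reduction of diversity described above.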

3. Economic genetic algorithms as dynamic games

Genetic algorithms have been applied to analyze learning in games before. There is a number of papers which use a genetic algorithm to explicitly formulate agents' behavior in economic games. This section shows that in fact every economic genetic algorithm, even if it does not explicitly use a game theoretic setting, is a game.


A model of this type can be found in Arifovic (1994).

In a game, the payoff of each player depends not only on her own actions but also on the actions of all other players involved in the same situation. In economic genetic algorithms, all agents that are members of the same population are involved within the same situation. Moreover, in models of economic GA learning (see Definition 1 above), the fitness (alias economic success, alias game theoretic payoff) of an economic agent depends on two things: (a) her own strategy and (b) the strategies of all other members of the same population. In other words: The fitness of an economic agent depends on the state of her population. In GA-learning lingo this reads: Agents' fitness is state dependent.

Proposition 2 (GA games). Every simple, one-population, economic genetic algorithm is a dynamic game.

Note that it is state dependency which is the common characteristic of both game theoretic situations and economic GA learning models. More than this, the game is repeated in every round of the genetic algorithm, so that every individual is given the chance to alter and hopefully improve her strategy. Thus, it can be concluded that in fact every simple, one-population, economic genetic algorithm is a dynamic game.

As an illustration, imagine a cobweb model of the supply side of a market. In every period of time, each agent represents a firm which faces the problem of maximizing its profit by choosing the appropriate quantity of supply. Each firm's fitness, which is the same as one period's profit, is given by

fitness = agent's quantity · (price − unit costs). (1)

The quantity the agent supplies reflects her own strategy, whereas the ex-post market price reflects the state of the whole population: given total demand, it reflects aggregate supply, i.e. the sum of all individual supply strategies. As total supply has an important influence on each agent's fitness, the cobweb model turns out to be a model of state-dependent fitness, which is the same as an economic game.
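Eq. (1) can be made concrete under an assumed linear inverse demand p = a − b·Q; the demand form and all numbers below are illustrative assumptions, not taken from the paper:

```python
A, B, UNIT_COST = 20.0, 1.0, 2.0   # hypothetical demand and cost parameters

def fitness(quantities):
    """Eq. (1): each firm's fitness is quantity * (price - unit costs),
    where the ex-post price depends on aggregate supply, i.e. on the
    state of the whole population."""
    price = max(A - B * sum(quantities), 0.0)
    return [q * (price - UNIT_COST) for q in quantities]

# The same strategy (q = 3) earns different fitness in different
# populations: fitness is state dependent.
low_supply = fitness([3.0, 2.0, 4.0])    # aggregate supply 9, price 11
high_supply = fitness([3.0, 8.0, 4.0])   # aggregate supply 15, price 5
```

The first firm plays q = 3 in both cases, yet its payoff differs, which is precisely the state dependency of Definition 1.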

In technical terms, even more can be said about the types of games GA learning can be interpreted as. GA learning models describe a repeated economic game. Imagine a genetic algorithm using a population of M genetic individuals, each individual being a bit string of length L. Due to the binary coding of genetic individuals, this means that each genetic individual represents one out of 2^L different values. This means that the GA is able to deal with every economic strategy in the set of all available strategies S, where S has the size N = |S| = 2^L. Thus, the GA can be interpreted as a repeated symmetric one-population game.


While this notion is true for most of the economic GA learning models, it is not true for GA models that explicitly describe economic games, including Axelrod (1987) and Andreoni and Miller (1995).

For such an interpretation see e.g. Dawid (1999).

Compared with 'normal' evolutionary games, within most economic GA learning models the 'rules' of the game are different. Whereas in evolutionary games most of the time a strategy is repeatedly paired with single competing strategies, in genetic algorithm learning each strategy plays against the whole aggregate rest of the population. There is no direct opponent to a single strategy. Instead, every economic agent aims to find a strategy i ∈ S performing as well as possible relative to its environment, which is completely determined by the current population n and the objective function R(·).

4. Economic genetic algorithms as evolutionary games

Close relationships between economic learning models and models of evolutionary theory have been recognized before. Marimon (1993) gives a clear notion of the similarities of learning models on the one hand and evolutionary processes on the other. As genetic algorithms, too, have been broadly interpreted as models of economic learning, in this section it is argued that they can be regarded as evolutionary processes as well.

At first glance, it is the structure of genetic algorithms and evolutionary models that suggests a close relationship between GAs and evolutionary economic theory: Both share the central structure of a population of economic agents interacting within some well-defined economic environment and aiming to optimize individual behavior.

As the aim of this paper is to describe economic genetic algorithms as evolutionary processes, the first question to be answered is whether GAs are evolutionary processes at all. In the following, it will be argued that GAs are a specific form of evolutionary processes, i.e. evolutionary games.

Proposition 3 (GA evolutionary games). Every simple, one-population, economic genetic algorithm is an evolutionary game.

In order to give some evidence for this proposition, a definition is needed that clearly states what an evolutionary game is. This paper makes use of the definition by Friedman (1998, p. 16), who gives three characteristics of an evolutionary game:


For a more precise description, cf. Riechmann (1999).

This is valid for all variants of GA selection processes, not only for the standard roulette-wheel selection (Goldberg, 1989). For other types of GA selection operators see e.g. Goldberg and Deb (1991) and Michalewicz (1996).

(a) higher payoff strategies tend over time to displace lower payoff strategies; (b) there is inertia; (c) players do not intentionally influence other players' future actions.

Prior to checking these three points, it is important to note that economic GAs are in fact models of strategic interaction, which has implicitly already been stated in Proposition 2. In the interpretation as models of social learning, GAs deal with a number of economic agents, each trying to find a behavioral strategy which, due to her surrounding, gives her the best payoff possible. GAs are models of social learning, which in fact is a way of learning by interaction. Thus, it can be stated that GAs are in fact models of 'strategic interaction'.

Now, let us focus on Friedman's three characteristics (Definition 4):

(a) Higher payoff strategies tend over time to displace lower payoff strategies. Displacement of strategies in genetic algorithms is a process of change in the genetic population over time. This, in turn, is a question of selection and reproduction. GAs are in fact dynamic processes which reproductively prefer higher payoff strategies to lower payoff ones. It has been shown that in the canonical GA, the probability of a strategy i to be reproduced from its current population n into the population of the next period, P(i|n), depends only on its relative fitness R(i|n), which is the strategy's payoff or market success relative to the aggregate payoff of the population n. Higher relative fitness leads to a higher reproduction probability:

dP(i|n)/dR(i|n) > 0. (2)

Thus, Friedman's condition (a) is satisfied.

(b) There is inertia.

According to Friedman, inertia means that '…changes in behavior do not take place too abruptly' (Friedman, 1998, p. 16).


The Hamming distance between the genetic individuals i and j, in short, is the number of places in which these genetic individuals differ, i.e. the number of bits to be flipped in order to turn individual i into j. See e.g. Mitchell (1996, p. 7).

Normally, the mutation probability μ takes a small value.

The result given in (3) deserves two further remarks. First, the fact that for μ > 1/2 big changes are more likely than small ones explains the fact that for relatively large values of μ, GA results are close to white noise. Second, the result yields an interesting interpretation for the field of economic learning. If mutation is interpreted as learning by experiment, (3) shows that a little experimenting is a good thing to do, while too many experiments will disturb the generation of valuable new economic strategies. If mutation is interpreted as making mistakes in imitation or communication (see e.g. Alchian, 1950), Eq. (3) simply means that one should not make too many of those mistakes.

Operating on the bit strings of genetic individuals, the mutation operator is quite simple. Mutation randomly alters ('flips') single bits of the bit string encoding an economic strategy. Each bit of the string has a small probability μ to be changed. μ, which is called the mutation probability, is uniform for every bit of any genetic individual of every population.

To carry out an analysis concerning the connection of mutation and abruptness of change, a measure for the abruptness is needed. This paper will make use of the Hamming distance between two genetic individuals. According to the Hamming distance, a change from individual i to individual j is said to be more abrupt than the change from individual i to k, if the Hamming distance between i and j, H(i,j), is greater than the Hamming distance between i and k, H(i,k). The question of whether small changes by mutation are more likely than big ones can be answered as follows: The probability of an economic strategy i to be turned into strategy j by mutation, P_m(i,j), depends on the length of the genetic individuals' bit string, L, the mutation probability μ, and the number of bits that have to be flipped in order to turn i into j, i.e. the Hamming distance between i and j, H(i,j):

P_m(i,j) = μ^H(i,j) · (1 − μ)^(L−H(i,j)). (3)

For the usual parameter values of μ, this means the obvious: Small changes in strategy are more likely than big changes. Thus, it becomes evident that GA learning processes are processes which contain some inertia.
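The transition probability of Eq. (3) can be checked numerically. The short sketch below (with illustrative bit strings) confirms that for small μ a one-bit change is far more likely than flipping every bit, while the ordering reverses for μ > 1/2:

```python
def hamming(i, j):
    """Number of bit positions in which individuals i and j differ."""
    return sum(a != b for a, b in zip(i, j))

def mutation_prob(i, j, mu):
    """P_m(i,j) = mu^H(i,j) * (1 - mu)^(L - H(i,j)): the H differing bits
    must all flip, and the remaining L - H bits must all stay unchanged."""
    L, H = len(i), hamming(i, j)
    return mu ** H * (1 - mu) ** (L - H)

i = (0, 0, 0, 0)
small_step = mutation_prob(i, (0, 0, 0, 1), mu=0.01)   # Hamming distance 1
big_step = mutation_prob(i, (1, 1, 1, 1), mu=0.01)     # Hamming distance 4
```

For μ = 0.01 the one-bit change is several orders of magnitude more likely than the four-bit change, which is the inertia property in action.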


In fact, genetic algorithms can be shown to be Markov processes, the main characteristic of which is the 'no memory property'. This property says that there is no memory of the history of the GA process.

For an in-depth discussion of this, refer to Davis and Principe (1993) or Riechmann (1999).

(c) Players do not intentionally influence other players' future actions. In economic GA models, no agent can foresee the future actions of the other agents in her population. All an economic agent in a GA model can do is to try her best to adapt to her neighbors' past actions, for the near past is all such an economic agent can remember. Taking into account these very limited individual abilities, it becomes obvious that there is no room for intentional influences on other agents' actions.

From the above it can be concluded that models of economic GA learning are in fact models which can be interpreted as evolutionary games.

5. Genetic populations as near Nash equilibria

The main structure (information scientists would call it 'data structure') in genetic algorithm learning models is the genetic population. A population contains all the agents that are economically active in a certain period of time. The agents can completely be characterized by their economic strategy. Thus, a population can be described by counting how often each of the different possible strategies is used by the members of the population; one way of representing a population is to give the (absolute or relative) frequency of each possible strategy within the population. This means that a population is nothing more than a distribution of different economic or behavioral strategies. This is true for genetic populations as well as for populations in their game theoretic interpretation. It can be said that a genetic population is a game theoretic population.
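In code, this view of a population reduces to counting strategies; the strategy labels below are hypothetical:

```python
from collections import Counter

# A population is fully described by the frequency of each strategy;
# which particular agent holds which strategy is irrelevant.
population = ["s1", "s1", "s2", "s3", "s1"]

absolute = Counter(population)                                   # absolute frequencies
relative = {s: c / len(population) for s, c in absolute.items()} # distribution
```

Two populations with the same strategy counts are the same population in this sense, whatever the ordering of agents.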

From Section 4 it is known that every economic agent aims to find the best performing strategy i ∈ S with respect to the objective function R(·) and given the strategies of the rest of her population, n. This means that every economic agent faces problem (5):

max_{i ∈ S} R(i|n). (5)

This immediately leads to the concept of Nash equilibria. A Nash strategy is defined as the best strategy given the strategies of the competitors, and a Nash strategy is exactly what every economic agent, alias genetic individual, is trying to reach.

Proposition 5 (GA populations: near-Nash equilibrium). In every simple, one-population, economic genetic algorithm, the population tends towards a near-Nash equilibrium.


This is, of course, a slight simplification: Ruling out non-Nash strategies may alter the population in a way that makes former Nash strategies non-Nash and vice versa.

Replicator dynamics (see e.g. Weibull, 1995 or Hofbauer and Sigmund, 1988, 1998), which have often been used to characterize evolutionary dynamics, seem to be unsuited for some economic problems. (Mailath, 1992, p. 286 even suggests that 'there is nothing in economics to justify replicator dynamics'.) Applied to the analysis of GA learning, replicator dynamics, not directly accounting for stochastics, are simply not precise enough to cover the whole GA learning process. As an illustration, in the cobweb model by Arifovic (1994), there is no single, direct opponent to the firm. Instead, it is the aggregate output of all the competitors which each firm has to take into account. This is to say: Although there is no direct opponent to agents in a market, which is different from most of the 'normal' evolutionary games, markets can still be seen as games, even if the rules are slightly different. The main point still remains the same: Agents are trying to play Nash.

It has already been shown that selection and reproduction work in favor of relatively well performing strategies. This means that via selection and reproduction, strategies which are not Nash strategies are eliminated from the population, whereas the population share of Nash strategies is increased. At the same time, due to the fact that agents do not stop experimenting, some agents carry out experiments that result in non-Nash strategies. Thus, although selection and reproduction drive the population towards a Nash equilibrium, learning by experiment prevents the population from fully reaching such an equilibrium. This means that genetic populations represent a state which is not a real Nash equilibrium but which is not far from it. This state will be called 'near-Nash equilibrium' throughout this paper.

Thus, as a first step, an economic genetic algorithm can be viewed as modeling a system of economic agents, each of them trying to play a Nash strategy against the rest of the population. In economic terms this means that every agent tries to coordinate her strategy with the other agents' ones, for this is the best way of maximizing her profit (or utility or payoff or whatever the model wants the agent to maximize). The population is driven to the neighborhood of a Nash equilibrium by the forces of the market, which are represented by selection in economic genetic algorithms. But, due to the effects of ongoing experimentation, the population will never be able to fully reach the Nash equilibrium.
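A minimal simulation sketch of this two-force picture (roulette-wheel selection pulling towards equilibrium, mutation pushing away from it) might look as follows; the cobweb-style payoff, the demand and cost numbers, and all parameters are assumptions for illustration only:

```python
import random

random.seed(42)
L, M, MU = 6, 20, 0.02     # bits per strategy, population size, mutation rate

def decode(bits):
    """Bit string -> supply quantity on a grid (illustrative)."""
    return int("".join(map(str, bits)), 2) / 10

def profits(pop):
    """State-dependent fitness: price from an assumed demand p = 30 - Q."""
    price = max(30.0 - sum(decode(b) for b in pop), 0.0)
    # a small floor keeps selection weights positive even at a loss
    return [max(decode(b) * (price - 2.0), 1e-9) for b in pop]

def generation(pop):
    fit = profits(pop)
    pop = random.choices(pop, weights=fit, k=M)            # selection
    return [tuple(bit ^ (random.random() < MU) for bit in ind)
            for ind in pop]                                # mutation

pop = [tuple(random.randint(0, 1) for _ in range(L)) for _ in range(M)]
for _ in range(50):
    pop = generation(pop)
```

Selection concentrates the population on well-performing quantities, but mutation keeps injecting new strategies, so a long run hovers near, rather than at, an equilibrium.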

6. Evolutionary stability of genetic populations


This paper will make use of the concept of evolutionary stability, especially the notion of evolutionarily stable strategies or evolutionarily stable states (ESS). See e.g. Maynard Smith (1982), Hofbauer and Sigmund (1988, 1998), Samuelson (1997), Weibull (1995), Marks (1992), or Mailath (1992). In short, a strategy is evolutionarily stable if, relative to its population, it performs better than any new, 'invading' strategy. Though widely used in economic dynamics, the concept of ESS has a serious weakness which makes it only limitedly suitable for the analysis of genetic algorithms: There is no explicit formulation of the selection process underlying the concept of evolutionary stability. ESS are based on the notion that invading 'mutant' strategies are somehow rejected or eliminated from the population. It is not clear how this rejection is carried out. Genetic algorithms, in contrast, present a clear concept of rejection: Every strategy will be exposed to a test, which has been described as a one-against-the-rest game in the previous sections. Then the strategy will be reproduced or rejected with a probability depending on its performance (i.e. market performance) in this game. Thus, the rejection of strategies in GA learning models is a question of reproduction. GA reproduction has two main features: it selects according to performance and it selects probabilistically, which means that a bad strategy will be rejected almost surely, although not with probability one.

Thus, a refined concept of evolutionary stability is needed for genetic algorithms. A possible way of setting up a concept of evolutionary stability for genetic algorithms which keeps the spirit of the ESS is the following: A genetic population is evolutionarily stable if the process of the genetic algorithm rejects an invasion of the genetic population by one or more strategies. Invasion itself can either take the form of a totally new strategy entering the population or it can simply mean a change in the frequency of the strategies already contained within the population. Thus, a more precise definition of an evolutionarily stable population might be: A population is evolutionarily stable if it is resistant against changes in its composition (see Definition 7).

6.1. Evolutionary superiority

More formally, a genetic population n will be called evolutionarily superior to population m (denoted as n ≻ m) if it exhibits two characteristics:

(a) Every strategy i contained within population n gains at least the same fitness in the basic population n as it gains in the invaded population m, while at least one strategy gains even more fitness in n than in m.

(b) Every invading strategy, i.e. every strategy contained in m but not in n, gains less fitness in population m than any of the strategies of population n.


This point has, in earlier drafts of this paper, led to misconceptions: Neither the concept of evolutionary superiority nor the concept of Pareto superiority makes any statements about some kind of welfare. Originally, Pareto superiority is just a means to order points within a high-dimensional space. Applying evolutionary superiority analogously to the original meaning of the Pareto criterion has no welfare implication at all. It is just used in order to make genetic populations weakly comparable with respect to the process of the genetic algorithm and the GA's way of turning one population into another.

Note that the above characterizes a kind of weak dominance concept. This definition of evolutionary superiority induces a partial ordering on the space of genetic populations, which resembles the concept of Pareto superiority.

In more mathematical terms, the definition reads:

Definition 6 (Evolutionary superiority). A genetic population n is called evolutionarily superior to population m, if

R(i|n) ≥ R(i|m) ∀ i ∈ n, (6)

∃ j with R(j|n) > R(j|m), (7)

R(k|m) < R(i|m) ∀ i ∈ n; ∀ k ∈ m\n. (8)

To illustrate this concept, consider the following example of an oligopoly game. Assume a population of five firms, which covers the whole supply side of the market. Demand is given exogenously and does not change over time. The current population is characterized by the vector of output quantities, i.e.

n = (3, 2, 2, 3, 2).

The first element of n gives the output level of firm number one, the second element gives the output of firm number two, and so on. Let us further suppose that the resulting aggregate output level of 12 leads to a market price that in turn leads to positive profits for every single firm within the population. Now, at the end of the current period, firm number two starts experimenting and decides to change its output level to 10. In other words: Infection is taking place, changing population n to

m = (3, 10, 2, 3, 2).


See Riechmann (1999) for the restrictions different learning techniques impose on the set of available strategies.

Note e.g. the similarity to Weibull's (1995, p. 36) definition.

With high probability, the genetic algorithm will reject the new strategy of firm number two and replace it by a strategy from the rest of the population. By that, the infection that turned population n into population m is rejected. Though this does not inevitably mean that population n is directly regained by that process, it shows at least that, in the light of GA dynamics, n is 'better' than m. Population n is evolutionarily superior to population m. This is verified by application of the two criteria from above: Strategies one and three to five (i.e. the output strategies of firm number one and firms number three to five) lead to greater payoff in population n than in population m. Hence, criterion (a) is met. Moreover, strategy number two in population m is the worst performing strategy in this population. By this, criterion (b) is met as well. Population n is evolutionarily superior to m.

At last, let us focus on the evolutionary superiority of n over m by using Definition 6. The fitness of each of the five firms' strategies is higher in population n than in m (Eqs. (6) and (7)). Note that this is especially true for firm number two, which suffers from a particularly high loss due to its change in strategy. Firm number two is the only one that has changed its strategy and at the same time it is the worst performing firm, thus fulfilling condition (8).

For full validity, a further remark is necessary: Within genetic algorithms, invading strategies can only result from reproduction ('imitation'), crossover ('communication') or mutation ('experiment') within the population itself. This means that the final outcome of GAs without mutation (i.e. processes with learning by imitation and communication only), which are uniform populations, may have other populations being superior to them; but, without mutation, better populations simply cannot arise.

6.2. Evolutionarily stable populations

A population is evolutionarily stable in the sense of Definition 6 if there is no other population within S, the set of all populations, which is evolutionarily superior to it.

Definition 7 (Evolutionary stability of genetic populations). A genetic population n is called an evolutionarily stable population, if

∄ m ∈ S with m ≻ n. (9)


Again, see Riechmann (1999) for a more detailed explanation.

This may be regarded as a weakness of the concept of genetic algorithm learning, as it neglects the possibility of modelling path dependence or lock-ins. So it may be worthwhile to mention two further points, which are mainly beyond the scope of this paper. First, depending on the underlying (economic) problem, some GAs spend long times supporting populations which are not evolutionarily stable. Some keywords pointing to this topic are 'deceptiveness' of genetic algorithms and the problem of 'premature convergence'. Secondly, the lack of ability to model lasting lock-ins or path dependence applies to the basic genetic algorithm. There are variations of genetic algorithms capable of modelling these phenomena. One keyword pointing into this direction of research may be 'niching mechanisms'. Again, a good starting point for descriptions of all of the various special cases and variants of GAs is Goldberg (1989).

After an invasion of an evolutionarily stable population, either the original population is regained, or infection causes the transition to another evolutionarily stable population, provided there is one.

Due to the fact that genetic algorithm selection is a probabilistic rather than a deterministic process, invading strategies, even in an evolutionarily stable population, may not be rejected within a single round of the algorithm. It can only be stated that the invader will be driven out of the population within finite time. That is to say: If a genetic population is evolutionarily stable, it will recover from an invasion within a finite number of steps of the GA, which means that in the long run the population will not lastingly be changed. Nevertheless, once an evolutionarily stable population is invaded, there may appear a number of evolutionarily inferior populations within the next few rounds of the GA. These populations represent transitory states of the process of rejecting the invader. Riechmann (1999) shows that there is in fact more than one population that will occur in the long run. These may be transitory populations as well as different populations which are evolutionarily stable, too.

7. Evolutionary dynamics


Birchenhall et al. (1997) clearly show that there is a connection between the extent of diversity in a population and the learning speed that is achieved by members of that population.

In this interpretation, 'development' is a term of stability rather than optimality.

This reflects a rather classical economic thought, given e.g. in Hayek (1969) (usually quoted as Hayek, 1978).

Knowing the special form of the dynamic process of the GA and the direction in which this process will lead, a few more words can be said about the role of heterogeneity for the dynamics. It seems important to notice the way economic change takes place. Starting with an arbitrary population, genetic operators (i.e. learning mechanisms) cause changes in the population, while new types of behavior are tested. The test is performed by exposing the strategies to the market. The market reveals the quality of each tested strategy relative to all other strategies within the population. Then selection leads economic agents to abandon poorly performing strategies and adopt better ones (imitation) or even create new ones by communication (crossover) or experimentation (mutation). After that, again, strategies are tested and evaluated by the market, by that way coordinating the agents' strategies, and so on.

There are two crucial aspects of this repeated process: First, it is the diversity of strategies that drives economic change, i.e. the succession of populations constantly altering their composition. Under the regime of genetic algorithm learning, this change in individual as well as in social behavior heavily (while not entirely) relies on learning by imitation and learning by communication. Evidently, these kinds of learning can only take place within heterogeneous populations. The more diverse a population is, the greater is the number of different strategies or even parts of strategies that a member of this population can learn by imitating the other members or by communicating with them. Thus, in a way, it can be said that it is heterogeneity that is the main driving force behind economic change.


Summarizing, under the regime of the market, the evolutionary dynamic of genetic algorithm learning is mainly driven by two forces: Heterogeneity, which constantly induces behavioral (and by that, economic) change, and the market as a coordination device, revealing information about the quality of each type of behavior and ruling out poorly performing strategies, thus turning economic change into economic development.

Finally, looking at genetic algorithm learning from an evolutionary point of view, one more point has to be added. The evolutionary dynamic of the GA learning process is essentially a two-stage process. It has been shown that, as long as possible, genetic algorithm learning and market selection 'improve' individual and, as a result, social behavior. But this is just the first stage of the underlying dynamics. Once an evolutionarily stable state of behavior has been reached, there certainly is no room for further 'improvement'. But, due to the special structure of genetic algorithms, this does not mean that in this state economic agents stop changing their behavior. Instead, at this point the second stage of the dynamic process starts. Learning, or what has above been called change, still continues and will not cease to continue. There will still appear new ways of individual behavior within a population. Now it is the role of the market (i.e. selection) to drive these strategies out of the population again. Due to the probabilistic nature of the GA, this process may take more than one period, thus producing one or even more transitory populations until an evolutionarily stable population is regained. To put it in different words: Even after an evolutionarily stable state is reached, evolutionary stability is continuously challenged by new strategies. While in the first phase of the GA learning process some of the new strategies are integrated into the population, in the second phase all of the invaders will be replaced again. So there is an ongoing near-equilibrium movement resulting from the continuous rejection of invading strategies.

In fact, genetic algorithm learning leads to an

interplay of coordinating tendencies arising from competitive adaptions in the markets and de-coordinating tendencies caused by the introduction of novelty

(Witt, 1993, p. xix), which has often been regarded as a key feature of evolutionary economic analysis of the market. See Witt (1985).


A survey on various selection schemes can be found in Goldberg and Deb (1991).

For an interpretation and an extension of the election operator, see Franke (1997).

It has been largely neglected throughout this paper that the stability properties of at least some variations of economic GAs certainly depend on the underlying economic problem, too. This means that there might be some problems that can cause lasting changes of the population even in GAs with elitist selection. Examples of this might be found in genuinely cyclic problems like cases of Lotka–Volterra dynamics (see, e.g., Hofbauer and Sigmund, 1998, pp. 11), which, in economics, have been applied to models of business cycles (Goodwin, 1967).

theory, a clear-cut economic reasoning can be found as to why this state of Lyapunov stability shows up: it is a process of near-equilibrium dynamics, caused by the continuously ongoing challenge of the ESS by newly learned strategies and the rejection of these strategies by the market, that prevents social behavior from total convergence but still keeps it close enough to a stable state.

8. Conclusions

Economic genetic algorithm learning can be shown to be a specific form of an evolutionary game. In this paper, this has been discussed for the most basic form of genetic algorithm, the canonical GA. For populations of GAs, concepts of evolutionary superiority and evolutionary stability are developed, which help to explain the way processes of GA learning behave. This behavior turns out to be an evolutionary, two-step process: First, there is a movement of populations towards an evolutionarily stable state. Second, once such a state has been reached, the learning process turns into a near-equilibrium dynamic of getting out of evolutionarily stable states and returning there again.

The notion of GA learning as an evolutionary process can be transferred to the analysis of modified genetic algorithms. As an example, consider certain changes to the selection operator. Elitist selection schemes, including the selection within evolution strategies (cf. Bäck et al., 1991) and Arifovic's (1994) election operator, ensure that at least the first-best genetic individual of a population will become a member of the next generation's population. In contrast to roulette-wheel selection, elitist selection ensures that invading strategies which turn out to be the worst strategies throughout the population will be replaced at once. This means that there is no room for transitory populations. Bad strategies, i.e. strategies obeying condition (8), are ruled out even before they can enter a population. This certainly leads, in most cases, to asymptotic behavioral stability.
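The contrast between the two selection schemes can be sketched as follows. This is a hedged illustration under the same hypothetical setup as before (bitstring strategies, a stand-in payoff function); the helper names are assumptions, not the paper's notation.

```python
import random

def roulette_select(pop, fitness):
    # Purely proportional selection: even the current best strategy
    # may, by chance, fail to be copied into the next generation.
    weights = [fitness(s) for s in pop]
    return [s[:] for s in random.choices(pop, weights=weights, k=len(pop))]

def elitist_select(pop, fitness):
    # Elitism: the current best strategy is carried over unchanged,
    # so a worst-performing invader can never displace it.
    best = max(pop, key=fitness)
    rest = roulette_select(pop, fitness)[:-1]
    return [best[:]] + rest

fitness = lambda s: sum(s) + 1  # placeholder "market" payoff
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
new_pop = elitist_select(pop, fitness)

# The elitist step guarantees the best payoff in the population never drops.
assert max(fitness(s) for s in new_pop) >= max(fitness(s) for s in pop)
```

The guarantee in the final assertion is exactly what rules out transitory populations in the elitist case: under roulette-wheel selection alone, no such monotonicity holds, and a bad invader may persist for one or more periods before the market removes it.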


References

Alchian, A.A., 1950. Uncertainty, evolution and economic theory. Journal of Political Economy 58, 211–221.

Andreoni, J., Miller, J.H., 1995. Auctions with artificial adaptive agents. Games and Economic Behavior 10, 39–64.

Arifovic, J., 1994. Genetic algorithm learning and the cobweb model. Journal of Economic Dynamics and Control 18, 3–28.

Arifovic, J., 1995. Genetic algorithms and inflationary economies. Journal of Monetary Economics 36, 219–243.

Arifovic, J., 1996. The behavior of the exchange rate in the genetic algorithm and experimental economies. Journal of Political Economy 104, 510–541.

Axelrod, R., 1987. The evolution of strategies in the iterated prisoner's dilemma. In: Davis, L. (Ed.), Genetic Algorithms and Simulated Annealing. Pitman, London, pp. 32–41.

Bäck, T., Hoffmeister, F., Schwefel, H.-P., 1991. A survey of evolution strategies. In: Belew, R.K., Booker, L.B. (Eds.), Proceedings of the 4th International Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo, CA, pp. 2–9.

Birchenhall, C., 1995. Modular technical change and genetic algorithms. Computational Economics 8, 233–253.

Birchenhall, C., Kastrinos, N., Metcalfe, S., 1997. Genetic algorithms in evolutionary modelling. Journal of Evolutionary Economics 7, 375–393.

Bullard, J., Duffy, J., 1998. A model of learning and emulation with artificial adaptive agents. Journal of Economic Dynamics and Control 22, 179–207.

Davis, T.E., Principe, J.C., 1993. A Markov chain framework for the simple genetic algorithm. Evolutionary Computation 1, 269–288.

Dawid, H., 1994. A Markov chain analysis of genetic algorithms with a state dependent fitness function. Complex Systems 8, 407–417.

Dawid, H., 1999. Adaptive Learning by Genetic Algorithms, 2nd Edition. Springer, Berlin.

Franke, R., 1997. Behavioral heterogeneity and genetic algorithm learning in the cobweb model. Discussion Paper 9, IKSF – Fachbereich 7 – Wirtschaftswissenschaft, Universität Bremen.

Friedman, D., 1998. On economic applications of evolutionary game theory. Journal of Evolutionary Economics 8, 15–43.

Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.

Goldberg, D.E., Deb, K., 1991. A comparative analysis of selection schemes used in genetic algorithms. In: Rawlins, G.J.E. (Ed.), Foundations of Genetic Algorithms. Morgan Kaufmann, San Mateo, CA, pp. 69–93.

Goodwin, R.M., 1967. A growth cycle. In: Feinstein, C.H. (Ed.), Socialism, Capitalism and Economic Growth. Cambridge University Press, Cambridge, UK.

Hayek, F.A.v., 1969. Freiburger Studien. Der Wettbewerb als Entdeckungsverfahren. J.C.B. Mohr (Paul Siebeck), Tübingen, pp. 249–265 (Chapter 15).

Hayek, F.A.v., 1978. New Studies in Philosophy, Politics, Economics and the History of Ideas. Competition as a Discovery Process. Routledge & Kegan Paul, London, pp. 179–190 (Chapter 12).

Hofbauer, J., Sigmund, K., 1988. The Theory of Evolution and Dynamical Systems. Cambridge University Press, Cambridge, UK.

Hofbauer, J., Sigmund, K., 1998. Evolutionary Games and Population Dynamics. Cambridge University Press, Cambridge, UK.

Marimon, R., 1993. Adaptive learning, evolutionary dynamics and equilibrium selection in games. European Economic Review 37, 603–611.

Marks, R.E., 1992. Breeding hybrid strategies: optimal behavior for oligopolists. Journal of Evolutionary Economics 2, 17–38.

Maynard Smith, J., 1982. Evolution and the Theory of Games. Cambridge University Press, Cambridge, UK.

Michalewicz, Z., 1996. Genetic Algorithms + Data Structures = Evolution Programs, 3rd Edition. Springer, Berlin.

Mitchell, M., 1996. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, London.

Nelson, R.R., Winter, S.G., 1982. An Evolutionary Theory of Economic Change. MIT Press, Cambridge, MA, London.

Riechmann, T., 1999. Learning and behavioral stability – an economic interpretation of genetic algorithms. Journal of Evolutionary Economics 9, 225–242.

Samuelson, L., 1997. Evolutionary Games and Equilibrium Selection. MIT Press Series on Economic Learning and Social Evolution. MIT Press, Cambridge, MA, London.

Weibull, J.W., 1995. Evolutionary Game Theory. MIT Press, Cambridge, MA, London.

Witt, U., 1985. Coordination of individual economic activities as an evolving process of self-organization. Économie Appliquée XXXVII, 569–595.

Fig. 1. Structure of the canonical genetic algorithm.
