
Our approach to stability

Structural change and instantaneous equilibrium

We are now in a position to clarify our approach to the analysis of the stability of the NE.

The basic challenge is to overcome the limitations of standard approaches to complexity by developing a method that can cope with multiple interrelations between key variables.

In what follows, we will focus upon our meta-model, that is, our set of instructions for conducting stability analysis. These instructions address important issues such as the kind of patterns to be considered and the role that existing macro models play in the interpretation of these patterns.

Some general features of our meta-model

Our approach is consonant with the neo-modern paradigm and draws on the broad view of complexity underlying the SFA. Despite its limitations, the SFA provides a very useful benchmark for our conceptualization of the NE. Indeed, the best way to present our methodology is to point out its similarities to, and differences from, the SFA. Let us start by noting that we subscribe to the same canon of ‘light theory’ underlying the SFA and, therefore, do not seek to build an all-encompassing dynamic model of the economy.1 Moreover, we agree that existing macroeconomic theories have a role to play in the analysis. We also share the view that the real-world economy and economic methodology have many different layers. Indeed, the basic contribution made in this volume is to take full advantage of this argument, pushing the frontiers of knowledge in the realm of complexity even further. Our approach amounts to suggesting that the study of macroeconomic stability calls for consideration of a new layer of complexity, both in terms of actual real-world phenomena and of methods of analysis. We shall now try to justify this claim.

Focus on complex historical patterns

In saying that our approach to stability analysis involves attention to another distinct layer of reality, we mean, among other things, that the patterns we take into account when describing the main features of the NE are quite different from those emphasized by the SFA. We take into consideration more than actual statistical data, such as aggregate time series, or the iterative processes of computer simulations. By and large, these sources generate instances of ‘simple’ patterns (e.g. the short-run non-proportionality of consumption and income or the behaviour of a particular type of financial market).

Instead, in dealing with the NE, we also need to consider ‘complex’ patterns referring mainly to the structure of the system, rather than to iterative processes; in other words, we have to focus on what Pryor (2000) calls ‘dimensions of structural complexity’.

It is important to note that the formation of such complex patterns, unlike SFA patterns, cannot be studied by means of non-linear dynamic models or statistical techniques. In particular, we hold that probabilistic laws concerning such patterns cannot be found, for such laws are based on the assumption that the causal structure is stable.

But as pointed out by Dow, very few and relatively simple aspects of life

such as mortality rates or incidence of damage to houses…are the areas where frequency distributions can be constructed and probability statistics calculated…these are then used as the basis for our insurance premiums. Even then premiums change with changing patterns of health or weather. This is the closest we get in real life to deterministic laws, albeit with a substantial stochastic variation.

(Dow 2002:144)

On the basis of this view, and in line with post-modern criticism, we call into question any ambition towards ‘grand theory’ underlying the SFA. However, we do not abandon all attempts at simplification, whether at the descriptive or theoretical level. In fact, we simplify the description of the NE by singling out a few empirical generalizations or patterns that seek to capture significant qualitative features of our economies linked to the processes of change and evolution. The empirical generalizations we have in mind are ‘not numerical laws but “stylised facts”: generalizations that do not hold exactly, and which are typically not established by any formal statistical procedure’ (Backhouse 1997:104) or by reference to ceteris paribus clauses. Alternatively, these patterns could also be regarded as ‘tendencies’ or ‘empirical laws’ concerning the economy as a whole, that is, systemic empirical laws.2 In this book, we view the NE as consisting of a number of patterns linked to the acceleration of certain processes, such as globalization, technological change, the weight of finance and services, the focus on data and forecasts, the role of the state and so on, which seem to characterize the NE in all major countries.

In conclusion, the stylized facts relevant to our stability analysis are broader than those normally considered by the SFA or the standard economic literature. In particular, as shown in Chapter 10, ideally they should be able to shift our focus from recurrent phenomena to irreversible trends, from simple data concerning macro aggregates to wider micro-macro features, from isolated phenomena to interrelated phenomena and from simple economic factors to complementary institutions. However, we shall find that establishing these broad stylized facts is no simple task and that it can only be carried out by developing an appropriate methodology.

Institutions as generating functions of complex patterns

First, however, we need to clarify some other aspects of these empirical laws. More precisely, we have to take up the issue of what the generating functions, or foundations, of these laws consist in. Given that they cannot be accounted for by statistics or probability theory, where do they lie? We suggest that such laws are rooted in certain institutional factors, broadly understood as encompassing ‘not simply organizations, such as corporations, banks and universities, but also integrated and systematic social entities such as money, language, and law’ (Hodgson 1998:179). For example, at the root of the recent acceleration of the globalization trend we find institutional features, such as regulations and treaties that unify markets and promote trade, as well as new technologies.

This is not to say that institutions determine everything. We strongly disagree with attempts to explain the emergence of institutions on the basis of given individuals alone, as implied by the methodological individualism underlying neoclassical theory and the so-called ‘New Institutional Economics’ (see e.g. Schotter 1981; Williamson 1985; for a critique, see, for example, Hodgson 1998:176; Hollingsworth 2000:602). But this does not imply that the opposite is necessarily true. As Hodgson points out:

It is simply arbitrary to…say ‘it is all reducible to individuals’ just as much as to say it is ‘all social and institutional’…neither individual nor institutional factors have legitimate explanatory primacy. The idea that all explanations have to be solely and ultimately in terms of individuals (or institutions) is thus unfounded.

(2003:xv)

What is true, of course, is that ‘institutions are formed and changed by individuals just as individuals are shaped and constrained by institutions’ (Hollingsworth 2000:603).

However, in order to avoid an infinite regress problem, a choice must be made. We choose to emphasize the view that ‘human activity can only be understood as emerging in a context with some pre-existing institutions’ (Hodgson 2003:xvii), and ‘that institutions not only constrain but also shape individuals’ (Hollingsworth 2000:603). Various research projects linked to this view (see e.g. Hodgson 2003 for an overview) are being carried out. One can first focus on the effects of institutional constraints and downward causation and then seek to understand how the interaction between individuals gives rise to new institutional forms. For example, one can attempt to develop a theory of economic and institutional evolution along Darwinian lines.

Our macroeconomic standpoint leads us to emphasize mainly the first causal link, that is, how institutions constrain agents’ behaviour and give rise to new perceptions and dispositions within them. Indeed, as many institutionalists hold, ‘to take the institution as a socially constructed invariant—or emergent property—is a basis for consideration of macroeconomic dynamics and behaviour’ (Hodgson 1998:189).3 This view reveals a crucial divergence from standard macro theory, which, on the basis of its key assumption of exogenous preferences, considers institutions as external constraints that limit but do not influence or shape individual behaviour.

The idea that institutions shape individual behaviour and preferences is well rooted in the history of economic thought. One need only recall, for example, that the emphasis on the malleability of individual preferences underlies a broad evolutionary approach to economic analysis and, in particular, ‘old’ American Institutionalist economics, from Veblen to Galbraith and their modern counterparts, including Hodgson (e.g. Hodgson 1998, 2001, 2003, 2004b). As Hodgson states, ‘Wesley Mitchell argued that the evolution of money cannot be understood simply in terms of cost reduction and individual convenience…the evolution of money changes the mentality, preferences and way of thinking of individuals themselves’ (2003:xvi). Moreover, ‘Mitchell thought of business cycles as a phenomenon arising out of the patterns of behaviour generated by the institutions of a developed money economy…it is institutions that create the regularities in the behaviour of the mass of people that quantitative work analyses’ (Rutherford 2001:177).

For our purposes, it is important to note that this view also underlies more recent theoretical developments, including modern reformulations of evolutionary views (e.g. Nelson and Winter and their followers) and the complexity approach. In particular, it underlies the work of those authors who emphasize an institutional view of complexity, such as North (1991), Prasch (2000a, b) and Viskovatoff (2000). All of these contributions are of relevance for us in that they address the role played by institutions in economic growth. As Nelson and Sampat (2001:39) have remarked, institutions affect what most economists regard as the ‘proximate’ factors behind productivity growth and increased standards of living, such as ‘technological advance, physical capital formation, education and the efficiency of the economy and the resource allocation process’ (ibid.). Moreover, ‘institutions influence the ways in which economic actors get things done’ (ibid.).

More precisely, North effectively describes the positive role of institutions in ensuring stability as well as in generating structural patterns and growth. Having defined institutions as ‘the rules of the game in society or…the humanly devised constraints that shape human interaction’ (North 1990:3) and having observed that they consist of ‘both informal constraints (sanctions, taboos, customs, traditions and codes of conduct) and formal rules (constitutions, law, property rights)’ (1991:97), North argues that:

Throughout history, institutions have been devised by human beings to create order and reduce uncertainty in exchange… Institutions provide the incentive structure of an economy; as that structure evolves, it shapes the direction of economic change towards growth, stagnation or decline.

(ibid.)

Moreover, following the view that economic history can be understood as a series of stages running from local autarky to ever greater specialization and division of labour, sustained by continuously more productive technology, North points out that the ‘spontaneous’ passage from one stage to the next does not necessarily occur. Without certain institutions, inefficient forms of exchange, such as the Suq, will not disappear:

What is missing in the Suq are the fundamental underpinnings of institutions…these include an effective legal structure and court system to enforce contracts which in turn depend on the development of political institutions that will create such a framework. In their absence there is no incentive to alter the system.

(North 1991:104)

Macro models and interpretations of complex patterns

Now that we have specified the new kind of empirical laws that will be considered in this book, we shall examine our other claim that this approach to stability involves the development of a new method or branch of macroeconomics. In other words, we acknowledge that descriptive simplification based on the identification of empirical laws underlying the NE is only a first step. Some kind of theoretical simplification is also required; in particular, we have to show that the institutional approach to complexity being endorsed here makes it possible to perform a feasible analysis of stability. As our approach abandons the search for dynamic patterns of simplification on the basis of mathematical and statistical tools, we must also propose an alternative method of achieving simplification.

Strictly speaking, this does not imply having to give up the use of formal models altogether. While we subscribe to the view that the economy is an ‘open’ system and therefore seek to account for a plurality of relations, we also believe that simplification calls for some kind of modelling approach (see e.g. Chick and Dow 2001; Chick 2003).

As noted by the SFA theorists, a great deal of data can be compressed into models, making them undeniably useful in dealing with a large number of variables in a compact way, as in our macroeconomic analysis. However, it is also true that there are ‘better’ and ‘worse’ models, more or less ‘general’ models, models that are more or less capable of accounting for the complex patterns we have singled out and so on. Our suggestion, which follows from the ‘light theory’ canon, is that we should abandon the use of dynamic models that attempt to summarize the complex evolution of the economy in a simplistic formula.

This critique, however, does not automatically carry over to the familiar structural ‘static’ macroeconomic models, such as IS-LM. In what follows, we shall see that these can also play a role in our stability analysis. Indeed, like the SFA, our approach is not meant to replace all existing methods or theories. On the contrary, it acknowledges the role of the major macro theoretical paradigms in achieving the simplification required by a manageable approach to stability.

However, we make the important claim that this contribution is limited. Indeed, neither the SFA nor existing macroeconomic theory alone is sufficient to cope successfully with the global stability issue. In order to understand this claim, a few points need to be considered. First of all, in our view, theories are explanatory devices. In what follows, we shall maintain an ‘essentialist’ scientific realist perspective, according to which the aim of science is ‘to discover the hidden essential mechanisms causing the observable events’ (Boylan and O’Gorman 1995:62).4 This point reveals an important difference with respect to the SFA. As pointed out by Pryor (2000), one weakness of the SFA is that its main aim is to demonstrate that certain events can occur. In our view, however, this is not enough; one must also try to explain why they occur and establish what major causal mechanisms are involved.

Second, we believe that macroeconomic reality is so complex that no single model can really hope to achieve a full understanding of it. In particular, it is difficult to imagine how one could model the interactions between the key trends of the NE that we have singled out. A large number of causal mechanisms seem to be operating simultaneously; these can be considered as different aspects of a full account of stability, like pieces of a puzzle.

Third, the puzzle metaphor also allows us to see how existing macro theories might be applied for our purposes. To carry out stability analysis, we need to follow a two-stage strategy in order to complete the puzzle. In the first stage, we take inventory of all the different parts, that is, the various causal mechanisms, revealed by existing macroeconomic theories. In the second, we try to piece them together according to a certain pattern. In principle, there are as many potential patterns as there are theories.

The key steps in our simplification strategy

On these grounds, we can now outline our strategy for simplification aimed at providing a manageable account of the stability of the NE. A number of steps can be identified. The first is to consider only two basic macro paradigms: neoclassical and Keynesian. The rationale for this move towards simplification is that both paradigms are in line with our essentialist scientific realist perspective and can be regarded as singling out two contrasting sets of essential causal factors at play. Most of the many other approaches or variants available in macroeconomics can be shown to differ from the two basic paradigms only in incidental factors. In other words, the distinction between essential and incidental factors enables us to simplify the range of possible interpretations of the puzzle.

The second step of our simplification strategy is to view both paradigms as capable of conceptualizing instantaneous equilibrium and identifying the causal mechanisms operating at any given point in time. In line with our rejection of any ambition towards grand dynamic theory, we narrow down the potential of both theories to this weak notion of equilibrium. We believe that while the ‘light theory’ canon rules out stationary long-run equilibrium states and deterministic laws of the long-term evolution of complex systems, it is not necessarily inconsistent with instantaneous equilibrium. Whereas the SFA regards economic science as consisting of the isolation of universal patterns of dynamic simplification, we identify this notion of equilibrium as the scientific core of economics and suggest that the necessary simplification is to be found in static analysis. Instantaneous equilibrium plays the role of a benchmark in our analysis, as it shows the key factors playing a causal role at any given moment in time.

The third step in simplification is to create a single ‘map’ of all the possible causal mechanisms at play. This helps us describe what is going on in the NE in terms of stability and instability factors. This map, which can be regarded as a new layer of macroeconomics, can be drawn by combining the insights of the two basic paradigms concerning the possible effects of specific NE trends, such as globalization, rapid technological growth and so on. This move—which follows from the view that description is ‘theory laden’ (see e.g. Boylan and O’Gorman 1995:77–8)—makes sense because these paradigms are, in a certain sense, complementary. What constitutes a primary or causal factor for neoclassical theory turns out to be merely a secondary factor in Keynes’s theory and vice versa. In principle, this means that they tend to identify alternative sources of stability and instability. What causes instability for one theory may be seen as a stability factor in the other. This third step should make it obvious that our approach does not consist so much in elaborating a new theory as in combining the insights of already existing theories into a meta-model. The individual insights are treated in isolated fashion, like the pieces of a puzzle.
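Purely as an illustration of what such a map might look like in schematic form, the sketch below cross-classifies a few NE trends against the stability assessments suggested by the two paradigms. It is a minimal, hypothetical rendering: the trend names, the Assessment labels and the ne_map structure are shorthand introduced here for clarity, not a formal part of the meta-model described in the text.

```python
# Hypothetical sketch of the 'NE map': each NE trend is cross-classified by the
# stability assessment it might receive under the neoclassical and Keynesian paradigms.
# Trend names and assessments are illustrative placeholders, not the book's own list.
from dataclasses import dataclass
from enum import Enum


class Assessment(Enum):
    STABILIZING = "stabilizing"
    DESTABILIZING = "destabilizing"


@dataclass
class MapEntry:
    neoclassical: Assessment
    keynesian: Assessment


# The two paradigms tend to be complementary: what one reads as a source of
# stability, the other may read as a source of instability.
ne_map = {
    "globalization": MapEntry(Assessment.STABILIZING, Assessment.DESTABILIZING),
    "rapid technological change": MapEntry(Assessment.STABILIZING, Assessment.DESTABILIZING),
    "greater weight of finance": MapEntry(Assessment.STABILIZING, Assessment.DESTABILIZING),
}

# Puzzle-assembly step: list the trends on which the paradigms disagree, i.e. the
# points where a meta-interpretation must weigh their relative plausibility.
contested = [trend for trend, entry in ne_map.items()
             if entry.neoclassical != entry.keynesian]
print(contested)
```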

The last step in our simplification strategy is to devise a plausible broad account of stability, referred to as a ‘meta-interpretation of the NE’, by comparing the alternative scenarios proposed by the two paradigms. The two paradigms tend not only to suggest specific causal mechanisms but also to provide contrasting global interpretations of the ‘NE map’. Each theory thus builds a kind of scenario in which some tendencies described by the map are bound to prevail over others. The aim of our comparison is to try to overcome the one-sidedness of each account of the NE by singling out criteria for assessing their relative plausibility. We contend that plausibility is linked to the explanatory power of a theory and regard ‘the notion of explanatory power as distinct criterion for theory choice’ (Boylan and O’Gorman 1995:91). Other criteria, such as predictive power, need to remain in the background because, as noted long ago by John Stuart Mill and by many other theorists since (see e.g. Lawson 1997), the multiplicity of causal factors operating in the economic world implies the absence of significant invariant empirical regularities in non-experimental situations (see Boylan and O’Gorman 1995:95).
