Under the label of precision we group those stylized facts referring to attempts to improve a number of activities, such as measurement, data collection, formalization and prediction, on the basis of ICT.1 In particular, we shall focus on how ICT provides external impetus to change in economics (e.g. Dow 2002:6). As with other features of the NE, this is not a new phenomenon. It goes back, at least, to the nineteenth century, when the positivist ideal that social theory should be capable of replicating the success of natural science began to be affirmed. Then, at the beginning of the twentieth century, the search for precision went hand in hand with the development of macroeconomics as an autonomous discipline, based on relatively simple mathematical models. Both these transformations were made possible by legal and institutional changes, such as the rise of independent central banks and statistics institutes linked to governments’ need to control the growing complexity of social and economic systems. It is beyond doubt, however, owing to the very nature of ICT, that the NE implies a dramatic acceleration in the quest for precision compared to the past. One need only think of the Internet’s potential for satisfying all kinds of information needs, or of the fact that ICT facilitates the use of ever more sophisticated forecasting techniques. Moreover, recent trends in economic theory call for the construction of less ambitious but more rigorous paradigms, or models, than those available in the past.
In the opinion of most economists, precision has predominantly positive effects on the stability of the economy. For example, having more detailed information about smaller parts of the system makes it, in principle, easier to understand and control. However, others point out that efforts to ‘make it precise’ may actually increase instability, because they still fail to capture some of the most elusive and complex features of the NE, such as intangibles, thus giving a false sense of improvement. In what follows, we shall focus on both effects, placing the emphasis on the signs of acceleration or difference in recent trends as compared with past decades.
More data and better measurement techniques
The tendency to achieve more precision in the NE than in the past can be seen in a number of significant phenomena that may reduce instability. First of all, the NE stimulates the search for greater quantitative information about events. It should be clear that this is not a recent phenomenon; it has been pursued in a systematic fashion since at least the nineteenth century. There is no doubting, however, that it has undergone a sharp acceleration in recent times, because ICT expands the scope for data collection and analysis (see e.g. Dow 2002:6), including those aspects of economic life that involve complex quality changes. One instance is the adoption of hedonic price indexes for dealing with quality changes in ICT products (see e.g. Baily 2001:222; OECD 2001; Gordon 2002:50). In principle, it is clear that greater quantitative information increases stability because it tends to reduce the grey area of phenomena which are not adequately understood and controlled, thus allowing improvements in the decisions made by agents and in policy intervention. As noted, for example, by Stiroh (2000:41), the attempt to measure inputs properly and the extension of the definition of investment beyond tangible assets (i.e. to include investment in human capital, R&D and public infrastructure) have led to advances in understanding productivity growth.
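To fix ideas, the kind of growth-accounting exercise Stiroh refers to can be sketched in a standard textbook form (the notation here is a generic illustration, not taken from the sources cited):

\[ \Delta \ln Y_t = \alpha_K\,\Delta \ln K_t + \alpha_L\,\Delta \ln L_t + \Delta \ln A_t \]

where $Y_t$ is output, $K_t$ and $L_t$ are capital and labour inputs, $\alpha_K$ and $\alpha_L$ their income shares, and the residual $\Delta \ln A_t$ measures total factor productivity. Measuring inputs more carefully, or widening the definition of investment to include human capital, R&D and public infrastructure, shifts part of what would otherwise sit in the unexplained residual into measured inputs, which is the sense in which better measurement advances our understanding of productivity growth.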
A more important role of ‘formalist’ values in economics
A second phenomenon induced by the NE is that a large part of economics has moved significantly in the direction of mathematization. Not only do we see the rapid growth of specialized branches of economics using advanced mathematical tools, but there is also a more general tendency to regard the development of an adequate formalization as the standard of presentation for all types of economic analysis. Once again, this is not new.
One could observe that this tendency has been quite well established since the 1920s when the general equilibrium model and various types of macroeconomic models became increasingly popular. It is true, however, that the NE implies a sharp acceleration of the move towards formalization; in particular, ‘formalist’ values tend to acquire an excessive importance in economics (see e.g. Backhouse 1997; Lawson 1997:4, 2003:3–4; Dow 2002:10). It must be noted that this acceleration concerns not just the quantity but also the type of models that are used by economists, reflecting a major change in the nature of the discipline. This change clearly appears in today’s use of the term ‘economic theory’
itself: it has ceased to mean a body of propositions about the world and has come to mean a set of mathematical theorems (see Backhouse 1997:208; Lawson 2003). For example, as noted by Fisher, in practice theorists tend to concentrate on exemplifying theory, that is, on simple models stripped down to the bare essentials in order to illustrate specific theoretical points, for reasons of mathematical tractability (see Backhouse 1997:20–1).2
This attitude concerning formalization is justified by the fact that economics has failed to exhibit the empirical progress that was once expected. As pointed out by Backhouse, for example, such a failure is due to ‘the continually changing nature of the economic world’ (ibid.: 207) that makes it harder for a consensus to emerge. Indeed, ‘continual change…has caused economists to retreat into theory. The reason is that they want to produce general, if not universal theories, which militates against applied work…’ (ibid.).
Strictly speaking, reference to data is still regarded as being important, but only to illustrate the theory derived in a purely deductive fashion on the basis of strong restrictions.3 In particular, the idea that theories need to be tested, or that theory choice should depend, at least in the last resort, on empirical evidence, has been abandoned (ibid.: 182). In other words, theorists’ major concern today is with theoretical progress, that is, the achievement of heuristic progress in terms of precision, internal consistency, greater conceptual clarity and analytical innovations (Blaug 1994, 2000; Backhouse 1997:100–3).4
It is possible to interpret this ‘formalist’ move as having beneficial effects on the economy. First of all, it might promote stability as it induces people to believe that, due
to the relative neglect of empirical evidence it involves, the fundamental principles of economics, including stability, are somehow ‘virtually beyond question’ (Backhouse 1997:182). Second, theoretical progress may generate greater stability by inducing economists to believe that the application of mathematical techniques has definitively transformed economics into a cumulative discipline (Backhouse 1997:3–4; Blanchard 2000). If the latest, more formalized macro model is better than the previous ones, it is legitimate to expect, for example, that it can improve policy-making and represent a better benchmark for agents’ expectations.
Attempts to improve forecasting
A further expression of the tendency towards precision in the NE is the improvement of forecasting techniques to aid decision-making and policy intervention. Focus on forecasting is, of course, nothing new. Ever since the nineteenth century, the predominant view among economists has been that the adoption of better measurement systems and more sophisticated analytical techniques in economics, as in physics, is justified in the end by the need to improve predictive performance. It is difficult to deny, however, that the NE stimulates the drive towards more accurate prediction than in the past because of the challenges arising from a more complex economic environment. One can note, for example, that several attempts have been made to reduce forecasting errors in macroeconomics in recent decades. While in the 1960s and 1970s econometricians tried to limit such errors by constructing vast multi-equation macro models, a pluralistic strategy based on the concerted use of a range of small models, as complementary lines of enquiry, has more recently emerged in the policy-making literature (see e.g. Dow 2002:30–1).5
However, this is not all. It is important to note that the use of more sophisticated statistical or econometric techniques today is not taken for granted but is subject to extensive critical scrutiny. While in past decades it was firmly believed that the steady application of these techniques would eventually produce precise quantitative laws and solid foundations for predictive exercises, in the NE doubts concerning the usefulness and actual achievements of these techniques have begun to emerge, owing to widespread predictive failures induced by greater parameter variability and uncertainty.
Indeed, some commentators are aware that the NE per se does not improve the predictive performance of econometrics. As noted, for example, by Backhouse: ‘despite the immense effort, undreamed-of increasing computing power, and the development of vastly more sophisticated statistical techniques, econometrics has failed to produce the quantitative laws that many economists, at one time, believed it would’ (Backhouse 1997:136).
In particular, ICT does not compensate for the lack of foresight concerning cyclical downturns, as shown by the recent recession. As noted by Baily: ‘ICT has not improved our capacity to see the economic future. Downturns are intrinsically hard to call and the consensus forecast rarely catches them. In this downturn they were pretty wide off the mark’ (Baily 2001:250; see also Krugman 2001; Banerji 2002:21).6
These developments explain why many practitioners have recently started to modify their views concerning the best way to produce predictions. Instead of trying to obtain
precise parameter estimates on the basis of sophisticated econometric models, they call for the adoption of other methods, such as reliance on pragmatic, informal empirical work aimed at describing broad stylized facts or regularities that theory can explain (see e.g. Backhouse 1997:176), calibration methods7 or even the sketching of simple scenarios based on different assumptions about the values of key magnitudes, to prepare people for what could conceivably be in store.8
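As a purely illustrative example of such a scenario exercise (the decomposition and labels are ours, not drawn from the literature cited), potential output growth can be bracketed as

\[ g_Y \approx g_A + g_L \]

where $g_A$ is trend productivity growth and $g_L$ labour-force growth; a ‘high’ and a ‘low’ assumption for $g_A$ then yield a range of plausible growth paths rather than a single point forecast, preparing decision-makers for several conceivable outcomes instead of committing them to one.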
It should be clear that while all these methods taken together do not guarantee predictive success (they are indeed quite likely to miss the ‘true’ or ‘objective’ target), they still imply some improvement in forecasting that might contribute to increasing stability in the NE. First of all, by increasing the number and variety of claims about the future, these methods make the formation of forecasters’ consensus view more robust.
Second, this view affects reality itself by influencing agents’ expectations (not unlike the way the observer in quantum physics influences the object of analysis with his measurement tools). Indeed, the forecasters’ consensus view plays the role of a benchmark for market expectations; it is part of the market process. ICT may increase stability in that it favours a faster convergence of individuals’ expectations to this benchmark, for agents in the NE tend to acquire ever more information in order to deal with complexity.
Measurement problems
We must now focus on the potential negative effects of efforts to obtain more precise measurement, modelling and forecasting techniques in the NE. These efforts, too, may create stability problems. While seemingly paradoxical, this claim can be supported by strong arguments. Let us start by focusing on the search for greater and more precise quantitative information.
First of all, this search may exercise a negative influence on agents’ behaviour. As noted by Viskovatoff (2000:145), for example, the availability of large amounts of data has led to an important shift in the way managers make decisions: they rely, in practice, on management by numbers. Investment decisions are based on quantitative decision rules that use measures such as expected profitability or return on investment (ROI) and abstract from the specific qualitative features of a contemplated investment project. These rules may create excessive risk aversion, a focus on short-run profitability and a bias against introducing innovative products; in general, the positive aspects of an investment will be less apparent in a quantitative description.
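A minimal sketch of such a rule, with a purely hypothetical hurdle rate $r^{*}$ (the notation is ours, not Viskovatoff’s), might read:

\[ \text{ROI} = \frac{\text{expected net profit}}{\text{capital invested}}, \qquad \text{invest only if } \text{ROI} \ge r^{*}. \]

Whatever does not enter the numerator or the denominator (the option value of an innovative product, learning effects, gains in quality or customer service) simply drops out of the calculation, which is how such rules can bias managers towards short-run, easily quantified returns.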
Second, more precise quantitative information may also cause greater instability if it is not accompanied by a parallel revision of measurement methods and indicators. The point is that the NE has become increasingly difficult to measure using conventional methods. Many key phenomena, such as intangibles, still defy proper measurement. For example, there are major problems with measuring human capital (see e.g. the Economist 28–9–2004), quality change and true output growth (see e.g. Brynjolfsson and Hitt 2002:37, 41–2). The productivity gains associated with ICT tend to be underestimated because traditional growth accounting techniques focus on the relatively observable aspects of output, like prices and quantities, while neglecting the intangible benefits of improved quality, new products, customer service and speed. Moreover, good measures of productivity growth are next to impossible to achieve in non-market sectors such as
education, health and general government as well as in finance and transportation (Cohen et al. 2000; Baily 2001:222).9
These considerations imply that a more accurate application of existing methods may well produce more information, but will also increasingly miss the target. As noted by Eustace: ‘our economic and business measurement systems…are tracking—with ever increasing efficiency—a smaller and smaller proportion of the real economy’ (2000:6).
The continual application of standard methods may thus increase instability, because it generates a distorted picture of the economy and a false sense of improvement which may well lead to the underestimation of new phenomena and the adoption of wrong policy stances.
Modelling problems
There is reason to believe, too, that the current formalist turn in economics may generate instability. This may happen especially if it widens the gap between theoretical and empirical progress (see e.g. Blaug 1994, 2000; Backhouse 1997), that is to say, if the new models which are assumed to represent advances in terms of greater conceptual clarity and analytical innovation fail to contribute to progress in terms of a deeper grasp of the inner triggers of economic behaviour and the workings of the economic system in the age of the NE. This gap is bound to have destabilizing consequences, mainly because it generates a false impression of knowledge, leading agents and policy-makers who are deeply influenced by theory to be overconfident about its practical implications. In particular, focusing on the modelling of various small parts of the economy, taken in isolation, gives the impression that one knows, or can take for granted, the relevant causal links, the correct distinction between exogenous and endogenous variables, and so on, when in fact this is not the case. As already noted, in the NE there is growing uncertainty over causal links, which means that the distinction between exogenous and endogenous variables cannot be made once and for all. To take just one example, we certainly have better models today to account for expectations formation and technological progress than past generations had. Indeed, these models may capture some endogenous aspects, such as forms of learning linked with production. However, it would be wrong to believe that they necessarily imply an improvement in explanatory terms or that they open the way to better policy conduct; expectations or technological progress, for example, also depend upon historical, institutional or cultural factors which cannot be fully endogenized.
Forecasting problems
As noted in the previous section, a number of commentators are aware that better technology and more information in the NE do not necessarily compensate for lack of foresight and may fail to guarantee an improvement in predictive performance. This is why they stress the limits of econometrics and call for alternative methods of forecasting (such as calibration and scenarios).10 While attempts to obtain precision may help stability in that they make the consensus view more robust, they may also create instability if it is forgotten that this consensus view is not objectively true (i.e. it quite likely fails to capture the true parameters of the economy) but merely constitutes a conventional representation capable of allaying agents’ anxiety. Two points should be noted here. The first is that, in general, conventions are intrinsically fragile constructions and may easily break down or cause unjustified fluctuations in public opinion if they do not rest on more solid grounds such as, for example, theorists’ ability to achieve true empirical progress in terms of improved analysis of real causal mechanisms and identification of the most plausible future outcomes. Second, better and more timely information in the NE may increase overconfidence, as it creates a false impression of knowledge when in fact it only provides faster convergence to the conventional view. Overconfidence in the NE is likely to be more dangerous than in past decades because of the increased probability that the consensus view is wide of the mark. For example, let us take forecasts concerning income growth. In this regard, many frankly recognize that ‘forecasting the rate of economic growth is always hazardous, but it is more hazardous now than usual’ (DeLong and Summers 2001:12).11
As noted, for example, by Baily (2001), the point is that, although uncertainty should be no worse than has been the case historically (the downturn is not unusual and cyclical volatility is lower), longer-run uncertainty about growth prospects (for the next 5 or 10 years) has increased. In particular, he suggests that year-by-year uncertainties that used to be partially offsetting (e.g. in the 1960s, when uncertainty about the short run did not lead to exploding uncertainty over the longer term, because potential income was fairly predictable) have become cumulative, owing to the greater unpredictability of potential income. Indeed, he points out that today we face ‘unusual uncertainty’, especially as concerns the productivity trend, ‘which makes potential income harder to predict than we thought’ (Baily 2001:253). It should be clear that the greater unpredictability of potential income has negative consequences for policy-making and stability. Monetary policy, for example, is largely based on estimates of the size of the output gap, that is, the gap between actual income and potential income. Given large measurement errors concerning the size of this gap, ‘a monetary policy that tries hard to smooth the cycle could easily increase output volatility’ (the Economist, 28–9–2002).
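To see how such measurement errors feed through to policy, consider a standard Taylor-type rule, used here only as an illustration and not attributed to the authors cited:

\[ i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - y_t^{*}) \]

where $i_t$ is the policy interest rate, $\pi_t$ actual and $\pi^{*}$ target inflation, $r^{*}$ the equilibrium real rate, and $(y_t - y_t^{*})$ the output gap. If potential income $y_t^{*}$ is estimated with a large error, the measured gap and hence the prescribed interest rate inherit that error, so a rule designed to smooth the cycle may end up amplifying output volatility.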