G. Kenny (2014), "Comment," Journal of Business & Economic Statistics, 32:4, 500–504, DOI: 10.1080/07350015.2014.956636. Published online: 28 Oct 2014.

Comment

G. Kenny

European Central Bank, 60311 Frankfurt am Main, Germany (Geoff.Kenny@ecb.europa.eu)

The article by Alessi, Ghysels, Onorante, Peach, and Potter (henceforth AGOPP) covers a lot of important ground on the themes of central bank forecasting and its evolution in the light of the financial crisis. In particular, the article (i) reviews the central elements of the forecasting process at the European Central Bank (ECB) and the Federal Reserve Bank of New York (FRBNY), (ii) provides some comparative analysis of point forecasts across the two institutions, (iii) proposes an empirical framework and application to test the importance of high-frequency financial data for improving these projections, (iv) outlines a method for the construction of density forecasts and scenarios that has been proposed at the FRBNY, and (v) presents a practical illustration of this method and compares the assessment of macroeconomic risks by FRBNY staff with that implied by private sector forecasts.

For those with an interest in central bank forecasts, their successes and failures during the recent financial crisis and great recession, the article thus provides a truly rich set of material that warrants careful study. Although the article offers many specific insights, the main findings that are emphasized are (i) the major challenges posed by the crisis to central bank forecasting, (ii) that the FRBNY had a better assessment of risks during the global crisis than private sector forecasters, and (iii) that key lessons learned are that mixed-frequency approaches with financial data and scenario-driven approaches are likely to feature prominently in the future practice of forecasting at central banks. Overall, I am impressed with the efforts made by the authors to integrate such a large and varied set of material. I also think that the article will be of particular interest to economists and researchers working outside of central banking who are interested in understanding the processes that underpin the macroeconomic forecasts that play a pivotal role in monetary policy making. In the discussion below, I will expand on some open questions I had and offer some of my own thoughts on what can be learnt from these recent experiences.

1. SOME GENERAL OBSERVATIONS

A first general comment relates to how a reader should understand the nature of the review that has been conducted. In line with the solid disclaimer that appears on the first page, it is important to recognize that the article should not be misinterpreted as resulting from a formal collaborative effort between the two institutions that form the focus of the study. Equally, the article cannot and does not claim to have been conducted in a similar spirit to the Stockton (2012) review of Monetary Policy Committee (MPC) forecasts at the Bank of England (BoE), which AGOPP cite. The latter review was undertaken by an external expert, albeit with a specific mandate designed by the Bank of England and presumably full institutional cooperation of the Bank of England staff in facilitating the review process (e.g., by providing the relevant information, etc.). According to Stockton's terms of reference (described on p. 59 of his report), his mandate was to "examine whether the MPC forecasting procedures allow it to take full account of the relevant risks and uncertainties, and thus support the MPC's monetary policy decisions to meet the inflation target." Moreover, "the review will be used to inform decisions about the Bank's forecasting procedures and methods." In addition, Stockton stresses that he has participated in forecasting meetings at a technical level and undertaken considerable interactions with MPC members, staff in the Bank involved in the forecasting process, as well as consultations with external counterparts in academia, the financial sector, and other institutions. This distinction between the current article and Stockton (2012) is an important one. In particular, it is not so clear how much information about the respective forecasting processes the authors have had access to and/or how widely they have been able to consult with relevant people involved and other stakeholders.

2. MACROECONOMIC FORECASTING: LEARNING FROM EXPERIENCES IN DIFFERENT CENTRAL BANKS

What sets the article by AGOPP apart is its focus on comparing the experiences and forecasting activities of two institutions with, presumably, the intention that such a comparative review of experiences can help identify important lessons for the future. However, the forecasting processes in the two institutions are reviewed sequentially in Section 2 and, hence, some of the comparative aspect to their analysis is reduced. For example, it was not fully clear what the authors would identify as the main commonalities and important differences between the two institutions' processes and, more importantly, whether there are specific lessons that one institution might learn from the other.

As the authors mention, one major difference between the way forecasts are produced at the FRBNY and the ECB is that in the former the forecast is not an aggregate of forecasts for regional or state-level economic developments, while in the latter case a single euro area forecast is ultimately derived as an aggregation of forecasts for the countries participating in the single currency. The way in which top-down area-wide perspectives (e.g., using the ECB's New Area Wide Model (NAWM), see Christoffel, Coenen, and Warne 2008) are integrated with bottom-up country perspectives represents a defining feature of the ECB approach that clearly distinguishes it from the Federal Reserve. Indeed, in contrast to the euro area, policy makers in the U.S. Federal Reserve are not supplied with a single forecast for the U.S. economy. Rather, the presentation of forecasts for monetary policy purposes in the United States includes U.S. economic forecasts prepared by staff at the Federal Reserve Board of Governors (FRBG) and these are presented along with a range of forecasts of the U.S. economy from regional Fed Presidents. (Regional information on current conditions, but not forecasts, in the 12 Federal Reserve districts is also provided to the Federal Open Market Committee (FOMC) via the so-called Beige Book.) To strengthen the lessons from their comparative study, it would have been interesting if the authors could expand on these differences and how they would assess their likely impact on forecast quality and, ultimately, policy discussions. For example, as the FOMC is confronted with a multiplicity of different point forecasts from staff in the Federal Reserve System, it would have been interesting if the authors could assess whether this might impact the coherence and communication of the discussion of risks to the economic outlook, which, as AGOPP have correctly highlighted, is a key area to prioritize at the two central banks. Conversely, the authors could have elucidated further on how they assess the pros and cons of the setup at the ECB, where the policy-making body receives a single forecast from Eurosystem/ECB staff around which it can then build its discussion and communication of the associated risks.

3. IMPROVING BASELINE FORECASTS

In Section 3 of their article, AGOPP present a comparative analysis of ECB and FRBNY point forecasts for GDP growth and inflation both prior to and during the financial crisis. Although technical elements (sample sizes, variable definitions) complicate such a comparison and call for due caveats, the authors are nonetheless able to tease out some substantive results. In particular, both central banks clearly experienced a worsening of their forecast performance for GDP during the crisis while, for inflation, only in the case of the ECB is such a deterioration observed. As the authors point out, much of the difference identified for inflation relates to the forecasting environment, which was more volatile in the case of the euro area. Overall, AGOPP conclude that, like most economists, both central banks "were behind the curve" and hypothesize that the tools they used for forecasting may have failed to take fully into account "powerful adverse feedback loops between the financial system and the real economy."

One way the authors suggest to improve point forecasts is the incorporation of high-frequency data from financial markets using MIDAS regressions. As the authors demonstrate in Section 4, MIDAS offers an efficient, low-cost, and rigorous way in which to exploit the high-frequency news in financial markets. As AGOPP acknowledge in their conclusions, some of the analysis of financial data with MIDAS is subject to a hindsight bias. In particular, the authors conduct their analysis in the full knowledge that we have just experienced a major global financial crisis. In real time, when considering the reliability of the signals embodied in financial news, forecasters also have had to be mindful of the potential for false signals provided by financial markets. A good case in point has been the recent cyclical evolution of the euro area economy, where persistent improvements in many financial market series may have provided an overly optimistic signal on the strength of the recovery. In evaluating the scope of the MIDAS regressions to improve on both ECB and FRBNY projections, it is also possible that some of the assumptions made in conducting the analysis may favor the financial information. For example, in the case of the ECB projections, the assumed cut-off for the conditioning information set is taken to be the 61st day in the quarter and this is later than the actual cut-off. For example, in the March 2011 exercise (which is included in the authors' estimation sample), the cut-off date was February 18, 2011, which is only the 49th day of the quarter. Moreover, for some technical assumptions such as money market interest rates, equity prices, and government bond yields, it was February 10, 2011 (see ECB Monthly Bulletin, March 2011, p. 86). Hence, for this projection round, AGOPP effectively permit their MIDAS regressions to have a lot of additional financial market information that was explicitly not taken into account in the ECB projections. While the authors' approximation is understandable, this difference in assumed and actual cut-off dates may account for the relative improvement in the forecasts that the MIDAS regressions actually yield. The above caveats aside, I still agree with AGOPP's general conclusion that both institutions could have improved their forecasting performance had more attention been paid to high-frequency financial market signals that were available.
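To make the mechanics concrete, the following sketch illustrates a single-indicator MIDAS regression of a quarterly target on daily financial data with an exponential Almon lag polynomial, estimated by nonlinear least squares. It is a minimal illustration under assumptions of my own (variable names, lag length, and the two-parameter weighting scheme are illustrative), not the authors' exact specification.

```python
import numpy as np
from scipy.optimize import minimize

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights, normalized to sum to one."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(np.clip(theta1 * j + theta2 * j**2, -50.0, 50.0))
    return w / w.sum()

def midas_fit(y, X_daily, n_lags=60):
    """Estimate y_t = b0 + b1 * sum_j w_j(theta) * x_{t,j} + e_t by NLS.

    y       : (T,) quarterly target (e.g., GDP growth)
    X_daily : (T, n_lags) daily indicator, most recent observation first
    """
    def sse(params):
        b0, b1, t1, t2 = params
        w = exp_almon_weights(t1, t2, n_lags)
        resid = y - (b0 + b1 * (X_daily @ w))
        return np.sum(resid**2)

    res = minimize(sse, x0=np.array([0.0, 1.0, -0.05, -0.001]),
                   method="Nelder-Mead")
    return res.x

# Toy example with simulated data (purely illustrative)
rng = np.random.default_rng(0)
T, n_lags = 60, 60
X = rng.normal(size=(T, n_lags))
true_w = exp_almon_weights(-0.1, 0.0, n_lags)
y = 0.5 + 2.0 * (X @ true_w) + 0.1 * rng.normal(size=T)
print(midas_fit(y, X))   # intercept and slope estimates should be near 0.5 and 2.0
```

In practice one such regression would be estimated per financial indicator, with the resulting forecasts combined as discussed later in the article.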

If they are to improve the quality of information that can be derived from their point forecasts, economists in central banks will have to work intensively on many other fronts as well, some of which AGOPP mention in their conclusions. One important front is the exploitation of new data sources and "big data" collected from nonofficial private sources such as scanner data, point-of-sale credit card data, or data based on Internet search engines. In addition to high-frequency financial market data, such data may help fill the blind spot created by substantial publication lags for official macroeconomic time series. A recent workshop at the ECB provides many examples of articles aimed at using such big datasets for forecasting (articles can be downloaded at http://www.ecb.europa.eu/events/conferences/html/20140407_workshop_on_using_big_data.en.html). As a result of these lags, for national accounts and other key data series, macroeconomists conduct their forecasting tasks not just without a precise knowledge of the future state of the economy but also with very incomplete knowledge of its current state.

Over the last decade or so, the macroeconomic forecasting community has rightly prioritized better forecasts of current-quarter GDP ("nowcasts") as a key area where improvements in predictive performance might be achieved. Indeed, prior to the financial crisis, nowcasting techniques based on dynamic factor models that exploited large cross-sections of data, as recently surveyed in Banbura et al. (2013), had established a strong reputation for current-quarter GDP forecasting. During the recent recession, however, the signals from these models were less reliable (see Kenny and Morgan 2011) even though they often included many of the financial series emphasized by AGOPP. Chart 5, p. 12, in Kenny and Morgan (2011) shows the breakdown of these models for the case of the euro area. The models did not capture the strength of the drop in output from 2008Q3 to 2009Q1. In addition, however, they also overestimated growth considerably in 2009Q4 and 2010Q1.
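As a rough illustration of the factor-based nowcasting idea discussed above, the sketch below extracts principal-component factors from a simulated panel of indicators and regresses GDP growth on them to produce a current-period estimate. This is a static, Stock-and-Watson-style simplification under assumptions of my own, not the full state-space treatment surveyed in Banbura et al. (2013).

```python
import numpy as np

def pca_factors(X, n_factors=2):
    """Principal-component factor estimates from a standardized panel X (T x N)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1][:n_factors]
    return Z @ eigvec[:, order]           # T x n_factors

def factor_nowcast(gdp, X, n_factors=2):
    """Regress GDP on contemporaneous factors and nowcast the latest period."""
    F = pca_factors(X, n_factors)
    T = len(gdp)                           # GDP observed only up to period T
    reg = np.column_stack([np.ones(T), F[:T]])
    beta, *_ = np.linalg.lstsq(reg, gdp, rcond=None)
    return np.r_[1.0, F[-1]] @ beta        # nowcast for the most recent period

# Toy example: a single common factor drives 30 monthly-style indicators and GDP
rng = np.random.default_rng(1)
f = rng.normal(size=100)                                # latent common factor
X = np.outer(f, rng.uniform(0.5, 1.5, size=30)) + rng.normal(size=(100, 30))
gdp = 0.3 + 0.8 * f[:99] + 0.1 * rng.normal(size=99)    # GDP missing for period 100
print(round(float(factor_nowcast(gdp, X)), 2))          # nowcast for period 100
```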

Aside from working with empirically intensive reduced-form methods, macroeconomic forecasters working in central banks have also been rethinking their more structural models. Much progress has already been achieved in enriching structural models with financial information and banking, and in modeling nonlinearities and financial and other instabilities with regime switching, time-varying parameters, and stochastic volatilities. In addition, as a complement to macroeconomic models, I would also mention the importance of more formal methods to introduce judgment in central bank forecasting processes. Such judgments of course need to be correctly calibrated, for example, by drawing on qualitative signals and supplementary tools that help bring in information that is outside the main forecasting process or even gained through institutional contacts.

4. THE SCIENTIFIC CONSCIENCE: ANALYZING RISKS AND EVALUATING RISK FORECASTS

“If you twist my arm, you can make me give a single number as a guess about next year’s GNP. But you will have to twist hard. My scientific conscience would feel more comfortable giving you my subjective probability distribution for all the values of GNP.” – Samuelson (1965, p. 278).

Very much in line with Samuelson's scientific conscience above, a second main conclusion that the authors draw (and one which I would also endorse unreservedly) is that scenario-driven approaches and risk analysis are likely to feature prominently in the future practice of forecasting at central banks. This conclusion follows naturally from the limitations of point forecasts as a sound basis for optimal decision making under uncertainty, for example, as a result of asymmetric loss or highly costly tail events. Moreover, in practice, there are natural limitations on the scope to improve point forecasts and this strengthens further the case to review a range of alternative macroeconomic scenarios with varying likelihoods of occurrence when setting monetary policy. To support this call for a renewed emphasis on risk analysis, AGOPP offer (i) a detailed presentation of new approaches to risk modeling at the FRBNY as well as (ii) a careful comparison and evaluation of risk assessments made by FRBNY staff with the signals from private sector forecasters participating in the U.S. SPF.

In their presentation of new approaches to risk modeling at the FRBNY, the authors illustrate how their framework can be used to highlight many important economic events and risks and how such risks may interact. AGOPP describe their framework as a generalization of the two-piece normal distribution that is used to derive the famous fan charts at the Bank of England (BoE). As the BoE framework can already be used to generate quite nonstandard (e.g., skewed) distributions to represent judgment on specific scenarios or particular prior beliefs, I was not so clear on the precise advantage that this generalization yields. AGOPP could perhaps have elaborated more on how particular behavioral or other economic mechanisms enter the derivation of the various scenarios that they consider. In drawing lessons from the recent crisis, the comparison and evaluation of risk assessments made by FRBNY staff with those of private sector forecasters from the U.S. SPF is a particularly timely and important contribution. The applied literature on forecast evaluation is amply supplied with analysis of point forecasts, but far less is known about the performance of density forecasts and risk assessments. This is especially the case for risk analysis conducted at central banks. Leeper (2003, p. 16) summarized the point well: "If Banks routinely report risk assessments, then those assessments should be systematically evaluated, just as the accuracy of Banks' inflation forecasts are evaluated ...."
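For readers less familiar with the BoE-style framework mentioned above, the sketch below evaluates a two-piece normal density, the building block of the fan chart, in which different standard deviations on either side of the mode encode an asymmetric balance of risks. The parameter values are purely illustrative and chosen by me for the example.

```python
import numpy as np

def two_piece_normal_pdf(x, mode, sigma1, sigma2):
    """Two-piece normal density: sigma1 applies below the mode, sigma2 above.

    Risks are skewed to the downside when sigma1 > sigma2 (and vice versa),
    as in Bank of England-style fan charts.
    """
    x = np.asarray(x, dtype=float)
    norm = 2.0 / (np.sqrt(2.0 * np.pi) * (sigma1 + sigma2))
    sigma = np.where(x < mode, sigma1, sigma2)
    return norm * np.exp(-0.5 * ((x - mode) / sigma) ** 2)

# Illustrative downside-skewed growth outlook: mode 1.0 with a fatter left tail
x = np.linspace(-6.0, 6.0, 1001)
pdf = two_piece_normal_pdf(x, mode=1.0, sigma1=2.0, sigma2=1.0)
dx = x[1] - x[0]
mean = np.sum(x * pdf) * dx           # the mean lies below the mode: downside balance of risks
print(round(mean, 2), round(np.sum(pdf) * dx, 2))   # approx. 0.2 and 1.0
```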

Happily for central bankers, the authors' comparison yields the result that the FRBNY risk forecasts may have better recognized downside risks during the financial crisis than did the private sector forecasters participating in the U.S. SPF. Practical constraints may have limited the authors' scope to extend such an analysis to the ECB. Clearly, however, this would be of interest in the future. In this respect, with the publication of the staff quarterly projection exercises, the Governing Council of the ECB communicates its own risk assessments in qualitative terms. For example, in summarizing the view of the Governing Council in December 2008, the Monthly Bulletin of the ECB states that "the economic outlook remains surrounded by an exceptionally high degree of uncertainty. Risks to economic growth lie on the downside." Looking forward, there is scope to assess the information about future forecast errors in these qualitative signals and compare the results to the equivalent indicators from the private sector as collected in the ECB SPF for the euro area. Returning to the authors' comparison for the United States, theirs is a striking result that, if taken at face value, implies a considerable asymmetry between private and central bank forecasters. Some caution is, however, warranted. As with the MIDAS regressions discussed above, it should be borne in mind that it is challenging to do such a relative evaluation on a completely equal or comparable basis. For example, differences in the production schedule for the two sets of forecasts mean that they may have different effective forecast horizons and conditioning information sets. In the case of the SPF, it is also important to distinguish the actual publication date of the results from the production date, which is earlier and defines the cut-off date for information. An additional note of caution relates to the SPF information that is actually used. In the first instance, this is based on the cross-section of point forecasts and not on any individual or aggregated probabilistic information. Hence, to the extent that individual SPF forecasters indicated specific risks via their reported probability distributions rather than by adjusting their point forecasts, this is not taken into account.


AGOPP also briefly report on a comparison of the FRBNY densities with the densities of a single SPF forecaster (the one with the most pessimistic point forecast), although this analysis is not presented in any detail. In addition, while the SPF forecasts are real time by definition (they cannot exploit information that was made available after publication), it was not fully clear to me if the FRBNY risk assessments are fully real time: Were the reported FRBNY densities actually published at the time? Were some of these methods not developed ex post in response to the crisis developments? Finally, all of the analysis in Section 6 is couched in terms of anticipating the downside risks associated with the trough of the recession. It would also have been interesting if the authors could have compared and commented on how the SPF and FRBNY assessments fared in terms of anticipating the eventual trough and subsequent recovery.

The relatively good performance of the central bank risk assessments that is uncovered in the authors' analysis should also be weighed carefully with other results that have not been as flattering. Much of the existing literature on comparing the risk assessments embodied in private sector and central bank density forecasts has focused on inflation. For example, Casillas-Olvera and Bessler (2006) compared the probability forecasts of the BoE with those of external experts from a "shadow committee" of forecasters using the Brier score. While they find that the performance of output densities is broadly equivalent, the MPC performs worse in the case of inflation. (In explaining this worse performance, they find that the MPC showed a larger responsiveness to information not related to the forecasted variable and that it may have hedged somewhat or engaged in "wishful thinking.") More recently, Boero, Smith, and Wallis (2011) confirmed this finding, although when the ranked probability score is used instead of the Brier score, the better performance of the private sector density forecasts is less marked. Another related study by Knüppel and Schultefrankenfeld (2012), again for inflation, focuses more specifically on the discrimination between upside and downside risks. Although this article does not offer a comparison with private sector densities, their results suggest no conclusive evidence for a systematic signaling content in the inflation risk assessments by either the Bank of England or the Sveriges Riksbank.
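To illustrate the scoring rules referred to above, the sketch below computes the multicategory Brier score and the ranked probability score (RPS) for two hypothetical probability histograms over ordered outcome bins. It shows how the RPS, which works with cumulative probabilities, penalizes near misses on an ordered scale less heavily than far misses, whereas the Brier score treats all misplaced mass symmetrically. The example probabilities are invented for illustration.

```python
import numpy as np

def brier_score(probs, outcome_idx):
    """Multicategory Brier score: squared deviations from the outcome indicator."""
    outcome = np.zeros_like(probs)
    outcome[outcome_idx] = 1.0
    return np.sum((probs - outcome) ** 2)

def ranked_probability_score(probs, outcome_idx):
    """RPS: squared deviations of cumulative probabilities over ordered bins."""
    outcome = np.zeros_like(probs)
    outcome[outcome_idx] = 1.0
    return np.sum((np.cumsum(probs) - np.cumsum(outcome)) ** 2)

# Two hypothetical inflation histograms over the same ordered bins; the outcome is bin 1
far_miss   = np.array([0.0, 0.1, 0.2, 0.7])   # mass concentrated far above the outcome
close_miss = np.array([0.7, 0.2, 0.1, 0.0])   # mass concentrated adjacent to the outcome
for p in (far_miss, close_miss):
    print(round(brier_score(p, 1), 2), round(ranked_probability_score(p, 1), 2))
# Brier: 1.34 vs 1.14 (small gap); RPS: 1.30 vs 0.50 (the close miss is rewarded much more)
```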

5. ANOTHER CHALLENGE AHEAD: THE PROLIFERATION AND COMPLEXITY OF ECONOMIC MODELS

As the preceding discussion has shown, the recent financial crisis has stimulated many new modeling approaches that offer some prospect to improve the quality of information embodied in macroeconomic forecasts. In addition, technological advances in computing are facilitating the rapid development of very flexible modeling structures with increasingly complex economic dynamics and the exploitation of large datasets in a speedy and computationally efficient way. Against this background, a key challenge within policy-making institutions will be how to integrate and weigh the varied signals that experts and policy makers will receive from an increasingly large and complex cross-section of tools. Ten years ago, a central bank would have been operating close to the technological frontier if it was actively using a single benchmark estimated DSGE model with nominal rigidities for forecasting and policy analysis. Today, there is scope for having a range of such models with a range of additional financial frictions impacting on sovereigns, households, firms, and financial institutions. In addition, there is scope for integrating the insights from DSGE models with more empirical tools such as vector autoregressions (VARs) with regime switching and time-varying parameters.

Against this backdrop of an increasingly large array of modeling tools and approaches, a major challenge will be how to reconcile conflicting signals and forecasts. In this respect, the forecast combination literature offers some prospect of a formal way to weigh alternative modeling paradigms. For example, Geweke and Amisano (2011) highlighted the computational efficiency of the forecast combination perspective for density forecasting under the realistic assumption that all models are false approximations of the true underlying process driving the economy. Their work shows significant time variation in the "value" of different modeling approaches, including factor models, DSGE models, and Bayesian vector autoregressions. Moreover, the optimal combination approach may better allow for uncertainty because, unlike Bayesian model averaging, the model with the best forecasting track record need not asymptotically dominate all other models. As a recent application, Del Negro, Hasegawa, and Schorfheide (2014) have undertaken a real-time, albeit ex post, analysis of the weight on DSGE models with and without financial frictions in a dynamic optimal prediction pool. Their analysis suggests that the weight on models with financial frictions should have increased noticeably in the run-up to the Great Recession and, in particular, before the Lehman crisis.
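As a stylized illustration of the optimal-pool idea in Geweke and Amisano (2011), the sketch below chooses weights on the unit simplex to maximize the historical log predictive score of a weighted combination of candidate densities. The data are simulated and the implementation details (optimizer, toy densities) are assumptions of my own, not those of the cited papers.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_pool_weights(pred_densities):
    """Optimal prediction pool: weights on the simplex maximizing the log score.

    pred_densities : (T, K) array of p_k(y_t), each model's predictive density
                     evaluated at the realized outcomes.
    """
    T, K = pred_densities.shape

    def neg_log_score(w):
        return -np.sum(np.log(pred_densities @ w + 1e-12))

    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * K
    res = minimize(neg_log_score, x0=np.full(K, 1.0 / K),
                   bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# Toy example: model B is better on average, but model A still earns a positive weight
rng = np.random.default_rng(2)
p_a = np.exp(rng.normal(-1.5, 0.8, size=200))
p_b = np.exp(rng.normal(-1.0, 0.8, size=200))
print(optimal_pool_weights(np.column_stack([p_a, p_b])).round(2))
```

Unlike Bayesian model averaging, the pool weights need not converge to a single model, which is precisely the robustness property noted above.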

6. CONCLUDING REMARKS

In conclusion, AGOPP have done the central banking and forecasting-related research communities an excellent service by attempting to distill the lessons from recent experiences with macroeconomic forecasting at the ECB and the FRBNY. As the authors have demonstrated, much can be learned from such a comparative approach. I therefore hope it will continue to be pursued in the future, adding also the experiences of other institutions such as the Federal Reserve Board of Governors and the Bank of Japan.

DISCLAIMER

The opinions expressed in this discussion are those of the author and do not necessarily reflect the views of the ECB or the Eurosystem.

REFERENCES

Banbura, M., Giannone, D., Modugno, M., and Reichlin, L. (2013), "Now-Casting and the Real-Time Data Flow," in Handbook of Economic Forecasting (Vol. 2), eds. G. Elliott and A. Timmermann, Amsterdam: North-Holland. [502]

Boero, G., Smith, J., and Wallis, K. F. (2011), "Scoring Rules and Survey Density Forecasts," International Journal of Forecasting, 27, 379–393. [503]

Casillas-Olvera, G., and Bessler, D. A. (2006), "Probability Forecasting and Central Bank Accountability," Journal of Policy Modelling, 28, 223–234. [503]

Christoffel, K., Coenen, G., and Warne, A. (2008), "The New Area-Wide Model of the Euro Area: A Micro-Founded Model for Forecasting and Policy Analysis," ECB Working Paper No. 944, European Central Bank, Frankfurt, Germany. [501]

Del Negro, M., Hasegawa, R. B., and Schorfheide, F. (2014), "Dynamic Prediction Pools: An Investigation of Financial Frictions and Forecasting Performance," Paper Presented at the 8th ECB Workshop on Forecasting Techniques, 13–14 June 2014, Frankfurt. [503]

Geweke, J., and Amisano, G. (2011), "Optimal Prediction Pools," Journal of Econometrics, 164, 130–141. [503]

Kenny, G., and Morgan, J. (2011), “Some Lessons From the Crisis for the Economic Analysis,” ECB Occasional Paper No. 130, European Central Bank, Frankfurt, Germany. [502]

Knüppel, M., and Schultefrankenfeld, G. (2012), "How Informative are Central Bank Assessments of Macroeconomic Risks," International Journal of Central Banking, 8, 87–139. [503]

Leeper, E. M. (2003), “An Inflation Report,” NBER Working Paper No. 10089, National Bureau of Economic Research, Cambridge, MA. [502]

Samuelson, P. A. (1965), "Economic Forecasting and Science," Michigan Quarterly Review, 4, 274–280. [502]

Stockton, D. (2012), "Review of the Monetary Policy Committee's Forecasting Capability," Report presented to the Court of the Bank of England. Available at http://www.bankofengland.co.uk/publications/Documents/news/2012/cr3stockton.pdf. [500]

Comment

Chiara Scotti

Federal Reserve Board, Washington, D.C. 20551 (chiara.scotti@frb.gov)

1. INCLUDING FINANCIAL INFORMATION INTO THE FORECASTING PROCESS

Asset prices represent a class of potentially useful predictors of inflation and output growth because they are forward-looking. They have been considered in this regard for a long time. Mitchell and Burns (1938), for example, included the Dow Jones composite index of stock prices in their initial list of leading indicators of expansions and contractions in the U.S. economy. Since then, the literature has grown considerably and a number of asset prices have been identified as good predictors of economic activity and inflation, namely, interest rates, term spreads or slope, stock returns, dividend yields, and exchange rates, among others. As shown in Stock and Watson (2003), some asset prices have significant marginal predictive content for output growth at some times in some countries. However, it might be difficult to know ex ante which asset price works in which country because its importance might change over time. Moreover, the existing literature has focused on in-sample analysis to identify predictive relations, which is no guarantee of good out-of-sample performance. For these reasons, it has been difficult to systematically use asset price information for macroeconomic forecasting, although central banks have surely been using it to guide their judgmental forecasts.

AGOPP consider a large number of financial series, and generate a prediction through a mixed-data sampling (MIDAS) regression from each one of these series. They then employ Bayesian model averaging to combine the K macro predictions coming from the K financial series. Through this method, they let the data determine which forecast should have the highest weight and which financial series should therefore be treated as most prominent. Their main exercise, however, covers the period 2008Q2 to 2012Q4, the height of the European debt crisis, a period in which the macro-finance link was particularly strong. Not surprisingly, they find that fixed income variables, especially the short end of the curve and the VSTOXX, are among the best explanatory variables. In what follows, I will expand on the issue of including financial information in macro forecasting and on the impact of uncertainty on macro variables.
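As a stylized sketch of the combination step described above, the code below forms approximate posterior model weights from a BIC-type approximation to each single-indicator model's marginal likelihood and combines the K forecasts accordingly. This is a common shortcut for Bayesian model averaging under equal prior model probabilities; it is not necessarily AGOPP's exact implementation, and all numbers are invented.

```python
import numpy as np

def bic_model_weights(sse_list, n_obs, k_params):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior probabilities across the K candidate models."""
    sse = np.asarray(sse_list, dtype=float)
    bic = n_obs * np.log(sse / n_obs) + k_params * np.log(n_obs)
    delta = bic - bic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def bma_forecast(forecasts, sse_list, n_obs, k_params=4):
    """Weighted combination of the K single-indicator forecasts."""
    w = bic_model_weights(sse_list, n_obs, k_params)
    return float(np.dot(w, forecasts)), w

# Toy example: three single-indicator MIDAS-style forecasts with their in-sample SSEs
forecasts = np.array([0.4, 0.1, -0.2])   # quarterly GDP growth forecasts
sse_list = np.array([2.1, 1.2, 3.0])     # model 2 fits best and receives the most weight
point, weights = bma_forecast(forecasts, sse_list, n_obs=40)
print(round(point, 2), weights.round(2))
```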

2. EURO-AREA FORECASTING WITH FINANCIAL INFORMATION

Central banks around the world have recognized the importance of financial information for some time. Because there are many financial series that could potentially be considered, an alternative to AGOPP could be to first consolidate financial information into one series. One example is to construct a stress index that summarizes, through a principal component analysis, a number of euro-area financial variables that are believed to be useful indicators of financial stress (Pruitt 2012). These include market information from equities (return, volatility, correlation, and bid-ask spreads), sovereign bond yields, credit default swaps, LIBOR-OIS spreads, and corporate AAA-BBB bond spreads.
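A minimal sketch of such a stress index is given below: the underlying series are standardized and the first principal component is extracted, with its sign normalized so that the index rises in a known stress episode. The simulated series and the sign convention are my own illustrative assumptions, not the exact construction in Pruitt (2012).

```python
import numpy as np

def stress_index(panel, stress_episode_rows):
    """First principal component of standardized financial stress indicators.

    panel               : (T, N) array of daily stress-related series
    stress_episode_rows : row indices of a known stress episode, used only to
                          fix the sign of the index (PCA signs are arbitrary)
    """
    Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    pc1 = Z @ eigvec[:, np.argmax(eigval)]
    # Flip the sign if the index is low during the known stress episode
    if pc1[stress_episode_rows].mean() < pc1.mean():
        pc1 = -pc1
    return pc1

# Toy daily panel: 500 days, 6 indicators, with a common stress spike around day 300
rng = np.random.default_rng(3)
common = np.zeros(500)
common[290:320] = 3.0                          # simulated stress episode
panel = common[:, None] + rng.normal(size=(500, 6))
idx = stress_index(panel, np.arange(290, 320))
print(round(idx[300], 2), round(idx[100], 2))  # high during the episode, low outside it
```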

Figure 1 shows the evolution since 1991 of financial stress and GDP in the euro area. Financial stress is high in the early part of the sample due largely to the European Exchange Rate Mechanism (ERM) crisis. From the mid-1990s, the financial stress index declines and remains low until the 2008 global financial crisis and, subsequently, the European debt crisis. The advantage of using a stress index is that this factor is the one linear combination of all series that explains as much of their joint variance as possible and, as such, it filters out the noisy information idiosyncratic to the specific underlying series. Since the stress index can be constructed daily, it might be interesting to run the MIDAS regressions with the stress index.

Just to confirm the importance of the link between financial information and macroeconomic activity, I include the stress index in a five-variable SVAR after euro-area GDP growth, inflation rate, labor productivity growth, and the 3-month interest rate. This methodology is motivated by the work of Van Roye (2011) and Carlson et al. (2008). I run the SVAR with and without the stress index over the periods 1991–2013 and 1991–2007. As shown in Table 1, in both samples, including

