Journal of Business & Economic Statistics
ISSN: 0735-0015 (Print) 1537-2707 (Online) Journal homepage: http://www.tandfonline.com/loi/ubes20
To cite this article: Francis X. Diebold (2015) Rejoinder, Journal of Business & Economic Statistics, 33:1, 24-24, DOI: 10.1080/07350015.2014.983237
Published online: 26 Jan 2015.
Kiefer, N. M., and Vogelsang, T. J. (2005), "A New Asymptotic Theory for Heteroskedasticity-Autocorrelation Robust Tests," Econometric Theory, 21, 1130–1164.
Li, J., and Patton, A. J. (2013), "Asymptotic Inference about Predictive Accuracy using High Frequency Data," working paper, Duke University.
McCracken, M. W. (2007), "Asymptotics for Out-of-Sample Tests of Granger Causality," Journal of Econometrics, 140, 719–752.
Newey, W. K., and West, K. D. (1987), "A Simple, Positive Semidefinite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, 55, 703–708.
Patton, A. J. (2011), "Volatility Forecast Comparison using Imperfect Volatility Proxies," Journal of Econometrics, 160, 246–256.
Rossi, B. (2005), "Testing Long-Horizon Predictive Ability with High Persistence, and the Meese-Rogoff Puzzle," International Economic Review, 46, 61–92.
West, K. D. (1996), "Asymptotic Inference about Predictive Ability," Econometrica, 64, 1067–1084.
White, H. (2000), "A Reality Check for Data Snooping," Econometrica, 68, 1097–1126.
Rejoinder
Francis X. Diebold
Department of Economics, University of Pennsylvania, Philadelphia, PA 19104 ([email protected])
I am grateful to all discussants for their fine insights. Obviously, it would be inappropriate and undesirable to attempt to address all comments here. Instead I will go to the opposite extreme, simply offering brief remarks on those that resonated most with me. In doing so I will implicitly extract what I view as an emerging theme, namely that important foundations for split-sample comparisons are emerging, as old hunches are formally verified (e.g., robustness to data mining) and compelling new motivation is provided (e.g., use with vintage data).1
Patton emphasizes that a key DM vs. WCM distinction (perhaps the key distinction) is not so much the limiting distribution theory and the conditions needed to obtain it, but rather the hypothesis being tested. In particular, DM comparisons are at estimated parameter values, whereas WCM comparisons are at pseudo-true parameter values. This is an under-appreciated yet very important point, providing a different and compelling perspective on the appeal of the DM approach. I am grateful to Patton for raising and emphasizing it.
Inoue and Kilian are unified in their implicit emphasis on negative aspects of split-sample DM-type tests for model selection, insofar as a large price may be paid in terms of power loss, with nothing gained in terms of finite-sample robustness to data mining. Inoue and Kilian have been sounding that alarm for many years, and my article clearly reflects their influence on me. However, my pendulum has recently swung back a bit. Hence my Section 6, which gives reasons for split-sample analysis. My reason 6.5, for example, counters the view that nothing is gained in terms of finite-sample robustness to data mining; recent work suggests that there is a gain, even if split-sample methods don't provide complete insurance. And big data (e.g., high-frequency data), which I neglected to mention in the article, counters the view that a large price is paid in terms of power loss, as half of a huge sample is still huge.

1. I use "split sample" here and throughout as a catch-all term that includes pseudo-out-of-sample comparisons based not only on truly split samples, but also on expanding or rolling samples.
Other discussants support my view that the pendulum is swinging back. Hansen and Timmermann, for example, emphasize emerging work that promotes split-sample analysis. In particular, they clearly show how and why split-sample methods can provide superior robustness to data mining, strongly supporting my split-sample reason 6.5 (which of course derives largely from their work). I applaud their insights and look forward to more.
Wright emphasizes a completely different but equally compelling reason for split-sample analysis: real-time comparisons using vintage data. He argues that in real-time environments with vintage data, split-sample methods are intrinsically relevant, and perhaps even uniquely relevant. But should not all compelling comparisons then use real-time vintage data? And if so, should not all compelling comparisons then be split sample? Put differently, if we overcome our laziness and henceforth move from "final-revised comparisons" to best-practice "vintage comparisons," shouldn't we make a corresponding move to split-sample comparisons? Time will tell.
© 2015 American Statistical Association
Journal of Business & Economic Statistics, January 2015, Vol. 33, No. 1
DOI: 10.1080/07350015.2014.983237