RDP 2012-07: Estimates of Uncertainty around the RBA's Forecasts

1. Introduction

'Consumers of forecasts should routinely be told something about the size of past errors.' – Stevens (2004)

Each quarter, the RBA presents its forecasts for key macroeconomic variables in its Statement on Monetary Policy (SMP). Readers of the SMP may be interested in how informative those forecasts are. For example, how likely is it that inflation will be close to the forecast? How much weight should be placed upon different parts of the forecast? This paper addresses these questions by examining the historical properties of the RBA's forecasts. In particular, we use estimates of past forecast accuracy to construct confidence intervals for forecasts of CPI inflation, underlying inflation, real GDP growth and the unemployment rate.

Estimates of forecast uncertainty may also help to clarify communication. A difficulty policymakers face in discussing the economic outlook is that a single point estimate will have a very high probability of being incorrect. Even though the forecast is meant to be interpreted as the centre of a range of possible outcomes, misunderstandings are common. Perceptions that the central bank was wrong (whether well-founded or not) can undermine credibility and transparency. Our estimates enable the forecast to be considered as a range, which avoids many of these problems. Considering the forecast as a range rather than a point conveys additional information, is hopefully less susceptible to misunderstanding and highlights the considerable uncertainty attached to the outlook.

For these and other reasons, many central banks provide measures of uncertainty with their forecasts. We summarise these presentations in Appendix A. This paper begins by constructing similar measures of uncertainty for Australia, building on the overseas experience. We calculate forecast errors over the past two decades, measure their dispersion and hence construct confidence intervals. For example, 70 per cent of the RBA's forecasts for underlying inflation one year ahead have been within half a percentage point of actual outcomes. If future forecast errors are similar to those in the past, then there is a 70 per cent probability of actual underlying inflation falling within half a percentage point of the current forecast. We construct similar estimates for other variables at other horizons.
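
To make the mechanics concrete, the sketch below shows one way a confidence interval can be built from a history of forecast errors: collect past errors at a given horizon, take the empirical quantiles that bracket the desired coverage, and add them to the current point forecast. It is an illustration only, written in Python; the error series, the 70 per cent coverage level and the function name interval_from_errors are assumptions made for the example, not the RBA's published procedure.

    import numpy as np

    def interval_from_errors(point_forecast, errors, coverage=0.70):
        # Empirical confidence interval around a point forecast.
        # `errors` holds past forecast errors (actual minus forecast) at the
        # same horizon; the interval assumes future errors are drawn from a
        # distribution similar to that of past errors.
        errors = np.asarray(errors, dtype=float)
        lower_q = (1.0 - coverage) / 2.0
        upper_q = 1.0 - lower_q
        lo, hi = np.quantile(errors, [lower_q, upper_q])
        return point_forecast + lo, point_forecast + hi

    # Hypothetical one-year-ahead underlying inflation errors (percentage points)
    past_errors = [-0.6, 0.3, -0.2, 0.5, 0.1, -0.4, 0.2, -0.1, 0.7, -0.3]
    current_forecast = 2.5  # per cent, illustrative only
    print(interval_from_errors(current_forecast, past_errors))

A common alternative assumes the errors are normally distributed and scales the root mean squared error by the appropriate normal quantile; the empirical-quantile approach above makes no distributional assumption.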

We then compare these estimates to some relevant benchmarks. Some of our findings are:

  • Uncertainty about the forecasts is high. Confidence intervals span a wide range of outcomes.
  • RBA one-year-ahead forecasts have substantial explanatory power for both the level and change in inflation. This contrasts with the experience of some foreign central banks.
  • However, deviations of underlying inflation from the target at longer horizons are not predictable. For reasons discussed in Section 4.2, this is a desirable feature of an inflation-targeting framework.
  • Underlying inflation is more predictable than headline CPI.
  • Forecasting economic activity is more difficult. As has been found for many forecasters overseas, the RBA's forecasts of GDP growth lack explanatory power.
  • Forecasts of the unemployment rate outperform a random walk only at horizons of a few quarters (a benchmark comparison of this kind is sketched after this list).
  • Relative to private sector forecasts, RBA forecasts of inflation have been marginally more accurate, while forecasts of GDP growth have been less accurate. The differences are small.
  • Uncertainty about some key variables does not increase with the forecast horizon. We know about as much about economic growth in the current quarter as we do about growth two years ahead.
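
The benchmark comparisons above can be illustrated with a simple root mean squared error (RMSE) exercise, sketched below in Python using made-up data. A forecast adds value relative to a random walk, which simply projects the latest observed value forward, only if its RMSE is lower; all series and numbers here are hypothetical rather than actual RBA or ABS data.

    import numpy as np

    def rmse(predictions, actuals):
        # Root mean squared error of a set of predictions.
        predictions = np.asarray(predictions, dtype=float)
        actuals = np.asarray(actuals, dtype=float)
        return np.sqrt(np.mean((actuals - predictions) ** 2))

    # Hypothetical unemployment rate data (per cent): outcomes some quarters
    # ahead, the forecasts made for them, and the rate observed at the time
    # each forecast was made (the random walk benchmark).
    actual_rate = [5.2, 5.4, 5.1, 5.6, 5.8, 5.5]
    forecast_rate = [5.3, 5.3, 5.2, 5.4, 5.6, 5.6]
    rate_at_forecast_date = [5.1, 5.2, 5.4, 5.1, 5.6, 5.8]

    print('Forecast RMSE:   ', rmse(forecast_rate, actual_rate))
    print('Random walk RMSE:', rmse(rate_at_forecast_date, actual_rate))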

The paper also discusses various properties of our confidence intervals, alternative measures of forecast uncertainty and some problems with using past errors as a gauge of forecast uncertainty.

Many of our results are qualitatively consistent with previous RBA work. For example, our confidence intervals, appropriately scaled, are similar to the model-based density forecasts of Gerard and Nimark (2008). We discuss the relationship between estimates of uncertainty derived from models and those derived from forecast errors in Section 6.1. More broadly, the RBA has regularly emphasised the difficulties of forecasting and the considerable uncertainty about the economic outlook; see, for example, Stevens (1999, 2004, 2011). In contrast to the approach of some foreign central banks, the RBA has responded to this uncertainty by placing relatively less emphasis on forecasts and more on analysis of current economic developments in its leading publications. In the SMP, forecasts of selected variables are presented in a table, with ranges used beyond the near-term horizon to avoid an impression of excessive precision.

Any discussion of past forecast errors will raise questions about whether the forecasts might be improved. This is an important area of ongoing research, but it is not our focus here. In this paper, we are primarily interested in how readers of the SMP should interpret the uncertainty surrounding a given forecast.