RDP 8302: Economic Forecasts and their Assessment Concluding Remarks

The foregoing discussion has presented a sceptical view of the accuracy of economic forecasting and of the value of trying to assess different forecasters. What then is the practical relevance of these generally negative conclusions?

It is easiest to look first at the question of the assessment of forecasts. Here, it is probably true to say that assessments of different forecasters will not be able to tell us very much. In particular, they will not be able to unearth a consistently superior forecaster. There is also a great danger of making pronouncements about forecasters on the basis of a small number of observations. Even when a warning is given about the dangers of generalising from one or a few observations, the public will often wish to generalise, and writers of assessments will inadvertently encourage them to do so.[17]

Some assessments of forecasts are aimed not at distinguishing good forecasters from bad ones, but at distinguishing easy-to-forecast variables from hard-to-forecast ones.

Once again, the choice of evaluation period can have an extremely important influence on the conclusions reached.[18] Even the addition of one year to the evaluation period can significantly alter the results, as was shown in part I(b). Another example where the addition of one year to the evaluation period alters the conclusion is in Smyth and Ash (1975). They concluded that OECD forecasts of GDP between 1967 II and 1973 II were inferior to a naive model and that there was no evidence of improvement vis-à-vis the naive model. The addition of 1974 to the evaluation period reverses both conclusions, even though in absolute terms the OECD forecast in that year was poor (see part I(a)).
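The sensitivity of such comparisons to the evaluation window can be illustrated with a small numerical sketch. The error figures below are hypothetical, chosen only to mimic the pattern described above (a final, eventful year in which both the forecaster and the naive model do badly, but the naive model does much worse); they are not the actual OECD or naive-model errors.

```python
# Hypothetical absolute forecast errors (percentage points) for a
# forecaster and a naive "same as last year" model over eight years.
# Year 8 is an "eventful" year in which the naive model fares much worse.
forecaster_errors = [1.0, 0.8, 1.2, 0.9, 1.1, 1.0, 0.9, 2.5]
naive_errors      = [0.7, 0.9, 0.8, 1.0, 0.7, 0.9, 0.8, 5.0]

def rmse(errors):
    """Root mean squared error of a list of forecast errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# Evaluate over the first seven years, then again with year 8 included.
for label, n in [("first 7 years", 7), ("all 8 years", 8)]:
    rf = rmse(forecaster_errors[:n])
    rn = rmse(naive_errors[:n])
    verdict = "forecaster beats naive" if rf < rn else "naive beats forecaster"
    print(f"{label}: forecaster RMSE={rf:.2f}, naive RMSE={rn:.2f} -> {verdict}")
```

Over the first seven years the naive model has the lower RMSE; adding the eighth year reverses the ranking, even though the forecaster's error in that year is, in absolute terms, its worst. This is precisely the kind of reversal that the addition of 1974 produces in the Smyth and Ash comparison.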

The second practical issue concerns the usefulness of making forecasts. The earlier part of this paper suggests that accurate forecasts will only be possible in the case of some variables, forecast a short period ahead, in uneventful years. In which case, the question arises of whether there is any point in making forecasts at all.

One answer is to say that forecasts are useful because of their conservatism and tendency to cluster around an accepted view. These apparent shortcomings of published forecasts enable them to crystallise the conventional wisdom about what is likely to happen. Whether they subsequently prove to be accurate or inaccurate, they at least tell us something about what the majority is thinking and how it will probably act. They also provide an insurance policy for the decision-maker who is faced with great uncertainty. If a decision (e.g. an investment decision) turns out to have been unwise, the decision-maker can placate an angry chief executive or board by pointing out that it was taken on the basis of the best information available. Economic consultancy and commercial economic forecasting owe much of their demand to these considerations.

A more fundamental reason for forecasting is that there is no choice; forecasts have to be made. This is true in the case of businesses making investment, hiring, lending or borrowing decisions, and also in the case of governments making decisions about economic policy. In the case of macroeconomic policy, it would be impossible to devise a budget, a means of financing it, a monetary projection, etc., other than on the basis of economic forecasts. Thus, even those who recognise the severe limitations of economic forecasting have to engage in it, and have to be serious enough about it to avoid obvious errors and inconsistencies. There are, however, ways of reducing the amount of reliance that has to be placed on the accuracy of forecasts. One view favours the replacement of “fine tuning” by “fiscal or monetary rules”. This has happened in a number of countries over the last decade or so. It should be noted that this does not eliminate the need for forecasts; it merely reduces it.

Whether or not it is due to scepticism about the reliability of economic forecasts, there seems to have been a reduction in demands for the publication of official government forecasts. This would seem to be logical. If all forecasts are going to be misleading at times, is there any point in having one forecast with an official imprimatur on it? Would the government be held responsible for adverse consequences suffered by the private sector as a result of its acting on the basis of that forecast? Would governments, the press or lobby groups start to regard the official forecast as a target? These sorts of considerations have made many in government reluctant to publish comprehensive forecasts. While this reluctance is understandable, if taken too far it would deny the principle of public accountability. It is understandable that the public would wish to be assured that the government was not trying to pull the wool over their eyes when framing economic policies. It is reasonable, therefore, that the main assumptions (i.e. forecasts) on which such things as the budget and the monetary projection are based should be made public. To this extent there is a role for the publication of official forecasts, but beyond that, the case is weaker.

Footnotes

e.g. statements such as “the best forecasters were …” creep into Pagan et al. (1982). [17]

Zarnowitz (1979) summarises his findings as “the accuracy and properties of forecasts depend heavily on the economic characteristics of the periods covered but only weakly and not systematically on the differences among the forecasters”. [18]