RDP 9604: Issues in Modelling Monetary Policy 4. General Observations

In a review of the RBA experience with macroeconomic modelling, published in 1980, Jonson and Norton drew a series of lessons, the first of which was, ‘it is hard to build a good model’. We conclude by outlining a number of reasons why that observation remains valid, at least as it applies to the problem of quantifying the effects of monetary policy. There are four points.

The first point concerns the problem of disentangling cause and effect. Most monetary policy actions are both causes and consequences of developments in the wider economy, and empirical separation of the two is unlikely to be straightforward. At the simplest level, this is the problem that (for example) interest-rate increases are contractionary but are often correlated with the high growth and inflation outcomes that induce them. Econometric analysis obviously has tools to deal with this standard simultaneity problem, but Friedman (1995) argues that these techniques might not always be very effective in practice. If monetary policy actions are mainly systematic responses to the broad macroeconomic forces affecting prices and output, it may be hard to get good instrumental variables for those actions. Use of lags in econometric equations to overcome this problem will not necessarily be effective, since monetary policy actions are, by nature, anticipatory and might reflect non-modelled information about future inflation and output. Friedman argues that this characteristic can lead to artificially low, or even incorrectly-signed, estimates of the effects of policy: in particular, if monetary policy is aimed at only partially offsetting future price and output shocks (as is optimal under plausible assumptions), policy actions will be correlated with subsequent price and output movements in the opposite direction to their true causal impact. Or, if monetary policy were fully successful in stabilising prices and output, it would appear ex post to have had no effect because the supposedly affected variables would not have moved. All this points to the conclusion that, even where econometrically stable relationships exist, it might nonetheless be hard to get good estimates of the effects of policy actions. It is also possible that the lengths of transmission lags are over-estimated.
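The direction of this bias can be seen in a stylised two-equation example; the linear forms and notation below are illustrative assumptions rather than anything estimated here. Suppose inflation responds to the policy interest rate with true effect $-\beta$, and policy leans against anticipated inflation shocks:

\[
\pi_{t+1} = -\beta\, i_t + u_{t+1}, \qquad i_t = \gamma\, E_t[u_{t+1}] + \varepsilon_t, \qquad \beta,\ \gamma > 0,
\]

where $\varepsilon_t$ is an exogenous policy shock uncorrelated with $u_{t+1}$, and expectations are rational so that $\mathrm{Cov}(u_{t+1}, E_t[u_{t+1}]) = \mathrm{Var}(E_t[u_{t+1}])$. A least squares regression of $\pi_{t+1}$ on $i_t$ then converges to

\[
\operatorname{plim} \hat{b} \;=\; -\beta \;+\; \frac{\gamma\,\mathrm{Var}\!\left(E_t[u_{t+1}]\right)}{\gamma^{2}\,\mathrm{Var}\!\left(E_t[u_{t+1}]\right) + \mathrm{Var}(\varepsilon_t)}.
\]

The more systematically policy anticipates shocks, relative to the amount of independent policy variation $\varepsilon_t$, the more the estimate is pulled towards zero or even the wrong sign. In the limiting case where policy fully offsets the forecastable component ($\gamma = 1/\beta$) and there is no independent policy variation, the probability limit is exactly zero, echoing the point that fully successful policy appears ex post to have had no effect.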

A second problem concerns the unexplained behaviour of the exchange rate, a key variable in the policy transmission process. Policy analysis requires estimates or assumptions concerning all the channels of transmission, and results depend on the full set of such assumptions. In this context the exchange rate poses serious empirical problems. The standard theoretical assumption linking exchange rates to short-term interest rates is the uncovered interest parity equation. As discussed earlier, this theoretical relationship is convincingly rejected by the data, with a number of anomalies apparent, including wrong-signed coefficients and, in Australia, excessive sensitivity of the exchange rate to the terms of trade. Attempts to rationalise these sorts of results by expanding the framework to incorporate models of financial risk premia have not been successful. A related strand of literature, initiated by Meese and Rogoff (1983), emphasises the poor performance of macroeconomic models in tracking exchange rates out of sample. These sorts of results are essentially negative, in the sense that they do not show how to obtain an empirically convincing alternative model of the structural interactions between policy and exchange rates.
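For reference, the uncovered interest parity condition and the regression commonly used to test it can be written (in standard textbook notation, which is an assumption here rather than the paper's own):

\[
E_t[s_{t+1}] - s_t = i_t - i_t^{*},
\]

where $s_t$ is the log exchange rate (the domestic-currency price of foreign currency) and $i_t - i_t^{*}$ is the short-term interest differential. The usual test regresses the realised change on the differential,

\[
s_{t+1} - s_t = \alpha + \beta\,(i_t - i_t^{*}) + \eta_{t+1},
\]

and uncovered interest parity with rational expectations and no risk premium implies $\alpha = 0$ and $\beta = 1$. Estimated values of $\beta$ in this literature are typically well below one and frequently negative, which is the wrong-signed coefficient referred to above.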

Thirdly, there is a set of issues that might be summarised as the problem of non-mechanical linkages: a wide range of one-off factors, shifts in unobservable variables like expectations and business confidence, and the ‘long and variable lags’ of policy transmission, none of which is easily amenable to mechanical modelling. Econometricians might be tempted to classify all such problems as components of model error terms. But the point is that many of these influences can be big enough and persistent enough to shape the characteristics of an entire cycle. Examples include the effects of financial deregulation, the role of the Accord and the recent asset-price cycle, as well as major structural shifts such as the transition to low inflation. Another example is the apparently differing severity of the economy’s reactions to the episodes of monetary tightening in the mid and late 1980s. What these sorts of events illustrate is that a good explanation of any medium-term episode is likely to involve a significant role for factors special to the period, and an understanding of these factors is needed to complement more mechanistic approaches to quantifying the role of policy.

Finally, there is the problem of model uncertainty – that is, uncertainty about the structure and parameters of the economic model. It seems clear that models can give predictions and policy messages that differ from one another to an economically significant degree. Because models differ in basic design features like size, data periodicity and theoretical underpinnings, there is no straightforward criterion for determining which is ‘the’ correct model or which is, in some overall sense, most useful for policy analysis. This source of uncertainty is argued to have important implications for policy.[15] In particular, if policymakers are risk averse, model uncertainty will generally be an argument for reduced ‘activism’ in policy relative to the certainty case, in the sense of reducing the degree of responsiveness of the policy instrument to a given shock. Intuitively this is because, when the model is uncertain, the results of a policy action are more uncertain the bigger the action. Since any individual model assumes away this source of uncertainty, models are likely to overstate the attractiveness of policy activism and the degree of macroeconomic control that can reasonably be attained.
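The attenuation result can be illustrated with the standard one-period example usually attributed to Brainard; the quadratic loss and notation below are illustrative assumptions, not the model used in this paper. Let the target variable be $y = \beta x + u$, where $x$ is the policy instrument, $u$ a known (or forecast) disturbance, and the multiplier $\beta$ uncertain with mean $\bar{\beta}$ and variance $\sigma_\beta^{2}$. Minimising expected squared deviations of $y$ from zero gives

\[
\min_x\; E\!\left[(\beta x + u)^{2}\right] \;\Rightarrow\; x^{*} = -\,\frac{\bar{\beta}\, u}{\bar{\beta}^{2} + \sigma_\beta^{2}},
\]

compared with the certainty-equivalent response $x^{CE} = -u/\bar{\beta}$. The larger is $\sigma_\beta^{2}$, the smaller the optimal response to any given shock: uncertainty about the multiplier damps activism because bigger actions generate more variable outcomes, which is the intuition described above.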

All of the points raised here are really aspects of inherent uncertainty about how the economy works. They are not intended as arguments against the use of large-scale econometric models, since these uncertainties apply in one way or another to any form of policy analysis, but they do argue for modesty about how much accuracy can be achieved. They also suggest an important role for sensitivity analysis of model results, and for evaluation of policy rules across a range of alternative model specifications.
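As a concrete, purely illustrative example of the kind of exercise suggested here, the sketch below evaluates a single interest-rate rule across several alternative model parameterisations. The simple backward-looking model, the parameter values and the shock process are assumptions invented for illustration, not estimates from this paper.

```python
# A minimal sketch, not drawn from the paper, of evaluating one policy rule
# across alternative model parameterisations. The model y_t = rho*y_{t-1}
# - beta*i_t + e_t, the rule i_t = phi*y_{t-1}, and all parameter values
# are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def simulated_loss(phi, rho, beta, periods=5000):
    """Return the mean squared deviation of y from target (zero) under the rule."""
    y_prev = 0.0
    loss = 0.0
    for e in rng.normal(size=periods):
        i_t = phi * y_prev                  # policy reacts to last period's gap
        y_t = rho * y_prev - beta * i_t + e
        loss += y_t ** 2
        y_prev = y_t
    return loss / periods

# Alternative 'models': different assumed persistence (rho) and multipliers (beta).
models = {
    "weak transmission":   (0.8, 0.5),
    "baseline":            (0.8, 1.0),
    "strong transmission": (0.6, 1.5),
}

# Candidate rules, ordered from least to most 'activist'.
for phi in (0.2, 0.5, 1.0):
    losses = {name: round(simulated_loss(phi, rho, beta), 2)
              for name, (rho, beta) in models.items()}
    print(f"rule coefficient {phi}: {losses}")
```

The point is not the particular numbers but the form of the exercise: a rule that looks attractive under one parameterisation may perform poorly under another, which is the cross-model comparison recommended above.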

Footnote

[15] The argument set out here is elaborated by Blinder (1995).