RDP 2000-10: Monetary Policy-Making in the Presence of Knightian Uncertainty
December 2000

2. The Current Literature
In the literature, the monetary policy-maker's decision has usually been described as choosing a path of interest rates to minimise losses due to deviations of output from potential and inflation from target, subject to the dynamics of the economy. The policy-maker's objective function is usually assumed to be quadratic, and the dynamics of the economy are assumed to be linear. Using these assumptions, the monetary policy decision-maker's problem can be written more formally as:

\[
\min_{\{i_{t+s}\}_{s=0}^{\infty}} \; E_t \sum_{s=0}^{\infty} \delta^{s}\, x_{t+s}'\, \Omega\, x_{t+s}
\]

subject to:

\[
x_{t+1} = B x_t + C r_t + \Lambda_{t+1}
\]

where:
- x is a vector of variables that affect the policy-maker's losses, which generally includes deviations of inflation from the inflation target and output from potential;
- Ω summarises the preferences of the policy-maker by assigning weights to each policy objective;
- i is the path of nominal interest rates;
- r is the path of real interest rates;
- δ is the discount rate;
- B describes the dynamic structure of the model;
- C describes how the economy responds to the policy instrument; and
- Λ captures additive shocks to the economy.
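To make the mechanics of this linear-quadratic problem concrete, the sketch below solves a small discounted version numerically by iterating on the Riccati equation; under certainty equivalence the additive shocks drop out of the optimal feedback rule. All numbers (the dynamics standing in for B, the interest rate multipliers standing in for C, the weights standing in for Ω, the discount rate, and a tiny penalty on the instrument added only to keep the recursion well behaved) are illustrative assumptions rather than estimates from any of the models discussed here.

```python
import numpy as np

def solve_discounted_lq(A, B, Q, R, delta, tol=1e-10, max_iter=10_000):
    """Iterate on the Riccati equation for
        min E sum_s delta^s (x_s' Q x_s + u_s' R u_s)
        s.t. x_{s+1} = A x_s + B u_s + additive shock.
    Certainty equivalence means the shock does not affect the optimal
    feedback rule u_s = -K x_s, so it is ignored here."""
    P = Q.copy()
    for _ in range(max_iter):
        # K = delta * (R + delta B'PB)^{-1} B'PA
        K = delta * np.linalg.solve(R + delta * B.T @ P @ B, B.T @ P @ A)
        P_next = Q + delta * A.T @ P @ A - delta * A.T @ P @ B @ K
        if np.max(np.abs(P_next - P)) < tol:
            return K, P_next
        P = P_next
    return K, P

# Illustrative two-variable state: (inflation deviation, output gap).
A = np.array([[0.7, 0.2],      # hypothetical dynamics (the B of the problem above)
              [0.1, 0.8]])
B = np.array([[-0.1],          # hypothetical response to the real rate (the C above)
              [-0.2]])
Q = np.diag([1.0, 0.5])        # weights on the two objectives (the Omega above)
R = np.array([[1e-3]])         # tiny instrument penalty, purely for numerical stability
K, P = solve_discounted_lq(A, B, Q, R, delta=0.95)
print("feedback rule r_t = -K x_t, with K =", K)
```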
One of the puzzles that arises when estimating paths of optimal policy using this framework is that the optimal path of interest rates is much more volatile, and more subject to policy reversals, than the paths that are typically observed (Lowe and Ellis 1997). Eijffinger, Schaling and Verhagen (1999) and Guthrie and Wright (2000) introduce small menu costs into the policy-maker's environment to generate extended periods of no change in model-generated interest rate paths. While this exercise would appear to go some way towards reconciling model-generated and observed interest rate paths, it is not clear what fixed costs of changing interest rates would generate this result, which makes this approach difficult to justify.
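A stylised, static sketch of the menu-cost intuition follows. It ignores the dynamic option value that the papers above model, and the quadratic loss, curvature and cost parameters are purely illustrative assumptions; the point is simply that a fixed cost of changing rates creates a band of inaction around the frictionless optimum.

```python
def decide_rate(i_current, i_star, curvature, menu_cost):
    """Move to the frictionless optimum i_star only if the loss saved,
    curvature * (i_current - i_star)**2, exceeds the fixed menu cost;
    otherwise leave the rate unchanged. This implies an inaction band of
    half-width sqrt(menu_cost / curvature) around i_star."""
    gain = curvature * (i_current - i_star) ** 2
    return i_star if gain > menu_cost else i_current
```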
Blinder (1998) suggests that actual interest rates may be less volatile than those predicted by these models because central bankers take into account the fact that they are uncertain about the models they are working with. Since this suggestion was made, a significant body of research investigating the effects of uncertainty on model-generated interest rate paths has emerged.
One important source of uncertainty arises from the fact that the models being used are estimated and, consequently, policy-makers are uncertain about the true parameter values. Parameter uncertainty can be characterised by a normal distribution centred on the estimated parameters with the estimated variance-covariance matrix. The effects of this specification of parameter uncertainty were first investigated by Brainard (1967), who shows, in a simple static model of the macroeconomy, that adjustments to the policy instrument will be dampened if the policy-maker is uncertain about the parameters of the model. However, this result does not generalise to the dynamic case. Shuetrim and Thompson (1999) show that although uncertainty about the effect of the interest rate on output decreases the willingness of policy-makers to change interest rates, uncertainty about the dynamic structure of the model can lead to the opposite result: policy-makers may wish to change interest rates by more than they would in the absence of uncertainty. Similar theoretical results have been established by Söderström (1999).
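The static Brainard result can be recovered in two lines. Suppose, purely for illustration, that output is y = βi + u, where the policy multiplier β has mean β̄ > 0 and variance σ_β², u is an independent shock with mean zero, and the policy-maker wants output at a target y* > 0. Then

\[
\min_i \; E\big[(\beta i + u - y^*)^2\big]
\quad\Rightarrow\quad
i^* = \frac{\bar{\beta}\, y^*}{\bar{\beta}^2 + \sigma_{\beta}^2}
    = \frac{\bar{\beta}^2}{\bar{\beta}^2 + \sigma_{\beta}^2}\cdot\frac{y^*}{\bar{\beta}},
\]

so the optimal setting is the certainty-equivalent setting y*/β̄ scaled down by a factor less than one whenever σ_β² > 0: multiplicative uncertainty about the instrument's effect attenuates its use.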
Empirical work presented by Sack (2000) and Debelle and Cagliarini (2000) suggests that the first of these two effects dominates, and that the path of optimal policy does appear to be less volatile when parameter uncertainty is taken into account. However, other studies, notably Rudebusch (1999) and Estrella and Mishkin (1999), do not find such convincing results. Although Sack and Wieland (1999) suggest that the differences in these results may be due to the degree of richness in the dynamic structures or to the number of ‘uncertain’ parameters considered, the general conclusion appears to be that parameter uncertainty is of secondary importance for explaining the differences between model-generated and actual interest rate paths.
Policy-makers, however, may be less clear about the specific nature of parameter uncertainty than this research assumes. In fact, policy-makers may be uncertain not only about the parameters, but also about the general specification of the model being used. This concern about general model uncertainty is captured by Blinder (1998, pp 12–13), who recommended:
Use a wide variety of models and don't ever trust one of them too much. … My usual procedure was to simulate policy on as many of these models as possible, throw out the outlier(s), and average the rest to get a point estimate of a dynamic multiplier path. This can be viewed as a rough – make that very rough – approximation to optimal information processing.
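The procedure Blinder describes can be sketched in a few lines. The rule used here for identifying outliers (each model's distance from the cross-model median multiplier path) is our own assumption, since the quote does not specify one.

```python
import numpy as np

def average_multiplier_path(paths, n_outliers=1):
    """paths: array of shape (n_models, horizon), each row the dynamic
    multiplier path implied by one model for the same policy experiment.
    Drop the n_outliers paths furthest from the median path, then average
    the rest to obtain a single point estimate of the multiplier path."""
    paths = np.asarray(paths, dtype=float)
    median_path = np.median(paths, axis=0)
    distance = np.abs(paths - median_path).sum(axis=1)   # crude outlier measure
    keep = np.argsort(distance)[: len(paths) - n_outliers]
    return paths[keep].mean(axis=0)
```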
As this quote suggests, general model uncertainty, unlike the parameter uncertainty discussed above, cannot be conveniently summarised by a single probability distribution. Allowing for a less structured form of uncertainty requires more sophisticated techniques. Recently, a number of papers have used robust control methods to deal with general model uncertainty, which includes general parameter uncertainty as a special case. In order to obtain solutions, this literature makes the additional assumption that policy-makers make interest rate decisions that are robust to a specified range of model mis-specifications. This means that, having decided the range within which the true model can deviate from the estimated model, the policy-maker will modify policy settings to minimise the losses associated with the worst possible outcome. Policy-makers who are more averse to uncertainty, and who wish policy to be more robust, will consider a wider range of possible deviations from the estimated model.
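The robust control papers referred to above work with continuous sets of perturbations around the estimated model; the toy sketch below keeps only the min-max logic, using a finite grid of candidate rate settings and a finite set of entertained models, both of which are simplifying assumptions for illustration.

```python
import numpy as np

def robust_rate(candidate_rates, models, loss):
    """For each candidate rate setting, record the worst (largest) loss
    across the set of models the policy-maker is unwilling to rule out,
    then choose the rate whose worst-case loss is smallest (min-max)."""
    worst_case = [max(loss(rate, model) for model in models)
                  for rate in candidate_rates]
    return candidate_rates[int(np.argmin(worst_case))]
```

Enlarging the set of entertained models corresponds to a more uncertainty-averse policy-maker who demands robustness against larger deviations from the estimated model.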
Analysis that considers very general forms of model uncertainty finds that policy reacts more to deviations of inflation from target and output from potential than would be the case if this uncertainty were not taken into account (Sargent 1999; Onatski and Stock 2000; Tetlow and von zur Muehlen 2000). Consequently, the path of interest rates appears to be more volatile, rather than less, when general model uncertainty is taken into account. In Section 5 we discuss the intuition of this result in more detail, in the context of decision-making under different forms of Knightian uncertainty.
The final form of uncertainty that has received attention is data uncertainty. This form of uncertainty becomes particularly important when policy responds to deviations of output from potential. One source of uncertainty about the output gap is the fact that output data are frequently revised. Policy made on the basis of first-release data may not look optimal by the time the data have been revised. It is also possible that, with the help of coincident indicators that are not subject to the same measurement problems, policy-makers could have a better view of the true state of the economy than the output data provide; consequently, their actions may look better considered once the data are revised. The second, and more significant, problem with measuring the output gap is that potential output is not directly observed, and there is little agreement as to how it should be estimated.
Orphanides (1998) examines the effects of measurement error on the performance of efficient rules. He compares the data that were available at the time monetary policy decisions were made with ex post data to evaluate the size and nature of measurement error in the United States, and then estimates the impact this data noise has on the behaviour of policy-makers. He finds that, in the presence of data noise, a central bank following an optimal Taylor rule will respond more cautiously to output and inflation data than it would if it were certain that the data were correct. Similar results have been found by Smets (1998) and Orphanides et al (1999) using more sophisticated macroeconomic models.
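One stylised way to see the mechanism, using a textbook signal-extraction argument rather than the calculations in the papers just cited: suppose the true output gap x has mean zero and variance σ_x², and is observed only as z = x + v, where the measurement noise v has mean zero, variance σ_v², and is independent of x. The best linear estimate of the gap given the observation is

\[
\hat{x}(z) \;=\; \frac{\sigma_x^2}{\sigma_x^2 + \sigma_v^2}\, z,
\]

so a rule that would respond to the true gap with some coefficient optimally responds to the noisy measurement with that coefficient scaled down by σ_x²/(σ_x² + σ_v²); the noisier the data, the more muted the response.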
In summary, several types of uncertainty facing policy-makers have been explored as possible additions to the standard optimal policy framework to help reconcile model-generated and observed interest rate paths. Parameter uncertainty is the best understood of these, partly because it is relatively straightforward to implement. Unfortunately, parameter uncertainty characterised by a single probability distribution does not appear to be sufficient to explain the differences between model-generated and observed interest rates. Both model and data uncertainty can be thought of as examples of Knightian uncertainty insofar as it is not clear how to capture them with a single probability distribution. Despite this, they appear to have different implications for the behaviour of interest rates. In Sections 3 and 4 we formalise the distinction between risk and uncertainty, and the different ways decision-making under Knightian uncertainty can be modelled, in order to understand these results more clearly.