RDP 2012-08: Estimation and Solution of Models with Expectations and Structural Changes

Appendix A: The Kalman Filter Equations
December 2012
Take the state equation

yt = Ct + Qt yt−1 + Gt εt,

and the observation equation

zt = H yt + vt,

where εt is the vector of structural shocks, taken to have identity covariance matrix, and vt is measurement error with E(vt vt′) = V.
Define ŷt|s ≡ E(yt | zs, zs−1, …, z1) and Σt|s ≡ E[(yt − ŷt|s)(yt − ŷt|s)′].
The recursion begins from ŷ1|0 set equal to the unconditional mean of y1, in our case

ŷ1|0 = µ,

where µ is the steady state under the initial structure, that is µ = (I − Q)−1C, and Σ1|0 is the unconditional variance of y1, which satisfies Σ1|0 = QΣ1|0Q′ + GG′ and so implies vec(Σ1|0) = (I − Q ⊗ Q)−1vec(GG′). Presuming that ŷt|t−1 and Σt|t−1 are in hand, the forecast of the observables is then

ẑt|t−1 = Hŷt|t−1,
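As a concrete sketch of this initialisation, the following Python/NumPy fragment computes µ and Σ1|0 from given matrices C, Q and G under the initial structure; the function name, the use of NumPy and the unit-variance normalisation of the shocks are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def initialise_filter(C, Q, G):
    """Unconditional mean and variance of y_1 under the initial structure.

    Assumes y_t = C + Q y_{t-1} + G eps_t with unit-variance shocks, so that
    mu = (I - Q)^{-1} C and vec(Sigma) = (I - Q kron Q)^{-1} vec(G G').
    """
    n = Q.shape[0]
    mu = np.linalg.solve(np.eye(n) - Q, C)            # steady state (I - Q)^{-1} C
    vec_GG = (G @ G.T).reshape(-1, order="F")         # vec(G G'), stacked by columns
    vec_Sigma = np.linalg.solve(np.eye(n * n) - np.kron(Q, Q), vec_GG)
    Sigma = vec_Sigma.reshape(n, n, order="F")        # unstack back to an n x n matrix
    return mu, Sigma
```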
and the forecast error will be

ut = zt − Hŷt|t−1.
The latter implies that

E(ut ut′) = HΣt|t−1H′ + V.
Next, update the inference on the value of yt with data up to t as in Hamilton (1994):

ŷt|t = ŷt|t−1 + Σt|t−1H′(HΣt|t−1H′ + V)−1(zt − Hŷt|t−1).
This follows from

ŷt|t = ŷt|t−1 + E[(yt − ŷt|t−1)ut′][E(ut ut′)]−1ut

after using E[(yt − ŷt|t−1)ut′] = Σt|t−1H′. Equation (9) then implies

ŷt+1|t = Ct+1 + Qt+1ŷt|t = Ct+1 + Qt+1ŷt|t−1 + Kt ut,
where Kt = Qt+1Σt|t−1 H′ (HΣt|t−1 H′ + V)−1 is the Kalman gain matrix.
This last expression, combined with Equation (9), implies that

yt+1 − ŷt+1|t = Qt+1(yt − ŷt|t−1) − Kt ut + Gt+1εt+1.
The associated recursions for the mean squared error (MSE) matrices are given by

Σt|t = Σt|t−1 − Σt|t−1H′(HΣt|t−1H′ + V)−1HΣt|t−1,

Σt+1|t = Qt+1Σt|tQ′t+1 + Gt+1G′t+1.
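As an illustration of one pass through these recursions, the sketch below updates with zt and then predicts for t + 1; the function and argument names are mine (C_next, Q_next and G_next stand for Ct+1, Qt+1 and Gt+1), and H and V are treated as constant, as in the notation above.

```python
import numpy as np

def kalman_step(y_pred, Sigma_pred, z_t, H, V, C_next, Q_next, G_next):
    """One recursion: update y_hat_{t|t-1}, Sigma_{t|t-1} with z_t, then predict t+1.

    Returns y_hat_{t+1|t}, Sigma_{t+1|t}, the forecast error u_t and its
    variance F_t = H Sigma_{t|t-1} H' + V.
    """
    u_t = z_t - H @ y_pred                                   # forecast error
    F_t = H @ Sigma_pred @ H.T + V                           # E(u_t u_t')
    PHt = Sigma_pred @ H.T
    # Updating step (Hamilton 1994): y_hat_{t|t} and Sigma_{t|t}
    y_filt = y_pred + PHt @ np.linalg.solve(F_t, u_t)
    Sigma_filt = Sigma_pred - PHt @ np.linalg.solve(F_t, PHt.T)
    # Prediction step under the structure in place at t+1; this is equivalent to
    # C_{t+1} + Q_{t+1} y_pred + K_t u_t with gain K_t = Q_{t+1} Sigma_pred H' F_t^{-1}
    y_next = C_next + Q_next @ y_filt
    Sigma_next = Q_next @ Sigma_filt @ Q_next.T + G_next @ G_next.T
    return y_next, Sigma_next, u_t, F_t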
If the initial state and the innovations are Gaussian, the conditional distribution of zt is normal with mean Hŷt|t−1 and conditional variance HΣt|t−1H′ + V. The forecast errors, ut, can then be used to construct the log likelihood function for the sample as follows:

log L = −(Tnz/2) log 2π − (1/2)∑t log|HΣt|t−1H′ + V| − (1/2)∑t ut′(HΣt|t−1H′ + V)−1ut,

where the sums run from t = 1 to T and nz is the number of observed variables.
This is Equation (20) in the text.
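To make the likelihood construction concrete, the sketch below runs the filter over a sample and accumulates the terms above, reusing the two helpers sketched earlier. The stacking of Ct, Qt and Gt into lists that run one period beyond the sample, and the time-invariant H and V, are assumptions made only for this illustration.

```python
import numpy as np

def log_likelihood(Z, C_seq, Q_seq, G_seq, H, V):
    """Prediction-error-decomposition log likelihood for the sample Z (T x n_z).

    C_seq, Q_seq, G_seq hold C_t, Q_t, G_t for t = 1, ..., T + 1 so the final
    prediction step has matrices to use; the filter is initialised at the
    unconditional moments implied by the first structure.
    """
    T, n_z = Z.shape
    y_pred, Sigma_pred = initialise_filter(C_seq[0], Q_seq[0], G_seq[0])
    loglik = -0.5 * T * n_z * np.log(2.0 * np.pi)
    for t in range(T):
        y_pred, Sigma_pred, u_t, F_t = kalman_step(
            y_pred, Sigma_pred, Z[t], H, V,
            C_seq[t + 1], Q_seq[t + 1], G_seq[t + 1])
        _, logdet = np.linalg.slogdet(F_t)                   # log|H Sigma H' + V|
        loglik -= 0.5 * (logdet + u_t @ np.linalg.solve(F_t, u_t))
    return loglik
```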