# Linear State Space Models¶

“We may regard the present state of the universe as the effect of its past and the cause of its future” – Marquis de Laplace

## Overview¶

This lecture introduces the linear state space dynamic system

It is easy to use and carries a powerful theory of prediction

It is a workhorse with many applications, including

• representing dynamics of higher-order linear systems

• predicting the position of a system $$j$$ steps into the future

• predicting a geometric sum of future values of a variable like

• nonfinancial income
• dividends on a stock
• the money supply
• a government deficit or surplus
• etc., etc., $$\ldots$$
• key ingredient of useful models

• Friedman’s permanent income model of consumption smoothing
• Barro’s model of smoothing total tax collections
• Rational expectations version of Cagan’s model of hyperinflation
• Sargent and Wallace’s “unpleasant monetarist arithmetic”
• etc., etc., $$\ldots$$

## The Linear State Space Model¶

Objects in play

• An $$n \times 1$$ vector $$x_t$$ denoting the state at time $$t = 0, 1, 2, \ldots$$
• An iid sequence of $$m \times 1$$ random vectors $$w_t \sim N(0,I)$$
• A $$k \times 1$$ vector $$y_t$$ of observations at time $$t = 0, 1, 2, \ldots$$
• An $$n \times n$$ matrix $$A$$ called the transition matrix
• An $$n \times m$$ matrix $$C$$ called the volatility matrix
• A $$k \times n$$ matrix $$G$$ sometimes called the output matrix

Here is the linear state-space system

(1)\begin{split}\begin{align} x_{t+1} & = A x_t + C w_{t+1} \\ y_t & = G x_t \nonumber \\ x_0 & \sim N(\mu_0, \Sigma_0) \nonumber \end{align}\end{split}

### Primitives¶

The primitives of the model are

1. the matrices $$A, C, G$$
2. shock distribution, which we have specialized to $$N(0,I)$$
3. the distribution of the initial condition $$x_0$$, which we have set to $$N(\mu_0, \Sigma_0)$$

Given $$A, C, G$$ and draws of $$x_0$$ and $$w_1, w_2, \ldots$$, the model (1) pins down the values of the sequences $$\{x_t\}$$ and $$\{y_t\}$$

Even without these draws, the primitives 1–3 pin down the probability distributions of $$\{x_t\}$$ and $$\{y_t\}$$

Later we’ll see how to compute these distributions and their moments

#### Martingale difference shocks¶

We’ve made the common assumption that the shocks are independent standardized normal vectors

But some of what we say will go through under the assumption that $$\{w_{t+1}\}$$ is a martingale difference sequence

A martingale difference sequence is a sequence that is zero mean when conditioned on past information

In the present case, since $$\{x_t\}$$ is our state sequence, this means that the shock sequence satisfies

$\EE [w_{t+1} | x_t, x_{t-1}, \ldots ] = 0$

This is a weaker condition than that $$\{w_t\}$$ is iid with $$w_{t+1} \sim N(0,I)$$

### Examples¶

By appropriate choice of the primitives, a variety of dynamics can be represented in terms of the linear state space model

The following examples help to highlight this point

They also illustrate the wise dictum *finding the state is an art*

#### Second-order difference equation¶

Let $$\{y_t\}$$ be a deterministic sequence that satisfies

(2)$y_{t+1} = \phi_0 + \phi_1 y_t + \phi_2 y_{t-1} \quad \text{s.t.} \quad y_0, y_{-1} \text{ given}$

To map (2) into our state space system (1), we set

$\begin{split}x_t= \begin{bmatrix} 1 \\ y_t \\ y_{t-1} \end{bmatrix} \qquad A = \begin{bmatrix} 1 & 0 & 0 \\ \phi_0 & \phi_1 & \phi_2 \\ 0 & 1 & 0 \end{bmatrix} \qquad C= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\end{split}$

You can confirm that under these definitions, (1) and (2) agree
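This agreement is easy to check numerically. Here is a minimal sketch using the parameter values from the next figure; since $$C = 0$$ there are no shocks, so we simply iterate the state equation and compare with the scalar recursion (2):

```python
import numpy as np

# Check that the state-space mapping reproduces the scalar recursion
# y_{t+1} = phi_0 + phi_1 y_t + phi_2 y_{t-1}
phi_0, phi_1, phi_2 = 1.1, 0.8, -0.8
A = np.array([[1.0,    0.0,   0.0],
              [phi_0,  phi_1, phi_2],
              [0.0,    1.0,   0.0]])
G = np.array([0.0, 1.0, 0.0])

x = np.array([1.0, 1.0, 1.0])    # x_0 = (1, y_0, y_{-1}) with y_0 = y_{-1} = 1
y_direct = [1.0, 1.0]            # y_{-1}, y_0
for t in range(10):
    x = A @ x                    # state-space update (C = 0, so no shock)
    y_direct.append(phi_0 + phi_1 * y_direct[-1] + phi_2 * y_direct[-2])

# After 10 updates, G x_t should equal the directly iterated y_10
print(G @ x, y_direct[-1])
```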

The next figure shows dynamics of this process when $$\phi_0 = 1.1, \phi_1=0.8, \phi_2 = -0.8, y_0 = y_{-1} = 1$$

Later you’ll be asked to recreate this figure

#### Univariate Autoregressive Processes¶

We can use (1) to represent the model

(3)$y_{t+1} = \phi_1 y_{t} + \phi_2 y_{t-1} + \phi_3 y_{t-2} + \phi_4 y_{t-3} + \sigma w_{t+1}$

where $$\{w_t\}$$ is iid and standard normal

To put this in the linear state space format we take $$x_t = \begin{bmatrix} y_t & y_{t-1} & y_{t-2} & y_{t-3} \end{bmatrix}'$$ and

$\begin{split}A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}\end{split}$

The matrix $$A$$ has the form of the companion matrix to the vector $$\begin{bmatrix}\phi_1 & \phi_2 & \phi_3 & \phi_4 \end{bmatrix}$$.
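As a quick sanity check, the following sketch (using the illustrative parameter values shown below) confirms that iterating the companion-form state equation reproduces the AR(4) recursion (3):

```python
import numpy as np

# Confirm that the companion-form update matches the direct AR(4) recursion
phi = np.array([0.5, -0.2, 0.0, 0.5])
sigma = 0.2
A = np.zeros((4, 4))
A[0, :] = phi                        # first row holds the AR coefficients
A[1, 0] = A[2, 1] = A[3, 2] = 1.0    # subdiagonal shifts the lags down
C = np.array([sigma, 0.0, 0.0, 0.0])
G = np.array([1.0, 0.0, 0.0, 0.0])

rng = np.random.default_rng(0)
x = np.ones(4)                       # y_0 = y_{-1} = y_{-2} = y_{-3} = 1
y_lags = [1.0, 1.0, 1.0, 1.0]        # most recent value last
for t in range(20):
    w = rng.standard_normal()
    x = A @ x + C * w                            # companion-form update
    y_new = phi @ y_lags[::-1] + sigma * w       # direct AR(4) update
    y_lags = y_lags[1:] + [y_new]

print(G @ x, y_lags[-1])             # these two should coincide
```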

The next figure shows dynamics of this process when

$\phi_1 = 0.5, \phi_2 = -0.2, \phi_3 = 0, \phi_4 = 0.5, \sigma = 0.2, y_0 = y_{-1} = y_{-2} = y_{-3} = 1$

#### Vector Autoregressions¶

Now suppose that

• $$y_t$$ is a $$k \times 1$$ vector
• $$\phi_j$$ is a $$k \times k$$ matrix and
• $$w_t$$ is $$k \times 1$$

Then (3) is termed a vector autoregression

To map this into (1), we set

$\begin{split}x_t = \begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix} \quad A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \end{bmatrix} \quad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \quad G = \begin{bmatrix} I & 0 & 0 & 0 \end{bmatrix}\end{split}$

where $$I$$ is the $$k \times k$$ identity matrix and $$\sigma$$ is a $$k \times k$$ matrix

#### Seasonals¶

We can use (1) to represent

1. the deterministic seasonal $$y_t = y_{t-4}$$
2. the indeterministic seasonal $$y_t = \phi_4 y_{t-4} + w_t$$

In fact both are special cases of (3)

With the deterministic seasonal, the transition matrix becomes

$\begin{split}A = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\end{split}$

It is easy to check that $$A^4 = I$$, which implies that $$x_t$$ is strictly periodic with period 4 [1]:

$x_{t+4} = x_t$

Such an $$x_t$$ process can be used to model deterministic seasonals in quarterly time series.
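The periodicity claim takes one line to verify:

```python
import numpy as np

# Check that the deterministic seasonal transition matrix satisfies A^4 = I,
# so the state (and hence y) repeats every four periods
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])
A4 = np.linalg.matrix_power(A, 4)
print(np.array_equal(A4, np.eye(4)))
```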

The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations.

### Moving Average Representations¶

A nonrecursive expression for $$x_t$$ as a function of $$x_0, w_1, w_2, \ldots, w_t$$ can be found by using (1) repeatedly to obtain

(6)\begin{split}\begin{align} x_t & = Ax_{t-1} + Cw_t \\ & = A^2 x_{t-2} + ACw_{t-1} + Cw_t \nonumber \\ & \qquad \vdots \nonumber \\ & = \sum_{j=0}^{t-1} A^j Cw_{t-j} + A^t x_0 \nonumber \end{align}\end{split}

Representation (6) is a moving average representation

It expresses $$\{x_t\}$$ as a linear function of

1. current and past values of the process $$\{w_t\}$$ and
2. the initial condition $$x_0$$

As an example of a moving average representation, let the model be

$\begin{split}A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\end{split}$

You will be able to show that $$A^t = \begin{bmatrix} 1 & t \cr 0 & 1 \end{bmatrix}$$ and $$A^j C = \begin{bmatrix} 1 & 0 \end{bmatrix}'$$

Substituting into the moving average representation (6), we obtain

$\begin{split}x_{1t} = \sum_{j=0}^{t-1} w_{t-j} + \begin{bmatrix} 1 & t \end{bmatrix} x_0\end{split}$

where $$x_{1t}$$ is the first entry of $$x_t$$

The first term on the right is a cumulated sum of martingale differences, and is therefore a martingale

The second term is a translated linear function of time

For this reason, $$x_{1t}$$ is called a martingale with drift
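The claimed expressions for $$A^t$$ and $$A^j C$$ are easy to verify numerically; a minimal sketch:

```python
import numpy as np

# Verify A^t = [[1, t], [0, 1]] and A^j C = [1, 0]' for the
# martingale-with-drift example
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([1.0, 0.0])

for t in range(1, 8):
    At = np.linalg.matrix_power(A, t)
    assert np.allclose(At, [[1.0, t], [0.0, 1.0]])
    assert np.allclose(At @ C, [1.0, 0.0])
print("verified for t = 1, ..., 7")
```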

## Distributions and Moments¶

### Unconditional Moments¶

Using (1), it’s easy to obtain expressions for the (unconditional) means of $$x_t$$ and $$y_t$$

We’ll explain what unconditional and conditional mean soon

Letting $$\mu_t := \EE [x_t]$$ and using linearity of expectations, we find that

(7)$\mu_{t+1} = A \mu_t \quad \text{with} \quad \mu_0 \text{ given}$

Here $$\mu_0$$ is a primitive given in (1)

The variance-covariance matrix of $$x_t$$ is $$\Sigma_t := \EE [ (x_t - \mu_t) (x_t - \mu_t)']$$

Using $$x_{t+1} - \mu_{t+1} = A (x_t - \mu_t) + C w_{t+1}$$, we can determine this matrix recursively via

(8)$\Sigma_{t+1} = A \Sigma_t A' + C C' \quad \text{with} \quad \Sigma_0 \text{ given}$

As with $$\mu_0$$, the matrix $$\Sigma_0$$ is a primitive given in (1)
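The recursions (7) and (8) are straightforward to iterate; here is a sketch with illustrative matrices (not tied to any example above):

```python
import numpy as np

# Iterate the moment recursions (7) and (8) forward from given initial
# conditions; A is chosen stable so both moments settle down
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
C = np.array([[0.2],
              [0.1]])
mu = np.array([1.0, 1.0])       # mu_0
Sigma = np.eye(2)               # Sigma_0

for t in range(50):
    mu = A @ mu                             # mu_{t+1} = A mu_t
    Sigma = A @ Sigma @ A.T + C @ C.T       # Sigma_{t+1} = A Sigma_t A' + CC'

print(mu, Sigma)
```

After 50 iterations the moments are close to their stationary values, so `Sigma` is nearly a fixed point of (8).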

As a matter of terminology, we will sometimes call

• $$\mu_t$$ the unconditional mean of $$x_t$$
• $$\Sigma_t$$ the unconditional variance-covariance matrix of $$x_t$$

This is to distinguish $$\mu_t$$ and $$\Sigma_t$$ from related objects that use conditioning information, to be defined below

However, you should be aware that these “unconditional” moments do depend on the initial distribution $$N(\mu_0, \Sigma_0)$$

#### Moments of the Observations¶

Using linearity of expectations again we have

(9)$\EE [y_t] = \EE [G x_t] = G \mu_t$

The variance-covariance matrix of $$y_t$$ is easily shown to be

(10)$\Var [y_t] = \Var [G x_t] = G \Sigma_t G'$

### Distributions¶

In general, knowing the mean and variance-covariance matrix of a random vector is not quite as good as knowing the full distribution

However, there are some situations where these moments alone tell us all we need to know

These are situations in which the mean vector and covariance matrix are sufficient statistics for the population distribution

(Sufficient statistics form a list of objects that characterize a population distribution)

One such situation is when the vector in question is Gaussian (i.e., normally distributed)

This is the case here, given

1. our Gaussian assumptions on the primitives
2. the fact that normality is preserved under linear operations

In fact, it’s well-known that

(11)$u \sim N(\bar u, S) \quad \text{and} \quad v = a + B u \implies v \sim N(a + B \bar u, B S B')$

In particular, given our Gaussian assumptions on the primitives and the linearity of (1) we can see immediately that both $$x_t$$ and $$y_t$$ are Gaussian for all $$t \geq 0$$ [2]

Since $$x_t$$ is Gaussian, to find the distribution, all we need to do is find its mean and variance-covariance matrix

But in fact we’ve already done this, in (7) and (8)

Letting $$\mu_t$$ and $$\Sigma_t$$ be as defined by these equations, we have

(12)$x_t \sim N(\mu_t, \Sigma_t)$

By similar reasoning combined with (9) and (10),

(13)$y_t \sim N(G \mu_t, G \Sigma_t G')$

### Ensemble Interpretations¶

How should we interpret the distributions defined by (12) and (13)?

Intuitively, the probabilities in a distribution correspond to relative frequencies in a large population drawn from that distribution

Let’s apply this idea to our setting, focusing on the distribution of $$y_T$$ for fixed $$T$$

We can generate independent draws of $$y_T$$ by repeatedly simulating the evolution of the system up to time $$T$$, using an independent set of shocks each time

The next figure shows 20 simulations, producing 20 time series for $$\{y_t\}$$, and hence 20 draws of $$y_T$$

The system in question is the univariate autoregressive model (3)

The values of $$y_T$$ are represented by black dots in the left-hand figure

In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies from our sample of 20 $$y_T$$'s

(The parameters and source code for the figures can be found in file linear_models/paths_and_hist.py from the applications repository)

Here is another figure, this time with 100 observations

Let’s now try with 500,000 observations, showing only the histogram (without rotation)

The black line is the population density of $$y_T$$ calculated from (13)

The histogram and population distribution are close, as expected

By looking at the figures and experimenting with parameters, you will gain a feel for how the population distribution depends on the model primitives listed above, as intermediated by the distribution’s sufficient statistics

#### Ensemble means¶

In the preceding figure we approximated the population distribution of $$y_T$$ by

1. generating $$I$$ sample paths (i.e., time series) where $$I$$ is a large number
2. recording each observation $$y^i_T$$
3. histogramming this sample

Just as the histogram approximates the population distribution, the ensemble or cross-sectional average

$\bar y_T := \frac{1}{I} \sum_{i=1}^I y_T^i$

approximates the expectation $$\EE [y_T] = G \mu_T$$ (as implied by the law of large numbers)
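Here is a sketch of this law-of-large-numbers logic with an illustrative two-dimensional system (every path starts at $$x_0 = \mu_0$$, i.e., $$\Sigma_0 = 0$$): the cross-sectional average of many draws of $$x_T$$ should be close to $$\mu_T = A^T \mu_0$$

```python
import numpy as np

# Ensemble average of I independent draws of x_T versus the population
# mean mu_T = A^T mu_0 (valid here because Sigma_0 = 0)
A = np.array([[0.8, 0.1],
              [0.0, 0.7]])
C = np.array([[0.5],
              [0.3]])
mu_0 = np.array([5.0, 2.0])
T, I = 10, 50_000

rng = np.random.default_rng(42)
x = np.tile(mu_0, (I, 1)).T           # shape (2, I): every path starts at mu_0
for t in range(T):
    w = rng.standard_normal((1, I))
    x = A @ x + C @ w                 # update all I paths at once

mu_T = np.linalg.matrix_power(A, T) @ mu_0
x_bar = x.mean(axis=1)                # ensemble (cross-sectional) average
print(x_bar, mu_T)
```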

Here’s a simulation comparing the ensemble averages and population means at time points $$t=0,\ldots,50$$

The parameters are the same as for the preceding figures, and the sample size is relatively small ($$I=20$$)

The ensemble mean for $$x_t$$ is

$\bar x_T := \frac{1}{I} \sum_{i=1}^I x_T^i \to \mu_T \qquad (I \to \infty)$

The limit $$\mu_T$$ is a “long-run average”

(By long-run average we mean the average for an infinite ($$I = \infty$$) number of sample $$x_T$$'s)

Another application of the law of large numbers assures us that

$\frac{1}{I} \sum_{i=1}^I (x_T^i - \bar x_T) (x_T^i - \bar x_T)' \to \Sigma_T \qquad (I \to \infty)$

### Joint Distributions¶

In the preceding discussion we looked at the distributions of $$x_t$$ and $$y_t$$ in isolation

This gives us useful information, but doesn’t allow us to answer questions like

• what’s the probability that $$x_t \geq 0$$ for all $$t$$?
• what’s the probability that the process $$\{y_t\}$$ exceeds some value $$a$$ before falling below $$b$$?
• etc., etc.

Such questions concern the joint distributions of these sequences

To compute the joint distribution of $$x_0, x_1, \ldots, x_T$$, recall that joint and conditional densities are linked by the rule

$p(x, y) = p(y \, | \, x) p(x) \qquad \text{(joint }=\text{ conditional }\times\text{ marginal)}$

From this rule we get $$p(x_0, x_1) = p(x_1 \,|\, x_0) p(x_0)$$

The Markov property $$p(x_t \,|\, x_{t-1}, \ldots, x_0) = p(x_t \,|\, x_{t-1})$$ and repeated applications of the preceding rule lead us to

$p(x_0, x_1, \ldots, x_T) = p(x_0) \prod_{t=0}^{T-1} p(x_{t+1} \,|\, x_t)$

The marginal $$p(x_0)$$ is just the primitive $$N(\mu_0, \Sigma_0)$$

In view of (1), the conditional densities are

$p(x_{t+1} \,|\, x_t) = N(Ax_t, C C')$

#### Autocovariance functions¶

An important object related to the joint distribution is the autocovariance function

(14)$\Sigma_{t+j, t} := \EE [ (x_{t+j} - \mu_{t+j})(x_t - \mu_t)' ]$

Elementary calculations show that

(15)$\Sigma_{t+j,t} = A^j \Sigma_t$

Notice that $$\Sigma_{t+j,t}$$ in general depends on both $$j$$, the gap between the two dates, and $$t$$, the earlier date

## Stationarity and Ergodicity¶

Stationarity and ergodicity are two properties that, when they hold, greatly aid analysis of linear state space models

### Visualizing Stability¶

Let’s look at some more time series from the same model that we analyzed above

This picture shows cross-sectional distributions for $$y$$ at times $$T, T', T''$$

Note how the time series “settle down” in the sense that the distributions at $$T'$$ and $$T''$$ are relatively similar to each other — but unlike the distribution at $$T$$

Apparently, the distributions of $$y_t$$ converge to a fixed long-run distribution as $$t \to \infty$$

When such a distribution exists it is called a stationary distribution

### Stationary Distributions¶

In our setting, a distribution $$\psi_{\infty}$$ is said to be stationary for $$x_t$$ if

$x_t \sim \psi_{\infty} \quad \text{and} \quad x_{t+1} = A x_t + C w_{t+1} \quad \implies \quad x_{t+1} \sim \psi_{\infty}$

Since

1. in the present case all distributions are Gaussian
2. a Gaussian distribution is pinned down by its mean and variance-covariance matrix

we can restate the definition as follows: $$\psi_{\infty}$$ is stationary for $$x_t$$ if

$\psi_{\infty} = N(\mu_{\infty}, \Sigma_{\infty})$

where $$\mu_{\infty}$$ and $$\Sigma_{\infty}$$ are fixed points of (7) and (8) respectively

### Covariance Stationary Processes¶

Let’s see what happens to the preceding figure if we start $$x_0$$ at the stationary distribution

Now the differences in the observed distributions at $$T, T'$$ and $$T''$$ come entirely from random fluctuations due to the finite sample size

By

• our choosing $$x_0 \sim N(\mu_{\infty}, \Sigma_{\infty})$$
• the definitions of $$\mu_{\infty}$$ and $$\Sigma_{\infty}$$ as fixed points of (7) and (8) respectively

we’ve ensured that

$\mu_t = \mu_{\infty} \quad \text{and} \quad \Sigma_t = \Sigma_{\infty} \quad \text{for all } t$

Moreover, in view of (15), the autocovariance function takes the form $$\Sigma_{t+j,t} = A^j \Sigma_\infty$$, which depends on $$j$$ but not on $$t$$

This motivates the following definition

A process $$\{x_t\}$$ is said to be covariance stationary if

• both $$\mu_t$$ and $$\Sigma_t$$ are constant in $$t$$
• $$\Sigma_{t+j,t}$$ depends on the time gap $$j$$ but not on time $$t$$

In our setting, $$\{x_t\}$$ will be covariance stationary if $$\mu_0, \Sigma_0, A, C$$ assume values that imply that none of $$\mu_t, \Sigma_t, \Sigma_{t+j,t}$$ depends on $$t$$

### Conditions for Stationarity¶

#### The globally stable case¶

The difference equation $$\mu_{t+1} = A \mu_t$$ is known to have a unique fixed point $$\mu_{\infty} = 0$$ if all eigenvalues of $$A$$ have moduli strictly less than unity

That is, if `(np.absolute(np.linalg.eigvals(A)) < 1).all() == True`

The difference equation (8) also has a unique fixed point in this case, and, moreover

$\mu_t \to \mu_{\infty} = 0 \quad \text{and} \quad \Sigma_t \to \Sigma_{\infty} \quad \text{as} \quad t \to \infty$

regardless of the initial conditions $$\mu_0$$ and $$\Sigma_0$$

This is the globally stable case — see these notes for a more theoretical treatment
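In the globally stable case, $$\Sigma_\infty$$ can also be computed directly as the solution of the discrete Lyapunov equation $$\Sigma_\infty = A \Sigma_\infty A' + CC'$$; here is a sketch using SciPy's solver, with illustrative matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# With all eigenvalues of A inside the unit circle, Sigma_infty solves
# Sigma = A Sigma A' + CC'
A = np.array([[0.8, 0.2],
              [0.0, 0.5]])
C = np.array([[1.0],
              [0.3]])

assert (np.abs(np.linalg.eigvals(A)) < 1).all()   # globally stable case
Sigma_inf = solve_discrete_lyapunov(A, C @ C.T)
print(Sigma_inf)
```

Iterating (8) from any $$\Sigma_0$$ converges to the same matrix.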

However, global stability is more than we need for stationary solutions, and often more than we want

To illustrate, consider our second order difference equation example

Here the state is $$x_t = \begin{bmatrix} 1 & y_t & y_{t-1} \end{bmatrix}'$$

Because of the constant first component in the state vector, we will never have $$\mu_t \to 0$$

How can we find stationary solutions that respect a constant state component?

#### Processes with a constant state component¶

To investigate such a process, suppose that $$A$$ and $$C$$ take the form

$\begin{split} A = \begin{bmatrix} A_1 & a \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} C_1 \\ 0 \end{bmatrix}\end{split}$

where

• $$A_1$$ is an $$(n-1) \times (n-1)$$ matrix
• $$a$$ is an $$(n-1) \times 1$$ column vector

Let $$x_t = \begin{bmatrix} x_{1t}' & 1 \end{bmatrix}'$$ where $$x_{1t}$$ is $$(n-1) \times 1$$

It follows that

$x_{1,t+1} = A_1 x_{1t} + a + C_1 w_{t+1}$

Let $$\mu_{1t} = \EE [x_{1t}]$$ and take expectations on both sides of this expression to get

(16)$\mu_{1,t+1} = A_1 \mu_{1,t} + a$

Assume now that the moduli of the eigenvalues of $$A_1$$ are all strictly less than one

Then (16) has a unique stationary solution, namely,

$\mu_{1\infty} = (I-A_1)^{-1} a$

The stationary value of $$\mu_t$$ itself is then $$\mu_\infty := \begin{bmatrix} \mu_{1\infty}' & 1 \end{bmatrix}'$$

The stationary values of $$\Sigma_t$$ and $$\Sigma_{t+j,t}$$ satisfy

(17)\begin{split}\begin{align} \Sigma_\infty & = A \Sigma_\infty A' + C C' \\ \Sigma_{t+j,t} & = A^j \Sigma_\infty \nonumber \end{align}\end{split}

Notice that here $$\Sigma_{t+j,t}$$ depends on the time gap $$j$$ but not on calendar time $$t$$

In conclusion, if

• $$x_0 \sim N(\mu_{\infty}, \Sigma_{\infty})$$ and
• the moduli of the eigenvalues of $$A_1$$ are all strictly less than unity

then the $$\{x_t\}$$ process is covariance stationary, with constant state component

Note

If the eigenvalues of $$A_1$$ are less than unity in modulus, then (a) starting from any initial value, the mean and variance-covariance matrix both converge to their stationary values; and (b) iterations on (8) converge to the fixed point of the discrete Lyapunov equation in the first line of (17)
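For the second-order difference equation example, the stationary mean of the non-constant state components can be computed from this formula; a sketch (here $$A_1$$ acts on $$(y_t, y_{t-1})$$ and $$a$$ collects the constant term):

```python
import numpy as np

# Compute mu_{1,infty} = (I - A_1)^{-1} a for the second-order difference
# equation example, with the constant component split out of the state
phi_0, phi_1, phi_2 = 1.1, 0.8, -0.8
A_1 = np.array([[phi_1, phi_2],
                [1.0,   0.0]])
a = np.array([phi_0, 0.0])

assert (np.abs(np.linalg.eigvals(A_1)) < 1).all()   # modulus sqrt(0.8) < 1
mu_1_inf = np.linalg.solve(np.eye(2) - A_1, a)
print(mu_1_inf)   # both entries equal phi_0 / (1 - phi_1 - phi_2) = 1.1
```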

### Ergodicity¶

Let’s suppose that we’re working with a covariance stationary process

In this case we know that the ensemble mean will converge to $$\mu_{\infty}$$ as the sample size $$I$$ approaches infinity

#### Averages over time¶

Ensemble averages across simulations are interesting theoretically, but in real life we usually observe only a single realization $$\{x_t, y_t\}_{t=0}^T$$

So now let’s take a single realization and form the time series averages

$\bar x := \frac{1}{T} \sum_{t=1}^T x_t \quad \text{and} \quad \bar y := \frac{1}{T} \sum_{t=1}^T y_t$

Do these time series averages converge to something interpretable in terms of our basic state-space representation?

The answer depends on something called ergodicity

Ergodicity is the property that time series and ensemble averages coincide

More formally, ergodicity implies that time series sample averages converge to their expectation under the stationary distribution

In particular,

• $$\frac{1}{T} \sum_{t=1}^T x_t \to \mu_{\infty}$$
• $$\frac{1}{T} \sum_{t=1}^T (x_t -\bar x_T) (x_t - \bar x_T)' \to \Sigma_\infty$$
• $$\frac{1}{T} \sum_{t=1}^T (x_{t+j} -\bar x_T) (x_t - \bar x_T)' \to A^j \Sigma_\infty$$

In our linear Gaussian setting, any covariance stationary process is also ergodic
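Here is a minimal scalar illustration of ergodicity (an AR(1) with a constant, not the model from the figures): the time average of one long realization approximates the stationary mean

```python
import numpy as np

# Time average of a single realization of x_{t+1} = a x_t + b + c w_{t+1}
# versus the stationary mean b / (1 - a); parameters are illustrative
a, b, c = 0.9, 1.0, 0.5
mu_inf = b / (1 - a)             # stationary mean, here 10.0

rng = np.random.default_rng(1234)
T = 200_000
x = mu_inf                       # start at the stationary mean
total = 0.0
for t in range(T):
    x = a * x + b + c * rng.standard_normal()
    total += x
x_bar = total / T                # time-series average of one path
print(x_bar, mu_inf)
```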

## Noisy Observations¶

In some settings the observation equation $$y_t = Gx_t$$ is modified to include an error term

Often this error term represents the idea that the true state can only be observed imperfectly

To include an error term in the observation we introduce

• An iid sequence of $$\ell \times 1$$ random vectors $$v_t \sim N(0,I)$$
• A $$k \times \ell$$ matrix $$H$$

and extend the linear state-space system to

(18)\begin{split}\begin{align} x_{t+1} & = A x_t + C w_{t+1} \\ y_t & = G x_t + H v_t \nonumber \\ x_0 & \sim N(\mu_0, \Sigma_0) \nonumber \end{align}\end{split}

The sequence $$\{v_t\}$$ is assumed to be independent of $$\{w_t\}$$

The process $$\{x_t\}$$ is not modified by noise in the observation equation and its moments, distributions and stability properties remain the same

The unconditional moments of $$y_t$$ from (9) and (10) now become

(19)$\EE [y_t] = \EE [G x_t + H v_t] = G \mu_t$

The variance-covariance matrix of $$y_t$$ is easily shown to be

(20)$\Var [y_t] = \Var [G x_t + H v_t] = G \Sigma_t G' + HH'$

The distribution of $$y_t$$ is therefore

$y_t \sim N(G \mu_t, G \Sigma_t G' + HH')$
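A quick Monte Carlo sketch of (20), with illustrative $$G$$, $$H$$, and $$\Sigma$$: the sample variance of draws of $$y = Gx + Hv$$ should be close to $$G \Sigma G' + HH'$$

```python
import numpy as np

# Monte Carlo check of Var[y] = G Sigma G' + HH' for y = G x + H v
rng = np.random.default_rng(7)
G = np.array([[1.0, 0.5]])
H = np.array([[0.3]])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])
mu = np.zeros(2)

N = 500_000
x = rng.multivariate_normal(mu, Sigma, size=N)   # draws of x_t
v = rng.standard_normal((N, 1))                  # draws of v_t
y = x @ G.T + v @ H.T                            # draws of y_t
print(y.var(), (G @ Sigma @ G.T + H @ H.T)[0, 0])
```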

## Prediction¶

The theory of prediction for linear state space systems is elegant and simple

### Forecasting Formulas – Conditional Means¶

The natural way to predict variables is to use conditional distributions

For example, the optimal forecast of $$x_{t+1}$$ given information known at time $$t$$ is

$\EE_t [x_{t+1}] := \EE [x_{t+1} \mid x_t, x_{t-1}, \ldots, x_0 ] = Ax_t$

The right-hand side follows from $$x_{t+1} = A x_t + C w_{t+1}$$ and the fact that $$w_{t+1}$$ is zero mean and independent of $$x_t, x_{t-1}, \ldots, x_0$$

That $$\EE_t [x_{t+1}] = \EE[x_{t+1} \mid x_t]$$ is an implication of $$\{x_t\}$$ having the Markov property

The one-step-ahead forecast error is

$x_{t+1} - \EE_t [x_{t+1}] = Cw_{t+1}$

The covariance matrix of the forecast error is

$\EE [ (x_{t+1} - \EE_t [ x_{t+1}] ) (x_{t+1} - \EE_t [ x_{t+1}])'] = CC'$

More generally, we’d like to compute the $$j$$-step ahead forecasts $$\EE_t [x_{t+j}]$$ and $$\EE_t [y_{t+j}]$$

With a bit of algebra we obtain

$x_{t+j} = A^j x_t + A^{j-1} C w_{t+1} + A^{j-2} C w_{t+2} + \cdots + A^0 C w_{t+j}$

In view of the iid property, current and past state values provide no information about future values of the shock

Hence $$\EE_t[w_{t+k}] = \EE[w_{t+k}] = 0$$

It now follows from linearity of expectations that the $$j$$-step ahead forecast of $$x$$ is

$\EE_t [x_{t+j}] = A^j x_t$

The $$j$$-step ahead forecast of $$y$$ is therefore

$\EE_t [y_{t+j}] = \EE_t [G x_{t+j} + H v_{t+j}] = G A^j x_t$

### Covariance of Prediction Errors¶

It is useful to obtain the covariance matrix of the vector of $$j$$-step-ahead prediction errors

(21)$x_{t+j} - \EE_t [ x_{t+j}] = \sum^{j-1}_{s=0} A^s C w_{t-s+j}$

Evidently,

(22)$V_j := \EE_t [ (x_{t+j} - \EE_t [x_{t+j}] ) (x_{t+j} - \EE_t [x_{t+j}] )^\prime ] = \sum^{j-1}_{k=0} A^k C C^\prime A^{k^\prime}$

$$V_j$$ defined in (22) can be calculated recursively via $$V_1 = CC'$$ and

(23)$V_j = CC^\prime + A V_{j-1} A^\prime, \quad j \geq 2$

$$V_j$$ is the conditional covariance matrix of the errors in forecasting $$x_{t+j}$$, conditioned on time $$t$$ information $$x_t$$
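The recursion (23) and the sum (22) can be checked against each other; a sketch with illustrative matrices:

```python
import numpy as np

# Compare the recursion V_j = CC' + A V_{j-1} A' with the direct sum
# V_j = sum_{k=0}^{j-1} A^k CC' (A^k)'
A = np.array([[0.9, 0.3],
              [0.0, 0.6]])
C = np.array([[1.0],
              [0.4]])
j = 6

V = C @ C.T                       # V_1 = CC'
for _ in range(j - 1):
    V = C @ C.T + A @ V @ A.T     # recursion (23)

V_sum = sum(np.linalg.matrix_power(A, k) @ C @ C.T @ np.linalg.matrix_power(A, k).T
            for k in range(j))    # sum (22)
print(np.allclose(V, V_sum))
```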

Under particular conditions, $$V_j$$ converges to

(24)$V_\infty = CC' + A V_\infty A'$

Equation (24) is an example of a discrete Lyapunov equation in the covariance matrix $$V_\infty$$

A sufficient condition for $$V_j$$ to converge is that the eigenvalues of $$A$$ be strictly less than one in modulus.

Weaker sufficient conditions for convergence associate eigenvalues equaling or exceeding one in modulus with elements of $$C$$ that equal $$0$$

### Forecasts of Geometric Sums¶

In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear state-space system (1)

We want the following objects

• Forecast of a geometric sum of future $$x$$‘s, or $$\EE_t \left[ \sum_{j=0}^\infty \beta^j x_{t+j} \right]$$
• Forecast of a geometric sum of future $$y$$‘s, or $$\EE_t \left[\sum_{j=0}^\infty \beta^j y_{t+j} \right]$$

These objects are important components of some famous and interesting dynamic models

For example,

• if $$\{y_t\}$$ is a stream of dividends, then $$\EE \left[\sum_{j=0}^\infty \beta^j y_{t+j} | x_t \right]$$ is a model of a stock price
• if $$\{y_t\}$$ is the money supply, then $$\EE \left[\sum_{j=0}^\infty \beta^j y_{t+j} | x_t \right]$$ is a model of the price level

#### Formulas¶

Fortunately, it is easy to use a little matrix algebra to compute these objects

Suppose that every eigenvalue of $$A$$ has modulus strictly less than $$\frac{1}{\beta}$$

It then follows that $$I + \beta A + \beta^2 A^2 + \cdots = \left[I - \beta A \right]^{-1}$$

• Forecast of a geometric sum of future $$x$$‘s
$\EE_t \left[\sum_{j=0}^\infty \beta^j x_{t+j} \right] = [I + \beta A + \beta^2 A^2 + \cdots \ ] x_t = [I - \beta A]^{-1} x_t$
• Forecast of a geometric sum of future $$y$$‘s
$\EE_t \left[\sum_{j=0}^\infty \beta^j y_{t+j} \right] = G [I + \beta A + \beta^2 A^2 + \cdots \ ] x_t = G[I - \beta A]^{-1} x_t$
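These formulas are easy to verify against a long truncated sum; a sketch with illustrative values:

```python
import numpy as np

# Check [I - beta A]^{-1} x_t against a truncated sum of beta^j A^j x_t
beta = 0.96
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
x_t = np.array([1.0, 2.0])

# all eigenvalues of A have modulus < 1/beta, so the geometric sum converges
assert (np.abs(np.linalg.eigvals(A)) < 1 / beta).all()

closed_form = np.linalg.solve(np.eye(2) - beta * A, x_t)
truncated = sum((beta ** j) * (np.linalg.matrix_power(A, j) @ x_t)
                for j in range(2000))
print(closed_form, truncated)
```

Premultiplying `closed_form` by $$G$$ gives the corresponding forecast for the geometric sum of future $$y$$'s.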

## Code¶

Our preceding simulations and calculations are based on code in the file lss.py from the QuantEcon.py package

The code implements a class for handling linear state space models (simulations, calculating moments, etc.)

We repeat it here for convenience

"""
Filename: lss.py
Reference: http://quant-econ.net/py/linear_models.html

Computes quantities associated with the Gaussian linear state space model.
"""

from textwrap import dedent
import numpy as np
from numpy.random import multivariate_normal
from scipy.linalg import solve
from numba import jit

@jit
def simulate_linear_model(A, x0, v, ts_length):
"""
This is a separate function for simulating a vector linear system of
the form

x_{t+1} = A x_t + v_t given x_0 = x0

Here x_t and v_t are both n x 1 and A is n x n.

The purpose of separating this functionality out is to target it for
optimization by Numba.  For the same reason, matrix multiplication is
broken down into for loops.

Parameters
----------
A : array_like or scalar(float)
Should be n x n
x0 : array_like
Should be n x 1.  Initial condition
v : np.ndarray
Should be n x ts_length-1.  Its t-th column is used as the time t
shock v_t
ts_length : int
The length of the time series

Returns
--------
x : np.ndarray
Time series with ts_length columns, the t-th column being x_t
"""
A = np.asarray(A)
n = A.shape[0]
x = np.empty((n, ts_length))
x[:, 0] = x0
for t in range(ts_length-1):
# x[:, t+1] = A.dot(x[:, t]) + v[:, t]
for i in range(n):
x[i, t+1] = v[i, t]                   #Shock
for j in range(n):
x[i, t+1] += A[i, j] * x[j, t]   #Dot Product
return x

class LinearStateSpace(object):
"""
A class that describes a Gaussian linear state space model of the
form:

x_{t+1} = A x_t + C w_{t+1}

y_t = G x_t + H v_t

where {w_t} and {v_t} are independent and standard normal with dimensions
k and l respectively.  The initial conditions are mu_0 and Sigma_0 for x_0
~ N(mu_0, Sigma_0).  When Sigma_0=0, the draw of x_0 is exactly mu_0.

Parameters
----------
A : array_like or scalar(float)
Part of the state transition equation.  It should be n x n
C : array_like or scalar(float)
Part of the state transition equation.  It should be n x m
G : array_like or scalar(float)
Part of the observation equation.  It should be k x n
H : array_like or scalar(float), optional(default=None)
Part of the observation equation.  It should be k x l
mu_0 : array_like or scalar(float), optional(default=None)
This is the mean of initial draw and is n x 1
Sigma_0 : array_like or scalar(float), optional(default=None)
This is the variance of the initial draw and is n x n and
also should be positive definite and symmetric

Attributes
----------
A, C, G, H, mu_0, Sigma_0 : see Parameters
n, k, m, l : scalar(int)
The dimensions of x_t, y_t, w_t and v_t respectively

"""

def __init__(self, A, C, G, H=None, mu_0=None, Sigma_0=None):
self.A, self.G, self.C = list(map(self.convert, (A, G, C)))
#-Check Input Shapes-#
ni,nj = self.A.shape
if ni != nj:
raise ValueError("Matrix A (shape: %s) needs to be square" % (self.A.shape))
if ni != self.C.shape[0]:
raise ValueError("Matrix C (shape: %s) does not have compatible dimensions with A. It should be shape: %s" % (self.C.shape, (ni,1)))
self.m = self.C.shape[1]
self.k, self.n = self.G.shape
if self.n != ni:
raise ValueError("Matrix G (shape: %s) does not have compatible dimensions with A (%s)"%(self.G.shape, self.A.shape))
if H is None:
self.H = None
self.l = None
else:
self.H = self.convert(H)
self.l = self.H.shape[1]
if mu_0 is None:
self.mu_0 = np.zeros((self.n, 1))
else:
self.mu_0 = self.convert(mu_0)
self.mu_0.shape = self.n, 1
if Sigma_0 is None:
self.Sigma_0 = np.zeros((self.n, self.n))
else:
self.Sigma_0 = self.convert(Sigma_0)

def __repr__(self):
return self.__str__()

def __str__(self):
m = """\
Linear Gaussian state space model:
- dimension of state space          : {n}
- number of innovations             : {m}
- dimension of observation equation : {k}
"""
return dedent(m.format(n=self.n, k=self.k, m=self.m))

def convert(self, x):
"""
Convert array_like objects (lists of lists, floats, etc.) into
well formed 2D NumPy arrays

"""
return np.atleast_2d(np.asarray(x, dtype='float'))

def simulate(self, ts_length=100):
"""
Simulate a time series of length ts_length, first drawing

x_0 ~ N(mu_0, Sigma_0)

Parameters
----------

ts_length : scalar(int), optional(default=100)
The length of the simulation

Returns
-------
x : array_like(float)
An n x ts_length array, where the t-th column is x_t
y : array_like(float)
A k x ts_length array, where the t-th column is y_t

"""
x0 = multivariate_normal(self.mu_0.flatten(), self.Sigma_0)
w = np.random.randn(self.m, ts_length-1)
v = self.C.dot(w) # Multiply each w_t by C to get v_t = C w_t
# == simulate time series == #
x = simulate_linear_model(self.A, x0, v, ts_length)

if self.H is not None:
v = np.random.randn(self.l, ts_length)
y = self.G.dot(x) + self.H.dot(v)
else:
y = self.G.dot(x)

return x, y

def replicate(self, T=10, num_reps=100):
"""
Simulate num_reps observations of x_T and y_T given
x_0 ~ N(mu_0, Sigma_0).

Parameters
----------
T : scalar(int), optional(default=10)
The period that we want to replicate values for
num_reps : scalar(int), optional(default=100)
The number of replications that we want

Returns
-------
x : array_like(float)
An n x num_reps array, where the j-th column is the j_th
observation of x_T

y : array_like(float)
A k x num_reps array, where the j-th column is the j_th
observation of y_T

"""
x = np.empty((self.n, num_reps))
for j in range(num_reps):
x_T, _ = self.simulate(ts_length=T+1)
x[:, j] = x_T[:, -1]
if self.H is not None:
v = np.random.randn(self.l, num_reps)
y = self.G.dot(x) + self.H.dot(v)
else:
y = self.G.dot(x)

return x, y

    def moment_sequence(self):
        """
        Create a generator to calculate the population mean and
        variance-covariance matrix for both x_t and y_t, starting at
        the initial condition (self.mu_0, self.Sigma_0).  Each iteration
        produces a 4-tuple of items (mu_x, mu_y, Sigma_x, Sigma_y) for
        the next period.

        Yields
        ------
        mu_x : array_like(float)
            An n x 1 array representing the population mean of x_t
        mu_y : array_like(float)
            A k x 1 array representing the population mean of y_t
        Sigma_x : array_like(float)
            An n x n array representing the variance-covariance matrix
            of x_t
        Sigma_y : array_like(float)
            A k x k array representing the variance-covariance matrix
            of y_t

        """
        # == Simplify names == #
        A, C, G, H = self.A, self.C, self.G, self.H
        # == Initial moments == #
        mu_x, Sigma_x = self.mu_0, self.Sigma_0

        while True:
            mu_y = G.dot(mu_x)
            if H is None:
                Sigma_y = G.dot(Sigma_x).dot(G.T)
            else:
                Sigma_y = G.dot(Sigma_x).dot(G.T) + H.dot(H.T)

            yield mu_x, mu_y, Sigma_x, Sigma_y

            # == Update moments of x == #
            mu_x = A.dot(mu_x)
            Sigma_x = A.dot(Sigma_x).dot(A.T) + C.dot(C.T)

    def stationary_distributions(self, max_iter=200, tol=1e-5):
        """
        Compute the moments of the stationary distributions of x_t and
        y_t if possible.  Computation is by iteration, starting from the
        initial conditions self.mu_0 and self.Sigma_0.

        Parameters
        ----------
        max_iter : scalar(int), optional(default=200)
            The maximum number of iterations allowed
        tol : scalar(float), optional(default=1e-5)
            The tolerance level that one wishes to achieve

        Returns
        -------
        mu_x_star : array_like(float)
            An n x 1 array representing the stationary mean of x_t
        mu_y_star : array_like(float)
            A k x 1 array representing the stationary mean of y_t
        Sigma_x_star : array_like(float)
            An n x n array representing the stationary var-cov matrix
            of x_t
        Sigma_y_star : array_like(float)
            A k x k array representing the stationary var-cov matrix
            of y_t

        """
        # == Initialize iteration == #
        m = self.moment_sequence()
        mu_x, mu_y, Sigma_x, Sigma_y = next(m)
        i = 0
        error = tol + 1

        # == Loop until convergence or failure == #
        while error > tol:

            if i > max_iter:
                fail_message = 'Convergence failed after {} iterations'
                raise ValueError(fail_message.format(max_iter))

            else:
                i += 1
                mu_x1, mu_y1, Sigma_x1, Sigma_y1 = next(m)
                error_mu = np.max(np.abs(mu_x1 - mu_x))
                error_Sigma = np.max(np.abs(Sigma_x1 - Sigma_x))
                error = max(error_mu, error_Sigma)
                mu_x, Sigma_x = mu_x1, Sigma_x1

        # == Prepare return values == #
        mu_x_star, Sigma_x_star = mu_x, Sigma_x
        mu_y_star, Sigma_y_star = mu_y1, Sigma_y1

        return mu_x_star, mu_y_star, Sigma_x_star, Sigma_y_star

    def geometric_sums(self, beta, x_t):
        r"""
        Forecast the geometric sums

            S_x := E [sum_{j=0}^{\infty} beta^j x_{t+j} | x_t ]

            S_y := E [sum_{j=0}^{\infty} beta^j y_{t+j} | x_t ]

        Parameters
        ----------
        beta : scalar(float)
            Discount factor, in [0, 1)
        x_t : array_like(float)
            The term x_t for conditioning

        Returns
        -------
        S_x : array_like(float)
            Geometric sum as defined above
        S_y : array_like(float)
            Geometric sum as defined above

        """
        I = np.identity(self.n)
        S_x = solve(I - beta * self.A, x_t)
        S_y = self.G.dot(S_x)

        return S_x, S_y

    def impulse_response(self, j=5):
        """
        Pulls off the impulse response coefficients to a shock
        in w_{t} for x and y.

        Important to note: We are uninterested in the shocks to
        v for this method.

        * x coefficients are C, AC, A^2 C, ...
        * y coefficients are GC, GAC, GA^2 C, ...

        Parameters
        ----------
        j : scalar(int)
            Number of coefficients that we want

        Returns
        -------
        xcoef : list(array_like(float, 2))
            The coefficients for x
        ycoef : list(array_like(float, 2))
            The coefficients for y

        """
        # Pull out matrices
        A, C, G = self.A, self.C, self.G
        Apower = np.copy(A)

        # Initialize with the impact coefficients
        xcoef = [C]
        ycoef = [np.dot(G, C)]

        for i in range(j):
            xcoef.append(np.dot(Apower, C))
            ycoef.append(np.dot(G, np.dot(Apower, C)))
            Apower = np.dot(Apower, A)

        return xcoef, ycoef


Hopefully the code is relatively self-explanatory and adequately documented
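As an aside, the closed form used in geometric_sums follows from $$E[x_{t+j} \mid x_t] = A^j x_t$$ and the matrix geometric series: whenever every eigenvalue of $$\beta A$$ has modulus less than one,

$$S_x = \sum_{j=0}^{\infty} \beta^j A^j x_t = (I - \beta A)^{-1} x_t \qquad \text{and} \qquad S_y = G S_x$$

The call solve(I - beta * self.A, x_t) computes $$(I - \beta A)^{-1} x_t$$ by solving a linear system rather than forming the inverse explicitly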

One Python construct you might not be familiar with is the use of a generator function in the method moment_sequence()

Go back and read the relevant documentation if you’ve forgotten how generator functions work
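As a minimal illustration (not part of lss.py), here is a generator in the same style as moment_sequence(): it pauses at each yield, keeping its local state alive between calls to next(). The model and numbers are made up for the example.

```python
def ar1_means(mu_0, a=0.9):
    """Yield the mean of x_t for the deterministic scalar model
    x_{t+1} = a x_t, starting from mu_0 -- the same pattern that
    moment_sequence() uses for the full vector model."""
    mu = mu_0
    while True:
        yield mu        # hand back the current mean, then pause here
        mu = a * mu     # update only when the caller asks for more

g = ar1_means(2.0)
first_three = [next(g) for _ in range(3)]
print(first_three)  # approximately [2.0, 1.8, 1.62]
```

Because the generator runs lazily, stationary_distributions can keep drawing new moments from moment_sequence() until they converge, without fixing a horizon in advance.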

Examples of usage are given in the solutions to the exercises
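For a quick check of the moment recursion that moment_sequence iterates, here is a sketch using NumPy directly rather than the class, on a made-up scalar model $$x_{t+1} = 0.9 x_t + 0.1 w_{t+1}$$ with $$y_t = x_t$$:

```python
import numpy as np

# Scalar example: A = 0.9, C = 0.1, G = 1, stored as 1 x 1 matrices
A, C, G = np.array([[0.9]]), np.array([[0.1]]), np.array([[1.0]])
mu, Sigma = np.array([[1.0]]), np.array([[0.0]])   # mu_0, Sigma_0

# Iterate mu' = A mu and Sigma' = A Sigma A' + C C', as in moment_sequence
for _ in range(500):
    mu = A @ mu
    Sigma = A @ Sigma @ A.T + C @ C.T

# The stationary variance solves sigma2 = 0.81 * sigma2 + 0.01,
# i.e. sigma2 = 0.01 / 0.19, and the stationary mean is 0
print(float(mu[0, 0]), float(Sigma[0, 0]))
```

Since $$|A| < 1$$ here, the iterates converge to the stationary moments, which is exactly what stationary_distributions detects by comparing successive outputs of the generator.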

## Exercises¶

### Exercise 1¶

Replicate this figure using the LinearStateSpace class from lss.py

### Exercise 2¶

Replicate this figure modulo randomness using the same class

### Exercise 3¶

Replicate this figure modulo randomness using the same class

The state space model and parameters are the same as for the preceding exercise

### Exercise 4¶

Replicate this figure modulo randomness using the same class

The state space model and parameters are the same as for the preceding exercise, except that the initial condition is the stationary distribution

Hint: You can use the stationary_distributions method to get the initial conditions
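For instance, once you have stationary moments for the state (the mu_x and Sigma_x values below are hypothetical stand-ins for what stationary_distributions would return), each sample path's initial condition can be drawn from that distribution with NumPy:

```python
import numpy as np

# Hypothetical stationary moments for a 2-dimensional state; in the
# exercise these would come from the stationary_distributions method
mu_x = np.zeros(2)
Sigma_x = np.array([[0.05, 0.00],
                    [0.00, 0.02]])

# Draw an initial condition x_0 ~ N(mu_x, Sigma_x)
rng = np.random.default_rng(0)
x0 = rng.multivariate_normal(mu_x, Sigma_x)
print(x0.shape)  # (2,)
```

Repeating the draw once per sample path starts every path in the stationary distribution, so the cross-sectional moments stay constant over time.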

The number of sample paths is 80, and the time horizon in the figure is 100

Producing the vertical bars and dots is optional, but if you wish to try, the bars are at dates 10, 50 and 75

## Solutions¶

Solution notebook

Footnotes

 [1] The eigenvalues of $$A$$ are $$(1,-1, i,-i)$$.
 [2] The correct way to argue this is by induction. Suppose that $$x_t$$ is Gaussian. Then (1) and (11) imply that $$x_{t+1}$$ is Gaussian. Since $$x_0$$ is assumed to be Gaussian, it follows that every $$x_t$$ is Gaussian. Evidently this implies that each $$y_t$$ is Gaussian.