Code should execute sequentially if run in a Jupyter notebook

- See the setup page to install Jupyter, Julia (0.6+) and all necessary libraries
- Please direct feedback to contact@quantecon.org or the discourse forum

# Time Iteration with Euler Equations


## Overview

In this lecture we’ll continue our earlier study of the stochastic optimal growth model

In that lecture we solved the associated discounted dynamic programming problem using value function iteration

The beauty of this technique is its broad applicability

With numerical problems, however, we can often attain higher efficiency in specific applications by deriving methods that are carefully tailored to the application at hand

The stochastic optimal growth model has plenty of structure to exploit for this purpose, especially when we adopt some concavity and smoothness assumptions over primitives

We’ll use this structure to obtain an **Euler equation** based method that’s more efficient than value function iteration for this and some other closely related applications

In a subsequent lecture we’ll see that the numerical implementation part of the Euler equation method can be further adjusted to obtain even more efficiency

## The Euler Equation

Let’s take the model set out in the stochastic growth model lecture and add the assumptions that

- \(u\) and \(f\) are continuously differentiable and strictly concave
- \(f(0) = 0\)
- \(\lim_{c \to 0} u'(c) = \infty\) and \(\lim_{c \to \infty} u'(c) = 0\)
- \(\lim_{k \to 0} f'(k) = \infty\) and \(\lim_{k \to \infty} f'(k) = 0\)

The last two conditions are usually called **Inada conditions**

(For example, the specifications used later in this lecture, \(u(c) = \log c\) and \(f(k) = k^\alpha\) with \(0 < \alpha < 1\), satisfy all of the above)

Recall the Bellman equation

\[
v^*(y) = \max_{0 \le c \le y} \left\{ u(c) + \beta \int v^*(f(y - c) z) \phi(dz) \right\}
\tag{1}
\]

Let the optimal consumption policy be denoted by \(c^*\)

We know that \(c^*\) is a \(v^*\)-greedy policy, so that \(c^*(y)\) is the maximizer in (1)

The conditions above imply that

- \(c^*\) is the unique optimal policy for the stochastic optimal growth model
- the optimal policy is continuous, strictly increasing and also **interior**, in the sense that \(0 < c^*(y) < y\) for all strictly positive \(y\), and
- the value function is strictly concave and continuously differentiable, with

\[
(v^*)'(y) = u'(c^*(y)) := (u' \circ c^*)(y)
\tag{2}
\]

The last result is called the **envelope condition** due to its relationship with the envelope theorem

To see why (2) might be valid, write the Bellman equation in the equivalent form

\[
v^*(y) = \max_{0 \le k \le y} \left\{ u(y - k) + \beta \int v^*(f(k) z) \phi(dz) \right\},
\]

differentiate naively with respect to \(y\), and then evaluate at the optimum

Section 12.1 of EDTC contains full proofs of these results, and closely related discussions can be found in many other texts

Differentiability of the value function and interiority of the optimal policy imply that optimal consumption satisfies the first order condition associated with (1), which is

\[
u'(c^*(y)) = \beta \int (v^*)'(f(y - c^*(y)) z) f'(y - c^*(y)) z \phi(dz)
\tag{3}
\]

Combining (2) and the first-order condition (3) gives the famous **Euler equation**

\[
(u' \circ c^*)(y) = \beta \int (u' \circ c^*)(f(y - c^*(y)) z) f'(y - c^*(y)) z \phi(dz)
\tag{4}
\]

We can think of the Euler equation as a functional equation

\[
(u' \circ \sigma)(y) = \beta \int (u' \circ \sigma)(f(y - \sigma(y)) z) f'(y - \sigma(y)) z \phi(dz)
\tag{5}
\]

over interior consumption policies \(\sigma\), one solution of which is the optimal policy \(c^*\)

Our aim is to solve the functional equation (5) and hence obtain \(c^*\)

### The Coleman Operator

Recall the Bellman operator

\[
Tw(y) := \max_{0 \le c \le y} \left\{ u(c) + \beta \int w(f(y - c) z) \phi(dz) \right\}
\tag{6}
\]

Just as we introduced the Bellman operator to solve the Bellman equation, we will now introduce an operator over policies to help us solve the Euler equation

This operator \(K\) will act on the set of all \(\sigma \in \Sigma\) that are continuous, strictly increasing and interior (i.e., \(0 < \sigma(y) < y\) for all strictly positive \(y\))

Henceforth we denote this set of policies by \(\mathscr P\)

The operator \(K\) takes as its argument a \(\sigma \in \mathscr P\) and returns a new function \(K\sigma\), where \(K\sigma(y)\) is the \(c \in (0, y)\) that solves

\[
u'(c) = \beta \int (u' \circ \sigma)(f(y - c) z) f'(y - c) z \phi(dz)
\tag{7}
\]

We call this operator the **Coleman operator** to acknowledge the work of [Col90] (although many people have studied this and other closely related iterative techniques)

In essence, \(K\sigma\) is the consumption policy that the Euler equation tells you to choose today when your future consumption policy is \(\sigma\)

The important thing to note about \(K\) is that, by construction, its fixed points coincide with solutions to the functional equation (5)

In particular, the optimal policy \(c^*\) is a fixed point

Indeed, for fixed \(y\), the value \(Kc^*(y)\) is the \(c\) that solves

\[
u'(c) = \beta \int (u' \circ c^*)(f(y - c) z) f'(y - c) z \phi(dz)
\]

In view of the Euler equation, this is exactly \(c^*(y)\)

### Is the Coleman Operator Well Defined?

In particular, is there always a unique \(c \in (0, y)\) that solves (7)?

The answer is yes, under our assumptions

For any \(\sigma \in \mathscr P\), the right side of (7)

- is continuous and strictly increasing in \(c\) on \((0, y)\)
- diverges to \(+\infty\) as \(c \uparrow y\)

The left side of (7)

- is continuous and strictly decreasing in \(c\) on \((0, y)\)
- diverges to \(+\infty\) as \(c \downarrow 0\)

Sketching these curves and using the information above will convince you that they cross exactly once as \(c\) ranges over \((0, y)\)
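If you prefer numbers to sketches, here's a toy tabulation of the two sides of (7), assuming log utility (so \(u'(c) = 1/c\)), \(f(k) = k^{0.65}\), \(\beta = 0.95\), an arbitrary interior policy \(\sigma(y) = 0.5y\) and a degenerate shock \(z \equiv 1\), all chosen purely for illustration

```
# Tabulate both sides of (7) on (0, y): the left side falls from +Inf
# while the right side rises to +Inf, so the two cross exactly once
u_prime(c) = 1 / c                   # log utility
f(k) = k^0.65                        # Cobb-Douglas production
f_prime(k) = 0.65 * k^(-0.35)
sigma(y) = 0.5 * y                   # an arbitrary interior policy
beta, y = 0.95, 2.0
lhs(c) = u_prime(c)                                        # left side of (7)
rhs(c) = beta * u_prime(sigma(f(y - c))) * f_prime(y - c)  # right side with z = 1
for c in (0.01, 0.5, 1.0, 1.5, 1.99)
    println("c = $c:  lhs = $(round(lhs(c), 3)),  rhs = $(round(rhs(c), 3))")
end
```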

With a bit more analysis, one can show in addition that \(K \sigma \in \mathscr P\) whenever \(\sigma \in \mathscr P\)

## Comparison with Value Function Iteration

How does Euler equation time iteration compare with value function iteration?

Both can be used to compute the optimal policy, but is one faster or more accurate?

There are two parts to this story

First, on a theoretical level, the two methods are essentially isomorphic

In particular, they converge at the same rate

We’ll prove this in just a moment

The other side to the story is the speed of the numerical implementation

It turns out that, once we actually implement these two routines, time iteration is faster and more accurate than value function iteration

### Equivalent Dynamics

Let’s talk about the theory first

To explain the connection between the two algorithms, it helps to understand the notion of equivalent dynamics

(This concept is very helpful in many other contexts as well)

Suppose that we have a function \(g \colon X \to X\) where \(X\) is a given set

The pair \((X, g)\) is sometimes called a **dynamical system** and we associate it with trajectories of the form

\[
x_{t+1} = g(x_t), \qquad x_0 \text{ given}
\]

Equivalently, \(x_t = g^t(x_0)\), where \(g^t\) is the \(t\)-th composition of \(g\) with itself

Here’s the picture

Now let \(h \colon Y \to Y\) be another function, where \(Y\) is another set

Suppose further that

- there exists a bijection \(\tau\) from \(X\) to \(Y\)
- the two functions **commute** under \(\tau\), which is to say that \(\tau(g(x)) = h(\tau(x))\) for all \(x \in X\)

The last statement can be written more simply as

\[
\tau \circ g = h \circ \tau
\]

or, by applying \(\tau^{-1}\) to both sides,

\[
g = \tau^{-1} \circ h \circ \tau
\tag{8}
\]
Here’s a commutative diagram that illustrates

Here’s a similar figure that traces out the action of the maps on a point \(x \in X\)

Now, it’s easy to check from (8) that \(g^2 = \tau^{-1} \circ h^2 \circ \tau\) holds, since \(g^2 = (\tau^{-1} \circ h \circ \tau) \circ (\tau^{-1} \circ h \circ \tau)\) and the interior \(\tau \circ \tau^{-1}\) cancels

In fact, if you like proofs by induction, you won’t have trouble showing that

\[
g^n = \tau^{-1} \circ h^n \circ \tau
\]

is valid for all \(n\)

What does this tell us?

It tells us that the following are equivalent

- iterate \(n\) times with \(g\), starting at \(x\)
- shift \(x\) to \(Y\) using \(\tau\), iterate \(n\) times with \(h\) starting at \(\tau(x)\), and shift the result \(h^n(\tau(x))\) back to \(X\) using \(\tau^{-1}\)

We end up with exactly the same object
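Here's a toy numerical confirmation, assuming the made-up maps \(g(x) = x/2\) on \(X = \mathbb R\), \(\tau(x) = e^x\) and \(h(y) = \sqrt{y}\) on \(Y = (0, \infty)\), which commute under \(\tau\) because \(\tau(g(x)) = e^{x/2} = \sqrt{e^x} = h(\tau(x))\)

```
# Iterating with g directly agrees with shifting to Y via tau, iterating
# with h, and shifting back via tau^{-1}
g(x) = x / 2
tau(x) = exp(x)
tau_inv(y) = log(y)
h(y) = sqrt(y)

# n-fold composition of a map with itself
iterate_map(f, x, n) = n == 0 ? x : iterate_map(f, f(x), n - 1)

x0, n = 3.0, 5
direct = iterate_map(g, x0, n)                    # g^n(x0)
roundabout = tau_inv(iterate_map(h, tau(x0), n))  # tau^{-1}(h^n(tau(x0)))
println("$direct ≈ $roundabout")                  # both equal 3.0 / 2^5
```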

### Back to Economics

Have you guessed where this is leading?

What we’re going to show now is that the operators \(T\) and \(K\) commute under a certain bijection

The implication is that they have exactly the same rate of convergence

To make life a little easier, we’ll assume in the following analysis (although not always in our applications) that \(u(0) = 0\)

#### A Bijection

Let \(\mathscr V\) be all strictly concave, continuously differentiable functions \(v\) mapping \(\mathbb R_+\) to itself and satisfying \(v(0) = 0\) and \(v'(y) > u'(y)\) for all positive \(y\)

For \(v \in \mathscr V\) let

\[
Mv(y) := (u')^{-1}(v'(y))
\]

Although we omit details, \(\sigma := M v\) is actually the unique \(v\)-greedy policy

- See proposition 12.1.18 of EDTC

It turns out that \(M\) is a bijection from \(\mathscr V\) to \(\mathscr P\)

A (solved) exercise below asks you to confirm this
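While the proof is left to the exercises, we can at least watch \(M\) work in the log utility / Cobb–Douglas case, where everything is available in closed form: \((u')^{-1}(x) = 1/x\), the true value function satisfies \((v^*)'(y) = 1/((1-\alpha\beta)y)\) (see the value function iteration lecture), and the true policy is \(c^*(y) = (1-\alpha\beta)y\), so applying \(M\) to \(v^*\) should recover \(c^*\)

```
# Verify that M v* = c* in the log/Cobb-Douglas case, where
# (Mv)(y) = (u')^{-1}(v'(y)) and (u')^{-1}(x) = 1/x
alpha, beta = 0.65, 0.95
v_star_prime(y) = 1 / ((1 - alpha * beta) * y)  # derivative of the true value function
Mv(y) = 1 / v_star_prime(y)                     # M applied to v*, evaluated at y
c_star_true(y) = (1 - alpha * beta) * y         # known optimal policy
for y in (0.5, 1.0, 2.0)
    println("y = $y:  Mv(y) = $(Mv(y)),  c*(y) = $(c_star_true(y))")
end
```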

#### Commutative Operators

It is an additional solved exercise (see below) to show that \(T\) and \(K\) commute under \(M\), in the sense that

\[
M \circ T = K \circ M
\tag{9}
\]

In view of the preceding discussion, this implies that

\[
T^n = M^{-1} \circ K^n \circ M
\]

Hence, \(T\) and \(K\) converge at exactly the same rate!

## Implementation

We’ve just shown that the operators \(T\) and \(K\) have the same rate of convergence

However, it turns out that, once numerical approximation is taken into account, significant differences arise

In particular, the image of policy functions under \(K\) can be calculated faster and with greater accuracy than the image of value functions under \(T\)

Our intuition for this result is that

- the Coleman operator exploits more information because it uses first order and envelope conditions
- policy functions generally have less curvature than value functions, and hence admit more accurate approximations based on grid point information (see the numerical check below)
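To see the second point in a concrete case (this is an illustration, not part of the argument), recall from the value function iteration lecture that in the log/Cobb–Douglas model the true policy is the linear function \(c^*(y) = (1-\alpha\beta)y\), while the true value function is, up to an additive constant, \(\ln(y)/(1-\alpha\beta)\); a crude second-difference estimate of curvature makes the contrast stark

```
# Compare the curvature of the true policy and the true value function
# in the log/Cobb-Douglas case via central second differences
alpha, beta = 0.65, 0.95
c_star_true(y) = (1 - alpha * beta) * y        # linear, so curvature is zero
v_star_true(y) = log(y) / (1 - alpha * beta)   # v* up to an additive constant
second_diff(f, y, h=0.1) = (f(y + h) - 2 * f(y) + f(y - h)) / h^2
for y in (0.5, 1.0, 2.0, 4.0)
    println("y = $y:  policy curvature ≈ $(round(second_diff(c_star_true, y), 6)),  ",
            "value curvature ≈ $(round(second_diff(v_star_true, y), 6))")
end
```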

### The Operator

Here’s some code that implements the Coleman operator

```
#=
Author: Shunsuke Hori
=#
using QuantEcon

"""
Apply the Coleman operator to the policy `g`, writing the updated policy
into `Kg`

g: input policy function (its values on the grid points)
grid: grid points
beta: discount factor
u_prime: derivative of utility function
f: production function
f_prime: derivative of production function
shocks: shock draws, used for Monte Carlo integration to compute expectations
Kg: container in which the output values are stored
"""
function coleman_operator!(g::AbstractVector,
                           grid::AbstractVector,
                           beta::AbstractFloat,
                           u_prime::Function,
                           f::Function,
                           f_prime::Function,
                           shocks::AbstractVector,
                           Kg::AbstractVector=similar(g))
    # This function requires the container for the output values, Kg, as an argument

    # === Construct linear interpolation object === #
    g_func = LinInterp(grid, g)

    # === Solve for updated consumption value at each grid point === #
    for (i, y) in enumerate(grid)
        # h(c) is the residual of (7) at consumption level c
        function h(c)
            vals = u_prime.(g_func.(f(y - c) * shocks)) .* f_prime(y - c) .* shocks
            return u_prime(c) - beta * mean(vals)
        end
        # Find the root of the residual on (0, y)
        Kg[i] = brent(h, 1e-10, y - 1e-10)
    end
    return Kg
end

# The following method does NOT require the output container as an argument
function coleman_operator(g::AbstractVector,
                          grid::AbstractVector,
                          beta::AbstractFloat,
                          u_prime::Function,
                          f::Function,
                          f_prime::Function,
                          shocks::AbstractVector)
    return coleman_operator!(g, grid, beta, u_prime,
                             f, f_prime, shocks, similar(g))
end
```

It has some similarities to the code for the Bellman operator in our optimal growth lecture

For example, it evaluates integrals by Monte Carlo and approximates functions using linear interpolation
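If these tools are new to you, here's a minimal sketch of the two ingredients in isolation, using QuantEcon's `LinInterp` and some throwaway sample values

```
using QuantEcon

# Linear interpolation: build a callable interpolant from values on a grid
xs = collect(linspace(1e-6, 4.0, 10))
ws = sqrt.(xs)                # some function sampled on the grid
w_func = LinInterp(xs, ws)
println(w_func(2.5))          # evaluate between grid points

# Monte Carlo integration: estimate E[w(z)] for lognormal z by averaging
z = exp.(0.1 * randn(1000))
println(mean(w_func.(z)))     # ≈ E[w(z)]
```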

Here’s that Bellman operator code again, which needs to be executed because we’ll use it in some tests below

```
#=
@authors : Spencer Lyon, John Stachurski
=#
using QuantEcon
using Optim

"""
The approximate Bellman operator, which computes and returns the
updated value function Tw on the grid points. An array to store
the new set of values Tw is optionally supplied (to avoid having to
allocate new arrays at each iteration). If supplied, any existing data in
Tw will be overwritten.

#### Arguments

`w` : Vector
    The value of the input function on different grid points
`grid` : Vector
    The set of grid points
`beta` : AbstractFloat
    The discount factor
`u` : Function
    The utility function
`f` : Function
    The production function
`shocks` : Vector
    An array of draws from the shock, for Monte Carlo integration (to
    compute expectations)
`Tw` : Vector, optional (default=similar(w))
    Array to write output values to
`compute_policy` : Bool, optional (default=false)
    Whether or not to compute the policy function
"""
function bellman_operator(w::Vector,
                          grid::Vector,
                          beta::AbstractFloat,
                          u::Function,
                          f::Function,
                          shocks::Vector,
                          Tw::Vector = similar(w);
                          compute_policy::Bool = false)

    # === Apply linear interpolation to w === #
    w_func = LinInterp(grid, w)

    if compute_policy
        sigma = similar(w)
    end

    # == set Tw[i] = max_c { u(c) + beta E w(f(y - c) z)} == #
    for (i, y) in enumerate(grid)
        objective(c) = - u(c) - beta * mean(w_func.(f(y - c) .* shocks))
        res = optimize(objective, 1e-10, y)
        if compute_policy
            sigma[i] = res.minimizer
        end
        Tw[i] = - res.minimum
    end

    if compute_policy
        return Tw, sigma
    else
        return Tw
    end
end
```

### Testing on the Log / Cobb–Douglas case

As we did for value function iteration, let’s start by testing our method in the presence of a model that does have an analytical solution

Here’s a struct containing data from the log-linear growth model we used in the value function iteration lecture

```
#=
Author: Shunsuke Hori
=#
struct Model{TF <: AbstractFloat, TR <: Real, TI <: Integer}
    alpha::TR         # Productivity parameter
    beta::TF          # Discount factor
    gamma::TR         # Risk aversion
    mu::TR            # First parameter in lognormal(mu, s)
    s::TR             # Second parameter in lognormal(mu, s)
    grid_min::TR      # Smallest grid point
    grid_max::TR      # Largest grid point
    grid_size::TI     # Number of grid points
    u::Function       # Utility function
    u_prime::Function # Derivative of utility function
    f::Function       # Production function
    f_prime::Function # Derivative of production function
    grid::Vector{TR}  # Grid
end

"""
Construct a Model instance from parameter values and functional forms

arguments: see above
return: Model instance
"""
function Model(; alpha::Real = 0.65,          # Productivity parameter
                 beta::AbstractFloat = 0.95,  # Discount factor
                 gamma::Real = 1.0,           # Risk aversion
                 mu::Real = 0.0,              # First parameter in lognormal(mu, s)
                 s::Real = 0.1,               # Second parameter in lognormal(mu, s)
                 grid_min::Real = 1e-6,       # Smallest grid point
                 grid_max::Real = 4.0,        # Largest grid point
                 grid_size::Integer = 200,    # Number of grid points
                 u::Function = c -> (c^(1 - gamma) - 1) / (1 - gamma),  # utility
                 u_prime::Function = c -> c^(-gamma),                   # u'
                 f::Function = k -> k^alpha,                            # production
                 f_prime::Function = k -> alpha * k^(alpha - 1))        # f'
    grid = collect(linspace(grid_min, grid_max, grid_size))
    if gamma == 1  # when gamma == 1, log utility is assigned
        u_log(c) = log(c)
        m = Model(alpha, beta, gamma, mu, s, grid_min, grid_max,
                  grid_size, u_log, u_prime, f, f_prime, grid)
    else
        m = Model(alpha, beta, gamma, mu, s, grid_min, grid_max,
                  grid_size, u, u_prime, f, f_prime, grid)
    end
    return m
end
```

Next we generate an instance

```
m = Model(gamma=1.0) # model instance with specific parameter
```

We also need some shock draws for Monte Carlo integration

```
shock_size = 250 # Number of shock draws in Monte Carlo integral
shocks = collect(exp.(m.mu + m.s * randn(shock_size))) # generate shocks
```

As a preliminary test, let’s see if \(K c^* = c^*\), as implied by the theory

```
using PyPlot

"""
Plot a candidate policy function and its image under the Coleman operator;
if the two coincide, the candidate is (approximately) the true policy

m: instance of Model type, storing all parameters
shocks: shock draws for Monte Carlo integration
c_star: candidate for the true policy function
"""
function verify_true_policy(m::Model,
                            shocks::AbstractVector,
                            c_star::AbstractVector)
    # === Compute Kc* === #
    c_star_new = coleman_operator(c_star,
                                  m.grid, m.beta, m.u_prime,
                                  m.f, m.f_prime, shocks)
    # === Plot c* and Kc* === #
    fig, ax = subplots()
    ax[:plot](m.grid, c_star, label=L"optimal policy $c^*$")
    ax[:plot](m.grid, c_star_new, label=L"$Kc^*$")
    ax[:legend](loc="upper left")
end
```

```
c_star = (1 - m.alpha * m.beta) * m.grid  # True policy (c^*)
verify_true_policy(m, shocks, c_star)
```

Here’s the result:

We can’t really distinguish the two plots, so we are looking good, at least for this test

Next let’s try iterating from an arbitrary initial condition and see if we converge towards \(c^*\)

The initial condition we’ll use is the one that eats the whole pie: \(c(y) = y\)

```
"""
This function plots the resulting policy function of each iteration and
compares it with the true one
m: instance of Model type, storing all parameters
shocks:: shocks for Monte Carlo integration
c_star: true policy function
n_iter: number of iterations
g_init: initial policy function
"""
function check_convergence(m::Model,
shocks::AbstractVector,
c_star::AbstractVector,
g_init::AbstractVector;
n_iter::Integer=15)
fig, ax = subplots(figsize=(9, 6))
jet = ColorMap("jet")
g=g_init;
ax[:plot](m.grid, g, color=jet(0), lw=2,
alpha=0.6, label=L"initial condition $c(y) = y$")
for i = 1:n_iter
new_g = coleman_operator(g, m.grid, m.beta, m.u_prime,
m.f, m.f_prime, shocks)
g = new_g
ax[:plot](m.grid, g, color=jet(i / n_iter), lw=2, alpha=0.6)
end
ax[:plot](m.grid, c_star, "k-", lw=2, alpha=0.8,
label=L"true policy function $c^*$")
ax[:legend](loc="upper left")
end
```

```
check_convergence(m, shocks, c_star, m.grid, n_iter=15)
```

We see that the policy has converged nicely, in only a few steps

Now let’s compare the accuracy of iteration using the Coleman and Bellman operators

We’ll generate

- \(K^n c\) where \(c(y) = y\)
- \((M \circ T^n \circ M^{-1}) c\) where \(c(y) = y\)

In each case we’ll compare the resulting policy to \(c^*\)

The theory on equivalent dynamics says we will get the same policy function and hence the same errors

But in fact we expect the first method to be more accurate for reasons discussed above

```
"""
This function updates the argument of the function by its output
func: function whose argument is updated by its return
arg_init: initial argument of the function
sim_length: number of iteration
"""
function iterate_updating(func::Function,
arg_init::AbstractVector;
sim_length::Integer=20)
arg=arg_init;
for i=1:sim_length
new_arg=func(arg)
arg=new_arg
end
return arg
end
"""
This function compares the gap between true policy and policy function
derived from policy function iteration and value function iteration
m: Model instance
shocks: shocks for Monte Carlo integration
g_init: initial policy function
w_init: initial value function
sim_length: number of iterations
"""
function compare_error(m::Model,
shocks::AbstractVector,
g_init::AbstractVector,
w_init::AbstractVector;
sim_length::Integer=20)
g, w = g_init, w_init
## two functions for simplification
bellman_single_arg(w) = bellman_operator(w, m.grid, m.beta, m.u,
m.f, shocks)
coleman_single_arg(g) = coleman_operator(g, m.grid, m.beta, m.u_prime,
m.f, m.f_prime, shocks)
g=iterate_updating(coleman_single_arg,m.grid,sim_length=20)
w=iterate_updating(bellman_single_arg,m.u.(m.grid),sim_length=20)
new_w, vf_g = bellman_operator(w, m.grid, m.beta, m.u,
m.f, shocks, compute_policy=true)
pf_error = c_star - g
vf_error = c_star - vf_g
fig, ax = subplots()
ax[:plot](m.grid, 0 * m.grid, "k-", lw=1)
ax[:plot](m.grid, pf_error, lw=2, alpha=0.6, label="policy iteration error")
ax[:plot](m.grid, vf_error, lw=2, alpha=0.6, label="value iteration error")
ax[:legend](loc="lower left")
end
```

```
compare_error(m, shocks, m.grid, m.u.(m.grid), sim_length=20)
```

Here’s the result, which shows the errors in each case

As you can see, time iteration is much more accurate for a given number of iterations

## Exercises

### Exercise 1

Show that (9) is valid. In particular,

- Let \(v\) be strictly concave and continuously differentiable on \((0, \infty)\)
- Fix \(y \in (0, \infty)\) and show that \(MTv(y) = KMv(y)\)

### Exercise 2

Show that \(M\) is a bijection from \(\mathscr V\) to \(\mathscr P\)

### Exercise 3

Consider the same model as above but with the CRRA utility function

\[
u(c) = \frac{c^{1 - \gamma} - 1}{1 - \gamma}
\]

Iterate 20 times with Bellman iteration and Euler equation time iteration

- start time iteration from \(c(y) = y\)
- start value function iteration from \(v(y) = u(y)\)
- set \(\gamma = 1.5\)

Compare the resulting policies and check that they are close

### Exercise 4

Do the same exercise, but now, rather than plotting results, time how long 20 iterations takes in each case

## Solutions

### Solution to Exercise 1

Let \(T, K, M, v\) and \(y\) be as stated in the exercise

Using the envelope theorem, one can show that \((Tv)'(y) = u'(c(y))\) where \(c(y)\) solves

\[
u'(c(y)) = \beta \int v'(f(y - c(y)) z) f'(y - c(y)) z \phi(dz)
\]

Hence \(MTv(y) = (u')^{-1} (u'(c(y))) = c(y)\)

On the other hand, \(KMv(y)\) is the \(c(y)\) that solves

\[
u'(c(y))
= \beta \int (u' \circ (Mv))(f(y - c(y)) z) f'(y - c(y)) z \phi(dz)
= \beta \int v'(f(y - c(y)) z) f'(y - c(y)) z \phi(dz)
\]

where the second equality uses \(u' \circ (Mv) = u' \circ (u')^{-1} \circ v' = v'\)

We see that \(c(y)\) is the same in each case

### Solution to Exercise 2

We need to show that \(M\) is a bijection from \(\mathscr V\) to \(\mathscr P\)

To see this, first observe that, in view of our assumptions above, \(u'\) is a strictly decreasing continuous bijection from \((0,\infty)\) to itself

It follows that \(h := (u')^{-1}\) has the same properties

Moreover, for fixed \(v \in \mathscr V\), the derivative \(v'\) is a continuous, strictly decreasing function

Hence, for fixed \(v \in \mathscr V\), the map \(M v = h \circ v'\) is strictly increasing and continuous, taking values in \((0, \infty)\)

Moreover, interiority holds because \(v'\) strictly dominates \(u'\), implying that

\[
(Mv)(y) = h(v'(y)) < h(u'(y)) = y \qquad \text{for all positive } y
\]

In particular, \(\sigma(y) := (Mv)(y)\) is an element of \(\mathscr P\)

To see that each \(\sigma \in \mathscr P\) has a preimage \(v \in \mathscr V\) with \(Mv = \sigma\), fix any \(\sigma \in \mathscr P\)

Let \(v(y) := \int_0^y u'(\sigma(x)) dx\) with \(v(0) = 0\)

With a small amount of effort you will be able to show that \(v \in \mathscr V\) and \(Mv = \sigma\)

It’s also true that \(M\) is one-to-one on \(\mathscr V\)

To see this, suppose that \(v\) and \(w\) are elements of \(\mathscr V\) satisfying \(Mv = Mw\)

Then \(v(0) = w(0) = 0\) and \(v' = w'\) on \((0, \infty)\)

The fundamental theorem of calculus then implies that \(v = w\) on \(\mathbb R_+\)

### Solution to Exercise 3

Here’s the code, which will execute if you’ve run all the code above

```
# Model instance with risk aversion (gamma) = 1.5
# other parameters are the same as in the previous instance
m_ex = Model(gamma=1.5)
```

```
"""
This function conducts the analysis of exercise 2
m: instance of Model type, storing all parameters
shocks: shocks for Monte Carlo integration
g_init: initial policy
w_init: initial value
sim_length: number of iterations
"""
function exercise2(m::Model,
shocks::AbstractVector,
g_init::AbstractVector=m.grid,
w_init::AbstractVector=m.u.(m.grid);
sim_length::Integer=20)
# initial policy and value
g, w = g_init, w_init
# iteration
bellman_single_arg(w) = bellman_operator(w, m.grid, m.beta, m.u,
m.f, shocks)
coleman_single_arg(g) = coleman_operator(g, m.grid, m.beta, m.u_prime,
m.f, m.f_prime, shocks)
g=iterate_updating(coleman_single_arg, m.grid, sim_length=20)
w=iterate_updating(bellman_single_arg, m.u.(m.grid), sim_length=20)
new_w, vf_g = bellman_operator(w, m.grid, m.beta, m.u,
m.f, shocks, compute_policy=true)
fig, ax = subplots()
ax[:plot](m.grid, g, lw=2, alpha=0.6, label="policy iteration")
ax[:plot](m.grid, vf_g, lw=2, alpha=0.6, label="value iteration")
ax[:legend](loc="upper left")
end
```

```
exercise3(m_ex, shocks, m_ex.grid, m_ex.u.(m_ex.grid), sim_length=20)
```

Here’s the resulting figure

The policies are indeed close

### Solution to Exercise 4

Here’s the code

It assumes that you’ve just run the code from the previous exercise

```
"""
This function implements the exercise 3
m: Model instance
shocks: shocks for Monte Carlo integration
"""
function exercise3(m::Model, shocks::AbstractVector)
bellman_single_arg(w) = bellman_operator(w, m.grid, m.beta, m.u,
m.f, shocks)
coleman_single_arg(g) = coleman_operator(g, m.grid, m.beta, m.u_prime,
m.f, m.f_prime, shocks)
println("Timing value function iteration")
@time iterate_updating(bellman_single_arg,m.u.(m.grid),sim_length=20)
println("Timing Coleman policy function iteration")
@time iterate_updating(coleman_single_arg,m.grid,sim_length=20)
return nothing
end
```

```
exercise4(m_ex, shocks)
```

```
Timing value function iteration
0.688099 seconds (4.08 M allocations: 367.814 MB, 4.53% gc time)
Timing Coleman policy function iteration
20.729765 seconds (256.97 M allocations: 6.887 GB, 2.37% gc time)
```