The Endogenous Grid Method

Overview

We solved the stochastic optimal growth model using

  1. value function iteration
  2. Euler equation based time iteration

We found time iteration to be significantly more accurate at each step

In this lecture we’ll look at an ingenious twist on the time iteration technique called the endogenous grid method (EGM)

EGM is a numerical method for implementing policy iteration invented by Chris Carroll

It is a good example of how a clever algorithm can save a massive amount of computer time

(Massive when we multiply saved CPU cycles on each implementation times the number of implementations worldwide)

The original reference is [Car06]

Key Idea

Let’s start by reminding ourselves of the theory and then see how the numerics fit in

Theory

Take the model set out in the time iteration lecture, following the same terminology and notation

The Euler equation is

(1)\[(u'\circ c^*)(y) = \beta \int (u'\circ c^*)(f(y - c^*(y)) z) f'(y - c^*(y)) z \phi(dz)\]

As we saw, the Coleman operator is a nonlinear operator \(K\) engineered so that \(c^*\) is a fixed point of \(K\)

It takes as its argument a continuous strictly increasing consumption policy \(g \in \Sigma\)

It returns a new function \(Kg\), where \((Kg)(y)\) is the \(c \in (0, \infty)\) that solves

(2)\[u'(c) = \beta \int (u' \circ g) (f(y - c) z ) f'(y - c) z \phi(dz)\]

Exogenous Grid

As discussed in the lecture on time iteration, to implement the method on a computer we need numerical approximation

In particular, we represent a policy function by a set of values on a finite grid

The function itself is reconstructed from this representation when necessary, using interpolation or some other method
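As a concrete illustration, here is a minimal sketch of this grid-plus-interpolation representation; the grid bounds and the linear "policy" are arbitrary values chosen purely for illustration, while LinInterp is the QuantEcon interpolation routine used in the implementation below

using QuantEcon

# A finite grid of income points and the values of some policy on that grid
grid = collect(linspace(1e-6, 4.0, 200))
c_vals = 0.6 * grid          # arbitrary policy values, purely for illustration

# Reconstruct the policy as a function via linear interpolation
c_approx = LinInterp(grid, c_vals)
c_approx(2.5)                # evaluate the approximation off the grid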

Previously, to obtain a finite representation of an updated consumption policy we

  • fixed a grid of income points \(\{y_i\}\)
  • calculated the consumption value \(c_i\) corresponding to each \(y_i\) using (2) and a root finding routine

Each \(c_i\) is then interpreted as the value of the function \(K g\) at \(y_i\)

Thus, with the points \(\{y_i, c_i\}\) in hand, we can reconstruct \(Kg\) via approximation

Iteration then continues...
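To see where the cost comes from, here is a minimal sketch of this update for a single income point \(y_i\); the utility and production primitives, the policy guess and the parameter values are illustrative stand-ins rather than the lecture's model, but the structure is the same: one root-finding call per grid point, each requiring many evaluations of the integrand

using QuantEcon   # provides the `brent` root finder used below

u_prime(c) = 1 / c                 # log utility, for illustration
f(k) = k^0.65                      # Cobb-Douglas production, illustrative exponent
f_prime(k) = 0.65 * k^(-0.35)
g(y) = 0.5 * y                     # some current policy guess
beta = 0.95
shocks = exp.(0.1 * randn(250))    # lognormal shock draws for Monte Carlo integration
y_i = 2.0                          # a single exogenous income grid point

# h(c) = 0 is the grid-point version of equation (2)
h(c) = u_prime(c) - beta * mean(u_prime.(g.(f(y_i - c) * shocks)) .* f_prime(y_i - c) .* shocks)

c_i = brent(h, 1e-10, y_i - 1e-10)   # one costly root-finding call per grid point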

Endogenous Grid

The method discussed above requires a root finding routine to find the \(c_i\) corresponding to a given income value \(y_i\)

Root finding is costly because it typically involves a significant number of function evaluations

As pointed out by Carroll [Car06], we can avoid this if \(y_i\) is chosen endogenously

The only assumption required is that \(u'\) is invertible on \((0, \infty)\)

The idea is this:

First we fix an exogenous grid \(\{k_i\}\) for capital (\(k = y - c\))

Then we obtain \(c_i\) via

(3)\[c_i = (u')^{-1} \left\{ \beta \int (u' \circ g) (f(k_i) z ) \, f'(k_i) \, z \, \phi(dz) \right\}\]

where \((u')^{-1}\) is the inverse function of \(u'\)
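For example, with the CRRA utility \(u(c) = (c^{1-\gamma} - 1)/(1 - \gamma)\) used later in this lecture, \(u'(c) = c^{-\gamma}\) and hence \((u')^{-1}(x) = x^{-1/\gamma}\); in the log case \(\gamma = 1\), \(u'\) is its own inverse since \(u'(c) = 1/c\)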

Finally, for each \(c_i\) we set \(y_i = c_i + k_i\)

It is clear that each \((y_i, c_i)\) pair constructed in this manner satisfies (2)

With the points \(\{y_i, c_i\}\) in hand, we can reconstruct \(Kg\) via approximation as before

The name EGM comes from the fact that the grid \(\{y_i\}\) is determined endogenously

Implementation

Let’s implement this version of the Coleman operator and see how it performs

The Operator

Here’s an implementation of \(K\) using EGM as described above

#=

Authors: Shunsuke Hori

=#

using QuantEcon

"""
The approximate Coleman operator, updated using the endogenous grid
method.

Parameters
----------
g : Function
    The current guess of the policy function
k_grid : AbstractVector
    The set of *exogenous* grid points, for capital k = y - c
beta : AbstractFloat
    The discount factor
u_prime : Function
    The derivative u'(c) of the utility function
u_prime_inv : Function
    The inverse of u' (which exists by assumption)
f : Function
    The production function f(k)
f_prime : Function
    The derivative f'(k)
shocks : AbstractVector
    An array of draws from the shock, for Monte Carlo integration (to
    compute expectations).
"""

function coleman_egm(g::Function,
                     k_grid::AbstractVector,
                     beta::AbstractFloat,
                     u_prime::Function,
                     u_prime_inv::Function,
                     f::Function,
                     f_prime::Function,
                     shocks::AbstractVector)

    # Allocate memory for value of consumption on endogenous grid points
    c = similar(k_grid)

    # Solve for updated consumption value
    for (i, k) in enumerate(k_grid)
        vals = u_prime.(g.(f(k) * shocks)) .* f_prime(k) .* shocks
        c[i] = u_prime_inv(beta * mean(vals))
    end

    # Determine endogenous grid
    y = k_grid + c  # y_i = k_i + c_i

    # Update policy function and return
    # (wrap the interpolant in a generic function so that it can be passed
    # back into coleman_egm, whose argument g is annotated ::Function)
    Kg = LinInterp(y, c)
    Kg_f(x) = Kg(x)
    return Kg_f
end

Note the lack of any root finding algorithm

We’ll also run our original implementation, which uses an exogenous grid and requires root finding, so we can perform some comparisons

#=

Author: Shunsuke Hori

=#

using QuantEcon

"""
g: input policy function
grid: grid points
beta: discount factor
u_prime: derivative of utility function
f: production function
f_prime: derivative of production function
shocks: shock draws, used for Monte Carlo integration to compute expectations
Kg: container in which the updated policy values are stored
"""
function coleman_operator!(g::AbstractVector,
                           grid::AbstractVector,
                           beta::AbstractFloat,
                           u_prime::Function,
                           f::Function,
                           f_prime::Function,
                           shocks::AbstractVector,
                           Kg::AbstractVector=similar(g))

    # This function requires the container of the output value as argument Kg

    # Construct linear interpolation object #
    g_func=LinInterp(grid, g)

    # solve for updated consumption value #
    for (i,y) in enumerate(grid)
        function h(c)
            vals = u_prime.(g_func.(f(y - c)*shocks)).*f_prime(y - c).*shocks
            return u_prime(c) - beta * mean(vals)
        end
        Kg[i] = brent(h, 1e-10, y-1e-10)
    end
    return Kg
end

# The following function does NOT require the container of the output value as argument
function coleman_operator(g::AbstractVector,
                          grid::AbstractVector,
                          beta::AbstractFloat,
                          u_prime::Function,
                          f::Function,
                          f_prime::Function,
                          shocks::AbstractVector)

    return coleman_operator!(g, grid, beta, u_prime,
                             f, f_prime, shocks, similar(g))
end

Let’s test out the code above on some example parameterizations, after the following imports

using QuantEcon
using PyPlot
using LaTeXStrings

Testing on the Log / Cobb–Douglas case

As we did for value function iteration and time iteration, let’s start by testing our method with the log-linear benchmark

The first step is to bring in the model that we used in the Coleman policy function iteration

#=

Author: Shunsuke Hori

=#

struct Model{TF <: AbstractFloat, TR <: Real, TI <: Integer}
    alpha::TR    # Productivity parameter
    beta::TF     # Discount factor
    gamma::TR    # risk aversion
    mu::TR       # First parameter in lognorm(mu, sigma)
    s::TR        # Second parameter in lognorm(mu, sigma)
    grid_min::TR # Smallest grid point
    grid_max::TR # Largest grid point
    grid_size::TI # Number of grid points
    u::Function       # utility function
    u_prime::Function # derivative of utility function
    f::Function       # production function
    f_prime::Function # derivative of production function
    grid::Vector{TR}   # grid
end

"""
Construct a Model instance from the given parameters and functional forms

arguments: see above

return: Model type instance
"""
function Model(; alpha::Real=0.65,   # Productivity parameter
                 beta::AbstractFloat=0.95,    # Discount factor
                 gamma::Real=1.0,    # risk aversion
                 mu::Real=0.0,       # First parameter in lognorm(mu, sigma)
                 s::Real=0.1,        # Second parameter in lognorm(mu, sigma)
                 grid_min::Real=1e-6,# Smallest grid point
                 grid_max::Real=4.0, # Largest grid point
                 grid_size::Integer=200, # Number of grid points
                 u::Function= c->(c^(1-gamma)-1)/(1-gamma), # utility function
                 u_prime::Function = c-> c^(-gamma),        # u'
                 f::Function = k-> k^alpha,                 # production function
                 f_prime::Function = k -> alpha*k^(alpha-1) # f'
                 )

    grid=collect(linspace(grid_min, grid_max, grid_size))

    if gamma == 1   # when gamma==1, log utility is assigned
        u_log(c) = log(c)
        m = Model(alpha, beta, gamma, mu, s, grid_min, grid_max,
                grid_size, u_log, u_prime, f, f_prime, grid)
    else
        m = Model(alpha, beta, gamma, mu, s, grid_min, grid_max,
                  grid_size, u, u_prime, f, f_prime, grid)
    end
    return m
end

Next we generate an instance

mlog = Model(gamma=1.0) # Log Linear model

We also need some shock draws for Monte Carlo integration

shock_size = 250     # Number of shock draws in Monte Carlo integral
shocks = exp.(mlog.mu + mlog.s * randn(shock_size))

As a preliminary test, let’s see if \(K c^* = c^*\), as implied by the theory

c_star(y)=(1 - mlog.alpha * mlog.beta) * y

# == Some useful constants == #
ab = mlog.alpha * mlog.beta
c1 = log(1 - ab) / (1 - mlog.beta)
c2 = (mlog.mu + mlog.alpha * log(ab)) / (1 - mlog.alpha)
c3 = 1 / (1 - mlog.beta)
c4 = 1 / (1 - ab)

v_star(y)=c1 + c2 * (c3 - c4) + c4 * log(y)
function verify_true_policy(m::Model,
                            shocks::AbstractVector,
                            c_star::Function)
    k_grid = m.grid
    c_star_new = coleman_egm(c_star, k_grid, m.beta, m.u_prime,
                             m.u_prime, m.f, m.f_prime, shocks)
    fig, ax = subplots(figsize=(9, 6))

    ax[:plot](k_grid, c_star.(k_grid), label=L"optimal policy $c^*$")
    ax[:plot](k_grid, c_star_new.(k_grid), label=L"$Kc^*$")
    ax[:legend](loc="upper left")
end
verify_true_policy(mlog,shocks,c_star)

Notice that we’re passing u_prime to coleman_egm twice

The reason is that, in the case of log utility, \(u'(c) = (u')^{-1}(c) = 1/c\)

Hence u_prime and u_prime_inv are the same

In any case, here’s the result:

../_images/egm_k_to_c.png

We can’t really distinguish the two plots

In fact it’s easy to see that the difference is essentially zero:

c_star_new = coleman_egm(c_star,
            mlog.grid, mlog.beta, mlog.u_prime, mlog.u_prime,
            mlog.f, mlog.f_prime, shocks)
maximum(abs, c_star_new.(mlog.grid) - c_star.(mlog.grid))
9.881666666666682e-6

Next let’s try iterating from an arbitrary initial condition and see if we converge towards \(c^*\)

Let’s start from the consumption policy that eats the whole pie: \(c(y) = y\)

g_init(x)=x
n=15
function check_convergence(m::Model,
                           shocks::AbstractVector,
                           c_star::Function,
                           g_init::Function,
                           n_iter::Integer)
    k_grid = m.grid
    g = g_init
    fig, ax = subplots(figsize=(9, 6))
    jet = ColorMap("jet")
    plot(m.grid, g.(m.grid),
            color=jet(0), lw=2, alpha=0.6, label=L"initial condition $c(y) = y$")
    for i in 1:n_iter
        new_g = coleman_egm(g, k_grid,
                            m.beta, m.u_prime, m.u_prime, m.f, m.f_prime, shocks)
        g = new_g
        ax[:plot](k_grid,new_g.(k_grid), alpha=0.6, color=jet(i / n_iter), lw=2)
    end

    ax[:plot](k_grid, c_star.(k_grid),
                "k-", lw=2, alpha=0.8, label= L"true policy function $c^*$")
    ax[:legend](loc="upper left")
end
check_convergence(mlog,shocks,c_star,g_init,n)

We see that the policy has converged nicely, in only a few steps

../_images/egm_pol_seq.png

Speed

Now let’s compare the wall-clock time of the standard Coleman operator (with exogenous grid) and the EGM version, applying each for the same number of iterations

We’ll do so using the CRRA model adopted in the exercises of the Euler equation time iteration lecture

Here’s the model and some convenient functions

mcrra = Model(alpha = 0.65, beta = 0.95, gamma = 1.5)
u_prime_inv(c) = c^(-1 / mcrra.gamma)

Here’s the result

function compare_clock_time(m::Model,
                            u_prime_inv::Function;
                            sim_length::Integer=20)
    k_grid=m.grid
    crra_coleman(g) = coleman_operator(g, k_grid,
                            m.beta, m.u_prime, m.f, m.f_prime, shocks)
    crra_coleman_egm(g) = coleman_egm(g, k_grid,
                            m.beta, m.u_prime, u_prime_inv, m.f, m.f_prime, shocks)

    print("Timing standard Coleman policy function iteration")
    g_init = k_grid
    g = g_init
    @time begin
        for i in 1:sim_length
            new_g = crra_coleman(g)
            g = new_g
        end
    end

    print("Timing policy function iteration with endogenous grid")
    g_init_egm(x) = x
    g = g_init_egm
    @time begin
        for i in 1:sim_length
            new_g = crra_coleman_egm(g)
            g = new_g
        end
    end
    return nothing
end
compare_clock_time(mcrra, u_prime_inv, sim_length=20)
Timing standard Coleman policy function iteration  7.314275 seconds (53.48 M allocations: 1.227 GB, 1.77% gc time)
Timing policy function iteration with endogenous grid  0.776051 seconds (5.24 M allocations: 116.309 MB, 1.39% gc time)

We see that the EGM version is more than 9 times faster

At the same time, the absence of numerical root finding means that it is typically more accurate at each step as well, since each \(c_i\) in (3) is computed by direct inversion of \(u'\) rather than to the tolerance of a root-finding routine