How to read this lecture...

Code should execute sequentially if run in a Jupyter notebook

The Income Fluctuation Problem

Overview

Next we study an optimal savings problem for an infinitely lived consumer—the “common ancestor” described in [LS12], section 1.3

This is an important sub-problem for many representative macroeconomic models

Our presentation of the model will be relatively brief

  • For further details on economic intuition, implications and models, see [LS12]
  • Proofs of all mathematical results stated below can be found in this paper

To solve the model we will use Euler equation based time iteration, similar to this lecture

  • Globally convergent under mild assumptions, even when utility is unbounded (both above and below)
  • Numerically, faster and more efficient than value function iteration, at least for this model

References

Useful references include [Dea91], [DH10], [Kuh13], [Rab02], [Rei09] and [SE77]

The Optimal Savings Problem

Consider a household that chooses a state-contingent consumption plan \(\{c_t\}_{t \geq 0}\) to maximize

\[\mathbb{E} \, \sum_{t=0}^{\infty} \beta^t u(c_t)\]

subject to

(1)\[c_t + a_{t+1} \leq Ra_t + z_t, \qquad c_t \geq 0, \qquad a_t \geq -b \qquad t = 0, 1, \ldots\]

Here

  • \(\beta \in (0,1)\) is the discount factor
  • \(a_t\) is asset holdings at time \(t\), with ad-hoc borrowing constraint \(a_t \geq -b\)
  • \(c_t\) is consumption
  • \(z_t\) is non-capital income (wages, unemployment compensation, etc.)
  • \(R := 1 + r\), where \(r > 0\) is the interest rate on savings

Assumptions

  1. \(\{z_t\}\) is a finite Markov process with Markov matrix \(\Pi\) taking values in \(Z\)
  2. \(|Z| < \infty\) and \(Z \subset (0,\infty)\)
  3. \(r > 0\) and \(\beta R < 1\)
  4. \(u\) is smooth, strictly increasing and strictly concave with \(\lim_{c \to 0} u'(c) = \infty\) and \(\lim_{c \to \infty} u'(c) = 0\)

The asset space is \([-b, \infty)\) and the state is the pair \((a,z) \in S := [-b,\infty) \times Z\)

A feasible consumption path from \((a,z) \in S\) is a consumption sequence \(\{c_t\}\) such that \(\{c_t\}\) and its induced asset path \(\{a_t\}\) satisfy

  1. \((a_0, z_0) = (a, z)\)
  2. the feasibility constraints in (1), and
  3. measurability of \(c_t\) w.r.t. the filtration generated by \(\{z_1, \ldots, z_t\}\)

The meaning of the third point is just that consumption at time \(t\) can only be a function of outcomes that have already been observed

The value function \(V \colon S \to \mathbb{R}\) is defined by

(2)\[V(a, z) := \sup \, \mathbb{E} \left\{ \sum_{t=0}^{\infty} \beta^t u(c_t) \right\}\]

where the supremum is over all feasible consumption paths from \((a,z)\).

An optimal consumption path from \((a,z)\) is a feasible consumption path from \((a,z)\) that attains the supremum in (2)

Given our assumptions, it is known that

  1. For each \((a,z) \in S\), a unique optimal consumption path from \((a,z)\) exists
  2. This path is the unique feasible path from \((a,z)\) satisfying the Euler equality
(3)\[u' (c_t) = \max \left\{ \beta R \, \mathbb{E}_t [ u'(c_{t+1}) ] \,,\; u'(Ra_t + z_t + b) \right\}\]

and the transversality condition

(4)\[\lim_{t \to \infty} \beta^t \, \mathbb{E} \, [ u'(c_t) a_{t+1} ] = 0.\]

Moreover, there exists an optimal consumption function \(c^* \colon S \to [0, \infty)\) such that the path from \((a,z)\) generated by

\[(a_0, z_0) = (a, z), \quad z_{t+1} \sim \Pi(z_t, dy), \quad c_t = c^*(a_t, z_t) \quad \text{and} \quad a_{t+1} = R a_t + z_t - c_t\]

satisfies both (3) and (4), and hence is the unique optimal path from \((a,z)\)

In summary, to solve the optimization problem, we need to compute \(c^*\)
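Once \(c^*\) is known, generating the optimal path is just a matter of applying the recursion in the last display. The following is a minimal sketch of that step, assuming a hypothetical policy function cstar(a, z) and a pre-drawn path z_seq of income indices (both names are for illustration only; a full version appears in the exercise solutions below)

function simulate_path(cstar, z_seq, z_vals, R, a0)
    # Generate consumption and asset paths from a given policy function
    T = length(z_seq)
    a = zeros(T + 1)
    a[1] = a0
    c = zeros(T)
    for t in 1:T
        z = z_vals[z_seq[t]]
        c[t] = cstar(a[t], z)           # consume according to the policy
        a[t+1] = R * a[t] + z - c[t]    # budget constraint holding with equality
    end
    return a, c
end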

Computation

There are two standard ways to solve for \(c^*\)

  1. Value function iteration (VFI)
  2. Time iteration, sometimes called policy function iteration (PFI), using the Euler equality

Policy function iteration

We can rewrite (3) to make it a statement about functions rather than random variables

In particular, consider the functional equation

(5)\[u' \circ c \, (a, z) = \max \left\{ \gamma \int u' \circ c \, \{R a + z - c(a, z), \, \acute z\} \, \Pi(z,d \acute z) \, , \; u'(Ra + z + b) \right\}\]

where \(\gamma := \beta R\) and \(u' \circ c(s) := u'(c(s))\)

Equation (5) is a functional equation in \(c\)

In order to identify a solution, let \(\mathscr{C}\) be the set of candidate consumption functions \(c \colon S \to \mathbb R\) such that

  • each \(c \in \mathscr{C}\) is continuous and (weakly) increasing
  • \(\min Z \leq c(a,z) \leq Ra + z + b\) for all \((a,z) \in S\)

In addition, let \(K \colon \mathscr{C} \to \mathscr{C}\) be defined as follows:

For given \(c\in \mathscr{C}\), the value \(Kc(a,z)\) is the unique \(t \in J(a,z)\) that solves

(6)\[u'(t) = \max \left\{ \gamma \int u' \circ c \, \{R a + z - t, \, \acute z\} \, \Pi(z,d \acute z) \, , \; u'(Ra + z + b) \right\}\]

where

(7)\[J(a,z) := \{t \in \mathbb{R} \,:\, \min Z \leq t \leq Ra+ z + b\}\]

We refer to \(K\) as Coleman’s policy function operator [Col90]

It is known that

  • \(K\) is a contraction mapping on \(\mathscr{C}\) under the metric
\[\rho(c, d) := \| \, u' \circ c - u' \circ d \, \| := \sup_{s \in S} | \, u'(c(s)) - u'(d(s)) \, | \qquad \quad (c, d \in \mathscr{C})\]
  • The space \(\mathscr{C}\) is complete under the metric \(\rho\)
  • Convergence in \(\rho\) implies uniform convergence on compacts

In consequence, \(K\) has a unique fixed point \(c^* \in \mathscr{C}\) and \(K^n c \to c^*\) as \(n \to \infty\) for any \(c \in \mathscr{C}\)

By the definition of \(K\), the fixed points of \(K\) in \(\mathscr{C}\) coincide with the solutions to (5) in \(\mathscr{C}\)

In particular, it can be shown that the path \(\{c_t\}\) generated from \((a_0,z_0) \in S\) using policy function \(c^*\) is the unique optimal path from \((a_0,z_0) \in S\)

TL;DR The unique optimal policy can be computed by picking any \(c \in \mathscr{C}\) and iterating with the operator \(K\) defined in (6)
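In code, this iteration can be organized as follows. The sketch below monitors convergence using the metric \(\rho\) defined above, evaluated on the grid, and assumes the functions du, init_values and update_coleman from the implementation given further down

function solve_pfi(cp; tol=1e-5, max_iter=500)
    # Iterate with the Coleman operator K until successive policies are close in rho
    _, c = init_values(cp)
    for i in 1:max_iter
        c_new = update_coleman(cp, c)
        err = maximum(abs, du(c_new) - du(c))   # rho(c_new, c) restricted to the grid
        c = c_new
        err < tol && break
    end
    return c
end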

Value function iteration

The Bellman operator for this problem is given by

(8)\[Tv(a, z) = \max_{0 \leq c \leq Ra + z + b} \left\{ u(c) + \beta \int v(Ra + z - c, \acute z) \Pi(z, d \acute z) \right\}\]

We have to be careful with VFI (i.e., iterating with \(T\)) in this setting because \(u\) is not assumed to be bounded

  • In fact typically unbounded both above and below — e.g. \(u(c) = \log c\)
  • In which case, the standard DP theory does not apply
  • \(T^n v\) is not guaranteed to converge to the value function for arbitrary continuous bounded \(v\)

Nonetheless, we can always try the strategy “iterate and hope”

  • In this case we can check the outcome by comparing with PFI
  • The latter is known to converge, as described above
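As a rough sketch of that check (assuming ConsumerProblem, init_values, update_bellman, update_coleman and get_greedy from the implementation below), we can iterate both operators the same number of times and compare the greedy policy from VFI with the PFI iterate

cp = ConsumerProblem()
V, c = init_values(cp)
for i in 1:80
    V = update_bellman(cp, V)    # value function iteration
    c = update_coleman(cp, c)    # policy function iteration
end
c_vfi = get_greedy(cp, V)
maximum(abs, c_vfi - c)          # small if the two methods (approximately) agree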

Implementation

The code in ifp.jl provides implementations of both VFI and PFI

The code is repeated here and a description and clarifications are given below

#=
Tools for solving the standard optimal savings / income fluctuation
problem for an infinitely lived consumer facing an exogenous income
process that evolves according to a Markov chain.

@author : Spencer Lyon <spencer.lyon@nyu.edu>

@date: 2014-08-18

References
----------

http://quant-econ.net/jl/ifp.html

=#
using Interpolations
using Optim

# utility and marginal utility functions
u(x) = log(x)
du(x) = 1 ./ x

"""
Income fluctuation problem

##### Fields

- `r::Float64` : Strictly positive interest rate
- `R::Float64` : The interest rate plus 1 (strictly greater than 1)
- `bet::Float64` : Discount rate in (0, 1)
- `b::Float64` :  The borrowing constraint
- `Pi::Matrix{Float64}` : Transition matrix for `z`
- `z_vals::Vector{Float64}` : Levels of productivity
- `asset_grid::LinSpace{Float64}` : Grid of asset values

"""
type ConsumerProblem
    r::Float64
    R::Float64
    bet::Float64
    b::Float64
    Pi::Matrix{Float64}
    z_vals::Vector{Float64}
    asset_grid::LinSpace{Float64}
end

function ConsumerProblem(;r=0.01, bet=0.96, Pi=[0.6 0.4; 0.05 0.95],
                         z_vals=[0.5, 1.0], b=0.0, grid_max=16, grid_size=50)
    R = 1 + r
    asset_grid = linspace(-b, grid_max, grid_size)

    ConsumerProblem(r, R, bet, b, Pi, z_vals, asset_grid)
end

"""
Given a matrix of size `(length(cp.asset_grid), length(cp.z_vals))`, construct
an interpolation object that does linear interpolation in the asset dimension
and has a lookup table in the z dimension
"""
function Interpolations.interpolate(cp::ConsumerProblem, x::AbstractMatrix)
    sz = (length(cp.asset_grid), length(cp.z_vals))
    if size(x) != sz
        msg = "x must have dimensions $(sz)"
        throw(DimensionMismatch(msg))
    end

    itp = interpolate(x, (BSpline(Linear()), NoInterp()), OnGrid())
    scale(itp, cp.asset_grid, 1:sz[2])
end

"""
Apply the Bellman operator for a given model and initial value.

##### Arguments

- `cp::ConsumerProblem` : Instance of `ConsumerProblem`
- `v::Matrix`: Current guess for the value function
- `out::Matrix` : Storage for output
- `;ret_policy::Bool(false)`: Toggles return of value or policy functions

##### Returns

None, `out` is updated in place. If `ret_policy == true` out is filled with the
policy function, otherwise the value function is stored in `out`.

"""
function update_bellman!(cp::ConsumerProblem, V::Matrix, out::Matrix;
                           ret_policy::Bool=false)
    # simplify names, set up arrays
    R, Pi, bet, b = cp.R, cp.Pi, cp.bet, cp.b
    asset_grid, z_vals = cp.asset_grid, cp.z_vals

    z_idx = 1:length(z_vals)
    vf = interpolate(cp, V)

    # compute lower_bound for optimization
    opt_lb = 1e-8

    # solve for RHS of Bellman equation
    for (i_z, z) in enumerate(z_vals)
        for (i_a, a) in enumerate(asset_grid)

            function obj(c)
                y = 0.0
                for j in z_idx
                    y += vf[R*a+z-c, j] * Pi[i_z, j]
                end
                return -u(c)  - bet * y
            end
            res = optimize(obj, opt_lb, R.*a.+z.+b)
            c_star = Optim.minimizer(res)

            if ret_policy
                out[i_a, i_z] = c_star
            else
               out[i_a, i_z] = - obj(c_star)
            end

        end
    end
    out
end

update_bellman(cp::ConsumerProblem, V::Matrix; ret_policy=false) =
    update_bellman!(cp, V, similar(V); ret_policy=ret_policy)

"""
Extract the greedy policy (policy function) of the model.

##### Arguments

- `cp::ConsumerProblem` : Instance of `ConsumerProblem`
- `v::Matrix`: Current guess for the value function
- `out::Matrix` : Storage for output

##### Returns

None, `out` is updated in place to hold the policy function

"""
get_greedy!(cp::ConsumerProblem, V::Matrix, out::Matrix) =
    update_bellman!(cp, V, out, ret_policy=true)

get_greedy(cp::ConsumerProblem, V::Matrix) =
    update_bellman(cp, V, ret_policy=true)

"""
The approximate Coleman operator.

Iteration with this operator corresponds to policy function
iteration. Computes and returns the updated consumption policy
c.  The array c is replaced with a function cf that implements
univariate linear interpolation over the asset grid for each
possible value of z.

##### Arguments

- `cp::ConsumerProblem` : Instance of `ConsumerProblem`
- `c::Matrix`: Current guess for the policy function
- `out::Matrix` : Storage for output

##### Returns

None, `out` is updated in place to hold the policy function

"""
function update_coleman!(cp::ConsumerProblem, c::Matrix, out::Matrix)
    # simplify names, set up arrays
    R, Pi, bet, b = cp.R, cp.Pi, cp.bet, cp.b
    asset_grid, z_vals = cp.asset_grid, cp.z_vals
    z_size = length(z_vals)
    gam = R * bet
    vals = Array{Float64}(z_size)

    cf = interpolate(cp, c)

    # linear interpolation to get consumption function. Updates vals inplace
    cf!(a, vals) = map!(i->cf[a, i], vals, 1:z_size)

    # compute lower_bound for optimization
    opt_lb = 1e-8

    for (i_z, z) in enumerate(z_vals)
        for (i_a, a) in enumerate(asset_grid)
            function h(t)
                cf!(R*a+z-t, vals)  # update vals
                expectation = dot(du(vals), vec(Pi[i_z, :]))
                return abs(du(t) - max(gam * expectation, du(R*a+z+b)))
            end
            opt_ub = R*a + z + b  # addresses issue #8 on github
            res = optimize(h, min(opt_lb, opt_ub - 1e-2), opt_ub,
                           method=Optim.Brent())
            out[i_a, i_z] = Optim.minimizer(res)
        end
    end
    out
end


"""
Apply the Coleman operator for a given model and initial value

See the specific methods of the mutating version of this function for more
details on arguments
"""
update_coleman(cp::ConsumerProblem, c::Matrix) =
    update_coleman!(cp, c, similar(c))

function init_values(cp::ConsumerProblem)
    # simplify names, set up arrays
    R, bet, b = cp.R, cp.bet, cp.b
    asset_grid, z_vals = cp.asset_grid, cp.z_vals
    shape = length(asset_grid), length(z_vals)
    V, c = Array{Float64}(shape...), Array{Float64}(shape...)

    # Populate V and c
    for (i_z, z) in enumerate(z_vals)
        for (i_a, a) in enumerate(asset_grid)
            c_max = R*a + z + b
            c[i_a, i_z] = c_max
            V[i_a, i_z] = u(c_max) ./ (1 - bet)
        end
    end

    return V, c
end

The code contains a type called ConsumerProblem that stores all the relevant parameters of a given model

It also defines the following functions

  • update_bellman (with in-place variant update_bellman!), which implements the Bellman operator \(T\) specified above
  • update_coleman (with in-place variant update_coleman!), which implements the Coleman operator \(K\) specified above
  • init_values, which generates suitable initial conditions for iteration

The functions update_bellman and update_coleman both use linear interpolation along the asset grid to approximate the value and consumption functions
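For example, after one step of the Coleman operator we can evaluate the resulting policy at off-grid asset levels via the interpolate method defined above. The following quick sketch uses an arbitrary asset level of 2.5 for illustration

cp = ConsumerProblem()
V, c = init_values(cp)
c1 = update_coleman(cp, c)    # one time-iteration step
cf = interpolate(cp, c1)      # linear in assets, lookup table in z
cf[2.5, 1]                    # consumption at assets 2.5 in the low-income state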

The following exercises walk you through several applications where policy functions are computed

In exercise 1 you will see that while VFI and PFI produce similar results, the latter is much faster

  • Because we are exploiting analytically derived first order conditions

Another benefit of working in policy function space rather than value function space is that value functions typically have more curvature

  • Makes them harder to approximate numerically

Exercises

Exercise 1

The first exercise is to replicate the following figure, which compares PFI and VFI as solution methods

../_images/vfi_vs_pfi.png

The figure shows consumption policies computed by iteration of \(K\) and \(T\) respectively

  • In the case of iteration with \(T\), the final value function is used to compute the observed policy

Consumption is shown as a function of assets with income \(z\) held fixed at its smallest value

The following details are needed to replicate the figure

  • The parameters are the default parameters in the definition of ConsumerProblem
  • The initial conditions are the default ones from init_values(cp)
  • Both operators are iterated 80 times

When you run your code you will observe that iteration with \(K\) is faster than iteration with \(T\)

In the Julia console, a comparison of the operators can be made as follows

using QuantEcon

cp = ConsumerProblem();

V, c = init_values(cp);
@time update_bellman(cp, V);
elapsed time: 0.095017748 seconds (24212168 bytes allocated, 30.48% gc time)
@time update_coleman(cp, c);
elapsed time: 0.0696242 seconds (23937576 bytes allocated)

Exercise 2

Next let’s consider how the interest rate affects consumption

Reproduce the following figure, which shows (approximately) optimal consumption policies for different interest rates

../_images/ifp_policies.png
  • Other than r, all parameters are at their default values
  • r steps through linspace(0, 0.04, 4)
  • Consumption is plotted against assets for income shock fixed at the smallest value

The figure shows that higher interest rates boost savings and hence suppress consumption

Exercise 3

Now let’s consider the long run asset levels held by households

We’ll take r = 0.03 and otherwise use default parameters

The following figure is a 45 degree diagram showing the law of motion for assets when consumption is optimal

../_images/ifp_savings_plots.png

The green line and blue line represent the function

\[a' = h(a, z) := R a + z - c^*(a, z)\]

when income \(z\) takes its high and low values respectively

The dashed line is the 45 degree line

We can see from the figure that the dynamics will be stable — assets do not diverge

In fact there is a unique stationary distribution of assets that we can calculate by simulation

  • Can be proved via theorem 2 of [HP92]
  • Represents the long run dispersion of assets across households when households have idiosyncratic shocks

Ergodicity is valid here, so stationary probabilities can be calculated by averaging over a single long time series

  • Hence to approximate the stationary distribution we can simulate a long time series for assets and histogram, as in the following figure
../_images/ifp_histogram.png

Your task is to replicate the figure

  • Parameters are as discussed above
  • The histogram in the figure used a single time series \(\{a_t\}\) of length 500,000
  • Given the length of this time series, the initial condition \((a_0, z_0)\) will not matter
  • You might find it helpful to use the MarkovChain type from QuantEcon.jl, as in the sketch below
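For instance, a draw of the income index path can be generated as follows (a minimal sketch; the full simulation is in the solution below)

using QuantEcon

cp = ConsumerProblem()
mc = MarkovChain(cp.Pi)      # income process as a MarkovChain
z_seq = simulate(mc, 10)     # indices into cp.z_vals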

Exercise 4

Following on from exercises 2 and 3, let’s look at how savings and aggregate asset holdings vary with the interest rate

  • Note: [LS12] section 18.6 can be consulted for more background on the topic treated in this exercise

For a given parameterization of the model, the mean of the stationary distribution can be interpreted as aggregate capital in an economy with a unit mass of ex-ante identical households facing idiosyncratic shocks

Let’s look at how this measure of aggregate capital varies with the interest rate and borrowing constraint
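For a single (r, b) pair this is just the mean of a simulated asset series. A rough sketch, assuming the function compute_asset_series from the Exercise 3 solution below

cp = ConsumerProblem(r=0.03, b=1.0)
aggregate_capital = mean(compute_asset_series(cp, 250000))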

The next figure plots aggregate capital against the interest rate for \(b\) in \(\{1, 3\}\)

../_images/ifp_agg_savings.png

As is traditional, the price (interest rate) is on the vertical axis

The horizontal axis is aggregate capital computed as the mean of the stationary distribution

Exercise 4 is to replicate the figure, making use of code from previous exercises

Try to explain why the measure of aggregate capital is equal to \(-b\) when \(r=0\) for both cases shown here

Solutions

Exercise 1

using Plots
pyplot()
Plots.PyPlotBackend()
function ex1()
    cp = ConsumerProblem()
    K = 80

    V, c = init_values(cp)
    println("Starting value function iteration")
    for i=1:K
        V = update_bellman(cp, V)
    end
    c1 = update_bellman(cp, V, ret_policy=true)

    V2, c2 = init_values(cp)
    println("Starting policy function iteration")
    for i=1:K
        c2 = update_coleman(cp, c2)
    end

    plot(cp.asset_grid,c1[:, 1], label="value function iteration")
    plot!(cp.asset_grid,c2[:, 1], label="policy function iteration")
    plot!(xlabel="asset lebel",ylabel="Consumption (low income)")
end
ex1 (generic function with 1 method)
ex1()
Starting value function iteration
Starting policy function iteration
../_images/ifp_solutions_ex1_jl.png

Exercise 2

function ex2()
    r_vals = linspace(0, 0.04, 4)
    traces = []
    legends = []
    asset_grid = ConsumerProblem().asset_grid  # default grid, common to all r values
    for r_val in r_vals
        cp = ConsumerProblem(r=r_val)
        v_init, c_init = init_values(cp)
        c = compute_fixed_point(x -> update_coleman(cp, x), c_init,
                                max_iter=150, verbose=false)
        push!(traces, c[:, 1])
        push!(legends, "r = $(round(r_val, 3))")
    end
    plot(asset_grid, traces, label=reshape(legends, 1, length(legends)))
    plot!(xlabel="asset level", ylabel="Consumption (low income)")
end
ex2 (generic function with 1 method)
ex2()
../_images/ifp_solutions_ex2_jl.png

Exercise 3

using Interpolations

function compute_asset_series(cp, T=500000; verbose=false)
    Pi, z_vals, R = cp.Pi, cp.z_vals, cp.R  # Simplify names
    v_init, c_init = init_values(cp)
    c = compute_fixed_point(x -> update_coleman(cp, x), c_init,
                            max_iter=150, verbose=false)
    cf = interpolate(cp, c)

    a = zeros(T+1)
    z_seq = simulate(MarkovChain(Pi), T)
    for t=1:T
        i_z = z_seq[t]
        a[t+1] = R * a[t] + z_vals[i_z] - cf[a[t], i_z]
    end
    return a
end

function ex3()
    cp = ConsumerProblem(r=0.03, grid_max=4)
    a = compute_asset_series(cp)
    histogram(a, nbins=20, leg=false)
    plot!(xlabel="assets", xlims=(-0.05, 0.75))
end
ex3 (generic function with 1 method)
ex3()
../_images/ifp_solutions_ex3_jl.png

Exercise 4

function ex4()
    M = 25
    r_vals = linspace(0, 0.04, M)

    xs = []
    ys = []
    legends = []
    for b in [1.0, 3.0]
        asset_mean = zeros(M)
        for (i, r_val) in enumerate(r_vals)
            cp = ConsumerProblem(r=r_val, b=b)
            the_mean = mean(compute_asset_series(cp, 250000))
            asset_mean[i] = the_mean
        end
        push!(xs, asset_mean)
        push!(ys, r_vals)
        push!(legends, "b = $b")
        println("Finished iteration b=$b")
    end
    plot(xs,ys,label=reshape(legends,1,length(legends)))
    plot!(xlabel="capital",ylabel="interest rate", ytics=([0,0.045]))
end
ex4 (generic function with 1 method)
ex4()
Finished iteration b=1.0
Finished iteration b=3.0
../_images/ifp_solutions_ex4_jl.png