Code should execute sequentially if run in a Jupyter notebook

- See the setup page to install Jupyter, Julia (0.6+), and all necessary libraries
- Please direct feedback to contact@quantecon.org or the discourse forum

# Job Search V: On-the-Job Search


## Overview

In this section we solve a simple on-the-job search model

### Model features

- job-specific human capital accumulation combined with on-the-job search
- infinite horizon dynamic programming with one state variable and two controls

## Model

Let

- \(x_t\) denote the time-\(t\) job-specific human capital of a worker employed at a given firm
- \(w_t\) denote current wages

Let \(w_t = x_t(1 - s_t - \phi_t)\), where

- \(\phi_t\) is investment in job-specific human capital for the current role
- \(s_t\) is search effort, devoted to obtaining new offers from other firms.

For as long as the worker remains in the current job, evolution of \(\{x_t\}\) is given by \(x_{t+1} = G(x_t, \phi_t)\)

When search effort at \(t\) is \(s_t\), the worker receives a new job offer with probability \(\pi(s_t) \in [0, 1]\)

Value of offer is \(U_{t+1}\), where \(\{U_t\}\) is iid with common distribution \(F\)

Worker has the right to reject the current offer and continue with existing job

In particular, \(x_{t+1} = U_{t+1}\) if the worker accepts the new offer and \(x_{t+1} = G(x_t, \phi_t)\) if she rejects it

Letting \(b_{t+1} \in \{0,1\}\) be binary with \(b_{t+1} = 1\) indicating an offer, we can write

\[
x_{t+1} = (1 - b_{t+1}) G(x_t, \phi_t) + b_{t+1} \max\{G(x_t, \phi_t), U_{t+1}\} \tag{1}
\]

Agent’s objective: maximize the expected discounted sum of wages

\[
\mathbb{E} \sum_{t=0}^{\infty} \beta^t x_t (1 - s_t - \phi_t)
\]

via the controls \(\{s_t\}\) and \(\{\phi_t\}\)

Taking the expectation of \(V(x_{t+1})\) and using (1), the Bellman equation for this problem can be written as

\[
V(x) = \max_{s + \phi \leq 1}
\left\{
x (1 - s - \phi)
+ \beta (1 - \pi(s)) V[G(x, \phi)]
+ \beta \pi(s) \int V[G(x, \phi) \vee u] F(du)
\right\} \tag{2}
\]

Here nonnegativity of \(s\) and \(\phi\) is understood, while \(a \vee b := \max\{a, b\}\)

### Parameterization

In the implementation below, we will focus on the parameterization

\[
G(x, \phi) = A (x \phi)^{\alpha},
\quad
\pi(s) = \sqrt{s}
\quad \text{and} \quad
F = \text{Beta}(2, 2)
\]

with default parameter values

- \(A = 1.4\)
- \(\alpha = 0.6\)
- \(\beta = 0.96\)

The Beta(2,2) distribution is supported on \((0,1)\). It has a unimodal, symmetric density peaked at 0.5
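These properties are easy to verify by hand (an illustrative Python check; the lecture's own code uses `Distributions.Beta`). The Beta(2,2) density is \(f(u) = 6 u (1 - u)\), which is symmetric about 0.5 with mean 0.5:

```python
# Beta(2, 2) density: f(u) = u^(a-1) * (1-u)^(b-1) / B(a, b), and B(2, 2) = 1/6
def beta22_pdf(u):
    return 6.0 * u * (1.0 - u)

# Symmetric and peaked at the midpoint of (0, 1)
peak = beta22_pdf(0.5)

# Approximate the mean with a midpoint Riemann sum over (0, 1)
n = 100_000
mean = sum(beta22_pdf((i + 0.5) / n) * ((i + 0.5) / n) for i in range(n)) / n
```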

### Back-of-the-Envelope Calculations

Before we solve the model, let’s make some quick calculations that provide intuition on what the solution should look like

To begin, observe that the worker has two instruments to build capital and hence wages:

- invest in capital specific to the current job via \(\phi\)
- search for a new job with better job-specific capital match via \(s\)

Since wages are \(x (1 - s - \phi)\), marginal cost of investment via either \(\phi\) or \(s\) is identical

Our risk neutral worker should focus on whatever instrument has the highest expected return

The relative expected return will depend on \(x\)

For example, suppose first that \(x = 0.05\)

- If \(s=1\) and \(\phi = 0\), then since \(G(x,\phi) = 0\), taking expectations of (1) gives expected next period capital equal to \(\pi(s) \mathbb{E} U = \mathbb{E} U = 0.5\)
- If \(s=0\) and \(\phi=1\), then next period capital is \(G(x, \phi) = G(0.05, 1) \approx 0.23\)

Both rates of return are good, but the return from search is better

Next suppose that \(x = 0.4\)

- If \(s=1\) and \(\phi = 0\), then expected next period capital is again \(0.5\)
- If \(s=0\) and \(\phi = 1\), then \(G(x, \phi) = G(0.4, 1) \approx 0.8\)

Return from investment via \(\phi\) dominates expected return from search

Combining these observations gives us two informal predictions:

- At any given state \(x\), the two controls \(\phi\) and \(s\) will function primarily as substitutes — worker will focus on whichever instrument has the higher expected return
- For sufficiently small \(x\), search will be preferable to investment in job-specific human capital. For larger \(x\), the reverse will be true
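The comparisons above are easy to reproduce numerically (an illustrative Python sketch using the default parameters; the lecture's own code is Julia):

```python
# Default parameters from the lecture
A, alpha = 1.4, 0.6

def G(x, phi):
    # human capital transition: G(x, phi) = A * (x * phi) ** alpha
    return A * (x * phi) ** alpha

EU = 0.5  # mean of the Beta(2, 2) offer distribution

for x in (0.05, 0.4):
    # pure search (s = 1, phi = 0): expected next-period capital = pi(1) * E[U] = E[U]
    # pure investment (s = 0, phi = 1): next-period capital = G(x, 1)
    print(f"x = {x}: search -> {EU}, investment -> {G(x, 1.0):.2f}")
```

At `x = 0.05` search dominates, while at `x = 0.4` investment does, matching the text.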

Now let’s turn to implementation, and see if we can match our predictions

## Implementation

The QuantEcon package provides some code for solving the DP problem described above

See in particular jv.jl, which is repeated here for convenience

```
#=
@author : Spencer Lyon <spencer.lyon@nyu.edu>
=#

using Distributions
using QuantEcon

# NOTE: only the brute-force approach is available in the Bellman operator.
# Waiting on a simple constrained optimizer to be written in pure Julia

"""
A Jovanovic-type model of employment with on-the-job search.

The value function is given by

\\[V(x) = \\max_{ϕ, s} w(x, ϕ, s)\\]

for

    w(x, ϕ, s) := x(1 - ϕ - s) + β (1 - π(s)) V(G(x, ϕ)) +
                  β π(s) E V[max(G(x, ϕ), U)]

where

* `x` : human capital
* `s` : search effort
* `ϕ` : investment in human capital
* `π(s)` : probability of new offer given search level s
* `x(1 - ϕ - s)` : wage
* `G(x, ϕ)` : new human capital when current job retained
* `U` : random variable with distribution F -- new draw of human capital

##### Fields

- `A::Real` : Parameter in human capital transition function
- `α::Real` : Parameter in human capital transition function
- `β::AbstractFloat` : Discount factor in (0, 1)
- `x_grid::AbstractVector` : Grid for potential levels of x
- `G::Function` : Transition function for human capital
- `π_func::Function` : Function mapping search effort to the probability
  of getting a new job offer
- `F::UnivariateDistribution` : A univariate distribution from which
  the value of new job offers is drawn
- `quad_nodes::Vector` : Quadrature nodes for integrating over the offer
  distribution F
- `quad_weights::Vector` : Quadrature weights for integrating over the
  offer distribution F
- `ϵ::AbstractFloat` : A small number, used in the optimization routine
"""
struct JvWorker{TR <: Real, TF <: AbstractFloat, TUD <: UnivariateDistribution,
                TAV <: AbstractVector, TV <: Vector}
    A::TR
    α::TR
    β::TF
    x_grid::TAV
    G::Function
    π_func::Function
    F::TUD
    quad_nodes::TV
    quad_weights::TV
    ϵ::TF
end

"""
Constructor with default values for `JvWorker`

##### Arguments

- `A::Real(1.4)` : Parameter in human capital transition function
- `α::Real(0.6)` : Parameter in human capital transition function
- `β::Real(0.96)` : Discount factor in (0, 1)
- `grid_size::Integer(50)` : Number of points in discrete grid for `x`
- `ϵ::AbstractFloat(1e-4)` : A small number, used in the optimization routine
"""
function JvWorker(;A::Real=1.4, α::Real=0.6, β::Real=0.96,
                   grid_size::Integer=50, ϵ::AbstractFloat=1e-4)
    G(x, ϕ) = A .* (x .* ϕ).^α
    π_func = sqrt
    F = Beta(2, 2)

    # integration bounds
    a, b = quantile(F, 0.005), quantile(F, 0.995)

    # quadrature nodes/weights
    nodes, weights = qnwlege(21, a, b)

    # Set up grid over the state space for DP.
    # Max of grid is the max of a large quantile value for F and the
    # fixed point y = G(y, 1).
    grid_max = max(A^(1.0 / (1.0 - α)), quantile(F, 1 - ϵ))
    x_grid = linspace(ϵ, grid_max, grid_size)

    JvWorker(A, α, β, x_grid, G, π_func, F, nodes, weights, ϵ)
end

"""
Apply the Bellman operator for a given model and initial value,
returning only the value function

##### Arguments

- `jv::JvWorker` : Instance of `JvWorker`
- `V::Vector` : Current guess for the value function
- `new_V::Vector` : Storage for the updated value function

##### Returns

None, `new_V` is updated in place with the value function.

##### Notes

Currently, only the brute-force approach is available.
We are waiting on a simple constrained optimizer to be written in pure Julia
"""
function bellman_operator!(jv::JvWorker, V::AbstractVector, new_V::AbstractVector)
    # simplify notation
    G, π_func, F, β, ϵ = jv.G, jv.π_func, jv.F, jv.β, jv.ϵ
    nodes, weights = jv.quad_nodes, jv.quad_weights

    # prepare interpolant of the value function
    Vf = LinInterp(jv.x_grid, V)

    search_grid = linspace(ϵ, 1.0, 15)

    for (i, x) in enumerate(jv.x_grid)
        function w(z)
            s, ϕ = z
            h(u) = [Vf(max(G(x, ϕ), uval)) * pdf(F, uval) for uval in u]
            integral = do_quad(h, nodes, weights)
            q = π_func(s) * integral + (1.0 - π_func(s)) * Vf(G(x, ϕ))
            return - x * (1.0 - ϕ - s) - β * q
        end

        # brute-force maximization of -w over the feasible grid
        max_val = -1.0
        for s in search_grid
            for ϕ in search_grid
                cur_val = ifelse(s + ϕ <= 1.0, -w((s, ϕ)), -1.0)
                if cur_val > max_val
                    max_val = cur_val
                end
            end
        end

        new_V[i] = max_val
    end
end

"""
Apply the Bellman operator for a given model and initial value,
returning the policies

##### Arguments

- `jv::JvWorker` : Instance of `JvWorker`
- `V::Vector` : Current guess for the value function
- `out::Tuple{Vector, Vector}` : Storage for the two policy rules

##### Returns

None, `out` is updated in place with the two policy functions.

##### Notes

Currently, only the brute-force approach is available.
We are waiting on a simple constrained optimizer to be written in pure Julia
"""
function bellman_operator!(jv::JvWorker, V::AbstractVector,
                           out::Tuple{AbstractVector, AbstractVector})
    # simplify notation
    G, π_func, F, β, ϵ = jv.G, jv.π_func, jv.F, jv.β, jv.ϵ
    nodes, weights = jv.quad_nodes, jv.quad_weights

    # prepare interpolant of the value function
    Vf = LinInterp(jv.x_grid, V)

    s_policy, ϕ_policy = out[1], out[2]

    search_grid = linspace(ϵ, 1.0, 15)

    for (i, x) in enumerate(jv.x_grid)
        function w(z)
            s, ϕ = z
            h(u) = [Vf(max(G(x, ϕ), uval)) * pdf(F, uval) for uval in u]
            integral = do_quad(h, nodes, weights)
            q = π_func(s) * integral + (1.0 - π_func(s)) * Vf(G(x, ϕ))
            return - x * (1.0 - ϕ - s) - β * q
        end

        # brute-force maximization of -w over the feasible grid,
        # tracking the maximizing (s, ϕ) pair at each grid point
        max_val, max_s, max_ϕ = -1.0, 1.0, 1.0
        for s in search_grid
            for ϕ in search_grid
                cur_val = ifelse(s + ϕ <= 1.0, -w((s, ϕ)), -1.0)
                if cur_val > max_val
                    max_val, max_s, max_ϕ = cur_val, s, ϕ
                end
            end
        end

        s_policy[i], ϕ_policy[i] = max_s, max_ϕ
    end
end

function bellman_operator(jv::JvWorker, V::AbstractVector; ret_policies::Bool=false)
    out = ifelse(ret_policies, (similar(V), similar(V)), similar(V))
    bellman_operator!(jv, V, out)
    return out
end
```

The code is written to be relatively generic—and hence reusable

- For example, we use generic \(G(x,\phi)\) instead of specific \(A (x \phi)^{\alpha}\)

Regarding the imports

- `Distributions` provides the `Beta` distribution, along with its `pdf` and `quantile` methods
- from `QuantEcon`, `qnwlege` and `do_quad` supply the nodes, weights, and evaluation for simple non-adaptive Gauss–Legendre quadrature, while `LinInterp` handles linear interpolation

Next we build a type called `JvWorker` that

- packages all the parameters and other basic attributes of a given model
- implements the method `bellman_operator` for value function iteration

The `bellman_operator` method takes a candidate value function \(V\) and updates it to \(TV\) via

\[
TV(x) = - \min_{s + \phi \leq 1} w(s, \phi) \tag{3}
\]

where

\[
w(s, \phi)
:= - \left\{
x (1 - s - \phi)
+ \beta (1 - \pi(s)) V[G(x, \phi)]
+ \beta \pi(s) \int V[G(x, \phi) \vee u] F(du)
\right\}
\]

Here we are minimizing instead of maximizing to conform with standard optimization routines, which minimize by default

When we represent \(V\), it will be with a Julia array `V` giving values on the grid `x_grid`

But to evaluate the right-hand side of (3), we need a function, so we replace the arrays `V` and `x_grid` with a function `Vf` that gives linear interpolation of `V` on `x_grid`

Hence in the preliminaries of `bellman_operator`, from the array `V` we define a linear interpolation `Vf` of its values

The constraint \(s + \phi \leq 1\) is enforced by checking feasibility inside the grid search, while starting the search grid at \(\epsilon\) rather than zero is a numerically stable alternative to the true constraints \(s \geq 0\) and \(\phi \geq 0\)

Inside the `for` loop, for each `x` in the grid over the state space, we set up the function \(w(z) = w(s, \phi)\) defined in (3)

The function is minimized over all feasible \((s, \phi)\) pairs by brute force search over a grid

A constrained optimizer would be much faster, but brute force search is simple and makes the results easy to verify
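In outline, the brute-force step is just a double loop with a feasibility check. Here is a schematic Python version of that loop (the actual implementation above is Julia, and the toy objective here is a hypothetical stand-in for the model's \(-w\)):

```python
# Grid mirroring linspace(eps, 1.0, 15) in the Julia code
eps = 1e-4
grid = [eps + i * (1.0 - eps) / 14 for i in range(15)]

def objective(s, phi):
    # hypothetical stand-in for the true objective -w(s, phi)
    return -(s - 0.2) ** 2 - (phi - 0.3) ** 2

best_val, best_s, best_phi = -float("inf"), None, None
for s in grid:
    for phi in grid:
        if s + phi <= 1.0:  # feasibility constraint s + phi <= 1
            val = objective(s, phi)
            if val > best_val:
                best_val, best_s, best_phi = val, s, phi
```

Infeasible pairs are simply skipped, so the maximizer returned always satisfies the constraint.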

## Solving for Policies

Let’s plot the optimal policies and see what they look like

The code to do this looks as follows

```
using QuantEcon
using LaTeXStrings
using Plots
using Plots.PlotMeasures
pyplot()

wp = JvWorker(grid_size=25)
v_init = wp.x_grid .* 0.5

f(x) = bellman_operator(wp, x)
V = compute_fixed_point(f, v_init, max_iter=300)

s_policy, ϕ_policy = bellman_operator(wp, V, ret_policies=true)

# === plot solution === #
p = plot(wp.x_grid, [ϕ_policy s_policy V],
         title = ["ϕ policy" "s policy" "value function"],
         color = [:orange :blue :green], width = 3,
         xaxis = ("x", (0.0, maximum(wp.x_grid))),
         yaxis = ((-0.1, 1.1)),
         size = (800, 800),
         legend = false,
         layout = (3, 1),
         bottom_margin = Measures.Length(:mm, 20),
         show = false)
```

It produces the following figure

The horizontal axis is the state \(x\), while the vertical axis gives \(s(x)\) and \(\phi(x)\)

Overall, the policies match well with our predictions from the back-of-the-envelope calculations above

- Worker switches from one investment strategy to the other depending on relative return
- For low values of \(x\), the best option is to search for a new job
- Once \(x\) is larger, worker does better by investing in human capital specific to the current position

## Exercises

### Exercise 1

Let’s look at the dynamics for the state process \(\{x_t\}\) associated with these policies

The dynamics are given by (1) when \(\phi_t\) and \(s_t\) are chosen according to the optimal policies, and \(\mathbb{P}\{b_{t+1} = 1\} = \pi(s_t)\)

Since the dynamics are random, analysis is a bit subtle

One way to do it is to plot, for each \(x\) in a relatively fine grid called `plot_grid`, a large number \(K\) of realizations of \(x_{t+1}\) given \(x_t = x\). Plot this with one dot for each realization, in the form of a 45 degree diagram. Set

```
K = 50
plot_grid_max, plot_grid_size = 1.2, 100
plot_grid = linspace(0, plot_grid_max, plot_grid_size)
plot(plot_grid, plot_grid, color=:black, linestyle=:dash,
     lims=(0, plot_grid_max), legend=:none)
```

By examining the plot, argue that under the optimal policies, the state \(x_t\) will converge to a constant value \(\bar x\) close to unity

Argue that at the steady state, \(s_t \approx 0\) and \(\phi_t \approx 0.6\)

### Exercise 2

In the preceding exercise we found that \(s_t\) converges to zero and \(\phi_t\) converges to about 0.6

Since these results were calculated at a value of \(\beta\) close to
one, let’s compare them to the best choice for an *infinitely* patient worker

Intuitively, an infinitely patient worker would like to maximize steady state wages, which are a function of steady state capital

You can take it as given—it’s certainly true—that the infinitely patient worker does not search in the long run (i.e., \(s_t = 0\) for large \(t\))

Thus, given \(\phi\), steady state capital is the positive fixed point \(x^*(\phi)\) of the map \(x \mapsto G(x, \phi)\)
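With the parameterization \(G(x, \phi) = A (x \phi)^{\alpha}\) used here, this fixed point has a closed form:

\[
x = A (x \phi)^{\alpha}
\quad \Longrightarrow \quad
x^{1 - \alpha} = A \phi^{\alpha}
\quad \Longrightarrow \quad
x^*(\phi) = \left( A \phi^{\alpha} \right)^{1/(1 - \alpha)}
\]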

Steady state wages can be written as \(w^*(\phi) = x^*(\phi) (1 - \phi)\)

Graph \(w^*(\phi)\) with respect to \(\phi\), and examine the best choice of \(\phi\)

Can you give a rough interpretation for the value that you see?

## Solutions

### Exercise 1

Here’s code to produce the 45 degree diagram

```
wp = JvWorker(grid_size=25)
G, π_func, F = wp.G, wp.π_func, wp.F # Simplify names
v_init = collect(wp.x_grid) * 0.5
println("Computing value function")
f2(x) = bellman_operator(wp, x)
V = compute_fixed_point(f2, v_init, max_iter=300)
println("Computing policy functions")
s_policy, ϕ_policy = bellman_operator(wp, V, ret_policies=true)
# Turn the policy function arrays into LinInterp objects for interpolation
s = LinInterp(wp.x_grid, s_policy)
ϕ = LinInterp(wp.x_grid, ϕ_policy)
h_func(x, b, U) = (1 - b) * G(x, ϕ(x)) + b * max(G(x, ϕ(x)), U)
```

```
Computing value function
Compute iterate 10 with error 0.27832355616098514
Compute iterate 20 with error 0.18485362760592938
Compute iterate 30 with error 0.12277388307811066
Compute iterate 40 with error 0.08154249695446047
Compute iterate 50 with error 0.05415792547131559
Compute iterate 60 with error 0.03596996659293694
Compute iterate 70 with error 0.023890104457239048
Compute iterate 80 with error 0.015867045344705843
Compute iterate 90 with error 0.010538385398092487
Compute iterate 100 with error 0.006999259432742377
Compute iterate 110 with error 0.004648684856004337
Compute iterate 120 with error 0.003087508199701716
Compute iterate 130 with error 0.0020506244622975345
Compute iterate 140 with error 0.0013619593579647926
Compute iterate 150 with error 0.000904569962393964
Compute iterate 160 with error 0.0006007865154593617
Compute iterate 170 with error 0.0003990232399502247
Compute iterate 180 with error 0.00026501850811300187
Compute iterate 190 with error 0.00017601683965118298
Compute iterate 200 with error 0.00011690477038861502
Converged in 204 steps
Computing policy functions
```

```
using LaTeXStrings
using Plots
pyplot()

K = 50

plot_grid_max, plot_grid_size = 1.2, 100
plot_grid = linspace(0, plot_grid_max, plot_grid_size)
ticks = [0.25, 0.5, 0.75, 1.0]

xs = []
ys = []
for x in plot_grid
    for i in 1:K
        b = rand() < π_func(s(x)) ? 1 : 0
        U = rand(wp.F)
        y = h_func(x, b, U)
        push!(xs, x)
        push!(ys, y)
    end
end

plot(plot_grid, plot_grid, color=:black, linestyle=:dash, legend=:none)
scatter!(xs, ys, alpha=0.25, color=:green, lims=(0, plot_grid_max), ticks=ticks)
plot!(xlabel=L"x_t", ylabel=L"x_{t+1}", guidefont=font(16))
```

Looking at the dynamics, we can see that

- If \(x_t\) is below about 0.2 the dynamics are random, but \(x_{t+1} > x_t\) is very likely
- As \(x_t\) increases the dynamics become deterministic, and \(x_t\) converges to a steady state value close to 1

Referring back to the figure at http://quant-econ.net/jl/jv.html#solving-for-policies, we see that \(x_t \approx 1\) means that \(s_t = s(x_t) \approx 0\) and \(\phi_t = \phi(x_t) \approx 0.6\)
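The claimed steady state can be sanity checked directly (an illustrative Python computation, using the closed-form fixed point \(x^*(\phi) = (A \phi^{\alpha})^{1/(1-\alpha)}\)). With \(\phi \approx 0.6\) and \(s \approx 0\), the fixed point of \(x \mapsto G(x, 0.6)\) at the default parameters is indeed close to unity:

```python
A, alpha, phi = 1.4, 0.6, 0.6  # default parameters; phi fixed at its long-run value

# closed-form fixed point of x -> A * (x * phi) ** alpha
xbar = (A * phi ** alpha) ** (1.0 / (1.0 - alpha))

# iterating the map from an arbitrary positive start converges to the same point
x = 0.5
for _ in range(200):
    x = A * (x * phi) ** alpha
```

Both computations give \(\bar{x} \approx 1.08\), consistent with the plot.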

### Exercise 2

```
wp = JvWorker(grid_size=25)
xbar(ϕ) = (wp.A * ϕ^wp.α)^(1.0 / (1.0 - wp.α))

ϕ_grid = linspace(0, 1, 100)

plot(ϕ_grid, [xbar(ϕ) * (1 - ϕ) for ϕ in ϕ_grid], color=:blue,
     label=L"$w^*(\phi)$", legendfont=font(12), xlabel=L"$\phi$",
     guidefont=font(16), grid=false, legend=:topleft)
```

Observe that the maximizer is around 0.6

This is similar to the long run value for \(\phi\) obtained in exercise 1

Hence the behaviour of the infinitely patient worker is similar to that of the worker with \(\beta = 0.96\)

This seems reasonable, and helps us confirm that our dynamic programming solutions are probably correct
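In fact the maximizer can be pinned down analytically. Taking logs, \(\ln w^*(\phi) = \frac{1}{1-\alpha} \ln A + \frac{\alpha}{1-\alpha} \ln \phi + \ln (1 - \phi)\), and the first-order condition \(\frac{\alpha}{(1-\alpha)\phi} = \frac{1}{1-\phi}\) gives \(\phi^* = \alpha\), which equals \(0.6\) at the default parameters. A quick numerical check (illustrative Python):

```python
A, alpha = 1.4, 0.6

def wstar(phi):
    # steady state wages: x*(phi) * (1 - phi), with x*(phi) = (A * phi**alpha)**(1/(1-alpha))
    return (A * phi ** alpha) ** (1.0 / (1.0 - alpha)) * (1.0 - phi)

# the argmax over a fine grid lands on phi = alpha
grid = [i / 10000 for i in range(1, 10000)]
phi_best = max(grid, key=wstar)
```

So the rough interpretation is that the optimal investment share equals the curvature parameter \(\alpha\) of the human capital technology.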