
How to read this lecture...

Code should execute sequentially if run in a Jupyter notebook

  • See the set up page to install Jupyter, Julia (1.0+) and all necessary libraries
  • Please direct feedback to the discourse forum
  • For some notebooks, enable content with "Trust" on the command tab of Jupyter lab
  • If using QuantEcon lectures for the first time on a computer, execute ] add InstantiateFromURL inside of a notebook or the REPL

Solvers, Optimizers, and Automatic Differentiation


In this lecture we introduce a few of the Julia libraries that we’ve found particularly useful for quantitative work in economics


In [ ]:
using InstantiateFromURL
activate_github("QuantEcon/QuantEconLectureAllPackages", tag = "v0.9.6");
In [ ]:
using LinearAlgebra, Statistics, Compat
using ForwardDiff, Flux, Optim, JuMP, Ipopt, BlackBoxOptim, Roots, NLsolve
using LeastSquaresOptim, Flux.Tracker
using Flux.Tracker: update!
using Optim: converged, maximum, maximizer, minimizer, iterations #some extra functions

Introduction to Automatic Differentiation

Automatic differentiation (AD, sometimes called algorithmic differentiation) is a crucial way to increase the performance of both estimation and solution methods

There are essentially four ways to calculate the gradient or Jacobian on a computer

  • Calculation by hand

    • Where possible, you can calculate the derivative on “pen and paper” and potentially simplify the expression
    • Sometimes, though not always, the most accurate and fastest option if there are algebraic simplifications
    • The algebra is error prone for non-trivial setups
  • Finite differences

    • Evaluate the function at least $ N+1 $ times to get the gradient of an $ N $-dimensional function – Jacobians are even worse
    • A large $ \Delta $ is numerically stable but inaccurate, while too small a $ \Delta $ is numerically unstable but more accurate
    • Avoid if you can, and use packages (e.g. DiffEqDiffTools.jl) to get a good choice of $ \Delta $
$$ \partial_{x_i}f(x_1,\ldots x_N) \approx \frac{f(x_1,\ldots x_i + \Delta,\ldots x_N) - f(x_1,\ldots x_i,\ldots x_N)}{\Delta} $$
  • Symbolic differentiation

    • If you put in an expression for a function, some packages will do symbolic differentiation
    • In effect, repeated applications of the chain rule, product rule, etc.
    • Sometimes a good solution, if the package can handle your functions
  • Automatic Differentiation

    • Essentially the same as symbolic differentiation, just occurring at a different time in the compilation process
    • Equivalent to analytical derivatives since it uses the chain rule, etc.
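
The finite-difference tradeoff described above is easy to demonstrate. The following is a hand-rolled sketch (the function, the evaluation point, and the step sizes are chosen purely for illustration; the packages mentioned above choose $ \Delta $ far more carefully):

```julia
# forward difference approximation to f'(x) with step Δ
fd(f, x; Δ = 1e-6) = (f(x + Δ) - f(x)) / Δ

f(x) = sin(x)   # true derivative is cos(x)
x = 1.0
err(Δ) = abs(fd(f, x; Δ = Δ) - cos(x))

# too large a step: truncation error dominates
# too small a step: floating point cancellation dominates
@show err(1e-2)
@show err(1e-8)
@show err(1e-14)
```

Both extremes give a worse approximation than the intermediate step, which is the instability the specialized packages are designed to manage.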

We will explore AD packages in Julia rather than the alternatives

Automatic Differentiation

To summarize here, first recall the chain rule (adapted from Wikipedia)

$$ \frac{dy}{dx} = \frac{dy}{dw} \cdot \frac{dw}{dx} $$

Consider functions composed of calculations with fundamental operations with known analytical derivatives, such as $ f(x_1, x_2) = x_1 x_2 + \sin(x_1) $

To compute $ \frac{d f(x_1,x_2)}{d x_1} $

$$ \begin{array}{l|l} \text{Operations to compute value} & \text{Operations to compute $\frac{\partial f(x_1,x_2)}{\partial x_1}$} \\ \hline w_1 = x_1 & \frac{d w_1}{d x_1} = 1 \text{ (seed)}\\ w_2 = x_2 & \frac{d w_2}{d x_1} = 0 \text{ (seed)} \\ w_3 = w_1 \cdot w_2 & \frac{\partial w_3}{\partial x_1} = w_2 \cdot \frac{d w_1}{d x_1} + w_1 \cdot \frac{d w_2}{d x_1} \\ w_4 = \sin w_1 & \frac{d w_4}{d x_1} = \cos w_1 \cdot \frac{d w_1}{d x_1} \\ w_5 = w_3 + w_4 & \frac{\partial w_5}{\partial x_1} = \frac{\partial w_3}{\partial x_1} + \frac{d w_4}{d x_1} \end{array} $$
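
The table above can be traced directly in plain Julia. This is the forward-mode bookkeeping done by hand, with the names w1 through w5 mirroring the table (no AD library is involved):

```julia
x1, x2 = 1.0, 2.0

# value pass and derivative pass (with respect to x1) in lockstep
w1, dw1 = x1, 1.0                        # seed: dw1/dx1 = 1
w2, dw2 = x2, 0.0                        # seed: dw2/dx1 = 0
w3, dw3 = w1 * w2, w2 * dw1 + w1 * dw2   # product rule
w4, dw4 = sin(w1), cos(w1) * dw1         # chain rule
w5, dw5 = w3 + w4, dw3 + dw4             # sum rule

@show w5    # f(x1, x2) = x1 * x2 + sin(x1)
@show dw5   # ∂f/∂x1 = x2 + cos(x1)
```

The value pass and the derivative pass run together, which is exactly the pattern an AD library automates.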

Using Dual Numbers

One way to implement this (used in forward-mode AD) is to use dual numbers

Take a number $ x $ and augment it with an infinitesimal $ \epsilon $ such that $ \epsilon^2 = 0 $, i.e. $ x \to x + x' \epsilon $

All math is then done with this (mathematical, rather than Julia) tuple $ (x, x') $ where the $ x' $ may be hidden from the user

With this definition, we can write a general rule for differentiation of $ g(x,y) $ as

$$ g \big( \left(x,x'\right),\left(y,y'\right) \big) = \left(g(x,y),\partial_x g(x,y)x' + \partial_y g(x,y)y' \right) $$

This calculation is simply the chain rule for the total derivative

An AD library using dual numbers concurrently calculates the function and its derivatives, repeating the chain rule until it hits a set of intrinsic rules such as

$$ \begin{align*} x + y \to \left(x,x'\right) + \left(y,y'\right) &= \left(x + y,\underbrace{x' + y'}_{\partial(x + y) = \partial x + \partial y}\right)\\ x y \to \left(x,x'\right) \times \left(y,y'\right) &= \left(x y,\underbrace{x'y + y'x}_{\partial(x y) = y \partial x + x \partial y}\right)\\ \exp(x) \to \exp(\left(x, x'\right)) &= \left(\exp(x),\underbrace{x'\exp(x)}_{\partial(\exp(x)) = \exp(x)\partial x} \right) \end{align*} $$
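
A toy version of these rules can be written in a few lines of Julia (an illustration only, not how ForwardDiff.jl actually implements its dual numbers; the name DualNum is ours):

```julia
# a minimal dual number holding a value and a derivative part
struct DualNum
    v::Float64   # x
    d::Float64   # x'
end

# the intrinsic rules listed above
Base.:+(a::DualNum, b::DualNum) = DualNum(a.v + b.v, a.d + b.d)
Base.:*(a::DualNum, b::DualNum) = DualNum(a.v * b.v, a.d * b.v + b.d * a.v)
Base.exp(a::DualNum) = DualNum(exp(a.v), a.d * exp(a.v))

# differentiate f(x) = x * exp(x) at x = 1 by seeding x' = 1
f(x) = x * exp(x)
y = f(DualNum(1.0, 1.0))
@show y.v   # f(1) = e
@show y.d   # f'(1) = (1 + x) * exp(x) at x = 1, i.e. 2e
```

Every operation propagates the derivative alongside the value, so the derivative emerges from a single evaluation of the function.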


ForwardDiff.jl

We have already seen one of the AD packages in Julia

In [ ]:
using ForwardDiff
h(x) = sin(x[1]) + x[1] * x[2] + sinh(x[1] * x[2]) # multivariate.
x = [1.4 2.2]
@show ForwardDiff.gradient(h,x) # use AD, seeds from x

#Or, can use complicated functions of many variables
f(x) = sum(sin, x) + prod(tan, x) * sum(sqrt, x)
g = (x) -> ForwardDiff.gradient(f, x); # g() is now the gradient
@show g(rand(20)); # gradient at a random point
# ForwardDiff.hessian(f,x') # or the hessian

We can even auto-differentiate complicated functions with embedded iterations

In [ ]:
function squareroot(x) # pretending we don't know sqrt()
    z = copy(x) # initial starting point for Newton's method
    while abs(z*z - x) > 1e-13
        z = z - (z*z-x)/(2z)
    end
    return z
end
In [ ]:
using ForwardDiff
dsqrt(x) = ForwardDiff.derivative(squareroot, x)


Flux.jl

Another is Flux.jl, a machine learning library in Julia

AD is one of the main reasons that machine learning has become so powerful in recent years, and is an essential component of any machine learning package

In [ ]:
using Flux
using Flux.Tracker
using Flux.Tracker: update!

f(x) = 3x^2 + 2x + 1

# df/dx = 6x + 2
df(x) = Tracker.gradient(f, x)[1]

df(2) # 14.0 (tracked)
In [ ]:
A = rand(2,2)
f(x) = A * x
x0 = [0.1, 2.0]
Flux.jacobian(f, x0)

As before, we can differentiate complicated functions

In [ ]:
dsquareroot(x) = Tracker.gradient(squareroot, x)

From the documentation, we can use a machine learning approach to a linear regression

In [ ]:
W = rand(2, 5)
b = rand(2)

predict(x) = W*x .+ b

function loss(x, y)
    ŷ = predict(x)
    sum((y .- ŷ).^2)
end

x, y = rand(5), rand(2) # Dummy data
loss(x, y) # ~ 3
In [ ]:
W = param(W)
b = param(b)

gs = Tracker.gradient(() -> loss(x, y), Params([W, b]))

Δ = gs[W]

# Update the parameter and reset the gradient
update!(W, -0.1Δ)

loss(x, y) # ~ 2.5


Optimization

There are a large number of packages intended to be used for optimization in Julia

Part of the reason for the diversity of options is that Julia makes it possible to efficiently implement a large number of variations on optimization routines

The other reason is that different types of optimization problems require different algorithms


Optim.jl

A good pure-Julia solution for the (unconstrained or box-bounded) optimization of univariate and multivariate function is the Optim.jl package

By default, the algorithms in Optim.jl minimize rather than maximize, so a function called optimize performs minimization

Univariate Functions on Bounded Intervals

Univariate optimization defaults to a robust hybrid optimization routine called Brent’s method

In [ ]:
using Optim
using Optim: converged, maximum, maximizer, minimizer, iterations #some extra functions

result = optimize(x-> x^2, -2.0, 1.0)

Always check if the results converged, and throw errors otherwise

In [ ]:
converged(result) || error("Failed to converge in $(iterations(result)) iterations")
xmin = result.minimizer

The first line uses the short-circuit || operator between converged(result) and error("...")

If the convergence check passes, the right-hand side is never evaluated and execution proceeds to the next line; if not, it throws the error

Or to maximize

In [ ]:
f(x) = -x^2
result = maximize(f, -2.0, 1.0)
converged(result) || error("Failed to converge in $(iterations(result)) iterations")
xmin = maximizer(result)
fmax = maximum(result)

Note: results from optimize are accessed via fields, such as result.minimizer, while results from maximize are accessed via functions, such as maximizer(result)

Unconstrained Multivariate Optimization

There are a variety of algorithms and options for multivariate optimization

From the documentation, the simplest version is

In [ ]:
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x_iv = [0.0, 0.0]
results = optimize(f, x_iv) # i.e. optimize(f, x_iv, NelderMead())

The default algorithm is NelderMead, which is derivative-free and hence requires many function evaluations

To change the algorithm type to L-BFGS

In [ ]:
results = optimize(f, x_iv, LBFGS())
println("minimum = $(results.minimum) with argmin = $(results.minimizer) in "*
"$(results.iterations) iterations")

Note that this has fewer iterations

As no derivative was given, it used finite differences to approximate the gradient of f(x)

However, since most of the algorithms require derivatives, you will often want to use auto differentiation or pass analytical gradients if possible

In [ ]:
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x_iv = [0.0, 0.0]
results = optimize(f, x_iv, LBFGS(), autodiff=:forward) # i.e. use ForwardDiff.jl
println("minimum = $(results.minimum) with argmin = $(results.minimizer) in "*
"$(results.iterations) iterations")

Note that we did not need to use ForwardDiff.jl directly, as long as our f(x) function was written to be generic (see the generic programming lecture)

Alternatively, with an analytical gradient

In [ ]:
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x_iv = [0.0, 0.0]
function g!(G, x)
    G[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
    G[2] = 200.0 * (x[2] - x[1]^2)
end

results = optimize(f, g!, x_iv, LBFGS()) # or ConjugateGradient()
println("minimum = $(results.minimum) with argmin = $(results.minimizer) in "*
"$(results.iterations) iterations")

For derivative-free methods, you can change the algorithm – and have no need to provide a gradient

In [ ]:
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x_iv = [0.0, 0.0]
results = optimize(f, x_iv, SimulatedAnnealing()) # or ParticleSwarm() or NelderMead()

However, you will note that this did not converge, as stochastic methods typically require many more iterations as a tradeoff for their global-convergence properties

See the maximum likelihood example and the accompanying Jupyter notebook


JuMP.jl

The JuMP.jl package is an ambitious implementation of a modelling language for optimization problems in Julia

In that sense, it is more like an AMPL (or Pyomo) built on top of the Julia language with macros, and able to use a variety of different commercial and open source solvers

If you have a linear, quadratic, conic, mixed-integer linear, etc. problem then this will likely be the ideal “meta-package” for calling various solvers

For nonlinear problems, the modelling language may make things difficult for complicated functions (as it is not designed to be used as a general-purpose nonlinear optimizer)

See the quick start guide for more details on all of the options

The following is an example of calling a linear objective with a nonlinear constraint (provided by an external function)

Here Ipopt stands for Interior Point OPTimizer, an open-source nonlinear solver with a Julia interface

In [ ]:
using JuMP, Ipopt
# solve
# max( x[1] + x[2] )
# st sqrt(x[1]^2 + x[2]^2) <= 1

function squareroot(x) # pretending we don't know sqrt()
    z = x # initial starting point for Newton's method
    while abs(z*z - x) > 1e-13
        z = z - (z*z-x)/(2z)
    end
    return z
end

m = Model(solver = IpoptSolver())
# need to register user defined functions for AD
JuMP.register(m,:squareroot, 1, squareroot, autodiff=true)

@variable(m, x[1:2], start=0.5) # start is the initial condition
@objective(m, Max, sum(x))
@NLconstraint(m, squareroot(x[1]^2+x[2]^2) <= 1)
@show solve(m)

And this is an example of a quadratic objective

In [ ]:
# solve
# min (1-x)^2 + 100(y-x^2)^2)
# st x + y >= 10

using JuMP,Ipopt
m = Model(solver = IpoptSolver(print_level=0)) # settings for the solver
@variable(m, x, start = 0.0)
@variable(m, y, start = 0.0)

@NLobjective(m, Min, (1-x)^2 + 100(y-x^2)^2)

solve(m)
println("x = ", getvalue(x), " y = ", getvalue(y))

# adding a (linear) constraint
@constraint(m, x + y == 10)
solve(m)
println("x = ", getvalue(x), " y = ", getvalue(y))


BlackBoxOptim.jl

Another package for doing global optimization without derivatives is BlackBoxOptim.jl

To see an example from the documentation

In [ ]:
using BlackBoxOptim

function rosenbrock2d(x)
    return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end
results = bboptimize(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2);

An example for parallel execution of the objective is provided

Systems of Equations and Least Squares


Roots.jl

A root of a real function $ f $ on $ [a,b] $ is an $ x \in [a, b] $ such that $ f(x)=0 $

For example, if we plot the function

$$ f(x) = \sin(4 (x - 1/4)) + x + x^{20} - 1 \tag{1} $$

with $ x \in [0,1] $ we get a curve that crosses zero exactly once

The unique root is approximately 0.408

The Roots.jl package offers fzero() to find roots

In [ ]:
using Roots
f(x) = sin(4 * (x - 1/4)) + x + x^20 - 1
fzero(f, 0, 1)


NLsolve.jl

The NLsolve.jl package provides functions to solve for multivariate systems of equations and fixed points

From the documentation, to solve for a system of equations without providing a Jacobian

In [ ]:
using NLsolve

f(x) = [(x[1]+3)*(x[2]^3-7)+18
        sin(x[2]*exp(x[1])-1)] # returns an array

results = nlsolve(f, [ 0.1; 1.2])

In the above case, the algorithm used finite differences to calculate the Jacobian

Alternatively, if f(x) is written generically, you can use auto-differentiation with a single setting

In [ ]:
results = nlsolve(f, [ 0.1; 1.2], autodiff=:forward)

println("converged=$(NLsolve.converged(results)) at root=$(results.zero) in "*
"$(results.iterations) iterations and $(results.f_calls) function calls")

Providing a function which operates inplace (i.e. modifies an argument) may help performance for large systems of equations (and hurt it for small ones)

In [ ]:
function f!(F, x) # modifies the first argument
    F[1] = (x[1]+3)*(x[2]^3-7)+18
    F[2] = sin(x[2]*exp(x[1])-1)
end

results = nlsolve(f!, [ 0.1; 1.2], autodiff=:forward)

println("converged=$(NLsolve.converged(results)) at root=$(results.zero) in "*
"$(results.iterations) iterations and $(results.f_calls) function calls")


Least Squares

Many optimization problems can be solved using linear or nonlinear least squares

Let $ x \in R^N $ and $ F(x) : R^N \to R^M $ with $ M \geq N $; the nonlinear least squares (NLS) problem is

$$ \min_x F(x)^T F(x) $$

While $ F(x)^T F(x) $ is scalar-valued, and hence this problem could technically use any nonlinear optimizer, it is useful to exploit the structure of the problem

In particular, the Jacobian of $ F(x) $ can be used to approximate the Hessian of the objective
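
To see why, write $ g(x) = F(x)^T F(x) $; then $ \nabla g = 2 J(x)^T F(x) $, and the true Hessian is $ 2 J^T J $ plus a term weighted by the residuals, so $ 2 J^T J $ is a good approximation whenever the residuals are small (the Gauss-Newton idea). Here is a quick check with ForwardDiff (the residual function matches the Rosenbrock example below; the evaluation point is arbitrary):

```julia
using ForwardDiff

F(x) = [1 - x[1], 100 * (x[2] - x[1]^2)]  # residuals
g(x) = F(x)' * F(x)                       # least squares objective

x = [0.5, 0.5]
J = ForwardDiff.jacobian(F, x)

H_gn   = 2 * J' * J                # Gauss-Newton approximation
H_true = ForwardDiff.hessian(g, x) # exact Hessian of the objective
@show H_gn
@show H_true   # differs from H_gn only through the residual-weighted curvature term
```

Since $ F $ here is linear in $ x_2 $, the two Hessians agree in the second row and column and differ only in the $ (1,1) $ entry.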

As with most nonlinear optimization problems, the benefits will typically become evident only when analytical or automatic differentiation is possible

If $ M = N $ and we know a root $ F(x^*) = 0 $ to the system of equations exists, then NLS is the de facto method for solving large systems of equations

An implementation of NLS is given in LeastSquaresOptim.jl

From the documentation

In [ ]:
using LeastSquaresOptim
function rosenbrock(x)
    [1 - x[1], 100 * (x[2]-x[1]^2)]
end
LeastSquaresOptim.optimize(rosenbrock, zeros(2), Dogleg())

Note: Because there is a name clash between Optim.jl and this package, to use both we need to qualify the use of the optimize function (i.e. LeastSquaresOptim.optimize)

Here, by default it will use AD with ForwardDiff.jl to calculate the Jacobian, but you could also provide your own calculation of the Jacobian (analytical or using finite differences) and/or calculate the function inplace

In [ ]:
function rosenbrock_f!(out, x)
    out[1] = 1 - x[1]
    out[2] = 100 * (x[2]-x[1]^2)
end
LeastSquaresOptim.optimize!(LeastSquaresProblem(x = zeros(2),
                                f! = rosenbrock_f!, output_length = 2))

# if you want to provide the Jacobian as well
function rosenbrock_g!(J, x)
    J[1, 1] = -1
    J[1, 2] = 0
    J[2, 1] = -200 * x[1]
    J[2, 2] = 100
end
LeastSquaresOptim.optimize!(LeastSquaresProblem(x = zeros(2),
                                f! = rosenbrock_f!, g! = rosenbrock_g!, output_length = 2))

Additional Notes

Watch this video from one of Julia’s creators on automatic differentiation