Code should execute sequentially if run in a Jupyter notebook

• See the set up page to install Jupyter, Julia (1.0+) and all necessary libraries
• Please direct feedback to contact@quantecon.org or the discourse forum
• For some notebooks, enable content with "Trust" on the command tab of Jupyter lab
• If using QuantEcon lectures for the first time on a computer, execute ] add InstantiateFromURL inside of a notebook or the REPL

Dynamic Stackelberg Problems

Overview

Previous lectures, including LQ dynamic programming, rational expectations equilibrium, and Markov perfect equilibrium, have studied decision problems that are recursive in what we can call “natural” state variables, such as

• stocks of capital (fiscal, financial and human)
• wealth
• information that helps forecast future prices and quantities that impinge on future payoffs

Optimal decision rules are functions of the natural state variables in problems that are recursive in the natural state variables

In this lecture, we describe problems that are not recursive in the natural state variables

Kydland and Prescott [KP77], [Pre77] and Calvo [Cal78] gave examples of such decision problems

These problems have the following features

• Time $t \geq 0$ actions of decision makers called followers depend on time $s \geq t$ decisions of another decision maker called a Stackelberg leader
• At time $t=0$, the Stackelberg leader chooses his actions for all times $s \geq 0$
• In choosing actions for all times at time $0$, the Stackelberg leader can be said to commit to a plan
• The Stackelberg leader has distinct optimal decision rules at time $t=0$, on the one hand, and at times $t \geq 1$, on the other hand
• The Stackelberg leader’s decision rules for $t=0$ and $t \geq 1$ have distinct state variables
• Variables that encode history dependence appear in optimal decision rules of the Stackelberg leader at times $t \geq 1$
• These properties of the Stackelberg leader’s decision rules are symptoms of the time inconsistency of optimal government plans

An example of a time inconsistent optimal rule is that of

• a large agent (e.g., a government) that confronts a competitive market composed of many small private agents, and in which
• private agents’ decisions at each date are influenced by their forecasts of the large agent’s future actions

The rational expectations equilibrium concept plays an essential role

A rational expectations restriction implies that when it chooses its future actions, the Stackelberg leader also chooses the followers’ expectations about those actions

The Stackelberg leader understands and exploits that situation

In a rational expectations equilibrium, the Stackelberg leader’s time $t$ actions confirm private agents’ forecasts of those actions

The requirement to confirm followers’ prior forecasts puts constraints on the Stackelberg leader’s time $t$ decisions that prevent its problem from being recursive in the natural state variables

These additional constraints make the Stackelberg leader’s decision rule at $t$ depend on the entire history of the natural state variables from time $0$ to time $t$

This lecture displays these principles within the tractable framework of linear quadratic problems

It is based on chapter 19 of [LS18]

The Stackelberg Problem

We use the optimal linear regulator (a.k.a. the linear-quadratic dynamic programming problem described in LQ Dynamic Programming problems) to solve a linear quadratic version of what is known as a dynamic Stackelberg problem

For now we refer to the Stackelberg leader as the government and the Stackelberg follower as the representative agent or private sector

Soon we’ll give an application with another interpretation of these two decision makers

Let $z_t$ be an $n_z \times 1$ vector of natural state variables

Let $x_t$ be an $n_x \times 1$ vector of endogenous forward-looking variables that are physically free to jump at $t$

Let $u_t$ be a vector of government instruments

The $z_t$ vector is inherited physically from the past

But $x_t$ is inherited as a consequence of decisions made by the Stackelberg planner at time $t=0$

Included in $x_t$ might be prices and quantities that adjust instantaneously to clear markets at time $t$

Let $y_t = \begin{bmatrix} z_t \\ x_t \end{bmatrix}$

Define the government’s one-period loss function [1]

$$r(y, u) = y' R y + u' Q u \tag{1}$$

Subject to an initial condition for $z_0$, but not for $x_0$, a government wants to maximize

$$-\sum_{t=0}^\infty \beta^t r(y_t, u_t) \tag{2}$$

The government makes policy in light of the model

$$\begin{bmatrix} I & 0 \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} z_{t+1} \\ x_{t+1} \end{bmatrix} = \begin{bmatrix} \hat A_{11} & \hat A_{12} \\ \hat A_{21} & \hat A_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} + \hat B u_t \tag{3}$$

We assume that the matrix on the left is invertible, so that we can multiply both sides of (3) by its inverse to obtain

$$\begin{bmatrix} z_{t+1} \\ x_{t+1} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} + B u_t \tag{4}$$

or

$$y_{t+1} = A y_t + B u_t \tag{5}$$
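The step from (3) to (4) is a single matrix inversion. A minimal numerical sketch, with made-up $1 \times 1$ blocks rather than values from any model in this lecture:

```julia
using LinearAlgebra

# Hypothetical 2x2 system: one natural state z, one jump variable x.
# G plays the role of the left-hand matrix [I 0; G21 G22] in (3);
# all numbers below are invented for illustration.
G    = [1.0  0.0;
        0.3  2.0]
Ahat = [0.9  0.1;
        0.2  1.1]
Bhat = [1.0, 0.5]

A = G \ Ahat      # A = G^{-1} * Ahat, giving the matrix in (4)
B = G \ Bhat      # B = G^{-1} * Bhat
```

Here `\` performs the left division `inv(G) * Ahat` without forming the inverse explicitly.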

The private sector’s behavior is summarized by the second block of equations of (4) or (5)

These equations typically include the first-order conditions of private agents’ optimization problem (i.e., their Euler equations)

These Euler equations summarize the forward-looking aspect of private agents’ behavior and express how their time $t$ decisions depend on government actions at times $s \geq t$

When combined with a stability condition to be imposed below, the Euler equations summarize the private sector’s best response to the sequence of actions by the government.

The government maximizes (2) by choosing sequences $\{u_t, x_t, z_{t+1}\}_{t=0}^\infty$ subject to (5) and an initial condition for $z_0$

Note that we have an initial condition for $z_0$ but not for $x_0$

$x_0$ is among the variables to be chosen at time $0$ by the Stackelberg leader

The government uses its understanding of the responses restricted by (5) to manipulate private sector actions

To indicate the features of the Stackelberg leader’s problem that make $x_t$ a vector of forward-looking variables, write the second block of system (3) as

$$x_t = \phi_1 z_t + \phi_2 z_{t+1} + \phi_3 u_t + \phi_0 x_{t+1} , \tag{6}$$

where $\phi_0 = \hat A_{22}^{-1} G_{22}$.

The models we study in this chapter typically satisfy

Forward-Looking Stability Condition: The eigenvalues of $\phi_0$ are bounded in modulus by $\beta^{-.5}$.

This stability condition makes equation (6) explosive if solved ‘backwards’ but stable if solved ‘forwards’.

See the appendix of chapter 2 of [LS18]

So we solve equation (6) forward to get

$$x_t = \sum_{j=0}^\infty \phi_0^j \left[ \phi_1 z_{t+j} + \phi_2 z_{t+j+1} + \phi_3 u_{t+j} \right] . \tag{7}$$

In choosing $u_t$ for $t \geq 1$ at time $0$, the government takes into account how future $z$ and $u$ affect earlier $x$ through equation (7).
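To see forward-solving in action, here is a small scalar sketch (all coefficients and paths invented) that builds $x_t$ from the truncated sum in (7) and checks that it satisfies the original difference equation (6):

```julia
beta = 0.95
phi0, phi1, phi2, phi3 = 0.6, 0.5, 0.2, 0.3   # |phi0| < beta^(-1/2): forward-stable

T = 200                             # truncation horizon for the infinite sum in (7)
z = [0.9^t for t in 0:(T + 1)]      # a given convergent path for z_t (made up)
u = [0.8^t for t in 0:T]            # a given convergent path for u_t (made up)

# x_t via (7), truncated at horizon T (1-based arrays: z[k] holds z_{k-1})
x(t) = sum(phi0^j * (phi1 * z[t + j + 1] + phi2 * z[t + j + 2] + phi3 * u[t + j + 1])
           for j in 0:(T - t - 1))

# the forward solution satisfies (6) at t = 0: x_0 = phi1 z_0 + phi2 z_1 + phi3 u_0 + phi0 x_1
lhs = x(0)
rhs = phi1 * z[1] + phi2 * z[2] + phi3 * u[1] + phi0 * x(1)
```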

A certainty equivalence principle allows us to work with a nonstochastic model (see LQ dynamic programming)

That is, we would attain the same decision rule if we were to replace $x_{t+1}$ with the forecast $E_t x_{t+1}$ and to add a shock process $C \epsilon_{t+1}$ to the right side of (5), where $\epsilon_{t+1}$ is an IID random vector with mean zero and identity covariance matrix

Let $s^t$ denote the history of any variable $s$ from $0$ to $t$

[MS85], [HER85], [PCL92], [Sar87], [Pea92], and others have all studied versions of the following problem:

Problem S: The Stackelberg problem is to maximize (2) by choosing an $x_0$ and a sequence of decision rules, the time $t$ component of which maps a time $t$ history of the natural state $z^t$ into a time $t$ decision $u_t$ of the Stackelberg leader

The Stackelberg leader chooses this sequence of decision rules once and for all at time $t=0$

Another way to say this is that he commits to this sequence of decision rules at time $0$

The maximization is subject to a given initial condition for $z_0$

But $x_0$ is among the objects to be chosen by the Stackelberg leader

The optimal decision rule is history dependent, meaning that $u_t$ depends not only on $z_t$ but at $t \geq 1$ also on lags of $z$

History dependence has two sources: (a) the government’s ability to commit [2] to a sequence of rules at time $0$, and (b) the forward-looking behavior of the private sector embedded in the second block of equations (4) as exhibited by (7)

Solving the Stackelberg Problem

Some Basic Notation

For any vector $a_t$, define $\vec a_t = [a_t, a_{t+1}, \ldots]$.

Define a feasible set of $(\vec y_1, \vec u_0)$ sequences

$$\Omega(y_0) = \left\{ (\vec y_1, \vec u_0) : y_{t+1} = A y_t + B u_t, \forall t \geq 0 \right\}$$

Note that in the definition of $\Omega(y_0)$, $y_0$ is taken as given.

Eventually, the $x_0$ component of $y_0$ will be chosen, though it is taken as given in $\Omega(y_0)$

Two Subproblems

Once again we use backward induction

We express the Stackelberg problem in terms of two subproblems:

Subproblem 1 is solved by a continuation Stackelberg leader at each date $t \geq 1$

Subproblem 2 is solved by the Stackelberg leader at $t=0$

Subproblem 1

$$v(y_0) = \max_{(\vec y_1, \vec u_0) \in \Omega(y_0)} - \sum_{t=0}^\infty \beta^t r(y_t, u_t) \tag{8}$$

Subproblem 2

$$w(z_0) = \max_{x_0} v(y_0) \tag{9}$$

Subproblem 1 takes the vector of forward-looking variables $x_0$ as given

Subproblem 2 optimizes over $x_0$

The value function $w(z_0)$ tells the value of the Stackelberg plan as a function of the vector of natural state variables at time $0$, $z_0$

Two Bellman equations

We now describe Bellman equations for $v(y)$ and $w(z_0)$

Subproblem 1

The value function $v(y)$ in subproblem 1 satisfies the Bellman equation

$$v(y) = \max_{u, y^*} \left\{ - r(y,u) + \beta v(y^*) \right\} \tag{10}$$

where the maximization is subject to

$$y^* = A y + B u \tag{11}$$

and $y^*$ denotes next period’s value.

Substituting $v(y) = - y'P y$ into Bellman equation (10) gives

$$-y' P y = {\rm max}_{ u, y^*} \left\{ - y' R y - u'Q u - \beta y^{* \prime} P y^* \right\}$$

which, as in the linear regulator lecture, gives rise to the algebraic matrix Riccati equation

$$P = R + \beta A' P A - \beta^2 A' P B ( Q + \beta B' P B)^{-1} B' P A \tag{12}$$

and the optimal decision rule coefficient vector

$$F = \beta( Q + \beta B' P B)^{-1} B' P A , \tag{13}$$

where the optimal decision rule is

$$u_t = - F y_t. \tag{14}$$
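As a sketch, the Riccati equation (12) can be solved by iterating the map to a fixed point and then forming $F$ from (13). The matrices below are small made-up examples, not the model of this lecture; in practice one would call `stationary_values` from QuantEcon.jl, as in the code at the end of this lecture.

```julia
using LinearAlgebra

# iterate P -> R + beta A'PA - beta^2 A'PB (Q + beta B'PB)^{-1} B'PA, eq. (12)
function solve_riccati(A, B, R, Q, beta; tol = 1e-12, maxit = 10_000)
    P = copy(R)
    for _ in 1:maxit
        Pn = R + beta * A' * P * A -
             beta^2 * A' * P * B * ((Q + beta * B' * P * B) \ (B' * P * A))
        maximum(abs, Pn - P) < tol && return Pn
        P = Pn
    end
    return P
end

# made-up 2-state, 1-control example
A = [0.95 0.0; 0.1 0.9]
B = reshape([1.0, 0.2], 2, 1)
R = [1.0 0.0; 0.0 0.5]
Q = fill(0.1, 1, 1)
beta = 0.95

P = solve_riccati(A, B, R, Q, beta)
F = beta * ((Q + beta * B' * P * B) \ (B' * P * A))   # eq. (13); rule (14) is u_t = -F y_t
```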

Subproblem 2

The value function $v(y_0)$ satisfies

$$v(y_0) = - z_0 ' P_{11} z_0 - 2 x_0' P_{21} z_0 - x_0' P_{22} x_0 \tag{15}$$

where

$$P = \left[ \begin{array}{cc} P_{11} & P_{12} \\ P_{21} & P_{22} \end{array} \right]$$

We find an optimal $x_0$ by equating to zero the gradient of $v(y_0)$ with respect to $x_0$:

$$-2 P_{21} z_0 - 2 P_{22} x_0 =0,$$

which implies that

$$x_0 = - P_{22}^{-1} P_{21} z_0. \tag{16}$$

Summary

We solve the Stackelberg problem by

• formulating a particular optimal linear regulator
• solving the associated matrix Riccati equation (12) for $P$
• computing $F$
• then partitioning $P$ to obtain representation (16)

Manifestation of time inconsistency

We have seen that for $t \geq 0$ the optimal decision rule for the Stackelberg leader has the form

$$u_t = - F y_t$$

or

$$u_t = f_{11} z_{t} + f_{12} x_{t}$$

where for $t \geq 1$, $x_t$ is effectively a state variable, albeit not a natural one, inherited from the past

This means that for $t \geq 1$, $x_t$ is not a function of $z_t$ only (though it is at $t=0$) and that $x_t$ exerts an independent influence on $u_t$

The situation is different at $t=0$

For $t=0$, the optimal choice of $x_0 = - P_{22}^{-1} P_{21} z_0$ described in equation (16) implies that

$$u_0 = (f_{11} - f_{12}P_{22}^{-1} P_{21}) z_0 \tag{17}$$

So for $t=0$, $u_0$ is a linear function of the natural state variable $z_0$ only

But for $t \geq 1$, $x_t \neq - P_{22}^{-1} P_{21} z_t$

Nor does $x_t$ equal any other linear combination of $z_t$ for $t \geq 1$

This means that $x_t$ has an independent role in shaping $u_t$ for $t \geq 1$

All of this means that the Stackelberg leader’s decision rule at $t \geq 1$ differs from its decision rule at $t =0$

As indicated at the beginning of this lecture, this difference is a symptom of the time inconsistency of the optimal Stackelberg plan

The history dependence of the government’s plan can be expressed in the dynamics of Lagrange multipliers $\mu_x$ on the last $n_x$ equations of (3) or (4)

These multipliers measure the cost today of honoring past government promises about current and future settings of $u$

We shall soon show that as a result of optimally choosing $x_0$, it is appropriate to initialize the multipliers to zero at time $t=0$

This is true because at $t=0$, there are no past promises about $u$ to honor

But the multipliers $\mu_x$ take nonzero values thereafter, reflecting future costs to the government of confirming the private sector’s earlier expectations about its time $t$ actions

From the linear regulator lecture, the vector of shadow prices on the transition equations is $\mu_t = P y_t$. Partition $\mu_t$ conformably with the partition of $y_t$ into $z_t$ and $x_t$:

$$\mu_t = \left[\begin{array}{c} \mu_{zt} \\ \mu_{xt} \end{array} \right]$$

The shadow price $\mu_{xt}$ on the forward-looking variables $x_t$ evidently equals

$$\mu_{xt} = P_{21} z_t + P_{22} x_t. \tag{18}$$

So (16) is equivalent to

$$\mu_{x0} = 0 . \tag{19}$$
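A quick numerical check of (16), (18), and (19), with a made-up symmetric $P$ ($n_z = 2$ natural states, $n_x = 1$ jump variable): choosing $x_0$ by (16) sets the multiplier $\mu_{x0}$ in (18) to zero.

```julia
using LinearAlgebra

P = [4.0 1.0 0.5;
     1.0 3.0 0.2;
     0.5 0.2 2.0]            # any symmetric P with invertible P22 will do (made up)
nz = 2                        # number of natural state variables
P21 = P[nz+1:end, 1:nz]
P22 = P[nz+1:end, nz+1:end]

z0 = [1.0, 0.3]
x0 = -P22 \ (P21 * z0)        # eq. (16)
mu_x0 = P21 * z0 + P22 * x0   # eq. (18); equals zero, which is (19)
```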

A Large Firm With a Competitive Fringe

As an example, this section studies the equilibrium of an industry with a large firm that acts as a Stackelberg leader with respect to a competitive fringe

Sometimes the large firm is called ‘the monopolist’ even though there are actually many firms in the industry

The industry produces a single nonstorable homogeneous good, the quantity of which is chosen in the previous period

One large firm produces $Q_t$ and a representative firm in a competitive fringe produces $q_t$

The representative firm in the competitive fringe acts as a price taker and chooses sequentially

The large firm commits to a policy at time $0$, taking into account its ability to manipulate the price sequence, both directly through the effects of its quantity choices on prices, and indirectly through the responses of the competitive fringe to its forecasts of prices [3]

The costs of production are ${\cal C}_t = e Q_t + .5 g Q_t^2+ .5 c (Q_{t+1} - Q_{t})^2$ for the large firm and $\sigma_t= d q_t + .5 h q_t^2 + .5 c (q_{t+1} - q_t)^2$ for the competitive firm, where $d>0, e >0, c>0, g >0, h>0$ are cost parameters

There is a linear inverse demand curve

$$p_t = A_0 - A_1 (Q_t + \overline q_t) + v_t, \tag{20}$$

where $A_0, A_1$ are both positive and $v_t$ is a disturbance to demand governed by

$$v_{t+1}= \rho v_t + C_\epsilon \check \epsilon_{t+1} \tag{21}$$

where $| \rho | < 1$ and $\check \epsilon_{t+1}$ is an IID sequence of random variables with mean zero and variance $1$

In (20), $\overline q_t$ is equilibrium output of the representative competitive firm

In equilibrium, $\overline q_t = q_t$, but we must distinguish between $q_t$ and $\overline q_t$ in posing the optimum problem of a competitive firm

The competitive fringe

The representative competitive firm regards $\{p_t\}_{t=0}^\infty$ as an exogenous stochastic process and chooses an output plan to maximize

$$E_0 \sum_{t=0}^\infty \beta^t \left\{ p_t q_t - \sigma_t \right\}, \quad \beta \in(0,1) \tag{22}$$

subject to $q_0$ given, where $E_t$ is the mathematical expectation based on time $t$ information

Let $i_t = q_{t+1} - q_t$

We regard $i_t$ as the representative firm’s control at $t$

The first-order conditions for maximizing (22) are

$$i_t = E_t \beta i_{t+1} -c^{-1} \beta h q_{t+1} + c^{-1} \beta E_t( p_{t+1} -d) \tag{23}$$

for $t \geq 0$

We appeal to a certainty equivalence principle to justify working with a non-stochastic version of (23) formed by dropping the expectation operator and the random term $\check \epsilon_{t+1}$ from (21)

We use a method of [Sar79] and [Tow83] [4]

We shift (20) forward one period, replace conditional expectations with realized values, use (20) to substitute for $p_{t+1}$ in (23), and set $q_t = \overline q_t$ and $i_t = \overline i_t$ for all $t\geq 0$ to get

$$\overline i_t = \beta \overline i_{t+1} - c^{-1} \beta h \overline q_{t+1} + c^{-1} \beta (A_0-d) - c^{-1} \beta A_1 \overline q_{t+1} - c^{-1} \beta A_1 Q_{t+1} + c^{-1} \beta v_{t+1} \tag{24}$$

Given sufficiently stable sequences $\{Q_t, v_t\}$, we could solve (24) and $\overline i_t = \overline q_{t+1} - \overline q_t$ to express the competitive fringe’s output sequence as a function of the (tail of the) monopolist’s output sequence

(This would be a version of representation (7))

It is this feature that makes the monopolist’s problem fail to be recursive in the natural state variables $\overline q, Q$

The monopolist arrives at period $t >0$ facing the constraint that it must confirm the expectations about its time $t$ decision upon which the competitive fringe based its decisions at dates before $t$

The monopolist’s problem

The monopolist views the sequence of the competitive firm’s Euler equations as constraints on its own opportunities

They are implementability constraints on the monopolist’s choices

Including the implementability constraints, we can represent the constraints in terms of the transition law facing the monopolist:

\begin{aligned} \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ A_0 -d & 1 & - A_1 & - A_1 -h & c \end{bmatrix} \begin{bmatrix} 1 \\ v_{t+1} \\ Q_{t+1} \\ \overline q_{t+1} \\ \overline i_{t+1} \end{bmatrix} & = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \rho & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & {c\over \beta} \end{bmatrix} \begin{bmatrix} 1 \\ v_t \\ Q_t \\ \overline q_t \\ \overline i_t \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} u_t \end{aligned} \tag{25}

where $u_t = Q_{t+1} - Q_t$ is the control of the monopolist at time $t$

The last row portrays the implementability constraints (24)

Represent (25) as

$$y_{t+1} = A y_t + B u_t \tag{26}$$

Although we have included the competitive fringe’s choice variable $\overline i_t$ as a component of the “state” $y_t$ in the monopolist’s transition law (26), $\overline i_t$ is actually a “jump” variable

Nevertheless, the analysis above implies that the solution of the large firm’s problem is encoded in the Riccati equation associated with (26) as the transition law

Let’s decode it

To match our general setup, we partition $y_t$ as $y_t' = \begin{bmatrix} z_t' & x_t' \end{bmatrix}$ where $z_t' = \begin{bmatrix} 1 & v_t & Q_t & \overline q_t \end{bmatrix}$ and $x_t = \overline i_t$

The monopolist’s problem is

$$\max_{\{u_t, p_{t+1}, Q_{t+1}, \overline q_{t+1}, \overline i_t\}} \sum_{t=0}^\infty \beta^t \left\{ p_t Q_t - {\cal C}_t \right\}$$

subject to the given initial condition for $z_0$, equations (20) and (24) and $\overline i_t = \overline q_{t+1} - \overline q_t$, as well as the laws of motion of the natural state variables $z$

Notice that the monopolist in effect chooses the price sequence, as well as the quantity sequence of the competitive fringe, albeit subject to the restrictions imposed by the behavior of consumers, as summarized by the demand curve (20) and the implementability constraint (24) that describes the best responses of firms in the competitive fringe

By substituting (20) into the above objective function, the monopolist’s problem can be expressed as

$$\max_{\{u_t\}} \sum_{t=0}^\infty \beta^t \left\{ (A_0 - A_1 (\overline q_t + Q_t) + v_t) Q_t - eQ_t - .5gQ_t^2 - .5 c u_t^2 \right\} \tag{27}$$

subject to (26)

This can be written

$$\max_{\{u_t\}} - \sum_{t=0}^\infty \beta^t \left\{ y_t' R y_t + u_t' Q u_t \right\} \tag{28}$$

subject to (26) where

$$R = - \begin{bmatrix} 0 & 0 & {A_0-e \over 2} & 0 & 0 \\ 0 & 0 & {1 \over 2} & 0 & 0 \\ {A_0-e \over 2} & {1 \over 2} & - A_1 -.5g & -{A_1 \over 2} & 0 \\ 0 & 0 & -{A_1 \over 2} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

and $Q= {c \over 2}$

Under the Stackelberg plan, $u_t = - F y_t$, which implies that the evolution of $y$ under the Stackelberg plan is

$$\overline y_{t+1} = (A - BF) \overline y_t \tag{29}$$

where $\overline y_t = \begin{bmatrix} 1 & v_t & Q_t & \overline q_t & \overline i_t \end{bmatrix}'$

Recursive formulation of a follower’s problem

We now make use of a “Big $K$, little $k$” trick (see rational expectations equilibrium) to formulate a recursive version of a follower’s problem cast in terms of an ordinary Bellman equation

The individual firm faces $\{p_t\}$ as a price taker and believes

\begin{aligned} p_t & = A_0 - A_1 Q_t - A_1 \overline q_t + v_t \\ & \equiv E_p \overline y_t \end{aligned} \tag{30}

(Please remember that $\overline q_t$ is a component of $\overline y_t$)

From the point of the view of a representative firm in the competitive fringe, $\{\overline y_t\}$ is an exogenous process

A representative fringe firm wants to forecast $\overline y$ because it wants to forecast what it regards as the exogenous price process $\{p_t\}$

Therefore it wants to forecast the determinants of future prices

• future values of $Q$ and
• future values of $\overline q$

An individual follower firm confronts state $\begin{bmatrix} \overline y_t & q_t \end{bmatrix}'$ where $q_t$ is its current output as opposed to $\overline q$ within $\overline y$

It believes that it chooses future values of $q_t$ but not future values of $\overline q_t$

(This is an application of a ‘’Big $K$, little $k$‘’ idea)

The follower faces the law of motion

$$\begin{bmatrix} \overline y_{t+1} \\ q_{t+1} \end{bmatrix} = \begin{bmatrix} A - BF & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \overline y_{t} \\ q_{t} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} i_t \tag{31}$$

We calculated $F$ and therefore $A - B F$ earlier

We can restate the optimization problem of the representative competitive firm

The firm takes $\overline y_t$ as an exogenous process and chooses an output plan $\{q_t\}$ to maximize

$$E_0 \sum_{t=0}^\infty \beta^t \left\{ p_t q_t - \sigma_t \right\}, \quad \beta \in(0,1)$$

subject to $q_0$ given, the law of motion (31), and the price function (30), and where the costs are still $\sigma_t= d q_t + .5 h q_t^2 + .5 c (q_{t+1} - q_t)^2$

The representative firm’s problem is a linear-quadratic dynamic programming problem with matrices $A_s, B_s, Q_s, R_s$ that can be constructed easily from the above information.

The representative firm’s decision rule can be represented as

$$i_t = - F_s \begin{bmatrix} 1 \\ v_t \\ Q_t \\ \overline q_t \\ \overline i_t \\ q_t \end{bmatrix} \tag{32}$$

Now let’s stare at the decision rule (32) for $i_t$, apply “Big $K$, little $k$” logic again, and ask what we want in order to verify a recursive representation of a representative follower’s choice problem

• We want decision rule (32) to have the property that $i_t = \overline i_t$ when we evaluate it at $q_t = \overline q_t$

We inherit these desires from a “Big $K$, little $k$” logic

Here we apply a “Big $K$, little $k$” logic in two parts to make the “representative firm be representative” after solving the representative firm’s optimization problem

• We want $q_t = \overline q_t$
• We want $i_t = \overline i_t$

Numerical example

We computed the optimal Stackelberg plan for parameter settings $A_0, A_1, \rho, C_\epsilon, c, d, e, g, h, \beta$ = $100, 1, .8, .2, 1, 20, 20, .2, .2, .95$ [5]

For these parameter values the monopolist’s decision rule is

$$u_t = (Q_{t+1} - Q_t) =\begin{bmatrix} 83.98 & 0.78 & -0.95 & -1.31 & -2.07 \end{bmatrix} \begin{bmatrix} 1 \\ v_t \\ Q_t \\ \overline q_t \\ \overline i_t \end{bmatrix}$$

for $t \geq 0$

and

$$x_0 \equiv \overline i_0 = \begin{bmatrix} 31.08 & 0.29 & -0.15 & -0.56 \end{bmatrix} \begin{bmatrix} 1 \\ v_0 \\ Q_0 \\ \overline q_0 \end{bmatrix}$$

For this example, starting from $z_0 =\begin{bmatrix} 1 & v_0 & Q_0 & \overline q_0\end{bmatrix} = \begin{bmatrix} 1& 0 & 25 & 46 \end{bmatrix}$, the monopolist chooses to set $i_0=1.43$

That choice implies that

• $i_1=0.25$, and
• $z_1 = \begin{bmatrix} 1 & v_1 & Q_1 & \overline {q}_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 21.83 & 47.43 \end{bmatrix}$

A monopolist who started from the initial conditions $\tilde z_0= z_1$ would set $i_0=1.10$ instead of $.25$ as called for under the original optimal plan

The preceding little calculation reflects the time inconsistency of the monopolist’s optimal plan

The recursive representation of the decision rule for a representative fringe firm is

$$i_t = \begin{bmatrix} 0 & 0 & 0 & .34 & 1 & - .34 \end{bmatrix} \begin{bmatrix} 1 \\ v_t \\ Q_t \\ \overline q_t \\ \overline i_t \\ q_t \end{bmatrix} ,$$

which we have computed by solving the appropriate linear-quadratic dynamic programming problem described above

Notice that, as expected, $i_t = \overline i_t$ when we evaluate this decision rule at $q_t = \overline q_t$

Concluding Remarks

This lecture is our first encounter with a class of problems in which optimal decision rules are history dependent [6]

We shall encounter other examples in lectures optimal taxation with state-contingent debt and optimal taxation without state-contingent debt

Many more examples of such problems are described in chapters 20-24 of [LS18]

Exercises

Exercise 1

There is no uncertainty

For $t \geq 0$, a monetary authority sets the growth of (the log of) money according to

$$m_{t+1} = m_t + u_t \tag{33}$$

subject to the initial condition $m_0>0$ given

The demand for money is

$$m_t - p_t = - \alpha (p_{t+1} - p_t) \tag{34}$$

where $\alpha > 0$ and $p_t$ is the log of the price level

Equation (34) can be interpreted as the Euler equation of the holders of money

a. Briefly interpret how (34) makes the demand for real balances vary inversely with the expected rate of inflation. Temporarily (only for this part of the exercise) drop (33) and assume instead that $\{m_t\}$ is a given sequence satisfying $\sum_{t=0}^\infty m_t^2 < + \infty$. Solve the difference equation (34) “forward” to express $p_t$ as a function of current and future values of $m_s$. Note how future values of $m$ influence the current price level.
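As a sketch of the forward solution, rearrange the money demand equation (34) as

$$p_t = \frac{\alpha}{1+\alpha}\, p_{t+1} + \frac{1}{1+\alpha}\, m_t$$

and iterate forward; since $\frac{\alpha}{1+\alpha} \in (0,1)$ and $\sum_{t} m_t^2 < + \infty$ rules out the explosive term, this yields

$$p_t = \frac{1}{1+\alpha} \sum_{s=0}^\infty \left( \frac{\alpha}{1+\alpha} \right)^s m_{t+s},$$

so the current price level is a geometric weighted average of current and future money supplies.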

At time $0$, a monetary authority chooses (commits to) a possibly history-dependent strategy for setting $\{u_t\}_{t=0}^\infty$

The monetary authority orders sequences $\{m_t, p_t\}_{t=0}^\infty$ according to

$$-\sum_{t=0}^\infty .95^t \left[ (p_t - \overline p)^2 + u_t^2 + .00001 m_t^2 \right] \tag{35}$$

Assume that $m_0=10, \alpha=5, \bar p=1$

b. Please briefly interpret this problem as one where the monetary authority wants to stabilize the price level, subject to costs of adjusting the money supply and some implementability constraints. (We include the term $.00001m_t^2$ for purely technical reasons that you need not discuss.)

c. Please write and run a Julia program to find the optimal sequence $\{u_t\}_{t=0}^\infty$

d. Display the optimal decision rule for $u_t$ as a function of $u_{t-1}, m_t, m_{t-1}$

e. Compute the optimal $\{m_t, p_t\}_t$ sequence for $t=0, \ldots, 10$

Hints:

• The optimal $\{m_t\}$ sequence must satisfy $\sum_{t=0}^\infty (.95)^t m_t^2 < +\infty$
• Code can be found in the file lqcontrol.jl from the QuantEcon.jl package that implements the optimal linear regulator

Exercise 2

A representative consumer has quadratic utility functional

$$\sum_{t=0}^\infty \beta^t \left\{ -.5 (b -c_t)^2 \right\} \tag{36}$$

where $\beta \in (0,1)$, $b = 30$, and $c_t$ is time $t$ consumption

The consumer faces a sequence of budget constraints

$$c_t + a_{t+1} = (1+r)a_t + y_t - \tau_t \tag{37}$$

where

• $a_t$ is the household’s holdings of an asset at the beginning of $t$
• $r >0$ is a constant net interest rate satisfying $\beta (1+r) <1$
• $y_t$ is the consumer’s endowment at $t$

The consumer’s plan for $(c_t, a_{t+1})$ has to obey the boundary condition $\sum_{t=0}^\infty \beta^t a_t^2 < + \infty$

Assume that $y_0, a_0$ are given initial conditions and that $y_t$ obeys

$$y_t = \rho y_{t-1}, \quad t \geq 1, \tag{38}$$

where $|\rho| <1$. Assume that $a_0=0$, $y_0=3$, and $\rho=.9$

At time $0$, a planner commits to a plan for taxes $\{\tau_t\}_{t=0}^\infty$

The planner designs the plan to maximize

$$\sum_{t=0}^\infty \beta^t \left\{ -.5 (c_t-b)^2 - \tau_t^2\right\} \tag{39}$$

over $\{c_t, \tau_t\}_{t=0}^\infty$ subject to the implementability constraints in (37) for $t \geq 0$ and

$$\lambda_t = \beta (1+r) \lambda_{t+1} \tag{40}$$

for $t\geq 0$, where $\lambda_t \equiv (b-c_t)$

a. Argue that (40) is the Euler equation for a consumer who maximizes (36) subject to (37), taking $\{\tau_t\}$ as a given sequence

b. Formulate the planner’s problem as a Stackelberg problem

c. For $\beta=.95, b=30, \beta(1+r)=.95$, formulate an artificial optimal linear regulator problem and use it to solve the Stackelberg problem

d. Give a recursive representation of the Stackelberg plan for $\tau_t$

Footnotes

[1] The problem assumes that there are no cross products between states and controls in the return function. A simple transformation converts a problem whose return function has cross products into an equivalent problem that has no cross products. For example, see [HS08] (chapter 4, pp. 72-73).

[2] The government would make different choices were it to choose sequentially, that is, were it to select its time $t$ action at time $t$.

[3] [HS08] (chapter 16), uses this model as a laboratory to illustrate an equilibrium concept featuring robustness in which at least one of the agents has doubts about the stochastic specification of the demand shock process.

[4] They used this method to compute a rational expectations competitive equilibrium. Their key step was to eliminate price and output by substituting from the inverse demand curve and the production function into the firm’s first-order conditions to get a difference equation in capital.

[5] These calculations were performed by these functions:

Setup

In [1]:
using InstantiateFromURL
activate_github("QuantEcon/QuantEconLecturePackages", tag = "v0.9.7");

In [2]:
using LinearAlgebra, Statistics, Compat, QuantEcon, Roots

In [3]:
function Oligopoly(;a0 = 100.0,
                   a1 = 1.0,
                   rho = 0.8,
                   c_eps = 0.2,
                   c = 1.0,
                   d = 20.0,
                   e = 20.0,
                   g = 0.2,
                   h = 0.2,
                   beta = 0.95)

    # left-hand side of (25)
    Alhs = I + zeros(5, 5)
    Alhs[5, :] = [a0-d 1 -a1 -a1-h c]
    Alhsinv = inv(Alhs)

    # right-hand side of (25)
    Brhs = [0; 0; 1; 0; 0]
    Arhs = I + zeros(5, 5)
    Arhs[2, 2] = rho
    Arhs[4, 5] = 1
    Arhs[5, 5] = c / beta

    R = [0 0 (a0-e) / 2 0 0
         0 0 1/2 0 0
         (a0-e) / 2 1 / 2 -a1 - g / 2 -a1 / 2 0
         0 0 -a1 / 2 0 0
         0 0 0 0 0]

    Rf = [0 0 0 0 0 (a0-d) / 2
          0 0 0 0 0 1/2
          0 0 0 0 0 -a1/2
          0 0 0 0 0 -a1/2
          0 0 0 0 0 0
          (a0-d) / 2 1 / 2 -a1 / 2 -a1 / 2 0 -h / 2]

    Q = c / 2

    A = Alhsinv * Arhs
    B = Alhsinv * Brhs

    return (a0 = a0, a1 = a1, rho = rho, c_eps = c_eps,
            c = c, d = d, e = e, g = g, h = h, beta = beta,
            A = A, B = B, Q = Q, R = R, Rf = Rf)
end

function find_PFd(olig)
    lq = QuantEcon.LQ(olig.Q, -olig.R, olig.A, olig.B; bet = olig.beta)
    P, F, d = stationary_values(lq)

    Af = vcat(hcat(olig.A - olig.B * F, [0; 0; 0; 0; 0]), [0 0 0 0 0 1])
    Bf = [0; 0; 0; 0; 0; 1]

    lqf = QuantEcon.LQ(olig.Q, -olig.Rf, Af, Bf; bet = olig.beta)
    Pf, Ff, df = stationary_values(lqf)

    return P, F, d, Pf, Ff, df
end

function solve_for_opt_policy(olig, eta0 = 0.0, Q0 = 0.0, q0 = 0.0)

    # step 1 & 2: formulate/solve the optimal linear regulator
    P, F, d, Pf, Ff, df = find_PFd(olig)

    # step 3: convert implementation into state variables (find coeffs)
    P22 = P[end, end]
    P21 = transpose(P[end, 1:end-1])
    P22inv = inv(P22)

    # step 4: find optimal x_0 and \mu_{x, 0}
    z0 = [1; eta0; Q0; q0]
    x0 = -P22inv * P21 * z0
    D0 = -P22inv * P21

    # return -F and -Ff because we use u_t = -F y_t
    return P, -F, D0, Pf, -Ff
end

olig = Oligopoly()
P, F, D0, Pf, Ff = solve_for_opt_policy(olig)

# checking time-inconsistency
# arbitrary initial z_0
y0 = [1; 0; 25; 46]
# optimal x_0 = i_0
i0 = D0 * y0
# iterate one period using the closed-loop system
y1 = (olig.A + olig.B * F) * [y0; i0]
# the last element of y_1 is x_1 = i_1
i1_0 = y1[end, 1]

# compare this to the case when the leader solves a Stackelberg problem
# in period 1. if in period 1 the leader could choose i1 given
# (1, v_1, Q_1, \bar{q}_1)
i1_1 = D0 * y1[1:end - 1, 1]

Out[3]:
1.103839239286826

[6] The techniques in this lecture have further applications; they are related to the method recommended by [KP80].
