diff --git a/lectures/_static/lecture_specific/cake_eating_numerical/analytical.py b/lectures/_static/lecture_specific/cake_eating_numerical/analytical.py
deleted file mode 100644
index ff092a781..000000000
--- a/lectures/_static/lecture_specific/cake_eating_numerical/analytical.py
+++ /dev/null
@@ -1,9 +0,0 @@
-def c_star(x, β, γ):
-
- return (1 - β ** (1/γ)) * x
-
-
-def v_star(x, β, γ):
-
- return (1 - β**(1 / γ))**(-γ) * (x**(1-γ) / (1-γ))
-
diff --git a/lectures/_static/lecture_specific/optgrowth/cd_analytical.py b/lectures/_static/lecture_specific/optgrowth/cd_analytical.py
deleted file mode 100644
index ec713ca90..000000000
--- a/lectures/_static/lecture_specific/optgrowth/cd_analytical.py
+++ /dev/null
@@ -1,17 +0,0 @@
-
-def v_star(y, α, β, μ):
- """
- True value function
- """
- c1 = np.log(1 - α * β) / (1 - β)
- c2 = (μ + α * np.log(α * β)) / (1 - α)
- c3 = 1 / (1 - β)
- c4 = 1 / (1 - α * β)
- return c1 + c2 * (c3 - c4) + c4 * np.log(y)
-
-def σ_star(y, α, β):
- """
- True optimal policy
- """
- return (1 - α * β) * y
-
diff --git a/lectures/_static/lecture_specific/optgrowth/solve_model.py b/lectures/_static/lecture_specific/optgrowth/solve_model.py
deleted file mode 100644
index 06333effc..000000000
--- a/lectures/_static/lecture_specific/optgrowth/solve_model.py
+++ /dev/null
@@ -1,29 +0,0 @@
-def solve_model(og,
- tol=1e-4,
- max_iter=1000,
- verbose=True,
- print_skip=25):
- """
- Solve model by iterating with the Bellman operator.
-
- """
-
- # Set up loop
- v = og.u(og.grid) # Initial condition
- i = 0
- error = tol + 1
-
- while i < max_iter and error > tol:
- v_greedy, v_new = T(v, og)
- error = np.max(np.abs(v - v_new))
- i += 1
- if verbose and i % print_skip == 0:
- print(f"Error at iteration {i} is {error}.")
- v = v_new
-
- if error > tol:
- print("Failed to converge!")
- elif verbose:
- print(f"\nConverged in {i} iterations.")
-
- return v_greedy, v_new
diff --git a/lectures/_toc.yml b/lectures/_toc.yml
index 4284de44a..126dca47d 100644
--- a/lectures/_toc.yml
+++ b/lectures/_toc.yml
@@ -75,12 +75,12 @@ parts:
- caption: Household Problems
numbered: true
chapters:
- - file: cake_eating_problem
+ - file: cake_eating
- file: cake_eating_numerical
- - file: optgrowth
- - file: optgrowth_fast
- - file: coleman_policy_iter
- - file: egm_policy_iter
+ - file: cake_eating_stochastic
+ - file: cake_eating_time_iter
+ - file: cake_eating_egm
+ - file: cake_eating_egm_jax
- file: ifp
- file: ifp_advanced
- caption: LQ Control
diff --git a/lectures/cake_eating_problem.md b/lectures/cake_eating.md
similarity index 100%
rename from lectures/cake_eating_problem.md
rename to lectures/cake_eating.md
diff --git a/lectures/cake_eating_egm.md b/lectures/cake_eating_egm.md
new file mode 100644
index 000000000..e9f7e272d
--- /dev/null
+++ b/lectures/cake_eating_egm.md
@@ -0,0 +1,335 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{raw} jupyter
+
+```
+
+# {index}`Cake Eating V: The Endogenous Grid Method <single: Cake Eating V: The Endogenous Grid Method>`
+
+```{contents} Contents
+:depth: 2
+```
+
+
+## Overview
+
+Previously, we solved the stochastic cake eating problem using
+
+1. {doc}`value function iteration <cake_eating_stochastic>`
+1. {doc}`Euler equation based time iteration <cake_eating_time_iter>`
+
+We found time iteration to be significantly more accurate and efficient.
+
+In this lecture, we'll look at a clever twist on time iteration called the **endogenous grid method** (EGM).
+
+EGM is a numerical method for implementing policy iteration invented by [Chris Carroll](https://econ.jhu.edu/directory/christopher-carroll/).
+
+The original reference is {cite}`Carroll2006`.
+
+Let's start with some standard imports:
+
+```{code-cell} ipython
+import matplotlib.pyplot as plt
+import numpy as np
+import quantecon as qe
+```
+
+## Key Idea
+
+Let's start by reminding ourselves of the theory and then see how the numerics fit in.
+
+### Theory
+
+Take the model set out in {doc}`Cake Eating IV <cake_eating_time_iter>`, following the same terminology and notation.
+
+The Euler equation is
+
+```{math}
+:label: egm_euler
+
+(u'\circ \sigma^*)(x)
+= \beta \int (u'\circ \sigma^*)(f(x - \sigma^*(x)) z) f'(x - \sigma^*(x)) z \phi(dz)
+```
+
+As we saw, the Coleman-Reffett operator is a nonlinear operator $K$ engineered so that $\sigma^*$ is a fixed point of $K$.
+
+It takes as its argument a continuous strictly increasing consumption policy $\sigma \in \Sigma$.
+
+It returns a new function $K \sigma$, where $(K \sigma)(x)$ is the $c \in (0, \infty)$ that solves
+
+```{math}
+:label: egm_coledef
+
+u'(c)
+= \beta \int (u' \circ \sigma) (f(x - c) z ) f'(x - c) z \phi(dz)
+```
+
+### Exogenous Grid
+
+As discussed in {doc}`Cake Eating IV <cake_eating_time_iter>`, to implement the method on a computer, we need a numerical approximation.
+
+In particular, we represent a policy function by a set of values on a finite grid.
+
+The function itself is reconstructed from this representation when necessary, using interpolation or some other method.
+
+{doc}`Previously <cake_eating_time_iter>`, to obtain a finite representation of an updated consumption policy, we
+
+* fixed a grid of income points $\{x_i\}$
+* calculated the consumption value $c_i$ corresponding to each
+ $x_i$ using {eq}`egm_coledef` and a root-finding routine
+
+Each $c_i$ is then interpreted as the value of the function $K \sigma$ at $x_i$.
+
+Thus, with the points $\{x_i, c_i\}$ in hand, we can reconstruct $K \sigma$ via approximation.
+
+Iteration then continues...
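+
+To make the contrast with EGM concrete, below is a minimal sketch of a single
+exogenous-grid update step in a toy *deterministic* setting (log utility,
+Cobb-Douglas production, no shocks).  The grid, initial guess and bracketing
+values are illustrative assumptions rather than the lecture's model; the point
+is simply that every $c_i$ requires a numerical root-find.
+
+```{code-cell} python3
+import numpy as np
+from scipy.optimize import brentq
+
+# Toy ingredients (illustrative only, not the lecture's model)
+β, α = 0.96, 0.4
+u_prime = lambda c: 1 / c              # u(c) = ln c
+f = lambda k: k**α                     # Cobb-Douglas, no shock
+f_prime = lambda k: α * k**(α - 1)
+
+x_grid = np.linspace(0.2, 4.0, 5)      # exogenous grid for income x
+σ_vals = 0.5 * x_grid                  # a crude current guess of the policy
+σ = lambda x: np.interp(x, x_grid, σ_vals)
+
+def euler_diff(c, x):
+    # Gap between the two sides of the Euler equation at (x, c)
+    k = x - c
+    return u_prime(c) - β * u_prime(σ(f(k))) * f_prime(k)
+
+# One exogenous-grid update: root-find for c_i at each grid point x_i
+c_new = np.array([brentq(euler_diff, 1e-8, x - 1e-8, args=(x,))
+                  for x in x_grid])
+print(c_new)
+```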
+
+### Endogenous Grid
+
+The method discussed above requires a root-finding routine to find the
+$c_i$ corresponding to a given income value $x_i$.
+
+Root-finding is costly because it typically involves a significant number of
+function evaluations.
+
+As pointed out by Carroll {cite}`Carroll2006`, we can avoid this if
+$x_i$ is chosen endogenously.
+
+The only assumption required is that $u'$ is invertible on $(0, \infty)$.
+
+Let $(u')^{-1}$ be the inverse function of $u'$.
+
+The idea is this:
+
+* First, we fix an *exogenous* grid $\{k_i\}$ for capital ($k = x - c$).
+* Then we obtain $c_i$ via
+
+```{math}
+:label: egm_getc
+
+c_i =
+(u')^{-1}
+\left\{
+ \beta \int (u' \circ \sigma) (f(k_i) z ) \, f'(k_i) \, z \, \phi(dz)
+\right\}
+```
+
+* Finally, for each $c_i$ we set $x_i = c_i + k_i$.
+
+It is clear that each $(x_i, c_i)$ pair constructed in this manner satisfies {eq}`egm_coledef`.
+
+With the points $\{x_i, c_i\}$ in hand, we can reconstruct $K \sigma$ via approximation as before.
+
+The name EGM comes from the fact that the grid $\{x_i\}$ is determined **endogenously**.
+
+## Implementation
+
+As in {doc}`Cake Eating IV <cake_eating_time_iter>`, we will start with a simple setting
+where
+
+* $u(c) = \ln c$,
+* production is Cobb-Douglas, and
+* the shocks are lognormal.
+
+This will allow us to make comparisons with the analytical solutions
+
+```{code-cell} python3
+def v_star(x, α, β, μ):
+ """
+ True value function
+ """
+ c1 = np.log(1 - α * β) / (1 - β)
+ c2 = (μ + α * np.log(α * β)) / (1 - α)
+ c3 = 1 / (1 - β)
+ c4 = 1 / (1 - α * β)
+ return c1 + c2 * (c3 - c4) + c4 * np.log(x)
+
+def σ_star(x, α, β):
+ """
+ True optimal policy
+ """
+ return (1 - α * β) * x
+```
+
+We reuse the `Model` structure from {doc}`Cake Eating IV <cake_eating_time_iter>`.
+
+```{code-cell} python3
+from typing import NamedTuple, Callable
+
+class Model(NamedTuple):
+ u: Callable # utility function
+ f: Callable # production function
+ β: float # discount factor
+ μ: float # shock location parameter
+ s: float # shock scale parameter
+ grid: np.ndarray # state grid
+ shocks: np.ndarray # shock draws
+ α: float # production function parameter
+ u_prime: Callable # derivative of utility
+ f_prime: Callable # derivative of production
+ u_prime_inv: Callable # inverse of u_prime
+
+
+def create_model(u: Callable,
+ f: Callable,
+ β: float = 0.96,
+ μ: float = 0.0,
+ s: float = 0.1,
+ grid_max: float = 4.0,
+ grid_size: int = 120,
+ shock_size: int = 250,
+ seed: int = 1234,
+ α: float = 0.4,
+ u_prime: Callable = None,
+ f_prime: Callable = None,
+ u_prime_inv: Callable = None) -> Model:
+ """
+ Creates an instance of the cake eating model.
+ """
+ # Set up grid
+ grid = np.linspace(1e-4, grid_max, grid_size)
+
+ # Store shocks (with a seed, so results are reproducible)
+ np.random.seed(seed)
+ shocks = np.exp(μ + s * np.random.randn(shock_size))
+
+ return Model(u=u, f=f, β=β, μ=μ, s=s, grid=grid, shocks=shocks,
+ α=α, u_prime=u_prime, f_prime=f_prime, u_prime_inv=u_prime_inv)
+```
+
+### The Operator
+
+Here's an implementation of $K$ using EGM as described above.
+
+```{code-cell} python3
+def K(σ_array: np.ndarray, model: Model) -> np.ndarray:
+ """
+ The Coleman-Reffett operator using EGM
+
+ """
+
+ # Simplify names
+ f, β = model.f, model.β
+ f_prime, u_prime = model.f_prime, model.u_prime
+ u_prime_inv = model.u_prime_inv
+ grid, shocks = model.grid, model.shocks
+
+ # Determine endogenous grid
+ x = grid + σ_array # x_i = k_i + c_i
+
+ # Linear interpolation of policy using endogenous grid
+ σ = lambda x_val: np.interp(x_val, x, σ_array)
+
+ # Allocate memory for new consumption array
+ c = np.empty_like(grid)
+
+ # Solve for updated consumption value
+ for i, k in enumerate(grid):
+ vals = u_prime(σ(f(k) * shocks)) * f_prime(k) * shocks
+ c[i] = u_prime_inv(β * np.mean(vals))
+
+ return c
+```
+
+Note the lack of any root-finding algorithm.
+
+### Testing
+
+First we create an instance.
+
+```{code-cell} python3
+# Define utility and production functions with derivatives
+α = 0.4
+u = lambda c: np.log(c)
+u_prime = lambda c: 1 / c
+u_prime_inv = lambda x: 1 / x
+f = lambda k: k**α
+f_prime = lambda k: α * k**(α - 1)
+
+model = create_model(u=u, f=f, α=α, u_prime=u_prime,
+ f_prime=f_prime, u_prime_inv=u_prime_inv)
+grid = model.grid
+```
+
+Here's our solver routine:
+
+```{code-cell} python3
+def solve_model_time_iter(model: Model,
+ σ_init: np.ndarray,
+ tol: float = 1e-5,
+ max_iter: int = 1000,
+ verbose: bool = True) -> np.ndarray:
+ """
+ Solve the model using time iteration with EGM.
+ """
+ σ = σ_init
+ error = tol + 1
+ i = 0
+
+ while error > tol and i < max_iter:
+ σ_new = K(σ, model)
+ error = np.max(np.abs(σ_new - σ))
+ σ = σ_new
+ i += 1
+ if verbose:
+ print(f"Iteration {i}, error = {error}")
+
+ if i == max_iter:
+ print("Warning: maximum iterations reached")
+
+ return σ
+```
+
+Let's call it:
+
+```{code-cell} python3
+σ_init = np.copy(grid)
+σ = solve_model_time_iter(model, σ_init)
+```
+
+Here is a plot of the resulting policy, compared with the true policy:
+
+```{code-cell} python3
+x = grid + σ # x_i = k_i + c_i
+
+fig, ax = plt.subplots()
+
+ax.plot(x, σ, lw=2,
+ alpha=0.8, label='approximate policy function')
+
+ax.plot(x, σ_star(x, model.α, model.β), 'k--',
+ lw=2, alpha=0.8, label='true policy function')
+
+ax.legend()
+plt.show()
+```
+
+The maximal absolute deviation between the two policies is
+
+```{code-cell} python3
+np.max(np.abs(σ - σ_star(x, model.α, model.β)))
+```
+
+Here's the execution time:
+
+```{code-cell} python3
+with qe.Timer():
+ σ = solve_model_time_iter(model, σ_init, verbose=False)
+```
+
+EGM is faster than time iteration because it avoids numerical root-finding.
+
+Instead, we invert the marginal utility function directly, which is much more efficient.
diff --git a/lectures/cake_eating_egm_jax.md b/lectures/cake_eating_egm_jax.md
new file mode 100644
index 000000000..2fc3be430
--- /dev/null
+++ b/lectures/cake_eating_egm_jax.md
@@ -0,0 +1,393 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{raw} jupyter
+
+```
+
+# {index}`Cake Eating VI: EGM with JAX <single: Cake Eating VI: EGM with JAX>`
+
+```{contents} Contents
+:depth: 2
+```
+
+
+## Overview
+
+In this lecture, we'll implement the endogenous grid method (EGM) using JAX.
+
+This lecture builds on {doc}`cake_eating_egm`, which introduced EGM using NumPy.
+
+By converting to JAX, we can leverage fast linear algebra, hardware accelerators, and JIT compilation for improved performance.
+
+We'll also use JAX's `vmap` function to fully vectorize the Coleman-Reffett operator.
+
+Let's start with some standard imports:
+
+```{code-cell} ipython
+import matplotlib.pyplot as plt
+import jax
+import jax.numpy as jnp
+import quantecon as qe
+from typing import NamedTuple
+```
+
+## Implementation
+
+For details on the savings problem and the endogenous grid method (EGM), please see {doc}`cake_eating_egm`.
+
+Here we focus on the JAX implementation of EGM.
+
+We use the same setting as in {doc}`cake_eating_egm`:
+
+* $u(c) = \ln c$,
+* production is Cobb-Douglas, and
+* the shocks are lognormal.
+
+Here are the analytical solutions for comparison.
+
+```{code-cell} python3
+def v_star(x, α, β, μ):
+ """
+ True value function
+ """
+ c1 = jnp.log(1 - α * β) / (1 - β)
+ c2 = (μ + α * jnp.log(α * β)) / (1 - α)
+ c3 = 1 / (1 - β)
+ c4 = 1 / (1 - α * β)
+ return c1 + c2 * (c3 - c4) + c4 * jnp.log(x)
+
+def σ_star(x, α, β):
+ """
+ True optimal policy
+ """
+ return (1 - α * β) * x
+```
+
+The `Model` class stores only the data (grids, shocks, and parameters).
+
+Utility and production functions will be defined globally to work with JAX's JIT compiler.
+
+```{code-cell} python3
+class Model(NamedTuple):
+ β: float # discount factor
+ μ: float # shock location parameter
+ s: float # shock scale parameter
+ grid: jnp.ndarray # state grid
+ shocks: jnp.ndarray # shock draws
+ α: float # production function parameter
+
+
+def create_model(β: float = 0.96,
+ μ: float = 0.0,
+ s: float = 0.1,
+ grid_max: float = 4.0,
+ grid_size: int = 120,
+ shock_size: int = 250,
+ seed: int = 1234,
+ α: float = 0.4) -> Model:
+ """
+ Creates an instance of the cake eating model.
+ """
+ # Set up grid
+ grid = jnp.linspace(1e-4, grid_max, grid_size)
+
+ # Store shocks (with a seed, so results are reproducible)
+ key = jax.random.PRNGKey(seed)
+ shocks = jnp.exp(μ + s * jax.random.normal(key, shape=(shock_size,)))
+
+ return Model(β=β, μ=μ, s=s, grid=grid, shocks=shocks, α=α)
+```
+
+Here's the Coleman-Reffett operator using EGM.
+
+The key JAX feature here is `vmap`, which vectorizes the computation over the grid points.
+
+```{code-cell} python3
+def K(σ_array: jnp.ndarray, model: Model) -> jnp.ndarray:
+ """
+ The Coleman-Reffett operator using EGM
+
+ """
+
+ # Simplify names
+ β, α = model.β, model.α
+ grid, shocks = model.grid, model.shocks
+
+ # Determine endogenous grid
+ x = grid + σ_array # x_i = k_i + c_i
+
+ # Linear interpolation of policy using endogenous grid
+ σ = lambda x_val: jnp.interp(x_val, x, σ_array)
+
+ # Define function to compute consumption at a single grid point
+ def compute_c(k):
+ vals = u_prime(σ(f(k, α) * shocks)) * f_prime(k, α) * shocks
+ return u_prime_inv(β * jnp.mean(vals))
+
+ # Vectorize over grid using vmap
+ compute_c_vectorized = jax.vmap(compute_c)
+ c = compute_c_vectorized(grid)
+
+ return c
+```
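+
+If `vmap` is unfamiliar, here is a tiny standalone illustration (a toy
+function, not part of the model): `vmap` converts a function of a scalar into
+a function that maps elementwise over an array.
+
+```{code-cell} python3
+import jax
+import jax.numpy as jnp
+
+# Minimal vmap illustration (toy example only)
+square = lambda z: z**2
+print(jax.vmap(square)(jnp.arange(4.0)))   # [0. 1. 4. 9.]
+```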
+
+We define utility and production functions globally.
+
+Note that `f` and `f_prime` take `α` as an explicit argument, allowing them to work with JAX's functional programming model.
+
+```{code-cell} python3
+# Define utility and production functions with derivatives
+u = lambda c: jnp.log(c)
+u_prime = lambda c: 1 / c
+u_prime_inv = lambda x: 1 / x
+f = lambda k, α: k**α
+f_prime = lambda k, α: α * k**(α - 1)
+```
+
+Now we create a model instance.
+
+```{code-cell} python3
+α = 0.4
+
+model = create_model(α=α)
+grid = model.grid
+```
+
+The solver uses JAX's `jax.lax.while_loop` for the iteration and is JIT-compiled for speed.
+
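+If you have not used `jax.lax.while_loop` before, here is a minimal sketch of
+its semantics with a toy loop (not part of the model): it repeatedly applies
+`body` to a carried state for as long as `cond` returns `True`.
+
+```{code-cell} python3
+import jax
+
+# Toy example: count up from 0 until the carry reaches 5 (illustrative only)
+cond = lambda carry: carry < 5
+body = lambda carry: carry + 1
+print(jax.lax.while_loop(cond, body, 0))   # 5
+```
+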
+```{code-cell} python3
+@jax.jit
+def solve_model_time_iter(model: Model,
+ σ_init: jnp.ndarray,
+ tol: float = 1e-5,
+ max_iter: int = 1000) -> jnp.ndarray:
+ """
+ Solve the model using time iteration with EGM.
+ """
+
+ def condition(loop_state):
+ i, σ, error = loop_state
+ return (error > tol) & (i < max_iter)
+
+ def body(loop_state):
+ i, σ, error = loop_state
+ σ_new = K(σ, model)
+ error = jnp.max(jnp.abs(σ_new - σ))
+ return i + 1, σ_new, error
+
+ # Initialize loop state
+ initial_state = (0, σ_init, tol + 1)
+
+ # Run the loop
+ i, σ, error = jax.lax.while_loop(condition, body, initial_state)
+
+ return σ
+```
+
+We solve the model starting from an initial guess.
+
+```{code-cell} python3
+σ_init = jnp.copy(grid)
+σ = solve_model_time_iter(model, σ_init)
+```
+
+Let's plot the resulting policy against the analytical solution.
+
+```{code-cell} python3
+x = grid + σ # x_i = k_i + c_i
+
+fig, ax = plt.subplots()
+
+ax.plot(x, σ, lw=2,
+ alpha=0.8, label='approximate policy function')
+
+ax.plot(x, σ_star(x, model.α, model.β), 'k--',
+ lw=2, alpha=0.8, label='true policy function')
+
+ax.legend()
+plt.show()
+```
+
+The fit is very good.
+
+```{code-cell} python3
+max_dev = jnp.max(jnp.abs(σ - σ_star(x, model.α, model.β)))
+print(f"Maximum absolute deviation: {max_dev:.7}")
+```
+
+The JAX implementation is very fast thanks to JIT compilation and vectorization.
+
+```{code-cell} python3
+with qe.Timer(precision=8):
+ σ = solve_model_time_iter(model, σ_init).block_until_ready()
+```
+
+This speed comes from:
+
+* JIT compilation of the entire solver
+* Vectorization via `vmap` in the Coleman-Reffett operator
+* Use of `jax.lax.while_loop` instead of a Python loop
+* Efficient JAX array operations throughout
+
+## Exercises
+
+```{exercise}
+:label: cake_egm_jax_ex1
+
+Solve the stochastic cake eating problem with CRRA utility
+
+$$
+ u(c) = \frac{c^{1 - \gamma} - 1}{1 - \gamma}
+$$
+
+Compare the optimal policies for values of $\gamma$ approaching 1 from above (e.g., 1.05, 1.1, 1.2).
+
+Show that as $\gamma \to 1$, the optimal policy converges to the policy obtained with log utility ($\gamma = 1$).
+
+Hint: Use values of $\gamma$ close to 1 to ensure the endogenous grids have similar coverage and make visual comparison easier.
+```
+
+```{solution-start} cake_egm_jax_ex1
+:class: dropdown
+```
+
+We need to create a version of the Coleman-Reffett operator and solver that work with CRRA utility.
+
+The key is to parameterize the utility functions by $\gamma$.
+
+```{code-cell} python3
+def u_crra(c, γ):
+ return (c**(1 - γ) - 1) / (1 - γ)
+
+def u_prime_crra(c, γ):
+ return c**(-γ)
+
+def u_prime_inv_crra(x, γ):
+ return x**(-1/γ)
+```
+
+Now we create a version of the Coleman-Reffett operator that takes $\gamma$ as a parameter.
+
+```{code-cell} python3
+def K_crra(σ_array: jnp.ndarray, model: Model, γ: float) -> jnp.ndarray:
+ """
+ The Coleman-Reffett operator using EGM with CRRA utility
+ """
+ # Simplify names
+ β, α = model.β, model.α
+ grid, shocks = model.grid, model.shocks
+
+ # Determine endogenous grid
+ x = grid + σ_array
+
+ # Linear interpolation of policy using endogenous grid
+ σ = lambda x_val: jnp.interp(x_val, x, σ_array)
+
+ # Define function to compute consumption at a single grid point
+ def compute_c(k):
+ vals = u_prime_crra(σ(f(k, α) * shocks), γ) * f_prime(k, α) * shocks
+ return u_prime_inv_crra(β * jnp.mean(vals), γ)
+
+ # Vectorize over grid using vmap
+ compute_c_vectorized = jax.vmap(compute_c)
+ c = compute_c_vectorized(grid)
+
+ return c
+```
+
+We also need a solver that uses this operator.
+
+```{code-cell} python3
+@jax.jit
+def solve_model_crra(model: Model,
+ σ_init: jnp.ndarray,
+ γ: float,
+ tol: float = 1e-5,
+ max_iter: int = 1000) -> jnp.ndarray:
+ """
+ Solve the model using time iteration with EGM and CRRA utility.
+ """
+
+ def condition(loop_state):
+ i, σ, error = loop_state
+ return (error > tol) & (i < max_iter)
+
+ def body(loop_state):
+ i, σ, error = loop_state
+ σ_new = K_crra(σ, model, γ)
+ error = jnp.max(jnp.abs(σ_new - σ))
+ return i + 1, σ_new, error
+
+ # Initialize loop state
+ initial_state = (0, σ_init, tol + 1)
+
+ # Run the loop
+ i, σ, error = jax.lax.while_loop(condition, body, initial_state)
+
+ return σ
+```
+
+Now we solve for $\gamma = 1$ (log utility) and values approaching 1 from above.
+
+```{code-cell} python3
+γ_values = [1.0, 1.05, 1.1, 1.2]
+policies = {}
+
+model_crra = create_model(α=α)
+
+for γ in γ_values:
+ σ_init = jnp.copy(model_crra.grid)
+ σ_gamma = solve_model_crra(model_crra, σ_init, γ).block_until_ready()
+ policies[γ] = σ_gamma
+ print(f"Solved for γ = {γ}")
+```
+
+Plot the policies on their endogenous grids.
+
+```{code-cell} python3
+fig, ax = plt.subplots()
+
+for γ in γ_values:
+ x = model_crra.grid + policies[γ]
+ if γ == 1.0:
+ ax.plot(x, policies[γ], 'k-', linewidth=2,
+ label=f'γ = {γ:.2f} (log utility)', alpha=0.8)
+ else:
+ ax.plot(x, policies[γ], label=f'γ = {γ:.2f}', alpha=0.8)
+
+ax.set_xlabel('State x')
+ax.set_ylabel('Consumption σ(x)')
+ax.legend()
+ax.set_title('Optimal policies: CRRA utility approaching log case')
+plt.show()
+```
+
+Note that the plots for $\gamma > 1$ do not cover the entire x-axis range shown.
+
+This is because the endogenous grid $x = k + \sigma(k)$ depends on the consumption policy, which varies with $\gamma$.
+
+Let's check the maximum deviation between the log utility case ($\gamma = 1.0$) and values approaching from above.
+
+```{code-cell} python3
+for γ in [1.05, 1.1, 1.2]:
+ max_diff = jnp.max(jnp.abs(policies[1.0] - policies[γ]))
+ print(f"Max difference between γ=1.0 and γ={γ}: {max_diff:.6}")
+```
+
+As expected, the differences decrease as $\gamma$ approaches 1 from above, confirming convergence.
+
+```{solution-end}
+```
diff --git a/lectures/cake_eating_numerical.md b/lectures/cake_eating_numerical.md
index 3a0c2c3aa..b5aec3f83 100644
--- a/lectures/cake_eating_numerical.md
+++ b/lectures/cake_eating_numerical.md
@@ -17,7 +17,7 @@ kernelspec:
## Overview
-In this lecture we continue the study of {doc}`the cake eating problem `.
+In this lecture we continue the study of {doc}`the cake eating problem <cake_eating>`.
The aim of this lecture is to solve the problem using numerical
methods.
@@ -34,17 +34,27 @@ simple problem.
Since we know the analytical solution, this will allow us to assess the
accuracy of alternative numerical methods.
+
+```{note}
+The code below aims for clarity rather than maximum efficiency.
+
+In the lectures that follow we will explore best practices for speed and efficiency.
+
+Let's put such algorithmic and code optimizations to one side for now.
+```
+
We will use the following imports:
```{code-cell} ipython
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import minimize_scalar, bisect
+from typing import NamedTuple
```
## Reviewing the Model
-You might like to {doc}`review the details ` before we start.
+You might like to {doc}`review the details <cake_eating>` before we start.
Recall in particular that the Bellman equation is
@@ -61,7 +71,14 @@ The analytical solutions for the value function and optimal policy were found
to be as follows.
```{code-cell} python3
-:load: _static/lecture_specific/cake_eating_numerical/analytical.py
+def c_star(x, β, γ):
+
+ return (1 - β ** (1/γ)) * x
+
+
+def v_star(x, β, γ):
+
+ return (1 - β**(1 / γ))**(-γ) * (x**(1-γ) / (1-γ))
```
Our first aim is to obtain these analytical solutions numerically.
@@ -74,11 +91,11 @@ This is a form of **successive approximation**, and was discussed in our {doc}`l
The basic idea is:
-1. Take an arbitary intial guess of $v$.
+1. Take an arbitrary initial guess of $v$.
1. Obtain an update $w$ defined by
$$
- w(x) = \max_{0\leq c \leq x} \{u(c) + \beta v(x-c)\}
+ w(x) = \max_{0\leq c \leq x} \{u(c) + \beta v(x-c)\}
$$
1. Stop if $w$ is approximately equal to $v$, otherwise set
@@ -157,80 +174,82 @@ def maximize(g, a, b, args):
return maximizer, maximum
```
-We'll store the parameters $\beta$ and $\gamma$ in a
-class called `CakeEating`.
-
-The same class will also provide a method called `state_action_value` that
-returns the value of a consumption choice given a particular state and guess
-of $v$.
+We'll store the parameters $\beta$ and $\gamma$ and the grid in a
+`NamedTuple` called `Model`.
```{code-cell} python3
-class CakeEating:
-
- def __init__(self,
- β=0.96, # discount factor
- γ=1.5, # degree of relative risk aversion
- x_grid_min=1e-3, # exclude zero for numerical stability
- x_grid_max=2.5, # size of cake
- x_grid_size=120):
-
- self.β, self.γ = β, γ
-
- # Set up grid
- self.x_grid = np.linspace(x_grid_min, x_grid_max, x_grid_size)
-
- # Utility function
- def u(self, c):
-
- γ = self.γ
-
- if γ == 1:
- return np.log(c)
- else:
- return (c ** (1 - γ)) / (1 - γ)
+# Create model data structure
+class Model(NamedTuple):
+ β: float
+ γ: float
+ x_grid: np.ndarray
+
+def create_cake_eating_model(β=0.96, # discount factor
+ γ=1.5, # degree of relative risk aversion
+ x_grid_min=1e-3, # exclude zero for numerical stability
+ x_grid_max=2.5, # size of cake
+ x_grid_size=120):
+ """
+ Creates an instance of the cake eating model.
+ """
+ x_grid = np.linspace(x_grid_min, x_grid_max, x_grid_size)
+ return Model(β=β, γ=γ, x_grid=x_grid)
+```
- # first derivative of utility function
- def u_prime(self, c):
+Now we define utility functions that operate on the model:
- return c ** (-self.γ)
+```{code-cell} python3
+def u(c, γ):
+ """
+ Utility function.
+ """
+ if γ == 1:
+ return np.log(c)
+ else:
+ return (c ** (1 - γ)) / (1 - γ)
- def state_action_value(self, c, x, v_array):
- """
- Right hand side of the Bellman equation given x and c.
- """
+def u_prime(c, γ):
+ """
+ First derivative of utility function.
+ """
+ return c ** (-γ)
- u, β = self.u, self.β
- v = lambda x: np.interp(x, self.x_grid, v_array)
+def state_action_value(c, x, v_array, model):
+ """
+ Right hand side of the Bellman equation given x and c.
+ """
+ β, γ, x_grid = model.β, model.γ, model.x_grid
+ v = lambda x: np.interp(x, x_grid, v_array)
- return u(c) + β * v(x - c)
+ return u(c, γ) + β * v(x - c)
```
We now define the Bellman operation:
```{code-cell} python3
-def T(v, ce):
+def T(v, model):
"""
The Bellman operator. Updates the guess of the value function.
- * ce is an instance of CakeEating
+ * model is an instance of Model
* v is an array representing a guess of the value function
"""
v_new = np.empty_like(v)
- for i, x in enumerate(ce.x_grid):
+ for i, x in enumerate(model.x_grid):
# Maximize RHS of Bellman equation at state x
- v_new[i] = maximize(ce.state_action_value, 1e-10, x, (x, v))[1]
+ v_new[i] = maximize(state_action_value, 1e-10, x, (x, v, model))[1]
return v_new
```
After defining the Bellman operator, we are ready to solve the model.
-Let's start by creating a `CakeEating` instance using the default parameterization.
+Let's start by creating a model using the default parameterization.
```{code-cell} python3
-ce = CakeEating()
+model = create_cake_eating_model()
```
Now let's see the iteration of the value function in action.
@@ -239,9 +258,9 @@ We start from guess $v$ given by $v(x) = u(x)$ for every
$x$ grid point.
```{code-cell} python3
-x_grid = ce.x_grid
-v = ce.u(x_grid) # Initial guess
-n = 12 # Number of iterations
+x_grid = model.x_grid
+v = u(x_grid, model.γ) # Initial guess
+n = 12 # Number of iterations
fig, ax = plt.subplots()
@@ -249,7 +268,7 @@ ax.plot(x_grid, v, color=plt.cm.jet(0),
lw=2, alpha=0.6, label='Initial guess')
for i in range(n):
- v = T(v, ce) # Apply the Bellman operator
+ v = T(v, model) # Apply the Bellman operator
ax.plot(x_grid, v, color=plt.cm.jet(i / n), lw=2, alpha=0.6)
ax.legend()
@@ -265,19 +284,19 @@ To do this more systematically, we introduce a wrapper function called
satisfied.
```{code-cell} python3
-def compute_value_function(ce,
+def compute_value_function(model,
tol=1e-4,
max_iter=1000,
verbose=True,
print_skip=25):
# Set up loop
- v = np.zeros(len(ce.x_grid)) # Initial guess
+ v = np.zeros(len(model.x_grid)) # Initial guess
i = 0
error = tol + 1
while i < max_iter and error > tol:
- v_new = T(v, ce)
+ v_new = T(v, model)
error = np.max(np.abs(v - v_new))
i += 1
@@ -298,7 +317,7 @@ def compute_value_function(ce,
Now let's call it, noting that it takes a little while to run.
```{code-cell} python3
-v = compute_value_function(ce)
+v = compute_value_function(model)
```
Now we can plot and see what the converged value function looks like.
@@ -317,7 +336,7 @@ plt.show()
Next let's compare it to the analytical solution.
```{code-cell} python3
-v_analytical = v_star(ce.x_grid, ce.β, ce.γ)
+v_analytical = v_star(model.x_grid, model.β, model.γ)
```
```{code-cell} python3
@@ -338,15 +357,29 @@ less so near the lower boundary.
The reason is that the utility function and hence value function is very
steep near the lower boundary, and hence hard to approximate.
+```{note}
+One way to fix this issue is to use a nonlinear grid, with more points in the
+neighborhood of zero.
+
+Instead of pursuing this idea, however, we will turn our attention to
+working with policy functions.
+
+We will see that value function iteration can be avoided by iterating on a guess
+of the policy function instead.
+
+These ideas will be explored over the next few lectures.
+```
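+
+As a minimal sketch of the nonlinear-grid idea mentioned in the note (the
+particular transformation below is an illustrative assumption, not a choice
+made in these lectures), we can square a uniform grid on $[0, 1]$ and rescale,
+which clusters points near zero:
+
+```{code-cell} python3
+# A grid with more points near zero (illustrative only)
+x_min, x_max, n = 1e-3, 2.5, 120
+u_grid = np.linspace(0, 1, n)
+x_grid_nonlinear = x_min + (x_max - x_min) * u_grid**2
+x_grid_nonlinear[:5]   # points are tightly packed near zero
+```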
+
+
### Policy Function
-Let's see how this plays out in terms of computing the optimal policy.
+Let's try computing the optimal policy.
-In the {doc}`first lecture on cake eating `, the optimal
+In the {doc}`first lecture on cake eating <cake_eating>`, the optimal
consumption policy was shown to be
$$
-\sigma^*(x) = \left(1-\beta^{1/\gamma} \right) x
+ \sigma^*(x) = \left(1-\beta^{1/\gamma} \right) x
$$
Let's see if our numerical results lead to something similar.
@@ -365,21 +398,21 @@ above.
Here's the function:
```{code-cell} python3
-def σ(ce, v):
+def σ(model, v):
"""
The optimal policy function. Given the value function,
it finds optimal consumption in each state.
- * ce is an instance of CakeEating
+ * model is an instance of Model
* v is a value function array
"""
c = np.empty_like(v)
- for i in range(len(ce.x_grid)):
- x = ce.x_grid[i]
+ for i in range(len(model.x_grid)):
+ x = model.x_grid[i]
# Maximize RHS of Bellman equation at state x
- c[i] = maximize(ce.state_action_value, 1e-10, x, (x, v))[0]
+ c[i] = maximize(state_action_value, 1e-10, x, (x, v, model))[0]
return c
```
@@ -387,19 +420,19 @@ def σ(ce, v):
Now let's pass the approximate value function and compute optimal consumption:
```{code-cell} python3
-c = σ(ce, v)
+c = σ(model, v)
```
(pol_an)=
Let's plot this next to the true analytical solution
```{code-cell} python3
-c_analytical = c_star(ce.x_grid, ce.β, ce.γ)
+c_analytical = c_star(model.x_grid, model.β, model.γ)
fig, ax = plt.subplots()
-ax.plot(ce.x_grid, c_analytical, label='analytical')
-ax.plot(ce.x_grid, c, label='numerical')
+ax.plot(model.x_grid, c_analytical, label='analytical')
+ax.plot(model.x_grid, c, label='numerical')
ax.set_ylabel(r'$\sigma(x)$')
ax.set_xlabel('$x$')
ax.legend()
@@ -417,55 +450,8 @@ However, both changes will lead to a longer compute time.
Another possibility is to use an alternative algorithm, which offers the
possibility of faster compute time and, at the same time, more accuracy.
-We explore this next.
-
-## Time Iteration
-
-Now let's look at a different strategy to compute the optimal policy.
-
-Recall that the optimal policy satisfies the Euler equation
-
-```{math}
-:label: euler-cen
-
-u' (\sigma(x)) = \beta u' ( \sigma(x - \sigma(x)))
-\quad \text{for all } x > 0
-```
-
-Computationally, we can start with any initial guess of
-$\sigma_0$ and now choose $c$ to solve
-
-$$
-u^{\prime}( c ) = \beta u^{\prime} (\sigma_0(x - c))
-$$
-
-Choosing $c$ to satisfy this equation at all $x > 0$ produces a function of $x$.
-
-Call this new function $\sigma_1$, treat it as the new guess and
-repeat.
-
-This is called **time iteration**.
-
-As with value function iteration, we can view the update step as action of an
-operator, this time denoted by $K$.
-
-* In particular, $K\sigma$ is the policy updated from $\sigma$
- using the procedure just described.
-* We will use this terminology in the exercises below.
+We explore this {doc}`soon <cake_eating_time_iter>`.
-The main advantage of time iteration relative to value function iteration is that it operates in policy space rather than value function space.
-
-This is helpful because the policy function has less curvature, and hence is easier to approximate.
-
-In the exercises you are asked to implement time iteration and compare it to
-value function iteration.
-
-You should find that the method is faster and more accurate.
-
-This is due to
-
-1. the curvature issue mentioned just above and
-1. the fact that we are using more information --- in this case, the first order conditions.
## Exercises
@@ -494,178 +480,140 @@ Try to reuse as much code as possible.
:class: dropdown
```
-We need to create a class to hold our primitives and return the right hand side of the Bellman equation.
+We need to create an extended version of our model and state-action value function.
-We will use [inheritance](https://en.wikipedia.org/wiki/Inheritance_%28object-oriented_programming%29) to maximize code reuse.
+We'll create a new `NamedTuple` for the extended cake model and a helper function.
```{code-cell} python3
-class OptimalGrowth(CakeEating):
+# Create extended cake model data structure
+class ExtendedModel(NamedTuple):
+ β: float
+ γ: float
+ α: float
+ x_grid: np.ndarray
+
+def create_extended_model(β=0.96, # discount factor
+ γ=1.5, # degree of relative risk aversion
+ α=0.4, # productivity parameter
+ x_grid_min=1e-3, # exclude zero for numerical stability
+ x_grid_max=2.5, # size of cake
+ x_grid_size=120):
"""
- A subclass of CakeEating that adds the parameter α and overrides
- the state_action_value method.
+ Creates an instance of the extended cake eating model.
"""
+ x_grid = np.linspace(x_grid_min, x_grid_max, x_grid_size)
+ return ExtendedModel(β=β, γ=γ, α=α, x_grid=x_grid)
- def __init__(self,
- β=0.96, # discount factor
- γ=1.5, # degree of relative risk aversion
- α=0.4, # productivity parameter
- x_grid_min=1e-3, # exclude zero for numerical stability
- x_grid_max=2.5, # size of cake
- x_grid_size=120):
-
- self.α = α
- CakeEating.__init__(self, β, γ, x_grid_min, x_grid_max, x_grid_size)
-
- def state_action_value(self, c, x, v_array):
- """
- Right hand side of the Bellman equation given x and c.
- """
-
- u, β, α = self.u, self.β, self.α
- v = lambda x: np.interp(x, self.x_grid, v_array)
-
- return u(c) + β * v((x - c)**α)
-```
+def extended_state_action_value(c, x, v_array, model):
+ """
+ Right hand side of the Bellman equation for the extended cake model given x and c.
+ """
+ β, γ, α, x_grid = model.β, model.γ, model.α, model.x_grid
+ v = lambda x: np.interp(x, x_grid, v_array)
-```{code-cell} python3
-og = OptimalGrowth()
+ return u(c, γ) + β * v((x - c)**α)
```
-Here's the computed value function.
+We also need a modified Bellman operator:
```{code-cell} python3
-v = compute_value_function(og, verbose=False)
-
-fig, ax = plt.subplots()
+def T_extended(v, model):
+ """
+ The Bellman operator for the extended cake model.
+ """
+ v_new = np.empty_like(v)
-ax.plot(x_grid, v, lw=2, alpha=0.6)
-ax.set_ylabel('value', fontsize=12)
-ax.set_xlabel('state $x$', fontsize=12)
+ for i, x in enumerate(model.x_grid):
+ # Maximize RHS of Bellman equation at state x
+ v_new[i] = maximize(extended_state_action_value, 1e-10, x, (x, v, model))[1]
-plt.show()
+ return v_new
```
-Here's the computed policy, combined with the solution we derived above for
-the standard cake eating case $\alpha=1$.
+Now create the model:
```{code-cell} python3
-c_new = σ(og, v)
-
-fig, ax = plt.subplots()
-
-ax.plot(ce.x_grid, c_analytical, label=r'$\alpha=1$ solution')
-ax.plot(ce.x_grid, c_new, label=fr'$\alpha={og.α}$ solution')
-
-ax.set_ylabel('consumption', fontsize=12)
-ax.set_xlabel('$x$', fontsize=12)
-
-ax.legend(fontsize=12)
-
-plt.show()
-```
-
-Consumption is higher when $\alpha < 1$ because, at least for large $x$, the return to savings is lower.
-
-```{solution-end}
-```
-
-
-```{exercise}
-:label: cen_ex2
-
-Implement time iteration, returning to the original case (i.e., dropping the
-modification in the exercise above).
-```
-
-
-```{solution-start} cen_ex2
-:class: dropdown
+model = create_extended_model()
```
-Here's one way to implement time iteration.
+Here's the computed value function.
```{code-cell} python3
-def K(σ_array, ce):
+def compute_value_function_extended(model,
+ tol=1e-4,
+ max_iter=1000,
+ verbose=True,
+ print_skip=25):
"""
- The policy function operator. Given the policy function,
- it updates the optimal consumption using Euler equation.
-
- * σ_array is an array of policy function values on the grid
- * ce is an instance of CakeEating
-
+ Compute value function for extended cake model.
"""
-
- u_prime, β, x_grid = ce.u_prime, ce.β, ce.x_grid
- σ_new = np.empty_like(σ_array)
-
- σ = lambda x: np.interp(x, x_grid, σ_array)
-
- def euler_diff(c, x):
- return u_prime(c) - β * u_prime(σ(x - c))
-
- for i, x in enumerate(x_grid):
-
- # handle small x separately --- helps numerical stability
- if x < 1e-12:
- σ_new[i] = 0.0
-
- # handle other x
- else:
- σ_new[i] = bisect(euler_diff, 1e-10, x - 1e-10, x)
-
- return σ_new
-```
-
-```{code-cell} python3
-def iterate_euler_equation(ce,
- max_iter=500,
- tol=1e-5,
- verbose=True,
- print_skip=25):
-
- x_grid = ce.x_grid
-
- σ = np.copy(x_grid) # initial guess
-
+ v = np.zeros(len(model.x_grid))
i = 0
error = tol + 1
- while i < max_iter and error > tol:
- σ_new = K(σ, ce)
-
- error = np.max(np.abs(σ_new - σ))
+ while i < max_iter and error > tol:
+ v_new = T_extended(v, model)
+ error = np.max(np.abs(v - v_new))
i += 1
-
if verbose and i % print_skip == 0:
print(f"Error at iteration {i} is {error}.")
-
- σ = σ_new
+ v = v_new
if error > tol:
print("Failed to converge!")
elif verbose:
print(f"\nConverged in {i} iterations.")
- return σ
-```
+ return v_new
-```{code-cell} python3
-ce = CakeEating(x_grid_min=0.0)
-c_euler = iterate_euler_equation(ce)
+v = compute_value_function_extended(model, verbose=False)
+
+fig, ax = plt.subplots()
+
+ax.plot(model.x_grid, v, lw=2, alpha=0.6)
+ax.set_ylabel('value', fontsize=12)
+ax.set_xlabel('state $x$', fontsize=12)
+
+plt.show()
```
+Here's the computed policy, combined with the solution we derived above for
+the standard cake eating case $\alpha=1$.
+
```{code-cell} python3
+def σ_extended(model, v):
+ """
+ The optimal policy function for the extended cake model.
+ """
+ c = np.empty_like(v)
+
+ for i in range(len(model.x_grid)):
+ x = model.x_grid[i]
+ c[i] = maximize(extended_state_action_value, 1e-10, x, (x, v, model))[0]
+
+ return c
+
+c_new = σ_extended(model, v)
+
+# Get the baseline model for comparison
+baseline_model = create_cake_eating_model()
+c_analytical = c_star(baseline_model.x_grid, baseline_model.β, baseline_model.γ)
+
fig, ax = plt.subplots()
-ax.plot(ce.x_grid, c_analytical, label='analytical solution')
-ax.plot(ce.x_grid, c_euler, label='time iteration solution')
+ax.plot(baseline_model.x_grid, c_analytical, label=r'$\alpha=1$ solution')
+ax.plot(model.x_grid, c_new, label=fr'$\alpha={model.α}$ solution')
+
+ax.set_ylabel('consumption', fontsize=12)
+ax.set_xlabel('$x$', fontsize=12)
-ax.set_ylabel('consumption')
-ax.set_xlabel('$x$')
ax.legend(fontsize=12)
plt.show()
```
+Consumption is higher when $\alpha < 1$ because, at least for large $x$, the return to savings is lower.
+
```{solution-end}
```
+
diff --git a/lectures/optgrowth.md b/lectures/cake_eating_stochastic.md
similarity index 68%
rename from lectures/optgrowth.md
rename to lectures/cake_eating_stochastic.md
index 0d14a7e5b..ff1187f0b 100644
--- a/lectures/optgrowth.md
+++ b/lectures/cake_eating_stochastic.md
@@ -18,7 +18,7 @@ kernelspec:
```
-# {index}`Optimal Growth I: The Stochastic Optimal Growth Model `
+# {index}`Cake Eating III: Stochastic Dynamics <single: Cake Eating III: Stochastic Dynamics>`
```{contents} Contents
:depth: 2
@@ -26,36 +26,53 @@ kernelspec:
## Overview
-In this lecture, we're going to study a simple optimal growth model with one
-agent.
+In this lecture, we continue our study of the cake eating problem, building on
+{doc}`Cake Eating I <cake_eating>` and {doc}`Cake Eating II <cake_eating_numerical>`.
-The model is a version of the standard one sector infinite horizon growth
-model studied in
+The key difference from the previous lectures is that the cake size now evolves
+stochastically.
-* {cite}`StokeyLucas1989`, chapter 2
-* {cite}`Ljungqvist2012`, section 3.1
-* [EDTC](https://johnstachurski.net/edtc.html), chapter 1
-* {cite}`Sundaram1996`, chapter 12
+We can think of this cake as a harvest that regrows if we save some seeds.
-It is an extension of the simple {doc}`cake eating problem ` we looked at earlier.
+Specifically, if we save (invest) part of today's cake, it grows into next
+period's cake according to a stochastic production process.
-The extension involves
+```{note}
+The term "cake eating" is not such a good fit now that we have a stochastic and
+potentially growing state.
+
+Nonetheless, we'll continue to refer to cake eating to maintain flow from the
+previous lectures.
+
+Soon we'll move to more ambitious optimal savings/consumption problems and adopt
+new terminology.
+
+This lecture serves as a bridge between cake eating and the more ambitious
+problems.
+```
+
+The extensions in this lecture introduce several new elements:
* nonlinear returns to saving, through a production function, and
* stochastic returns, due to shocks to production.
-Despite these additions, the model is still relatively simple.
+Despite these additions, the model remains relatively tractable.
+
+As a first pass, we will solve the model using dynamic programming and value function iteration (VFI).
+
+```{note}
+In later lectures we'll explore more efficient methods for this class of problems.
-We regard it as a stepping stone to more sophisticated models.
+At the same time, VFI is foundational and globally convergent.
-We solve the model using dynamic programming and a range of numerical
-techniques.
+Hence we want to be sure we can use this method too.
+```
-In this first lecture on optimal growth, the solution method will be value
-function iteration (VFI).
+More information on this savings problem can be found in
-While the code in this first lecture runs slowly, we will use a variety of
-techniques to drastically improve execution time over the next few lectures.
+* {cite}`Ljungqvist2012`, Section 3.1
+* [EDTC](https://johnstachurski.net/edtc.html), Chapter 1
+* {cite}`Sundaram1996`, Chapter 12
Let's start with some imports:
@@ -68,10 +85,10 @@ from scipy.optimize import minimize_scalar
## The Model
-```{index} single: Optimal Growth; Model
+```{index} single: Stochastic Cake Eating; Model
```
-Consider an agent who owns an amount $y_t \in \mathbb R_+ := [0, \infty)$ of a consumption good at time $t$.
+Consider an agent who owns an amount $x_t \in \mathbb R_+ := [0, \infty)$ of a consumption good at time $t$.
This output can either be consumed or invested.
@@ -84,7 +101,7 @@ Production is stochastic, in that it also depends on a shock $\xi_{t+1}$ realize
Next period output is
$$
-y_{t+1} := f(k_{t+1}) \xi_{t+1}
+x_{t+1} := f(k_{t+1}) \xi_{t+1}
$$
where $f \colon \mathbb R_+ \to \mathbb R_+$ is called the production function.
@@ -94,7 +111,7 @@ The resource constraint is
```{math}
:label: outcsdp0
-k_{t+1} + c_t \leq y_t
+k_{t+1} + c_t \leq x_t
```
and all variables are required to be nonnegative.
@@ -108,7 +125,7 @@ In what follows,
* The production function $f$ is assumed to be increasing and continuous.
* Depreciation of capital is not made explicit but can be incorporated into the production function.
-While many other treatments of the stochastic growth model use $k_t$ as the state variable, we will use $y_t$.
+While many other treatments of stochastic consumption-saving models use $k_t$ as the state variable, we will use $x_t$.
This will allow us to treat a stochastic model while maintaining only one state variable.
@@ -116,7 +133,7 @@ We consider alternative states and timing specifications in some of our other le
### Optimization
-Taking $y_0$ as given, the agent wishes to maximize
+Taking $x_0$ as given, the agent wishes to maximize
```{math}
:label: texs0_og2
@@ -129,9 +146,9 @@ subject to
```{math}
:label: og_conse
-y_{t+1} = f(y_t - c_t) \xi_{t+1}
+x_{t+1} = f(x_t - c_t) \xi_{t+1}
\quad \text{and} \quad
-0 \leq c_t \leq y_t
+0 \leq c_t \leq x_t
\quad \text{for all } t
```
@@ -152,23 +169,23 @@ In summary, the agent's aim is to select a path $c_0, c_1, c_2, \ldots$ for cons
In the present context
-* $y_t$ is called the *state* variable --- it summarizes the "state of the world" at the start of each period.
+* $x_t$ is called the *state* variable --- it summarizes the "state of the world" at the start of each period.
* $c_t$ is called the *control* variable --- a value chosen by the agent each period after observing the state.
### The Policy Function Approach
-```{index} single: Optimal Growth; Policy Function Approach
+```{index} single: Stochastic Cake Eating; Policy Function Approach
```
One way to think about solving this problem is to look for the best **policy function**.
A policy function is a map from past and present observables into current action.
-We'll be particularly interested in **Markov policies**, which are maps from the current state $y_t$ into a current action $c_t$.
+We'll be particularly interested in **Markov policies**, which are maps from the current state $x_t$ into a current action $c_t$.
For dynamic programming problems such as this one (in fact for any [Markov decision process](https://en.wikipedia.org/wiki/Markov_decision_process)), the optimal policy is always a Markov policy.
-In other words, the current state $y_t$ provides a [sufficient statistic](https://en.wikipedia.org/wiki/Sufficient_statistic)
+In other words, the current state $x_t$ provides a [sufficient statistic](https://en.wikipedia.org/wiki/Sufficient_statistic)
for the history in terms of making an optimal decision today.
This is quite intuitive, but if you wish you can find proofs in texts such as {cite}`StokeyLucas1989` (section 4.1).
@@ -179,7 +196,7 @@ In our context, a Markov policy is a function $\sigma \colon
\mathbb R_+ \to \mathbb R_+$, with the understanding that states are mapped to actions via
$$
-c_t = \sigma(y_t) \quad \text{for all } t
+c_t = \sigma(x_t) \quad \text{for all } t
$$
In what follows, we will call $\sigma$ a *feasible consumption policy* if it satisfies
@@ -187,22 +204,22 @@ In what follows, we will call $\sigma$ a *feasible consumption policy* if it sat
```{math}
:label: idp_fp_og2
-0 \leq \sigma(y) \leq y
+0 \leq \sigma(x) \leq x
\quad \text{for all} \quad
-y \in \mathbb R_+
+x \in \mathbb R_+
```
In other words, a feasible consumption policy is a Markov policy that respects the resource constraint.
The set of all feasible consumption policies will be denoted by $\Sigma$.
-Each $\sigma \in \Sigma$ determines a [continuous state Markov process](https://python-advanced.quantecon.org/stationary_densities.html) $\{y_t\}$ for output via
+Each $\sigma \in \Sigma$ determines a [continuous state Markov process](https://python-advanced.quantecon.org/stationary_densities.html) $\{x_t\}$ for output via
```{math}
:label: firstp0_og2
-y_{t+1} = f(y_t - \sigma(y_t)) \xi_{t+1},
-\quad y_0 \text{ given}
+x_{t+1} = f(x_t - \sigma(x_t)) \xi_{t+1},
+\quad x_0 \text{ given}
```
This is the time path for output when we choose and stick with the policy $\sigma$.
@@ -218,12 +235,12 @@ We insert this process into the objective function to get
\right] =
\mathbb E
\left[ \,
-\sum_{t = 0}^{\infty} \beta^t u(\sigma(y_t)) \,
+\sum_{t = 0}^{\infty} \beta^t u(\sigma(x_t)) \,
\right]
```
This is the total expected present value of following policy $\sigma$ forever,
-given initial income $y_0$.
+given initial income $x_0$.
The aim is to select a policy that makes this number as large as possible.
@@ -236,26 +253,26 @@ The $\sigma$ associated with a given policy $\sigma$ is the mapping defined by
```{math}
:label: vfcsdp00
-v_{\sigma}(y) =
-\mathbb E \left[ \sum_{t = 0}^{\infty} \beta^t u(\sigma(y_t)) \right]
+v_{\sigma}(x) =
+\mathbb E \left[ \sum_{t = 0}^{\infty} \beta^t u(\sigma(x_t)) \right]
```
-when $\{y_t\}$ is given by {eq}`firstp0_og2` with $y_0 = y$.
+when $\{x_t\}$ is given by {eq}`firstp0_og2` with $x_0 = x$.
In other words, it is the lifetime value of following policy $\sigma$
-starting at initial condition $y$.
+starting at initial condition $x$.
The **value function** is then defined as
```{math}
:label: vfcsdp0
-v^*(y) := \sup_{\sigma \in \Sigma} \; v_{\sigma}(y)
+v^*(x) := \sup_{\sigma \in \Sigma} \; v_{\sigma}(x)
```
-The value function gives the maximal value that can be obtained from state $y$, after considering all feasible policies.
+The value function gives the maximal value that can be obtained from state $x$, after considering all feasible policies.
-A policy $\sigma \in \Sigma$ is called **optimal** if it attains the supremum in {eq}`vfcsdp0` for all $y \in \mathbb R_+$.
+A policy $\sigma \in \Sigma$ is called **optimal** if it attains the supremum in {eq}`vfcsdp0` for all $x \in \mathbb R_+$.
### The Bellman Equation
@@ -266,24 +283,23 @@ For this problem, the Bellman equation takes the form
```{math}
:label: fpb30
-v(y) = \max_{0 \leq c \leq y}
+v(x) = \max_{0 \leq c \leq x}
\left\{
- u(c) + \beta \int v(f(y - c) z) \phi(dz)
+ u(c) + \beta \int v(f(x - c) z) \phi(dz)
\right\}
-\qquad (y \in \mathbb R_+)
+\qquad (x \in \mathbb R_+)
```
This is a *functional equation in* $v$.
-The term $\int v(f(y - c) z) \phi(dz)$ can be understood as the expected next period value when
+The term $\int v(f(x - c) z) \phi(dz)$ can be understood as the expected next period value when
* $v$ is used to measure value
-* the state is $y$
+* the state is $x$
* consumption is set to $c$
-As shown in [EDTC](https://johnstachurski.net/edtc.html), theorem 10.1.11 and a range of other texts
-
-> *The value function* $v^*$ *satisfies the Bellman equation*
+As shown in [EDTC](https://johnstachurski.net/edtc.html), theorem 10.1.11 and a range of other texts,
+the value function $v^*$ satisfies the Bellman equation.
In other words, {eq}`fpb30` holds when $v=v^*$.
@@ -303,18 +319,18 @@ The primary importance of the value function is that we can use it to compute op
The details are as follows.
Given a continuous function $v$ on $\mathbb R_+$, we say that
-$\sigma \in \Sigma$ is $v$-**greedy** if $\sigma(y)$ is a solution to
+$\sigma \in \Sigma$ is $v$-**greedy** if $\sigma(x)$ is a solution to
```{math}
:label: defgp20
-\max_{0 \leq c \leq y}
+\max_{0 \leq c \leq x}
\left\{
- u(c) + \beta \int v(f(y - c) z) \phi(dz)
+ u(c) + \beta \int v(f(x - c) z) \phi(dz)
\right\}
```
-for every $y \in \mathbb R_+$.
+for every $x \in \mathbb R_+$.
In other words, $\sigma \in \Sigma$ is $v$-greedy if it optimally
trades off current and future rewards when $v$ is taken to be the value
@@ -348,11 +364,11 @@ The Bellman operator is denoted by $T$ and defined by
```{math}
:label: fcbell20_optgrowth
-Tv(y) := \max_{0 \leq c \leq y}
+Tv(x) := \max_{0 \leq c \leq x}
\left\{
- u(c) + \beta \int v(f(y - c) z) \phi(dz)
+ u(c) + \beta \int v(f(x - c) z) \phi(dz)
\right\}
-\qquad (y \in \mathbb R_+)
+\qquad (x \in \mathbb R_+)
```
In other words, $T$ sends the function $v$ into the new function
@@ -361,14 +377,14 @@ $Tv$ defined by {eq}`fcbell20_optgrowth`.
By construction, the set of solutions to the Bellman equation
{eq}`fpb30` *exactly coincides with* the set of fixed points of $T$.
-For example, if $Tv = v$, then, for any $y \geq 0$,
+For example, if $Tv = v$, then, for any $x \geq 0$,
$$
-v(y)
-= Tv(y)
-= \max_{0 \leq c \leq y}
+v(x)
+= Tv(x)
+= \max_{0 \leq c \leq x}
\left\{
- u(c) + \beta \int v^*(f(y - c) z) \phi(dz)
+    u(c) + \beta \int v(f(x - c) z) \phi(dz)
\right\}
$$
@@ -385,7 +401,7 @@ One can also show that $T$ is a contraction mapping on the set of
continuous bounded functions on $\mathbb R_+$ under the supremum distance
$$
-\rho(g, h) = \sup_{y \geq 0} |g(y) - h(y)|
+\rho(g, h) = \sup_{x \geq 0} |g(x) - h(x)|
$$
See [EDTC](https://johnstachurski.net/edtc.html), lemma 10.1.18.
@@ -420,12 +436,17 @@ In practice economists often work with unbounded utility functions --- and so wi
In the unbounded setting, various optimality theories exist.
-Unfortunately, they tend to be case-specific, as opposed to valid for a large range of applications.
-
Nevertheless, their main conclusions are usually in line with those stated for
the bounded case just above (as long as we drop the word "bounded").
-Consult, for example, section 12.2 of [EDTC](https://johnstachurski.net/edtc.html), {cite}`Kamihigashi2012` or {cite}`MV2010`.
+```{note}
+
+Consult the following references for more on the unbounded case:
+
+* The lecture {doc}`ifp_advanced`.
+* Section 12.2 of [EDTC](https://johnstachurski.net/edtc.html).
+```
+
## Computation
@@ -440,10 +461,6 @@ flexibility.
Both of these things are helpful, but they do cost us some speed --- as you
will see when you run the code.
-{doc}`Later ` we will sacrifice some of this clarity and
-flexibility in order to accelerate our code with just-in-time (JIT)
-compilation.
-
The algorithm we will use is fitted value function iteration, which was
described in earlier lectures {doc}`the McCall model ` and
{doc}`cake eating `.
@@ -452,13 +469,13 @@ The algorithm will be
(fvi_alg)=
1. Begin with an array of values $\{ v_1, \ldots, v_I \}$ representing
- the values of some initial function $v$ on the grid points $\{ y_1, \ldots, y_I \}$.
+ the values of some initial function $v$ on the grid points $\{ x_1, \ldots, x_I \}$.
1. Build a function $\hat v$ on the state space $\mathbb R_+$ by
linear interpolation, based on these data points.
-1. Obtain and record the value $T \hat v(y_i)$ on each grid point
- $y_i$ by repeatedly solving {eq}`fcbell20_optgrowth`.
+1. Obtain and record the value $T \hat v(x_i)$ on each grid point
+ $x_i$ by repeatedly solving {eq}`fcbell20_optgrowth`.
1. Unless some stopping condition is satisfied, set
- $\{ v_1, \ldots, v_I \} = \{ T \hat v(y_1), \ldots, T \hat v(y_I) \}$ and go to step 2.
+ $\{ v_1, \ldots, v_I \} = \{ T \hat v(x_1), \ldots, T \hat v(x_I) \}$ and go to step 2.
### Scalar Maximization
@@ -490,7 +507,7 @@ def maximize(g, a, b, args):
return maximizer, maximum
```
-### Optimal Growth Model
+### Stochastic Cake Eating Model
We will assume for now that $\phi$ is the distribution of $\xi := \exp(\mu + s \zeta)$ where
@@ -498,44 +515,56 @@ We will assume for now that $\phi$ is the distribution of $\xi := \exp(\mu + s \
* $\mu$ is a shock location parameter and
* $s$ is a shock scale parameter.
-We will store this and other primitives of the optimal growth model in a class.
-
-The class, defined below, combines both parameters and a method that realizes the
-right hand side of the Bellman equation {eq}`fpb30`.
+We will store the primitives of the model in a `NamedTuple`.
```{code-cell} python3
-class OptimalGrowthModel:
-
- def __init__(self,
- u, # utility function
- f, # production function
- β=0.96, # discount factor
- μ=0, # shock location parameter
- s=0.1, # shock scale parameter
- grid_max=4,
- grid_size=120,
- shock_size=250,
- seed=1234):
-
- self.u, self.f, self.β, self.μ, self.s = u, f, β, μ, s
+from typing import NamedTuple, Callable
+
+class Model(NamedTuple):
+ u: Callable # utility function
+ f: Callable # production function
+ β: float # discount factor
+ μ: float # shock location parameter
+ s: float # shock scale parameter
+ grid: np.ndarray # state grid
+ shocks: np.ndarray # shock draws
+
+
+def create_model(u: Callable,
+ f: Callable,
+ β: float = 0.96,
+ μ: float = 0.0,
+ s: float = 0.1,
+ grid_max: float = 4.0,
+ grid_size: int = 120,
+ shock_size: int = 250,
+ seed: int = 1234) -> Model:
+ """
+ Creates an instance of the cake eating model.
+ """
+ # Set up grid
+ grid = np.linspace(1e-4, grid_max, grid_size)
- # Set up grid
- self.grid = np.linspace(1e-4, grid_max, grid_size)
+ # Store shocks (with a seed, so results are reproducible)
+ np.random.seed(seed)
+ shocks = np.exp(μ + s * np.random.randn(shock_size))
- # Store shocks (with a seed, so results are reproducible)
- np.random.seed(seed)
- self.shocks = np.exp(μ + s * np.random.randn(shock_size))
+ return Model(u=u, f=f, β=β, μ=μ, s=s, grid=grid, shocks=shocks)
- def state_action_value(self, c, y, v_array):
- """
- Right hand side of the Bellman equation.
- """
- u, f, β, shocks = self.u, self.f, self.β, self.shocks
+def state_action_value(c: float,
+ model: Model,
+ x: float,
+ v_array: np.ndarray) -> float:
+ """
+ Right hand side of the Bellman equation.
+ """
+ u, f, β, shocks = model.u, model.f, model.β, model.shocks
+ grid = model.grid
- v = interp1d(self.grid, v_array)
+ v = interp1d(grid, v_array)
- return u(c) + β * np.mean(v(f(y - c) * shocks))
+ return u(c) + β * np.mean(v(f(x - c) * shocks))
```
In the second last line we are using linear interpolation.
@@ -544,7 +573,7 @@ In the last line, the expectation in {eq}`fcbell20_optgrowth` is
computed via [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_integration), using the approximation
$$
-\int v(f(y - c) z) \phi(dz) \approx \frac{1}{n} \sum_{i=1}^n v(f(y - c) \xi_i)
+\int v(f(x - c) z) \phi(dz) \approx \frac{1}{n} \sum_{i=1}^n v(f(x - c) \xi_i)
$$
where $\{\xi_i\}_{i=1}^n$ are IID draws from $\phi$.
@@ -558,35 +587,32 @@ but it does have some theoretical advantages in the present setting.
The next function implements the Bellman operator.
-(We could have added it as a method to the `OptimalGrowthModel` class, but we
-prefer small classes rather than monolithic ones for this kind of
-numerical work.)
-
```{code-cell} python3
-def T(v, og):
+def T(v: np.ndarray, model: Model) -> tuple[np.ndarray, np.ndarray]:
"""
The Bellman operator. Updates the guess of the value function
and also computes a v-greedy policy.
- * og is an instance of OptimalGrowthModel
+ * model is an instance of Model
* v is an array representing a guess of the value function
"""
+ grid = model.grid
v_new = np.empty_like(v)
v_greedy = np.empty_like(v)
for i in range(len(grid)):
- y = grid[i]
+ x = grid[i]
- # Maximize RHS of Bellman equation at state y
- c_star, v_max = maximize(og.state_action_value, 1e-10, y, (y, v))
+ # Maximize RHS of Bellman equation at state x
+ c_star, v_max = maximize(state_action_value, 1e-10, x, (model, x, v))
v_new[i] = v_max
v_greedy[i] = c_star
return v_greedy, v_new
```
-(benchmark_growth_mod)=
+(benchmark_cake_mod)=
### An Example
Let's suppose now that
@@ -602,19 +628,19 @@ For this particular problem, an exact analytical solution is available (see {cit
```{math}
:label: dpi_tv
-v^*(y) =
+v^*(x) =
\frac{\ln (1 - \alpha \beta) }{ 1 - \beta} +
\frac{(\mu + \alpha \ln (\alpha \beta))}{1 - \alpha}
\left[
\frac{1}{1- \beta} - \frac{1}{1 - \alpha \beta}
\right] +
- \frac{1}{1 - \alpha \beta} \ln y
+ \frac{1}{1 - \alpha \beta} \ln x
```
and optimal consumption policy
$$
-\sigma^*(y) = (1 - \alpha \beta ) y
+\sigma^*(x) = (1 - \alpha \beta ) x
$$
It is valuable to have these closed-form solutions because it lets us check
@@ -623,17 +649,31 @@ whether our code works for this particular case.
In Python, the functions above can be expressed as:
```{code-cell} python3
-:load: _static/lecture_specific/optgrowth/cd_analytical.py
+def v_star(x, α, β, μ):
+ """
+ True value function
+ """
+ c1 = np.log(1 - α * β) / (1 - β)
+ c2 = (μ + α * np.log(α * β)) / (1 - α)
+ c3 = 1 / (1 - β)
+ c4 = 1 / (1 - α * β)
+ return c1 + c2 * (c3 - c4) + c4 * np.log(x)
+
+def σ_star(x, α, β):
+ """
+ True optimal policy
+ """
+ return (1 - α * β) * x
```
-Next let's create an instance of the model with the above primitives and assign it to the variable `og`.
+Next let's create an instance of the model with the above primitives and assign it to the variable `model`.
```{code-cell} python3
α = 0.4
def fcd(k):
return k**α
-og = OptimalGrowthModel(u=np.log, f=fcd)
+model = create_model(u=np.log, f=fcd)
```
Now let's see what happens when we apply our Bellman operator to the exact
@@ -644,10 +684,10 @@ In theory, since $v^*$ is a fixed point, the resulting function should again be
In practice, we expect some small numerical error.
```{code-cell} python3
-grid = og.grid
+grid = model.grid
-v_init = v_star(grid, α, og.β, og.μ) # Start at the solution
-v_greedy, v = T(v_init, og) # Apply T once
+v_init = v_star(grid, α, model.β, model.μ) # Start at the solution
+v_greedy, v = T(v_init, model) # Apply T once
fig, ax = plt.subplots()
ax.set_ylim(-35, -24)
@@ -662,7 +702,7 @@ The two functions are essentially indistinguishable, so we are off to a good sta
Now let's have a look at iterating with the Bellman operator, starting
from an arbitrary initial condition.
-The initial condition we'll start with is, somewhat arbitrarily, $v(y) = 5 \ln (y)$.
+The initial condition we'll start with is, somewhat arbitrarily, $v(x) = 5 \ln (x)$.
```{code-cell} python3
v = 5 * np.log(grid) # An initial condition
@@ -674,10 +714,10 @@ ax.plot(grid, v, color=plt.cm.jet(0),
lw=2, alpha=0.6, label='Initial condition')
for i in range(n):
- v_greedy, v = T(v, og) # Apply the Bellman operator
+ v_greedy, v = T(v, model) # Apply the Bellman operator
ax.plot(grid, v, color=plt.cm.jet(i / n), lw=2, alpha=0.6)
-ax.plot(grid, v_star(grid, α, og.β, og.μ), 'k-', lw=2,
+ax.plot(grid, v_star(grid, α, model.β, model.μ), 'k-', lw=2,
alpha=0.8, label='True value function')
ax.legend()
@@ -700,13 +740,41 @@ We can write a function that iterates until the difference is below a particular
tolerance level.
```{code-cell} python3
-:load: _static/lecture_specific/optgrowth/solve_model.py
+def solve_model(model: Model,
+                tol: float = 1e-4,
+                max_iter: int = 1000,
+                verbose: bool = True,
+                print_skip: int = 25) -> tuple[np.ndarray, np.ndarray]:
+    """
+    Solve the model by iterating with the Bellman operator.
+    """
+    # Set up loop
+    v = model.u(model.grid)  # Initial condition
+    i = 0
+    error = tol + 1
+
+    while i < max_iter and error > tol:
+        v_greedy, v_new = T(v, model)
+        error = np.max(np.abs(v - v_new))
+        i += 1
+        if verbose and i % print_skip == 0:
+            print(f"Error at iteration {i} is {error}.")
+        v = v_new
+
+    if error > tol:
+        print("Failed to converge!")
+    elif verbose:
+        print(f"\nConverged in {i} iterations.")
+
+    return v_greedy, v_new
```
Let's use this function to compute an approximate solution at the defaults.
```{code-cell} python3
-v_greedy, v_solution = solve_model(og)
+v_greedy, v_solution = solve_model(model)
```
Now we check our result by plotting it against the true value:
@@ -717,7 +785,7 @@ fig, ax = plt.subplots()
ax.plot(grid, v_solution, lw=2, alpha=0.6,
label='Approximate value function')
-ax.plot(grid, v_star(grid, α, og.β, og.μ), lw=2,
+ax.plot(grid, v_star(grid, α, model.β, model.μ), lw=2,
alpha=0.6, label='True value function')
ax.legend()
@@ -729,13 +797,13 @@ The figure shows that we are pretty much on the money.
### The Policy Function
-```{index} single: Optimal Growth; Policy Function
+```{index} single: Stochastic Cake Eating; Policy Function
```
The policy `v_greedy` computed above corresponds to an approximate optimal policy.
The next figure compares it to the exact solution, which, as mentioned
-above, is $\sigma(y) = (1 - \alpha \beta) y$
+above, is $\sigma(x) = (1 - \alpha \beta) x$
```{code-cell} python3
fig, ax = plt.subplots()
@@ -743,7 +811,7 @@ fig, ax = plt.subplots()
ax.plot(grid, v_greedy, lw=2,
alpha=0.6, label='approximate policy function')
-ax.plot(grid, σ_star(grid, α, og.β), '--',
+ax.plot(grid, σ_star(grid, α, model.β), '--',
lw=2, alpha=0.6, label='true policy function')
ax.legend()
@@ -767,12 +835,11 @@ u(c) = \frac{c^{1 - \gamma}} {1 - \gamma}
$$
Maintaining the other defaults, including the Cobb-Douglas production
-function, solve the optimal growth model with this
+function, solve the stochastic cake eating model with this
utility specification.
Setting $\gamma = 1.5$, compute and plot an estimate of the optimal policy.
-Time how long this function takes to run, so you can compare it to faster code developed in the {doc}`next lecture `.
```
```{solution-start} og_ex1
@@ -787,14 +854,14 @@ Here we set up the model.
def u_crra(c):
return (c**(1 - γ) - 1) / (1 - γ)
-og = OptimalGrowthModel(u=u_crra, f=fcd)
+model = create_model(u=u_crra, f=fcd)
```
Now let's run it, with a timer.
```{code-cell} python3
%%time
-v_greedy, v_solution = solve_model(og)
+v_greedy, v_solution = solve_model(model)
```
Let's plot the policy function just to see what it looks like:
@@ -812,37 +879,3 @@ plt.show()
```{solution-end}
```
-```{exercise}
-:label: og_ex2
-
-Time how long it takes to iterate with the Bellman operator
-20 times, starting from initial condition $v(y) = u(y)$.
-
-Use the model specification in the previous exercise.
-
-(As before, we will compare this number with that for the faster code developed in the {doc}`next lecture `.)
-```
-
-```{solution-start} og_ex2
-:class: dropdown
-```
-
-Let's set up:
-
-```{code-cell} ipython3
-og = OptimalGrowthModel(u=u_crra, f=fcd)
-v = og.u(og.grid)
-```
-
-Here's the timing:
-
-```{code-cell} ipython3
-%%time
-
-for i in range(20):
- v_greedy, v_new = T(v, og)
- v = v_new
-```
-
-```{solution-end}
-```
diff --git a/lectures/cake_eating_time_iter.md b/lectures/cake_eating_time_iter.md
new file mode 100644
index 000000000..21f30141f
--- /dev/null
+++ b/lectures/cake_eating_time_iter.md
@@ -0,0 +1,545 @@
+---
+jupytext:
+ text_representation:
+ extension: .md
+ format_name: myst
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+```{raw} jupyter
+
+```
+
+# {index}`Cake Eating IV: Time Iteration `
+
+```{contents} Contents
+:depth: 2
+```
+
+In addition to what's in Anaconda, this lecture will need the following libraries:
+
+```{code-cell} ipython
+---
+tags: [hide-output]
+---
+!pip install quantecon
+```
+
+## Overview
+
+In this lecture, we introduce the core idea of **time iteration**: iterating on
+a guess of the optimal policy using the Euler equation.
+
+This approach differs from the value function iteration we used in
+{doc}`Cake Eating III `, where we iterated on the value function itself.
+
+Time iteration exploits the structure of the Euler equation to find the optimal
+policy directly, rather than computing the value function as an intermediate step.
+
+The key advantage is computational efficiency: by working directly with the
+policy function, we can often solve problems faster than with value function iteration.
+
+However, time iteration is not the most efficient Euler equation-based method
+available.
+
+In {doc}`Cake Eating V `, we'll introduce the **endogenous
+grid method** (EGM), which provides an even more efficient way to solve the
+problem.
+
+For now, our goal is to understand the basic mechanics of time iteration and
+how it leverages the Euler equation.
+
+Let's start with some imports:
+
+```{code-cell} ipython
+import matplotlib.pyplot as plt
+import numpy as np
+from scipy.optimize import brentq
+from typing import NamedTuple, Callable
+```
+
+## The Euler Equation
+
+Our first step is to derive the Euler equation, which is a generalization of
+the Euler equation we obtained in {doc}`Cake Eating I `.
+
+We take the model set out in {doc}`Cake Eating III ` and add the following assumptions:
+
+1. $u$ and $f$ are continuously differentiable and strictly concave
+1. $f(0) = 0$
+1. $\lim_{c \to 0} u'(c) = \infty$ and $\lim_{c \to \infty} u'(c) = 0$
+1. $\lim_{k \to 0} f'(k) = \infty$ and $\lim_{k \to \infty} f'(k) = 0$
+
+The last two conditions are usually called **Inada conditions**. (For example, $u(c) = \ln c$ and $f(k) = k^{\alpha}$ with $0 < \alpha < 1$, which we adopt below, satisfy all of these assumptions.)
+
+Recall the Bellman equation
+
+```{math}
+:label: cpi_fpb30
+
+v^*(x) = \max_{0 \leq c \leq x}
+ \left\{
+ u(c) + \beta \int v^*(f(x - c) z) \phi(dz)
+ \right\}
+\quad \text{for all} \quad
+x \in \mathbb R_+
+```
+
+Let the optimal consumption policy be denoted by $\sigma^*$.
+
+We know that $\sigma^*$ is a $v^*$-greedy policy so that $\sigma^*(x)$ is the maximizer in {eq}`cpi_fpb30`.
+
+The conditions above imply that
+
+* $\sigma^*$ is the unique optimal policy for the stochastic cake eating problem
+* the optimal policy is continuous, strictly increasing and also **interior**, in the sense that $0 < \sigma^*(x) < x$ for all strictly positive $x$, and
+* the value function is strictly concave and continuously differentiable, with
+
+```{math}
+:label: cpi_env
+
+(v^*)'(x) = u' (\sigma^*(x) ) := (u' \circ \sigma^*)(x)
+```
+
+The last result is called the **envelope condition** due to its relationship with the [envelope theorem](https://en.wikipedia.org/wiki/Envelope_theorem).
+
+To see why {eq}`cpi_env` holds, write the Bellman equation in the equivalent
+form
+
+$$
+v^*(x) = \max_{0 \leq k \leq x}
+ \left\{
+ u(x-k) + \beta \int v^*(f(k) z) \phi(dz)
+ \right\},
+$$
+
+Differentiating with respect to $x$, and then evaluating at the optimum yields {eq}`cpi_env`.
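+
+In more detail: at the optimum, the indirect effect of $x$ operating through the choice of $k$ vanishes (this is exactly what the envelope theorem says), leaving only the direct effect on current utility:
+
+$$
+(v^*)'(x)
+= \frac{\partial}{\partial x}
+  \left[
+    u(x - k) + \beta \int v^*(f(k) z) \phi(dz)
+  \right]
+  \Bigg|_{k = x - \sigma^*(x)}
+= u'(\sigma^*(x))
+$$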
+
+(Section 12.1 of [EDTC](https://johnstachurski.net/edtc.html) contains full proofs of these results, and closely related discussions can be found in many other texts.)
+
+Differentiability of the value function and interiority of the optimal policy
+imply that optimal consumption satisfies the first order condition associated
+with {eq}`cpi_fpb30`, which is
+
+```{math}
+:label: cpi_foc
+
+u'(\sigma^*(x)) = \beta \int (v^*)'(f(x - \sigma^*(x)) z) f'(x - \sigma^*(x)) z \phi(dz)
+```
+
+Combining {eq}`cpi_env` and the first-order condition {eq}`cpi_foc` gives the **Euler equation**
+
+```{math}
+:label: cpi_euler
+
+(u'\circ \sigma^*)(x)
+= \beta \int (u'\circ \sigma^*)(f(x - \sigma^*(x)) z) f'(x - \sigma^*(x)) z \phi(dz)
+```
+
+We can think of the Euler equation as a functional equation
+
+```{math}
+:label: cpi_euler_func
+
+(u'\circ \sigma)(x)
+= \beta \int (u'\circ \sigma)(f(x - \sigma(x)) z) f'(x - \sigma(x)) z \phi(dz)
+```
+
+over interior consumption policies $\sigma$, one solution of which is the optimal policy $\sigma^*$.
+
+Our aim is to solve the functional equation {eq}`cpi_euler_func` and hence obtain $\sigma^*$.
+
+### The Coleman-Reffett Operator
+
+Recall the Bellman operator
+
+```{math}
+:label: fcbell20_coleman
+
+Tv(x) := \max_{0 \leq c \leq x}
+\left\{
+ u(c) + \beta \int v(f(x - c) z) \phi(dz)
+\right\}
+```
+
+Just as we introduced the Bellman operator to solve the Bellman equation, we
+will now introduce an operator over policies to help us solve the Euler
+equation.
+
+This operator $K$ will act on the set of all $\sigma \in \Sigma$
+that are continuous, strictly increasing and interior.
+
+Henceforth we denote this set of policies by $\mathscr P$.
+
+1. The operator $K$ takes as its argument a $\sigma \in \mathscr P$ and
+1. returns a new function $K\sigma$, where $K\sigma(x)$ is the $c \in (0, x)$ that solves
+
+```{math}
+:label: cpi_coledef
+
+u'(c)
+= \beta \int (u' \circ \sigma) (f(x - c) z ) f'(x - c) z \phi(dz)
+```
+
+We call this operator the **Coleman-Reffett operator** to acknowledge the work of
+{cite}`Coleman1990` and {cite}`Reffett1996`.
+
+In essence, $K\sigma$ is the consumption policy that the Euler equation tells
+you to choose today when your future consumption policy is $\sigma$.
+
+The important thing to note about $K$ is that, by
+construction, its fixed points coincide with solutions to the functional
+equation {eq}`cpi_euler_func`.
+
+In particular, the optimal policy $\sigma^*$ is a fixed point.
+
+Indeed, for fixed $x$, the value $K\sigma^*(x)$ is the $c$ that
+solves
+
+$$
+u'(c)
+= \beta \int (u' \circ \sigma^*) (f(x - c) z ) f'(x - c) z \phi(dz)
+$$
+
+In view of the Euler equation, this is exactly $\sigma^*(x)$.
+
+### Is the Coleman-Reffett Operator Well Defined?
+
+In particular, is there always a unique $c \in (0, x)$ that solves
+{eq}`cpi_coledef`?
+
+The answer is yes, under our assumptions.
+
+For any $\sigma \in \mathscr P$, the right side of {eq}`cpi_coledef`
+
+* is continuous and strictly increasing in $c$ on $(0, x)$
+* diverges to $+\infty$ as $c \uparrow x$
+
+The left side of {eq}`cpi_coledef`
+
+* is continuous and strictly decreasing in $c$ on $(0, x)$
+* diverges to $+\infty$ as $c \downarrow 0$
+
+Sketching these curves and using the information above will convince you that they cross exactly once as $c$ ranges over $(0, x)$.
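+
+Here is a minimal numerical sketch of this argument (the primitives below are illustrative choices, not yet the model we solve later: log utility, Cobb-Douglas production with $\alpha = 0.4$, lognormal shocks, and the candidate policy $\sigma(x) = 0.5x$):
+
+```{code-cell} python3
+# A numerical sketch of the crossing argument with illustrative primitives
+α, β = 0.4, 0.96
+u_prime = lambda c: 1 / c                    # u(c) = ln(c)
+f = lambda k: k**α
+f_prime = lambda k: α * k**(α - 1)
+σ = lambda x: 0.5 * x                        # an arbitrary interior candidate policy
+
+np.random.seed(42)
+shocks = np.exp(0.1 * np.random.randn(250))  # lognormal shock draws
+
+x = 1.0                                      # fix a state
+c_grid = np.linspace(0.05, 0.95, 200)        # interior consumption values
+
+lhs = u_prime(c_grid)
+rhs = [β * np.mean(u_prime(σ(f(x - c) * shocks)) * f_prime(x - c) * shocks)
+       for c in c_grid]
+
+fig, ax = plt.subplots()
+ax.plot(c_grid, lhs, label="left side $u'(c)$")
+ax.plot(c_grid, rhs, label="right side of the defining equation")
+ax.set_xlabel('$c$')
+ax.legend()
+plt.show()
+```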
+
+With a bit more analysis, one can show in addition that $K \sigma \in \mathscr P$
+whenever $\sigma \in \mathscr P$.
+
+### Comparison with VFI (Theory)
+
+It is possible to prove that there is a tight relationship between iterates of
+$K$ and iterates of the Bellman operator.
+
+Mathematically, the two operators are *topologically conjugate*.
+
+Loosely speaking, this means that if iterates of one operator converge then
+so do iterates of the other, and vice versa.
+
+Moreover, there is a sense in which they converge at the same rate, at least
+in theory.
+
+However, it turns out that the operator $K$ is more stable numerically
+and hence more efficient in the applications we consider.
+
+Examples are given below.
+
+## Implementation
+
+As in {doc}`Cake Eating III `, we continue to assume that
+
+* $u(c) = \ln c$
+* $f(k) = k^{\alpha}$
+* $\phi$ is the distribution of $\xi := \exp(\mu + s \zeta)$ when $\zeta$ is standard normal
+
+This will allow us to compare our results to the analytical solutions:
+
+```{code-cell} python3
+def v_star(x, α, β, μ):
+ """
+ True value function
+ """
+ c1 = np.log(1 - α * β) / (1 - β)
+ c2 = (μ + α * np.log(α * β)) / (1 - α)
+ c3 = 1 / (1 - β)
+ c4 = 1 / (1 - α * β)
+ return c1 + c2 * (c3 - c4) + c4 * np.log(x)
+
+def σ_star(x, α, β):
+ """
+ True optimal policy
+ """
+ return (1 - α * β) * x
+```
+
+As discussed above, our plan is to solve the model using time iteration, which
+means iterating with the operator $K$.
+
+For this we need access to the derivative functions $u'$ and $f'$, in addition to $u$ and $f$.
+
+We use a `Model` structure similar to the one in {doc}`Cake Eating III <cake_eating_stochastic>`, extended to store $\alpha$ and the derivatives $u'$ and $f'$.
+
+```{code-cell} python3
+class Model(NamedTuple):
+ u: Callable # utility function
+ f: Callable # production function
+ β: float # discount factor
+ μ: float # shock location parameter
+ s: float # shock scale parameter
+ grid: np.ndarray # state grid
+ shocks: np.ndarray # shock draws
+ α: float = 0.4 # production function parameter
+ u_prime: Callable = None # derivative of utility
+ f_prime: Callable = None # derivative of production
+
+
+def create_model(u: Callable,
+ f: Callable,
+ β: float = 0.96,
+ μ: float = 0.0,
+ s: float = 0.1,
+ grid_max: float = 4.0,
+ grid_size: int = 120,
+ shock_size: int = 250,
+ seed: int = 1234,
+ α: float = 0.4,
+ u_prime: Callable = None,
+ f_prime: Callable = None) -> Model:
+ """
+ Creates an instance of the cake eating model.
+ """
+ # Set up grid
+ grid = np.linspace(1e-4, grid_max, grid_size)
+
+ # Store shocks (with a seed, so results are reproducible)
+ np.random.seed(seed)
+ shocks = np.exp(μ + s * np.random.randn(shock_size))
+
+ return Model(u=u, f=f, β=β, μ=μ, s=s, grid=grid, shocks=shocks,
+ α=α, u_prime=u_prime, f_prime=f_prime)
+```
+
+Now we implement a method called `euler_diff`, which returns
+
+```{math}
+:label: euler_diff
+
+u'(c) - \beta \int (u' \circ \sigma) (f(x - c) z ) f'(x - c) z \phi(dz)
+```
+
+```{code-cell} ipython
+def euler_diff(c: float, σ: np.ndarray, x: float, model: Model) -> float:
+ """
+ Set up a function such that the root with respect to c,
+ given x and σ, is equal to Kσ(x).
+
+ """
+
+ β, shocks, grid = model.β, model.shocks, model.grid
+ f, f_prime, u_prime = model.f, model.f_prime, model.u_prime
+
+ # First turn σ into a function via interpolation
+ σ_func = lambda x: np.interp(x, grid, σ)
+
+ # Now set up the function we need to find the root of.
+ vals = u_prime(σ_func(f(x - c) * shocks)) * f_prime(x - c) * shocks
+ return u_prime(c) - β * np.mean(vals)
+```
+
+The function `euler_diff` evaluates integrals by Monte Carlo and
+approximates functions using linear interpolation.
+
+We will use a root-finding algorithm to find the $c$ that sets {eq}`euler_diff`
+equal to zero, given the state $x$ and the current guess of the policy $\sigma$.
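+
+If `brentq` is new to you, here is a tiny illustration of the call pattern we rely on below (the quadratic is purely for demonstration and is not part of the model):
+
+```{code-cell} python3
+# brentq finds a root of a scalar function on an interval [a, b] over which
+# the function changes sign, returning the root as a float.
+from scipy.optimize import brentq   # already imported above
+
+brentq(lambda c: c**2 - 2, 0, 2)    # root of c**2 - 2 on (0, 2), approx 1.4142
+```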
+
+Here's the operator $K$, which implements the root-finding step.
+
+```{code-cell} ipython3
+def K(σ: np.ndarray, model: Model) -> np.ndarray:
+ """
+ The Coleman-Reffett operator
+
+ Here model is an instance of Model.
+ """
+
+ β = model.β
+ f, f_prime, u_prime = model.f, model.f_prime, model.u_prime
+ grid, shocks = model.grid, model.shocks
+
+ σ_new = np.empty_like(σ)
+ for i, x in enumerate(grid):
+ # Solve for optimal c at x
+ c_star = brentq(euler_diff, 1e-10, x-1e-10, args=(σ, x, model))
+ σ_new[i] = c_star
+
+ return σ_new
+```
+
+### Testing
+
+Let's generate an instance and plot some iterates of $K$, starting from $σ(x) = x$.
+
+```{code-cell} python3
+# Define utility and production functions with derivatives
+α = 0.4
+u = lambda c: np.log(c)
+u_prime = lambda c: 1 / c
+f = lambda k: k**α
+f_prime = lambda k: α * k**(α - 1)
+
+model = create_model(u=u, f=f, α=α, u_prime=u_prime, f_prime=f_prime)
+grid = model.grid
+
+n = 15
+σ = grid.copy() # Set initial condition
+
+fig, ax = plt.subplots()
+lb = r'initial condition $\sigma(x) = x$'
+ax.plot(grid, σ, color=plt.cm.jet(0), alpha=0.6, label=lb)
+
+for i in range(n):
+ σ = K(σ, model)
+ ax.plot(grid, σ, color=plt.cm.jet(i / n), alpha=0.6)
+
+# Update one more time and plot the last iterate in black
+σ = K(σ, model)
+ax.plot(grid, σ, color='k', alpha=0.8, label='last iterate')
+
+ax.legend()
+
+plt.show()
+```
+
+We see that the iteration process converges quickly to a limit
+that resembles the solution we obtained in {doc}`Cake Eating III `.
+
+Here is a function called `solve_model_time_iter` that takes an instance of
+`Model` and returns an approximation to the optimal policy,
+using time iteration.
+
+```{code-cell} python3
+def solve_model_time_iter(model: Model,
+ σ_init: np.ndarray,
+ tol: float = 1e-5,
+ max_iter: int = 1000,
+ verbose: bool = True) -> np.ndarray:
+ """
+ Solve the model using time iteration.
+ """
+ σ = σ_init
+ error = tol + 1
+ i = 0
+
+ while error > tol and i < max_iter:
+ σ_new = K(σ, model)
+ error = np.max(np.abs(σ_new - σ))
+ σ = σ_new
+ i += 1
+ if verbose:
+ print(f"Iteration {i}, error = {error}")
+
+ if i == max_iter:
+ print("Warning: maximum iterations reached")
+
+ return σ
+```
+
+Let's call it:
+
+```{code-cell} python3
+σ_init = np.copy(model.grid)
+σ = solve_model_time_iter(model, σ_init)
+```
+
+Here is a plot of the resulting policy, compared with the true policy:
+
+```{code-cell} python3
+fig, ax = plt.subplots()
+
+ax.plot(model.grid, σ, lw=2,
+ alpha=0.8, label='approximate policy function')
+
+ax.plot(model.grid, σ_star(model.grid, model.α, model.β), 'k--',
+ lw=2, alpha=0.8, label='true policy function')
+
+ax.legend()
+plt.show()
+```
+
+Again, the fit is excellent.
+
+The maximal absolute deviation between the two policies is
+
+```{code-cell} python3
+np.max(np.abs(σ - σ_star(model.grid, model.α, model.β)))
+```
+
+Time iteration runs faster than value function iteration, as discussed in {doc}`cake_eating_stochastic`.
+
+This is because time iteration exploits differentiability and the first order conditions, while value function iteration does not use this available structure.
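+
+As a rough check, we can time the solver (reusing `model` and `σ_init` from above; exact numbers will vary across machines):
+
+```{code-cell} python3
+%%timeit -n 3 -r 1
+σ = solve_model_time_iter(model, σ_init, verbose=False)
+```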
+
+At the same time, there is a variation of time iteration that runs even faster.
+
+This is the endogenous grid method, which we will introduce in {doc}`cake_eating_egm`.
+
+## Exercises
+
+```{exercise}
+:label: cpi_ex1
+
+Solve the stochastic cake eating problem with CRRA utility
+
+$$
+u(c) = \frac{c^{1 - \gamma}} {1 - \gamma}
+$$
+
+Set `γ = 1.5`.
+
+Compute and plot the optimal policy.
+```
+
+```{solution-start} cpi_ex1
+:class: dropdown
+```
+
+We define the CRRA utility function and its derivative.
+
+```{code-cell} python3
+γ = 1.5
+
+def u_crra(c):
+ return c**(1 - γ) / (1 - γ)
+
+def u_prime_crra(c):
+ return c**(-γ)
+
+# Use same production function as before
+model_crra = create_model(u=u_crra, f=f, α=α,
+ u_prime=u_prime_crra, f_prime=f_prime)
+```
+
+Now we solve and plot the policy:
+
+```{code-cell} python3
+%%time
+σ_init = np.copy(model_crra.grid)
+σ = solve_model_time_iter(model_crra, σ_init)
+
+
+fig, ax = plt.subplots()
+
+ax.plot(model_crra.grid, σ, lw=2,
+ alpha=0.8, label='approximate policy function')
+
+ax.legend()
+plt.show()
+```
+
+```{solution-end}
+```
diff --git a/lectures/coleman_policy_iter.md b/lectures/coleman_policy_iter.md
deleted file mode 100644
index 7bec1c5e0..000000000
--- a/lectures/coleman_policy_iter.md
+++ /dev/null
@@ -1,468 +0,0 @@
----
-jupytext:
- text_representation:
- extension: .md
- format_name: myst
-kernelspec:
- display_name: Python 3
- language: python
- name: python3
----
-
-```{raw} jupyter
-
-```
-
-# {index}`Optimal Growth III: Time Iteration `
-
-```{contents} Contents
-:depth: 2
-```
-
-In addition to what's in Anaconda, this lecture will need the following libraries:
-
-```{code-cell} ipython
----
-tags: [hide-output]
----
-!pip install quantecon
-```
-
-## Overview
-
-In this lecture, we'll continue our {doc}`earlier ` study of the stochastic optimal growth model.
-
-In that lecture, we solved the associated dynamic programming
-problem using value function iteration.
-
-The beauty of this technique is its broad applicability.
-
-With numerical problems, however, we can often attain higher efficiency in
-specific applications by deriving methods that are carefully tailored to the
-application at hand.
-
-The stochastic optimal growth model has plenty of structure to exploit for
-this purpose, especially when we adopt some concavity and smoothness
-assumptions over primitives.
-
-We'll use this structure to obtain an Euler equation based method.
-
-This will be an extension of the time iteration method considered
-in our elementary lecture on {doc}`cake eating `.
-
-In a {doc}`subsequent lecture `, we'll see that time
-iteration can be further adjusted to obtain even more efficiency.
-
-Let's start with some imports:
-
-```{code-cell} ipython
-import matplotlib.pyplot as plt
-import numpy as np
-from quantecon.optimize import brentq
-from numba import jit
-```
-
-## The Euler Equation
-
-Our first step is to derive the Euler equation, which is a generalization of
-the Euler equation we obtained in the {doc}`lecture on cake eating `.
-
-We take the model set out in {doc}`the stochastic growth model lecture ` and add the following assumptions:
-
-1. $u$ and $f$ are continuously differentiable and strictly concave
-1. $f(0) = 0$
-1. $\lim_{c \to 0} u'(c) = \infty$ and $\lim_{c \to \infty} u'(c) = 0$
-1. $\lim_{k \to 0} f'(k) = \infty$ and $\lim_{k \to \infty} f'(k) = 0$
-
-The last two conditions are usually called **Inada conditions**.
-
-Recall the Bellman equation
-
-```{math}
-:label: cpi_fpb30
-
-v^*(y) = \max_{0 \leq c \leq y}
- \left\{
- u(c) + \beta \int v^*(f(y - c) z) \phi(dz)
- \right\}
-\quad \text{for all} \quad
-y \in \mathbb R_+
-```
-
-Let the optimal consumption policy be denoted by $\sigma^*$.
-
-We know that $\sigma^*$ is a $v^*$-greedy policy so that $\sigma^*(y)$ is the maximizer in {eq}`cpi_fpb30`.
-
-The conditions above imply that
-
-* $\sigma^*$ is the unique optimal policy for the stochastic optimal growth model
-* the optimal policy is continuous, strictly increasing and also **interior**, in the sense that $0 < \sigma^*(y) < y$ for all strictly positive $y$, and
-* the value function is strictly concave and continuously differentiable, with
-
-```{math}
-:label: cpi_env
-
-(v^*)'(y) = u' (\sigma^*(y) ) := (u' \circ \sigma^*)(y)
-```
-
-The last result is called the **envelope condition** due to its relationship with the [envelope theorem](https://en.wikipedia.org/wiki/Envelope_theorem).
-
-To see why {eq}`cpi_env` holds, write the Bellman equation in the equivalent
-form
-
-$$
-v^*(y) = \max_{0 \leq k \leq y}
- \left\{
- u(y-k) + \beta \int v^*(f(k) z) \phi(dz)
- \right\},
-$$
-
-Differentiating with respect to $y$, and then evaluating at the optimum yields {eq}`cpi_env`.
-
-(Section 12.1 of [EDTC](https://johnstachurski.net/edtc.html) contains full proofs of these results, and closely related discussions can be found in many other texts.)
-
-Differentiability of the value function and interiority of the optimal policy
-imply that optimal consumption satisfies the first order condition associated
-with {eq}`cpi_fpb30`, which is
-
-```{math}
-:label: cpi_foc
-
-u'(\sigma^*(y)) = \beta \int (v^*)'(f(y - \sigma^*(y)) z) f'(y - \sigma^*(y)) z \phi(dz)
-```
-
-Combining {eq}`cpi_env` and the first-order condition {eq}`cpi_foc` gives the **Euler equation**
-
-```{math}
-:label: cpi_euler
-
-(u'\circ \sigma^*)(y)
-= \beta \int (u'\circ \sigma^*)(f(y - \sigma^*(y)) z) f'(y - \sigma^*(y)) z \phi(dz)
-```
-
-We can think of the Euler equation as a functional equation
-
-```{math}
-:label: cpi_euler_func
-
-(u'\circ \sigma)(y)
-= \beta \int (u'\circ \sigma)(f(y - \sigma(y)) z) f'(y - \sigma(y)) z \phi(dz)
-```
-
-over interior consumption policies $\sigma$, one solution of which is the optimal policy $\sigma^*$.
-
-Our aim is to solve the functional equation {eq}`cpi_euler_func` and hence obtain $\sigma^*$.
-
-### The Coleman-Reffett Operator
-
-Recall the Bellman operator
-
-```{math}
-:label: fcbell20_coleman
-
-Tv(y) := \max_{0 \leq c \leq y}
-\left\{
- u(c) + \beta \int v(f(y - c) z) \phi(dz)
-\right\}
-```
-
-Just as we introduced the Bellman operator to solve the Bellman equation, we
-will now introduce an operator over policies to help us solve the Euler
-equation.
-
-This operator $K$ will act on the set of all $\sigma \in \Sigma$
-that are continuous, strictly increasing and interior.
-
-Henceforth we denote this set of policies by $\mathscr P$
-
-1. The operator $K$ takes as its argument a $\sigma \in \mathscr P$ and
-1. returns a new function $K\sigma$, where $K\sigma(y)$ is the $c \in (0, y)$ that solves.
-
-```{math}
-:label: cpi_coledef
-
-u'(c)
-= \beta \int (u' \circ \sigma) (f(y - c) z ) f'(y - c) z \phi(dz)
-```
-
-We call this operator the **Coleman-Reffett operator** to acknowledge the work of
-{cite}`Coleman1990` and {cite}`Reffett1996`.
-
-In essence, $K\sigma$ is the consumption policy that the Euler equation tells
-you to choose today when your future consumption policy is $\sigma$.
-
-The important thing to note about $K$ is that, by
-construction, its fixed points coincide with solutions to the functional
-equation {eq}`cpi_euler_func`.
-
-In particular, the optimal policy $\sigma^*$ is a fixed point.
-
-Indeed, for fixed $y$, the value $K\sigma^*(y)$ is the $c$ that
-solves
-
-$$
-u'(c)
-= \beta \int (u' \circ \sigma^*) (f(y - c) z ) f'(y - c) z \phi(dz)
-$$
-
-In view of the Euler equation, this is exactly $\sigma^*(y)$.
-
-### Is the Coleman-Reffett Operator Well Defined?
-
-In particular, is there always a unique $c \in (0, y)$ that solves
-{eq}`cpi_coledef`?
-
-The answer is yes, under our assumptions.
-
-For any $\sigma \in \mathscr P$, the right side of {eq}`cpi_coledef`
-
-* is continuous and strictly increasing in $c$ on $(0, y)$
-* diverges to $+\infty$ as $c \uparrow y$
-
-The left side of {eq}`cpi_coledef`
-
-* is continuous and strictly decreasing in $c$ on $(0, y)$
-* diverges to $+\infty$ as $c \downarrow 0$
-
-Sketching these curves and using the information above will convince you that they cross exactly once as $c$ ranges over $(0, y)$.
-
-With a bit more analysis, one can show in addition that $K \sigma \in \mathscr P$
-whenever $\sigma \in \mathscr P$.
-
-### Comparison with VFI (Theory)
-
-It is possible to prove that there is a tight relationship between iterates of
-$K$ and iterates of the Bellman operator.
-
-Mathematically, the two operators are *topologically conjugate*.
-
-Loosely speaking, this means that if iterates of one operator converge then
-so do iterates of the other, and vice versa.
-
-Moreover, there is a sense in which they converge at the same rate, at least
-in theory.
-
-However, it turns out that the operator $K$ is more stable numerically
-and hence more efficient in the applications we consider.
-
-Examples are given below.
-
-## Implementation
-
-As in our {doc}`previous study `, we continue to assume that
-
-* $u(c) = \ln c$
-* $f(k) = k^{\alpha}$
-* $\phi$ is the distribution of $\xi := \exp(\mu + s \zeta)$ when $\zeta$ is standard normal
-
-This will allow us to compare our results to the analytical solutions
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth/cd_analytical.py
-```
-
-As discussed above, our plan is to solve the model using time iteration, which
-means iterating with the operator $K$.
-
-For this we need access to the functions $u'$ and $f, f'$.
-
-These are available in a class called `OptimalGrowthModel` that we
-constructed in an {doc}`earlier lecture `.
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth_fast/ogm.py
-```
-
-Now we implement a method called `euler_diff`, which returns
-
-```{math}
-:label: euler_diff
-
-u'(c) - \beta \int (u' \circ \sigma) (f(y - c) z ) f'(y - c) z \phi(dz)
-```
-
-```{code-cell} ipython
-@jit
-def euler_diff(c, σ, y, og):
- """
- Set up a function such that the root with respect to c,
- given y and σ, is equal to Kσ(y).
-
- """
-
- β, shocks, grid = og.β, og.shocks, og.grid
- f, f_prime, u_prime = og.f, og.f_prime, og.u_prime
-
- # First turn σ into a function via interpolation
- σ_func = lambda x: np.interp(x, grid, σ)
-
- # Now set up the function we need to find the root of.
- vals = u_prime(σ_func(f(y - c) * shocks)) * f_prime(y - c) * shocks
- return u_prime(c) - β * np.mean(vals)
-```
-
-The function `euler_diff` evaluates integrals by Monte Carlo and
-approximates functions using linear interpolation.
-
-We will use a root-finding algorithm to solve {eq}`euler_diff` for $c$ given
-state $y$ and $σ$, the current guess of the policy.
-
-Here's the operator $K$, that implements the root-finding step.
-
-```{code-cell} ipython3
-@jit
-def K(σ, og):
- """
- The Coleman-Reffett operator
-
- Here og is an instance of OptimalGrowthModel.
- """
-
- β = og.β
- f, f_prime, u_prime = og.f, og.f_prime, og.u_prime
- grid, shocks = og.grid, og.shocks
-
- σ_new = np.empty_like(σ)
- for i, y in enumerate(grid):
- # Solve for optimal c at y
- c_star = brentq(euler_diff, 1e-10, y-1e-10, args=(σ, y, og))[0]
- σ_new[i] = c_star
-
- return σ_new
-```
-
-### Testing
-
-Let's generate an instance and plot some iterates of $K$, starting from $σ(y) = y$.
-
-```{code-cell} python3
-og = OptimalGrowthModel()
-grid = og.grid
-
-n = 15
-σ = grid.copy() # Set initial condition
-
-fig, ax = plt.subplots()
-lb = r'initial condition $\sigma(y) = y$'
-ax.plot(grid, σ, color=plt.cm.jet(0), alpha=0.6, label=lb)
-
-for i in range(n):
- σ = K(σ, og)
- ax.plot(grid, σ, color=plt.cm.jet(i / n), alpha=0.6)
-
-# Update one more time and plot the last iterate in black
-σ = K(σ, og)
-ax.plot(grid, σ, color='k', alpha=0.8, label='last iterate')
-
-ax.legend()
-
-plt.show()
-```
-
-We see that the iteration process converges quickly to a limit
-that resembles the solution we obtained in {doc}`the previous lecture `.
-
-Here is a function called `solve_model_time_iter` that takes an instance of
-`OptimalGrowthModel` and returns an approximation to the optimal policy,
-using time iteration.
-
-```{code-cell} python3
-:load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py
-```
-
-Let's call it:
-
-```{code-cell} python3
-σ_init = np.copy(og.grid)
-σ = solve_model_time_iter(og, σ_init)
-```
-
-Here is a plot of the resulting policy, compared with the true policy:
-
-```{code-cell} python3
-fig, ax = plt.subplots()
-
-ax.plot(og.grid, σ, lw=2,
- alpha=0.8, label='approximate policy function')
-
-ax.plot(og.grid, σ_star(og.grid, og.α, og.β), 'k--',
- lw=2, alpha=0.8, label='true policy function')
-
-ax.legend()
-plt.show()
-```
-
-Again, the fit is excellent.
-
-The maximal absolute deviation between the two policies is
-
-```{code-cell} python3
-np.max(np.abs(σ - σ_star(og.grid, og.α, og.β)))
-```
-
-How long does it take to converge?
-
-```{code-cell} python3
-%%timeit -n 3 -r 1
-σ = solve_model_time_iter(og, σ_init, verbose=False)
-```
-
-Convergence is very fast, even compared to our {doc}`JIT-compiled value function iteration `.
-
-Overall, we find that time iteration provides a very high degree of efficiency
-and accuracy, at least for this model.
-
-## Exercises
-
-```{exercise}
-:label: cpi_ex1
-
-Solve the model with CRRA utility
-
-$$
-u(c) = \frac{c^{1 - \gamma}} {1 - \gamma}
-$$
-
-Set `γ = 1.5`.
-
-Compute and plot the optimal policy.
-```
-
-```{solution-start} cpi_ex1
-:class: dropdown
-```
-
-We use the class `OptimalGrowthModel_CRRA` from our {doc}`VFI lecture `.
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth_fast/ogm_crra.py
-```
-
-Let's create an instance:
-
-```{code-cell} python3
-og_crra = OptimalGrowthModel_CRRA()
-```
-
-Now we solve and plot the policy:
-
-```{code-cell} python3
-%%time
-σ = solve_model_time_iter(og_crra, σ_init)
-
-
-fig, ax = plt.subplots()
-
-ax.plot(og.grid, σ, lw=2,
- alpha=0.8, label='approximate policy function')
-
-ax.legend()
-plt.show()
-```
-
-```{solution-end}
-```
diff --git a/lectures/egm_policy_iter.md b/lectures/egm_policy_iter.md
deleted file mode 100644
index 2250cb520..000000000
--- a/lectures/egm_policy_iter.md
+++ /dev/null
@@ -1,253 +0,0 @@
----
-jupytext:
- text_representation:
- extension: .md
- format_name: myst
-kernelspec:
- display_name: Python 3
- language: python
- name: python3
----
-
-```{raw} jupyter
-
-```
-
-# {index}`Optimal Growth IV: The Endogenous Grid Method `
-
-```{contents} Contents
-:depth: 2
-```
-
-
-## Overview
-
-Previously, we solved the stochastic optimal growth model using
-
-1. {doc}`value function iteration `
-1. {doc}`Euler equation based time iteration `
-
-We found time iteration to be significantly more accurate and efficient.
-
-In this lecture, we'll look at a clever twist on time iteration called the **endogenous grid method** (EGM).
-
-EGM is a numerical method for implementing policy iteration invented by [Chris Carroll](https://econ.jhu.edu/directory/christopher-carroll/).
-
-The original reference is {cite}`Carroll2006`.
-
-Let's start with some standard imports:
-
-```{code-cell} ipython
-import matplotlib.pyplot as plt
-import numpy as np
-from numba import jit
-```
-
-## Key Idea
-
-Let's start by reminding ourselves of the theory and then see how the numerics fit in.
-
-### Theory
-
-Take the model set out in {doc}`the time iteration lecture `, following the same terminology and notation.
-
-The Euler equation is
-
-```{math}
-:label: egm_euler
-
-(u'\circ \sigma^*)(y)
-= \beta \int (u'\circ \sigma^*)(f(y - \sigma^*(y)) z) f'(y - \sigma^*(y)) z \phi(dz)
-```
-
-As we saw, the Coleman-Reffett operator is a nonlinear operator $K$ engineered so that $\sigma^*$ is a fixed point of $K$.
-
-It takes as its argument a continuous strictly increasing consumption policy $\sigma \in \Sigma$.
-
-It returns a new function $K \sigma$, where $(K \sigma)(y)$ is the $c \in (0, \infty)$ that solves
-
-```{math}
-:label: egm_coledef
-
-u'(c)
-= \beta \int (u' \circ \sigma) (f(y - c) z ) f'(y - c) z \phi(dz)
-```
-
-### Exogenous Grid
-
-As discussed in {doc}`the lecture on time iteration `, to implement the method on a computer, we need a numerical approximation.
-
-In particular, we represent a policy function by a set of values on a finite grid.
-
-The function itself is reconstructed from this representation when necessary, using interpolation or some other method.
-
-{doc}`Previously `, to obtain a finite representation of an updated consumption policy, we
-
-* fixed a grid of income points $\{y_i\}$
-* calculated the consumption value $c_i$ corresponding to each
- $y_i$ using {eq}`egm_coledef` and a root-finding routine
-
-Each $c_i$ is then interpreted as the value of the function $K \sigma$ at $y_i$.
-
-Thus, with the points $\{y_i, c_i\}$ in hand, we can reconstruct $K \sigma$ via approximation.
-
-Iteration then continues...
-
-### Endogenous Grid
-
-The method discussed above requires a root-finding routine to find the
-$c_i$ corresponding to a given income value $y_i$.
-
-Root-finding is costly because it typically involves a significant number of
-function evaluations.
-
-As pointed out by Carroll {cite}`Carroll2006`, we can avoid this if
-$y_i$ is chosen endogenously.
-
-The only assumption required is that $u'$ is invertible on $(0, \infty)$.
-
-Let $(u')^{-1}$ be the inverse function of $u'$.
-
-The idea is this:
-
-* First, we fix an *exogenous* grid $\{k_i\}$ for capital ($k = y - c$).
-* Then we obtain $c_i$ via
-
-```{math}
-:label: egm_getc
-
-c_i =
-(u')^{-1}
-\left\{
- \beta \int (u' \circ \sigma) (f(k_i) z ) \, f'(k_i) \, z \, \phi(dz)
-\right\}
-```
-
-* Finally, for each $c_i$ we set $y_i = c_i + k_i$.
-
-It is clear that each $(y_i, c_i)$ pair constructed in this manner satisfies {eq}`egm_coledef`.
-
-With the points $\{y_i, c_i\}$ in hand, we can reconstruct $K \sigma$ via approximation as before.
-
-The name EGM comes from the fact that the grid $\{y_i\}$ is determined **endogenously**.
-
-## Implementation
-
-As {doc}`before `, we will start with a simple setting
-where
-
-* $u(c) = \ln c$,
-* production is Cobb-Douglas, and
-* the shocks are lognormal.
-
-This will allow us to make comparisons with the analytical solutions
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth/cd_analytical.py
-```
-
-We reuse the `OptimalGrowthModel` class
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth_fast/ogm.py
-```
-
-### The Operator
-
-Here's an implementation of $K$ using EGM as described above.
-
-```{code-cell} python3
-@jit
-def K(σ_array, og):
- """
- The Coleman-Reffett operator using EGM
-
- """
-
- # Simplify names
- f, β = og.f, og.β
- f_prime, u_prime = og.f_prime, og.u_prime
- u_prime_inv = og.u_prime_inv
- grid, shocks = og.grid, og.shocks
-
- # Determine endogenous grid
- y = grid + σ_array # y_i = k_i + c_i
-
- # Linear interpolation of policy using endogenous grid
- σ = lambda x: np.interp(x, y, σ_array)
-
- # Allocate memory for new consumption array
- c = np.empty_like(grid)
-
- # Solve for updated consumption value
- for i, k in enumerate(grid):
- vals = u_prime(σ(f(k) * shocks)) * f_prime(k) * shocks
- c[i] = u_prime_inv(β * np.mean(vals))
-
- return c
-```
-
-Note the lack of any root-finding algorithm.
-
-### Testing
-
-First we create an instance.
-
-```{code-cell} python3
-og = OptimalGrowthModel()
-grid = og.grid
-```
-
-Here's our solver routine:
-
-```{code-cell} python3
-:load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py
-```
-
-Let's call it:
-
-```{code-cell} python3
-σ_init = np.copy(grid)
-σ = solve_model_time_iter(og, σ_init)
-```
-
-Here is a plot of the resulting policy, compared with the true policy:
-
-```{code-cell} python3
-y = grid + σ # y_i = k_i + c_i
-
-fig, ax = plt.subplots()
-
-ax.plot(y, σ, lw=2,
- alpha=0.8, label='approximate policy function')
-
-ax.plot(y, σ_star(y, og.α, og.β), 'k--',
- lw=2, alpha=0.8, label='true policy function')
-
-ax.legend()
-plt.show()
-```
-
-The maximal absolute deviation between the two policies is
-
-```{code-cell} python3
-np.max(np.abs(σ - σ_star(y, og.α, og.β)))
-```
-
-How long does it take to converge?
-
-```{code-cell} python3
-%%timeit -n 3 -r 1
-σ = solve_model_time_iter(og, σ_init, verbose=False)
-```
-
-Relative to time iteration, which as already found to be highly efficient, EGM
-has managed to shave off still more run time without compromising accuracy.
-
-This is due to the lack of a numerical root-finding step.
-
-We can now solve the optimal growth model at given parameters extremely fast.
diff --git a/lectures/ifp.md b/lectures/ifp.md
index e67865b2e..5054735b2 100644
--- a/lectures/ifp.md
+++ b/lectures/ifp.md
@@ -42,7 +42,7 @@ This is an essential sub-problem for many representative macroeconomic models
* {cite}`Huggett1993`
* etc.
-It is related to the decision problem in the {doc}`stochastic optimal growth model ` and yet differs in important ways.
+It is related to the decision problem in the {doc}`cake eating model ` and yet differs in important ways.
For example, the choice problem for the agent includes an additive income term that leads to an occasionally binding constraint.
@@ -50,8 +50,8 @@ Moreover, in this and the following lectures, we will inject more realistic
features such as correlated shocks.
To solve the model we will use Euler equation based time iteration, which proved
-to be {doc}`fast and accurate ` in our investigation of
-the {doc}`stochastic optimal growth model `.
+to be {doc}`fast and accurate ` in our investigation of
+the {doc}`cake eating model `.
Time iteration is globally convergent under mild assumptions, even when utility is unbounded (both above and below).
@@ -472,7 +472,31 @@ The following function iterates to convergence and returns the approximate
optimal policy.
```{code-cell} python3
-:load: _static/lecture_specific/coleman_policy_iter/solve_time_iter.py
+def solve_model_time_iter(model, # Class with model information
+ σ, # Initial condition
+ tol=1e-4,
+ max_iter=1000,
+ verbose=True,
+ print_skip=25):
+
+ # Set up loop
+ i = 0
+ error = tol + 1
+
+ while i < max_iter and error > tol:
+ σ_new = K(σ, model)
+ error = np.max(np.abs(σ - σ_new))
+ i += 1
+ if verbose and i % print_skip == 0:
+ print(f"Error at iteration {i} is {error}.")
+ σ = σ_new
+
+ if error > tol:
+ print("Failed to converge!")
+ elif verbose:
+ print(f"\nConverged in {i} iterations.")
+
+ return σ_new
```
Let's carry this out using the default parameters of the `IFP` class:
@@ -516,7 +540,14 @@ We know that, in this case, the value function and optimal consumption policy
are given by
```{code-cell} python3
-:load: _static/lecture_specific/cake_eating_numerical/analytical.py
+def c_star(x, β, γ):
+    "True optimal consumption policy for the cake eating problem."
+    return (1 - β ** (1/γ)) * x
+
+
+def v_star(x, β, γ):
+    "True value function for the cake eating problem."
+    return (1 - β**(1 / γ))**(-γ) * (x**(1-γ) / (1-γ))
```
Let's see if we match up:
diff --git a/lectures/ifp_advanced.md b/lectures/ifp_advanced.md
index 6c8757388..ffa9130ff 100644
--- a/lectures/ifp_advanced.md
+++ b/lectures/ifp_advanced.md
@@ -251,7 +251,7 @@ convergence (as measured by the distance $\rho$).
### Using an Endogenous Grid
In the study of that model we found that it was possible to further
-accelerate time iteration via the {doc}`endogenous grid method `.
+accelerate time iteration via the {doc}`endogenous grid method `.
We will use the same method here.
diff --git a/lectures/lqcontrol.md b/lectures/lqcontrol.md
index f04dff5c4..a831fde5f 100644
--- a/lectures/lqcontrol.md
+++ b/lectures/lqcontrol.md
@@ -57,7 +57,7 @@ In reading what follows, it will be useful to have some familiarity with
* matrix manipulations
* vectors of random variables
-* dynamic programming and the Bellman equation (see for example {doc}`this lecture ` and {doc}`this lecture `)
+* dynamic programming and the Bellman equation (see for example {doc}`this lecture ` and {doc}`this lecture `)
For additional reading on LQ control, see, for example,
diff --git a/lectures/optgrowth_fast.md b/lectures/optgrowth_fast.md
deleted file mode 100644
index c63bf9038..000000000
--- a/lectures/optgrowth_fast.md
+++ /dev/null
@@ -1,404 +0,0 @@
----
-jupytext:
- text_representation:
- extension: .md
- format_name: myst
-kernelspec:
- display_name: Python 3
- language: python
- name: python3
----
-
-(optgrowth)=
-```{raw} jupyter
-
-```
-
-# {index}`Optimal Growth II: Accelerating the Code with Numba `
-
-```{contents} Contents
-:depth: 2
-```
-
-In addition to what's in Anaconda, this lecture will need the following libraries:
-
-```{code-cell} ipython
----
-tags: [hide-output]
----
-!pip install quantecon
-```
-
-## Overview
-
-{doc}`Previously `, we studied a stochastic optimal
-growth model with one representative agent.
-
-We solved the model using dynamic programming.
-
-In writing our code, we focused on clarity and flexibility.
-
-These are important, but there's often a trade-off between flexibility and
-speed.
-
-The reason is that, when code is less flexible, we can exploit structure more
-easily.
-
-(This is true about algorithms and mathematical problems more generally:
-more specific problems have more structure, which, with some thought, can be
-exploited for better results.)
-
-So, in this lecture, we are going to accept less flexibility while gaining
-speed, using just-in-time (JIT) compilation to
-accelerate our code.
-
-Let's start with some imports:
-
-```{code-cell} ipython
-import matplotlib.pyplot as plt
-import numpy as np
-from numba import jit, jit
-from quantecon.optimize.scalar_maximization import brent_max
-```
-
-The function `brent_max` is also designed for embedding in JIT-compiled code.
-
-These are alternatives to similar functions in SciPy (which, unfortunately, are not JIT-aware).
-
-## The Model
-
-```{index} single: Optimal Growth; Model
-```
-
-The model is the same as discussed in our {doc}`previous lecture `
-on optimal growth.
-
-We will start with log utility:
-
-$$
-u(c) = \ln(c)
-$$
-
-We continue to assume that
-
-* $f(k) = k^{\alpha}$
-* $\phi$ is the distribution of $\xi := \exp(\mu + s \zeta)$ when $\zeta$ is standard normal
-
-We will once again use value function iteration to solve the model.
-
-In particular, the algorithm is unchanged, and the only difference is in the implementation itself.
-
-As before, we will be able to compare with the true solutions
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth/cd_analytical.py
-```
-
-## Computation
-
-```{index} single: Dynamic Programming; Computation
-```
-
-We will again store the primitives of the optimal growth model in a class.
-
-But now we are going to use [Numba's](https://python-programming.quantecon.org/numba.html) `@jitclass` decorator to target our class for JIT compilation.
-
-Because we are going to use Numba to compile our class, we need to specify the data types.
-
-You will see this as a list called `opt_growth_data` above our class.
-
-Unlike in the {doc}`previous lecture `, we
-hardwire the production and utility specifications into the
-class.
-
-This is where we sacrifice flexibility in order to gain more speed.
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth_fast/ogm.py
-```
-
-The class includes some methods such as `u_prime` that we do not need now
-but will use in later lectures.
-
-### The Bellman Operator
-
-We will use JIT compilation to accelerate the Bellman operator.
-
-First, here's a function that returns the value of a particular consumption choice `c`, given state `y`, as per the Bellman equation {eq}`fpb30`.
-
-```{code-cell} python3
-@jit
-def state_action_value(c, y, v_array, og):
- """
- Right hand side of the Bellman equation.
-
- * c is consumption
- * y is income
- * og is an instance of OptimalGrowthModel
- * v_array represents a guess of the value function on the grid
-
- """
-
- u, f, β, shocks = og.u, og.f, og.β, og.shocks
-
- v = lambda x: np.interp(x, og.grid, v_array)
-
- return u(c) + β * np.mean(v(f(y - c) * shocks))
-```
-
-Now we can implement the Bellman operator, which maximizes the right hand side
-of the Bellman equation:
-
-```{code-cell} python3
-@jit
-def T(v, og):
- """
- The Bellman operator.
-
- * og is an instance of OptimalGrowthModel
- * v is an array representing a guess of the value function
-
- """
-
- v_new = np.empty_like(v)
- v_greedy = np.empty_like(v)
-
- for i in range(len(og.grid)):
- y = og.grid[i]
-
- # Maximize RHS of Bellman equation at state y
- result = brent_max(state_action_value, 1e-10, y, args=(y, v, og))
- v_greedy[i], v_new[i] = result[0], result[1]
-
- return v_greedy, v_new
-```
-
-We use the `solve_model` function to perform iteration until convergence.
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth/solve_model.py
-```
-
-Let's compute the approximate solution at the default parameters.
-
-First we create an instance:
-
-```{code-cell} python3
-og = OptimalGrowthModel()
-```
-
-Now we call `solve_model`, using the `%%time` magic to check how long it
-takes.
-
-```{code-cell} python3
-%%time
-v_greedy, v_solution = solve_model(og)
-```
-
-You will notice that this is *much* faster than our {doc}`original implementation `.
-
-Here is a plot of the resulting policy, compared with the true policy:
-
-```{code-cell} python3
-fig, ax = plt.subplots()
-
-ax.plot(og.grid, v_greedy, lw=2,
- alpha=0.8, label='approximate policy function')
-
-ax.plot(og.grid, σ_star(og.grid, og.α, og.β), 'k--',
- lw=2, alpha=0.8, label='true policy function')
-
-ax.legend()
-plt.show()
-```
-
-Again, the fit is excellent --- this is as expected since we have not changed
-the algorithm.
-
-The maximal absolute deviation between the two policies is
-
-```{code-cell} python3
-np.max(np.abs(v_greedy - σ_star(og.grid, og.α, og.β)))
-```
-
-## Exercises
-
-```{exercise}
-:label: ogfast_ex1
-
-Time how long it takes to iterate with the Bellman operator
-20 times, starting from initial condition $v(y) = u(y)$.
-
-Use the default parameterization.
-```
-
-```{solution-start} ogfast_ex1
-:class: dropdown
-```
-
-Let's set up the initial condition.
-
-```{code-cell} ipython3
-v = og.u(og.grid)
-```
-
-Here's the timing:
-
-```{code-cell} ipython3
-%%time
-
-for i in range(20):
- v_greedy, v_new = T(v, og)
- v = v_new
-```
-
-Compared with our {ref}`timing ` for the non-compiled version of
-value function iteration, the JIT-compiled code is usually an order of magnitude faster.
-
-```{solution-end}
-```
-
-```{exercise}
-:label: ogfast_ex2
-
-Modify the optimal growth model to use the CRRA utility specification.
-
-$$
-u(c) = \frac{c^{1 - \gamma} } {1 - \gamma}
-$$
-
-Set `γ = 1.5` as the default value and maintaining other specifications.
-
-(Note that `jitclass` currently does not support inheritance, so you will
-have to copy the class and change the relevant parameters and methods.)
-
-Compute an estimate of the optimal policy, plot it and compare visually with
-the same plot from the {ref}`analogous exercise ` in the first optimal
-growth lecture.
-
-Compare execution time as well.
-```
-
-
-```{solution-start} ogfast_ex2
-:class: dropdown
-```
-
-Here's our CRRA version of `OptimalGrowthModel`:
-
-```{code-cell} python3
-:load: _static/lecture_specific/optgrowth_fast/ogm_crra.py
-```
-
-Let's create an instance:
-
-```{code-cell} python3
-og_crra = OptimalGrowthModel_CRRA()
-```
-
-Now we call `solve_model`, using the `%%time` magic to check how long it
-takes.
-
-```{code-cell} python3
-%%time
-v_greedy, v_solution = solve_model(og_crra)
-```
-
-Here is a plot of the resulting policy:
-
-```{code-cell} python3
-fig, ax = plt.subplots()
-
-ax.plot(og.grid, v_greedy, lw=2,
- alpha=0.6, label='Approximate value function')
-
-ax.legend(loc='lower right')
-plt.show()
-```
-
-This matches the solution that we obtained in our non-jitted code,
-{ref}`in the exercises `.
-
-Execution time is an order of magnitude faster.
-
-```{solution-end}
-```
-
-
-```{exercise-start}
-:label: ogfast_ex3
-```
-
-In this exercise we return to the original log utility specification.
-
-Once an optimal consumption policy $\sigma$ is given, income follows
-
-$$
-y_{t+1} = f(y_t - \sigma(y_t)) \xi_{t+1}
-$$
-
-The next figure shows a simulation of 100 elements of this sequence for three
-different discount factors (and hence three different policies).
-
-```{image} /_static/lecture_specific/optgrowth/solution_og_ex2.png
-:align: center
-```
-
-In each sequence, the initial condition is $y_0 = 0.1$.
-
-The discount factors are `discount_factors = (0.8, 0.9, 0.98)`.
-
-We have also dialed down the shocks a bit with `s = 0.05`.
-
-Otherwise, the parameters and primitives are the same as the log-linear model discussed earlier in the lecture.
-
-Notice that more patient agents typically have higher wealth.
-
-Replicate the figure modulo randomness.
-
-```{exercise-end}
-```
-
-```{solution-start} ogfast_ex3
-:class: dropdown
-```
-
-Here's one solution:
-
-```{code-cell} python3
-def simulate_og(σ_func, og, y0=0.1, ts_length=100):
- '''
- Compute a time series given consumption policy σ.
- '''
- y = np.empty(ts_length)
- ξ = np.random.randn(ts_length-1)
- y[0] = y0
- for t in range(ts_length-1):
- y[t+1] = (y[t] - σ_func(y[t]))**og.α * np.exp(og.μ + og.s * ξ[t])
- return y
-```
-
-```{code-cell} python3
-fig, ax = plt.subplots()
-
-for β in (0.8, 0.9, 0.98):
-
- og = OptimalGrowthModel(β=β, s=0.05)
-
- v_greedy, v_solution = solve_model(og, verbose=False)
-
- # Define an optimal policy function
- σ_func = lambda x: np.interp(x, og.grid, v_greedy)
- y = simulate_og(σ_func, og)
- ax.plot(y, lw=2, alpha=0.6, label=rf'$\beta = {β}$')
-
-ax.legend(loc='lower right')
-plt.show()
-```
-
-```{solution-end}
-```
diff --git a/lectures/wald_friedman_2.md b/lectures/wald_friedman_2.md
index ec7937cac..729945646 100644
--- a/lectures/wald_friedman_2.md
+++ b/lectures/wald_friedman_2.md
@@ -451,7 +451,7 @@ class WaldFriedman:
return π_new
```
-As in the {doc}`optimal growth lecture `, to approximate a continuous value function
+As in {doc}`cake_eating_stochastic`, to approximate a continuous value function
* We iterate at a finite grid of possible values of $\pi$.
* When we evaluate $\mathbb E[J(\pi')]$ between grid points, we use linear interpolation.