
Commit 2b6576f

explicitly adding convex constraints
1 parent 2614d4a commit 2b6576f


docs/solver/background.md

Lines changed: 21 additions & 1 deletion
@@ -35,13 +35,25 @@ The indicator function says simply that there is infinite additional cost when $

We modify the generic optimization problem to include the indicator function by adding it to the cost. We introduce a new state variable $z$, called the slack variable, to describe the constrained version of the original state variable $x$, which we will now call the primal variable.

Since the state and input constraint sets $\mathcal{X}$ and $\mathcal{U}$ are both convex, ADMM naturally decomposes the problem: the slack updates project the primal variables $x$ and $u$ onto their respective constraint sets, as formalized below. These projections enforce constraint satisfaction and accelerate convergence by exploiting the separability of the constraint structure. ADMM thus reduces the original constrained optimization problem to an alternation between smaller subproblems in the primal and slack variables.
<!-- $$
\begin{alignat}{2}
\min_x & \quad f(x) + I_\mathcal{C}(z) \\
\text{subject to} & \quad x = z.
\end{alignat}
$$ -->

$$
\begin{alignat}{2}
\min_{x, u} & \quad f(x, u) + I_\mathcal{X}(z_x) + I_\mathcal{U}(z_u) \\
\text{subject to} & \quad x = z_x, \quad u = z_u.
\end{alignat}
$$
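
Concretely, the slack updates realize the indicator terms through Euclidean projection: the projection of a point $v$ onto a convex set $\mathcal{C}$ is

$$
\Pi_\mathcal{C}(v) = \arg\min_{w \in \mathcal{C}} \lVert w - v \rVert_2,
$$

and because $\mathcal{C}$ is convex this minimizer is unique. The slack variables $z_x$ and $z_u$ are obtained by applying $\Pi_\mathcal{X}$ and $\Pi_\mathcal{U}$ during each slack update.
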
At minimum cost, the primal variable $x$ must equal the slack variable $z$, but during each solve they will not necessarily be equal. The slack variable $z$ manifests in the algorithm as the version of the primal variable $x$ that has been projected onto the feasible set $\mathcal{C}$: whenever $x$ violates any constraint, the slack variable at that iteration is projected back onto $\mathcal{C}$ and therefore differs from $x$. To push the primal variable $x$ back toward the feasible set $\mathcal{C}$, we introduce a third variable, $\lambda$, called the dual variable. This method is referred to as the [augmented Lagrangian](https://en.wikipedia.org/wiki/Augmented_Lagrangian_method){:target="_blank"} method (originally named the method of multipliers), and it introduces a scalar penalty parameter $\rho$ alongside the dual variable $\lambda$ (also known as a Lagrange multiplier). The penalty parameter $\rho$ is the augmentation to what would otherwise just be the Lagrangian of the constrained optimization problem above. Together, $\lambda$ and $\rho$ force $x$ closer to $z$ by increasing the cost of the augmented Lagrangian the more $x$ and $z$ differ.
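
To make the roles of $x$, $z$, $\lambda$, and $\rho$ concrete, here is a minimal NumPy sketch of the ADMM iteration these variables define, for a small quadratic cost with box constraints; the problem data, bounds, penalty value, and iteration count are all hypothetical, not taken from this solver:

```python
import numpy as np

# Hypothetical problem data: minimize 0.5 x^T P x + q^T x  subject to  lo <= x <= hi.
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
lo = np.array([0.0, 0.0])
hi = np.array([0.3, 0.7])

rho = 1.0              # penalty parameter of the augmented Lagrangian
x = np.zeros(2)        # primal variable
z = np.zeros(2)        # slack variable
lam = np.zeros(2)      # dual variable (Lagrange multiplier)

for _ in range(200):
    # Primal update: minimize f(x) + lam^T (x - z) + (rho/2)||x - z||^2 over x.
    # For a quadratic f this reduces to the linear solve (P + rho I) x = rho z - lam - q.
    x = np.linalg.solve(P + rho * np.eye(2), rho * z - lam - q)
    # Slack update: project the shifted primal iterate onto the feasible set C.
    z = np.clip(x + lam / rho, lo, hi)
    # Dual update: raise the price on the constraint violation x - z.
    lam += rho * (x - z)

print(x, z)  # x and z converge together onto the constrained minimizer
```
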
$$
@@ -77,6 +89,14 @@ $$
\text{subject to: } x_{k+1} = Ax_k + Bu_k \quad \forall k \in [1,N)
$$

In addition to the dynamics constraints, the optimization problem also includes convex state and input constraints:

$$
x_k \in \mathcal{X}, \quad u_k \in \mathcal{U} \quad \forall k \in [1,N)
$$

where $\mathcal{X}$ and $\mathcal{U}$ are convex sets representing the feasible state and input regions, respectively. These constraints ensure that both the states and the control inputs remain feasible at every time step.
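
A common concrete choice for these sets is box constraints, for example (bounds hypothetical)

$$
\mathcal{X} = \{ x : x_{\min} \le x \le x_{\max} \}, \qquad
\mathcal{U} = \{ u : u_{\min} \le u \le u_{\max} \},
$$

for which the projection in the slack update reduces to elementwise clipping of each iterate to its bounds.
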

When we apply ADMM to this problem, the primal update becomes an equality-constrained quadratic program with modified cost matrices:

$$
