Augmented Lagrangian method

Manopt.augmented_Lagrangian_method (Function)
augmented_Lagrangian_method(M, f, grad_f, p=rand(M); kwargs...)
augmented_Lagrangian_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)

perform the augmented Lagrangian method (ALM) [LB19].

The aim of the ALM is to find the solution of the constrained optimization task

\[\begin{aligned} \min_{p ∈\mathcal{M}} &f(p)\\ \text{subject to } &g_i(p)\leq 0 \quad \text{ for } i= 1, …, m,\\ \quad &h_j(p)=0 \quad \text{ for } j=1,…,n, \end{aligned}\]

where $\mathcal{M}$ is a Riemannian manifold, and $f$, $\{g_i\}_{i=1}^m$ and $\{h_j\}_{j=1}^n$ are twice continuously differentiable functions from $\mathcal{M}$ to ℝ. In every step $k$ of the algorithm, the AugmentedLagrangianCost $\mathcal{L}_{ρ^{(k-1)}}(p, μ^{(k-1)}, λ^{(k-1)})$ is minimized on $\mathcal{M}$, where $μ^{(k-1)} ∈ ℝ^m$ and $λ^{(k-1)} ∈ ℝ^n$ are the current iterates of the Lagrange multipliers and $ρ^{(k-1)}$ is the current penalty parameter.

The Lagrange multipliers are then updated by

\[λ_j^{(k)} =\operatorname{clip}_{[λ_{\min},λ_{\max}]} \bigl(λ_j^{(k-1)} + ρ^{(k-1)} h_j(p^{(k)})\bigr) \text{ for all } j=1,…,n,\]

and

\[μ_i^{(k)} =\operatorname{clip}_{[0,μ_{\max}]} (μ_i^{(k-1)} + ρ^{(k-1)} g_i(p^{(k)})) \text{ for all } i=1,…,m,\]

where $λ_{\min} \leq λ_{\max}$ and $μ_{\max} \geq 0$ are the bounds on the Lagrange multipliers.
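Both updates are elementwise clamps. As a minimal standalone sketch in Julia (all names here are hypothetical; Manopt performs this update internally):

# h_val = h(p^{(k)}) and g_val = g(p^{(k)}) are plain vectors, ρ is the current penalty parameter
update_λ(λ, ρ, h_val; λ_min=-20.0, λ_max=20.0) = clamp.(λ .+ ρ .* h_val, λ_min, λ_max)
update_μ(μ, ρ, g_val; μ_max=20.0) = clamp.(μ .+ ρ .* g_val, 0.0, μ_max)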

Next, the accuracy tolerance $ϵ$ is updated as

\[ϵ^{(k)}=\max\{ϵ_{\min}, θ_ϵ ϵ^{(k-1)}\},\]

where $ϵ_{\min}$ is the lowest value $ϵ$ is allowed to become and $θ_ϵ ∈ (0,1)$ is a constant scaling factor.

Last, the penalty parameter $ρ$ is updated as follows: with

\[σ^{(k)}=\max_{i=1,…,m,\ j=1,…,n} \Bigl\{ \|h_j(p^{(k)})\|,\ \Bigl\| \max\Bigl\{ g_i(p^{(k)}), -\frac{μ_i^{(k-1)}}{ρ^{(k-1)}} \Bigr\} \Bigr\| \Bigr\}.\]

ρ is updated as

\[ρ^{(k)} = \begin{cases} ρ^{(k-1)}/θ_ρ, & \text{if } σ^{(k)} > τ σ^{(k-1)},\\ ρ^{(k-1)}, & \text{else,} \end{cases}\]

where $θ_ρ ∈ (0,1)$ is a constant scaling factor and $τ ∈ (0,1)$ is the required improvement factor: the penalty is increased whenever the constraint violation measure $σ$ did not shrink by at least the factor $τ$.
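The tolerance and penalty updates are equally compact. A sketch mirroring the three formulas above, again with hypothetical standalone names rather than Manopt's internals:

# tighten the accuracy tolerance, but never below ϵ_min
update_ϵ(ϵ, ϵ_min, θ_ϵ) = max(ϵ_min, θ_ϵ * ϵ)

# constraint violation measure σ^{(k)} from the current constraint values
compute_σ(h_val, g_val, μ, ρ) = maximum(vcat(abs.(h_val), abs.(max.(g_val, -μ ./ ρ))))

# increase the penalty only if σ did not shrink by at least the factor τ
update_ρ(ρ, σ, σ_prev; τ=0.8, θ_ρ=0.3) = σ > τ * σ_prev ? ρ / θ_ρ : ρ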

Input

  • M a manifold $\mathcal M$
  • f a cost function $f:\mathcal M→ℝ$ to minimize
  • grad_f the gradient of the cost function

Optional (if not called with the ConstrainedManifoldObjective cmo)

  • g: (nothing) the inequality constraints
  • h: (nothing) the equality constraints
  • grad_g: (nothing) the gradient of the inequality constraints
  • grad_h: (nothing) the gradient of the equality constraints

Note that one of the pairs (g, grad_g) or (h, grad_h) has to be provided; otherwise the problem is not constrained and a better solver would be, for example, quasi_Newton.
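For illustration, a small constrained call: minimizing a Rayleigh quotient on a sphere subject to nonnegativity (inequality) constraints. The problem instance (the matrix A and the constraint choice) is purely illustrative and not part of the API:

using Manopt, Manifolds, LinearAlgebra, Random

Random.seed!(42)
n = 8
M = Sphere(n - 1)
A = Symmetric(randn(n, n))

f(M, p) = p' * A * p                     # cost: Rayleigh quotient
grad_f(M, p) = project(M, p, 2 * A * p)  # Riemannian gradient via projection of the Euclidean one
g(M, p) = -p                             # inequality constraints g_i(p) = -p_i ≤ 0
# one (projected) gradient per constraint, returned as a vector of tangent vectors
grad_g(M, p) = [project(M, p, [i == j ? -1.0 : 0.0 for j in 1:n]) for i in 1:n]

q = augmented_Lagrangian_method(M, f, grad_f, rand(M); g=g, grad_g=grad_g)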

Optional

  • ϵ: (1e-3) the accuracy tolerance
  • ϵ_min: (1e-6) the lower bound for the accuracy tolerance
  • ϵ_exponent: (1/100) exponent of the ϵ update factor; its reciprocal is the number of iterations until maximal accuracy ϵ_min is reached, so the algorithm can end naturally
  • θ_ϵ: ((ϵ_min / ϵ)^(ϵ_exponent)) the scaling factor of the accuracy tolerance ϵ
  • μ: (ones(size(g(M, p), 1))) the Lagrange multiplier with respect to the inequality constraints
  • μ_max: (20.0) an upper bound for the Lagrange multiplier belonging to the inequality constraints
  • λ: (ones(size(h(M, p), 1))) the Lagrange multiplier with respect to the equality constraints
  • λ_max: (20.0) an upper bound for the Lagrange multiplier belonging to the equality constraints
  • λ_min: (- λ_max) a lower bound for the Lagrange multiplier belonging to the equality constraints
  • τ: (0.8) the required improvement factor for the constraint violation measure σ; if an iteration does not achieve it, the penalty parameter is increased
  • ρ: (1.0) the penalty parameter
  • θ_ρ: (0.3) the scaling factor of the penalty parameter
  • sub_cost: (AugmentedLagrangianCost(cmo, ρ, μ, λ)) the cost of the sub problem, by default the augmented Lagrangian with the same parameters ρ, μ, λ as in this solver's state
  • sub_grad: (AugmentedLagrangianGrad(cmo, ρ, μ, λ)) the gradient of the sub problem, by default the gradient of the augmented Lagrangian with the same parameters ρ, μ, λ
  • sub_kwargs: keyword arguments to decorate the sub options, for example the debug= keyword.
  • sub_stopping_criterion: (StopAfterIteration(200) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-8)) specify a stopping criterion for the subsolver.
  • sub_problem: (DefaultManoptProblem(M, ConstrainedManifoldObjective(sub_cost, sub_grad; evaluation=evaluation))) the problem for the subsolver
  • sub_state: (QuasiNewtonState) using QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS and sub_stopping_criterion as a stopping criterion. See also sub_kwargs.
  • stopping_criterion: (StopAfterIteration(300) | (StopWhenSmallerOrEqual(ϵ, ϵ_min) & StopWhenChangeLess(1e-10))) a functor inheriting from StoppingCriterion indicating when to stop.
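Several of these keywords compose naturally. Continuing the illustrative example from above, with parameter values chosen only for demonstration:

q = augmented_Lagrangian_method(M, f, grad_f, rand(M);
    g=g, grad_g=grad_g,
    ϵ=1e-4, ϵ_min=1e-7,  # tighter accuracy schedule
    ρ=2.0,               # start with a larger penalty
    sub_stopping_criterion=StopAfterIteration(150) | StopWhenGradientNormLess(1e-4),
)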

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details


State

Manopt.AugmentedLagrangianMethodState (Type)
AugmentedLagrangianMethodState{P,T} <: AbstractManoptSolverState

Describes the augmented Lagrangian method, with

Fields

A default value is given in brackets if a parameter can be left out in the initialization.

  • p: a point on a manifold as starting point and current iterate
  • sub_problem: an AbstractManoptProblem problem for the subsolver
  • sub_state: an AbstractManoptSolverState for the subsolver
  • ϵ: (1e-3) the accuracy tolerance
  • ϵ_min: (1e-6) the lower bound for the accuracy tolerance
  • λ: (ones(length(get_equality_constraints(p, x)))) the Lagrange multiplier with respect to the equality constraints
  • λ_max: (20.0) an upper bound for the Lagrange multiplier belonging to the equality constraints
  • λ_min: (- λ_max) a lower bound for the Lagrange multiplier belonging to the equality constraints
  • μ: (ones(length(get_inequality_constraints(p, x)))) the Lagrange multiplier with respect to the inequality constraints
  • μ_max: (20.0) an upper bound for the Lagrange multiplier belonging to the inequality constraints
  • ρ: (1.0) the penalty parameter
  • τ: (0.8) the required improvement factor for the constraint violation measure σ; if an iteration does not achieve it, the penalty parameter is increased
  • θ_ρ: (0.3) the scaling factor of the penalty parameter
  • θ_ϵ: ((ϵ_min/ϵ)^(ϵ_exponent)) the scaling factor of the accuracy tolerance
  • penalty: evaluation of the current penalty term, initialized to Inf.
  • stopping_criterion: (StopAfterIteration(300) | (StopWhenSmallerOrEqual(ϵ, ϵ_min) & StopWhenChangeLess(1e-10))) a functor inheriting from StoppingCriterion indicating when to stop.

Constructor

AugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective, p; kwargs...)

construct the augmented Lagrangian method state with the fields and defaults as stated above, where the manifold M and the ConstrainedManifoldObjective co can be helpful for manifold- or objective-specific defaults.
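As a sketch, assuming a manifold M and a ConstrainedManifoldObjective cmo are already set up, and assuming the keyword names follow the fields above (usually augmented_Lagrangian_method creates and manages this state for you):

# hypothetical direct construction with a custom penalty and tolerance
s = AugmentedLagrangianMethodState(M, cmo, rand(M); ρ=2.0, ϵ=1e-4)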

See also

augmented_Lagrangian_method


Helping functions

Manopt.AugmentedLagrangianCost (Type)
AugmentedLagrangianCost{CO,R,T}

Stores the parameters $ρ ∈ ℝ$, $μ ∈ ℝ^m$, $λ ∈ ℝ^n$ of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.

This struct is also a functor (M, p) -> v that can be used as a cost function within a solver. Based on the internal ConstrainedManifoldObjective it computes

\[\mathcal L_ρ(p, μ, λ) = f(p) + \frac{ρ}{2} \biggl( \sum_{j=1}^n \Bigl( h_j(p) + \frac{λ_j}{ρ} \Bigr)^2 + \sum_{i=1}^m \max\Bigl\{ 0, \frac{μ_i}{ρ} + g_i(p) \Bigr\}^2 \biggr)\]
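To make the formula concrete, here is a direct evaluation sketch from plain functions f, g, and h; the actual AugmentedLagrangianCost computes the same value through its stored ConstrainedManifoldObjective, so this helper is hypothetical:

# evaluate 𝓛_ρ(p, μ, λ) directly from plain constraint functions (a sketch)
function alm_cost(M, p; f, g, h, ρ, μ, λ)
    eq = h(M, p)    # equality constraint values h_j(p)
    ineq = g(M, p)  # inequality constraint values g_i(p)
    return f(M, p) + (ρ / 2) * (
        sum((eq .+ λ ./ ρ) .^ 2) + sum(max.(0.0, μ ./ ρ .+ ineq) .^ 2)
    )
end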

Fields

  • co::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where $R$ should be the number type used and $T$ the vector type.

Constructor

AugmentedLagrangianCost(co, ρ, μ, λ)

Manopt.AugmentedLagrangianGrad (Type)
AugmentedLagrangianGrad{CO,R,T}

Stores the parameters $ρ ∈ ℝ$, $μ ∈ ℝ^m$, $λ ∈ ℝ^n$ of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.

This struct is also a functor in both formats

  • (M, p) -> X to compute the gradient in an allocating fashion.
  • (M, X, p) -> X to compute the gradient in an in-place fashion, overwriting X.

based on the internal ConstrainedManifoldObjective and computes the gradient $\operatorname{grad} \mathcal L_{ρ}(p, μ, λ)$, see also AugmentedLagrangianCost.
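For illustration, both call formats side by side, assuming cmo, multipliers μ and λ, a point p, and a tangent vector X are already set up:

grad = AugmentedLagrangianGrad(cmo, 1.0, μ, λ)
X = grad(M, p)  # allocating: returns a new tangent vector
grad(M, X, p)   # in-place: overwrites X with the gradient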

Fields

  • co::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where $R$ should be the number type used and $T$ the vector type.

Constructor

AugmentedLagrangianGrad(co, ρ, μ, λ)

Technical details

The augmented_Lagrangian_method solver requires the following functions of a manifold to be available:

  • A copy(M, p) and copyto!(M, q, p) for points.
  • Everything the subsolver requires, which by default is the quasi_Newton method.

Literature

[LB19]
C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.