Exact penalty method

Manopt.exact_penalty_method – Function
exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)
exact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)

perform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task

\[\begin{aligned} \min_{p ∈\mathcal{M}} &f(p)\\ \text{subject to } &g_i(p)\leq 0 \quad \text{ for } i= 1, …, m,\\ \quad &h_j(p)=0 \quad \text{ for } j=1,…,n, \end{aligned}\]

where $\mathcal{M}$ is a Riemannian manifold, and $f$, $\{g_i\}_{i=1}^m$ and $\{h_j\}_{j=1}^n$ are twice continuously differentiable functions from $\mathcal{M}$ to ℝ. For that, a weighted $L_1$-penalty term for the violation of the constraints is added to the objective

\[f(x) + ρ\biggl( \sum_{i=1}^m \max\bigl\{0, g_i(x)\bigr\} + \sum_{j=1}^n \vert h_j(x)\vert\biggr),\]

where $ρ>0$ is the penalty parameter. Since this term is non-smooth, a SmoothingTechnique with parameter $u$ is applied; see the ExactPenaltyCost.
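
For intuition, the following plain-Julia sketch (illustrative helper names, not part of Manopt's API) evaluates this penalized objective before smoothing, assuming g and h return the vectors of inequality and equality constraint values:

```julia
# Illustrative: the non-smooth exact penalty objective.
# f(p) is the cost, g(p) the m inequality values, h(p) the n equality values.
function penalized_cost(f, g, h, ρ, p)
    violation = sum(max.(0, g(p))) + sum(abs.(h(p)))
    return f(p) + ρ * violation
end
```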

In every step $k$ of the exact penalty method, the smoothed objective is minimized over all $x ∈ \mathcal{M}$. Then the accuracy tolerance $ϵ$ and the smoothing parameter $u$ are updated by setting

\[ϵ^{(k)}=\max\{ϵ_{\min}, θ_ϵ ϵ^{(k-1)}\},\]

where $ϵ_{\min}$ is the lowest value $ϵ$ is allowed to become and $θ_ϵ ∈ (0,1)$ is a constant scaling factor, and

\[u^{(k)} = \max \{u_{\min}, \theta_u u^{(k-1)} \},\]

where $u_{\min}$ is the lowest value $u$ is allowed to become and $θ_u ∈ (0,1)$ is a constant scaling factor.

Finally, the penalty parameter $ρ$ is updated as

\[ρ^{(k)} = \begin{cases} ρ^{(k-1)}/θ_ρ, & \text{if } \displaystyle \max_{j ∈ \mathcal{E}, i ∈ \mathcal{I}} \Bigl\{ \vert h_j(x^{(k)}) \vert, g_i(x^{(k)})\Bigr\} \geq u^{(k-1)},\\ ρ^{(k-1)}, & \text{else,} \end{cases}\]

where $θ_ρ ∈ (0,1)$ is a constant scaling factor.
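
In plain Julia, one round of these updates could be sketched as follows; the order of the comparison (against the previous $u$) follows the formulas above, while all names and default values are illustrative:

```julia
# One EPM parameter update; max_violation is the largest of |h_j(x)| and g_i(x)
# at the current iterate. The θ values are illustrative choices from (0, 1).
function update_parameters(ϵ, u, ρ, max_violation;
        ϵ_min=1e-6, u_min=1e-6, θ_ϵ=0.9, θ_u=0.9, θ_ρ=0.3)
    # increase the penalty only while the constraints are violated beyond u
    ρ_new = max_violation >= u ? ρ / θ_ρ : ρ
    ϵ_new = max(ϵ_min, θ_ϵ * ϵ)
    u_new = max(u_min, θ_u * u)
    return ϵ_new, u_new, ρ_new
end
```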

Input

  • M a manifold $\mathcal M$
  • f a cost function $f:\mathcal M→ℝ$ to minimize
  • grad_f the gradient of the cost function

Optional (if not called with the ConstrainedManifoldObjective cmo)

  • g: (nothing) the inequality constraints
  • h: (nothing) the equality constraints
  • grad_g: (nothing) the gradient of the inequality constraints
  • grad_h: (nothing) the gradient of the equality constraints

Note that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. Otherwise the problem is not constrained and you should consider using unconstrained solvers like quasi_Newton.
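
As a usage sketch, consider maximizing a Rayleigh quotient on a sphere under componentwise nonnegativity; the concrete manifold, cost, and constraints here are illustrative and not prescribed by this page:

```julia
using Manopt, Manifolds, LinearAlgebra, Random

Random.seed!(42)
n = 4
M = Sphere(n - 1)
A = Symmetric(randn(n, n))

f(M, p) = -p' * A * p                    # minimize the negated Rayleigh quotient
grad_f(M, p) = project(M, p, -2 * A * p) # Riemannian gradient via projection
g(M, p) = -p                             # g(p) ≤ 0 encodes p ≥ 0 componentwise
grad_g(M, p) = [project(M, p, [i == j ? -1.0 : 0.0 for j in 1:n]) for i in 1:n]

p0 = ones(n) / sqrt(n)
q = exact_penalty_method(M, f, grad_f, p0; g=g, grad_g=grad_g)
```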

Optional

  • smoothing: (LogarithmicSumOfExponentials) SmoothingTechnique to use
  • ϵ: (1e-3) the accuracy tolerance
  • ϵ_exponent: (1/100) exponent of the ϵ update factor
  • ϵ_min: (1e-6) the lower bound for the accuracy tolerance
  • u: (1e-1) the smoothing parameter and threshold for violation of the constraints
  • u_exponent: (1/100) exponent of the u update factor
  • u_min: (1e-6) the lower bound for the smoothing parameter and threshold for violation of the constraints
  • ρ: (1.0) the penalty parameter
  • equality_constraints: (nothing) the number $n$ of equality constraints.
  • gradient_range: (nothing, equivalent to NestedPowerRepresentation) specify how gradients are represented
  • gradient_equality_range: (gradient_range) specify how the gradients of the equality constraints are represented
  • gradient_inequality_range: (gradient_range) specify how the gradients of the inequality constraints are represented
  • inequality_constraints: (nothing) the number $m$ of inequality constraints.
  • min_stepsize: (1e-10) the minimal step size
  • sub_cost: (ExactPenaltyCost(problem, ρ, u; smoothing=smoothing)) use this exact penalty cost for the sub problem, especially with the same parameters ρ and u as in the options
  • sub_grad: (ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing)) use this exact penalty gradient for the sub problem, especially with the same parameters ρ and u as in the options
  • sub_kwargs: ((;)) keyword arguments to decorate the sub options, for example debug, which automatically respects the main solver's debug options (like sub-sampling) as well
  • sub_stopping_criterion: (StopAfterIteration(200) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-10)) specify a stopping criterion for the subsolver.
  • sub_problem: (DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation))) provide a problem for the subsolver
  • sub_state: (QuasiNewtonState) using QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS and sub_stopping_criterion as a stopping criterion. See also sub_kwargs.
  • stopping_criterion: (StopAfterIteration(300) | (StopWhenSmallerOrEqual(ϵ, ϵ_min) & StopWhenChangeLess(1e-10))) a functor inheriting from StoppingCriterion indicating when to stop.

For the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.
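
A hedged sketch of passing such a range, reusing the example names from above and assuming the constraint gradients can alternatively be stacked into one array (the exact array layout expected by ArrayPowerRepresentation is an assumption here):

```julia
# Hypothetical: constraint gradients returned as columns of one n×m array,
# interpreted via ArrayPowerRepresentation from Manifolds.jl.
grad_g_arr(M, p) = reduce(hcat, grad_g(M, p))
q = exact_penalty_method(M, f, grad_f, p0;
    g=g, grad_g=grad_g_arr, gradient_range=ArrayPowerRepresentation(),
)
```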

With equality_constraints and inequality_constraints you have to provide the dimensions of the ranges of h and g, respectively. If they are not provided, a call to either of these functions, together with M and the start point p0, is performed to try to infer them.
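
If the sizes are known, they can be passed explicitly to skip this inference; a hedged one-line sketch reusing the example names from above:

```julia
q = exact_penalty_method(M, f, grad_f, p0; g=g, grad_g=grad_g, inequality_constraints=n)
```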

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source
Manopt.exact_penalty_method! – Function
exact_penalty_method!(M, f, grad_f, p; kwargs...)
exact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)

perform the exact penalty method (EPM) in place of p.

For all options, see exact_penalty_method.
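
A brief sketch of the in-place call, reusing the names from the usage example above:

```julia
p = copy(p0)  # the in-place variant overwrites its point argument
exact_penalty_method!(M, f, grad_f, p; g=g, grad_g=grad_g)
# p now holds the obtained (approximate) minimizer
```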

source

State

Manopt.ExactPenaltyMethodState – Type
ExactPenaltyMethodState{P,T} <: AbstractManoptSolverState

Describes the exact penalty method, with

Fields

A default value is given in brackets if a parameter can be left out in initialization.

  • p: a point on the manifold as starting point
  • sub_problem: an AbstractManoptProblem for the subsolver
  • sub_state: an AbstractManoptSolverState for the subsolver
  • ϵ: (1e-3) the accuracy tolerance
  • ϵ_min: (1e-6) the lower bound for the accuracy tolerance
  • u: (1e-1) the smoothing parameter and threshold for violation of the constraints
  • u_min: (1e-6) the lower bound for the smoothing parameter and threshold for violation of the constraints
  • ρ: (1.0) the penalty parameter
  • θ_ρ: (0.3) the scaling factor of the penalty parameter
  • stopping_criterion: (StopAfterIteration(300) | (StopWhenSmallerOrEqual(ϵ, ϵ_min) & StopWhenChangeLess(min_stepsize))) a functor inheriting from StoppingCriterion indicating when to stop.

Constructor

ExactPenaltyMethodState(M::AbstractManifold, p, sub_problem, sub_state; kwargs...)

construct an exact penalty method state with the remaining previously mentioned fields as keywords using their provided defaults.

See also

exact_penalty_method

source

Helping functions

Manopt.ExactPenaltyCost – Type
ExactPenaltyCost{S, Pr, R}

Represent the cost of the exact penalty method based on a ConstrainedManifoldObjective co and a parameter $ρ$, given by

\[f(p) + ρ\Bigl( \sum_{i=1}^m \max\{0,g_i(p)\} + \sum_{j=1}^n \lvert h_j(p)\rvert \Bigr),\]

where an additional parameter $u$ is used, as well as a smoothing technique, for example LogarithmicSumOfExponentials or LinearQuadraticHuber, to obtain a smooth cost function. This struct is also a functor (M, p) -> v returning the cost value $v$.

Fields

  • ρ, u: as described in the mathematical formula above.
  • co: the original cost

Constructor

ExactPenaltyCost(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())
source
Manopt.ExactPenaltyGrad – Type
ExactPenaltyGrad{S, CO, R}

Represent the gradient of the ExactPenaltyCost based on a ConstrainedManifoldObjective co, a parameter $ρ$, and a smoothing technique, which uses an additional parameter $u$.

This struct is also a functor in both formats

  • (M, p) -> X to compute the gradient in allocating fashion.
  • (M, X, p) -> X to compute the gradient in place of X.

Fields

  • ρ, u as stated before
  • co the nonsmooth objective

Constructor

ExactPenaltyGrad(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())
source
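
Both helpers can also be evaluated standalone as functors; the following sketch reuses the example names from the usage example above, and the keyword form of the ConstrainedManifoldObjective constructor is an assumption about the current API:

```julia
# Hedged sketch: wrap cost and constraints in a constrained objective,
# then evaluate the smoothed penalty cost and its gradient directly.
cmo = ConstrainedManifoldObjective(f, grad_f; g=g, grad_g=grad_g)
epc = ExactPenaltyCost(cmo, 1.0, 1e-1; smoothing=LinearQuadraticHuber())
epg = ExactPenaltyGrad(cmo, 1.0, 1e-1; smoothing=LinearQuadraticHuber())
v = epc(M, p0)  # smoothed penalized cost at p0
X = epg(M, p0)  # corresponding (allocating) gradient
```
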
Manopt.LinearQuadraticHuber – Type
LinearQuadraticHuber <: SmoothingTechnique

Specify a smoothing based on $\max\{0,x\} ≈ \mathcal P(x,u)$ for some $u$, where

\[\mathcal P(x, u) = \begin{cases} 0 & \text{ if } x \leq 0,\\ \frac{x^2}{2u} & \text{ if } 0 \leq x \leq u,\\ x-\frac{u}{2} & \text{ if } x \geq u. \end{cases}\]

source
Manopt.LogarithmicSumOfExponentials – Type
LogarithmicSumOfExponentials <: SmoothingTechnique

Specify a smoothing based on $\max\{a,b\} ≈ u \log(\mathrm{e}^{\frac{a}{u}}+\mathrm{e}^{\frac{b}{u}})$ for some $u$.

source
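
To make both techniques concrete, here is a plain-Julia sketch of the two smoothed replacements for $\max\{0,x\}$ (illustrative, not Manopt's internal code):

```julia
# LinearQuadraticHuber: piecewise surrogate 𝒫(x, u) for max(0, x).
function huber_max(x, u)
    x <= 0 && return zero(x)
    x <= u && return x^2 / (2u)
    return x - u / 2
end

# LogarithmicSumOfExponentials with a = 0 and b = x:
# u * log(exp(0/u) + exp(x/u)) = u * log(1 + exp(x/u)).
lse_max(x, u) = u * log(1 + exp(x / u))

# Both approach max(0, x) as u → 0:
huber_max(0.05, 1e-3), lse_max(0.05, 1e-3)  # ≈ (0.0495, 0.05)
```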

Technical details

The exact_penalty_method solver requires several functions of a manifold to be available.

In particular, the stopping criteria involve StopWhenChangeLess and StopWhenGradientNormLess, which require

  • an inverse_retract!(M, X, p, q), or alternatively a distance(M, p, q) for the default inverse retraction; it is recommended to set the default_inverse_retraction_method to a favourite retraction, so that an inverse_retraction_method= does not have to be specified.
  • a norm(M, p, X), to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.

Literature

[LB19]
C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.