Convex bundle method

Manopt.convex_bundle_method (Function)
convex_bundle_method(M, f, ∂f, p)
convex_bundle_method!(M, f, ∂f, p)

perform a convex bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-g_k)$ where

\[g_k = \sum_{j ∈ J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

and $p_k$ is the last serious iterate, $X_{q_j} ∈ ∂f(q_j)$, and the $λ_j^k$ are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.

Although the subdifferential might be set-valued, the argument ∂f should always return a single element from the subdifferential; the choice need not be deterministic.

For more details, see [BHJ24].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal{M}$

  • f: a cost function $f: \mathcal{M}→ ℝ$ implemented as (M, p) -> v

  • ∂f: the subgradient $∂f: \mathcal{M} → T\mathcal{M}$ of $f$ as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place. This function should always only return one element from the subgradient.

  • p::P: a point on the manifold $\mathcal{M}$

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.convex_bundle_method! (Function)
convex_bundle_method(M, f, ∂f, p)
convex_bundle_method!(M, f, ∂f, p)

perform a convex bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-g_k)$ where

\[g_k = \sum_{j ∈ J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

and $p_k$ is the last serious iterate, $X_{q_j} ∈ ∂f(q_j)$, and the $λ_j^k$ are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.

Although the subdifferential might be set-valued, the argument ∂f should always return a single element from the subdifferential; the choice need not be deterministic.

For more details, see [BHJ24].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal{M}$

  • f: a cost function $f: \mathcal{M}→ ℝ$ implemented as (M, p) -> v

  • ∂f: the subgradient $∂f: \mathcal{M} → T\mathcal{M}$ of $f$ as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place. This function should always only return one element from the subgradient.

  • p::P: a point on the manifold $\mathcal{M}$

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
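As an illustration of the call signature, here is a minimal sketch computing a Riemannian median on the sphere; the data, cost, and subgradient are purely illustrative, and loading QuadraticModels.jl and RipQP.jl is assumed to be required for the default subsolver (see the tip in the helpers section below).

```julia
using Manopt, Manifolds, LinearAlgebra
using QuadraticModels, RipQP # enables the default quadratic subsolver, see the tip further below

# Illustrative example: the Riemannian median of three points on the sphere,
# a nonsmooth, geodesically convex problem.
M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

f(M, p) = sum(distance(M, p, q) for q in data)

# return one element of the subdifferential of f; at a data point the zero summand is a valid choice
function ∂f(M, p)
    X = zero_vector(M, p)
    for q in data
        d = distance(M, p, q)
        d > 0 && (X .-= log(M, p, q) ./ d)
    end
    return X
end

p0 = normalize([1.0, 1.0, 1.0])
q = convex_bundle_method(M, f, ∂f, p0)      # allocating variant
convex_bundle_method!(M, f, ∂f, copy(p0))   # in-place variant, overwrites its point argument
```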

State

Manopt.ConvexBundleMethodState (Type)
ConvexBundleMethodState <: AbstractManoptSolverState

Stores option values for a convex_bundle_method solver.

Fields

The following fields require a (real) number type R, as well as a point type P and a tangent vector type T.

  • atol_λ::R: tolerance parameter for the convex coefficients in λ
  • atol_errors::R: tolerance parameter for the linearization errors
  • bundle<:AbstractVector{Tuple{<:P,<:T}}: bundle that collects each iterate with the computed subgradient at the iterate
  • bundle_cap::Int: the maximal number of elements the bundle is allowed to remember
  • diameter::R: estimate for the diameter of the level set of the objective function at the starting point
  • domain: the domain of f as a function (M, p) -> b that evaluates to true when the current candidate is in the domain of f, and false otherwise
  • g::T: descent direction
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • k_max::R: upper bound on the sectional curvature of the manifold
  • linearization_errors<:AbstractVector{<:R}: linearization errors at the last serious step
  • m::R: the parameter to test the decrease of the cost: $f(q_{k+1}) ≤ f(p_k) + m ξ$.
  • p::P: a point on the manifold $\mathcal{M}$ storing the current iterate
  • p_last_serious::P: last serious iterate
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • transported_subgradients: subgradients of the bundle that are transported to p_last_serious
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal{M}$ storing a subgradient at the current iterate
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • ε::R: convex combination of the linearization errors
  • λ::AbstractVector{<:R}: convex coefficients from the solution of the subproblem
  • ξ: the stopping parameter given by $ξ = -\lVert g \rVert^2 - ε$
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, AbstractEvaluationType}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Constructor

ConvexBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)
ConvexBundleMethodState(M::AbstractManifold, sub_problem=convex_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)

Generate the state for the convex_bundle_method on the manifold M

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal{M}$
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, AbstractEvaluationType}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

Most keyword arguments set default values for the corresponding fields listed above.
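As a sketch, a state could be set up directly as follows, assuming that the field names listed above double as the corresponding keyword names:

```julia
using Manopt, Manifolds

M = Sphere(2)
# default sub_problem (convex_bundle_method_subsolver) with allocating evaluation;
# the keywords p, diameter, and k_max are assumed to match the fields documented above
cbms = ConvexBundleMethodState(M; p=rand(M), diameter=π/3, k_max=1.0)
```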

source

Stopping criteria

Manopt.StopWhenLagrangeMultiplierLess (Type)
StopWhenLagrangeMultiplierLess <: StoppingCriterion

Stopping Criteria for Lagrange multipliers.

Currently this is intended for the convex_bundle_method and the proximal_bundle_method, where an approximate (sub)gradient $g$ and an error estimate $ε$ are computed from the Lagrange multipliers.

The mode=:both requires that both $ε$ and $\lvert g \rvert$ are smaller than their tolerances for the convex_bundle_method, and that $c$ and $\lvert d \rvert$ are smaller than their tolerances for the proximal_bundle_method.

The mode=:estimate requires that, for the convex_bundle_method $-ξ = \lvert g \rvert^2 + ε$ is less than a given tolerance. For the proximal_bundle_method, the equation reads $-ν = μ \lvert d \rvert^2 + c$.

Constructors

StopWhenLagrangeMultiplierLess(tolerance=1e-6; mode::Symbol=:estimate, names=nothing)

Create the stopping criterion for one of the modes mentioned. Note that tolerance can be a single number for the :estimate case, but a vector of two values is required for the :both mode. Here the first entry specifies the tolerance for $ε$ ($c$), the second the tolerance for $\lvert g \rvert$ ($\lvert d \rvert$), respectively.
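For example, a criterion in :both mode with separate tolerances for $ε$ and $\lvert g \rvert$ can be constructed as follows; passing it to the solver via a stopping_criterion= keyword is an assumption based on the usual Manopt solver interface.

```julia
using Manopt

sc = StopWhenLagrangeMultiplierLess([1e-8, 1e-8]; mode=:both)
# sketch of use with the solver (keyword name assumed):
# q = convex_bundle_method(M, f, ∂f, p; stopping_criterion=sc)
```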

source

Debug functions

Manopt.DebugWarnIfLagrangeMultiplierIncreases (Type)
DebugWarnIfLagrangeMultiplierIncreases <: DebugAction

print a warning if the Lagrange parameter based value $-ξ$ of the bundle method increases.

Constructor

DebugWarnIfLagrangeMultiplierIncreases(warn=:Once; tol=1e2)

Initialize the warning to the warning level :Once and set the tolerance for the test to 1e2.

The warn level can be set to :Once to only warn the first time the value increases, to :Always to report an increase every time it happens, and to :No to deactivate the warning, in which case this DebugAction is inactive. All other symbols are handled as if they were :Always.
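A short sketch of constructing the action; attaching it via a debug= keyword is assumed from the usual Manopt debug interface.

```julia
using Manopt

dbg = DebugWarnIfLagrangeMultiplierIncreases(:Always; tol=1e2)
# sketch of use with the solver (keyword name assumed):
# q = convex_bundle_method(M, f, ∂f, p; debug=[dbg])
```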

source

Helpers and internal functions

Manopt.convex_bundle_method_subsolver (Function)
λ = convex_bundle_method_subsolver(M, p_last_serious, linearization_errors, transported_subgradients)
convex_bundle_method_subsolver!(M, λ, p_last_serious, linearization_errors, transported_subgradients)

solver for the subproblem of the convex bundle method at the last serious iterate $p_k$, given the current linearization errors $c_j^k$ and the transported subgradients $\mathrm{P}_{p_k←q_j} X_{q_j}$.

The computation can also be done in-place of λ.

The subproblem for the convex bundle method is

\[\begin{align*}
\operatorname*{arg\,min}_{λ ∈ ℝ^{\lvert J_k \rvert}}\quad & \frac{1}{2}\Bigl\lVert \sum_{j ∈ J_k} λ_j \mathrm{P}_{p_k←q_j} X_{q_j} \Bigr\rVert^2 + \sum_{j ∈ J_k} λ_j \, c_j^k \\
\text{s. t.}\quad & \sum_{j ∈ J_k} λ_j = 1, \quad λ_j ≥ 0 \quad \text{for all } j ∈ J_k,
\end{align*}\]

where $J_k = \{\, j ∈ J_{k-1} \mid λ_j > 0 \,\} ∪ \{k\}$. See [BHJ24] for more details.

Tip

A default subsolver based on RipQP.jl and QuadraticModels.jl is available when these two packages are loaded.
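A minimal sketch of calling the subsolver directly on a small hand-made bundle; the point, subgradients, and errors below are purely illustrative.

```julia
using Manopt, Manifolds
using QuadraticModels, RipQP # load the default QP subsolver

M = Sphere(2)
p_last_serious = [0.0, 0.0, 1.0]
# two subgradients already transported to the tangent space at p_last_serious, and their errors
transported_subgradients = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
linearization_errors = [0.1, 0.2]

λ = convex_bundle_method_subsolver(
    M, p_last_serious, linearization_errors, transported_subgradients
)
# λ is a convex combination: all entries are ≥ 0 and they sum to one
```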

source
Manopt.DomainBackTracking (Function)
DomainBackTracking(; kwargs...)
DomainBackTracking(M::AbstractManifold; kwargs...)

Specify a step size that performs a backtracking to the interior of the domain of the objective function.

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for DomainBackTrackingStepsize. For default values that depend on the manifold, this factory postpones construction until the manifold is available, for example from a corresponding AbstractManoptSolverState.
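A brief sketch of constructing the factory; passing it via a stepsize= keyword is assumed from the general Manopt solver interface.

```julia
using Manopt

s = DomainBackTracking()
# sketch of use with the solver (keyword name assumed):
# q = convex_bundle_method(M, f, ∂f, p; stepsize=s)
```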

source
Manopt.ζ_1 (Function)
ζ_1(ω, δ)

compute a curvature-dependent bound. The formula reads

\[ζ_{1, ω}(δ) := \begin{cases} 1 & \text{ if } ω ≥ 0,\\ \sqrt{-ω}δ\coth(\sqrt{-ω}δ) & \text{ if } ω < 0\end{cases}\]

where $ω ≤ κ_p$ for all $p ∈ \mathcal{U}$ is a lower bound on the sectional curvature in a (strongly geodesically convex) bounded subset $\mathcal{U} ⊆ \mathcal{M}$ with diameter $δ$.

source
Manopt.ζ_2 (Function)
ζ_2(Ω, δ)

compute a curvature-dependent bound. The formula reads

\[ζ_{2, Ω}(δ) := \begin{cases} 1 & \text{ if } Ω ≤ 0,\\ \sqrt{Ω}δ\cot(\sqrt{Ω}δ) & \text{ if } Ω > 0\end{cases}\]

where $Ω ≥ κ_p$ for all $p ∈ \mathcal{U}$ is an upper bound on the sectional curvature in a (strongly geodesically convex) bounded subset $\mathcal{U} ⊆ \mathcal{M}$ with diameter $δ$.
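To make the two bounds concrete, here is a direct transcription of both formulas as plain Julia functions; this is a sketch, and the package's own ζ_1 and ζ_2 may differ in details such as argument checks.

```julia
# ω is a lower and Ω an upper bound on the sectional curvature,
# δ the diameter of the bounded, strongly geodesically convex subset
ζ1(ω, δ) = ω ≥ 0 ? one(δ) : sqrt(-ω) * δ * coth(sqrt(-ω) * δ)
ζ2(Ω, δ) = Ω ≤ 0 ? one(δ) : sqrt(Ω) * δ * cot(sqrt(Ω) * δ)

ζ1(-1.0, 0.5)  # ≈ 1.08, slightly above 1 in negative curvature
ζ2(1.0, 0.5)   # ≈ 0.92, slightly below 1 in positive curvature
```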

source
Manopt.close_point (Function)
close_point(M, p, tol; retraction_method=default_retraction_method(M, typeof(p)))

sample a random point close to $p ∈ \mathcal{M}$ within the tolerance tol, using the given retraction.
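For example (the helper may not be exported, hence the module prefix):

```julia
using Manopt, Manifolds

M = Sphere(2)
p = [0.0, 0.0, 1.0]
q = Manopt.close_point(M, p, 0.05)  # a random point close to p, controlled by the tolerance 0.05
```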

source

Literature

[BHJ24]
R. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.