Projected gradient method

Manopt.projected_gradient_method — Function
projected_gradient_method(M, f, grad_f, proj, p=rand(M); kwargs...)
projected_gradient_method(M, obj::ManifoldConstrainedSetObjective, p=rand(M); kwargs...)
projected_gradient_method!(M, f, grad_f, proj, p; kwargs...)
projected_gradient_method!(M, obj::ManifoldConstrainedSetObjective, p; kwargs...)

Compute the projected gradient method for the constrained problem

\[\begin{aligned} \operatorname*{arg\,min}_{p ∈ \mathcal M} & f(p)\\ \text{subject to}\quad& p ∈ \mathcal C ⊂ \mathcal M \end{aligned}\]

by performing the following steps

  1. Using the stepsize $α_k$ compute a candidate $q_k = \operatorname{proj}_{\mathcal C}\Bigl(\operatorname{retr}_{p_k}\bigl(-α_k \operatorname{grad} f(p_k)\bigr)\Bigr)$
  2. Compute a backtracking stepsize $β_k ≤ 1$ along $Y_k = \operatorname{retr}_{p_k}^{-1}q_k$
  3. Compute the new iterate $p_{k+1} = \operatorname{retr}_{p_k}( β_k \operatorname{retr}_{p_k}^{-1}q_k )$

until the stopping_criterion= is fulfilled.

For more information see [BFNZ25].
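The three steps above can be sketched in plain Julia on the sphere $\mathcal S^2$. This is a minimal illustration, not Manopt code: the spherical-cap constraint set, the projection retraction, and all helper names (`retract`, `inv_retract`, `proj_cap`, `projected_gradient`) are assumptions chosen for this example.

```julia
using LinearAlgebra

# Projection retraction on the sphere and its inverse (valid for ⟨p,q⟩ > 0).
retract(p, X) = normalize(p + X)
inv_retract(p, q) = q / dot(p, q) - p

# Metric projection onto the spherical cap C = {p : ⟨c,p⟩ ≥ cos θ}.
function proj_cap(c, θ, p)
    dot(c, p) ≥ cos(θ) && return p
    u = normalize(p - dot(c, p) * c)   # direction from c toward p
    return sin(θ) * u + cos(θ) * c     # closest point on the cap boundary
end

# Riemannian gradient of f(p) = -⟨a,p⟩: project the Euclidean gradient
# -a onto the tangent space at p.
rgrad(a, p) = -a + dot(p, a) * p

function projected_gradient(a, c, θ, p; α=0.5, σ=1e-4, iters=200)
    f(p) = -dot(a, p)
    for _ in 1:iters
        X = rgrad(a, p)
        # Step 1: candidate from a projected gradient step.
        q = proj_cap(c, θ, retract(p, -α * X))
        # Step 2: backtrack along Y = retr_p^{-1}(q), starting from β = 1.
        Y = inv_retract(p, q)
        β = 1.0
        while f(retract(p, β * Y)) > f(p) + σ * β * dot(X, Y) && β > 1e-8
            β /= 2
        end
        # Step 3: new iterate.
        p = retract(p, β * Y)
    end
    return p
end
```

For `a = [1, 0, 0]`, `c = [0, 0, 1]`, and `θ = π/4`, the unconstrained minimizer `a` lies outside the cap, and the iteration settles on the boundary point `[√2/2, 0, √2/2]`.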

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • proj: the function that projects onto the set $\mathcal C$, as a function (M, p) -> q or a function (M, q, p) -> q computing the projection in-place of q.
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • backtrack=ArmijoLinesearchStepsize(M; stop_increasing_at_step=0): a functor inheriting from Stepsize to determine a step size to perform the backtracking to determine the $β_k$. Note that the method requires $β_k ≤ 1$, otherwise the projection step no longer provides points within the constraints
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=ConstantStepsize(injectivity_radius(M)/2): a functor inheriting from Stepsize to determine a step size to perform the candidate projected step.
  • stopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.
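The two gradient signatures selected via evaluation= can be sketched with a toy cost $f(p) = \lVert p \rVert^2$; the manifold argument is a placeholder here and the names are illustrative, not Manopt API.

```julia
# AllocatingEvaluation (the default): (M, p) -> X returns a new tangent vector.
grad_f(M, p) = 2 .* p

# InplaceEvaluation: (M, X, p) -> X writes the result into X and returns it.
# Note the modified argument is the second one, after the manifold.
function grad_f!(M, X, p)
    X .= 2 .* p
    return X
end
```

The in-place form avoids allocating a fresh tangent vector in every iteration, which matters for high-dimensional manifolds.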

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ProjectedGradientMethodStateType
ProjectedGradientMethodState <: AbstractManoptSolverState

Fields

  • backtracking::Stepsize: a functor inheriting from Stepsize to determine the step size $β_k$ from $p_k$ toward the candidate $q_k$
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • q: an intermediate point for the projected gradient step
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine the step size $α_k$ used to compute the candidate $q_k$
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • Y::T: temporary memory for a tangent vector, used within the backtracking

Constructor

ProjectedGradientMethodState(M, p=rand(M); kwargs...)

Keyword arguments

source

Stopping criteria

Literature

[BFNZ25]
R. Bergmann, O. P. Ferreira, S. Z. Németh and J. Zhu. On projection mappings and the gradient projection method on hyperbolic space forms. Preprint, in preparation (2025).