Projected gradient method
Manopt.projected_gradient_method — Function

projected_gradient_method(M, f, grad_f, proj, p=rand(M); kwargs...)
projected_gradient_method(M, obj::ManifoldConstrainedSetObjective, p=rand(M); kwargs...)
projected_gradient_method!(M, f, grad_f, proj, p; kwargs...)
projected_gradient_method!(M, obj::ManifoldConstrainedSetObjective, p; kwargs...)
Compute the projected gradient method for the constrained problem
\[\begin{aligned} \operatorname*{arg\,min}_{p ∈ \mathcal M} & f(p)\\ \text{subject to}\quad& p ∈ \mathcal C ⊂ \mathcal M \end{aligned}\]
by performing the following steps
- Using the stepsize $α_k$, compute a candidate $q_k = \operatorname{proj}_{\mathcal C}\Bigl(\operatorname{retr}_{p_k}\bigl(-α_k \operatorname{grad} f(p_k)\bigr)\Bigr)$
- Compute a backtracking stepsize $β_k ≤ 1$ along $Y_k = \operatorname{retr}_{p_k}^{-1}q_k$
- Compute the new iterate $p_{k+1} = \operatorname{retr}_{p_k}( β_k \operatorname{retr}_{p_k}^{-1}q_k )$
until the stopping_criterion= is fulfilled.
For more information see [BFNZ25].
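A schematic of one iteration in code may help to read these three steps. This is an illustration only, not the solver's internal implementation; it assumes the generic retract and inverse_retract from ManifoldsBase.jl and uses α and β as placeholders for the stepsize $α_k$ and the backtracked $β_k$:

X = grad_f(M, p)                    # Riemannian gradient at the current iterate p_k
q = proj(M, retract(M, p, -α * X))  # candidate q_k: retract along -α_k grad f(p_k), then project onto C
Y = inverse_retract(M, p, q)        # direction Y_k = retr_{p_k}^{-1} q_k
p = retract(M, p, β * Y)            # new iterate p_{k+1}, using the backtracked β_k ≤ 1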
Input
- M::AbstractManifold: a Riemannian manifold $\mathcal M$
- f: a cost function $f: \mathcal M → ℝ$ implemented as (M, p) -> v
- grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
- proj: the function that projects onto the set $\mathcal C$ as a function (M, p) -> q or a function (M, q, p) -> q computing the projection in-place of q
- p: a point on the manifold $\mathcal M$
Keyword arguments
- backtrack=ArmijoLinesearchStepsize(M; stop_increasing_at_step=0): a functor inheriting from Stepsize to determine the backtracking step size $β_k$. Note that the method requires $β_k ≤ 1$, otherwise the projection step no longer provides points within the constraints
- evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second
- retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
- stepsize=ConstantStepsize(injectivity_radius(M)/2): a functor inheriting from Stepsize to determine the step size $α_k$ for the candidate projected step
- stopping_criterion=StopAfterIteration(500) | StopWhenGradientNormLess(1.0e-6): a functor indicating whether the stopping criterion is fulfilled
All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.
Output
The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.
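For illustration, a minimal usage sketch: minimize the squared Riemannian distance to a point a on the sphere, subject to staying within a geodesic ball around c. The set $\mathcal C$, its radius, and the closed-form projection proj_C are assumptions made only for this example; the solver call itself follows the signature documented above, and Manifolds.jl provides the manifold and its operations.

using Manopt, Manifolds
M = Sphere(2)
c = [0.0, 0.0, 1.0]                  # center of the example constraint set C (a geodesic ball)
r = 0.5                              # radius of C
a = [1.0, 0.0, 0.0]                  # target point, chosen outside of C
f(M, p) = 0.5 * distance(M, p, a)^2  # cost: squared Riemannian distance to a
grad_f(M, p) = -log(M, p, a)         # its Riemannian gradient
function proj_C(M, p)                # metric projection onto the geodesic ball C
    d = distance(M, c, p)
    return d <= r ? copy(M, p) : shortest_geodesic(M, c, p, r / d)
end
p0 = [0.0, 1.0, 0.0]
q = projected_gradient_method(M, f, grad_f, proj_C, p0;
    stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(1e-8),
)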
Manopt.projected_gradient_method! — Function

projected_gradient_method(M, f, grad_f, proj, p=rand(M); kwargs...)
projected_gradient_method(M, obj::ManifoldConstrainedSetObjective, p=rand(M); kwargs...)
projected_gradient_method!(M, f, grad_f, proj, p; kwargs...)
projected_gradient_method!(M, obj::ManifoldConstrainedSetObjective, p; kwargs...)
Compute the projected gradient method for the constrained problem
\[\begin{aligned} \operatorname*{arg\,min}_{p ∈ \mathcal M} & f(p)\\ \text{subject to}\quad& p ∈ \mathcal C ⊂ \mathcal M \end{aligned}\]
by performing the following steps
- Using the stepsize $α_k$, compute a candidate $q_k = \operatorname{proj}_{\mathcal C}\Bigl(\operatorname{retr}_{p_k}\bigl(-α_k \operatorname{grad} f(p_k)\bigr)\Bigr)$
- Compute a backtracking stepsize $β_k ≤ 1$ along $Y_k = \operatorname{retr}_{p_k}^{-1}q_k$
- Compute the new iterate $p_{k+1} = \operatorname{retr}_{p_k}( β_k \operatorname{retr}_{p_k}^{-1}q_k )$
until the stopping_criterion= is fulfilled.
For more information see [BFNZ25].
Input
- M::AbstractManifold: a Riemannian manifold $\mathcal M$
- f: a cost function $f: \mathcal M → ℝ$ implemented as (M, p) -> v
- grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
- proj: the function that projects onto the set $\mathcal C$ as a function (M, p) -> q or a function (M, q, p) -> q computing the projection in-place of q
- p: a point on the manifold $\mathcal M$
Keyword arguments
- backtrack=ArmijoLinesearchStepsize(M; stop_increasing_at_step=0): a functor inheriting from Stepsize to determine the backtracking step size $β_k$. Note that the method requires $β_k ≤ 1$, otherwise the projection step no longer provides points within the constraints
- evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second
- retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
- stepsize=ConstantStepsize(injectivity_radius(M)/2): a functor inheriting from Stepsize to determine the step size $α_k$ for the candidate projected step
- stopping_criterion=StopAfterIteration(500) | StopWhenGradientNormLess(1.0e-6): a functor indicating whether the stopping criterion is fulfilled
All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.
Output
The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.
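The in-place variant can be sketched analogously, reusing M, f, c, r, and a from the example above. The gradient and the projection are now mutating functions and evaluation=InplaceEvaluation() is passed, so the start point p0 is overwritten with the result:

grad_f!(M, X, p) = (X .= -log(M, p, a); X)   # gradient written in-place of X
function proj_C!(M, q, p)                    # projection written in-place of q
    d = distance(M, c, p)
    copyto!(q, d <= r ? p : shortest_geodesic(M, c, p, r / d))
    return q
end
p0 = [0.0, 1.0, 0.0]
projected_gradient_method!(M, f, grad_f!, proj_C!, p0; evaluation=InplaceEvaluation())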
State
Manopt.ProjectedGradientMethodState — Type

ProjectedGradientMethodState <: AbstractManoptSolverState
Fields
- backtracking::Stepsize: a functor inheriting from Stepsize to determine the step size $β_k$ from $p_k$ towards the candidate $q_k$
- inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
- p::P: a point on the manifold $\mathcal M$ storing the current iterate
- q: an interim point for the projected gradient step
- stepsize::Stepsize: a functor inheriting from Stepsize to determine the step size $α_k$ that determines the candidate $q_k$
- stop::StoppingCriterion: a functor indicating whether the stopping criterion is fulfilled
- retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
- X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
- Y::T: temporary memory for a tangent vector, used within the backtracking
Constructor
ProjectedGradientMethodState(M, p=rand(M); kwargs...)
Keyword arguments
- backtracking=ArmijoLinesearchStepsize(M): a functor inheriting from Stepsize to determine the step size from $p_k$ towards the candidate $q_k$
- inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
- stepsize=ConstantStepsize(M): a functor inheriting from Stepsize to determine the step size $α_k$ that determines the candidate $q_k$
- stop=StopAfterIteration(300): a functor indicating whether the stopping criterion is fulfilled
- retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
- X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$
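As a minimal sketch, the state can also be set up directly, for example to adjust the stopping criterion before the solver uses it; the start point p0 and the bounds chosen here are illustrative:

using Manopt, Manifolds
M = Sphere(2)
p0 = [0.0, 1.0, 0.0]
pgms = ProjectedGradientMethodState(M, p0;
    stop=StopAfterIteration(100) | StopWhenGradientNormLess(1e-7),
)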
Stopping criteria
Manopt.StopWhenProjectedGradientStationary — Type

StopWhenProjectedGradientStationary <: StoppingCriterion
Stop when the step taken by the projection is (before linesearch) exactly the opposite of the
Literature
- [BFNZ25]
- R. Bergmann, O. P. Ferreira, S. Z. Németh and J. Zhu. On projection mappings and the gradient projection method on hyperbolic space forms. Preprint, in preparation (2025).