Subgradient method

Manopt.subgradient_methodFunction
subgradient_method(M, f, ∂f, p=rand(M); kwargs...)
subgradient_method(M, sgo, p=rand(M); kwargs...)
subgradient_method!(M, f, ∂f, p; kwargs...)
subgradient_method!(M, sgo, p; kwargs...)

perform a subgradient method $p^{(k+1)} = \operatorname{retr}\bigl(p^{(k)}, -s^{(k)}∂f(p^{(k)})\bigr)$, where $\operatorname{retr}$ is a retraction and $s^{(k)}$ is a step size.

Though the subgradient might be set-valued, the argument ∂f should always return a single element from the subgradient; the choice of that element need not be deterministic. For more details see [FO98].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal{M}$

  • f: a cost function $f: \mathcal{M}→ ℝ$ implemented as (M, p) -> v

  • ∂f: the subgradient $∂f: \mathcal{M} → T\mathcal{M}$ of $f$ as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place. This function should always return only one element from the subgradient.

  • p::P: a point on the manifold $\mathcal{M}$

Alternatively to f and ∂f, a ManifoldSubgradientObjective sgo can be provided.

Keyword arguments

Keyword arguments can, for example, configure the retraction, the step size, and the stopping criterion (see the fields of the solver state below); all remaining keyword arguments are passed to decorate_state! for decorators.

Output

The obtained (approximate) minimizer $p^*$; see get_solver_return for details.
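
For illustration, here is a minimal usage sketch that is not part of the original docstring: computing the Riemannian median of a few points on the sphere, a classical non-smooth problem. It assumes Manifolds.jl for the Sphere manifold, and the stopping_criterion= keyword shown is assumed to be one of the solver's keyword arguments.

```julia
using Manopt, Manifolds

M = Sphere(2)                   # unit sphere S² ⊂ ℝ³
data = [rand(M) for _ in 1:10]  # points whose Riemannian median we seek

# non-smooth cost: sum of geodesic distances to the data points
f(M, p) = sum(distance(M, p, q) for q in data)

# one element of the subgradient: for q ≠ p each distance term has
# (sub)gradient -log_p(q)/d(p, q); at p = q the zero vector is a valid choice
function ∂f(M, p)
    X = zero_vector(M, p)
    for q in data
        d = distance(M, p, q)
        d > 0 && (X .-= log(M, p, q) ./ d)
    end
    return X
end

p_star = subgradient_method(
    M, f, ∂f, first(data); stopping_criterion=StopAfterIteration(2000)
)
```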

Manopt.subgradient_method!Function
subgradient_method(M, f, ∂f, p=rand(M); kwargs...)
subgradient_method(M, sgo, p=rand(M); kwargs...)
subgradient_method!(M, f, ∂f, p; kwargs...)
subgradient_method!(M, sgo, p; kwargs...)

perform a subgradient method $p^{(k+1)} = \operatorname{retr}\bigl(p^{(k)}, -s^{(k)}∂f(p^{(k)})\bigr)$, where $\operatorname{retr}$ is a retraction and $s^{(k)}$ is a step size.

Though the subgradient might be set-valued, the argument ∂f should always return a single element from the subgradient; the choice of that element need not be deterministic. For more details see [FO98].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal{M}$

  • f: a cost function $f: \mathcal{M}→ ℝ$ implemented as (M, p) -> v

  • ∂f: the subgradient $∂f: \mathcal{M} → T\mathcal{M}$ of $f$ as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place. This function should always return only one element from the subgradient.

  • p::P: a point on the manifold $\mathcal{M}$

Alternatively to f and ∂f, a ManifoldSubgradientObjective sgo can be provided.

Keyword arguments

Keyword arguments can, for example, configure the retraction, the step size, and the stopping criterion (see the fields of the solver state below); all remaining keyword arguments are passed to decorate_state! for decorators.

Output

The obtained (approximate) minimizer $p^*$; see get_solver_return for details.
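
In the same hypothetical setting as above (Riemannian median on the sphere, assuming Manifolds.jl and Manopt's evaluation=InplaceEvaluation() keyword), here is a sketch of the in-place call: the subgradient is written into X by a function (M, X, p) -> X and the start point p is overwritten with the result.

```julia
using Manopt, Manifolds

M = Sphere(2)
data = [rand(M) for _ in 1:10]
f(M, p) = sum(distance(M, p, q) for q in data)

# in-place variant: overwrite X with one element of the subgradient at p
function ∂f!(M, X, p)
    zero_vector!(M, X, p)
    for q in data
        d = distance(M, p, q)
        d > 0 && (X .-= log(M, p, q) ./ d)
    end
    return X
end

p = copy(M, first(data))  # subgradient_method! modifies p in place
subgradient_method!(M, f, ∂f!, p; evaluation=InplaceEvaluation())
```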


State

Manopt.SubGradientMethodStateType
SubGradientMethodState <: AbstractManoptSolverState

stores option values for a subgradient_method solver

Fields

  • p::P: a point on the manifold $\mathcal{M}$ storing the current iterate
  • p_star: the best point found so far, the current candidate for the minimizer $p^*$
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • X: the element of the subgradient at p that was evaluated most recently

Constructor

SubGradientMethodState(M::AbstractManifold; kwargs...)

Initialise the subgradient method state.

Keyword arguments

The fields above can be provided as keyword arguments of the corresponding names, for example p= for the initial point or stopping_criterion= for the stopping criterion.

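To illustrate how this state plugs into the solver framework, here is a hedged sketch, not taken from the original docs, that builds a ManifoldSubgradientObjective and a DefaultManoptProblem by hand, constructs the state with keyword arguments, and runs solve!; the names beyond SubGradientMethodState are assumed from Manopt's general solver interface.

```julia
using Manopt, Manifolds

M = Sphere(2)
q = rand(M)
f(M, p) = distance(M, p, q)  # non-smooth at p = q
∂f(M, p) = distance(M, p, q) > 0 ? -log(M, p, q) / distance(M, p, q) : zero_vector(M, p)

objective = ManifoldSubgradientObjective(f, ∂f)
problem = DefaultManoptProblem(M, objective)
state = SubGradientMethodState(M; p=rand(M), stopping_criterion=StopAfterIteration(100))

solve!(problem, state)
p_min = get_solver_result(state)
```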

For DebugActions and RecordActions that record the (sub)gradient, its norm, and the step sizes, see the gradient descent actions.

Technical details

The subgradient_method solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified; alternatively, a specific retraction can be passed explicitly, as sketched below.
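
As a hedged illustration of that alternative, not taken from the original text, a retraction can be passed explicitly via the retraction_method= keyword; the ProjectionRetraction used here is assumed to be available for the sphere from Manifolds.jl.

```julia
using Manopt, Manifolds

M = Sphere(2)
q = rand(M)
f(M, p) = distance(M, p, q)
∂f(M, p) = distance(M, p, q) > 0 ? -log(M, p, q) / distance(M, p, q) : zero_vector(M, p)

# use the projection-based retraction instead of the manifold's default
p_min = subgradient_method(M, f, ∂f, rand(M); retraction_method=ProjectionRetraction())
```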

Literature

[FO98]
O. Ferreira and P. R. Oliveira. Subgradient algorithm on Riemannian manifolds. Journal of Optimization Theory and Applications 97, 93–104 (1998).