Stochastic Gradient Descent

Manopt.stochastic_gradient_descentFunction
stochastic_gradient_descent(M, gradF, x)

perform a stochastic gradient descent

Input

  • M a manifold $\mathcal M$
  • gradF – a gradient function that either returns a vector of the gradients or is a vector of gradient functions
  • x – an initial value $x ∈ \mathcal M$

Optional

  • cost – (missing) you can provide a cost function for example to track the function value
  • evaluation – (AllocatingEvaluation) specify whether the gradient(s) work by allocation (default), i.e. are of the form gradF(M, x), or InplaceEvaluation in place, i.e. are of the form gradF!(M, X, x) (elementwise).
  • evaluation_order – (:Random) whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Linear), or the default :Random one.
  • stopping_criterion – (StopAfterIteration(1000)) a StoppingCriterion
  • stepsize – (ConstantStepsize(1.0)) a Stepsize
  • order_type – (:RandomOrder) a type of ordering of gradient evaluations. Possible values are :RandomOrder, a :FixedPermutation, :LinearOrder
  • order – ([1:n]) the initial permutation, where n is the number of gradients in gradF.
  • retraction_method – (default_retraction_method(M, typeof(p))) a retraction to use.

Output

the obtained (approximate) minimizer $x^*$, see get_solver_return for details

source
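As a usage sketch (assuming Manopt.jl and Manifolds.jl are loaded; the data points, stepsize value, and cost are made up for illustration), the solver can be called with a vector of gradient functions, here for computing a Riemannian mean of points on the sphere:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)                                # the unit sphere S² in ℝ³
qs = [normalize(randn(3)) for _ in 1:10]     # illustrative data points on M

# one gradient function per summand fᵢ(p) = ½ d(p, qᵢ)²,
# whose Riemannian gradient is -log(M, p, qᵢ)
gradF = [(M, p) -> -log(M, p, q) for q in qs]

p0 = normalize([1.0, 1.0, 1.0])
p_star = stochastic_gradient_descent(M, gradF, p0;
    stepsize=ConstantStepsize(0.5),
    evaluation_order=:Random,
)
```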
Manopt.stochastic_gradient_descent!Function
stochastic_gradient_descent!(M, gradF, x)

perform a stochastic gradient descent in place of x.

Input

  • M a manifold $\mathcal M$
  • gradF – a gradient function that either returns a vector of the gradients or is a vector of gradient functions
  • x – an initial value $x ∈ \mathcal M$

For all optional parameters, see stochastic_gradient_descent.

source

State

Manopt.StochasticGradientDescentStateType
StochasticGradientDescentState <: AbstractGradientDescentSolverState

Store the following fields for a default stochastic gradient descent algorithm, see also ManifoldStochasticGradientObjective and stochastic_gradient_descent.

Fields

  • p the current iterate
  • direction (StochasticGradient) a direction update to use
  • stopping_criterion – (StopAfterIteration(1000)) a StoppingCriterion
  • stepsize – (ConstantStepsize(1.0)) a Stepsize
  • evaluation_order – (:Random) whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Linear), or the default :Random one.
  • order the current permutation
  • retraction_method – (default_retraction_method(M, typeof(p))) the retraction to use, applied as retract(M, p, X).

Constructor

StochasticGradientDescentState(M, p)

Create a StochasticGradientDescentState with start point p. All other fields are optional keyword arguments, and the defaults are taken from M.

source

Additionally, the options share a DirectionUpdateRule, so you can also apply MomentumGradient and AverageGradient here. The most inner one should always be a StochasticGradient.
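As a hedged sketch of such a combination (constructor signatures vary between Manopt.jl versions, so the exact keywords below are assumptions), a momentum rule can wrap the stochastic gradient and be passed to the solver via the direction keyword:

```julia
using Manopt, Manifolds

M = Sphere(2)
p0 = [0.0, 0.0, 1.0]

# wrap the innermost StochasticGradient in a MomentumGradient;
# the `momentum` and inner `direction` keywords are assumptions
# about this Manopt version's API and may differ in others
direction = MomentumGradient(M, p0;
    momentum=0.2,
    direction=StochasticGradient(M; p=p0),
)
```

The resulting rule would then be supplied as, e.g., `stochastic_gradient_descent(M, gradF, p0; direction=direction)`.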

Manopt.StochasticGradientType
StochasticGradient <: AbstractGradientGroupProcessor

The default gradient processor, which just evaluates the (stochastic) gradient or a subset thereof.

Constructor

StochasticGradient(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))

Initialize the stochastic gradient processor with X; both M and p are just helper variables, though M is mandatory by convention.

source
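To make the solver's inner loop concrete, here is a minimal, self-contained sketch in plain Julia (no Manopt dependency; all names are local to this sketch) of stochastic gradient descent on the unit sphere: each step evaluates a single summand's gradient, projects it to the tangent space, and retracts the point back to the sphere by normalization:

```julia
using LinearAlgebra, Random

# illustrative data points on the sphere; the cost is the surrogate
# f(x) = ½ Σᵢ ‖x - qᵢ‖², restricted to the unit sphere
qs = [normalize([1.0, 0.0, 0.2]),
      normalize([0.0, 1.0, 0.2]),
      normalize([1.0, 1.0, 0.0])]

# Euclidean gradient of the i-th summand
grad_i(x, q) = x - q

# projection onto the tangent space at x of the sphere
project(x, g) = g - (x' * g) * x

# retraction: step in the tangent direction, then normalize back to the sphere
retract(x, X) = normalize(x + X)

function sgd_sphere(qs, x; stepsize=0.05, iterations=200)
    for _ in 1:iterations
        for i in shuffle(1:length(qs))           # :Random evaluation order
            X = project(x, grad_i(x, qs[i]))     # one stochastic gradient
            x = retract(x, -stepsize * X)        # descent step
        end
    end
    return x
end

x = sgd_sphere(qs, normalize([1.0, 1.0, 1.0]))
```

For this cost the minimizer on the sphere is the normalized Euclidean mean of the data points; with a constant stepsize the iterates approach it up to an oscillation on the order of the stepsize.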