Alternating Gradient Descent
Manopt.alternating_gradient_descent — Function

    alternating_gradient_descent(M::ProductManifold, f, grad_f, p)

perform an alternating gradient descent
Input

* `M` – the product manifold $\mathcal M = \mathcal M_1 × \mathcal M_2 × ⋯ × \mathcal M_n$
* `f` – the objective function (cost) defined on `M`
* `grad_f` – a gradient, which can be given in two ways:
  * as a single function returning a `ProductRepr`, or
  * as a vector of functions, each returning one component of the whole gradient
* `p` – an initial value $p_0 ∈ \mathcal M$
Optional

* `evaluation` – (`AllocatingEvaluation`) specify whether the gradient(s) work by allocation (default), i.e. of the form `gradF(M, x)`, or by `InplaceEvaluation`, i.e. in place of the form `gradF!(M, X, x)` (elementwise)
* `evaluation_order` – (`:Linear`) whether to use a randomly permuted sequence (`:FixedRandom`), a per-cycle permuted sequence (`:Random`), or the default `:Linear` one
* `inner_iterations` – (`5`) how many gradient steps to take in a component before alternating to the next
* `stopping_criterion` – (`StopAfterIteration(1000)`) a `StoppingCriterion`
* `stepsize` – (`ArmijoLinesearch()`) a `Stepsize`
* `order` – (`[1:n]`) the initial permutation, where `n` is the number of gradients in `gradF`
* `retraction_method` – (`default_retraction_method(M, typeof(p))`) a retraction to use, called as `retraction(M, p, X)`
Output

usually the obtained (approximate) minimizer, see `get_solver_return` for details
This problem requires the `ProductManifold` from `Manifolds.jl`, so `Manifolds.jl` needs to be loaded.

The input of each of the (component) gradients is still the whole point `x`; all components other than the `i`th are assumed to be fixed, and only the `i`th component's gradient is computed / returned.
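As a usage sketch, consider a hypothetical toy problem on a product of two Euclidean factors, where the gradient is passed as a vector of component functions. The manifold, cost, targets `a` and `b`, and the gradient closures are all assumptions for illustration; note that depending on your Manifolds.jl version, product points are represented as `ProductRepr` (as in this docstring) or `ArrayPartition`.

```julia
using Manopt, Manifolds

# Product of two Euclidean factors (hypothetical example).
M = Euclidean(2) × Euclidean(3)
a = [1.0, 0.0]
b = [0.0, 1.0, 0.0]

# Cost: squared distance of each component to its fixed target.
f(M, x) = distance(M[1], x[M, 1], a)^2 + distance(M[2], x[M, 2], b)^2

# One gradient function per component; each receives the whole point x,
# but only returns the gradient with respect to its own component.
grad_f = [
    (M, x) -> 2 .* (x[M, 1] .- a),
    (M, x) -> 2 .* (x[M, 2] .- b),
]

p0 = ProductRepr(zeros(2), zeros(3))
q = alternating_gradient_descent(M, f, grad_f, p0)
```

With the default `:Linear` evaluation order, the solver cycles through the two components, taking `inner_iterations` gradient steps in each before alternating.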
Manopt.alternating_gradient_descent! — Function

    alternating_gradient_descent!(M::ProductManifold, f, grad_f, p)

perform an alternating gradient descent in place of `p`.
Input

* `M` – a product manifold $\mathcal M$
* `f` – the objective function (cost)
* `grad_f` – a gradient, given either as a single function returning the whole gradient or as a vector of component gradient functions
* `p` – an initial value $p_0 ∈ \mathcal M$
For all optional parameters, see `alternating_gradient_descent`.
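A minimal sketch of the in-place variant, reusing the same kind of hypothetical toy problem as above (manifold, targets, and gradient closures are assumptions, not part of the API):

```julia
using Manopt, Manifolds

# Hypothetical toy problem on a product of two Euclidean factors.
M = Euclidean(2) × Euclidean(2)
a = [1.0, 0.0]
b = [0.0, 1.0]
f(M, x) = distance(M[1], x[M, 1], a)^2 + distance(M[2], x[M, 2], b)^2
grad_f = [
    (M, x) -> 2 .* (x[M, 1] .- a),
    (M, x) -> 2 .* (x[M, 2] .- b),
]

# The start point p is mutated to hold the result.
p = ProductRepr(zeros(2), zeros(2))
alternating_gradient_descent!(M, f, grad_f, p)
```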
State
Manopt.AlternatingGradientDescentState — Type

    AlternatingGradientDescentState <: AbstractGradientDescentSolverState

Store the fields for an alternating gradient descent algorithm, see also `alternating_gradient_descent`.
Fields

* `direction` – (`AlternatingGradient(zero_vector(M, x))`) a `DirectionUpdateRule`
* `evaluation_order` – (`:Linear`) whether to use a randomly permuted sequence (`:FixedRandom`), a per-cycle newly permuted sequence (`:Random`), or the default `:Linear` evaluation order
* `inner_iterations` – (`5`) how many gradient steps to take in a component before alternating to the next
* `order` – the current permutation
* `retraction_method` – (`default_retraction_method(M, typeof(p))`) a retraction to use, called as `retraction(M, x, ξ)`
* `stepsize` – (`ConstantStepsize(M)`) a `Stepsize`
* `stopping_criterion` – (`StopAfterIteration(1000)`) a `StoppingCriterion`
* `p` – the current iterate
* `X` – (`zero_vector(M, p)`) the current gradient tangent vector
* `k`, `i` – internal counters for the outer and inner iterations, respectively
Constructors

    AlternatingGradientDescentState(M, p; kwargs...)

Generate the options for point `p`, where `inner_iterations`, `order_type`, `order`, `retraction_method`, `stopping_criterion`, and `stepsize` are keyword arguments.
Additionally, the options share a `DirectionUpdateRule`, which chooses the current component, so they can be decorated further; the innermost rule should always be the following one, though.
Manopt.AlternatingGradient — Type

    AlternatingGradient <: DirectionUpdateRule

The default gradient processor, which just evaluates the (alternating) gradient on one of the components.