Manopt.conjugate_gradient_descent – Function
conjugate_gradient_descent(M, F, gradF, x)

perform a conjugate gradient based descent

$x_{k+1} = \operatorname{retr}_{x_k} \bigl( s_k\delta_k \bigr),$

where $\operatorname{retr}$ denotes a retraction on the manifold $\mathcal M$ and one can employ different rules to update the descent direction $\delta_k$ based on the last direction $\delta_{k-1}$ and both gradients $\operatorname{grad}f(x_k)$ and $\operatorname{grad}f(x_{k-1})$. The stepsize $s_k$ may be determined by a Linesearch.

The available update rules (listed under Available Coefficients below) all compute a coefficient $β_k$ such that this algorithm updates the search direction as

$\delta_k = -\operatorname{grad}f(x_k) + β_k \delta_{k-1}.$
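
In code, one iteration of this scheme can be sketched as follows. This is a minimal sketch, not Manopt's internal implementation; it assumes the ManifoldsBase.jl functions retract and vector_transport_to, and the keyword coefficient here is a hypothetical stand-in for one of the rules listed under Available Coefficients below.

using Manifolds

# minimal sketch of the iteration above; not Manopt's implementation
function cg_sketch(M, gradF, x; s=1.0, iterations=100,
                   coefficient=(ξ_old, ξ_new, δ) -> 0.0)
    ξ = gradF(M, x)
    δ = -ξ                                   # δ_0 = -grad f(x_0)
    for k in 1:iterations
        x_new = retract(M, x, s * δ)         # x_{k+1} = retr_{x_k}(s_k δ_k)
        ξ_new = gradF(M, x_new)              # grad f(x_{k+1})
        β = coefficient(ξ, ξ_new, δ)         # β_k from the chosen rule
        δ = -ξ_new + β * vector_transport_to(M, x, δ, x_new)
        x, ξ = x_new, ξ_new
    end
    return x
end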

Input

• M : a manifold $\mathcal M$
• F : a cost function $F:\mathcal M→ℝ$ to minimize
• gradF: the gradient $\operatorname{grad}F:\mathcal M → T\mathcal M$ of $F$
• x : an initial value $x∈\mathcal M$

Optional

• coefficient – (SteepestDirectionUpdateRule <: DirectionUpdateRule) rule to compute the descent direction update coefficient $β_k$, as a functor, i.e. the resulting function maps (p,o,i) -> β, where p is the current GradientProblem, o are the ConjugateGradientDescentOptions, and i is the current iterate; see the example after this list.
• evaluation – (AllocatingEvaluation) specify whether the gradient works by allocation (default), i.e. is of the form gradF(M, x), or as a MutatingEvaluation in place, i.e. of the form gradF!(M, X, x).
• retraction_method – (default_retraction_method(M)) a retraction method to use.
• return_options – (false) if activated, the extended result, i.e. the complete Options, is returned. This can be used to access recorded values; if set to false (default), just the optimal value x_opt is returned.
• stepsize – (Constant(1.)) a Stepsize function applied to the search direction; the default is a constant step size of 1.
• stopping_criterion – (stopWhenAny(stopAtIteration(200), stopGradientNormLess(10.0^-8))) a function indicating when to stop.
• vector_transport_method – (default_vector_transport_method(M)) vector transport method to transport the old descent direction when computing the new descent direction.
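
For example, a call with an explicit coefficient and an in-place gradient may look as follows. This is a minimal sketch assuming the Sphere manifold from Manifolds.jl; the cost F, the in-place gradient gradF!, the target p, and the start point x0 are illustrative choices, not part of the API.

using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]                          # illustrative target point
F(M, x) = distance(M, x, p)^2                # squared Riemannian distance to p
gradF!(M, X, x) = (X .= -2 .* log(M, x, p))  # writes grad F(x) into X

x0 = [0.0, 0.0, 1.0]
x_opt = conjugate_gradient_descent(
    M, F, gradF!, x0;
    coefficient=FletcherReevesCoefficient(),
    evaluation=MutatingEvaluation(),
)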

Output

• x_opt – the resulting (approximately critical) point of the conjugate gradient descent

OR

• options - the options returned by the solver (see return_options)
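
Continuing the sketch above, the complete Options can be requested instead of just the minimizer; get_solver_result is assumed here to be the Manopt accessor for the result stored in the returned Options:

o = conjugate_gradient_descent(
    M, F, gradF!, x0;
    evaluation=MutatingEvaluation(),
    return_options=true,
)
x_opt = get_solver_result(o)   # extract the resulting point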
source
Manopt.conjugate_gradient_descent! – Function
conjugate_gradient_descent!(M, F, gradF, x)

perform a conjugate gradient based descent in place of x, i.e.

$x_{k+1} = \operatorname{retr}_{x_k} \bigl( s_k\delta_k \bigr),$

where $\operatorname{retr}$ denotes a retraction on the manifold $\mathcal M$.

Input

• M : a manifold $\mathcal M$
• F : a cost function $F:\mathcal M→ℝ$ to minimize
• gradF: the gradient $\operatorname{grad}F:\mathcal M→ T\mathcal M$ of F
• x : an initial value $x∈\mathcal M$

for more details and options, especially the DirectionUpdateRules, see conjugate_gradient_descent.
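
A minimal usage sketch, reusing M, F, gradF!, and x0 from the example further above; the copied start point is overwritten with the result:

x = copy(x0)
conjugate_gradient_descent!(M, F, gradF!, x; evaluation=MutatingEvaluation())
# x now holds the (approximately critical) point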

source

Options

Manopt.ConjugateGradientDescentOptions – Type
ConjugateGradientDescentOptions <: AbstractGradientOptions

specify options for a conjugate gradient descent algorithm that solves a GradientProblem.

Fields

• x – the current iterate, a point on a manifold
• gradient – the current gradient
• δ – the current descent direction, i.e. also tangent vector
• β – the current update coefficient, see coefficient below.
• coefficient – a DirectionUpdateRule function to determine the new β
• stepsize – a Stepsize function
• stop – a StoppingCriterion
• retraction_method – (ExponentialRetraction()) a type of retraction

source

Available Coefficients

The update rules act as DirectionUpdateRules, which internally always evaluate the gradient itself first.
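
Based on the functor interface (p,o,i) -> β described in the options above, a custom rule can be sketched as follows; ZeroCoefficient is a hypothetical example that always returns $β_k = 0$ and hence recovers a plain (steepest descent like) direction update:

using Manopt

# hypothetical rule: always return β = 0, i.e. no conjugation
struct ZeroCoefficient <: DirectionUpdateRule end

# p is the GradientProblem, o the ConjugateGradientDescentOptions,
# i the current iterate number
(u::ZeroCoefficient)(p, o, i) = 0.0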

Manopt.ConjugateDescentCoefficient – Type
ConjugateDescentCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [Fletcher1987], adapted to manifolds:

$β_k = \frac{ \lVert ξ_{k+1} \rVert_{x_{k+1}}^2 } {\langle -\delta_k,ξ_k \rangle_{x_k}}.$

Constructor

ConjugateDescentCoefficient(a::StoreOptionsAction=())

Construct the conjugate descent coefficient update rule; a new storage is created by default.

source
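
All coefficient rules on this page are passed to the solver in the same way, for example (a sketch, reusing M, F, gradF!, and x0 from the solver example above):

x_opt = conjugate_gradient_descent(
    M, F, gradF!, x0;
    coefficient=ConjugateDescentCoefficient(),
    evaluation=MutatingEvaluation(),
)

The same pattern applies to the Dai Yuan, Fletcher Reeves, Hager Zhang, Hestenes-Stiefel, Liu-Storey, and Polak-Ribière rules below.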
Manopt.DaiYuanCoefficient – Type
DaiYuanCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [DaiYuan1999], adapted to manifolds: let $\nu_k = ξ_{k+1} - P_{x_{k+1}\gets x_k}ξ_k$, where $P_{a\gets b}(⋅)$ denotes a vector transport from the tangent space at $b$ to the one at $a$.

$β_k = \frac{ \lVert ξ_{k+1} \rVert_{x_{k+1}}^2 } {\langle P_{x_{k+1}\gets x_k}\delta_k, \nu_k \rangle_{x_{k+1}}}.$

Constructor

DaiYuanCoefficient(
    t::AbstractVectorTransportMethod=ParallelTransport(),
    a::StoreOptionsAction=(),
)

Construct the Dai Yuan coefficient update rule, where the parallel transport is the default vector transport and a new storage is created by default.

source
Manopt.FletcherReevesCoefficient – Type
FletcherReevesCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [FletcherReeves1964], adapted to manifolds:

$β_k = \frac{\lVert ξ_{k+1}\rVert_{x_{k+1}}^2}{\lVert ξ_{k}\rVert_{x_{k}}^2}.$

Constructor

FletcherReevesCoefficient(a::StoreOptionsAction=())

Construct the Fletcher Reeves coefficient update rule; a new storage is created by default.

source
Manopt.HagerZhangCoefficient – Type
HagerZhangCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [HagerZhang2005], adapted to manifolds: let $\nu_k = ξ_{k+1} - P_{x_{k+1}\gets x_k}ξ_k$, where $P_{a\gets b}(⋅)$ denotes a vector transport from the tangent space at $b$ to the one at $a$.

$β_k = \Bigl\langle\nu_k - \frac{ 2\lVert \nu_k\rVert_{x_{k+1}}^2 }{ \langle P_{x_{k+1}\gets x_k}\delta_k, \nu_k \rangle_{x_{k+1}} } P_{x_{k+1}\gets x_k}\delta_k, \frac{ξ_{k+1}}{ \langle P_{x_{k+1}\gets x_k}\delta_k, \nu_k \rangle_{x_{k+1}} } \Bigr\rangle_{x_{k+1}}.$

This method includes a numerical stability improvement proposed by those authors.

Constructor

HagerZhangCoefficient(
    t::AbstractVectorTransportMethod=ParallelTransport(),
    a::StoreOptionsAction=(),
)

Construct the Hager Zhang coefficient update rule, where the parallel transport is the default vector transport and a new storage is created by default.

source
Manopt.HeestenesStiefelCoefficient – Type
HeestenesStiefelCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [HeestensStiefel1952], adapted to manifolds as follows: let $\nu_k = ξ_{k+1} - P_{x_{k+1}\gets x_k}ξ_k$. Then the update reads

$β_k = \frac{\langle ξ_{k+1}, \nu_k \rangle_{x_{k+1}} } { \langle P_{x_{k+1}\gets x_k} \delta_k, \nu_k\rangle_{x_{k+1}} },$

where $P_{a\gets b}(⋅)$ denotes a vector transport from the tangent space at $b$ to the one at $a$.

Constructor

HeestenesStiefelCoefficient(
    t::AbstractVectorTransportMethod=ParallelTransport(),
    a::StoreOptionsAction=()
)

Construct the Hestenes-Stiefel coefficient update rule, where the parallel transport is the default vector transport and a new storage is created by default.

source
Manopt.LiuStoreyCoefficient – Type
LiuStoreyCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [LuiStorey1991], adapted to manifolds: let $\nu_k = ξ_{k+1} - P_{x_{k+1}\gets x_k}ξ_k$, where $P_{a\gets b}(⋅)$ denotes a vector transport from the tangent space at $b$ to the one at $a$.

$β_k = - \frac{ \langle ξ_{k+1},\nu_k \rangle_{x_{k+1}} } {\langle \delta_k,ξ_k \rangle_{x_k}}.$

Constructor

LiuStoreyCoefficient(
    t::AbstractVectorTransportMethod=ParallelTransport(),
    a::StoreOptionsAction=()
)

Construct the Liu-Storey coefficient update rule, where the parallel transport is the default vector transport and a new storage is created by default.

source
Manopt.PolakRibiereCoefficient – Type
PolakRibiereCoefficient <: DirectionUpdateRule

Computes an update coefficient for the conjugate gradient method, where the ConjugateGradientDescentOptions o include the last iterate and gradient $x_k, ξ_k$, the current iterate and gradient $x_{k+1}, ξ_{k+1}$, and the last update direction $\delta=\delta_k$; the three previous values $x_k, ξ_k, \delta_k$ are stored in variables with the prefix Old. Based on [PolakRibiere1969] and [Polyak1969], adapted to manifolds: let $\nu_k = ξ_{k+1} - P_{x_{k+1}\gets x_k}ξ_k$, where $P_{a\gets b}(⋅)$ denotes a vector transport from the tangent space at $b$ to the one at $a$.

$β_k = \frac{ \langle ξ_{k+1}, \nu_k \rangle_{x_{k+1}} } {\lVert ξ_k \rVert_{x_k}^2 }.$

Constructor

PolakRibiereCoefficient(
    t::AbstractVectorTransportMethod=ParallelTransport(),
    a::StoreOptionsAction=()
)

Construct the Polak-Ribière coefficient update rule, where the parallel transport is the default vector transport and a new storage is created by default.

source