Plans for solvers
In order to start a solver, both a Problem and Options are required. Together they form a plan, and these are stored in this folder. For sub-problems there may be only Options, since they then refer to the same problem.
Options
For most algorithms a certain set of options can either be generated beforehand, or the high-level function interface with keyword arguments can be used. Generally the type
Manopt.Options — Type
Options
A general super type for all options.
Fields
The following fields are assumed to exist by default. If you use different ones, provide the corresponding access functions accordingly.
x – a point with the current iterate
stop – a StoppingCriterion.
Manopt.get_options — Function
get_options(o::Options)
return the undecorated Options of the (possibly) decorated o. As long as your decorated options store the options within o.options and the dispatch_options_decorator is set to Val{true}, the internal options are extracted.
Since the Options directly relate to a solver, they are documented with the corresponding Solvers. You can always access the options (since they might be decorated) by calling get_options.
Decorators for Options
Options can be decorated using the following trait and function to initialize them.
Manopt.dispatch_options_decorator — Function
dispatch_options_decorator(o::Options)
Indicate internally whether an Options o is of decorating type, i.e. it stores (encapsulates) options in itself, by default in the field o.options.
Decorators indicate this by returning Val{true} for further dispatch.
The default is Val{false}, i.e. by default Options are not decorated.
Manopt.is_options_decorator — Function
is_options_decorator(o::Options)
Indicate whether Options o are of decorator type.
Manopt.decorate_options — Function
decorate_options(o)
decorate the Options o with specific decorators.
Optional Arguments
Optional arguments provide necessary details on the decorators. A specific keyword is used to activate each decorator.
debug – (Array{Union{Symbol,DebugAction,String,Int},1}()) a set of symbols representing DebugActions, Strings used as dividers, and a subsampling integer. These are passed as a DebugGroup within :All to the DebugOptions decorator dictionary. The only exception is :Stop, which is passed to :Stop.
record – (Array{Union{Symbol,RecordAction,Int},1}()) specify recordings by using Symbols or RecordActions directly. The integer can again be used to only record every $i$th iteration.
See also DebugOptions, RecordOptions.
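For illustration, a minimal sketch of decorating existing solver options (assumed to be stored in a variable o) with both debug output and recording, following the keyword format described above:

o = decorate_options(o;
    debug = [:Iteration, " | ", :Cost, "\n", 10],  # print iteration and cost every 10th iteration
    record = [:Iteration, :Cost]                   # record iteration number and cost value
)
get_options(o)  # access the undecorated options again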
In general decorators often perform actions so we introduce
Manopt.AbstractOptionsAction — Type
AbstractOptionsAction
A common type for AbstractOptionsActions that might be triggered in decorators, for example DebugOptions or RecordOptions.
as well as a helper for storing values using keys, i.e.
Manopt.StoreOptionsAction — Type
StoreOptionsAction <: AbstractOptionsAction
internal storage for AbstractOptionsActions to store a tuple of fields from an Options object.
This functor possesses the usual interface of functions called during an iteration, i.e. it acts on (p,o,i), where p is a Problem, o is an Options and i is the current iteration.
Fields
values – a dictionary to store interim values based on certain Symbols
keys – an NTuple of Symbols to refer to fields of Options
once – whether to update the internal values only once per iteration
lastStored – last iterate where this AbstractOptionsAction was called (to determine once)
Constructors
StoreOptionsAction([keys=(), once=true])
Initialize the functor to an (empty) set of keys, where once determines whether more than one update per iteration is effective.
StoreOptionsAction(keys[, once=true])
Initialize the functor to a set of keys, where the dictionary is initialized to be empty. Further, once determines whether more than one update per iteration is effective; otherwise only the first update is stored and all others are ignored.
Manopt.get_storage — Function
get_storage(a,key)
return the internal value of the StoreOptionsAction a at the Symbol key.
Manopt.has_storage — Function
has_storage(a,key)
return whether the StoreOptionsAction a has a value stored at the Symbol key.
Manopt.update_storage! — Function
update_storage!(a,o)
update the internal values of the StoreOptionsAction a to the ones given in the Options o.
update_storage!(a,d)
update the internal values of the StoreOptionsAction a to the ones given in the dictionary d. The values are merged, where the values from d are preferred.
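For illustration, a minimal sketch (assuming o is an Options instance that has a field x) of storing and retrieving a value with these helpers:

a = StoreOptionsAction((:x,))   # keep track of the field :x
update_storage!(a, o)           # copy o.x into the internal dictionary
has_storage(a, :x)              # -> true
x_old = get_storage(a, :x)      # retrieve the stored value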
Debug Options
Manopt.DebugAction — Type
DebugAction
A DebugAction is a small functor to print/issue debug output. The usual call is given by (p,o,i) -> s that performs the debug based on a Problem p, Options o and the current iteration i.
By convention i=0 is interpreted as "For Initialization only", i.e. only debug info that prints initialization reacts, i<0 triggers updates of variables internally but does not trigger any output. Finally typemin(Int) is used to indicate a call from stop_solver! that returns true afterwards.
Fields (assumed by subtypes to exist)
print – method to perform the actual print. Can for example be set to a file export, or to @info. The default is the print function on the default Base.stdout.
Manopt.DebugChange — Type
DebugChange(a,prefix,print)
debug for the amount of change of the iterate (stored in o.x of the Options) during the last iteration. See DebugEntryChange.
Parameters
x0 – an initial value to already get a change after the first iterate. Can be left out.
a – (StoreOptionsAction( (:x,) )) the storage of the previous action
prefix – ("Last Change:") prefix of the debug output
print – (print) default method to perform the print.
Manopt.DebugCost — Type
DebugCost <: DebugAction
print the current cost function value, see get_cost.
Constructors
DebugCost(long,print)
where long indicates whether to print F(x): (default) or cost:
DebugCost(prefix,print)
set a prefix manually.
Manopt.DebugDivider — Type
DebugDivider <: DebugAction
print a small divider (default " | ").
Constructor
DebugDivider(div,print)
Manopt.DebugEntry — Type
DebugEntry <: DebugAction
print a certain field's entry of type {T} during the iterates
Additional Fields
field – the Symbol the entry can be accessed with within Options
Constructor
DebugEntry(f[, prefix="$f:", io=stdout])
Manopt.DebugEntryChange — Type
DebugEntryChange{T} <: DebugAction
print a certain entry's change during the iterates
Additional Fields
print – (print) function to print the result
prefix – ("Change of :x") prefix of the print out
field – the Symbol the field can be accessed with within Options
distance – function (p,o,x1,x2) to compute the change/distance between two values of the entry
storage – a StoreOptionsAction to store the previous value of :f
Constructors
DebugEntryChange(f,d[, a, prefix, io])
initialize the Debug to a field f and a distance d.
DebugEntryChange(v,f,d[, a, prefix="Change of $f:", io])
initialize the Debug to a field f and a distance d with initial value v for the history of o.field.
Manopt.DebugEvery — Type
DebugEvery <: DebugAction
evaluate and print debug only every $i$th iteration. Otherwise no print is performed. Whether internal variables are updated is determined by alwaysUpdate.
This method does not perform any print itself but relies on its children's print.
Manopt.DebugGroup — Type
DebugGroup <: DebugAction
group a set of DebugActions into one action, where the internal prints are removed by default and the resulting strings are concatenated
Constructor
DebugGroup(g)
construct a group consisting of an Array of DebugActions g that are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string.
Manopt.DebugIterate — Type
DebugIterate <: DebugAction
debug for the current iterate (stored in o.x).
Constructor
DebugIterate(io=stdout, long::Bool=false)
Parameters
long::Bool – whether to print x: or current iterate:
Manopt.DebugIteration — Type
DebugIteration <: DebugAction
debug for the current iteration (prefixed with #)
Manopt.DebugOptions — Type
DebugOptions <: Options
The debug options append to any options a debug functionality, i.e. they act as a decorator pattern. Internally a Dictionary is kept that stores a DebugAction for several occasions using a Symbol as reference. The default occasion is :All and for example solvers join this field with :Start, :Step and :Stop at the beginning, every iteration or the end of the algorithm, respectively.
The original options can still be accessed using the get_options function.
Fields (defaults in brackets)
options – the options that are extended by debug information
debugDictionary – a Dict{Symbol,DebugAction} to keep track of Debug for different actions
Constructors
DebugOptions(o,dA)
construct debug decorated options, where dA can be
- a DebugAction, then it is stored within the dictionary at :All
- an Array of DebugActions, then it is stored as a debugDictionary within :All.
- a Dict{Symbol,DebugAction}.
- an Array of Symbols, Strings and an Int for the DebugFactory
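For illustration, a minimal sketch (assuming o holds some solver options, and with constructor defaults assumed) of decorating them manually with a debug group that prints the iteration number and cost every 5th iteration:

dbg = DebugEvery(DebugGroup([DebugIteration(), DebugDivider(" | "), DebugCost(), DebugDivider("\n")]), 5)
dO = DebugOptions(o, dbg)   # the action is stored within the dictionary at :All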
Manopt.DebugStoppingCriterion — Type
DebugStoppingCriterion <: DebugAction
print the Reason provided by the stopping criterion. Usually this should be empty, unless the algorithm stops.
Manopt.DebugActionFactory — Method
DebugActionFactory(s)
create a DebugAction where
- a String yields the corresponding divider
- a DebugAction is passed through
- a Symbol creates a DebugEntry of that symbol, with the exceptions of :Change, :Iterate, :Iteration, and :Cost.
Manopt.DebugFactory — Method
DebugFactory(a)
given an array of Symbols, Strings, DebugActions and Ints
- The symbol :Stop creates an entry to display the stopping criterion at the end (:Stop => DebugStoppingCriterion())
- The symbol :Cost creates a DebugCost
- The symbol :Iteration creates a DebugIteration
- The symbol :Change creates a DebugChange
- any other symbol creates debug output of the corresponding field in Options
- any string creates a DebugDivider
- any DebugAction is directly included
- an Integer k introduces that debug is only printed every kth iteration
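For illustration, a minimal sketch of what the factory produces from such an array (the exact dictionary layout is inferred from the description above):

d = DebugFactory([:Iteration, " | ", :Cost, "\n", 10, :Stop])
# d[:All]  – a DebugEvery (every 10th iteration) wrapping a DebugGroup of iteration, divider and cost actions
# d[:Stop] – a DebugStoppingCriterion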
See DebugSolver for details on the decorated solver.
Further specific DebugActions can be found at the specific Options.
Record Options
Manopt.RecordAction — Type
RecordAction
A RecordAction is a small functor to record values. The usual call is given by (p,o,i) -> s that performs the record based on a Problem p, Options o and the current iteration i.
By convention i<=0 is interpreted as "For Initialization only", i.e. only initialize internal values but do not trigger any record; the same holds for i=typemin(Int), which is used to indicate stop, i.e. that the record is called from within stop_solver!, which returns true afterwards.
Fields (assumed by subtypes to exist)
recorded_values – an Array of the recorded values.
Manopt.RecordChange — Type
RecordChange <: RecordAction
record the amount of change of the iterate (stored in o.x of the Options) during the last iteration.
Additional Fields
storage – a StoreOptionsAction to store (at least) o.x to use this as the last value (to compute the change)
Manopt.RecordCost — Type
RecordCost <: RecordAction
record the current cost function value, see get_cost.
Manopt.RecordEntry — Type
RecordEntry{T} <: RecordAction
record a certain field's entry of type {T} during the iterates
Fields
recorded_values – the recorded Iterates
field – the Symbol the entry can be accessed with within Options
Manopt.RecordEntryChange — Type
RecordEntryChange{T} <: RecordAction
record a certain entry's change during the iterates
Additional Fields
recorded_values – the recorded Iterates
field – the Symbol the field can be accessed with within Options
distance – function (p,o,x1,x2) to compute the change/distance between two values of the entry
storage – a StoreOptionsAction to store (at least) getproperty(o, d.field)
Manopt.RecordEvery — Type
RecordEvery <: RecordAction
record only every $i$th iteration. Otherwise (optionally, but activated by default) just update internal tracking values.
This method does not perform any record itself but relies on its children's methods.
Manopt.RecordGroup — Type
RecordGroup <: RecordAction
group a set of RecordActions into one action, where the individual recordings are kept and accessing them yields a tuple of the recorded values per iteration
Constructor
RecordGroup(g)
construct a group consisting of an Array of RecordActions g that are recording en bloc; the method does not perform any record itself, but keeps an array of records. Accessing these yields a Tuple of the recorded values per iteration.
Manopt.RecordIterate — Type
RecordIterate <: RecordAction
record the iterate
Constructors
RecordIterate(x0)
initialize the iterate record array to the type of x0, e.g. your initial data.
RecordIterate(T)
initialize the iterate record array to the data type T.
Manopt.RecordIteration — Type
RecordIteration <: RecordAction
record the current iteration
Manopt.RecordOptions — Type
RecordOptions <: Options
append to any Options the decorator with record functionality. Internally a Dictionary is kept that stores a RecordAction for several occasions using a Symbol as reference. The default occasion is :All and for example solvers join this field with :Start, :Step and :Stop at the beginning, every iteration or the end of the algorithm, respectively.
The original options can still be accessed using the get_options function.
Fields
options – the options that are extended by the record functionality
recordDictionary – a Dict{Symbol,RecordAction} to keep track of all different recorded values
Constructors
RecordOptions(o,dR)
construct record decorated Options, where dR can be
- a RecordAction, then it is stored within the dictionary at :All
- an Array of RecordActions, then it is stored as a recordDictionary within :All.
- a Dict{Symbol,RecordAction}.
Manopt.RecordActionFactory — Method
RecordActionFactory(s)
create a RecordAction where
- a RecordAction is passed through
- a Symbol creates a RecordEntry of that symbol, with the exceptions of :Change, :Iterate, :Iteration, and :Cost.
Manopt.RecordFactory — Method
RecordFactory(a)
given an array of Symbols, RecordActions and Ints
- The symbol :Cost creates a RecordCost
- The symbol :Iteration creates a RecordIteration
- The symbol :Change creates a RecordChange
- any other symbol creates a RecordEntry of the corresponding field in Options
- any RecordAction is directly included
- an Integer k introduces that record is only performed every kth iteration
Manopt.get_record — Function
get_record(o[,s=:Step])
return the recorded values from within the RecordOptions o that were recorded with respect to the Symbol s as an Array. The default refers to any recordings during an Iteration represented by the Symbol :Step.
Manopt.get_record — Method
get_record(r)
return the recorded values stored within a RecordAction r.
Manopt.has_record — Method
has_record(o)
check whether the Options o are decorated with RecordOptions
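For illustration, a minimal sketch of recording during a solver run and accessing the values afterwards; the gradient_descent keywords, in particular return_options, are assumptions about the high-level solver interface:

using Manopt, Manifolds
M = Euclidean(2)
F(x) = sum(x .^ 2)
∇F(x) = 2 .* x
o = gradient_descent(M, F, ∇F, [2.0, 3.0]; record = [:Iteration, :Cost], return_options = true)
has_record(o)   # -> true
get_record(o)   # recordings of the :Step occasion, one entry per iteration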
See RecordSolver for details on the decorated solver.
Further specific RecordActions can be found at the specific Options.
There is one internal helper that might be useful for your own actions, namely
Manopt.record_or_reset! — Function
record_or_reset!(r,v,i)
either record (i>0 and not Inf) the value v within the RecordAction r or reset (i<0) the internal storage, where v has to match the internal value type of the corresponding RecordAction.
Stepsize and Linesearch
The step size determination is implemented as a Functor based on
Manopt.Stepsize — Type
Stepsize
An abstract type for the functors representing step sizes, i.e. they are callable structures. The naming scheme is TypeOfStepSize, e.g. ConstantStepsize.
Every Stepsize has to provide a constructor, and its function has to have the interface (p,o,i), where a Problem, Options, and the current number of iterations are the arguments, and it returns a number, namely the stepsize to use.
See also Linesearch.
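As an illustration of this interface, a minimal sketch of a custom step size functor; the HarmonicStepsize type is purely illustrative and not part of Manopt:

using Manopt
mutable struct HarmonicStepsize <: Stepsize
    initial::Float64
end
# return initial/i in iteration i, independent of problem and options
(s::HarmonicStepsize)(p::Problem, o::Options, i) = s.initial / max(i, 1)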
In general, the following step sizes and line searches are available:
Manopt.ArmijoLinesearch — Type
ArmijoLinesearch <: Linesearch
A functor representing Armijo line search including the last run's state, i.e. a last step size.
Fields
initialStepsize – (1.0) an initial step size
retraction_method – (ExponentialRetraction()) the retraction to use, defaults to the exponential map
contractionFactor – (0.95) exponent for line search reduction
sufficientDecrease – (0.1) gain within Armijo's rule
lastStepSize – (initialStepsize) the last step size we start the search with
Constructor
ArmijoLinesearch()
with the fields above in their order as optional arguments.
This method returns the functor to perform Armijo line search, where two interfaces are available:
- based on a tuple (p,o,i) of a GradientProblem p, Options o and a current iterate i.
- with (M,x,F,∇Fx[,η=-∇Fx]) -> s, where a Manifold M, a current point x, a function F that maps from the manifold to the reals, its gradient (a tangent vector) ∇Fx=∇F(x) at x, and an optional search direction tangent vector η=-∇Fx are the arguments.
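For illustration, a minimal sketch of the second call form on a Euclidean manifold:

using Manopt, Manifolds
M = Euclidean(2)
F(x) = sum(x .^ 2)
∇F(x) = 2 .* x
x = [1.0, 2.0]
ls = ArmijoLinesearch()
s = ls(M, x, F, ∇F(x))   # step size along the default direction η = -∇F(x)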
Manopt.ConstantStepsize — Type
ConstantStepsize <: Stepsize
A functor that always returns a fixed step size.
Fields
length – constant value for the step size.
Constructor
ConstantStepsize(s)
initialize the stepsize to a constant s
Manopt.DecreasingStepsize — Type
DecreasingStepsize()
A functor that represents several decreasing step sizes
Fields
length – (1) the initial step size $l$.
factor – (1) a value $f$ to multiply the initial step size with every iteration
subtrahend – (0) a value $a$ that is subtracted every iteration
exponent – (1) a value $e$ such that the $e$th power of the current iteration number appears in the denominator
In total the complete formulae reads for the $i$th iterate as
$ s_i = \frac{(l-i\cdot a)f^i}{i^e}$
and hence the default simplifies to just $ s_i = \frac{l}{i} $
Constructor
DecreasingStepsize(l,f,a,e)
initializes all fields above, where none of them is mandatory.
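For illustration, with the defaults $f=1$, $a=0$, $e=1$ and, say, $l=10$, the formula yields $s_1=10$, $s_2=5$, $s_4=2.5$. A minimal sketch (the positional argument order is an assumption):

s = DecreasingStepsize(10.0)   # l = 10, remaining fields keep their defaults
# as a functor it is called with a Problem p, Options o and the iteration number i,
# e.g. s(p, o, 2) returns 5.0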
Manopt.Linesearch — Type
Linesearch <: Stepsize
An abstract functor to represent line search type step size determinations, see Stepsize for details. One example is the ArmijoLinesearch functor.
Compared to simple step sizes, the linesearch functors provide an interface of the form (p,o,i,η) -> s with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, e.g. the negative gradient.
Manopt.NonmonotoneLinesearch — Type
NonmonotoneLinesearch <: Linesearch
A functor representing a nonmonotone line search using the Barzilai-Borwein step size[Iannazzo2018]. Together with a gradient descent algorithm this line search represents the Riemannian Barzilai-Borwein with nonmonotone line-search (RBBNMLS) algorithm. We shifted the order of the algorithm steps from the paper by Iannazzo and Porcelli so that in each iteration we first find
$y_{k} = \nabla F(x_{k}) - \operatorname{T}_{x_{k-1}\to x_{k}}\bigl(\nabla F(x_{k-1})\bigr)$
and
$s_{k} = - α_{k-1}\operatorname{T}_{x_{k-1}\to x_{k}}\bigl(\nabla F(x_{k-1})\bigr),$
where $α_{k-1}$ is the step size computed in the last iteration and $\operatorname{T}$ is a vector transport. We then find the Barzilai–Borwein step size
$α_k^{\text{BB}} = \min\bigl(α_{\max}, \max(α_{\min}, τ_{k})\bigr),$
where
$τ_{k} = \frac{⟨s_{k}, s_{k}⟩_{x_{k}}}{⟨s_{k}, y_{k}⟩_{x_{k}}}$
if the direct strategy is chosen,
$τ_{k} = \frac{⟨s_{k}, y_{k}⟩_{x_{k}}}{⟨y_{k}, y_{k}⟩_{x_{k}}}$
in case of the inverse strategy and an alternation between the two in case of the alternating strategy. Then we find the smallest $h = 0, 1, 2 …$ such that
$F\bigl(\operatorname{retr}_{x_{k}}(- σ^h α_k^{\text{BB}} \nabla F(x_{k}))\bigr) \leq \max_{1 ≤ j ≤ \min(k+1,m)} F(x_{k+1-j}) - γ σ^h α_k^{\text{BB}} ⟨\nabla F(x_{k}), \nabla F(x_{k})⟩_{x_{k}},$
where $σ$ is a step length reduction factor $\in (0,1)$, $m$ is the number of iterations after which the function value has to be lower than the current one and $γ$ is the sufficient decrease parameter $\in (0,1)$. We can then find the new stepsize by
$α_{k} = σ^h α_k^{\text{BB}}.$
Fields
initial_stepsize – (1.0) the step size we start the search with
retraction_method – (ExponentialRetraction()) the retraction to use
vector_transport_method – (ParallelTransport()) the vector transport method to use
stepsize_reduction – (0.5) step size reduction factor contained in the interval (0,1)
sufficient_decrease – (1e-4) sufficient decrease parameter contained in the interval (0,1)
memory_size – (10) number of iterations after which the cost value needs to be lower than the current one
min_stepsize – (1e-3) lower bound for the Barzilai-Borwein step size greater than zero
max_stepsize – (1e3) upper bound for the Barzilai-Borwein step size greater than min_stepsize
strategy – (direct) defines if the new step size is computed using the direct, indirect or alternating strategy
storage – (x, ∇F) a StoreOptionsAction to store old_x and old_∇, the x-value and corresponding gradient of the previous iteration
Constructor
NonmonotoneLinesearch()
with the fields above in their order as optional arguments.
This method returns the functor to perform nonmonotone line search.
Manopt.WolfePowellBinaryLinesearch — Type
WolfePowellBinaryLinesearch <: Linesearch
A Linesearch method that determines a step size $t$ fulfilling the Wolfe conditions based on a binary chop. Let $η$ be a search direction and $c_1,c_2>0$ be two constants. Then with
$A(t) = F(x_+) ≤ F(x) + c_1 t ⟨\nabla F(x), η⟩_{x} \quad\text{and}\quad W(t) = ⟨\nabla F(x_+), \text{V}_{x\to x_+}η⟩_{x_+} ≥ c_2 ⟨η, \nabla F(x)⟩_{x},$
where $x_+ = \operatorname{retr}_x(tη)$ is the current trial point, and $\text{V}$ is a vector transport, we perform the following Algorithm similar to Algorithm 7 from [Huang2014]
- set $α=0$, $β=∞$ and $t=1$.
- While either $A(t)$ does not hold or $W(t)$ does not hold do steps 3-5.
- If $A(t)$ fails, set $β=t$.
- If $A(t)$ holds but $W(t)$ fails, set $α=t$.
- If $β<∞$ set $t=\frac{α+β}{2}$, otherwise set $t=2α$.
Constructor
WolfePowellBinaryLinesearch(
retr::AbstractRetractionMethod=ExponentialRetraction(),
vtr::AbstractVectorTransportMethod=ParallelTransport(),
c_1::Float64=10^(-4),
c_2::Float64=0.999
)
Manopt.WolfePowellLineseach — Type
WolfePowellLineseach <: Linesearch
Do a backtracking linesearch to find a step size $α$ that fulfills the Wolfe conditions along a search direction $η$ starting from $x$, i.e. a step size satisfying the sufficient decrease condition $F(x_+) ≤ F(x) + c_1 α ⟨\nabla F(x), η⟩_{x}$ and the curvature condition $⟨\nabla F(x_+), \text{V}_{x\to x_+}η⟩_{x_+} ≥ c_2 ⟨η, \nabla F(x)⟩_{x}$, where $x_+ = \operatorname{retr}_x(αη)$.
Constructor
WolfePowellLinesearch(
retr::AbstractRetractionMethod=ExponentialRetraction(),
vtr::AbstractVectorTransportMethod=ParallelTransport(),
c_1::Float64=10^(-4),
c_2::Float64=0.999
)
Manopt.get_stepsize — Method
get_stepsize(p::Problem, o::Options, vars...)
return the stepsize stored within Options o when solving Problem p. This method also works for decorated options and the Stepsize function within the options, by default stored in o.stepsize.
Manopt.linesearch_backtrack — Method
linesearch_backtrack(M, F, x, ∇F, s, decrease, contract, retr, η = -∇F, f0 = F(x))
perform a linesearch for
- a manifold M
- a cost function F,
- an iterate x
- the gradient $∇F(x)$
- an initial stepsize s usually called $γ$
- a sufficient decrease
- a contraction factor $σ$
- a retraction, which defaults to the ExponentialRetraction()
- a search direction $η = -∇F(x)$
- an offset, $f_0 = F(x)$
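For illustration, a minimal sketch on a Euclidean manifold; since it is not clear from the signature above whether the retraction has a default, it is passed explicitly:

using Manopt, Manifolds
M = Euclidean(2)
F(x) = sum(x .^ 2)
x = [1.0, 2.0]
∇Fx = 2 .* x
s = linesearch_backtrack(M, F, x, ∇Fx, 1.0, 1e-4, 0.95, ExponentialRetraction())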
Problems
A problem usually contains its cost function and provides an implementation to access the cost.
Manopt.Problem — Type
Problem
Specify properties (values) and related functions for computing a certain optimization problem.
Manopt.get_cost — Function
get_cost(p,x)
evaluate the cost function F stored within a Problem at the point x.
Cost based problem
Manopt.CostProblem — Type
CostProblem <: Problem
specify a problem for solvers just based on cost functions, i.e. gradient free ones.
Fields
M – a manifold $\mathcal M$
cost – a function $F\colon\mathcal M\to\mathbb R$ to minimize
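For illustration, a minimal sketch (the positional constructor is an assumption based on the fields above):

using Manopt, Manifolds
M = Sphere(2)
F(x) = x[1]                    # minimize the first coordinate on the sphere
p = CostProblem(M, F)
get_cost(p, [0.0, 0.0, 1.0])   # -> 0.0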
Gradient based problem
Manopt.GradientProblem — Type
GradientProblem <: Problem
specify a problem for gradient based algorithms.
Fields
M – a manifold $\mathcal M$
cost – a function $F\colon\mathcal M\to\mathbb R$ to minimize
gradient – the gradient $\nabla F\colon\mathcal M \to \mathcal T\mathcal M$ of the cost function $F$
See also gradient_descent, GradientDescentOptions.
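For illustration, a minimal sketch (the positional constructor is an assumption based on the fields above):

using Manopt, Manifolds
M = Euclidean(2)
F(x) = sum(x .^ 2)
∇F(x) = 2 .* x
p = GradientProblem(M, F, ∇F)
get_cost(p, [1.0, 2.0])       # -> 5.0
get_gradient(p, [1.0, 2.0])   # -> [2.0, 4.0]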
Manopt.StochasticGradientProblem — Type
StochasticGradientProblem <: Problem
A stochastic gradient problem consists of
- a Manifold M
- a(n optional) cost function $f(x) = \displaystyle\sum_{i=1}^n f_i(x)$
- an array of gradients, i.e. a function that returns an array, or an array of functions $\{∇f_i\}_{i=1}^n$.
Constructors
StochasticGradientProblem(M::Manifold, ∇::Function; cost=Missing())
StochasticGradientProblem(M::Manifold, ∇::AbstractVector{<:Function}; cost=Missing())
Create a Stochastic gradient problem with an optional cost and the gradient either as one function (returning an array) or a vector of functions.
Manopt.get_gradient — Function
get_gradient(p,x)
evaluate the gradient of a GradientProblem p at the point x.
get_gradient(p,x)
evaluate the gradient of a HessianProblem p at the point x.
get_gradient(P::StochasticGradientProblem, k, x)
Evaluate one of the summand gradients $∇f_k$, $k\in \{1,…,n\}$, at x.
Manopt.get_gradients — Function
get_gradients(P::StochasticGradientProblem, x)
Evaluate all summand gradients $\{∇f_i\}_{i=1}^n$ at x.
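For illustration, a minimal sketch with two summand gradients given as a vector of functions:

using Manopt, Manifolds
M = Euclidean(2)
∇ = [x -> 2 .* x, x -> x .- 1.0]   # illustrative gradients of two summands
sgp = StochasticGradientProblem(M, ∇)
get_gradient(sgp, 2, [1.0, 1.0])   # gradient of the second summand, here [0.0, 0.0]
get_gradients(sgp, [1.0, 1.0])     # all summand gradients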
Subgradient based problem
Manopt.SubGradientProblem — Type
SubGradientProblem <: Problem
A structure to store information about a subgradient based optimization problem
Fields
manifold – a Manifold
cost – the function $F$ to be minimized
subgradient – a function returning a subgradient $\partial F$ of $F$
Constructor
SubGradientProblem(M, f, ∂f)
Generate the Problem for a subgradient problem, i.e. a function f on the manifold M and a function ∂f that returns an element from the subdifferential at a point.
Manopt.get_subgradient — Function
get_subgradient(p,x)
Evaluate the (sub)gradient of a SubGradientProblem p at the point x.
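For illustration, a minimal sketch for $F(x) = |x_1|$ on a Euclidean manifold:

using Manopt, Manifolds
M = Euclidean(2)
f(x) = abs(x[1])
∂f(x) = [sign(x[1]), 0.0]         # one element of the subdifferential
p = SubGradientProblem(M, f, ∂f)
get_subgradient(p, [-2.0, 1.0])   # -> [-1.0, 0.0]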
Proximal Map(s) based problem
Manopt.ProximalProblem — Type
ProximalProblem <: Problem
specify a problem for solvers based on the evaluation of proximal map(s).
Fields
M – a Manifold $\mathcal M$
cost – a function $F\colon\mathcal M\to\mathbb R$ to minimize
proxes – proximal maps $\operatorname{prox}_{\lambda\varphi}\colon\mathcal M\to\mathcal M$ as functions (λ,x) -> y, i.e. the prox parameter λ also belongs to the signature of the proximal map.
number_of_proxes – (length(proxes)) number of proximal maps, e.g. if one of the maps is a combined one such that the proximal map functions return more than one entry per function
Manopt.get_proximal_map — Function
get_proximal_map(p,λ,x,i)
evaluate the ith proximal map of ProximalProblem p at the point x of p.M with parameter $λ>0$.
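For illustration, a minimal sketch with two proximal maps following the (λ,x) -> y signature (the positional constructor is an assumption):

using Manopt, Manifolds
M = Euclidean(2)
F(x) = sum(abs.(x)) + sum(x .^ 2)
prox1(λ, x) = sign.(x) .* max.(abs.(x) .- λ, 0.0)   # soft thresholding for the ℓ1 term
prox2(λ, x) = x ./ (1 + 2λ)                          # proximal map of the squared norm term
p = ProximalProblem(M, F, [prox1, prox2])
get_proximal_map(p, 0.5, [1.0, -0.2], 1)             # evaluate the first proximal map -> [0.5, 0.0]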
Further planned problems
Manopt.HessianProblem — Type
HessianProblem <: Problem
specify a problem for Hessian based algorithms.
Fields
M – a manifold $\mathcal M$
cost – a function $F\colon\mathcal M\to\mathbb R$ to minimize
gradient – the gradient $\nabla F\colon\mathcal M \to \mathcal T\mathcal M$ of the cost function $F$
hessian – the Hessian $\operatorname{Hess}[F] (\cdot)_ {x} \colon \mathcal T_{x} \mathcal M \to \mathcal T_{x} \mathcal M$ of the cost function $F$
precon – the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of $F$)
Manopt.getHessian — Function
getHessian(p,x,ξ)
evaluate the Hessian of a HessianProblem p at the point x applied to a tangent vector ξ.
Manopt.get_preconditioner — Function
get_preconditioner(p,x,ξ)
evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function F) of a HessianProblem p at the point x applied to a tangent vector ξ.
- Iannazzo2018
B. Iannazzo, M. Porcelli, The Riemannian Barzilai–Borwein Method with Nonmonotone Line Search and the Matrix Geometric Mean Computation, In: IMA Journal of Numerical Analysis. Volume 38, Issue 1, January 2018, Pages 495–517, doi 10.1093/imanum/drx015
- Huang2014
Huang, W.: Optimization algorithms on Riemannian manifolds with applications, Dissertation, Florida State University, 2014.