# Plans for solvers

In order to start a solver, both a `Problem` and `Options` are required. Together they form a **plan** and these are stored in this folder. For sub-problems there may also only be `Options`, since they then refer to the same problem.

## Options

For most algorithms a certain set of options can either be generated beforehand, or the function can be used with keyword arguments. Generally the type

`Manopt.Options` — Type

`Options`

A general super type for all options.

**Fields**

The following fields are assumed to be default. If you use different ones, provide the access functions accordingly:

- `x` – a point with the current iterate
- `stop` – a `StoppingCriterion`.

`Manopt.get_options` — Function

`get_options(o::Options)`

return the undecorated `Options` of the (possibly) decorated `o`. As long as your decorated options store the options within `o.options` and the `dispatch_options_decorator` is set to `Val{true}`, the internal options are extracted.

Since the `Options` directly relate to a solver, they are documented with the corresponding solvers. You can always access the options (since they might be decorated) by calling `get_options`.
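The unwrapping described above can be sketched in plain Julia. This is a minimal standalone stand-in, not the actual Manopt implementation; the types `GradientDescentOptions` and `DebugLikeOptions` are hypothetical placeholders:

```julia
# Minimal sketch of the Options decorator pattern: a trait tells whether
# an Options value wraps another one, and get_options unwraps recursively.
abstract type Options end

struct GradientDescentOptions <: Options
    x::Vector{Float64}   # current iterate
end

struct DebugLikeOptions <: Options
    options::Options     # the decorated (inner) options
end

# trait: is `o` a decorator?
dispatch_options_decorator(o::Options) = Val(false)
dispatch_options_decorator(o::DebugLikeOptions) = Val(true)

# unwrap until the undecorated options are reached
get_options(o::Options) = _get_options(o, dispatch_options_decorator(o))
_get_options(o::Options, ::Val{false}) = o
_get_options(o::Options, ::Val{true}) = get_options(o.options)

o = DebugLikeOptions(GradientDescentOptions([0.0, 1.0]))
get_options(o) isa GradientDescentOptions  # true
```

The trait-based dispatch means new decorators only need to define `dispatch_options_decorator` and store their inner options in an `options` field.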

### Decorators for Options

Options can be decorated using the following trait and function to initialize

`Manopt.dispatch_options_decorator` — Function

`dispatch_options_decorator(o::Options)`

Indicate internally whether an `Options` `o` is of decorating type, i.e. it stores (encapsulates) options in itself, by default in the field `o.options`.

Decorators indicate this by returning `Val{true}` for further dispatch.

The default is `Val{false}`, i.e. by default options are not decorated.

`Manopt.is_options_decorator` — Function

`is_options_decorator(o::Options)`

Indicate whether `Options` `o` are of decorator type.

`Manopt.decorate_options` — Function

`decorate_options(o)`

decorate the `Options` `o` with specific decorators.

**Optional Arguments**

Optional arguments provide necessary details on the decorators. A specific one is used to activate certain decorators.

- `debug` – (`Array{Union{Symbol,DebugAction,String,Int},1}()`) a set of symbols representing `DebugAction`s, `String`s used as dividers, and a subsampling integer. These are passed as a `DebugGroup` within `:All` to the `DebugOptions` decorator dictionary. The only exception is `:Stop`, which is passed to `:Stop`.
- `record` – (`Array{Union{Symbol,RecordAction,Int},1}()`) specify recordings by using `Symbol`s or `RecordAction`s directly. The integer can again be used for only recording every $i$th iteration.

**See also**

In general, decorators often perform actions, so we introduce

`Manopt.AbstractOptionsAction` — Type

`AbstractOptionsAction`

a common `Type` for `AbstractOptionsActions` that might be triggered in decorators, for example `DebugOptions` or `RecordOptions`.

as well as a helper for storing values using keys, i.e.

`Manopt.StoreOptionsAction` — Type

`StoreOptionsAction <: AbstractOptionsAction`

internal storage for `AbstractOptionsAction`s to store a tuple of fields from an `Options`.

This functor possesses the usual interface of functions called during an iteration, i.e. it acts on `(p,o,i)`, where `p` is a `Problem`, `o` is an `Options` and `i` is the current iteration.

**Fields**

- `values` – a dictionary to store interim values based on certain `Symbols`
- `keys` – an `NTuple` of `Symbols` to refer to fields of `Options`
- `once` – whether to update the internal values only once per iteration
- `lastStored` – last iterate where this `AbstractOptionsAction` was called (to determine `once`)

**Constructors**

`StoreOptionsAction([keys=(), once=true])`

Initialize the functor to an (empty) set of keys, where `once` determines whether more than one update per iteration is effective.

`StoreOptionsAction(keys, once=true)`

Initialize the functor to a set of keys, where the dictionary is initialized to be empty. Further, `once` determines whether more than one update per iteration is effective; otherwise only the first update is stored and all others are ignored.

`Manopt.get_storage` — Function

`get_storage(a,key)`

return the internal value of the `StoreOptionsAction` `a` at the `Symbol` `key`.

`Manopt.has_storage` — Function

`has_storage(a,key)`

return whether the `StoreOptionsAction` `a` has a value stored at the `Symbol` `key`.

`Manopt.update_storage!` — Function

`update_storage!(a,o)`

update the `StoreOptionsAction` `a` internal values to the ones given on the `Options` `o`.

`update_storage!(a,d)`

update the `StoreOptionsAction` `a` internal values to the ones given in the dictionary `d`. The values are merged, where the values from `d` are preferred.
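A simplified stand-in illustrating how such a storage functor can work (this mirrors the described fields and accessors but is not the actual Manopt implementation; `SimpleStore` and `Opts` are hypothetical):

```julia
# Sketch of a storage action: copy selected fields of an options-like struct
# into an internal dictionary, at most once per iteration when `once` is set.
mutable struct SimpleStore
    values::Dict{Symbol,Any}
    keys::Tuple{Vararg{Symbol}}
    once::Bool
    last_stored::Int
end
SimpleStore(keys=(); once=true) = SimpleStore(Dict{Symbol,Any}(), keys, once, -1)

# the functor is called with the options `o` and the current iteration `i`
function (a::SimpleStore)(o, i::Int)
    (a.once && a.last_stored == i) && return a   # skip repeated updates within one iteration
    for k in a.keys
        a.values[k] = deepcopy(getproperty(o, k))
    end
    a.last_stored = i
    return a
end

get_storage(a::SimpleStore, key::Symbol) = a.values[key]
has_storage(a::SimpleStore, key::Symbol) = haskey(a.values, key)

struct Opts; x::Vector{Float64}; end
a = SimpleStore((:x,))
a(Opts([1.0, 2.0]), 1)
get_storage(a, :x)   # [1.0, 2.0]
```

Storing a `deepcopy` is what makes the previous iterate available later, e.g. for computing the change between iterations.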

#### Debug Options

`Manopt.DebugAction` — Type

`DebugAction`

A `DebugAction` is a small functor to print/issue debug output. The usual call is given by `(p,o,i) -> s` that performs the debug based on a `Problem` `p`, `Options` `o` and the current iterate `i`.

By convention `i=0` is interpreted as "for initialization only", i.e. only debug info that prints initialization reacts; `i<0` triggers updates of variables internally but does not trigger any output. Finally `typemin(Int)` is used to indicate a call from `stop_solver!` that returns true afterwards.

**Fields (assumed by subtypes to exist)**

- `print` – method to perform the actual print. Can for example be set to a file export, or to `@info`. The default is the `print` function on the default `Base.stdout`.
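The `(p,o,i)` convention can be sketched with a minimal debug functor (a standalone stand-in; `SimpleDebugCost`, `P` and `O` are hypothetical illustration types, not part of Manopt):

```julia
# A DebugAction-style functor: i == 0 prints initialization info,
# i < 0 produces no output (internal updates only), i > 0 prints per iteration.
mutable struct SimpleDebugCost
    io::IO
end
function (d::SimpleDebugCost)(p, o, i::Int)
    i < 0 && return nothing                 # negative iteration: no output
    prefix = i == 0 ? "Initial F(x): " : "F(x): "
    print(d.io, prefix, p.cost(o.x))
    return nothing
end

# tiny problem/options stand-ins, just enough for the call
struct P; cost::Function; end
struct O; x::Float64; end

buf = IOBuffer()
d = SimpleDebugCost(buf)
d(P(x -> x^2), O(3.0), 1)
String(take!(buf))   # "F(x): 9.0"
```

Keeping the output stream as a field (here `io`) is what allows redirecting debug output to a file or buffer instead of `stdout`.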

`Manopt.DebugChange` — Type

`DebugChange(a,prefix,print)`

debug for the amount of change of the iterate (stored in `o.x` of the `Options`) during the last iteration. See `DebugEntryChange`.

**Parameters**

- `x0` – an initial value to already get a change after the first iterate. Can be left out.
- `a` – (`StoreOptionsAction( (:x,) )`) the storage of the previous action
- `prefix` – (`"Last Change:"`) prefix of the debug output
- `print` – (`print`) default method to perform the print.

`Manopt.DebugCost` — Type

`DebugCost <: DebugAction`

print the current cost function value, see `get_cost`.

**Constructors**

`DebugCost(long,print)`

where `long` indicates whether to print `F(x):` (default) or `cost:`.

`DebugCost(prefix,print)`

set a prefix manually.

`Manopt.DebugDivider` — Type

`DebugDivider <: DebugAction`

print a small divider `div` (default `" | "`).

**Constructor**

`DebugDivider(div,print)`

`Manopt.DebugEntry` — Type

`DebugEntry <: DebugAction`

print a certain field's entry of type `{T}` during the iterates.

**Additional Fields**

- `field` – Symbol the entry can be accessed with within `Options`

**Constructor**

`DebugEntry(f[, prefix="$f:", io=stdout])`

`Manopt.DebugEntryChange` — Type

`DebugEntryChange{T} <: DebugAction`

print a certain entry's change during iterates.

**Additional Fields**

- `print` – (`print`) function to print the result
- `prefix` – (`"Change of :x"`) prefix to the print out
- `field` – Symbol the field can be accessed with within `Options`
- `distance` – function `(p,o,x1,x2)` to compute the change/distance between two values of the entry
- `storage` – a `StoreOptionsAction` to store the previous value of `:f`

**Constructors**

`DebugEntryChange(f,d[, a, prefix, io])`

initialize the debug to a field `f` and a `distance` `d`.

`DebugEntryChange(v,f,d[, a, prefix="Change of $f:", io])`

initialize the debug to a field `f` and a `distance` `d` with initial value `v` for the history of `o.field`.

`Manopt.DebugEvery` — Type

`DebugEvery <: DebugAction`

evaluate and print debug only every $i$th iteration. Otherwise no print is performed. Whether internal variables are updated is determined by `alwaysUpdate`.

This method does not perform any print itself but relies on its children's prints.

`Manopt.DebugGroup` — Type

`DebugGroup <: DebugAction`

group a set of `DebugAction`s into one action, where the internal prints are removed by default and the resulting strings are concatenated.

**Constructor**

`DebugGroup(g)`

construct a group consisting of an array of `DebugAction`s `g`, that are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string.

`Manopt.DebugIterate` — Type

`DebugIterate <: DebugAction`

debug for the current iterate (stored in `o.x`).

**Constructor**

`DebugIterate(io=stdout, long::Bool=false)`

**Parameters**

- `long::Bool` – whether to print `x:` or `current iterate`

`Manopt.DebugIteration` — Type

`DebugIteration <: DebugAction`

debug for the current iteration (prefixed with `#`).

`Manopt.DebugOptions` — Type

`DebugOptions <: Options`

The debug options append a debug functionality to any options, i.e. they act as a decorator pattern. Internally a `Dict`ionary is kept that stores a `DebugAction` for several occasions using a `Symbol` as reference. The default occasion is `:All` and for example solvers join this field with `:Start`, `:Step` and `:Stop` at the beginning, every iteration or the end of the algorithm, respectively.

The original options can still be accessed using the `get_options` function.

**Fields (defaults in brackets)**

- `options` – the options that are extended by debug information
- `debugDictionary` – a `Dict{Symbol,DebugAction}` to keep track of debug for different actions

**Constructors**

`DebugOptions(o,dA)`

construct debug decorated options, where `dA` can be

- a `DebugAction`, then it is stored within the dictionary at `:All`
- an `Array` of `DebugAction`s, then it is stored as a `debugDictionary` within `:All`
- a `Dict{Symbol,DebugAction}`
- an array of Symbols, Strings and an Int for the `DebugFactory`

`Manopt.DebugStoppingCriterion` — Type

`DebugStoppingCriterion <: DebugAction`

print the reason provided by the stopping criterion. Usually this should be empty, unless the algorithm stops.

`Manopt.DebugActionFactory` — Method

`DebugActionFactory(s)`

create a `DebugAction` where

- a `String` yields the corresponding divider
- a `DebugAction` is passed through
- a [`Symbol`] creates a `DebugEntry` of that symbol, with the exceptions of `:Change`, `:Iterate`, `:Iteration`, and `:Cost`.

`Manopt.DebugFactory` — Method

`DebugFactory(a)`

given an array of `Symbol`s, `String`s, `DebugAction`s and `Ints`

- the symbol `:Stop` creates an entry to display the stopping criterion at the end (`:Stop => DebugStoppingCriterion()`)
- the symbol `:Cost` creates a `DebugCost`
- the symbol `:Iteration` creates a `DebugIteration`
- the symbol `:Change` creates a `DebugChange`
- any other symbol creates debug output of the corresponding field in `Options`
- any string creates a `DebugDivider`
- any `DebugAction` is directly included
- an Integer `k` introduces that debug is only printed every `k`th iteration

See DebugSolver for details on the decorated solver.

Further specific `DebugAction`s can be found at the specific Options.
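The factory idea — mapping Symbols and Strings to debug closures — can be sketched in a few lines. This is a loose standalone illustration of the rules above, not the Manopt implementation; `simple_debug_factory`, `P` and `O` are hypothetical:

```julia
# Turn Symbols and Strings into (p, o, i) -> String debug closures,
# loosely following the DebugFactory rules above.
function simple_debug_factory(entry)
    entry isa String     && return (p, o, i) -> entry                  # divider
    entry === :Cost      && return (p, o, i) -> "F(x): $(p.cost(o.x))"
    entry === :Iteration && return (p, o, i) -> "# $i"
    entry isa Symbol     && return (p, o, i) -> "$entry: $(getproperty(o, entry))"
    return entry                                                       # pass actions through
end

struct P; cost::Function; end
struct O; x::Float64; end

actions = simple_debug_factory.([:Iteration, " | ", :Cost])
join(a(P(x -> 2x), O(1.5), 3) for a in actions)   # "# 3 | F(x): 3.0"
```

Grouping the resulting closures and concatenating their strings is essentially what `DebugGroup` does for the real actions.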

#### Record Options

`Manopt.RecordAction` — Type

`RecordAction`

A `RecordAction` is a small functor to record values. The usual call is given by `(p,o,i) -> s` that performs the record based on a `Problem` `p`, `Options` `o` and the current iterate `i`.

By convention `i<=0` is interpreted as "for initialization only", i.e. only initialize internal values but do not trigger any record; the same holds for `i=typemin(Int)`, which is used to indicate `stop`, i.e. that the record is called from within `stop_solver!`, which returns true afterwards.

**Fields (assumed by subtypes to exist)**

- `recorded_values` – an `Array` of the recorded values.

`Manopt.RecordChange` — Type

`RecordChange <: RecordAction`

record the amount of change of the iterate (stored in `o.x` of the `Options`) during the last iteration.

**Additional Fields**

- `storage` – a `StoreOptionsAction` to store (at least) `o.x` to use this as the last value (to compute the change)

`Manopt.RecordCost` — Type

`RecordCost <: RecordAction`

record the current cost function value, see `get_cost`.

`Manopt.RecordEntry` — Type

`RecordEntry{T} <: RecordAction`

record a certain field's entry of type `{T}` during the iterates.

**Fields**

- `recorded_values` – the recorded iterates
- `field` – Symbol the entry can be accessed with within `Options`

`Manopt.RecordEntryChange` — Type

`RecordEntryChange{T} <: RecordAction`

record a certain entry's change during iterates.

**Additional Fields**

- `recorded_values` – the recorded iterates
- `field` – Symbol the field can be accessed with within `Options`
- `distance` – function `(p,o,x1,x2)` to compute the change/distance between two values of the entry
- `storage` – a `StoreOptionsAction` to store (at least) `getproperty(o, d.field)`

`Manopt.RecordEvery` — Type

`RecordEvery <: RecordAction`

record only every $i$th iteration. Otherwise (optionally, but activated by default) just update internal tracking values.

This method does not perform any record itself but relies on its children's methods.

`Manopt.RecordGroup` — Type

`RecordGroup <: RecordAction`

group a set of `RecordAction`s into one action, where the internal `RecordAction`s act independently, but the results can be collected in a grouped fashion, i.e. tuples per call of this group. The entries can later be addressed either by index or by semantic Symbols.

**Constructors**

`RecordGroup(g::Array{<:RecordAction, 1})`

construct a group consisting of an array of `RecordAction`s `g`.

`RecordGroup(g, symbols)`

**Examples**

`r = RecordGroup([RecordIteration(), RecordCost()])`

A RecordGroup to record the current iteration and the cost. The cost can then be accessed using `get_record(r,2)` or `r[2]`.

`r = RecordGroup([RecordIteration(), RecordCost()], Dict(:Cost => 2))`

A RecordGroup to record the current iteration and the cost, which can then be accessed using `get_record(:Cost)` or `r[:Cost]`.

`r = RecordGroup([RecordIteration(), :Cost => RecordCost()])`

A RecordGroup identical to the previous constructor, just a little easier to use.

`Manopt.RecordIterate` — Type

`RecordIterate <: RecordAction`

record the iterate.

**Constructors**

`RecordIterate(x0)`

initialize the iterate record array to the type of `x0`, e.g. your initial data.

`RecordIterate(P)`

initialize the iterate record array to the data type `T`.

`Manopt.RecordIteration` — Type

`RecordIteration <: RecordAction`

record the current iteration.

`Manopt.RecordOptions` — Type

`RecordOptions <: Options`

append to any `Options` the decorator with record functionality. Internally a `Dict`ionary is kept that stores a `RecordAction` for several concurrent modes using a `Symbol` as reference. The default mode is `:Iteration`, which is used to store information that is recorded during the iterations. RecordActions might be added to `:Start` or `:Stop` to record values at the beginning or for the stopping time point, respectively.

The original options can still be accessed using the `get_options` function.

**Fields**

- `options` – the options that are extended by record information
- `recordDictionary` – a `Dict{Symbol,RecordAction}` to keep track of all different recorded values

**Constructors**

`RecordOptions(o,dR)`

construct record decorated `Options`, where `dR` can be

- a `RecordAction`, then it is stored within the dictionary at `:Iteration`
- an `Array` of `RecordAction`s, then it is stored as a `recordDictionary` within the dictionary at `:All`
- a `Dict{Symbol,RecordAction}`

`Manopt.RecordActionFactory` — Method

`RecordActionFactory(s)`

create a `RecordAction` where

- a `RecordAction` is passed through
- a [`Symbol`] creates a `RecordEntry` of that symbol, with the exceptions of `:Change`, `:Iterate`, `:Iteration`, and `:Cost`.

`Manopt.RecordFactory` — Method

`RecordFactory(o::Options, a)`

given an array of `Symbol`s, `RecordAction`s and `Ints`

- the symbol `:Cost` creates a `RecordCost`
- the symbol `:Iteration` creates a `RecordIteration`
- the symbol `:Change` creates a `RecordChange`
- any other symbol creates a `RecordEntry` of the corresponding field in `Options`
- any `RecordAction` is directly included
- a semantic pair `:symbol => RecordAction` is directly included
- an Integer `k` introduces that record is only performed every `k`th iteration

`Manopt.get_record` — Function

```
get_record(o::Options[, s=:Iteration])
get_record(o::RecordOptions[, s=:Iteration])
```

return the recorded values from within the `RecordOptions` `o` that were recorded with respect to the `Symbol s` as an `Array`. The default refers to any recordings during an `:Iteration`.

When called with arbitrary `Options`, this method looks for the `RecordOptions` decorator and calls `get_record` on the decorator.

`Manopt.get_record` — Method

`get_record(r::RecordAction)`

return the recorded values stored within a `RecordAction` `r`.

`Manopt.get_record` — Method

`get_record(r::RecordGroup)`

return an array of tuples, where each tuple is a recorded set, e.g. per iteration / record call.

`get_record(r::RecordGroup, i::Int)`

return an array of values corresponding to the `i`th entry in this record group.

`get_record(r::RecordGroup, s::Symbol)`

return an array of recorded values with respect to `s`, see `RecordGroup`.

`get_record(r::RecordGroup, s1::Symbol, s2::Symbol,...)`

return an array of tuples, where each tuple is a recorded set corresponding to the symbols `s1, s2,...` per iteration / record call.

`Manopt.get_record_action` — Function

`get_record_action(o::Options, s::Symbol)`

return the action contained in the (first) `RecordOptions` decorator within the `Options` `o`.

`Manopt.get_record_options` — Method

`get_record_options(o::Options)`

return the `RecordOptions` among the decorators from the `Options` `o`.

`Manopt.has_record` — Method

`has_record(o::Options)`

check whether the `Options` `o` are decorated with `RecordOptions`.

`Base.getindex` — Method

```
getindex(ro::RecordOptions, s::Symbol)
ro[s]
```

Get the recorded values for recording type `s`, see `get_record` for details.

```
getindex(ro::RecordOptions, s::Symbol, i...)
ro[s, i...]
```

Access the recording type of type `s` and call its `RecordAction` with `[i...]`.

`Base.getindex` — Method

```
getindex(r::RecordGroup, s::Symbol)
r[s]
getindex(r::RecordGroup, sT::NTuple{N,Symbol})
r[sT]
getindex(r::RecordGroup, i)
r[i]
```

return an array of recorded values with respect to `s`, the symbols from the tuple `sT` or the index `i`. See `get_record` for details.

See RecordSolver for details on the decorated solver.

Further specific `RecordAction`s can be found at the specific Options.

There is one internal helper that might be useful for your own actions, namely

`Manopt.record_or_reset!` — Function

`record_or_reset!(r,v,i)`

either record (`i>0` and not `Inf`) the value `v` within the `RecordAction` `r` or reset (`i<0`) the internal storage, where `v` has to match the internal value type of the corresponding `RecordAction`.
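The record-or-reset pattern can be sketched as follows (a standalone stand-in following the sign convention above; `SimpleRecordCost` is a hypothetical illustration type):

```julia
# Positive iteration numbers append to the record,
# negative ones reset the internal storage.
mutable struct SimpleRecordCost
    recorded_values::Vector{Float64}
end
function record_or_reset!(r::SimpleRecordCost, v::Float64, i::Int)
    if i > 0
        push!(r.recorded_values, v)
    elseif i < 0        # negative iteration numbers reset the record
        empty!(r.recorded_values)
    end
    return r
end

r = SimpleRecordCost(Float64[])
record_or_reset!(r, 1.0, 1)
record_or_reset!(r, 0.5, 2)
r.recorded_values        # [1.0, 0.5]
record_or_reset!(r, 0.0, -1)
r.recorded_values        # Float64[]
```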

### Stepsize and Linesearch

The step size determination is implemented as a functor based on

`Manopt.Stepsize` — Type

`Stepsize`

An abstract type for the functors representing step sizes, i.e. they are callable structures. The naming scheme is `TypeOfStepSize`, e.g. `ConstantStepsize`.

Every Stepsize has to provide a constructor, and its function has to have the interface `(p,o,i)` where a `Problem` as well as `Options` and the current number of iterations are the arguments; it returns a number, namely the step size to use.

**See also**

In general there are

`Manopt.ArmijoLinesearch` — Type

`ArmijoLinesearch <: Linesearch`

A functor representing Armijo line search including the last run's state, i.e. a last step size.

**Fields**

- `initialStepsize` – (`1.0`) an initial step size
- `retraction_method` – (`ExponentialRetraction()`) the retraction to use, defaults to the exponential map
- `contractionFactor` – (`0.95`) exponent for line search reduction
- `sufficientDecrease` – (`0.1`) gain within Armijo's rule
- `lastStepSize` – (`initialstepsize`) the last step size we start the search with

**Constructor**

`ArmijoLinesearch()`

with the fields above in their order as optional arguments.

This method returns the functor to perform Armijo line search, where two interfaces are available:

- based on a tuple `(p,o,i)` of a `GradientProblem` `p`, `Options` `o` and a current iterate `i`
- with `(M, x, F, gradFx[,η=-gradFx]) -> s` where a manifold `M`, a current point `x`, a function `F` that maps from the manifold to the reals, its gradient (a tangent vector) `gradFx` $=\operatorname{grad}F(x)$ at `x` and an optional search direction tangent vector `η=-gradFx` are the arguments.

`Manopt.ConstantStepsize` — Type

`ConstantStepsize <: Stepsize`

A functor that always returns a fixed step size.

**Fields**

- `length` – constant value for the step size.

**Constructor**

`ConstantStepsize(s)`

initialize the stepsize to a constant `s`.
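The callable-structure interface a `Stepsize` has to provide can be sketched with a constant step size stand-in (a minimal illustration, not the Manopt type; `SimpleConstantStepsize` is hypothetical):

```julia
abstract type Stepsize end

# a functor returning a fixed step size, regardless of problem, options, iteration
struct SimpleConstantStepsize <: Stepsize
    length::Float64
end
(s::SimpleConstantStepsize)(p, o, i) = s.length

s = SimpleConstantStepsize(0.1)
s(nothing, nothing, 42)   # 0.1
```

A solver can thus call any step size object uniformly as `stepsize(p, o, i)` without knowing which concrete rule is in use.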

`Manopt.DecreasingStepsize` — Type

`DecreasingStepsize()`

A functor that represents several decreasing step sizes.

**Fields**

- `length` – (`1`) the initial step size $l$.
- `factor` – (`1`) a value $f$ to multiply the initial step size with every iteration
- `subtrahend` – (`0`) a value $a$ that is subtracted every iteration
- `exponent` – (`1`) a value $e$ the current iteration number's $e$th exponential is taken of

In total the complete formula reads for the $i$th iterate as

```math
s_i = \frac{(l - i a)f^i}{i^e}
```

and hence the default simplifies to just $s_i = \frac{l}{i}$.

**Constructor**

`DecreasingStepsize(l,f,a,e)`

initializes all fields above, where none of them is mandatory.
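A numeric sketch of this decreasing rule, assuming the form $s_i = (l - ia)\,f^i / i^e$ reconstructed from the stated fields (the function name is a hypothetical stand-in):

```julia
# s_i = (l - i*a) * f^i / i^e ; with the defaults f=1, a=0, e=1 this is l/i
decreasing_step(i; l=1.0, f=1.0, a=0.0, e=1.0) = (l - i * a) * f^i / i^e

decreasing_step(4)                         # 0.25, the default l/i
decreasing_step(2; l=10.0, f=0.5, a=1.0)   # (10 - 2) * 0.25 / 2 = 1.0
```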

`Manopt.Linesearch` — Type

`Linesearch <: Stepsize`

An abstract functor to represent line search type step size determinations, see `Stepsize` for details. One example is the `ArmijoLinesearch` functor.

Compared to simple step sizes, the linesearch functors provide an interface of the form `(p,o,i,η) -> s` with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, e.g. the negative gradient.

`Manopt.NonmonotoneLinesearch` — Type

`NonmonotoneLinesearch <: Linesearch`

A functor representing a nonmonotone line search using the Barzilai-Borwein step size^{[Iannazzo2018]}. Together with a gradient descent algorithm this line search represents the Riemannian Barzilai-Borwein with nonmonotone line-search (RBBNMLS) algorithm. We shifted the order of the algorithm steps from the paper by Iannazzo and Porcelli so that in each iteration we first find

```math
y_{k} = \operatorname{grad}F(x_{k}) - \operatorname{T}_{x_{k-1} → x_k}(\operatorname{grad}F(x_{k-1}))
```

and

```math
s_{k} = - α_{k-1} \operatorname{T}_{x_{k-1} → x_k}(\operatorname{grad}F(x_{k-1})),
```

where $α_{k-1}$ is the step size computed in the last iteration and $\operatorname{T}$ is a vector transport. We then find the Barzilai–Borwein step size

```math
α_k^{\text{BB}} = \begin{cases}
\min(α_{\text{max}}, \max(α_{\text{min}}, τ_{k})), & \text{if } ⟨s_{k}, y_{k}⟩ > 0,\\
α_{\text{max}}, & \text{else,}
\end{cases}
```

where

```math
τ_{k} = \frac{⟨s_{k}, s_{k}⟩}{⟨s_{k}, y_{k}⟩}
```

if the direct strategy is chosen,

```math
τ_{k} = \frac{⟨s_{k}, y_{k}⟩}{⟨y_{k}, y_{k}⟩}
```

in case of the inverse strategy, and an alternation between the two in case of the alternating strategy. Then we find the smallest $h = 0, 1, 2, …$ such that

```math
F(\operatorname{retr}_{x_k}(- σ^h α_k^{\text{BB}} \operatorname{grad}F(x_k))) ≤
\max_{1 ≤ j ≤ \min(k+1,m)} F(x_{k+1-j}) - γ σ^h α_k^{\text{BB}} ⟨\operatorname{grad}F(x_k), \operatorname{grad}F(x_k)⟩,
```

where $σ$ is a step length reduction factor $∈ (0,1)$, $m$ is the number of iterations after which the function value has to be lower than the current one and $γ$ is the sufficient decrease parameter $∈(0,1)$. We can then find the new stepsize by

```math
α_k = σ^h α_k^{\text{BB}}.
```

**Fields**

- `initial_stepsize` – (`1.0`) the step size we start the search with
- `retraction_method` – (`ExponentialRetraction()`) the retraction to use
- `vector_transport_method` – (`ParallelTransport()`) the vector transport method to use
- `stepsize_reduction` – (`0.5`) step size reduction factor contained in the interval (0,1)
- `sufficient_decrease` – (`1e-4`) sufficient decrease parameter contained in the interval (0,1)
- `memory_size` – (`10`) number of iterations after which the cost value needs to be lower than the current one
- `min_stepsize` – (`1e-3`) lower bound for the Barzilai-Borwein step size greater than zero
- `max_stepsize` – (`1e3`) upper bound for the Barzilai-Borwein step size greater than min_stepsize
- `strategy` – (`direct`) defines if the new step size is computed using the direct, indirect or alternating strategy
- `storage` – (`x`, `gradient`) a `StoreOptionsAction` to store `old_x` and `old_gradient`, the x-value and corresponding gradient of the previous iteration

**Constructor**

`NonmonotoneLinesearch()`

with the fields above in their order as optional arguments.

This method returns the functor to perform nonmonotone line search.

`Manopt.WolfePowellBinaryLinesearch` — Type

`WolfePowellBinaryLinesearch <: Linesearch`

A `Linesearch` method that determines a step size `t` fulfilling the Wolfe conditions based on a binary chop. Let $η$ be a search direction and $c_1,c_2>0$ be two constants. Then with

```math
A(t) = F(x_+) ≤ F(x) + c_1 t ⟨\operatorname{grad}F(x), η⟩_x
\quad\text{and}\quad
W(t) = ⟨\operatorname{grad}F(x_+), \text{V}_{x→x_+}η⟩_{x_+} ≥ c_2 ⟨η, \operatorname{grad}F(x)⟩_x,
```

where $x_+ = \operatorname{retr}_x(tη)$ is the current trial point, and $\text{V}$ is a vector transport, we perform the following algorithm similar to Algorithm 7 from ^{[Huang2014]}

1. set $α=0$, $β=∞$ and $t=1$.
2. While either $A(t)$ does not hold or $W(t)$ does not hold do steps 3-5.
3. If $A(t)$ fails, set $β=t$.
4. If $A(t)$ holds but $W(t)$ fails, set $α=t$.
5. If $β<∞$ set $t=\frac{α+β}{2}$, otherwise set $t=2α$.

**Constructor**

```
WolfePowellBinaryLinesearch(
    retr::AbstractRetractionMethod=ExponentialRetraction(),
    vtr::AbstractVectorTransportMethod=ParallelTransport(),
    c_1::Float64=10^(-4),
    c_2::Float64=0.999
)
```

`Manopt.WolfePowellLinesearch` — Type

`WolfePowellLinesearch <: Linesearch`

Do a backtracking line search to find a step size $α$ that fulfils the Wolfe conditions along a search direction $η$ starting from $x$, i.e.

```math
F(\operatorname{retr}_x(αη)) ≤ F(x) + c_1 α ⟨\operatorname{grad}F(x), η⟩_x
\quad\text{and}\quad
⟨\operatorname{grad}F(\operatorname{retr}_x(αη)), \text{V}_{x→\operatorname{retr}_x(αη)}η⟩ ≥ c_2 ⟨η, \operatorname{grad}F(x)⟩_x.
```

**Constructor**

```
WolfePowellLinesearch(
    retr::AbstractRetractionMethod=ExponentialRetraction(),
    vtr::AbstractVectorTransportMethod=ParallelTransport(),
    c_1::Float64=10^(-4),
    c_2::Float64=0.999
)
```

`Manopt.get_stepsize` — Method

`get_stepsize(p::Problem, o::Options, vars...)`

return the stepsize stored within `Options` `o` when solving `Problem` `p`. This method also works for decorated options and the `Stepsize` function within the options, by default stored in `o.stepsize`.

`Manopt.linesearch_backtrack` — Method

`linesearch_backtrack(M, F, x, gradFx, s, decrease, contract, retr, η = -gradFx, f0 = F(x))`

perform a line search for

- a manifold `M`
- a cost function `F`
- an iterate `x`
- the gradient $\operatorname{grad}F(x)$
- an initial stepsize `s`, usually called $γ$
- a sufficient `decrease`
- a `contract`ion factor $σ$
- a `retr`action, which defaults to the `ExponentialRetraction()`
- a search direction $η = -\operatorname{grad}F(x)$
- an offset, $f_0 = F(x)$
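On a Euclidean space, where the retraction is just vector addition, the backtracking idea can be sketched as follows. This is a simplified stand-in under that assumption, not the Manopt implementation; `backtrack` is a hypothetical name:

```julia
# Euclidean sketch of Armijo backtracking: shrink s by `contract` until the
# sufficient decrease condition F(x + s*η) ≤ F(x) + decrease*s*⟨gradFx, η⟩ holds.
function backtrack(F, x, gradFx; s=1.0, decrease=1e-4, contract=0.95, η=-gradFx)
    f0 = F(x)
    while F(x .+ s .* η) > f0 + decrease * s * sum(gradFx .* η)
        s *= contract
        s < 1e-12 && break   # safeguard against infinite loops
    end
    return s
end

F(x) = sum(abs2, x)            # F(x) = ‖x‖²
x = [2.0, 0.0]
g = 2 .* x                     # grad F(x)
s = backtrack(F, x, g)
F(x .+ s .* (-g)) < F(x)       # true: the returned step decreases the cost
```

Replacing `x .+ s .* η` by a retraction and the dot product by the Riemannian inner product gives the manifold version described above.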

## Problems

A problem usually contains its cost function and provides an implementation to access the cost

`Manopt.Problem` — Type

`Problem{T}`

Describe the problem that should be optimized by stating all properties that do not change during an optimization or that are dependent on a certain solver.

The parameter `T` can be used to distinguish problems with different representations or implementations. The default parameter is `AllocatingEvaluation`, which might be slower but easier to use. The usually faster parameter value is `MutatingEvaluation`.

See `Options` for the changing and solver dependent properties.

`Manopt.get_cost` — Function

`get_cost(p, x)`

evaluate the cost function `F` stored within a `Problem` `p` at the point `x`.

A problem can be of different type; more specifically, whether its contained functions, for example to compute the gradient, work with allocation or without. To be precise, an allocating function `X = gradF(x)` allocates memory for its result `X`, while `gradF!(X,x)` does not.
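The two signatures can be sketched for $F(x) = \lVert x\rVert^2$ on $\mathbb R^n$ (plain Julia, independent of Manopt):

```julia
# allocating variant: X = gradF(x) creates a new array for the result
gradF(x) = 2 .* x

# mutating variant: writes into the preallocated X, allocates nothing
function gradF!(X, x)
    X .= 2 .* x
    return X
end

x = [1.0, 2.0]
X = similar(x)
gradF!(X, x)
X == gradF(x)    # true — same result, different memory behaviour
```

In tight solver loops the mutating form avoids one array allocation per gradient evaluation, which is why it is usually the faster choice.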

`Manopt.AbstractEvaluationType` — Type

`AbstractEvaluationType`

An abstract type to specify the kind of evaluation a `Problem` supports.

`Manopt.AllocatingEvaluation` — Type

`AllocatingEvaluation <: AbstractEvaluationType`

A parameter for a `Problem` indicating that the problem uses functions that allocate memory for their result, i.e. they work out of place.

`Manopt.MutatingEvaluation` — Type

`MutatingEvaluation <: AbstractEvaluationType`

A parameter for a `Problem` indicating that the problem uses functions that do not allocate memory but work on their input, i.e. in place.

### Cost based problem

`Manopt.CostProblem` — Type

`CostProblem{T, Manifold, TCost} <: Problem{T}`

specify a problem for solvers just based on cost functions, i.e. gradient free ones.

**Fields**

- `M` – a manifold $\mathcal M$
- `cost` – a function $F: \mathcal M → ℝ$ to minimize

**Constructors**

`CostProblem(M, cost; evaluation=AllocatingEvaluation())`

Generate a problem. While this problem does not have any allocating functions, the type `T` can be set for consistency reasons with other problems.

**See also**

### Gradient based problem

`Manopt.AbstractGradientProblem` — Type

`AbstractGradientProblem{T} <: Problem{T}`

An abstract type for all problems that provide a (full) gradient, where `T` is an `AbstractEvaluationType` for the gradient function.

`Manopt.GradientProblem` — Type

`GradientProblem{T} <: AbstractGradientProblem{T}`

specify a problem for gradient based algorithms.

**Fields**

- `M` – a manifold $\mathcal M$
- `cost` – a function $F: \mathcal M → ℝ$ to minimize
- `gradient!!` – the gradient $\operatorname{grad}F:\mathcal M → \mathcal T\mathcal M$ of the cost function $F$.

Depending on the `AbstractEvaluationType` `T` the gradient has to be provided

- as a function `x -> X` that allocates memory for `X` itself for an `AllocatingEvaluation`
- as a function `(X,x) -> X` that works in place of `X` for a `MutatingEvaluation`

**Constructors**

`GradientProblem(M, cost, gradient; evaluation=AllocatingEvaluation())`

**See also**

`Manopt.StochasticGradientProblem` — Type

`StochasticGradientProblem <: Problem`

A stochastic gradient problem consists of

- a `Manifold M`
- a(n optional) cost function $f(x) = \displaystyle\sum_{i=1}^n f_i(x)$
- an array of gradients, i.e. a function that returns an array, or an array of functions $\{\operatorname{grad}f_i\}_{i=1}^n$, where both variants can be given in the allocating variant and the array of functions may also be provided as mutating functions `(X,x) -> X`.

**Constructors**

```
StochasticGradientProblem(M::Manifold, gradF::Function;
    cost=Missing(), evaluation=AllocatingEvaluation()
)
StochasticGradientProblem(M::Manifold, gradF::AbstractVector{<:Function};
    cost=Missing(), evaluation=AllocatingEvaluation()
)
```

Create a stochastic gradient problem with an optional `cost` and the gradient either as one function (returning an array) or a vector of functions.

`Manopt.get_gradient` — Function

```
get_gradient(p::AbstractGradientProblem{T}, x)
get_gradient!(p::AbstractGradientProblem{T}, X, x)
```

evaluate the gradient of an `AbstractGradientProblem{T}` `p` at the point `x`.

The evaluation is done in place of `X` for the `!`-variant. The `T=AllocatingEvaluation` problem might still allocate memory within. When the non-mutating variant is called with a `T=MutatingEvaluation`, memory for the result is allocated.

```
get_gradient(p::StochasticGradientProblem, k, x)
get_gradient!(p::StochasticGradientProblem, Y, k, x)
```

Evaluate one of the summands' gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at `x` (in place of `Y`).

Note that for the `MutatingEvaluation` based problem and a single function for the stochastic gradient, the mutating variant is not available, since it would require too many allocations.

```
get_gradient(P::AlternatingGradientProblem, x)
get_gradient!(P::AlternatingGradientProblem, Y, x)
```

Evaluate all summands' gradients at a point `x` on the `ProductManifold M` (in place of `Y`).

```
get_gradient(p::AlternatingGradientProblem, k, x)
get_gradient!(p::AlternatingGradientProblem, Y, k, x)
```

Evaluate one of the component gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at `x` (in place of `Y`).

`Manopt.get_gradients`

— Function

```
get_gradients(P::StochasticGradientProblem, x)
get_gradients!(P::StochasticGradientProblem, Y, x)
```

Evaluate all summand gradients $\{\operatorname{grad}f_i\}_{i=1}^n$ at `x` (in place of `Y`).

Note that for a `MutatingEvaluation` based problem with a single function for the stochastic gradient, the allocating variant is not available.

### Subgradient based problem

`Manopt.SubGradientProblem`

— Type`SubGradientProblem <: Problem`

A structure to store information about a subgradient based optimization problem.

**Fields**

- `manifold` – a manifold
- `cost` – the function $F$ to be minimized
- `subgradient` – a function returning a subgradient $\partial F$ of $F$

**Constructor**

`SubGradientProblem(M, f, ∂f)`

Generate the [`Problem`] for a subgradient problem, i.e. a function `f` on the manifold `M` and a function `∂f` that returns an element from the subdifferential at a point.
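As an illustration of what such a `∂f` might return, consider the Euclidean function $f(x) = \lVert x\rVert_1$; the sketch below is a hypothetical example, not taken from Manopt:

```julia
# Hedged sketch: for the Euclidean f(x) = ‖x‖₁, a valid subgradient has
# entries sign(xᵢ) where xᵢ ≠ 0 and any value in [-1, 1] at a kink; picking 0
# there yields one concrete element of the subdifferential ∂f(x).
∂f(x) = sign.(x)

@assert ∂f([2.0, -3.0, 0.0]) == [1.0, -1.0, 0.0]
```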

`Manopt.get_subgradient`

— Function

```
get_subgradient(p, q)
get_subgradient!(p, X, q)
```

Evaluate the (sub)gradient of a `SubGradientProblem` `p` at the point `q`.

The evaluation is done in place of `X` for the `!`-variant. A `T=AllocatingEvaluation` problem might still allocate memory internally. When the non-mutating variant is called on a `T=MutatingEvaluation` problem, memory for the result is allocated.

### Proximal Map(s) based problem

`Manopt.ProximalProblem`

— Type`ProximalProblem <: Problem`

specify a problem for solvers based on the evaluation of proximal map(s).

**Fields**

- `M` – a Riemannian manifold
- `cost` – a function $F:\mathcal M→ℝ$ to minimize
- `proxes` – proximal maps $\operatorname{prox}_{λ\varphi}:\mathcal M→\mathcal M$ as functions `(λ, x) -> y`, i.e. the prox parameter $λ$ also belongs to the signature of the proximal map
- `number_of_proxes` – (`length(proxes)`) number of proximal maps, e.g. if one of the maps is a combined one such that the proximal map functions return more than one entry per function


`Manopt.get_proximal_map`

— Function`get_proximal_map(p, λ, x, i)`

evaluate the `i`th proximal map of `ProximalProblem p` at the point `x` of `p.M` with parameter $λ>0$.
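To illustrate the documented signature `(λ, x) -> y`, here is a hedged Euclidean sketch (the $\ell_1$ prox, i.e. soft thresholding; an illustrative example, not part of Manopt):

```julia
# Sketch in the documented signature (λ, x) -> y, for the Euclidean case
# φ(x) = ‖x‖₁: the proximal map is componentwise soft thresholding.
prox_l1(λ, x) = sign.(x) .* max.(abs.(x) .- λ, 0.0)

@assert prox_l1(0.5, [1.0, -0.2, 2.0]) == [0.5, 0.0, 1.5]
```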

### Hessian based problem

`Manopt.HessianProblem`

— Type`HessianProblem <: Problem`

specify a problem for Hessian based algorithms.

**Fields**

- `M` : a manifold $\mathcal M$
- `cost` : a function $F:\mathcal M→ℝ$ to minimize
- `gradient` : the gradient $\operatorname{grad}F:\mathcal M → \mathcal T\mathcal M$ of the cost function $F$
- `hessian` : the Hessian $\operatorname{Hess}F(x)[⋅]: \mathcal T_{x} \mathcal M → \mathcal T_{x} \mathcal M$ of the cost function $F$
- `precon` : the symmetric, positive definite preconditioner (an approximation of the inverse of the Hessian of $F$)


`Manopt.get_hessian`

— Function

```
get_hessian(p::HessianProblem{T}, q, X)
get_hessian!(p::HessianProblem{T}, Y, q, X)
```

evaluate the Hessian of a `HessianProblem` `p` at the point `q`, applied to a tangent vector `X`, i.e. compute $\operatorname{Hess}f(q)[X]$.

The evaluation is done in place of `Y` for the `!`-variant. A `T=AllocatingEvaluation` problem might still allocate memory internally. When the non-mutating variant is called on a `T=MutatingEvaluation` problem, memory for the result is allocated.
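As a minimal illustration of a Hessian acting on a tangent vector, consider the Euclidean quadratic cost below (a sketch with illustrative names, not the Manopt implementation):

```julia
# Sketch: for the Euclidean cost F(x) = 1/2 x'Ax with symmetric A, the Hessian
# applied to a tangent vector X is Hess F(x)[X] = A*X; the `!`-variant writes
# the result into Y, mirroring get_hessian / get_hessian!.
A = [2.0 0.0; 0.0 4.0]
hessF(x, X) = A * X                 # allocating variant
hessF!(Y, x, X) = (Y .= A * X)      # in-place variant

Y = zeros(2)
hessF!(Y, [0.0, 0.0], [1.0, 1.0])
@assert Y == hessF([0.0, 0.0], [1.0, 1.0]) == [2.0, 4.0]
```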

`Manopt.get_preconditioner`

— Function`get_preconditioner(p, x, ξ)`

evaluate the symmetric, positive definite preconditioner (an approximation of the inverse of the Hessian of the cost function `F`) of a `HessianProblem` `p` at the point `x`, applied to a tangent vector `ξ`.

### Primal dual based problem

`Manopt.PrimalDualProblem`

— Type`PrimalDualProblem{T, mT <: Manifold, nT <: Manifold} <: Problem{T}`

Describes a problem for the linearized or exact Chambolle-Pock algorithm.[BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020][ChambollePock2011]

**Fields**

All fields with `!!` can either be mutating or non-mutating functions, which should be set depending on the parameter `T <: AbstractEvaluationType`.

- `M`, `N` – two manifolds $\mathcal M$, $\mathcal N$
- `cost` – $F + G(Λ(⋅))$ to evaluate interim cost function values
- `linearized_forward_operator!!` – the linearized operator for the forward operation in the algorithm, $DΛ$
- `linearized_adjoint_operator!!` – the adjoint differential $(DΛ)^* : \mathcal N → T\mathcal M$
- `prox_F!!` – the proximal map belonging to $F$
- `prox_G_dual!!` – the proximal map belonging to $g_n^*$
- `Λ!!` – (`forward_operator`) the forward operator (if given) $Λ: \mathcal M → \mathcal N$

Usually either $DΛ$ (for the linearized algorithm) or $Λ$ is required.

**Constructor**

```
LinearizedPrimalDualProblem(M, N, cost, prox_F, prox_G_dual, adjoint_linearized_operator;
    linearized_forward_operator::Union{Function,Missing}=missing,
    Λ::Union{Function,Missing}=missing,
    evaluation::AbstractEvaluationType=AllocatingEvaluation()
)

The last optional argument can be used to provide the 4 or 5 functions as allocating or mutating (in-place) variants. Note that the first argument of each function is always the manifold under consideration; for mutating functions, the mutated argument is the second.

`Manopt.get_primal_prox`

— Function```
y = get_primal_prox(p::PrimalDualProblem, σ, x)
get_primal_prox!(p::PrimalDualProblem, y, σ, x)
```

Evaluate the proximal map of $F$ stored within the `PrimalDualProblem`, which can also be computed in place of `y`.

`Manopt.get_dual_prox`

— Function

```
y = get_dual_prox(p::PrimalDualProblem, n, τ, ξ)
get_dual_prox!(p::PrimalDualProblem, y, n, τ, ξ)
```

Evaluate the proximal map of $G_n^*$ stored within the `PrimalDualProblem`, which can also be computed in place of `y`.

`Manopt.forward_operator`

— Function

```
y = forward_operator(p::PrimalDualProblem, x)
forward_operator!(p::PrimalDualProblem, y, x)
```

Evaluate the forward operator $Λ(x)$ stored within the `PrimalDualProblem` (in place of `y`).

`Manopt.linearized_forward_operator`

— Function

```
Y = linearized_forward_operator(p::PrimalDualProblem, m, X, n)
linearized_forward_operator!(p::PrimalDualProblem, Y, m, X, n)
```

Evaluate the linearized operator (differential) $DΛ(m)[X]$ stored within the `PrimalDualProblem` (in place of `Y`), where `n = Λ(m)`.

`Manopt.adjoint_linearized_operator`

— Function

```
X = adjoint_linearized_operator(p::PrimalDualProblem, m, n, Y)
adjoint_linearized_operator!(p::PrimalDualProblem, X, m, n, Y)
```

Evaluate the adjoint of the linearized forward operator, $(DΛ(m))^*[Y]$, stored within the `PrimalDualProblem` (in place of `X`). Since $Y∈T_n\mathcal N$, both $m$ and $n=Λ(m)$ are necessary arguments, mainly because the forward operator $Λ$ might be `missing` in `p`.
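The interplay of $Λ$, $DΛ$, and $(DΛ)^*$ can be checked in a small Euclidean sketch (illustrative names, assuming a linear operator; not part of Manopt):

```julia
# Sketch: for a linear map Λ(x) = A*x between Euclidean spaces, the
# differential is DΛ(m)[X] = A*X at every base point m, and its adjoint is
# (DΛ(m))*[Y] = A'*Y, the relation the operators above satisfy in general.
A = [1.0 2.0; 3.0 4.0]
Λ(x) = A * x
DΛ(m, X) = A * X
adjDΛ(m, n, Y) = A' * Y   # n = Λ(m) is unused here, but kept in the signature

m = [1.0, 1.0]; X = [1.0, 0.0]; Y = [1.0, 1.0]
# adjoint property: ⟨DΛ(m)[X], Y⟩ = ⟨X, (DΛ(m))*[Y]⟩
@assert DΛ(m, X)' * Y == X' * adjDΛ(m, Λ(m), Y)
```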

- Iannazzo2018: B. Iannazzo, M. Porcelli, *The Riemannian Barzilai–Borwein Method with Nonmonotone Line Search and the Matrix Geometric Mean Computation*, IMA Journal of Numerical Analysis 38(1), 495–517, 2018. doi: 10.1093/imanum/drx015
- Huang2014: W. Huang, *Optimization algorithms on Riemannian manifolds with applications*, Dissertation, Florida State University, 2014.
- BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020: R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck, J. Vidal-Núñez, *Fenchel Duality Theory and a Primal-Dual Algorithm on Riemannian Manifolds*, Foundations of Computational Mathematics, 2021. doi: 10.1007/s10208-020-09486-5, arXiv: 1908.02022
- ChambollePock2011: A. Chambolle, T. Pock, *A first-order primal-dual algorithm for convex problems with applications to imaging*, Journal of Mathematical Imaging and Vision 40(1), 120–145, 2011. doi: 10.1007/s10851-010-0251-1