# Plans for solvers

In order to start a solver, both a `Problem` and `Options` are required. Together they form a **plan**, and these are stored in this folder. For sub-problems there are sometimes also only `Options`, since they then refer to the same problem.

## Options

For most algorithms a certain set of options can either be generated beforehand, or the function can be called with keyword arguments. Generally the type

`Manopt.Options` — Type `Options`

A general super type for all options.

**Fields**

The following fields are assumed to be default. If you use different ones, provide the access functions accordingly:

- `x` – a point with the current iterate
- `stop` – a `StoppingCriterion`

`Manopt.get_options` — Function `get_options(o::Options)`

return the undecorated `Options` of the (possibly) decorated `o`. As long as your decorated options store the options within `o.options` and the `dispatch_options_decorator` is set to `Val{true}`, the internal options are extracted.

Since the `Options` directly relate to a solver, they are documented with the corresponding solvers. You can always access the options (since they might be decorated) by calling `get_options`.
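The unwrapping described above can be sketched in a few lines. The following is a minimal, self-contained illustration of the trait-based dispatch, not Manopt's actual implementation; the type names only mirror the documented interface:

```julia
abstract type Options end

struct GradientOptions <: Options
    x::Vector{Float64}   # current iterate
end

# a decorator stores the inner options in the field `options`
struct DebugOptions{O<:Options} <: Options
    options::O
end

# trait: plain options are not decorators, `DebugOptions` is
dispatch_options_decorator(::Options) = Val{false}()
dispatch_options_decorator(::DebugOptions) = Val{true}()

# unwrap recursively while the trait says "decorated"
get_options(o::Options) = _get_options(o, dispatch_options_decorator(o))
_get_options(o::Options, ::Val{false}) = o
_get_options(o::Options, ::Val{true}) = get_options(o.options)

o = DebugOptions(GradientOptions([1.0, 2.0]))
get_options(o)   # returns the inner, undecorated GradientOptions
```

Nesting several decorators works the same way, since `get_options` recurses until the trait returns `Val{false}`.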

### Decorators for Options

Options can be decorated using the following trait and initialized with the following function.

`Manopt.dispatch_options_decorator` — Function `dispatch_options_decorator(o::Options)`

Indicate internally whether an `Options` `o` is of decorating type, i.e. it stores (encapsulates) options in itself, by default in the field `o.options`.

Decorators indicate this by returning `Val{true}` for further dispatch.

The default is `Val{false}`, i.e. by default options are not decorated.

`Manopt.is_options_decorator` — Function `is_options_decorator(o::Options)`

Indicate whether `Options` `o` are of decorator type.

`Manopt.decorate_options` — Function `decorate_options(o)`

decorate the `Options` `o` with specific decorators.

**Optional Arguments**

Optional arguments provide the necessary details on the decorators; a specific one is used to activate certain decorators.

- `debug` – (`Array{Union{Symbol,DebugAction,String,Int},1}()`) a set of symbols representing `DebugAction`s, `String`s used as dividers, and a subsampling integer. These are passed as a `DebugGroup` within `:All` to the `DebugOptions` decorator dictionary. The only exception is `:Stop`, which is passed to `:Stop`.
- `record` – (`Array{Union{Symbol,RecordAction,Int},1}()`) specify recordings by using `Symbol`s or `RecordAction`s directly. The integer can again be used for only recording every $i$th iteration.

In general, decorators often perform actions, so we introduce

`Manopt.AbstractOptionsAction` — Type `AbstractOptionsAction`

a common `Type` for `AbstractOptionsActions` that might be triggered in decorators, for example `DebugOptions` or `RecordOptions`.

as well as a helper for storing values using keys, i.e.

`Manopt.StoreOptionsAction` — Type `StoreOptionsAction <: AbstractOptionsAction`

internal storage for `AbstractOptionsAction`s to store a tuple of fields from an `Options`.

This functor possesses the usual interface of functions called during an iteration, i.e. it acts on `(p,o,i)`, where `p` is a `Problem`, `o` is an `Options` and `i` is the current iteration.

**Fields**

- `values` – a dictionary to store interim values based on certain `Symbol`s
- `keys` – an `NTuple` of `Symbol`s to refer to fields of `Options`
- `once` – whether to update the internal values only once per iteration
- `lastStored` – last iterate where this `AbstractOptionsAction` was called (to determine `once`)

**Constructors**

`StoreOptionsAction([keys=(), once=true])`

Initialize the functor to an (empty) set of keys, where `once` determines whether more than one update per iteration is effective.

`StoreOptionsAction(keys, once=true)`

Initialize the functor to a set of keys, where the dictionary is initialized to be empty. Further, `once` determines whether more than one update per iteration is effective; otherwise only the first update is stored and all others are ignored.

`Manopt.get_storage` — Function `get_storage(a,key)`

return the internal value of the `StoreOptionsAction` `a` at the `Symbol` `key`.

`Manopt.has_storage` — Function `has_storage(a,key)`

return whether the `StoreOptionsAction` `a` has a value stored at the `Symbol` `key`.

`Manopt.update_storage!` — Function `update_storage!(a,o)`

update the internal values of the `StoreOptionsAction` `a` to the ones given in the `Options` `o`.

`update_storage!(a,d)`

update the internal values of the `StoreOptionsAction` `a` to the ones given in the dictionary `d`. The values are merged, where the values from `d` are preferred.
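The storage helpers above can be illustrated with a stripped-down sketch (the `once`/`lastStored` logic is omitted here for brevity; names mirror the documented interface but the implementation is illustrative):

```julia
struct StoreOptionsAction
    values::Dict{Symbol,Any}        # stored snapshots, keyed by field name
    keys::Tuple{Vararg{Symbol}}     # which Options fields to snapshot
end
StoreOptionsAction(keys::Tuple{Vararg{Symbol}}=()) =
    StoreOptionsAction(Dict{Symbol,Any}(), keys)

struct GradientOptions
    x::Vector{Float64}
end

# copy the requested fields out of the options into the internal dictionary
function update_storage!(a::StoreOptionsAction, o)
    for key in a.keys
        a.values[key] = deepcopy(getproperty(o, key))
    end
    return a
end
get_storage(a::StoreOptionsAction, key::Symbol) = a.values[key]
has_storage(a::StoreOptionsAction, key::Symbol) = haskey(a.values, key)

a = StoreOptionsAction((:x,))
update_storage!(a, GradientOptions([1.0, 2.0]))
has_storage(a, :x)   # true
get_storage(a, :x)   # the stored copy of o.x
```

Actions such as `DebugChange` or `RecordChange` use exactly this pattern to keep the previous iterate available.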

#### Debug Options

`Manopt.DebugAction` — Type `DebugAction`

A `DebugAction` is a small functor to print/issue debug output. The usual call is given by `(p,o,i) -> s`, which performs the debug based on a `Problem` `p`, `Options` `o` and the current iterate `i`.

By convention `i=0` is interpreted as "for initialization only", i.e. only debug info that prints initialization reacts; `i<0` triggers updates of internal variables but does not trigger any output. Finally, `typemin(Int)` is used to indicate a call from `stop_solver!` that returns true afterwards.

**Fields (assumed by subtypes to exist)**

- `print` – method to perform the actual print. Can for example be set to a file export or to `@info`. The default is the `print` function on the default `Base.stdout`.
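The iteration-number conventions can be made concrete with a small sketch. This is a hypothetical divider-style action, not Manopt's `DebugDivider`; it only demonstrates the `(p,o,i)` call interface and the `i < 0` suppression rule:

```julia
struct SimpleDivider
    divider::String
    print::Function
end
SimpleDivider(div::String=" | "; print=print) = SimpleDivider(div, print)

function (d::SimpleDivider)(p, o, i::Int)
    # i >= 0 (including i == 0, "initialization"): produce output;
    # i < 0: only internal updates would happen, so print nothing
    i >= 0 && d.print(d.divider)
    return nothing
end

buf = IOBuffer()
d = SimpleDivider(" | "; print=s -> print(buf, s))
d(nothing, nothing, 0)    # initialization call: output produced
d(nothing, nothing, -1)   # negative iteration: output suppressed
out = String(take!(buf))
```

Replacing the `print` field, as done here with a buffer-writing closure, is also how output can be redirected to a file or a logger.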

`Manopt.DebugChange` — Type `DebugChange(a,prefix,print)`

debug for the amount of change of the iterate (stored in `o.x` of the `Options`) during the last iteration. See `DebugEntryChange`.

**Parameters**

- `x0` – an initial value to already get a change after the first iterate. Can be left out.
- `a` – (`StoreOptionsAction( (:x,) )`) the storage of the previous action
- `prefix` – (`"Last Change:"`) prefix of the debug output
- `print` – (`print`) default method to perform the print.

`Manopt.DebugCost` — Type `DebugCost <: DebugAction`

print the current cost function value, see `get_cost`.

**Constructors**

`DebugCost(long,print)`

where `long` indicates whether to print `F(x):` (default) or `cost:`.

`DebugCost(prefix,print)`

set a prefix manually.

`Manopt.DebugDivider` — Type `DebugDivider <: DebugAction`

print a small divider (default `" | "`).

**Constructor**

`DebugDivider(div,print)`

`Manopt.DebugEntry` — Type `DebugEntry <: DebugAction`

print a certain field's entry of type `{T}` during the iterates.

**Additional Fields**

- `field` – Symbol the entry can be accessed with within `Options`

**Constructor**

`DebugEntry(f[, prefix="$f:", io=stdout])`

`Manopt.DebugEntryChange` — Type `DebugEntryChange{T} <: DebugAction`

print a certain entry's change during iterates.

**Additional Fields**

- `print` – (`print`) function to print the result
- `prefix` – (`"Change of :x"`) prefix of the print out
- `field` – Symbol the field can be accessed with within `Options`
- `distance` – function `(p,o,x1,x2)` to compute the change/distance between two values of the entry
- `storage` – a `StoreOptionsAction` to store the previous value of `:f`

**Constructors**

`DebugEntryChange(f,d[, a, prefix, io])`

initialize the debug to a field `f` and a `distance` `d`.

`DebugEntryChange(v,f,d[, a, prefix="Change of $f:", io])`

initialize the debug to a field `f` and a `distance` `d` with initial value `v` for the history of `o.field`.

`Manopt.DebugEvery` — Type `DebugEvery <: DebugAction`

evaluate and print debug only every $i$th iteration. Otherwise no print is performed. Whether internal variables are updated is determined by `alwaysUpdate`.

This method does not perform any print itself but relies on its children's print.

`Manopt.DebugGroup` — Type `DebugGroup <: DebugAction`

group a set of `DebugAction`s into one action, where the internal prints are removed by default and the resulting strings are concatenated.

**Constructor**

`DebugGroup(g)`

construct a group consisting of an Array of `DebugAction`s `g` that are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string.

`Manopt.DebugIterate` — Type `DebugIterate <: DebugAction`

debug for the current iterate (stored in `o.x`).

**Constructor**

`DebugIterate(io=stdout, long::Bool=false)`

**Parameters**

- `long::Bool` – whether to print `x:` or `current iterate`

`Manopt.DebugIteration` — Type `DebugIteration <: DebugAction`

debug for the current iteration (prefixed with `#`)

`Manopt.DebugOptions` — Type `DebugOptions <: Options`

The debug options append a debug functionality to any options, i.e. they act as a decorator pattern. Internally a `Dict`ionary is kept that stores a `DebugAction` for several occasions using a `Symbol` as reference. The default occasion is `:All`, and for example solvers join this field with `:Start`, `:Step` and `:Stop` at the beginning, every iteration, or the end of the algorithm, respectively.

The original options can still be accessed using the `get_options` function.

**Fields (defaults in brackets)**

- `options` – the options that are extended by debug information
- `debugDictionary` – a `Dict{Symbol,DebugAction}` to keep track of debug for different actions

**Constructors**

`DebugOptions(o,dA)`

construct debug decorated options, where `dA` can be

- a `DebugAction`, then it is stored within the dictionary at `:All`
- an `Array` of `DebugAction`s, then it is stored as a `debugDictionary` within `:All`
- a `Dict{Symbol,DebugAction}`
- an Array of Symbols, Strings and an Int for the `DebugFactory`

`Manopt.DebugStoppingCriterion` — Type `DebugStoppingCriterion <: DebugAction`

print the reason provided by the stopping criterion. Usually this should be empty, unless the algorithm stops.

`Manopt.DebugActionFactory` — Method `DebugActionFactory(s)`

create a `DebugAction` where

- a `String` yields the corresponding divider
- a `DebugAction` is passed through
- a [`Symbol`] creates a `DebugEntry` of that symbol, with the exceptions of `:Change`, `:Iterate`, `:Iteration`, and `:Cost`.

`Manopt.DebugFactory` — Method `DebugFactory(a)`

given an array of `Symbol`s, `String`s, `DebugAction`s and `Int`s

- the symbol `:Stop` creates an entry to display the stopping criterion at the end (`:Stop => DebugStoppingCriterion()`)
- the symbol `:Cost` creates a `DebugCost`
- the symbol `:iteration` creates a `DebugIteration`
- the symbol `:Change` creates a `DebugChange`
- any other symbol creates debug output of the corresponding field in `Options`
- any string creates a `DebugDivider`
- any `DebugAction` is directly included
- an Integer `k` introduces that debug is only printed every `k`th iteration
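The symbol-to-action dispatch a factory like this performs can be sketched as a toy version. The names and the string-producing actions here are hypothetical stand-ins, just enough to show the routing of `:Stop` versus everything else:

```julia
function debug_factory(specs::Vector)
    d = Dict{Symbol,Vector{Function}}()
    for s in specs
        # :Stop is the one exception routed to its own occasion
        target = s === :Stop ? :Stop : :All
        # strings become dividers, symbols become entry-style actions
        action = s isa String ? ((p, o, i) -> s) : ((p, o, i) -> string(s))
        push!(get!(d, target, Function[]), action)
    end
    return d
end

d = debug_factory([:Cost, " | ", :Stop])
keys(d)          # :All (holding :Cost and the divider) and :Stop
d[:All][1](nothing, nothing, 1)   # invoking an action like the solver would
```

A real factory would of course build `DebugCost`, `DebugDivider`, etc., and wrap the `:All` actions in a `DebugGroup` (and a `DebugEvery` if an integer is present).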

See `DebugSolver` for details on the decorated solver.

Further specific `DebugAction`s can be found at the specific Options.

#### Record Options

`Manopt.RecordAction` — Type `RecordAction`

A `RecordAction` is a small functor to record values. The usual call is given by `(p,o,i) -> s`, which performs the record based on a `Problem` `p`, `Options` `o` and the current iterate `i`.

By convention `i<=0` is interpreted as "for initialization only", i.e. only initialize internal values but do not trigger any record; the same holds for `i=typemin(Int)`, which is used to indicate `stop`, i.e. that the record is called from within `stop_solver!`, which returns true afterwards.

**Fields (assumed by subtypes to exist)**

- `recorded_values` – an `Array` of the recorded values.

`Manopt.RecordChange` — Type `RecordChange <: RecordAction`

record the amount of change of the iterate (stored in `o.x` of the `Options`) during the last iteration.

**Additional Fields**

- `storage` – a `StoreOptionsAction` to store (at least) `o.x` to use as the last value (to compute the change)

`Manopt.RecordCost` — Type `RecordCost <: RecordAction`

record the current cost function value, see `get_cost`.

`Manopt.RecordEntry` — Type `RecordEntry{T} <: RecordAction`

record a certain field's entry of type `{T}` during the iterates.

**Fields**

- `recorded_values` – the recorded iterates
- `field` – Symbol the entry can be accessed with within `Options`

`Manopt.RecordEntryChange` — Type `RecordEntryChange{T} <: RecordAction`

record a certain entry's change during iterates.

**Additional Fields**

- `recorded_values` – the recorded iterates
- `field` – Symbol the field can be accessed with within `Options`
- `distance` – function `(p,o,x1,x2)` to compute the change/distance between two values of the entry
- `storage` – a `StoreOptionsAction` to store (at least) `getproperty(o, d.field)`

`Manopt.RecordEvery` — Type `RecordEvery <: RecordAction`

record only every $i$th iteration. Otherwise (optionally, but activated by default) just update internal tracking values.

This method does not perform any record itself but relies on its children's methods.

`Manopt.RecordGroup` — Type `RecordGroup <: RecordAction`

group a set of `RecordAction`s into one action, where the internal prints are removed by default and the resulting strings are concatenated.

**Constructor**

`RecordGroup(g)`

construct a group consisting of an Array of `RecordAction`s `g` that are recording en bloc; the method does not perform any record itself, but keeps an array of records. Accessing these yields a `Tuple` of the recorded values per iteration.

`Manopt.RecordIterate` — Type `RecordIterate <: RecordAction`

record the iterate

**Constructors**

`RecordIterate(x0)`

initialize the iterate record array to the type of `x0`, e.g. your initial data.

`RecordIterate(P)`

initialize the iterate record array to the data type `T`.

`Manopt.RecordIteration` — Type `RecordIteration <: RecordAction`

record the current iteration

`Manopt.RecordOptions` — Type `RecordOptions <: Options`

append to any `Options` the decorator with record functionality. Internally a `Dict`ionary is kept that stores a `RecordAction` for several occasions using a `Symbol` as reference. The default occasion is `:All`, and for example solvers join this field with `:Start`, `:Step` and `:Stop` at the beginning, every iteration, or the end of the algorithm, respectively.

The original options can still be accessed using the `get_options` function.

**Fields**

- `options` – the options that are extended by record information
- `recordDictionary` – a `Dict{Symbol,RecordAction}` to keep track of all different recorded values

**Constructors**

`RecordOptions(o,dR)`

construct record decorated `Options`, where `dR` can be

- a `RecordAction`, then it is stored within the dictionary at `:All`
- an `Array` of `RecordAction`s, then it is stored as a `recordDictionary` within the dictionary at `:All`
- a `Dict{Symbol,RecordAction}`

`Manopt.RecordActionFactory` — Method `RecordActionFactory(s)`

create a `RecordAction` where

- a `RecordAction` is passed through
- a [`Symbol`] creates a `RecordEntry` of that symbol, with the exceptions of `:Change`, `:Iterate`, `:Iteration`, and `:Cost`.

`Manopt.RecordFactory` — Method `RecordFactory(a)`

given an array of `Symbol`s, `RecordAction`s and `Int`s

- the symbol `:Cost` creates a `RecordCost`
- the symbol `:iteration` creates a `RecordIteration`
- the symbol `:Change` creates a `RecordChange`
- any other symbol creates a `RecordEntry` of the corresponding field in `Options`
- any `RecordAction` is directly included
- an Integer `k` introduces that the record is only performed every `k`th iteration

`Manopt.get_record` — Function `get_record(o[,s=:Step])`

return the recorded values from within the `RecordOptions` `o` that were recorded with respect to the `Symbol` `s` as an `Array`. The default refers to any recordings during an iteration, represented by the Symbol `:Step`.

`Manopt.get_record` — Method `get_record(r)`

return the recorded values stored within a `RecordAction` `r`.

`Manopt.has_record` — Method `has_record(o)`

check whether the `Options` `o` are decorated with `RecordOptions`.

See `RecordSolver` for details on the decorated solver.

Further specific `RecordAction`s can be found at the specific Options.

There is one internal helper that might be useful for your own actions, namely

`Manopt.record_or_reset!` — Function `record_or_reset!(r,v,i)`

either record (`i>0` and not `Inf`) the value `v` within the `RecordAction` `r` or reset (`i<0`) the internal storage, where `v` has to match the internal value type of the corresponding `RecordAction`.
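The record-or-reset logic can be sketched with a simplified cost-recording action (a hypothetical stand-in, not Manopt's `RecordCost`):

```julia
mutable struct SimpleRecordCost
    recorded_values::Vector{Float64}
end
SimpleRecordCost() = SimpleRecordCost(Float64[])

function record_or_reset!(r::SimpleRecordCost, v, i::Int)
    if i > 0
        push!(r.recorded_values, v)   # positive iteration: record
    elseif i < 0
        r.recorded_values = Float64[] # negative iteration: reset storage
    end
    return r
end

r = SimpleRecordCost()
record_or_reset!(r, 1.5, 1)
record_or_reset!(r, 1.2, 2)
vals = copy(r.recorded_values)   # the two recorded cost values
record_or_reset!(r, 0.0, -1)     # reset: recorded_values is empty again
```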

### Stepsize and Linesearch

The step size determination is implemented as a functor based on

`Manopt.Stepsize` — Type `Stepsize`

An abstract type for the functors representing step sizes, i.e. they are callable structures. The naming scheme is `TypeOfStepSize`, e.g. `ConstantStepsize`.

Every `Stepsize` has to provide a constructor, and its function has to have the interface `(p,o,i)`, where a `Problem` as well as `Options` and the current number of iterations are the arguments, and it returns a number, namely the step size to use.
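A minimal sketch of this functor interface (illustrative implementations, not Manopt's):

```julia
abstract type Stepsize end

# a step size that ignores problem, options and iteration number
struct ConstantStepsize <: Stepsize
    length::Float64
end
(s::ConstantStepsize)(p, o, i) = s.length

# a step size decaying with the iteration number
struct DecreasingStepsize <: Stepsize
    length::Float64
end
(s::DecreasingStepsize)(p, o, i) = s.length / i

sc = ConstantStepsize(0.1)(nothing, nothing, 5)    # always 0.1
sd = DecreasingStepsize(1.0)(nothing, nothing, 4)  # 1.0 / 4
```

A solver only ever calls `stepsize(p, o, i)`, so any callable with this signature can be plugged in.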

In general there are, for example,

`Manopt.ArmijoLinesearch` — Type `ArmijoLinesearch <: Linesearch`

A functor representing Armijo line search including the last run's state, i.e. a last step size.

**Fields**

- `initialStepsize` – (`1.0`) an initial step size
- `retraction_method` – (`ExponentialRetraction()`) the retraction to use, defaults to the exponential map
- `contractionFactor` – (`0.95`) exponent for line search reduction
- `sufficientDecrease` – (`0.1`) gain within Armijo's rule
- `lastStepSize` – (`initialstepsize`) the last step size we start the search with

**Constructor**

`ArmijoLinesearch()`

with the fields above in their order as optional arguments.

This method returns the functor to perform Armijo line search, where two interfaces are available:

- based on a tuple `(p,o,i)` of a `GradientProblem` `p`, `Options` `o` and a current iterate `i`
- with `(M,x,F,∇Fx[,η=-∇Fx]) -> s`, where the Manifold `M`, a current point `x`, a function `F` that maps from the manifold to the reals, its gradient (a tangent vector) `∇Fx` $=\nabla F(x)$ at `x`, and an optional search direction tangent vector `η=-∇Fx` are the arguments.
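To make the backtracking idea concrete, here is a self-contained Euclidean sketch of the Armijo rule: the "retraction" is plain vector addition, whereas on a manifold it would be, e.g., the exponential map. The function name and defaults are illustrative, not Manopt's API:

```julia
function armijo_backtrack(F, x, ∇Fx; s=1.0, contraction=0.95, decrease=0.1, η=-∇Fx)
    f0 = F(x)
    # shrink s until the sufficient-decrease (Armijo) condition holds
    while F(x + s * η) > f0 + decrease * s * (∇Fx' * η)
        s *= contraction
        s < eps() && break   # safeguard against endless reduction
    end
    return s
end

F(x) = sum(abs2, x) / 2   # cost with minimizer at the origin
x = [2.0, 0.0]
∇Fx = x                   # Euclidean gradient of F at x
s = armijo_backtrack(F, x, ∇Fx; s=4.0)   # backtracks from the too-large 4.0
```

The returned `s` satisfies `F(x + s*η) ≤ F(x) + decrease * s * ⟨∇Fx, η⟩`, which is exactly the condition the solver needs before taking the step.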

`Manopt.ConstantStepsize` — Type `ConstantStepsize <: Stepsize`

A functor that always returns a fixed step size.

**Fields**

- `length` – constant value for the step size.

**Constructor**

`ConstantStepsize(s)`

initialize the step size to a constant `s`.

`Manopt.DecreasingStepsize` — Type `DecreasingStepsize()`

A functor that represents several decreasing step sizes.

**Fields**

- `length` – (`1`) the initial step size $l$.
- `factor` – (`1`) a value $f$ to multiply the initial step size with every iteration
- `subtrahend` – (`0`) a value $a$ that is subtracted every iteration
- `exponent` – (`1`) a value $e$ the current iteration number's $e$th power is taken of

In total the complete formula for the $i$th iterate reads

$ s_i = \frac{(l-i\cdot a)f^i}{i^e}$

and hence the default simplifies to just $ s_i = \frac{l}{i} $

**Constructor**

`DecreasingStepsize(l,f,a,e)`

initializes all fields above, where none of them is mandatory.
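Evaluating the formula above directly makes the roles of the four parameters clear (a hypothetical helper, using the field names as keywords):

```julia
# s_i = (l - i*a) * f^i / i^e with defaults matching the docstring
decreasing_step(i; l=1.0, f=1.0, a=0.0, e=1.0) = (l - i * a) * f^i / i^e

s1 = decreasing_step(4)                              # default: l/i = 1/4
s2 = decreasing_step(2; l=10.0, a=1.0, f=0.5, e=2.0) # (10-2)*0.25/4
```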

`Manopt.Linesearch` — Type `Linesearch <: Stepsize`

An abstract functor to represent line search type step size determinations, see `Stepsize` for details. One example is the `ArmijoLinesearch` functor.

Compared to simple step sizes, the linesearch functors provide an interface of the form `(p,o,i,η) -> s` with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, e.g. the negative gradient.

`Manopt.NonmonotoneLinesearch` — Type `NonmonotoneLinesearch <: Linesearch`

A functor representing a nonmonotone line search using the Barzilai-Borwein step size [Iannazzo2018]. Together with a gradient descent algorithm this line search represents the Riemannian Barzilai-Borwein with nonmonotone line-search (RBBNMLS) algorithm. We shifted the order of the algorithm steps from the paper by Iannazzo and Porcelli so that in each iteration we first find

$y_k = \nabla F(x_k) - \operatorname{T}_{x_{k-1}\to x_k}\bigl(\nabla F(x_{k-1})\bigr)$

and

$s_k = - \alpha_{k-1}\operatorname{T}_{x_{k-1}\to x_k}\bigl(\nabla F(x_{k-1})\bigr),$

where $\alpha_{k-1}$ is the step size computed in the last iteration and $\operatorname{T}$ is a vector transport. We then find the Barzilai–Borwein step size

$\alpha_k^{\text{BB}} = \begin{cases} \min(\alpha_{\max}, \max(\alpha_{\min}, \tau_k)), & \text{if } \langle s_k, y_k \rangle_{x_k} > 0,\\ \alpha_{\max}, & \text{else,} \end{cases}$

where

$\tau_k = \frac{\langle s_k, s_k \rangle_{x_k}}{\langle s_k, y_k \rangle_{x_k}}$

if the direct strategy is chosen,

$\tau_k = \frac{\langle s_k, y_k \rangle_{x_k}}{\langle y_k, y_k \rangle_{x_k}}$

in case of the inverse strategy, and an alternation between the two in case of the alternating strategy. Then we find the smallest $h = 0, 1, 2, \ldots$ such that

$F\bigl(\operatorname{retr}_{x_k}(-\sigma^h \alpha_k^{\text{BB}} \nabla F(x_k))\bigr) \leq \max_{1 \leq j \leq \min(k+1,m)} F(x_{k+1-j}) - \gamma \sigma^h \alpha_k^{\text{BB}} \langle \nabla F(x_k), \nabla F(x_k) \rangle_{x_k},$

where $σ$ is a step length reduction factor $\in (0,1)$, $m$ is the number of iterations after which the function value has to be lower than the current one and $γ$ is the sufficient decrease parameter $\in (0,1)$. We can then find the new stepsize by

$\alpha_k = \sigma^h \alpha_k^{\text{BB}}.$

**Fields**

- `initial_stepsize` – (`1.0`) the step size we start the search with
- `retraction_method` – (`ExponentialRetraction()`) the retraction to use
- `vector_transport_method` – (`ParallelTransport()`) the vector transport method to use
- `stepsize_reduction` – (`0.5`) step size reduction factor contained in the interval (0,1)
- `sufficient_decrease` – (`1e-4`) sufficient decrease parameter contained in the interval (0,1)
- `memory_size` – (`10`) number of iterations after which the cost value needs to be lower than the current one
- `min_stepsize` – (`1e-3`) lower bound for the Barzilai-Borwein step size, greater than zero
- `max_stepsize` – (`1e3`) upper bound for the Barzilai-Borwein step size, greater than min_stepsize
- `strategy` – (`direct`) defines if the new step size is computed using the direct, indirect or alternating strategy
- `storage` – (`x`, `∇F`) a `StoreOptionsAction` to store `old_x` and `old_∇`, the x-value and corresponding gradient of the previous iteration

**Constructor**

`NonmonotoneLinesearch()`

with the fields above in their order as optional arguments.

This method returns the functor to perform nonmonotone line search.

`Manopt.get_stepsize` — Method `get_stepsize(p::Problem, o::Options, vars...)`

return the stepsize stored within `Options` `o` when solving `Problem` `p`. This method also works for decorated options and the `Stepsize` function within the options, by default stored in `o.stepsize`.

`Manopt.linesearch_backtrack` — Method `linesearch_backtrack(M, F, x, ∇F, s, decrease, contract, retr, η = -∇F, f0 = F(x))`

perform a linesearch for

- a manifold `M`
- a cost function `F`
- an iterate `x`
- the gradient `∇F(x)`
- an initial stepsize `s`, usually called $γ$
- a sufficient `decrease`
- a `contract`ion factor $σ$
- a `retr`action, which defaults to the `ExponentialRetraction()`
- a search direction $η = -∇F(x)$
- an offset, $f_0 = F(x)$

## Problems

A problem usually contains its cost function and provides an implementation to access the cost.

`Manopt.Problem` — Type `Problem`

Specify properties (values) and related functions for computing a certain optimization problem.

`Manopt.get_cost` — Function `get_cost(p,x)`

evaluate the cost function `F` stored within a `Problem` at the point `x`.

For any algorithm that involves a cyclic evaluation, e.g. `cyclic_proximal_point`, one can specify the `EvalOrder` as

`Manopt.EvalOrder` — Type `EvalOrder`

type for specifying an evaluation order for any cyclically evaluated algorithms

`Manopt.LinearEvalOrder` — Type `LinearEvalOrder <: EvalOrder`

evaluate in a linear order, i.e. for each cycle of length l evaluate in the order 1,2,...,l.

`Manopt.RandomEvalOrder` — Type `RandomEvalOrder <: EvalOrder`

choose a random order for each evaluation of the l functionals.

`Manopt.FixedRandomEvalOrder` — Type `FixedRandomEvalOrder <: EvalOrder`

Choose a random order once and evaluate always in this order, i.e. for l elements there is one chosen permutation used for each iteration cycle.
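The three orders can be sketched with small hypothetical helpers (Manopt's types wrap this logic in `EvalOrder` subtypes):

```julia
using Random

linear_order(l) = collect(1:l)       # LinearEvalOrder: 1, 2, …, l
random_order(l) = randperm(l)        # RandomEvalOrder: fresh permutation each cycle
function fixed_random_order(l)       # FixedRandomEvalOrder: one permutation, reused
    σ = randperm(l)
    return () -> σ                   # every call returns the same permutation
end

linear_order(4)          # [1, 2, 3, 4]
order = fixed_random_order(4)
order() == order()       # the same permutation in every cycle
```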

### Cost based problem

`Manopt.CostProblem` — Type `CostProblem <: Problem`

specify a problem for solvers just based on cost functions, i.e. gradient free ones.

**Fields**

- `M` – a manifold $\mathcal M$
- `cost` – a function $F\colon\mathcal M\to\mathbb R$ to minimize

### Gradient based problem

`Manopt.GradientProblem` — Type `GradientProblem <: Problem`

specify a problem for gradient based algorithms.

**Fields**

- `M` – a manifold $\mathcal M$
- `cost` – a function $F\colon\mathcal M\to\mathbb R$ to minimize
- `gradient` – the gradient $\nabla F\colon\mathcal M \to \mathcal T\mathcal M$ of the cost function $F$

**See also**

`gradient_descent`, `GradientDescentOptions`

`Manopt.get_gradient` — Function `get_gradient(p,x)`

evaluate the gradient of a `GradientProblem` `p` at the point `x`.

`get_gradient(p,x)`

evaluate the gradient of a `HessianProblem` `p` at the point `x`.

### Subgradient based problem

`Manopt.SubGradientProblem` — Type `SubGradientProblem <: Problem`

A structure to store information about a subgradient based optimization problem.

**Fields**

- `manifold` – a Manifold
- `cost` – the function $F$ to be minimized
- `subgradient` – a function returning a subgradient $\partial F$ of $F$

**Constructor**

`SubGradientProblem(M, f, ∂f)`

Generate the `Problem` for a subgradient problem, i.e. a function `f` on the manifold `M` and a function `∂f` that returns an element from the subdifferential at a point.

`Manopt.get_subgradient` — Function `get_subgradient(p,x)`

Evaluate the (sub)gradient of a `SubGradientProblem` `p` at the point `x`.

### Proximal Map(s) based problem

`Manopt.ProximalProblem` — Type `ProximalProblem <: Problem`

specify a problem for solvers based on the evaluation of proximal map(s).

**Fields**

- `M` – a Manifold $\mathcal M$
- `cost` – a function $F\colon\mathcal M\to\mathbb R$ to minimize
- `proxes` – proximal maps $\operatorname{prox}_{\lambda\varphi}\colon\mathcal M\to\mathcal M$ as functions `(λ,x) -> y`, i.e. the prox parameter λ also belongs to the signature of the proximal map.
- `number_of_proxes` – (`length(proxes)`) number of proximal maps, e.g. if one of the maps is a combined one such that the proximal map functions return more than one entry per function
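As an example of a proximal map with the `(λ,x) -> y` signature, consider the Euclidean cost $\varphi(y)=\tfrac12\lVert y-d\rVert^2$, whose prox has the closed form $(x+\lambda d)/(1+\lambda)$ (the helper name here is hypothetical):

```julia
# prox_{λφ}(x) = argmin_y ½‖y - x‖² + λ·½‖y - d‖² = (x + λd)/(1 + λ)
prox_squared_distance(λ, x; d=zero(x)) = (x .+ λ .* d) ./ (1 + λ)

x = [2.0, 4.0]
y1 = prox_squared_distance(1.0, x)       # with d = 0: halfway to the origin
y2 = prox_squared_distance(1.0, x; d=x)  # d = x: x is a fixed point
```

Note how λ is part of the call signature, exactly as required for the entries of `proxes`.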


`Manopt.get_proximal_map` — Function `get_proximal_map(p,λ,x,i)`

evaluate the `i`th proximal map of `ProximalProblem p` at the point `x` of `p.M` with parameter `λ`$>0$.

### Further planned problems

`Manopt.HessianProblem` — Type `HessianProblem <: Problem`

specify a problem for Hessian based algorithms.

**Fields**

- `M` – a manifold $\mathcal M$
- `cost` – a function $F\colon\mathcal M\to\mathbb R$ to minimize
- `gradient` – the gradient $\nabla F\colon\mathcal M \to \mathcal T\mathcal M$ of the cost function $F$
- `hessian` – the Hessian $\operatorname{Hess}[F] (\cdot)_ {x} \colon \mathcal T_{x} \mathcal M \to \mathcal T_{x} \mathcal M$ of the cost function $F$
- `precon` – the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of $F$)

`Manopt.getHessian` — Function `getHessian(p,x,ξ)`

evaluate the Hessian of a `HessianProblem` `p` at the point `x` applied to a tangent vector `ξ`.

`Manopt.get_preconditioner` — Function `get_preconditioner(p,x,ξ)`

evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function `F`) of a `HessianProblem` `p` at the point `x` applied to a tangent vector `ξ`.

- [Iannazzo2018] B. Iannazzo, M. Porcelli, **The Riemannian Barzilai–Borwein Method with Nonmonotone Line Search and the Matrix Geometric Mean Computation**, IMA Journal of Numerical Analysis, Volume 38, Issue 1, January 2018, Pages 495–517, doi: 10.1093/imanum/drx015