Plans for solvers

In order to start a solver, both a Problem and Options are required. Together they form a plan, and these are stored in this folder. For sub-problems there may also be only Options, since they then refer to the same problem.

Options

For most algorithms, a certain set of options can either be generated beforehand, or the solver function can be called with keywords instead. Generally, the type

Manopt.Options – Type
Options

A general super type for all options.

Fields

The following fields are assumed to exist by default. If you use different ones, provide the corresponding access functions.

Since the Options directly relate to a solver, they are documented with the corresponding Solvers. You can always access the options (since they might be decorated) by calling get_options.

Decorators for Options

Options can be decorated using the following trait and a function to initialize them:

Manopt.dispatch_options_decorator – Function
dispatch_options_decorator(o::Options)

Indicate internally whether an Options o is of decorating type, i.e. whether it stores (encapsulates) options within itself, by default in the field o.options.

Decorators indicate this by returning Val{true} for further dispatch.

The default is Val{false}, i.e. by default an Options is not decorated.

Manopt.decorate_options – Function
decorate_options(o)

decorate the Options o with specific decorators.

Optional Arguments

Optional arguments provide necessary details on the decorators; a specific one is used to activate certain decorators.

  • debug – (Array{Union{Symbol,DebugAction,String,Int},1}()) a set of symbols representing DebugActions, Strings used as dividers, and a subsampling integer. These are passed as a DebugGroup within :All to the DebugOptions decorator dictionary. The only exception is :Stop, which is passed to :Stop.
  • record – (Array{Union{Symbol,RecordAction,Int},1}()) specify recordings by using Symbols or RecordActions directly. The integer can again be used for only recording every $i$th iteration.

See also

DebugOptions, RecordOptions

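For example, the high-level solver interfaces pass such keywords on to decorate_options. A minimal sketch, assuming the F(x)/gradF(x) signatures used throughout this page and the debug, record and return_options keywords of gradient_descent as documented with that solver:

using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
F(x) = sum(distance(M, x, d)^2 for d in data) / (2 * length(data))
gradF(x) = -sum(log(M, x, d) for d in data) / length(data)

o = gradient_descent(
    M, F, gradF, data[1];
    debug = [:Iteration, " | ", :Cost, "\n", 10, :Stop], # print every 10th iteration
    record = [:Cost],                                    # record the cost in every iteration
    return_options = true,
)
get_options(o) # unwraps the debug and record decorators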

In general, decorators often perform actions, so we introduce

as well as a helper for storing values using keys, i.e.

Manopt.StoreOptionsAction – Type
StoreOptionsAction <: AbstractOptionsAction

internal storage for AbstractOptionsActions to store a tuple of fields from an Options

This functor possesses the usual interface of functions called during an iteration, i.e. it acts on (p,o,i), where p is a Problem, o is an Options and i is the current iteration.

Fields

  • values – a dictionary to store interim values based on certain Symbols
  • keys – an NTuple of Symbols to refer to fields of Options
  • once – whether to update the internal values only once per iteration
  • lastStored – last iterate where this AbstractOptionsAction was called (to determine once)

Constructors

StoreOptionsAction([keys=(), once=true])

Initialize the functor to an (empty) set of keys, where once determines whether more than one update per iteration is effective.

StoreOptionsAction(keys[, once=true])

Initialize the functor to a set of keys, where the dictionary is initialized to be empty. Further, once determines whether more than one update per iteration is effective; otherwise only the first update is stored and all others are ignored.

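For example, an action might store the current iterate between iterations. A small sketch; the accessors get_storage and has_storage are the ones used internally (treat their exact names as an assumption):

using Manopt

a = StoreOptionsAction( (:x,) )  # store the field o.x of the Options, once per iteration
# after a(p, o, i) has been called within an action, the stored iterate is
# available via get_storage(a, :x); has_storage(a, :x) checks whether a value is present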

Debug Options

Manopt.DebugAction – Type
DebugAction

A DebugAction is a small functor to print/issue debug output. The usual call is given by (p,o,i) -> s, which performs the debug based on a Problem p, Options o and the current iteration number i.

By convention i=0 is interpreted as "for initialization only", i.e. only debug info that prints initialization reacts; i<0 triggers updates of internal variables but does not trigger any output. Finally, typemin(Int) is used to indicate a call from stop_solver! that returns true afterwards.

Fields (assumed by subtypes to exist)

  • print – method to perform the actual print. Can for example be set to a file export, or to @info. The default is the print function on the default Base.stdout.

Manopt.DebugChange – Type
DebugChange(a,prefix,print)

debug for the amount of change of the iterate (stored in o.x of the Options) during the last iteration. See DebugEntryChange.

Parameters

  • x0 – an initial value to already get a Change after the first iterate. Can be left out
  • a – (StoreOptionsAction( (:x,) )) – the storage of the previous action
  • prefix – ("Last Change:") prefix of the debug output
  • print – (print) default method to perform the print.
Manopt.DebugCost – Type
DebugCost <: DebugAction

print the current cost function value, see get_cost.

Constructors

DebugCost(long,print)

where long indicates whether to print F(x): (default) or cost:

DebugCost(prefix,print)

set a prefix manually.

Manopt.DebugDivider – Type
DebugDivider <: DebugAction

print a small divider (default " | ").

Constructor

DebugDivider(div,print)
Manopt.DebugEntry – Type
DebugEntry <: DebugAction

print a certain field's entry of type {T} during the iterations

Additional Fields

  • field – Symbol the entry can be accessed with within Options

Constructor

DebugEntry(f[, prefix="$f:", io=stdout])
Manopt.DebugEntryChange – Type
DebugEntryChange{T} <: DebugAction

print a certain entry's change during the iterations

Additional Fields

  • print – (print) function to print the result
  • prefix – ("Change of :x") prefix to the print out
  • field – Symbol the field can be accessed with within Options
  • distance – function (p,o,x1,x2) to compute the change/distance between two values of the entry
  • storage – a StoreOptionsAction to store the previous value of :f

Constructors

DebugEntryChange(f,d[, a, prefix, io])

initialize the Debug to a field f and a distance d.

DebugEntryChange(v,f,d[, a, prefix="Change of $f:", io])

initialize the Debug to a field f and a distance d with initial value v for the history of o.field.

Manopt.DebugEvery – Type
DebugEvery <: DebugAction

evaluate and print debug only every $i$th iteration. Otherwise no print is performed. Whether internal variables are updated is determined by alwaysUpdate.

This method does not perform any print itself but relies on its children's print.

Manopt.DebugGroup – Type
DebugGroup <: DebugAction

group a set of DebugActions into one action, where the internal prints are removed by default and the resulting strings are concatenated

Constructor

DebugGroup(g)

construct a group consisting of an Array of DebugActions g, which are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string.

Manopt.DebugIterate – Type
DebugIterate <: DebugAction

debug for the current iterate (stored in o.x).

Constructor

DebugIterate(io=stdout, long::Bool=false)

Parameters

  • long::Bool – whether to print x: or current iterate:
Manopt.DebugOptions – Type
DebugOptions <: Options

The debug options append a debug functionality to any options, i.e. they act as a decorator pattern. Internally, a dictionary is kept that stores a DebugAction for several occasions using a Symbol as reference. The default occasion is :All, and for example solvers join this field with :Start, :Step and :Stop at the beginning, every iteration, or at the end of the algorithm, respectively.

The original options can still be accessed using the get_options function.

Fields (defaults in brackets)

  • options – the options that are extended by debug information
  • debugDictionary – a Dict{Symbol,DebugAction} to keep track of Debug for different actions

Constructors

DebugOptions(o,dD)

construct debug decorated options, where dD can be

  • a DebugAction, then it is stored within the dictionary at :All
  • an Array of DebugActions, then it is stored as a debugDictionary within :All.
  • a Dict{Symbol,DebugAction}.
  • an Array of Symbols, Strings and an Int for the DebugFactory
Manopt.DebugStoppingCriterion – Type
DebugStoppingCriterion <: DebugAction

print the Reason provided by the stopping criterion. Usually this should be empty, unless the algorithm stops.

Manopt.DebugFactory – Method
DebugFactory(a)

given an array a of Symbols, Strings, DebugActions and Ints

  • The symbol :Stop creates an entry to display the stopping criterion at the end (:Stop => DebugStoppingCriterion())
  • The symbol :Cost creates a DebugCost
  • The symbol :Iteration creates a DebugIteration
  • The symbol :Change creates a DebugChange
  • any other symbol creates debug output of the corresponding field in Options
  • any string creates a DebugDivider
  • any DebugAction is directly included
  • an Integer k introduces that debug is only printed every $k$th iteration
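For example, the array from the debug keyword used earlier translates as follows; the composition indicated in the comments is a sketch of the factory's output, not a verbatim rendering:

using Manopt

d = Manopt.DebugFactory([:Iteration, " | ", :Cost, "\n", 10, :Stop])
# d[:All]  – a DebugEvery (every 10th iteration) wrapping a DebugGroup of
#            DebugIteration, DebugDivider(" | "), DebugCost and DebugDivider("\n")
# d[:Stop] – a DebugStoppingCriterion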

See DebugSolver for details on the decorated solver.

Further specific DebugActions can be found at the specific Options.

Record Options

Manopt.RecordAction – Type
RecordAction

A RecordAction is a small functor to record values. The usual call is given by (p,o,i) -> s, which performs the record based on a Problem p, Options o and the current iteration number i.

By convention i<=0 is interpreted as "for initialization only", i.e. it only initializes internal values but does not trigger any record; the same holds for i=typemin(Int), which is used to indicate a stop, i.e. that the record is called from within stop_solver!, which returns true afterwards.

Fields (assumed by subtypes to exist)

  • recorded_values – an Array of the recorded values.
Manopt.RecordChange – Type
RecordChange <: RecordAction

record the amount of change of the iterate (stored in o.x of the Options) during the last iteration.

Additional Fields

  • storage – a StoreOptionsAction to store (at least) o.x to use as the last value (to compute the change)
Manopt.RecordEntry – Type
RecordEntry{T} <: RecordAction

record a certain field's entry of type {T} during the iterations

Fields

  • recorded_values – the recorded values
  • field – Symbol the entry can be accessed with within Options
Manopt.RecordEntryChange – Type
RecordEntryChange{T} <: RecordAction

record a certain entry's change during the iterations

Additional Fields

  • recorded_values – the recorded values
  • field – Symbol the field can be accessed with within Options
  • distance – function (p,o,x1,x2) to compute the change/distance between two values of the entry
  • storage – a StoreOptionsAction to store (at least) getproperty(o, d.field)
Manopt.RecordEvery – Type
RecordEvery <: RecordAction

record only every $i$th iteration. Otherwise (optionally, but activated by default) just update internal tracking values.

This method does not perform any record itself but relies on its children's methods.

Manopt.RecordGroup – Type
RecordGroup <: RecordAction

group a set of RecordActions into one action, where the recordings of the group members are kept individually and can be accessed jointly

Constructor

RecordGroup(g)

construct a group consisting of an Array of RecordActions g, which are recorded en bloc; the method does not perform any record itself, but keeps an array of records. Accessing these yields a Tuple of the recorded values per iteration.

Manopt.RecordIterate – Type
RecordIterate <: RecordAction

record the iterate

Constructors

RecordIterate(x0)

initialize the iterate record array to the type of x0, e.g. your initial data.

RecordIterate(T)

initialize the iterate record array to the data type T.

Manopt.RecordOptions – Type
RecordOptions <: Options

append to any Options the decorator with record functionality. Internally, a dictionary is kept that stores a RecordAction for several occasions using a Symbol as reference. The default occasion is :All, and for example solvers join this field with :Start, :Step and :Stop at the beginning, every iteration, or at the end of the algorithm, respectively.

The original options can still be accessed using the get_options function.

Fields

  • options – the options that are extended by record information
  • recordDictionary – a Dict{Symbol,RecordAction} to keep track of all different recorded values

Constructors

RecordOptions(o,dR)

construct record decorated Options, where dR can be

  • a RecordAction, then it is stored within the dictionary at :All
  • an Array of RecordActions, then it is stored as a recordDictionary within :All.
  • a Dict{Symbol,RecordAction}.
Manopt.get_record – Function
get_record(o[,s=:Step])

return the recorded values from within the RecordOptions o that were recorded with respect to the Symbol s as an Array. The default refers to any recordings during an iteration, represented by the Symbol :Step.

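Retrieving recordings might hence look as follows; the setup mirrors the gradient descent example above, and recording two entries yields a tuple per iteration (see RecordGroup):

using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
F(x) = distance(M, x, q)^2 / 2
gradF(x) = -log(M, x, q)

o = gradient_descent(M, F, gradF, [1.0, 0.0, 0.0];
    record = [:Iteration, :Cost],
    return_options = true,
)
get_record(o) # Array of (iteration, cost) values recorded during the iterations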

See RecordSolver for details on the decorated solver.

Further specific RecordActions can be found at the specific Options.

There is one internal helper that might be useful for your own actions, namely

Manopt.record_or_reset! – Function
record_or_reset!(r,v,i)

either record (for i>0, unless i is typemin(Int)) the value v within the RecordAction r, or reset (for i<0) the internal storage, where v has to match the internal value type of the corresponding RecordAction.

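A custom RecordAction might use it as follows; the fields o.x and o.gradient as well as p.M are assumptions about the concrete Problem and Options this action is used with:

using Manopt, Manifolds

# record the norm of the current gradient in every iteration
mutable struct RecordGradientNorm <: RecordAction
    recorded_values::Array{Float64,1}
    RecordGradientNorm() = new(Array{Float64,1}())
end
function (r::RecordGradientNorm)(p::Problem, o::Options, i::Int)
    # record ∥grad F(x)∥ for i>0, reset the internal storage for i<0
    return record_or_reset!(r, norm(p.M, o.x, o.gradient), i)
end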

Stepsize and Linesearch

The step size determination is implemented as a Functor based on

Manopt.Stepsize – Type
Stepsize

An abstract type for the functors representing step sizes, i.e. they are callable structures. The naming scheme is TypeOfStepSize, e.g. ConstantStepsize.

Every Stepsize has to provide a constructor, and its function has to have the interface (p,o,i), where a Problem as well as Options and the current number of iterations are the arguments, and it returns a number, namely the step size to use.

See also

Linesearch

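A custom step size therefore only requires a callable struct with this interface; the 1/√i decay below is an arbitrary rule for illustration:

using Manopt

mutable struct SqrtDecayStepsize <: Stepsize
    initial::Float64
end
# the functor interface (p, o, i): return the step size for iteration i
(s::SqrtDecayStepsize)(p::Problem, o::Options, i::Int) = s.initial / sqrt(max(i, 1))

# a solver supporting the stepsize keyword could then be called with
# stepsize = SqrtDecayStepsize(1.0)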

In general there are

Manopt.ArmijoLinesearch – Type
ArmijoLinesearch <: Linesearch

A functor representing Armijo line search including the last run's state, i.e. a last step size.

Fields

  • initialStepsize – (1.0) an initial step size
  • retraction_method – (ExponentialRetraction()) the retraction to use, defaults to the exponential map
  • contractionFactor – (0.95) the factor by which the step size is reduced in each step of the line search
  • sufficientDecrease – (0.1) gain within Armijo's rule
  • lastStepSize – (initialStepsize) the last step size, used to start the next search

Constructor

ArmijoLinesearch()

with the Fields above in their order as optional arguments.

This method returns the functor to perform Armijo line search, where two interfaces are available:

  • based on a tuple (p,o,i) of a GradientProblem p, Options o and a current iterate i.
  • with (M, x, F, gradFx[,η=-gradFx]) -> s, where a Manifold M, a current point x, a function F that maps from the manifold to the reals, its gradient (a tangent vector) gradFx$=\operatorname{grad}F(x)$ at x, and an optional search direction tangent vector η=-gradFx are the arguments.
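Both interfaces might be used as follows; the cost is again a squared distance, and the stepsize keyword of gradient_descent is documented with that solver:

using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
F(x) = distance(M, x, q)^2 / 2
gradF(x) = -log(M, x, q)
x0 = [1.0, 0.0, 0.0]

ls = ArmijoLinesearch()
s = ls(M, x0, F, gradF(x0))  # second interface, with default η = -gradF(x0)
x = gradient_descent(M, F, gradF, x0; stepsize = ArmijoLinesearch())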
Manopt.ConstantStepsize – Type
ConstantStepsize <: Stepsize

A functor that always returns a fixed step size.

Fields

  • length – constant value for the step size.

Constructor

ConstantStepsize(s)

initialize the stepsize to a constant s

Manopt.DecreasingStepsize – Type
DecreasingStepsize()

A functor that represents several decreasing step sizes

Fields

  • length – (1) the initial step size $l$.
  • factor – (1) a value $f$ to multiply the initial step size with every iteration
  • subtrahend – (0) a value $a$ that is subtracted every iteration
  • exponent – (1) a value $e$ such that the step size is divided by the $e$th power of the current iteration number, i.e. by $i^e$

In total, the complete formula for the $i$th iterate reads

\[s_i = \frac{(l - i a)f^i}{i^e}\]

and hence the default simplifies to just $s_i = \frac{l}{i}$

Constructor

DecreasingStepsize(l,f,a,e)

initializes all fields above; none of them is mandatory.

Manopt.Linesearch – Type
Linesearch <: Stepsize

An abstract functor to represent line search type step size determinations, see Stepsize for details. One example is the ArmijoLinesearch functor.

Compared to simple step sizes, the line search functors provide an interface of the form (p,o,i,η) -> s with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, e.g. the negative gradient.

Manopt.NonmonotoneLinesearch – Type
NonmonotoneLinesearch <: Linesearch

A functor representing a nonmonotone line search using the Barzilai-Borwein step size[Iannazzo2018]. Together with a gradient descent algorithm this line search represents the Riemannian Barzilai-Borwein with nonmonotone line-search (RBBNMLS) algorithm. We shifted the order of the algorithm steps from the paper by Iannazzo and Porcelli so that in each iteration we first find

\[y_{k} = \operatorname{grad}F(x_{k}) - \operatorname{T}_{x_{k-1} → x_k}(\operatorname{grad}F(x_{k-1}))\]

and

\[s_{k} = - α_{k-1} * \operatorname{T}_{x_{k-1} → x_k}(\operatorname{grad}F(x_{k-1})),\]

where $α_{k-1}$ is the step size computed in the last iteration and $\operatorname{T}$ is a vector transport. We then find the Barzilai–Borwein step size

\[α_k^{\text{BB}} = \begin{cases} \min(α_{\text{max}}, \max(α_{\text{min}}, τ_{k})), & \text{if } ⟨s_{k}, y_{k}⟩_{x_k} > 0,\\ α_{\text{max}}, & \text{else,} \end{cases}\]

where

\[τ_{k} = \frac{⟨s_{k}, s_{k}⟩_{x_k}}{⟨s_{k}, y_{k}⟩_{x_k}},\]

if the direct strategy is chosen,

\[τ_{k} = \frac{⟨s_{k}, y_{k}⟩_{x_k}}{⟨y_{k}, y_{k}⟩_{x_k}},\]

in case of the inverse strategy and an alternation between the two in case of the alternating strategy. Then we find the smallest $h = 0, 1, 2, …$ such that

\[F(\operatorname{retr}_{x_k}(- σ^h α_k^{\text{BB}} \operatorname{grad}F(x_k))) \leq \max_{1 ≤ j ≤ \min(k+1,m)} F(x_{k+1-j}) - γ σ^h α_k^{\text{BB}} ⟨\operatorname{grad}F(x_k), \operatorname{grad}F(x_k)⟩_{x_k},\]

where $σ$ is a step length reduction factor $∈ (0,1)$, $m$ is the number of iterations after which the function value has to be lower than the current one and $γ$ is the sufficient decrease parameter $∈(0,1)$. We can then find the new stepsize by

\[α_k = σ^h α_k^{\text{BB}}.\]

Fields

  • initial_stepsize – (1.0) the step size we start the search with
  • retraction_method – (ExponentialRetraction()) the retraction to use
  • vector_transport_method – (ParallelTransport()) the vector transport method to use
  • stepsize_reduction – (0.5) step size reduction factor contained in the interval (0,1)
  • sufficient_decrease – (1e-4) sufficient decrease parameter contained in the interval (0,1)
  • memory_size – (10) number of iterations after which the cost value needs to be lower than the current one
  • min_stepsize – (1e-3) lower bound for the Barzilai-Borwein step size greater than zero
  • max_stepsize – (1e3) upper bound for the Barzilai-Borwein step size greater than min_stepsize
  • strategy – (direct) defines whether the new step size is computed using the direct, inverse or alternating strategy
  • storage – (x, gradient) a StoreOptionsAction to store old_x and old_gradient, the x-value and corresponding gradient of the previous iteration

Constructor

NonmonotoneLinesearch()

with the Fields above in their order as optional arguments.

This method returns the functor to perform nonmonotone line search.

Manopt.WolfePowellBinaryLinesearch – Type
WolfePowellBinaryLinesearch <: Linesearch

A Linesearch method that determines a step size t fulfilling the Wolfe conditions based on a binary chop. Let $η$ be a search direction and $c_1,c_2>0$ be two constants. Then with

\[A(t) = f(x_+) ≤ f(x) + c_1 t ⟨\operatorname{grad}f(x), η⟩_{x} \quad\text{and}\quad  W(t) = ⟨\operatorname{grad}f(x_+), \text{V}_{x_+\gets x}η⟩_{x_+} ≥ c_2 ⟨η, \operatorname{grad}f(x)⟩_x,\]

where $x_+ = \operatorname{retr}_x(tη)$ is the current trial point and $\text{V}$ is a vector transport, we perform the following algorithm, similar to Algorithm 7 from [Huang2014]:

  1. set $α=0$, $β=∞$ and $t=1$.
  2. While either $A(t)$ does not hold or $W(t)$ does not hold do steps 3-5.
  3. If $A(t)$ fails, set $β=t$.
  4. If $A(t)$ holds but $W(t)$ fails, set $α=t$.
  5. If $β<∞$ set $t=\frac{α+β}{2}$, otherwise set $t=2α$.

Constructor

WolfePowellBinaryLinesearch(
    retr::AbstractRetractionMethod=ExponentialRetraction(),
    vtr::AbstractVectorTransportMethod=ParallelTransport(),
    c_1::Float64=10^(-4),
    c_2::Float64=0.999
)
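The bisection of steps 1–5 can be sketched as follows, where A and W are predicates evaluating the two conditions above at a given t; this only illustrates the logic and is not the package implementation:

# binary chop following steps 1–5; A(t) and W(t) test the two Wolfe conditions
function binary_chop(A, W; maxiter = 100)
    α, β, t = 0.0, Inf, 1.0                 # step 1
    for _ in 1:maxiter
        (A(t) && W(t)) && break             # step 2: both conditions hold
        if !A(t)
            β = t                           # step 3: A fails
        else
            α = t                           # step 4: A holds, W fails
        end
        t = β < Inf ? (α + β) / 2 : 2α      # step 5
    end
    return t
end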
Manopt.WolfePowellLinesearch – Type
WolfePowellLinesearch <: Linesearch

Do a backtracking line search to find a step size $α$ that fulfills the Wolfe conditions along a search direction $η$ starting from $x$, i.e.

\[f\bigl( \operatorname{retr}_x(αη) \bigr) ≤ f(x) + c_1 α ⟨\operatorname{grad}f(x), η⟩_x \quad\text{and}\quad \frac{\mathrm{d}}{\mathrm{d}t} f\bigl(\operatorname{retr}_x(tη)\bigr) \Big\vert_{t=α} ≥ c_2 \frac{\mathrm{d}}{\mathrm{d}t} f\bigl(\operatorname{retr}_x(tη)\bigr)\Big\vert_{t=0}.\]

Constructor

WolfePowellLinesearch(
    retr::AbstractRetractionMethod=ExponentialRetraction(),
    vtr::AbstractVectorTransportMethod=ParallelTransport(),
    c_1::Float64=10^(-4),
    c_2::Float64=0.999
)
Manopt.get_stepsize – Method
get_stepsize(p::Problem, o::Options, vars...)

return the stepsize stored within Options o when solving Problem p. This method also works for decorated options and the Stepsize function within the options, by default stored in o.stepsize.

Manopt.linesearch_backtrack – Method
linesearch_backtrack(M, F, x, gradFx, s, decrease, contract, retr, η = -gradFx, f0 = F(x))

perform a linesearch for

  • a manifold M
  • a cost function F,
  • an iterate x
  • the gradient $\operatorname{grad}F(x)$
  • an initial stepsize s usually called $γ$
  • a sufficient decrease parameter decrease
  • a contraction factor $σ$
  • a retraction, which defaults to the ExponentialRetraction()
  • a search direction $η = -\operatorname{grad}F(x)$
  • an offset, $f_0 = F(x)$
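A direct call might look as follows, where 1e-4 and 0.95 are typical choices for the decrease and contraction parameters:

using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
F(x) = distance(M, x, q)^2 / 2
x = [1.0, 0.0, 0.0]
gradFx = -log(M, x, q)

s = linesearch_backtrack(M, F, x, gradFx, 1.0, 1e-4, 0.95, ExponentialRetraction())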

Problems

A problem usually contains its cost function and provides an implementation to access the cost.

Manopt.Problem – Type
Problem{T}

Describe the problem that should be optimized by stating all properties that do not change during an optimization or that are independent of a certain solver.

The parameter T can be used to distinguish problems with different representations or implementations. The default parameter is AllocatingEvaluation, which might be slower but is easier to use. The usually faster parameter value is MutatingEvaluation.

See Options for the changing and solver dependent properties.

A problem can be of different type, more specifically regarding whether the functions it contains, for example those to compute the gradient, work with allocation or without. To be precise, an allocating function X = gradF(x) allocates memory for its result X, while gradF!(X,x) works in place of X and does not.

Manopt.AllocatingEvaluation – Type
AllocatingEvaluation <: AbstractEvaluationType

A parameter for a Problem indicating that the problem uses functions that allocate memory for their result, i.e. they work out of place.

Manopt.MutatingEvaluation – Type
MutatingEvaluation <: AbstractEvaluationType

A parameter for a Problem indicating that the problem uses functions that do not allocate memory but work on their input, i.e. in place.

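For a gradient, the two variants might look as follows; the GradientProblem constructor is documented below, and log! is the mutating logarithmic map of Manifolds.jl:

using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
F(x) = distance(M, x, q)^2 / 2

gradF(x) = -log(M, x, q)                        # allocating: X = gradF(x)
p1 = GradientProblem(M, F, gradF)               # AllocatingEvaluation is the default

gradF!(X, x) = (log!(M, X, x, q); X .*= -1; X)  # mutating: works in place of X
p2 = GradientProblem(M, F, gradF!; evaluation = MutatingEvaluation())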

Cost based problem

Manopt.CostProblem – Type
CostProblem{T, Manifold, TCost} <: Problem{T}

specify a problem for solvers just based on cost functions, i.e. gradient free ones.

Fields

  • M – a manifold $\mathcal M$
  • cost – a function $F: \mathcal M → ℝ$ to minimize

Constructors

CostProblem(M, cost; evaluation=AllocatingEvaluation())

Generate a problem. While this Problem does not have any allocating functions, the type T can be set for consistency reasons with other problems.

See also

NelderMead

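A gradient free setup might hence be as small as the following sketch; the manifold and cost are illustrative:

using Manopt, Manifolds

M = Sphere(2)
F(x) = distance(M, x, [0.0, 0.0, 1.0])^2
cp = CostProblem(M, F)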

Gradient based problem

Manopt.GradientProblem – Type
GradientProblem{T} <: AbstractGradientProblem{T}

specify a problem for gradient based algorithms.

Fields

  • M – a manifold $\mathcal M$
  • cost – a function $F: \mathcal M → ℝ$ to minimize
  • gradient!! – the gradient $\operatorname{grad}F:\mathcal M → \mathcal T\mathcal M$ of the cost function $F$.

Depending on the AbstractEvaluationType T, the gradient has to be provided either allocating, i.e. as x -> X, or mutating, i.e. as (X,x) -> X.

Constructors

GradientProblem(M, cost, gradient; evaluation=AllocatingEvaluation())

See also

gradient_descent, GradientDescentOptions

Manopt.StochasticGradientProblem – Type
StochasticGradientProblem <: Problem

A stochastic gradient problem consists of

  • a Manifold M
  • a(n optional) cost function $f(x) = \displaystyle\sum_{i=1}^n f_i(x)$
  • an array of gradients, i.e. a function that returns an array, or an array of functions $\{\operatorname{grad}f_i\}_{i=1}^n$, where both variants can be given in the allocating variant and the array of functions may also be provided as mutating functions (X,x) -> X.

Constructors

StochasticGradientProblem(M::Manifold, gradF::Function;
    cost=Missing(), evaluation=AllocatingEvaluation()
)
StochasticGradientProblem(M::Manifold, gradF::AbstractVector{<:Function};
    cost=Missing(), evaluation=AllocatingEvaluation()
)

Create a stochastic gradient problem with an optional cost and the gradient either as one function (returning an array) or a vector of functions.

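A sketch with a vector of gradient functions, one per summand; manifold, cost and data are illustrative assumptions:

using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
F(x) = sum(distance(M, x, d)^2 for d in data) / 2
gradients = [x -> -log(M, x, d) for d in data]  # one gradient function per f_i

sgp = StochasticGradientProblem(M, gradients; cost = F)
get_gradient(sgp, 2, data[1])  # evaluate grad f_2 at the first data point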
Manopt.get_gradient – Function
get_gradient(p::AbstractGradientProblem{T},x)
get_gradient!(p::AbstractGradientProblem{T}, X, x)

evaluate the gradient of an AbstractGradientProblem{T} p at the point x.

The evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=MutatingEvaluation memory for the result is allocated.

get_gradient(p::StochasticGradientProblem, k, x)
get_gradient!(p::StochasticGradientProblem, Y, k, x)

Evaluate one of the summands gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at x (in place of Y).

Note that for the MutatingEvaluation based problem with a single function for the stochastic gradient, the mutating variant is not available, since it would require too many allocations.

Manopt.get_gradients – Function
get_gradients(P::StochasticGradientProblem, x)
get_gradients!(P::StochasticGradientProblem, Y, x)

Evaluate all summands gradients $\{\operatorname{grad}f_i\}_{i=1}^n$ at x (in place of Y).

Note that for the MutatingEvaluation based problem and a single function for the stochastic gradient, the allocating variant is not available.

Subgradient based problem

Manopt.SubGradientProblem – Type
SubGradientProblem <: Problem

A structure to store information about a subgradient based optimization problem

Fields

  • manifold – a Manifold
  • cost – the function $F$ to be minimized
  • subgradient – a function returning a subgradient $\partial F$ of $F$

Constructor

SubGradientProblem(M, f, ∂f)

Generate the Problem for a subgradient problem, i.e. a function f on the manifold M and a function ∂f that returns an element from the subdifferential at a point.

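For example, for the Riemannian median the sum of distances is not differentiable at the data points; a sketch, where returning the zero tangent vector at such a point is one valid subgradient choice:

using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
f(x) = sum(distance(M, x, d) for d in data)
function ∂f(x)
    # one element of the subdifferential of the sum of distances
    return sum(
        x == d ? zero_tangent_vector(M, x) : -log(M, x, d) / distance(M, x, d)
        for d in data
    )
end
sp = SubGradientProblem(M, f, ∂f)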

Proximal Map(s) based problem

Manopt.ProximalProblem – Type
ProximalProblem <: Problem

specify a problem for solvers based on the evaluation of proximal map(s).

Fields

  • M – a Riemannian manifold
  • cost – a function $F:\mathcal M→ℝ$ to minimize
  • proxes – proximal maps $\operatorname{prox}_{λ\varphi}:\mathcal M→\mathcal M$ as functions (λ,x) -> y, i.e. the prox parameter λ also belongs to the signature of the proximal map.
  • number_of_proxes – (length(proxes)) number of proximal maps, e.g. if one of the maps is a combined one, such that the proximal map functions return more than one entry per function

See also

cyclic_proximal_point, get_cost, get_proximal_map

Manopt.get_proximal_map – Function
get_proximal_map(p,λ,x,i)

evaluate the $i$th proximal map of ProximalProblem p at the point x of p.M with parameter $λ>0$.

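For instance, the proximal map of half a squared distance to a point q has a closed form, namely a geodesic step towards q; a sketch, where the constructor arguments are assumed to follow the field order above:

using Manopt, Manifolds

M = Sphere(2)
q1, q2 = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
F(x) = (distance(M, x, q1)^2 + distance(M, x, q2)^2) / 2
# prox of x ↦ d(x,q)^2/2 with parameter λ: the point at t = λ/(1+λ) on the geodesic from x to q
prox1 = (λ, x) -> shortest_geodesic(M, x, q1, λ / (1 + λ))
prox2 = (λ, x) -> shortest_geodesic(M, x, q2, λ / (1 + λ))

pp = ProximalProblem(M, F, [prox1, prox2])
get_proximal_map(pp, 0.5, [0.0, 1.0, 0.0], 1)  # evaluate the first prox at λ = 0.5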

Hessian based problem

Manopt.HessianProblem – Type
HessianProblem <: Problem

specify a problem for Hessian based algorithms.

Fields

  • M – a manifold $\mathcal M$
  • cost – a function $F:\mathcal M→ℝ$ to minimize
  • gradient – the gradient $\operatorname{grad}F:\mathcal M → \mathcal T\mathcal M$ of the cost function $F$
  • hessian – the Hessian $\operatorname{Hess}F(x)[⋅]: \mathcal T_{x} \mathcal M → \mathcal T_{x} \mathcal M$ of the cost function $F$
  • precon – the symmetric, positive definite preconditioner (an approximation of the inverse of the Hessian of $F$)

See also

truncated_conjugate_gradient_descent, trust_regions

Manopt.get_hessian – Function
get_hessian(p::HessianProblem{T}, q, X)
get_hessian!(p::HessianProblem{T}, Y, q, X)

evaluate the Hessian of a HessianProblem p at the point q applied to a tangent vector X, i.e. $\operatorname{Hess}f(q)[X]$.

The evaluation is done in place of Y for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=MutatingEvaluation memory for the result is allocated.

Manopt.get_preconditioner – Function
get_preconditioner(p,x,ξ)

evaluate the symmetric, positive definite preconditioner (an approximation of the inverse of the Hessian of the cost function F) of a HessianProblem p at the point x applied to a tangent vector ξ.

Primal dual based problem

Manopt.PrimalDualProblem – Type
PrimalDualProblem{T, mT <: Manifold, nT <: Manifold} <: Problem{T}

Describes a Problem for the linearized or exact Chambolle-Pock algorithm.[BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020][ChambollePock2011]

Fields

All fields with !! can either be mutating or nonmutating functions, which should be set depending on the parameter T <: AbstractEvaluationType.

  • M, N – two manifolds $\mathcal M$, $\mathcal N$
  • cost – $F + G(Λ(⋅))$ to evaluate interim cost function values
  • linearized_forward_operator!! – linearized operator for the forward operation in the algorithm, $DΛ$
  • linearized_adjoint_operator!! – the adjoint differential $(DΛ)^* : \mathcal N → T\mathcal M$
  • prox_F!! – the proximal map belonging to $F$
  • prox_G_dual!! – the proximal map belonging to $G_n^*$
  • Λ!! – (forward_operator) the forward operator (if given) $Λ: \mathcal M → \mathcal N$

Usually either $DΛ$ (for the linearized case) or $Λ$ is required.

Constructor

PrimalDualProblem(M, N, cost, prox_F, prox_G_dual, adjoint_linearized_operator;
    linearized_forward_operator::Union{Function,Missing}=missing,
    Λ::Union{Function,Missing}=missing,
    evaluation::AbstractEvaluationType=AllocatingEvaluation()
)

The last optional argument can be used to provide the 4 or 5 functions as allocating or mutating (in place computation) ones. Note that the first argument is always the manifold under consideration, the mutated one is the second.

Manopt.get_primal_prox – Function
y = get_primal_prox(p::PrimalDualProblem, σ, x)
get_primal_prox!(p::PrimalDualProblem, y, σ, x)

Evaluate the proximal map of $F$ stored within PrimalDualProblem

\[\operatorname{prox}_{σF}(x)\]

which can also be computed in place of y.

Manopt.get_dual_prox – Function
y = get_dual_prox(p::PrimalDualProblem, n, τ, ξ)
get_dual_prox!(p::PrimalDualProblem, y, n, τ, ξ)

Evaluate the proximal map of $G_n^*$ stored within PrimalDualProblem

\[\operatorname{prox}_{τG_n^*}(ξ)\]

which can also be computed in place of y.

Manopt.adjoint_linearized_operator – Function
X = adjoint_linearized_operator(p::PrimalDualProblem, m, n, Y)
adjoint_linearized_operator!(p::PrimalDualProblem, X, m, n, Y)

Evaluate the adjoint of the linearized forward operator, i.e. $(DΛ(m))^*[Y]$, stored within the PrimalDualProblem (in place of X). Since $Y∈T_n\mathcal N$, both $m$ and $n=Λ(m)$ are necessary arguments, mainly because the forward operator $Λ$ might be missing in p.

  • Iannazzo2018

    B. Iannazzo, M. Porcelli: The Riemannian Barzilai–Borwein Method with Nonmonotone Line Search and the Matrix Geometric Mean Computation, IMA Journal of Numerical Analysis, Volume 38, Issue 1, January 2018, Pages 495–517. doi: 10.1093/imanum/drx015

  • Huang2014

    Huang, W.: Optimization algorithms on Riemannian manifolds with applications, Dissertation, Florida State University, 2014.

  • BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020

    R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck, J. Vidal-Núñez: Fenchel Duality Theory and a Primal-Dual Algorithm on Riemannian Manifolds, Foundations of Computational Mathematics, 2021. doi: 10.1007/s10208-020-09486-5 arXiv: 1908.02022

  • ChambollePock2011

    A. Chambolle, T. Pock: A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision 40(1), 120–145, 2011. doi: 10.1007/s10851-010-0251-1