Problems
A problem usually contains its cost function and provides an implementation to access the cost.
Manopt.Problem — Type

Problem{T}

Describe the problem that should be optimized by stating all properties that do not change during an optimization or that depend on a certain solver.

The parameter T can be used to distinguish problems with different representations or implementations. The default value is AllocatingEvaluation, which might be slower but is easier to use; the usually faster parameter value is MutatingEvaluation.

See Options for the changing and solver-dependent properties.
Manopt.get_cost — Function

get_cost(p, x)

evaluate the cost function F stored within a Problem at the point x.
A problem can be of different types, depending on whether its functions, for example the one computing the gradient, work with allocations or in place. To be precise, an allocating function X = gradF(x) allocates memory for its result X, while gradF!(X, x) writes its result into the given X and does not allocate.
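As a small illustration (a sketch, not taken from the library's docstrings), the two conventions look as follows for the gradient of $F(x) = \frac{1}{2}\lVert x \rVert^2$:

```julia
# Illustrative sketch: the two conventions for providing a gradient.
gradF(x) = copy(x)           # allocating: X = gradF(x) returns a new X
gradF!(X, x) = (X .= x; X)   # mutating: writes into a preallocated X

x = [1.0, 2.0, 3.0]
X = gradF(x)   # allocates a new array for the result
gradF!(X, x)   # reuses X, no allocation for the result
```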
Manopt.AbstractEvaluationType — Type

AbstractEvaluationType

An abstract type to specify the kind of evaluation a Problem supports.
Manopt.AllocatingEvaluation — Type

AllocatingEvaluation <: AbstractEvaluationType

A parameter for a Problem indicating that the problem uses functions that allocate memory for their result, i.e. they work out of place.
Manopt.MutatingEvaluation — Type

MutatingEvaluation <: AbstractEvaluationType

A parameter for a Problem indicating that the problem uses functions that do not allocate memory but work on their input, i.e. in place.
Cost based problem
Manopt.CostProblem — Type

CostProblem{T, Manifold, TCost} <: Problem{T}

specify a problem for solvers that are based just on cost functions, i.e. gradient free ones.

Fields

- M – a manifold $\mathcal M$
- cost – a function $F: \mathcal M → ℝ$ to minimize

Constructors

CostProblem(M, cost; evaluation=AllocatingEvaluation())

Generate a problem. While this Problem does not have any allocating functions, the type T can be set for consistency reasons with other problems.
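A minimal sketch (assuming Manifolds.jl for the sphere; the reference point q is illustrative, and the single-argument cost signature follows get_cost above):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
cp = CostProblem(M, x -> distance(M, x, q)^2)  # squared geodesic distance to q
get_cost(cp, [1.0, 0.0, 0.0])                  # evaluates to (π/2)^2
```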
Gradient based problem
Manopt.AbstractGradientProblem — Type

AbstractGradientProblem{T} <: Problem{T}

An abstract type for all problems that provide a (full) gradient, where T is an AbstractEvaluationType for the gradient function.
Manopt.GradientProblem — Type

GradientProblem{T} <: AbstractGradientProblem{T}

specify a problem for gradient based algorithms.

Fields

- M – a manifold $\mathcal M$
- cost – a function $F: \mathcal M → ℝ$ to minimize
- gradient!! – the gradient $\operatorname{grad}F:\mathcal M → \mathcal T\mathcal M$ of the cost function $F$

Depending on the AbstractEvaluationType T, the gradient has to be provided

- as a function x -> X that allocates memory for X itself for an AllocatingEvaluation,
- as a function (X, x) -> X that works in place of X for a MutatingEvaluation.
Constructors
GradientProblem(M, cost, gradient; evaluation=AllocatingEvaluation())
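For example, both variants of the same gradient could be set up as follows (a sketch assuming Manifolds.jl; the cost and data point are illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
F(x) = distance(M, x, q)^2
gradF(x) = -2 * log(M, x, q)                  # x -> X, allocating
gradF!(X, x) = (X .= -2 .* log(M, x, q); X)   # (X, x) -> X, mutating

p1 = GradientProblem(M, F, gradF)   # defaults to AllocatingEvaluation
p2 = GradientProblem(M, F, gradF!; evaluation=MutatingEvaluation())
```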
Manopt.StochasticGradientProblem — Type

StochasticGradientProblem <: Problem

A stochastic gradient problem consists of

- a Manifold M,
- a(n optional) cost function $f(x) = \displaystyle\sum_{i=1}^n f_i(x)$,
- an array of gradients, i.e. a function that returns an array, or an array of functions $\{\operatorname{grad}f_i\}_{i=1}^n$, where both variants can be given in the allocating variant, and the array of functions may also be provided as mutating functions (X, x) -> X.
Constructors
StochasticGradientProblem(M::AbstractManifold, gradF::Function;
cost=Missing(), evaluation=AllocatingEvaluation()
)
StochasticGradientProblem(M::AbstractManifold, gradF::AbstractVector{<:Function};
cost=Missing(), evaluation=AllocatingEvaluation()
)
Create a stochastic gradient problem with an optional cost and the gradient either as one function (returning an array) or a vector of functions.
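As an illustrative sketch (assuming Manifolds.jl; the data points are made up), with summands $f_i(x) = \frac{1}{2}d^2(x, d_i)$ one could write the following; the summand gradients can then be queried with the get_gradient and get_gradients functions below:

```julia
using Manopt, Manifolds

M = Sphere(2)
pts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
grads = [x -> -log(M, x, d) for d in pts]  # grad f_i(x) = -log_x d_i

sp = StochasticGradientProblem(M, grads;
    cost=x -> sum(0.5 * distance(M, x, d)^2 for d in pts),
)
```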
Manopt.get_gradient — Function

get_gradient(p::AbstractGradientProblem{T}, x)
get_gradient!(p::AbstractGradientProblem{T}, X, x)

evaluate the gradient of an AbstractGradientProblem{T} p at the point x.

The evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=MutatingEvaluation, memory for the result is allocated.
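Continuing the GradientProblem sketch from above, the two variants are called like this:

```julia
X = get_gradient(p1, q)   # allocating variant: returns a new tangent vector
Y = similar(q)
get_gradient!(p2, Y, q)   # mutating variant: writes the result into Y
```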
get_gradient(p::StochasticGradientProblem, k, x)
get_gradient!(p::StochasticGradientProblem, Y, k, x)

Evaluate one of the summands' gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at x (in place of Y).

Note that for the MutatingEvaluation based problem with a single function for the stochastic gradient, the mutating variant is not available, since it would require too many allocations.
get_gradient(p::AlternatingGradientProblem, x)
get_gradient!(p::AlternatingGradientProblem, Y, x)

Evaluate all summands' gradients at a point x on the ProductManifold M (in place of Y).

get_gradient(p::AlternatingGradientProblem, k, x)
get_gradient!(p::AlternatingGradientProblem, Y, k, x)

Evaluate one of the component gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at x (in place of Y).
Manopt.get_gradients — Function

get_gradients(p::StochasticGradientProblem, x)
get_gradients!(p::StochasticGradientProblem, Y, x)

Evaluate all summands' gradients $\{\operatorname{grad}f_i\}_{i=1}^n$ at x (in place of Y).

Note that for the MutatingEvaluation based problem with a single function for the stochastic gradient, the allocating variant is not available.
Subgradient based problem
Manopt.SubGradientProblem — Type

SubGradientProblem <: Problem

A structure to store information about a subgradient based optimization problem.

Fields

- manifold – a Manifold
- cost – the function $F$ to be minimized
- subgradient – a function returning a subgradient $\partial F$ of $F$

Constructor

SubGradientProblem(M, f, ∂f)

Generate the Problem for a subgradient problem, i.e. a function f on the manifold M and a function ∂f that returns an element from the subdifferential at a point.
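For example (a sketch assuming Manifolds.jl), for the non-smooth distance cost $f(x) = d(x, q)$ one subgradient away from q is $-\log_x q / d(x, q)$, while $0$ lies in the subdifferential at q itself:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(x) = distance(M, x, q)
function ∂f(x)
    d = distance(M, x, q)
    return d == 0 ? zero(x) : -log(M, x, q) / d  # one element of ∂f(x)
end
sgp = SubGradientProblem(M, f, ∂f)
```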
Manopt.get_subgradient — Function

get_subgradient(p, q)
get_subgradient!(p, X, q)

Evaluate the (sub)gradient of a SubGradientProblem p at the point q.

The evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=MutatingEvaluation, memory for the result is allocated.
Proximal Map(s) based problem
Manopt.ProximalProblem — Type

ProximalProblem <: Problem

specify a problem for solvers based on the evaluation of proximal map(s).

Fields

- M – a Riemannian manifold
- cost – a function $F:\mathcal M→ℝ$ to minimize
- proxes – proximal maps $\operatorname{prox}_{λ\varphi}:\mathcal M→\mathcal M$ as functions (λ, x) -> y, i.e. the prox parameter λ also belongs to the signature of the proximal map
- number_of_proxes – (length(proxes)) number of proximal maps, e.g. if one of the maps is a combined one, such that the proximal map functions return more than one entry per function
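As a sketch of the (λ, x) -> y signature (assuming Manifolds.jl): for $F(x) = \frac{1}{2}d^2(x, f)$ the proximal map has the closed form $\operatorname{prox}_{λF}(x) = γ\bigl(\tfrac{λ}{1+λ}; x, f\bigr)$, a point on the geodesic from x towards f:

```julia
using Manifolds

M = Sphere(2)
f = [0.0, 0.0, 1.0]
prox = (λ, x) -> shortest_geodesic(M, x, f, λ / (1 + λ))
y = prox(0.5, [1.0, 0.0, 0.0])  # moves a third of the way towards f
```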
Manopt.get_proximal_map — Function

get_proximal_map(p, λ, x, i)

evaluate the i-th proximal map of ProximalProblem p at the point x of p.M with parameter $λ>0$.
Hessian based problem
Manopt.HessianProblem — Type

HessianProblem <: Problem

specify a problem for Hessian based algorithms.

Fields

- M : a manifold $\mathcal M$
- cost : a function $F:\mathcal M→ℝ$ to minimize
- gradient : the gradient $\operatorname{grad}F:\mathcal M → \mathcal T\mathcal M$ of the cost function $F$
- hessian : the Hessian $\operatorname{Hess}F(x)[⋅]: \mathcal T_{x} \mathcal M → \mathcal T_{x} \mathcal M$ of the cost function $F$
- precon : the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of $F$)
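To make these field signatures concrete, here is a hypothetical sketch for $F(x) = \frac{1}{2}\lVert x - a\rVert^2$ on a Euclidean manifold, where the Hessian acts as the identity:

```julia
using Manifolds

M = Euclidean(3)
a = [1.0, 2.0, 3.0]
cost = x -> 0.5 * sum(abs2, x .- a)
gradient = x -> x .- a       # grad F(x) = x - a
hessian = (x, X) -> copy(X)  # Hess F(x)[X] = X for this quadratic cost
precon = (x, X) -> copy(X)   # identity preconditioner
```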
Manopt.get_hessian — Function

get_hessian(p::HessianProblem{T}, q, X)
get_hessian!(p::HessianProblem{T}, Y, q, X)

evaluate the Hessian of a HessianProblem p at the point q applied to a tangent vector X, i.e. $\operatorname{Hess}f(q)[X]$.

The evaluation is done in place of Y for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=MutatingEvaluation, memory for the result is allocated.
Manopt.get_preconditioner — Function

get_preconditioner(p, x, ξ)

evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function F) of a HessianProblem p at the point x applied to a tangent vector ξ.
Primal dual based problem
Manopt.PrimalDualProblem — Type

PrimalDualProblem{T, mT <: Manifold, nT <: Manifold} <: Problem{T}

Describes a Problem for the linearized or exact Chambolle-Pock algorithm. [BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020][ChambollePock2011]

Fields

All fields with !! can either be mutating or nonmutating functions, which should be set depending on the parameter T <: AbstractEvaluationType.

- M, N – two manifolds $\mathcal M$, $\mathcal N$
- cost – $F + G(Λ(⋅))$ to evaluate interim cost function values
- linearized_forward_operator!! – the linearized operator for the forward operation in the algorithm, $DΛ$
- linearized_adjoint_operator!! – the adjoint differential $(DΛ)^* : \mathcal N → T\mathcal M$
- prox_F!! – the proximal map belonging to $F$
- prox_G_dual!! – the proximal map belonging to $G_n^*$
- Λ!! – (forward_operator) the forward operator (if given) $Λ: \mathcal M → \mathcal N$

Usually, either $DΛ$ (for the linearized algorithm) or $Λ$ (for the exact one) is required.
Constructor
PrimalDualProblem(M, N, cost, prox_F, prox_G_dual, adjoint_linearized_operator;
linearized_forward_operator::Union{Function,Missing}=missing,
Λ::Union{Function,Missing}=missing,
evaluation::AbstractEvaluationType=AllocatingEvaluation()
)
The last optional argument can be used to provide the 4 or 5 functions as allocating or mutating (in-place computation) ones. Note that the first argument of each of these functions is always the manifold under consideration; for the mutating variants, the argument that is mutated is the second one.
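The following heavily hedged sketch constructs such a problem for an ℓ2 data term with an ℓ2-norm regularizer and Λ = id on Euclidean spaces. All concrete choices (data, α) are illustrative, and the argument orders of the functions are inferred from the note above and the accessors documented below, so treat this as the shape of a setup rather than tested code:

```julia
using Manopt, Manifolds

M = Euclidean(3)
N = Euclidean(3)  # since Λ = id, we take N = M
data = [1.0, 2.0, 3.0]
α = 0.5

cost = (M, x) -> 0.5 * sum(abs2, x .- data) + α * sqrt(sum(abs2, x))
prox_F = (M, σ, x) -> (x .+ σ .* data) ./ (1 + σ)                  # prox of ½‖x − data‖²
prox_G_dual = (N, n, τ, ξ) -> ξ ./ max(1, sqrt(sum(abs2, ξ)) / α)  # projection onto the α-ball
adjoint_DΛ = (N, m, n, Y) -> Y                                     # (DΛ)^* = id

pdp = PrimalDualProblem(M, N, cost, prox_F, prox_G_dual, adjoint_DΛ;
    Λ=(M, x) -> x,  # exact forward operator, here the identity
)
```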
Manopt.get_primal_prox — Function

y = get_primal_prox(p::PrimalDualProblem, σ, x)
get_primal_prox!(p::PrimalDualProblem, y, σ, x)

Evaluate the proximal map of $F$ stored within PrimalDualProblem,

\[\operatorname{prox}_{σF}(x),\]

which can also be computed in place of y.
Manopt.get_dual_prox — Function

y = get_dual_prox(p::PrimalDualProblem, n, τ, ξ)
get_dual_prox!(p::PrimalDualProblem, y, n, τ, ξ)

Evaluate the proximal map of $G_n^*$ stored within PrimalDualProblem,

\[\operatorname{prox}_{τG_n^*}(ξ),\]

which can also be computed in place of y.
Manopt.forward_operator — Function

y = forward_operator(p::PrimalDualProblem, x)
forward_operator!(p::PrimalDualProblem, y, x)

Evaluate the forward operator $Λ(x)$ stored within the PrimalDualProblem (in place of y).
Manopt.linearized_forward_operator — Function

Y = linearized_forward_operator(p::PrimalDualProblem, m, X, n)
linearized_forward_operator!(p::PrimalDualProblem, Y, m, X, n)

Evaluate the linearized operator (differential) $DΛ(m)[X]$ stored within the PrimalDualProblem (in place of Y), where n = Λ(m).
Manopt.adjoint_linearized_operator — Function

X = adjoint_linearized_operator(p::PrimalDualProblem, m, n, Y)
adjoint_linearized_operator!(p::PrimalDualProblem, X, m, n, Y)

Evaluate the adjoint of the linearized forward operator, $(DΛ(m))^*[Y]$, stored within the PrimalDualProblem (in place of X). Since $Y∈T_n\mathcal N$, both $m$ and $n=Λ(m)$ are necessary arguments, mainly because the forward operator $Λ$ might be missing in p.
- BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2020
R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck, J. Vidal-Núñez: Fenchel Duality Theory and a Primal-Dual Algorithm on Riemannian Manifolds, Foundations of Computational Mathematics, 2021. doi: 10.1007/s10208-020-09486-5 arXiv: 1908.02022
- ChambollePock2011
A. Chambolle, T. Pock: A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision 40(1), 120–145, 2011. doi: 10.1007/s10851-010-0251-1