List of available solvers

Solvers can be applied to AbstractManoptProblems with solver-specific AbstractManoptSolverStates.

List of algorithms

The following algorithms are currently available:

| Solver | Function & State | Objective |
|:------ |:---------------- |:--------- |
| Alternating Gradient Descent | alternating_gradient_descent, AlternatingGradientDescentState | $f=(f_1,\ldots,f_n)$, $\operatorname{grad} f_i$ |
| Augmented Lagrangian Method | augmented_Lagrangian_method, AugmentedLagrangianMethodState | $f$, $\operatorname{grad} f$, $g$, $\operatorname{grad} g_i$, $h$, $\operatorname{grad} h_j$ |
| Chambolle-Pock | ChambollePock, ChambollePockState (using TwoManifoldProblem) | $f=F+G(Λ\cdot)$, $\operatorname{prox}_{σ F}$, $\operatorname{prox}_{τ G^*}$, $Λ$ |
| Conjugate Gradient Descent | conjugate_gradient_descent, ConjugateGradientDescentState | $f$, $\operatorname{grad} f$ |
| Convex Bundle Method | convex_bundle_method, ConvexBundleMethodState | $f$, $\partial f$ |
| Cyclic Proximal Point | cyclic_proximal_point, CyclicProximalPointState | $f=\sum f_i$, $\operatorname{prox}_{\lambda f_i}$ |
| Difference of Convex Algorithm | difference_of_convex_algorithm, DifferenceOfConvexState | $f=g-h$, $\partial h$, and for example $g$, $\operatorname{grad} g$ |
| Difference of Convex Proximal Point | difference_of_convex_proximal_point, DifferenceOfConvexProximalState | $f=g-h$, $\partial h$, and for example $g$, $\operatorname{grad} g$ |
| Douglas-Rachford | DouglasRachford, DouglasRachfordState | $f=\sum f_i$, $\operatorname{prox}_{\lambda f_i}$ |
| Exact Penalty Method | exact_penalty_method, ExactPenaltyMethodState | $f$, $\operatorname{grad} f$, $g$, $\operatorname{grad} g_i$, $h$, $\operatorname{grad} h_j$ |
| Frank-Wolfe algorithm | Frank_Wolfe_method, FrankWolfeState | sub-problem solver |
| Gradient Descent | gradient_descent, GradientDescentState | $f$, $\operatorname{grad} f$ |
| Levenberg-Marquardt | LevenbergMarquardt, LevenbergMarquardtState | $f = \sum_i f_i$, $\operatorname{grad} f_i$ (Jacobian) |
| Nelder-Mead | NelderMead, NelderMeadState | $f$ |
| Particle Swarm | particle_swarm, ParticleSwarmState | $f$ |
| Primal-dual Riemannian semismooth Newton Algorithm | primal_dual_semismooth_Newton, PrimalDualSemismoothNewtonState (using TwoManifoldProblem) | $f=F+G(Λ\cdot)$, $\operatorname{prox}_{σ F}$ & diff., $\operatorname{prox}_{τ G^*}$ & diff., $Λ$ |
| Proximal Bundle Method | proximal_bundle_method, ProximalBundleMethodState | $f$, $\partial f$ |
| Quasi-Newton Method | quasi_Newton, QuasiNewtonState | $f$, $\operatorname{grad} f$ |
| Steihaug-Toint Truncated Conjugate-Gradient Method | truncated_conjugate_gradient_descent, TruncatedConjugateGradientState | $f$, $\operatorname{grad} f$, $\operatorname{Hess} f$ |
| Subgradient Method | subgradient_method, SubGradientMethodState | $f$, $\partial f$ |
| Stochastic Gradient Descent | stochastic_gradient_descent, StochasticGradientDescentState | $f = \sum_i f_i$, $\operatorname{grad} f_i$ |
| The Riemannian Trust-Regions Solver | trust_regions, TrustRegionsState | $f$, $\operatorname{grad} f$, $\operatorname{Hess} f$ |

Note that the solvers (their AbstractManoptSolverState, to be precise) can also be decorated to enhance your algorithm with additional general properties, see debug output and recording values. This is done using the debug= and record= keywords in the function calls. Similarly, since Manopt.jl 0.4, a (simple) caching of the objective function is available in any of the function calls using the cache= keyword.
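For example, a decorated call to gradient_descent might look like the following sketch. The cost and gradient are only illustrative, and the exact options accepted by debug=, record=, and cache= may vary between Manopt.jl versions.

```julia
using Manopt, Manifolds

# minimize half the squared distance to a point q on the sphere
M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)

s = gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    debug=[:Iteration, :Cost, "\n", 10],   # print iteration and cost every 10 iterations
    record=[:Cost],                        # record the cost in every iteration
    cache=(:LRU, [:Cost, :Gradient], 25),  # cache the 25 most recent cost/gradient values
    return_state=true,                     # return the decorated state, not just the point
)
q_opt = get_solver_result(s)               # extract the resulting point
costs = get_record(s)                      # access the recorded cost values
```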

Technical details

The main function a solver calls is

```julia
solve!(problem, state)
```

which is a framework that you should in general not change or redefine. It uses the following methods, which also need to be implemented for your own algorithm, if you want to provide one.
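Schematically, and ignoring details like decorators, this framework behaves like the following sketch. This is not the actual implementation, only an illustration of how the methods documented below interact; stop_solver! denotes the check of the state's stopping criterion.

```julia
# illustrative sketch only, not the actual implementation of solve!
function solve_sketch!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState)
    initialize_solver!(amp, ams)      # set up all values in the state
    i = 1
    while !stop_solver!(amp, ams, i)  # consult the stopping criterion
        step_solver!(amp, ams, i)     # perform the i-th iteration
        i += 1
    end
    return ams
end
```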

Manopt.initialize_solver! (Function)
initialize_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState)

Initialize the solver for the optimization AbstractManoptProblem amp by initializing the necessary values in the AbstractManoptSolverState ams.

initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)

Extend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All other debug actions are called with iteration number 0 to allow for possible resets.

initialize_solver!(amp::AbstractManoptProblem, rss::RecordSolverState)

Extend the initialization of the solver by a hook to run records that were added to the :Start entry.

Manopt.step_solver! (Function)
step_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, i)

Do one iteration step (the ith) for the AbstractManoptProblem amp by modifying the values in the AbstractManoptSolverState ams.

step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, i)

Extend the ith step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.

step_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, i)

Extend the ith step of the solver by a hook to run records that were added to the :Iteration entry.

Manopt.get_solver_result (Function)
get_solver_result(ams::AbstractManoptSolverState)
get_solver_result(tos::Tuple{AbstractManifoldObjective,AbstractManoptSolverState})
get_solver_result(o::AbstractManifoldObjective, s::AbstractManoptSolverState)

Return the final result after all iterations that is stored within the AbstractManoptSolverState ams, which was modified during the iterations.

In case the objective is passed as well, by default the objective is ignored and the solver result for the state is returned.

Manopt.get_solver_return (Function)
get_solver_return(s::AbstractManoptSolverState)
get_solver_return(o::AbstractManifoldObjective, s::AbstractManoptSolverState)

Determine the result value of a call to a solver. By default, this returns the same as get_solver_result.

get_solver_return(s::ReturnSolverState)
get_solver_return(o::AbstractManifoldObjective, s::ReturnSolverState)

Return the internally stored state of the ReturnSolverState instead of the minimizer. This means that when the state is decorated like this, the user still has to call get_solver_result on the internal state separately.

get_solver_return(o::ReturnManifoldObjective, s::AbstractManoptSolverState)

Return both the objective and the state as a tuple.

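To illustrate implementing these methods for an algorithm of your own, the following hypothetical sketch defines a gradient method with a constant stepsize. The state type FixedStepState and its fields are made up for this example; the stop field is included because the default stop_solver! consults the state's stopping criterion.

```julia
using Manopt, Manifolds

# hypothetical state for a constant-stepsize gradient method (illustration only)
mutable struct FixedStepState{P,S<:StoppingCriterion} <: AbstractManoptSolverState
    p::P               # current iterate
    stepsize::Float64  # constant step length
    stop::S            # stopping criterion, used by the default stop_solver!
end

function Manopt.initialize_solver!(amp::AbstractManoptProblem, s::FixedStepState)
    return s  # nothing to reset for this simple state
end

function Manopt.step_solver!(amp::AbstractManoptProblem, s::FixedStepState, i)
    M = get_manifold(amp)
    X = get_gradient(amp, s.p)              # gradient of the objective at p
    s.p = retract(M, s.p, -s.stepsize * X)  # move along the negative gradient
    return s
end

Manopt.get_solver_result(s::FixedStepState) = s.p

# usage, re-using M, f, and grad_f from the example above
obj = ManifoldGradientObjective(f, grad_f)
problem = DefaultManoptProblem(M, obj)
state = FixedStepState([1.0, 0.0, 0.0], 0.2, StopAfterIteration(50))
solve!(problem, state)
get_solver_result(state)
```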

API for solvers

This is a short overview of the different types of high-level functions that are usually available for a solver. Assume the solver is called new_solver and requires a cost f and some first-order information df, as well as a starting point p on M. Together, f and df form the objective, called obj.

Then there are basically two different variants to call:

The easy-to-access call

```julia
new_solver(M, f, df, p=rand(M); kwargs...)
new_solver!(M, f, df, p; kwargs...)
```

where the start point p is optional in the first variant. Keyword arguments include the type of evaluation, decorators like debug= or record=, as well as algorithm-specific ones. If you provide an immutable point p, or if rand(M) yields an immutable point, as on Circle(), this method should also turn the point into a mutable one.

The second variant, new_solver!, works in place of p, so there p is mandatory.

This first interface would set up the objective and pass all keywords on to the objective-based call.
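For a gradient-based solver, this forwarding layer could look roughly like the following sketch; new_solver and df are placeholder names and the keyword handling is simplified.

```julia
# hypothetical sketch of the easy-to-access interface
function new_solver(M, f, df, p=rand(M); evaluation=AllocatingEvaluation(), kwargs...)
    # a full implementation would also turn an immutable p into a mutable one here
    obj = ManifoldGradientObjective(f, df; evaluation=evaluation)
    return new_solver(M, obj, p; kwargs...)  # forward to the objective-based call
end
```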

Objective-based calls to solvers

```julia
new_solver(M, obj, p=rand(M); kwargs...)
new_solver!(M, obj, p; kwargs...)
```

Here the objective would be created beforehand, for example to compare different solvers on the same objective; for the first variant, the start point is again optional. Keyword arguments include decorators like debug= or record=, as well as algorithm-specific ones.

This variant would generate the problem and the state, and verify the validity of all provided keyword arguments that affect the state. Then it would run the iteration by calling solve!.
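Sketched for the hypothetical new_solver, this layer could look like the following; NewSolverState is a made-up state type, while decorate_objective!, decorate_state!, and DefaultManoptProblem are existing Manopt helpers.

```julia
# hypothetical sketch of the objective-based, in-place variant
function new_solver!(M, obj, p; kwargs...)
    dobj = decorate_objective!(M, obj; kwargs...)  # e.g. attach a cache
    problem = DefaultManoptProblem(M, dobj)
    state = NewSolverState(M, p; kwargs...)        # made-up state type
    dstate = decorate_state!(state; kwargs...)     # e.g. attach debug or record actions
    solve!(problem, dstate)
    return get_solver_return(get_objective(problem), dstate)
end
```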

Manual calls

If you generate the corresponding problem and state as the previous step does, you can also use the third (lowest level) variant and just call

```julia
solve!(problem, state)
```
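For example, with gradient descent and the objective from the sketches above this reads as follows; note that the exact GradientDescentState constructor signature may differ between Manopt.jl versions.

```julia
obj = ManifoldGradientObjective(f, grad_f)
problem = DefaultManoptProblem(M, obj)
state = GradientDescentState(M, [1.0, 0.0, 0.0])  # constructor signature may vary by version
solve!(problem, state)
q_opt = get_solver_result(state)
```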