# Solvers

Solvers can be applied to `Problem`s with solver-specific `Options`.

# List of Algorithms

The following algorithms are currently available.

Solver | File | Problem & Options |
---|---|---|
Steepest Descent | `gradient_descent.jl` | `GradientProblem`, `GradientDescentOptions` |
Cyclic Proximal Point | `cyclic_proximal_point.jl` | `ProximalProblem`, `CyclicProximalPointOptions` |
Douglas–Rachford | `DouglasRachford.jl` | `ProximalProblem`, `DouglasRachfordOptions` |
Nelder-Mead | `NelderMead.jl` | `CostProblem`, `NelderMeadOptions` |
Subgradient Method | `subgradient_method.jl` | `SubGradientProblem`, `SubGradientMethodOptions` |
Steihaug-Toint Truncated Conjugate-Gradient Method | `truncated_conjugate_gradient_descent.jl` | `HessianProblem`, `TruncatedConjugateGradientOptions` |
The Riemannian Trust-Regions Solver | `trust_regions.jl` | `HessianProblem`, `TrustRegionsOptions` |

Note that the `Options` can also be decorated to enhance your algorithm by general additional properties.

## StoppingCriteria

Stopping criteria are implemented as functors, i.e. they inherit from the base type

`Manopt.StoppingCriterion` — Type

`StoppingCriterion`

An abstract type for the functors representing stopping criteria, i.e. they are callable structures. The naming scheme follows functions, see for example `StopAfterIteration`.

Every `StoppingCriterion` has to provide a constructor, and its function has to have the interface `(p,o,i)`, where a `Problem` as well as `Options` and the current number of iterations are the arguments, and it returns a `Bool` indicating whether to stop or not.

By default each `StoppingCriterion` should provide a field `reason` that gives details when a criterion is met (and that is empty otherwise).
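This interface can be illustrated with a small sketch. The type `StopAfterBudget` and its field `budget` are made up for illustration; a real implementation would additionally declare the struct as a subtype of `Manopt.StoppingCriterion`:

```julia
# Hypothetical stopping criterion following the `(p, o, i)` interface sketched
# above; in Manopt it would be declared as `<: StoppingCriterion`.
mutable struct StopAfterBudget
    budget::Int     # maximal number of iterations we are willing to spend
    reason::String  # empty until the criterion indicates a stop
end
StopAfterBudget(budget::Int) = StopAfterBudget(budget, "")

# The functor call: takes a Problem `p`, Options `o`, and the iteration `i`
# and returns a Bool; `p` and `o` are unused in this simple example.
function (c::StopAfterBudget)(p, o, i)
    if i >= c.budget
        c.reason = "Reached the iteration budget ($(c.budget)) at iteration $i.\n"
        return true
    end
    return false
end
```

Calling `c(p, o, i)` for increasing `i` returns `false` until the budget is hit, at which point `c.reason` is filled and `true` is returned.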

`Manopt.StoppingCriterionSet` — Type

`StoppingCriterionSet <: StoppingCriterion`

An abstract type for a stopping criterion that itself consists of a set of stopping criteria. In total it acts as a stopping criterion itself. Examples are `StopWhenAny` and `StopWhenAll`, which can be used to combine stopping criteria.

`Manopt.StopAfter` — Type

`StopAfter <: StoppingCriterion`

store a threshold for when to stop, looking at the complete runtime. It uses `time_ns()` to measure the time and you provide a `Period` as a time limit, e.g. `Minute(15)`.

**Constructor**

`StopAfter(t)`

initialize the stopping criterion to a `Period` `t` to stop after.
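How such a time-based criterion can work is sketched below; this is an illustration under stated assumptions (recording the start time on the first call, comparing elapsed nanoseconds against the `Period`), not Manopt's actual implementation:

```julia
using Dates  # for Period types such as Minute, Second, Nanosecond

# Sketch of a runtime-based stopping criterion in the spirit of `StopAfter`;
# the name and the initialization-on-first-call detail are illustrative.
mutable struct StopAfterSketch
    threshold::Period
    start::UInt64   # start time in nanoseconds, set on the first call
    reason::String
end
StopAfterSketch(t::Period) = StopAfterSketch(t, UInt64(0), "")

function (c::StopAfterSketch)(p, o, i)
    i == 0 && (c.start = time_ns())  # initialize on the 0th call
    elapsed = Nanosecond(time_ns() - c.start)
    if i > 0 && elapsed > Nanosecond(c.threshold)
        c.reason = "Exceeded the time limit $(c.threshold) (elapsed: $elapsed).\n"
        return true
    end
    return false
end
```

Since `Minute`, `Second`, etc. are fixed-length periods, they convert exactly to `Nanosecond` for the comparison against `time_ns()` differences.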

`Manopt.StopAfterIteration` — Type

`StopAfterIteration <: StoppingCriterion`

A functor for a simple stopping criterion, i.e. to stop after a maximal number of iterations.

**Fields**

* `maxIter` – stores the maximal iteration number where to stop at
* `reason` – stores a reason of stopping if the stopping criterion has been reached, see `get_reason`.

**Constructor**

`StopAfterIteration(maxIter)`

initialize the functor to indicate a stop after `maxIter` iterations.

`Manopt.StopWhenAll` — Type

`StopWhenAll <: StoppingCriterion`

store an array of `StoppingCriterion` elements and indicate to stop when *all* indicate to stop. The `reason` is given by the concatenation of all reasons.

**Constructor**

```
StopWhenAll(c::NTuple{N,StoppingCriterion} where N)
StopWhenAll(c::StoppingCriterion...)
```

`Manopt.StopWhenAny` — Type

`StopWhenAny <: StoppingCriterion`

store an array of `StoppingCriterion` elements and indicate to stop when *any* single one indicates to stop. The `reason` is given by the concatenation of all reasons (assuming that all non-indicating criteria return `""`).

**Constructor**

```
StopWhenAny(c::NTuple{N,StoppingCriterion} where N)
StopWhenAny(c::StoppingCriterion...)
```

`Manopt.StopWhenChangeLess` — Type

`StopWhenChangeLess <: StoppingCriterion`

stores a threshold when to stop looking at the norm of the change of the optimization variable from within the `Options`, i.e. `o.x`. For the storage a `StoreOptionsAction` is used.

**Constructor**

`StopWhenChangeLess(ε[, a])`

initialize the stopping criterion to a threshold `ε` using the `StoreOptionsAction` `a`, which is initialized to just store `:x` by default.
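A minimal sketch of a change-based criterion, under two simplifying assumptions: the previous iterate is stored directly in the criterion (where Manopt delegates this to a `StoreOptionsAction`), and the change is measured with the Euclidean norm rather than a manifold distance:

```julia
using LinearAlgebra  # for norm

# Illustrative change-based stopping criterion; name and storage are made up.
mutable struct ChangeLessSketch
    ε::Float64
    last::Union{Nothing,Vector{Float64}}  # previously stored iterate `:x`
    reason::String
end
ChangeLessSketch(ε) = ChangeLessSketch(ε, nothing, "")

# `o` plays the role of the Options and only needs a field `x`.
function (c::ChangeLessSketch)(p, o, i)
    x = o.x
    stop = false
    if c.last !== nothing && i > 0 && norm(x - c.last) < c.ε
        c.reason = "The change ($(norm(x - c.last))) is below the threshold $(c.ε).\n"
        stop = true
    end
    c.last = copy(x)  # update the stored iterate for the next call
    return stop
end
```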

`Manopt.StopWhenCostLess` — Type

`StopWhenCostLess <: StoppingCriterion`

store a threshold when to stop looking at the cost function of the optimization problem from within a `Problem`, i.e. `get_cost(p,o.x)`.

**Constructor**

`StopWhenCostLess(ε)`

initialize the stopping criterion to a threshold `ε`.

`Manopt.StopWhenGradientNormLess` — Type

`StopWhenGradientNormLess <: StoppingCriterion`

stores a threshold when to stop looking at the norm of the gradient from within a `GradientProblem`.

as well as the functions

`Manopt.get_reason` — Function

`get_reason(o)`

return the current reason stored within the `StoppingCriterion` from within the `Options`. This reason is empty if the criterion has never been met.

`get_reason(c)`

return the current reason stored within a `StoppingCriterion` `c`. This reason is empty if the criterion has never been met.

`Manopt.get_stopping_criteria` — Function

`get_stopping_criteria(c)`

return the array of internally stored `StoppingCriterion`s for a `StoppingCriterionSet` `c`.

`Manopt.get_active_stopping_criteria` — Function

`get_active_stopping_criteria(c)`

return all active stopping criteria, if any, that are within a `StoppingCriterion` `c` and indicate a stop, i.e. their reason is nonempty. To be precise, for a simple stopping criterion this returns either an empty array if no stop is indicated, or the stopping criterion as the only element of an array. For a `StoppingCriterionSet` all internal (even nested) criteria that indicate to stop are returned.

Further stopping criteria might be available for individual solvers.

## Decorated Solvers

The following decorators are available.

### Debug Solver

The decorator to print debug output during the iterations can be activated by decorating the `Options` with `DebugOptions` and implementing your own `DebugAction`s. For example, printing a gradient from the `GradientDescentOptions` is automatically available, as explained in the `gradient_descent` solver.
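The decorator idea can be sketched as follows: a debug wrapper holds the inner `Options` in a field `options` together with a list of actions that are fired after each step. The names `DebugOptionsSketch` and `debug_step!` are illustrative; Manopt's `DebugOptions` and `DebugAction` differ in detail:

```julia
# Sketch of the decorator pattern: the wrapper delegates the actual work to
# the inner options and runs its debug actions afterwards.
struct DebugOptionsSketch
    options::Any                 # the decorated (inner) Options
    actions::Vector{Function}    # each action is called as action(p, o, i)
end

# Perform one step on the decorated options: delegate, then fire the actions.
function debug_step!(p, d::DebugOptionsSketch, i, inner_step!)
    inner_step!(p, d.options, i)   # the undecorated solver step
    for a in d.actions
        a(p, d.options, i)         # e.g. print the current iterate or gradient
    end
    return d
end
```

Because the wrapper only adds behavior around the delegated call, the solver itself needs no changes to gain debug output; the same structure carries over to recording.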

`Manopt.get_solver_result` — Method

`get_solver_result(o)`

Return the final result after all iterations, which is stored within the (modified during the iterations) `Options` `o`.

`Manopt.initialize_solver!` — Method

`initialize_solver!(p,o)`

Initialize the solver to the optimization `Problem` by initializing all values in the `DebugOptions` `o`.

`Manopt.step_solver!` — Method

`step_solver!(p,o,iter)`

Do one iteration step (the `iter`th) for `Problem` `p` by modifying the values in the `Options` `o.options` and print debug.

`Manopt.stop_solver!` — Method

`stop_solver!(p,o,i)`

determine whether the solver for `Problem` `p` and the `DebugOptions` `o` should stop at iteration `i`. If so, print all debug from `:All` and `:Final`.

### Record Solver

The decorator to record certain values during the iterations can be activated by decorating the `Options` with `RecordOptions` and implementing your own `RecordAction`s. For example, recording the gradient from the `GradientDescentOptions` is automatically available, as explained in the `gradient_descent` solver.

`Manopt.get_solver_result` — Method

`get_solver_result(o)`

Return the final result after all iterations, which is stored within the (modified during the iterations) `Options` `o`.

`Manopt.initialize_solver!` — Method

`initialize_solver!(p,o)`

Initialize the solver to the optimization `Problem` by initializing the encapsulated `options` from within the `RecordOptions` `o`.

`Manopt.step_solver!` — Method

`step_solver!(p,o,iter)`

Do one iteration step (the `iter`th) for `Problem` `p` by modifying the values in the `Options` `o.options` and record the result(s).

`Manopt.stop_solver!` — Method

`stop_solver!(p,o,i)`

determine whether the solver for `Problem` `p` and the `RecordOptions` `o` should stop at iteration `i`. If so, do a (final) record to `:All` and `:Stop`.

## Technical Details

The main function a solver calls is

`Manopt.solve` — Method

`solve(p,o)`

run the solver implemented for the `Problem` `p` and the `Options` `o`, employing `initialize_solver!`, `step_solver!`, as well as the `stop_solver!` of the solver.

This is a framework that you in general should not change or redefine. It uses the following methods, which also need to be implemented for your own algorithm, if you want to provide one.

`Manopt.initialize_solver!` — Function

`initialize_solver!(p,o)`

Initialize the solver to the optimization `Problem` by initializing all values in the `Options` `o`.

`Manopt.step_solver!` — Function

`step_solver!(p,o,iter)`

Do one iteration step (the `iter`th) for `Problem` `p` by modifying the values in the `Options` `o`.

`Manopt.get_solver_result` — Function

`get_solver_result(o)`

Return the final result after all iterations, which is stored within the (modified during the iterations) `Options` `o`.

`Manopt.stop_solver!` — Method

`stop_solver!(p,o,i)`

depending on the current `Problem` `p`, the current state of the solver stored in `Options` `o`, and the current iterate `i`, this function determines whether to stop the solver by calling the `StoppingCriterion`.
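The control flow tying these four methods together can be sketched as a plain loop; this illustrates how they interact, not Manopt's verbatim implementation, and the keyword arguments stand in for the dispatch on `Problem` and `Options` types:

```julia
# Sketch of the solver framework: initialize, then step until the stopping
# criterion (queried via the stop function) indicates a stop.
function solve_sketch(p, o; initialize!, step!, stop)
    initialize!(p, o)
    i = 0
    while !stop(p, o, i)  # stands in for stop_solver!(p, o, i)
        i += 1
        step!(p, o, i)    # stands in for step_solver!(p, o, i)
    end
    return o              # the result is then read off via get_solver_result
end
```

Note that the stopping criterion is checked before the first step with `i = 0`, so a criterion that is already met prevents any iteration from running.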