# Riemannian quasi-Newton methods

`Manopt.quasi_Newton` — Function

```
quasi_Newton(M, f, grad_f, p; kwargs...)
quasi_Newton!(M, f, grad_f, p; kwargs...)
```

Perform a quasi Newton iteration to solve

\[\operatorname{arg\,min}_{p ∈ \mathcal M} f(p)\]

with start point `p` $= p^{(0)}$. The iterations can be done in-place of `p`. The $k$th iteration consists of

- Compute the search direction $η^{(k)} = -\mathcal B_k [\operatorname{grad}f (p^{(k)})]$ or solve $\mathcal H_k [η^{(k)}] = -\operatorname{grad}f (p^{(k)})$.
- Determine a suitable stepsize $α_k$ along the curve $γ(α) = R_{p^{(k)}}(α η^{(k)})$, usually by using `WolfePowellLinesearch`.
- Compute $p^{(k+1)} = R_{p^{(k)}}(α_k η^{(k)})$.
- Define $s_k = \mathcal T_{p^{(k)}, α_k η^{(k)}}(α_k η^{(k)})$ and $y_k = \operatorname{grad}f(p^{(k+1)}) - \mathcal T_{p^{(k)}, α_k η^{(k)}}(\operatorname{grad}f(p^{(k)}))$, where $\mathcal T$ denotes a vector transport.
- Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.

**Input**

- `M::AbstractManifold`: a Riemannian manifold $\mathcal M$
- `f`: a cost function $f: \mathcal M → ℝ$ implemented as `(M, p) -> v`
- `grad_f`: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of $f$ as a function `(M, p) -> X` or a function `(M, X, p) -> X` computing `X` in-place
- `p`: a point on the manifold $\mathcal M$

**Keyword arguments**

- `basis=DefaultOrthonormalBasis()`: basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.
- `cautious_update=false`: whether or not to use the `QuasiNewtonCautiousDirectionUpdate`, which wraps the `direction_update`.
- `cautious_function=(x) -> x * 1e-4`: a monotone increasing function for the cautious update that is zero at $x=0$ and strictly increasing at $0$
- `direction_update=InverseBFGS()`: the `AbstractQuasiNewtonUpdateRule` to use.
- `evaluation=AllocatingEvaluation()`: specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (`AllocatingEvaluation`) or whether they modify their input argument to return the result therein (`InplaceEvaluation`). Since usually the first argument is the manifold, the modified argument is the second. For example `grad_f(M, p)` allocates, but `grad_f!(M, X, p)` computes the result in-place of `X`.
- `initial_operator=initial_scale*Matrix{Float64}(I, n, n)`: initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, where `n=manifold_dimension(M)`. This matrix is only allocated for the full matrix case. See also `initial_scale`.
- `initial_scale=1.0`: scale the initial `s` used in $\frac{s⟨s_k,y_k⟩_{p_k}}{\lVert y_k\rVert_{p_k}}$ within the computation of the limited memory approach. See also `initial_operator`.
- `memory_size=20`: limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory (matrix) representation.
- `nondescent_direction_behavior=:reinitialize_direction_update`: specify how a non-descent direction is handled. This can be
    - `:step_towards_negative_gradient`: the direction is replaced with the negative gradient, a message is stored.
    - `:ignore`: the verification is not performed, so any computed direction is accepted. No message is stored.
    - `:reinitialize_direction_update`: discards the operator state stored in the direction update rules.
    - any other value performs the verification, keeps the direction, but stores a message.
  A stored message can be displayed using `DebugMessages`.
- `project!=copyto!`: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in-place of `Y`, that is `(M, Y, p, X) -> Y`, where `X` and `Y` can be the same memory.
- `retraction_method=default_retraction_method(M, typeof(p))`: a retraction $\operatorname{retr}$ to use, see the section on retractions
- `stepsize=WolfePowellLinesearch(retraction_method, vector_transport_method)`: a functor inheriting from `Stepsize` to determine a step size
- `stopping_criterion=StopAfterIteration(max(1000, memory_size)) | StopWhenGradientNormLess(1e-6)`: a functor indicating that the stopping criterion is fulfilled
- `vector_transport_method=default_vector_transport_method(M, typeof(p))`: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to `decorate_state!` for state decorators or `decorate_objective!` for objective decorators, respectively.

**Output**

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see `get_solver_return` for details, especially the `return_state=` keyword.
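A minimal usage sketch (assuming `Manifolds.jl` provides the sphere; the data, cost, and gradient here are illustrative and not part of the solver API): compute the Riemannian center of mass of a few points on $\mathbb S^2$.

```
using Manopt, Manifolds, Random

Random.seed!(42)
M = Sphere(2)
data = [rand(M) for _ in 1:10]

# Riemannian center-of-mass cost and its gradient (illustrative example)
f(M, p) = sum(distance(M, p, q)^2 for q in data) / (2 * length(data))
grad_f(M, p) = -sum(log(M, p, q) for q in data) / length(data)

p0 = rand(M)
p_star = quasi_Newton(M, f, grad_f, p0)

# the same call with a full-matrix BFGS Hessian approximation
# (memory_size < 0 switches from limited memory to the matrix representation)
p_star2 = quasi_Newton(M, f, grad_f, p0; memory_size=-1, direction_update=BFGS())
```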

`Manopt.quasi_Newton!` — Function

```
quasi_Newton(M, f, grad_f, p; kwargs...)
quasi_Newton!(M, f, grad_f, p; kwargs...)
```

Perform a quasi Newton iteration to solve

\[\operatorname{arg\,min}_{p ∈ \mathcal M} f(p)\]

with start point `p` $= p^{(0)}$. The iterations can be done in-place of `p`. The $k$th iteration consists of

- Compute the search direction $η^{(k)} = -\mathcal B_k [\operatorname{grad}f (p^{(k)})]$ or solve $\mathcal H_k [η^{(k)}] = -\operatorname{grad}f (p^{(k)})$.
- Determine a suitable stepsize $α_k$ along the curve $γ(α) = R_{p^{(k)}}(α η^{(k)})$, usually by using `WolfePowellLinesearch`.
- Compute $p^{(k+1)} = R_{p^{(k)}}(α_k η^{(k)})$.
- Define $s_k = \mathcal T_{p^{(k)}, α_k η^{(k)}}(α_k η^{(k)})$ and $y_k = \operatorname{grad}f(p^{(k+1)}) - \mathcal T_{p^{(k)}, α_k η^{(k)}}(\operatorname{grad}f(p^{(k)}))$, where $\mathcal T$ denotes a vector transport.
- Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.

**Input**

- `M::AbstractManifold`: a Riemannian manifold $\mathcal M$
- `f`: a cost function $f: \mathcal M → ℝ$ implemented as `(M, p) -> v`
- `grad_f`: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of $f$ as a function `(M, p) -> X` or a function `(M, X, p) -> X` computing `X` in-place
- `p`: a point on the manifold $\mathcal M$

**Keyword arguments**

- `basis=DefaultOrthonormalBasis()`: basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.
- `cautious_update=false`: whether or not to use the `QuasiNewtonCautiousDirectionUpdate`, which wraps the `direction_update`.
- `cautious_function=(x) -> x * 1e-4`: a monotone increasing function for the cautious update that is zero at $x=0$ and strictly increasing at $0$
- `direction_update=InverseBFGS()`: the `AbstractQuasiNewtonUpdateRule` to use.
- `evaluation=AllocatingEvaluation()`: specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (`AllocatingEvaluation`) or whether they modify their input argument to return the result therein (`InplaceEvaluation`). Since usually the first argument is the manifold, the modified argument is the second. For example `grad_f(M, p)` allocates, but `grad_f!(M, X, p)` computes the result in-place of `X`.
- `initial_operator=initial_scale*Matrix{Float64}(I, n, n)`: initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, where `n=manifold_dimension(M)`. This matrix is only allocated for the full matrix case. See also `initial_scale`.
- `initial_scale=1.0`: scale the initial `s` used in $\frac{s⟨s_k,y_k⟩_{p_k}}{\lVert y_k\rVert_{p_k}}$ within the computation of the limited memory approach. See also `initial_operator`.
- `memory_size=20`: limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory (matrix) representation.
- `nondescent_direction_behavior=:reinitialize_direction_update`: specify how a non-descent direction is handled. This can be
    - `:step_towards_negative_gradient`: the direction is replaced with the negative gradient, a message is stored.
    - `:ignore`: the verification is not performed, so any computed direction is accepted. No message is stored.
    - `:reinitialize_direction_update`: discards the operator state stored in the direction update rules.
    - any other value performs the verification, keeps the direction, but stores a message.
  A stored message can be displayed using `DebugMessages`.
- `project!=copyto!`: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in-place of `Y`, that is `(M, Y, p, X) -> Y`, where `X` and `Y` can be the same memory.
- `retraction_method=default_retraction_method(M, typeof(p))`: a retraction $\operatorname{retr}$ to use, see the section on retractions
- `stepsize=WolfePowellLinesearch(retraction_method, vector_transport_method)`: a functor inheriting from `Stepsize` to determine a step size
- `stopping_criterion=StopAfterIteration(max(1000, memory_size)) | StopWhenGradientNormLess(1e-6)`: a functor indicating that the stopping criterion is fulfilled
- `vector_transport_method=default_vector_transport_method(M, typeof(p))`: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to `decorate_state!` for state decorators or `decorate_objective!` for objective decorators, respectively.

**Output**

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see `get_solver_return` for details, especially the `return_state=` keyword.

## Background

The aim is to minimize a real-valued function on a Riemannian manifold, that is

\[\min f(x), \quad x ∈ \mathcal{M}.\]

Riemannian quasi-Newton methods are generalizations of their Euclidean counterparts and, as such, Riemannian line search methods. These methods determine a search direction $η_k ∈ T_{x_k} \mathcal{M}$ at the current iterate $x_k$ and a suitable stepsize $α_k$ along $\gamma(α) = R_{x_k}(α η_k)$, where $R: T \mathcal{M} →\mathcal{M}$ is a retraction. The next iterate is obtained by

\[x_{k+1} = R_{x_k}(α_k η_k).\]

In quasi-Newton methods, the search direction is given by

\[η_k = -{\mathcal{H}_k}^{-1}[\operatorname{grad}f (x_k)] = -\mathcal{B}_k [\operatorname{grad}f (x_k)],\]

where $\mathcal{H}_k : T_{x_k} \mathcal{M} →T_{x_k} \mathcal{M}$ is a positive definite self-adjoint operator which approximates the action of the Hessian $\operatorname{Hess} f (x_k)[⋅]$, and $\mathcal{B}_k = {\mathcal{H}_k}^{-1}$. The idea of quasi-Newton methods is, instead of creating a completely new approximation of the Hessian operator $\operatorname{Hess} f(x_{k+1})$ or its inverse at every iteration, to update the previous operator $\mathcal{H}_k$ or $\mathcal{B}_k$ by a convenient formula using the information about the curvature of the objective function obtained during the iteration. The resulting operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ acts on the tangent space $T_{x_{k+1}} \mathcal{M}$ of the freshly computed iterate $x_{k+1}$. In order to get a well-defined method, the following requirements are placed on the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ that is created by an update. Since the Hessian $\operatorname{Hess} f(x_{k+1})$ is a self-adjoint operator on the tangent space $T_{x_{k+1}} \mathcal{M}$, and $\mathcal{H}_{k+1}$ approximates it, one requirement is that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is also self-adjoint on $T_{x_{k+1}} \mathcal{M}$. In order to achieve a steady descent, the next requirement is that $η_k$ is a descent direction in each iteration. Hence a further requirement is that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is a positive definite operator on $T_{x_{k+1}} \mathcal{M}$. In order to get information about the curvature of the objective function into the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$, the last requirement is a form of a Riemannian quasi-Newton equation:

\[\mathcal{H}_{k+1} [T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))] = \operatorname{grad}f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(\operatorname{grad}f(x_k))\]

or

\[\mathcal{B}_{k+1} [\operatorname{grad}f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(\operatorname{grad}f(x_k))] = T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))\]

where $T_{x_k \rightarrow x_{k+1}} : T_{x_k} \mathcal{M} →T_{x_{k+1}} \mathcal{M}$ and the chosen retraction $R$ is the associated retraction of $T$. Note that, of course, not all updates in all situations meet these conditions in every iteration. For specific quasi-Newton updates, the fulfilment of the Riemannian curvature condition, which requires that

\[g_{x_{k+1}}(s_k, y_k) > 0\]

holds, is a requirement for the inheritance of the self-adjointness and positive definiteness of the $\mathcal{H}_k$ or $\mathcal{B}_k$ to the operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$. Unfortunately, the fulfilment of the Riemannian curvature condition is not given by a step size $\alpha_k > 0$ that satisfies the generalized Wolfe conditions. However, to create a positive definite operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ in each iteration, the so-called locking condition was introduced in [HGA15], which requires that the isometric vector transport $T^S$, which is used in the update formula, and its associate retraction $R$ fulfil

\[T^{S}_{x, ξ_x}(ξ_x) = β T^{R}_{x, ξ_x}(ξ_x), \quad β = \frac{\lVert ξ_x \rVert_x}{\lVert T^{R}_{x, ξ_x}(ξ_x) \rVert_{R_{x}(ξ_x)}},\]

where $T^R$ is the vector transport by differentiated retraction. With the requirement that the isometric vector transport $T^S$ and its associated retraction $R$ satisfies the locking condition and using the tangent vector

\[y_k = {β_k}^{-1} \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)),\]

where

\[β_k = \frac{\lVert α_k η_k \rVert_{x_k}}{\lVert T^{R}_{x_k, α_k η_k}(α_k η_k) \rVert_{x_{k+1}}},\]

in the update, it can be shown that choosing a stepsize $α_k > 0$ that satisfies the Riemannian Wolfe conditions leads to the fulfilment of the Riemannian curvature condition, which in turn implies that the operator generated by the updates is positive definite. In the following the specific operators are denoted in matrix notation and hence use $H_k$ and $B_k$, respectively.
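To make the curvature condition tangible, the following sketch checks $g_{x_{k+1}}(s_k, y_k) > 0$ numerically for a single step. The manifold, objective, and step size are illustrative placeholders; `retract`, `vector_transport_to`, `project`, and `inner` are the usual `ManifoldsBase.jl` functions.

```
using LinearAlgebra, Manifolds

M = Sphere(2)
A = Diagonal([2.0, 1.0, 0.5])
grad_f(M, p) = project(M, p, 2A * p)   # Riemannian gradient of p ↦ ⟨p, A p⟩

p = normalize([1.0, 1.0, 1.0])
η = -grad_f(M, p)                      # a descent direction
α = 0.25                               # a placeholder step size
q = retract(M, p, α * η)               # x_{k+1} = R_{x_k}(α η)

s = vector_transport_to(M, p, α * η, q)                         # s_k
y = grad_f(M, q) - vector_transport_to(M, p, grad_f(M, p), q)   # y_k

curvature = inner(M, q, s, y)          # g_{x_{k+1}}(s_k, y_k)
curvature > 0 || @warn "Riemannian curvature condition violated" curvature
```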

## Direction updates

There are different ways to compute a fixed `AbstractQuasiNewtonUpdateRule`. In general these are represented by

`Manopt.AbstractQuasiNewtonDirectionUpdate` — Type

`AbstractQuasiNewtonDirectionUpdate`

An abstract representation of a Quasi Newton Update rule to determine the next direction given the current `QuasiNewtonState`.

All subtypes should be functors; they should be callable as `H(M, x, d)` to compute a new direction update.

`Manopt.QuasiNewtonMatrixDirectionUpdate` — Type

`QuasiNewtonMatrixDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate`

The `QuasiNewtonMatrixDirectionUpdate` represents a quasi-Newton update rule where the operator is stored as a matrix. A distinction is made between the update of the approximation of the Hessian, $H_k \mapsto H_{k+1}$, and the update of the approximation of the Hessian inverse, $B_k \mapsto B_{k+1}$. For the first case, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}_{i=1}^{n}$ are determined by solving the linear system of equations

\[\text{Solve} \quad H_k \hat{η}_k = - \widehat{\operatorname{grad}f(x_k)},\]

where $H_k$ is the matrix representing the operator with respect to the basis $\{b_i\}_{i=1}^{n}$ and $\widehat{\operatorname{grad}f(x_k)}$ represents the coordinates of the gradient of the objective function $f$ in $x_k$ with respect to the basis $\{b_i\}_{i=1}^{n}$. If a method is chosen where the Hessian inverse is approximated, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}_{i=1}^{n}$ are obtained simply by matrix-vector multiplication

\[\hat{η}_k = - B_k \widehat{\operatorname{grad}f(x_k)},\]

where $B_k$ is the matrix representing the operator with respect to the basis $\{b_i\}_{i=1}^{n}$ and $\widehat{\operatorname{grad}f(x_k)}$ is as above. In both variants, the search direction $η_k$ is then generated from the coordinates $\hat{η}_k$ and the vectors of the basis $\{b_i\}_{i=1}^{n}$. The `AbstractQuasiNewtonUpdateRule` indicates which quasi-Newton update rule is used. In all of them, the Euclidean update formula is used to generate the matrix $H_{k+1}$ or $B_{k+1}$, and the basis $\{b_i\}_{i=1}^{n}$ is transported into the upcoming tangent space $T_{p_{k+1}} \mathcal M$, preferably with an isometric vector transport, or generated there.

**Provided functors**

- `(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η` to compute the update direction
- `(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η` to compute the update direction in-place of `η`

**Fields**

- `basis`: an `AbstractBasis` to use in the tangent spaces
- `matrix`: the matrix which represents the approximating operator.
- `initial_scale`: when initialising the update, a unit matrix is used as initial approximation, scaled by this factor
- `update`: an `AbstractQuasiNewtonUpdateRule`.
- `vector_transport_method::AbstractVectorTransportMethodP`: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

**Constructor**

```
QuasiNewtonMatrixDirectionUpdate(
    M::AbstractManifold,
    update,
    basis::B=DefaultOrthonormalBasis(),
    m=Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M));
    kwargs...
)
```

**Keyword arguments**

- `initial_scale=1.0`
- `vector_transport_method=default_vector_transport_method(M, typeof(p))`: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Generate the update rule with defaults from a manifold and the names corresponding to the fields.

**See also**

`QuasiNewtonLimitedMemoryDirectionUpdate`, `QuasiNewtonCautiousDirectionUpdate`, `AbstractQuasiNewtonDirectionUpdate`
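As a small sketch of the constructor above (the sphere and the choice of `DFP` are only illustrative), one can set up a matrix-based update and, for example, pass it as the `direction_update` of a `QuasiNewtonState` (see the State section below):

```
using LinearAlgebra, Manopt, Manifolds

M = Sphere(2)
n = manifold_dimension(M)

# a Hessian-approximating (DFP) update stored as an n×n matrix in an
# orthonormal basis of each tangent space, initialized with the identity
du = QuasiNewtonMatrixDirectionUpdate(
    M, DFP(), DefaultOrthonormalBasis(), Matrix{Float64}(I, n, n)
)
```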

`Manopt.QuasiNewtonLimitedMemoryDirectionUpdate` — Type

`QuasiNewtonLimitedMemoryDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate`

This `AbstractQuasiNewtonDirectionUpdate` represents the limited-memory Riemannian BFGS update, where the approximating operator is represented by $m$ stored pairs of tangent vectors $\{\widehat{s}_i\}_{i=k-m}^{k-1}$ and $\{\widehat{y}_i\}_{i=k-m}^{k-1}$ in the $k$-th iteration. For the calculation of the search direction $X_k$, the generalisation of the two-loop recursion is used (see [HGA15]), since it only requires inner products and linear combinations of tangent vectors in $T_{p_k}\mathcal M$. For that, the stored pairs of tangent vectors $\widehat{s}_i, \widehat{y}_i$, the gradient $\operatorname{grad} f(p_k)$ of the objective function $f$ in $p_k$, and the positive definite self-adjoint operator

\[\mathcal{B}^{(0)}_k[⋅] = \frac{g_{p_k}(s_{k-1}, y_{k-1})}{g_{p_k}(y_{k-1}, y_{k-1})} \; \mathrm{id}_{T_{p_k} \mathcal{M}}[⋅]\]

are used. The two-loop recursion can be understood as follows: the `InverseBFGS` update is executed $m$ times in a row on $\mathcal B^{(0)}_k[⋅]$ using the tangent vectors $\widehat{s}_i,\widehat{y}_i$, while at the same time the resulting operator $\mathcal B^{LRBFGS}_k [⋅]$ is directly applied on $\operatorname{grad}f(x_k)$. When updating there are two cases: if there is still free memory, $k < m$, the previously stored vector pairs $\widehat{s}_i,\widehat{y}_i$ have to be transported into the upcoming tangent space $T_{p_{k+1}}\mathcal M$. If there is no free memory, the oldest pair $\widehat{s}_i,\widehat{y}_i$ has to be discarded and then all the remaining vector pairs $\widehat{s}_i,\widehat{y}_i$ are transported into the tangent space $T_{p_{k+1}}\mathcal M$. After that the new values $s_k = \widehat{s}_k = T^{S}_{x_k, α_k η_k}(α_k η_k)$ and $y_k = \widehat{y}_k$ are stored at the beginning. This process ensures that new information about the objective function is always included and the old, probably no longer relevant, information is discarded.
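The following stand-alone sketch illustrates this generalized two-loop recursion. It is not the Manopt internal implementation; it assumes the stored pairs have already been transported to the current tangent space and that tangent vectors are plain arrays.

```
# Sketch of the Riemannian two-loop recursion; memory_s/memory_y are vectors
# of (already transported) tangent vectors at p, X is grad f(p).
function two_loop_direction(M, p, X, memory_s, memory_y)
    m = length(memory_s)
    ρ = [1 / inner(M, p, memory_y[i], memory_s[i]) for i in 1:m]
    ξ = zeros(m)
    q = copy(M, p, X)
    for i in m:-1:1                      # first loop: newest to oldest pair
        ξ[i] = ρ[i] * inner(M, p, memory_s[i], q)
        q = q - ξ[i] * memory_y[i]
    end
    # initial operator B_k^{(0)} = γ id with γ = g(s_{k-1}, y_{k-1}) / g(y_{k-1}, y_{k-1})
    γ = m > 0 ? inner(M, p, memory_s[end], memory_y[end]) /
                inner(M, p, memory_y[end], memory_y[end]) : 1.0
    r = γ * q
    for i in 1:m                         # second loop: oldest to newest pair
        β = ρ[i] * inner(M, p, memory_y[i], r)
        r = r + (ξ[i] - β) * memory_s[i]
    end
    return -r                            # search direction η_k = -B_k[grad f(p_k)]
end
```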

**Provided functors**

- `(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η` to compute the update direction
- `(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η` to compute the update direction in-place of `η`

**Fields**

- `memory_s`: the set of the stored (and transported) search directions times step size $\{\widehat{s}_i\}_{i=k-m}^{k-1}$.
- `memory_y`: set of the stored gradient differences $\{\widehat{y}_i\}_{i=k-m}^{k-1}$.
- `ξ`: a variable used in the two-loop recursion.
- `ρ`: a variable used in the two-loop recursion.
- `initial_scale`: initial scaling of the Hessian
- `vector_transport_method::AbstractVectorTransportMethodP`: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
- `message`: a string containing a potential warning that might have appeared
- `project!`: a function to stabilize the update by projecting on the tangent space

**Constructor**

```
QuasiNewtonLimitedMemoryDirectionUpdate(
    M::AbstractManifold,
    x,
    update::AbstractQuasiNewtonUpdateRule,
    memory_size;
    initial_vector=zero_vector(M, x),
    initial_scale::Real=1.0,
    project!=copyto!
)
```

**See also**

`InverseBFGS`, `QuasiNewtonCautiousDirectionUpdate`, `AbstractQuasiNewtonDirectionUpdate`

`Manopt.QuasiNewtonCautiousDirectionUpdate` — Type

`QuasiNewtonCautiousDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate`

These `AbstractQuasiNewtonDirectionUpdate`s represent any quasi-Newton update rule which is based on the idea of a so-called cautious update. The search direction is calculated as given in `QuasiNewtonMatrixDirectionUpdate` or `QuasiNewtonLimitedMemoryDirectionUpdate`, but the update is then only executed if

\[\frac{g_{x_{k+1}}(y_k,s_k)}{\lVert s_k \rVert^{2}_{x_{k+1}}} ≥ θ(\lVert \operatorname{grad}f(x_k) \rVert_{x_k}),\]

is satisfied, where $θ$ is a monotone increasing function satisfying $θ(0) = 0$ and $θ$ is strictly increasing at $0$. If this is not the case, the corresponding update is skipped, which means that for `QuasiNewtonMatrixDirectionUpdate` the matrix $H_k$ or $B_k$ is not updated. The basis $\{b_i\}^{n}_{i=1}$ is nevertheless transported into the upcoming tangent space $T_{x_{k+1}} \mathcal{M}$, and for `QuasiNewtonLimitedMemoryDirectionUpdate` neither the oldest vector pair $\{ \widetilde{s}_{k−m}, \widetilde{y}_{k−m}\}$ is discarded nor the newest vector pair $\{ \widetilde{s}_{k}, \widetilde{y}_{k}\}$ is added into storage, but all stored vector pairs $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$ are transported into the tangent space $T_{x_{k+1}} \mathcal{M}$. If `BFGS` or `InverseBFGS` is chosen as update, then the resulting method follows the method of [HAG18], taking into account that the corresponding step size is chosen.

**Provided functors**

- `(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η` to compute the update direction
- `(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η` to compute the update direction in-place of `η`

**Fields**

- `update`: an `AbstractQuasiNewtonDirectionUpdate`
- `θ`: a monotone increasing function satisfying $θ(0) = 0$ and $θ$ is strictly increasing at $0$.

**Constructor**

```
QuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonMatrixDirectionUpdate; θ = identity)
QuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonLimitedMemoryDirectionUpdate; θ = identity)
```

Generate a cautious update for either a matrix based or a limited memory based update rule.
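A small construction sketch following the constructors documented here and for `QuasiNewtonLimitedMemoryDirectionUpdate` (manifold, point, memory size, and threshold $θ$ are illustrative choices):

```
using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]

# limited-memory inverse BFGS with 10 stored pairs, wrapped in a cautious update
lm = QuasiNewtonLimitedMemoryDirectionUpdate(M, p, InverseBFGS(), 10)
cautious = QuasiNewtonCautiousDirectionUpdate(lm; θ=x -> 1e-4 * x)
```

Alternatively, the `cautious_update=true` and `cautious_function=` keywords of `quasi_Newton` set this up automatically.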

**See also**

`QuasiNewtonMatrixDirectionUpdate`, `QuasiNewtonLimitedMemoryDirectionUpdate`

`Manopt.initialize_update!` — Function

`initialize_update!(s::AbstractQuasiNewtonDirectionUpdate)`

Initialize direction update. By default no change is made.

`initialize_update!(d::QuasiNewtonLimitedMemoryDirectionUpdate)`

Initialize the limited memory direction update by emptying the memory buffers.

## Hessian update rules

Using

`Manopt.update_hessian!` — Function

`update_hessian!(d::AbstractQuasiNewtonDirectionUpdate, amp, st, p_old, k)`

Update the Hessian within the `QuasiNewtonState` `st`, given an `AbstractManoptProblem` `amp`, an `AbstractQuasiNewtonDirectionUpdate` `d`, and the last iterate `p_old`. Note that the current (`k`th) iterate is already stored in `get_iterate(st)`.

See also `AbstractQuasiNewtonUpdateRule` and its subtypes for the different rules that are available within `d`.

the following update formulae for either $H_{k+1}$ or $B_{k+1}$ are available.

`Manopt.AbstractQuasiNewtonUpdateRule` — Type

`AbstractQuasiNewtonUpdateRule`

Specify a type for the different `AbstractQuasiNewtonDirectionUpdate`s, that is, for a `QuasiNewtonMatrixDirectionUpdate` there are several different updates to the matrix, while for the `QuasiNewtonLimitedMemoryDirectionUpdate` the most prominent is `InverseBFGS`.

`Manopt.BFGS` — Type

`BFGS <: AbstractQuasiNewtonUpdateRule`

indicates in `AbstractQuasiNewtonDirectionUpdate` that the Riemannian BFGS update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{H}_k^\mathrm{BFGS}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{BFGS}_{k+1} = \widetilde{H}^\mathrm{BFGS}_k + \frac{y_k y^{\mathrm{T}}_k }{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{BFGS}_k s_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{BFGS}_k }{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{BFGS}_k s_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.
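Since the matrix-based variants apply this Euclidean formula to the coordinate vectors $s_k$, $y_k$, the update reduces to plain linear algebra; the following generic helper is a minimal sketch of that step (not a Manopt function):

```
using LinearAlgebra

# BFGS update of the Hessian approximation H for coordinate vectors s and y
function bfgs_update(H::AbstractMatrix, s::AbstractVector, y::AbstractVector)
    sy = dot(s, y)           # assumed positive (curvature condition)
    Hs = H * s
    return H + (y * y') / sy - (Hs * Hs') / dot(s, Hs)
end
```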

`Manopt.DFP` — Type

`DFP <: AbstractQuasiNewtonUpdateRule`

indicates in an `AbstractQuasiNewtonDirectionUpdate` that the Riemannian DFP update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{H}_k^\mathrm{DFP}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{DFP}_{k+1} = \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{y_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) \widetilde{H}^\mathrm{DFP}_k \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{s_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

`Manopt.Broyden` — Type

`Broyden <: AbstractQuasiNewtonUpdateRule`

indicates in `AbstractQuasiNewtonDirectionUpdate` that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of `BFGS` and `DFP`.

Denote by $\widetilde{H}_k^\mathrm{Br}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{Br}_{k+1} = \widetilde{H}^\mathrm{Br}_k - \frac{\widetilde{H}^\mathrm{Br}_k s_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} + φ_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k \Bigl( \frac{y_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{Br}_k s_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} \Bigr) \Bigl( \frac{y_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{Br}_k s_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} \Bigr)^{\mathrm{T}}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively, and $φ_k$ is the Broyden factor, which is `:constant` by default but can also be set to `:Davidon`.

**Constructor**

`Broyden(φ, update_rule::Symbol = :constant)`

`Manopt.SR1` — Type

`SR1 <: AbstractQuasiNewtonUpdateRule`

indicates in `AbstractQuasiNewtonDirectionUpdate` that the Riemannian SR1 update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{H}_k^\mathrm{SR1}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{SR1}_{k+1} = \widetilde{H}^\mathrm{SR1}_k + \frac{ (y_k - \widetilde{H}^\mathrm{SR1}_k s_k) (y_k - \widetilde{H}^\mathrm{SR1}_k s_k)^{\mathrm{T}} }{ (y_k - \widetilde{H}^\mathrm{SR1}_k s_k)^{\mathrm{T}} s_k }\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

This method can be stabilized by only performing the update if the denominator is larger than $r\lVert s_k\rVert_{x_{k+1}}\lVert y_k - \widetilde{H}^\mathrm{SR1}_k s_k \rVert_{x_{k+1}}$ for some $r>0$. For more details, see Section 6.2 in [NW06].
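A coordinate-level sketch of the SR1 update with this safeguard (a generic helper, not a Manopt function; `r` plays the role of the constructor argument below):

```
using LinearAlgebra

# SR1 update of H for coordinate vectors s and y, skipped when the
# denominator is too small relative to r (the stabilization from above)
function sr1_update(H::AbstractMatrix, s::AbstractVector, y::AbstractVector; r=1e-8)
    v = y - H * s
    denom = dot(v, s)
    abs(denom) ≤ r * norm(s) * norm(v) && return H   # skip the update
    return H + (v * v') / denom
end
```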

**Constructor**

`SR1(r::Float64=-1.0)`

Generate the `SR1` update.

`Manopt.InverseBFGS` — Type

`InverseBFGS <: AbstractQuasiNewtonUpdateRule`

indicates in `AbstractQuasiNewtonDirectionUpdate` that the inverse Riemannian BFGS update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{B}_k^\mathrm{BFGS}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{BFGS}_{k+1} = \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{s_k y^{\mathrm{T}}_k }{s^{\mathrm{T}}_k y_k} \Bigr) \widetilde{B}^\mathrm{BFGS}_k \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{y_k s^{\mathrm{T}}_k }{s^{\mathrm{T}}_k y_k} \Bigr) + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

`Manopt.InverseDFP` — Type

`InverseDFP <: AbstractQuasiNewtonUpdateRule`

indicates in `AbstractQuasiNewtonDirectionUpdate` that the inverse Riemannian DFP update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{B}_k^\mathrm{DFP}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{DFP}_{k+1} = \widetilde{B}^\mathrm{DFP}_k + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{DFP}_k y_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{DFP}_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{DFP}_k y_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

`Manopt.InverseBroyden` — Type

`InverseBroyden <: AbstractQuasiNewtonUpdateRule`

Indicates in `AbstractQuasiNewtonDirectionUpdate` that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of `InverseBFGS` and `InverseDFP`.

Denote by $\widetilde{B}_k^\mathrm{Br}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{Br}_{k+1} = \widetilde{B}^\mathrm{Br}_k - \frac{\widetilde{B}^\mathrm{Br}_k y_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} + φ_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k \Bigl( \frac{s_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{Br}_k y_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} \Bigr) \Bigl( \frac{s_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{Br}_k y_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} \Bigr)^{\mathrm{T}}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively, and $φ_k$ is the Broyden factor, which is `:constant` by default but can also be set to `:Davidon`.

**Constructor**

`InverseBroyden(φ, update_rule::Symbol = :constant)`

`Manopt.InverseSR1` — Type

`InverseSR1 <: AbstractQuasiNewtonUpdateRule`

indicates in `AbstractQuasiNewtonDirectionUpdate` that the inverse Riemannian SR1 update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{B}_k^\mathrm{SR1}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{SR1}_{k+1} = \widetilde{B}^\mathrm{SR1}_k + \frac{ (s_k - \widetilde{B}^\mathrm{SR1}_k y_k) (s_k - \widetilde{B}^\mathrm{SR1}_k y_k)^{\mathrm{T}} }{ (s_k - \widetilde{B}^\mathrm{SR1}_k y_k)^{\mathrm{T}} y_k }\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from `QuasiNewtonState`) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

This method can be stabilized by only performing the update if the denominator is larger than $r\lVert y_k\rVert_{x_{k+1}}\lVert s_k - \widetilde{B}^\mathrm{SR1}_k y_k \rVert_{x_{k+1}}$ for some $r>0$. For more details, see Section 6.2 in [NW06].

**Constructor**

`InverseSR1(r::Float64=-1.0)`

Generate the `InverseSR1` update.

## State

The quasi Newton algorithm is based on a `DefaultManoptProblem`.

`Manopt.QuasiNewtonState` — Type

`QuasiNewtonState <: AbstractManoptSolverState`

The `AbstractManoptSolverState` represents any quasi-Newton based method and stores all necessary fields.

**Fields**

- `direction_update`: an `AbstractQuasiNewtonDirectionUpdate` rule.
- `η`: the current update direction
- `nondescent_direction_behavior`: a `Symbol` to specify how to handle directions that are not descent directions.
- `nondescent_direction_value`: the value from the last inner product from checking for descent directions
- `p::P`: a point on the manifold $\mathcal M$ storing the current iterate
- `p_old`: the last iterate
- `sk`: the current step
- `yk`: the current gradient difference
- `retraction_method::AbstractRetractionMethod`: a retraction $\operatorname{retr}$ to use, see the section on retractions
- `stepsize::Stepsize`: a functor inheriting from `Stepsize` to determine a step size
- `stop::StoppingCriterion`: a functor indicating that the stopping criterion is fulfilled
- `X::T`: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
- `X_old`: the last gradient

**Constructor**

`QuasiNewtonState(M::AbstractManifold, p; kwargs...)`

Generate the Quasi Newton state on the manifold `M` with start point `p`.

**Keyword arguments**

- `direction_update=QuasiNewtonLimitedMemoryDirectionUpdate(M, p, InverseBFGS(), 20; vector_transport_method=vector_transport_method)`
- `stopping_criterion=StopAfterIteration(1000) | StopWhenGradientNormLess(1e-6)`: a functor indicating that the stopping criterion is fulfilled
- `retraction_method=default_retraction_method(M, typeof(p))`: a retraction $\operatorname{retr}$ to use, see the section on retractions
- `stepsize=default_stepsize(M, QuasiNewtonState)`: a functor inheriting from `Stepsize` to determine a step size
- `vector_transport_method=default_vector_transport_method(M, typeof(p))`: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
- `X=zero_vector(M, p)`: a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector

**See also**

`quasi_Newton`
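Beyond the high-level `quasi_Newton` call, the state can also be used directly with the problem/objective interface mentioned above; a hedged sketch (the cost, gradient, and stopping criterion are illustrative, and the call uses the usual Manopt interface of `ManifoldGradientObjective`, `DefaultManoptProblem`, `solve!`, and `get_iterate`):

```
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[end]^2                                   # illustrative cost
grad_f(M, p) = project(M, p, [0.0, 0.0, 2p[end]])    # its Riemannian gradient
p0 = [0.0, 1 / sqrt(2), 1 / sqrt(2)]

objective = ManifoldGradientObjective(f, grad_f)
problem = DefaultManoptProblem(M, objective)
state = QuasiNewtonState(
    M, copy(M, p0);
    stopping_criterion=StopAfterIteration(100) | StopWhenGradientNormLess(1e-8),
)
solve!(problem, state)
p_star = get_iterate(state)
```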

## Technical details

The `quasi_Newton` solver requires the following functions of a manifold to be available

- A `retract!(M, q, p, X)`; it is recommended to set the `default_retraction_method` to a favourite retraction. If this default is set, a `retraction_method=` does not have to be specified.
- A `vector_transport_to!(M, Y, p, X, q)`; it is recommended to set the `default_vector_transport_method` to a favourite vector transport. If this default is set, a `vector_transport_method=` or `vector_transport_method_dual=` (for $\mathcal N$) does not have to be specified.
- By default quasi Newton uses `ArmijoLinesearch`, which requires `max_stepsize(M)` to be set and an implementation of `inner(M, p, X, Y)`.
- The `norm` as well, to stop when the norm of the gradient is small; but if you implemented `inner`, the norm is provided already.
- A `copyto!(M, q, p)` and `copy(M, p)` for points and similarly `copy(M, p, X)` for tangent vectors.
- By default the tangent vector storing the gradient is initialized calling `zero_vector(M, p)`.

Most Hessian approximations further require `get_coordinates(M, p, X, b)` with respect to the `AbstractBasis` `b` provided, which is `DefaultOrthonormalBasis` by default from the `basis=` keyword.

## Literature

- [HAG18] W. Huang, P.-A. Absil and K. A. Gallivan. *A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems*. SIAM Journal on Optimization **28**, 470–495 (2018).
- [HGA15] W. Huang, K. A. Gallivan and P.-A. Absil. *A Broyden class of quasi-Newton methods for Riemannian optimization*. SIAM Journal on Optimization **25**, 1660–1685 (2015).
- [NW06] J. Nocedal and S. J. Wright. *Numerical Optimization*. 2nd Edition (Springer, New York, 2006).