Riemannian quasi-Newton methods
Manopt.quasi_Newton
— Function

quasi_Newton(M, F, ∇F, x)
Perform a quasi-Newton iteration for F on the manifold M starting in the point x, using a retraction $R$ and a vector transport $T$.
The $k$th iteration consists of
- Compute the search direction $η_k = -\mathcal{B}_k [∇f (x_k)]$ or solve $\mathcal{H}_k [η_k] = -∇f (x_k)$.
- Determine a suitable stepsize $α_k$ along the curve $\gamma(α) = R_{x_k}(α η_k)$, e.g. by using WolfePowellLineseach.
- Compute $x_{k+1} = R_{x_k}(α_k η_k)$.
- Define $s_k = T_{x_k, α_k η_k}(α_k η_k)$ and $y_k = ∇f(x_{k+1}) - T_{x_k, α_k η_k}(∇f(x_k))$.
- Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.
Input
- M – a manifold $\mathcal{M}$.
- F – a cost function $F \colon \mathcal{M} \to ℝ$ to minimize.
- ∇F – the gradient $∇F \colon \mathcal{M} \to T_x\mathcal M$ of $F$.
- x – an initial value $x \in \mathcal{M}$.
Optional
- basis – (DefaultOrthonormalBasis()) basis within the tangent space(s) to represent the Hessian (inverse).
- cautious_update – (false) – whether or not to use a QuasiNewtonCautiousDirectionUpdate.
- cautious_function – ((x) -> x*10^(-4)) – a monotone increasing function that is zero at 0 and strictly increasing at 0 for the cautious update.
- direction_update – (InverseBFGS()) the update rule to use.
- initial_operator – (Matrix{Float64}(I,n,n)) initial matrix to use for the approximation, where n=manifold_dimension(M), see also scale_initial_operator.
- memory_size – (20) limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory representation.
- retraction_method – (ExponentialRetraction()) a retraction method to use, by default the exponential map.
- scale_initial_operator – (true) scale the initial operator with $\frac{⟨s_k,y_k⟩_{x_k}}{\lVert y_k\rVert_{x_k}}$ in the computation.
- step_size – (WolfePowellLineseach(retraction_method, vector_transport_method)) specify a Stepsize.
- stopping_criterion – (StopWhenAny(StopAfterIteration(max(1000, memory_size)), StopWhenGradientNormLess(10^(-6)))) specify a StoppingCriterion.
- vector_transport_method – (ParallelTransport()) a vector transport to use, by default the parallel transport.
- return_options – (false) – specify whether to return just the result x (default) or the complete Options, e.g. to access recorded values.
Output
- x_opt – the resulting (approximately critical) point of the quasi-Newton method

OR

- options – the options returned by the solver (see return_options)
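As a quick usage sketch (assuming Manifolds.jl for the Sphere manifold; the matrix A and the Rayleigh-quotient cost are made up for illustration and not part of this documentation):

```julia
using Manopt, Manifolds, LinearAlgebra

# Illustrative problem: minimize the Rayleigh quotient x'Ax over the unit sphere,
# whose minimizer is an eigenvector to the smallest eigenvalue of A.
n = 50
A = Symmetric(randn(n, n))
M = Sphere(n - 1)

F(x) = x' * A * x
∇F(x) = 2 * (A * x - (x' * A * x) * x)   # Riemannian gradient: projected Euclidean gradient

x0 = normalize(randn(n))                  # a random unit vector as starting point
x_opt = quasi_Newton(M, F, ∇F, x0; memory_size = 20)
```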
Manopt.quasi_Newton!
— Function

quasi_Newton!(M, F, ∇F, x; options...)
Perform a quasi-Newton iteration for F on the manifold M starting in the point x, using a retraction $R$ and a vector transport $T$. The computation is done in place of x.
Input
- M – a manifold $\mathcal{M}$.
- F – a cost function $F \colon \mathcal{M} \to ℝ$ to minimize.
- ∇F – the gradient $∇F \colon \mathcal{M} \to T_x\mathcal M$ of $F$.
- x – an initial value $x \in \mathcal{M}$.
For all optional parameters, see quasi_Newton.
Background
The aim is to minimize a real-valued function on a Riemannian manifold, i.e.

$\min f(x), \quad x ∈ \mathcal{M}.$
Riemannian quasi-Newton methods are, like their Euclidean counterparts, line search methods. These methods determine a search direction $η_k ∈ T_{x_k} \mathcal{M}$ at the current iterate $x_k$ and a suitable stepsize $α_k$ along $\gamma(α) = R_{x_k}(α η_k)$, where $R \colon T \mathcal{M} \to \mathcal{M}$ is a retraction. The next iterate is obtained by

$x_{k+1} = R_{x_k}(α_k η_k).$
In quasi-Newton methods, the search direction is given by

$η_k = -{\mathcal{H}_k}^{-1}[∇f (x_k)] = -\mathcal{B}_k [∇f (x_k)],$
where $\mathcal{H}_k \colon T_{x_k} \mathcal{M} \to T_{x_k} \mathcal{M}$ is a positive definite self-adjoint operator, which approximates the action of the Hessian $\operatorname{Hess} f (x_k)[\cdot]$, and $\mathcal{B}_k = {\mathcal{H}_k}^{-1}$. The idea of quasi-Newton methods is, instead of creating an entirely new approximation of the Hessian operator $\operatorname{Hess} f(x_{k+1})$ or its inverse at every iteration, to update the previous operator $\mathcal{H}_k$ or $\mathcal{B}_k$ by a convenient formula using the information about the curvature of the objective function obtained during the iteration. The resulting operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ acts on the tangent space $T_{x_{k+1}} \mathcal{M}$ of the freshly computed iterate $x_{k+1}$.

In order to get a well-defined method, the following requirements are placed on the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ that is created by an update. Since the Hessian $\operatorname{Hess} f(x_{k+1})$ is a self-adjoint operator on the tangent space $T_{x_{k+1}} \mathcal{M}$, and $\mathcal{H}_{k+1}$ approximates it, we require that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is also self-adjoint on $T_{x_{k+1}} \mathcal{M}$. In order to achieve a steady descent, we want $η_k$ to be a descent direction in each iteration. Therefore we require that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is a positive definite operator on $T_{x_{k+1}} \mathcal{M}$. In order to get information about the curvature of the objective function into the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$, we require that it satisfies a form of a Riemannian quasi-Newton equation:
$\mathcal{H}_{k+1} [T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))] = ∇f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(∇f(x_k))$

or

$\mathcal{B}_{k+1} [∇f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(∇f(x_k))] = T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1})),$
where $T_{x_k \rightarrow x_{k+1}} \colon T_{x_k} \mathcal{M} \to T_{x_{k+1}} \mathcal{M}$ and the chosen retraction $R$ is the associated retraction of $T$. We note that, of course, not all updates in all situations will meet these conditions in every iteration. For specific quasi-Newton updates, the fulfillment of the Riemannian curvature condition, which requires that

$⟨ s_k, y_k ⟩_{x_{k+1}} > 0$

holds, is a requirement for the inheritance of the self-adjointness and positive definiteness of $\mathcal{H}_k$ or $\mathcal{B}_k$ by the operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$. Unfortunately, the fulfillment of the Riemannian curvature condition is not guaranteed by a step size $\alpha_k > 0$ that satisfies the generalised Wolfe conditions. However, in order to create a positive definite operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ in each iteration, the so-called locking condition was introduced in [HuangGallivanAbsil2015], which requires that the isometric vector transport $T^S$ used in the update formula and its associated retraction $R$ fulfill

$T^{S}_{x, ξ_x}(ξ_x) = β T^{R}_{x, ξ_x}(ξ_x), \quad β = \frac{\lVert ξ_x \rVert_x}{\lVert T^{R}_{x, ξ_x}(ξ_x) \rVert_{R_x(ξ_x)}},$
where $T^R$ is the vector transport by differentiated retraction. With the requirement that the isometric vector transport $T^S$ and its associated retraction $R$ satisfy the locking condition, and using the tangent vector

$y_k = {β_k}^{-1} ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)),$

where

$β_k = \frac{\lVert α_k η_k \rVert_{x_k}}{\lVert T^{R}_{x_k, α_k η_k}(α_k η_k) \rVert_{x_{k+1}}},$
in the update, it can be shown that choosing a stepsize $α_k > 0$ that satisfies the Riemannian Wolfe conditions leads to the fulfillment of the Riemannian curvature condition, which in turn implies that the operator generated by the updates is positive definite. In the following we denote the specific operators in matrix notation and hence use $H_k$ and $B_k$, respectively.
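As a one-step illustration of this scheme, the following sketch performs a single quasi-Newton step in the Euclidean special case $\mathcal{M} = ℝ^n$, where the retraction is vector addition and the vector transport is the identity; the helper name and the fixed stepsize α are illustrative only (in the solver, α comes from a line search such as WolfePowellLineseach):

```julia
using LinearAlgebra

# One quasi-Newton step in ℝⁿ: R_x(η) = x + η and T is the identity,
# so s_k and y_k reduce to the familiar Euclidean quantities.
function euclidean_quasi_newton_step(∇f, x, B, α)
    η = -B * ∇f(x)           # search direction η_k = -B_k[∇f(x_k)]
    x_new = x + α * η        # retraction step x_{k+1} = R_{x_k}(α_k η_k)
    s = α * η                # s_k = T_{x_k, α_k η_k}(α_k η_k), identity transport
    y = ∇f(x_new) - ∇f(x)    # y_k = ∇f(x_{k+1}) - T_{x_k, α_k η_k}(∇f(x_k))
    return x_new, s, y       # s, y feed the chosen update of B_k (cf. the update rules below)
end
```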
Direction Updates
There are different ways to compute a fixed AbstractQuasiNewtonUpdateRule. In general these are represented by
Manopt.AbstractQuasiNewtonDirectionUpdate
— Type

AbstractQuasiNewtonDirectionUpdate

An abstract representation of a quasi-Newton update rule to determine the next direction given the current QuasiNewtonOptions
.
All subtypes should be functors, i.e. one should be able to call them as H(M,x,d)
to compute a new direction update.
Manopt.QuasiNewtonMatrixDirectionUpdate
— Type

QuasiNewtonMatrixDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate
These AbstractQuasiNewtonDirectionUpdate
s represent any quasi-Newton update rule, where the operator is stored as a matrix. A distinction is made between the update of the approximation of the Hessian, $H_k \mapsto H_{k+1}$, and the update of the approximation of the Hessian inverse, $B_k \mapsto B_{k+1}$. For the first case, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}^{n}_{i=1}$ are determined by solving a linear system of equations, i.e.

$H_k \hat{η}_k = - \widehat{∇f(x_k)},$

where $H_k$ is the matrix representing the operator with respect to the basis $\{b_i\}^{n}_{i=1}$ and $\widehat{∇f(x_k)}$ represents the coordinates of the gradient of the objective function $f$ in $x_k$ with respect to the basis $\{b_i\}^{n}_{i=1}$. If a method is chosen where the Hessian inverse is approximated, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}^{n}_{i=1}$ are obtained simply by matrix-vector multiplication, i.e.

$\hat{η}_k = - B_k \widehat{∇f(x_k)},$

where $B_k$ is the matrix representing the operator with respect to the basis $\{b_i\}^{n}_{i=1}$ and $\widehat{∇f(x_k)}$ as above. In the end, the search direction $η_k$ is generated from the coordinates $\hat{η}_k$ and the vectors of the basis $\{b_i\}^{n}_{i=1}$ in both variants. The AbstractQuasiNewtonUpdateRule indicates which quasi-Newton update rule is used. In all of them, the Euclidean update formula is used to generate the matrix $H_{k+1}$ or $B_{k+1}$, and the basis $\{b_i\}^{n}_{i=1}$ is transported into the upcoming tangent space $T_{x_{k+1}} \mathcal{M}$, preferably with an isometric vector transport, or generated there.
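As a small numeric illustration of the two coordinate variants (the matrices and vectors below are made up for this sketch):

```julia
using LinearAlgebra

Hk = [2.0 0.0; 0.0 4.0]      # matrix representation of the Hessian approximation H_k
gradf_hat = [1.0, -2.0]      # coordinates of ∇f(x_k) in the basis {b_i}

ηk_hat_from_H = -(Hk \ gradf_hat)   # Hessian variant: solve H_k η̂_k = -∇̂f(x_k)

Bk = inv(Hk)                 # matrix representation of the inverse approximation B_k
ηk_hat_from_B = -(Bk * gradf_hat)   # inverse variant: η̂_k = -B_k ∇̂f(x_k)

ηk_hat_from_H ≈ ηk_hat_from_B       # both variants yield the same coordinates
```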
Fields
- basis – the basis.
- matrix – the matrix which represents the approximating operator.
- scale – indicates whether the initial matrix (= identity matrix) should be scaled before the first update.
- update – an AbstractQuasiNewtonUpdateRule.
- vector_transport_method – an AbstractVectorTransportMethod
See also
QuasiNewtonLimitedMemoryDirectionUpdate
QuasiNewtonCautiousDirectionUpdate
AbstractQuasiNewtonDirectionUpdate
Manopt.QuasiNewtonLimitedMemoryDirectionUpdate
— Type

QuasiNewtonLimitedMemoryDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate
This AbstractQuasiNewtonDirectionUpdate
represents the limited-memory Riemannian BFGS update, where the approximating operator is represented by $m$ stored pairs of tangent vectors $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$ in the $k$-th iteration. For the calculation of the search direction $η_k$, the generalisation of the two-loop recursion is used (see [HuangGallivanAbsil2015]), since it only requires inner products and linear combinations of tangent vectors in $T_{x_k} \mathcal{M}$. For that, the stored pairs of tangent vectors $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$, the gradient $∇f(x_k)$ of the objective function $f$ in $x_k$ and the positive definite self-adjoint operator

$\mathcal{B}^{(0)}_k[\cdot] = \frac{⟨ \widetilde{s}_{k-1}, \widetilde{y}_{k-1} ⟩_{x_k}}{\lVert \widetilde{y}_{k-1} \rVert^{2}_{x_k}} \; \mathrm{id}_{T_{x_k} \mathcal{M}}[\cdot]$

are used. The two-loop recursion can be understood as follows: the InverseBFGS update is executed $m$ times in a row on $\mathcal{B}^{(0)}_k[\cdot]$ using the tangent vectors $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$, and at the same time the resulting operator $\mathcal{B}^{LRBFGS}_k [\cdot]$ is directly applied to $∇f(x_k)$. When updating, there are two cases: if there is still free memory, i.e. $k < m$, the previously stored vector pairs $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$ have to be transported into the upcoming tangent space $T_{x_{k+1}} \mathcal{M}$; if there is no free memory, the oldest pair $\{ \widetilde{s}_{k−m}, \widetilde{y}_{k−m}\}$ has to be discarded and then all the remaining vector pairs $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m+1}^{k-1}$ are transported into the tangent space $T_{x_{k+1}} \mathcal{M}$. After that we calculate and store $s_k = \widetilde{s}_k = T^{S}_{x_k, α_k η_k}(α_k η_k)$ and $y_k = \widetilde{y}_k$. This process ensures that new information about the objective function is always included and old, probably no longer relevant, information is discarded.
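The following is a minimal sketch of the two-loop recursion in the Euclidean special case (identity vector transport, standard inner product); it is illustrative and not Manopt.jl's implementation, which additionally transports the stored pairs between tangent spaces:

```julia
using LinearAlgebra

# memory_s, memory_y: the m stored pairs (oldest first), grad = ∇f(x_k).
function two_loop_direction(grad, memory_s, memory_y)
    m = length(memory_s)
    q = float.(grad)
    ρ = [1 / dot(memory_y[i], memory_s[i]) for i in 1:m]
    ξ = zeros(m)
    for i in m:-1:1                      # first loop: newest to oldest
        ξ[i] = ρ[i] * dot(memory_s[i], q)
        q .-= ξ[i] .* memory_y[i]
    end
    # apply the initial operator B⁰_k = (⟨s_{k-1},y_{k-1}⟩ / ⟨y_{k-1},y_{k-1}⟩) id
    γ = dot(memory_s[end], memory_y[end]) / dot(memory_y[end], memory_y[end])
    r = γ .* q
    for i in 1:m                         # second loop: oldest to newest
        β = ρ[i] * dot(memory_y[i], r)
        r .+= (ξ[i] - β) .* memory_s[i]
    end
    return -r                            # search direction η_k = -B^{LRBFGS}_k[∇f(x_k)]
end
```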
Fields
- method – the maximum number of vector pairs stored.
- memory_s – the set of the stored (and transported) search directions times step size $\{ \widetilde{s}_i\}_{i=k-m}^{k-1}$.
- memory_y – set of the stored gradient differences $\{ \widetilde{y}_i\}_{i=k-m}^{k-1}$.
- ξ – a variable used in the two-loop recursion.
- ρ – a variable used in the two-loop recursion.
- scale –
- vector_transport_method – an AbstractVectorTransportMethod
See also
InverseBFGS
QuasiNewtonCautiousDirectionUpdate
AbstractQuasiNewtonDirectionUpdate
Manopt.QuasiNewtonCautiousDirectionUpdate
— Type

QuasiNewtonCautiousDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate
These AbstractQuasiNewtonDirectionUpdate
s represent any quasi-Newton update rule, which are based on the idea of a so-called cautious update. The search direction is calculated as given in QuasiNewtonMatrixDirectionUpdate or QuasiNewtonLimitedMemoryDirectionUpdate. But the update given in QuasiNewtonMatrixDirectionUpdate or QuasiNewtonLimitedMemoryDirectionUpdate is only executed if

$\frac{⟨ y_k, s_k ⟩_{x_{k+1}}}{\lVert s_k \rVert^{2}_{x_{k+1}}} \geq \theta(\lVert ∇f(x_k) \rVert_{x_k})$

is satisfied, where $\theta$ is a monotone increasing function satisfying $\theta(0) = 0$ and $\theta$ is strictly increasing at $0$. If this is not the case, the corresponding update will be skipped, which means that for QuasiNewtonMatrixDirectionUpdate the matrix $H_k$ or $B_k$ is not updated, but the basis $\{b_i\}^{n}_{i=1}$ is nevertheless transported into the upcoming tangent space $T_{x_{k+1}} \mathcal{M}$, and for QuasiNewtonLimitedMemoryDirectionUpdate neither the oldest vector pair $\{ \widetilde{s}_{k−m}, \widetilde{y}_{k−m}\}$ is discarded nor the newest vector pair $\{ \widetilde{s}_{k}, \widetilde{y}_{k}\}$ is added into storage, but all stored vector pairs $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$ are transported into the tangent space $T_{x_{k+1}} \mathcal{M}$. If InverseBFGS is chosen as update, then the resulting method follows the method of [HuangAbsilGallivan2018], provided that the corresponding step size is chosen.
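In Euclidean coordinates, the cautious test can be sketched in one line, here with the default cautious_function $θ(t) = 10^{-4} t$ (the helper name is made up):

```julia
using LinearAlgebra

# Return true if the cautious condition ⟨y,s⟩/‖s‖² ≥ θ(‖∇f(x_k)‖) holds,
# i.e. if the update should be performed rather than skipped.
cautious_update_ok(s, y, grad; θ = t -> 1e-4 * t) =
    dot(y, s) / dot(s, s) ≥ θ(norm(grad))
```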
Fields
- update – an AbstractQuasiNewtonDirectionUpdate
- θ – a monotone increasing function satisfying $θ(0) = 0$ and $θ$ is strictly increasing at $0$.
See also
QuasiNewtonMatrixDirectionUpdate
QuasiNewtonLimitedMemoryDirectionUpdate
Hessian Update Rules
Using
Manopt.update_hessian!
— Function

update_hessian!(d, p, o, x_old, iter)

update the Hessian within the QuasiNewtonOptions
o
given a Problem
p
as well as an AbstractQuasiNewtonDirectionUpdate
d
and the last iterate x_old. Note that the current (iter-th) iterate is already stored in o.x.
See also AbstractQuasiNewtonUpdateRule
for the different rules that are available within d
.
the following update formulae for either $H_{k+1}$ or $B_{k+1}$ are available.
Manopt.AbstractQuasiNewtonUpdateRule
— Type

AbstractQuasiNewtonUpdateRule
Specify a type for the different AbstractQuasiNewtonDirectionUpdate
s.
Manopt.BFGS
— Type

BFGS <: AbstractQuasiNewtonUpdateRule
indicates in AbstractQuasiNewtonDirectionUpdate
that the Riemannian BFGS update is used in the Riemannian quasi-Newton method.
We denote by $\widetilde{H}_k^\mathrm{BFGS}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$H^\mathrm{BFGS}_{k+1} = \widetilde{H}^\mathrm{BFGS}_k + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{BFGS}_k s_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{BFGS}_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{BFGS}_k s_k},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively.
Manopt.DFP
— Type

DFP <: AbstractQuasiNewtonUpdateRule
indicates in an AbstractQuasiNewtonDirectionUpdate
that the Riemannian DFP update is used in the Riemannian quasi-Newton method.
We denote by $\widetilde{H}_k^\mathrm{DFP}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$H^\mathrm{DFP}_{k+1} = \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{y_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) \widetilde{H}^\mathrm{DFP}_k \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{s_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively.
Manopt.Broyden
— Type

Broyden <: AbstractQuasiNewtonUpdateRule
indicates in AbstractQuasiNewtonDirectionUpdate
that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of BFGS
and DFP
.
We denote by $\widetilde{H}_k^\mathrm{Br}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$H^\mathrm{Br}_{k+1} = \widetilde{H}^\mathrm{Br}_k - \frac{\widetilde{H}^\mathrm{Br}_k s_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} + φ_k \, s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k \Bigl( \frac{y_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{Br}_k s_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} \Bigr) \Bigl( \frac{y_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{Br}_k s_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} \Bigr)^{\mathrm{T}},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively, and $φ_k$ is the Broyden factor which is :constant by default but can also be set to :Davidon.
Constructor
Broyden(φ, update_rule::Symbol = :constant)
Manopt.SR1
— Type

SR1 <: AbstractQuasiNewtonUpdateRule
indicates in AbstractQuasiNewtonDirectionUpdate
that the Riemannian SR1 update is used in the Riemannian quasi-Newton method.
We denote by $\widetilde{H}_k^\mathrm{SR1}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$H^\mathrm{SR1}_{k+1} = \widetilde{H}^\mathrm{SR1}_k + \frac{(y_k - \widetilde{H}^\mathrm{SR1}_k s_k) (y_k - \widetilde{H}^\mathrm{SR1}_k s_k)^{\mathrm{T}}}{(y_k - \widetilde{H}^\mathrm{SR1}_k s_k)^{\mathrm{T}} s_k},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively.
This method can be stabilized by only performing the update if the denominator is larger than $r\lVert s_k\rVert_{x_{k+1}}\lVert y_k - \widetilde{H}^\mathrm{SR1}_k s_k \rVert_{x_{k+1}}$ for some $r>0$. For more details, see Section 6.2 in [NocedalWright2006].
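A sketch of this stabilization test in Euclidean coordinates (illustrative helper, not part of the package API):

```julia
using LinearAlgebra

# Only apply the SR1 update if the denominator (y - Hs)'s is safely away from zero.
sr1_update_ok(H, s, y; r = 1e-8) =
    abs(dot(y - H * s, s)) ≥ r * norm(s) * norm(y - H * s)
```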
Constructor
SR1(r::Float64=-1.0)
Generate the SR1
update, which by default does not include the check (since the default sets $r<0$).
Manopt.InverseBFGS
— Type

InverseBFGS <: AbstractQuasiNewtonUpdateRule
indicates in AbstractQuasiNewtonDirectionUpdate
that the inverse Riemannian BFGS update is used in the Riemannian quasi-Newton method.
We denote by $\widetilde{B}_k^\mathrm{BFGS}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$B^\mathrm{BFGS}_{k+1} = \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{s_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) \widetilde{B}^\mathrm{BFGS}_k \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{y_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively.
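In coordinates, this update can be sketched as follows (the helper name is illustrative; within the solver the update is applied via update_hessian!):

```julia
using LinearAlgebra

# Inverse BFGS update of the matrix B_k (already transported to T_{x_{k+1}}ℳ)
# using the coordinate vectors s_k and y_k.
function inverse_bfgs_update(B, s, y)
    ρ = 1 / dot(s, y)
    V = I - ρ * s * y'
    return V * B * V' + ρ * s * s'
end
```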
Manopt.InverseDFP
— Type

InverseDFP <: AbstractQuasiNewtonUpdateRule
indicates in AbstractQuasiNewtonDirectionUpdate
that the inverse Riemannian DFP update is used in the Riemannian quasi-Newton method.
We denote by $\widetilde{B}_k^\mathrm{DFP}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$B^\mathrm{DFP}_{k+1} = \widetilde{B}^\mathrm{DFP}_k + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{DFP}_k y_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{DFP}_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{DFP}_k y_k},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively.
Manopt.InverseBroyden
— Type

InverseBroyden <: AbstractQuasiNewtonUpdateRule
Indicates in AbstractQuasiNewtonDirectionUpdate
that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of InverseBFGS
and InverseDFP
.
We denote by $\widetilde{B}_k^\mathrm{Br}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$B^\mathrm{Br}_{k+1} = \widetilde{B}^\mathrm{Br}_k - \frac{\widetilde{B}^\mathrm{Br}_k y_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} + φ_k \, y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k \Bigl( \frac{s_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{Br}_k y_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} \Bigr) \Bigl( \frac{s_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{Br}_k y_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} \Bigr)^{\mathrm{T}},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively, and $φ_k$ is the Broyden factor which is :constant by default but can also be set to :Davidon.
Constructor
InverseBroyden(φ, update_rule::Symbol = :constant)
Manopt.InverseSR1
— Type

InverseSR1 <: AbstractQuasiNewtonUpdateRule
indicates in AbstractQuasiNewtonDirectionUpdate
that the inverse Riemannian SR1 update is used in the Riemannian quasi-Newton method.
We denote by $\widetilde{B}_k^\mathrm{SR1}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

$B^\mathrm{SR1}_{k+1} = \widetilde{B}^\mathrm{SR1}_k + \frac{(s_k - \widetilde{B}^\mathrm{SR1}_k y_k) (s_k - \widetilde{B}^\mathrm{SR1}_k y_k)^{\mathrm{T}}}{(s_k - \widetilde{B}^\mathrm{SR1}_k y_k)^{\mathrm{T}} y_k},$

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonOptions) of

$T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad ∇f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(∇f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},$

respectively.
This method can be stabilized by only performing the update if the denominator is larger than $r\lVert y_k\rVert_{x_{k+1}}\lVert s_k - \widetilde{B}^\mathrm{SR1}_k y_k \rVert_{x_{k+1}}$ for some $r>0$. For more details, see Section 6.2 in [NocedalWright2006].
Constructor
InverseSR1(r::Float64=-1.0)
Generate the InverseSR1
update, which by default does not include the check, since the default sets $r<0$.
Options
The quasi-Newton algorithm is based on a GradientProblem.
Manopt.QuasiNewtonOptions
— Type

QuasiNewtonOptions <: Options
These QuasiNewtonOptions represent any quasi-Newton based method and can be used with any update rule for the direction.
Fields
- x – the current iterate, a point on a manifold
- ∇ – the current gradient
- sk – the current step
- yk – the current gradient difference
- direction_update – an AbstractQuasiNewtonDirectionUpdate rule.
- retraction_method – an AbstractRetractionMethod
- stop – a StoppingCriterion
See also
Literature
- HuangGallivanAbsil2015
Huang, Wen and Gallivan, K. A. and Absil, P.-A., A Broyden Class of Quasi-Newton Methods for Riemannian Optimization, SIAM J. Optim., 25 (2015), pp. 1660-1685. doi: 10.1137/140955483
- HuangAbsilGallivan2018
Huang, Wen and Absil, P.-A and Gallivan, Kyle, A Riemannian BFGS Method Without Differentiated Retraction for Nonconvex Optimization Problems, SIAM J. Optim., 28 (2018), pp. 470-495. doi: 10.1137/17M1127582
- NocedalWright2006
Nocedal, J., Wright, S.: Numerical Optimization, Second Edition, Springer, 2006. doi: 10.1007/978-0-387-40065-5