Levenberg-Marquardt
Manopt.LevenbergMarquardt
— Function
LevenbergMarquardt(M, F, jacF, x, num_components=-1)
Solve an optimization problem of the form
\[\operatorname{arg\,min}_{p ∈ \mathcal M} \frac{1}{2} \lVert F(p) \rVert^2,\]
where $F\colon\mathcal M \to ℝ^d$ is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Peeters1993]. The implementation follows Algorithm 1 of [Adachi2022].
Input
- M – a manifold $\mathcal M$
- F – a cost function $F: \mathcal M → ℝ^d$
- jacF – the Jacobian of $F$. jacF is supposed to accept a keyword argument basis_domain, which specifies the basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis. (A sketch of such a Jacobian follows this list.)
- x – an initial value $x ∈ \mathcal M$
- num_components – length of the vector returned by the cost function (d). By default its value is -1, which means that it is determined automatically by calling F one additional time. This is only possible when evaluation is AllocatingEvaluation; for a mutating evaluation this value must be specified explicitly.
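A minimal, hedged sketch of how these arguments fit together; the Sphere manifold from Manifolds.jl, the target point q, and the residual $F(p) = p - q$ are illustrative assumptions, not part of this documentation:

```julia
using Manopt, Manifolds

M = Sphere(2)                 # unit sphere S², whose points are 3-vectors
q = [1.0, 2.0, 2.0] / 3       # illustrative data point (already on M)

# residual F: M → ℝ³, the componentwise difference to q in embedding coordinates
F(M, p) = p .- q

# Jacobian of F in an orthonormal tangent basis at p; the differential of
# p ↦ p - q maps a tangent vector to itself (in embedding coordinates),
# so the columns of the Jacobian are simply the basis vectors.
function jacF(M, p; basis_domain::AbstractBasis=DefaultOrthonormalBasis())
    n = manifold_dimension(M)
    return reduce(hcat, [get_vector(M, p, 1.0 .* (1:n .== j), basis_domain) for j in 1:n])
end

p0 = [1.0, 0.0, 0.0]          # initial point on M
p_opt = LevenbergMarquardt(M, F, jacF, p0)   # should approach q
```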
Optional
- evaluation – (AllocatingEvaluation) specify whether the gradient works by allocation (default), i.e. in the form gradF(M, x), or MutatingEvaluation, i.e. in place, in the form gradF!(M, X, x).
- retraction_method – (default_retraction_method(M)) a retraction(M, x, ξ) to use.
- stopping_criterion – (StopWhenAny(StopAfterIteration(200), StopWhenGradientNormLess(1e-12))) a functor inheriting from StoppingCriterion indicating when to stop.

... and the ones that are passed to decorate_options for decorators. A call spelling out these keywords is sketched below.
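Continuing the sketch above (M, F, jacF, and p0 as defined there), the keywords can be spelled out explicitly; the values shown simply repeat the stated defaults and the known residual length of that example:

```julia
p_opt = LevenbergMarquardt(
    M, F, jacF, p0;
    num_components=3,                      # length of F(M, p) for this example
    evaluation=AllocatingEvaluation(),     # F and jacF allocate and return their results
    retraction_method=default_retraction_method(M),
    stopping_criterion=StopWhenAny(
        StopAfterIteration(200),
        StopWhenGradientNormLess(1e-12),
    ),
)
```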
Output
the obtained (approximate) minimizer $x^*$; see get_solver_return for details.
Manopt.LevenbergMarquardt!
— Function
LevenbergMarquardt!(M, F, jacF, x, num_components; kwargs...)
Evaluate the Riemannian Levenberg-Marquardt algorithm in place of x. For more options see LevenbergMarquardt.
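Following the same sketch as above (M, F, and jacF as defined there), the in-place variant overwrites the start point with the result:

```julia
p0 = [1.0, 0.0, 0.0]
LevenbergMarquardt!(M, F, jacF, p0, 3)   # num_components passed explicitly
# p0 now holds the (approximate) minimizer
```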
Options
Manopt.LevenbergMarquardtOptions
— Type
LevenbergMarquardtOptions{P,T} <: AbstractGradientOptions
Describes the options of the gradient-based Riemannian Levenberg-Marquardt algorithm, with
Fields
A default value is given in brackets if a parameter can be left out in initialization.
- x – a point (of type P) on a manifold as starting point
- stop – (StopAfterIteration(200) | StopWhenGradientNormLess(1e-12) | StopWhenStepsizeLess(1e-12)) a StoppingCriterion
- retraction_method – (default_retraction_method(M)) the retraction to use, defaults to the default set for your manifold
- residual_values – value of $F$ calculated in the solver setup or the previous iteration
- residual_values_temp – value of $F$ for the current proposal point
- jacF – the current Jacobian of $F$
- gradient – the current gradient of $F$
- step_vector – the tangent vector at x that is used to move to the next point
- last_stepsize – length of step_vector
- η – parameter of the algorithm; the higher it is, the more likely the algorithm is to reject new proposal points
- damping_term – current value of the damping term
- damping_term_min – initial (and also minimal) value of the damping term
- β – parameter by which the damping term is multiplied when the current new point is rejected (see the sketch of the update rule after this list)
- expect_zero_residual – (false) if true, the algorithm expects that the value of the residual (objective) at the minimum is equal to 0
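The fields jacF, step_vector, damping_term, η, and β fit the usual damped Gauss-Newton scheme. As a hedged sketch in coordinates of the chosen tangent basis (the concrete damping and acceptance rules follow [Adachi2022] and may differ in detail), one iteration solves
\[\bigl(J_k^{\mathrm{T}} J_k + \lambda_k I\bigr)\, s_k = -J_k^{\mathrm{T}} F(p_k), \qquad p_{k+1} = \operatorname{retr}_{p_k}(s_k),\]
where $J_k$ is the Jacobian (jacF), $\lambda_k$ is the damping term (never below damping_term_min), and $s_k$ corresponds to step_vector. If the achieved decrease of $\frac{1}{2}\lVert F \rVert^2$ relative to the predicted decrease falls below the threshold $η$, the proposal point is rejected and the damping term is multiplied by $β$.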
Constructor
LevenbergMarquardtOptions(M, initialX, initial_residual_values, initial_jacF; initial_vector, kwargs...)
Generate Levenberg-Marquardt options.
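A hedged sketch of calling this constructor directly, reusing M and p0 from the sketches above; the container sizes and the use of expect_zero_residual as a keyword argument are assumptions made for illustration:

```julia
# Preallocated containers for d = 3 residual components on M (illustrative sizes)
initial_residual_values = zeros(3)
initial_jacF = zeros(3, manifold_dimension(M))

o = LevenbergMarquardtOptions(
    M, p0, initial_residual_values, initial_jacF;
    expect_zero_residual=true,   # assumed keyword matching the field above
)
```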
See also
LevenbergMarquardt
References
- Adachi2022
S. Adachi, T. Okuno, and A. Takeda, “Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties.” arXiv, Oct. 01, 2022. arXiv: 2210.00253.
- Peeters1993
R. L. M. Peeters, “On a Riemannian version of the Levenberg-Marquardt algorithm,” VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, Serie Research Memoranda 0011, 1993. link: https://econpapers.repec.org/paper/vuawpaper/1993-11.htm.