Checks

If you have computed a gradient or differential and you are not sure whether it is correct, the following functions provide numerical checks.

Manopt.check_Hessian — Function
check_Hessian(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M; vector_at=p); kwargs...)

Check numerically whether the Hessian Hess_f(M, p, X) of f(M, p) is correct.

For this we require either a second-order retraction or a critical point $p$ of $f$.

Given either of these, the check verifies whether

\[f(\operatorname{retr}_p(tX)) = f(p) + t⟨\operatorname{grad} f(p), X⟩ + \frac{t^2}{2}⟨\operatorname{Hess}f(p)[X], X⟩ + \mathcal O(t^3)\]

or in other words, that the error between the function $f$ and its second-order Taylor expansion behaves like $\mathcal O(t^3)$, which indicates that the Hessian is correct, cf. also Section 6.8, Boumal, Cambridge Press, 2023.

Note that if the errors are below the given tolerance and the method is exact, no plot will be generated.

Keyword arguments

  • check_grad – (true) check whether $\operatorname{grad} f(p) \in T_p\mathcal M$.

  • check_linearity – (true) check whether the Hessian is linear, see is_Hessian_linear using a, b, X, and Y

  • check_symmetry – (true) check whether the Hessian is symmetric, see is_Hessian_symmetric

  • check_vector – (false) check whether $\operatorname{Hess} f(p)[X] \in T_p\mathcal M$ using is_vector.

  • mode - (:Default) specify the mode. By default we assume a second-order retraction given by retraction_method=; you can also use this mode if you already have a critical point p. Set to :CriticalPoint to use gradient_descent to find a critical point. Note: this requires (and evaluates) new tangent vectors X and Y.

  • atol, rtol – (same defaults as isapprox) tolerances that are passed down to all checks

  • a, b – two real values to check linearity of the Hessian (if check_linearity=true)

  • exactness_tol - (1e-12) if all errors are below this tolerance, the check is considered to be exact

  • io – (nothing) provide an IO to print the check result to

  • gradient - (grad_f(M, p)) instead of the gradient function you can also provide the gradient at p directly

  • Hessian - (Hess_f(M, p, X)) instead of the Hessian function you can provide the result of $\operatorname{Hess} f(p)[X]$ directly. Note that evaluations of the Hessian might still be necessary for checking linearity and symmetry and/or when using :CriticalPoint mode.

  • limits - ((1e-8,1)) specify the limits in the log_range

  • log_range - (range(limits[1], limits[2]; length=N)) specify the range of points (in log scale) to sample the Hessian line

  • N - (101) number of points to check within the log_range default range $[10^{-8},10^{0}]$

  • plot - (false) whether to plot the resulting check (if Plots.jl is loaded). The plot is in log-log-scale. This is returned and can then also be saved.

  • retraction_method - (default_retraction_method(M, typeof(p))) retraction method to use for the check

  • slope_tol – (0.1) tolerance for the slope (global) of the approximation

  • throw_error - (false) throw an error message if the Hessian is wrong

  • window – (nothing) specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.

The kwargs... are also passed down to the check_vector and the check_gradient call, such that tolerances can easily be set.

While we do pass on check_vector to the inner gradient check, as well as the retraction_method, the gradient check is meant to be a sanity check, so it neither throws an error nor produces a plot itself.
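
For example, a minimal sketch using the Rayleigh quotient on a sphere as the cost; the matrix A as well as the gradient and Hessian formulas below are illustrative assumptions, not part of this function.

    using Manopt, Manifolds, LinearAlgebra

    A = Symmetric([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0])
    M = Sphere(2)

    f(M, p) = p' * A * p                                                  # Rayleigh quotient as cost
    grad_f(M, p) = 2 * (A * p - (p' * A * p) * p)                         # projection of the Euclidean gradient 2Ap
    Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)   # Riemannian Hessian on the sphere

    check_Hessian(M, f, grad_f, Hess_f; io=stdout)   # should report a slope of approximately 3 and return true

Since the default retraction on the sphere is the exponential map, which is second order, the :Default mode applies in this sketch.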

source
Manopt.check_differential — Function
check_differential(M, F, dF, p=rand(M), X=rand(M; vector_at=p); kwargs...)

Check numerically whether the differential dF(M,p,X) of F(M,p) is correct.

This implements the method described in Section 4.8, Boumal, Cambridge Press, 2023.

Note that if the errors are below the given tolerance and the method is exact, no plot will be generated.

Keyword arguments

  • exactness_tol - (1e-12) if all errors are below this tolerance, the check is considered to be exact
  • io – (nothing) provide an IO to print the check result to
  • limits – ((1e-8,1)) specify the limits in the log_range
  • log_range – (range(limits[1], limits[2]; length=N)) specify the range of points (in log scale) to sample the differential line
  • N – (101) number of points to check within the log_range default range $[10^{-8},10^{0}]$
  • name – ("differential") name to display in the check (e.g. if checking differential)
  • plot – (false) whether to plot the resulting check (if Plots.jl is loaded). The plot is in log-log-scale. This is returned and can then also be saved.
  • retraction_method - (default_retraction_method(M, typeof(p))) retraction method to use for the check
  • slope_tol – (0.1) tolerance for the slope (global) of the approximation
  • throw_error - (false) throw an error message if the differential is wrong
  • window – (nothing) specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.
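
For example, a minimal sketch with a quadratic cost on the circle; the cost F and the differential formula dF are illustrative assumptions.

    using Manopt, Manifolds, LinearAlgebra

    A = Symmetric([2.0 1.0; 1.0 3.0])
    M = Sphere(1)

    F(M, p) = p' * A * p
    dF(M, p, X) = 2 * p' * A * X    # differential of F at p in the tangent direction X

    p = [1.0, 0.0]                  # a point on the circle
    X = [0.0, 1.0]                  # a tangent vector at p
    check_differential(M, F, dF, p, X; io=stdout)   # should report a slope of approximately 2 and return true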
source
Manopt.check_gradient — Function
check_gradient(M, F, gradF, p=rand(M), X=rand(M; vector_at=p); kwargs...)

Check numerically whether the gradient gradF(M,p) of F(M,p) is correct, that is whether

\[f(\operatorname{retr}_p(tX)) = f(p) + t⟨\operatorname{grad} f(p), X⟩ + \mathcal O(t^2)\]

or in other words, that the error between the function $f$ and its first-order Taylor expansion behaves like $\mathcal O(t^2)$, which indicates that the gradient is correct, cf. also Section 4.8, Boumal, Cambridge Press, 2023.

Note that if the errors are below the given tolerance and the method is exact, no plot will be generated.

Keyword arguments

  • check_vector – (true) check whether $\operatorname{grad} f(p) \in T_p\mathcal M$ using is_vector.
  • exactness_tol - (1e-12) if all errors are below this tolerance, the check is considered to be exact
  • io – (nothing) provide an IO to print the check result to
  • gradient - (grad_f(M, p)) instead of the gradient function you can also provide the gradient at p directly
  • limits - ((1e-8,1)) specify the limits in the log_range
  • log_range - (range(limits[1], limits[2]; length=N)) specify the range of points (in log scale) to sample the gradient line
  • N - (101) number of points to check within the log_range default range $[10^{-8},10^{0}]$
  • plot - (false) whether to plot the resulting check (if Plots.jl is loaded). The plot is in log-log-scale. This is returned and can then also be saved.
  • retraction_method - (default_retraction_method(M, typeof(p))) retraction method to use for the check
  • slope_tol – (0.1) tolerance for the slope (global) of the approximation
  • atol, rtol – (same defaults as isapprox) tolerances that are passed down to is_vector if check_vector is set to true
  • throw_error - (false) throw an error message if the gradient is wrong
  • window – (nothing) specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.

The kwargs... are also passed down to the check_vector call, such that tolerances can easily be set.
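
For example, a minimal sketch using the Rayleigh quotient on a sphere; the matrix A and the gradient formula are illustrative assumptions.

    using Manopt, Manifolds, LinearAlgebra

    A = Symmetric([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0])
    M = Sphere(2)

    f(M, p) = p' * A * p                            # cost
    grad_f(M, p) = 2 * (A * p - (p' * A * p) * p)   # projection of the Euclidean gradient 2Ap onto the tangent space

    check_gradient(M, f, grad_f; io=stdout)         # should report a slope of approximately 2 and return true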

source
Manopt.find_best_slope_window — Function
(a,b,i,j) = find_best_slope_window(X,Y,window=nothing; slope=2.0, slope_tol=0.1)

Check data X,Y for the largest contiguous interval (window) with a regression line fitting “best”. Among all intervals with a slope within slope_tol of slope, the longest one is taken. If no such interval exists, the one with the slope closest to slope is taken.

If the window is set to nothing (default), all window sizes 2,...,length(X) are checked. You can also specify a window size or an array of window sizes.

For each window size, all its translates in the data are checked. For all these (shifted) windows the regression line is computed (i.e. a, b in a + t*b) and the best fitting line is kept.

From the best line the following data is returned

  • a, b specifying the regression line a + t*b
  • i, j determining the window, i.e. the regression line stems from data X[i], ..., X[j]
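
For example, a minimal sketch on synthetic, exactly linear data with slope 2; the data is purely illustrative.

    using Manopt

    x = collect(range(-8.0, 0.0; length=101))
    y = 0.5 .+ 2.0 .* x        # data lying exactly on a line with slope 2
    a, b, i, j = Manopt.find_best_slope_window(x, y; slope=2.0, slope_tol=0.1)
    # expected: b ≈ 2.0 and the full window (i, j) == (1, 101), since every window already fits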
source
Manopt.is_Hessian_linear — Function
is_Hessian_linear(M, Hess_f, p,
    X=rand(M; vector_at=p), Y=rand(M; vector_at=p), a=randn(), b=randn();
    throw_error=false, io=nothing, kwargs...
)

Check whether the Hessian function Hess_f fulfills linearity, i.e. that

\[\operatorname{Hess} f(p)[aX + bY] = a\operatorname{Hess} f(p)[X] + b\operatorname{Hess} f(p)[Y]\]

which is checked using isapprox and the kwargs... are passed to this function.

Optional Arguments

  • throw_error - (false) throw an error message if the Hessian is wrong
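
For example, a minimal sketch using the Hessian of the Rayleigh quotient on a sphere; the matrix A and the formula for Hess_f are illustrative assumptions.

    using Manopt, Manifolds, LinearAlgebra

    A = Symmetric([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0])
    M = Sphere(2)
    Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)

    p = [1.0, 0.0, 0.0]
    Manopt.is_Hessian_linear(M, Hess_f, p)   # X, Y, a, and b are drawn at random; should return true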
source
Manopt.is_Hessian_symmetric — Function
is_Hessian_symmetric(M, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M; vector_at=p);
    throw_error=false, io=nothing, atol::Real=0, rtol::Real=atol>0 ? 0 : √eps
)

Check whether the Hessian function Hess_f fulfills symmetry, i.e. that

\[⟨\operatorname{Hess} f(p)[X], Y⟩ = ⟨X, \operatorname{Hess} f(p)[Y]⟩\]

which is checked using isapprox and the kwargs... are passed to this function.

Optional Arguments

  • atol, rtol - with the same defaults as the usual isapprox
  • throw_error - (false) throw an error message if the Hessian is wrong
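
For example, a minimal sketch reusing the illustrative Rayleigh-quotient Hessian from above.

    using Manopt, Manifolds, LinearAlgebra

    A = Symmetric([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0])
    M = Sphere(2)
    Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)

    p = [1.0, 0.0, 0.0]
    X = [0.0, 1.0, 0.0]
    Y = [0.0, 0.0, 1.0]
    Manopt.is_Hessian_symmetric(M, Hess_f, p, X, Y)   # should return true, since A is symmetric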
source
Manopt.plot_slope — Method
plot_slope(x, y; slope=2, line_base=0, a=0, b=2.0, i=1,j=length(x))

Plot the result from the error check functions, e.g. check_gradient, check_differential, check_Hessian on data x,y with two comparison lines

  1. line_base + t*slope as the global slope the plot should have
  2. a + b*t on the interval [x[i], x[j]] for some (best fitting) comparison slope
source
Manopt.prepare_check_result — Method
prepare_check_result(log_range, errors, slope)

Given a range of values log_range, where we computed errors, check whether this yields a slope of slope in log-scale

Note that if the errors are below the given tolerance and the method is exact, no plot will be generated.

Keyword arguments

  • exactness_tol - (1e3*eps(eltype(errors))) if all errors are below this tolerance, the check is considered to be exact
  • io – (nothing) provide an IO to print the check result to
  • name – ("differential") name to display in the check (e.g. if checking gradient)
  • plot – (false) whether to plot the resulting check (if Plots.jl is loaded). The plot is in log-log-scale. This is returned and can then also be saved.
  • slope_tol – (0.1) tolerance for the slope (global) of the approximation
  • throw_error - (false) throw an error message if the gradient or Hessian is wrong
source

Literature

  • N. Boumal. An Introduction to Optimization on Smooth Manifolds. Cambridge University Press, 2023.