Illustration of how to use mutating gradient functions

When it comes to time-critical operations, mutating functions are a main ingredient in Julia, i.e. functions that compute in place without additional memory allocations. In the following we illustrate how to do this with Manopt.jl.
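As a minimal sketch independent of Manopt.jl (the names scale and scale! are made up for illustration), compare an allocating function with its mutating counterpart, which follows the Julia convention of a trailing exclamation mark and writes into a preallocated argument:

# allocating: creates a new output vector on every call
scale(x) = 2 .* x
# mutating: overwrites the preallocated output y, no new allocation
function scale!(y, x)
    y .= 2 .* x # broadcasting assignment reuses the memory of y
    return y
end
y = zeros(3)
scale!(y, rand(3)) # repeated calls can reuse y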

Let's start with the same function as in Get Started: Optimize! and compute the mean of some points, but here we use the sphere $\mathbb S^{30}$ and $n=800$ points.

From the example just mentioned, the implementation looks like this:

using Manopt, Manifolds, Random, BenchmarkTools
begin
    Random.seed!(42)
    m = 30
    M = Sphere(m)
    n = 800
    σ = π / 8
    x = zeros(Float64, m + 1)
    x[2] = 1.0
    data = [exp(M, x, random_tangent(M, x, Val(:Gaussian), σ)) for i in 1:n]
end;

Classical definition

The variant from the previous tutorial defines a cost $F(x)$ and its gradient $\operatorname{grad}F(x)$.
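For the data points $d_1,\ldots,d_n$ this is, as in the Get Started tutorial, the Riemannian center of mass problem, so with the Riemannian distance $d_{\mathcal M}$ and the logarithmic map $\log_x$ the two functions read

$F(x) = \frac{1}{2n}\sum_{i=1}^{n} d_{\mathcal M}^2(x, d_i) \qquad\text{and}\qquad \operatorname{grad}F(x) = -\frac{1}{n}\sum_{i=1}^{n} \log_x d_i.$

In code this is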

F(M, x) = sum(1 / (2 * n) * distance.(Ref(M), Ref(x), data) .^ 2)
F (generic function with 1 method)
gradF(M, x) = sum(1 / n * grad_distance.(Ref(M), data, Ref(x)))
gradF (generic function with 1 method)

We further set the stopping criterion to be a little stricter; then we obtain

begin
    sc = StopWhenGradientNormLess(1e-10)
    x0 = random_point(M)
    m1 = gradient_descent(M, F, gradF, x0; stopping_criterion=sc)
    @benchmark gradient_descent($M, $F, $gradF, $x0; stopping_criterion=$sc)
end
BenchmarkTools.Trial: 385 samples with 1 evaluation.
 Range (min … max):   6.179 ms … 42.132 ms  ┊ GC (min … max):  0.00% … 71.29%
 Time  (median):     12.983 ms              ┊ GC (median):     0.00%
 Time  (mean ± σ):   12.994 ms ±  7.803 ms  ┊ GC (mean ± σ):  17.14% ± 19.97%

  ▃█▅▃        ▅▇▅▂▁                                            
  █████▇▅▄▆█▇▆█████▆▁▇▅▁▁▁▅▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅▁▆▄▅▆▅▅▇▆▄▅▄▄█ ▆
  6.18 ms      Histogram: log(frequency) by time      38.1 ms <

 Memory estimate: 9.57 MiB, allocs estimate: 32694.

In-place computation of the gradient

We can reduce the memory allocations by implementing the gradient as a functor. The motivation is twofold: on the one hand, we want to avoid variables from the global scope, for example the manifold M or the data, being used within the function. For more complicated cost functions it might also be worth considering to do the same.

Here we store the data (by reference) and one temporary tangent vector in order to avoid reallocating memory for every grad_distance computation. We have

begin
    struct grad!{TD,TTMP}
        data::TD
        tmp::TTMP
    end
    function (gradf!::grad!)(M, X, x)
        fill!(X, 0)
        for di in gradf!.data
            grad_distance!(M, gradf!.tmp, di, x)
            X .+= gradf!.tmp
        end
        X ./= length(gradf!.data)
        return X
    end
end
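Before plugging the functor into a solver, we can also evaluate it directly; it follows the mutating signature (M, X, x) and overwrites the tangent vector X with the gradient at x. The following quick check is only a sketch, with made-up variable names:

gradcheck! = grad!(data, similar(data[1]))
X = similar(data[1])   # preallocate storage for the gradient
gradcheck!(M, X, x)    # overwrites X with the gradient at the point x from above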

Then we just have to initialize the gradient functor and perform our final benchmark. Note that we also have to interpolate all variables passed to @benchmark with a $.

begin
    gradF2! = grad!(data, similar(data[1]))
    m2 = deepcopy(x0)
    gradient_descent!(
        M, F, gradF2!, m2; evaluation=MutatingEvaluation(), stopping_criterion=sc
    )
    @benchmark gradient_descent!(
        $M, $F, $gradF2!, m2; evaluation=$(MutatingEvaluation()), stopping_criterion=$sc
    ) setup = (m2 = deepcopy($x0))
end
BenchmarkTools.Trial: 1191 samples with 1 evaluation.
 Range (min … max):  3.428 ms …  16.832 ms  ┊ GC (min … max): 0.00% … 75.06%
 Time  (median):     4.072 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   4.185 ms ± 749.057 μs  ┊ GC (mean ± σ):  0.25% ±  2.17%

          ▂█▇█▅▃▁                                              
  ▂▂▃▃▃▅▆▇███████▆▅▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▁▂▂▁▁▂▂▁▁▁▂▂▁▁▁▁▁▁▁▁▁▁▂▁▂▂ ▃
  3.43 ms         Histogram: frequency by time        6.78 ms <

 Memory estimate: 164.10 KiB, allocs estimate: 561.

Note that the results m1 and m2 are of course (approximately) the same.

distance(M, m1, m2)
3.332000937312528e-8