How to print debug output

Ronny Bergmann

This tutorial illustrates how to print debug output. For that we consider an example that includes a subsolver, so that we can also look at its debug capabilities.

The problem itself is hence not the main focus. We consider a nonnegative PCA, which we can write as a constrained problem on the sphere: for a given symmetric matrix Z, minimize f(p) = -pᵀZp/2 over all unit vectors p ∈ 𝕊^{d-1}, subject to p ≥ 0 elementwise.

Let’s first load the necessary packages.

using Manopt, Manifolds, Random, LinearAlgebra
Random.seed!(42);
d = 4
M = Sphere(d - 1)
v0 = project(M, [ones(2)..., zeros(d - 2)...])
Z = v0 * v0'
# Cost and gradient
f(M, p) = -tr(transpose(p) * Z * p) / 2
grad_f(M, p) = project(M, p, -transpose.(Z) * p / 2 - Z * p / 2)
# Constraints
g(M, p) = -p # now p ≥ 0
mI = -Matrix{Float64}(I, d, d)
# Vector of gradients of the constraint components
grad_g(M, p) = [project(M, p, mI[:, i]) for i in 1:d]

Then we can take a starting point

p0 = project(M, [ones(2)..., zeros(d - 3)..., 0.1])

Simple debug output

Any solver accepts the keyword debug=, which in the simplest case can be set to an array of strings, symbols and a number.

  • Strings are printed in every iteration as is (cf. DebugDivider) and should be used to finish the array with a line break.
  • The last number in the array is used with DebugEvery to print the debug only every i-th iteration.
  • Any Symbol is converted into certain debug prints.

Certain symbols starting with a capital letter are mapped to certain prints, for example :Cost is mapped to DebugCost() to print the current cost function value. A full list is provided in the DebugActionFactory. A special keyword is :Stop, which is only added to the final debug hook to print the stopping criterion.

Any symbol with a small letter is mapped to fields of the AbstractManoptSolverState which is used. This way you can easily print internal data, if you know their names.
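Internally, such a lowercase symbol is (roughly speaking) used to look up the equally named field of the solver state. The following pure-Julia sketch illustrates that mechanism with a toy state type; ToyState and debug_entry are hypothetical names for illustration only, not part of Manopt.

```julia
# Toy illustration (not Manopt code): a lowercase symbol like :ϵ is used
# to read the equally named field from the solver state via getfield.
struct ToyState
    ϵ::Float64
    p::Vector{Float64}
end

# hypothetical helper mimicking the lookup a debug action performs
debug_entry(state, field::Symbol) = getfield(state, field)

s = ToyState(1e-3, [1.0, 0.0])
println(debug_entry(s, :ϵ))
```

This is why the symbol has to match a field name of the solver state in use, here the ExactPenaltyMethodState.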

Let’s look at an example first: suppose we want to print the current iteration number, the current cost function value, as well as the value ϵ from the ExactPenaltyMethodState. To keep the amount of output at a reasonable level, we only print the debug every twenty-fifth iteration.

Then we can write

p1 = exact_penalty_method(
    M, f, grad_f, p0; g=g, grad_g=grad_g,
    debug = [:Iteration, :Cost, " | ", (:ϵ,"ϵ: %.8f"), 25, "\n", :Stop]
);
Initial f(x): -0.497512 | ϵ: 0.00100000
# 25    f(x): -0.499449 | ϵ: 0.00017783
# 50    f(x): -0.499996 | ϵ: 0.00003162
# 75    f(x): -0.500000 | ϵ: 0.00000562
# 100   f(x): -0.500000 | ϵ: 0.00000100
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.

Specifying when to print something

While in the last step we specified what to print, this can be extended to also specify when to print it. Currently the following four “places” are available, ordered by when they appear in an algorithm run.

  • :Start to print something at the start of the algorithm. At this place all other (the following) places are “reset” by triggering each of them with the iteration number 0.
  • :BeforeIteration to print something before an iteration starts
  • :Iteration to print something after an iteration. For example, the group of prints from the last code block, [:Iteration, :Cost, " | ", :ϵ, 25], is added to this entry.
  • :Stop to print something when the algorithm stops. In the example, :Stop adds the DebugStoppingCriterion to this place.

Specifying something especially for one of these places is done with a Pair; for example :BeforeIteration => :Iteration would add the display of the iteration number before the iteration is performed.
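The Pair used here is plain Julia syntax; a small standalone illustration of how such a specification can be taken apart:

```julia
# A debug specification as a Julia Pair: place symbol => what to print there
spec = :BeforeIteration => [:Iteration]

place = spec.first     # the place this entry belongs to
actions = spec.second  # the debug entries activated at that place
println(place)
println(actions)
```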

This change does not affect the output of the run. Being more explicit about the places of the previous entries, we could also write

p1 = exact_penalty_method(
    M, f, grad_f, p0; g=g, grad_g=grad_g,
    debug = [
        :BeforeIteration => [:Iteration],
        :Iteration => [:Cost, " | ", :ϵ, "\n"],
        :Stop => DebugStoppingCriterion(),
        25,
    ],
);
Initial f(x): -0.497512 | ϵ: 0.001
# 25    f(x): -0.499449 | ϵ: 0.0001778279410038921
# 50    f(x): -0.499996 | ϵ: 3.1622776601683734e-5
# 75    f(x): -0.500000 | ϵ: 5.623413251903474e-6
# 100   f(x): -0.500000 | ϵ: 1.0e-6
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.

This also illustrates that instead of symbols we can always pass a DebugAction directly, for example when there is a reason to create or configure the action more individually than the default obtained from the symbol. Note that the number (25) means that all entries but :Start and :Stop are only displayed every twenty-fifth iteration.

Subsolver debug

Subsolvers have a sub_kwargs keyword, such that you can pass keywords to the subsolver as well. This works well if you do not plan to change the subsolver. If you do, you can wrap your own solver_state= argument in a decorate_state! and pass a debug= keyword to this function call. Keywords within a keyword have to be passed as pairs (:debug => [...]).

For most debugs there also exists a longer form to specify how to format the print. We want to use this to specify the format for ϵ. This is done by putting the corresponding symbol together with the format string into a tuple like (:ϵ, " | ϵ: %.8f"), where we can already include the divider as well.
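These format strings follow the usual C-style conventions of Julia's Printf standard library; a quick standalone check of what a specifier like %.8f produces:

```julia
using Printf

# The tuple (:ϵ, " | ϵ: %.8f") asks the debug action to render ϵ with
# this format string; Printf.format shows the resulting text directly.
fmt = Printf.Format(" | ϵ: %.8f")
line = Printf.format(fmt, 1e-3)
println(line)  # -> " | ϵ: 0.00100000"
```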

A main problem now is that this debug is issued on every subsolver call or initialisation, as the following run illustrates, which prints just a . per subsolver call:

p3 = exact_penalty_method(
    M, f, grad_f, p0; g=g, grad_g=grad_g,
    debug = ["\n",:Iteration, DebugCost(), (:ϵ," | ϵ: %.8f"), 25, "\n", :Stop],
    sub_kwargs = [:debug => ["."]]
);
Initial f(x): -0.497512 | ϵ: 0.00100000
....................................................................................
# 25    f(x): -0.499449 | ϵ: 0.00017783
.......................................................................
# 50    f(x): -0.499996 | ϵ: 0.00003162
..................................................
# 75    f(x): -0.500000 | ϵ: 0.00000562
..................................................
# 100   f(x): -0.500000 | ϵ: 0.00000100
....The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.

The different lengths of the dotted lines come from the fact that, at least in the beginning, the subsolver performs a few steps, and each subsolver step prints a dot.

For this issue, there is a further symbol (similar to :Stop) to indicate that a debug set belongs to a subsolver: :WhenActive. It introduces a DebugWhenActive that only prints when the outer debug is actually active, in other words when the outer DebugEvery itself prints. Furthermore, we want to print the iteration number before the subsolver steps, so we put this into a Pair, but we can leave the remaining ones as single entries. Finally, we also prefix :Stop with " | " and print the iteration number at the time the subsolver stops. We get

p4 = exact_penalty_method(
    M,
    f,
    grad_f,
    p0;
    g=g,
    grad_g=grad_g,
    debug=[
        :BeforeIteration => [:Iteration, "\n"],
        :Iteration => [DebugCost(), (:ϵ, " | ϵ: %.8f"), "\n"],
        :Stop,
        25,
    ],
    sub_kwargs=[
        :debug => [
            " | ",
            :Iteration,
            :Cost,
            "\n",
            :WhenActive,
            :Stop => [(:Stop, " | "), " | stopped after iteration ", :Iteration, "\n"],
        ],
    ],
);
Initial 
f(x): -0.497512 | ϵ: 0.00100000
 | Initial f(x): -0.387883
 | stopped after iteration Initial 
 | Initial f(x): -0.446785
 | stopped after iteration Initial 
 | Initial f(x): -0.454969
 | stopped after iteration Initial 
 | Initial f(x): -0.461926
 | stopped after iteration Initial 
 | Initial f(x): -0.467845
 | stopped after iteration Initial 
 | Initial f(x): -0.472877
 | stopped after iteration Initial 
 | Initial f(x): -0.477150
 | stopped after iteration Initial 
 | Initial f(x): -0.480773
 | stopped after iteration Initial 
 | Initial f(x): -0.483841
 | stopped after iteration Initial 
 | Initial f(x): -0.486434
 | stopped after iteration Initial 
 | Initial f(x): -0.488624
 | stopped after iteration Initial 
 | Initial f(x): -0.490479
 | stopped after iteration Initial 
 | Initial f(x): -0.492034
 | stopped after iteration Initial 
 | Initial f(x): -0.493342
 | stopped after iteration Initial 
 | Initial f(x): -0.494440
 | stopped after iteration Initial 
 | Initial f(x): -0.495362
 | stopped after iteration Initial 
 | Initial f(x): -0.496134
 | stopped after iteration Initial 
 | Initial f(x): -0.496781
 | stopped after iteration Initial 
 | Initial f(x): -0.497321
 | stopped after iteration Initial 
 | Initial f(x): -0.497773
 | stopped after iteration Initial 
 | Initial f(x): -0.498150
 | stopped after iteration Initial 
 | Initial f(x): -0.498464
 | stopped after iteration Initial 
 | Initial f(x): -0.498725
 | stopped after iteration Initial 
 | Initial f(x): -0.498944
 | stopped after iteration Initial 
 | # 1     f(x): -0.498969
 | The algorithm reached approximately critical point after 1 iterations; the gradient norm (3.4995246389869776e-5) is less than 0.001.
 | stopped after iteration # 1     
# 25    
 | Initial f(x): -0.499126
 | stopped after iteration Initial 
f(x): -0.499449 | ϵ: 0.00017783
 | Initial f(x): -0.499277
 | stopped after iteration Initial 
 | Initial f(x): -0.499403
 | stopped after iteration Initial 
 | Initial f(x): -0.499507
 | stopped after iteration Initial 
 | Initial f(x): -0.499593
 | stopped after iteration Initial 
 | Initial f(x): -0.499662
 | stopped after iteration Initial 
 | Initial f(x): -0.499721
 | stopped after iteration Initial 
 | Initial f(x): -0.499770
 | stopped after iteration Initial 
 | Initial f(x): -0.499811
 | stopped after iteration Initial 
 | Initial f(x): -0.499844
 | stopped after iteration Initial 
 | Initial f(x): -0.499871
 | stopped after iteration Initial 
 | Initial f(x): -0.499894
 | stopped after iteration Initial 
 | Initial f(x): -0.499913
 | stopped after iteration Initial 
 | Initial f(x): -0.499928
 | stopped after iteration Initial 
 | Initial f(x): -0.499941
 | stopped after iteration Initial 
 | Initial f(x): -0.499952
 | stopped after iteration Initial 
 | Initial f(x): -0.499960
 | stopped after iteration Initial 
 | Initial f(x): -0.499967
 | stopped after iteration Initial 
 | Initial f(x): -0.499973
 | stopped after iteration Initial 
 | Initial f(x): -0.499978
 | stopped after iteration Initial 
 | Initial f(x): -0.499982
 | stopped after iteration Initial 
 | Initial f(x): -0.499985
 | stopped after iteration Initial 
 | Initial f(x): -0.499988
 | stopped after iteration Initial 
 | Initial f(x): -0.499990
 | stopped after iteration Initial 
 | Initial f(x): -0.499992
 | stopped after iteration Initial 
 | # 1     f(x): -0.499992
 | # 2     f(x): -0.499992
 | The algorithm reached approximately critical point after 2 iterations; the gradient norm (0.00027436723916614346) is less than 0.001.
 | stopped after iteration # 2     
# 50    
 | Initial f(x): -0.499993
 | stopped after iteration Initial 
f(x): -0.499996 | ϵ: 0.00003162
 | Initial f(x): -0.499995
 | stopped after iteration Initial 
 | Initial f(x): -0.499996
 | stopped after iteration Initial 
 | Initial f(x): -0.499997
 | stopped after iteration Initial 
 | Initial f(x): -0.499997
 | stopped after iteration Initial 
 | Initial f(x): -0.499998
 | stopped after iteration Initial 
 | Initial f(x): -0.499998
 | stopped after iteration Initial 
 | Initial f(x): -0.499998
 | stopped after iteration Initial 
 | Initial f(x): -0.499999
 | stopped after iteration Initial 
 | Initial f(x): -0.499999
 | stopped after iteration Initial 
 | Initial f(x): -0.499999
 | stopped after iteration Initial 
 | Initial f(x): -0.499999
 | stopped after iteration Initial 
 | Initial f(x): -0.499999
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | # 1     f(x): -0.500000
 | The algorithm reached approximately critical point after 1 iterations; the gradient norm (5.000404403277298e-6) is less than 0.001.
 | stopped after iteration # 1     
# 75    
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
f(x): -0.500000 | ϵ: 0.00000562
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | # 1     f(x): -0.500000
 | The algorithm reached approximately critical point after 1 iterations; the gradient norm (4.202215558182483e-6) is less than 0.001.
 | stopped after iteration # 1     
# 100   
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
f(x): -0.500000 | ϵ: 0.00000100
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
 | Initial f(x): -0.500000
 | stopped after iteration Initial 
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.

where we now see that the subsolver mostly requires only one step. Note that since the debug of an iteration happens after the step, we see the subsolver run before the debug of that iteration's number.

Advanced debug output

There are two more advanced variants that can be used. The first is a tuple of a symbol and a string, where the string is used as the format string, which most DebugActions accept. The second is to directly provide a DebugAction.

We can for example change the way ϵ is printed by adding a format string, and use DebugCost(), which is equivalent to using :Cost. Especially with the format change, the lines are now more consistent in length.

p2 = exact_penalty_method(
    M, f, grad_f, p0; g=g, grad_g=grad_g,
    debug = [:Iteration, DebugCost(), (:ϵ," | ϵ: %.8f"), 25, "\n", :Stop]
);
Initial f(x): -0.497512 | ϵ: 0.00100000
# 25    f(x): -0.499449 | ϵ: 0.00017783
# 50    f(x): -0.499996 | ϵ: 0.00003162
# 75    f(x): -0.500000 | ϵ: 0.00000562
# 100   f(x): -0.500000 | ϵ: 0.00000100
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.

You can also write your own DebugAction functor, where the function to implement has the same signature as the step function: it takes an AbstractManoptProblem, an AbstractManoptSolverState, and the current iteration number. For example, the already mentioned DebugDivider(s) is given as

mutable struct DebugDivider{TIO<:IO} <: DebugAction
    io::TIO
    divider::String
    DebugDivider(divider=" | "; io::IO=stdout) = new{typeof(io)}(io, divider)
end
function (d::DebugDivider)(::AbstractManoptProblem, ::AbstractManoptSolverState, i::Int)
    (i >= 0) && (!isempty(d.divider)) && (print(d.io, d.divider))
    return nothing
end

Of course you could also implement an action like this specifically for your own problem or state.
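The functor mechanism itself is plain Julia (a struct with a call method); the following self-contained sketch, without any Manopt types, shows the same pattern writing to an IOBuffer so the output can be inspected. ToyDivider is a made-up name for illustration only.

```julia
# Plain-Julia version of the divider pattern above: a struct carrying data
# plus a call method; negative iteration numbers deactivate the print,
# mirroring the (i >= 0) guard in DebugDivider.
mutable struct ToyDivider
    io::IO
    divider::String
end
function (d::ToyDivider)(i::Int)
    (i >= 0) && !isempty(d.divider) && print(d.io, d.divider)
    return nothing
end

buf = IOBuffer()
d = ToyDivider(buf, " | ")
d(0); d(1); d(-1)   # the call with -1 prints nothing
println(String(take!(buf)))
```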

Technical details

This tutorial is cached. It was last run on the following package versions.

using Pkg
Pkg.status()
Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`
  [6e4b80f9] BenchmarkTools v1.5.0
  [5ae59095] Colors v0.12.11
  [31c24e10] Distributions v0.25.109
  [26cc04aa] FiniteDifferences v0.12.32
  [7073ff75] IJulia v1.24.2
  [8ac3fa9e] LRUCache v1.6.1
  [af67fdf4] ManifoldDiff v0.3.10
  [1cead3c2] Manifolds v0.9.18
  [3362f125] ManifoldsBase v0.15.10
  [0fc0a36d] Manopt v0.4.64 `~/work/Manopt.jl/Manopt.jl`
  [91a5bcdd] Plots v1.40.4
using Dates
now()
2024-06-08T17:03:18.750