How to do Constrained Optimization

This tutorial is a short introduction to using solvers for constrained optimisation in Manopt.jl.

Introduction

A constrained optimisation problem is given by

$$\tag{P} \begin{align*} \operatorname*{arg\,min}_{p\in\mathcal M} & f(p)\\ \text{such that} &\quad g(p) \leq 0\\ &\quad h(p) = 0,\\ \end{align*}$$

where $f\colon \mathcal M → ℝ$ is a cost function, and $g\colon \mathcal M → ℝ^m$ and $h\colon \mathcal M → ℝ^n$ are the inequality and equality constraints, respectively. The $\leq$ and $=$ in (P) are meant elementwise.

This can be seen as a balance between moving constraints into the geometry of a manifold $\mathcal M$ and keeping some of them as explicit constraints, since the latter can still be handled well by algorithms; see [BergmannHerzog2019], [LiuBoumal2020] for details.

As usual, let's first load the necessary packages

using Distributions, LinearAlgebra, Manifolds, Manopt, Random, PlutoUI
Random.seed!(42);

In this tutorial we want to look at different ways to specify the problem and their implications. We start by specifying an example problem to illustrate the different available forms.

We will consider the problem of a Nonnegative PCA, cf. Section 5.1.2 in [LiuBoumal2020]:

let $v_0 ∈ ℝ^d$, $\lVert v_0 \rVert=1$, be a given spike signal, that is, a signal that is sparse with only $s=\lfloor δd \rfloor$ nonzero entries. From this signal we observe the data matrix

$$ Z = \sqrt{σ} v_0v_0^{\mathrm{T}}+N,$$

where $\sigma$ is a signal-to-noise ratio and $N$ is a matrix with random entries of zero mean, whose standard deviation is $1/d$ on the off-diagonal and $2/d$ on the diagonal.

d = 150; # dimension of v0
σ = 0.1^2; # SNR
δ = 0.1; s = Int(floor(δ * d)); # Sparsity
S = sample(1:d, s; replace=false);
v0 =  [i ∈ S ? 1 / sqrt(s) : 0.0 for i in 1:d];
N = rand(Normal(0, 1 / d), (d, d)); N[diagind(N, 0)] .= rand(Normal(0, 2 / d), d);
Z = sqrt(σ) * v0 * transpose(v0) + N;

In order to recover $v_0$ we consider the constrained optimisation problem on the sphere $\mathcal S^{d-1}$ given by

$$\begin{align*} \operatorname*{arg\,min}_{p\in\mathcal S^{d-1}} & -p^{\mathrm{T}}Zp\\ \text{such that} &\quad p \geq 0\\ \end{align*}$$

or, in the notation from above, $f(p) = -p^{\mathrm{T}}Zp$ and $g(p) = -p$. We first initialize the manifold under consideration

M = Sphere(d - 1)
Sphere(149, ℝ)

A first Augmented Lagrangian Run

We first define $f$ and $g$ as usual functions

f(M, p) = -transpose(p) * Z * p;
g(M, p) = -p;

Since $f$ is a function defined in the embedding $ℝ^d$ as well, we obtain its (Riemannian) gradient by projecting the Euclidean gradient onto the tangent space.

grad_f(M, p) = project(M, p, -transpose(Z) * p - Z * p);
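For reference, a short side computation of what this projection amounts to: the Euclidean gradient of $f(p) = -p^{\mathrm{T}}Zp$ is $-(Z+Z^{\mathrm{T}})p$, and the projection onto the tangent space $T_p\mathcal S^{d-1} = \{ X ∈ ℝ^d : p^{\mathrm{T}}X = 0 \}$ removes the component in direction $p$, so

$$\operatorname{grad} f(p) = -(Z+Z^{\mathrm{T}})p + \bigl(p^{\mathrm{T}}(Z+Z^{\mathrm{T}})p\bigr)\,p.$$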

For the constraints this is a little more involved, since each component function $g_i(p) = -p_i$ has to return its own gradient. In the embedding these are simply $\operatorname{grad} g_i(p) = -e_i$, where $e_i$ denotes the $i$th unit vector. We can project these onto the tangent space at $p$.
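Explicitly, this projection yields

$$\operatorname{grad} g_i(p) = -e_i + p_i\,p,$$

which is what the following definition computes using project: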

grad_g(M, p) = project.(
    Ref(M), Ref(p), [[i == j ? -1.0 : 0.0 for j in 1:d] for i in 1:d]
);

We further start at a random point:

x0 = random_point(M);

Let's check a few things for the initial point

f(M, x0)
-0.003045353902865068

and check how much the function $g$ is positive, that is, the largest constraint violation

maximum(g(M, x0))
0.22534810458107218
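If one wants a slightly more detailed check than just the maximum, a small helper like the following (purely illustrative, not part of Manopt.jl) counts how many of the $d$ inequality constraints are violated at a point:

violated(M, p) = count(gi -> gi > 0, g(M, p)); # number of positive entries of g(M, p)
violated(M, x0); # how many of the d constraints x0 violates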

Now, as a first method, we can just call the Augmented Lagrangian Method:

with_terminal() do
    @time global v1 = augmented_Lagrangian_method(
    	M, f, grad_f, x0; G=g, gradG=grad_g,
    	debug=[:Iteration, :Cost, :Stop, " | ", :Change, 50, "\n"],
    );
end
Initial F(x): -0.003045 | 
# 50    F(x): -0.113223 | Last Change: 1.052775
# 100   F(x): -0.015272 | Last Change: 1.264730
# 150   F(x): 0.001832 | Last Change: 1.649106
# 200   F(x): -0.009295 | Last Change: 1.892576
# 250   F(x): 0.004567 | Last Change: 1.230164
# 300   F(x): 0.006210 | Last Change: 1.457302
The algorithm reached its maximal number of iterations (300).
 15.222746 seconds (20.78 M allocations: 8.376 GiB, 16.01% gc time, 44.11% compilation time)

Here the solver stopped because it reached its maximal number of iterations: the function value has not decreased compared to the initial point, and the constraints are still clearly violated

f(M, v1)
0.006210188960613741
maximum( g(M, v1) )
0.1672343921248145

A faster Augmented Lagrangian Run

Now this is a little slow, so we can modify two things; we will do both directly here, but one could also change just one of them:

  1. Gradients should be evaluated in place, so for example

grad_f!(M, X, p) = project!(M, X, p, -transpose(Z) * p - Z * p);
  2. The constraints are currently always evaluated all together, since the function grad_g always returns a vector of gradients.

We first change the constraint function into a vector of functions. We further change the gradient into a vector of gradient functions $\operatorname{grad} g_i$, $i=1,\ldots,d$, that are evaluated in place as well.

g2 = [(M, p) -> -p[i] for i in 1:d];
grad_g2! = [
    (M, X, p) -> project!(M, X, p, [i == j ? -1.0 : 0.0 for j in 1:d]) for i in 1:d
];
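As a quick sanity check, and purely for illustration, we can verify that the vector-of-functions form returns the same values as the vector-valued function g from before:

# every component function g2[i] should return the i-th entry of g(M, x0)
@assert all([g2[i](M, x0) for i in 1:d] .== g(M, x0))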
with_terminal() do
    @time global v2 = augmented_Lagrangian_method(
    	M, f, grad_f!, x0; G=g2, gradG=grad_g2!, evaluation=MutatingEvaluation(),
    	debug=[:Iteration, :Cost, :Stop, " | ", :Change, 50, "\n"],
    );
end
Initial F(x): -0.003045 | 
# 50    F(x): -0.113223 | Last Change: 1.052928
# 100   F(x): -0.113197 | Last Change: 0.001192
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (2.1073424255447017e-8) less than 1.0e-6.
  4.988727 seconds (7.04 M allocations: 4.901 GiB, 9.06% gc time, 36.26% compilation time)

As a technical remark: note that (by default) the change to MutatingEvaluation affects both the constrained solver and the inner solver of the subproblem in each iteration.

f(M, v2)
-0.1128757020020641
maximum(g(M, v2))
4.233492020169974e-11

This run actually stopped at the desired tolerance: the cost is now clearly lower, the constraints are fulfilled up to numerical accuracy, and the solver took much less time and far fewer memory allocations.

Exact Penalty Method

As a second solver, we have the Exact Penalty Method, which is currently available with two smoothing techniques, LogarithmicSumOfExponentials and LinearQuadraticHuber. Both replace the constrained problem by a smooth one that an inner solver, by default again quasi-Newton, can handle. We compare both here as well. The first smoothing technique is the default, so we can just call

with_terminal() do
    @time global v3 = exact_penalty_method(
    	M, f, grad_f!, x0; G=g2, gradG=grad_g2!, evaluation=MutatingEvaluation(),
    	debug=[:Iteration, :Cost, :Stop, " | ", :Change, 50, "\n"],
    );
end
Initial F(x): -0.003045 | 
# 50    F(x): -0.112259 | Last Change: 1.032813
# 100   F(x): -0.113220 | Last Change: 0.016477
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (2.580956827951785e-8) less than 1.0e-6.
  4.536010 seconds (5.54 M allocations: 3.653 GiB, 18.63% gc time, 34.17% compilation time)

We obtain a similar cost value as for the Augmented Lagrangian Solver above, but here the constraint is actually fulfilled and not just numerically “on the boundary”.

f(M, v3)
-0.1132206972033172
maximum(g(M, v3))
-3.683154382506838e-6
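To give a rough idea of the smoothing (just a sketch; see [LiuBoumal2020] for the precise formulation used): the exact penalty contains the non-smooth terms $\max\{0, g_i(p)\}$, which the LogarithmicSumOfExponentials technique replaces by the smooth upper bound

$$\max\{0, x\} \approx u \log\bigl(1 + \mathrm{e}^{x/u}\bigr),$$

where the smoothing parameter $u > 0$ is decreased over the iterations, while LinearQuadraticHuber instead uses a piecewise linear-quadratic (Huber-type) surrogate.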

The second smoothing technique is often beneficial when we have a lot of constraints (in the above-mentioned vectorial manner), since we can avoid several gradient evaluations for the constraint functions. This leads to a faster time per iteration.

with_terminal() do
    @time global v4 = exact_penalty_method(
    	M, f, grad_f!, x0; G=g2, gradG=grad_g2!, evaluation=MutatingEvaluation(),
        smoothing=LinearQuadraticHuber(),
    	debug=[:Iteration, :Cost, :Stop, " | ", :Change, 50, "\n"],
    );
end
Initial F(x): -0.003045 | 
# 50    F(x): -0.113226 | Last Change: 0.008583
# 100   F(x): -0.113223 | Last Change: 0.000059
The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
The algorithm performed a step with a change (7.811392797280246e-7) less than 1.0e-6.
  1.733402 seconds (3.05 M allocations: 629.062 MiB, 7.11% gc time, 70.75% compilation time)

For the result we see the same behaviour as for the other smoothing.

f(M, v4)
-0.113223426964504
maximum(g(M, v4))
2.425261981010827e-8

Comparing to the unconstrained solver

We can compare this to the global optimum on the sphere, which is the unconstrained optimisation problem; here we can just use quasi-Newton.

Note that this is much faster, since every iteration of the algorithms above does a quasi-Newton call as well.

with_terminal() do
    @time global w1 = quasi_Newton(
        M, f, grad_f!, x0; evaluation=MutatingEvaluation()
    );
end
  0.541804 seconds (582.90 k allocations: 60.147 MiB, 3.89% gc time, 94.20% compilation time)
f(M, w1)
-0.12892790271025847

But of course the constraints are not fulfilled here, and we have clearly positive entries in $g(w_1)$

maximum(g(M, w1))
0.3325034810089338
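Since the spike signal $v_0$ is known in this synthetic setup, one could additionally compare all candidates to it, for example via the Riemannian distance on the sphere. This is just an illustrative check beyond the comparisons above; note that the rank-one term $v_0v_0^{\mathrm{T}}$ is invariant under the sign of $v_0$, so we take the minimum over $\pm v_0$.

# distance on the sphere between a candidate and the (known) spike signal v0;
# the minimum over ±v0 accounts for the sign ambiguity of the rank-one term
recovery_error(p) = min(distance(M, v0, p), distance(M, -v0, p));
[recovery_error(q) for q in [v1, v2, v3, v4, w1]];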

Literature

BergmannHerzog2019

R. Bergmann, R. Herzog, Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds, In: SIAM Journal on Optimization 29(4), pp. 2423–2444 (2019) doi: 10.1137/18M1181602, arXiv: 1804.06214.

LiuBoumal2020

C. Liu, N. Boumal, Simple Algorithms for Optimization on Riemannian Manifolds with Constraints, In: Applied Mathematics & Optimization 82, pp. 949–981 (2020), doi: 10.1007/s00245-019-09564-3, arXiv: 1901.10000.