Frank-Wolfe Method

Manopt.Frank_Wolfe_method – Function
Frank_Wolfe_method(M, F, grad_F, p)

Perform the Frank-Wolfe algorithm to compute, for $\mathcal C \subset \mathcal M$,

\[ \operatorname*{arg\,min}_{p∈\mathcal C} F(p)\]

where the constrained optimisation is hidden within the subproblem (the oracle)

\[ q_k = \operatorname*{arg\,min}_{q ∈ \mathcal C} ⟨\operatorname{grad} F(p_k), \log_{p_k}q⟩\]

for every iterate $p_k$, together with a stepsize $s_k≤1$, which by default is $s_k = \frac{2}{k+2}$.

The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points, though this can also be changed to use a retraction and its inverse.
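For illustration, one such step could be sketched as follows; this is a minimal example using Manifolds.jl, where the iterate p_k, the oracle result q_k, and the iteration index are made up:

using Manifolds

M = Sphere(2)
p_k = [1.0, 0.0, 0.0]    # hypothetical current iterate
q_k = [0.0, 1.0, 0.0]    # hypothetical oracle solution
k = 1
s_k = 2 / (k + 2)        # default stepsize s_k = 2/(k+2)
# evaluate the shortest geodesic γ_{p_k,q_k} at s_k to obtain p_{k+1}
p_next = shortest_geodesic(M, p_k, q_k, s_k)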

Keyword Arguments

  • evaluation – (AllocatingEvaluation) whether grad_F is an in-place or allocating (default) function
  • initial_vector – (zero_vector(M, p)) how to initialize the inner gradient tangent vector
  • return_options – (false) if activated, the extended result, i.e. the complete Options, are returned. This can be used to access recorded values. If set to false (default) just the optimal value x_opt is returned.
  • stopping_criterion – (StopAfterIteration(500) | StopWhenGradientNormLess(1.0e-6)) a StoppingCriterion to use
  • subtask – specify the oracle; this can either be a closed-form solution (an in-place function oracle(M, q, p, X)) or a subsolver, e.g. (by default) a GradientProblem with GradientDescentOptions using the FrankWolfeCost and FrankWolfeGradient.
  • stepsize – (DecreasingStepsize(; length=2.0, shift=2)) a Stepsize to use; note that it always has to be less than 1. The default is the one proposed by Frank & Wolfe: $s_k = \frac{2}{k+2}$.

All further keywords are passed down to decorate_options, e.g. debug.
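For example, a complete call might look as follows. This is a minimal sketch for the unconstrained case $\mathcal C = \mathcal M$ on the sphere; the cost F, gradient grad_F, and the closed-form oracle oracle! are illustrative and not part of Manopt:

using Manopt, Manifolds

M = Sphere(2)
q0 = [0.0, 0.0, 1.0]
F(M, p) = 0.5 * distance(M, p, q0)^2   # cost: squared distance to q0
grad_F(M, p) = -log(M, p, q0)          # its Riemannian gradient

# closed-form oracle for C = M: ⟨grad F(p), log_p q⟩ is minimal when
# log_p q points along -grad F(p) with maximal length (π on the sphere);
# assumes X is nonzero, i.e. p is not yet a critical point
function oracle!(M, q, p, X)
    exp!(M, q, p, (-π / norm(M, p, X)) * X)
    return q
end

p0 = [1.0, 0.0, 0.0]
p_opt = Frank_Wolfe_method(M, F, grad_F, p0; subtask=oracle!)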

source

Options

Manopt.FrankWolfeOptions – Type
FrankWolfeOptions <: Options

A struct to store the current state of the Frank_Wolfe_method.

It comes in two forms, depending on the realisation of the subproblem.

Fields

  • p – the current iterate, i.e. a point on the manifold
  • X – the current gradient $\operatorname{grad} F(p)$, i.e. a tangent vector to p.
  • evaluation – (AllocatingEvaluation) specify the evaluation type of the subtask if it is a function.
  • inverse_retraction_method – (default_inverse_retraction_method(M)) an inverse retraction method to use within Frank Wolfe.
  • subtask – a type representing the subtask (see below).
  • stop – (StopAfterIteration(200) | StopWhenGradientNormLess(1.0e-6)) a StoppingCriterion
  • stepsize – (DecreasingStepsize(; length=2.0, shift=2)) the stepsize $s_k$ to use, which by default is set to $s_k = \frac{2}{k+2}$.
  • retraction_method – (default_retraction_method(M)) a retraction to use within Frank-Wolfe

For the subtask, we need a method to solve

\[ \operatorname*{arg\,min}_{q∈\mathcal M} ⟨X, \log_p q⟩,\qquad \text{ where }X=\operatorname{grad} F(p)\]

where currently two variants are supported

  1. subtask(M, q, p, X) is a mutating function, i.e. we have a closed-form solution of the optimization problem given M, X and p, which is computed in place of q; this even works correctly if we pass the same memory for p and q.
  2. subtask::Tuple{<:Problem,<:Options} specifies a plan to solve the subtask with a subsolver, i.e. the cost within subtask[1] is a FrankWolfeCost using references to p and X, that is, to the current iterate and gradient internally. Similarly, gradient-based functions use the FrankWolfeGradient; see the sketch below.
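For variant 2, the wiring might look roughly as follows. The FrankWolfeCost and FrankWolfeGradient constructors are assumed to take the (referenced) iterate and gradient, and the completion of the plan to a Tuple{<:Problem,<:Options} is only hinted at in the comments:

using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]     # current iterate (illustrative)
X = [0.0, -1.0, 0.0]    # current gradient at p (illustrative)

# the functors keep references to p and X, so updating the iterate and
# gradient in place is visible to the subsolver without rebuilding it
fw_cost = FrankWolfeCost(p, X)
fw_grad = FrankWolfeGradient(p, X)
sub_problem = GradientProblem(M, fw_cost, fw_grad)
# pairing sub_problem with gradient-descent Options then yields the
# subtask::Tuple{<:Problem,<:Options} described above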

Constructor

FrankWolfeOptions(M, p, X, subtask)

where the remaining fields from above are keyword arguments with their defaults already given in brackets.
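For instance, with a closed-form oracle like the one sketched for Frank_Wolfe_method above, an (illustrative) construction reads:

using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = zero_vector(M, p)   # storage for the gradient, initialized to zero
oracle!(M, q, p, X) = exp!(M, q, p, (-π / norm(M, p, X)) * X)
O = FrankWolfeOptions(M, p, X, oracle!)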

source

Helpers

For the inner subproblem you can easily create the corresponding cost and gradient using

Manopt.FrankWolfeCost – Type
FrankWolfeCost{P,T}

A structure to represent the oracle subproblem in the Frank_Wolfe_method. The cost function reads

\[F(q) = ⟨X, \log_p q⟩\]

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeOptions.
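As a brief sketch, assuming the constructor takes p and X and the functor follows the usual cost signature cost(M, q):

using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, 0.0, 1.0]            # a tangent vector at p (illustrative)
fw_cost = FrankWolfeCost(p, X) # stores (references to) p and X
q = [0.0, 0.0, 1.0]
fw_cost(M, q)                  # evaluates ⟨X, log_p q⟩, here π/2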

source