Frank Wolfe Method

Frank_Wolfe_method(M, f, grad_f, p; kwargs...)
Frank_Wolfe_method(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe algorithm to compute for $\mathcal C \subset \mathcal M$

\[ \operatorname*{arg\,min}_{p∈\mathcal C} f(p)\]

where the main step within the algorithm is a constrained optimisation problem, namely the sub problem (oracle)

\[ q_k = \operatorname*{arg\,min}_{q ∈ \mathcal C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩,\]

for every iterate $p_k$ together with a stepsize $s_k≤1$, by default $s_k = \frac{2}{k+2}$.

The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.
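The iteration described above can be sketched as follows; `oracle` and `frank_wolfe_step` are illustrative names for this sketch and not part of the package API:

```julia
using Manifolds # for log and shortest_geodesic

# One Frank-Wolfe step (illustrative sketch):
# `oracle(M, p, X)` stands in for a solver of
# arg min_{q ∈ C} ⟨X, log_p q⟩ on the constraint set C.
function frank_wolfe_step(M, p, grad_f, oracle, k)
    X = grad_f(M, p)                      # current gradient grad f(p_k)
    q = oracle(M, p, X)                   # solve the sub problem (oracle)
    s = 2 / (k + 2)                       # default stepsize s_k ≤ 1
    return shortest_geodesic(M, p, q, s)  # p_{k+1} = γ_{p_k,q_k}(s_k)
end
```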


Input
  • M – a manifold $\mathcal M$
  • f – a cost function $f: \mathcal M→ℝ$ to find a minimizer $p^*$ for
  • grad_f – the gradient $\operatorname{grad}f: \mathcal M → T\mathcal M$ of f
    • as an allocating function (M, p) -> X or an in-place function (M, X, p) -> X
  • p – an initial value $p ∈ \mathcal C$; note that it has to be a feasible point

Alternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.
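For illustration, a minimal hypothetical call on the sphere, where all of $\mathcal M$ plays the role of $\mathcal C$ and $f$ is the Riemannian mean squared distance to some data points; the data and seed here are made up for this sketch:

```julia
using Manopt, Manifolds, Random

Random.seed!(42)
M = Sphere(2)
data = [rand(M) for _ in 1:10]   # hypothetical data points on the sphere
# mean squared distance cost and its Riemannian gradient
f(M, p) = sum(distance(M, p, q)^2 for q in data) / (2 * length(data))
grad_f(M, p) = -sum(log(M, p, q) for q in data) / length(data)
p0 = copy(M, first(data))        # a feasible initial point
p_star = Frank_Wolfe_method(M, f, grad_f, p0)
```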

Keyword Arguments

  • evaluation – (AllocatingEvaluation) whether grad_f is an in-place or allocating (default) function
  • initial_vector – (zero_vector(M, p)) how to initialize the inner gradient tangent vector
  • stopping_criterion – (StopAfterIteration(500) | StopWhenGradientNormLess(1.0e-6)) a stopping criterion
  • retraction_method – (default_retraction_method(M, typeof(p))) a type of retraction
  • stepsize – (DecreasingStepsize(; length=2.0, shift=2)) a Stepsize to use; it always has to be at most 1. The default is the one proposed by Frank & Wolfe: $s_k = \frac{2}{k+2}$.

All other keyword arguments are passed to decorate_state! for decorators or decorate_objective!, respectively. If you provide the ManifoldGradientObjective directly, these decorations can still be specified.


Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details.

Frank_Wolfe_method!(M, f, grad_f, p; kwargs...)
Frank_Wolfe_method!(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe method in place of p.

For all options and keyword arguments, see Frank_Wolfe_method.



FrankWolfeState <: AbstractManoptSolverState

A struct to store the current state of the Frank_Wolfe_method.

It comes in two forms, depending on the realisation of the subproblem.


Fields
  • p – the current iterate, i.e. a point on the manifold
  • X – the current gradient $\operatorname{grad} F(p)$, i.e. a tangent vector to p.
  • inverse_retraction_method – (default_inverse_retraction_method(M, typeof(p))) an inverse retraction method to use within Frank Wolfe.
  • sub_problem – an AbstractManoptProblem problem for the subsolver
  • sub_state – an AbstractManoptSolverState for the subsolver
  • stop – (StopAfterIteration(200) | StopWhenGradientNormLess(1.0e-6)) a StoppingCriterion
  • stepsize – (DecreasingStepsize(; length=2.0, shift=2)) $s_k$ which by default is set to $s_k = \frac{2}{k+2}$.
  • retraction_method – (default_retraction_method(M, typeof(p))) a retraction to use within Frank-Wolfe

For the subtask, we need a method to solve

\[ \operatorname*{arg\,min}_{q∈\mathcal C} ⟨X, \log_p q⟩,\qquad \text{ where }X=\operatorname{grad} f(p).\]


Constructor
FrankWolfeState(M, p, X, sub_problem, sub_state)

where the remaining fields from above are keyword arguments with their defaults already given in brackets.
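As a sketch of wiring this up, one could build the sub problem from the oracle cost and gradient functors described further below and pass a gradient descent subsolver. The exact constructor signatures (in particular for GradientDescentState) vary between Manopt.jl versions, so treat this as an assumption-laden illustration:

```julia
using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = zero_vector(M, p)
# oracle cost/gradient referencing the current iterate and gradient
sub_objective = ManifoldGradientObjective(FrankWolfeCost(p, X), FrankWolfeGradient(p, X))
sub_problem = DefaultManoptProblem(M, sub_objective)
sub_state = GradientDescentState(M; p=copy(M, p))  # signature is version dependent
fws = FrankWolfeState(M, p, X, sub_problem, sub_state)
```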



For the inner sub-problem you can easily create the corresponding cost and gradient using the following two functors.


FrankWolfeCost{P,T}

A structure to represent the cost of the oracle sub problem in the Frank_Wolfe_method. The cost function reads

\[F(q) = ⟨X, \log_p q⟩\]

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.


FrankWolfeGradient{P,T}

A structure to represent the gradient of the oracle sub problem in the Frank_Wolfe_method, that is for a given point p and a tangent vector X we have

\[F(q) = ⟨X, \log_p q⟩\]

Its gradient can be computed easily using adjoint_differential_log_argument.

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.
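A small sketch of both functors in use; the points below are made up, and the gradient is evaluated via the in-place call signature:

```julia
using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, 1.0, 0.0]                 # stands in for grad f(p)
q = [0.0, 0.0, 1.0]
oracle_cost = FrankWolfeCost(p, X)  # q ↦ ⟨X, log_p q⟩
oracle_grad = FrankWolfeGradient(p, X)
c = oracle_cost(M, q)               # oracle cost value at q
Y = zero_vector(M, q)
oracle_grad(M, Y, q)                # gradient at q, computed in place of Y
```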