Frank-Wolfe Method
Manopt.Frank_Wolfe_method — Function

`Frank_Wolfe_method(M, F, grad_F, p)`

Perform the Frank-Wolfe algorithm to compute for $\mathcal C \subset \mathcal M$
\[ \operatorname*{arg\,min}_{p∈\mathcal C} F(p)\]
where the main step is a constrained optimisation within the algorithm, namely the subproblem (oracle)
\[ q_k = \operatorname*{arg\,min}_{q \in \mathcal C} ⟨\operatorname{grad} F(p_k), \log_{p_k}q⟩.\]
for every iterate $p_k$ together with a stepsize $s_k≤1$, by default $s_k = \frac{2}{k+2}$.
The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.
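Schematically, one iteration then reads as follows. This is a sketch in generic Manifolds.jl operations, not the package's implementation; `solve_oracle!` is a hypothetical stand-in for the subproblem solver configured via the `subtask` keyword below.

```julia
using Manifolds

# One Frank-Wolfe step: compute the gradient, query the oracle, and walk
# along the geodesic towards the oracle's answer by the stepsize s_k.
function frank_wolfe_step(M, p, grad_F, solve_oracle!, k)
    X = grad_F(M, p)                      # grad F(p_k)
    q = copy(M, p)
    solve_oracle!(M, q, X, p)             # q_k = argmin_{q ∈ C} ⟨X, log_{p_k} q⟩
    s = 2 / (k + 2)                       # default stepsize s_k
    return shortest_geodesic(M, p, q, s)  # p_{k+1} = γ_{p_k,q_k}(s_k)
end
```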
Keyword Arguments
- `evaluation` – (`AllocatingEvaluation()`) whether `grad_F` is an in-place or allocating (default) function
- `initial_vector` – (`zero_vector(M, p)`) how to initialize the inner gradient tangent vector
- `stopping_criterion` – (`StopAfterIteration(500) | StopWhenGradientNormLess(1.0e-6)`) a stopping criterion
- `subtask` – specify the oracle; this can either be a closed-form solution (an in-place function `oracle(M, q, X, p)`) or a subsolver, e.g. (by default) a `GradientProblem` with `GradientDescentOptions` using the `FrankWolfeCost` and `FrankWolfeGradient`.
- `stepsize` – (`DecreasingStepsize(; length=2.0, shift=2)`) a `Stepsize` to use; it always has to be less than 1. The default is the one proposed by Frank & Wolfe: $s_k = \frac{2}{k+2}$.
All further keywords are passed down to `decorate_options`, e.g. `debug`.
Output
the obtained (approximate) minimizer $x^*$, see `get_solver_return` for details
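As a usage sketch (not taken from the original docs), consider minimizing a quadratic over the unit simplex in $\mathbb R^3$, where the oracle has the classical closed-form solution of picking the vertex with the smallest gradient entry. The mutating subtask signature `(M, q, X, p)` follows the `FrankWolfeOptions` description below and should be checked against your Manopt version.

```julia
using Manopt, Manifolds, LinearAlgebra

M = Euclidean(3)
A = Diagonal([1.0, 2.0, 3.0])
F(M, p) = dot(p, A * p) / 2
grad_F(M, p) = A * p

# Closed-form oracle over the unit simplex: on a Euclidean space
# log_p q = q - p, so argmin_{q ∈ C} ⟨X, log_p q⟩ is attained at the
# vertex e_i with i = argmin_i X_i, written in place of q.
function subtask!(M, q, X, p)
    fill!(q, 0.0)
    q[argmin(X)] = 1.0
    return q
end

p0 = fill(1 / 3, 3)
p_star = Frank_Wolfe_method(M, F, grad_F, p0; subtask=subtask!)
```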
Manopt.Frank_Wolfe_method! — Function

`Frank_Wolfe_method!(M, F, grad_F, q; kwargs...)`

Perform the Frank-Wolfe algorithm in place of `q`; the keyword arguments are the same as for `Frank_Wolfe_method`.

Options
Manopt.FrankWolfeOptions — Type

`FrankWolfeOptions <: Options`

A struct to store the current state of the `Frank_Wolfe_method`.
It comes in two forms, depending on the realisation of the subproblem.
Fields
- `p` – the current iterate, i.e. a point on the manifold
- `X` – the current gradient $\operatorname{grad} F(p)$, i.e. a tangent vector to `p`
- `evaluation` – (`AllocatingEvaluation`) specify the type if it is a function
- `inverse_retraction_method` – (`default_inverse_retraction_method(M)`) an inverse retraction method to use within Frank-Wolfe
- `subtask` – a type representing the subtask (see below)
- `stop` – (`StopAfterIteration(200) | StopWhenGradientNormLess(1.0e-6)`) a `StoppingCriterion`
- `stepsize` – (`DecreasingStepsize(; length=2.0, shift=2)`) $s_k$, which by default is set to $s_k = \frac{2}{k+2}$
- `retraction_method` – (`default_retraction_method(M)`) a retraction to use within Frank-Wolfe
For the subtask, we need a method to solve
\[ \operatorname*{arg\,min}_{q∈\mathcal M} ⟨X, \log_p q⟩,\qquad \text{ where } X=\operatorname{grad} F(p)\]
where currently two variants are supported
- `subtask(M, q, X, p)` is a mutating function, i.e. we have a closed-form solution of the optimization problem given `M`, `X`, and `p`, which is computed in place of `q`; this works correctly even if the same memory is passed for `p` and `q`.
- `subtask::Tuple{<:Problem,<:Options}` specifies a plan to solve the subtask with a subsolver, i.e. the cost within `subtask[1]` is a `FrankWolfeCost` using references to `p` and `X`, that is, to the current iterate and gradient internally; similarly for gradient-based functions using the `FrankWolfeGradient` (see the sketch below this list).
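A hedged sketch of the second variant, reusing `M`, `F`, `grad_F`, and `p0` from the usage example above. The `FrankWolfeCost`/`FrankWolfeGradient` functors are described under Helpers below, while the exact constructor signatures of the sub plan are assumptions to be checked against your Manopt version.

```julia
using Manopt, Manifolds

# Assemble a gradient-descent plan for the subtask (constructor
# signatures are assumptions; check your Manopt version).
X0 = zero_vector(M, p0)
sub_cost = FrankWolfeCost(p0, X0)      # holds references to iterate and gradient
sub_grad = FrankWolfeGradient(p0, X0)
sub_problem = GradientProblem(M, sub_cost, sub_grad)
sub_options = GradientDescentOptions(M, copy(M, p0))
p_star = Frank_Wolfe_method(M, F, grad_F, p0; subtask=(sub_problem, sub_options))
```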
Constructor
`FrankWolfeOptions(M, p, X, subtask)`

where the remaining fields from above are keyword arguments with their defaults already given in brackets.
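For instance, reusing the mutating oracle `subtask!` and the names from the usage sketch above (hedged, since only the positional arguments are documented here):

```julia
X0 = zero_vector(M, p0)
O = FrankWolfeOptions(M, p0, X0, subtask!)  # remaining fields keep their defaults
```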
Helpers
For the inner sub-problem you can easily create the corresponding cost and gradient using
Manopt.FrankWolfeCost — Type

`FrankWolfeCost{P,T}`

A structure to represent the oracle subproblem in the `Frank_Wolfe_method`. The cost function reads
\[F(q) = ⟨X, \log_p q⟩\]
The values `p` and `X` are stored within this functor and should be references to the iterate and gradient from within `FrankWolfeOptions`.
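In generic manifold operations, the value this functor computes amounts to the following sketch (not the literal implementation):

```julia
using Manifolds

# oracle cost: F(q) = ⟨X, log_p q⟩, with the inner product taken at p
fw_cost(M, p, X, q) = inner(M, p, X, log(M, p, q))
```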
Manopt.FrankWolfeGradient — Type

`FrankWolfeGradient{P,T}`

A structure to represent the gradient of the oracle subproblem in the `Frank_Wolfe_method`, that is, for a given point `p` and a tangent vector `X` we have
\[F(q) = ⟨X, \log_p q⟩\]
Its gradient can be computed easily using `adjoint_differential_log_argument`.
The values `p` and `X` are stored within this functor and should be references to the iterate and gradient from within `FrankWolfeOptions`.
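Correspondingly, a sketch of the gradient evaluation via the adjoint differential mentioned above:

```julia
using Manopt, Manifolds

# gradient of q ↦ ⟨X, log_p q⟩ with respect to q
fw_gradient(M, p, X, q) = adjoint_differential_log_argument(M, p, q, X)
```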