Frank-Wolfe method

Manopt.Frank_Wolfe_method (Function)
Frank_Wolfe_method(M, f, grad_f, p=rand(M))
Frank_Wolfe_method(M, gradient_objective, p=rand(M); kwargs...)
Frank_Wolfe_method!(M, f, grad_f, p; kwargs...)
Frank_Wolfe_method!(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe algorithm to solve, for $\mathcal C ⊂ \mathcal M$, the constrained problem

\[ \operatorname*{arg\,min}_{p∈\mathcal C} f(p),\]

where the main step within the algorithm is itself a constrained optimisation, namely the sub problem (oracle)

\[ \operatorname*{arg\,min}_{q ∈ \mathcal C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩,\]

for every iterate $p_k$, together with a stepsize $s_k≤1$. The algorithm can be performed in-place of p.

This algorithm is inspired by but slightly more general than [WS22].

The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.
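
For example, the following minimal sketch minimises half the squared Riemannian distance to a made-up target point on the sphere; here $\mathcal C = \mathcal M$ purely for illustration, so the default gradient descent sub solver handles the linearised oracle (assuming Manifolds.jl provides the manifold):

using Manopt, Manifolds

M = Sphere(2)                      # the unit sphere S² ⊂ ℝ³
q = [0.0, 0.0, 1.0]                # hypothetical target point
f(M, p) = distance(M, p, q)^2 / 2  # cost to minimise
grad_f(M, p) = -log(M, p, q)       # its Riemannian gradient
p0 = [1.0, 0.0, 0.0]
p_star = Frank_Wolfe_method(M, f, grad_f, p0)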

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Alternatively, instead of f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.
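
For instance, a short sketch reusing f, grad_f, and p0 from above:

obj = ManifoldGradientObjective(f, grad_f)
p_star = Frank_Wolfe_method(M, obj, p0)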

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.

  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

  • stepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size; these defaults yield the classical Frank-Wolfe step size $s_k = \frac{2}{k+2}$

  • stopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6): a functor indicating that the stopping criterion is fulfilled

  • sub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.

  • sub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.

  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, to decorate_state! of the sub solver's state, and to the sub state constructor itself.

  • sub_objective=ManifoldGradientObjective(sub_cost, sub_grad): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.

  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.

  • sub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate

  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.
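
As a sketch of how these keywords combine (the particular values are illustrative only, using f, grad_f, and p0 from the sketch above):

p_star = Frank_Wolfe_method(M, f, grad_f, p0;
    stepsize=DecreasingStepsize(; length=2.0, shift=2),
    stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(1e-8),
)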

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source
Manopt.Frank_Wolfe_method! (Function)
Frank_Wolfe_method!(M, f, grad_f, p; kwargs...)
Frank_Wolfe_method!(M, gradient_objective, p; kwargs...)

The in-place variant of Frank_Wolfe_method: it performs the algorithm in-place of p and shares the signatures, keyword arguments, and behaviour documented above.

source

State

Manopt.FrankWolfeState (Type)
FrankWolfeState <: AbstractManoptSolverState

A struct to store the current state of the Frank_Wolfe_method.

It comes in two forms, depending on the realisation of the sub problem.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions

The sub task requires a method to solve

\[ \operatorname*{arg\,min}_{q ∈ \mathcal C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩.\]

Constructor

FrankWolfeState(M, sub_problem, sub_state; kwargs...)

Initialise the Frank-Wolfe method state.

FrankWolfeState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)

Initialise the Frank-Wolfe method state, where sub_problem is a closed form solution with evaluation as type of evaluation.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • sub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

The remaining fields listed above can be set as keyword arguments.
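
A hedged sketch of constructing the state manually, mirroring the defaults documented for Frank_Wolfe_method above; FrankWolfeCost and FrankWolfeGradient are the helpers described below:

p = rand(M); X = zero_vector(M, p)
sub_objective = ManifoldGradientObjective(FrankWolfeCost(p, X), FrankWolfeGradient(p, X))
sub_problem = DefaultManoptProblem(M, sub_objective)
sub_state = GradientDescentState(M, copy(M, p))
fws = FrankWolfeState(M, sub_problem, sub_state)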

source

Helpers

For the inner sub-problem you can easily create the corresponding cost and gradient using

Manopt.FrankWolfeCost (Type)
FrankWolfeCost{P,T}

A structure to represent the cost of the oracle sub problem in the Frank_Wolfe_method. The cost function reads

\[F(q) = ⟨X, \log_p q⟩\]

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.

source
Manopt.FrankWolfeGradient (Type)
FrankWolfeGradient{P,T}

A structure to represent the gradient of the oracle sub problem in the Frank_Wolfe_method, that is, for a given point p and a tangent vector X the function reads

\[F(q) = ⟨X, \log_p q⟩\]

Its gradient can be computed easily using adjoint_differential_log_argument.

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.

source
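
For example, a small sketch of building and evaluating the two functors. In practice p and X should reference the iterate and gradient stored in the FrankWolfeState; the in-place gradient call (M, Y, q) is assumed here to follow the gradient convention stated above:

p = rand(M); X = zero_vector(M, p)   # in practice: current iterate and its gradient
F = FrankWolfeCost(p, X)
grad_F = FrankWolfeGradient(p, X)
q = rand(M)
c = F(M, q)               # evaluates ⟨X, log_p q⟩
Y = zero_vector(M, q)
grad_F(M, Y, q)           # gradient of the sub cost at q, computed in-place of Y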
[WS22]
M. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).