Gradients

For a function $f\colon\mathcal M\to\mathbb R$ the Riemannian gradient $\nabla f(x)$ at $x\in\mathcal M$ is given by the unique tangent vector fulfilling

\[\langle \nabla f(x), \xi\rangle_x = D_xf[\xi],\quad \forall \xi \in T_x\mathcal M,\]

where $D_xf[\xi]$ denotes the differential of $f$ at $x$ in the tangent direction $\xi$, i.e. the directional derivative of $f$ along $\xi$.
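As a small worked example of this definition (independent of the functions below): on the unit sphere $\mathbb S^{n-1}\subset\mathbb R^n$ with the induced metric, the linear function $f(x) = a^{\mathrm{T}}x$ has the directional derivative $D_xf[\xi] = a^{\mathrm{T}}\xi$, and since $T_x\mathbb S^{n-1} = \{\xi\in\mathbb R^n : x^{\mathrm{T}}\xi = 0\}$ the defining relation is fulfilled by the tangential projection of $a$,

\[\nabla f(x) = a - (a^{\mathrm{T}}x)\,x,\qquad \langle \nabla f(x),\xi\rangle_x = a^{\mathrm{T}}\xi - (a^{\mathrm{T}}x)(x^{\mathrm{T}}\xi) = a^{\mathrm{T}}\xi = D_xf[\xi].\]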

This page collects the available gradients.

Manopt.forwardLogs – Method.
ξ = forwardLogs(M,x)

compute the forward logs $F$ (generalizing forward differences) occurring in the power manifold array, i.e. the function

\[F_i(x) = \sum_{j\in\mathcal I_i} \log_{x_i} x_j,\quad i \in \mathcal G,\]

where $\mathcal G$ is the set of indices of the Power manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.

Input

  • M – a power manifold
  • x – a point on M

Output

  • ξ – resulting tangent vector in $T_x\mathcal N$ representing the logs, where $\mathcal N$ is the power manifold with the number of dimensions added to size(x).
source
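For instance (spelling out the one-dimensional case), for a signal $x=(x_1,\ldots,x_n)$ the forward neighbors are $\mathcal I_i = \{i+1\}$ for $i<n$ and $\mathcal I_n = \emptyset$, so the forward logs reduce to

\[F_i(x) = \log_{x_i}x_{i+1},\quad i=1,\ldots,n-1,\qquad F_n(x) = 0,\]

i.e. the manifold-valued analogue of the forward differences $x_{i+1}-x_i$.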
Manopt.gradDistance – Function.
gradDistance(M,y,x[, p=2])

compute the (sub)gradient of the distance (squared)

\[f(x) = \frac{1}{2} d^p_{\mathcal M}(x,y)\]

to a fixed MPoint y on the Manifold M, where p is an integer. The gradient reads

\[ \nabla f(x) = -d_{\mathcal M}^{p-2}(x,y)\log_xy\]

for $p\neq 1$ or $x\neq y$. Note that in the remaining case $p=1$, $x=y$ the function is not differentiable. In that case the function returns zeroTVector(M,x), since this is an element of the subdifferential.

Optional

  • p – (2) the exponent of the distance, i.e. the default is the squared distance
source
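As a plausibility check of the formula above for $p=2$, one can reproduce it by hand on the unit sphere $\mathbb S^2$, where $d_{\mathcal M}(x,y)=\arccos(x^{\mathrm{T}}y)$ and the logarithmic map has a closed form. The helpers below (sphere_dist, sphere_log) are ad-hoc names for this sketch and not Manopt functions:

using LinearAlgebra

# ad-hoc helpers on the unit sphere, not part of Manopt
sphere_dist(x, y) = acos(clamp(dot(x, y), -1.0, 1.0))
function sphere_log(x, y)
    v = y - dot(x, y) * x              # project y onto the tangent space at x
    n = norm(v)
    return n == 0 ? zero(x) : sphere_dist(x, y) * v / n
end

# gradient of f(x) = 1/2 d(x,y)^2, i.e. the case p = 2 above: ∇f(x) = -log_x y
grad_f(x, y) = -sphere_log(x, y)

# finite-difference check of ⟨∇f(x),ξ⟩ = D_x f[ξ] along a tangent direction ξ,
# using the retraction x ↦ (x + tξ)/‖x + tξ‖
x = [1.0, 0.0, 0.0]
y = normalize([0.0, 1.0, 1.0])
ξ = [0.0, 1.0, 0.0]                    # tangent at x, since dot(x, ξ) == 0
f(p) = 0.5 * sphere_dist(p, y)^2
t = 1e-6
fd = (f(normalize(x + t * ξ)) - f(normalize(x - t * ξ))) / (2t)
@show fd ≈ dot(grad_f(x, y), ξ)        # prints true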
Manopt.gradIntrICTV12 – Function.
∇u, ∇v = gradIntrICTV12(M,f,u,v,α,β)

compute the (sub)gradient of the intrinsic infimal convolution model using the mid point model of second order differences, see costTV2, i.e. for given data $f\in\mathcal M$ on a power manifold $\mathcal M$ this function computes the (sub)gradient of

\[E(u,v) = \frac{1}{2}\sum_{i\in\mathcal G} d_{\mathcal M}(g(\frac{1}{2},v_i,w_i),f_i) + \alpha \bigl( \beta\mathrm{TV}(v) + (1-\beta)\mathrm{TV}_2(w) \bigr),\]

where both total variations refer to the intrinsic ones, gradTV and gradTV2, respectively.

source
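Structurally (a sketch of the chain rule only, not of the implementation), writing $c_i = g(\tfrac{1}{2},v_i,w_i)$ for the mid points, the data term contributes to the (sub)gradient with respect to $v_i$ via the adjoint differential of the mid point map,

\[\nabla_{v_i}\, h(c_i) = \bigl(D_{v_i} c_i\bigr)^*\bigl[\nabla h(c_i)\bigr],\qquad h(c) = \tfrac{1}{2} d_{\mathcal M}(c,f_i),\]

which is where adjoint Jacobi fields (see adjointJacobiField) enter; the regularizer adds $\alpha\beta$ times the gradTV part and $\alpha(1-\beta)$ times the gradTV2 part.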
Manopt.gradTV – Function.
gradTV(M,(x,y),[p=1])

compute the (sub) gradient of $\frac{1}{p}d^p_{\mathcal M}(x,y)$ with respect to both $x$ and $y$.

source
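Concretely (a sketch following the same computation as for gradDistance above), for $x\neq y$ the gradient with respect to the first argument is

\[\nabla_x \tfrac{1}{p}d^p_{\mathcal M}(x,y) = -d^{\,p-2}_{\mathcal M}(x,y)\log_x y,\]

and by symmetry the gradient with respect to $y$ is $-d^{\,p-2}_{\mathcal M}(x,y)\log_y x$; for $p=1$ and $x=y$ the zero tangent vector is an element of the subdifferential, as for gradDistance.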
Manopt.gradTV – Function.
ξ = gradTV(M,λ,x,[p])

Compute the (sub)gradient $\partial F$ of all forward differences occurring in the power manifold array, i.e. of the function

\[F(x) = \sum_{i}\sum_{j\in\mathcal I_i} d^p(x_i,x_j)\]

where $i$ runs over all indices of the Power manifold M and $\mathcal I_i$ denotes the forward neighbors of $i$.

Input

Output

  • ξ – resulting tangent vector in $T_x\mathcal M$.
source
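To illustrate the structure of $\partial F$ in the smooth case, the sketch below works on the circle (angles in radians, so that the logarithmic and exponential maps are plain wrapped angle differences and additions) with $p=2$ and a one-dimensional signal; circle_log, circle_dist and gradF are ad-hoc names, not Manopt functions. Each $x_i$ collects one contribution as the base point of its own forward difference and one as the forward neighbor of $x_{i-1}$:

# ad-hoc circle helpers; not Manopt functions
circle_log(x, y) = mod(y - x + π, 2π) - π        # log_x y: signed shortest angle
circle_dist(x, y) = abs(circle_log(x, y))

# F(x) = Σ_i d(x_i, x_{i+1})^2 for a 1D signal, i.e. p = 2 and I_i = {i+1}
F(x) = sum(circle_dist(x[i], x[i+1])^2 for i in 1:length(x)-1)

# ∇_x d(x,y)^2 = -2 log_x y, so each summand contributes to two entries of ∇F
function gradF(x)
    ξ = zero(x)
    for i in 1:length(x)-1
        ξ[i]   -= 2 * circle_log(x[i], x[i+1])   # x_i as base point
        ξ[i+1] -= 2 * circle_log(x[i+1], x[i])   # x_{i+1} as forward neighbor
    end
    return ξ
end

x = [0.1, 0.5, -0.2, 3.0]
t = 1e-6
fd = (F([x[1] + t, x[2:end]...]) - F([x[1] - t, x[2:end]...])) / (2t)
@show fd ≈ gradF(x)[1]                           # prints true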
Manopt.gradTV2 – Function.
gradTV2(M,x [,p=1])

computes the (sub)gradient of $\frac{1}{p}d_2^p(x_1,x_2,x_3)$ with respect to all $x_1,x_2,x_3$ occurring along any array dimension in the PowPoint x, where M is the corresponding Power manifold.

source
Manopt.gradTV2 – Function.
gradTV2(M,(x,y,z),p)

computes the (sub) gradient of $\frac{1}{p}d_2^p(x,y,z)$ with respect to $x$, $y$, and $z$, where $d_2$ denotes the second order absolute difference using the mid point model, i.e. let

\[\mathcal C_{x,z} = \bigl\{ c\in \mathcal M \ |\ c = g(\tfrac{1}{2};x,z) \text{ for some geodesic }g\bigr\}\]

denote the mid points between $x$ and $z$ on the manifold $\mathcal M$. Then the absolute second order difference is defined as

\[d_2(x,y,z) = \min_{c\in\mathcal C_{x,z}} d(c,y).\]

While the (sub)gradient with respect to $y$ is easy, the other two require the evaluation of an adjointJacobiField. See Illustration of the Gradient of a Second Order Difference for its derivation.

source
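For the middle argument this can be made explicit (a sketch for $p=1$, with the minimizing mid point denoted $c=g(\tfrac{1}{2};x,z)$): since $c$ does not depend on $y$,

\[\nabla_y d_2(x,y,z) = -\frac{\log_y c}{d_{\mathcal M}(c,y)}\qquad\text{for } y\neq c,\]

i.e. the unit-speed direction pointing away from the mid point, whereas differentiating through $c$ with respect to $x$ or $z$ is what requires the adjoint Jacobi fields.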