Note: Differential Forms Cheatsheet
Tricks for working with the Laplacian using Differential Forms
Steve Trettel
$$\newcommand{\RR}{\mathbb{R}} \newcommand{\grad}{\operatorname{grad}} \newcommand{\div}{\operatorname{div}}$$
This is a list of facts useful when computing with differential forms, collected in one spot because I keep needing to reference them. Expressions are given in a coordinate-free way.
The Geometric Isomorphism $TM\leftrightarrow T^\ast M$
On a Riemannian manifold the metric provides an isomorphism between the tangent and cotangent bundles. This pairing is described implicitly: a 1-form $\alpha$ is paired with a vector field $X$ by the rule that
$$\alpha(-)=g(X, -)$$
Starting with a 1-form and producing a vector field is called raising an index, and the reverse is lowering an index (the names obviously coming from how these operations look in index notation). This is also called the musical isomorphism, where people write
$$X\mapsto X^\flat = \alpha\hspace{1cm} \alpha\mapsto \alpha^\sharp = X$$
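For a concrete example, take polar coordinates on the punctured plane with $g=dr^2+r^2\,d\theta^2$. Lowering an index sends $\partial_r\mapsto dr$ and $\partial_\theta\mapsto r^2\,d\theta$, and raising an index undoes this:
$$\partial_\theta^\flat = g(\partial_\theta,-)=r^2\,d\theta\hspace{1cm}(d\theta)^\sharp = \frac{1}{r^2}\,\partial_\theta$$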
Differentials and the Gradient
Let $M$ be a manifold and $f\colon M\to \RR$ a function. Differentiation naturally associates to this a 1-form $df$, whose action on vector fields is defined implicitly by $$df(X):= X(f)$$
Given a Riemannian manifold $(M,g)$, a real valued function is also naturally paired with a vector field, the gradient, defined by
$$ \grad f = (df)^\sharp\hspace{1cm} g(\grad f, X ):=df(X)$$
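For example, in polar coordinates on the plane with $g=dr^2+r^2\,d\theta^2$, we have $df=f_r\,dr+f_\theta\,d\theta$, and raising the index gives the familiar
$$\grad f = f_r\,\partial_r+\frac{1}{r^2}f_\theta\,\partial_\theta$$
(one can check $g(\grad f,\partial_\theta)=f_\theta=df(\partial_\theta)$, as required).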
K-Forms
Differentiating Functions $\alpha(X)$
If $\alpha$ is a 1-form and $X$ is a vector field, $\alpha(X)$ is a smooth function on $M$ and so its differential is a 1-form. We can express this 1-form in a coordinate free manner via Cartan’s Magic Formula, using that $\alpha(X)=\iota_X(\alpha)$:
$$\mathcal{L}_X\alpha = d\iota_X\alpha+\iota_X d\alpha$$
$$\begin{align}
\implies d(\alpha(X))=d\iota_X\alpha &= \mathcal{L}_X\alpha-\iota_X d\alpha\\
&= X(\alpha(-))-\alpha\left([X,-]\right)-(d\alpha)(X,-)
\end{align}$$
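As a quick check of this formula, take $\alpha=x\,dy$ and $X=\partial_x$ on the plane, so $\alpha(X)=0$ and the left side vanishes. Evaluating the right side against $Y=\partial_y$ (with $d\alpha=dx\wedge dy$):
$$X(\alpha(\partial_y))-\alpha([X,\partial_y])-(d\alpha)(X,\partial_y)=\partial_x(x)-0-(dx\wedge dy)(\partial_x,\partial_y)=1-0-1=0$$
as it should be.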
Contraction of Forms
Given a $k$-form $\eta$ and a vector field $X$, one can naturally produce a $k-1$ form through contraction with $X$. This operation is also called the interior product, and the resulting $k-1$ form is denoted $\iota_X \eta$, or $X\,\lrcorner\,\eta$ (I’ll use the former) and defined implicitly by
$$(\iota_X\eta)(v_1,\ldots, v_{k-1}):=\eta(v_1,\ldots, v_{k-1},X)$$
(Note: sometimes people put $X$ first). Contracting along the same vector field twice produces zero, as feeding $X$ into two different slots of an antisymmetric form must give zero. Thus, like the exterior derivative, we have
$$\iota_X\circ\iota_X \equiv 0$$
To compute the interior product of a wedge product $\eta = \alpha\wedge\beta$ where $\alpha$ is a $p$-form and $\beta$ a $q$-form, we get the $p+q-1$ form defined by
$$\iota_X(\alpha\wedge \beta)=(\iota_X\alpha)\wedge\beta +(-1)^p\alpha\wedge(\iota_X\beta)$$
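For instance, applying this rule on the plane with $X=a\,\partial_x+b\,\partial_y$, $\alpha=dx$, and $\beta=dy$ (so $p=q=1$):
$$\iota_X(dx\wedge dy)=(\iota_X dx)\wedge dy-dx\wedge(\iota_X dy)=a\,dy-b\,dx$$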
The Volume Form
A volume form on a smooth manifold $M$ is any choice of nowhere-zero top-dimensional differential form. But for an oriented Riemannian manifold $(M,g)$ there is a canonical choice compatible with the metric $g$. This form is often written $\omega$, $dV$, or $\mathrm{vol}$ (sometimes with a subscript $g$), and is defined so that it evaluates to $1$ on oriented orthonormal bases of $TM$.
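For example, in polar coordinates on the plane, $\{dr, r\,d\theta\}$ is an oriented orthonormal coframe for $g=dr^2+r^2\,d\theta^2$, so
$$\omega_g=dr\wedge(r\,d\theta)=r\,dr\wedge d\theta$$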
Pairing Vectors and $n-1$ Forms
Contracting the volume form along a vector field sets up a natural Riemannian-geometric pairing between vector fields and $n-1$ forms that often comes up in vector calculus: $X$ is paired with the $n-1$ form $\iota_X\omega_g$,
$$(\iota_X \omega_g)(v_1,\ldots, v_{n-1}):=\omega_g(v_1,\ldots,v_{n-1},X)$$
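On $\RR^3$ with the Euclidean metric and $X=a\,\partial_x+b\,\partial_y+c\,\partial_z$, this pairing produces the classical flux 2-form from vector calculus:
$$\iota_X(dx\wedge dy\wedge dz)=a\,dy\wedge dz+b\,dz\wedge dx+c\,dx\wedge dy$$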
Extending the Metric to Forms
1-Forms
On a Riemannian manifold $(M,g)$ the metric is naturally a symmetric 2-tensor, taking in pairs of vector fields. But it also provides the geometric isomorphism $TM\leftrightarrow T^\ast M$, which we can use to transport quantities between the two. This isomorphism lets one transport the metric naturally to 1-forms; given $\alpha,\beta$ we define $$g(\alpha,\beta):=g(\alpha^\sharp,\beta^\sharp)$$ Metric-derived quantities such as the norm are defined using this, for example $$|\alpha|^2:=|\alpha^\sharp|^2=g(\alpha^\sharp,\alpha^\sharp)$$
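Continuing the earlier polar coordinate example: since $(d\theta)^\sharp=\frac{1}{r^2}\partial_\theta$, we get
$$|d\theta|^2=g\left(\tfrac{1}{r^2}\partial_\theta,\tfrac{1}{r^2}\partial_\theta\right)=\frac{1}{r^2}$$
so the coordinate 1-form $d\theta$ has norm $1/r$, even though $d\theta(\partial_\theta)=1$.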
Tensor Powers
The metric $g$ naturally extends to tensor powers of the tangent bundle: elements of $\bigotimes^k TM$ are linear combinations of elementary tensor products $v_1\otimes v_2\otimes\cdots\otimes v_k$, and by (bi)-linearity it suffices to define our extension of $g$ to $\bigotimes^k TM$ on such simple tensors.
$$g(v_1\otimes\cdots\otimes v_k, w_1\otimes\cdots\otimes w_k):=g(v_1,w_1)g(v_2,w_2)\cdots g(v_k,w_k)$$ NOTE: it’s often convenient to introduce a factor of $\frac{1}{k!}$ into the definition.
k-Forms
Because $k$-forms can be constructed as a subspace of $k$-fold tensors (the properly normalized result of adding up all permutations of $\alpha_1\otimes\cdots \otimes \alpha_k$ in an alternating way), one way to build the metric’s extension to $k$-forms is to combine the extension to $k$-fold tensor products with the extension to 1-forms. Thinking about what happens to the formula above when applied to an alternating sum of permutations, we get an alternating sum of products of the terms $g(v_i,w_j)$: that is, we get the determinant of the matrix whose entries are $g(v_i,w_j)$. Thus, if we have two $k$-forms $\alpha,\beta$ (where we assume without loss of generality that they are wedge products of $1$-forms, as everything extends bilinearly) $$\alpha =\alpha_1\wedge \alpha_2\wedge\cdots\wedge \alpha_k \hspace{1cm} \beta =\beta_1\wedge \beta_2\wedge\cdots\wedge \beta_k$$
Then
$$g(\alpha,\beta):=\det\begin{pmatrix}
g(\alpha_1,\beta_1)&\cdots &g(\alpha_1,\beta_k)\\
\vdots &\ddots & \vdots\\
g(\alpha_k,\beta_1) &\cdots & g(\alpha_k,\beta_k)
\end{pmatrix}=\det\left(g(\alpha_i,\beta_j)\right)$$
Where on each pair $\alpha_i, \beta_j$ the metric is computed using the extension to 1-forms, $g(\alpha_i,\beta_j):=g(\alpha_i^\sharp,\beta_j^\sharp)$.
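For example, in polar coordinates $g(dr,dr)=1$, $g(dr,d\theta)=0$, and $g(d\theta,d\theta)=\frac{1}{r^2}$, so
$$g(dr\wedge d\theta,\,dr\wedge d\theta)=\det\begin{pmatrix}1&0\\0&\frac{1}{r^2}\end{pmatrix}=\frac{1}{r^2}$$
consistent with the volume form $\omega_g=r\,dr\wedge d\theta$ having unit norm.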
The Lie Derivative
The Lie derivative is a metric-independent notion of how much a tensor field changes as you flow along the integral curves of a vector field (allowing yourself and your measuring devices to stretch, contract, or do whatever the flow lines tell you to). More precisely, if $X$ is a vector field on $M$ and $\Phi_t\colon M\to M$ is the flow it generates, we define the Lie derivative of a tensor field $T$ as the difference quotient
$$(\mathcal{L}_X T)_p :=\lim_{t\to 0}\frac{T_{\Phi_t(p)}-(\Phi_t)_\ast T_p}{t}$$
Functions, Vectors and 1-Forms
Tensors of low valence have nice, simple formulas. For real valued functions this simplifies to a directional derivative
$$\mathcal{L}_X(f)= X(f)$$
and for vector fields this agrees with the Lie Bracket
$$\mathcal{L}_X(Y)=[X,Y]$$
The case of a 1-form can be worked out from these two facts alone: if $\alpha$ is a 1-form, then for any vector field $Y$ we have $\alpha(Y)$ is a function. We know how to compute the Lie derivative on functions and vector fields, and it must satisfy the Leibniz rule. Thus whatever $\mathcal{L}_X\alpha$ is, it must satisfy
$$\mathcal{L}_X(\alpha(Y))=(\mathcal{L}_X\alpha)(Y)+\alpha(\mathcal{L}_X Y)$$
Solving for the unknown term and plugging in what we do know,
$$\begin{align}
(\mathcal{L}_X\alpha)(Y) &= \mathcal{L}_X(\alpha(Y))-\alpha(\mathcal{L}_X Y)\\
&= X(\alpha(Y))-\alpha([X,Y])
\end{align}
$$
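For example, take the rotation field $X=-y\,\partial_x+x\,\partial_y$ on the plane and $\alpha=dx$. Evaluating against $Y=\partial_y$ gives $X(dx(\partial_y))-dx([X,\partial_y])=0-dx(\partial_x)=-1$, while $Y=\partial_x$ gives $0$, so
$$\mathcal{L}_X\,dx=-dy$$
which matches $d(\mathcal{L}_X x)=d(X(x))=d(-y)$, using that $\mathcal{L}_X$ commutes with $d$.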
Computing for General Tensors
Given these, we can express the Lie derivative of more general tensors using the following trick. We’ll take for example here that $T$ takes as input a vector field and a 1-form, so $T(U,\alpha)$ is a real valued function. We again use the key idea that the Lie derivative should satisfy the Leibniz rule. That means applying it to $T(U,\alpha)$ gives
$$\mathcal{L}_X(T(U,\alpha))=(\mathcal{L}_XT)(U,\alpha)+T(\mathcal{L}_XU,\alpha)+T(U,\mathcal{L}_X\alpha)$$
The first term in this sum involves our unknown $\mathcal{L}_X T$, but all the other quantities are known: we are either taking the Lie derivative of a function, a vector field, or a 1-form. So, we can fill them all in with their simplified pieces and solve for what we want:
$$(\mathcal{L}_X T)(U,\alpha)= X\left(T(U,\alpha)\right)-T([X,U],\alpha)-T\left(U,\;X(\alpha(-))-\alpha([X,-])\right)$$
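A useful instance of this trick: applying it to the metric tensor itself gives
$$(\mathcal{L}_X g)(U,V)=X\left(g(U,V)\right)-g([X,U],V)-g(U,[X,V])$$
which is exactly the formula one uses to check whether $X$ is a Killing field ($\mathcal{L}_X g=0$).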
Cartan’s Magic Formula
Cartan’s Magic Formula is a relationship between the Lie derivative, contraction, and the exterior derivative on a general smooth manifold $M$:
$$\mathcal{L}_X = d\circ \iota_X +\iota_X\circ d$$
This is very useful as it provides a means of computing the Lie derivative of $k$-forms much more compactly than the general procedure above, which would require $k+1$ terms for the Leibniz rule.
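For instance, redoing the earlier example $\mathcal{L}_X\,dx$ with the rotation field $X=-y\,\partial_x+x\,\partial_y$: since $d(dx)=0$, only one term survives,
$$\mathcal{L}_X\,dx = d\,\iota_X\,dx+\iota_X\,d(dx)=d\left(dx(X)\right)=d(-y)=-dy$$
agreeing with the computation from the Leibniz rule above.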
The Hodge Star
Given a Riemannian manifold $(M,g)$, the Hodge Star $\star$ sets up an isometry between the spaces of $k$-forms and $n-k$ forms with respect to the extension of $g$ to each. Given a $k$-form $\zeta$, its star $\star\zeta$ is defined to be the unique $n-k$ form satisfying
$$\eta\wedge\star\zeta := g(\eta,\zeta)\,\omega_g$$
for every $k$-form $\eta$, where $g(\eta,\zeta)$ is the extension of the Riemannian metric to $k$-forms, and $\omega_g$ is the volume form for $g$. Thus, star pairs the constant function $1$ and the volume form:
$$\star 1 =\omega_g\hspace{1cm}\star\omega_g=1$$
Using this we can rewrite the above for any two $k$-forms $\eta,\zeta$:
$$g(\eta,\zeta)=\star(\eta\wedge\star\zeta)=\star(\zeta\wedge\star\eta)$$
Composing the star with itself gives an endomorphism of the space of $k$-forms, which is a multiple of the identity $$\star\circ\star = (-1)^{k(n-k)}\operatorname{id}$$
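For example, on Euclidean $\RR^3$ the star exchanges
$$\star\,dx=dy\wedge dz\hspace{1cm}\star\,dy=dz\wedge dx\hspace{1cm}\star\,dz=dx\wedge dy$$
and $\star\star=+1$ in every degree (as $k(n-k)$ is always even when $n=3$), while on $\RR^2$ we get $\star\,dx=dy$, $\star\,dy=-dx$, and $\star\star=-1$ on 1-forms.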
1-forms
Let $\alpha,\beta$ be two 1-forms, and recall $g(\alpha,\beta)$ is defined by the musical isomorphism, $g(\alpha^\sharp,\beta^\sharp)$. Since $g(\alpha,\beta)$ is a real valued function, we see $$\star g(\alpha,\beta)=g(\alpha,\beta)\omega_g$$ And we recognize the right hand side here as the definition of $\alpha\wedge\star\beta$: that is, $$\star g(\alpha,\beta)=\alpha\wedge\star\beta=\beta\wedge\star\alpha$$
Contracting the Volume Form
Given a vector field $X$, there are two natural ways to build a 1-form from the tools at hand: (i) we can take the geometric dual $X^\flat=g(X,-)$, or (ii) we can contract the volume form to get an $n-1$ form and then Hodge dualize: $\star\iota_X\omega_g$. These are actually the same! More generally, there’s a very useful relationship between interior contraction and the Hodge dual: for a form $\eta$ and vector field $X$,
$$\star(\iota_X\eta)=X^\flat\wedge\star\eta$$
Applying this to the volume form $\omega_g$ with $\star\omega_g=1$, we see
$$\star(\iota_X\omega_g)=X^\flat\wedge \star\omega_g = X^\flat \wedge 1 =X^\flat$$
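This is easy to check on Euclidean $\RR^3$: combining with the flux 2-form computed earlier,
$$\star\left(\iota_X(dx\wedge dy\wedge dz)\right)=\star\left(a\,dy\wedge dz+b\,dz\wedge dx+c\,dx\wedge dy\right)=a\,dx+b\,dy+c\,dz=X^\flat$$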
Divergence
The divergence of a vector field on a Riemannian manifold $(M,g)$ is defined through the Lie derivative of the volume form: it measures how volume is distorted when flowing along the integral curves of the field. Precisely, since the space of top-dimensional forms is 1-dimensional at each point, any top-dimensional form is a function multiple of the Riemannian volume form, so we define $\div X$ implicitly by
$$\mathcal{L}_X\omega_g :=(\div X)\omega_g$$
Using the Cartan Magic Formula, we can expand this Lie derivative and find an expression for the divergence as a contraction:
$$\begin{align}
(\div X)\omega_g&=\mathcal{L}_X\omega_g\\
&= (d\circ\iota_X+\iota_X\circ d)\omega_g\\
&=d\left(\iota_X\omega_g\right)+\iota_X\left(d\omega_g\right)\\
&=d(\iota_X\omega_g)
\end{align}$$
Where the last equality follows as $\omega_g$ is a top-dimensional form, so its exterior derivative must vanish. We can continue to get another useful form by using the relationship between the interior product and the Hodge dual:
$$\begin{align}
(\div X)\omega_g &=d(\iota_X\omega_g)\\
&=d\left(\pm\star\star\iota_X\omega_g\right)\\
&=d\left(\pm\star X^\flat\right)\\
&=\pm\, d\star X^\flat
\end{align}$$
(What is going on with the $\pm$ here: it is the $(-1)^{(n-1)(n-(n-1))}=(-1)^{n-1}$ coming from applying $\star\star$ to the $(n-1)$-form $\iota_X\omega_g$. It is cancelled by a matching sign hiding in the contraction convention: the magic formula holds on the nose when $X$ is fed into the first slot of a form, while the identity $\star\iota_X\omega_g=X^\flat$ above holds on the nose when $X$ is fed into the last slot as in the definition of $\iota_X$ earlier, and the two conventions differ by $(-1)^{k-1}$ on $k$-forms. Tracking either convention consistently, or simply checking directly on Euclidean $\RR^n$, the net sign is $+$ in every dimension.) Thus
$$\div X = \star d\star X^\flat$$
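As a check, in polar coordinates with $X=P\,\partial_r+Q\,\partial_\theta$ we have $X^\flat=P\,dr+Q\,r^2\,d\theta$, and using $\star\,dr=r\,d\theta$, $\star\,d\theta=-\frac{1}{r}\,dr$, $\star(dr\wedge d\theta)=\frac{1}{r}$:
$$\div X=\star\, d\star X^\flat=\star\, d\left(P\,r\,d\theta-Q\,r\,dr\right)=\frac{1}{r}\partial_r\left(rP\right)+\partial_\theta Q$$
the usual polar-coordinate divergence (here $Q$ is the component along the coordinate field $\partial_\theta$, not the unit field).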
The Laplacian
The Laplacian of a function $f\colon M\to\RR$ on a Riemannian manifold $(M,g)$ is another function $\Delta f\colon M\to\RR$. The metric dependence of the operator can be neatly packaged into the Hodge star,
$$\Delta f= \star d\star df$$
Like in Euclidean space, this is the divergence of the gradient, using
$$\div X = \star d\star X^\flat$$ $$\grad f := (df)^\sharp\implies (\grad f)^\flat = df$$
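Carrying out this computation in polar coordinates (with $\star\,dr=r\,d\theta$, $\star\,d\theta=-\frac{1}{r}\,dr$ as above) recovers the familiar formula:
$$\Delta f=\star\,d\star df=\star\, d\left(r f_r\,d\theta-\frac{f_\theta}{r}\,dr\right)=\frac{1}{r}\partial_r\left(r f_r\right)+\frac{1}{r^2}f_{\theta\theta}$$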
Coordinates and Bases
If $(u,v)$ are coordinates[^1] on $M$, we write $\Phi\colon M\to \RR^2$ as the diffeomorphism $\Phi(p)=(u(p),v(p))=(\tilde{u},\tilde{v})$ with $(\tilde{u},\tilde{v})$ a point in $\RR^2$, $u\colon M\to\RR$ and $v\colon M\to\RR$ the coordinate functions, and $\Psi\colon \RR^2\to M$ its inverse. Each coordinate function is naturally paired with a differential $du, dv$ defined by its action on vector fields $$u\mapsto du\hspace{1cm}du(X):=X(u)$$ And each coordinate function is also paired with a vector field, acting as a differential operator defined by its action on functions, where $\frac{\partial}{\partial \tilde{u}}$ is ordinary partial differentiation on $\RR^2$ $$u\mapsto \partial_u\hspace{1cm}\partial_u(f):=\frac{\partial (f\circ\Psi)}{\partial \tilde{u}}\circ\Phi$$
Thus, coordinates $(u,v)$ produce a convenient choice of basis $\{\partial_u, \partial_v\}$ for $TM$ as well as $\{du,dv\}$ for $T^\ast M$. These bases are dual bases in the sense of linear algebra: we have $$du(\partial_u) = 1\hspace{0.5cm}dv(\partial_v)=1\hspace{1cm}du(\partial_v)=0\hspace{0.5cm}dv(\partial_u)=0$$
Let’s check this: we begin by computing $du(\partial_u):= \partial_u(u) =\frac{\partial (u\circ \Psi)}{\partial \tilde{u}}\circ\Phi$. Note the map $u\circ\Psi$ takes a point $(\tilde{u},\tilde{v})$ up to $M$ and then to $\tilde{u}$ by definition; so it’s just the projection $(\tilde{u},\tilde{v})\mapsto\tilde{u}$. The $\tilde{u}$ partial derivative of this is constantly equal to $1$, so the precomposition with $\Phi$ is a function $M\to \RR$ which takes $p\in M$ to $1$. Thus $du(\partial_u)=1$. A similar calculation shows $dv(\partial_u)=0$, with the only change being we are now differentiating the projection $(\tilde{u},\tilde{v})\mapsto\tilde{v}$ with respect to $\tilde{u}$, which gives zero.
[^1]: For notational simplicity I’ll write everything as though I’m working with global coordinates. To be more precise, replace with something like “Let $U\subset M$ be an open neighborhood and $\Phi\colon U\to \RR^2$ a coordinate chart, $\Phi(p)=(u(p),v(p))$…”