Riemannian geometry and geometric analysis (5th Edition)


This allows us to use a consistent approach to track community behaviour both between snapshots, via interpolation, and across snapshots, via averaging. Simply transferring previously derived formulae would not allow us to consider disconnected graphs, however, so our contributions also include a way of transforming disconnected graphs so that they are amenable to the matrix manifold tools.

Using both synthetic and experimental graph data, we empirically evaluate two different kinds of geodesics. We identify their strengths, as compared with entry-wise linear interpolation, and also discuss their weaknesses. Finally, we derive interpolation and extrapolation error bounds for both geodesics (shown in the Appendix) and identify promising avenues of future research in this area.


Our framework enables more accurate prediction of community transitions by building interpolated graphs between snapshots, global community detection through data aggregation, and prediction of future behaviour through extrapolation from given snapshots. The novelty of our approach arises primarily from the application of Riemannian geometry to dynamic graphs.

To the best of our knowledge, the Riemannian framework presented in this paper is the first of its kind; it is our intent that the research community build from and extend this work to enable features of dynamic community detection not currently considered here. Differential geometry deals with mathematics on manifolds; manifolds are spaces that are locally Euclidean, i.e., every point has a neighbourhood that looks like a piece of Euclidean space. A Riemannian manifold is a type of manifold that has a metric associated with each point on the manifold.

The traditional methods for calculating angles and distances in flat spaces have to be modified on manifolds to account for manifold curvature, and the metric is an integral part of those modifications on Riemannian manifolds. A key part of Riemannian geometry, for the purposes of this paper, is the geodesic. Geodesics are the equivalent of straight lines in curved spaces.

A geodesic is locally the shortest path between two points. Great circles on a sphere are examples of geodesics on a curved manifold. Consider a flight from Vancouver, Canada to London, England: the two cities are at similar latitudes, so on a Mercator projection map, the shortest flight would seem to be a straight West-to-East trajectory.

In reality, however, flights between the two cities traverse the Pole because that is a shorter route — it is the great circle route. The discrepancy is due to the curvature of the Earth, which is distorted on a flat map. From another perspective, a geodesic is the path that a particle on a manifold would take if it were not subject to external forcing; a geodesic with constant speed has zero acceleration. Riemannian geometry can be applied to matrix manifolds.

The Grassmann and Stiefel manifolds are perhaps the most frequently encountered matrix manifolds in differential geometry because they have closed-form solutions for quantities such as geodesics (Absil et al.). Pennec et al. provide analogous closed-form expressions for the manifold of symmetric positive-definite matrices. These formulae are valuable because even when there is a well-defined metric on a manifold, distances and geodesics between points do not usually have closed-form expressions.

Such quantities have to be solved for numerically. Working on the manifold of symmetric positive-definite matrices, when appropriate, can be useful: matrix symmetry provides a reduction in effective dimension, and properties such as symmetry and positive-definiteness are automatically preserved. Bonnabel and Sepulchre extended this framework to include symmetric positive-semidefinite matrices. The extension essentially worked by decomposing a positive-semidefinite matrix into a nullspace component (a point on a Grassmann manifold) and a positive-definite component, which could then use the existing metric.
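To illustrate why the positive-definite manifold is convenient to work with, the affine-invariant geodesic between two symmetric positive-definite matrices A and B has the closed form A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2). A minimal sketch (our own illustrative code, not the paper's implementation):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

def ai_geodesic(A, B, t):
    """Affine-invariant geodesic between SPD matrices A and B.

    gamma(0) = A and gamma(1) = B; symmetry and positive-definiteness
    are preserved for every t in [0, 1].
    """
    A_half = fmp(A, 0.5)        # A^{1/2}
    A_nhalf = fmp(A, -0.5)      # A^{-1/2}
    M = A_nhalf @ B @ A_nhalf   # "whitened" version of B
    return A_half @ fmp(M, t) @ A_half

A = np.diag([1.0, 2.0])
B = np.diag([4.0, 8.0])
mid = ai_geodesic(A, B, 0.5)    # for commuting matrices, the geometric mean
```

For diagonal (hence commuting) matrices the midpoint reduces to the entry-wise geometric mean, which makes the example easy to check by hand.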

Researchers have previously used non-Euclidean geometries to investigate graphs (Krioukov et al.). The approach described in this paper differs in a subtle but meaningful way: in those papers, the mappings used treat graph nodes as points in a hyperbolic space, whereas our present work treats the entire graph as a single point in a non-Euclidean space. The work of Bonnabel and Sepulchre, combined with that of Pennec et al., makes this possible for graph Laplacians. Each graph is a point, and thus a time-indexed sequence of graphs forms a trajectory on the manifold.

This, in turn, means that we can calculate quantities such as trajectory velocities, distances between graphs (represented by manifold distances between their respective points), and relevant geodesics. Given that we are interested in dynamic community detection, the Laplacian is a natural object to work with. The Laplacian uniquely defines a graph (up to self-loops), and there is already a known connection between the Laplacian spectrum and community structure (Newman); previous work in dynamic community detection has built on this connection as well.

Graph Laplacians have a structure that makes them amenable to the Riemannian geometry techniques presented here as well: Laplacians are symmetric (for undirected graphs) and positive-semidefinite. Adjacency matrices, for example, are generally indefinite and thus would not be suitable for use with the matrix manifolds described here.

This constant nullspace makes the geometric calculations much simpler than they would be otherwise. It is possible to use other Laplacians, such as the normalized Laplacian. If these Laplacians do not have constant nullspaces, though, the interpolation involves extra calculations detailed by Bonnabel and Sepulchre. Assuming no self-loops, the combinatorial Laplacian also has the virtue of being easy to convert into an adjacency matrix.
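The conversion mentioned above is immediate in code; a sketch with illustrative helper names, assuming an undirected graph with no self-loops:

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    return np.diag(A.sum(axis=1)) - A

def adjacency(L):
    """Recover the adjacency matrix from a combinatorial Laplacian.

    Exact when the graph has no self-loops: negate the off-diagonal
    entries and zero the diagonal.
    """
    A = -L.copy()
    np.fill_diagonal(A, 0.0)
    return A

# Path graph on three nodes: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = laplacian(A)
```

Note that each row of L sums to zero, which is exactly the constant-nullspace property exploited in the text.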

That being said, as long as a Laplacian is symmetric positive-semidefinite and has a constant nullspace dimension (as the combinatorial Laplacian does for connected graphs), it is possible to calculate geodesic interpolations for that Laplacian. There are two other relevant considerations we wish to address here.

Firstly, the Laplacians of unweighted graphs constitute a discrete, and therefore sparse, subset of the matrix manifold. As such, any continuous trajectory will contain weighted graphs. Secondly, directed graphs do not have symmetric Laplacians, and thus they cannot be considered within this framework without symmetrizing them somehow (e.g., by averaging the Laplacian with its transpose). For the purpose of community detection, though, edge direction may not be important. There are two primary components to our framework. The first involves modelling and analyzing the dynamic behaviour of the graph prior to any community detection. For this, we show how to calculate an average graph from a collection of snapshots for use in time-averaged community detection, and how to interpolate between time-indexed graph snapshots to see how the graph evolves over time.

In the Appendix, we derive and analyze bounds on the interpolation error in terms of distance on the manifold. The second component consists of applying community detection methods to the dynamic graph. In this paper, we will focus on spectral methods, because they have convenient properties under continuous Laplacian dynamics, and on the Louvain method (Blondel et al.), which is widely used in practice.

However, the Riemannian geometry methods do not require using any one particular community detection method. We begin with interpolation between two snapshots, L_A and L_B. Firstly, the Laplacians for a given dynamic graph all exist on a matrix manifold, and the simplest interpolant between two of them is the entry-wise linear one, L(t) = (1 - t)L_A + t L_B. For that trajectory L(t) on the manifold, though, the trajectory speed is not constant, the trajectory direction is not constant, and it is not the shortest path from L_A to L_B.

It is precisely analogous to the Mercator projection map example given earlier: moving at constant velocity on the flat map (i.e., interpolating linearly entry-by-entry) does not correspond to constant velocity on the manifold. Experimentally, we have observed that the linear interpolation begins and ends its trajectory moving very quickly while the bulk of its trajectory moves relatively slowly. The difference between maximum and minimum velocities can be orders of magnitude, depending on the size of the graph and the distance between the two graphs being interpolated. We use the product of the non-zero Laplacian eigenvalues as a measure of overall connectivity; if the two points are far enough apart, this product will go through a maximum between the two points.

This maximum can, again, be orders of magnitude greater than the product at either endpoint; like the trajectory velocity, this variation will depend on the size of the graphs in question and their distance apart. The geodesic interpolation, however, provides a linear variation in the product of the eigenvalues. In other words, the linear interpolation increases the overall connectivity of the graph between snapshots. Finally, the linear interpolation cannot always be used for extrapolation. All of the interpolated Laplacians are positive-semidefinite, but it is easy to provide examples where the extrapolation quickly becomes indefinite.

Instead, we propose using geodesic interpolation. A geodesic interpolation trajectory has a constant velocity, produces an eigenvalue product that varies linearly between endpoints that are connected graphs, and can be extrapolated indefinitely without leaving the manifold of positive-semidefinite matrices with constant nullspace dimension. Following Bonnabel and Sepulchre, we show how to calculate this geodesic between two snapshots of a given dynamic graph. Consider the geodesic between L_A and L_B.

Furthermore, we can use the same U matrix for any Laplacian of a given dynamic graph without affecting our calculations, because the span of U is constant. If there are multiple time-sequenced snapshots, this method can be used to do a piecewise geodesic interpolation, with t being shifted and scaled appropriately. Note that the constant Laplacian nullspace means that we can work solely with the R components of L and ignore the Grassmann component.

If we are interested in the average behaviour of a dynamic graph, we can calculate the least-squared-distance mean (the Karcher mean) of a set of graph snapshots. We then list the sum-of-squared-distance function, the distance function itself, and the gradient of the squared distance (Pennec et al.). Riemannian geometry centers around the Riemannian metric: changing the metric entails changing properties of the manifold such as distances and geodesics. The current metric can be described as affine-invariant (Pennec et al.). We could also use a log-Euclidean metric as described by Arsigny et al.
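The Karcher mean has no closed form under the affine-invariant metric, but it can be computed by the standard fixed-point iteration that repeatedly averages the snapshots in the tangent space at the current estimate. A sketch (our own illustrative code; the iteration count and Euclidean-mean initialization are assumptions, and Pennec et al. describe the underlying gradient of the squared distance):

```python
import numpy as np
from scipy.linalg import logm, expm, fractional_matrix_power as fmp

def karcher_mean(mats, iters=20):
    """Least-squared-distance (Karcher) mean of SPD matrices under the
    affine-invariant metric, via fixed-point iteration."""
    X = sum(mats) / len(mats)              # start at the Euclidean mean
    for _ in range(iters):
        Xh, Xnh = fmp(X, 0.5), fmp(X, -0.5)
        # Average the logs in the tangent space at the current estimate X
        T = sum(logm(Xnh @ M @ Xnh) for M in mats) / len(mats)
        X = Xh @ expm(T) @ Xh              # map the average back to the manifold
    return X

mats = [np.diag([1.0, 1.0]), np.diag([4.0, 16.0])]
mean = karcher_mean(mats)   # for commuting matrices: the geometric mean
```

For two commuting matrices the iteration converges to their geometric mean, which again provides an easy hand check.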

The primary reason to consider using the log-Euclidean metric instead of the affine-invariant one is computational cost: the formulae for distances and geodesics are simpler and easier to calculate for the log-Euclidean metric. Those distance and geodesic formulae are, respectively, d(L_A, L_B) = ||log(L_A) - log(L_B)||_F and L(t) = exp((1 - t) log(L_A) + t log(L_B)), where log and exp denote the matrix logarithm and exponential. Another computationally beneficial feature of the log-Euclidean metric is the closed-form expression that it has for calculating the mean of a set of matrices: mean(L_1, ..., L_N) = exp((1/N) sum_i log(L_i)). To utilize these formulae for interpolating between graphs, we would simply substitute them for their affine-invariant counterparts.
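The three log-Euclidean formulae translate directly into SciPy's matrix logarithm and exponential; a sketch with illustrative function names of our own:

```python
import numpy as np
from scipy.linalg import logm, expm

def le_distance(A, B):
    """Log-Euclidean distance: Frobenius norm of log(A) - log(B)."""
    return np.linalg.norm(logm(A) - logm(B), 'fro')

def le_geodesic(A, B, t):
    """Log-Euclidean geodesic: exp((1 - t) log A + t log B)."""
    return expm((1 - t) * logm(A) + t * logm(B))

def le_mean(mats):
    """Closed-form log-Euclidean mean: exp of the average matrix log."""
    return expm(sum(logm(M) for M in mats) / len(mats))

A, B = np.diag([1.0, 2.0]), np.diag([4.0, 8.0])
```

Note that the mean needs no iteration, which is the computational advantage mentioned above over the affine-invariant Karcher mean.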

There are other expressions that are simpler to evaluate for the affine-invariant metric, but those quantities may not be needed, and the different invariance properties of each metric may be valuable in different circumstances. On a practical level, the two metrics generally produce similar interpolations (Arsigny et al.). For the rest of this paper, we will distinguish the geodesics and means calculated with the two methods as affine-invariant (AI) or log-Euclidean (LE).

The methods described in this paper currently assume that the graph in question is connected and remains so at all points of interest. In order to be widely applicable, the interpolation methods need to be able to handle changing connectivity. We can accommodate this by using a bias term with, potentially, a thresholding procedure. If need be, we can then apply a threshold to the resulting adjacency matrices or round those matrices to an appropriate number of decimal places. Empirically, we found that this approach did not significantly change the interpolated trajectories for connected graphs while also producing reasonable results for disconnected graphs.
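One way to realize the bias-and-threshold procedure just described; the exact form of the bias term (a small multiple of the identity) and the parameter values are our assumptions for illustration:

```python
import numpy as np

def add_bias(L, eps=1e-3):
    """Shift a possibly singular Laplacian away from the manifold boundary
    by adding a small multiple of the identity (hypothetical bias form)."""
    return L + eps * np.eye(L.shape[0])

def threshold_adjacency(A, tau=1e-2):
    """Zero out edge weights at or below tau and clear the diagonal."""
    A = np.where(A > tau, A, 0.0)
    np.fill_diagonal(A, 0.0)
    return A

# Disconnected graph: one edge plus an isolated node, so the
# combinatorial Laplacian has a 2-dimensional nullspace.
L = np.array([[ 1., -1.,  0.],
              [-1.,  1.,  0.],
              [ 0.,  0.,  0.]])
Lb = add_bias(L)   # now strictly positive definite, so the SPD tools apply
```

After interpolating with the biased matrices, the threshold step removes the spurious small edge weights the bias introduces.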

If we consider the properties of the Riemannian metrics discussed in this paper, we can see why adding this small bias would not significantly disturb a geodesic trajectory. With these metrics, matrices with zero or infinite eigenvalues essentially exist at infinity. This, too, makes sense: as the eigenvalues become uniformly larger, the manifold becomes flatter, and the differences between the data points become smaller. The flatter the manifold, the closer the geodesic is to the linear interpolation. However, the geodesic interpolation is still guaranteed to remain positive definite, and the linear interpolation is not.

This suggests that if the linear interpolation were more desirable in a particular application but the application also called for the use of extrapolation, then using a geodesic with a large bias term could provide the desired capabilities. It is possible to use spectral clustering with the first non-trivial eigenvector for community detection, but this method can be improved upon by using multiple eigenvectors (Boccaletti et al.). This approach is convenient for continuous Laplacian dynamics because as long as the eigenvalues are distinct, we can expect the eigenvectors and eigenvalues to vary smoothly with smooth changes in L.

If the eigenvalues of the eigenvectors in question are not distinct, then the eigenvectors are not uniquely defined, and if eigenvalues whose eigenvectors are being used for spectral clustering cross during the course of a trajectory, the spectral clustering may experience a discontinuous jump. Disconnected graphs can provide exactly this kind of behaviour (e.g., through repeated zero eigenvalues). Moreover, if the number of disconnected components is not constant, then it will not suffice simply to consider the first m non-zero eigenvalues, for the set of such eigenvalues will not be constant.

One way of identifying and tracking communities is through defining a kernel for the nodes. Summing over all of the nodes then produces a density function. The maxima of that density function correspond to cluster centroids, and the separatrices between maxima define community boundaries in the reduced eigenspace.

With a symmetric Gaussian kernel, this density function would be f(x) = sum_i exp(-||x - x_i||^2 / (2 sigma^2)), where the x_i are the node positions in the reduced eigenspace and sigma is the kernel bandwidth. See an example of this in the spectral plot shown in Fig. This particular plot shows two distinct communities, along with one outlying node. The density of a cluster is proportional to the magnitude of the density function at the peak (i.e., at the cluster centroid). Community growth and contraction can be seen by points traversing community boundaries (i.e., crossing separatrices).
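A minimal sketch of this kernel density over the embedded nodes (the bandwidth sigma and the example points are our own illustrative choices):

```python
import numpy as np

def density(x, points, sigma=0.1):
    """Sum of symmetric Gaussian kernels centred on the embedded nodes.

    Maxima of this function mark cluster centroids; separatrices between
    maxima delimit communities in the reduced eigenspace.
    """
    d2 = np.sum((points - x) ** 2, axis=1)   # squared distances to each node
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

# Two nearly coincident nodes near the origin and one distant node
pts = np.array([[0.0, 0.0], [0.01, 0.0], [1.0, 1.0]])
```

The density is high on top of either group of nodes and low in the gap between them, which is where a community boundary would be drawn.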

Birth and death correspond to the emergence or disappearance of a peak in the density function. Merging and splitting correspond to the merging and splitting, respectively, of the density function peaks. Birth and death also correspond to pitchfork bifurcations, but this is not as immediately obvious. To identify death, merging, or splitting, we can track the Hessian of f. If it becomes singular at a point, that is an indication of a potential bifurcation there. Birth may be identified in the same way, but searching the space for such a phenomenon may be more difficult than simply tracking known maxima and monitoring the Hessian at those points.

Once the spectrum has been plotted, techniques such as k-means clustering can identify communities. This should produce a sufficient approximation of the separatrices between maxima. However, if two eigenvectors are used, it may even be easier to identify communities visually. To demonstrate our methods, we initially created a series of graph snapshots using a synthetic graph process. Once the merger of two communities was complete, we gradually decreased the inter-community connection probability p_int to simulate the splitting of a large community into smaller ones.
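The pipeline described above (embed the nodes with non-trivial Laplacian eigenvectors, then apply k-means) can be sketched as follows; the example graph and all parameter choices are our own illustrative assumptions:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_communities(L, m=1, k=2, seed=0):
    """Cluster nodes using the first m non-trivial Laplacian eigenvectors."""
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    X = vecs[:, 1:1 + m]             # skip the trivial constant eigenvector
    _, labels = kmeans2(X, k, minit='++', seed=seed)
    return labels

# Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
labels = spectral_communities(L)
```

Here the Fiedler vector alone separates the two triangles; using more eigenvectors (m > 1) generalizes the same idea to more communities.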

To test our methods on real-world data, we used proteomics data produced by Mitchell et al. The network data indicates time-varying linkages between different proteins in human lung epithelial cells that have been infected by the Severe Acute Respiratory Syndrome coronavirus (SARS-CoV). We implemented our methods in Python, making particular use of the matrix exponential and logarithm functions in the SciPy package.

To evaluate the interpolation and averaging results for the synthetic network, we recorded connectivity measurements, spectral snapshots from interpolated and averaged Laplacians, and the total number of communities in the interpolated and averaged Laplacians. These snapshots provided an evaluation that was more qualitative than quantitative. We then used the Louvain method to perform community detection. The spectral snapshots and connectivity measurements were not as useful for the proteomics network because the proteomics network was highly disconnected, but the Louvain method was still applicable for community detection.

To investigate the interpolation and averaging of community structure for this network, we tracked the total number of communities, the total number of communities with at least five members, community similarity, and graph energy. Because the network was highly disconnected, the Louvain method produced many small or single-member communities. Tracking the number of communities above a certain size helped to reduce the amount of noise due to that effect. By community similarity, we mean not just the number of communities but the composition of those communities as well.

The Rand index works by taking a baseline or ground-truth case, considering every distinct pair of nodes, and determining whether or not the two nodes are in the same community. It then looks at these same pairs in another graph of interest. If, for a given pair of nodes, the nodes are either in the same community as each other in both graphs or not in the same community as each other in both graphs, that pair gets a score of 1; otherwise it gets a score of 0, indicating a dissimilarity between the community structures of the two graphs. The index is the average of these pair scores: the smaller the value, the less similar the structures are.
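The pairwise computation just described is short to write out; a sketch with our own helper name, operating on community label lists:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index between two community assignments: the fraction of node
    pairs on which the two partitions agree (same vs. different community)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Identical partitions score 1.0 even if the label values differ, because only the same/different pattern over pairs matters.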

Given that we had no ground truth between the data snapshots, we instead looked at the changes in this metric from one snapshot to the next. Ideally, there would be a steady change in this value between points — a sawtooth pattern over the course of the whole interpolation — as we measured how the interpolation differed from the most recent data snapshot. Finally, to measure network connectivity, we used graph energy instead of a Laplacian eigenvalue product. The energy of a graph, E , is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix.

Given that it satisfies 2 sqrt(m) <= E <= 2m for an unweighted graph with m edges (Brualdi), we can also use it to bound the number of edges: E/2 <= m <= E^2/4. For both sets of data, we used thresholding on the edge weights to get unweighted graph equivalents. This procedure, and especially the threshold value used, had a greater impact on the proteomics data than on the synthetic data.
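Graph energy and the resulting edge bounds can be sketched as follows, using the complete graph K3 as a worked example (its adjacency eigenvalues are 2, -1, -1, so E = 4 while m = 3):

```python
import numpy as np

def graph_energy(A):
    """Graph energy: sum of absolute eigenvalues of the adjacency matrix."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

# Complete graph K3
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
E = graph_energy(A)
m = A.sum() / 2          # each undirected edge is counted twice in A
```

The bound E/2 <= m <= E^2/4 then brackets the edge count: here 2 <= 3 <= 4.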

The graph spectral snapshots are shown in Fig. We can now interpolate from the third to the fourth data snapshot and then from the fourth to the fifth data snapshot to further investigate this community merger and separation. Snapshots from the AI geodesic interpolation are shown in Fig. Increasing the temporal resolution further would be cumbersome to present in a printed format. Synthetic graph spectral plots: the spectral plots of the synthetic data snapshots are presented in order from left to right and top to bottom.

They show two communities that are stable and separate except for the merger shown in the fourth frame. There are also nodes that do not associate closely with any community at various points in time. Synthetic graph interpolation. At the top left, the first frame is the third data snapshot, the sixth frame is the fourth data snapshot, and the eleventh frame is the fifth data snapshot; the interpolated frames are taken at evenly spaced time intervals between the data snapshots.

The interpolated frames show a clear progression of community merging and splitting as well as some outliers that do not seem strongly attached to any community. Additional file 3: A video of the spectral plots created with the AI geodesic interpolation on the synthetic graph data, progressing through the data snapshots in order from the initial to the final frame. In the connectivity figure, the geodesic curves both interpolate the eigenvalue product linearly between points, whereas the linear interpolation is slightly concave.

For this dynamic graph, the data points are relatively close to each other, and thus the geodesic and linear interpolations are very similar. Logarithm of product of non-zero eigenvalues over time, synthetic graphs. The connectivity results are shown for the interpolations that do not use a threshold (top) and the interpolations that use a threshold (bottom). Logarithm of product of non-zero eigenvalues over time with longer interpolation window (no threshold), synthetic graphs.

The AI and LE geodesics are still indistinguishable with regard to the connectivity measure, however. Thresholding gives us a piecewise constant graph. The graph dynamics consist of an edge addition phase followed by an edge subtraction phase, so the thresholding parameter simply determines when each entry flips from 0 to 1 or vice versa. If we were to use a finer time resolution, we might see a slight difference between the linear and geodesic interpolations with respect to when this transition happens, but the basic behaviour would remain the same.

In performing community detection, we found that the geodesic interpolations produced adjacency matrices with negative entries, almost all of them tiny in magnitude. Negative edges need not be a barrier to community detection (methods exist for signed graphs), but for simplicity we set negative entries to 0 before applying the Louvain method. This was only necessary for community detection on graphs that did not use thresholding; when using a threshold, any value equal to or below the threshold, including a negative value, was set to 0. The spectral plots showed two communities merging and splitting with some outliers along the way. We found that the Louvain method split the merged community into four, and the outliers sometimes formed very small communities of their own (Fig.).

When we use a threshold, these behaviours cease, as we now have a graph that is piecewise constant in time for all three interpolations. Communities in interpolated synthetic graphs. When no threshold is applied (top), the Louvain method produces varying numbers of communities during the merger of the two original communities, and even the data snapshot of the merged communities shows not one but four communities; we also see some differences between the two geodesic interpolations.

With a threshold (bottom), though, the piecewise constant nature of the interpolation becomes apparent. Finally, we consider the average behaviour of this graph using the mean graphs produced by each interpolation method. The spectral plots of these graphs are shown in Fig. This indicates that the merging of the two communities was only a transient effect and that the same communities re-emerged after the temporary merger.

The averaging process preserved the structure that we designed the dynamic graph to have. If the second pair of communities were significantly different from the first, then the spectrum of the average graph would not display two distinct communities so clearly. Synthetic graph average spectral plots. This is not surprising given both the propensity that linear interpolations have for increasing connectivity and the similarity of the geodesic and linear interpolations in this case. For the proteomics network, the geodesic interpolations again produced some negative adjacency entries; the AI geodesics produced far fewer such entries than the LE geodesics (by an order of magnitude), and the AI entries were usually smaller.

Of the negative entries, the largest was still small in magnitude. As with the synthetic graphs, we simply set these negative entries to 0 when using the Louvain method. With no thresholding, we found that the results were too connected (i.e., the interpolated graphs had too few, overly large communities). Thresholding produced better results. Generally speaking, the AI geodesic produced too many communities while the linear interpolation produced too few, and neither produced a steady deformation from one data point to the next.

The LE geodesic showed an intermediate behaviour in this regard. The number of communities produced by the interpolation did not vary smoothly, but there was a general progression from data point to data point. Changing the threshold value had a small effect on the AI geodesic, but it did nothing to improve the linear interpolation. We will return to this phenomenon later. Number of communities in interpolated proteomics network: the number of communities in the thresholded interpolations is shown for each interpolation method. Looking at Fig., we again see many small or single-member communities.

If we only consider communities of a certain size, we can get a more accurate picture of the true community dynamics. When using a threshold value, the results are somewhat similar to those in Fig. This book, like its previous editions, is dedicated to him.

Jürgen Jost

Contents
1. Foundational Material
   Lie Algebras
3. Parallel Transport, Connections, and Covariant Derivatives
   The Yang-Mills Functional
   Minimal Submanifolds
4. Geodesics and Jacobi Fields
5. Symmetric Spaces and Kähler Manifolds
6. Morse Theory and Floer Homology
7. Variational Problems from Quantum Field Theory
8. Harmonic Maps
   Regularity Questions
   The Bochner Technique

1. Foundational Material

M is called paracompact if any open covering possesses a locally finite refinement. A map between topological spaces is called continuous if the preimage of any open set is again open.

A bijective map which is continuous in both directions is called a homeomorphism. Definition 1. A maximal differentiable atlas is called a differentiable structure, and a differentiable manifold of dimension d is a manifold of dimension d with a differentiable structure. From now on, all atlases are supposed to be differentiable.

Two atlases are called compatible if their union is again an atlas. In general, a chart is called compatible with an atlas if adding the chart to the atlas yields again an atlas. An atlas is called maximal if any chart compatible with it is already contained in it. For a general, non-differentiable manifold, this is much harder.

A differentiable manifold is called orientable if it possesses an oriented atlas. Of course, there exist also non-compact manifolds; the simplest example is R^d. In general, any open subset of a differentiable manifold is again a differentiable manifold. Such a map is called a diffeomorphism if it is bijective and differentiable in both directions. For purposes of differentiation, a differentiable manifold locally has the structure of Euclidean space.

Thus, the differentiability of a map can be tested in local coordinates. The diffeomorphism requirement for the chart transitions then guarantees that differentiability defined in this manner is a consistent notion, i.e., independent of the choice of charts. We want to point out that in the context of the preceding definitions, one cannot distinguish between two homeomorphic manifolds, nor between two diffeomorphic differentiable manifolds. When looking at Definitions 1., one realizes that one can put any type of restriction on the chart transitions, for example, require them to be affine, algebraic, real analytic, conformal, or Euclidean volume preserving. Perhaps the most important example is the notion of a complex manifold.

We shall need this, however, only at certain places in this book, namely in 5. We conclude this section with a useful technical result. Lemma 1. Let {U_α} be an open covering of M. Then there exists a partition of unity subordinate to {U_α}. This means 1. See any advanced textbook on Analysis, e.g.

Jost, Postmodern Analysis, 3rd ed. The first clear formulation of the manifold concept, however, was given by H. Weyl. The only one-dimensional manifolds are the real line and the unit circle S^1, the latter being the only compact one. Two-dimensional compact manifolds are classified by their genus and orientability character. In three dimensions, there exists a program by Thurston about the possible classification of compact three-dimensional manifolds.

References for the geometric approach to this classification will be given in the Survey on Curvature and Topology after Chapter 4 below. In dimension at most three, each manifold carries a unique differentiable structure, and so here the classifications of manifolds and differentiable manifolds coincide. This is not so anymore in higher dimensions. Milnor discovered exotic 7-spheres, i.e., differentiable manifolds that are homeomorphic, but not diffeomorphic, to the standard 7-sphere.

Exotic spheres likewise exist in higher dimensions. Kervaire found an example of a manifold carrying no differentiable structure at all. In dimension 4, the understanding of differentiable structures owes important progress to the work of Donaldson. He defined invariants of a differentiable 4-manifold M from the space of selfdual connections on principal bundles over it. In particular, there exist exotic structures on R^4. A description can be found in the literature. Thus, TU is an open subset of R^d x R^d, hence in particular a differentiable manifold.

TU is called the total space of the tangent bundle. We want to define the tangent space of M at the point p. We say that the tangent space T_pM is represented in the chart x by T_{x(p)}x(U). Thus, a tangent vector in T_pM is given by the family of its coordinate representations. We then want to define df_p as a linear map from T_pM to R. Thus, in the chart x', the vector is represented by the Jacobian of the chart transition applied to its representation in the chart x.

Here, a fundamental idea emerges that will be essential for the understanding of the sequel. T_pM is a vector space of dimension d, hence isomorphic to R^d. This isomorphism, however, is not canonical, but depends on the choice of a chart. Under a change of charts, other objects are then correspondingly transformed, for example derivatives of functions, or more generally of maps. In other words, a chart yields local representations for tangent vectors, derivatives, etc.

Or in still other words: we know how to differentiate differentiable functions that are defined on open subsets of R^d. If now a function is given on a manifold, we pull it back by a chart to an open subset of R^d and then differentiate the pulled-back function. In order to obtain an object that does not depend on the choice of chart, we have to know in addition the transformation behavior under chart changes.

A tangent vector thus is determined by how it operates on functions by differentiation. We can achieve this most simply as follows: let the local coordinates on U be x^1, ..., x^d. We want to collect the previous considerations in a formal definition: Definition 1. The space of equivalence classes is called the tangent space to M at the point p, and it is denoted by T_pM.

We now want to define the tangent bundle of a differentiable manifold of dimension d. Finally, we briefly discuss the case of a complex manifold M, to have it at our disposal in 5. Other definitions of the tangent space of a differentiable manifold M are possible that are more elegant and less easy to compute with. This definition has the obvious advantage that it does not involve local coordinates.

The following lemma shows that locally, any immersion is a differentiable embedding: Lemma 1. This follows from the implicit function theorem. In local coordinates z¹, …, by the implicit function theorem, there locally exists a map expressing the remaining coordinates in terms of the first ones. A subset N' of N, equipped with the relative topology, thus is a differentiable submanifold of N if N' is a manifold and the inclusion is a differentiable embedding. Similarly, the implicit function theorem implies Lemma 1. Of course, in these coordinates df_x still has rank n. By Lemma 1. , the class of abstract differentiable manifolds thus is the same as the class of submanifolds of Euclidean space.

Nevertheless, the abstract and intrinsic point of view offers great conceptual and technical advantages over the approach of submanifold geometry of Euclidean spaces. Again, we shall start from infinitesimal considerations. We would like to be able to measure the lengths of and the angles between tangent vectors. Then one may, for example, obtain the length of a differentiable curve by integration. In a vector space such a notion of measurement is usually given by a scalar product. We thus define: Definition 1.

A Riemannian manifold is a differentiable manifold equipped with a Riemannian metric. In order to understand the concept of a Riemannian metric, we again need to study local coordinate representations and the transformation behavior of these expressions. In these coordinates, a metric is represented by a positive definite, symmetric matrix (g_ij(x))_{i,j=1,…,d}, i.e. g_ij = g_ji. The transformation formula 1. shows that smooth dependence on the base point, as required in the definition, does not depend on the choice of chart.

We now want to study the transformation behavior. In these coordinates, v and w have representations in terms of the coordinate basis. The simplest example of a Riemannian metric of course is the Euclidean one. Theorem 1. Each differentiable manifold may be equipped with a Riemannian metric. The metric is simply obtained by piecing together the Euclidean metrics of the coordinate images with the help of a partition of unity.
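As an illustration of measuring curve lengths with a Riemannian metric (our own example, not from the text): on the upper half-plane {(x, y) : y > 0} with the hyperbolic metric g_ij = δ_ij / y², the length of the vertical segment from (0, a) to (0, b) is log(b/a), which a direct numerical evaluation of L(c) = ∫ sqrt(g_ij(c) ċ^i ċ^j) dt reproduces.

```python
import numpy as np

# Assumed example metric: hyperbolic upper half-plane, g_ij = δ_ij / y².
def metric(p):
    x, y = p
    return np.eye(2) / y**2

def curve_length(c, cdot, t0, t1, n=20000):
    # Midpoint-rule approximation of ∫ sqrt(g_ij(c) ċ^i ċ^j) dt.
    dt = (t1 - t0) / n
    ts = t0 + dt * (np.arange(n) + 0.5)
    total = 0.0
    for t in ts:
        v = cdot(t)
        total += np.sqrt(v @ metric(c(t)) @ v) * dt
    return total

# Vertical segment from (0, a) to (0, b): hyperbolic length is log(b/a).
a, b = 1.0, 4.0
c = lambda t: np.array([0.0, t])
cdot = lambda t: np.array([0.0, 1.0])
L = curve_length(c, cdot, a, b)
assert abs(L - np.log(b / a)) < 1e-4
```

The same routine works for any metric given in a chart; only the function `metric` changes.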

Of course, these expressions can be computed in local coordinates. On a Riemannian manifold M, the distance between two points p, q can be defined. The distance function satisfies the usual axioms: Lemma 1. Proof. (ii) and (iii) are obvious. It suffices to show that in each chart the topology induced by d coincides with that of ℝ^d, i.e. with the manifold topology. Lemma 1.

Thus, geodesics are the critical points of the energy functional. By Lemma 1. As in the Euclidean case, one easily sees that regular curves can be parametrized by arc length. We shall attempt to minimize the length within the class of regular smooth curves, and we shall succeed and complete the program in Corollary 1.

As the length is invariant under reparametrization by Lemma 1. For such curves, by Lemma 1. Conversely, every critical point of the energy functional, i.e. every solution of 1. , is parametrized proportionally to arc length. Namely, for a solution of 1. We have shown Lemma 1. In addition, c depends smoothly on p and v. Denoting the geodesic of Theorem 1. In general, however, V_p is not all of T_pM, as is already seen in the example of a proper, open subset of ℝ^d equipped with the Euclidean metric. Nevertheless, we shall see in Theorem 1. By Theorem 1.
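The geodesic equation ẍ^k + Γ^k_ij ẋ^i ẋ^j = 0 can be integrated numerically. The following sketch uses an assumed example metric (the hyperbolic upper half-plane, not a metric constructed in the text), whose Christoffel symbols give ẍ = (2/y) ẋ ẏ and ÿ = (ẏ² − ẋ²)/y; it checks two invariants of geodesics.

```python
import numpy as np

# Geodesic ODE on the hyperbolic upper half-plane (g_ij = δ_ij / y²),
# written as a first-order system for (x, y, ẋ, ẏ).
def rhs(state):
    x, y, vx, vy = state
    return np.array([vx, vy, 2*vx*vy/y, (vy**2 - vx**2)/y])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + h/2*k1)
    k3 = rhs(state + h/2*k2)
    k4 = rhs(state + h*k3)
    return state + h/6*(k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.0, 1.0, 1.0, 0.0])  # start at (0,1) with horizontal unit speed
h = 1e-3
for _ in range(1000):
    state = rk4_step(state, h)

x, y, vx, vy = state
# Geodesics through (0,1) with horizontal tangent lie on the unit semicircle,
# and the g-norm of the velocity, sqrt(ẋ²+ẏ²)/y, is conserved along geodesics.
assert abs(x**2 + y**2 - 1.0) < 1e-8
assert abs(np.hypot(vx, vy)/y - 1.0) < 1e-8
```

The second assertion is the numerical counterpart of the statement above that critical points of the energy are parametrized proportionally to arc length.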

In particular, p is mapped to 0. For 1. The obstruction will be given by the curvature tensor. After these preparations, we return to the analysis of the geodesic equation 1. Corollary 1. The first claim follows from Corollary 1. Let t. This will then imply the second claim. The proof of this inequality goes as follows:

By Corollary 1. By Theorem 1. Since M is compact, it can be covered by finitely many such neighborhoods, and we choose ρ₀ as the smallest such ρ. This geodesic depends continuously on p and q. Proof. ρ₀ from Corollary 1. Moreover, by the last claim of Corollary 1. Exchanging the roles of p and q yields the continuous dependence on p as well. Only the shortest geodesic is unique, provided p and q are sufficiently close.

We recall the notion of homotopy between curves: Definition 1. The proof is elementary. Then there exists a geodesic in every homotopy class of curves from p to q, and this geodesic may be chosen as a shortest curve in its homotopy class. Likewise, every homotopy class of closed curves in M contains a curve which is shortest and geodesic.

Since the proof is the same in both cases, we shall only consider the case of closed curves. As a preparation, we shall first show that two sufficiently close curves γ₀ and γ₁ are homotopic. Here and in the sequel, all curves are parametrized proportionally to arc length. We may assume w.l.o.g.

The union of these geodesic segments yields a curve γ. Therefore, γ has to be geodesic. Namely, otherwise there would exist points p and q on γ for which one of the two segments of γ between p and q would have length at most ρ₀, but would not be geodesic. By the argument of 1. Minimize over all curves between p and q, and not only over those in a fixed homotopy class as in the proof of Theorem 1. The compactness of M implies the closedness of A. Analogously, c_v may be extended in the direction of negative t. The claims follow easily.

Obviously, they do hold for Euclidean space, which is not compact, but they do not hold for any proper open subset of Euclidean space, essentially since such a set is not complete. It will turn out that completeness is the right condition for extending Theorem 1. We can now state the Theorem of Hopf–Rinow. The following statements are equivalent:

(i) M is complete as a metric space (or equivalently, it is complete as a topological space w.r.t. its underlying topology). I is not empty, as it contains 0 by definition of r, and it is closed for continuity reasons. Inserting 1. That curve thus is shortest and therefore has to be geodesic, as shown in the proof of Theorem 1. Since we observed that equality has to hold in 1. It is now easy to complete the proof of Theorem 1. Hence, B(p, r) is the image of the compact ball in T_pM of radius r under the continuous map exp_p. Hence, B(p, r) is compact itself.

Since K is assumed to be closed and shown to be contained in a compact set, it must be compact itself. It then is bounded, and, by (ii), its closure is compact. It therefore contains a convergent subsequence, and being Cauchy, it has to converge itself. This shows completeness of M. I then is nonempty, and by Theorem 1. Then B(p, ρ) is compact, being the image of the compact ball of radius ρ in T_pM under the continuous map exp_p. Therefore, the argument of Corollary 1. Thus, M also becomes a Riemannian manifold. We want to compute this metric in the local chart of 1.

Isometries leave the lengths of tangent vectors and therefore also the lengths and energies of curves invariant. Thus, critical points of the energy, i.e. geodesics, are mapped to geodesics. With this remark, we may easily determine the geodesics of S^n. We claim that the geodesic c_v through p with tangent vector v is nothing but the great circle through p with tangent vector v (parametrized proportionally to arc length). Together with c_v, S∘c_v is also a geodesic through p with tangent vector v.
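The great-circle description of spherical geodesics can be checked directly. The following sketch (our own numerical check, with an arbitrarily chosen p and v) confirms that c_v(t) = cos(t|v|) p + sin(t|v|) v/|v| stays on the sphere and satisfies c̈ = −|v|² c, i.e. its acceleration is everywhere normal to the sphere.

```python
import numpy as np

# Great circle on S^n ⊂ R^{n+1}: for p ∈ S^n and v ⊥ p,
#   c_v(t) = cos(t|v|) p + sin(t|v|) v/|v|.
def geodesic(p, v, t):
    nv = np.linalg.norm(v)
    return np.cos(t*nv)*p + np.sin(t*nv)*v/nv

p = np.array([0.0, 0.0, 1.0])     # north pole of S²
v = np.array([0.3, 0.4, 0.0])     # tangent vector at p (v ⊥ p)
ts = np.linspace(0.0, 7.0, 200)
pts = np.array([geodesic(p, v, t) for t in ts])

# The curve remains on the sphere for all t:
assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)

# Second-order check: c̈(t) + |v|² c(t) = 0, so the Euclidean acceleration
# is purely normal to S², which characterizes spherical geodesics.
h = 1e-4
acc = (geodesic(p, v, 1.0+h) - 2*geodesic(p, v, 1.0) + geodesic(p, v, 1.0-h)) / h**2
assert np.allclose(acc, -np.dot(v, v)*geodesic(p, v, 1.0), atol=1e-4)
```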

The uniqueness result of Theorem 1. As another example, we consider the torus T² introduced in 1. We introduce a metric on T² by letting the projection π be a local isometry. Hence, the Riemannian metric on T² is well defined. Since π is a local isometry, Euclidean geodesics of ℝ² are mapped onto geodesics of T².

More generally, a straight line with rational slope becomes a closed, hence periodic geodesic on T², while the image of one with irrational slope lies dense in T². Before ending this paragraph, we want to introduce the following important notion: Definition 1. As the name suggests, the concept of a Riemannian metric was introduced by B. Riemann, in his habilitation address [].

He also suggested to consider more generally metrics obtained by taking metrics on the tangent spaces that are not induced by a scalar product. Such metrics were first systematically investigated by Finsler and are therefore called Finsler metrics. For a general metric space, a geodesic is defined as a curve which realizes the shortest distance between any two sufficiently close points lying on it. Those metric spaces that satisfy the conclusion of the Hopf-Rinow theorem that any two points can be connected by a shortest geodesic are called geodesic length spaces, and 1.

See e.g. []. A Lorentz manifold is a differentiable manifold with a Lorentz metric. Lorentz manifolds are the spaces occurring in general relativity. Let us briefly discuss some concepts. Length and energy of a curve may be defined formally as in the Riemannian case, and we again obtain geodesic equations. Geodesics whose tangent vectors all have norm zero are called null geodesics. They describe the paths of light rays. Note that in our above description of the Minkowski metric, the conventions have been chosen so that the speed of light is 1. Submanifolds of Lorentz manifolds whose tangent vectors are all space-like are ordinary Riemannian manifolds w.r.t. the induced metric.

For treatments of Lorentzian geometry, an introduction is []. Deeper aspects are treated in Hawking and Ellis []. Nash proved that every Riemannian manifold M can be isometrically embedded into some Euclidean space ℝ^k. For the proof of this result, he developed an implicit function theorem in Fréchet spaces and an iteration technique that have found other important applications. A simpler proof was found by Günther []. In our presentation, we only consider finite dimensional Riemannian manifolds. It is also possible, and often very useful, to introduce infinite dimensional Riemannian manifolds.

Those are locally modeled on Hilbert spaces instead of Euclidean ones. The lack of local compactness leads to certain technical complications, but most ideas and constructions of Riemannian geometry pertain to the infinite dimensional case. Such infinite dimensional manifolds arise for example naturally as certain spaces of curves on finite dimensional Riemannian manifolds. A thorough treatment is given in []. On each fiber, the map given by a local trivialization is a vector space isomorphism, i.e. compatible with the linear structure. Such a pair (φ, U) is called a bundle chart.

Often, a vector bundle will simply be denoted by its total space. It is important to point out that a vector bundle is by definition locally, but not necessarily globally, a product of base and fiber. A vector bundle may be considered as a family of vector spaces, all isomorphic to a fixed model ℝⁿ, parametrized in a locally trivial manner by a manifold. The transition maps express the transformation behavior of a vector in the fiber under a change of local trivialization.

A vector bundle can be reconstructed from its transition maps. We say that a vector bundle has the structure group G if there exists an atlas of bundle charts for which all transition maps have their values in G. We have already seen an example of a vector bundle above, namely the tangent bundle TM of a differentiable manifold M.

We now want to extend some algebraic concepts and constructions from vector spaces to vector bundles by performing them fiberwise. For example: Definition 1. Definition 1. By this pattern, all constructions for vector spaces can be extended to vector bundles. Of particular importance for us will be dual space, exterior and tensor product.

Let us briefly recall the definition of the latter: let V and W be vector spaces (as always over ℝ) of dimension m and n, respectively. We now want to study the transformation behavior of cotangent vectors. Let the bases e_i, and the corresponding dual bases, be given by local coordinates. Thus a tangent vector transforms with the functional matrix of the coordinate change, whereas a cotangent vector transforms with the transposed inverse of this matrix. This different transformation behavior is expressed by the following definition: Definition 1.

From the formula 1. We apply this orthonormalization process for each bundle chart and obtain a new bundle atlas whose transition maps always map a Euclidean orthonormal basis of ℝ^d into another such basis, and hence lie in O(d). The orientation allows us to select an atlas for which all transition maps have positive functional determinant. From this, one sees that we also may obtain transition functions for the tangent bundle with positive determinant.

The orthonormalization process of Theorem 1. In the same manner as Theorem 1. Elements of Ω^p(M) are called exterior p-forms. Locally, such a form can be written as η(x) dx^{i_1} ∧ … ∧ dx^{i_p}, where η(x) is a smooth function and x¹, …, x^d are local coordinates; the exterior derivative is defined on such expressions and extended by linearity to all of Ω^p(M). Apply Lemma 1. to η dx^{i_1} ∧ … ∧ dx^{i_p}. We saw already that M then also carries a Riemannian metric. Then TM is also locally spanned by ω₁, …. We conclude this section with a consideration of the complex case; again, we remind the reader that it is needed only in particular places, like chapter 5. Here, in contrast to Definition 1.

If, however, these conditions are satisfied, that is, M is a complex manifold and the transition maps are holomorphic, then we have a holomorphic vector bundle. Forms of type (p, q) are written in terms of the dz^j and dz̄^j. One may verify these relations also by direct computation, e.g. The curve c_q is called the integral curve of X through q. For fixed q, one thus seeks a curve through q whose tangent vector at each point coincides with the value of X at this point, i.e. a solution of ċ(t) = X(c(t)).

This shows 1. Then, by Theorem 1. Thus, any point whose flow line passes through p is contained in the same flow line, namely the one starting at p. Therefore, there is precisely one flow line going through p. Also, flow lines in general are not closed, even if the flow exists for all t ∈ ℝ.

In general, a local 1-parameter group need not be extendable to a group, since the maximal interval of definition I_q of c_q need not be all of ℝ. This is already seen by easy examples, e.g. However, Theorem 1. Let now supp X ⊂ K, K compact. Higher order systems of ODEs may also be reduced to first order systems by introducing additional independent variables.
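The 1-parameter group property of a complete flow can be illustrated numerically. The vector field below (a rotation field on ℝ², our own choice, not an example from the text) is complete, and its flow maps satisfy ψ_{s+t} = ψ_s ∘ ψ_t.

```python
import numpy as np

# Flow of X(x, y) = (-y, x) on R²: integral curves are circles about the origin.
def X(p):
    x, y = p
    return np.array([-y, x])

def flow(p, t, n=2000):
    # RK4 integration of ċ = X(c), c(0) = p, up to time t.
    h = t / n
    for _ in range(n):
        k1 = X(p); k2 = X(p + h/2*k1); k3 = X(p + h/2*k2); k4 = X(p + h*k3)
        p = p + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return p

p = np.array([1.0, 0.5])
s, t = 0.7, 1.1

# Group property ψ_{s+t} = ψ_s ∘ ψ_t:
assert np.allclose(flow(p, s + t), flow(flow(p, t), s), atol=1e-8)

# For this X the flow is rotation by the angle t:
c, sn = np.cos(s + t), np.sin(s + t)
assert np.allclose(flow(p, s + t), np.array([c*p[0]-sn*p[1], sn*p[0]+c*p[1]]), atol=1e-8)
```

For a field without compact support the same computation may leave every compact set in finite time, which is exactly the failure of completeness discussed above.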

As an example, we want to study the system for geodesics, i.e. the geodesic flow, in coordinates x¹, …, x^d. The geodesic flow on TM is obtained from the cogeodesic flow by the first equation of 1. Thus, the geodesic lines are the projections of the integral curves of the geodesic flow onto M. The reason for considering the cogeodesic instead of the geodesic flow is that the former is a Hamiltonian flow for the Hamiltonian H from 1.

We remark that by 1. Hence, by Corollary 1. Now 1. This implies the first two claims. The Jacobi identity follows by direct computation. Proof. Directly from Lemma 1. For a tensor field S, this is not possible any more, because the values of S at different points lie in different spaces, and it is not clear how to compare elements of different fibers.

For this purpose, however, one might use a map F of one fiber onto another one, and an element v of the first fiber may then be compared with an element w of the second fiber by comparing F v and w. One possibility to obtain such a map at least between neighbouring fibers which is sufficient for purposes of differentiation is to use a local 1-parameter group of diffeomorphisms.

From 1. Conversely, if the ψ_t are isometries, 1. The space of all vector fields on a differentiable manifold constitutes a Lie algebra by Corollary 1. The claim then follows if we show that the space of Killing fields is closed under the Lie bracket [·,·]. This, however, follows from the following identity, which was derived in the proof of Theorem 1.

Since ψ_t and ψ_s are isometries, so are ψ₋t. Thus, [X, Y] indeed is a Killing field. An action from the right is defined analogously. The Lie groups we shall encounter will mostly be linear algebraic groups. In order to describe the most important ones, let V be a vector space over ℝ. Since bijectivity is an open condition, the tangent space to Gl(V), for example at the identity linear map, i.e. its Lie algebra, is the space of all linear maps of V. Of course, this is also the Lie algebra of O(V), and therefore in the sequel, we shall sometimes write o(V) in place of so(V).

As the ordinary exponential map converges, this series converges for all t ∈ ℝ, and e^{tA} is continuous in t. In fact, the exponential map maps so(V) onto SO(V). We consider ℝⁿ with its standard Euclidean scalar product. Sometimes, we shall also need complex vector spaces. Let V_ℂ be a vector space over ℂ of complex dimension m. Thus, Ad and ad associate to each element in Gl(V) resp. gl(V) a linear map. These representations are called adjoint representations. The unit element of a Lie group G will be denoted by e. Since a left invariant vector field is determined by its value at any point of G, for example at e, we obtain an isomorphism between T_eG and the space of left invariant vector fields.
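The claim that the exponential series carries skew-symmetric matrices into SO(n) is easy to test numerically. The sketch below (our own illustration; the truncation order, matrix size, and random seed are arbitrary choices) hand-rolls the series e^{tA} = Σ (tA)^k / k! and checks orthogonality and unit determinant.

```python
import numpy as np

# Truncated exponential series e^A ≈ Σ_{k<terms} A^k / k!.
def expm(A, terms=60):
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T                      # skew-symmetric: Aᵀ = -A
R = expm(0.3 * A)

# (e^{tA})ᵀ e^{tA} = e^{tAᵀ} e^{tA} = e^{-tA} e^{tA} = Id, det e^{tA} = e^{t·tr A} = 1:
assert np.allclose(R.T @ R, np.eye(4), atol=1e-10)
assert abs(np.linalg.det(R) - 1.0) < 1e-10
```

In production code one would of course use a library routine for the matrix exponential rather than a raw truncated series; the point here is only the algebraic identity.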

From Theorem 1. In analogy to the definition of a vector bundle (Definition 1. ), as in Definition 1. Here, the structure group operates on G by left translations. The notions of vector bundle and principal bundle are closely associated with each other, as we now want to explain briefly. If we divide out this G-action, i.e. pass to the corresponding quotient, we obtain the associated bundle. The transition functions for P also give transition functions for E via the left action of G on V. P can be considered as the bundle of admissible bases of E.

The transition functions describe a base change. For example, if we have an SO(n) vector bundle E, i.e. one with structure group SO(n). Lie groups, while only treated relatively briefly in the present textbook, form a central object of mathematical study. An introduction to their geometry and classification may be found in []. As symmetry groups of physical systems, they also play an important role in modern physics, in particular in quantum mechanics and quantum field theory.

We shall encounter Lie groups again in chapter 5 as isometry groups of symmetric spaces. A theorem of Myers–Steenrod says that the isometry group of a Riemannian manifold is a Lie group. For a generic Riemannian manifold, the isometry group is discrete or even trivial. A homogeneous space is a Riemannian manifold with a transitive group G of isometries. Homogeneous spaces form important examples of Riemannian manifolds and include the symmetric spaces discussed in chapter 5. In order to define Spin(n), we start by introducing Clifford algebras.

For a substantial part of the algebraic constructions to follow, in fact a not necessarily nondegenerate quadratic form on V would suffice, but here we have no need to investigate the most general possible construction. On the contrary, for our purposes it suffices to take ℝⁿ with its standard Euclidean scalar product. An orthonormal basis will be denoted by e₁, …, e_n. In particular, the dimension of Cl(V) as a vector space is 2ⁿ. Also, declaring this basis as being orthonormal, we obtain a scalar product on Cl(V) extending the one on V.

We define the degree of e_α as being |α|. The e_α of degree k generate the subspace Cl^k(V) of elements of degree k. The former is a subalgebra of Cl(V), but not the latter. The conclusion follows easily for monomials, and with a little algebra also in the general case. From these two cases, the general pattern should be clear. It follows from Lemma 1. Spin(V) then becomes a Lie group. This follows from general properties of the exponential map.

This is the so-called vector representation, not to be confused with the spinor representation introduced below. Consequently, ρ_a is the reflection across the hyperplane orthogonal to a. This is an element of O(V). For a ∈ Pin(V), ρ_a is a product of reflections across hyperplanes, hence in O(V). The preceding construction also shows that all reflections across hyperplanes are contained in the image of ρ(Pin(V)). Since all elements in Spin(V) are even, Lemma 1. Hence, a can be connected to e₁. Thus we need to connect 1 and −1.

If we differentiate the relation characterizing Spin(V), i.e. Let us also discuss the induced homomorphism 1. The preceding discussion implies that dρ coincides with the Lie algebra isomorphism τ of Lemma 1. Its tangent vector at 1, i.e. We have seen in the proof of Theorem 1. This, however, is the rotation in the e_i, e_j plane through an angle 2θ from e_i towards e_j. Again, we write e₁, e₂ in place of x, y. The even subalgebra of Cl(ℝ²) is generated by k := e₁e₂, and thus it is isomorphic to ℂ, where the purely imaginary complex numbers correspond to multiples of k. So, while Pin(V) is generated by unit vectors, Spin(V) is generated by 1 and k.

Thus, γ is an algebra homomorphism. Spin(ℝ⁴) then is the group of products of two such elements. Thus, the e_α again form a basis, and the only difference is that we now admit complex coefficients. For the sequel, we need to choose an orientation of V, i.e. a distinguished equivalence class of bases. Any other basis of V obtained from this particular one by an element of SO(V) then is also called positive.

It is easy to check that Γ is independent of the chosen positive orthonormal basis. A simple computation based on 1. Clifford multiplication by v interchanges these eigenspaces. This is a simple consequence of Lemma 1. We have already seen in the preparations for Theorem 1. The maps given in 1. Namely, a homotopically nontrivial loop γ in S¹ induces 1. On the other hand, if we have a loop in Spin^c(V) that is mapped to a homotopically trivial one in S¹ when we compose 1. That kernel can be identified with Spin(V) by 1. Thus Theorem 1.

The treatment here will be based on the above discussion of examples in the real case. We want to identify Cl_ℂ(ℝ²) with ℂ^{2×2}, the space of two by two matrices with complex coefficients. Thus, we identify Cl_ℂ(ℝ²) with ℂ^{2×2}. One also easily checks that 1. Thus, τ preserves the relations in the Clifford algebra, and it is not hard to verify that τ in fact extends to the desired isomorphism between Cl_ℂ(ℝ⁴) and ℂ^{4×4}. The preceding examples seem to indicate a general pattern that we now wish to demonstrate by induction on the basis of Lemma 1.
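The isomorphism Cl_ℂ(ℝ²) ≅ ℂ^{2×2} can be made concrete with Pauli-type matrices. The particular generators below are an assumed representation (our own choice, not necessarily the one constructed in the text), selected only to satisfy the Clifford relations e_i e_j + e_j e_i = −2 δ_ij · Id for the Euclidean scalar product.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
e1 = 1j * np.array([[0, 1], [1, 0]], dtype=complex)      # i·σ₁
e2 = 1j * np.array([[0, -1j], [1j, 0]], dtype=complex)   # i·σ₂

# Clifford relations: e_i² = -Id, e₁e₂ + e₂e₁ = 0.
assert np.allclose(e1 @ e1, -I2)
assert np.allclose(e2 @ e2, -I2)
assert np.allclose(e1 @ e2 + e2 @ e1, 0)

# The monomials 1, e₁, e₂, e₁e₂ are linearly independent over C,
# hence span the 4-dimensional space C^{2×2}:
basis = np.stack([M.ravel() for M in (I2, e1, e2, e1 @ e2)])
assert np.linalg.matrix_rank(basis) == 4
```

Since the algebra map is determined on the generators and both sides have complex dimension 4, the rank check is exactly the "same dimension, surjective on generators" argument used in the induction below.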

We choose orthonormal bases v₁, …. Now our map has become a homomorphism between two algebras of the same dimension, and it is injective and surjective on the generators, hence an isomorphism. We also choose an orientation of V. Because of 1. If we want to emphasize the dimension n of V, we write S_n in place of S. For subsequent use in 5. The latter are in fact irreducible. Cl(V), and therefore also V, acts on both of them by Clifford multiplication. We now want to extend the representation of Spin(V) to Spin^c(V).

Since the representation is complex linear, it commutes with multiplication by complex scalars, in particular with those of unit length. Let us discuss the example of Cl_ℂ(ℝ⁴) once more. The formulas given above for the products τ(e_α)τ(e_β) also show that the representation admits a decomposition into two copies of ℂ² that is preserved by the elements of even degree of Cl_ℂ(ℝ⁴). Both these spaces are three dimensional. In the above decomposition of the representation of Cl_ℂ^{ev}(ℝ⁴). Finally, let us briefly summarize the situation in the odd dimensional case. Here, according to Corollary 1.

When restricted to Spin(V), these representations become isomorphic and irreducible. This yields the spinor representation in the odd dimensional case. We omit the details. We also observe that the spinor representation is a unitary representation in a natural manner. This implies Corollary 1. We let TM be the tangent bundle of M. A Riemannian manifold with a fixed spin structure is called a spin manifold.

This is also expressed by saying that the frame bundle is lifted to a Spin(n)-bundle. It is important to note that such a lift need not always be possible. One way to realize this is by considering the corresponding transition functions. In fact, the existence of a spin structure, i.e. of such a lift, is a topological condition. Here, however, we cannot define these topological concepts.

Furthermore, if a spin structure exists, it need not be unique. Sections are called half spinor fields. From Corollary 1. An oriented Riemannian manifold M equipped with a fixed spin^c structure is called a spin^c manifold. Again, the existence of a spin^c structure depends on a topological condition, namely that w₂(M) lifts to an integral class in H²(M, ℤ).

Again, however, we cannot explain this here any further. We point out, however, that the required condition is satisfied for all oriented Riemannian manifolds of dimension 4. Thus, each oriented four-manifold possesses a spin^c structure. Identifying S¹ with U(1), we see that a spin^c structure induces a set of transition functions for a vector bundle L with fiber ℂ, a so-called complex line bundle. We return to the frame bundle P over M with fiber SO(n). Again, these Clifford bundles can be decomposed into bundles of elements of even and of odd degree.

The chirality operator Γ (cf. above) extends to these Clifford bundles.


The definition of the Clifford bundles did not need a spin or spin^c structure on M. But suppose now that we do have such a structure, a spin structure, say. Spin(n) acts on Cl_ℂ(ℝⁿ) by conjugation. This action commutes with the action of Spin(n) on Cl_ℂ(ℝⁿ) given by 1. Exercises for Chapter 1. 1) Give five more examples of differentiable manifolds besides those discussed in the text. Show that such a differentiable structure is unique. Hⁿ is called hyperbolic space.

Determine the geodesics of Hⁿ in the chart given in 6). (The geodesics through 0 are the easiest ones.) Determine the exponential map of the sphere Sⁿ, for example at the north pole p. Write down normal coordinates. Compute the supremum of the radii of balls in T_pSⁿ on which exp_p is injective. Where does exp_p have maximal rank?

Same as 8) for the flat torus generated by (1, 0) and (0, 1) ∈ ℝ². What is the transformation behavior of the Christoffel symbols under coordinate changes? Do they define a tensor? Show that X is complete and compute its injectivity radius. Show that the structure group of the tangent bundle of an oriented d-dimensional Riemannian manifold can be reduced to SO(d). Can one define the normal bundle of a differentiable submanifold of a differentiable manifold in a meaningful manner without introducing a Riemannian metric?

Let M be a differentiable submanifold of the Riemannian manifold N. Show that this topology coincides with the topology on M that is induced from the topology of N. Determine the corresponding flow on Sⁿ. Compute a formula for the Lie derivative in the direction of a vector field for a p-times contravariant and q-times covariant tensor. Show that for arbitrary vector fields X, Y. Let V be a real vector space with a scalar product and let Λ^p V be the p-fold exterior product of V.

The induced scalar product on Λ^p V is given by ⟨v₁ ∧ … ∧ v_p, w₁ ∧ … ∧ w_p⟩ = det(⟨v_i, w_j⟩). If e₁, …, e_d is an orthonormal basis of V, the monomials e_{i_1} ∧ … ∧ e_{i_p} with i₁ < … < i_p form an orthonormal basis of Λ^p V. An orientation on V is obtained by distinguishing a basis of V as positive. Any other basis that is obtained from this basis by a base change with positive determinant then is likewise called positive, and the remaining bases are called negative. Let now V carry an orientation. Since the star operator is supposed to be linear, it is determined by its values on some basis 2. In particular, this implies that the star operator does not depend on the choice of positive orthonormal basis in V, as any two such bases are related by a linear transformation with determinant 1.

For a negative basis instead of a positive one, one gets a minus sign on the right hand sides of 2. Lemma 2. ∗(e_{i_1} ∧ … ∧ e_{i_p}) = ± e_{j_1} ∧ … ∧ e_{j_{d−p}}, depending on whether e_{i_1}, …, e_{i_p}, e_{j_1}, …, e_{j_{d−p}} is a positive or a negative basis of V. Now ∗∗(e_{i_1} ∧ … ∧ e_{i_p}) = ±(e_{i_1} ∧ … ∧ e_{i_p}), and (−1)^{p(d−p)} thus is the determinant of the base change from e_{i_1}, …, e_{i_p}, e_{j_1}, …, e_{j_{d−p}} to e_{j_1}, …, e_{j_{d−p}}, e_{i_1}, …, e_{i_p}. It suffices to show 2. for e₁ ∧ … ∧ e_d, and the claim follows from 2. Since M is oriented, we may select an orientation on all tangent spaces T_xM, hence also on all cotangent spaces T*_xM, in a consistent manner. We simply choose the Euclidean orthonormal basis e₁, …, e_d of ℝ^d.
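The sign rule for the star operator on basis monomials, and the resulting identity ∗∗ = (−1)^{p(d−p)} on Λ^p(ℝ^d), can be verified combinatorially. The small sketch below is our own check: it computes ∗ on a basis monomial via the sign of the permutation formed by an index set and its complement.

```python
import itertools

def perm_sign(seq):
    # Sign of the permutation taking sorted(seq) to seq (inversion count).
    seq = list(seq)
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def star_basis(I, d):
    # ∗(e_{i1} ∧ … ∧ e_{ip}) = ± e_{j1} ∧ … ∧ e_{j_{d-p}}, where the sign is
    # that of the permutation (i1,…,ip,j1,…,j_{d-p}) of (1,…,d).
    J = tuple(k for k in range(1, d + 1) if k not in I)
    return perm_sign(I + J), J

# In R³: ∗e₁ = e₂∧e₃, while ∗e₂ = -e₁∧e₃ (= e₃∧e₁).
assert star_basis((1,), 3) == (1, (2, 3))
assert star_basis((2,), 3) == (-1, (1, 3))

# ∗∗ = (−1)^{p(d−p)} on Λ^p(R^d), checked exhaustively for small d:
for d in (3, 4, 5):
    for p in range(d + 1):
        for I in itertools.combinations(range(1, d + 1), p):
            s1, J = star_basis(I, d)
            s2, K = star_basis(J, d)
            assert K == I and s1 * s2 == (-1)**(p * (d - p))
```

The exhaustive loop is exactly the determinant-of-the-block-swap argument above: moving a block of p indices past d − p indices costs (−1)^{p(d−p)}.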

Since all chart transitions of an oriented manifold have positive functional determinant, calling this basis positive yields a consistent choice. Since M carries a Riemannian structure, we have a scalar product on each T*_xM. Therefore, by Lemma 2. , we may integrate the pointwise scalar product over M. This product on Ω^p(M) is obviously bilinear and positive definite. So far, we have considered only smooth sections of vector bundles, in particular only smooth p-forms. For later purposes, we shall also need L^p- and Sobolev spaces of sections of vector bundles.

For this aim, from now on, we deviate from Definition 1. It remains bilinear, and also positive definite, because, as usual in the definition of L², functions that differ only on a set of measure zero are identified. We now make the assumption that M is compact, in order not to always have to restrict our considerations to compactly supported forms. Definition 2. Since two stars appear on the right hand side of 2. We just define it locally, hence globally up to a choice of sign which then cancels in 2. Nevertheless, we shall usually omit the index p.

Corollary 2. Directly from the definition of Δ. Direct computation. dx¹ ∧ … ∧ dx^d, since φ is compactly supported. This Laplace operator therefore differs from the usual one on ℝ^d by a minus sign. This is regrettable, but cannot be changed any more, since the notation has been established too thoroughly. With our definition above, Δ is a positive operator. This is seen as follows: since for functions, i.e. 0-forms, (Δf, f) = (df, df) ≥ 0.
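The positivity of Δ under this sign convention has a discrete analogue that is easy to check (our own illustration, not from the text): the periodic second-difference matrix on the circle, a standard discretization of −f'', is positive semidefinite and satisfies a summation-by-parts identity mirroring (Δf, f) = (df, df).

```python
import numpy as np

# Periodic second-difference matrix on n grid points: (Lf)_i = 2f_i - f_{i-1} - f_{i+1}.
n = 64
L = 2*np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)

eigs = np.linalg.eigvalsh(L)
assert eigs.min() > -1e-10           # spectrum is nonnegative,
assert abs(eigs.min()) < 1e-10       # with 0 attained by the constant functions

f = np.random.default_rng(1).standard_normal(n)
# Discrete analogue of (Δf, f) = (df, df): (Lf, f) equals the sum of
# squared forward differences.
df = np.roll(f, -1) - f
assert abs(f @ L @ f - df @ df) < 1e-9
```

With the analysts' sign (ordinary second derivative) the same matrix would be negative semidefinite, which is exactly the sign discrepancy lamented above.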

We choose φ = f dx¹ ∧ … ∧ dx^d. We shall see a different proof in 3. The remaining formulae then are clear again. Let M be a differentiable manifold. Because of 2. , two closed forms differing by an exact form may be considered equivalent. This property determines an equivalence relation on the space of closed forms in Ω^p(M), and the set of equivalence classes is a vector space over ℝ, called the p-th de Rham cohomology group and denoted by H^p_dR(M, ℝ). Usually, however, we shall simply write H^p(M).

In this paragraph, we want to show the following fundamental result: Theorem 2. Uniqueness is easy: let ω₁, ω₂ ∈ Ω^p(M) be cohomologous and both harmonic. Let ω₀ be a closed differential form, representing the given cohomology class in H^p(M). The essential step consists in showing that the infimum is achieved by a smooth form η.

Such an η then has to satisfy the Euler–Lagrange equations for D, i.e. it has to be harmonic, in the sense of L²-convergence. Let now U ⊂ M be open. As long as we restrict ourselves to relatively compact subsets of U, all coordinate changes lead to equivalent norms. Furthermore, by a covering argument, it suffices to find for every x in the closure of U' a neighborhood U'' on which the claimed equivalence of norms holds. After these remarks, we may assume that first of all the chart is the map onto normal coordinates with center x₀, and that secondly the metric is suitably controlled in our neighborhood of x₀.

Since U' ⊂ U is compact by assumption, the claim for U' follows by a covering argument. Hence all results for Sobolev spaces in the Euclidean setting carry over. Theorem A. By the Riesz representation theorem, any bounded linear functional on a Hilbert space is representable as the scalar product with an element of the space itself.