68. My PhD Thesis: Part 3: More Topology, Geometric Group Theory, and More Algebra

(Epistemic status: A creased, stained map to what were once my favorite hunting grounds. Accessible to anyone who can support substantial abstraction; prior math knowledge is not necessary. In particular, ignorance of calculus is not an obstruction here, but total ignorance of geometry or like, arithmetic or logic, will be. Extremely dense and probably won’t get you there, but at least you’ll ask better questions. Partially dedicated to DG, JM, and PR.) 

The path to my old hunting grounds (as pictured above) is long and twisty and winds through a lot of necessary math along the way. You'll be best served by carefully going through the parentheticals and answering the questions I ask, so as to keep track of the blazes and to keep your footing. You'll still get something out of this if you move more quickly, but you might end up lost further down the line. You might find the pace a little slow if you already know the territory - test yourself by answering those same parentheticals as quickly and off-handedly as I ask them.

To start with, answers to the two challenge problems.

For the two-part one about covering theory, let's start with the natural degree-\(2\) map \(S^2 \to \mathbb{R}P^2\). This is the quotient by the antipodal map. Start by arbitrarily picking some open hemisphere of the sphere - that is, a hemisphere not including its great-circle boundary. Add in some contiguous half of that boundary, including exactly one of its two endpoints; call the result \(\frac{S^2}{2}\). Assuming without loss of generality that our sphere is centered at the origin, we now have a perfect division of the sphere into two halves, such that any line through the origin meets the sphere in precisely one point of the half we've constructed and one point of the remaining half. The map follows immediately: for every line through the origin, send both of its points of intersection with the sphere to the unique point of the pair which lives in \(\frac{S^2}{2}\).
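To see this concretely, here's a minimal sketch in Python of one such map, under one hypothetical choice of \(\frac{S^2}{2}\): the open upper hemisphere, plus the \(y > 0\) half of the equator, plus the single point \((1, 0, 0)\).

```python
def antipode(p):
    x, y, z = p
    return (-x, -y, -z)

def covering_map(p):
    """Send a point on the unit sphere to the unique representative of its
    antipodal pair lying in our chosen half sphere plus half boundary.
    Convention (one arbitrary choice among many): open upper hemisphere,
    plus the y > 0 half of the equator, plus the point (1, 0, 0)."""
    x, y, z = p
    if z > 0 or (z == 0 and (y > 0 or (y == 0 and x > 0))):
        return p
    return antipode(p)

# Both members of an antipodal pair land on the same representative,
# which is exactly what makes this a degree-2 covering map.
print(covering_map((0.0, 0.0, -1.0)) == covering_map((0.0, 0.0, 1.0)))  # True
print(covering_map((0.6, -0.8, 0.0)) == covering_map((-0.6, 0.8, 0.0)))  # True
```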

https://upload.wikimedia.org/wikipedia/commons/thumb/9/9f/CrossCapSlicedOpen.PNG/500px-CrossCapSlicedOpen.PNG 

The story for the torus is pretty similar: in the same way as you can present the torus as a square with oriented arrows on its sides that provide a gluing recipe for the torus, you can do the same for the Klein bottle (the first picture below), which also indirectly gives us a way to see the double covering immediately (the second picture below) by looking at the square pictures (called fundamental domains) for a Klein bottle and its mirror image and piecing them together into the square picture for a torus. Some depictions of the Klein bottle make this twofold cover especially clear (the third picture below).

 

 

 https://upload.wikimedia.org/wikipedia/commons/9/9e/Kleinbagel_cross_section.png

There's actually slightly more to say here. We need to make a quick trip back to the green path to talk about two foundational topics from point-set topology that we skipped over when we were first talking about topology, but which we'll need later on when we talk about a special type of \(3\)-manifold.

When we think about geometric objects, we might want to characterize them in ways that go beyond pure connectedness. We can describe whether or not a given space behaves normally with an inside and an outside, and we can even talk meaningfully about the size of an object, at least in very rough terms.

For the first of these, the concept we want is "orientability". Orientability is something that applies to manifolds, which means that if we want to figure out whether a surface is orientable, it needs to be true that anywhere on the surface we stand, the area around us looks like a flat plane. Just like with topology, we ignore self-intersections, and we get to make ourselves as small as we like. A cone is not a manifold - the point is too sharp. Neither is a cube - the edges and corners are too sharp. But a sphere is, and so is a torus. You can also have more general \(n\)-manifolds, which are like \(2\)-manifolds except that the space in a tiny patch around you needs to look like \(\mathbb{R}^n\), or \(n\)-dimensional real space. For example, this whole time, we have been leading up to talking about \(3\)-manifolds, the topic of my thesis.

Most of the shapes you can think of are orientable. A shape is orientable if you can consistently define "up" from within the space; more formally, you need to be able to make a smoothly varying, globally consistent assignment of surface normals at each point in the space. Spheres and toruses are always orientable - just think of how you can carry a flag pointed straight up while walking around the world however you like, and the flag will never end up pointed in the opposite direction.

By contrast, the two double-covered shapes above - the Klein bottle and the real projective plane - are both nonorientable. If you've ever heard that a Klein bottle has no inside, this is why. Imagine walking along the surface of a Klein bottle like the one at the bottom right of the first picture of the three above. You can, for example, follow the red loop: start on the outside near the "base" of the bottle, walk along the surface and the handle, pass through the wall of the bottle, walk along the tube, and end up back where you started... except now you're upside-down. If you've ever heard of a Mobius strip, that's another good example of a nonorientable surface.
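If you like, you can watch the flag flip numerically. The sketch below uses one standard parameterization of the Mobius strip (the exact formula is a common choice, not canonical) and computes a surface normal by crossing finite-difference tangent vectors; carrying the normal once around the center circle brings it back reversed.

```python
from math import cos, sin, pi

def mobius(u, v):
    # One standard embedding of the Mobius strip in 3-space:
    # u runs around the center circle, v runs across the strip's width.
    r = 1 + (v / 2) * cos(u / 2)
    return (r * cos(u), r * sin(u), (v / 2) * sin(u / 2))

def normal(u, v, h=1e-6):
    # Surface normal as the cross product of finite-difference tangents.
    p = mobius(u, v)
    du = [(a - b) / h for a, b in zip(mobius(u + h, v), p)]
    dv = [(a - b) / h for a, b in zip(mobius(u, v + h), p)]
    return (du[1] * dv[2] - du[2] * dv[1],
            du[2] * dv[0] - du[0] * dv[2],
            du[0] * dv[1] - du[1] * dv[0])

n0 = normal(0.0, 0.0)       # normal at the start of the center circle
n1 = normal(2 * pi, 0.0)    # same point in space, after one full trip around
print(n0[2], n1[2])  # opposite signs: the "flag" came back flipped
```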

For the other one, the concept we want is called "compactness". Compactness is a kind of very vague smallness with respect to some choice of topology - a choice of which subsets of a space count as "small" or "nearby", which need not correspond to literal closeness. We call these sets "open", because one of the best examples of a topology is the metric ball topology, where the overall space is some flat space with real coordinates of finite dimension, and the sets that we consider small are the empty set, the entire space (which are both always counted as "nearby" sets), and the insides of any sphere of any finite radius. If we can completely contain some shape with these open sets, we call that a cover of the shape. A shape is compact if for any such cover, we can always find some finite subset of the covering sets which still covers the shape. For example, a square is compact in the metric ball topology from above: no matter how you arrange any number of circles that cover a given square (including its edges), you can always pick out some finite number of the circles that still cover the square. (Try it and see!)

Thankfully, we can mostly ignore this definition: by the Heine-Borel theorem, because we are basically working in some real space, being compact is equivalent to being closed and bounded. A shape is closed if it contains its own boundary, like an orange along with its peel. A shape is bounded if we can draw a circle around it of some finite radius, however large, and contain the whole shape. A ball without its enclosing sphere is not closed. A tube with a finite cross-sectional circle but which extends infinitely in both directions is not bounded. An ordinary doughnut is both closed and bounded, and so compact.

Now for the other challenge problem, the one about polynomials and groups/fields of positive characteristic. (That just means that the characteristic isn't \(0\), and in fact for a field must be some prime \(p\).) For the first part, the thing to note is that for a prime \(p\) and any \(0 < a < p\), we have \(a^{p-1} \equiv 1 \pmod{p}\), so that any polynomial of degree at least \(p\) is equivalent, as a function on \(\mathbb{F}_p\), to some polynomial of lesser degree. To know this in turn, you can either notice that \(\mathbb{Z}_p^\times\), the multiplicative group of the integers mod a prime, is cyclic of order \(p-1\) (so that every element's order divides \(p-1\)), or you can just know Fermat's little theorem.
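Both facts are easy to spot-check by brute force; here's a quick sketch for the (arbitrary) choice \(p = 7\).

```python
p = 7  # hypothetical choice; any prime works

# Fermat's little theorem: a^(p-1) = 1 mod p for every a with 0 < a < p.
for a in range(1, p):
    assert pow(a, p - 1, p) == 1

# Consequence: x^p and x agree as functions on F_p (including x = 0)...
for x in range(p):
    assert pow(x, p, p) == x

# ...so high-degree polynomials collapse: e.g. x^10 = x^4 as functions on F_p,
# since x^10 = x^4 * x^6 and x^6 = 1 for nonzero x (and both sides vanish at 0).
for x in range(p):
    assert pow(x, 10, p) == pow(x, 4, p)

print("all reductions check out for p =", p)
```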

To use this to show that every polynomial splits over \(\overline{\mathbb{F}_p}\) for all \(p\) past some finite point, without appealing to the construction of \(\overline{\mathbb{F}_p}\) as an algebraic closure, we start by noticing that famously, by the Fundamental Theorem of Algebra, \(\mathbb{C}\) (or equivalently \(\overline{\mathbb{Q}}\)) is algebraically closed. By compactness, this means that for any given polynomial, there can be only finitely many choices of \(p\) for which \(\overline{\mathbb{F}_p}\) could possibly fail to factor it. (In fact, there are none, but this argument can't show that.) Then all you need to do is pass to the last such prime, go one prime further, and you're done.

At any rate: at last we near the end of the path to my old hunting grounds! Unfortunately, this is where the trails start to get treacherous. Thus far, everything has been either a core part of a strong undergraduate or ordinary graduate course in pure math, but as we head along the orange and magenta paths, we touch on topics not generally covered and even some new results dating to the late 2010s. Some are relatively straightforward to understand if you've made it this far, but others will be fairly challenging. Let's start with a warm-up topic from the very start of the magenta just past the end of the red, regarding constructions on groups.

Consider the following: in order to understand a group \(G\) and its properties, you don't necessarily have to know the group in full, and in fact sometimes this is intractable. So, one thing that you can do is look at what happens when you pick different choices of finite-size group \(F_i\) and see what kinds of homomorphisms (recall, functions that respect group properties) \(\phi : G \to F_i\) you can possibly come up with. If for any \(g \in G\) with \(g \neq 1\) you can always find some finite group \(F\) and homomorphism \(\phi : G \to F\) such that \(\phi(g) \neq \phi(1)\), then your group is residually finite: for any pair of distinct elements in \(G\), you can keep them peeled apart from each other even after passing to a much smaller group which you might understand much better.
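As a toy example, \(\mathbb{Z}\) (written additively) is residually finite: to separate a nonzero integer \(g\) from \(0\), reduce mod any \(n > |g|\). A minimal sketch:

```python
def separating_quotient(g):
    """For a nonzero integer g, find a finite quotient Z -> Z/n in which
    the image of g is still distinct from the identity. Any modulus
    larger than |g| works; we return the smallest such choice, |g| + 1."""
    assert g != 0
    n = abs(g) + 1
    assert g % n != 0  # the image of g in Z/n is not the identity (0)
    return n

print(separating_quotient(12))   # 13, since 12 mod 13 is nonzero
print(separating_quotient(-5))   # 6, since -5 mod 6 is nonzero
```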

One reason why we care about whether a group is residually finite is the word problem for groups. It's actually not possible in general to start with the letters and relations that make up a group, take an arbitrary word in those letters, and figure out for sure in finite time whether or not that word is the identity. This should be surprising until you realize that you can encode arbitrary computations in sufficiently rich string-rewriting rules... like the relations of a group. Worse yet, from the frame of algebraic topology and fundamental groups, we can think of this as trying to figure out whether a given loop within a specified space is contractible to a point or not! Horribly, \(4\)-manifolds and higher can have arbitrary finitely presented fundamental groups (which I might get around to talking about later), so this question is not in fact solvable for every such manifold... but it turns out that \(3\)-manifold groups are required to be residually finite on top of being finitely presented, and this is a reason for us specifically to care about residual finiteness as we near our destination. As another, sharper reason why we might care about residual finiteness, the foregoing word problem means that we cannot always even directly demonstrate that a given group presentation represents a nontrivial group; one way we might try to get around this is through the explicit construction of a nonconstant map to a nontrivial finite group. But this only has any hope of working if such a map can distinguish at least some group elements from the identity, which is not guaranteed.

For example, the group \((\mathbb{Q}, +)\) is not residually finite. It's what's called a divisible group: for any fraction \(\frac{p}{q}\) and any integer \(n > 1\), we can construct another fraction \(r \in \mathbb{Q}\) such that \(r \cdot n = \frac{p}{q}\); in particular, \(r = \frac{p}{qn}\). More generally, we say that a group \(G\) is divisible if for all \(g \in G\) and all integers \(n > 1\), there exists some \(h \in G: h^n = g\) - where we recall that we usually write the group operation as multiplication, so that if the group operation is really addition, \(h^n\) really means \(n \cdot h\). But nontrivial divisible groups are never residually finite. To see this, suppose we have a divisible group \(G\) and some choice \(F\) of nontrivial finite group, and let's look at what happens when we try to build a nontrivial homomorphism \(\phi : G \to F\). Let \(f = |F|\); notably, \(f > 1\) is an integer. For any \(g \in G\), divisibility gives us some \(h\) for which \(h^f = g\). Now, the order of an element must always be a factor of the order of the group itself, so every element \(x \in F\) satisfies \(x^f = 1\). Then we have \(\phi(g) = \phi(h^f) = \phi(h)^f = 1\), since \(\phi(h) \in F\). But \(g\) was arbitrary, which means that such a nontrivial \(\phi\) can't exist.
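The only group theory the argument really uses is Lagrange's theorem: in a finite group of order \(f\), every element satisfies \(x^f = 1\). Here's a brute-force check of that fact for the (arbitrarily chosen) finite group \(S_3\), written as permutations of \((0, 1, 2)\):

```python
from itertools import permutations

def compose(s, t):
    # Compose permutations given as tuples: apply t first, then s.
    return tuple(s[t[i]] for i in range(len(t)))

identity = (0, 1, 2)
S3 = list(permutations((0, 1, 2)))  # the 6 symmetries of a triangle
f = len(S3)

# Lagrange: each element's order divides |S3| = 6, so x^6 = 1 for all x.
for x in S3:
    power = identity
    for _ in range(f):
        power = compose(power, x)
    assert power == identity

# So if G is divisible and phi: G -> F is any homomorphism to a finite F,
# then for any g we can pick h with h^f = g, and phi(g) = phi(h)^f = 1.
print("every element of S_3 has order dividing", f)
```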

A related concept is that of the profinite completion of a group. In the same way that we can try to understand a group by understanding what happens when we try to map it to finite groups, we can also try to understand it by understanding its normal subgroups of finite index and then looking at the resulting quotients. Recall that the index \([G:H]\) of a subgroup \(H < G\) is, loosely speaking, the inverse of the fraction of the group contained in the subgroup and is always a positive integer (or infinite); and that \(H\) is normal if conjugating \(H\) by any element of \(G\) gives back exactly \(H\). In particular, we care about normal subgroups for precisely this reason: it means that we can talk meaningfully about "dividing out" \(G\) by \(H\), getting a new quotient group \(G/H\) whose size is precisely \([G:H]\).

The profinite completion of a group \(G\), then, can be thought of as the best approximation to \(G\) as cobbled together out of finite-sized quotient groups of \(G\). More formally, it's the inverse limit (confusingly, a type of categorical limit and not colimit!) over all finite-index normal subgroups \(N\) of \(G\) of the (finite) quotient groups \(G/N\), written as \(\varprojlim \{G/N\}_{N \unlhd G}\); we write that profinite completion as \(\widehat{G}\). What precisely an inverse limit is is out of scope of this writeup (and I recommend looking into it further), but to give you a rough idea, an inverse limit is a way of compatibly gluing together small objects into a larger one. In this case, each element of \(\widehat{G}\) corresponds to a locally-consistent choice of what quotient group element it maps to under all the maps, or equivalently, a locally-consistent choice of what coset representatives of the normal subgroups that element corresponds to - that is, its residue classes.

Let's look at two simple concrete examples. For one, consider \((\mathbb{Q} , +)\), the additive group of the rationals as mentioned above. If you check, you'll find that it has no proper normal subgroups of finite index at all! For a quick way to convince yourself of this, take any rational number and look at the normal subgroup that it generates. This will have, at the least, one coset for every prime number coprime to its denominator. Adding more fractions to the generating set won't help, either - all that does for you is turn the resulting subgroup into one that you can write down as having a single generator - one whose denominator is the least common multiple of the fractions that you used. So \(\widehat{\mathbb{Q}} \cong 1\). For another, let \(G\) be any finite group at all. Then elements of \(\widehat{G}\) will correspond to some element of \(G\) along with its coset representatives for each normal subgroup... but there's only finitely many of those, and each such element is completely determined by the element of \(G\) that we picked at the start. So for every finite \(G\), we have \(\widehat{G} \cong G\).
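The finite case is small enough to compute outright. For the (hypothetical) test case \(G = \mathbb{Z}/4\), the only proper nontrivial quotient is \(\mathbb{Z}/2\), and an element of \(\widehat{G}\) is a residue in each quotient that's consistent under the projection \(a \mapsto a \bmod 2\):

```python
from itertools import product

# G = Z/4 has normal subgroups {0}, {0, 2}, and G itself, giving quotients
# Z/4, Z/2, and the trivial group. An element of the profinite completion
# is a compatible family of residues: the Z/4-coordinate must reduce to
# the Z/2-coordinate under the projection a -> a mod 2. (The trivial
# quotient imposes no condition, so we omit it.)

compatible = [(a, b) for a, b in product(range(4), range(2)) if a % 2 == b]
print(compatible)        # [(0, 0), (1, 1), (2, 0), (3, 1)]
print(len(compatible))   # 4 -- the completion of this finite group is itself
```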

For a challenge problem, consider \(\mathbb{Z}\), the group of integers. What are its normal subgroups? What are all of its finite quotient groups? Pick two of your favorite integers and write down their representations in terms of the residues. Then write down an element of \(\widehat{\mathbb{Z}}\) that does NOT correspond to any true integer; this is an example of a profinite integer.

One immediate connection between profinite completions and residual finiteness is the fact that a group is residually finite iff it fits naturally inside its own profinite completion: more formally, that \(G\) is residually finite iff the natural map \(\iota: G \to \widehat{G}\) is injective. To see this, we trace through in both directions. For one direction, suppose that \(\iota\) is injective, so that \(g \neq 1_G \Rightarrow \iota(g) \neq 1_{\widehat{G}}\). If \(G\) were not residually finite, then there would be some \(g \neq 1_G\) not separated from the identity by finite quotients, and that would mean that \(g \in N\) for all finite-index normal subgroups \(N \unlhd G\). But that would mean that \(\iota(g) = 1_{\widehat{G}}\), a contradiction. For the other direction, assume that \(G\) is residually finite, and suppose that for some pair of elements \(g, h \in G, g \neq h\), we have \(\iota(g) = \iota(h)\), that is, \(gN = hN\) for all finite-index normal subgroups \(N \unlhd G\). Then \(gh^{-1}\) lies in every such \(N\), so no finite quotient separates it from the identity; by residual finiteness, that forces \(gh^{-1} = 1_G\), that is, \(g = h\), a contradiction.

Notably, at the time of writing it is still a major open question whether \(F_2 = \langle a, b \rangle\), the free group on two generators, is profinitely distinguishable from all other residually finite and finitely generated groups; this conjecture of Remeslennikov has stood for perhaps 50 years, since almost the beginning of the study of residually finite and profinite groups. Since every finitely-generated free group of rank at least \(2\) is a finite-index subgroup of \(F_2\), and thus every finitely generated group at all is a quotient of some finite-index subgroup of \(F_2\), and in light of the above discussion, a resolution to the conjecture would be a major step forward no matter which way it fell. If \(F_2\) were profinitely rigid among all residually finite finitely-generated groups, then we would know that everything important about a group can be known just by looking at its finite quotients; likewise, if it were profinitely isomorphic to some other residually finite finitely-generated group, that would tell us something important about when it is that profinite isomorphism fails to correspond to true isomorphism, and what exactly we miss when we look only at the finite quotients of a group.

One last concept before we move on: virtual properties. When we say that a group \(G\) is "virtually" something, what we mean is that the property may not hold of \(G\) itself, but that it does hold of some \(H < G\) with \([G:H] < \infty\). You might have a group which is virtually commutative (it has a commutative group as a finite-index subgroup) or virtually free (some finite-index subgroup has some generators and no relations). But why is this something we should care about at all?

The answer is complex and there are many ways of seeing it, and all of them take us onto the orange path. Maybe you can begin to see how by this point, everything we've talked about so far begins to tangle up into a single field of study. First, some properties of interest are equivalent to their virtual counterparts. A virtually residually finite group is in fact residually finite. A group with virtual torsion in fact has torsion. But another way comes through a connection to geometric group theory as well as covering theory from within algebraic topology. Briefly, if some algebraic property holds of a group \(G\), and \(G\) is the fundamental group of some geometric object \(X_G\), there is generally some appropriate geometric property that holds of \(X_G\). If instead the property only virtually holds of a group \(G\), then because any finite-sheeted cover of \(X_G\) - call it \(X_H\) - has a fundamental group \(H\) which is a finite-index subgroup of \(G\), there then exists some such covering \(X_{\tilde{G}}\) whose fundamental group \(\tilde{G}\) actually has that algebraic property, and the corresponding geometric property then also holds of \(X_{\tilde{G}}\). Thus mathematicians also term properties that hold of a finite-sheeted cover of a space to be virtual, and often consider such properties to almost hold or even effectively hold of the original space.

Stated a different way, a virtual property holds of a group or a shape if there exists a closely-related group which is simpler and smaller, or a related shape which is unfolded more and thus simpler but larger, for which the property holds exactly. Importantly, this reduction of a group or unfolding of a shape has an inherently finite quality to it - neither too simple to change anything nor so complex it destroys all structure, but just right; something that in many ways has the same flavor as the original object of interest. That simpler object is close enough that we can reason about the original object by understanding the simpler case. It's a little like coming to understand a complex periodic pattern by looking at a single repeating cell: it can't contain the whole pattern, but it's close enough to tell you what you want to know.

Indeed, just such a virtual geometric group theoretic property is the subject of one of the recent keystone results that my thesis depended on: the virtual Haken theorem. It says that for any \(3\)-manifold that we pick which is compact, orientable, and irreducible, and which has an infinite fundamental group, we can always find some finite-sheeted covering of the \(3\)-manifold which is Haken. Well... we talked about what a \(3\)-manifold is - that's just a smooth shape where every patch around you looks like flat real \(3\)-space. We talked about what it means to be orientable - you can consistently say what "up" means - and what it means to be compact - small, sort of, and in fact closed and bounded for our purposes. And all it means for the fundamental group to be infinite is that it has at least one loop inside of it that can't be pulled tight, and that still can't be pulled tight no matter how many times you go around it (check out the \(SO(3)\) trick for a loop that does die after you go around twice!) - remember, such a loop gives us a copy of \(\mathbb{Z}\) inside our fundamental group. But what does it mean for a \(3\)-manifold to be irreducible? Or Haken? Both are simpler than they might seem.

A \(3\)-manifold \(M\) is irreducible if every \(2\)-sphere sitting inside it bounds a \(3\)-ball. For our purposes, this comes down to two conditions: there's no way to write \(M\) as a connected sum \(M = N_1 \# N_2\) where neither summand is \(S^3\) (because if you connect-sum \(S^3\) with any \(3\)-manifold you get the same thing back, same as with \(S^2\) and any \(2\)-manifold), and \(M\) isn't the one exceptional case \(S^2 \times S^1\), which admits no such connected-sum decomposition but still contains a sphere that bounds no ball. Basically: apart from that one specific \(3\)-manifold, irreducible means there's no meaningful way to cut \(M\) into two smaller pieces.

On the other hand, a \(3\)-manifold \(M\) is Haken if it's compact, irreducible, and also somewhere inside of it, there's an incompressible \(2\)-manifold \(S\) - a surface. By incompressible, we mean that \(S\) is neither a sphere \(S^2\) nor a solid circle \(B^2\), and there's no solid circle \(D\) inside \(M\) with its boundary entirely in \(S\) where that boundary doesn't also bound a different solid circle entirely inside \(S\). If there were such a disk without a counterpart, we'd call it a compressing disk, because it would give us a new way to pull some loops from inside \(S\) tight in a way that can't be achieved inside \(S\) alone.

Delightfully, we can tie this back to our earlier discussions of fundamental groups and homomorphisms - for such a surface \(S\), there's an injective homomorphism \(\iota: \pi_1(S) \hookrightarrow \pi_1(M)\), that is, there's a copy of the fundamental group of \(S\) that lives inside the fundamental group of \(M\); in other words, no loop that's constrained to \(S\) can be pulled tight to a point even if you let it move throughout the interior of \(M\).

Haken \(3\)-manifolds are special. By cutting along the incompressible surfaces, we get simpler pieces, and moreover, if we keep on doing this, we don't need to do it forever: we eventually bottom out in a collection of \(3\)-dimensional balls. This often makes life easier if we want to prove something about general \(3\)-manifolds: if we have a Haken \(3\)-manifold, then to prove something about the manifold, all we need to do is prove it for \(3\)-balls, then show that the proof still goes through if we glue one \(3\)-ball to another - probably in an ambient space of at least \(4\) dimensions - along some well-behaved surface that they can be made to share, like some kind of multi-holed torus that fits inside both balls. This lets us prove what we want to prove using a fairly simple induction argument - though we won't end up needing to do that.

So taking it all together, the virtual Haken theorem says that any time you have a \(3\)-manifold which is closed and bounded, has a consistent direction of "up", has meaningful loops that can't be pulled tight, and which can't be cut into smaller pieces, you can always find some way of stitching finitely many copies of the space together so that you end up with another \(3\)-manifold containing at least one interesting surface whose loops can't be pulled any tighter.

So why do we care? We'll need to back out a little bit and talk about the basics of graph theory. A graph consists of a set of vertices and a set of edges attached to those vertices; we might write this \(\Gamma = (V_\Gamma, E_\Gamma)\). Every edge has a starting vertex and an ending vertex, which might be the same, forming a loop. Graphs can represent pretty much anything you care about where there's some kind of Thing and a Connection that connects at most two of the Things. Subway and train maps are graphs, where the vertices represent stations and the edges represent a rail line connecting two adjacent stations. You can also draw social graphs, where the vertices represent people and the edges represent some kind of relationship between the people.
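Since graphs will be doing real work shortly, it's worth seeing how little data one actually is - vertices plus ordered (start, end) pairs, with loops allowed. The labels here are made up for illustration:

```python
# A graph Gamma = (V, E) as plain data: vertices are labels, and each
# edge is an ordered (start, end) pair. A loop is an edge whose two
# endpoints coincide.

vertices = {"A", "B", "C"}
edges = [("A", "B"), ("B", "C"), ("C", "C")]  # the last edge is a loop

def is_loop(edge):
    start, end = edge
    return start == end

print([e for e in edges if is_loop(e)])  # [('C', 'C')]
```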

From a different direction, it's possible to build up complex groups by starting with simpler groups and then gluing them together somehow. We've actually already seen one example of this: the free group on two generators, \(F_2 = \langle a, b \rangle\), which is the fundamental group of the wedge sum of two circles, \(S^1 \vee S^1\). More generally, you can take two groups \(G, H\) and form something called their free product, a more complex group that contains them both. An element of the free product \(G \ast H\) looks like a word \(g_1 h_1 g_2 h_2...\), a product which alternates back and forth between an element from \(G\) and one from \(H\), where the only identification we make is \(1_G = 1_H\), as homomorphisms require. (That way, we can totally have such a word starting with an element of \(H\): just set \(g_1 = 1\).) Otherwise, elements of \(G\) and \(H\) don't interact - why should they? If we additionally glue \(G\) and \(H\) along a shared subgroup \(K\), identifying its copy in each, we get the amalgamated free product \(G \ast_K H\), which we'll need in a moment.
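To make "elements are alternating words that only simplify by cancellation" concrete, here's a sketch of word reduction in \(F_2 = \mathbb{Z} \ast \mathbb{Z}\). Representing a word as a list of (generator, exponent) pairs is just one convenient encoding:

```python
def reduce_word(word):
    """Reduce a word in F_2 = <a, b>, given as (generator, exponent)
    pairs, by merging adjacent powers of the same generator and dropping
    zero exponents -- the only simplifications a free group allows."""
    out = []
    for gen, exp in word:
        if out and out[-1][0] == gen:
            merged = out[-1][1] + exp
            out.pop()
            if merged != 0:
                out.append((gen, merged))
        elif exp != 0:
            out.append((gen, exp))
    return out

# a b b^-1 a^-1 a reduces, by successive cancellations, to just a:
print(reduce_word([("a", 1), ("b", 1), ("b", -1), ("a", -1), ("a", 1)]))
```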

It's no accident that \(\pi_1(S^1) = \mathbb{Z}\) and that \(\pi_1(S^1 \vee S^1) = \mathbb{Z} \ast \mathbb{Z} \cong F_2\): this is yet another example of how algebra and geometry play off against each other and display deep correspondences. In fact, hopefully you remember the Seifert-van Kampen theorem from last post, where we built up the fundamental groups of overlapping spaces in terms of the fundamental groups of the summand spaces and the fundamental group of the overlap space.

From geometric group theory, we have the concept of a graph of groups: a graph where every vertex and every edge has a group attached to it. In addition, for every edge group, we require there to be an injective homomorphism from that group into the group on each vertex that the edge connects to.

Alright, but how do these all connect up? It's a little convoluted but it's extremely important, so stay with me.

  • We start with an arbitrary \(3\)-manifold \(M\) which is compact and orientable.
  • First, we can ignore connect-sums, as they're pretty easy to handle in terms of the parts; this is Seifert-van Kampen, and we can pass to the case where \(M\) is irreducible, too. 
  • Next, the JSJ decomposition theorem (by Jaco, Shalen, and Johannson) says that whenever we have any compact, orientable, and irreducible 3-manifold, there's a best way to find a finite number of annuli and tori \(\{S_j\}\) inside \(M\) that you can cut along - different from undoing a simpler connected sum - to reduce \(M\) to a finite set of parts \(\{M_i\}\), where each of those parts is already a fairly simple and well-understood \(3\)-manifold.
    • The cuts are unique up to isotopy - recall, pushing and stretching and bending without cutting, piercing, or gluing.
    • The parts can be Seifert-fibered, which means that they have nice circle symmetry and in fact are comprised of one ordinary circle for every point on something like a \(2\)-manifold (an orbifold, actually); they can be hyperbolic, with constant negative curvature and well-behaved fundamental groups; or they can be toroidal or a thickened torus, which are both simple to handle as well.
    • By "best", we mean that the geometry of \(M\) changes as you move around it, and that the JSJ decomposition's choice of cuts cleanly separates where one geometry ends and another begins.
    • In fact, Thurston's geometrization theorem tells us what kinds of geometries the parts can have, and JSJ decompositions only ever need a handful of them: the pieces are Seifert-fibered or hyperbolic, as above.
  • These pieces give us a graph of groups, where the vertices are the \(\{\pi_1(M_i)\}\) and the edges are the \(\{\pi_1(S_j)\}\), which are in fact all \(\mathbb{Z}\) or \(\mathbb{Z}^2\), as you can even prove for yourself. (Try it!) In fact we can trace the corresponding graphs of groups all the way down every time we cut, turning a vertex into a vertex with a loop or a pair of vertices with an edge between them.
    • If we have a true edge, we get the amalgamated free product of the two resulting vertex groups - their free product, with the two copies of the subgroup corresponding to the cutting surface glued together - just like Seifert-van Kampen.
    • If we have a loop, we get something called an HNN extension, which is a lot like an amalgamated free product of just one group with itself, where what would have been the shared subgroup in a true edge now has to sit inside the one group in two different ways connected by a homomorphism, and conjugation by a newly introduced generator takes elements from the first copy to their image under that homomorphism in the second copy.
    • We could also run this process in reverse, starting with a sprawling graph of groups and applying HNN extensions and amalgamated free products to collapse edges all the way back up the chain until we have a single vertex with \(\pi_1(M)\) attached.
    • At this point, we've extracted all the geometric information that we can; going further is about making powerful proof techniques work better.
  • The resulting \(\{M_i\}\) might not be Haken, which is a problem for induction proofs that want to start with nice simple \(3\)-balls and end up with arbitrary \(3\)-manifolds. This would have stopped us if we were in the 1970s, but we're in the incomprehensible future year of 2026 and we have the virtual Haken theorem, which is almost as good; we can pass to a finite cover of the non-Haken parts and get a Haken decomposition anyway.
    • You can think of this as a little like the non-Haken parts being "too small" to contain an incompressible surface and wanting to have only a fraction of a \(3\)-ball inside of them; passing to a finite cover is then like having only \(\frac{2}{5}\)'s worth of a \(3\)-ball, so that passing to a fivefold cover gets you something that you can break into two \(3\)-balls.
  • We end up with a way to break down a compact (smallish) orientable (directionally nonweird) \(3\)-manifold first into its geometrically natural pieces, in a way that keeps track of those pieces, and then further into extremely simple pieces - where any parts too small to break directly into those very simple pieces first get expanded into finite covers, and then broken down.
For one more challenge problem, consider the three-torus, \(T^3 = S^1 \times S^1 \times S^1\), which you can think of as being the unit cube where opposite faces are glued to each other much as \(T^2 = S^1 \times S^1\) is the unit square where opposite sides are glued to each other. \(T^3\) is orientable and compact. Explicitly specify one way to make repeated cuts along surfaces to turn it into the \(3\)-ball. (Hint: it should take you three cuts, all of which are pretty straightforward. Don't overthink it.)

Happy hunting!
