Surface Deformations: Visualizing Homotopies

A homotopy is a (smooth) map that continuously deforms a space nicely, that is, without introducing any singularities.

I was part of illiMath2006: an NSF-funded Research Experience for Undergraduates (REU) summer program at the University of Illinois at Urbana-Champaign. One of the objectives of the program was to deliver an academia-industry collaborative project centered around a student principal investigator (me). My mentors for the project were George Francis at UIUC, Stuart Levy at the National Center for Supercomputing Applications, and Ulises Cervantes-Pimentel at Wolfram Research.

Our project Sakubo was such a success that Wolfram Research decided to continue the project the following year when I worked there as a summer 2007 intern. We built szgMathematica (= Syzygy + Mathematica) which enables users to use Mathematica’s graphical functions, such as Plot3D or ParametricPlot3D, and view the resulting surfaces in the CUBE.

We even built a function that simulated a rollercoaster where a user could input any closed parametrized curve for the rollercoaster track. The parametrization controlled the speed of the motion. We tried many closed curves from Mathematica’s new (at the time) Knot library and found that crazier knots were not as fun to ride as the standard trefoil. Perhaps there was just too much going on.

The second iteration of the project involved many more people. I worked with George Francis at UIUC; Stuart Levy and Camille Goudeseune at the NCSA; Jim Crowell at ISL; and Ulises Cervantes-Pimentel and Joshua Martell at WRI.

Random_Discrete_Morse

Computing the topological type, or at least the homology, of a given simplicial complex is straightforward in principle, but not always computationally feasible. The bottleneck is in computing the Smith Normal Form, which can be done in polynomial time [Kannan-Bachem] (roughly cubic), but is still too slow on large examples.

Discrete Morse theory [Whitehead, Forman] provides a tool set which can be used to find reasonable upper bounds on the Betti numbers. A randomized search for small discrete Morse vectors was introduced by Benedetti and Lutz, which we implemented as a client for the topological software package polymake.
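
The idea of the search is easy to state: make random elementary collapses until you get stuck, then delete a random cell of top dimension and record it as critical, and repeat. Below is a minimal Python sketch of one such run (my own toy illustration of the idea, not the polymake client, which uses different data structures and heuristics).

import random
from itertools import combinations

def faces_of(simplex):
    """All proper nonempty faces of a simplex (a frozenset of vertices)."""
    return [frozenset(c) for r in range(1, len(simplex))
            for c in combinations(simplex, r)]

def random_discrete_morse(facets, seed=None):
    """One run of the randomized search: collapse free pairs at random and,
    when stuck, delete a random top-dimensional cell and record it as critical.
    Returns the resulting discrete Morse vector (c_0, ..., c_d)."""
    rng = random.Random(seed)
    complex_ = set()
    for f in facets:                      # build the full complex from the facets
        f = frozenset(f)
        complex_.add(f)
        complex_.update(faces_of(f))
    d = max(len(s) for s in complex_) - 1
    critical = [0] * (d + 1)
    while complex_:
        # A free pair (sigma, tau): tau is the unique coface of sigma.
        # (Quadratic scan; real implementations keep incidence data.)
        free_pairs = []
        for sigma in complex_:
            cofaces = [tau for tau in complex_
                       if len(tau) == len(sigma) + 1 and sigma < tau]
            if len(cofaces) == 1:
                free_pairs.append((sigma, cofaces[0]))
        if free_pairs:
            sigma, tau = rng.choice(free_pairs)   # elementary collapse
            complex_.remove(sigma)
            complex_.remove(tau)
        else:
            top = max(len(s) for s in complex_) - 1
            cell = rng.choice([s for s in complex_ if len(s) - 1 == top])
            complex_.remove(cell)                 # stuck: mark a critical cell
            critical[top] += 1
    return tuple(critical)

print(random_discrete_morse([[1, 2, 3]]))   # a single triangle collapses: (1, 0, 0)

A single run gives one discrete Morse vector; the randomized search simply performs many runs and keeps the best vector found.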

I gave a workshop on Random_Discrete_Morse at the Homology: Theoretical and Computational Aspects (HTCA 2015) conference in Genoa, Italy.

While experimenting with Random_Discrete_Morse, we discovered a whole family of contractible, yet non-collapsible simplicial 2-complexes. We call them the sawblade complexes. Read more about the sawblade complexes here.

This project was joint work with Frank Lutz and Michael Joswig, with help from Benjamin Lorenz, Bruno Benedetti, and Karim Adiprasito.


Bistellar simplification

In the late 80s Udo Pachner introduced the idea of a bistellar move, which can modify the triangulation of an object (a simplicial manifold) without changing its fundamental properties (its topological type). About a decade later, Frank Lutz implemented bistellar_simplification as a tool to classify objects by their topological type using a simulated annealing strategy. Later, Niko Witte wrote a C++ implementation of Frank's code for polymake, which considerably sped up the computation. I then spent several (loong) months trying to improve Niko's code so that it could be used to recognize our spheres.

The 2-dimensional version of a bistellar move is well known in the graphics community, though it is usually called by a different name: edge flips. It can be used to find “nice” triangle meshes of some domain, which may, for example, be applied to finite element-type computations.

The idea of the bistellar move is simple. We'll be using some terminology from here. Let's start with a simplicial complex \(K\) of dimension \(d\) and a simplex \(\sigma \in K\) of dimension \(i\), where \(0\le i \le d\). (Technically, \(K\) should be a closed combinatorial manifold.)

A bistellar \(i\)-move is \(\Phi_K(\sigma) = K - \text{star}(\sigma,K) + \tau \ast \partial \sigma\), where \(\tau\) is a \((d-i)\)-simplex NOT in \(K\) such that \(\partial \tau = \text{link}(\sigma,K)\). The move is only valid if a \(\tau\) satisfying these properties exists.
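
To see the formula in action in dimension \(d=2\), take the complex \(K\) with facets \([1\,2\,3]\) and \([2\,3\,4]\) (not a closed manifold, but the local picture is all that matters here) and let \(\sigma = [2\,3]\), so \(i = 1\). Then \(\text{star}(\sigma,K)\) consists of \([2\,3]\), \([1\,2\,3]\), and \([2\,3\,4]\), while \(\text{link}(\sigma,K) = \{[1],[4]\} = \partial [1\,4]\) with \([1\,4] \notin K\), so \(\tau = [1\,4]\). The move removes the star and glues in \(\tau \ast \partial\sigma\), whose facets are \([1\,2\,4]\) and \([1\,3\,4]\): exactly an edge flip.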

The animated gif below shows some bistellar moves in action. You can see why the algorithm is called a bistellar simplification — we can change the triangle mesh to have fewer triangles.

Consider an abstract graph whose nodes are simplicial complexes which are connected by an edge if there exists a single bistellar move to get from one complex to the other. This graph is known as the bistellar flip graph or Pachner graph. The bistellar_simplification algorithm is equivalent to a random walk on this infinite graph which attempts to identify the topological type of the input complex \(K\) by lowering the \(f\)-vector of \(K\).

Tracking the \(f\)-vector is the cheapest way to follow the progress of the simplification. In fact, for a 4-dimensional sphere, the number of vertices and edges alone determines the entire \(f\)-vector [Klee]. So we can actually track the progress of the simplification on a 2-dimensional plot.
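
Concretely, for a triangulated 4-sphere the Dehn-Sommerville relations together with Euler's formula \(f_0 - f_1 + f_2 - f_3 + f_4 = 2\) pin down the rest of the \(f\)-vector (this is, presumably, the computation behind [Klee]): \begin{align*} f_2 = 4f_1 - 10f_0 + 20, \qquad f_3 = 5f_1 - 15f_0 + 30, \qquad f_4 = 2f_1 - 6f_0 + 12. \end{align*} As a sanity check, the boundary of the 5-simplex, with \(f\)-vector \((6,15,20,15,6)\), satisfies all three.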

Below is a video tracking the progress of our modified bistellar_simplification on our Akbulut-Kirby 4-sphere AK_I_3. Since a single dot was difficult to follow, I am displaying the \(f\)-vector (or, more precisely, just the vertex and edge count) of 1000 consecutive complexes at a time. For each of those 1000 complexes, I counted how many 0-, 1-, 2-, 3-, and 4-dimensional bistellar moves were made (red = 4-moves, which grow the \(f\)-vector; green = 0-moves, which make the \(f\)-vector smaller; green = good). The box on the bottom right displays the vertex and edge count of the complex at the specified round.

Notice that the simulated annealing strategy brings the \(f\)-vector down until it gets stuck for a while, then grows the \(f\)-vector in an attempt to jiggle the complex out of the local minimum. The current implementation uses a greedy random strategy; the modifications we made to that strategy only improved the algorithm enough to recognize AK_I_3, AK_II_3, and AK_III_3. To our knowledge, the other spheres have not been recognized by any heuristic software.

We have also tried to implement a Markov chain Monte Carlo (Metropolis) method to improve the algorithm, with several different choices of energy function. Our tests so far indicate that an MCMC strategy results in a significant slowdown of the algorithm. We are now looking into other techniques, including deep learning.
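
For concreteness, here is a tiny Python sketch of the kind of Metropolis acceptance rule we experimented with; the energy function below (total face count) is just a placeholder, not one of the energies we actually tested.

import math
import random

def metropolis_accept(delta_energy, temperature, rng=random):
    """Metropolis rule: always accept an improving move; accept a worsening
    move with probability exp(-delta_energy / temperature)."""
    if delta_energy <= 0:
        return True
    return rng.random() < math.exp(-delta_energy / temperature)

def energy(f_vector):
    """Placeholder energy: the total number of faces."""
    return sum(f_vector)

# A proposed 4-move on the boundary of the 5-simplex grows the complex.
f_old = (6, 15, 20, 15, 6)
f_new = (7, 20, 30, 25, 10)
print(metropolis_accept(energy(f_new) - energy(f_old), temperature=10.0))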

Triangulations of the Akbulut-Kirby Spheres


Manifold recognition is known to be undecidable for dimensions \(d \ge 4\) [Markov]. The related sphere recognition problem is known to be undecidable for dimensions \(d > 4\). However, 3-sphere recognition is decidable [Rubinstein, Thompson], and the problem is known to lie in NP [Schleimer] and in co-NP [Hass-Kuperberg]. The decidability of the 4-sphere recognition problem is still unknown.

Fortunately, there are several heuristic methods which, in practice, can quickly recognize spheres as spheres for all triangulations (or, more accurately, simplicial complexes) that we know of. Except, that is, for the family of examples which we constructed: our triangulations of the Akbulut-Kirby spheres.

The Akbulut-Kirby spheres were originally introduced as candidate exotic spheres [Akbulut-Kirby]. Selman Akbulut and Rob Kirby described a smooth manifold which is homeomorphic to a 4-dimensional sphere, but they could not find a diffeomorphism to the standard sphere. Had they been able to prove that no such diffeomorphism exists, they would have found a counterexample to the smooth Poincaré conjecture in dimension 4. It took over two decades, but Akbulut himself eventually found that diffeomorphism [Akbulut]. And the smooth Poincaré conjecture in dimension 4 is still an open problem today.

Akbulut and Kirby simplified the description enough that one can follow the general idea without being an expert in differential topology and Kirby calculus.

Start with a 4-dimensional ball to which we attach two handles (like the handle on a coffee mug). Formally these handles are called one-handles. So we have now attached two one-handles. We then label each handle so we can keep track of which is which. Let us call the purple one \(x\) and the green one \(y\). Notice the image is a 3-dimensional depiction of our 4-manifold.

We then want to glue two 4-balls onto this manifold to close up the two holes that we introduced when we attached the two one-handles in the beginning. But we will glue them on in a non-trivial way. We will follow along the two paths (red and blue) shown above. If you start from the blue dot and walk along, you'll find the blue path goes along the \(x\)-handle, then \(y\), then \(x\), then the \(y\)-handle backwards, then \(x\) backwards, then \(y\) backwards, and returns to the blue dot. We describe this blue path as \(xyxy^{-1}x^{-1}y^{-1}\) and the red one is \(x^5y^{-4}\).

We started with a 4-ball, introduced two holes, then closed off those holes. So we’re back to having just a ball. Make a copy of that ball. Then match up the boundaries of the two balls. And voila: we get a 4-dimensional sphere.

This sphere we constructed is described by the finitely presented group \(G = \langle x,y \mid xyx=yxy,\ x^5 = y^4 \rangle\), which you can read all about in the beginning of Kirby's Topology of 4-Manifolds.

That’s essentially the entire construction.

We developed a few simplicial versions of this procedure. It was a bit involved (enough for me to get a PhD), but we produced some nice pictures. Here’s a taste. For more details, check out the dissertation or the preprint.

We know that the manifolds we constructed are genuine 4-spheres by Akbulut's proof and the fact that PL=DIFF in dimension 4. Yet none of the sphere recognition heuristic software we know of has been able to correctly recognize them as spheres. To our knowledge, these are the only such examples.

For a more graphical summary of this project, check out this poster that was presented at the MSRI Modern Math Workshop at SACNAS 2015.


LaTeX hack: Adding a custom .cls to your search path

After a few hours of frustrated head bashing against my desk, I’ve finally figured out a way to add a directory containing a custom class file to my TeX search path. So before I forget the vital — yet simple — two steps, I will record them here. (Yep, that’s 4 whole hours for 2 lines of code.)

First let me give credit where credit is due. This page helped me immensely. Thank you, you wonderful LaTeX hackers.

Let's assume you've created an amazingly useful class file that is useful to no one else but you. (The same applies to a custom style file (.sty).) And say you want to use that class for more than one LaTeX project. It would be easy to just keep the .cls file in the same folder as your LaTeX project. But you want to have just the one .cls for multiple LaTeX projects.

For this example, I’m going to call that file syllabus.cls. Let’s put that into its own unusually named folder
~/some/path/syllabus/syllabus.cls

Before we move ahead to the two lines of code, you’ll need to learn a bit about your LaTeX installation. This is the head-bashing part.

(In case you’re curious: I’m on a Mac OSX (Yosemite) with TeX-Live 2015. Also, I use TeX-lipse (with Skim) for my editor.)

Open your Terminal and type

kpsewhich --var-value TEXMFLOCAL

For me, this returns
/usr/local/texlive/texmf-local

From the many posts I read on tex.stackexchange, I expected that it would be enough to place my syllabus folder right here. But it turns out life is not so kind.

One needs to dig a bit deeper to:
/usr/local/texlive/texmf-local/tex/latex/local

I don’t know if this is true for everybody, but that was the case for me.

And now, the moment you’ve all been waiting for:


sudo mv ~/some/path/syllabus/ /usr/local/texlive/texmf-local/tex/latex/local
sudo texhash
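
To double-check that TeX can now find the class file, you can ask kpsewhich directly (just a sanity check, not a required step):

kpsewhich syllabus.cls

If the move and texhash worked, this prints the file's new location under texmf-local.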

And that’s all. Now back to writing up some syllabi.

A dystopian Academe

I’ve had a rather long blog-hiatus these last few months despite having just started it. It turns out that the last year of a PhD program — attending conferences, submitting articles, applying for jobs, oh, and writing that pesky dissertation — with a toddler in tow is rather challenging (who knew?) and maintaining a blog somehow ended up taking a backseat, way waay in the back. But, actually, I did write several posts. I just never published them.

These posts sat in my drafts folder collecting dust because they were rants. Rants about academia. They detailed my frustration and disappointment with this world that I had idealized as an undergraduate student. Before my life as a grad student, you would have been able to see twinkling stars reflected in my eyes as I gazed up at Professor so-and-so and their superstar academic life.

A few weeks ago, as I started preparing my applications for postdoc positions, I deleted all of those posts. For the same reason that I never published them in the first place. For fear of political repercussions. I didn’t want to lose a chance to get some job because of some opinion I happen to have. And besides, they were just rants, right?

This morning, I read this letter from a guy who was so frustrated with academia that he quit just a few months before he finished. It echoed some of the frustration I felt a few months ago and reignited the anger I attempted to bury back then. Putting aside the comments about his possible burnout, I think the points he makes are valid.

Right off the bat, let me be very clear that I love the research I’m doing and the people I work with now. I really, really do.

But I have definitely seen the awful things mentioned in that letter. And it’s forced me to reevaluate my possible career direction, especially now that I’m getting ready to send out my applications for whatever may come next.

All that political mumbo-jumbo that academia claims to exist above, it’s all there lurking behind the closed meetings attended by greedy, ambitious tenured faculty and cautious, conservative administrators. At least in industry, all that cut-throat deception and lord-of-the-flies competition for prestige is out in the open, even expected. No one pretends to be “better than those money-grubbing capitalists”. If I can’t escape dealing with assholes, better the devil you know — or at least the devil that doesn’t hide his horns.

But there are also great people in academia. Who want to change academia from the inside. Who believe that other people in academia also want the academy to be great. These are the people who got me interested in academia in the first place.

The problem for me now is not that I think academia is devoid of good people. And it’s certainly not that I’ve stopped believing in its message. I do. I still want to be part of the group who endeavors to extend human knowledge and to openly and freely share that understanding.

The biggest problem I have is that I get it. I mean, I need to eat, too. And the work I just spent six months of my life on “for the betterment of society” should at least feed me and my family for the next six months of painstaking work, right?

But I believe the system isn't broken. It's just been around so long that people have started exploiting some of its loopholes. The system just needs to evolve. Install some anti-virus software and update it from time to time.

I remember I once had a conversation with Peter some years ago — way before I ever got a behind-the-scenes look at an academic Board meeting — about how the people who are able to talk about change (the tenured bunch) are too comfortable to ruffle any feathers and the people who have ideas to bring about change (the postdocs and tenure-tracked lot) are too scared to say anything publicly.

So I have an idea.

The anti-virus protection for academia has to begin with a discussion. We have to identify its flaws and consider the possible ways of correcting them. But most people who can say anything worth hearing can’t because they may lose their jobs or grants or whatever.

Now a blog seems to me to be a great place to begin and have a discussion. And it may even be possible to have this discussion anonymously — as long as it is thoroughly monitored and identities are well masked.

I will gladly take on the role of monitor and editor for such a blog. I’ll even offer up my blog space for this (with Peter’s permission, of course, since this is his server space). If enough people get involved, we could perhaps have a great thing going.

Anyone who wants to be part of this conversation should email me personally. After I’ve vetted the authors to my satisfaction, I would then collect relevant topics, edit their content to remove any identifiable markers and post them on this or some new blog. Any comments for the posts will go through a similar process.

Visitors to the blog will feel like they are listening in on a conversation I am having with myself. The “voice” of the blog will always be mine though the thoughts behind it may be that of many.

So what do you say? Want to start a conversation?

Smith Normal Form

In my last post, we talked about homology and Betti numbers. We had a very simple example with a simplicial complex $K$ and we were able to easily compute $H_1(K)$ and $\beta_1(K)$. But that wasn’t really the whole story.

To write an algorithm that can find the Betti numbers of any given (finite) simplicial complex, we actually need to know more about how to read off the geometric information of the input complex and what to do with that information.

(Most of the material in this post can be found in Munkres, Elements of Algebraic Topology $\S 11$.)

Let’s first lay out the basics.

A(n abstract) simplicial complex is a collection $K$ of finite non-empty sets such that if $\sigma$ is an element of $K$, so is every non-empty subset of $\sigma$.

An element $\sigma$ of $K$ is called a simplex of $K$; its dimension is one less than the number of its elements. We may refer to $\sigma$ as a $p$-simplex or denote it as $\sigma^p$ to indicate its dimension $p$.

Each nonempty subset $\tau$ of $\sigma$ is called a face of $\sigma$; we say that $\tau$ is incident to $\sigma$ and denote it $\tau \prec \sigma$.

The dimension $d$ of $K$ is the largest dimension of one of its simplices. A $d$-simplex of $K^d$ is called a facet of $K$.

Given a finite complex $L$, we say that there is a labeling of the vertices of $L$, which means that there is a surjective function $f$ mapping the vertex set of $L$ to a set of labels (usually $\{0,1,2, \dots\}$ or $\{1,2, \dots\}$). Corresponding to this labeling is an abstract complex $K$ whose vertices are the labels and whose simplices consist of all sets of the form $\{f(v_0), f(v_1), \dots, f(v_n)\}$, where $v_0, \dots, v_n$ span a simplex of $L$.

This labeling mumbo-jumbo is essentially formalizing the obvious. We want to refer to specific simplices of $K$ and the convention used is to name it by its vertices.

One important property of a simplicial complex is that for every cell $\sigma$ in $K$, all of the faces of $\sigma$ are also in $K$. We will exploit this property when setting up our problem.

Step 1: Setting up the input

For a given simplicial complex $K^d$ of dimension $d$, we need only the set of $d$-simplices of $K$, the facets of $K$, as the input for our algorithm. With this list we can determine all the simplices of $K$ and their connectivity.

We're now ready to set up all of our $C_p$'s, the $p$-chain groups of $K$. Actually, what we really have is just the basis elements of the $C_p$, but that's all we need. For each $p= 0, \dots, d$, the basis of $C_p$ is the set of $p$-simplices of $K$.

Let $F$ be our facet list of $K^d$. Every element of $F$ contains $(d+1)$ vertices. For any $i=0, \dots, d$, the basis of $C_i$ is the collection of all $(i+1)$-element subsets of elements of $F$.

So if $K$ is just two triangles, $F = \{ [1 \, 2\, 3],[2\, 3\, 4]\}$, then the basis of $C_1(K)$ is just the set of edges of $K$ or $\{[1\, 2], [1\, 3], [2\, 3], [2\, 4], [3\, 4]\}$, all length 2 subsets of elements of $F$. Even though edge $[2\, 3]$ is in both triangles, we only need to count it once.
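
Here is a minimal Python sketch of Step 1 (my own toy illustration, not any particular package's input format): take the facet list and enumerate the basis of each chain group.

from itertools import combinations

def skeleton_bases(facets):
    """Basis of each chain group C_p, read off from a facet list:
    returns a dict p -> sorted list of the p-simplices of the complex."""
    simplices = set()
    for facet in facets:
        for r in range(1, len(facet) + 1):
            simplices.update(combinations(sorted(facet), r))
    bases = {}
    for s in simplices:
        bases.setdefault(len(s) - 1, []).append(s)
    return {p: sorted(cells) for p, cells in bases.items()}

# The two-triangle example from above.
print(skeleton_bases([(1, 2, 3), (2, 3, 4)])[1])
# [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]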

Step 2: Setting up the boundary map

The boundary maps $\partial_p$ are the main characters in homology groups. From group theory, we know that all we really need to know about a homomorphism $f: G \to G'$ is how $f$ acts on the basis elements of $G$ in terms of the basis elements of $G'$.

More formally, let’s take a look at this definition:

Definition (matrix of a homomorphism) Let $G$ and $G'$ be free abelian groups with bases $a_1, \dots, a_n$ and $a_1', \dots, a_m'$, respectively. If $f: G \to G'$ is a homomorphism, then $$f(a_j) = \sum_{i=1}^{m} \lambda_{ij}a_i'$$ for unique integers $\lambda_{ij}$. The matrix $(\lambda_{ij})$ is called the matrix of $f$ relative to the given bases for $G$ and $G'$.

The chain groups $C_p$ are our free abelian groups, and the boundary maps $\partial_p: C_p \to C_{p-1}$ are the homomorphisms between them. To learn more about the boundary maps, we must further analyze the matrix of $\partial_p$.
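
Continuing the Python sketch from Step 1, the matrix of $\partial_p$ can be filled in column by column using the usual alternating-sum boundary formula (discussed in the Simplicial Homology post):

def boundary_matrix(p_cells, pm1_cells):
    """Matrix of the boundary map from C_p to C_{p-1} relative to the given
    bases: rows are indexed by the (p-1)-simplices, columns by the p-simplices."""
    index = {cell: i for i, cell in enumerate(pm1_cells)}
    matrix = [[0] * len(p_cells) for _ in pm1_cells]
    for j, cell in enumerate(p_cells):
        for i in range(len(cell)):
            face = cell[:i] + cell[i + 1:]       # leave the i-th vertex out
            matrix[index[face]][j] = (-1) ** i   # alternating sign
    return matrix

# The matrix of the boundary map from C_2 to C_1 for the two-triangle complex.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
triangles = [(1, 2, 3), (2, 3, 4)]
for row in boundary_matrix(triangles, edges):
    print(row)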

Step 3: Computing the Smith Normal Form

To understand what the boundary map is doing, let’s first recall a nice theorem from algebra.

Theorem (normal form) Let $G$ and $G’$ be free abelian groups of ranks $n$ and $m$, respectively; let $f: G \to G’$ be a homomorphism. Then there are bases for $G$ and $G’$ such that, relative to these bases, the matrix of $f$ has the form $$B=\left[\begin{array}{c|c}\begin{matrix}b_1 & ~ & 0 \\ ~ & \ddots & ~ \\ 0 & ~&b_q\end{matrix} & 0 \\ \hline 0 & 0\end{array}\right]$$ where $b_i \ge 1$ and $b_1 | b_2| \dots | b_q$.

This matrix is uniquely determined by $f$ and is called the normal form for the matrix of $f$.

To get from some jumbled up matrix $(\lambda_{ij})$ to its normal form, we perform what are called elementary row and column operations. These operations are

  1. exchange row $i$ and row $k$
  2. multiply row $i$ by $-1$
  3. replace row $i$ by (row $i$) + $q$ (row $k$), where $q$ is an integer and $k \ne i$,

plus the 3 respective column operations.

Perhaps you remember seeing the elementary row operations from linear algebra. It’s important to note that here we’re only allowing integer multiplication for operation type 3 (and 6 for the column operations).
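
For small examples the reduction can be carried out naively. Here is a compact Python sketch that diagonalizes an integer matrix using only the elementary operations above (a toy version; serious implementations pivot far more carefully to control coefficient growth):

def smith_normal_form(matrix):
    """Smith normal form of an integer matrix, computed with the elementary
    row/column operations listed above.  A compact, unoptimized sketch:
    fine for toy complexes, far too slow for serious input."""
    A = [list(row) for row in matrix]
    m, n = len(A), (len(A[0]) if A else 0)
    t = 0
    while t < min(m, n):
        # Pick a nonzero entry of smallest absolute value as the pivot.
        candidates = [(abs(A[i][j]), i, j) for i in range(t, m)
                      for j in range(t, n) if A[i][j] != 0]
        if not candidates:
            break
        _, i, j = min(candidates)
        A[t], A[i] = A[i], A[t]                       # swap rows t and i
        for row in A:                                 # swap columns t and j
            row[t], row[j] = row[j], row[t]
        if A[t][t] < 0:                               # make the pivot positive
            A[t] = [-a for a in A[t]]
        done = True
        for i in range(t + 1, m):                     # clear column t
            q = A[i][t] // A[t][t]
            if q:
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            if A[i][t]:
                done = False                          # a remainder survived
        for j in range(t + 1, n):                     # clear row t
            q = A[t][j] // A[t][t]
            if q:
                for row in A:
                    row[j] -= q * row[t]
            if A[t][j]:
                done = False
        if done:
            # Make sure the pivot divides the rest of the submatrix, so the
            # diagonal entries end up satisfying b_1 | b_2 | ... | b_q.
            bad = next(((i, j) for i in range(t + 1, m)
                        for j in range(t + 1, n) if A[i][j] % A[t][t]), None)
            if bad is None:
                t += 1
            else:
                A[t] = [a + b for a, b in zip(A[t], A[bad[0]])]  # row t += row i
    return A

# The matrix of the boundary map from C_2 to C_1 (Step 2): rank 2, no torsion.
d2 = [[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]]
for row in smith_normal_form(d2):
    print(row)

On this input the output is a diagonal of two 1s followed by three zero rows.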

Step 4: What it means

Recall what our goal was: to compute the homology groups of some finite simplicial complex $K$. And we know that all homology groups $H_p \cong \mathbb{Z}_{t_1} \times \cdots \times \mathbb{Z}_{t_k} \times \mathbb{Z}^{\beta}$ can be written in terms of their Betti numbers $\beta$ and their torsion coefficients $t_1, \dots t_k$.

Now that we have the normal form for all the boundary maps $\partial_p$ for $p=0, \dots, d$, we can easily read off this information.

Torsion coefficients: The entries of the normal form for the matrix of $\partial_p: C_p \to C_{p-1}$ that are greater than 1 are the torsion coefficients of $K$ in dimension $p-1$.

Betti numbers: Let $B_i$ be the normal form for the matrix of $\partial_i: C_i \to C_{i-1}$. The Betti number of $K$ in dimension $p$ is $$\beta_p = \text{rank}\, C_p - \text{rank}\, B_p - \text{rank}\, B_{p+1}.$$
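
As a quick sanity check on the two-triangle example from Step 1: there $\text{rank}\, C_0 = 4$, $\text{rank}\, C_1 = 5$, $\text{rank}\, C_2 = 2$, and the normal forms of $\partial_1$ and $\partial_2$ have ranks 3 and 2. So $\beta_0 = 4 - 0 - 3 = 1$ (one connected component), $\beta_1 = 5 - 3 - 2 = 0$ (the shared edge means there is no hole), and $\beta_2 = 2 - 2 - 0 = 0$. All diagonal entries of the normal forms are 1, so there is no torsion.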

So we’re done.

Or we should be, but we're not. If we're done, then why are people still working on developing homology algorithms?

We’ll answer that question in another post.

Simplicial Homology

I started writing a post about Homology Algorithms but realized that some discussion about homology should come first. And for those of you who already know what homology is, we should at least be on the same page in terms of language and notation. Besides, there’s more than one way to think about homology.

The way that I understand homology—the way I describe it here—is influenced almost entirely by Carsten Lange. He helped me understand the basics of algebraic topology so that I could read about discrete Morse theory without feeling like a total oaf. This also means that we are thinking about homology with the eventual goal of computing (i.e., with a computer) the homology of some explicit, finite discrete topological space. In particular, we will work with simplicial complexes.

References used include the classics, Munkres and Hatcher, and for a speedy introduction, Zomorodian’s “Topology for Computing”. (And, as usual, a ton of help has been provided by my advisors Frank and Bruno.)

And before I begin, let me just say: Big ups to Carsten and all dedicated educators like him who, for countless hours, over many days and weeks, sit together with students one-on-one until they have the confidence to seek knowledge on their own.

Homology in topology

The big question in topology is whether or not two spaces are homeomorphic. To say that two spaces are homeomorphic, we need only find any homeomorphism between them. But to say that two spaces are not homeomorphic, we would have to show that no function between them is a homeomorphism. One way to handle this latter (infinite) case is by using topological invariants.

A topological invariant is a property of a space that can be used to distinguish topological spaces. For example, say we have two sacks labeled $A$ and $B$. We can’t see inside them, but we can tell that sack $A$ has one item in it while sack $B$ has two. Clearly the contents of sack $A$ and sack $B$ cannot be the same.

However, if $A$ and $B$ both have one item each, we can’t tell whether those two items are of the same type. Perhaps one is a steak and the other a rice crispy treat—clearly very different objects, both nutritionally and topologically. (Can you tell I missed lunch?)

The topological invariant described in these examples is connectedness, which is only useful when the number of connected components is different. But if they have the same number of connected components, we can’t say definitively whether the objects are of the same topological type. And that’s how it works with all topological invariants.

Definition (topological invariant) Given two topological spaces $K_1$ and $K_2$, a map $\tau$ is a topological invariant if whenever $\tau(K_1) \neq \tau(K_2)$, then $K_1 \ncong K_2$. But if $\tau(K_1) = \tau(K_2)$, then $K_1 \overset{?}{\cong} K_2$ and we don’t know whether they are homeomorphic or not.

Homology groups are topological invariants. We take some geometric information about the topological space, form some algebraic object (specifically, a group) containing that information, then apply some machinery from group theory to say something about the original space.

And if two topological spaces give rise to homology groups that are not isomorphic, then those two spaces are not homeomorphic.

Definition of Homology

Let’s jump right in with a definition, then explain it in detail using an example.

Definition (homology group) For a topological space $K$, we associate a chain complex $\mathcal{C}(K)$ that encodes information about $K$. This $\mathcal{C}(K)$ is a sequence of free abelian groups $C_p$ connected by homomorphisms $\partial_p: C_p \to C_{p-1}$. \begin{align*}0 \overset{\partial_{d+1}}{\longrightarrow} C_d \overset{\partial_{d}}{\longrightarrow} C_{d-1} \overset{\partial_{d-1}}{\longrightarrow} \cdots \overset{\partial_{2}}{\longrightarrow} C_1 \overset{\partial_{1}}{\longrightarrow} C_0 \overset{\partial_{0}}{\longrightarrow} 0\end{align*} Then the $p$-th homology group is defined as \begin{align*}H_p(K):= \text{ker}(\partial_p) / \text{im}(\partial_{p+1}).\end{align*}

That’s a lot of terms we have yet to define.

Let's start at the beginning with our input space $K$. We can assume that $K$ is a simplicial complex since we'll only be dealing with simplicial complexes and they're just nice to handle. So $K$ is a simplicial complex of dimension $d$; every face $\sigma \in K$ has dimension at most $d$.

Figure 1: Simplicial complex $K$ with vertices labeled 1,2,3,4.

In Figure 1, we have our simplicial complex $K$. I’ve labeled the vertices so it’s easier for me to refer to the many parts of $K$. Our complex $K$ consists of

  • one 2-cell: $f_1 = [2 \, 3\, 4]$;
  • five 1-cells: $e_1=[1 \, 2], e_2=[1 \, 4], e_3=[2 \, 3], e_4=[2 \, 4], e_5=[3 \, 4]$; and
  • four 0-cells: $v_1=[1], v_2=[2], v_3=[3], v_4=[4]$.

This complex $K$ is of dimension $d=2$.

The group $C_p$ is a group of something called $p$-chains. Recall that a group is a set with a binary operation (in our case, addition) that is (i) associative, (ii) has an identity element, and (iii) has inverses. And when we refer to any group, instead of listing all of its elements all the time, it's much more economical to talk about the group in terms of its generators. For the generators of $C_p$, we have to go back to $K$.

The $p$ in $C_p$ refers to the dimension of cells in $K$. A 1-chain of $K$, for example, is some strange abstract object that is generated by all the 1-cells of $K$, that is, we "add" up a bunch of 1-cells. So $e_1 + 2 e_5$ is an example of a 1-chain. It has no obvious geometric meaning (though, apparently, some people would disagree). If it helps, you can think of $C_p$ as a vector space whose basis is the set of $p$-cells of $K$ and the scalars are in $\mathbb{Z}$.

Figure 2: Relabeled and oriented.

Notice that the labeling of the vertices from Figure 1 gives us a nice built-in orientation, i.e., ordering, for each cell, see Figure 2. For example, edge $e_1 = [1 \, 2]$ so the orientation we give this edge is from $v_1$ to $v_2$ (because $1<2$). This orientation will help us define the additive inverse of a 1-cell. So $-e_1$ is what you would expect it to be; it’s just $e_1$ oriented in the opposite direction, from $v_2$ to $v_1$.

Now the definition states that $C_p$ is not just any old group. It is a free Abelian group. This is not as bad as it sounds. It just means, again, that we can keep thinking of $C_p$ like it's a vector space.

So $C_p$ is free, which just means that every $p$-chain in $C_p$ can be uniquely written as $\sum_i n_i \sigma_i^p$, where $n_i \in \mathbb{Z}$ and the $\sigma_i^p$ are $p$-cells. This idea should be deeply ingrained into your understanding of basis vectors in a vector space.

And $C_p$ is also Abelian. So our addition is commutative, $\sigma + \tau = \tau + \sigma$, and that also lets us keep thinking about this vector space idea.

So now that we know what the $C_p$ are, let’s talk about the homomorphisms $\partial_p$ between them. These homomorphisms have a special name; they are called boundary maps. This is where the geometry and combinatorics fit together very nicely. Let’s go back to our Figure 2.

The boundary of some $p$-cell $\sigma^p=[v_0 \, v_1 \, \dots \, v_p]$ is a $(p-1)$-chain with the alternating sum $$\partial_p(\sigma) = \sum_{i=0}^{p} (-1)^i [v_0 \, v_1 \, \cdots \, \hat{v_i} \, \cdots \, v_p ]$$ where $\hat{v_i}$ means you leave that vertex out.

For example, $\partial_1(e_1) = v_2 - v_1$ or $\partial_2(f_1) = \partial_2([2 \, 3 \, 4]) = [3 \, 4] - [2 \, 4] + [2 \, 3] = e_3 + e_5 - e_4$. This corresponds nicely with the figure. Just follow along the blue arrows on the boundary of $f_1$ from $v_2$ in a counterclockwise direction.

The boundary function works similarly for all dimensions $p$. For the higher dimensions, just keep in mind what I said earlier about orientation being determined by the labels. The orientation for an edge is easy to visualize, but not so much for,  say, a 6-simplex. Everything works out nicely combinatorially with the above definition.

And because $C_p$ is a free abelian group, we see that the boundary map $\partial_p$ is a homomorphism. Notice also that $\partial_p \circ \partial_{p+1} = 0$. Whenever this happens (to any sequence of Abelian groups with homomorphisms between them with this property), we call such a sequence a chain complex.
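
Since everything is combinatorial, this identity is easy to check by machine. Here is a tiny Python sketch (my own notation: chains represented as dictionaries from cells to integer coefficients):

def boundary(cell):
    """Boundary of an oriented simplex, returned as a chain: face -> coefficient."""
    chain = {}
    for i in range(len(cell)):
        face = cell[:i] + cell[i + 1:]            # leave the i-th vertex out
        chain[face] = chain.get(face, 0) + (-1) ** i
    return chain

def boundary_of_chain(chain):
    """Extend the boundary map linearly to a chain (cell -> coefficient)."""
    result = {}
    for cell, coeff in chain.items():
        for face, sign in boundary(cell).items():
            result[face] = result.get(face, 0) + coeff * sign
    return {face: c for face, c in result.items() if c != 0}

f1 = (2, 3, 4)
print(boundary(f1))                      # {(3, 4): 1, (2, 4): -1, (2, 3): 1}
print(boundary_of_chain(boundary(f1)))   # {} -- the boundary of a boundary is zero

For $f_1 = [2\,3\,4]$, the first line prints the chain $e_3 + e_5 - e_4$ (in the notation above) and the second prints the empty chain, as expected.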

Geometry of homology

Let’s introduce a few more important terms.

The set $B_p = \text{im}(\partial_{p+1})$ is the set of boundaries and the set $Z_p = \text{ker}(\partial_p)$ is the set of cycles (which, in German, is “Zyklus”).

So a cycle is any $p$-chain whose boundary is zero. This is a very combinatorial/algebraic definition for a word that sounds very geometric. Intuitively, when you hear the word "cycle", you might think of some path that closes up, maybe like a loop. It's very important, however, to recognize that "cycle" and "loop" (from homotopy theory) are very, very different objects.

Let me go off on a short tangent here. Have you noticed that the word “curve” is different depending on who you talk to? For a physicist, a curve is a trajectory–a function $\gamma: [0,1] \to X$ for some space $X$. For a geometer, however, a curve is a subset of a space. Merely the image $\text{im}(\gamma)$. So it doesn’t matter whether a mosquito flies around in a circle 7 times or just once; it still traces just a single circle.

Analogously, a cycle doesn't care about how you "go around the cycle" as long as its boundary, the corresponding $(p-1)$-chain, adds up to zero.

And remember: since we have an Abelian group, the order of the addition (or “direction” of the cycle) doesn’t matter at all. For a loop, however, it matters whether you go around the loop clockwise or counterclockwise.

Let's look again at the composition $\partial_p \circ \partial_{p+1} = 0$. It says, geometrically, that boundaries of $(p+1)$-chains are cycles. So the set of boundaries $B_p$ is contained in the set of cycles $Z_p$, i.e., $B_p \subseteq Z_p$.

Not only that: because $\partial_{p+1}$ is a homomorphism, $B_p$ is even a subgroup of $Z_p$. Geometrically speaking, the quotient group $H_p = Z_p / B_p$ means we partition the set of cycles $Z_p$ into those that bound and those that don't. This means we can isolate cycles that do not bound cells. That is, we can find holes!

Figure 2: Relabeled and oriented.

Let's go back to our example and think about $H_1 (K)$. The "simplest" 1-chains that make a loop are $e_1 + e_4 - e_2 = \ell_1$ and $e_3 + e_5 - e_4=\ell_2$. All other loops are some linear combination of $\ell_1, \ell_2$. For example $e_1 + e_3 + e_5 - e_2 + (e_4 - e_4) = \ell_1 + \ell_2$. So $Z_1$ is generated by $\ell_1, \ell_2$, i.e., $Z_1 = \{c\, \ell_1 + d\, \ell_2 \mid c,d \in \mathbb{Z} \} \cong \mathbb{Z}^2$. But we also know that $\ell_2 = \partial_{2} (f_1)$ and since $f_1$ is the only 2-cell in $K$, we have $C_2 = \{ c \,f_1 \mid c \in \mathbb{Z} \}$ and therefore $B_1 = \text{im}(\partial_{2}) = \{ c \, \ell_2 \mid c \in \mathbb{Z}\} \cong \mathbb{Z}$.

So $H_1 = Z_1/B_1 \cong \mathbb{Z}^2 / \mathbb{Z} \cong \mathbb{Z}$.

Using homology

Now that the homology group has been defined, we should know what it means. There is an important theorem in group theory that we should recall at this point. It’s so important that it is a fundamental theorem.

Theorem (Fundamental Theorem of Finitely Generated Abelian Groups) Every finitely generated Abelian group $G$ is isomorphic to a direct product \begin{align*}G \cong H \times T, \end{align*} where $H = \mathbb{Z} \times \cdots \times \mathbb{Z} = \mathbb{Z}^{\beta}$ is a free Abelian subgroup of $G$ having finite rank $\beta$ and $T = \mathbb{Z}_{t_1} \times \mathbb{Z}_{t_2} \times \cdots \times \mathbb{Z}_{t_k}$ consists of finite cyclic groups of orders $t_1, \dots, t_k$.

The $T$ is called the torsion subgroup of $G$; the $\beta$ is unique and called the Betti number of $G$; and the $t_1, \dots, t_k$ are also unique and called the torsion coefficients of $G$.

This fundamental theorem says that the homology group $H_p$ (which is a quotient group of a finitely generated Abelian group over a finitely generated Abelian group and therefore itself a finitely generated Abelian group) will always be isomorphic to something that looks like \begin{align*}H_p \cong \mathbb{Z}_{t_1} \times \cdots \times \mathbb{Z}_{t_k} \times \mathbb{Z}^{\beta}\end{align*} a finite direct sum of (finite and infinite) cyclic groups. So now all topological spaces can be nicely categorized by their homology groups according to their Betti numbers and torsion coefficients.

There is one homology group $H_p$ for each dimension $p$, and each of them has an associated Betti number $\beta_p$. Since we figured out for our example that $H_1 (K) \cong \mathbb{Z} = \mathbb{Z}^1$, our Betti number is $\beta_1 = 1$.

These Betti numbers have a wonderful geometric meaning.  This number $\beta_p$ counts the number of $p$-dimensional holes in $K$. Ah, yes. We finally get to something meaningful and even useful!

Betti numbers can count, as in our example, the number of holes.

Since $\beta_1(K)=1$, this means $K$ has one 1-dimensional hole. And there is indeed a hole in $K$ (because there is no face $[1 \, 2 \, 4]$).

But what is a “hole”?

Honestly, I don't know how to draw a good picture of a general $p$-dimensional hole (though examples of specific cases are easy). Thinking of holes as in homotopy theory–as the empty spaces where $p$-spheres $S^p$ cannot be shrunk down to a point–is a very nice and intuitive notion. But there are some spaces that, like the Poincaré sphere for example, have $H_1 = 0$ and $\pi_1 \neq 0$. So it's not enough to think about holes as the holes we see when working with fundamental groups.

We saw from the definition that homology groups are equivalence classes of cycles, of those that bound and those that don’t. So we collect all the “simplest” cycles and check whether they are the boundary of any faces. Any cycle that does not have a corresponding face which it bounds must bound a hole.

It turns out that counting holes has all kinds of industry applications. It can be used to analyze the structure of solid materials or proteins. It can be used in image processing or pattern recognition. It can even be used in robotics and sensor networks.

So homology algorithms have been developed to help these industry people to count holes. And now we know exactly how to write such an algorithm, right?

The significance of adipra

I realize that anyone reading this post doesn’t know what adipra is. How can you? I made it up.

The sphere I constructed most recently was an idea of Karim Adiprasito. While building it, I needed to name it so that I can refer to it in my code. So I temporarily named it adipra and that’s what I’m going to call it here.

Actually, in this post, I don’t want to talk about what adipra is. Not yet. Let’s talk just about what makes adipra special. For anyone who’s interested, all the specifics can be found here.

We need a little language to get started.

Def A combinatorial d-manifold is a triangulated d-manifold whose vertex links are PL (d-1)-spheres.

Let’s assume we know what a triangulated d-manifold means without hashing out the details here. If you need to, you can think of it as a simplicial complex.

The star of a vertex v in a triangulated d-manifold T is the collection of facets of T that contain v. The link of a vertex v is like the star of v, but take out all the v‘s. For example, let’s take a simple hexagon with a vertex in the center.

Example of star and link of vertex

Let T=$\{[0\, 1 \,2],[0\, 1\, 6],[0\, 2\, 3],[0\, 3\, 4],[0\, 4\, 5],[0\, 5\, 6]\}$, then \begin{align*}\text{star}(3,T) &=\{[0\, 2 \,3],[0\, 3\, 4]\} \text{ (green triangles)}\\ \text{link}(3,T)&=\{[0\, 2],[0\, 4]\} \text{ (red edges)} \end{align*}

That’s simple enough to understand even in higher dimensions. Just remember we’re always doing things combinatorially.
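
Both notions are easy to compute from a facet list. A small Python sketch, using the facet-based convention from this post:

def star(v, facets):
    """Star of a vertex v: the facets that contain v (the convention used here)."""
    return [f for f in facets if v in f]

def link(v, facets):
    """Link of a vertex v: the star of v with v removed from each facet."""
    return [tuple(u for u in f if u != v) for f in star(v, facets)]

# The hexagon-with-center example.
T = [(0, 1, 2), (0, 1, 6), (0, 2, 3), (0, 3, 4), (0, 4, 5), (0, 5, 6)]
print(star(3, T))   # [(0, 2, 3), (0, 3, 4)]
print(link(3, T))   # [(0, 2), (0, 4)]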

Next is the PL sphere. I said earlier that we can assume we know what a triangulated d-manifold is. What we're actually talking about is a triangulated PL d-manifold. A PL sphere is a PL manifold that is bistellarly equivalent to the boundary of a simplex (of the appropriate dimension). I'll write in more detail about what bistellarly equivalent means in a later post. For now, just think of it as the discrete version of being homeomorphic.

Putting all that together, we understand a combinatorial d-manifold to be a triangulated manifold, that is, some simplicial complex-like thing where we require that each of its vertices is sort of covered by a ball. Naturally, the next question to ask is: are there triangulated manifolds that are not combinatorial?

For d=2,3, all triangulations (of the d-sphere) are combinatorial. For d=2, the vertex links are homeomorphic to $S^1$, i.e., bistellarly equivalent to the triangle (the boundary of a 2-simplex). Similarly, for d=3, vertex links are $S^2$. For d=4, all triangulated 4-manifolds are also combinatorial. This result is due to Perelman: the vertex links are homotopy 3-spheres, and Perelman's work tells us they are actual 3-spheres. [Remember that PL=DIFF in dim 4.] But it falls apart for $d \ge 5$, as there are non-PL triangulations of the d-sphere.

Ok, so here’s a spoiler about adipra: it’s a non-PL triangulation of the 5-sphere. But that’s not all.

There’s a nice theorem by Robin Forman, the father of discrete Morse theory.

Theorem (Forman) Every combinatorial d-manifold that admits a discrete Morse function with exactly two critical cells is a combinatorial d-sphere.

Actually, this is not really Forman’s theorem. This is a theorem by Whitehead which Forman reformulated using his language of discrete Morse theory. This is the original theorem.

Theorem (Whitehead) Any collapsible combinatorial d-manifold is a combinatorial d-ball.

What Forman did is take a sphere, take out one cell, call that guy a critical cell, then collapse the rest of it down (using Whitehead’s theorem) to get the other critical cell. Thus you have two critical cells.

So the question you ask next is: can you have non-combinatorial spheres with 2 critical cells?

Yes, actually, you can! Karim showed that you can have a non-PL/non-combinatorial triangulation of the 5-sphere that has a discrete Morse function with exactly 2 critical cells! And then I built an explicit example (with Karim’s instructions) and called it adipra.

KolKom 2012

My first conference talk since starting Phase II was at KolKom 2012, the colloquium on combinatorics in Berlin. But I don’t know anything about combinatorics or optimization. What am I doing at a combinatorics conference!?

Actually, I don’t know much about anything. But it’s not hard to make me sound like I know about some things. The title of my talk was “constructing complicated spheres as test examples for homology algorithms”. Yes, algorithms! That’s the key! It was Frank’s idea to add the word “algorithms” to my title so that I can at least pretend to belong to this conference. It made for a very long title, but at least my abstract was accepted.

You can look at my slides, but I’ll give a short overview here.

We begin with a definition of homology and motivate some applications for homology algorithms. Betti numbers, in particular, are interesting for industrial applications. To compute the Betti numbers of some given simplicial complex, we will be dealing with matrices. We want to find the Smith Normal Form of these matrices. But the bigger the complex, the bigger the matrix, and the longer it takes to find the Smith normal form. So we throw the complex into a preprocess to reduce the size of the input. This preprocess uses discrete Morse theory.

Morse theory is used to study and distinguish topological spaces in differential topology. Forman came up with a nice discrete counterpart that conveniently preserves many of the main results from smooth Morse theory, including the Morse inequalities. The Morse inequality $\beta_i \le c_i \; \forall i$ means that the Betti number $\beta_i$ is bounded above by $c_i$, the number of critical cells of dimension $i$. Since our original goal was to compute Betti numbers, this result will definitely come in handy.

The numbers $c_i$ count the critical cells of some discrete Morse function. These discrete Morse functions can be interpreted as a sort of collapsing scheme. By collapsing, we can reduce big complexes (that have big matrices for which we need to find the Smith normal form) to smaller complexes.

Ideally, we want to find the discrete Morse function with the lowest $c_i$'s. But computing the optimal discrete Morse vector, a vector $c = (c_0,c_1,c_2, \dots)$, is $\mathcal{NP}$-hard! So Lutz and Benedetti came up with the idea of using random discrete Morse theory. Surprisingly, their algorithm is able to find the optimal discrete Morse vector most (but not all) of the time. To get a better idea of when the algorithm finds the optimum, they came up with a complicatedness factor which measures how hard it is to find that optimum.

That’s where I come in. I constructed some complicated simplicial complexes. They are being used to test some homology algorithms that are currently under development.

The first of my complicated complexes is the Akbulut-Kirby sphere, one of a family of Cappell-Shaneson spheres. These spheres have an interesting history. The AK-sphere, in particular, was once thought to be an exotic sphere. Exotic spheres are spheres that are homeomorphic, but not diffeomorphic, to a standard sphere. In dimension $4$, such a sphere would be a counterexample to the smooth Poincaré conjecture, or SPC4 for short. Unfortunately, some years after the AK-sphere was proposed, it was shown to be standard after all. You can learn more about it here.

So I built an AK-sphere. Actually, I wrote code that can build any of the whole family of the CS-spheres. Before talking about what we did, let’s start with how these spheres are built. To understand the construction, you have to first accept two simple ideas:

  • The boundary of a solid ball is a sphere.
That’s not too hard to accept, which is why I said it first even though it’s the last step.
  • Given a donut, if you fill the “hole” (to make it whole, HA!), it will become a solid ball.

That’s also not too difficult to see. You can imagine the filling to be something like a solid cylinder that you “glue” onto the inside part of the donut.

Now for some language. This donut is actually a ball with a handle, which is known as a 1-handle. The filling for the hole is called a 2-handle. And we can specify how the 2-handle is glued in using an attaching map.

So to build the AK-sphere here’s what we do:

  1. Take a 5-ball.
  2. Attach two 1-handles. Let’s call one of them $x$ and the other $y$ so we can tell them apart.
  3. Glue in two 2-handles to close the “holes” but use attaching maps that have the following presentation: \begin{align*} xyx&= yxy & x^r = y^{r-1}\end{align*}This means that the attaching map goes (on the boundary of the 5-ball) along $x$ first, then $y$, then $x$, then $y$ in the reverse direction, then $x$ in reverse, and finally $y$ in reverse. Similarly with $x^r = y^{r-1}$. [Note: For the AK-sphere, $r = 5$. By varying $r$ you get the family of CS-spheres.]
  4. Take its boundary.

Since the 2-handles have closed the holes created by the 1-handles, what we have left after step 3 is again a 5-ball, though a bit twisted. So its boundary in step 4 should be a sphere.

To construct these spheres, we decided to go in a different order. It would be hard to imagine the attaching maps in dimension 5. So we went down a smidge to dimension 3. And instead of drawing the paths of the attaching maps on/in a ball, we built the ball around the paths.

As you might imagine, these paths come out a bit knotted. But by bringing them up in dimension, we can “untangle” it because there’s more space in higher dimensions. The way that we go up in dimension is by “crossing by $I$” where $I$ is the usual unit interval. Crossing by $I$ has the convenient side effect that everything you “crossed” ends up on the boundary of the resulting complex.

So we start with these two paths (some triangulated tori). We fill in the space between them with cleverly placed tetrahedra so that it ends up being a ball with two 1-handles. We now have a genus-2 simplicial complex, where two sets of tetrahedra are labelled as the path representing $xyx = yxy$ and the other $x^r = y^{r-1}$. Cross with $I$, cross with $I$ again, so that we are in dimension 5 with the two paths on the boundary of this ball with two 1-handles. Glue in the (5-dimensional) 2-handles (whatever that means). Take the boundary. Done!

The experiments we've run on them so far have shown some promise. The complexes are not too large (with $f$-vector $= (496,7990,27020,32540,13016)$), so the experiments don't take too long to run. They've already been used by Konstantin Mischaikow and Vidit Nanda to improve their ongoing Perseus algorithm. [edit: In the original post, I mistakenly referenced CHomP, which has been static for some time.]

The other complex I constructed is Mazur's 4-manifold. We do a similar thing, but this time there's only one path to deal with (that is, only one 1-handle to close up with one 2-handle). The resulting complex has some nice topological properties. I'll get to that in a later post once we're ready to publish.