Why algebraic geometry is interesting

The first few items concern classical algebraic geometry, which studies polynomials over the reals and complex numbers.

  1. Classical algebraic geometry is the ‘next step’ after linear algebra. Linear algebra allows only addition and multiplication by scalars. If you allow multiplication of coordinates, you get polynomials in multiple variables, and that’s exactly what classical algebraic geometry studies. There, one studies ‘varieties’, which are like linear subspaces: instead of being defined by a linear equation (like x+y+z=0), they are defined by algebraic equations (like y + x^3 - z = 0). Despite some hiccups, dimension is well-behaved, and each extra equation usually cuts the dimension down by 1.
  2. A lot of very interesting theorems came out of algebraic geometry, especially once projective space is thrown in. Projective space adds points at infinity to Euclidean space, so, for instance, two parallel lines in the plane intersect at infinity. This gives you theorems like Bezout’s theorem, which says that two plane curves (i.e. zero sets of polynomials in two variables) that aren’t degenerate (like sharing a common factor) intersect in the complex projective plane a number of times equal to the product of their degrees, counted with multiplicity (so two conics intersect 2 × 2 = 4 times; see the sketch after this list).
  3. Classical algebraic equations come up all the time. Matrix multiplication is algebraic, the determinant equation is an algebraic equation, etc.
  4. Now we get to crazier things. When abstract algebra was being developed, mathematicians began to see connections between polynomial rings and algebraic equations. In particular, every variety corresponded to a ‘prime ideal’ in a polynomial ring over the complex numbers, and points corresponded to maximal ideals. What does this mean? Look at the polynomials in one variable over the complex numbers C. By the fundamental theorem of algebra, any such polynomial factors into linear polynomials (x-a)(x-b)···(x-c). So the only ‘prime’ polynomials are the linear ones (x-a). But these are in 1-1 correspondence with the points a of C. It gets more complicated in higher dimensions.
  5. So now mathematicians knew that you could make prime ideals in a ring correspond to points in a space. So they took it and ran with it: what if you took ‘any’ ring and made a space whose points corresponded to prime ideals? This is the idea of a ‘spectrum’. It turns out that commutative rings of ‘finite dimension’ have the most tractable spectra. These spaces have weird properties. For instance, in any ring without zero divisors, the element 0 generates a prime ideal of its own, but it corresponds to no classical point: it’s an extra ‘generic’ point that is somehow close to all other points, because every other prime ideal contains it. The spectrum of the integers has a countable number of points, one for 0 and one for each prime. Things get really, really weird if you look at the prime ideals of Z[X], the ring of integer polynomials.
  6. Mathematicians quickly discovered that almost everything you did with polynomial rings had a geometric analogue. Quotienting by a prime ideal corresponded to restricting to a subvariety. Inverting an element corresponded to restricting to the complement of its zero set. Quotienting by only the linear terms of an ideal gave the tangent space. Tensor products gave ‘fiber products’. The possibilities were endless!
  7. It even extended to number theory, giving number theory a geometric setting. In many ways, number theory could be done by working with countable spaces like the integers in the same way that you work with uncountable spaces like the complex numbers. This explains many of the connections between finite fields and complex numbers, such as the analogue of the Riemann hypothesis for varieties over finite fields (the Weil conjectures).
  8. Now algebraic geometry is even being applied to physics: string theory, in particular, is deeply intertwined with it.
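
To make item 2 concrete, here is a small SymPy sketch (my own example; the two conics are chosen arbitrarily). Because all four intersection points happen to be distinct, real, and affine, naive equation-solving already finds the 2 × 2 = 4 points Bezout’s theorem predicts:

```python
# Bezout sanity check: two conics (degree 2 each) meet in 2 * 2 = 4 points.
# The specific curves are a hypothetical example, not from the post.
from sympy import symbols, solve

x, y = symbols('x y')
circle = x**2 + y**2 - 1       # degree 2
ellipse = x**2 + 4*y**2 - 2    # degree 2

points = solve([circle, ellipse], [x, y])
print(len(points))  # 4, as predicted
print(points)       # the four intersection points
```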

These are just a few reasons that algebraic geometry is interesting to me.

This post was originally posted on reddit.

Image by Jbourjai.

Introduction to character theory

Character theory is a popular area of math used in studying groups.

Representations. Character theory has its basis in representation theory. The idea of representation theory is that every finite group (and a lot of infinite ones!) can be represented as a collection of matrices. For instance, the group of order 2 can be represented by two n×n matrices, one (call it A) with all -1’s on the diagonal, and one (call it B) with all 1’s. Then A² = B, B² = B, and AB = BA = A, which shows that it really is the group of order 2.
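
Here is a minimal NumPy sketch of this representation (my own snippet; the choice n = 3 is arbitrary), checking that A and B really multiply like the group of order 2:

```python
# The order-2 representation described above, with n = 3.
import numpy as np

n = 3
A = -np.eye(n)  # all -1's on the diagonal
B = np.eye(n)   # all 1's on the diagonal (the identity)

assert np.array_equal(A @ A, B)  # A^2 = B
assert np.array_equal(B @ B, B)  # B^2 = B
assert np.array_equal(A @ B, A)  # AB = A
assert np.array_equal(B @ A, A)  # BA = A
print("A and B multiply like the group of order 2")
```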

Another way of representing the same group is to have A be the 2×2 matrix that is 0 on the diagonal and 1 elsewhere. B is still the identity. This is another perfectly good representation.

Sometimes it is helpful to look at matrices which only represent a part of a group; in this situation, you don’t have an isomorphism between the group and the matrices, but you do have a homomorphism. One example is sending the group of order 4, {1, t, t², t³}, to the same matrices above, sending 1 and t² to B, and t and t³ to A.

It’s easy to see that there are infinitely many representations for every group. In fact, you can take any group of matrices A_1, …, A_n and conjugate them all by another invertible matrix C to get CA_1C⁻¹, …, CA_nC⁻¹. This gives you another representation (this is also called a similarity transformation). The simplest representation for every group (called the trivial representation) sends every element to the identity matrix.

Characters. What mathematicians did was ask, “Is there any better way to classify these representations?” One thing they tried was to find invariants, i.e. things that don’t change under transformations. One idea was to use the trace. The trace of a matrix (the sum of its diagonal entries) is invariant under similarity transformations, i.e. conjugacies.

And so mathematicians would take each matrix in a representation and find its trace. The collection of all these traces is called a character. So the character of our first representation for the group of order 2 is the map sending A to -n and B to n, while the character of the second representation is the map sending A to 0 and B to 2. The character of the trivial representation in dimension n is the map sending everything to n.
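
As a quick check of these values (my own snippet, reusing the two representations above with n = 3):

```python
# Characters are just the traces of the representing matrices.
import numpy as np

n = 3
A1, B1 = -np.eye(n), np.eye(n)                      # first representation
A2, B2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)  # second representation

print(np.trace(A1), np.trace(B1))  # -3.0 3.0, i.e. -n and n
print(np.trace(A2), np.trace(B2))  #  0.0 2.0
```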

This is a very simple example, but as mathematicians tried more complicated examples, they noticed a pattern (I’m simplifying the history here): the set of characters was generated by a very small number of characters, meaning that every character was a linear combination (with non-negative integer coefficients) of a small number of basic characters, called irreducible characters.

For instance, in the group of order 2, every character is a sum of the character with values (1, 1) and the character with values (-1, 1) (listing the value on A first, then on B). These can both be given by representations using 1×1 matrices.

Even more interesting, this decomposition into irreducible characters always gave a decomposition of representations, meaning that the matrices could be put by a similarity transformation into block form, with each block corresponding to one of the irreducible characters.

Thus, in our first example, the n×n matrices are the sum of n copies of the 1-dimensional representation with character (-1, 1). Note that the diagonal matrices are already in block form.

For the second example, note that the character is one copy of each irreducible character. The matrix A can be conjugated to the 2×2 matrix with a 1 in the upper left and a -1 in the lower right, corresponding to the two irreducible characters.

It gets crazier. It turns out that each irreducible character is orthogonal to every other irreducible character if you write out the lists of values and take the dot product. So, (1, 1) and (-1, 1) are orthogonal. Conversely, given any two elements of the group that are not conjugate, the corresponding lists of their values in the irreducible characters are orthogonal.

This allows one to split a character into its irreducible parts very easily.
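
Here is a sketch of that splitting for the group of order 2 (my own snippet). One caveat the informal ‘dot product’ above glosses over: the standard inner product on characters divides by the order of the group, which is what makes the multiplicities come out as integers:

```python
# Splitting the character (0, 2) of the second representation into
# irreducibles, with values listed in the order (A, B).
from fractions import Fraction

chi_trivial = [1, 1]   # irreducible: the trivial character
chi_sign    = [-1, 1]  # irreducible: the sign character
chi         = [0, 2]   # character of the second representation above

def inner(c1, c2, group_order=2):
    # <c1, c2> = (1/|G|) * sum of c1(g) * c2(g) over the group
    # (real values here; complex values would need conjugation)
    return Fraction(sum(a * b for a, b in zip(c1, c2)), group_order)

print(inner(chi, chi_trivial))  # 1 copy of the trivial character
print(inner(chi, chi_sign))     # 1 copy of the sign character
# So chi = chi_trivial + chi_sign, matching the block form found above.
```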

Three random notes at the end: 1. Character values don’t have to be rational or even real, but they are always algebraic integers. 2. The irreducible characters of abelian groups are always 1-dimensional. These characters are actually homomorphisms into the complex numbers. 3. Building off the previous note, you can define characters on the real line to be homomorphisms into the multiplicative complex numbers. In this case, all the (continuous) characters have the form t → e^(ixt), where x is a real constant. These characters are still orthogonal (using integration instead of summation), and any suitably nice function from the real numbers to the complex numbers can be decomposed into these characters using integrals. This is, in fact, Fourier theory. The Fourier transform just takes a function and splits it into its irreducible characters.
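
In one standard convention (and assuming the function f decays fast enough for the integrals to make sense), that decomposition into the characters t → e^(ixt) reads:

$$\hat{f}(x) = \int_{-\infty}^{\infty} f(t)\,e^{-ixt}\,dt, \qquad f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(x)\,e^{ixt}\,dx$$

The first integral measures how much of the character with frequency x is present in f; the second reassembles f from those characters.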

This post was originally posted on reddit.

Image by RobHar.

 

An explanation of the eight geometries of universes (i.e. 3-manifolds)

One basic fact about 2-d surfaces is that they all belong to some ‘geometry’. Informally, a geometry is a space that is homogeneous (so every point is just like every other point), simply connected (meaning that it has no ‘holes’), and has as many symmetries as possible.

With this definition, there are three 2-dimensional geometries:

  1. Euclidean geometry (an infinite sheet of paper)
  2. Spherical geometry (the surface of a ball)
  3. Hyperbolic geometry (M.C. Escher’s Circle Limit)

These geometries each have a constant curvature. Euclidean geometry is not curved, spherical geometry is positively curved, and hyperbolic geometry is negatively curved, like a saddle or a pseudosphere.

The 3-dimensional geometries were described by Thurston, and Perelman proved the Geometrization Conjecture: every 3-manifold can be cut into ‘simple’ pieces, each of which belongs to one of these geometries.

Thurston reasoned this way: a 3-dimensional geometry can be isotropic or non-isotropic, where isotropic means looking the same in every direction.

If the geometry is isotropic, then it is curved the same way in every direction. So we get geometries of constant curvature. They are:

  1. Euclidean geometry (how we usually imagine space)
  2. Spherical geometry (the 3-sphere; this would be a finite universe)
  3. Hyperbolic geometry (look up dodecahedral tessellations)

The next group of geometries has one axis different from the other 2. The simplest examples are the product geometries:

  1. The Euclidean plane times a line. This is just Euclidean space, so we don’t count it again.
  2. The sphere times a line. This geometry is finite in two directions and infinite in another.
  3. The hyperbolic plane times a line. This geometry comes up in knot theory.

Now things get weird. You can also have twisted products:

  1. The Euclidean plane twisted with a line. This is Nil geometry, where travelling along the ‘line’ direction makes the plane get more and more skewed. This geometry has been used for a century or more in Hamiltonian mechanics.
  2. A sphere twisted with the line. This gives us the 3-sphere again.
  3. The hyperbolic plane twisted with a line. This is the geometry of “the universal cover of SL₂(R)” (they need a better name). Motion here occurs in screw-like patterns.

Finally, there is only one geometry where all 3 axes are different. This is:

  1. Sol geometry. In this geometry, travelling up the z axis contracts the x-axis and expands the y-axis. Going down does the opposite.
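
For concreteness, here are standard coordinate models for the two least familiar geometries above, Nil and Sol (textbook formulas, not from the original post; I have oriented the Sol metric so that moving up in z contracts x and expands y, matching the description):

$$ds^2_{\mathrm{Nil}} = dx^2 + dy^2 + (dz - x\,dy)^2, \qquad ds^2_{\mathrm{Sol}} = e^{-2z}\,dx^2 + e^{2z}\,dy^2 + dz^2$$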

These are the eight geometries.

This post was originally posted on reddit.

Image by: TomRuen, from software by Jeff Weeks

What point-set topology is (without coffee or donuts)

What is point-set topology?

Point-set topology is the most basic kind of topology. The most important ideas in topology are continuity, compactness, and connectedness. The goal of topology is to understand these properties for a wide variety of spaces. The intuitive definitions of these three things are:

  1. Continuity: A function is continuous if it takes nearby points to nearby points.
  2. Compactness: A space is compact if it is the union of a finite number of arbitrarily small sets (we usually say it is ‘covered’ by these sets).
  3. Connectedness: A space is connected if it cannot be split up into two pieces that are distant from each other.

Each of these concepts has weasel words: nearby, small, and distant. What do these things mean?

It turns out that they can all be made precise by using one idea: open sets. If two points are in a lot of the same open sets, they are close. If they are in few of the same open sets, they are far apart.

So we have more precise definitions:

  1. Continuity: a function f is continuous if the preimage of every open set is open.

What does this mean? Let U be an open set around any image point f(x) (so U can be as small as we want). Then f is continuous if for every such U, there is an open set V in the domain that contains x and that maps into U. This is the epsilon-delta definition without numbers.

  2. Compactness: A space X is compact if, given any collection of open sets that cover X (no matter how small each one is), we can choose a finite number of them that still cover X. (Notice that the real numbers aren’t compact: if you choose really small open sets, like the intervals (n-1, n+1) for integers n, you need infinitely many to cover the whole line. The circle, though, is compact.)
  3. Connectedness: A space is connected if you cannot split it into two disjoint nonempty open subsets (because each one being open means that they are far from each other).

So the question remains: What is an open set? And here is the secret of topology: an open set can be anything you want. So you just pick the open sets, and see what properties come out. Each choice of open sets is called a topology.

Almost anything goes when picking topologies, but for basic concepts to make sense (like constant functions being continuous) you need 3 basic rules:

  1. The whole space is open, and the empty set is open.
  2. The intersection of two (or any finite number) of open sets is open.
  3. The union of any collection of open sets is open.
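
Here is a toy sketch of all of this on a three-point space (my own example): we pick a topology, verify the 3 rules, and test continuity via preimages.

```python
# A hand-picked topology on a three-point space, checked against the
# three rules, plus a continuity test via preimages.
from itertools import chain, combinations

X = frozenset({1, 2, 3})
topology = {frozenset(), frozenset({1}), frozenset({2, 3}), X}

# Rule 1: the whole space and the empty set are open.
assert X in topology and frozenset() in topology

# Rule 2: the intersection of any two open sets is open.
assert all(U & V in topology for U in topology for V in topology)

# Rule 3: the union of any collection of open sets is open.
def subfamilies(sets):
    sets = list(sets)
    return chain.from_iterable(combinations(sets, r) for r in range(len(sets) + 1))

assert all(frozenset().union(*fam) in topology for fam in subfamilies(topology))

# Continuity: f is continuous iff the preimage of every open set is open.
def is_continuous(f):
    return all(frozenset(x for x in X if f[x] in U) in topology for U in topology)

print(is_continuous({1: 1, 2: 3, 3: 2}))  # True: swapping 2 and 3 is continuous
print(is_continuous({1: 1, 2: 1, 3: 2}))  # False: preimage of {1} is {1, 2}, not open

# Note: this space is NOT connected, since {1} and {2, 3} are disjoint
# nonempty open sets that split it.
```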

Pick weird topologies, and you get weird properties. Take the open ray topology on the real line, where the open sets are the rays (-infinity, a), together with the empty set and the whole line: there, every subset is connected. Weird! Usually, though, you pick topologies that resemble Rⁿ or other well-known spaces. For fun reading, check out the book Counterexamples in Topology.

The image at the top of this post is one I created for the compact set page on Wikipedia.

This post was originally posted on reddit.

Why you should care about hyperbolic groups

Hyperbolic groups belong to the obscure area of geometric group theory. Their definition (https://en.wikipedia.org/wiki/Hyperbolic_group) is difficult to work through: thin triangles? What are those?

So why should you care? In increasing order of coolness:

  1. Group theory: Finite groups have been studied for centuries. The next big class after the class of finite groups is infinite, finitely generated groups. The vast majority of such groups (in a suitable statistical sense) are hyperbolic. The word problem (infamously hard to solve in general, and similar to the halting problem) is solvable in hyperbolic groups; see the sketch at the end of this post.
  2. Cosmology: It is well known that every surface can be made spherical, Euclidean, or hyperbolic. A little less well known is the Geometrization Theorem, which says there are 8 possible geometries for 3-manifolds, of which the largest class (by far!) consists of the hyperbolic universes; that is, universes whose fundamental group is hyperbolic. A random universe is hyperbolic with very high probability.
  3. Fractals: This is part of my own research: Every hyperbolic group has a fractal nature at infinity which can often be drawn explicitly by subdivision rules. These subdivision rules carry all of the information about the hyperbolic group. Some images can be seen at http://www.math.vt.edu/people/floyd/subdivisionrules/gallery/gallery.html. The image at the top of this post is a finite subdivision rule.
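
The simplest infinite hyperbolic groups are free groups, where the word problem from item 1 is solved by free reduction: cancel adjacent inverse pairs until nothing cancels, and the word represents the identity exactly when nothing is left. Dehn’s algorithm for a general hyperbolic group works in the same greedy, word-shortening spirit. A minimal sketch (my own convention: uppercase letters are the inverses of the corresponding lowercase generators):

```python
# Word problem in a free group, the most basic hyperbolic group:
# a word is the identity iff it freely reduces to the empty word.
def free_reduce(word: str) -> str:
    stack = []
    for ch in word:
        # 'a' and 'A' are inverse generators (same letter, opposite case)
        if stack and stack[-1] != ch and stack[-1].lower() == ch.lower():
            stack.pop()  # cancel an adjacent inverse pair
        else:
            stack.append(ch)
    return ''.join(stack)

print(free_reduce("abBA") == "")  # True: this word is the identity
print(free_reduce("abAB"))        # 'abAB': the commutator is not trivial
```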