I'm curious how people think of algebras (in the universal sense, i.e., monoids, groups, rings, etc.). Cayley diagrams of groups with few generators are useful for thinking about a group's action on itself. I know that a categorical approach is becoming more mainstream. For me, lattice theory is my fallback.

Lattice theory helps me remember the diamond isomorphism theorem and the lattice isomorphism theorem. Whenever I need to check whether a group can be expressed as a semidirect product, I look for two subgroups whose meet is the bottom of the subgroup lattice, whose join is the top of this lattice, and one of which is normal. I find this easier than recalling the formal definition, since I've translated it into relations that are spatial in the lattice. Now I'm studying ideal theory and commutative algebra.

I think of the zero set $\mathbb{V}(f)$ as the up-set of prime ideals containing the element $f$. I'm curious if this is a general way others have gone about thinking "algebraically".

Alexander Gruber
    This question is a little soft. – Qiaochu Yuan Mar 14 '13 at 19:50
    A good answer to this question would be awesome to read, but I'm not sure if I can give one. I use mental imagery a *lot* with groups, but the imagery is a little fuzzy and abstract. I'm not sure if I could describe it. – Alexander Gruber Mar 14 '13 at 20:03
  • @AlexanderGruber The most basic rings are all of the form $\operatorname{Hom}(A,A)$ where $A$ is an abelian group. Every ring is implicitly a subring of such a ring. Given $(R,+,\cdot,0,1)$ you can take the abelian group $A=(R,+,0)$. $\operatorname{Hom}(A,A)$ is to rings what the full symmetric group on a set is to groups. What baffles me is an intuition for commutative rings, or commutative multiplication in general. There is something geometric happening when multiplication is commutative, hence algebraic geometry, but I don't really have an intuition for what it is. – Thomas Andrews Mar 20 '13 at 15:11
    @Thomas Andrews: The intuition for commutative rings comes from the basic example of "nice" (for example smooth, continuous, etc.) functions on a "nice" space. In fact, in algebraic geometry one first observes that every commutative ring is actually the ring of regular functions on a topological space, namely its spectrum. Ring elements can always be imagined as functions! See also the functional calculus for C*-algebras. – Martin Brandenburg Mar 22 '13 at 14:50
  • @MartinBrandenburg I was recently thinking that the intuition for why, say, natural number addition/multiplication is commutative is that it arises from a co-product/product in a category. The same is true for $\land$ and $\lor$ in a lattice - the lattice itself is the category there. Category products and co-products are very intuitively (and easily proven to be) commutative and associative... – Thomas Andrews Mar 22 '13 at 15:04
  • @MartinBrandenburg I was thinking in category-theoretic terms because (most) semigroups are faithfully represented as sub-semigroups of $\hom(X,X)$ - in some sense the most basic associative operator is map composition. (There are some cases where the obvious representation is not faithful. If, say $\exists a\neq b: \forall x:a*x=b*x$.) – Thomas Andrews Mar 22 '13 at 16:47

4 Answers


Examples! All abstract theories were historically motivated by specific examples. Examples are still motivating today; they offer nice mental images and tell us what the study of abstract objects is all about.

As for groups, I think of automorphism groups of geometric objects, as well as of fundamental groups of nice spaces, or more simply of the group corresponding to a puzzle such as Rubik's cube. The conjugation $a^{-1} b a$ means that I first make a setup move $a$, then make my actual move $b$, and then I have to reset with $a^{-1}$. This intuition is also useful for general and more abstract groups. The product is just some kind of concatenation of doing something, and the inverse just reverses the action. The commutator $a b a^{-1} b^{-1}$ measures the overlap of the moves $a$ and $b$.
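The "setup move" reading of conjugation can be checked concretely with permutations. Below is a small stdlib Python sketch (the particular permutations are my own illustrative choices): conjugating the swap of positions $0,1$ by a $4$-cycle produces the swap of the two points the cycle sends to $0$ and $1$, i.e. the same move relocated by the setup.

```python
# Permutations of {0,1,2,3} as tuples: p[i] is where i is sent.

def compose(p, q):
    """Apply q first, then p (right-to-left composition)."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

a = (1, 2, 3, 0)  # the "setup move": a 4-cycle
b = (1, 0, 2, 3)  # the "actual move": swap positions 0 and 1

# a^-1 b a : set up with a, do the move b, then reset with a^-1.
conj = compose(inverse(a), compose(b, a))
print(conj)  # (3, 1, 2, 0): the same kind of move (a swap), relocated
```

Note that the conjugate is again a transposition; conjugation changes *where* a move acts, not *what kind* of move it is.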

A groupoid is just a bunch of groups which communicate with each other. There are many puzzles which correspond to groupoids, for example the Square One.

For monoids, I think of endomorphism monoids of geometric objects. The product may be imagined as above; we just cannot go backwards. Every choice is final.

As for commutative rings, I think of the ring of nice functions on a nice space. In algebraic geometry one learns that in fact every commutative ring is the ring of regular functions on its spectrum. Thus one may always imagine elements of a commutative ring as functions. With functions we have a lot of experience and intuition since our school days.
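For the simplest ring $\mathbf{Z}$ this picture can literally be computed: the "value" of an integer at the point $(p)$ of $\mathrm{Spec}(\mathbf{Z})$ is its residue mod $p$, and its zero set consists of the primes dividing it. A small illustrative Python sketch (the list of primes is just a finite window into the spectrum):

```python
# "Ring elements are functions on the spectrum", for R = Z.

primes = [2, 3, 5, 7, 11, 13]  # a finite sample of Spec(Z)

def value_at(n, p):
    """The 'value' of the ring element n at the point (p) of Spec(Z)."""
    return n % p

# The zero set of 60 = 2^2 * 3 * 5 is exactly the primes dividing it.
zero_set = [p for p in primes if value_at(60, p) == 0]
print(zero_set)  # [2, 3, 5]
```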

Modules over a commutative ring $R$ can be added ($\oplus$) and multiplied ($\otimes$) much as if they constituted a ring. We have the distributive law $M \otimes \bigoplus_i N_i = \bigoplus_i (M \otimes N_i)$. In fact they constitute a $2$-ring (see my paper with A. Chirvasitu); more simply, after decategorifying, the isomorphism classes of finitely generated modules constitute a semiring (this idea leads to the K-theory group $K_0(R)$). In some sense we can calculate with modules as if they were numbers; the natural number $n$ corresponds to the free module $R^n$. Many isomorphisms between modules decategorify to combinatorial identities: for example, $\Lambda^n(X \oplus Y) = \bigoplus_{p+q=n} \Lambda^p(X) \otimes \Lambda^q(Y)$ decategorifies to the Vandermonde identity $\binom{x+y}{n} = \sum_{p+q=n} \binom{x}{p} \cdot \binom{y}{q}$.

Besides, in commutative algebra it becomes important to realize that $R$-modules are in fact quasi-coherent modules on $\mathrm{Spec}(R)$, and therefore may be treated as bundles. In particular the fiber $M \otimes_R \mathrm{Quot}(R/\mathfrak{p})$ serves as a first approximation of $M$ at a prime ideal $\mathfrak{p}$. After that one might continue with the thickenings $M \otimes_R R/\mathfrak{p}^n$, the formal fiber $M \otimes_R \hat{R}_{\mathfrak{p}}$, the stalk $M \otimes_R R_{\mathfrak{p}}$, and finally the localizations $M \otimes_R R_f$ for $f \notin \mathfrak{p}$, which exactly capture the local behaviour of $M$ around $\mathfrak{p}$.
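The decategorification step can be sanity-checked numerically: the exterior-power isomorphism really does become the Vandermonde identity. A small Python check (the range of test values is an arbitrary choice):

```python
# Checking C(x+y, n) = sum_{p+q=n} C(x,p)*C(y,q), the decategorified
# shadow of Lambda^n(X + Y) = sum over p+q=n of Lambda^p(X) (x) Lambda^q(Y).

from math import comb

def vandermonde_rhs(x, y, n):
    return sum(comb(x, p) * comb(y, n - p) for p in range(n + 1))

for x in range(8):
    for y in range(8):
        for n in range(8):
            assert comb(x + y, n) == vandermonde_rhs(x, y, n)
print("Vandermonde identity verified on a small range")
```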

As for graded modules in contrast to modules, we just add another dimension which is given by the grading. For many basic considerations, a graded module may be visualized as a long line of dots, representing the homogeneous components. A homomorphism between graded modules of degree $0$ is a long ladder. For degree $n$ you have to bend the rungs.

As for sheaves, I often imagine them as two-dimensional objects where one dimension is given by the open subsets and the other one is given by the sections on these open subsets. This also helps to understand and remember some basic notions of sheaf theory, such as surjective homomorphisms and flabby sheaves. Another nice visualization for sheaves is given by their equivalent definition as étale spaces. In fact, from this point of view the notions of sections, germs and stalks make the agricultural analogy just perfect.

In general, one of the main ideas of algebraic geometry is to study geometric objects by algebra, but it is also vice versa: Algebraic objects can be understood by means of geometry. Of course there are lots of other areas of mathematics following these lines, for example noncommutative geometry and geometric group theory.

I could also add mental images for all these categorical notions (colimits as generalized suprema, natural transformations as homotopies, adjoint functors as homotopy equivalences, monoidal categories as categorified rings, etc.), but this answer is long enough.

Martin Sleziak
Martin Brandenburg

I like to think of a lot of constructions as playing pretend. I'll go over the integers modulo $n$ for instance. Suppose we start to construct the naturals $0,1,2,\cdots$ and their additive inverses, then allow ourselves multiplication - but in our resulting theory we never specify when two numbers are not equal. Then we cannot derive any sort of contradiction from, say, $0=n$ (keep in mind that a contradiction is of the form $P\wedge\neg P$ for some proposition $P$). If we add this as an axiom and allow ourselves to "pretend" that $0$ and $n$ are the same, then the arithmetic we end up describing is in fact that of the integers mod $n$. Bottom line: if you can pretend there is an algebraic structure with such and such properties, and everything remains consistent, then there does exist said structure.
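This "pretending $0 = n$" can be made concrete in a few lines: identify two integers whenever they differ by a multiple of $n$, and check that the usual arithmetic survives the identification. A sketch (the choice $n = 5$ is arbitrary):

```python
# A toy version of "pretending 0 = n": identify integers that differ
# by a multiple of n, and check that + and * remain well defined.

n = 5

def cls(k):
    """The class of k once we pretend 0 = n (i.e. k mod n)."""
    return k % n

# 0 and n really are "the same" now, and arithmetic is well defined:
assert cls(0) == cls(n)
for a in range(-10, 10):
    for b in range(-10, 10):
        assert cls(a + b) == cls(cls(a) + cls(b))
        assert cls(a * b) == cls(cls(a) * cls(b))
print("arithmetic mod", n, "is consistent")
```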

The philosophy of "playing pretend" may not exactly be a mental image per se, but it is a state of mind which allows us to operate with various types of heavy algebraic machinery in an easy and intuitive manner that I think is important to know about in the context of abstract algebra. Also, perhaps a set / model theorist / logician could formalize the concept of "pretend" - say, the "theory" of an algebraic structure being a collection of certain types of descriptions that can be made about it, and then "pretending" relations are true is tantamount to adjoining propositions to the underlying theory as axioms. My familiarity is too weak to make pronouncements on this.

The generalization of this is to think of quotients as pretending relations are true. Formally, we are creating an equivalence relation preserved by the ambient algebraic operations which is obtained by forming the collection of all "equations" that are obtained by performing algebra on a fixed set of relations. For instance, the abelianization $G^{\rm ab}:=G/[G,G]$ is obtained by adjoining commutativity relations $ab=ba$ (for each pair $a,b\in G$) to the theory of the group $G$ (same idea for a ring $R$). In this vein, every group $G$ is the quotient of the free group whose alphabet is the underlying set of $G$ by the collection of relations encoded in the multiplication table for $G$.

As a fun and simple example, in my abstract algebra class we once were trying to find an example of a ring $R$ for which the polynomial ring $R[x]$ contained a nonconstant idempotent (we had already seen that $\deg (f\cdot g)=\deg f+\deg g$ needed to be amended to $\le$ when the base ring contained zero divisors). The simplest case would be to consider linear polynomials, i.e. $(ax+b)^2=ax+b$, which rearranges to $a^2x^2+(2ab-a)x+(b^2-b)=0$, so we can simply make $a,b$ formal variables and set $R={\bf Z}[a,b]/(a^2,2ab-a,b^2-b)$ so that $ax+b$ is idempotent in $R[x]$. (A caveat: in this commutative quotient $(2b-1)^2=4b^2-4b+1=1$, so $2b-1$ is a unit, and $a(2b-1)=2ab-a=0$ forces $a=0$; indeed over any commutative ring the idempotents of $R[x]$ are just the idempotents of $R$, so a genuinely nonconstant idempotent requires a noncommutative base ring.)
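A nonconstant idempotent does exist once the base ring is allowed to be noncommutative: in $M_2(\mathbf{Z})[x]$ the polynomial $E_{11}+E_{12}x$ squares to itself, since $E_{11}^2=E_{11}$, $E_{11}E_{12}+E_{12}E_{11}=E_{12}$ and $E_{12}^2=0$. A quick stdlib check (my own illustration, not from the class discussion):

```python
# Verifying that f = E11 + E12*x is idempotent in M2(Z)[x]: the
# coefficients of f^2 are E11^2 (constant), E11*E12 + E12*E11 (x),
# and E12^2 (x^2), which must equal E11, E12, 0 respectively.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

E11 = [[1, 0], [0, 0]]
E12 = [[0, 1], [0, 0]]
ZERO = [[0, 0], [0, 0]]

const = matmul(E11, E11)                          # constant coefficient
lin = matadd(matmul(E11, E12), matmul(E12, E11))  # x coefficient
quad = matmul(E12, E12)                           # x^2 coefficient

assert const == E11 and lin == E12 and quad == ZERO
print("E11 + E12*x is a nonconstant idempotent in M2(Z)[x]")
```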

It should go without saying that the more things we pretend are true about a structure, the more constrained this structure must end up being. Sometimes we create a trivial object in the process. Another way of saying this: in an algebraic structure various things might be distinct, but once we pretend two things are equal that we didn't consider equal before, a domino effect occurs in which a whole bunch of other things suddenly become equal.

The notion of freeness in this context is only pretending as much as is strictly necessary. If you have a set $X$ and want to form the free group out of it, you at the very least have to assume you can multiply and invert elements of $X$ and that the resulting group has some identity element, but beyond this nothing else needs to be assumed - thus, if you assume any further relations you are ending up with a quotient of the free group.

Similarly, to create the free product $A*B$, you at the very least need to keep the multiplication tables for $A$ and $B$, and need to be able to multiply elements of $A$ against those of $B$, but beyond this nothing else need be assumed. Pretending $A$ and $B$ commute with each other yields the direct sum as a quotient of the free product: $A\oplus B\cong (A*B)/[A,B]$ (where we view $A$ and $B$ as subgroups of the free product). The semidirect product $N\rtimes H$ can be formed by quotienting the free product $N*H$ by the relations $hnh^{-1}=\varphi(h)(n)$ for all $n\in N,h\in H$ (for a chosen action $\varphi\colon H\to\operatorname{Aut}(N)$), which is to say that we "pretend" that conjugation by $H$ is the same as applying automorphisms.
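The semidirect-product relation can be watched in action in the dihedral group $D_n = \mathbf{Z}/n \rtimes \mathbf{Z}/2$, realized concretely as maps on $\mathbf{Z}/n$. The sketch below (with the arbitrary choice $n=7$) checks that conjugating the rotation by the reflection really applies the inversion automorphism, $srs^{-1}=r^{-1}$:

```python
# D_n as maps on Z/n: r(i) = i+1 generates the normal part N = Z/n,
# and s(i) = -i generates the complement H = Z/2 acting by inversion.

n = 7

def r(i):
    return (i + 1) % n       # rotation, a generator of N

def r_inv(i):
    return (i - 1) % n

def s(i):
    return (-i) % n          # reflection; note s = s^{-1}

# Conjugation by s should equal the automorphism phi(s) applied to r,
# which here is inversion: s r s^{-1} = r^{-1}.
for i in range(n):
    assert s(r(s(i))) == r_inv(i)
print("s r s^-1 = r^-1 holds in D_7")
```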

Under this interpretation, if $A$ and $B$ are quotients of free groups $A=FX/R$ and $B=FY/S$ (where $X,Y$ are disjoint and $R,S$ are relations, or more directly group-theoretically the normal subgroups generated by the words considered equal to the identity via the relations we desire to impose), then the free product is easily understood to be $FX/R*FY/S\cong F(X\sqcup Y)/\langle R\cup S\rangle$.

Often Lie algebras $\frak g$ are thought of as subalgebras of an endomorphism algebra with the standard commutator bracket as the Lie bracket. This allows both algebra-type multiplication and the Lie bracket operation on the structure. However, just as abstract groups need not be thought of as symmetries or functions, we need not think of $\frak g$ directly as having any sort of multiplication operation (and therefore no commutator bracket), only the abstract Lie bracket $[\cdot,\cdot]$ satisfying the given axioms. To create the universal enveloping algebra $U({\frak g})$ out of the full tensor algebra $T({\frak g})\cong\bigoplus_{n\ge0}{\frak g}^{\otimes n}$, we pretend we can multiply elements of $\frak g$ together - subject to the distributive property $a(b+c)=ab+ac$ - and then on top of this we pretend that the abstract Lie bracket is in fact the commutator, i.e. we quotient by the relations $[a,b]=ab-ba$ for all $a,b\in {\frak g}$.

If $H\le G$ is a subgroup and $V$ is a representation of $H$ (basically a $K[H]$-module) then to create a $G$-rep out of this we allow ourselves to pretend there are actions of $G$ on $V$, subject to the subgroup $H$ acting in the already understood way. This induced representation thus is formed by "extending the scalars" of $V$ from $K[H]$ to $K[G]$: this is the isomorphism ${\rm Ind}_H^GV\cong K[G]\otimes_{K[H]}V$. More generally, tensoring against a ring extension allows us to form a module (or algebra) by pretending we have more scalars to multiply than we had before. For more useful applications of the tensor product, see the answers given to the questions here and here.

In the links it is also explained that the difference between tensor product and direct sum can be understood quantum-mechanically, in which we view vector spaces as possible formal linear combinations of basis vectors - i.e. superpositions of pure states of a physical system, and the essence of superposition in QM (informally - this is my opinion) is pretending a system can be in a mixture of classical states, so I'd classify this under the same banner.

Other items I want to mention on this topic are fraction fields and virtual representations (both examples of a Grothendieck-type construction that in some sense allows us to 'complete' an 'incomplete' algebraic structure). Whereas quotients contract structures into smaller ones, and extension of scalars pulls in scalars from somewhere else we already have on hand, the fraction field creates fractions out of the existing structure. That is, given an integral domain $R$, we form pretend fractions $a/b$ for $a,b\in R$ (with $b\neq 0$) and then subject them to the obvious rules, e.g. $a/b+c/d=(ad+bc)/(bd)$.
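Python's `fractions.Fraction` implements exactly this construction for $R=\mathbf{Z}$: pairs are created formally and then identified by cross-multiplication, with the forced addition rule above. A quick illustration:

```python
# The field of fractions of Z, as shipped in the standard library.

from fractions import Fraction

a_b = Fraction(1, 6)
c_d = Fraction(1, 10)

# a/b + c/d = (ad + bc)/(bd) = (1*10 + 6*1)/(6*10) = 16/60,
# which the identification reduces to 4/15.
assert a_b + c_d == Fraction(16, 60) == Fraction(4, 15)
print(a_b + c_d)  # 4/15
```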

Representations of a given group $G$ over a field $K$ can be combined through either direct sums or tensor products, and these still satisfy distributivity $(A\oplus B)\otimes C\cong(A\otimes C)\oplus(B\otimes C)$; in this way the representations form a semiring (with the trivial representation as the multiplicative identity) in which we can add and multiply, but we don't have the notion of subtraction necessary for a bona fide ring. Thus, we simply pretend we can subtract, and we get the representation (aka Green) ring ${\rm Rep}(G)$ (we may tensor to obtain larger sets of scalars); the elements of the resulting ring are called virtual representations.
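"Pretending we can subtract" is the Grothendieck group construction: formal differences $(a,b)$, read as $a-b$, identified when $a+d=b+c$. A minimal Python sketch for the simplest semiring, $\mathbf{N}$, which completes to $\mathbf{Z}$ (the normal-form trick below leans on the subtraction already available for Python integers, so it is an illustration rather than the general construction):

```python
# Grothendieck completion of (N, +, *): pairs (a, b) standing for a - b.

def normalize(pair):
    """Canonical representative of the class of (a, b) ~ a - b."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    return normalize((p[0] + q[0], p[1] + q[1]))

def mul(p, q):
    # (a - b)(c - d) = (ac + bd) - (ad + bc)
    a, b = p
    c, d = q
    return normalize((a * c + b * d, a * d + b * c))

two = (2, 0)
minus_five = (0, 5)                     # a "virtual" element, impossible in N
assert add(two, minus_five) == (0, 3)   # 2 - 5 = -3
assert mul(minus_five, minus_five) == (25, 0)  # (-5)(-5) = 25
print("N completed to Z by pretending subtraction exists")
```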

In the same way, $G$-sets (sets equipped with $G$-actions) may be multiplied through Cartesian products and added through disjoint union, and when we pretend we can subtract things we obtain the so-called Burnside ring (which may be viewed as a subring of the representation ring via the process of linearizing $G$-sets into spaces which are $G$-reps).

Probably more things can be thought of as pretend-new-things-are-true-about-an-existing-object, like say direct limits or fibred coproducts etc. but this is what comes to mind.

Martin Sleziak

Let $G=\langle X; \mathbf{r}\rangle$ be a group given in terms of generators and relators. It is often useful to see what a word representing the identity element of the group will look like.

Write every relator on a circle. These are our tiles.

Let $W$ be a word over $X$ and write $W$ around a circle. Then $W$ is equal to the trivial word if and only if $W$ can be "tiled" by the relators (this makes sense, as cyclic shifts (and, indeed, all conjugates) of $W$ are also equal to the identity).

This is a very powerful tool, and a snazzy way of visualising your groups. These are called van Kampen Diagrams, or just Diagrams. There is also a dual notion, called Pictures.

An excellent reference to this is the end of the book "Combinatorial group theory" by Lyndon and Schupp (the chapter on small cancellation theory). Another excellent, but more modern, reference is the lecture notes of Hamish Short, Diagrams and groups (but I am less familiar with these).

For example, look at the group presentation $\langle a, b; [a, b]\rangle$. Then a valid diagram is (from Wikipedia),

A sample diagram

Fixing a vertex on the boundary and reading clockwise around the boundary, the word you get is necessarily equal in the group to the trivial word.
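For this particular presentation the group is $\mathbf{Z}^2$, so triviality of a boundary word can be decided by exponent sums alone. A small Python sketch (the convention of encoding $a^{-1},b^{-1}$ as capital letters is my own):

```python
# Word problem for <a, b ; [a,b]> = Z^2: a word is trivial iff its
# exponent sums in a and in b both vanish. A and B denote a^-1, b^-1.

def exponent_sums(word):
    return (word.count("a") - word.count("A"),
            word.count("b") - word.count("B"))

def is_trivial(word):
    return exponent_sums(word) == (0, 0)

assert is_trivial("abAB")       # the relator [a, b] itself
assert is_trivial("aabbABAB")   # trivial in Z^2
assert not is_trivial("abA")    # exponent sum of b is 1
print("boundary words check out")
```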

Now, if these tiles have a certain structure (e.g. a small cancellation structure) then you get a solution to the word problem and the conjugacy problem (and sometimes even the isomorphism problem, but that is a very recent result and boils down to hyperbolic groups). For example, if there is no way of fitting fewer than four tiles around a given tile (this is called the $C(4)$ condition), and when tiling every (internal) vertex is adjacent to at least four tiles (this is called the $T(4)$ condition), then the word and conjugacy problems are soluble. There is also the $C^{\prime}(1/\lambda)$ condition, which is the most useful of all; it says, roughly, that any piece along which two tiles meet takes up less than $1/\lambda$ of the length of either tile.

Okay, so that is the two-dimensional version. You are tiling the surface of a disc. What if you were to tile a sphere? Well, this comes down to the second homology group of your presentation, and is related to asphericity. There is a chapter in a book about this by Steve Pride and Bill Bogley, and Pride had some "rather clever stuff" about this earlier, according to Martin Bridson. Also, Baumslag, Bridson, Miller and Short used the Bogley-Pride theory to prove some pretty awesome stuff about fibre products in their paper "Fibre products, non-positive curvature, and decision problems" (although they do not actually use diagrams in their paper (or, at least, I do not remember them doing so, and flicking through it I see no images of diagrams)).

Mark Sapir is a prolific writer on this stuff. He has a recent paper which uses them. Also look up his work with Victor Guba on Diagram Groups. These are groups which are the symmetries of diagrams. Thompson's group $F$ is one, so these are important!

Martin Sleziak
    Could you possibly write/draw/tikz a simple example of the group visualization? I'm not sure I understand this "tiles" thing. – Alexander Gruber Mar 15 '13 at 20:18
  • @AlexanderGruber: I will try and give an example later if I have time. The best place to look really is Lyndon and Schupp. However, I quite liked Sapir's paper which I mentioned above, and Kharlampovich and Myasnikov's paper called "Hyperbolic groups and free constructions", which made me realise how to use these diagrams to look at HNN-extensions. There is also a good (I think...) introduction by Wise and McCammond called "Fans and ladders in small cancellation theory". They give a really detailed study of what these diagrams look like for small cancellation groups. – user1729 Mar 18 '13 at 13:52
  • As an exercise, try looking at the fundamental group of a closed, orientable surface (of genus at least $3$) and wondering what a word which is equal to the identity must look like. Use these diagrams. If I have time, this is the example I will give. But...I probably won't have time! – user1729 Mar 18 '13 at 13:54
  • If you have a space with fundamental group $G$, then you can see each of the generators as cycles and each of the relations as retractions of a cycle, which is really just the image of a polygon with its interior. If we are allowed to glue these polygons together at any edge where the labels are the same but in opposite directions, and the result is another simply-connected polygon, then the boundary of this new polygon is a derived "relation", because the simply-connected polygon clearly admits a retraction. – Thomas Andrews Mar 20 '13 at 00:06
    Perhaps the tiling you refer to is the Van Kampen diagram (see [wikipedia](http://en.wikipedia.org/wiki/Van_Kampen_diagram)) – Myself Mar 22 '13 at 21:09
  • @Myself: Yes, sorry, I thought I had used the words "van Kampen diagram" somewhere in what I wrote! A silly omission... – user1729 Mar 25 '13 at 12:34

They're puzzles and games to me. It might sound ridiculous to think about Dedekind domains as puzzles, but that's always the way I've thought about algebra.

Using the rules of the game, move the pieces and prove the theorem. Group theory has relatively simple rules and only one move, $+$, but like Go it can get out of hand relatively quickly. At first, the game gets better as the rules get more difficult: now let's say $a+b \neq b+a$. Go! Ring theory adds another move, $\times$, which changes gameplay quite a bit. Field theory has the most rules, which somehow seems to make the game easier, although you can always make the objective harder until it's beyond reach.

As the rules get more complicated, different types of pieces come out of the woodwork. First there are the usual suspects: Abelian and non-Abelian groups, commutative and noncommutative rings, algebraic number fields, etc. Eventually there are stranger pieces on the board like ideals, unique factorization domains, principal ideal domains and so forth. Not all of the rules apply to all of the pieces, and sometimes it's difficult to keep them all straight. You can only castle if there's nothing between the rook and the king, neither has been moved, and the king isn't in check, right?

Douglas B. Staple