TL;DR: abstraction isn't free or always desirable.

Atiyah sums up much of the feeling behind this post here:

> When someone tells me a general theorem I say that I want an example that is both simple and significant. It's very easy to give simple examples that are not very interesting or interesting examples that are very difficult. If there isn't a simple, interesting case, forget it.
>
> Mathematics is built on abstractions. On the other hand, we have to have concrete realizations of them. Your brain has to operate on two levels. It has to have abstract hierarchies, but it also has to have concrete steps you can put your feet on.
>
> Everything useful in mathematics has been devised for a purpose. Even if you don't know it, the guy who did it first, he knew what he was doing. Without knowing the examples, the whole thing is pointless. It is a mistake to focus on the techniques without constantly, not only at the beginning, knowing what the good examples are, where they came from, why they're done. The abstractions and the examples have to go hand in hand.
>
> You can't just do particularity either. You can't just pile one theorem on top of another and call it mathematics. You have to have some architecture, some structure. Poincaré said science is not just a collection of facts any more than a house is a collection of bricks. You have to put it together the right way.

Think about all the definitions in math as a really big tree, rooted in whatever axioms you use.

Good definitions are vertices in the tree with a high branching factor. Definitions that lead to long, skinny paths are less useful because fewer examples actually use them.

A generalized definition always has at least as many descendants as any of its refinements, but the added pain of the greater abstraction isn't always worth it.

## Example: Vector Spaces to Bimodules

I was reading Jonathan Gleason’s linear algebra textbook and I got to the part where he started talking about bimodules.

Vector spaces are the algebraic structure at the heart of linear algebra. A vector space is an abelian group with a field acting on it via scalar multiplication.

We can weaken the field to a commutative ring to get a module.

Dropping commutativity gives left and right modules.

We can take a left action of one ring and a right action of another and require the two scalar multiplications to be compatible, meaning (r · m) · s = r · (m · s), to get a bimodule.
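To make the compatibility axiom concrete: the space of m × n matrices carries a left action by m × m matrices and a right action by n × n matrices, and the bimodule axiom is just associativity of matrix multiplication. A quick numerical sanity check of this (using NumPy, an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# M = space of 2x3 real matrices, with
# 2x2 matrices acting on the left and 3x3 matrices on the right.
A = rng.standard_normal((2, 2))  # left scalar
X = rng.standard_normal((2, 3))  # element of the bimodule
B = rng.standard_normal((3, 3))  # right scalar

# Bimodule compatibility: (A . X) . B == A . (X . B),
# which here is associativity of matrix multiplication.
left_first = (A @ X) @ B
right_first = A @ (X @ B)
assert np.allclose(left_first, right_first)
```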

But all this additional abstraction doesn’t lead to many simple and significant examples.

To be fair(er) to Gleason, he makes a point of stating that bimodules are more useful as a general setting.

But ultimately, bimodules are more of a dead end than vector spaces, even for most pure mathematicians. There's a reason we teach topological spaces and not just metric spaces in analysis, and a reason we rarely go further than modules in algebra.

## Nonexample: Symmetric Matrices to Normal Operators

This is a case where generalizing the definition leads to all sorts of interesting new examples.

A symmetric matrix is equal to its transpose. A normal operator commutes with its adjoint (conjugate transpose); for real matrices, that means it commutes with its transpose.

It turns out that the Spectral Theorem holds for normal operators: every normal operator on a finite-dimensional complex inner product space is unitarily diagonalizable. That's a huge theorem, enough to pay the cost of the abstraction. And the cost is small, since "commutes with its adjoint" is easy to remember.
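A simple and significant example the generalization buys us: a rotation matrix is normal but not symmetric, and over the complex numbers it is unitarily diagonalizable exactly as the theorem promises. A sketch with NumPy (an assumption here; the unitarity check also leans on the eigenvalues being distinct):

```python
import numpy as np

# Rotation by 90 degrees: not symmetric, but normal.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

AH = A.conj().T                      # adjoint (conjugate transpose)
assert not np.allclose(A, A.T)       # not symmetric
assert np.allclose(A @ AH, AH @ A)   # but A commutes with its adjoint

# Spectral theorem for normal operators: A = V diag(eigvals) V*
# with V unitary, over the complex numbers.
eigvals, V = np.linalg.eig(A)
assert np.allclose(V.conj().T @ V, np.eye(2))             # V is unitary
assert np.allclose(V @ np.diag(eigvals) @ V.conj().T, A)  # reconstruction
```

The eigenvalues here are ±i, which is why nothing like this diagonalization is visible if you insist on staying over the reals.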

## Takeaway

Abstractions need to pull their weight to be worth having.