Today I grasped the difference between the differential and the
derivative (only took 6 years `-_-`), lost my fear of \(dx\) as a single
symbol (until learning the exterior derivative changes that into two symbols
again), and finally got a hint as to why transposes pop up in weird
places in “matrix calculus”.
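One way to see where those transposes come from (standard facts, just the scalar and multivariable differentials side by side):

```latex
% Scalar case: the differential is the derivative scaled by dx.
df = f'(x)\,dx
% Multivariable case, f : \mathbb{R}^n \to \mathbb{R}: the differential is a
% linear map applied to the increment dx \in \mathbb{R}^n. Writing that linear
% map as an inner product forces the gradient in transposed, as a row vector:
df = \sum_{i} \frac{\partial f}{\partial x_i}\,dx_i = (\nabla f)^{\mathsf{T}}\,dx
```

The derivative of \(f : \mathbb{R}^n \to \mathbb{R}\) is a covector (a row), while the gradient is its transpose (a column), so a transpose shows up every time you shuttle between the two.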

Someone described a blog post of mine as “polemic and a quick example”. That’s an annoyingly good description. It’ll take some effort to level up to “polemic and the perfect example”.

Read half of a thesis about using algebra in reinforcement learning. It’s a bloody mess.

In a way, the action space acts on the state space via the transition
dynamics, but it’s not a true semigroup action. The type signature is
wrong because of the probabilistic dynamics: an action in RL yields a
*distribution* over (successor) states rather than a state.
Deterministic transitions give a manifold with a reward structure, like
a finite state machine that’s not finite. Not a great name. But I wonder
if sufficiently nice actions and states (e.g. Lie groups) make the
problem tractable.
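The type mismatch is easy to make concrete. A minimal Python sketch, with a made-up two-state "toggle" dynamics (the names and probabilities are mine, purely illustrative): the deterministic step has the semigroup-action shape \(S \times A \to S\), the stochastic step lands in \(\mathrm{Dist}(S)\) instead, and you recover composability only by lifting the dynamics to act on distributions.

```python
from typing import Dict

State = str
Action = str
Dist = Dict[State, float]  # probability mass over successor states


def det_step(s: State, a: Action) -> State:
    # Deterministic dynamics: S x A -> S, so actions compose
    # like a semigroup action on states.
    return "on" if (a == "toggle" and s == "off") else "off"


def stoch_step(s: State, a: Action) -> Dist:
    # Stochastic dynamics: S x A -> Dist(S). The output type is no
    # longer S, so you can't feed it straight back in.
    if a == "toggle":
        return {"on": 0.9, s: 0.1} if s == "off" else {"off": 0.9, s: 0.1}
    return {s: 1.0}


def push_forward(d: Dist, a: Action) -> Dist:
    # Lifting to Dist(S) x A -> Dist(S) restores composability:
    # the actions now act on distributions over states, not on states.
    out: Dist = {}
    for s, p in d.items():
        for s2, q in stoch_step(s, a).items():
            out[s2] = out.get(s2, 0.0) + p * q
    return out
```

The lift in `push_forward` is just composition of Markov kernels, which is why the probabilistic dynamics only act honestly at the level of distributions.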

Terry Tao’s advice feels real right now, as I made this list of topics
I’d need to learn to even begin to know *how* to formalize the fuzzy
ideas in my head:

- Differential geometry
- Dynamical systems
- Random dynamical systems
- Group theory
- Lie theory
- Automata theory
- Category theory

That list would take at least a year of solid work on nothing else, and
that’s for *one* random idea. Having thought that over, I’m not so
worried about someone stealing it. I need all the help I can get.