Spent another fun afternoon on the BART.
What I Learned
Myelin is apparently important.
Chicken salad is pretty good if you make your own mayonnaise (learned this at home, not on the train).
Proved a bunch of stuff using the tensor and exterior products. To anyone interested in getting deep into machine learning: a bit of category theory and abstract algebra will serve you far better than you might think. It’ll also save you from the tyranny of indices and that damned Einstein notation.
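To make the index-free point concrete, here’s a minimal sketch (my own toy example, not from any proof above): the wedge product of two vectors as the antisymmetrized tensor product, u ∧ v = u ⊗ v − v ⊗ u, which in R³ recovers the cross product without ever writing an epsilon symbol.

```python
import numpy as np

def wedge(u, v):
    # Antisymmetrized tensor product: (u ∧ v)_{ij} = u_i v_j - u_j v_i.
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
W = wedge(u, v)

# Antisymmetry: swapping arguments flips the sign.
assert np.allclose(wedge(v, u), -W)

# The three independent components of u ∧ v in R^3 are the cross product.
cross = np.array([W[1, 2], W[2, 0], W[0, 1]])
assert np.allclose(cross, np.cross(u, v))
```

The asserts are the whole point: the algebraic identities do the bookkeeping that indices would otherwise do by hand.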
Read a little bit of robotics and was struck by how Clifford algebras weren’t mentioned at all, even though they look like they’d be damned useful.
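A hint of why they seem useful, sketched with numbers I made up: the even subalgebra of Cl(3,0) is the quaternions, and a rotor rotates a vector by the sandwich product v → R v R⁻¹, the same rotations robotics texts usually express with matrices.

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, theta):
    # Rotor R = cos(θ/2) + sin(θ/2) * (unit axis); apply v -> R v R~.
    axis = axis / np.linalg.norm(axis)
    R = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    R_rev = R * np.array([1, -1, -1, -1])  # conjugate (reverse) rotor
    return qmul(qmul(R, np.concatenate([[0.0], v])), R_rev)[1:]

# Rotating x-hat by 90 degrees about z gives y-hat.
out = rotate(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), np.pi / 2)
assert np.allclose(out, [0.0, 1.0, 0.0])
```

No rotation matrices anywhere, and composing rotations is just multiplying rotors.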
Givental’s linear algebra course kicked my ass, but I use more and more of it every year, from quadratic forms to flags.
Came up with my own example of a functor that helped me internalize that objects and morphisms in categories aren’t sets and functions. They can map to sets and functions, but they’re really their own thing. Hammering this into my head has taken a while (3 years since I first saw a commutative diagram).
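A sketch in the same spirit (not the exact example I came up with): view the integers under addition as a one-object category whose morphisms are the integers themselves, with composition given by addition. A functor out of it into sets-and-functions sends each integer to an actual function, here a cyclic rotation of tuples. The morphisms are just integers, not functions; the functor is what turns them into functions.

```python
def F(n):
    """Send the morphism n (an integer) to a genuine function on tuples."""
    def rotate(t):
        k = n % len(t)
        return t[k:] + t[:k]
    return rotate

t = ("a", "b", "c")

# Functoriality: composition of morphisms (addition) goes to
# composition of functions.
assert F(2 + 1)(t) == F(2)(F(1)(t))

# The identity morphism 0 goes to the identity function.
assert F(0)(t) == t
```

The checks above are exactly the two functor laws, which is what makes this more than a random assignment of functions to numbers.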
Reminded myself why repeatedly applying the Bellman operator works: discounting shrinks the contribution of the old value estimate with each application, so the estimate draws more and more from the actual rewards until it converges to the true value function for a policy. Policy iteration then improves the policy greedily once you’ve evaluated its value function; value iteration interleaves the two.
The notation in Lynch and Park’s Modern Robotics is ugly as hell. Take a hint from computer science and use slice notation for long sequences. Or at least use a few more summation signs to avoid cluttering half a page with unreadable equations.
The fact that dope vector is a technical term is pretty dope.