GPT-f


My main thought about papers like this is how they affect my own future lines of inquiry. They push me further toward "cleverness" areas, since I currently lack the compute for this style of research. I expect other researchers have run the paper through their own internal calculus too.

This is my first actual look at Metamath, and while the website's "dependency graph" feature is really cool, I'm horrified that humans write in this.
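For a taste of why: Metamath proofs are bare reverse-Polish sequences of labels fed to a stack machine, with every substitution spelled out. Here is `mp2` (double modus ponens), roughly as it appears in set.mm, with the proof expanded from its compressed form:

```
${
  mp2.1 $e |- ph $.
  mp2.2 $e |- ps $.
  mp2.3 $e |- ( ph -> ( ps -> ch ) ) $.
  mp2 $p |- ch $=
    wps wch mp2.2 wph wps wch wi mp2.1 mp2.3 ax-mp ax-mp $.
$}
```

Each label either pushes a hypothesis or pops arguments off the stack and applies an axiom like `ax-mp`; there's no tactic language or unification to save you keystrokes.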

The bit about bootstrapping is like catnip:

"We demonstrate that iteratively training a value function on statements generated by our language model leads to improved prover performance, which immediately suggests a strategy for continuous self improvement: keep training on proofs generated by the prover."
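The loop the quote suggests is essentially expert iteration: prove, keep what the verifier accepts, retrain, repeat. A minimal sketch, where `sample_proof`, `verify`, and `train` are hypothetical stand-ins for the prover's search, the Metamath kernel check, and a fine-tuning step (not the paper's actual API):

```python
from typing import Callable, List, Tuple


def bootstrap(
    model: object,
    statements: List[str],
    sample_proof: Callable,  # (model, statement) -> candidate proof
    verify: Callable,        # (statement, proof) -> bool, the kernel check
    train: Callable,         # (model, proof pairs) -> updated model
    rounds: int = 5,
) -> object:
    """Expert-iteration-style loop: prove, filter by the verifier, retrain."""
    for _ in range(rounds):
        found: List[Tuple[str, str]] = []
        for stmt in statements:
            proof = sample_proof(model, stmt)
            if verify(stmt, proof):          # only verified proofs are kept,
                found.append((stmt, proof))  # so new data is sound by construction
        model = train(model, found)          # retrain on self-generated proofs
    return model
```

The verifier is what makes this safe: unlike self-training on raw model output, every new training example is checked before it's reused.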
