GPT-f


My main thought about papers like this is how they affect my own future lines of inquiry. They make me lean further into “cleverness” areas, since I currently lack the compute for this style of work. I expect other researchers have run it through their own internal calculus too.

This is my first actual look at Metamath, and while the website’s “dependency graph” feature is really cool, I’m horrified that humans write in this.

The bit about bootstrapping is like catnip. From the paper:

“We demonstrate that iteratively training a value function on statements generated by our language model leads to improved prover performance, which immediately suggests a strategy for continuous self improvement: keep training on proofs generated by the prover.”
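
To make that concrete for myself, here is a minimal sketch of what the loop amounts to. Everything below is hypothetical scaffolding named for illustration (the `search` and `finetune` callables, the retrain-on-everything choice), not the paper’s actual pipeline, which alternates proof search against Metamath with retraining the policy and value function on the proofs it finds:

```python
from typing import Callable, List


def bootstrap_prover(
    search: Callable,    # hypothetical: (statement, policy, value) -> proof or None
    finetune: Callable,  # hypothetical: (model, proof_dataset) -> updated model
    policy,              # language model proposing proof steps
    value,               # value function scoring open goals
    statements: List[str],
    rounds: int = 5,
):
    """Sketch of the 'keep training on proofs generated by the prover' loop."""
    dataset: List = []
    for r in range(rounds):
        # 1. Attempt proofs of the statement pool, guided by the current
        #    policy and value function; keep only the successful attempts.
        new_proofs = [p for s in statements if (p := search(s, policy, value)) is not None]
        dataset.extend(new_proofs)

        # 2. Retrain on everything the prover has found so far. The value
        #    function in particular should improve, since it now sees which
        #    intermediate statements actually led to completed proofs.
        policy = finetune(policy, dataset)
        value = finetune(value, dataset)

        print(f"round {r}: {len(new_proofs)} new proofs, {len(dataset)} total")
    return policy, value
```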
