An Image is Worth 16x16 Words


A transformer gives you a function \(F: [X] \to [Y]\): a map from a sequence of \(X\)s to a sequence of \(Y\)s. Anything that can be totally ordered into a sequence can be learned, so it's pretty general.
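In typing terms, that's just a sequence-to-sequence map. A tiny gloss in Python (my own notation, not anything from the paper):

```python
from typing import Callable, Sequence, TypeVar

X = TypeVar("X")  # input token type: word embedding, image patch, amino acid, ...
Y = TypeVar("Y")  # output token type: next-token logits, class scores, ...

# Up to tokenization and embedding details, a transformer is a learned
# function from a sequence of X's to a sequence of Y's.
Transformer = Callable[[Sequence[X]], Sequence[Y]]
```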

To me, this paper is more evidence for the transformer as a universal architecture. Some other (famous) examples:

  • AlphaFold
  • GPT
  • BERT

The Idea

Take an image (and its target label during training), cut it into patches, embed each patch, then run a transformer on the resulting sequence of patch embeddings. It's that simple.
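Roughly what that looks like in code — a minimal sketch of my own in PyTorch, not the paper's released implementation (the name TinyViT and the hyperparameters are just illustrative):

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=768,
                 depth=12, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # "Embed each patch": a Conv2d with kernel = stride = patch_size
        # is exactly a per-patch linear projection.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images):                  # images: (B, 3, H, W)
        x = self.patch_embed(images)            # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)        # (B, num_patches, dim) -- the "sequence"
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                     # plain transformer over the patch sequence
        return self.head(x[:, 0])               # classify from the class token

# Small configuration just to show the shapes flow through.
logits = TinyViT(dim=192, depth=2, heads=3)(torch.randn(2, 3, 224, 224))  # -> (2, 1000)
```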

I wonder whether different sequentialization strategies would work better or worse (e.g., rasterizing top-to-bottom instead of left-to-right). Would ordering the patches randomly give noticeably different performance?

Bitter Lesson

From the paper: "We find that large scale training trumps inductive bias."

I gotta get more compute.
