An Image is Worth 16x16 Words


A transformer gives you a function \(F: [X] \to [Y]\), mapping a sequence of inputs to a sequence of outputs. Anything whose parts can be totally ordered into a sequence can be learned this way. So it’s pretty general.

To me, this paper is more evidence for the transformer as a universal architecture. Some other (famous) examples:

  • AlphaFold
  • GPT
  • BERT

The Idea

Take an image (and its label, during training), cut it into patches, embed each patch, then run a transformer on the resulting sequence of patch embeddings. It’s that simple.
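
Concretely, the patch-and-embed step looks something like the minimal numpy sketch below. The 16x16 patch size and 768-dimensional tokens match the paper’s base config, but the patchify helper, W_embed, and the random projection are illustrative assumptions, not the paper’s code:

    import numpy as np

    def patchify(image, patch=16):
        # Split an HxWxC image into (H/patch)*(W/patch) flattened patches,
        # in raster order: left to right, top to bottom.
        H, W, C = image.shape
        assert H % patch == 0 and W % patch == 0
        x = image.reshape(H // patch, patch, W // patch, patch, C)
        x = x.transpose(0, 2, 1, 3, 4)               # (H/p, W/p, p, p, C)
        return x.reshape(-1, patch * patch * C)      # (num_patches, p*p*C)

    rng = np.random.default_rng(0)
    d_model = 768                                    # token width for the transformer
    W_embed = 0.02 * rng.normal(size=(16 * 16 * 3, d_model))  # learned in practice

    image = rng.random((224, 224, 3))                # stand-in for a real image
    tokens = patchify(image) @ W_embed               # (196, 768): a sequence of 196 tokens
    # Prepend a class token, add position embeddings, and feed a standard transformer.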

I wonder if different sequentialization strategies would work better or worse, e.g. rasterizing up/down instead of left/right. Would doing it randomly give noticeably different performance?
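
The variants are easy to state precisely; the sketch below (numpy again, with patch indices standing in for the patch embeddings) just shows how the orderings differ. Whether any of them trains better is the empirical question:

    import numpy as np

    # A 224x224 image with 16x16 patches gives a 14x14 grid of patches.
    rng = np.random.default_rng(0)
    grid = np.arange(14 * 14).reshape(14, 14)

    left_right = grid.reshape(-1)            # usual raster: left-to-right, top-to-bottom
    up_down = grid.T.reshape(-1)             # rasterize down the columns instead
    shuffled = rng.permutation(left_right)   # a fixed random order

    # Self-attention itself is permutation-equivariant; the order only enters
    # through which position embedding each patch gets paired with.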

Bitter Lesson

From the paper: “We find that large scale training trumps inductive bias.”

I gotta get more compute.
