This is an opinionated and incomplete book. These are sloppy notes.

I nearly cut my finger off a week ago and it’s hard to type with 9 fingers. Jaynes died midway through writing his book, and it’s hard to write with 0 lives. So we both have excuses.

## Dedication

Dedicated to the memory of Sir Harold Jeffreys, who saw the truth and preserved it.

“Saw the truth and preserved it”. I like this quote.

## Preface

This section is perhaps the most interesting in the book, and that’s not an insult. It concisely describes a fascinating idea:

Probability theory is the extension of logic to uncertainty.

As a slogan: logic is probability theory where all probabilities are 0 or 1.
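A quick sanity check of the slogan (my own toy example, not from the book): when every probability is 0 or 1, the product and sum rules collapse to Boolean AND and OR.

```python
# When all probabilities are 0 or 1, probability theory is just logic:
# the product rule computes AND, the sum rule computes OR.
from itertools import product

def p_and(pa, pb_given_a):
    return pa * pb_given_a      # product rule: P(AB) = P(A) P(B|A)

def p_or(pa, pb, pab):
    return pa + pb - pab        # sum rule: P(A+B) = P(A) + P(B) - P(AB)

for a, b in product([0, 1], repeat=2):
    assert p_and(a, b) == (a and b)
    assert p_or(a, b, a * b) == (a or b)
```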

## Vendettas

Jaynes does not seem to like infinity. This is a recurring theme. Don’t mention the Axiom of Choice or nonmeasurable sets around him.

As a mathematician, I counter with: No one shall expel us from the Paradise that Cantor has created.

## Chapter 1

The takeaway is that probability quantifies how strongly the converse of an implication is supported.

A can imply B, but that does not mean B implies A. On the other hand, if A implies B, then B being true can't make A less likely to be true than if B were false.

So $A \implies B$ gives a fuzzy sort of reverse implication. This fuzziness is where probability comes into the picture. It lacks certainty, but often certainty is too much to ask for, especially in real life. We'll see later that Bayes' rule gives us this sort of reverse fuzzy modus ponens.
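The fuzzy reverse implication can be checked numerically. In this sketch (illustrative numbers of my own, not from the book), $A \implies B$ means $P(B \mid A) = 1$, and Bayes' rule then shows that observing B can only raise, never lower, the plausibility of A.

```python
# If A implies B, then P(B|A) = 1, and Bayes' rule gives the
# "reverse fuzzy modus ponens": observing B makes A at least as plausible.
p_a = 0.3                    # prior P(A), chosen arbitrarily
p_b_given_a = 1.0            # A implies B
p_b_given_not_a = 0.6        # B can still happen without A

# Total probability, then Bayes' rule.
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
p_a_given_b = p_a * p_b_given_a / p_b

assert p_a_given_b >= p_a    # B true can't make A less likely
```

Lowering `p_b_given_not_a` (B rarely happens without A) pushes `p_a_given_b` closer to 1, which matches the intuition that a surprising consequence is stronger evidence.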

Jaynes lists out a bunch of desired principles of logical thinking that are each relatively reasonable to accept. He will later use this to set up the axioms of probability theory as consequences of this alternative foundation.

This is nice because the Kolmogorov axioms are simple, but less intuitive. These principles are more complicated but more intuitive.

### Boolean Algebra

This section introduces notation used in the rest of the book, so don’t skip it completely.

## Chapter 2

This is full of sloppy writing.

I think a math book can either be a monograph that covers everything with sometimes-painful technical rigor, or it can teach you the core ideas. Either way, it should pick one approach and stick with it; otherwise it's too easy to get lost in the details.

This chapter walks through a convoluted functional equation to derive the product rule of probability.

The insistence on writing everything as a conditional probability for philosophical reasons is nice in theory, but is a pain to read.

You can safely skip the derivation. I promise it will grant no insight, and it’ll make you want to die while reading it.

It’s also half-assed. It says it wants to do everything from first principles to show how general these principles of plausible reasoning are, then goes and tacks on assumptions like differentiability because otherwise the proof is apparently 11 pages long. Then why waste my time with 6 pages that didn’t teach me anything about plausible reasoning?

Takeaway: Bayes’ rule is fuzzy reverse modus ponens. The product rule is the fundamental rule of how to chain implications.
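The chaining can be made concrete with a minimal numeric check (my own example): the product rule applied twice factors a joint probability into a chain of conditionals.

```python
# Product rule applied twice: P(ABC) = P(A|BC) P(B|C) P(C).
# The numbers are arbitrary; the point is the chaining structure.
p_c = 0.5
p_b_given_c = 0.4
p_a_given_bc = 0.25

p_abc = p_a_given_bc * p_b_given_c * p_c
assert abs(p_abc - 0.05) < 1e-12
```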

### Principle of Indifference

This is derived as a special case of maximum entropy from symmetry equations.
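A quick numeric sanity check of the indifference-from-maximum-entropy idea (my construction, not Jaynes' derivation): among distributions over n outcomes, none beats the uniform one in Shannon entropy.

```python
# Sample random distributions over n outcomes and confirm that the
# uniform distribution has the maximum entropy, log(n).
import math
import random

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

n = 5
uniform = [1 / n] * n

random.seed(0)
for _ in range(1000):
    raw = [random.random() for _ in range(n)]
    total = sum(raw)
    ps = [r / total for r in raw]
    assert entropy(ps) <= entropy(uniform) + 1e-12
```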

In 2.6, Jaynes starts ranting against the use of infinite sets that are not derived from finite ones via some limiting process. You do you, man.

This part bugged me:

> In this example, the undecidability is not an inherent property of the proposition or the event; it signifies only the incompleteness of our own information. But this is equally true of abstract mathematical systems; when a proposition is undecidable in such a system, that means only that its axioms do not provide enough information to decide it. But new axioms, external to the original set, might supply the missing information and make the proposition decidable after all.
>
> In the future, as science becomes more and more oriented to thinking in terms of information content, Gödel's result will be seen as more of a platitude than a paradox. …
>
> These considerations seem to open up the possibility that, by going into a wider field by invoking principles external to probability theory, one might be able to prove the consistency of our rules. At the moment, this appears to us to be an open question.

The Continuum Hypothesis's truth is independent of ZFC. There are all sorts of reasonable-sounding axiom systems that make it true, and others that make it false. So this is hardly a platitude.

And by Gödel's second incompleteness theorem, if you extend a system to prove the original system consistent, the new system cannot prove its own consistency. Further extensions run into the same issue, forever.

### Venn Diagrams

Venn diagrams are a useful tool, and Jaynes gives a convoluted reason to avoid them that doesn’t convince me.

### Kolmogorov axioms

Jaynes argues against interpreting probabilities in terms of sets, which is essentially the measure-theoretic foundation. He tells the reader to look at Appendix A if they’ve studied probability before.

## Appendix A

Jaynes gives an excellent reason for why probabilities should sum to 1, and not just any finite number.

He also makes a good case that Kolmogorov handles conditional probability awkwardly, something I’ve thought too.

## Comparability

The requirement that degrees of plausibility be represented by real numbers can be replaced by any totally ordered set (at least for finitely many propositions).

The totality of a total order may feel unnecessary to some, so Jaynes looks at weakening it to partial orders and lattices.

The cool idea to me was that as your knowledge grows, your lattice collapses to a line.