book: The Theory That Would Not Die

I’ve postponed this review for a month because, well, it’s a bad book. I spent time and effort to be certain of this–re-reading for unforgivably smug and fatuous tone, checking my many notes for the author’s obvious mistakes, and worst of all finding nearly every page of this book crammed with evidence of her utterly missing the very point of her subject. As though she were proud of her errors.

So this is a glib, pretentious, counterproductive book. A bad book–there, I wrote it. Now let’s get this review over with.

We begin with a thin sampling from her galaxy of goofs:

  • (p. 7) “…a thought experiment, a 1700s version of a computer simulation.” Good grief.
  • (p. 13) “Laplace, the future Einstein of his age”–what could such breathless fuzz mean? Beyond which: Einstein’s math skills certainly weren’t up to Laplace’s.
  • (p. 14) the gravitational pulls of Jupiter and Saturn on Halley’s comet make for a four-body problem, not a three-body problem (she forgot…the sun. Oops.).

That’s just to page 14. The author misunderstands false positives (p. 227), and she woefully confuses probabilities and utilities (pp. 103, 145, & 230-48), Bayesian chains and neural networks (p. 250), and weights and probabilities in maximum likelihood (p. 92)…I could recite perhaps 50 more goofs, but you get the idea.

The author casts 200 years of difficult conceptualization and experimentation–the hardest thought-work that exists–as some Jerry Springer food fight between two well-defined cadres of statisticians. Now, across 200 years and thousands of scientists, naturally there were insults. But the question of when and how to use Bayesian prior probability distributions is far from a merely social or academic one. Bayes’ Law describes everything you think about the world.

That is no exaggeration, and here’s why: If you responsibly set your prior probability of some given outcome at anything strictly between 0% and 100%–that is, if you don’t pretend to know something you don’t–then even if your priors start out very wrong, sufficient relevant evidence will eventually overwhelm them, and everyone given the same evidence will see their posterior distributions (what they believe) on a given subject converge to about the same thing, with more agreement as evidence builds. Which is pretty much the history of science. But it requires that you set your priors strictly between 0% and 100%; otherwise no evidence matters. You hold a coin. Is the coin fair? Before flipping it repeatedly to find out, I’d best set my prior probability of its coming up heads to 50%. In this case and always, the key is not to pretend I know something I don’t know, something I cannot know without evidence. You can use your (unreliable) priors, but only as long as lack of evidence allows it.

But if instead you set your prior belief to 0% or to 100%, then no amount of evidence can ever change your belief. That’s not a definition, or an opinion, or a convention. It’s just math.
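The two claims above–honest priors converge, dogmatic priors never move–can be checked numerically. Here is a minimal sketch of my own, not anything from the book: two hypotheses (a fair coin vs. one hypothetically weighted 90% heads), an invented record of 90 heads in 100 flips, and Bayes’ rule applied flip by flip.

```python
def update(prior_biased, flips, p_heads_fair=0.5, p_heads_biased=0.9):
    """Update P(coin is biased) flip by flip using Bayes' rule."""
    p = prior_biased
    for heads in flips:
        like_biased = p_heads_biased if heads else 1 - p_heads_biased
        like_fair = p_heads_fair if heads else 1 - p_heads_fair
        numer = like_biased * p
        p = numer / (numer + like_fair * (1 - p))
    return p

flips = [True] * 90 + [False] * 10  # 90 heads in 100 flips: strong evidence of bias

# Honest priors, even wildly different ones, converge on about the same posterior:
for prior in (0.01, 0.5, 0.99):
    print(f"prior {prior:>4}: posterior {update(prior, flips):.6f}")

# A dogmatic prior of exactly 0 (or 1) never moves, no matter the evidence:
print(f"prior  0.0: posterior {update(0.0, flips):.6f}")  # stays exactly 0
```

After 100 flips, any prior strictly inside (0%, 100%) lands in essentially the same place; the prior of exactly 0 is immune to all of it. Which is the whole point.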

And right there is the real controversy about Bayes’ Law, the one the author misses and misses and misses, as though she means to. There’s no point in detailing tabloid-worthy bits of personal history when you don’t understand the theory of your book’s title. The real controversy is not between one academic statistical approach and another, as this book would have you believe. Pay attention–the controversy is whether blind faith is permissible or not. Whether it is legitimate–or even sane–to insist that one is necessarily right, no matter what evidence to the contrary. Whether one should strive for Minimally Informative Priors, or strive to protect at all costs (and, worse, to propagate) whatever you were taught as a child, whatever your social group or political party says, or whatever simply comforts you.

This point is far from new. Had the author bothered to actually consider what she wrote about, she would have come across hundreds of smarter people who have explained it patiently, as Thomas Huxley did: “Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads you, or you shall learn nothing.” I can’t imagine a better lay explanation of Bayes’ Law’s importance. Except perhaps for the numerous warnings against clinging to priors, probably mankind’s primary failing. As warned by Bertrand Russell: “The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.” Farther back, Emerson: “A sect or a party is an elegant incognito, devised to save a man from the vexation of thinking.” Or must I take you back to Aristotle?–“It is the mark of an educated mind to be able to entertain a thought without accepting it.”

Science’s rich tradition of facing squarely the potential conflicts between what we (at first) expect and what we then measure–this tradition is all available to the author and to everyone. But the author ignored all this in favor of her own brand of sloppy daytime-TV journalism. Thus: I do not forgive this author. And it’s hard to understand Yale University Press’s decision to include this mess under its mark.

So–One Star for getting their spelling right, I guess, and for random bits of personal history that amuse harmlessly. But, Gentle Reader, you’re advised to avoid this book lest you too risk forfeiting a month’s mental effort to bulldoze its continents of crap from your cranium. You’re welcome.

“The Theory That Would Not Die”, Sharon Bertsch McGrayne, 2011, Yale University Press.
