Archive for April, 2020

Mr. Spock, Pseudo-scientist

Wednesday, April 15th, 2020

I’m one of those aging folks who still remember the original run of Star Trek (no colon, no The Original Series or any other kind of elaboration — just Star Trek). It was a groundbreaking show, and whether you like it or not (there are plenty of reasons to do both), it held out a positive vision for the future, and sketched a societal ethos that was not entirely acquisitive, and not even as secular and materialistic as later outings in the Star Trek franchise. The officers of the Enterprise were not latter-day conquistadors. They were genuine explorers, with a Prime Directive to help them avoid destroying too many other nascent cultures. (Yes, I know: they violated it very frequently, but that was part of the point of the story. Sometimes there was even a good reason for doing so.)

It also offered the nerds among us a point of contact. Sure, Captain Kirk was kind of a cowboy hero, galloping into situations with fists swinging and phasers blazing, and, more often than not, reducing complex situations to polar binaries and then referring them either to fisticuffs or an outpouring of excruciatingly impassioned rhetoric. Dr. McCoy, on the other hand, was the splenetic physician, constantly kvetching about everything he couldn’t fix, and blaming people who were trying to work the problem for not being sensitive enough to be as ineffectual as he was. But Mr. Spock (usually the object of McCoy’s invective) was different. He was consummately cool, and he relied upon what he called Logic (I’m sure it had a capital “L” in his lexicon) for all his decision-making. He was the science officer on the Enterprise, and also the first officer in the command structure. Most of the more technically savvy kids aspired to be like him.

It was an article of faith that whatever conclusions Spock reached were, because he was relying on Logic, logical. They were the right answer, too, unless this week’s episode was explicitly making a concession to the value of feelings over logic (which happened occasionally, but not often enough to be really off-putting), and they could be validated by science and reason. You can’t argue with facts. People who try are doomed to failure, and their attempt is at best a distraction, and often worse. 

Up to that point, I am more or less on board, though I was always kind of on the periphery of the nerd cluster, myself. I suspected then (as I still do) that there are things that logic (with an upper-case or a lower-case L) or mathematics cannot really address. Certainly not everything is even quantifiable. But it was the concept of significant digits that ultimately demolished, for me, Mr. Spock’s credibility as a science officer. When faced with command decisions, he usually did reasonably well, but when pontificating on mathematics, he really did rather badly. (Arguably he was exactly as bad at it as some of the writers of the series. Small wonder: see the Sherlock Holmes Law, which I’ve discussed here previously.)

The concept of significant digits (or figures) is really a simple one, though its exact specifications involve some fussy details. Basically it means that you can’t make your information more accurate merely by performing arithmetic on it. (It’s more formally explained here on Wikipedia.) By combining a number of things that you know only approximately and doing some calculations on them, you’re not going to get a more accurate answer: you’re going to get a less accurate one. The uncertainty of each of those terms or factors will increase the uncertainty of the whole.
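
To make the idea concrete, here is a minimal sketch (in Python, with entirely made-up measurements) of the worst-case rule of thumb for a product: the relative uncertainties of the factors roughly add, so the result is known less precisely than either input, no matter how many decimal places the arithmetic spits out.

    # Rough worst-case uncertainty propagation for a product of two measurements.
    # All numbers here are hypothetical, chosen only to illustrate the idea.
    length, length_err = 12.3, 0.5    # measured, +/- 0.5 (about 4% relative)
    width,  width_err  = 4.7, 0.2     # measured, +/- 0.2 (about 4% relative)

    area = length * width                               # 57.81 -- looks precise...
    rel_err = length_err / length + width_err / width   # ~8.3% for the product
    abs_err = area * rel_err

    print(f"area = {area:.2f} +/- {abs_err:.1f}")
    # -> roughly "area = 57.81 +/- 4.8": only the first two digits mean anything,
    #    so reporting 57.81 overstates what we actually know.

(The proper statistical treatment combines the relative uncertainties in quadrature, but the moral is the same: arithmetic cannot conjure precision that the measurements never had.)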

So how does Spock, for all his putative scientific and logical prowess, lose track of this notion, essential to any kind of genuine scientific thinking? In the first-season episode “Errand of Mercy”, he has a memorable exchange with Kirk: 

Kirk: What would you say the odds are on our getting out of here?

Spock: Difficult to be precise, Captain. I should say approximately 7,824.7 to 1.

Kirk: Difficult to be precise? 7,824 to 1?

Spock: 7,824.7 to 1.

Kirk: That’s a pretty close approximation.

Spock: I endeavor to be accurate.

Kirk: You do quite well.

No, he doesn’t do quite well. He does miserably: he has assumed in his runaway calculations that the input values on which he bases this fantastically precise number are known to levels of precision that could not possibly be ascertained in the real world, especially in the middle of a military operation — even a skirmish in which all the participants and tactical elements are known in detail (as they are not here).  The concept of the “fog of war” has something to say about how even apparent certainties can quickly degrade, in the midst of battle, into fatal ignorance. Most of the statistical odds for this kind of thing couldn’t be discovered by any rational means whatever.

Precision and accuracy are not at all the same thing. Yes, you can calculate arbitrarily precise answers based on any data, however precise or imprecise the data may be. Beyond the range of its significant digits, however, this manufactured precision is worse than meaningless: it conveys fuzzy knowledge as if it were better understood than it really is. It certainly adds nothing to the accuracy of the result, and only a terrible scientist would assume that it did. Spock’s answer is more precise, therefore, than “about 8000 to one”, but it’s less accurate, because it suggests that the value is known to a much higher degree of precision than it possibly could be. Even “about 8000 to one” is probably not justifiable, given what the characters actually know. (It’s also kind of stupid, in the middle of a firefight, to give your commanding officer gratuitously complex answers to simple questions: “Exceedingly poor,” would be more accurate and more useful.)
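
We are never told how Spock arrives at his figure, so any reconstruction is pure invention; but a toy calculation (hypothetical formula and inputs throughout) shows why a decimal place of precision is unearned when the inputs are rough guesses:

    # Toy illustration of spurious precision. The function and its inputs are
    # entirely hypothetical, standing in for whatever Spock thinks he is computing.
    def odds_against(garrison, patrol_rate, luck):
        return garrison * patrol_rate / luck   # an arbitrary made-up formula

    base = odds_against(500, 47.0, 3.0)    # -> 7833.3...
    low  = odds_against(450, 45.0, 3.2)    # the same inputs nudged by ~5-10%
    high = odds_against(550, 49.0, 2.8)

    print(f"{base:.1f}  (plausible range {low:.0f} to {high:.0f})")
    # -> about 7833.3, but the range runs from roughly 6300 to 9600:
    #    quoting the answer to one decimal place is precision without accuracy.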

This has not entirely escaped the fan community, of course: “How many Vulcans does it take to change a lightbulb?” is answered with, “1.000000”. This is funny, because it is, for all its pointless precision, no more accurate than “one”, and in no situations would fractional persons form a meaningful category when it comes to changing light bulbs. (Fractional persons might be valid measurements in other contexts — for example, in a cannibalistic society. Don’t think about it too hard.) 

Elsewhere in the series, too, logic is invoked as a kind of deus ex machina — something to which the writer of the episode could appeal to justify any decision Mr. Spock might come up with, irrespective of whether it was reasonable or not. Seldom (I’m inclined to say never, but I’m not going to bother to watch the whole series over again just to verify the fact) are we shown even a single logical operation actually at work.

The structures of deductive reasoning (logic’s home turf) seldom have a great deal to do with science, in any case. Mathematical procedures are typically deductive. Some philosophical disciplines, including traditional logic, are too. Physical science, however, is almost entirely inductive. In induction, one generalizes tentatively from an accumulation of data; such collections of data are seldom either definitive or complete. Refining hypotheses as new information comes to light is integral to the scientific process as it’s generally understood. The concept of significant digits is only one of those things that help optimize our induction.

Odds are a measure of ignorance, not knowledge. They do not submit to purely deductive analysis. For determinate events, there are no odds. Something either happens or it doesn’t, Mr. Spock notwithstanding. However impossibly remote it might have seemed yesterday, the meteorite in your back yard containing a message from the Great Pumpkin written in Old Church Slavonic now has a probability of 100% if it actually landed there. If it didn’t, its probability is zero. There are no valid degrees between the two.

Am I bashing Star Trek at this point? Well, maybe a little. I think they had an opportunity to teach an important concept, and they blew it. It would have been really refreshing (and arguably much more realistic) to have Spock occasionally say, “Captain, why are you asking me this? You know as well as I do that we can’t really know that, because we have almost no data,” or “Well, I can compute an answer of 28.63725, but it has a margin of error in the thousands, so it’s not worth relying upon.” Obviously quiet data-gathering is not the stuff of edge-of-the-seat television. I get that. But it’s what the situation really would require. (Spock, to his credit, often says, “It’s like nothing we’ve ever seen before,” but that’s usually just prior to his reaching another unsubstantiated conclusion about it.)

I do think, however, that the Star Trek promotion of science as an oracular fount of uncontested truth — a myth that few real scientists believe, but a whole lot of others (including certain scientistic pundits one could name) do believe — is actively pernicious. It oversells the legitimate prerogatives of science and thereby undercuts them, and in the long run it undermines our confidence in what science actually can do well. There are many things in this world that we don’t know. Some of the things we do know are even pretty improbable. Some very plausible constructs, on the other hand, are in fact false. I’m all in favor of doing our best to find out, and of relying on logical inference where it’s valid, but it’s not life’s deus ex machina. At best, it’s a machina ex Deo: the exercise of one — but only one — of our God-given capacities. Like most of them, it should be used responsibly, and in concert with the rest.

The Sherlock Holmes Law

Friday, April 3rd, 2020

I rather like Arthur Conan Doyle’s Sherlock Holmes stories. I should also admit that I’m not a hard-core devotee of mysteries in general. If I were, I probably would find the frequent plot holes in the Holmes corpus more annoying than I do. I enjoy them mostly for the period atmosphere, the prickly character of Holmes himself, and the buddy-show dynamic of his relationship with Doctor Watson. To be honest, I’ve actually enjoyed the old Granada television Holmes series with Jeremy Brett at least as much as I have enjoyed reading the original works. There’s more of the color, more of the banter, and less scolding of Watson (and implicitly the reader) for not observing the one detail in a million that will somehow eventually prove relevant.

Irrespective of form, though, the Holmes stories have helped me articulate a principle I like to call the “Sherlock Holmes Law”, which relates to the presentation of fictional characters in any context. In its simplest form, it’s merely this:

A fictional character can think no thought that the author cannot.

This is so obvious that one can easily overlook it, and in most fiction it rarely poses a problem. Most authors are reasonably intelligent — most of the ones who actually see publication, at least — and they can create reasonably intelligent characters without breaking the credibility bank. 

There are of course some ways for authors to make characters who are practically superior to themselves. Almost any writer can extrapolate from his or her own skills to create a character who can perform the same tasks faster or more accurately. Hence though my own grasp of calculus is exceedingly slight, and my ability to work with the little I do know is glacially slow, I could write about someone who can look at an arch and mentally calculate the area under the curve in an instant. I know that this is something one can theoretically do with calculus, even if I’m not able to do it myself. There are well-defined inputs and outputs. The impressive thing about the character is mostly in his speed or accuracy. 

This is true for the same reason that you don’t have to be a world-class archer to describe a Robin Hood who can hit the left eye of a gnat from a hundred yards. It’s just another implausible extrapolation from a known ability. As long as nobody questions it, it will sell at least in the marketplace of entertainment. Winning genuine credence might require a bit more.

Genuinely different kinds of thinking, though, are something else. 

I refer this principle to the Holmes stories because, though Mr. Holmes is almost by definition the most luminous intellect on the planet, he’s really not any smarter than Arthur Conan Doyle, save in the quantitative sense I just described. Doyle was not a stupid man, to be sure (though he was more than a little credulous — apparently he believed in fairies, based on some clearly doctored photographs). But neither was he one of the rare intellects for the ages. And so while Doyle may repeatedly assure us (through Watson, who is more or less equivalent to Doyle himself in both training and intelligence) that Holmes is brilliant, what he offers as evidence boils down to his ability to do two things. He can:

a) observe things very minutely (even implausibly so);

and

b) draw conclusions from those observations with lightning speed. That such inferences themselves strain logic rather badly is not really the point: Doyle has the writer’s privilege of guaranteeing by fiat that they will turn out to be correct.

Time, of course, is one of those things for which an author has a lot of latitude, since books are not necessarily (or ever, one imagines) written in real time. Even if it takes Holmes only a few seconds to work out a chain of reasoning, it’s likely that Doyle himself put much more time into its formation. While that probably does suggest a higher-powered brain, it still doesn’t push into any genuinely new territory. Put in computer terms, while a hypothetical Z80 chip running at a clock speed of 400 MHz would be a hundred times faster than the 4 MHz one that powered my first computer back in 1982, it would not be able to perform any genuinely new operations. It would probably be best for running CP/M on a 64K system — just doing so really quickly.

It’s worth noting that sometimes what manifests itself chiefly as an increase in speed actually does represent a new kind of thinking. There is a (perhaps apocryphal) story about Carl Friedrich Gauss (1777-1855), who, when he was still in school, was told to add the numbers from one to a hundred as punishment for some classroom infraction or other. As the story goes, he thought about it for a second or two, and then produced the correct result (5050), much to the amazement of his teacher. Gauss had achieved his answer not by adding all those numbers very rapidly, but by realizing that if one paired and added the numbers at the ends of the sequence, moving in toward the center, one would always get 101: i.e., 100 + 1 = 101; 99 + 2 = 101; and so on. There would then be fifty such pairs — hence 50 x 101: 5050.
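
For the curious, the same pairing argument yields the general formula n(n+1)/2 for the sum of the first n whole numbers; a few lines of Python (purely illustrative) make the trick explicit for Gauss’s case:

    # The pairing trick attributed to Gauss: 1..100 folds into 50 pairs of 101.
    pairs = [(k, 101 - k) for k in range(1, 51)]     # (1,100), (2,99), ..., (50,51)
    assert all(a + b == 101 for a, b in pairs)

    total = 50 * 101                   # 5050, with no long chain of additions
    assert total == sum(range(1, 101))
    print(total)                       # -> 5050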

A character cannot produce that kind of idea if the author doesn’t understand it first. It makes the depiction of superintelligent characters very tricky, and sometimes may even limit the portrayal of stupid ones who don’t think the way the rest of us do.

For readers, however, it is different. Literary works (fictional or not) can open up genuinely new kinds of ideas to readers. While a writer who has achieved a completely new way of thinking about some technical problem is less likely to expound it in fiction than in some sort of a treatise or an application with the patent office, fictional works often present ideas one has never considered before in the human arena. It need not be a thought that’s new to the world in order to be of value — it needs merely to be new to you.

Such a thought, no matter how simple it may seem once you see it, can blow away the confines of our imaginations. It’s happened to me at a few different stages in my life. Tolkien’s The Lord of the Rings awakened me when I was a teenager to something profound about the nature of language and memory. C. S. Lewis’ “The Weight of Glory” revolutionized the way I thought about other people. Tolstoy’s War and Peace laid to rest any notion I had that other people’s minds (or even my own) could ever be fully mapped. Aquinas’ Summa Theologica (especially Q. 1.1.10) transformed forever my apprehension of scripture. The list goes on, but it’s not my point to catalogue it completely here.

Where has that happened to you?