Archive for the ‘Philosophy’ Category

The Politics of Perplexity in Twenty-First Century America

Friday, July 17th, 2020

In the context of twenty-first century America, “politics” is perhaps one of the most curiously irritating words in the English language. I know from personal experience – whether from observing others, or from paying attention to myself – that there is a visceral reflex to feel something between annoyance and disgust upon hearing the word. If politics rears its ugly head, you may think something along the lines of “I’ve had enough of that, thank you!” before rapidly extricating yourself from an unwanted intrusion into an otherwise perfect day. Alternatively, I suspect many of us know people who hear the word “politics” or some related term and can immediately launch into an ambitious lecture on what is wrong and what should be done that somehow promises (implausibly) to solve all our social, political, and economic problems in one fell legislative swoop. We’re surrounded by bitter disputes – online and on television, in print and in person – over political issues, to the extent that it can be hard to stomach contemplating (much less discussing) politics without feeling a little irritated, even disgusted, with both our neighbors and ourselves.

These powerful emotional reactions should give us some pause for reflection. In theory, if not always in practice, the United States of America is a democratic republic, ruled by representative officials in the name of its citizenry. Even without considering the matter deeply, it should be clear to us that such a government cannot function if its citizens are entirely disengaged, as radical factions across the political spectrum will be left to do the politicking on our behalf. Whether we like it or not, our nation’s political life will likely remain interested in us even if we are uninterested in return. We might as well make the best of it, and get down to the business of figuring out where, exactly, we went wrong, and what might be done to repair the damage.

Since the early twentieth century, the predominant approach to teaching American students about their form of government has been in the form of what is known as political science. This perspective is primarily (though not exclusively) concerned with educating students about the practical mechanics of their government and the political dynamics of the American electorate – in short, the branches of the United States government, their differing roles and jurisdictions, group behavioral dynamics, and so forth. All of these political institutions and phenomena are generally treated as abstractions that can be measured and predicted with some degree of accuracy using scientific methodology and data analysis.

The meaning of political science must be carefully qualified and defined. Science is derived from the Latin scientia, or knowledge. The majority of ancient, medieval, and early modern political thinkers used the term political science to refer to the study of politics as a domain of the humanities. They studied politics in light of inquiries in philosophy and history: they did not, as a general rule, conceive of the art of government as something that could be understood as an institutional abstraction that operated independently of the deepest human needs and desires (such as for law and virtue), or the eternal problems that confront every human individual and society (what are justice and truth, and how do we find them?). Above all else, classical political science aimed at cultivating self-governing (moderate) individuals who would be capable of wielding political power responsibly while refraining from tyrannical injustice. Hence, in the conclusion of Plato’s Republic, Socrates teaches Glaucon that the highest end of political science is to teach the soul to bear “all evils and all goods… and practice justice with prudence in every way” (Republic, Book X, 621c).

Modern political science operates on an entirely different basis and different assumptions about human beings and political life. It begins with the premise that human beings, like all natural things, are subject to mechanical laws that render them predictable. Once these laws are understood, the political life of human beings can be mastered and directed towards progress (understood as material comforts and technological innovation) to a degree that was never remotely possible in prior eras of human history. This view of political science emerged first among certain thinkers of the Enlightenment, and became a close companion to the development of the entire field of social science in the late nineteenth century. Both modern political and social science emerged from a common intellectual project that aimed to apply modern scientific methods and insights to the study of very nearly every aspect of human communal life – economics, social dynamics (sociology), religion, sexuality, psychology, and politics, among others.

This application of human technical knowledge to endemic social problems, economic systems, and political institutions (among other domains of human life) was expected to deliver unprecedented advances that would mirror and eventually surpass the tremendous technological and intellectual achievements of the Scientific Revolution. Max Weber, one of the most brilliant social scientists of the early twentieth century, fully expected that the complementary discoveries of both natural and social science would ensure that human “progress goes on ad infinitum.” For many intellectuals in Europe and the United States in Weber’s day, human social and political life had become like a machine that could be kept in a perpetual state of inexorable forward motion. This view remains a powerful one within certain spheres of the social sciences and the general public, and has been articulated perhaps most eloquently in the public sphere by the Harvard psychologist Steven Pinker, among others, even if it is gradually declining in popularity among the greater mass of the American citizenry.

Academically, this modern scientific approach to understanding American government had many apparent advantages that explain both its widespread acceptance and its continued influence within the academy. For one, it enabled teachers to explain the structure of U.S. government in terms of technical mechanics that most students can master intuitively, regardless of their particular political views and prejudices. Similarly, it relieves teachers and students of having to dwell on tiresome historical minutiae or obscure philosophical debates that bear no obvious relevance to contemporary issues: students can study their government based on recent experiences that are more easily comprehensible for them than those of, say, two hundred years ago. Above all else, contemporary political science treats the study of American government in utilitarian and mechanistic terms, thereby minimizing occasions for awkwardly passionate or irresolvable confrontations over thorny issues that touch on moral as well as historical and philosophical complexities. What many students will learn from this education is that the American form of government is perfectly reasonable, orderly, and balanced, with predictable mechanics that ensure its stability and perpetuity; in short, it makes sense. And not only does the American government operate like a well-oiled machine, but it also leaves individuals tremendous room to define themselves and act within an ever-expanding horizon of freedoms. Government exists mainly to resolve practical matters of policy and administration, leaving moral questions largely to the private sphere.

Many may rightly ask: if this model is true, then why does the American government function so poorly in practice? And why are Americans so remarkably inept at finding common ground for resolving pressing political issues? Indeed, there are alarming trends that should inspire us to doubt the viability of this interpretation. Polling conducted over the past decade consistently shows that Americans of all political persuasions are increasingly distrustful of both their governments and of their fellow citizens who hold opposing views. Rigid ideological voices have emerged among both liberal and conservative parties that insist that dialogue is impossible and compromise on any issue is a sign of political weakness, and that a candidate’s quality should be determined by ideological considerations rather than by competence and experience. As electoral politics have devolved into brutal slugging matches between increasingly extreme views, the actual levers of political power have gradually shifted into the hands of a theoretically subordinate but frequently unaccountable and inefficient bureaucracy.

The fruit of this widespread culture of distrust has been the breakdown of civic life and political order amid frustration and mutual recrimination throughout American society. Many are understandably frustrated with a system of government that seems unable or unwilling to fulfill its most basic functions. For that matter, generations of young Americans have now grown up in the shadow of a dysfunctional government that gives them little incentive to act as responsible and engaged citizens. It should be no wonder that there are now voices asking questions such as the following: if our current Constitution is a product of eighteenth-century political circumstances and ideals, should we not perhaps craft a new political system better adapted to our contemporary needs and values?

Perhaps these are all passing fads, and some bearable equilibrium will return in short order. I am doubtful that any such turn is likely in the near future. Recent events have shown that contemporary Americans of all political stripes are divided not merely by partisan differences over policy decisions and electoral contests, but more fundamentally by fierce disagreements over the nature of political life and American civic identity, and we are not remotely close to resolving these disputes. What does it mean to be human? What is freedom? What is justice? We have no common answers to these fundamental questions, nor do we seem (at least, as of this writing) to have a clear direction for amicably resolving these disputes in the public sphere.

Yet these disputes, however unpleasant and acrimonious, provide us with a hint of where, exactly, we may have gone wrong. Far from liberating us from antiquated concerns, our modern political education (and the novel mode of thought that created it) may lie at the heart of our perplexity. Modern political science has worked tremendous wonders in allowing us to track the chimerical shifting of public whims in opinion polls or understand the psychology of group dynamics, but it has also clouded our ability to grapple with and comprehend problems that are part of the permanent condition of our species. Political institutions and policy alone cannot solve America’s most vexing problems. And we should remember that representative government depends ultimately on the qualities of both officeholders and voters to function properly; institutions abstracted from the body politic cannot rule themselves. Our Constitution, as John Adams observed in 1798, “was made only for a moral and religious people. It is wholly inadequate to the government of any other.” Adams thought that republican government could not exist without some degree of self-government among the citizenry, or else it must devolve into a mass of petty tyrants; we are, perhaps, in the process of proving his point for him.

I suspect that the root of modern American political dissatisfaction lies not so much in our continued subjection to an apparently antiquated form of government, nor merely in our frustration with the peculiar idiocies of our political parties, but rather in our own failure to comprehend and make proper use of our form of government. In an era of change and tumult, we would do well, as the American novelist and essayist John Dos Passos put it in 1941, to “look backwards as well as forwards” as we attempt to extricate ourselves from our current political predicament. While we may face many distinctly twenty-first-century problems in certain respects, our most pressing problems – justice, love, truth, goodness, and so forth – are as old as the human species. We live in troubled times: but so, too, did prior generations of Americans. I hope that, if we can find it in ourselves to turn back and reconsider the first principles of American government and its deep roots in English political life and philosophy, we may yet discover a firm foundation that can draw us out of our current perplexity, and enable us to engage more fully in a life of dutiful, informed, and responsible citizenship that can be passed on to future generations.


Saturday, July 11th, 2020

I have to date remained silent here about the COVID-19 pandemic, because for the most part I haven’t had anything constructive to add to the discussion, and because I thought that our parents and students would probably prefer to read about something else. I also try, when possible, to discuss things that will still be of interest three or even ten years from now, and to focus largely on issues of education as we practice it. 

Still, COVID-19 has obviously become a consuming focus for many—understandably, given the extent of the problem—and what should be managed in the most intelligent way possible according to principles of epidemiology and sane public policy has become a political football that people are using as further grounds to revile each other. I’m not interested in joining that game. Knaves and cynical opportunists will have their day, and there’s probably not much to do that will stop them—at least nothing that works any better than just ignoring them.

But there is one piece of the public discourse on the subject that has shown up more and more frequently, and here it actually does wander into a domain where I have something to add. The adjective that has surfaced most commonly in public discussions about the COVID-19 epidemic with all its social and political consequences is “unprecedented”. The disease, we are told by some, is unprecedented in its scope; others lament that it’s having unprecedented consequences both medically and economically. The public response, according to others, is similarly unprecedented: for some that’s an argument that it is also unwarranted; for others, that’s merely a sign that it’s appropriately commensurate with the scope of the unprecedented problem; for still others, it’s a sign that it’s staggeringly inadequate.

As an historian I’m somewhat used to the reckless way in which the past is routinely ignored or (worse) subverted, according to the inclination of the speaker, in the service of this agenda or that. I’ve lost track of the number of people who have told me why Rome fell as a way of making a contemporary political point. But at some point one needs to raise an objection: seriously—unprecedented? As Inigo Montoya says in The Princess Bride, “You keep using that word. I do not think it means what you think it means.” To say that anything is unprecedented requires it to be contextualized in history—not just the last few years’ worth, either.

In some sense, of course, every happening in history, no matter how trivial, is unprecedented—at least if history is not strictly cyclical, as the Stoics believed it was. I’m not a Stoic on that issue or many others. So, no: this exact thing has indeed never happened before. But on that calculation, if I swat a mosquito, that’s unprecedented, too, because I’ve never swatted that particular mosquito before. This falls into Douglas Adams’ useful category of “True, but unhelpful.” Usually people use the word to denote something of larger scope, and they mean that whatever they are talking about is fundamentally different in kind or magnitude from anything that has happened before. But how different is COVID-19, really?

The COVID-19 pandemic is not unprecedented in its etiology. Viruses happen. We even know more or less how they happen. One does not have to posit a diabolical lab full of evil gene-splicers to account for it. Coronaviruses are not new, and many others have apparently come and gone throughout human history, before we even had the capacity to detect them or name them. Some of them have been fairly innocuous, some not. Every time a new one pops up, it’s a roll of the dice—but it’s not our hand that’s rolling them. Sure: investing in some kind of conspiracy theory to explain it is (in its odd way) comforting and exciting. It’s comforting because it suggests that we have a lot more control over things than we really do. It’s exciting, because it gives us a villain we can blame. Blame is a top-dollar commodity in today’s political climate, and it drives more and more of the decisions being made at the highest levels. Ascertaining the validity of the blame comes in a distant second to feeling a jolt of righteous indignation. The reality is both less exciting and somewhat bleaker: we don’t have nearly as much control as we’d like to believe. These things happen and will continue to happen without our agency or design. Viruses are fragments of genetic material that have apparently broken away from larger organic systems, and from there they are capable of almost infinite, if whimsical, mutation. They’re loose cannons: that’s their nature. That’s all. Dangerous, indisputably. Malicious? Not really.

The COVID-19 pandemic is not unprecedented in its scope or its lethality. Epidemics and plagues have killed vast numbers of people over wide areas throughout history. A few years ago, National Geographic offered a portrait of the world’s most prolific killer. It was not a mass murderer, or even a tyrant. It was the flea, and the microbial load it carried. From 1348 through about 1352, the Black Death visited Europe with a ferocity that probably was unprecedented at the time. Because records from the period are sketchy, it’s hard to come up with an exact count, but best estimates are that it killed approximately a third of the population of Europe within that three-to-four-year span. The disease continued to revisit Europe approximately every twenty years for some centuries to come, especially killing people of childbearing age each time, with demographic results that vastly exceed what a sheer count of losses would suggest. In some areas whole cities were wiped out, and the death toll across Europe, Asia, and North Africa together may have run as high as two hundred million: the full extent of its destruction throughout parts of Asia has never been ascertained. Smallpox, in the last century of its activity (1877–1977), killed approximately half a billion people. The 1918 Spanish influenza epidemic killed possibly as many as a hundred million. Wikipedia lists over a hundred similar catastrophes caused by infectious diseases of one sort or another, each of which had a death toll of more than a thousand; it lists a number of others where the count cannot even be approximately ascertained.

Nor is the COVID-19 pandemic unprecedented in its level of social upheaval. The Black Death radically changed the social, cultural, economic, and even the religious configuration of Europe almost beyond recognition. After Columbus, Native American tribes were exposed to Old World disease agents to which they had no immunities. Many groups were reduced to less than a tenth of their former numbers. Considering these to be instances of genocide is, I think, to ascribe far more intentionality to the situation than it deserves (though there seem to have been some instances where it was intended), but the outcome was indifferent to the intent. The Spanish Influenza of 1918, coming as it did on the heels of World War I, sent a world culture that was already off balance into a deeper spiral. It required steep curbs on social activity to check its spread. Houses of worship were closed then too. Other public gatherings were forbidden. Theaters were closed. Even that was not really unprecedented, though: theaters had been closed in Elizabethan London during several of the recurrent visitations of the bubonic plague. The plot of Romeo and Juliet is colored by a quarantine. Boccaccio’s Decameron is a collection of tales that a group of people told to amuse themselves while in isolation, and Chaucer’s somewhat derivative Canterbury Tales are about a group of pilgrims heading for the shrine of St. Thomas à Becket in thanksgiving to the saint who had helped them when they were sick. People have long known that extraordinary steps need to be taken, at least temporarily, in order to save lives during periods of contagion. It’s inconvenient, it’s costly, and it’s annoying. It’s not a hoax, and it’s not tyrannical. It’s not novel.

So no, in most ways, neither the appearance of COVID-19 nor our responses to it are really unprecedented. I say this in no way to minimize the suffering of those afflicted with the disease, or those suffering from the restrictions put in place to curb its spread. Nor do I mean to trivialize the efforts of those battling its social, medical, or economic consequences: some of them are positively heroic. But claiming that this is all unprecedented looks like an attempt to exempt ourselves from the actual flow of history, and to excuse ourselves from the very reasonable need to consult the history of such events in order to learn what we can from them—for there are, in fact, things to be learned.

It is perhaps unsurprising that people responded to the plagues and calamities of the past, then as now, primarily out of fear. Fear is one of the most powerful of human motivators, but it is seldom a wise counselor. There have been conspiracy theories before too: during the Black Death, for example, some concluded that the disease was due to witchcraft, and so they set out to kill cats, on the ground that they were witches’ familiars. The result, of course, was that rats and their fleas—the actual vectors of the disease—were able to breed and spread it all the more freely. Others sold miracle cures to credulous (and fearful) populations; these of course accomplished nothing but heightening the level of fear and desperation.

There were also people who were brave and self-sacrificing, who cared for others in these trying times. In 1665, the village of Eyam in Derbyshire quarantined itself when the plague arrived there. The villagers knew what they could expect, and they were not mistaken: a great part of the town perished, but their decision saved thousands of lives in neighboring villages. Fr. Damien De Veuster ministered to the lepers on Molokai before succumbing to the disease himself: he remains an icon of charity and noble devotion and is a patron saint of Hawaii.

The human race has confronted crisis situations involving infectious diseases, and the decisions they require, before. They are not easy, and sometimes they call for self-sacrifice. There is sober consolation to be wrung from the fact that we are still here, and that we still, as part of our God-given nature, have the capacity to make such decisions—both the ones that protect us and the sacrificial ones we make to save others. We will not get through the ordeal without loss and cost, but humanity has gotten through before, and it will again. We are not entirely without resources, but neither are we wholly in control. We need to learn from what we have at our disposal, marshal our resources wisely and well, and trust in God for the rest.

Mr. Spock, Pseudo-scientist

Wednesday, April 15th, 2020

I’m one of those aging folks who still remember the original run of Star Trek (no colon, no The Original Series or any other kind of elaboration — just Star Trek). It was a groundbreaking show, and whether you like it or not (there are plenty of reasons to do both), it held out a positive vision for the future, and sketched a societal ethos that was not entirely acquisitive, and not even as secular and materialistic as later outings in the Star Trek franchise. The officers of the Enterprise were not latter-day conquistadors. They were genuine explorers, with a Prime Directive to help them avoid destroying too many other nascent cultures. (Yes, I know: they violated it very frequently, but that was part of the point of the story. Sometimes there was even a good reason for doing so.)

It also offered the nerds among us a point of contact. Sure, Captain Kirk was kind of a cowboy hero, galloping into situations with fists swinging and phasers blazing, and, more often than not, reducing complex situations to polar binaries and then referring them either to fisticuffs or an outpouring of excruciatingly impassioned rhetoric. Dr. McCoy, on the other hand, was the splenetic physician, constantly kvetching about everything he couldn’t fix, and blaming people who were trying to work the problem for not being sensitive enough to be as ineffectual as he was. But Mr. Spock (usually the object of McCoy’s invective) was different. He was consummately cool, and he relied upon what he called Logic (I’m sure it had a capital “L” in his lexicon) for all his decision-making. He was the science officer on the Enterprise, and also the first officer in the command structure. Most of the more technically savvy kids aspired to be like him.

It was an article of faith that whatever conclusions Spock reached were, because he was relying on Logic, logical. They were the right answer, too, unless this week’s episode was explicitly making a concession to the value of feelings over logic (which happened occasionally, but not often enough to be really off-putting), and they could be validated by science and reason. You can’t argue with facts. People who try are doomed to failure, and their attempt is at best a distraction, and often worse. 

Up to that point, I am more or less on board, though I was always kind of on the periphery of the nerd cluster, myself. I suspected then (as I still do) that there are things that logic (with an upper-case or a lower-case L) or mathematics cannot really address. Certainly not everything is even quantifiable. But it was the concept of significant digits that ultimately demolished, for me, Mr. Spock’s credibility as a science officer. When faced with command decisions, he usually did reasonably well, but when pontificating on mathematics, he really did rather badly. (Arguably he was exactly as bad at it as some of the writers of the series. Small wonder: see the Sherlock Holmes Law, which I’ve discussed here previously.)

The concept of significant digits (or figures) is really a simple one, though its exact specifications involve some fussy details. Basically it means that you can’t make your information more accurate merely by performing arithmetic on it. (It’s explained more formally on Wikipedia.) By combining a number of things that you know only approximately and doing some calculations on them, you’re not going to get a more accurate answer: you’re going to get a less accurate one. The uncertainty of each of those terms or factors will increase the uncertainty of the whole.

So how does Spock, for all his putative scientific and logical prowess, lose track of this notion, essential to any kind of genuine scientific thinking? In the first-season episode “Errand of Mercy”, he has a memorable exchange with Kirk: 

Kirk: What would you say the odds are on our getting out of here?

Spock: Difficult to be precise, Captain. I should say approximately 7,824.7 to 1.

Kirk: Difficult to be precise? 7,824 to 1?

Spock: 7,824.7 to 1.

Kirk: That’s a pretty close approximation.

Spock: I endeavor to be accurate.

Kirk: You do quite well.

No, he doesn’t do quite well. He does miserably: he has assumed in his runaway calculations that the input values on which he bases this fantastically precise number are known to levels of precision that could not possibly be ascertained in the real world, especially in the middle of a military operation — even a skirmish in which all the participants and tactical elements are known in detail (as they are not here).  The concept of the “fog of war” has something to say about how even apparent certainties can quickly degrade, in the midst of battle, into fatal ignorance. Most of the statistical odds for this kind of thing couldn’t be discovered by any rational means whatever.

Precision and accuracy are not at all the same thing. Yes, you can calculate arbitrarily precise answers based on any data, however precise or imprecise the data may be. Beyond the range of its significant digits, however, this manufactured precision is worse than meaningless: it conveys fuzzy knowledge as if it were better understood than it really is. It certainly adds nothing to the accuracy of the result, and only a terrible scientist would assume that it did. Spock’s answer is more precise, therefore, than “about 8000 to one”, but it’s less accurate, because it suggests that the value is known to a much higher degree of precision than it possibly could be. Even “about 8000 to one” is probably not justifiable, given what the characters actually know. (It’s also kind of stupid, in the middle of a firefight, to give your commanding officer gratuitously complex answers to simple questions: “Exceedingly poor” would be more accurate and more useful.)
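The arithmetic here is easy to demonstrate. Below is a minimal Python sketch; the round_sig helper and the sample measurements are my own illustration (Python has no built-in significant-figures rounding), not anything from the show:

```python
import math

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    digits = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, digits)

# Two lengths, each measured to only two significant figures:
width, height = 2.3, 4.1      # metres, uncertain by roughly +/- 0.05 each
area = width * height         # Python reports 9.429999999999999

# The raw product *looks* more precise than either input, but the
# honest answer carries no more than two significant figures:
print(round_sig(area, 2))     # 9.4 -- not 9.43, and certainly not 9.4299...

# Spock's 7,824.7-to-1 estimate, trimmed to the single significant
# figure the situation might conceivably justify:
print(round_sig(7824.7, 1))   # 8000.0
```

Note that round_sig only trims the reported precision after the fact; a fuller treatment would propagate the input uncertainties through the calculation itself.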

This has not entirely escaped the fan community, of course: “How many Vulcans does it take to change a lightbulb?” is answered with, “1.000000”. This is funny, because it is, for all its pointless precision, no more accurate than “one”, and in no situations would fractional persons form a meaningful category when it comes to changing light bulbs. (Fractional persons might be valid measurements in other contexts — for example, in a cannibalistic society. Don’t think about it too hard.) 

Elsewhere in the series, too, logic is invoked as a kind of deus ex machina — something to which the writer of the episode could appeal to justify any decision Mr. Spock might come up with, irrespective of whether it was reasonable or not. Seldom (I’m inclined to say never, but I’m not going to bother to watch the whole series over again just to verify the fact) are we shown the operation of even one actual logical operation.

The structures of deductive reasoning (logic’s home turf) seldom have a great deal to do with science, in any case. Mathematical procedures are typically deductive. Some philosophical disciplines, including traditional logic, are too. Physical science, however, is almost entirely inductive. In induction, one generalizes tentatively from an accumulation of data; such collections of data are seldom either definitive or complete. Refining hypotheses as new information comes to light is integral to the scientific process as it’s generally understood. The concept of significant digits is only one of those things that helps optimize our induction.

Odds are a measure of ignorance, not knowledge. They do not submit to purely deductive analysis. For determinate events, there are no odds. Something either happens or it doesn’t, Mr. Spock notwithstanding. However impossibly remote it might have seemed yesterday, the meteorite that actually landed in your back yard containing a message from the Great Pumpkin written in Old Church Slavonic now has a probability of 100% if it actually happened. If it didn’t, its probability is zero. There are no valid degrees between the two.

Am I bashing Star Trek at this point? Well, maybe a little. I think they had an opportunity to teach an important concept, and they blew it. It would have been really refreshing (and arguably much more realistic) to have Spock occasionally say, “Captain, why are you asking me this? You know as well as I do that we can’t really know that, because we have almost no data,” or “Well, I can compute an answer of 28.63725, but it has a margin of error in the thousands, so it’s not worth relying upon.” Obviously quiet data-gathering is not the stuff of edge-of-the-seat television. I get that. But it’s what the situation really would require. (Spock, to his credit, often says, “It’s like nothing we’ve ever seen before,” but that’s usually just prior to his reaching another unsubstantiated conclusion about it.)

I do think, however, that the Star Trek promotion of science as an oracular fount of uncontested truth — a myth that few real scientists believe, but a whole lot of others (including certain scientistic pundits one could name) do believe — is actively pernicious. It both oversells and undercuts the legitimate prerogatives of science, and in the long run undermines our confidence in what it actually can do well. There are many things in this world that we don’t know. Some of the things we do know are even pretty improbable. Some very plausible constructs, on the other hand, are in fact false. I’m all in favor of doing our best to find out, and of relying on logical inference where it’s valid, but it’s not life’s deus ex machina. At best, it’s a machina ex Deo: the exercise of one — but only one — of our God-given capacities. Like most of them, it should be used responsibly, and in concert with the rest.

The Sherlock Holmes Law

Friday, April 3rd, 2020

I rather like Arthur Conan Doyle’s Sherlock Holmes stories. I should also admit that I’m not a hard-core devotee of mysteries in general. If I were, I probably would find the frequent plot holes in the Holmes corpus more annoying than I do. I enjoy them mostly for the period atmosphere, the prickly character of Holmes himself, and the buddy-show dynamic of his relationship with Doctor Watson. To be honest, I’ve actually enjoyed the old Granada Holmes series with Jeremy Brett at least as much as I have enjoyed reading the original works. There’s more of the color, more of the banter, and less scolding of Watson (and implicitly the reader) for not observing the one detail in a million that will somehow eventually prove relevant.

Irrespective of form, though, the Holmes stories have helped me articulate a principle I like to call the “Sherlock Holmes Law”, which relates to the presentation of fictional characters in any context. In its simplest form, it’s merely this:

A fictional character can think no thought that the author cannot.

This is so obvious that one can easily overlook it, and in most fiction it rarely poses a problem. Most authors are reasonably intelligent — most of the ones who actually see publication, at least — and they can create reasonably intelligent characters without breaking the credibility bank. 

There are of course some ways for authors to make characters who are practically superior to themselves. Almost any writer can extrapolate from his or her own skills to create a character who can perform the same tasks faster or more accurately. Hence though my own grasp of calculus is exceedingly slight, and my ability to work with the little I do know is glacially slow, I could write about someone who can look at an arch and mentally calculate the area under the curve in an instant. I know that this is something one can theoretically do with calculus, even if I’m not able to do it myself. There are well-defined inputs and outputs. The impressive thing about the character is mostly in his speed or accuracy. 

This is true for the same reason that you don’t have to be a world-class archer to describe a Robin Hood who can hit the left eye of a gnat from a hundred yards. It’s just another implausible extrapolation from a known ability. As long as nobody questions it, it will sell, at least in the marketplace of entertainment. Winning genuine credence might require a bit more.

Genuinely different kinds of thinking, though, are something else. 

I attach this principle to the Holmes stories because, though Mr. Holmes is almost by definition the most luminous intellect on the planet, he’s really not any smarter than Arthur Conan Doyle, save in the quantitative sense I just described. Doyle was not a stupid man, to be sure (though he was more than a little credulous — apparently he believed in fairies, based on some clearly doctored photographs). But neither was he one of the rare intellects for the ages. And so while Doyle may repeatedly assure us (through Watson, who is more or less equivalent to Doyle himself in both training and intelligence) that Holmes is brilliant, what he offers as evidence boils down to his ability to do two things. He can:

a) observe things very minutely (even implausibly so); and

b) draw conclusions from those observations with lightning speed. That such inferences themselves strain logic rather badly is not really the point: Doyle has the writer’s privilege of guaranteeing by fiat that they will turn out to be correct.

Time, of course, is one of those things for which an author has a lot of latitude, since books are not necessarily (or ever, one imagines) written in real time. Even if it takes Holmes only a few seconds to work out a chain of reasoning, it’s likely that Doyle himself put much more time into its formation. While that probably does suggest a higher-powered brain, it still doesn’t push into any genuinely new territory. Put in computer terms, while a hypothetical Z80 chip running at a clock speed of 400 MHz would be a hundred times faster than the 4 MHz one that powered my first computer back in 1982, it would not be able to perform any genuinely new operations. It would probably be best used for running CP/M on a 64K system — just doing so really quickly.

It’s worth noting that sometimes what manifests itself chiefly as an increase in speed actually does represent a new kind of thinking. There is a (perhaps apocryphal) story about Carl Friedrich Gauss (1777-1855), who, when he was still in school, was told to add the integers from one to a hundred as punishment for some classroom infraction or other. As the story goes, he thought about it for a second or two, and then produced the correct result (5050), much to the amazement of his teacher. Gauss had achieved his answer not by adding all those numbers very rapidly, but by realizing that if one paired and added the numbers at the ends of the sequence, moving in toward the center, one would always get 101: i.e., 100 + 1 = 101; 99 + 2 = 101; and so on. There would then be fifty such pairs — hence 50 x 101: 5050.
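The pairing insight reduces the whole sum to a single multiplication. A minimal Python sketch (the function name is my own) contrasts it with brute-force addition:

```python
def gauss_sum(n):
    """Sum 1 + 2 + ... + n by Gauss's pairing: n/2 pairs, each totalling n + 1."""
    return n * (n + 1) // 2

# The formula agrees with adding the hundred numbers one by one
print(gauss_sum(100))               # 5050
print(sum(range(1, 101)))           # 5050, the slow schoolroom way
```

The closed form takes the same time for n = 100 as for n = 100 million, which is exactly the sense in which it is a different kind of thinking rather than mere speed.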

A character cannot produce that kind of idea if the author doesn’t understand it first. It makes the depiction of superintelligent characters very tricky, and sometimes may even limit the portrayal of stupid ones who don’t think the way the rest of us do.

For readers, however, it is different. Literary works (fictional or not) can open up genuinely new kinds of ideas to readers. While a writer who has achieved a completely new way of thinking about some technical problem is less likely to expound it in fiction than in some sort of a treatise or an application with the patent office, fictional works often present ideas one has never considered before in the human arena. It need not be a thought that’s new to the world in order to be of value — it needs merely to be new to you.

Such a thought, no matter how simple it may seem once you see it, can blow away the confines of our imaginations. It’s happened to me at a few different stages in my life. Tolkien’s The Lord of the Rings awakened me when I was a teenager to something profound about the nature of language and memory. C. S. Lewis’ “The Weight of Glory” revolutionized the way I thought about other people. Tolstoy’s War and Peace laid to rest any notion I had that other people’s minds (or even my own) could ever be fully mapped. Aquinas’ Summa Theologica (especially Q. 1.1.10) transformed forever my apprehension of scripture. The list goes on, but it’s not my point to catalogue it completely here.

Where has that happened to you?

Reflections on Trisecting the Angle

Thursday, March 12th, 2020

I’m not a mathematician by training, but the language and (for want of a better term) the sport of geometry has always had a special appeal for me. I wasn’t a whiz at algebra in high school, but I aced geometry. As a homeschooling parent, I had a wonderful time teaching geometry to our three kids. I still find geometry intriguing.

When I was in high school, I spent hours trying to figure out how to trisect an angle with compass and straightedge. I knew that nobody had found a way to do it. As it turns out, in 1837 (before even my school days) French mathematician Pierre Wantzel proved that it was impossible for the general case (trisecting certain special angles is trivial). I’m glad I didn’t know that, though, since it gave me a certain license to hack at it anyway. Perhaps I was motivated by a sense that it would be glorious to be the first to crack this particular nut, but mostly I just wondered, “Can it be done, and if not, why not?”

Trisecting the angle is cited in Wikipedia as an example of “pseudomathematics”, and while I will happily concede that any claim to be able to do so would doubtless rely on bogus premises or operations, I nevertheless argue that wrestling with the problem honestly, within the rules of the game, is a mathematical activity as valid as any other, at least as an exercise. I tried different strategies, mostly seeking a useful correspondence between the (simple) trisection of a straight line and the trisection of an arc. My efforts, of course, failed (that’s what “impossible” means, after all). Had they not, my own name would be celebrated in the Wikipedia articles describing how the puzzle had finally been solved. It’s not. In my defense, I hasten to point out that I never was under the impression that I had succeeded. I just wanted to try, and either to learn how to do it or to understand why it couldn’t be done.

My failed effort might, by many measures, be accounted a waste of time. But was it? I don’t think so. Its value for me was not in the achievement but in the striving. Pushing on El Capitan isn’t going to move the mountain, either, but doing it regularly will provide a measure of isometric exercise. Similarly, confronting an impossible mental challenge can have certain benefits.

And so along the way I gained a visceral appreciation of some truths I might not have grasped as fully otherwise.

In the narrowest terms, I came to understand that the problem of trisecting the angle (either as an angle or as its corresponding arc) is fundamentally distinct from the problem of trisecting a line segment, because curvature — even in the simplest case, which is the circular — fundamentally changes the problem. One cannot treat the circumference of a circle as if it were linear, even though it is much like a line segment, having no thickness and a specific finite extension. (The fact that π is irrational seems at least obliquely connected to this, though it might not be: that’s just a surmise of my own.)

In the broadest terms, I came more fully to appreciate the fact that some things are intrinsically impossible, even if they are not obvious logical contradictions. You can bang away at them for as long as you like, but you’ll never solve them. This truth transcends mathematics by a long stretch, but it’s worth realizing that failing to accomplish something that you want to accomplish is not invariably a result of your personal moral, intellectual, or imaginative deficiencies. As disappointing as it may be for those who want to believe that every failure is a moral, intellectual, or imaginative one, it’s very liberating for the rest of us.

Between those obvious extremes are some more nuanced realizations. 

I came to appreciate iterative refinement as a tool. After all, even if you can’t trisect the general angle with perfect geometrical rigor, you actually can come up with an imperfect but eminently practical approximation — to whatever degree of precision you require. By iterative refinement (interpolating between the too-large and the too-small solutions), you can zero in on a value that’s demonstrably better than the last one every time. Eventually, the inaccuracy won’t matter any more for any practical application. I’m perfectly aware that this is no longer pure math — but it is the very essence of engineering, which has a fairly prominent and distinguished place in the world. Thinking about this also altered my appreciation of precision as a pragmatic real-world concept.
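That zeroing-in process can be sketched in a few lines of Python. This is my own illustration, not a geometric construction: it simply halves a bracketing interval on each pass, just as one might repeatedly split the difference between a too-large and a too-small candidate angle:

```python
import math

def trisect_approx(theta, tol=1e-9):
    """Approximate theta / 3 by interval bisection: keep a bracket
    [lo, hi] known to contain the answer, and repeatedly split it,
    retaining whichever half still brackets theta / 3."""
    lo, hi = 0.0, theta              # theta / 3 certainly lies in here
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if 3.0 * mid < theta:        # mid is too small a candidate
            lo = mid
        else:                        # mid is too large (or exact)
            hi = mid
    return (lo + hi) / 2.0

# Trisecting a right angle should give 30 degrees (pi / 6 radians)
print(trisect_approx(math.pi / 2))
```

Each pass halves the remaining uncertainty, so about thirty iterations suffice for nine decimal places: never perfect, but demonstrably better every time, which is the point.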

A more general expression of this notion is that, while some problems never have perfect solutions, they sometimes can be practically solved in a way that’s good enough for a given purpose. That’s a liberating realization. Failure to achieve the perfect solution needn’t stop you in your tracks. It doesn’t mean you can’t get a very good one. It’s worth internalizing this basic truth. And only by wrestling with the impossible do we typically discover the limits of the possible. That in turn lets us develop strategies for practical work-arounds.

Conceptually, too, iterative refinement ultimately loops around on itself and becomes a model for thinking about such things as calculus, and the strange and wonderful fact that, with limit theory, we can (at least sometimes) achieve exact (if occasionally bizarre) values for things that we can’t measure directly. Calculus gives us the ability (figuratively speaking) to bounce a very orderly sequence of successive refinements off an infinitely remote backstop and somehow get back an answer that is not only usable but sometimes actually perfect. This is important enough that the circumference of a circle (and hence pi) can be rigorously defined as the limit of the perimeters of inscribed polygons as the number of their sides grows without bound.
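As a small illustration of that limiting process, here is a Python sketch of inscribed-polygon perimeters closing in on pi. (It leans on the library’s own trigonometry, and hence on a stored value of pi, so it demonstrates the convergence rather than deriving pi from scratch.)

```python
import math

def ngon_perimeter(n):
    """Perimeter of a regular n-gon inscribed in a circle of diameter 1.
    Each of the n sides subtends an angle of 2*pi/n at the center,
    so each side has length sin(pi / n)."""
    return n * math.sin(math.pi / n)

# 96 sides was Archimedes' stopping point; the perimeters creep up toward pi
for n in (6, 96, 6144):
    print(n, ngon_perimeter(n))
```

The hexagon gives exactly 3; by 96 sides the perimeter already agrees with pi to three decimal places, and it keeps improving without bound while never overshooting.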

It shows also that this is not just a problem of something being somehow too difficult to do: difficulty has little or nothing to do with intrinsic impossibility (pace the Army Corps of Engineers: they are, after all, engineers, not pure mathematicians). In fact we live in a world full of unachievable things. Irrational numbers are all around us, from pi to phi to the square root of two, and even though no amount of effort will produce a perfect rational expression of any of those values, they are not on that account any less real. You cannot solve pi to its last decimal digit because there is no such digit, and no other rational expression can capture it either. But the proportion of circumference to diameter is always exactly pi, and the circumference of the circle is an exact distance. It’s magnificently reliable and absolutely perfect, but its perfection can never be entirely expressed in the same terms as the diameter. (We could arbitrarily designate the circumference as 1 or any other rational number; but then the diameter would be inexpressible in the same terms.)

I’m inclined to draw some theological application from that, but I’m not sure I’m competent to do so. It bears thinking on. Certainly it has at least some broad philosophical applications. The prevailing culture tends to suggest that whatever is not quantifiable and tangible is not real. There are a lot of reasons we can’t quantify such things as love or justice or truth; it’s also in the nature of number that we can’t nail down many concrete things. None of them is the less real merely because we can’t express them perfectly.

Approximation by iterative refinement is basic in dealing with the world in both its rational and its irrational dimensions. While your inability to express pi rationally is not a failure of your moral or rational fiber, you may still legitimately be required — and you will be able — to get an arbitrarily precise approximation of it. In my day, we were taught the Greek value 22/7 as a practical rational stand-in for pi, though Archimedes (c. 287-212 BC) knew it was a bit too high (3.1428…). The Chinese mathematician Zu Chongzhi (AD 429-500) came up with 355/113, which is not precisely pi either, but it’s more than a thousand times closer to the mark (3.1415929…). The whole domain of rational approximation is fun to explore, and has analogical implications in things not bound up with numbers at all.
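Those two ancient approximations are easy to compare with a few lines of Python (the formatting choices here are mine):

```python
import math

# Compare the two classic rational stand-ins for pi
for num, den in ((22, 7), (355, 113)):
    err = abs(num / den - math.pi)
    print(f"{num}/{den} = {num / den:.7f}  (error {err:.2e})")
```

Running this shows 22/7 off in the third decimal place while 355/113 holds through the sixth: the later fraction really is over a thousand times closer, at the modest price of a three-digit denominator.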

So I personally don’t consider my attempts to trisect the general angle with compass and straightedge to be time wasted. It’s that way in most intellectual endeavors, really: education represents not a catalogue of facts, but a process and an exercise, in which the collateral benefits can far outweigh any immediate success or failure. Pitting yourself against reality, win or lose, you become stronger, and, one hopes, wiser. 

Bulletin for Seniors (and Juniors?) Interested in Ethics

Tuesday, July 2nd, 2019

The course on Ethics offered in the autumn is at a college level, so the work will be challenging and interesting. The text originally identified, Alasdair MacIntyre’s After Virtue, begins with a problem: the variety of moral beliefs, and the difficulty of finding objective reasons to prefer one over another, invite the conclusion of relativism, that is, the view that right and wrong depend on one’s culture and preferences, and that there is no universal standard. MacIntyre rejects that conclusion. To explore how to evaluate competing moral beliefs, he develops a strand of Western ethical theory that has its origins in Aristotle. One weakness of MacIntyre’s book, for the high school student, is that he assumes considerable knowledge about historical approaches to ethics. The problem he deals with, and the solution he proposes, make more sense if the historical material is mastered first.

I have been looking for a good text to present that historical material and have found it in Robin Lovin’s Introduction to Christian Ethics. That book will be added to the course listing. The course is now well balanced: roughly the first half will present the general topic of ethics and survey various approaches taken by Western thinkers since Socrates, and the second half will focus on MacIntyre’s book.

I hope this course will be worthy of study both for its inherent interest and for the way it provides an introduction to some important philosophers in the Western tradition. It will also be an opportunity to develop college-level writing skills.