Author Archive

Unprecedented?

Saturday, July 11th, 2020

I have to date remained silent here about the COVID-19 pandemic, because for the most part I haven’t had anything constructive to add to the discussion, and because I thought that our parents and students would probably prefer to read about something else. I also try, when possible, to discuss things that will still be of interest three or even ten years from now, and to focus largely on issues of education as we practice it. 

Still, COVID-19 has obviously become a consuming focus for many—understandably, given the extent of the problem—and what should be managed in the most intelligent way possible according to principles of epidemiology and sane public policy has become a political football that people are using as further grounds to revile each other. I’m not interested in joining that game. Knaves and cynical opportunists will have their day, and there’s probably not much to do that will stop them—at least nothing that works any better than just ignoring them.

But there is one piece of the public discourse on the subject that has shown up more and more frequently, and here it actually does wander into a domain where I have something to add. The adjective that has surfaced most commonly in public discussions about the COVID-19 epidemic with all its social and political consequences is “unprecedented”. The disease, we are told by some, is unprecedented in its scope; others lament that it’s having unprecedented consequences both medically and economically. The public response, according to others, is similarly unprecedented: for some that’s an argument that it is also unwarranted; for others, that’s merely a sign that it’s appropriately commensurate with the scope of the unprecedented problem; for still others, it’s a sign that it’s staggeringly inadequate.

As an historian I’m somewhat used to the reckless way in which the past is routinely ignored or (worse) subverted, according to the inclination of the speaker, in the service of this agenda or that. I’ve lost track of the number of people who have told me why Rome fell as a way of making a contemporary political point. But at some point one needs to raise an objection: seriously—unprecedented? As Inigo Montoya says in The Princess Bride, “You keep using that word. I do not think it means what you think it means.” To say that anything is unprecedented requires it to be contextualized in history—not just the last few years’ worth, either.

In some sense, of course, every happening in history, no matter how trivial, is unprecedented—at least if history is not strictly cyclical, as the Stoics believed it was. I’m not a Stoic on that issue or many others. So, no: this exact thing has indeed never happened before. But on that calculation, if I swat a mosquito, that’s unprecedented, too, because I’ve never swatted that particular mosquito before. This falls into Douglas Adams’ useful category of “True, but unhelpful.” Usually people use the word to denote something of larger scope, and they mean that whatever they are talking about is fundamentally different in kind or magnitude from anything that has happened before. But how different is COVID-19, really?

The COVID-19 pandemic is not unprecedented in its etiology. Viruses happen. We even know more or less how they happen. One does not have to posit a diabolical lab full of evil gene-splicers to account for it. Coronaviruses are not new, and many others have apparently come and gone throughout human history, before we even had the capacity to detect them or name them. Some of them have been fairly innocuous, some not. Every time a new one pops up, it’s a roll of the dice—but it’s not our hand that’s rolling them. Sure: investing in some kind of conspiracy theory to explain it is (in its odd way) comforting and exciting. It’s comforting because it suggests that we have a lot more control over things than we really do. It’s exciting, because it gives us a villain we can blame. Blame is a top-dollar commodity in today’s political climate, and it drives more and more of the decisions being made at the highest levels. Ascertaining the validity of the blame comes in a distant second to feeling a jolt of righteous indignation. The reality is both less exciting and somewhat bleaker: we don’t have nearly as much control as we’d like to believe. These things happen and will continue to happen without our agency or design. Viruses are fragments of genetic material that have apparently broken away from larger organic systems, and from there they are capable of almost infinite, if whimsical, mutation. They’re loose cannons: that’s their nature. That’s all. Dangerous, indisputably. Malicious? Not really.

The COVID-19 pandemic is not unprecedented in its scope or lethality. Epidemics and plagues have killed vast numbers of people over wide areas throughout history. A few years ago, National Geographic offered a portrait of the world’s most prolific killer. It was not a mass murderer, or even a tyrant. It was the flea, and the microbial load it carried. From 1348 through about 1352, the Black Death visited Europe with a ferocity that probably was unprecedented at the time. Because records from the period are sketchy, it’s hard to come up with an exact count, but best estimates are that it killed approximately a third of the population of Europe within that three-to-four-year period. The disease continued to revisit Europe approximately every twenty years for some centuries to come, especially killing people of childbearing age each time, with demographic consequences far beyond what a sheer count of losses would suggest. In some areas whole cities were wiped out, and the overall death toll may have run as high as two hundred million; the extent of its destruction throughout parts of Asia has never been fully ascertained. Smallpox, in the last century of its activity (1877-1977), killed approximately half a billion people. The 1918 Spanish influenza epidemic killed possibly as many as a hundred million. Wikipedia lists over a hundred similar catastrophes caused by infectious diseases of one sort or another, each of which had a death toll of more than a thousand; it lists a number of others where the count cannot even be approximately ascertained.

Nor is the COVID-19 pandemic unprecedented in its level of social upheaval. The Black Death radically changed the social, cultural, economic, and even the religious configuration of Europe almost beyond recognition. After Columbus, Native American tribes were exposed to Old World disease agents to which they had no immunities. Many groups were reduced to less than a tenth of their former numbers. Considering these to be instances of genocide is, I think, to ascribe far more intentionality to the situation than it deserves (though there seem to have been some instances where it was intended), but the outcome was indifferent to the intent. The Spanish Influenza of 1918, coming as it did on the heels of World War I, sent a world culture that was already off balance into a deeper spiral. It required steep curbs on social activity to check its spread. Houses of worship were closed then too. Other public gatherings were forbidden. Theaters were closed. Even that was not really unprecedented, though: theaters had been closed in Elizabethan London during several of the recurrent visitations of the bubonic plague. The plot of Romeo and Juliet is colored by a quarantine. Boccaccio’s Decameron is a collection of tales that a group of people told to amuse themselves while in isolation, and Chaucer’s somewhat derivative Canterbury Tales are about a group of pilgrims heading for the shrine of St. Thomas à Becket to thank the martyr for having given them aid while they were laboring under a plague. People have long known that extraordinary steps need to be taken, at least temporarily, in order to save lives during periods of contagion. It’s inconvenient, it’s costly, and it’s annoying. It’s not a hoax, and it’s not tyrannical. It’s not novel.

So no, in most ways, neither the appearance of COVID-19 nor our responses to it are really unprecedented. I say this in no way to minimize the suffering of those afflicted with the disease, or those suffering from the restrictions put in place to curb its spread. Nor do I mean to trivialize the efforts of those battling its social, medical, or economic consequences: some of them are positively heroic. But claiming that this is all unprecedented looks like an attempt to exempt ourselves from the actual flow of history, and to excuse ourselves from the very reasonable need to consult the history of such events in order to learn what we can from them—for there are, in fact, things to be learned.

It is perhaps unsurprising that people responded to the plagues and calamities of the past, then as now, primarily out of fear. Fear is one of the most powerful of human motivators, but it is seldom a wise counselor. There have been conspiracy theories before too: during the Black Death, for example, some concluded that the disease was due to witchcraft, and so they set out to kill cats, on the ground that they were witches’ familiars. The result, of course, was that rats (the actual vectors for the disease, together with their fleas) were able to breed and spread disease all the more freely. Others sold miracle cures to credulous (and fearful) populations; these of course accomplished nothing but heightening the level of fear and desperation.

There were also people who were brave and self-sacrificing, who cared for others in these trying times. In 1665, the village of Eyam in Derbyshire quarantined itself when the plague broke out there. The villagers knew what they could expect, and they were not mistaken. A great part of the village perished, but their decision saved thousands of lives in neighboring villages. Fr. Damien De Veuster ministered to the lepers on Molokai before succumbing to the disease himself: he remains an icon of charity and noble devotion and is a patron saint of Hawaii.

The human race has confronted crisis situations involving infectious diseases, and the decisions they require, before. They are not easy, and sometimes they call for self-sacrifice. There is sober consolation to be wrung from the fact that we are still here, and that we still, as part of our God-given nature, have the capacity to make such decisions—both the ones that protect us and those sacrificial decisions we make to save others. We will not get through the ordeal without loss and cost, but humanity has gotten through before, and it will again. We are not entirely without resources, but neither are we wholly in control. We need to learn from what we have at our disposal, marshal our resources wisely and well, and trust in God for the rest.

Mr. Spock, Pseudo-scientist

Wednesday, April 15th, 2020

I’m one of those aging folks who still remember the original run of Star Trek (no colon, no The Original Series or any other kind of elaboration — just Star Trek). It was a groundbreaking show, and whether you like it or not (there are plenty of reasons to do both), it held out a positive vision for the future, and sketched a societal ethos that was not entirely acquisitive, and not even as secular and materialistic as later outings in the Star Trek franchise. The officers of the Enterprise were not latter-day conquistadors. They were genuine explorers, with a Prime Directive to help them avoid destroying too many other nascent cultures. (Yes, I know: they violated it very frequently, but that was part of the point of the story. Sometimes there was even a good reason for doing so.)

It also offered the nerds among us a point of contact. Sure, Captain Kirk was kind of a cowboy hero, galloping into situations with fists swinging and phasers blazing, and, more often than not, reducing complex situations to polar binaries and then referring them either to fisticuffs or an outpouring of excruciatingly impassioned rhetoric. Dr. McCoy, on the other hand, was the splenetic physician, constantly kvetching about everything he couldn’t fix, and blaming people who were trying to work the problem for not being sensitive enough to be as ineffectual as he was. But Mr. Spock (usually the object of McCoy’s invective) was different. He was consummately cool, and he relied upon what he called Logic (I’m sure it had a capital “L” in his lexicon) for all his decision-making. He was the science officer on the Enterprise, and also the first officer in the command structure. Most of the more technically savvy kids aspired to be like him.

It was an article of faith that whatever conclusions Spock reached were, because he was relying on Logic, logical. They were the right answer, too, unless this week’s episode was explicitly making a concession to the value of feelings over logic (which happened occasionally, but not often enough to be really off-putting), and they could be validated by science and reason. You can’t argue with facts. People who try are doomed to failure, and their attempt is at best a distraction, and often worse. 

Up to that point, I am more or less on board, though I was always kind of on the periphery of the nerd cluster, myself. I suspected then (as I still do) that there are things that logic (with an upper-case or a lower-case L) or mathematics cannot really address. Certainly not everything is even quantifiable. But it was the concept of significant digits that ultimately demolished, for me, Mr. Spock’s credibility as a science officer. When faced with command decisions, he usually did reasonably well, but when pontificating on mathematics, he really did rather badly. (Arguably he was exactly as bad at it as some of the writers of the series. Small wonder: see the Sherlock Holmes Law, which I’ve discussed here previously.)

The concept of significant digits (or figures) is really a simple one, though its exact specifications involve some fussy details. Basically it means that you can’t make your information more accurate merely by performing arithmetic on it. (It’s more formally explained here on Wikipedia.) By combining a number of things that you know only approximately and doing some calculations on them, you’re not going to get a more accurate answer: you’re going to get a less accurate one. The uncertainty of each of those terms or factors will increase the uncertainty of the whole.
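
To see the effect concretely, here is a minimal sketch in Python (my own illustration; the quantities and their ranges are invented for the example) that multiplies three rough estimates together and reports how wide the honest answer really is:

```python
# Multiplying rough estimates: each input is known only as a low-high range,
# so the product is also only a range, and it is proportionally wider.
# Quoting the midpoint to many digits manufactures precision out of nothing.

def product_range(factors):
    """factors: list of (low, high) pairs, all positive. Returns (low, high)."""
    lo = hi = 1.0
    for f_lo, f_hi in factors:
        lo *= f_lo
        hi *= f_hi
    return lo, hi

estimates = [(8.0, 12.0), (40.0, 60.0), (16.0, 24.0)]   # each about +/- 20%
lo, hi = product_range(estimates)

print(f"the product lies somewhere between {lo:.0f} and {hi:.0f}")
print(f"reporting it as {(lo + hi) / 2:.4f} implies precision the inputs never had")
```

Three inputs each uncertain by about twenty percent leave the product uncertain by more than half its midpoint value; no amount of downstream arithmetic can recover information the inputs never contained.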

So how does Spock, for all his putative scientific and logical prowess, lose track of this notion, essential to any kind of genuine scientific thinking? In the first-season episode “Errand of Mercy”, he has a memorable exchange with Kirk: 

Kirk: What would you say the odds are on our getting out of here?

Spock: Difficult to be precise, Captain. I should say approximately 7,824.7 to 1.

Kirk: Difficult to be precise? 7,824 to 1?

Spock: 7,824.7 to 1.

Kirk: That’s a pretty close approximation.

Spock: I endeavor to be accurate.

Kirk: You do quite well.

No, he doesn’t do quite well. He does miserably: he has assumed in his runaway calculations that the input values on which he bases this fantastically precise number are known to levels of precision that could not possibly be ascertained in the real world, especially in the middle of a military operation — even a skirmish in which all the participants and tactical elements are known in detail (as they are not here).  The concept of the “fog of war” has something to say about how even apparent certainties can quickly degrade, in the midst of battle, into fatal ignorance. Most of the statistical odds for this kind of thing couldn’t be discovered by any rational means whatever.

Precision and accuracy are not at all the same thing. Yes, you can calculate arbitrarily precise answers based on any data, however precise or imprecise the data may be. Beyond the range of its significant digits, however, this manufactured precision is worse than meaningless: it conveys fuzzy knowledge as if it were better understood than it really is. It certainly adds nothing to the accuracy of the result, and only a terrible scientist would assume that it did. Spock’s answer is more precise, therefore, than “about 8000 to one”, but it’s less accurate, because it suggests that the value is known to a much higher degree of precision than it possibly could be. Even “about 8000 to one” is probably not justifiable, given what the characters actually know. (It’s also kind of stupid, in the middle of a firefight, to give your commanding officer gratuitously complex answers to simple questions: “Exceedingly poor” would be more accurate and more useful.)

This has not entirely escaped the fan community, of course: “How many Vulcans does it take to change a lightbulb?” is answered with, “1.000000”. This is funny, because it is, for all its pointless precision, no more accurate than “one”, and in no situations would fractional persons form a meaningful category when it comes to changing light bulbs. (Fractional persons might be valid measurements in other contexts — for example, in a cannibalistic society. Don’t think about it too hard.) 

Elsewhere in the series, too, logic is invoked as a kind of deus ex machina — something to which the writer of the episode could appeal to justify any decision Mr. Spock might come up with, irrespective of whether it was reasonable or not. Seldom (I’m inclined to say never, but I’m not going to bother to watch the whole series over again just to verify the fact) are we shown even a single logical inference actually carried out.

The structures of deductive reasoning (logic’s home turf) seldom have a great deal to do with science, in any case. Mathematical procedures are typically deductive. Some philosophical disciplines, including traditional logic, are too. Physical science, however, is almost entirely inductive. In induction, one generalizes tentatively from an accumulation of data; such collections of data are seldom either definitive or complete. Refining hypotheses as new information comes to light is integral to the scientific process as it’s generally understood. The concept of significant digits is only one of those things that helps optimize our induction.

Odds are a measure of ignorance, not knowledge. They do not submit to purely deductive analysis. For determinate events, there are no odds. Something either happens or it doesn’t, Mr. Spock notwithstanding. However impossibly remote it might have seemed yesterday, the meteorite that actually landed in your back yard containing a message from the Great Pumpkin written in Old Church Slavonic now has a probability of 100% if it actually happened. If it didn’t, its probability is zero. There are no valid degrees between the two.

Am I bashing Star Trek at this point? Well, maybe a little. I think they had an opportunity to teach an important concept, and they blew it. It would have been really refreshing (and arguably much more realistic) to have Spock occasionally say, “Captain, why are you asking me this? You know as well as I do that we can’t really know that, because we have almost no data,” or “Well, I can compute an answer of 28.63725, but it has a margin of error in the thousands, so it’s not worth relying upon.” Obviously quiet data-gathering is not the stuff of edge-of-the-seat television. I get that. But it’s what the situation really would require. (Spock, to his credit, often says, “It’s like nothing we’ve ever seen before,” but that’s usually just prior to his reaching another unsubstantiated conclusion about it.)

I do think, however, that the Star Trek promotion of science as an oracular fount of uncontested truth — a myth that few real scientists believe, but a whole lot of others (including certain scientistic pundits one could name) do believe — is actively pernicious. It oversells and undercuts the legitimate prerogatives of science, and in the long run undermines our confidence in what it actually can do well. There are many things in this world that we don’t know. Some of the things we do know are even pretty improbable.  Some very plausible constructs, on the other hand, are in fact false. I’m all in favor of doing our best to find out, and of relying on logical inference where it’s valid, but it’s not life’s deus ex machina. At best, it’s a machina ex Deo: the exercise of one — but only one — of our God-given capacities. Like most of them, it should be used responsibly, and in concert with the rest.

The Sherlock Holmes Law

Friday, April 3rd, 2020

I rather like Arthur Conan Doyle’s Sherlock Holmes stories. I should also admit that I’m not a hard-core devotee of mysteries in general. If I were, I probably would find the frequent plot holes in the Holmes corpus more annoying than I do. I enjoy them mostly for the period atmosphere, the prickly character of Holmes himself, and the buddy-show dynamic of his relationship with Doctor Watson. To be honest, I’ve actually enjoyed the old Granada television Holmes series with Jeremy Brett at least as much as I have enjoyed reading the original works. There’s more of the color, more of the banter, and less scolding of Watson (and implicitly the reader) for not observing the one detail in a million that will somehow eventually prove relevant.

Irrespective of form, though, the Holmes stories have helped me articulate a principle I like to call the “Sherlock Holmes Law”, which relates to the presentation of fictional characters in any context. In its simplest form, it’s merely this:

A fictional character can think no thought that the author cannot.

This is so obvious that one can easily overlook it, and in most fiction it rarely poses a problem. Most authors are reasonably intelligent — most of the ones who actually see publication, at least — and they can create reasonably intelligent characters without breaking the credibility bank. 

There are of course some ways for authors to make characters who are practically superior to themselves. Almost any writer can extrapolate from his or her own skills to create a character who can perform the same tasks faster or more accurately. Hence though my own grasp of calculus is exceedingly slight, and my ability to work with the little I do know is glacially slow, I could write about someone who can look at an arch and mentally calculate the area under the curve in an instant. I know that this is something one can theoretically do with calculus, even if I’m not able to do it myself. There are well-defined inputs and outputs. The impressive thing about the character is mostly in his speed or accuracy. 

This is true for the same reason that you don’t have to be a world-class archer to describe a Robin Hood who can hit the left eye of a gnat from a hundred yards. It’s just another implausible extrapolation from a known ability. As long as nobody questions it, it will sell at least in the marketplace of entertainment. Winning genuine credence might require a bit more.

Genuinely different kinds of thinking, though, are something else. 

I refer this principle to the Holmes stories because, though Mr. Holmes is almost by definition the most luminous intellect on the planet, he’s really not any smarter than Arthur Conan Doyle, save in the quantitative sense I just described. Doyle was not a stupid man, to be sure (though he was more than a little credulous — apparently he believed in fairies, based on some clearly doctored photographs). But neither was he one of the rare intellects for the ages. And so while Doyle may repeatedly assure us (through Watson, who is more or less equivalent to Doyle himself in both training and intelligence) that Holmes is brilliant, what he offers as evidence boils down to his ability to do two things. He can:

a) observe things very minutely (even implausibly so);

and

b) draw conclusions from those observations with lightning speed. That such inferences themselves strain logic rather badly is not really the point: Doyle has the writer’s privilege of guaranteeing by fiat that they will turn out to be correct.

Time, of course, is one of those things for which an author has a lot of latitude, since books are not necessarily (or ever, one imagines) written in real time. Even if it takes Holmes only a few seconds to work out a chain of reasoning, it’s likely that Doyle himself put much more time into its formation. While that probably does suggest a higher-powered brain, it still doesn’t push into any genuinely new territory. Put in computer terms, while a hypothetical Z80 chip running at a clock speed of 400 MHz would be a hundred times faster than the 4 MHz one that powered my first computer back in 1982, it would not be able to perform any genuinely new operations. It would still be best suited to running CP/M on a 64K system — just doing so really quickly.

It’s worth noting that sometimes what manifests itself chiefly as an increase in speed actually does represent a new kind of thinking. There is a (perhaps apocryphal) story about Carl Friedrich Gauss (1777-1855), who, when he was still in school, was told to add the numbers from one to a hundred as punishment for some classroom infraction or other. As the story goes, he thought about it for a second or two, and then produced the correct result (5050), much to the amazement of his teacher. Gauss had achieved his answer not by adding all those numbers very rapidly, but by realizing that if one paired and added the numbers at the ends of the sequence, moving in toward the center, one would always get 101: i.e., 100 + 1 = 101; 99 + 2 = 101; and so on. There would then be fifty such pairs — hence 50 x 101: 5050.
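
For anyone who wants to poke at the trick, here is a minimal sketch in Python (my own illustration, not anything from the anecdote) that checks the pairing insight and the closed form it implies:

```python
# Gauss's pairing trick, verified by brute force: pair 1 with 100, 2 with 99,
# and so on inward; every pair sums to 101, and there are fifty such pairs.

n = 100
pairs = [(k, n + 1 - k) for k in range(1, n // 2 + 1)]
assert all(a + b == n + 1 for a, b in pairs)   # each pair sums to 101

print(len(pairs) * (n + 1))    # 50 * 101 = 5050
print(sum(range(1, n + 1)))    # brute-force addition agrees
print(n * (n + 1) // 2)        # the general closed form for 1 + 2 + ... + n
```

The same pairing works for any even n, and a small adjustment covers the odd case; the closed form n(n+1)/2 holds either way.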

A character cannot produce that kind of idea if the author doesn’t understand it first. It makes the depiction of superintelligent characters very tricky, and sometimes may even limit the portrayal of stupid ones who don’t think the way the rest of us do.

For readers, however, it is different. Literary works (fictional or not) can open up genuinely new kinds of ideas to readers. While a writer who has achieved a completely new way of thinking about some technical problem is less likely to expound it in fiction than in some sort of a treatise or an application with the patent office, fictional works often present ideas one has never considered before in the human arena. It need not be a thought that’s new to the world in order to be of value — it needs merely to be new to you.

Such a thought, no matter how simple it may seem once you see it, can blow away the confines of our imaginations. It’s happened to me at a few different stages in my life. Tolkien’s The Lord of the Rings awakened me when I was a teenager to something profound about the nature of language and memory. C. S. Lewis’ “The Weight of Glory” revolutionized the way I thought about other people. Tolstoy’s War and Peace laid to rest any notion I had that other people’s minds (or even my own) could ever be fully mapped. Aquinas’ Summa Theologica (especially Q. 1.1.10) transformed forever my apprehension of scripture. The list goes on, but it’s not my point to catalogue it completely here.

Where has that happened to you?

Reflections on Trisecting the Angle

Thursday, March 12th, 2020

I’m not a mathematician by training, but the language and (for want of a better term) the sport of geometry has always had a special appeal for me. I wasn’t a whiz at algebra in high school, but I aced geometry. As a homeschooling parent, I had a wonderful time teaching geometry to our three kids. I still find geometry intriguing.

When I was in high school, I spent hours trying to figure out how to trisect an angle with compass and straightedge. I knew that nobody had found a way to do it. As it turns out, in 1837 (before even my school days) French mathematician Pierre Wantzel proved that it was impossible for the general case (trisecting certain special angles is trivial). I’m glad I didn’t know that, though, since it gave me a certain license to hack at it anyway. Perhaps I was motivated by a sense that it would be glorious to be the first to crack this particular nut, but mostly I just wondered, “Can it be done, and if not, why not?”

Trisecting the angle is cited in Wikipedia as an example of “pseudomathematics”, and while I will happily concede that any claim to be able to do so would doubtless rely on bogus premises or operations, I nevertheless argue that wrestling with the problem honestly, within the rules of the game, is a mathematical activity as valid as any other, at least as an exercise. I tried different strategies, mostly trying to find a useful correspondence between the (simple) trisection of a straight line and the trisection of an arc. My efforts, of course, failed (that’s what “impossible” means, after all). Had they not, my own name would be celebrated in different Wikipedia articles describing how the puzzle had finally been solved. It’s not. In my defense, I hasten to point out that I never was under the impression that I had succeeded. I just wanted to try and to know either how to do it or to know the reason why.

My failed effort might, by many measures, be accounted a waste of time. But was it? I don’t think it was. Its value for me was not in the achievement but in the striving. Pushing on El Capitan isn’t going to move the mountain, either, but doing it regularly will provide a measure of isometric exercise. Similarly confronting an impossible mental challenge can have certain benefits.

And so along the way I gained a visceral appreciation of some truths I might not have grasped as fully otherwise.

In the narrowest terms, I came to understand that the problem of trisecting the angle (either as an angle or as its corresponding arc) is fundamentally distinct from the problem of trisecting a line segment, because curvature — even in the simplest case, which is the circular — fundamentally changes the problem. One cannot treat the circumference of a circle as if it were linear, even though it is much like a line segment, having no thickness and a specific finite extension. (The fact that π is irrational seems at least obliquely connected to this, though it might not be: that’s just a surmise of my own.)

In the broadest terms, I came more fully to appreciate the fact that some things are intrinsically impossible, even if they are not obvious logical contradictions. You can bang away at them for as long as you like, but you’ll never solve them. This truth transcends mathematics by a long stretch, but it’s worth realizing that failing to accomplish something that you want to accomplish is not invariably a result of your personal moral, intellectual, or imaginative deficiencies. As disappointing as it may be for those who want to believe that every failure is a moral, intellectual, or imaginative one, it’s very liberating for the rest of us.

Between those obvious extremes are some more nuanced realizations. 

I came to appreciate iterative refinement as a tool. After all, even if you can’t trisect the general angle with perfect geometrical rigor, you actually can come up with an imperfect but eminently practical approximation — to whatever degree of precision you require. By iterative refinement (interpolating between the too-large and the too-small solutions), you can zero in on a value that’s demonstrably better than the last one every time. Eventually, the inaccuracy won’t matter to you any more for any practical application. I’m perfectly aware that this is no longer pure math — but it is the very essence of engineering, which has a fairly prominent and distinguished place in the world. Thinking about this also altered my appreciation of precision as a pragmatic real-world concept.
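
To make that concrete, here is a small sketch (my own, in Python, not part of the original exercise) of one classic work-around: bisecting an angle is easy with compass and straightedge, and if you halve the leftover piece again and again, alternately overshooting and undershooting, the running total converges on exactly one third of the original angle.

```python
# Approximate trisection by repeated bisection: theta/2 - theta/4 + theta/8 - ...
# is a geometric series whose sum is exactly theta/3, and every term can be
# constructed with compass and straightedge. Each step swings from too large
# to too small and back, landing ever closer to the true trisection.

theta = 60.0                      # degrees; any starting angle will do
piece, approx, sign = theta, 0.0, 1
for step in range(1, 13):
    piece /= 2                    # one more bisection of the previous piece
    approx += sign * piece        # alternately add and subtract
    sign = -sign
    print(f"step {step:2d}: {approx:.8f} degrees (target {theta / 3:.8f})")
```

A dozen steps already put the error below anything a physical drawing could register, which is exactly the engineer’s “good enough” described above.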

A more general expression of this notion is that, while some problems never have perfect solutions, they sometimes can be practically solved in a way that’s good enough for a given purpose. That’s a liberating realization. Failure to achieve the perfect solution needn’t stop you in your tracks. It doesn’t mean you can’t get a very good one. It’s worth internalizing this basic truth. And only by wrestling with the impossible do we typically discover the limits of the possible. That in turn lets us develop strategies for practical work-arounds.

Conceptually, too, iterative refinement ultimately loops around on itself and becomes a model for thinking about such things as calculus, and the strange and wonderful fact that, with limit theory, we can (at least sometimes) achieve exact (if occasionally bizarre) values for things that we can’t measure directly. Calculus gives us the ability (figuratively speaking) to bounce a very orderly sequence of successive refinements off an infinitely remote backstop and somehow get back an answer that is not only usable but sometimes actually is perfect. This is important enough that pi itself can be pinned down rigorously as the limit of the perimeters of regular polygons inscribed in a circle of unit diameter, as the number of sides grows without bound.
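
A short sketch in Python (mine, not the essay’s) shows that limit at work using nothing beyond square roots; it is essentially Archimedes’ side-doubling procedure.

```python
from math import sqrt

# Start with a regular hexagon inscribed in a circle of radius 1 (each side = 1)
# and repeatedly double the number of sides. Half the perimeter is the
# polygon's estimate of pi, and no value of pi is assumed anywhere: each new
# side length comes from the old one by the chord half-angle relation.

sides, side = 6, 1.0
for _ in range(12):
    print(f"{sides:6d} sides: half-perimeter = {sides * side / 2:.10f}")
    side = sqrt(2 - sqrt(4 - side ** 2))   # chord subtending half the old arc
    sides *= 2
```

Each doubling of the sides improves the estimate by roughly a factor of four, an orderly sequence of refinements closing in on a value no finite polygon ever reaches.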

It shows also that this is not just a problem of something being somehow too difficult to do: difficulty has little or nothing to do with intrinsic impossibility (pace the Army Corps of Engineers: they are, after all, engineers, not pure mathematicians). In fact we live in a world full of unachievable things. Irrational numbers are all around us, from pi to phi to the square root of two, and even though no amount of effort will produce a perfect rational expression of any of those values, they are not on that account any less real. You cannot solve pi to its last decimal digit because there is no such digit, and no other rational expression can capture it either. But the proportion of circumference to diameter is always exactly pi, and the circumference of the circle is an exact distance. It’s magnificently reliable and absolutely perfect, but its perfection can never be entirely expressed in the same terms as the diameter. (We could arbitrarily designate the circumference as 1 or any other rational number; but then the diameter would be inexpressible in the same terms.)

I’m inclined to draw some theological application from that, but I’m not sure I’m competent to do so. It bears thinking on. Certainly it has at least some broad philosophical applications. The prevailing culture tends to suggest that whatever is not quantifiable and tangible is not real. There are a lot of reasons we can’t quantify such things as love or justice or truth; it’s also in the nature of number that we can’t nail down many concrete things. None of them is the less real merely because we can’t express them perfectly.

Approximation by iterative refinement is basic in dealing with the world in both its rational and its irrational dimensions. While your inability to express pi rationally is not a failure of your moral or rational fiber, you may still legitimately be required — and you will be able — to get an arbitrarily precise approximation of it. In my day, we were taught the Greek value 22/7 as a practical rational value for pi, though Archimedes (c. 287-212 BC) knew it was a bit too high (3.1428…). The Chinese mathematician Zhu Chongzhi (AD 429-500) came up with 355/113, which is not precisely pi either, but it’s more than a thousand times closer to the mark (3.1415929…). The whole domain of rational approximation is fun to explore, and has analogical implications in things not bound up with numbers at all.
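
For the curious, Python’s standard library makes that exploration easy; this little sketch (my illustration, not part of the original post) measures both classical fractions and then asks for the best fraction under a given denominator ceiling:

```python
from fractions import Fraction
from math import pi

# Compare the two classical rational approximations of pi, then let
# Fraction.limit_denominator search for the closest fraction whose
# denominator stays under a chosen cap.

for approx in (Fraction(22, 7), Fraction(355, 113)):
    print(f"{approx} = {float(approx):.7f}, off by {abs(float(approx) - pi):.2e}")

print(Fraction(pi).limit_denominator(10))     # recovers the schoolbook 22/7
print(Fraction(pi).limit_denominator(1000))   # recovers Zhu Chongzhi's 355/113
```

No cap ever yields pi itself, of course; the search only ever produces a better and better neighbor.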

So I personally don’t consider my attempts to trisect the general angle with compass and straightedge to be time wasted. It’s that way in most intellectual endeavors, really: education represents not a catalogue of facts, but a process and an exercise, in which the collateral benefits can far outweigh any immediate success or failure. Pitting yourself against reality, win or lose, you become stronger, and, one hopes, wiser. 

Crafting a Literature Program

Saturday, February 22nd, 2020

The liberal arts are, in great measure, founded on written remains, from the earliest times to our own. Literature (broadly construed to take in both fiction and non-fiction) encompasses a bewildering variety of texts, genres, attitudes, belief systems, and just about everything else. Like history (which can reasonably be construed to cover everything we know, with the possible, but incomplete, exception of pure logic and mathematics), literature is a problematic area of instruction: it is both enormously important and virtually impossible to reduce to a clear and manageable number of postulates.

In modern educational circles, literary studies are often dominated by critical schools, the grinding of pedagogical axes, and dogmatic or interpretive agendas of all sorts — social, political, psychological, or completely idiosyncratic. Often these things loom so large as to eclipse the reality that they claim to investigate. It is as if the study of astronomy had become exclusively bound up with the technology of telescope manufacture, but no longer bothered with turning them toward the stars and planets. Other difficulties attend the field as well.

We’re sailing on an ocean here…

The first is just the sheer size of the field. Yes, astronomy may investigate a vast number of stars, and biology may look at a vast number of organisms and biological systems, but the effort there is to elicit what is common to the diverse phenomena (which did not in and of themselves come into being as objects of human contemplation) and produce a coherent system to account for them. Literature doesn’t work that way. There is an unimaginably huge body of literature out there, and it’s getting bigger every day. Unlike science or milk, the old material doesn’t spoil or go off; it just keeps accumulating. Even if (by your standards or Sturgeon’s Law) 90% of it is garbage, that still leaves an enormous volume of good material to cover. There’s no way to examine more than the tiniest part of that.

…on which the waves never stop moving…

Every item you will encounter in a study of literature is itself an overt attempt to communicate something to someone. That means that each piece expresses its author’s identity and personality; in the process it inevitably reflects a range of underlying social and cultural suppositions. In their turn, these may be common to that author’s time and place, or they may represent resistance to the norms of the time. Any given work may reach us through few or many intermediaries, some of which will have left their stamp on it, one way or the other. Finally, every reader receives every literary product he or she encounters differently, too. That allows virtually infinite room for ongoing negotiation between author and reader in shaping the experience and its meaning — which is the perennially shifting middle ground between them.

…while no two compasses agree…

I haven’t seen this discussed very much out in the open, though perhaps I just don’t frequent the right websites, email lists, or conferences. But the reality — the elephant in the room — is that no two teachers agree on what qualifies as good and bad literature. Everyone has ideas about that, but they remain somewhat hidden, and often they are derived viscerally rather than systematically. For example, I teach (among other things) The Odyssey and Huckleberry Finn; I have seen both attacked, in a national forum of English teachers, as having no place in the curriculum because they are (for one reason or another) either not good literature or because they are seen as conveying pernicious social or cultural messages. I disagree with their conclusion, at least (obviously, since I do in fact teach them), but the people holding these positions are not stupid. In fact, they make some very strong arguments. They’re proceeding from basic assumptions different from my own…but, then again, so does just about everyone. That’s life.

…nor can anyone name the destination:

Nobody talks about this much, either, but it’s basic: our literature teachers don’t even remotely agree on what they’re doing. Again, I don’t mean that they are incompetent or foolish, but merely that there is no universally agreed-upon description of what success in a literature program looks like. Success in a science or math program, or even a foreign language program, is relatively simple to quantify and consequently reasonably simple to assess. Not so here. Every teacher seems to bring a different yardstick to the table. Some see their courses as morally neutral instruction in the history and techniques of an art form; others see them as a mode of indoctrination in values, according to their lights. For some, that’s Marxism. For some, it’s conservative Christianity. For some, it’s a liberal secular humanism. For others…well, there is no accounting for all the stripes of opinion people bring to the table — but the range is very broad.

is it any wonder people are confused?

So where are we, then, anyway? The sum is so chaotic that most public high school students I have asked in the past two decades appear to have simply checked out: they play the game and endure their English classes, but the shocking fact is that, even while enrolled in them at the time, almost all have been unable to tell me what they were reading for those classes. This is not a furtive examination: I’ve simply asked them, “So, what are you reading for English?” If one or two didn’t know, I’d take that as a deficiency in the student or a sudden momentary diffidence on the subject. When all of them seem not to know, however, I suspect some more systemic shortfall. I would suggest that this is not because they are stupid either, but because their own literary instruction has been so chaotic as to stymie real engagement with the material.

It’s not particularly surprising, then, that literature is seen as somehow suspect, and that homeschooling parents looking for literature courses for their students feel that they are buying a pig in a poke. They are. They have to be wondering — will this course or that respect my beliefs or betray them? Will the whole project really add up to anything? Will the time spent on it add in any meaningful sense to my students’ lives, or is this just some gravy we could just as well do without? Some parents believe (rightly or wrongly: it would be a conflict of interest for me even to speculate which) that they probably can do just as well on such a “soft” subject as some program they don’t fully understand or trust.

One teacher’s approach

These questions are critical, and I encourage any parent to get some satisfactory answers before enrolling in any program of literary instruction, including mine. Here are my answers: if they satisfy you, I hope you’ll consider our program. If not, look elsewhere with my blessing, but keep asking the questions.

In the first instance, my project is fairly simple. I am trying to teach my students to read well. Of course, by now they have mastered the mechanical art of deciphering letters, combining them into words, and extracting meaning from sentences on a page. But there’s more to reading than that: one must associate those individual sentences with each other and weigh them together to come to a synthetic understanding of what the author is doing. They need in the long run to consider nuance, irony, tonality, and the myriad inflections an author imparts to the text with his or her own persona. Moreover, they need to consider what a given position or set of ideas means within its own cultural conversation. All those things change the big picture.

There’s a lot there to know, and a lot to learn. I don’t pretend to know it all myself either, but I think I know at least some of the basic questions, and I have for about a generation now been encouraging students to ask them, probe them, and keep worrying at the feedback like a dog with a favorite bone. In some areas, my own perspectives are doubtless deficient. I do, on the other hand, know enough about ancient and medieval literature, language, and culture that I usually can open some doors that students hadn’t hitherto suspected. Once one develops a habit of looking at these things, one can often see where to push on other kinds of literature as well. The payoff is cumulative.

There are some things I generally do not do. I do not try to use literary instruction as a reductive occasion or pretext for moral or religious indoctrination. Most of our students come from families already seriously engaged with questions of faith and morals, and I prefer to respect that fact, leaving it to their parents and clergy. I also don’t believe that any work of literature can be entirely encompassed by such questions, and hence it would be more than a little arrogant of me to try to constrain the discussion to those points.

This is not to say that I shy away from moral and religious topics either (as teachers in our public schools often have to do perforce). Moral and theological issues come up naturally in our conversations, and I do not suppress them; I try to deal with them honestly from my own perspective as a fairly conservative reader and as a Christian while leaving respectful room for divergence of opinion as well. (I do believe that my own salvation is not contingent upon my having all the right answers, so I’m willing to be proven wrong on the particulars.)

It is never my purpose to mine literary works for “teachable points” or to find disembodied sententiae that I can use as an excuse to exalt this work or dismiss that one. This is for two reasons. First of all, I have too much respect for the literary art to think that it can or should be reduced to a platitudinous substrate. Second, story in particular (which is a large part of what literature involves) is a powerful and largely autonomous entity. It cannot well be tamed; any attempt to subvert it with tendentious arguments (from either the author’s point of view or from the reader’s) almost invariably produces bad art and bad reading. An attempt to tell a student “You should like this work, but must appreciate it only in the following way,” is merely tyrannical — tyrannical in the worst way, since it sees itself as being entirely in the interest of and for the benefit of the student. Fortunately, for most students, it’s also almost wholly ineffectual, though a sorry side effect is that a number find the whole process so off-putting that they ditch literature altogether. That’s probably the worst possible outcome for a literature program.

I also do not insist on canons of my own taste. If students disagree with me (positively or negatively) about the value of a given work, I’m fine with that. I don’t require anyone to like what I like. I deal in classics (in a variety of senses of the term), but the idea of an absolute canon of literature is a foolish attempt to control what cannot be controlled. It does not erode my appreciation for a work of literature that a student doesn’t like it. The fact that twenty generations have liked another won’t itself make me like it either, if I don’t, though it does make me hesitant to reject it out of hand. It takes a little humility to revisit something on which you have already formed an opinion, but it’s salutary. It’s not just the verdict of the generations that can force me back to a work again, either: if a student can see something in a work that I have hitherto missed and can show me how to appreciate it, I gain by that. At the worst, I’m not harmed; at the best, I’m a beneficiary. Many teachers seem eager to enforce their evaluations of works on their students. I don’t know why. I have learned more from my students than from any other source, I suspect. Why would I not want that to continue?

Being primarily a language scholar, I do attempt to dig into texts for things like grammatical function — both as a way of ascertaining the exact surface meanings and as a way of uncovering the hidden complexities. Those who haven’t read Shakespeare with an eye on his brilliant syntactical ambiguity are missing a lot. He was a master of complex expression, and what may initially seem oddly phrased but obvious statements can unfold into far less obvious questions or bivalent confessions. After thirty years of picking at it, I still have never seen an adequate discussion in the critical literature on Macbeth’s “Here had we now our country’s honour roofed / Were the graced person of our Banquo present” (Macbeth 3.4.39-40). The odd phrasing is routinely explained as something like “All the nobility of Scotland would be gathered under one roof if only Banquo were present,” but I think he is saying considerably more than that, thanks to the formation of contrary-to-fact conditions and the English subjunctive.

My broadest approach to literature is more fully elaborated in Reading and Christian Charity, an earlier posting on this blog and also one of the “White Papers” on the school website. I hope all parents (and their students) considering taking any of my courses will read it, because it contains the essential core of my own approach to literature, which differs from many others, both in the secular world and in the community flying the banner of Classical Christian Education. If it is what you’re looking for, I hope you will consider our courses. 

[Some of the foregoing appeared at the Scholars Online Website as ancillary to the description of the literature offerings. It has been considerably revised and extended here.]

Causes

Saturday, February 1st, 2020

The Greek philosopher Aristotle thought widely and deeply on many subjects. Some of his ideas have proven to be unworkable or simply wrong — his description of the trajectory of a thrown object, for example, works only in Roadrunner cartoons: in Newtonian physics, a thrown ball does not turn at a right angle and fall after it’s run out of forward-moving energy. The force vectors vary continuously, and its trajectory describes an arc. We can forgive Aristotle, I think, for not having calculus at his disposal. That he apparently didn’t observe the curvature of a trajectory is a little harder to explain.

Others of his ideas are rather narrowly culturally bound. His views on slavery are rightly repudiated almost everywhere, and many others are not very useful to us today. I personally find his description of Athenian tragedy in the Poetics far too limiting: the model of the hero who falls from greatness due to a tragic flaw is one model (though not really the only one) for describing the Oedipus Rex, but it doesn’t apply even loosely to most of the rest of surviving Athenian tragedy. This curiously Procrustean interpretive template is championed mostly by teachers who have read only one or two carefully-chosen plays.

Some of Aristotle’s ideas, though, remain quite robust. His metaphysical thought is still challenging, and, even if one disagrees, it’s very useful to know how and why one disagrees. His logical writings, too, remain powerful and compelling, and are among the best tools ever devised to help us think about how we think.

Among his most enduringly useful ideas, I think, is his fourfold categorization of cause. This is basic to almost everything we think about, since most of our understanding of the universe is couched, sooner or later, in terms of story. Story is fundamentally distinguished from isolated lists of events because of its reliance on cause and effect. 

There are, according to Aristotle, four different kinds of cause: material cause, efficient cause, formal cause, and final cause. This may all sound rather fussy and technical, but the underlying ideas are fairly simple, and we rely on them, whether we know it or not, every day. For an example, we can take a common dining room table.

The material cause of something is merely what it’s made of. That can be physical matter or not, but it’s the source stuff, in either case. The material cause of our table is wood, glue, perhaps some nails or screws, varnish, and whatever else goes into its makeup (metal, glass, plastic, or whatever else might be part of your dining room table). 

The formal cause is its form itself. It’s what allows us to say that any individual thing is what it is — effectively its definition. The table’s formal cause is largely bound up in its functional shape. It may have a variable number of legs, for example, but it will virtually always present some kind of horizontal surface that you can put things on. 

The efficient cause is the agency that brings something about — it’s the maker (personal or impersonal) or the causative process. That’s most like our simplest sense of “cause” in a narrative. The efficient cause of the table is the carpenter or the factory or workers that produced it. 

The final cause is the purpose for which something has come into being (if it is purposed) — in the case of the table, to hold food and dishes for us while we’re eating.

Not everything must have all four of these causes, at least in any obvious sense, but most have some; everything will have at least one. They are easy to recall, and remarkably useful when confronting “why?” questions. Still, people often fail to distinguish them in discourse — and so wind up talking right past one another.

Though I cannot now find a record of it, I recall that when a political reporter asked S. I. Hayakawa (himself an academic semanticist before turning to politics) in 1976 why he thought he’d been elected to the Senate, he answered by saying that he supposed it was because he got the most votes. This was, of course, a perfectly correct answer to the material-cause notion of “why”, but was entirely irrelevant to what the reporter was seeking, which probably had more to do with an efficient cause. Hayakawa surely knew it, too, but apparently didn’t want to be dragged into the discussion the reporter was looking for. Had the reporter been quicker off the mark with Aristotelian causes, he might have been able to pin the senator-elect down for a more satisfactory answer.

Aristotle wrote in the fourth century B.C., but his ideas are still immediately relevant. While one can use them to evade engagement (as Hayakawa did in this incident), we can also use them to clarify our communication. True communication is a rare and valuable commodity in the world, in just about every arena. Bearing these distinctions in mind can help you achieve it.

Time to Think

Saturday, January 18th, 2020

On average, my students today are considerably less patient than those of twenty years ago. They get twitchy if they are asked merely to think about something. They don’t know how. My sense is not that they are lazy: in fact, it’s perhaps just the opposite. Just thinking about something feels to them like idling, and after they have given it a good thirty seconds, they sense that it’s time to move on to something more productive — or at least more objectively measurable. They don’t seem to believe that they are accomplishing anything unless they are moving stepwise through some defined process that they can quantify and log, and that can be managed and validated by their parents or teachers. It doesn’t matter how banal or downright irrelevant that process might be: they are steps that can be completed. A secondary consequence is that if they start to do something and don’t see results in a week or two, they write it off as a bad deal and go chasing the next thing. It is no longer sufficient for a return on investment to be annual or even quarterly: if it’s not tangible, it’s bogus, and if it’s not more or less instantaneous, it’s time wasted.

On average, my students today also have their time booked to a degree that would have been unthinkable in my youth. When I was in junior high and high school, I did my homework, I had music lessons, and I was involved in a handful of other things. I had household chores as well. But I also had free time. I rode my bicycle around our part of town. I went out and climbed trees. I pursued reading that interested me just because I wanted to. I drew pictures — not very good ones, but they engaged me at the time. Most importantly, I was able (often in the midst of these various undirected activities) simply to think about those open-ended questions that underlie one’s view of life. Today I have students involved in multiple kinds of sports, multiple music lessons, debate, and half a dozen other things. There are no blank spaces in their schedules.

I can’t help thinking that these two trends are non-coincidentally related. There are at least two reasons for this, one of them internal, and one external. Both of them need to be resisted.

First of all, in the spiritually vacant materialistic culture surrounding us, free and unstructured time is deprecated because it produces no tangible product — not even a reliable quantum of education. One can’t sell it. Much of the public has been bullied by pundits and advertisers into believing that if you can’t buy or sell something, it must not be worth anything. We may pay lip service to the notion that the most important things in life are free, but we do our best to ignore it in practice. 

As a correlative, we have also become so invested in procedure that we mistake it for achievement. I’ve talked about this recently in relation to “best practices”. The phenomenon is similar in a student’s time management. If something can’t be measured as progress, it’s seen as being less than real. To engage in unstructured activity when one could be pursuing a structured one is seen as a waste.

This is disastrous for a number of reasons. 

I’ve already discussed here the problem of confusing substance and process. The eager adoption of “best practices” in almost every field attests the colossally egotistical notion that we now know the best way to do just about anything, and that by adhering to those implicitly perfected processes, we guarantee outcomes that are, if not perfect, at least optimal. But it doesn’t work that way. It merely guarantees that there will be no growth or experimentation. Such a tyrannical restriction of process almost definitionally kills progress. The rut has defined the route.

Another problem is that this is a fundamentally mercantile and materialist perspective, in which material advantage is presumptively the only good. For a Christian, that this is false should be a no-brainer: you cannot serve both God and mammon. 

I happily admit that there are some situations where it’s great to have reliable processes that really will produce reliable outcomes. It’s useful to have a way to solve a quadratic equation, or hiring practices that, if followed, will keep one out of the courts. But they mustn’t eclipse our ability to look at things for what they are. If someone can come up with better ways of solving quadratic equations or navigating the minefields of human resources, all the better. When restrictive patterns dominate our instructional models to the point of exclusivity, they are deadening.
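To take the first example concretely: the familiar quadratic formula is just the sort of dependable procedure in question. Apply it faithfully to any equation of the right form and it delivers the roots every time, no judgment required:

$$ax^2 + bx + c = 0 \;\; (a \neq 0) \quad\Longrightarrow\quad x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

The point is not that such formulas are bad; it is that knowing how to execute one is not the same thing as understanding why it works, or when some other approach would serve better.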

Parents or teachers who need to scrutinize and validate all their children’s experiences are not helping them: they’re infantilizing them. At the age when children should be growing into mature judgment, and need to be allowed to make real mistakes with real consequences, they are being told instead not to risk using their own judgment and understanding, but to follow someone else’s unquestioningly. Presumably they will thereby be spared the humiliation of making mistakes, and they will also not be found wanting when the great judgment comes. That judgment takes many forms, but it’s always implicitly there. For some it seems to have a theological component.

In the worldly arena, it can be college admission, or getting a good job, or any of a thousand other extrinsic hurdles that motivate all good little drones from cradle to grave. College is the biggie at this stage of the game. There is abroad in today’s panicky world the notion that a student has to be engaged in non-stop curricular and extracurricular activities even to be considered for college. That’s false, but it’s scary, and fear almost always trumps the truth. Fear can be fostered and nurtured with remarkable dexterity, and nothing sells like fear: this has been one of the great (if diabolical) discoveries of advertisers since the middle of the last century. Fear is now the prime motivator of both our markets and our politics. It’s small wonder that people are anxious about both: they’ve been bred and acculturated for a life of anxiety. They’re carefully taught to fear, so that they will buy compulsively and continually. The non-stop consumer is a credulous victim of the merchants of fear. We need, we are told, to circle the wagons, repel boarders, and show a unified face to the world. Above all, we should not question anything.

Though we seem more often to ignore it or dismiss it with a “Yes, but…”, our faith tells us that perfect love casts out fear. The simple truth is one that we’ve always known. Fear diminishes us. Love enlarges us. What you’re really good at will be what you love; what you love is what you’ll be good at. Which is the cause and which the effect is harder to determine: they reinforce one another. You can only find out what you love, though, if, without being coerced, you take the time and effort to do something for its own sake, not for any perceived extrinsic reward that’s the next link in Madison Avenue’s cradle-to-grave chain of anxious bliss.

There’s nothing wrong with structured activities. If you love debate, by all means, do debate. If you love music, do music. If you love soccer, play soccer. If you don’t love them, though, find something else that you do love to occupy your time, stretch your mind, and feed your soul. Moreover, even those activities need to be measured out in a way that leaves some actual time that hasn’t been spoken for. There really is such a thing as spreading oneself too thin. Nothing turns out really well; excellence takes a back seat to heaping up more and more of a desperate adequacy. In my experience, the outstanding student is not the one who has every moment of his or her day booked, but the one who has time to think, and to acquire the unique fruits of undirected reflection. They can’t be gathered from any other source. You can’t enroll in a program of undirected contemplation. You can only leave room for it to happen. It will happen on its own time, and it cannot be compelled to appear on demand.

The over-programmed student is joyless in both study and play, and isn’t typically very good at either one. Drudges who do everything they do in pursuit of such a phantom success will never achieve it. The students who have done the best work for me over the years have without exception been the ones who bring their own personal thoughts to the table. For them, education is not just a set of tasks to be mastered or grades to be achieved, but the inner formation of character — a view of life and the world that shapes what their own success will look like. Our secular culture is not going to help you find or define your own success: it’s interested only in keeping you off balance, and on retainer as a consumer. Take charge of your own mind, and determine what winning looks like to you. Otherwise, you will just be playing — and most likely losing — a game you never wanted to play in the first place.

Autonomy of Means Again: “Best Practices”

Wednesday, January 1st, 2020

When our kids were younger and living at home, they also frequently had dishwashing duty. Even today we haven’t gotten around to buying a mechanical dishwasher, but when five people were living (and eating) at home, it was good not to have to do all that by ourselves.

But as anyone who has ever enlisted the services of children for this job will surely remember, the process needs to be refined by practice. Even more importantly, though, no matter how good the process seems to be, it can’t be considered a success if the outcome is not up to par. At different points, when I pointed out a dirty glass or pan in the drain, all three of our kids responded with, “But I washed it,” as if that had fully discharged their responsibility.

The problem is that though they might have washed it, they had not cleaned it. The purpose of the washing (a process, which can be efficacious or not) is to have a clean article of tableware or cookware (the proper product of the task). Product trumps process: the mere performance of a ritual of cleansing may or may not have the desired result. Inadequate results can at any time call the sufficiency of the process into question.

This is paradigmatic of something I see more and more these days in relation to education. The notion that completing a process — any process — is the same as achieving its goal is beguiling but false. Depending on whether we’re talking about a speck of egg on a frying pan or the inadequate adjustment of the brakes on your car, that category mistake can be irksome or it can be deadly.

In education it’s often more than merely irksome, but usually less than deadly. I’ve already talked about the “I translated it but I don’t understand it” phenomenon here; the claim has never made sense to me, since if you actually translated it, you actually expressed its sense as you understood it. Nothing else one can do with a foreign-language text really counts as translating it.

Accordingly I’m skeptical of educational theorists (including those who put together some of the standards for the accreditation process we are going through now) buzzing about “best practices”. This pernicious little concept, borrowed with less thought than zeal from the business world, misleadingly suggests — and to most people means — practices that are ipso facto sufficient: pursuing them to the letter guarantees a satisfactory outcome. And yet sometimes the dish is dirty, the translation is gibberish, or the brakes fail.

It’s not really even a good idea in the business world. An article in Forbes by Mike Myatt in 2012 trenchantly backs up its title claim, “Best Practices — Aren’t”; in 2014, Liz Ryan followed up the same concept with “The Truth about Best Practices”. Both articles challenge the blinkered orthodoxies of the “best practices” narrative.

The problem with any process-side validation of an activity is that, in the very act of being articulated, it tends to eclipse the purpose for which the task is being done. Doing it the right way takes precedence over doing the right thing. Surely the measure of an education is the learning itself — not the process that has been followed. The process is just a means to the end. In Charles Williams’ words, which I’ve quoted and referred to here before (here and here), “When the means are autonomous, they are deadly”.

Of course they may not in any given situation cause someone to die — but means divorced from their proper ends inevitably subvert, erode, and deform the goals for which they were originally ordained. This is especially true in education, precisely because there’s no broad consensus on what the product looks like. Accordingly the only really successful educational process is one that’s a dynamic outgrowth of the situation at hand, and it can ultimately be validated only by its results.

Liz Ryan notes, “They’re only Best Practices if they work for you.” There are at least two ways of reading that phrase, and both of them are right: with the stress on “work”, they’re only Best Practices if they actually produce results; with the stress on “you”, they’re only Best Practices if they suit the particular person applying them. Their utility depends on both the person and the outcome. Nor should it be any other way.

Socrates’ Argumentation — Method, Madness, or Something Else?

Monday, July 31st, 2017

The common understanding of basic terms and ideas is often amiss. Sometimes that’s innocuous; sometimes it’s not.

Many in the field of classical education tout what they call the Socratic Method, by which they seem to mean a process that draws the student to the correct conclusion by means of a sequence of leading questions. The end is predetermined; for good or ill, the method is primarily a rhetorical strategy to convince students that the answer was their own idea all along, thus achieving “buy-in”, so to speak. As rhetorical strategies go, it’s not really so bad.

Is it also good pedagogical technique? I am less certain. The short-term advantage of persuading a student that something is his or her own idea is materially compromised by the fact that (on these terms, at least) the method is fundamentally disingenuous. If the questioner feigns ignorance, while all the while knowing precisely where these questions must lead, perceptive students, at least, will eventually realize that they are being played. Some may not resent that; others certainly will, and will seek every opportunity to disengage themselves from a process that they rightly consider a pretense.

Whether it’s valid pedagogically or not, however, we mustn’t claim that it’s Socratic. Socrates did indeed proceed by asking questions. He asked them incessantly. He was annoying, in fact — a kind of perpetual three-year-old, asking “why?” after each answer, challenging every supposition, and never satisfied with the status quo or with any piece of accepted wisdom. It can be wearying to respond to this game; harried parents through the years have learned to shut down such interrogation: “Because I said so!” The Athenians shut Socrates’ questioning down with a cup of hemlock.

But the fact is that the annoying three-year-old is probably the most capable learning agent in the history of the world. The unfettered inquiry into why and how — about anything and everything — is the very stuff of learning. It’s why young children learn sophisticated language at such a rate. “Because I said so,” is arguably the correct answer to “Why must I do what you say?” But as an answer to a question about the truth, rather than as the justification of a command, it’s entirely inadequate, and even a three-year-old knows the difference. If we consider it acceptable, we are surrendering our credentials as learners or as teachers.

The difference between the popular notion of this so-called Socratic method and the method Socrates actually follows in the Platonic dialogues is that Socrates apparently had no fixed goal in view. He was always far more concerned to dismantle specious knowledge than to supply a substitute in its place. He was willing to challenge any conclusions, and the endpoint of most of his early dialogues was not a settled agreement, but merely an admission of humility: “Well, golly, Socrates. I’m stuck. I guess I really have no idea what I was talking about.” Socrates thought that this was a pretty good beginning; indeed, he claimed that his one advantage over other presumed experts was that he at least knew that he didn’t know anything, while they, just as ignorant in fact, believed that they knew something.

On this view, the Socratic method is really a fairly poor way of training someone. If you are teaching people to be technicians of some sort or other, you want them to submit to the program and take instruction. It’s arguably not the best tool for practical engineering, medicine, or the law. (There is now a major push against using any kind of real Socratic method in law school, for example.)

But training is precisely not education. Education is where the true Socratic process comes into its own. It’s about the confrontation of minds, the clarification of definitions, and the discovery and testing of new ideas. It’s a risky way of teaching. It changes the underlying supposition of the enterprise. It can no longer be seen merely as a one-way download of information from master to pupil. In its place it commends to us a common search for the truth. At this point, the teacher is at most the first among equals.

This makes — and will continue to make — a lot of people uncomfortable. It makes many teachers uncomfortable, because in the process they risk losing control — not necessarily behavioral control of a class, but their identity (often carefully groomed and still more zealously protected) as oracles whose word should not be questioned. It opens their narrative and their identity to questioning, and may put them on the defensive.

It makes students uncomfortable too — especially those who are identified as “good” students — the ones who dot every “i” and cross every “t”, and never seem to step out of line or challenge the teacher’s authority. These are the ones likeliest, in a traditional high school, to be valedictorians and materially successful, according to a few recent studies — but not the ones likeliest to make real breakthrough contributions. (The recent book Barking up the Wrong Tree by Eric Barker has some interesting things to say about this: one can read a precis of his contentions here. Barker’s work is based at least in part on Karen Arnold’s Lives of Promise, published in 1995, and discussed here.)

In practical terms, education is a mixed bag.

There is a place for training. We need at least some of the “download” kind of instruction. Basic terms need to be learned before they can be manipulated; every discipline has its grammar. I really do know Latin, for example, better than most of my students, and, at least most of the time, what I say is likelier to be correct. But my saying so neither constitutes nor assures correctness, and if a student corrects me, then, assuming he or she is right, it should be my part to accept that correction graciously, not to insist on a falsehood because I can prevail on the basis of my presumed status. If the correction is wrong, the course of charity is also to assume good intention on the student’s part, and clarify the right answer in my turn. Either way, there is no room for “alternative facts”. There is truth, and there is falsehood. The truth is always the truth, irrespective of who articulates it, and it — not I or my student — deserves the primary respect. We must serve the truth, not the other way around.

At some point in their education, though, students should also be invited to get into the ring with each other and with the teacher, to state their cases with conviction, and back them up with reasoned argument and well-documented facts. If they get knocked down, they need to learn to get back up again and keep on engaging in the process. It hurts a lot less if one realizes that it’s not one’s own personal worth that’s at stake: it’s the truth that is slowly coming to light as we go along. That’s the experience — and the thrill of the chase that it actually entails — that constitutes the deeper part of education. That’s what the true Socratic method was — and still should be — about.

Two modes of learning are especially prevalent in colleges today — the lecture course and the seminar. In the lecture, the students are, for the most part, passive recipients of information. The agent is the lecturer, who delivers course content in a one-way stream. It’s enshrined in hundreds of years of tradition, and it has its place. But a student who never moves beyond that will emerge more or less free of actual education. The seminar, on the other hand, is about the dialectic — the back-and-forth of the process. It requires the student to become, for a time, the teacher, to challenge authority not because it is authority but because truth has the higher claim. Here disagreement is not toxic: it’s the lifeblood of the process, and it’s life-giving for the student.

At Scholars Online, we have chiefly chosen to rely on something like the seminar approach for our live chats. We have, we think, very capable teachers, and there are some things that they need to impart to the students. But in large measure, these can be conveyed by web-page “lectures”, which a student can read on his or her own time. The class discussion, however, is reciprocal, and that reciprocity of passionately held ideas is what fires a true love of learning. It’s about the exchange — the push and pull, honoring the truth first and foremost. It may come at a cost: in Socrates’ case it certainly did. But it’s about awakening the life of the mind, without which there is no education: schooling without real engagement merely produces drones.

Failure as a good thing

Friday, March 11th, 2016

People tout many different goals in the educational enterprise, but not all goals are created equal. They require a good deal of sifting, and some should be discarded. Many of them seem to be either obvious on the one hand or, on the other, completely wrong-headed (to my way of thinking, at least).

One of the most improbable goals one could posit, however, would be failure. Yet failure — not as an end (and hence not a final goal), but as an essential and salutary means to achieving a real education — is the subject of Jessica Lahey’s The Gift of Failure (New York: HarperCollins, 2015). In all fairness, I guess I was predisposed to like what she had to say, since she’s a teacher of both English and Latin, but I genuinely think that it is one of the more trenchant critiques I have read of modern pedagogy and the child-rearing approaches that have helped shape it, sometimes with the complicity of teachers, and sometimes in spite of their best efforts.

Christe first drew my attention to an extract of her book at The Atlantic here. When we conferred after reading it, we discovered that we’d both been sufficiently impressed that we’d each ordered a copy of the book.

Lahey calls into question, first and foremost, the notion that the student (whether younger or older) really needs to feel that he or she is doing well at all stages of the process. Feeling good about your achievement, whether or not it really amounts to anything, is not in fact a particularly useful thing. That seems common-sensical to me, but it has for some time gone against the grain of a good deal of teaching theory. Instead, Lahey argues, failing — and in the process learning to get up again, and throw oneself back into the task at hand — is not only beneficial to a student, but essential to the formation of any kind of adult autonomy. Insofar as education is not merely about achieving a certain number of grades and scores, but about the actual formation of character, this is (I think) spot-on.

A good deal of her discussion is centered around the sharply diminishing value of any system of extrinsic reward — that is, anything attached secondarily to the process of learning — be it grades on a paper or a report card, a monetary payoff from parents for good grades, or the often illusory goal of getting into a good college. The only real reward for learning something, she insists, is knowing it. She has articulated better than I have a number of things I’ve tried to express before. (On the notion that the reason to learn Latin and Greek was not as a stepping-stone to something else, but really to know Latin and Greek, see here and here. On allowing the student freedom to fail, see here. On grades, see here.) Education should be — and arguably can only be — about learning, not about grades, and about mastery, not about serving time, passing tests so that one can be certified or bumped along to something else. In meticulous detail, Lahey documents the uselessness of extrinsic rewards at almost every level — not merely because they fail to achieve the desired result, but because they drag the student away from engagement in learning, dull the mind and sensitivity, and effectively promote the ongoing infantilization of our adolescents — making sure that they are never directly exposed to the real and natural consequences of either their successes or their failures. Put differently, unless you can fail, you can’t really succeed either.

Rather than merely being content to denounce the inadequacies of modern pedagogy, Ms. Lahey has concrete suggestions for how to turn things around. She honestly reports how she has had to do so herself in dealing with her own children. The book is graciously honest, and I enthusiastically recommend it to parents and teachers at every level. If I haven’t convinced you this far, though, at least read the excerpt linked above. The kind of learning she’s talking about — engaged learning tied to a real love of learning, coupled with the humility to take the occasional setback not as an invalidation of oneself but as a challenge to grow into something tougher — is precisely what we’re hoping to cultivate at Scholars Online. If that’s what you’re looking for, I hope we can provide it.