The Politics of Perplexity in Twenty-First Century America

July 17th, 2020

In the context of twenty-first century America, “politics” is perhaps one of the most curiously irritating words in the English language. I know from personal experience – whether from observing others, or from paying attention to myself – that there is a visceral reflex to feel something between annoyance and disgust upon hearing the word. If politics rears its ugly head, you may think something along the lines of “I’ve had enough of that, thank you!” before rapidly extricating yourself from an unwanted intrusion into an otherwise perfect day. Alternatively, I suspect many of us know people who hear the word “politics” or some related term and can immediately launch into an ambitious lecture on what is wrong and what should be done that somehow promises (implausibly) to solve all our social, political, and economic problems in one fell legislative swoop. We’re surrounded by bitter disputes – online and on television, in print and in person – over political issues, to the extent that it can be hard to stomach contemplating (much less discussing) politics without feeling a little irritated, even disgusted, with both our neighbors and ourselves.

These powerful emotional reactions should give us some pause for reflection. In theory, if not always in practice, the United States of America is a democratic republic, ruled by representative officials in the name of its citizenry. Even without considering the matter deeply, it should be clear to us that such a government cannot function if its citizens are entirely disengaged, as radical factions across the political spectrum will be left to do the politicking on our behalf. Whether we like it or not, our nation’s political life will likely remain interested in us even if we are uninterested in return. We might as well make the best of it, and get down to the business of figuring out where, exactly, we went wrong, and what might be done to repair the damage.

Since the early twentieth century, the predominant approach to teaching American students about their form of government has been what is known as political science. This perspective is primarily (though not exclusively) concerned with educating students about the practical mechanics of their government and the political dynamics of the American electorate – in short, the branches of the United States government, their differing roles and jurisdictions, group behavioral dynamics, and so forth. All of these political institutions and phenomena are generally treated as abstractions that can be measured and predicted with some degree of accuracy using scientific methodology and data analysis.

The meaning of political science must be carefully qualified and defined. Science is derived from the Latin scientia, or knowledge. The majority of ancient, medieval, and early modern political thinkers used the term political science to refer to the study of politics as a domain of the humanities. They studied politics in light of inquiries in philosophy and history: they did not, as a general rule, conceive of the art of government as something that could be understood as an institutional abstraction that operated independently of the deepest human needs and desires (such as for law and virtue), or the eternal problems that confront every human individual and society (what is justice and truth, and how do we find them?). Above all else, classical political science aimed at cultivating self-governing (moderate) individuals who would be capable of wielding political power responsibly while refraining from tyrannical injustice. Hence, in the conclusion of Plato’s Republic, Socrates teaches Glaucon that the highest end of political science is to teach the soul to bear “all evils and all goods… and practice justice with prudence in every way.” (Republic, Book X, 621c).

Modern political science operates on an entirely different basis and different assumptions about human beings and political life. It begins with the premise that human beings, like all natural things, are subject to mechanical laws that render them predictable. Once these laws are understood, the political life of human beings can be mastered and directed towards progress (understood as material comforts and technological innovation) to a degree that was never remotely possible in prior eras of human history. This view of political science emerged first among certain thinkers of the Enlightenment, and became a close companion to the development of the entire field of social science in the late nineteenth century. Both modern political and social science emerged from a common intellectual project that aimed to apply modern scientific methods and insights to the study of very nearly every aspect of human communal life – economics, social dynamics (sociology), religion, sexuality, psychology, and politics, among others.

This application of human technical knowledge to endemic social problems, economic systems, and political institutions (among other domains of human life) was expected to deliver unprecedented advances that would mirror and eventually surpass the tremendous technological and intellectual achievements of the Scientific Revolution. Max Weber, one of the most brilliant social-scientific minds of the early twentieth century, fully expected that the complementary discoveries of both natural and social science would ensure that human “progress goes on ad infinitum.” For many intellectuals in Europe and the United States in Weber’s day, human social and political life had become like a machine that could be kept in a perpetual state of inexorable forward motion. This view remains a powerful one within certain spheres of the social sciences and general public, and has been articulated perhaps most eloquently in the public sphere by the Harvard psychologist Steven Pinker, among others, even if it is gradually declining in popularity among the greater mass of the American citizenry.

Academically, this modern scientific approach to understanding American government has many apparent advantages that explain both its widespread acceptance and its continued influence within the academy. For one, it enables teachers to explain the structure of U.S. government in terms of technical mechanics that most students can master intuitively, regardless of their particular political views and prejudices. Similarly, it relieves teachers and students of having to focus on tiresome historical minutiae or obscure philosophical debates that bear no obvious relevance to contemporary issues: students can study their government based on recent experiences that are more easily comprehensible for them than those of, say, two hundred years ago. Above all else, contemporary political science treats the study of American government in utilitarian and mechanistic terms, thereby minimizing occasions for awkwardly passionate or unsolvable confrontations over thorny issues that touch on moral as well as historical and philosophical complexities. What many students will learn from this education is that the American form of government is perfectly reasonable, orderly, and balanced, with predictable mechanics that ensure its stability and perpetuity; in short, it makes sense. And not only does the American government operate like a well-oiled machine, but it also leaves individuals tremendous room to define themselves and act within an ever-expanding horizon of freedoms. Government exists mainly to resolve practical matters of policy and administration, leaving moral questions largely to the domain of the private sphere.

Many may rightly ask: if this model is true, then why does the American government function so poorly in practice? And why are Americans so remarkably inept at finding common ground for resolving pressing political issues? Indeed, there are alarming trends that should inspire us to doubt the viability of this interpretation. Polling conducted over the past decade consistently shows that Americans of all political persuasions are increasingly distrustful both of their governments and of their fellow citizens who hold opposing views. Rigid ideological voices have emerged in both the liberal and conservative camps, insisting that dialogue is impossible, that compromise on any issue is a sign of political weakness, and that a candidate’s quality should be determined by ideological considerations rather than by competence and experience. As electoral politics have devolved into brutal slugging matches between increasingly extreme views, the actual levers of political power have gradually shifted into the hands of a theoretically subordinate but frequently unaccountable and inefficient bureaucracy.

The fruit of this widespread culture of distrust has been the breakdown of civic life and political order amidst frustration and mutual recrimination throughout American society. Many are understandably frustrated with a system of government that seems unable or unwilling to fulfill its most basic functions. For that matter, generations of young Americans have now grown up in the shadow of a dysfunctional government that leaves them with little incentive for acting as responsible and engaged citizens. It should be no wonder that there are now voices asking questions such as the following: if our current Constitution is a product of eighteenth century political circumstances and ideals, should we not perhaps craft a new political system that is better adapted to our contemporary needs and values?

Perhaps these are all passing fads, and some bearable equilibrium will return in short order. I am doubtful that such an event is likely in the near future. Recent events have shown that contemporary Americans of all political stripes are divided not merely by petty partisan differences over policy decisions and electoral contests, but more importantly by fierce disagreements over fundamental questions about the nature of political life and American civic identity, and we are not remotely close to resolving these disputes. What is it to be a human? What is freedom? What is justice? We do not have common answers for any of these fundamental questions, nor do we seem (at least, as of this writing) to have a clear direction for amicably resolving these disputes in the public sphere.

Yet these disputes, however unpleasant and acrimonious, provide us with a hint of where, exactly, we may have gone wrong. Far from liberating us from antiquated concerns, our modern political education (and the novel mode of thought that created it) may lie at the heart of our perplexity. Modern political science has worked tremendous wonders in allowing us to track the chimerical shifting of public whims in opinion polls or understand the psychology of group dynamics, but it has also clouded our ability to grapple with and comprehend problems that are part of the permanent condition of our species. Political institutions and policy alone cannot solve America’s most vexing problems. And we should remember that representative government depends ultimately on the qualities of both officeholders and voters to function properly; institutions abstracted from the body politic cannot rule themselves. Our Constitution, as John Adams observed in 1798, “was made only for a moral and religious people. It is wholly inadequate to the government of any other.” Adams thought that republican government could not exist without some degree of self-government among the citizenry, or else it must devolve into a mass of petty tyrants; we are, perhaps, in the process of proving his point for him.

I suspect that the root of modern American political dissatisfaction lies not so much in our continued subjection to an apparently antiquated form of government, nor merely in our frustration with the peculiar idiocies of our political parties, but rather in our own failure to accurately comprehend and utilize our form of government. In an era of change and tumult, we would do well, as the American novelist and essayist John Dos Passos put it in 1941, to “look backwards as well as forwards” as we attempt to extricate ourselves from our current political predicament. While we may face many distinctly twenty-first century problems in certain respects, our most pressing problems – justice, love, truth, goodness, and so forth – are as old as the human species. We live in troubled times: but so, too, did prior generations of Americans. I hope that, if we can find it in ourselves to turn back and reconsider the first principles of American government, with their deep roots in English political life and philosophy, we may yet discover a firm foundation that can draw us out of our current perplexity, and enable us to engage more fully in a life of dutiful, informed, and responsible citizenship that can be passed on to future generations.

Unprecedented?

July 11th, 2020

I have to date remained silent here about the COVID-19 pandemic, because for the most part I haven’t had anything constructive to add to the discussion, and because I thought that our parents and students would probably prefer to read about something else. I also try, when possible, to discuss things that will still be of interest three or even ten years from now, and to focus largely on issues of education as we practice it. 

Still, COVID-19 has obviously become a consuming focus for many—understandably, given the extent of the problem—and what should be managed in the most intelligent way possible according to principles of epidemiology and sane public policy has become a political football that people are using as further grounds to revile each other. I’m not interested in joining that game. Knaves and cynical opportunists will have their day, and there’s probably not much to do that will stop them—at least nothing that works any better than just ignoring them.

But there is one piece of the public discourse on the subject that has shown up more and more frequently, and here it actually does wander into a domain where I have something to add. The adjective that has surfaced most commonly in public discussions about the COVID-19 epidemic with all its social and political consequences is “unprecedented”. The disease, we are told by some, is unprecedented in its scope; others lament that it’s having unprecedented consequences both medically and economically. The public response, according to others, is similarly unprecedented: for some that’s an argument that it is also unwarranted; for others, that’s merely a sign that it’s appropriately commensurate with the scope of the unprecedented problem; for still others, it’s a sign that it’s staggeringly inadequate.

As an historian I’m somewhat used to the reckless way in which the past is routinely ignored or (worse) subverted, according to the inclination of the speaker, in the service of this agenda or that. I’ve lost track of the number of people who have told me why Rome fell as a way of making a contemporary political point. But at some point one needs to raise an objection: seriously—unprecedented? As Inigo Montoya says in The Princess Bride, “You keep using that word. I do not think it means what you think it means.” To say that anything is unprecedented requires it to be contextualized in history—not just the last few years’ worth, either.

In some sense, of course, every happening in history, no matter how trivial, is unprecedented—at least if history is not strictly cyclical, as the Stoics believed it was. I’m not a Stoic on that issue or many others. So, no: this exact thing has indeed never happened before. But on that calculation, if I swat a mosquito, that’s unprecedented, too, because I’ve never swatted that particular mosquito before. This falls into Douglas Adams’ useful category of “True, but unhelpful.” Usually people use the word to denote something of larger scope, and they mean that whatever they are talking about is fundamentally different in kind or magnitude from anything that has happened before. But how different is COVID-19, really?

The COVID-19 pandemic is not unprecedented in its etiology. Viruses happen. We even know more or less how they happen. One does not have to posit a diabolical lab full of evil gene-splicers to account for it. Coronaviruses are not new, and many others have apparently come and gone throughout human history, before we even had the capacity to detect them or name them. Some of them have been fairly innocuous, some not. Every time a new one pops up, it’s a roll of the dice—but it’s not our hand that’s rolling them. Sure: investing in some kind of conspiracy theory to explain it is (in its odd way) comforting and exciting. It’s comforting because it suggests that we have a lot more control over things than we really do. It’s exciting, because it gives us a villain we can blame. Blame is a top-dollar commodity in today’s political climate, and it drives more and more of the decisions being made at the highest levels. Ascertaining the validity of the blame comes in a distant second to feeling a jolt of righteous indignation. The reality is both less exciting and somewhat bleaker: we don’t have nearly as much control as we’d like to believe. These things happen and will continue to happen without our agency or design. Viruses are fragments of genetic material that have apparently broken away from larger organic systems, and from there they are capable of almost infinite, if whimsical, mutation. They’re loose cannons: that’s their nature. That’s all. Dangerous, indisputably. Malicious? Not really.

The COVID-19 pandemic is not unprecedented in its scope or lethality. Epidemics and plagues have killed vast numbers of people over wide areas throughout history. A few years ago, National Geographic offered a portrait of the world’s most prolific killer. It was not a mass murderer, or even a tyrant. It was the flea, and the microbial load it carried. From 1348 through about 1352, the Black Death visited Europe with a ferocity that probably was unprecedented at the time. Because records from the period are sketchy, it’s hard to come up with an exact count, but best estimates are that it killed approximately a third of the population of Europe within that three-to-four-year period. The disease continued to revisit Europe approximately every twenty years for some centuries to come, each time killing especially people of childbearing age, with demographic consequences that vastly exceed what a sheer count of losses would suggest. In some areas whole cities were wiped out, and the worldwide death toll may have run as high as two hundred million: the extent of its destruction throughout parts of Asia has never been fully ascertained. Smallpox, in the last century of its activity (1877-1977), killed approximately half a billion people. The 1918 Spanish influenza epidemic killed possibly as many as a hundred million. Wikipedia here lists over a hundred similar catastrophes caused by infectious diseases of one sort or another, each of which had a death toll of more than a thousand; it lists a number of others where the count cannot even be approximately ascertained.

Nor is the COVID-19 pandemic unprecedented in its level of social upheaval. The Black Death radically changed the social, cultural, economic, and even the religious configuration of Europe almost beyond recognition. After Columbus, Native American tribes were exposed to Old World disease agents to which they had no immunities. Many groups were reduced to less than a tenth of their former numbers. Considering these to be instances of genocide is, I think, to ascribe far more intentionality to the situation than it deserves (though there seem to have been some instances where it was intended), but the outcome was indifferent to the intent. The Spanish Influenza of 1918, coming as it did on the heels of World War I, sent a world culture that was already off balance into a deeper spiral. It required steep curbs on social activity to check its spread. Houses of worship were closed then too. Other public gatherings were forbidden. Theaters were closed. Even that was not really unprecedented, though: theaters had been closed in Elizabethan London during several of the recurrent visitations of the bubonic plague. The plot of Romeo and Juliet is colored by a quarantine. Boccaccio’s Decameron is a collection of tales that a group of people told to amuse themselves while in isolation, and Chaucer’s somewhat derivative Canterbury Tales are about a group of pilgrims heading for the shrine of St. Thomas à Becket to thank the martyr who had helped them when they were sick. People have long known that extraordinary steps need to be taken, at least temporarily, in order to save lives during periods of contagion. It’s inconvenient, it’s costly, and it’s annoying. It’s not a hoax, and it’s not tyrannical. It’s not novel.

So no, in most ways, neither the appearance of COVID-19 nor our responses to it are really unprecedented. I say this in no way to minimize the suffering of those afflicted with the disease, or those suffering from the restrictions put in place to curb its spread. Nor do I mean to trivialize the efforts of those battling its social, medical, or economic consequences: some of them are positively heroic. But claiming that this is all unprecedented looks like an attempt to exempt ourselves from the actual flow of history, and to excuse ourselves from the very reasonable need to consult the history of such events in order to learn what we can from them—for there are, in fact, things to be learned.

It is perhaps unsurprising that people responded to the plagues and calamities of the past, then as now, primarily out of fear. Fear is one of the most powerful of human motivators, but it is seldom a wise counselor. There have been conspiracy theories before too: during the Black Death, for example, some concluded that the disease was due to witchcraft, and so they set out to kill cats, on the ground that they were witches’ familiars. The result, of course, was that rats (the actual vectors for the disease, together with their fleas) were able to breed and spread disease all the more freely. Others sold miracle cures to credulous (and fearful) populations; these of course accomplished nothing but heightening the level of fear and desperation.

There were also people who were brave and self-sacrificing, who cared for others in these trying times. In 1665, the village of Eyam in Derbyshire quarantined itself when the plague arrived. The villagers knew what they could expect, and they were not mistaken: a large part of the village, by some estimates a majority, perished. But their decision saved thousands of lives in neighboring villages. Fr. Damien De Veuster ministered to the lepers on Molokai before succumbing to the disease himself: he remains an icon of charity and noble devotion and is the patron saint of Hawaii.

The human race has confronted crisis situations involving infectious diseases, and the decisions they require, before. They are not easy, and sometimes they call for self-sacrifice. There is sober consolation to be wrung from the fact that we are still here, and that we still, as part of our God-given nature, have the capacity to make such decisions—both the ones that protect us and those sacrificial decisions we make to save others. We will not get through the ordeal without loss and cost, but humanity has gotten through before, and it will again. We are not entirely without resources, but neither are we wholly in control. We need to learn from what we have at our disposal, marshal our resources wisely and well, and trust in God for the rest.

To Zoom or not to Zoom

May 29th, 2020

I seem to be making a lot of decisions lately: to teach AP courses or not (not), to seek accreditation or not (seek; successfully, we may add), and to use video or not for my class sessions (jury still out).

Our latest home page notes that Scholars Online education is grounded, rigorous, and thorough, and that by “grounded”, we mean that we welcome constructive innovation, but do not seek novelty for its own sake. We teach traditional subjects using time-tested methods.

In other words, we use technological solutions where they are appropriate, and we recognize that not all technology is useful in a given situation for discovering the truth.

Scholars Online will be experimenting with Zoom for some of its courses in 2020-2021, but will retain our own chat for others. Math and language courses already use Skype or WizIQ and will be moving to Zoom where the teachers choose to use it. But we have some serious and perhaps not obvious concerns about moving all our courses to video format. I’ve been thinking about this a lot, and here are some of my current observations, some based on my own experience, some relying on reports from others.

We’ve been using Zoom for our church services since lockdown began here in Washington in mid-March, and inevitably, we have also been taking this opportunity to evaluate its use for our Scholars Online courses. We’ve had a chance to identify both some interesting advantages and some worrying disadvantages. (I also used Webex and Skype business platforms for over a decade at work, so some of my observations are platform independent and apply to any video conferencing method).

But we’ll start with the most recent experiences with Zoom:

Last Sunday, the entire Zoom platform went down across the country. We all had to wait while Zoom tried to recover its servers not just for us, but for the rest of the USA. Most of our church members were unable to log into the church service until the last minute.

Zoom focuses on a single speaker, so two people cannot talk at once without it becoming confused. For our church services, we have a designated individual each Sunday presenting the congregational responses for our liturgy. Our Zoom-hosted coffee hour adult education discussions descend to audio beeps and video jerks when two people try to talk at the same time in response to a question.

Members using different platforms have different display options, which makes it hard for a person on a computer to help a person on a tablet find a particular option and set it so that everyone can hear properly.

Even in our reasonably well-wired urban region, our vicar and responder regularly slow down, glitch, and become unintelligible when they exceed their bandwidth or internet traffic clogs up. It’s distracting to watch people talk and move as though they were under water; it’s hard to understand what they say; and they often don’t realize their presentation has been garbled until it is too late to go back and recover the lost moment.

So here are the issues I have in considering a move from the SO Chat software to Zoom in particular and video in general:

1. Zoom was designed to support business meetings and webinars, not group discussion sessions. Zoom would be fine if we were doing lecture demonstrations and calling on students one at a time, and for some courses (math, French), it may work well if the instructor has structured a class session that way. But for history, literature, and even science, where we depend on seminar-type discussions that allow students to participate freely, Zoom can be more of an impediment than an aid. In some critical ways, our text chat allows for more interactive discussion than Zoom does. It allows every member to present information on an equal footing, and when students are involved in the material, that dynamic can be pretty exciting. In chat, if five students talk “at the same time”, their remarks all make it into the chat window without confusion, and we can sort and address them individually. Everyone gets heard, and no one gets stepped on — there is no way to interrupt another student, and no need for the teacher to force students to be silent unless called on, except in extreme disciplinary situations. I may be particularly sensitive to this, since I have experienced being talked over in a video conference session so that my voice was never heard (and the meeting host never noticed that my silence was not voluntary). That can’t happen in our chat.

2. We have a number of families who have more than one student in class at the same time, where the noise from competing audio sessions can create chaos for students in the same room. This issue has even been in the news lately as public schools moved online and parents had to deal with siblings using computers in the same room. But quiet is a necessary condition for reflection and the formulation of coherent expression. Our experience over the last two decades, and especially feedback from our alumni at college or graduate school, has made us realize that the silence of chat helps students engage with the material in ways audio input disrupts, and that constant writing develops precise self-expression in ways off-the-cuff impulsive spoken responses cannot. If my own environment isn’t quiet during class, it won’t disrupt others in the class. In our chat, I can participate even if someone is running the washing machine in the same room, or there are booming announcements from the airport speaker while I wait to board a plane, or I am in a car on the road with five other voluble family members (and yes, those are all real examples). I may have to block the noise out, but my classmates do not, and I can still enter the discussion at will, without subjecting them to my own distracting environment.

3. Written text allows students to review what was just said, read it closely, and “listen” to it more carefully. In a video presentation, if a student comes in late, or misses a minute, the material covered in that period is lost for the rest of the session. It may be captured in a movie uploaded to YouTube or another platform for later review, but it is no longer available during the discussion. This creates a huge temptation for the late student to simply skip the session altogether and catch the upload: that is, to become a passive viewer of a pre-recorded session instead of an active participant in the discussion. In our chat, if a student comes in late, the entire chat is available from its start up to the moment the student enters, and if one misses a point, he or she can scroll back and find exactly what was said. The student can come up to speed, and jump into the discussion, without requiring the teacher to interrupt the discussion and recap for that student.

4. We also know that many of our students have only low-bandwidth access, and (as mentioned) we have a number of families who have more than one student in class at the same time: bandwidth becomes a critical factor. We’ve seen that even Zoom, which is the cutting-edge technology available to us, slows down, cuts out, and even shuts down when it is overloaded. With our chat, I don’t have to worry that another family member is also online, teaching or taking a high-bandwidth course that will slow down my internet access. I can attend class pretty much from anywhere there is a wireless or data connection.

There are some other more subtle things that we’ve noticed both in Scholars Online chats, in using audio-visual meetings in business environments, and in reading recent news reports of teachers moving to online video methods that give us pause.

One is my experience with using an international software product, even as a very large company client. Using Zoom puts us at the mercy of a third party with many other (much bigger) customers who will influence its development. Zoom’s focus will be on meeting the requirements of the majority of its customers, especially the larger ones, at its own pace and on its own schedule. Zoom may choose to drop features that we depend on, or impose features, especially extra security, that prevent students from attending until they have received updated instructions, which puts a greater burden on teachers to stay current with a moving platform. They can choose to revise deployed applications at any moment for their own reasons. My most recent Zoom meeting was delayed for fifteen minutes because one important attendee had to download a required Zoom update before he could log in. With the Scholars Online chat, we control the server and the software, and while our dedicated server is supported by a third party, the NuOZ technicians built it for us to our specifications; our infrastructure changes only when we understand and have agreed to recommended updates, and we can coordinate changes to the MOODLE and our website and test them first.

Another issue is hosting recordings for course sessions. Our chat logs remain on our dedicated server and are unavailable (short of court order) to anyone outside Scholars Online without our permission. We have built security that meets both US and European requirements for personally identifiable information. But the capacity and software required to support streaming recorded Zoom sessions is too expensive for us; we will need to look at how to host these on YouTube, which means coming up with security and access controls (and maintaining them on a per-class basis as required by FERPA regulations) as well as putting information on a platform whose ultimate access by its many technicians we do not control. Access control on YouTube is a technical configuration issue with a solution, but it is an additional burden for our teachers and administrative staff, and will require students to have YouTube accounts that will track their access, and not just to Scholars Online resources. Some parents may be comfortable with this, but how will we handle a situation when we have a class where one student cannot have access to his class videos?

Less obvious, perhaps, but an important factor in preferring text to video is simply that text chat levels the playing field. My students don’t have to worry about whether they are dressed well or poorly, or how their house looks to others. This is not a trivial concern for students who feel already at a disadvantage, or that they will be judged by their appearance or their surroundings. We are starting to learn from the public school shift into online teaching that over a third of the students simply stopped coming to class because they lacked the technology or didn’t want others to see their home environment. Using a low-bandwidth text chat helps reduce economic and social distinctions and barriers for our students, and puts the focus where it belongs: on the discussion, in which everyone can participate.

So perhaps the most important factor is that text chat promotes class community in a way that video does not. I know that students sometimes feel uneasy when they cannot see the teacher or each other, because they use visual appearance, speech accent, and intonation to make judgments. But not being able to make certain kinds of distinctions about each other automatically (or at least not being constantly reminded visually of them) makes for a different kind of relationship. I’m not sure that we would have the same participation in chat if students could tell at a glance each other’s racial or ethnic background or age, or whether the teacher is frowning or smiling. Scholars Online collects no ethnic or racial information as part of the enrollment process. Unless our students make it a point in describing themselves in their MOODLE profiles, we don’t know whether they are white, African-American, Asian, Latino, or Native American, and we’ve learned that names are not a reliable guide here, even for gender (information we do collect). Most of our classes have students with a two- to three-year age range; some have adult students. In chat, they are all more or less equal. I don’t think we could achieve anything close to the same level of equality of discussion if our younger students were constantly reminded that some of their fellow classmates are much older, or sometimes, if they could see the expression on my face as they venture a response! But if I really want my students to learn to think for themselves, they have to be comfortable enough to venture the uncommon or unwelcome observation and speak the truth as they see it.

We realize that some students prefer visual presentation of material. We can and do use images, short movies, animations, and even interactive exercises during chat, since anything a browser supports, we can direct students to use during a chat session, and we can incorporate everything except complex simulations and whiteboard in chat itself. We’ve accepted suggestions from teachers and students to support different modes of mathematical symbol input and implemented these, and we even create our own graphics and videos to support course presentations (Scholars Online does have its own YouTube channel). This is an area where we can improve our chat presentation abilities, and we are working on it.

But we need to weigh the presentation and some personal connection advantages of video against what we will lose by moving to a video platform: a certain kind of focus on the material itself rather than the means of presentation, a level sense of community with others in the course that does not depend on identification with ethnic or economic or racial or age cohorts, and constant writing practice that requires disciplined thought from the students. We will continue to trust our teachers to make this choice for their own courses, and support them as best we can.

We’d love your feedback to help our teachers make this decision.

Mr. Spock, Pseudo-scientist

April 15th, 2020

I’m one of those aging folks who still remember the original run of Star Trek (no colon, no The Original Series or any other kind of elaboration — just Star Trek). It was a groundbreaking show, and whether you like it or not (there are plenty of reasons to do both), it held out a positive vision for the future, and sketched a societal ethos that was not entirely acquisitive, and not even as secular and materialistic as later outings in the Star Trek franchise. The officers of the Enterprise were not latter-day conquistadors. They were genuine explorers, with a Prime Directive to help them avoid destroying too many other nascent cultures. (Yes, I know: they violated it very frequently, but that was part of the point of the story. Sometimes there was even a good reason for doing so.)

It also offered the nerds among us a point of contact. Sure, Captain Kirk was kind of a cowboy hero, galloping into situations with fists swinging and phasers blazing, and, more often than not, reducing complex situations to polar binaries and then referring them either to fisticuffs or an outpouring of excruciatingly impassioned rhetoric. Dr. McCoy, on the other hand, was the splenetic physician, constantly kvetching about everything he couldn’t fix, and blaming people who were trying to work the problem for not being sensitive enough to be as ineffectual as he was. But Mr. Spock (usually the object of McCoy’s invective) was different. He was consummately cool, and he relied upon what he called Logic (I’m sure it had a capital “L” in his lexicon) for all his decision-making. He was the science officer on the Enterprise, and also the first officer in the command structure. Most of the more technically savvy kids aspired to be like him.

It was an article of faith that whatever conclusions Spock reached were, because he was relying on Logic, logical. They were the right answer, too, unless this week’s episode was explicitly making a concession to the value of feelings over logic (which happened occasionally, but not often enough to be really off-putting), and they could be validated by science and reason. You can’t argue with facts. People who try are doomed to failure, and their attempt is at best a distraction, and often worse. 

Up to that point, I am more or less on board, though I was always kind of on the periphery of the nerd cluster, myself. I suspected then (as I still do) that there are things that logic (with an upper-case or a lower-case L) or mathematics cannot really address. Certainly not everything is even quantifiable. But it was the concept of significant digits that ultimately demolished, for me, Mr. Spock’s credibility as a science officer. When faced with command decisions, he usually did reasonably well, but when pontificating on mathematics, he really did rather badly. (Arguably he was exactly as bad at it as some of the writers of the series. Small wonder: see the Sherlock Holmes Law, which I’ve discussed here previously.)

The concept of significant digits (or figures) is really a simple one, though its exact specifications involve some fussy details. Basically it means that you can’t make your information more accurate merely by performing arithmetic on it. (It’s more formally explained here on Wikipedia.) By combining a number of things that you know only approximately and doing some calculations on them, you’re not going to get a more accurate answer: you’re going to get a less accurate one. The uncertainty of each of those terms or factors will increase the uncertainty of the whole.
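To make the point concrete, here is a minimal Python sketch of my own (not anything from the show or the Wikipedia article; the round_sig helper is a hypothetical convenience, not a standard library function):

```python
# Arithmetic on rough inputs yields rough outputs, no matter how many
# digits the computer happily prints afterward.
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# Two measurements, each known only to 2 significant figures:
length = 2.3   # really anywhere from 2.25 to 2.35
width = 4.1    # really anywhere from 4.05 to 4.15

area = length * width
print(area)                      # prints many digits, most of them spurious
print(round_sig(area, 2))        # 9.4 -- all the precision the inputs justify

# The extremes the inputs actually allow:
print(2.25 * 4.05, 2.35 * 4.15)  # 9.1125 9.7525 -- even the tenths digit wobbles
```

The product is only as trustworthy as its least certain factor; everything beyond that is decoration.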

So how does Spock, for all his putative scientific and logical prowess, lose track of this notion, essential to any kind of genuine scientific thinking? In the first-season episode “Errand of Mercy”, he has a memorable exchange with Kirk: 

Kirk: What would you say the odds are on our getting out of here?

Spock: Difficult to be precise, Captain. I should say approximately 7,824.7 to 1.

Kirk: Difficult to be precise? 7,824 to 1?

Spock: 7,824.7 to 1.

Kirk: That’s a pretty close approximation.

Spock: I endeavor to be accurate.

Kirk: You do quite well.

No, he doesn’t do quite well. He does miserably: he has assumed in his runaway calculations that the input values on which he bases this fantastically precise number are known to levels of precision that could not possibly be ascertained in the real world, especially in the middle of a military operation — even a skirmish in which all the participants and tactical elements are known in detail (as they are not here). The concept of the “fog of war” has something to say about how even apparent certainties can quickly degrade, in the midst of battle, into fatal ignorance. Most of the statistical odds for this kind of thing couldn’t be discovered by any rational means whatever.

Precision and accuracy are not at all the same thing. Yes, you can calculate arbitrarily precise answers based on any data, however precise or imprecise the data may be. Beyond the range of its significant digits, however, this manufactured precision is worse than meaningless: it conveys fuzzy knowledge as if it were better understood than it really is. It certainly adds nothing to the accuracy of the result, and only a terrible scientist would assume that it did. Spock’s answer is more precise, therefore, than “about 8000 to one”, but it’s less accurate, because it suggests that the value is known to a much higher degree of precision than it possibly could be. Even “about 8000 to one” is probably not justifiable, given what the characters actually know. (It’s also kind of stupid, in the middle of a firefight, to give your commanding officer gratuitously complex answers to simple questions: “Exceedingly poor” would be more accurate and more useful.)

This has not entirely escaped the fan community, of course: “How many Vulcans does it take to change a lightbulb?” is answered with, “1.000000”. This is funny, because it is, for all its pointless precision, no more accurate than “one”, and in no situations would fractional persons form a meaningful category when it comes to changing light bulbs. (Fractional persons might be valid measurements in other contexts — for example, in a cannibalistic society. Don’t think about it too hard.) 

Elsewhere in the series, too, logic is invoked as a kind of deus ex machina — something to which the writer of the episode could appeal to justify any decision Mr. Spock might come up with, irrespective of whether it was reasonable or not. Seldom (I’m inclined to say never, but I’m not going to bother to watch the whole series over again just to verify the fact) are we shown the operation of even one actual logical operation.

The structures of deductive reasoning (logic’s home turf) seldom have a great deal to do with science, in any case. Mathematical procedures are typically deductive. Some philosophical disciplines, including traditional logic, are too. Physical science, however, is almost entirely inductive. In induction, one generalizes tentatively from an accumulation of data; such collections of data are seldom either definitive or complete. Refining hypotheses as new information comes to light is integral to the scientific process as it’s generally understood. The concept of significant digits is only one of those things that helps optimize our induction.

Odds are a measure of ignorance, not knowledge. They do not submit to purely deductive analysis. For determinate events, there are no odds. Something either happens or it doesn’t, Mr. Spock notwithstanding. However impossibly remote it might have seemed yesterday, the meteorite that actually landed in your back yard containing a message from the Great Pumpkin written in Old Church Slavonic now has a probability of 100% if it actually happened. If it didn’t, its probability is zero. There are no valid degrees between the two.

Am I bashing Star Trek at this point? Well, maybe a little. I think they had an opportunity to teach an important concept, and they blew it. It would have been really refreshing (and arguably much more realistic) to have Spock occasionally say, “Captain, why are you asking me this? You know as well as I do that we can’t really know that, because we have almost no data,” or “Well, I can compute an answer of 28.63725, but it has a margin of error in the thousands, so it’s not worth relying upon.” Obviously quiet data-gathering is not the stuff of edge-of-the-seat television. I get that. But it’s what the situation really would require. (Spock, to his credit, often says, “It’s like nothing we’ve ever seen before,” but that’s usually just prior to his reaching another unsubstantiated conclusion about it.)

I do think, however, that the Star Trek promotion of science as an oracular fount of uncontested truth — a myth that few real scientists believe, but a whole lot of others (including certain scientistic pundits one could name) do believe — is actively pernicious. It oversells and undercuts the legitimate prerogatives of science, and in the long run undermines our confidence in what it actually can do well. There are many things in this world that we don’t know. Some of the things we do know are even pretty improbable.  Some very plausible constructs, on the other hand, are in fact false. I’m all in favor of doing our best to find out, and of relying on logical inference where it’s valid, but it’s not life’s deus ex machina. At best, it’s a machina ex Deo: the exercise of one — but only one — of our God-given capacities. Like most of them, it should be used responsibly, and in concert with the rest.

The Sherlock Holmes Law

April 3rd, 2020

I rather like Arthur Conan Doyle’s Sherlock Holmes stories. I should also admit that I’m not a hard-core devotee of mysteries in general. If I were, I probably would find the frequent plot holes in the Holmes corpus more annoying than I do. I enjoy them mostly for the period atmosphere, the prickly character of Holmes himself, and the buddy-show dynamic of his relationship with Doctor Watson. To be honest, I’ve actually enjoyed the old BBC Holmes series with Jeremy Brett at least as much as I have enjoyed reading the original works. There’s more of the color, more of the banter, and less scolding of Watson (and implicitly the reader) for not observing the one detail in a million that will somehow eventually prove relevant.

Irrespective of form, though, the Holmes stories have helped me articulate a principle I like to call the “Sherlock Holmes Law”, which relates to the presentation of fictional characters in any context. In its simplest form, it’s merely this:

A fictional character can think no thought that the author cannot.

This is so obvious that one can easily overlook it, and in most fiction it rarely poses a problem. Most authors are reasonably intelligent — most of the ones who actually see publication, at least — and they can create reasonably intelligent characters without breaking the credibility bank. 

There are of course some ways for authors to make characters who are practically superior to themselves. Almost any writer can extrapolate from his or her own skills to create a character who can perform the same tasks faster or more accurately. Hence though my own grasp of calculus is exceedingly slight, and my ability to work with the little I do know is glacially slow, I could write about someone who can look at an arch and mentally calculate the area under the curve in an instant. I know that this is something one can theoretically do with calculus, even if I’m not able to do it myself. There are well-defined inputs and outputs. The impressive thing about the character is mostly in his speed or accuracy. 

This is true for the same reason that you don’t have to be a world-class archer to describe a Robin Hood who can hit the left eye of a gnat from a hundred yards. It’s just another implausible extrapolation from a known ability. As long as nobody questions it, it will sell at least in the marketplace of entertainment. Winning genuine credence might require a bit more.

Genuinely different kinds of thinking, though, are something else. 

I name this principle for the Holmes stories because, though Mr. Holmes is almost by definition the most luminous intellect on the planet, he’s really not any smarter than Arthur Conan Doyle, save in the quantitative sense I just described. Doyle was not a stupid man, to be sure (though he was more than a little credulous — apparently he believed in fairies, based on some clearly doctored photographs). But neither was he one of the rare intellects for the ages. And so while Doyle may repeatedly assure us (through Watson, who is more or less equivalent to Doyle himself in both training and intelligence) that Holmes is brilliant, what he offers as evidence boils down to his ability to do two things. He can:

a) observe things very minutely (even implausibly so);

and

b) draw conclusions from those observations with lightning speed. That such inferences themselves strain logic rather badly is not really the point: Doyle has the writer’s privilege of guaranteeing by fiat that they will turn out to be correct.

Time, of course, is one of those things for which an author has a lot of latitude, since books are not necessarily (or ever, one imagines) written in real time. Even if it takes Holmes only a few seconds to work out a chain of reasoning, it’s likely that Doyle himself put much more time into its formation. While that probably does suggest a higher-powered brain, it still doesn’t push into any genuinely new territory. Put in computer terms, while a hypothetical Z80 chip running at a clock speed of 400MHz would be a hundred times faster than the 4MHz one that powered my first computer back in 1982, it would not be able to perform any genuinely new operations. It would probably be best for running CP/M on a 64K system — just doing so really quickly.

It’s worth noting that sometimes what manifests itself chiefly as an increase in speed actually does represent a new kind of thinking. There is a (perhaps apocryphal) story about Carl Friedrich Gauss (1777-1855), who, when he was still in school, was told to add the numbers from one to a hundred as punishment for some classroom infraction or other. As the story goes, he thought about it for a second or two, and then produced the correct result (5050), much to the amazement of his teacher. Gauss had achieved his answer not by adding all those numbers very rapidly, but by realizing that if one paired and added the numbers at the ends of the sequence, moving in toward the center, one would always get 101: i.e., 100 + 1 = 101; 99 + 2 = 101; and so on. There would then be fifty such pairs — hence 50 x 101: 5050.
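For anyone who wants to see the trick in action, a few lines of Python (my own illustration) confirm both the pairing and the closed form it suggests:

```python
# Gauss's pairing: 1+100, 2+99, ..., 50+51 -- fifty pairs, each summing to 101.
pairs = [(k, 101 - k) for k in range(1, 51)]
assert all(a + b == 101 for a, b in pairs)
print(50 * 101)              # 5050

# The general closed form the trick suggests: 1 + 2 + ... + n = n(n+1)/2
n = 100
print(n * (n + 1) // 2)      # 5050
print(sum(range(1, n + 1)))  # 5050 -- the brute-force check agrees
```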

A character cannot produce that kind of idea if the author doesn’t understand it first. It makes the depiction of superintelligent characters very tricky, and sometimes may even limit the portrayal of stupid ones who don’t think the way the rest of us do.

For readers, however, it is different. Literary works (fictional or not) can open up genuinely new kinds of ideas to readers. While a writer who has achieved a completely new way of thinking about some technical problem is less likely to expound it in fiction than in some sort of a treatise or an application with the patent office, fictional works often present ideas one has never considered before in the human arena. It need not be a thought that’s new to the world in order to be of value — it needs merely to be new to you.

Such a thought, no matter how simple it may seem once you see it, can blow away the confines of our imaginations. It’s happened to me at a few different stages in my life. Tolkien’s The Lord of the Rings awakened me when I was a teenager to something profound about the nature of language and memory. C. S. Lewis’ “The Weight of Glory” revolutionized the way I thought about other people. Tolstoy’s War and Peace laid to rest any notion I had that other people’s minds (or even my own) could ever be fully mapped. Aquinas’ Summa Theologica (especially Q. 1.1.10) transformed forever my apprehension of scripture. The list goes on, but it’s not my point to catalogue it completely here.

Where has that happened to you?

Reflections on Trisecting the Angle

March 12th, 2020

I’m not a mathematician by training, but the language and (for want of a better term) the sport of geometry has always had a special appeal for me. I wasn’t a whiz at algebra in high school, but I aced geometry. As a homeschooling parent, I had a wonderful time teaching geometry to our three kids. I still find geometry intriguing.

When I was in high school, I spent hours trying to figure out how to trisect an angle with compass and straightedge. I knew that nobody had found a way to do it. As it turns out, in 1837 (before even my school days) French mathematician Pierre Wantzel proved that it was impossible for the general case (trisecting certain special angles is trivial). I’m glad I didn’t know that, though, since it gave me a certain license to hack at it anyway. Perhaps I was motivated by a sense that it would be glorious to be the first to crack this particular nut, but mostly I just wondered, “Can it be done, and if not, why not?”

Trisecting the angle is cited in Wikipedia as an example of “pseudomathematics”, and while I will happily concede that any claim to be able to do so would doubtless rely on bogus premises or operations, I nevertheless argue that wrestling with the problem honestly, within the rules of the game, is a mathematical activity as valid as any other, at least as an exercise. I tried different strategies, mostly trying to find a useful correspondence between the (simple) trisection of a straight line and the trisection of an arc. My efforts, of course, failed (that’s what “impossible” means, after all). Had they not, my own name would be celebrated in rather different Wikipedia articles describing how the puzzle had finally been solved. It’s not. In my defense, I hasten to point out that I never was under the impression that I had succeeded. I just wanted to try, and either to learn how to do it or to understand why it could not be done.

My failed effort might, by many measures, be accounted a waste of time. But was it? I don’t think it was. Its value for me was not in the achievement but in the striving. Pushing on El Capitan isn’t going to move the mountain, either, but doing it regularly will provide a measure of isometric exercise. Similarly confronting an impossible mental challenge can have certain benefits.

And so along the way I gained a visceral appreciation of some truths I might not have grasped as fully otherwise.

In the narrowest terms, I came to understand that the problem of trisecting the angle (either as an angle or as its corresponding arc) is fundamentally distinct from the problem of trisecting a line segment, because curvature — even in the simplest case, which is the circular — fundamentally changes the problem. One cannot treat the circumference of a circle as if it were linear, even though it is much like a line segment, having no thickness and a specific finite extension. (The fact that π is irrational seems at least obliquely connected to this, though it might not be: that’s just a surmise of my own.)

In the broadest terms, I came more fully to appreciate the fact that some things are intrinsically impossible, even if they are not obvious logical contradictions. You can bang away at them for as long as you like, but you’ll never solve them. This truth transcends mathematics by a long stretch, but it’s worth realizing that failing to accomplish something that you want to accomplish is not invariably a result of your personal moral, intellectual, or imaginative deficiencies. As disappointing as it may be for those who want to believe that every failure is a moral, intellectual, or imaginative one, it’s very liberating for the rest of us.

Between those obvious extremes are some more nuanced realizations. 

I came to appreciate iterative refinement as a tool. After all, even if you can’t trisect the general angle with perfect geometrical rigor, you actually can come up with an imperfect but eminently practical approximation — to whatever degree of precision you require. By iterative refinement (interpolating between the too-large and the too-small solutions), you can zero in on a value that’s demonstrably better than the last one every time. Eventually, the inaccuracy won’t matter to you any more for any practical application. I’m perfectly aware that this is no longer pure math — but it is the very essence of engineering, which has a fairly prominent and distinguished place in the world. Thinking about this also altered my appreciation of precision as a pragmatic real-world concept.
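For the programmers among my readers, here is a minimal sketch of that refinement in Python (my own toy illustration, not anything prescribed by the classical problem itself). It brackets the unknown trisecting angle between a too-small and a too-large candidate and halves the bracket until tripling the candidate reproduces the original angle to whatever tolerance you demand. Notably, each individual step (bisecting an angle, copying an angle three times to check it) is itself a legitimate compass-and-straightedge operation; only the infinite sequence of them is out of bounds.

```python
import math

def trisect_approx(theta, tol=1e-12):
    """Approximate theta/3 by interpolating between a too-small and a
    too-large candidate, halving the bracket until a tripled candidate
    matches theta to within tol."""
    lo, hi = 0.0, theta              # lo trisects too little, hi too much
    while hi - lo > tol:
        mid = (lo + hi) / 2          # bisecting an angle is constructible
        if 3 * mid < theta:          # so is copying an angle three times
            lo = mid                 # tripled candidate falls short
        else:
            hi = mid                 # tripled candidate overshoots
    return (lo + hi) / 2

theta = math.radians(60)             # 60 degrees: the classic untrisectable case
print(math.degrees(trisect_approx(theta)))   # about 20.000000000
```

Each pass halves the remaining error, so forty-odd passes exhaust double-precision arithmetic. The exact value never arrives, but the point where the inaccuracy stops mattering arrives very quickly.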

A more general expression of this notion is that, while some problems never have perfect solutions, they sometimes can be practically solved in a way that’s good enough for a given purpose. That’s a liberating realization. Failure to achieve the perfect solution needn’t stop you in your tracks. It doesn’t mean you can’t get a very good one. It’s worth internalizing this basic truth. And only by wrestling with the impossible do we typically discover the limits of the possible. That in turn lets us develop strategies for practical work-arounds.

Conceptually, too, iterative refinement ultimately loops around on itself and becomes a model for thinking about such things as calculus, and the strange and wonderful fact that, with limit theory, we can (at least sometimes) achieve exact (if occasionally bizarre) values for things that we can’t measure directly. Calculus gives us the ability (figuratively speaking) to bounce a very orderly sequence of successive refinements off an infinitely remote backstop and somehow get back an answer that is not only usable but sometimes actually is perfect. The idea is robust enough that one classical definition of pi is exactly such a limit: the ratio of perimeter to diameter for inscribed regular polygons converges to pi as the number of sides grows without bound.
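Here is a small Python sketch of that backstop at work (my own illustration of Archimedes’ polygon-doubling scheme, which the limit definition echoes; none of this code is implied by the text above). Starting from a regular hexagon inscribed in a circle, each doubling of the number of sides requires nothing but square roots, and the ratio of perimeter to diameter climbs toward pi:

```python
import math

# A regular hexagon inscribed in a circle of radius 1 has side 1,
# so perimeter/diameter starts at exactly 3. Doubling the number of
# sides needs only square roots. (This form of the half-chord formula
# is algebraically equal to sqrt(2 - sqrt(4 - s*s)) but avoids the
# catastrophic cancellation that version suffers once s becomes tiny.)
sides, s = 6, 1.0
for _ in range(25):
    s = s / math.sqrt(2 + math.sqrt(4 - s * s))  # side of the doubled polygon
    sides *= 2
    print(f"{sides:>10d} sides: perimeter/diameter = {sides * s / 2:.15f}")
print(f"   for comparison, math.pi = {math.pi:.15f}")
```

Archimedes pushed the doubling by hand as far as 96 sides; the machine carries the same orderly sequence of refinements to the edge of double precision in a few microseconds, and the answer it bounces back is, for all practical purposes, exact.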

It shows also that this is not just a problem of something being somehow too difficult to do: difficulty has little or nothing to do with intrinsic impossibility (pace the Army Corps of Engineers: they are, after all, engineers, not pure mathematicians). In fact we live in a world full of unachievable things. Irrational numbers are all around us, from pi to phi to the square root of two, and even though no amount of effort will produce a perfect rational expression of any of those values, they are not on that account any less real. You cannot write out pi to its last decimal digit, because there is no such digit, and no rational expression of any other kind can capture it either. But the proportion of circumference to diameter is always exactly pi, and the circumference of the circle is an exact distance. It’s magnificently reliable and absolutely perfect, but its perfection can never be entirely expressed in the same terms as the diameter. (We could arbitrarily designate the circumference as 1 or any other rational number; but then the diameter would be inexpressible in the same terms.)

I’m inclined to draw some theological application from that, but I’m not sure I’m competent to do so. It bears thinking on. Certainly it has at least some broad philosophical applications. The prevailing culture tends to suggest that whatever is not quantifiable and tangible is not real. There are a lot of reasons we can’t quantify such things as love or justice or truth; it’s also in the nature of number that we can’t nail down many concrete things. None of them is the less real merely because we can’t express them perfectly.

Approximation by iterative refinement is basic in dealing with the world in both its rational and its irrational dimensions. While your inability to express pi rationally is not a failure of your moral or rational fiber, you may still legitimately be required — and you will be able — to get an arbitrarily precise approximation of it. In my day, we were taught the Greek value 22/7 as a practical rational value for pi, though Archimedes (c. 287-212 BC) knew it was a bit too high (3.1428…). The Chinese mathematician Zu Chongzhi (AD 429-500) came up with 355/113, which is not precisely pi either, but it’s more than a thousand times closer to the mark (3.1415929…). The whole domain of rational approximation is fun to explore, and has analogical implications in things not bound up with numbers at all.
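The exploring is easy to begin with a few lines of code. A brute-force sketch in Python (again my own toy, assuming nothing beyond the fractions just mentioned): for each denominator in turn, the best numerator is simply the nearest integer to pi times the denominator, and we print a fraction whenever it beats everything that came before. Both famous values fall out on their own:

```python
import math

# For denominator q, the best numerator is just round(pi * q). Print
# each fraction that approximates pi better than any smaller denominator;
# 22/7 and 355/113 emerge naturally in the cascade.
best_err = float("inf")
for q in range(1, 120):
    p = round(math.pi * q)
    err = abs(math.pi - p / q)
    if err < best_err:
        best_err = err
        print(f"{p}/{q} = {p / q:.10f}   off by {err:.2e}")
```

Run the search further and nothing beats 355/113 until the denominator climbs past sixteen thousand, which is part of what makes Zu Chongzhi’s fraction so remarkable.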

So I personally don’t consider my attempts to trisect the general angle with compass and straightedge to be time wasted. It’s that way in most intellectual endeavors, really: education represents not a catalogue of facts, but a process and an exercise, in which the collateral benefits can far outweigh any immediate success or failure. Pitting yourself against reality, win or lose, you become stronger, and, one hopes, wiser. 

Crafting a Literature Program

February 22nd, 2020

The liberal arts are, in great measure, founded on written remains, from the earliest times to our own. Literature (broadly construed to take in both fiction and non-fiction) encompasses a bewildering variety of texts, genres, attitudes, belief systems, and just about everything else. Like history (which can reasonably be construed to cover everything we know, with the possible, but incomplete, exception of pure logic and mathematics), literature is a problematic area of instruction: it is both enormously important and virtually impossible to reduce to a clear and manageable number of postulates.

In modern educational circles, literary studies are often dominated by critical schools, the grinding of pedagogical axes, and dogmatic or interpretive agendas of all sorts — social, political, psychological, or completely idiosyncratic. Often these things loom so large as to eclipse the reality that they claim to investigate. It is as if the study of astronomy had become exclusively bound up with the technology of telescope manufacture, but no longer bothered to turn the instruments toward the stars and planets. Other difficulties attend the field as well.

We’re sailing on an ocean here…

The first is just the sheer size of the field. Yes, astronomy may investigate a vast number of stars, and biology may look at a vast number of organisms and biological systems, but the effort there is to elicit what is common to the diverse phenomena (which did not in and of themselves come into being as objects of human contemplation) and produce a coherent system to account for them. Literature doesn’t work that way. There is an unimaginably huge body of literature out there, and it’s getting bigger every day. Unlike science or milk, the old material doesn’t spoil or go off; it just keeps accumulating. Even if (by your standards or Sturgeon’s Law) 90% of it is garbage, that still leaves an enormous volume of good material to cover. There’s no way to examine more than the tiniest part of that.

…on which the waves never stop moving…

Every item you will encounter in a study of literature is itself an overt attempt to communicate something to someone. That means that each piece expresses its author’s identity and personality; in the process it inevitably reflects a range of underlying social and cultural suppositions. In their turn, these may be common to that author’s time and place, or they may represent resistance to the norms of the time. Any given work may reach us through few or many intermediaries, some of which will have left their stamp on it, one way or the other. Finally, every reader receives every literary product he or she encounters differently, too. That allows virtually infinite room for ongoing negotiation between author and reader in shaping the experience and its meaning — which is the perennially shifting middle ground between them.

…while no two compasses agree…

I haven’t seen this discussed very much out in the open, though perhaps I just don’t frequent the right websites, email lists, or conferences. But the reality — the elephant in the room — is that no two teachers agree on what qualifies as good and bad literature. Everyone has ideas about that, but they remain somewhat hidden, and often they are derived viscerally rather than systematically. For example, I teach (among other things) The Odyssey and Huckleberry Finn; I have seen both attacked, in a national forum of English teachers, as having no place in the curriculum because they are (for one reason or another) either not good literature or because they are seen as conveying pernicious social or cultural messages. I disagree with their conclusion, at least — obviously, since I do in fact teach them — but the people holding these positions are not stupid. In fact, they make some very strong arguments. They’re proceeding from basic assumptions different from my own…but, then again, so does just about everyone. That’s life.

…nor can anyone name the destination:

Nobody talks about this much, either, but it’s basic: our literature teachers don’t even remotely agree on what they’re doing. Again, I don’t mean that they are incompetent or foolish, but merely that there is no universally agreed-upon description of what success in a literature program looks like. Success in a science or math program, or even a foreign language program, is relatively simple to quantify and consequently reasonably simple to assess. Not so here. Every teacher seems to bring a different yardstick to the table. Some see their courses as morally neutral instruction in the history and techniques of an art form; others see theirs as a mode of indoctrination in values, according to their lights. For some, that’s Marxism. For some, it’s conservative Christianity. For some, it’s a liberal secular humanism. For others…well, there is no accounting for all the stripes of opinion people bring to the table — but the range is very broad.

is it any wonder people are confused?

So where are we, then, anyway? The sum is so chaotic that most public high school students I have asked in the past two decades appear to have simply checked out: they play the game and endure their English classes, but the shocking fact is that, even while enrolled in them at the time, almost all have been unable to tell me what they were reading for those classes. This is not a furtive examination: I’ve simply asked them, “So, what are you reading for English?” If one or two didn’t know, I’d take that as a deficiency in the student or a sudden momentary diffidence on the subject. When all of them seem not to know, however, I suspect some more systemic shortfall. I would suggest that this is not because they are stupid either, but because their own literary instruction has been so chaotic as to stymie real engagement with the material.

It’s not particularly surprising, then, that literature is seen as somehow suspect, and that homeschooling parents looking for literature courses for their students feel that they are buying a pig in a poke. They are. They have to be wondering — will this course or that respect my beliefs or betray them? Will the whole project really add up to anything? Will the time spent on it add in any meaningful sense to my students’ lives, or is this just some gravy we could just as well do without? Some parents believe (rightly or wrongly: it would be a conflict of interest for me even to speculate which) that they probably can do just as well on such a “soft” subject as some program they don’t fully understand or trust.

One teacher’s approach

These questions are critical, and I encourage any parent to get some satisfactory answers before enrolling in any program of literary instruction, including mine. Here are my answers: if they satisfy you, I hope you’ll consider our program. If not, look elsewhere with my blessing, but keep asking the questions.

In the first instance, my project is fairly simple. I am trying to teach my students to read well. Of course, by now they have mastered the mechanical art of deciphering letters, combining them into words, and extracting meaning from sentences on a page. But there’s more to reading than that: one must associate those individual sentences with each other and weigh them together to come to a synthetic understanding of what the author is doing. Students need in the long run to consider nuance, irony, tonality, and the myriad inflections an author imparts to the text with his or her own persona. Moreover, they need to consider what a given position or set of ideas means within its own cultural conversation. All those things change the big picture.

There’s a lot there to know, and a lot to learn. I don’t pretend to know it all myself either, but I think I know at least some of the basic questions, and I have for about a generation now been encouraging students to ask them, probe them, and keep worrying at the feedback like a dog with a favorite bone. In some areas, my own perspectives are doubtless deficient. I do, on the other hand, know enough about ancient and medieval literature, language, and culture that I usually can open some doors that students hadn’t hitherto suspected. Once one develops a habit of looking at these things, one can often see where to push on other kinds of literature as well. The payoff is cumulative.

There are some things I generally do not do. I do not try to use literary instruction as a reductive occasion or pretext for moral or religious indoctrination. Most of our students come from families already seriously engaged with questions of faith and morals, and I prefer to respect that fact, leaving it to their parents and clergy. I also don’t believe that any work of literature can be entirely encompassed by such questions, and hence it would be more than a little arrogant of me to try to constrain the discussion to those points.

This is not to say that I shy away from moral and religious topics either (as teachers in our public schools often have to do perforce). Moral and theological issues come up naturally in our conversations, and I do not suppress them; I try to deal with them honestly from my own perspective as a fairly conservative reader and as a Christian while leaving respectful room for divergence of opinion as well. (I do believe that my own salvation is not contingent upon my having all the right answers, so I’m willing to be proven wrong on the particulars.)

It is never my purpose to mine literary works for “teachable points” or to find disembodied sententiae that I can use as an excuse to exalt this work or dismiss that one. This is for two reasons. First of all, I have too much respect for the literary art to think that it can or should be reduced to a platitudinous substrate. Second, story in particular (which is a large part of what literature involves) is a powerful and largely autonomous entity. It cannot well be tamed; any attempt to subvert it with tendentious arguments (from either the author’s point of view or from the reader’s) almost invariably produces bad art and bad reading. An attempt to tell a student “You should like this work, but must appreciate it only in the following way,” is merely tyrannical — tyrannical in the worst way, since it sees itself as being entirely in the interest of and for the benefit of the student. Fortunately, for most students, it’s also almost wholly ineffectual, though a sorry side effect is that a number find the whole process so off-putting that they ditch literature altogether. That’s probably the worst possible outcome for a literature program.

I also do not insist on canons of my own taste. If students disagree with me (positively or negatively) about the value of a given work, I’m fine with that. I don’t require anyone to like what I like. I deal in classics (in a variety of senses of the term) but the idea of an absolute canon of literature is a foolish attempt to control what cannot be controlled. A student’s dislike of a work does not erode my appreciation of it. The fact that twenty generations have liked another won’t itself make me like it either, if I don’t, though it does make me reluctant to reject it out of hand. It takes a little humility to revisit something on which you have already formed an opinion, but it’s salutary. It’s not just the verdict of the generations that can force me back to a work again, either: if a student can see something in a work that I have hitherto missed and can show me how to appreciate it, I gain by that. At the worst, I’m not harmed; at the best, I’m a beneficiary. Many teachers seem eager to enforce their evaluations of works on their students. I don’t know why. I have learned more from my students than from any other source, I suspect. Why would I not want that to continue?

Being primarily a language scholar, I do attempt to dig into texts for things like grammatical function — both as a way of ascertaining the exact surface meanings and as a way of uncovering the hidden complexities. Those who haven’t read Shakespeare with an eye on his brilliant syntactical ambiguity are missing a lot. He was a master of complex expression, and what may initially seem oddly phrased but obvious statements can unfold into far less obvious questions or bivalent confessions. After thirty years of picking at it, I still have never seen an adequate discussion in the critical literature of Macbeth’s “Here had we now our country’s honour roofed / Were the graced person of our Banquo present” (Macbeth 3.4.39-40). The odd phrasing is routinely explained as something like “All the nobility of Scotland would be gathered under one roof if only Banquo were present,” but I think he is saying considerably more than that, thanks to the formation of contrary-to-fact conditionals and the English subjunctive.

My broadest approach to literature is more fully elaborated in Reading and Christian Charity, an earlier posting on this blog and also one of the “White Papers” on the school website. I hope all parents (and their students) considering taking any of my courses will read it, because it contains the essential core of my own approach to literature, which differs from many others, both in the secular world and in the community flying the banner of Classical Christian Education. If it is what you’re looking for, I hope you will consider our courses. 

[Some of the foregoing appeared at the Scholars Online Website as ancillary to the description of the literature offerings. It has been considerably revised and extended here.]

To teach, or not to teach… to the test

February 13th, 2020

In the last few weeks, I’ve spent considerable time updating my course websites for the 2020 summer session and academic year. This has been more complicated than usual, since I’ve decided, after considerable thought and inward turmoil, not to seek Advanced Placement recertification for the biology, chemistry, and physics courses I’ve taught for the last decade as formal “AP” courses.

A little background….

The College Board owns the “Advanced Placement” name and designation. Beginning in 2012, it required that anyone teaching a course designated for AP credit submit a syllabus for review by university faculty to ensure students were being prepared adequately for second year college work. Over the last eight years, the College Board has revised their syllabus requirements several times, remaining fairly flexible about how the course was offered and giving teachers latitude to emphasize areas or approaches as they saw fit. Curriculum suggestions and standards were minimal, and the AP examination remained largely a validation of adequate student preparation for advanced college work.

So what changed?

In 2018, the College Board announced that its program was radically changing in response to teacher and student feedback. The resulting syllabi revisions for biology, chemistry, and physics are quite specific in dictating course content and performance expectations. Teachers have fewer options to organize materials according to their own priorities. In particular, the syllabus for biology eliminates requirements for any instruction on human anatomy and plant physiology in order to focus on microbiology, evolution, and ecology, apparently assuming that students will cover physiology and anatomy in other courses. The chemistry syllabus increasingly focuses on professional-level instrument use, and the algebra-based physics syllabus has been broken into a two-year sequence that pushes modern physics topics to a seldom-taken second year. All three syllabi restructure the course schedules to eliminate any topics not covered on the examinations.

For biology in particular, I think this is a disastrous move for the students, however much lighter it makes the burden of instruction for the teacher. I believe that human anatomy and physiology should be taught in the context of cellular biology so that students understand how all levels of living systems work together. Many students, especially home-schooled students, attempt AP Biology without a previous course in high school biology. The new curriculum leaves them without a detailed appreciation of how their own bodies work at a time when this information is vital to help them make responsible choices for their own health.

There are implications for chemistry and physics as well. Most students won’t be going on to technical careers in chemistry, but chemistry is often a prerequisite for medical training at many levels. Performing basic chemistry investigations with limited equipment to experience fundamental principles of chemical reactions provides a better learning experience than performing cookbook experiments with equipment students don’t understand. Since most high school physics students are unable to take a second year due to time constraints, the current AP syllabus deprives them of exposure to the unity of field theory applications and the ramifications of modern physics: relativity, quantum mechanics, and nuclear energy.

When the exam is the focus, where’s the joy?

The College Board now requires that students register by early September for the AP test given the following May. This shifts the emphasis of the entire course from learning the subject to “teaching to the test”. Since Scholars Online courses are intended to provide our students with mastery of a subject, this runs counter to our teaching philosophy. I want my students to focus on exploring concepts and playing with ideas at the risk of making mistakes. It is difficult to experiment with possibilities, or to engage with the material joyfully rather than apprehensively, when you are panicking about achieving a high score on an exam.

The new AP program also heavily encourages the use of the College Board’s own website materials for unit testing throughout the year. While teachers no longer need to devise quizzes for their own students (a sometimes painstaking and onerous task), the feedback promised from the AP program will allow them to see how their students are doing (and collaterally, how they are doing as teachers) in preparing for the exam. The emphasis again is on exam performance, not on the subject matter.

There is another, more subtle issue with AP-provided online course support materials. It has been my practice to keep performance data for my students on the Scholars Online servers, rather than allow others to gather detailed information about my students’ ideas. I have not used publishers’ homework websites or quizzes that would identify individual students, and I refuse to change that practice when I do not know how personally-identifiable student data will be used in the future. The AP program has made no real assurances about the data it will be collecting this way.

I am very uncomfortable with the expanded level of content control by a major testing organization, many of whose directors are textbook publishers, and I’m not the only one. A number of prestigious private schools have dropped their AP courses to allow their teachers to teach creatively, rather than surrendering control of their courses to the College Board. Reluctantly, because it reduces an option for our students to gain formal AP course credit for their work, I have come to realize it is best to join them.

Participation in a formally certified AP course is not required for students to register and take the exam. I will continue to monitor AP course requirements so that the courses I am offering will prepare students to perform well on the AP exam if they choose to take it, and provide an equivalent lab experience. Students taking the non-AP versions of these courses have routinely achieved scores of 3 and 4 on the chemistry and physics AP exams, and 4 or 5 on the biology exams, so I do not believe this decision will put my students at a disadvantage; if anything, a unique approach to content and experiments will help them stand out.

If you have any questions or concerns about this decision, please let me know.

Causes

February 1st, 2020

The Greek philosopher Aristotle thought widely and deeply on many subjects. Some of his ideas have proven to be unworkable or simply wrong — his description of the trajectory of a thrown object, for example, works only in Roadrunner cartoons: in Newtonian physics, a thrown ball does not turn at a right angle and fall once it has run out of forward-moving energy. The force vectors vary continuously, and the trajectory describes an arc. We can forgive Aristotle, I think, for not having calculus at his disposal. That he apparently didn’t observe the curvature of a trajectory is a little bit harder to explain.

Others of his ideas are rather narrowly culturally bound. His views on slavery are rightly repudiated almost everywhere, and many others are not very useful to us today. I personally find his description of Athenian tragedy in the Poetics far too limiting: the model of the hero who falls from greatness due to a tragic flaw is one model (though not really the only one) for describing the Oedipus Rex, but it doesn’t apply even loosely to most of the rest of surviving Athenian tragedy. This curiously Procrustean interpretive template is championed mostly by teachers who have read only one or two carefully-chosen plays.

Some of Aristotle’s ideas, though, remain quite robust. His metaphysical thought is still challenging, and, even if one disagrees, it’s very useful to know how and why one disagrees. His logical writings, too, remain powerful and compelling, and are among the best tools ever devised to help us think about how we think.

Among his most enduringly useful ideas, I think, is his fourfold categorization of cause. This is basic to almost everything we think about, since most of our understanding of the universe is couched, sooner or later, in terms of story. Story is fundamentally distinguished from isolated lists of events because of its reliance on cause and effect. 

There are, according to Aristotle, four different kinds of cause: material cause, efficient cause, formal cause, and final cause. This may all sound rather fussy and technical, but the underlying ideas are fairly simple, and we rely on them, whether we know it or not, every day. For an example, we can take a common dining room table.

The material cause of something is merely what it’s made of. That can be physical matter or not, but it’s the source stuff, in either case. The material cause of our table is wood, glue, perhaps some nails or screws, varnish, and whatever else goes into its makeup (metal, glass, plastic, or whatever else might be part of your dining room table). 

The formal cause is its form itself. It’s what allows us to say that any individual thing is what it is — effectively its definition. The table’s formal cause is largely bound up in its functional shape. It may have a variable number of legs, for example, but it will virtually always present some kind of horizontal surface that you can put things on. 

The efficient cause is the agency that brings something about — it’s the maker (personal or impersonal) or the causative process. That’s most like our simplest sense of “cause” in a narrative. The efficient cause of the table is the carpenter or the factory or workers that produced it. 

The final cause is the purpose for which something has come into being (if it is purposed) — in the case of the table, to hold food and dishes for us while we’re eating.

Not everything must have all four of these causes, at least in any obvious sense, but most have some; everything will have at least one. They are easy to recall, and remarkably useful when confronting “why?” questions. Still, people often fail to distinguish them in discourse — and so wind up talking right past one another.

Though I cannot now find a record of it, I recall that when a political reporter asked S. I. Hayakawa (himself an academic semanticist before turning to politics) in 1976 why he thought he’d been elected to the Senate, he answered by saying that he supposed it was because he got the most votes. This was, of course, a perfectly correct answer to the material-cause notion of “why”, but was entirely irrelevant to what the reporter was seeking, which probably had more to do with an efficient cause. Hayakawa surely knew it, too, but apparently didn’t want to be dragged into the discussion the reporter was looking for. Had the reporter been quicker off the mark with Aristotelian causes, he might have been able to pin the senator-elect down for a more satisfactory answer.

Aristotle wrote in the fourth century B.C., but his ideas are still immediately relevant. While one can use them to evade engagement (as Hayakawa did in this incident), we can also use them to clarify our communication. True communication is a rare and valuable commodity in the world, in just about every arena. Bearing these distinctions in mind can help you achieve it.

Time to Think

January 18th, 2020

On average, my students today are considerably less patient than those of twenty years ago. They get twitchy if they are asked merely to think about something. They don’t know how. My sense is not that they are lazy: in fact, it’s perhaps just the opposite. Just thinking about something feels to them like idling, and after they have given it a good thirty seconds, they sense that it’s time to move on to something more productive — or at least more objectively measurable. They don’t seem to believe that they are accomplishing anything unless they are moving stepwise through some defined process that they can quantify and log, and that can be managed and validated by their parents or teachers. It doesn’t matter how banal or downright irrelevant that process might be: it offers steps that can be completed. A secondary consequence is that if they start to do something and don’t see results in a week or two, they write it off as a bad deal and go chasing the next thing. It is no longer sufficient for a return on investment to be annual or even quarterly: if it’s not tangible, it’s bogus, and if it’s not more or less instantaneous, it’s time wasted.

On average, my students today also have their time booked to a degree that would have been unthinkable in my youth. When I was in junior high and high school, I did my homework, I had music lessons, and I was involved in a handful of other things. I had household chores as well. But I also had free time. I rode my bicycle around our part of town. I went out and climbed trees. I pursued reading that interested me just because I wanted to. I drew pictures — not very good ones, but they engaged me at the time. Most importantly, I was able (often in the midst of these various undirected activities) simply to think about those open-ended questions that underlie one’s view of life. Today I have students involved in multiple kinds of sports, multiple music lessons, debate, and half a dozen other things. There are no blank spaces in their schedules.

I can’t help thinking that these two trends are non-coincidentally related. There are at least two reasons for this, one of them internal, and one external. Both of them need to be resisted.

First of all, in the spiritually vacant materialistic culture surrounding us, free and unstructured time is deprecated because it produces no tangible product — not even a reliable quantum of education. One can’t sell it. Much of the public has been bullied by pundits and advertisers into believing that if you can’t buy or sell something, it must not be worth anything. We may pay lip service to the notion that the most important things in life are free, but we do our best to ignore it in practice. 

As a correlative, we have also become so invested in procedure that we mistake it for achievement. I’ve talked about this recently in relation to “best practices”. The phenomenon is similar in a student’s time management. If something can’t be measured as progress, it’s seen as being less than real. To engage in unstructured activity when one could be pursuing a structured one is seen as a waste.

This is disastrous for a number of reasons. 

I’ve already discussed here the problem of confusing substance and process. The eager adoption of “best practices” in almost every field attests the colossally egotistical notion that we now know the best way to do just about anything, and that by adhering to those implicitly perfected processes, we guarantee outcomes that are, if not perfect, at least optimal. But it doesn’t work that way. It merely guarantees that there will be no growth or experimentation. Such a tyrannical restriction of process almost definitionally kills progress. The rut has defined the route.

Another problem is that this is a fundamentally mercantile and materialist perspective, in which material advantage is presumptively the only good. For a Christian, that this is false should be a no-brainer: you cannot serve both God and mammon. 

I happily admit that there are some situations where it’s great to have reliable processes that really will produce reliable outcomes. It’s useful to have a way to solve a quadratic equation, or hiring practices that, if followed, will keep one out of the courts. But they mustn’t eclipse our ability to look at things for what they are. If someone can come up with better ways of solving quadratic equations or navigating the minefields of human resources, all the better. When restrictive patterns dominate our instructional models to the point of exclusivity, they are deadening.

Parents or teachers who need to scrutinize and validate all their children’s experiences are not helping them: they’re infantilizing them. When they should be growing into a mature judgment, and need to be allowed to make real mistakes with real consequences, they are being told instead not to risk using their own judgment and understanding, but to follow someone else’s judgment unquestioningly. Presumably thereby they will be spared the humiliation of making mistakes, and they will also not be found wanting when the great judgment comes. That judgment takes many forms, but it’s always implicitly there. For some it seems to have a theological component. 

In the worldly arena, it can be college admission, or getting a good job, or any of a thousand other extrinsic hurdles that motivate all good little drones from cradle to grave. College is the biggie at this stage of the game. There is abroad in today’s panicky world the notion that a student has to be engaged in non-stop curricular and extracurricular activities even to be considered for college. That’s false, but it’s scary, and fear almost always trumps the truth. Fear can be fostered and nurtured with remarkable dexterity, and nothing sells like fear: this has been one of the great (if diabolical) discoveries of advertisers since the middle of the last century. Fear is now the prime motivator of both our markets and our politics. It’s small wonder that people are anxious about both: they’ve been bred and acculturated for a life of anxiety. They’re carefully taught to fear, so that they will buy compulsively and continually. The non-stop consumer is a credulous victim of the merchants of fear. We need, we are told, to circle the wagons, repel boarders, and show a unified face to the world. Above all, we should not question anything.

Though we seem more often to ignore it or dismiss it with a “Yes, but…”, our faith tells us that perfect love casts out fear. The simple truth is one that we’ve always known. Fear diminishes us. Love enlarges us. What you’re really good at will be what you love; what you love is what you’ll be good at. Which is the cause and which the effect is harder to determine: they reinforce one another. You can only find out what you love, though, if, without being coerced, you take the time and effort to do something for its own sake, not for any perceived extrinsic reward that’s the next link in Madison Avenue’s cradle-to-grave chain of anxious bliss.

There’s nothing wrong with structured activities. If you love debate, by all means, do debate. If you love music, do music. If you love soccer, play soccer. If you don’t love them, though, find something else that you do love to occupy your time, stretch your mind, and feed your soul. Moreover, even those activities need to be measured out in a way that leaves some actual time that hasn’t been spoken for. There really is such a thing as spreading oneself too thin. Nothing turns out really well; excellence takes a back seat to heaping up more and more of a desperate adequacy. In my experience, the outstanding student is not the one who has every moment of his or her day booked, but the one who has time to think, and to acquire the unique fruits of undirected reflection. They can’t be gathered from any other source. You can’t enroll in a program of undirected contemplation. You can only leave room for it to happen. It will happen on its own time, and it cannot be compelled to appear on demand.

The over-programmed student is joyless in both study and play, and isn’t typically very good at either one. Drudges who do everything they do in pursuit of such a phantom success will never achieve it. The students who have done the best work for me over the years have without exception been the ones who bring their own personal thoughts to the table. For them, education is not just a set of tasks to be mastered or grades to be achieved, but the inner formation of character — a view of life and the world that shapes what their own success will look like. Our secular culture is not going to help you find or define your own success: it’s interested only in keeping you off balance, and on retainer as a consumer. Take charge of your own mind, and determine what winning looks like to you. Otherwise, you will just be playing — and most likely losing — a game you never wanted to play in the first place.