Crafting a Literature Program

Saturday, February 22nd, 2020

The liberal arts are, in great measure, founded on written remains, from the earliest times to our own. Literature (broadly construed to take in both fiction and non-fiction) encompasses a bewildering variety of texts, genres, attitudes, belief systems, and just about everything else. Like history (which can reasonably be construed to cover everything we know, with the possible, but incomplete, exception of pure logic and mathematics), literature is a problematic area of instruction: it is both enormously important and virtually impossible to reduce to a clear and manageable number of postulates.

In modern educational circles, literary studies are often dominated by critical schools, the grinding of pedagogical axes, and dogmatic or interpretive agendas of all sorts — social, political, psychological, or completely idiosyncratic. Often these things loom so large as to eclipse the reality that they claim to investigate. It is as if the study of astronomy had become exclusively bound up with the technology of telescope manufacture, but no longer bothered to turn the instruments toward the stars and planets. Other difficulties attend the field as well.

We’re sailing on an ocean here…

The first is just the sheer size of the field. Yes, astronomy may investigate a vast number of stars, and biology may look at a vast number of organisms and biological systems, but the effort there is to elicit what is common to the diverse phenomena (which did not in and of themselves come into being as objects of human contemplation) and produce a coherent system to account for them. Literature doesn’t work that way. There is an unimaginably huge body of literature out there, and it’s getting bigger every day. Unlike science or milk, the old material doesn’t spoil or go off; it just keeps accumulating. Even if (by your standards or Sturgeon’s Law) 90% of it is garbage, that still leaves an enormous volume of good material to cover. There’s no way to examine more than the tiniest part of that.

…on which the waves never stop moving…

Every item you will encounter in a study of literature is itself an overt attempt to communicate something to someone. That means that each piece expresses its author’s identity and personality; in the process it inevitably reflects a range of underlying social and cultural suppositions. In their turn, these may be common to that author’s time and place, or they may represent resistance to the norms of the time. Any given work may reach us through few or many intermediaries, some of which will have left their stamp on it, one way or the other. Finally, every reader receives every literary product he or she encounters differently, too. That allows virtually infinite room for ongoing negotiation between author and reader in shaping the experience and its meaning — which is the perennially shifting middle ground between them.

…while no two compasses agree…

I haven’t seen this discussed very much out in the open, though perhaps I just don’t frequent the right websites, email lists, or conferences. But the reality — the elephant in the room — is that no two teachers agree on what qualifies as good and bad literature. Everyone has ideas about that, but they remain somewhat hidden, and often they are derived viscerally rather than systematically. For example, I teach (among other things) The Odyssey and Huckleberry Finn; I have seen both attacked, in a national forum of English teachers, as having no place in the curriculum because they are (for one reason or another) either not good literature or because they are seen as conveying pernicious social or cultural messages. I disagree with their conclusion, at least — obviously, since I do in fact teach them, but the people holding these positions are not stupid. In fact, they make some very strong arguments. They’re proceeding from basic assumptions different from my own…but, then again, so does just about everyone. That’s life.

…nor can anyone name the destination:

Nobody talks about this much, either, but it’s basic: our literature teachers don’t even remotely agree on what they’re doing. Again, I don’t mean that they are incompetent or foolish, but merely that there is no universally agreed-upon description of what success in a literature program looks like. Success in a science or math program, or even a foreign language program, is relatively simple to quantify and consequently reasonably simple to assess. Not so here. Every teacher seems to bring a different yardstick to the table. Some see their courses as morally neutral instruction in the history and techniques of an art form; others see them as a mode of indoctrination in values, according to their lights. For some, that’s Marxism. For some, it’s conservative Christianity. For some, it’s a liberal secular humanism. For others…well, there is no accounting for all the stripes of opinion people bring to the table — but the range is very broad.

is it any wonder people are confused?

So where are we, then, anyway? The sum is so chaotic that most public high school students I have asked in the past two decades appear to have simply checked out: they play the game and endure their English classes, but the shocking fact is that, even while enrolled in them at the time, almost all have been unable to tell me what they were reading for those classes. This is not a furtive examination: I’ve simply asked them, “So, what are you reading for English?” If one or two didn’t know, I’d take that as a deficiency in the student or a sudden momentary diffidence on the subject. When all of them seem not to know, however, I suspect some more systemic shortfall. I would suggest that this is not because they are stupid either, but because their own literary instruction has been so chaotic as to stymie real engagement with the material.

It’s not particularly surprising, then, that literature is seen as somehow suspect, and that homeschooling parents looking for literature courses for their students feel that they are buying a pig in a poke. They are. They have to be wondering — will this course or that respect my beliefs or betray them? Will the whole project really add up to anything? Will the time spent on it add in any meaningful sense to my students’ lives, or is this just some gravy we could just as well do without? Some parents believe (rightly or wrongly: it would be a conflict of interest for me even to speculate which) that they probably can do just as well on such a “soft” subject as some program they don’t fully understand or trust.

One teacher’s approach

These questions are critical, and I encourage any parent to get some satisfactory answers before enrolling in any program of literary instruction, including mine. Here are my answers: if they satisfy you, I hope you’ll consider our program. If not, look elsewhere with my blessing, but keep asking the questions.

In the first instance, my project is fairly simple. I am trying to teach my students to read well. Of course, by now they have mastered the mechanical art of deciphering letters, combining them into words, and extracting meaning from sentences on a page. But there’s more to reading than that: they must associate those individual sentences with each other and weigh them together to come to a synthetic understanding of what the author is doing. They need in the long run to consider nuance, irony, tonality, and the myriad inflections an author imparts to the text with his or her own persona. Moreover, they need to consider what a given position or set of ideas means within its own cultural conversation. All those things change the big picture.

There’s a lot there to know, and a lot to learn. I don’t pretend to know it all myself either, but I think I know at least some of the basic questions, and I have for about a generation now been encouraging students to ask them, probe them, and keep worrying at the feedback like a dog with a favorite bone. In some areas, my own perspectives are doubtless deficient. I do, on the other hand, know enough about ancient and medieval literature, language, and culture that I usually can open some doors that students hadn’t hitherto suspected. Once one develops a habit of looking at these things, one can often see where to push on other kinds of literature as well. The payoff is cumulative.

There are some things I generally do not do. I do not try to use literary instruction as a reductive occasion or pretext for moral or religious indoctrination. Most of our students come from families already seriously engaged with questions of faith and morals, and I prefer to respect that fact, leaving it to their parents and clergy. I also don’t believe that any work of literature can be entirely encompassed by such questions, and hence it would be more than a little arrogant of me to try to constrain the discussion to those points.

This is not to say that I shy away from moral and religious topics either (as teachers in our public schools often have to do perforce). Moral and theological issues come up naturally in our conversations, and I do not suppress them; I try to deal with them honestly from my own perspective as a fairly conservative reader and as a Christian while leaving respectful room for divergence of opinion as well. (I do believe that my own salvation is not contingent upon my having all the right answers, so I’m willing to be proven wrong on the particulars.)

It is never my purpose to mine literary works for “teachable points” or to find disembodied sententiae that I can use as an excuse to exalt this work or dismiss that one. This is for two reasons. First of all, I have too much respect for the literary art to think that it can or should be reduced to a platitudinous substrate. Second, story in particular (which is a large part of what literature involves) is a powerful and largely autonomous entity. It cannot well be tamed; any attempt to subvert it with tendentious arguments (from either the author’s point of view or from the reader’s) almost invariably produces bad art and bad reading. An attempt to tell a student “You should like this work, but must appreciate it only in the following way,” is merely tyrannical — tyrannical in the worst way, since it sees itself as being entirely in the interest of and for the benefit of the student. Fortunately, for most students, it’s also almost wholly ineffectual, though a sorry side effect is that a number find the whole process so off-putting that they ditch literature altogether. That’s probably the worst possible outcome for a literature program.

I also do not insist on canons of my own taste. If students disagree with me (positively or negatively) about the value of a given work, I’m fine with that. I don’t require anyone to like what I like. I deal in classics (in a variety of senses of the term) but the idea of an absolute canon of literature is a foolish attempt to control what cannot be controlled. It does not erode my appreciation for a work of literature that a student doesn’t like it. The fact that twenty generations have liked another won’t itself make me like it either, if I don’t, though it does make me reluctant to reject it out of hand. It takes a little humility to revisit something on which you have already formed an opinion, but it’s salutary. It’s not just the verdict of the generations that can force me back to a work again, either: if a student can see something in a work that I have hitherto missed and can show me how to appreciate it, I gain by that. At the worst, I’m not harmed; at the best, I’m a beneficiary. Many teachers seem eager to enforce their evaluations of works on their students. I don’t know why. I have learned more from my students than from any other source, I suspect. Why would I not want that to continue?

Being primarily a language scholar, I do attempt to dig into texts for things like grammatical function — both as a way of ascertaining the exact surface meanings and as a way of uncovering the hidden complexities. Those who haven’t read Shakespeare with an eye on his brilliant syntactical ambiguity are missing a lot. He was a master of complex expression, and what may initially seem oddly phrased but obvious statements can unfold into far less obvious questions or bivalent confessions. After thirty years of picking at it, I still have never seen an adequate discussion in the critical literature on Macbeth’s “Here had we now our country’s honour roofed / Were the graced person of our Banquo present” (Macbeth 3.4.39-40). The odd phrasing is routinely explained as something like “All the nobility of Scotland would be gathered under one roof if only Banquo were present,” but I think he is saying considerably more than that, thanks to the formation of contrary-to-fact conditions and the English subjunctive.

My broadest approach to literature is more fully elaborated in Reading and Christian Charity, an earlier posting on this blog and also one of the “White Papers” on the school website. I hope all parents (and their students) considering taking any of my courses will read it, because it contains the essential core of my own approach to literature, which differs from many others, both in the secular world and in the community flying the banner of Classical Christian Education. If it is what you’re looking for, I hope you will consider our courses. 

[Some of the foregoing appeared at the Scholars Online Website as ancillary to the description of the literature offerings. It has been considerably revised and extended here.]

Causes

Saturday, February 1st, 2020

The Greek philosopher Aristotle thought widely and deeply on many subjects. Some of his ideas have proven to be unworkable or simply wrong — his description of the trajectory of a thrown object, for example, works only in Roadrunner cartoons: in Newtonian physics, a thrown ball does not turn at a right angle and fall after it’s run out of forward-moving energy. The force vectors vary continuously, and its trajectory describes an arc. We can forgive Aristotle, I think, for not having calculus at his disposal. That he apparently didn’t observe the curvature of a trajectory is a little bit harder to explain.

Others of his ideas are rather narrowly culturally bound. His views on slavery are rightly repudiated almost everywhere, and many others are not very useful to us today. I personally find his description of Athenian tragedy in the Poetics far too limiting: the model of the hero who falls from greatness due to a tragic flaw is one model (though not really the only one) for describing the Oedipus Rex, but it doesn’t apply even loosely to most of the rest of surviving Athenian tragedy. This curiously Procrustean interpretive template is championed mostly by teachers who have read only one or two carefully-chosen plays.

Some of Aristotle’s ideas, though, remain quite robust. His metaphysical thought is still challenging, and, even if one disagrees, it’s very useful to know how and why one disagrees. His logical writings, too, remain powerful and compelling, and are among the best tools ever devised to help us think about how we think.

Among his most enduringly useful ideas, I think, is his fourfold categorization of cause. This is basic to almost everything we think about, since most of our understanding of the universe is couched, sooner or later, in terms of story. Story is fundamentally distinguished from isolated lists of events because of its reliance on cause and effect. 

There are, according to Aristotle, four different kinds of cause: material cause, efficient cause, formal cause, and final cause. This may all sound rather fussy and technical, but the underlying ideas are fairly simple, and we rely on them, whether we know it or not, every day. For an example, we can take a common dining room table.

The material cause of something is merely what it’s made of. That can be physical matter or not, but it’s the source stuff, in either case. The material cause of our table is wood, glue, perhaps some nails or screws, varnish, and whatever else goes into its makeup (metal, glass, plastic, or whatever else might be part of your dining room table). 

The formal cause is its form itself. It’s what allows us to say that any individual thing is what it is — effectively its definition. The table’s formal cause is largely bound up in its functional shape. It may have a variable number of legs, for example, but it will virtually always present some kind of horizontal surface that you can put things on. 

The efficient cause is the agency that brings something about — it’s the maker (personal or impersonal) or the causative process. That’s most like our simplest sense of “cause” in a narrative. The efficient cause of the table is the carpenter or the factory or workers that produced it. 

The final cause is the purpose for which something has come into being (if it is purposed) — in the case of the table, to hold food and dishes for us while we’re eating.

Not everything must have all four of these causes, at least in any obvious sense, but most have some; everything will have at least one. They are easy to recall, and remarkably useful when confronting “why?” questions. Still, people often fail to distinguish them in discourse — and so wind up talking right past one another.

Though I cannot now find a record of it, I recall that when a political reporter asked S. I. Hayakawa (himself an academic semanticist before turning to politics) in 1976 why he thought he’d been elected to the Senate, he answered by saying that he supposed it was because he got the most votes. This was, of course, a perfectly correct answer to the material-cause notion of “why”, but was entirely irrelevant to what the reporter was seeking, which probably had more to do with an efficient cause. Hayakawa surely knew it, too, but apparently didn’t want to be dragged into the discussion the reporter was looking for. Had the reporter been quicker off the mark with Aristotelian causes, he might have been able to pin the senator-elect down for a more satisfactory answer.

Aristotle wrote in the fourth century B.C., but his ideas are still immediately relevant. While one can use them to evade engagement (as Hayakawa did in this incident), we can also use them to clarify our communication. True communication is a rare and valuable commodity in the world, in just about every arena. Bearing these distinctions in mind can help you achieve it.

Time to Think

Saturday, January 18th, 2020

On average, my students today are considerably less patient than those of twenty years ago. They get twitchy if they are asked merely to think about something. They don’t know how. My sense is not that they are lazy: in fact, it’s perhaps just the opposite. Just thinking about something feels to them like idling, and after they have given it a good thirty seconds, they sense that it’s time to move on to something more productive — or at least more objectively measurable. They don’t seem to believe that they are accomplishing anything unless they are moving stepwise through some defined process that they can quantify and log, and that can be managed and validated by their parents or teachers. It doesn’t matter how banal or downright irrelevant that process might be: they are steps that can be completed. A secondary consequence is that if they start to do something and don’t see results in a week or two, they write it off as a bad deal and go chasing the next thing. It is no longer sufficient for a return on investment to be annual or even quarterly: if it’s not tangible, it’s bogus, and if it’s not more or less instantaneous, it’s time wasted.

On average, my students today also have their time booked to a degree that would have been unthinkable in my youth. When I was in junior high and high school, I did my homework, I had music lessons, and I was involved in a handful of other things. I had household chores as well. But I also had free time. I rode my bicycle around our part of town. I went out and climbed trees. I pursued reading that interested me just because I wanted to. I drew pictures — not very good ones, but they engaged me at the time. Most importantly, I was able (often in the midst of these various undirected activities) simply to think about those open-ended questions that underlie one’s view of life. Today I have students involved in multiple kinds of sports, multiple music lessons, debate, and half a dozen other things. There are no blank spaces in their schedules.

I can’t help thinking that these two trends are non-coincidentally related. There are at least two reasons for this, one of them internal, and one external. Both of them need to be resisted.

First of all, in the spiritually vacant materialistic culture surrounding us, free and unstructured time is deprecated because it produces no tangible product — not even a reliable quantum of education. One can’t sell it. Much of the public has been bullied by pundits and advertisers into believing that if you can’t buy or sell something, it must not be worth anything. We may pay lip service to the notion that the most important things in life are free, but we do our best to ignore it in practice. 

As a correlative, we have also become so invested in procedure that we mistake it for achievement. I’ve talked about this recently in relation to “best practices”. The phenomenon is similar in a student’s time management. If something can’t be measured as progress, it’s seen as being less than real. To engage in unstructured activity when one could be pursuing a structured one is seen as a waste.

This is disastrous for a number of reasons. 

I’ve already discussed here the problem of confusing substance and process. The eager adoption of “best practices” in almost every field attests the colossally egotistical notion that we now know the best way to do just about anything, and that by adhering to those implicitly perfected processes, we guarantee outcomes that are, if not perfect, at least optimal. But it doesn’t work that way. It merely guarantees that there will be no growth or experimentation. Such a tyrannical restriction of process almost definitionally kills progress. The rut has defined the route.

Another problem is that this is a fundamentally mercantile and materialist perspective, in which material advantage is presumptively the only good. For a Christian, that this is false should be a no-brainer: you cannot serve both God and mammon. 

I happily admit that there are some situations where it’s great to have reliable processes that really will produce reliable outcomes. It’s useful to have a way to solve a quadratic equation, or hiring practices that, if followed, will keep one out of the courts. But they mustn’t eclipse our ability to look at things for what they are. If someone can come up with better ways of solving quadratic equations or navigating the minefields of human resources, all the better. When restrictive patterns dominate our instructional models to the point of exclusivity, they are deadening.

Parents or teachers who need to scrutinize and validate all their children’s experiences are not helping them: they’re infantilizing them. When they should be growing into a mature judgment, and need to be allowed to make real mistakes with real consequences, they are being told instead not to risk using their own judgment and understanding, but to follow someone else’s judgment unquestioningly. Presumably thereby they will be spared the humiliation of making mistakes, and they will also not be found wanting when the great judgment comes. That judgment takes many forms, but it’s always implicitly there. For some it seems to have a theological component. 

In the worldly arena, it can be college admission, or getting a good job, or any of a thousand other extrinsic hurdles that motivate all good little drones from cradle to grave. College is the biggie at this stage of the game. There is abroad in today’s panicky world the notion that a student has to be engaged in non-stop curricular and extracurricular activities even to be considered for college. That’s false, but it’s scary, and fear almost always trumps the truth. Fear can be fostered and nurtured with remarkable dexterity, and nothing sells like fear: this has been one of the great (if diabolical) discoveries of advertisers since the middle of the last century. Fear is now the prime motivator of both our markets and our politics. It’s small wonder that people are anxious about both: they’ve been bred and acculturated for a life of anxiety. They’re carefully taught to fear, so that they will buy compulsively and continually. The non-stop consumer is a credulous victim of the merchants of fear. We need, we are told, to circle the wagons, repel boarders, and show a unified face to the world. Above all, we should not question anything.

Though we seem more often to ignore it or dismiss it with a “Yes, but…”, our faith tells us that perfect love casts out fear. The simple truth is one that we’ve always known. Fear diminishes us. Love enlarges us. What you’re really good at will be what you love; what you love is what you’ll be good at. Which is the cause and which the effect is harder to determine: they reinforce one another. You can only find out what you love, though, if, without being coerced, you take the time and effort to do something for its own sake, not for any perceived extrinsic reward that’s the next link in Madison Avenue’s cradle-to-grave chain of anxious bliss.

There’s nothing wrong with structured activities. If you love debate, by all means, do debate. If you love music, do music. If you love soccer, play soccer. If you don’t love them, though, find something else that you do love to occupy your time, stretch your mind, and feed your soul. Moreover, even those activities need to be measured out in a way that leaves some actual time that hasn’t been spoken for. There really is such a thing as spreading oneself too thin. Nothing turns out really well; excellence takes a back seat to heaping up more and more of a desperate adequacy. In my experience, the outstanding student is not the one who has every moment of his or her day booked, but the one who has time to think, and to acquire the unique fruits of undirected reflection. They can’t be gathered from any other source. You can’t enroll in a program of undirected contemplation. You can only leave room for it to happen. It will happen on its own time, and it cannot be compelled to appear on demand.

The over-programmed student is joyless in both study and play, and isn’t typically very good at either one. Drudges who do everything they do in pursuit of such a phantom success will never achieve it. The students who have done the best work for me over the years have without exception been the ones who bring their own personal thoughts to the table. For them, education is not just a set of tasks to be mastered or grades to be achieved, but the inner formation of character — a view of life and the world that shapes what their own success will look like. Our secular culture is not going to help you find or define your own success: it’s interested only in keeping you off balance, and on retainer as a consumer. Take charge of your own mind, and determine what winning looks like to you. Otherwise, you will just be playing — and most likely losing — a game you never wanted to play in the first place.

Autonomy of Means Again: “Best Practices”

Wednesday, January 1st, 2020

When our kids were younger and living at home, they also frequently had dishwashing duty. Even today we haven’t gotten around to buying a mechanical dishwasher, but when five people were living (and eating) at home, it was good not to have to do all that by ourselves.

But as anyone who has ever enlisted the services of children for this job will surely remember, the process needs to be refined by practice. Even more importantly, though, no matter how good the process seems to be, it can’t be considered a success if the outcome is not up to par. At different points, when I pointed out a dirty glass or pan in the drain, all three of our kids responded with, “But I washed it,” as if that had fully discharged their responsibility.

The problem is that though they might have washed it, they had not cleaned it. The purpose of the washing (a process, which can be efficacious or not) is to have a clean article of tableware or cookware (the proper product of the task). Product trumps process: the mere performance of a ritual of cleansing may or may not have the desired result. Inadequate results can at any time call the sufficiency of the process into question.

This is paradigmatic of something I see more and more these days in relation to education. The notion that completing a process — any process — is the same as achieving its goal is beguiling but false. Depending on whether we’re talking about a speck of egg on a frying pan or the inadequate adjustment of the brakes on your car, that category mistake can be irksome or it can be deadly.

In education it’s often more than merely irksome, but usually less than deadly. I’ve already talked about the “I translated it but I don’t understand it” phenomenon here; the claim has never made sense to me, since if you actually translated it, that means that you actually expressed the sense as you understood it. No other set of processes one can do with a foreign-language text is really translating it.

Accordingly I’m skeptical of educational theorists (including those who put together some of the standards for the accreditation process we are going through now) buzzing about “best practices”. This pernicious little concept, borrowed with less thought than zeal from the business world, misleadingly suggests — and to most people means — practices that are ipso facto sufficient: pursuing them to the letter guarantees a satisfactory outcome. And yet sometimes the dish is dirty, the translation is gibberish, or the brakes fail.

It’s not really even a good idea in the business world. An article in Forbes by Mike Myatt in 2012 trenchantly backs up its title claim, “Best Practices — Aren’t”; in 2014, Liz Ryan followed up the same concept with “The Truth about Best Practices”. Both articles challenge the blinkered orthodoxies of the “best practices” narrative.

The problem with any process-side validation of an activity is that, in the very act of being articulated, it tends to eclipse the purpose for which the task is being done. Doing it the right way takes precedence over doing the right thing. Surely the measure of an education is the learning itself — not the process that has been followed. The process is just a means to the end. In Charles Williams’ words, which I’ve quoted and referred to here before (here and here), “When the means are autonomous, they are deadly”.

Of course they may not in any given situation cause someone to die — but means divorced from their proper ends inevitably subvert, erode, and deform the goals for which they were originally ordained. This is especially true in education, precisely because there’s no broad consensus on what the product looks like. Accordingly the only really successful educational process is one that’s a dynamic outgrowth of the situation at hand, and it can ultimately be validated only by its results.

Liz Ryan notes, “They’re only Best Practices if they work for you.” There are at least two ways of understanding that phrase, and both of them are right: they’re only Best Practices if they work for you, and they’re only Best Practices if they work for you. Their utility depends on both the person and the outcome. Nor should it be any other way.

Socrates’ Argumentation — Method, Madness, or Something Else?

Monday, July 31st, 2017

The common understanding of basic terms and ideas is often amiss. Sometimes that’s innocuous; sometimes it’s not.

Many in the field of classical education tout what they call the Socratic Method, by which they seem to mean a process that draws the student to the correct conclusion by means of a sequence of leading questions. The end is predetermined; for good or ill, the method is primarily a rhetorical strategy to convince students that the answer was their own idea all along, thus achieving “buy-in”, so to speak. As rhetorical strategies go, it’s not really so bad.

Is it also good pedagogical technique? I am less certain. The short-term advantage of persuading a student that something is his or her own idea is materially compromised by the fact that (on these terms, at least) the method is fundamentally disingenuous. If the questioner feigns ignorance, while all the while knowing precisely where these questions must lead, perceptive students, at least, will eventually realize that they are being played. Some may not resent that; others certainly will, and will seek every opportunity to disengage themselves from a process that they rightly consider a pretense.

Whether it’s valid pedagogically or not, however, we mustn’t claim that it’s Socratic. Socrates did indeed proceed by asking questions. He asked them incessantly. He was annoying, in fact — a kind of perpetual three-year-old, asking “why?” after each answer, challenging every supposition, and never satisfied with the status quo or with any piece of accepted wisdom. It can be wearying to respond to this game; harried parents through the years have learned to shut down such interrogation: “Because I said so!” The Athenians shut Socrates’ questioning down with a cup of hemlock.

But the fact is that the annoying three-year-old is probably the most capable learning agent in the history of the world. The unfettered inquiry into why and how — about anything and everything — is the very stuff of learning. It’s why young children learn sophisticated language at such a rate. “Because I said so,” is arguably the correct answer to “Why must I do what you say?” But as an answer to a question about the truth, rather than as the justification of a command, it’s entirely inadequate, and even a three-year-old knows the difference. If we consider it acceptable, we are surrendering our credentials as learners or as teachers.

The difference between the popular notion of this so-called Socratic method and the method Socrates actually follows in the Platonic dialogues is that Socrates apparently had no fixed goal in view. He was always far more concerned to dismantle specious knowledge than to supply a substitute in its place. He was willing to challenge any conclusions, and the endpoint of most of his early dialogues was not a settled agreement, but merely an admission of humility: “Well, golly, Socrates. I’m stuck. I guess I really have no idea what I was talking about.” Socrates thought that this was a pretty good beginning; indeed, he claimed that his one advantage over other presumed experts was that he at least knew that he didn’t know anything, while they, just as ignorant in fact, believed that they knew something.

Taken on this view, the Socratic method is really a fairly poor way of training someone. If you are teaching people to be technicians of some sort or other, you want them to submit to the program and take instruction. It’s arguably not the best tool for practical engineering, medicine, or the law. (There is now a major push in resistance to using any kind of real Socratic method in law school, for example.)

But training is precisely not education. Education is where the true Socratic process comes into its own. It’s about the confrontation of minds, the clarification of definitions, and the discovery and testing of new ideas. It’s a risky way of teaching. It changes the underlying supposition of the enterprise. It can no longer be seen merely as a one-way download of information from master to pupil. In its place it commends to us a common search for the truth. At this point, the teacher is at most the first among equals.

This makes — and will continue to make — a lot of people uncomfortable. It makes many teachers uncomfortable, because in the process they risk losing control — not necessarily behavioral control of a class, but their identity (often carefully groomed and still more zealously protected) as oracles whose word should not be questioned. It opens their narrative and their identity to questioning, and may put them on the defensive.

It makes students uncomfortable too — especially those who are identified as “good” students — the ones who dot every “i” and cross every “t”, and never seem to step out of line or challenge the teacher’s authority. These are the ones likeliest, in a traditional high school, to be valedictorians and materially successful, according to a few recent studies — but not the ones likeliest to make real breakthrough contributions. (The recent book Barking up the Wrong Tree by Eric Barker has some interesting things to say about this: one can read a precis of his contentions here. Barker’s work is based at least in part on Karen Arnold’s Lives of Promise, published in 1995, and discussed here.)

In practical terms, education is a mixed bag.

There is a place for training. We need at least some of the “download” kind of instruction. Basic terms need to be learned before they can be manipulated; every discipline has its grammar. I really do know Latin, for example, better than most of my students, and, at least most of the time, what I say is likelier to be correct. But my saying so neither constitutes nor assures correctness, and if a student corrects me, then, assuming he or she is right, it should be my part to accept that correction graciously, not to insist on a falsehood because I can prevail on the basis of my presumed status. If the correction is wrong, the course of charity is also to assume good intention on the student’s part, and clarify the right answer in my turn. Either way, there is no room for “alternative facts”. There is truth, and there is falsehood. The truth is always the truth, irrespective of who articulates it, and it — not I or my student — deserves the primary respect. We must serve the truth, not the other way around.

At some point in their education, though, students should also be invited to get into the ring with each other and with the teacher, to state their cases with conviction, and back them up with reasoned argument and well-documented facts. If they get knocked down, they need to learn to get back up again and keep on engaging in the process. It hurts a lot less if one realizes that it’s not one’s own personal worth that’s at stake: it’s the truth that is slowly coming to light as we go along. That’s the experience — and the thrill of the chase that it actually entails — that constitutes the deeper part of education. That’s what the true Socratic method was — and still should be — about.

Two modes of learning are prevalent today in colleges, especially — the lecture course and the seminar. In the lecture, the students are, for the most part, passive recipients of information. The agent is the lecturer, who delivers course content in a one-way stream. It’s enshrined in hundreds of years of tradition, and it has its place. But a student who never moves beyond that will emerge more or less free of actual education. The seminar, on the other hand, is about the dialectic — the back-and-forth of the process. It requires the student to become, for a time, the teacher, to challenge authority not because it is authority but because truth has the higher claim. Here disagreement is not toxic: it’s the life blood of the process, and it’s lifegiving for the student.

At Scholars Online, we have chiefly chosen to rely on something like the seminar approach for our live chats. We have, we think, very capable teachers, and there are some things that they need to impart to the students. But in large measure, these can be done by web-page “lectures”, which a student can read on his or her own time. The class discussion, however, is reciprocal, and that reciprocity of passionately-held ideas is what fires a true love of learning. It’s about the exchange — the push and pull, honoring the truth first and foremost. It may come at a cost: in Socrates’ case it certainly did. But it’s about awakening the life of the mind, without which there is no education: schooling without real engagement merely produces drones.

Failure as a good thing

Friday, March 11th, 2016

People tout many different goals in the educational enterprise, but not all goals are created equal. They require a good deal of sifting, and some should be discarded. Many of them seem to be either obvious on the one hand or, on the other, completely wrong-headed (to my way of thinking, at least).

One of the most improbable goals one could posit, however, would be failure. Yet failure — not as an end (and hence not a final goal), but as an essential and salutary means to achieving a real education — is the subject of Jessica Lahey’s The Gift of Failure (New York, HarperCollins, 2015). In all fairness, I guess I was predisposed to like what she had to say, since she’s a teacher of both English and Latin, but I genuinely think that it is one of the more trenchant critiques I have read of modern pedagogy and the child-rearing approaches that have helped shape it, sometimes with the complicity of teachers, and sometimes in spite of their best efforts.

Christe first drew my attention to an extract of her book at The Atlantic here. When we conferred after reading it, we discovered that we’d both been sufficiently impressed that we’d each ordered a copy of the book.

Lahey calls into question, first and foremost, the notion that the student (whether younger or older) really needs to feel that he or she is doing well at all stages of the process. Feeling good about your achievement, whether or not it really amounts to anything, is not in fact a particularly useful thing. That seems common-sensical to me, but it has for some time gone against the grain of a good deal of teaching theory. Instead, Lahey argues, failing — and in the process learning to get up again, and throw oneself back into the task at hand — is not only beneficial to a student, but essential to the formation of any kind of adult autonomy. Insofar as education is not merely about achieving a certain number of grades and scores, but about the actual formation of character, this is (I think) spot-on.

A good deal of her discussion is centered around the sharply diminishing value of any system of extrinsic reward — that is, anything attached secondarily to the process of learning — be it grades on a paper or a report card, a monetary payoff from parents for good grades, or the often illusory goal of getting into a good college. The only real reward for learning something, she insists, is knowing it. She has articulated better than I have a number of things I’ve tried to express before. (On the notion that the reason to learn Latin and Greek was not as a stepping-stone to something else, but really to know Latin and Greek, see here and here. On allowing the student freedom to fail, see here. On grades, see here.) Education should be — and arguably can only be — about learning, not about grades, and about mastery, not about serving time, passing tests so that one can be certified or bumped along to something else. In meticulous detail, Lahey documents the uselessness of extrinsic rewards at almost every level — not merely because they fail to achieve the desired result, but because they drag the student away from engagement in learning, dull the mind and sensitivity, and effectively promote the ongoing infantilization of our adolescents — making sure that they are never directly exposed to the real and natural consequences of either their successes or their failures. Put differently, unless you can fail, you can’t really succeed either.

Rather than merely being content to denounce the inadequacies of modern pedagogy, Ms. Lahey has concrete suggestions for how to turn things around. She honestly reports how she has had to do so herself in her ways of dealing with her own children. The book is graciously honest, and I enthusiastically recommend it to parents and teachers at every level. If I haven’t convinced you this far, though, at least read the excerpt linked above. The kind of learning she’s talking about — engaged learning tied to a real love of learning, coupled with the humility to take the occasional setback not as an invalidation of oneself but as a challenge to grow into something tougher — is precisely what we’re hoping to cultivate at Scholars Online. If that’s what you’re looking for, I hope we can provide it.

STEMs and Roots

Tuesday, February 2nd, 2016

Everywhere we see extravagant public handwringing about education. Something is not working. The economy seems to be the symptom that garners the most attention, and there are people across the political spectrum who want to fix it directly; but most seem to agree that education is at least an important piece of the solution. We must produce competitive workers for the twenty-first century, proclaim the banners and headlines; if we do not, the United States will become a third-world nation. We need to get education on the fast track — education that is edgy, aggressive, and technologically savvy. Whatever else it is, it must be up to date, it must be fast, and it must be modern. It must not be what we have been doing.

I’m a Latin teacher. If I were a standup comedian, that would be considered a punch line. In addition to Latin, I teach literature — much of it hundreds of years old. I ask students, improbably, to see it for what it itself is, not just for what they can use it for themselves. What’s the point of that? one might ask. Things need to be made relevant to them, not the other way around, don’t they?

Being a Latin teacher, however (among other things), I have gone for a number of years now to the Summer Institute of the American Classical League, made up largely of Latin teachers across the country. One might expect them to be stubbornly resistant to these concerns — or perhaps blandly oblivious. That’s far from the case. Every year, in between the discussions of Latin and Greek literature and history, there are far more devoted to pedagogy: how to make Latin relevant to the needs of the twenty-first century, how to advance the goals of STEM education using classical languages, and how to utilize the available technology in the latest and greatest ways. What that technology does or does not do is of some interest, but the most important thing for many there is that it be new and catchy and up to date. Only that way can we hope to engage our ever-so-modern students.

The accrediting body that reviewed our curricular offerings at Scholars Online supplies a torrent of exhortation about preparing our students for twenty-first century jobs by providing them with the latest skills. It’s obvious enough that the ones they have now aren’t doing the trick, since so many people are out of work, and so many of those who are employed seem to be in dead-end positions. The way out of our social and cultural morass lies, we are told, in a focus on the STEM subjects: Science, Technology, Engineering, and Math. Providing students with job skills is the main business of education. They need to be made employable. They need to be able to become wealthy, because that’s how our society understands, recognizes, and rewards worth. We pay lip service, but little else, to other standards of value.

The Sarah D. Barder Fellowship organization to which I also belong is a branch of the Johns Hopkins University Center for Talented Youth. It’s devoted to gifted and highly gifted education. At their annual conference they continue to push for skills, chiefly in the scientific and technical areas, to make our students competitive in the emergent job market. The highly gifted ought to be highly employable and hence earn high incomes. That’s what it means, isn’t it?

The politicians of both parties have contrived to disagree about almost everything, but they seem to agree about this. In January of 2014, President Barack Obama commented, “…I promise you, folks can make a lot more, potentially, with skilled manufacturing or the trades than they might with an art history degree. Now, nothing wrong with an art history degree — I love art history. So I don’t want to get a bunch of emails from everybody. I’m just saying you can make a really good living and have a great career without getting a four-year college education as long as you get the skills and the training that you need.”

From the other side of the aisle, Florida Governor Rick Scott said, “If I’m going to take money from a citizen to put into education then I’m going to take that money to create jobs. So I want that money to go to degrees where people can get jobs in this state. Is it a vital interest of the state to have more anthropologists? I don’t think so.”

They’re both, of course, right. The problem isn’t that they have come up with the wrong answer. It isn’t even that they’re asking the wrong question. It’s that they’re asking only one of several relevant questions. They have drawn entirely correct conclusions from their premises. A well-trained plumber with a twelfth-grade education (or less) can make more money than I ever will as a Ph.D. That has been obvious for some time now. If I needed any reminding, the last time we required a plumber’s service, the point was amply reinforced: the two of them walked away in a day with about what I make in a month. It’s true, too, that a supply of anthropologists is not, on the face of things, serving the “compelling interests” of the state of Florida (or any other state, probably). In all fairness, President Obama said that he wasn’t talking about the value of art history as such, but merely its value in the job market. All the same, that he was dealing with the job market as the chief index of an education’s value is symptomatic of our culture’s expectations about education and its understanding of what it’s for.

The politicians haven’t created the problem; but they have bought, and are now helping to articulate further, the prevalent assessment of what ends are worth pursuing, and, by sheer repetition and emphasis, crowding the others out. I’m not at all against STEM subjects, nor am I against technologically competent workers. I use and enjoy technology. I am not intimidated by it. I teach online. I’ve been using the Internet for twenty-odd years. I buy a fantastic range of products online. I programmed the chat software I use to teach Latin and Greek, using PHP, JavaScript, and MySQL. I’m a registered Apple Developer. I think every literate person should know not only some Latin and Greek, but also some algebra and geometry. I even think, when going through Thucydides’ description of how the Plataeans determined the height of the wall the Thebans had built around their city, “This would be so much easier if they just applied a little trigonometry.” Everyone should know how to program a computer. Those are all good things, and help us understand the world we’re living in, whether we use them for work or not.
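The Plataean episode, incidentally, is itself a tidy bit of applied arithmetic: Thucydides says that many men counted the brick courses of the wall at once, the count most often reported was taken as correct, and that count times the thickness of a brick gave the length of the scaling ladders. For anyone inclined to take my "everyone should program" advice literally, here is a minimal sketch of that estimate (the observers' counts and the brick height are invented for illustration):

```python
from statistics import mode

# Hypothetical counts of brick courses reported by different observers;
# Thucydides' Plataeans trusted the value reported most often.
counts = [42, 43, 42, 41, 42, 44, 42, 43]

BRICK_HEIGHT_M = 0.09  # assumed thickness of one brick course, in meters

best_count = mode(counts)                 # the most frequently reported count
wall_height = best_count * BRICK_HEIGHT_M # estimated wall height, hence ladder length

print(best_count, round(wall_height, 2))  # → 42 3.78
```

The same logic — take the mode of repeated noisy observations, then scale by a known unit — is a respectable estimation strategy even now; the Plataeans simply ran it without the computer.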

But they are not all that we need to know. So before you quietly determine that what I’m offering is just irrelevant, allow me to bring some news from the past. If that sounds contradictory, bear in mind that it’s really the only kind of news there is. All we know about anything at all, we know from the past, whether recent or distant. Everything in the paper or on the radio news is already in the past. Every idea we have has been formulated based on already-accumulated evidence and already-completed ratiocination. We may think we are looking at the future, but we aren’t: we’re at most observing the trends of the recent past and hypothesizing about what the future will be like. What I have to say is news, not because it’s about late-breaking happenings, but because it seems not to be widely known. The unsettling truth is that if we understood the past better and more deeply, we might be less sanguine about trusting the apparent trends of a year or even a decade as predictors of the future. They do not define our course into the infinite future, or even necessarily the short term — be they about job creation, technical developments, or weather patterns. We are no more able to envision the global culture and economy of 2050 than the independent bookseller in 1980 could have predicted that a company named Amazon would put him out of business by 2015.

So here’s my news: if the United States becomes a third-world nation (a distinct possibility), it will not be because of a failure in our technology, or even in our technological education. It will be because, in our headlong pursuit of what glitters, we have forgotten how to differentiate value from price: we have forgotten how to be a free people. Citizenship — not merely in terms of law and government, but the whole spectrum of activities involved in evaluating and making decisions about what kind of people to be, collectively and individually — is not a STEM subject. Our ability to articulate and grasp values, and to make reasoned and well-informed decisions at the polls, in the workplace, and in our families, cannot be transmitted by a simple, repeatable process. Nor can achievement in citizenship be assessed simply, or, in the short term, accurately at all. The successes and failures of the polity as a whole, and of the citizens individually, will remain for the next generation to identify and evaluate — if we have left them tools equal to the task. Our human achievement cannot be measured by lines of code, by units of product off the assembly line, or by GNP. Our competence in the business of being human cannot be certified like competence in Java or Oracle (or, for that matter, plumbing). Even a success does not necessarily hold out much prospect of employment or material advantage, because that was never what it was about in the first place. It offers only the elusive hope that we will have spent our stock of days with meaning — measured not by our net worth when we die, but by what we have contributed when we’re alive. The questions we encounter in this arena are not new ones, but rather old ones. If we lose sight of them, however, we will have left every child behind, for technocracy can offer nothing to redirect our attention to what matters.

Is learning this material of compelling interest to the state? That depends on what you think the state is. The state as a bureaucratic organism is capable of getting along just fine with drones that don’t ask any inconvenient questions. We’re already well on the way to achieving that kind of state. Noam Chomsky, ever a firebrand and not a man with whom I invariably agree, trenchantly pointed out, “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum — even encourage the more critical and dissident views. That gives people the sense that there’s free thinking going on, while all the time the presuppositions of the system are being reinforced by the limits put on the range of the debate.” He’s right. If we are to become unfree people, it will be because we gave our freedom away in exchange for material security or some other ephemeral reward — an illusion of safety and welfare, and those same jobs that President Obama and Governor Scott have tacitly accepted as the chief — or perhaps the only — real objects of our educational system. Whatever lies outside that narrow band of approved material is an object of ridicule.

If the state is the people who make it up, the question is subtly but massively different. Real education may not be in the compelling interest of the state qua state, but it is in the compelling interest of the people. It’s the unique and unfathomably complex amalgam that each person forges out of personal reflection, of coming to understand one’s place in the family, in the nation, and in the world. It is not primarily practical, and we might as well eschew it altogether if our highest goal were merely to get along materially. The only reason to value it is the belief that there is some meaning to life beyond one’s bank balance and material comfort. I cannot prove that there is, and the vocabulary of the market has done its best to be rid of the idea. But I will cling to it while I live, because I think it’s what makes that life worthwhile.

Technical skills — job skills of any sort — are means, among others, to the well-lived life. They are even useful means in their place, and everyone should become as competent as possible. But as they are means, they are definitionally not ends in themselves. They can be mistakenly viewed as ends in themselves, and sold to the credulous as such, but the traffic is fraudulent, and it corrupts the good that is being conveyed. Wherever that sale is going on, it’s because the real ends are being quietly bought up by those with the power to keep them out of our view in their own interest.

Approximately 1900 years ago, Tacitus wrote of a sea change in another civilization that had happened not by cataclysm but through inattention to what really mattered. Describing the state of Rome at the end of the reign of Augustus, he wrote: “At home all was calm. The officials carried the old names; the younger men had been born after the victory of Actium; most even of the elder generation, during the civil wars; few indeed were left who had seen the Republic. It was thus an altered world, and of the old, unspoilt Roman character not a trace lingered.” It takes but a single generation to forget the work of ages.

But perhaps that’s an old story, and terribly out of date. I teach Latin, Greek, literature, and history, after all.

Reading and Christian Charity

Friday, September 21st, 2012

[This was originally posted as part of the Scholars Online website, and it remains there among the “White Papers”, but I thought that putting it out on the blog would give it a little more exposure.]


Over my years as a teacher, I have had parents and students challenge me on my choice of literature on the grounds that some of it was not morally suitable. Among the works that have fallen under their scrutiny are the Iliad (for violence), the Oedipus Rex of Sophocles (patricide, incest), the Volsunga Saga (murder, incest), the plays of Shakespeare (murder, bawdry, violence, drunkenness, adultery, lying, theft, treason…the list is nearly endless), and Frankenstein and “The Importance of Being Earnest” (their authors’ lifestyles). Teaching the various pagan myths, furthermore, has been variously condemned on the ground that they presume false gods. Even some students who have come looking specifically for classical education have not been able to refrain from ridiculing the Greeks for their beliefs.

Most of these people have good intentions. The behavior to which they are objecting is usually indeed objectionable. We should not practice murder or incest or adultery; we should not steal to become wealthier, or kill others to enhance our personal glory; we should not embrace any of the thousand human vices that are detailed in almost any selection of literature one could pick. We should not be moved by admiration for a work to emulate its author’s bad behavior, either. And I would certainly affirm that the gods of Olympus and Valhalla are fictions, and that any inclination to worship them ourselves ought to be suppressed. So why should we concern ourselves with literature that includes them—and if we do, how should we approach them? It’s a good question, requiring a serious answer.

The case that observation elicits emulation has been made for generations, and there is something to be said for it. One is unlikely to be drawn to a sin one has never heard of. An adult charged with the care and education of children must bear this in mind. One oughtn’t expose an unprepared mind to even literary descriptions of some human activity, any more than one hands a six-year-old the keys to the car.

The countervailing argument is that some familiarity with the harsh reality of the world is necessary: children need to recognize evil to reject it. After all, we live inescapably in a sinful world, and are ourselves part of it. Our tendency to sin will not be eliminated by cultivating ignorance. We may not have heard of one sin, but we can almost certainly make up the lack in some other way out of our own flawed natures. In addition to exposing us to bad things, good literature can help teach us to recognize evil and avert it.

So these two excellent arguments stand perpetually at odds. It’s nothing new: the question rages in Plato’s Republic, and it’s still part of our public discourse relating to censorship. Unsurprisingly, Plato, who believes in the moral perfectibility of man, prefers censorship; today’s libertarian tends toward the other extreme, in a fond belief that market forces will achieve something optimized not only for economic equilibrium but for moral balance. A Christian cognizant of our fallen nature, though, can accept neither extreme. The trick seems to be in ascertaining exactly where to place the boundary at any given time.

That is of course not obvious, nor is it even really clear that there is a fixed line so much as a murky zone in the middle somewhere, to be navigated with a queasy caution. It is hard to decide what ought to be read and what ought to be suppressed without relying on some kind of calculus—whether simple or elaborate—of a work’s infractions. The problem is that an incident-count is usually going to miss the point. One can promote a thoroughly repugnant ethic without resorting to obscenity or violence. It is also possible to show the light of redemption shining through the nastiest human experiences.

The shallow scoring methodology often used to define movies or books as unsuitable because of their quantities of inappropriate behavior would also condemn the Scriptures. The Old Testament objectively recounts almost every known form of sin. The Gospels are not much better on that computation: they’re full of hypocrites and adulterers and sinners of every other sort, and the narrative comes to a wholly unwarranted execution by crucifixion. Can we allow our children to read such things?

And yet—dare we allow our children not to read such things? Are we saved by the overwhelming niceness of God, or by this horrific and bloody sacrifice once offered? Weren’t the Children of Israel freed in a sequence of increasingly grisly plagues upon their Egyptian oppressors? Don’t we need to take these stories into ourselves and make them ours? We aren’t coaxed into the Kingdom of God as into a four-star hotel, by its elegant appointments and superior service—we’re driven, battered, and corralled, lifted up out of the mire of our own making because that’s finally the only place we can turn where we don’t see our own destruction. So sooner or later we must allow our children to encounter some unpleasant material. It seems to me better that they should encounter at least some of it in literature rather than in person.

Still, as responsible parents trying to raise children in the nurture of the Lord, we have to wrestle with the boundaries of where and when, and even after that we’re going to have to determine how to approach it in substance. It’s never going to be easy, safe, or comfortable. We’re going to make mistakes. We’re going to give our children some things they’re not ready for. It will hurt them. We’re going to protect them from things they really should have known. That will hurt them too.

We can lament this, but we cannot avoid it. There’s no easy answer. People who rely on simplistic formulae will achieve commensurately simplistic solutions. I know some families, for example, that simply rely on movie ratings for their film standards. PG-13 is okay; R is not. The problem is that it doesn’t work. There are some profoundly moving and powerful movies—important ones—that are rated R. There are also some vile ones that skate by with a PG or even a G rating—perhaps not those promoting ostentatious sexuality, extreme physical violence, or drug abuse, but some that plant corrupting seeds in the soul all the same (and there are sins other than sins of sex, violence, and substance abuse). The complexity of our experience cannot be reduced to a simple tally.

My thinking on this issue has been transformed by a number of things over the last decade, but by nothing as much as C. S. Lewis’s An Experiment in Criticism. I recommend it to everyone concerned with the complex balance of both what and how we read. What I’m going to propose here has to do with drawing what Lewis says to the critical community at large at least partway under the umbrella of specifically Christian thought. (While Lewis was of course a noted Christian apologist, this particular work was written for the scholarly community, and tends not to take an openly theological approach, though I think it is informed at a deeper level by his faith.) For the most part my own thinking is motivated by an awareness that when you read something, you are not just taking words into your eyes and mind: you are actually encountering, at some level, the person who wrote it.

This is a brash claim, and I’ll be the first to admit that it’s a limited sort of encounter. We don’t know what Homer looked like, or whether there were two of him, or whether one or both of him were blind or female or slaves. There are a lot of facts we don’t know—facts we would know if we were sitting down with him (or her, or them) at dinner. We don’t know, either, whether Shakespeare the writer was Shakespeare the actor or someone else entirely. And even where we have a fair amount of reliable biographical data, we still don’t know a good deal about most authors. A lot slips through the cracks.

But how is that different from any other human encounter? There are a lot of things I don’t know about the fellow I meet on the street. And yet (assuming he doesn’t approach me with obvious hostile intent) I try to give such a person a reception in Christian charity—a fair hearing, genuinely trying to understand what he has to say to me. Our Lord tells us, “For as much as you have done to one of the least of these, you have done also to me.”

There are a lot of things, for that matter, that I don’t know or understand even about those people who are closest to me. There are parts of my wife’s personality that I am only now discovering after nearly thirty years of marriage. There are parts I’m still pretty puzzled about. Maybe I’ll get them figured out eventually, but I’m not wagering on it. This is heady stuff, and should keep us humble and aware of the profound gravity (Lewis called it the weight of glory) that inheres in every single human interaction we have.

Is it reasonable to suggest that we approach reading that way too? I think it’s a thought-experiment worth trying. When we pick up a book, we are privileged to make—on some level—the author’s acquaintance. At the most basic level, we encounter persons when we read. And that imposes on us a moral obligation to listen—listen hard, sincerely, and attempt to understand what they’re trying to tell us.

Note that there are a number of things we have no obligation to do. We’re not obliged to believe everything they say—merely to hear it, and to strive to understand them. We have—and should have—our own beliefs, and others (whether we meet them on the street or through books) have no presumptive claim upon those beliefs, unless they manage to persuade us by honest argument.

At the same time, I don’t think we need to feel obliged to judge everything they say, or to condemn them for crossing this or that line. This seems to be a favorite academic pastime, and a favorite pastime too among a lot of other groups. We live in a society ruled by the iconic thumbs-up or thumbs-down. Things are apparently either to be embraced or dismissed, with no intermediate gradations of evaluation or analysis. Many watchdog groups pronounce a movie worthy or unworthy of my attention, based on whether they agree with what they think it’s propounding. Few from any part of the political or religious spectrum suggest that I sift the work’s content for myself. It’s either one way or the other.

Do we deal with people that way? Some, I suppose, do—but that’s not what Our Lord has told us to do. We believe—at least those of us who believe that God loves us all, sinners as we all are—that we need to receive not only those with whom we agree, but also those with whom we do not. We don’t receive them for the rightness of their opinions, but because of our shared humanity. We don’t give them a cup of water in Jesus’ name because of their own righteousness (or even because of ours) but because of His. And when we do so, if we have the humility for it, we can see a partial image of God in each of them, too: again, not because they are right but because they are His—even if they don’t know it.

That’s why I can read Homer and receive the raw humanity of his tale, expressed in selfish, generous, sinful, driven, glorious—and contradictory—people. I don’t have to approve of Hamlet and his decisions (many of which are repellent, I think) to find a window on some very serious truths about human nature. We can have a literary sympathy for him without approving his deeds. I don’t have to approve of Mary Shelley’s behavior to recognize that she has some important and serious things to tell me about our capacity to create and to betray. I don’t have to approve of Oscar Wilde’s lifestyle to appreciate some of his scathingly funny (and often correct) pieces of human insight.

I am not eager to found another school of literary criticism, but I cannot find in any of the currently dominant ones the slightest note of the moral burden I think we inherit as readers. I would like at least to advance the notion that there is—less as a school of critical practice, and more as a disposition of the heart—a Christian way of reading. I would like to suggest that as a paradigm for such Christian reading, we take an approach that may seem simplistic to some, daring to others; but I think it will exercise our moral capacity and force us back where we belong, humbled, upon the all-sufficient love of Christ.

If we as Christians were to read with a fundamental charity toward the author, we would achieve something of a revolution in critical thought, at least within the Church. No, the world will not listen, most likely; it seldom does. It will have its own combinations of pettiness and loftiness, and it will come to its own mix of profound and vapid perceptions. And we may not do a lot better, in terms of critical output. But we are under no obligation to be successful: we are obliged to do what is right, irrespective of its success or failure.

Herewith I present a handful of what seem to me to be the chief implications of that principle:

We must make a good-faith effort to learn what the author was trying to say.
The so-called New Criticism of the 1950s laid it down as axiomatic that authorial intention was more or less beyond recovery, and that the text itself should be scrutinized in absolute terms as a work entirely unto itself. There is of course a profound truth behind what they claimed. We can never wholly or perfectly know the mind of another. In fact, the likelihood is high that we will from time to time make some rather serious errors.

But it does not make matters better to combine a profound insight with an oversight, even more profound. What the New Critics seem to have missed is the fact that, if there is no authorial intention at stake, there is really no point to reading at all: if there is no context, neither is there, in any meaningful sense, a text. The purpose of writing in the first place is lost, for an author is almost never merely weaving words into an abstract object for his own amusement: he is attempting to communicate with readers, whoever they may be. If we respect that intention and respond in charity, we have to take this seriously.

We will never completely discover that intention.
As I said above, our understanding will be imperfect. This chafes some, especially those who require pure theory. I’ve come to expect it. Reality is messy and confusing. Now we see as through a glass darkly: if we can only see God imperfectly (whose intention, at least, is perfect, and whose capacity for self-expression issued in the Logos of all creation) then surely our understanding of our fellow man will be no better. That’s unfortunate, but for now, it’s what we have, so we’d better make the best of it. We can hope that we will in the next life be united not only with God, but also with the rest of God’s creation, in a more perfect understanding.

Not everything in a work can be encompassed by the author’s intention.
Sometimes we will perceive something valuable in the text without being sure whether the author intended it or not. There are passages in the Psalms where the Hebrew word is simply unknown to us. There are passages in Shakespeare where the words seem clear, but the thought that knits them together is impenetrable. There are places in poetry and prose alike where words take on a complex of meanings, and we cannot be entirely sure of whether the author really meant all those meanings or just one. This is where the New Criticism got it right. In the overall richness of literary production, connections emerge either from the subconscious of the author, where murky things reside beyond the scrutiny of pure intention, or else they emerge from the innate coherence of the material itself: the author has touched a truth perhaps unwittingly, but the truth of the universe resonates with it. This is part of the literary experience, too, and it would be churlish to reject it. Christian readers, I think, can take it as a sign of the grace of God operating in and on our small creative efforts, validating them, fructifying them, and turning them to a higher purpose. I’m not sure how others take it, but that’s not my present concern.

The whole intention of a work will be greater than the sum of its parts.
We cannot evaluate a work solely by regarding the incidents of its narrative. There may be reasons to proscribe certain works because of such things, or to ban them from schools, but this is a pragmatic tactical judgment—not a real evaluation. Put somewhat more pointedly, the mere presence of a sin in a story, no matter how appalling it is, does not make the story immoral. Yes, there are stories that we can call immoral, insofar as they seem to conduce to immoral practices on the part of those who read and believe them, or (at a deeper level) because they present a lie as a truth. But most stories—and all good ones—have to account for the reality of human sin. Dramatically presenting sinful behavior in a story is not ipso facto an endorsement of the sin. A story that presumes a sinless or perfectible humanity is, in the long run, immeasurably more dangerous.

We haven’t entered into the reading process primarily to judge.
I know, the term “judge” is tossed around rather sloppily both inside the Church and at its periphery, and indignant secularists with a somewhat deficient sense of irony routinely condemn Christians for being judgmental. What I’m saying here is merely this: just as we don’t talk to people in order to tally up the conversation’s share of virtue, the goal of the process of reading is not primarily evaluative either. The goal of reading is the meeting of minds itself. That imperfect meeting, across the gaps of time, space, world-view, and personality is not a side benefit of reading; it’s what reading is about. It’s another instance of human interaction—which seems to be a large part of what God put us here for—and it should be conducted with full regard for what Lewis called the weight of glory. I don’t need to pronounce on the ultimate state of Homer’s soul (God surely doesn’t need my help to sort that out); I don’t even need to come up with a value to assign to his work. I could not possibly do so anyway. I do need to love Homer—not because of his artistic virtuosity, or even because of his own intrinsic worth as a person, but because God loved him first and loves him still.

To recognize and embrace a truth is infinitely more rewarding than rejecting something.
When we have moved away from the position of judging, we also allow all those people—imperfect as they are—to mediate God’s love and God’s presence to us, and in that very act we can turn around some of the perceived deficiencies in these works, and make of them powerful lenses. When Achilles, a proud killing machine, and yet also a deeply sensitive representative of his culture—poetic, cruel, brilliant, and vengeful—extends mercy to Priam at last, he offers not only the mercy of Achilles, but an image of the mercy of God. Does Achilles know that? No. Does Homer know it? No. Does it matter that they don’t know it? No. It’s powerful because it comes unexpectedly, like lightning from a clear sky. What we experience there is not pure alienation and bewilderment: the great shock here at the end of the ordeal of the Iliad is the shock of recognition—like climbing Everest and finding there, waiting for you, an old friend. The part of our souls that responds to the love of Christ, mirrored among our fellow churchmen on Sunday morning, should be able to recognize it, even in glimmers half-understood, in the far reaches of time and space. The incongruity of the context can endow it with a peculiar power: a bright light shines with equal intensity by day or night, but it’s by night that we see it best.

Humility is never out of place.
The words that come most painfully to most academics are, “I don’t know.” Scarcely less shameful than not knowing something is not having an opinion on it. Being willing to admit that we don’t know something, and withholding the formation of an opinion until we do, though, can be hugely liberating. It leaves us open to perceive without bias. And if it entails an admission that we aren’t infinitely wise, so much the better. We all need to be reminded of that.

It’s easier to miss something that’s there than mistakenly to see something that isn’t.
Accordingly we should remain open to the possibility—indeed, the virtual certainty—that we’ve missed something. This is one of the reasons one can keep coming back to the same literature; it has the happy result that as one grows older, one can find valuable new things in what one might previously have discarded.

It’s akin to the unicorn problem. It’s easy to demonstrate the existence of people or dogs—one need just point one out. It’s nearly impossible to prove that unicorns don’t exist, though, unless they are logically self-contradictory. After conducting a painstaking search, we can say with some assurance that there are no unicorns here—but that doesn’t mean that they aren’t lurking just beyond our sight. In the same way, it’s virtually impossible to show that a work lacks real literary value. I’m not sure why anyone feels called upon to try, or why some seem so eager to dismiss as many things as possible. As ever, the dismissal on this level is tantamount to a dismissal of the person behind the work. Dare we, on peril of our own souls, do that?

When a work doesn’t speak to me, really the worst thing I can honestly say about it is that it doesn’t speak to me. That’s a statement that’s as much about me as about it. I have too often had the humbling experience, though, of returning to works—sometimes after several readings—and discovering in them something I had missed before. It was many years and a dozen readings or more before Hamlet really started to make sense to me. I don’t think Hamlet really improved or altered in the interim.

Do note that this is different from perceiving a positive literary or moral fault in a work. Of the two, the literary fault is just a failure of workmanship; the moral fault is more problematic and probably more important. I think one can say that a work of literature is to be approached with caution or avoided altogether if its whole program is positively pernicious. But this is properly the domain of moral philosophy, and not in and of itself a literary judgment. Of course a literary scholar is also a moral agent, and this is not a concern that can be ruled out of bounds; nor in many cases are moral and artistic faults completely separable. I think it is always possible, too, that a work that is apparently advocating something we don’t approve of will, upon recognition of its artistic virtues, turn out not to have been saying that all along—but that is a complex and troubling line of inquiry too big for the present context.

Ridicule is not helpful to the enterprise.
Ridicule does not ennoble the one ridiculing; it does not benefit the one ridiculed; it does not helpfully inform the third party. It virtually never promotes real understanding; it seldom makes a significant distinction; it is, accordingly, at best pointless, at worst cruel, and most often (even when the object of ridicule is dead and gone, and beyond apparent harm) it sets a low example of callous disregard and uncharity, a pattern of not hearing and not receiving another genuinely. There is room for satire in the world, but it’s the form of literature most perilous for its practitioner: it needs to be conducted with an eye on the higher goal of lifting someone or something up, not merely tearing people down.

All truth is God’s truth.
If something is not true in and of itself, no amount of pious dressing will make it true. Conversely, if it is true, it needs no further raison d’être. We don’t need to apologize for every and any truth, or make it a platform for apologetics or pious polemics. Apologetics have their place, and I applaud and appreciate them: but truth, insofar as anything is true in itself, needs no further justification. The attempt to frame everything up as a case for Jesus, or to endow every story with a moral, or to force on every historical essay an evaluative pronouncement upon a culture, does not work to the glory of God. It instead tends to give the impression that truth is only worth heeding if we can somehow cash it in for platitudes, and tie it to an overtly theological point. Such a timorous view of the truth confounds the fear of the Lord: it’s fear for the Lord, and argues a fragile faith that cannot endure to look at the beauty of truth for what it is, and know that it is God’s.

And in a sense, I think, such people deprive themselves of a view of God in the very act of trying to keep their perspectives pure. For while I am very far from being a pantheist, I think (as Paul suggests in the first chapter of Romans) that the Lord has in fact hidden himself—or perhaps we might say, metaphorically, that he has left his fingerprints, to be discovered, as a channel of revelation and delight for us—throughout the weird and wonderful diversity of creation, with the divinely ironic result that even those who deny Him can convey to us an image of Him in spite of themselves.

Why Study Greek?

Thursday, September 13th, 2012

I would make them all learn English: and then I would let the clever ones learn Latin as an honour, and Greek as a treat.
— Winston Churchill (somewhat out of context).

A few years ago I wrote an entry on this blog entitled “Why Study Latin?” It was a distillation of my own thoughts about the actual benefits of learning Latin — and the benefits one ought legitimately to expect from doing so. I tried to distinguish the real benefits from other phantom benefits that might not be, in and of themselves, fully valid reasons for undertaking the study. Not everyone agreed, but in general I stand by what I said there. From my point of view, the chief reason to learn Latin is to be able to read Latin; a significant second is to gain that unique way of looking at the world that attends that ability. One thereby gains access to a number of great works of Latin literature in their original forms, along with an enhanced ability to think in Latinate terms.

Of course other collateral benefits might reasonably accrue, but they are neither absolutely guaranteed to the student of Latin, nor are they benefits that attend Latin study exclusively. Dr. Karl Maurer of the University of Dallas suggested that I didn’t sufficiently credit the advantages a trained Latinist would have in reading older English — and he’s definitely right that this kind of textural depth of English poetry and prose will probably elude anyone who isn’t familiar with Latin, and the way Latin education was a cornerstone of English education from about 1500 to at least 1900. I certainly don’t disagree with his claims there; I don’t think they rank as matters of linguistics as much as matters of literary development and style. They’re still not trivial, however.

Be that as it may, for a variety of reasons, some of them right and some of them wrong, learning Latin has its champions, and I hope it gains a lot more. While I don’t agree with all the reasons one might advance for Latin study, I will enthusiastically concur that it’s a terrific thing to learn and to know.

Far fewer, however, champion learning Greek so loudly. For a variety of reasons, Greek is seen as far less significant. Some of those reasons are sound: Greek does not directly stand behind a broad range of modern Western European languages the way Latin does. Many of our ideas of statecraft and polity come from Greece, but most of them came through Latin in the process. Other reasons people shy away from Greek are fairly trivial. It has an odd-looking alphabet. Its literature seems to depend on a lot of odder assumptions. Realistic, though rather defeatist, is the fact that, in general, Greek is just considered tougher to learn. Many mainstream churches no longer even require their clergy to be able to read Greek (which seems preposterous to me, but that’s another matter).

For whatever reasons, Greek is certainly studied far less at the high school level than it once was. I read a statistic a few years ago suggesting that maybe a thousand students were actually studying ancient Greek in modern American high schools at any one time. The numbers may be as high as two thousand, but surely no higher than that. I don’t know whether those numbers have risen or fallen since I read it, but I certainly see no evidence that they have skyrocketed. I do occasionally run into a Latin teacher at the American Classical League Summer Institutes who teaches some Greek, but it’s most often a sideline, and often a completely optional “extra” for before or after school. Most of those students are being exposed to the Greek alphabet and some vocabulary, but fairly few of them are receiving a rigorous exposure to the grammar of Greek as a whole. If one narrows that to those who have studied real Classical Greek, as opposed to New Testament Greek, the numbers are probably smaller still.

For me most of the reasons for learning to read Greek are similar to those for reading Latin. The chief benefit, I would still insist, is to be able to read Greek literature in its original terms. Lucie Buisson wrote an eloquent defense of Homer in Greek not long ago in this blog. You cannot acquire a perspective on the Homeric poems like Lucie’s without reading them in Greek. It’s a huge deal: something snaps into view in a way that just cannot be explained to someone who hasn’t experienced it. No translation, no matter how good, can capture it for you. Though Keats memorably thanked Chapman for something like this eye-opening experience, the fact remains that Keats didn’t have the real thing as a comparandum. Chapman’s Homer is terrific — but Homer’s Homer is better.

Beyond the immediate experience of the literary objects themselves there is the fact that Greek provides its students with what I can only (metaphorically) call another set of eyes — that is, a different way of seeing the world, with different categories of thought that run deeper than mere changes in vocabulary. Virtually any new language one learns will provide that kind of new perspective: French, Spanish, or German will do so; Latin certainly does. I would suggest that Greek provides a uniquely valuable set precisely because it is further removed from English in its basic terms.

A reasonable command of multiple languages gives us what might be likened to stereoscopic vision. One eye, or one point of view, may be able to see a great deal — but it’s still limited because it’s looking from one position. A second eye, set some distance from the first, may allow us to see a somewhat enlarged field of view, but its real benefit is that it allows us, by the uncannily accurate trigonometric processor resident in our brains, to apprehend things in three dimensions. Images that are flat to one eye achieve depth with two, and we perceive their solidity as we never could do otherwise. Something similar goes on with an array of telescope dishes spread out over a distance on the earth — they allow, by exploiting even a relatively slight amount of parallax in cosmic terms, an enhanced apprehension of depth in space. (Yes, there are also some other advantages having to do with resolution — all analogies have their limits.)

I would argue that every new language one learns will thus provide another point of view, enhancing and enriching, by a kind of analogical stereoscopy, a deeper and more penetrating view of the world. And like the more widely spaced eyes, or the telescopes strung out in a very large array, the further apart they are, the more powerfully their “parallax” (to speak purely analogically) will work upon us. This, I would argue, is one of the chief reasons for learning Greek. In some of its most fundamental assumptions, Greek is more sharply distinct from English than is Latin. A few examples will have to suffice.

Greek, for example, invites us to think about time differently. Greek verb tenses are not as much about absolute time as English verb tenses are; they are more about what linguists call aspect (or aspect of action in older writings). That is, they have more to do with the shape of an action — not its intrinsic shape, but how we’re talking about it — than merely locating it in the past, present, or future. Greek has a tense — the aorist — that English and Latin are missing. The aorist is used in the indicative mood to denote simple action in the past, but in other moods to express other encapsulations of simple verb action. Greek aorist verbs in the indicative will certainly locate events in the temporal continuum, and certainly English also has ways to express aspect — things such as the progressive or emphatic verb forms: e.g., “I run” vs. “I am running” or “I do run”. But whereas the English verb is chiefly centered in the idea of when something happened or is happening or will happen, with aspect being somewhat secondary, in Greek it’s the other way around. What exactly that does to the way Greek speakers and thinkers see the world is probably impossible to nail down exactly — but it’s not trivial.

Attic and earlier Greek has a whole mood of the verb that isn’t present in English or Latin — the optative. Students of New Testament Greek won’t see much of it, as a rule. There are a few examples such as Paul’s repeated μὴ γένοιτο in Romans (sometimes translated as “by no means”, but intrinsically meaning something more like “may it not come about”). But Attic and older dialects (like Homeric Greek) are loaded with it. It’s not just an arbitrary extension of a subjunctive idea: it runs alongside the subjunctive and plays parallel games with it in ways that defy simple classification.

Greek has a voice that neither English nor Latin knows, properly speaking — what is called the middle voice. It is neither active nor passive; but tends to refer to things acting on or on behalf of themselves, either reflexively or in a more convoluted way that defies any kind of classification in English language categories.

The Greek conditional sentence has a range of subtlety and nuance that dwarfs almost anything we have in English. Expressing a condition in Greek, or translating a condition from Greek, requires a very particular degree of attention to how the condition is doing what it is doing. In the present and the past, one may have contrary to fact conditions (“If I were a rich man, I would have more staircases,” or “If I had brought an umbrella I would not have become so wet,”), general conditions (“If you push that button, a light goes on,”), and particular conditions (“If you have the older edition of the book, this paragraph is different”); in the future there are three other kinds of conditions, one of them nearly (but not quite) contrary to fact (“If you were to steal my diamonds, I’d be sad,”) called the future less vivid, and then a future more vivid and a future most vivid, representing increasing degrees of urgency in the future. All of these can be tweaked and modified and, in some rather athletic situations, mixed. If you study Greek, you will never think about conditions in quite the same way again.

Greek has what are called conditional temporal clauses that model themselves on conditions in terms of their verb usage, though they don’t actually take the form of a condition. There is something like this in English, but because we don’t use such a precise and distinct range of verbs for these clauses, they don’t show their similarities nearly as well.

The Greek participle is a powerhouse unlike any in any other Western language. Whole clauses and ideas for which we would require entire sentences can be packaged up with nuance and dexterity in participles and participial phrases. Because Greek participles have vastly more forms than English (which has only a perfect passive and a present active — “broken” and “breaking”) or than Latin (which has a perfect passive and a present active, and future active and passive forms), it can do vastly more. Greek participles have a variety of tenses, they can appear in active, middle, and passive voices, and they are inflected for all cases, numbers, and genders. All of these will affect the way one apprehends these nuggets of meaning in the language.

Those are only some examples of how a Greek sentence enforces a subtly different pattern of thought upon people who are dealing with it. As I said, however, for me the real treasure is in seeing these things in action, and seeing the ideas that arise through and in these expressions. So what is so special that it needs to be read in Greek?

Lucie already has written thoroughly enough about the joys of Homer; much the same could be said of almost any of the other classical authors. Plato’s dialogues come alive with a witty, edgy repartee that mostly gets flattened in translation. The dazzling wordplay and terrifying rhythms of Euripidean choruses cannot be emulated in meaningful English. Herodotus’s meandering storytelling in his slightly twisted Ionic dialect is a piece of wayfaring all on its own. The list goes on.

For a Christian, of course, being able to read the New Testament in its original form is a very significant advantage. Those who have spent any time investigating what we do at Scholars Online will realize that this is perhaps an odd thing to bring up, since we don’t teach New Testament Greek as such. My rationale there is really quite simple: the marginal cost of learning classical Attic Greek is small enough, compared with its advantages, that there seems no point in learning merely the New Testament (koine) version of the language. Anyone who can read Attic Greek can handle the New Testament with virtually no trouble. Yes, there are a few different forms: some internal consonants are lost, so that γίγνομαι (gignomai) becomes γίνομαι (ginomai), and the like. Yes, some of the more elaborate constructions go away, and one has to get used to a range of conditions (for example) that is significantly diminished from the Attic models I talked about above. But none of this will really throw a student of Attic into a tailspin; the converse is not true. Someone trained in New Testament Greek can read only New Testament Greek. Homer, Euripides, Plato, Aristotle, Sophocles, Herodotus, Thucydides — all the treasures of the classical Greek tradition remain inaccessible. But the important contents of the New Testament and the early Greek church fathers are open even with this restricted subset of Greek — and they are very well worth reading.

Greek is not, as mentioned earlier, a very popular subject to take at the high school level, and it’s obvious that it’s one of those things that requires a real investment of time and effort. Nevertheless, it is one of the most rewarding things one can study, both for the intrinsic delights of reading Greek texts and for some of the new categories of thought it will open up. For the truly exceptional student it can go alongside Latin to create a much richer apprehension of the way language and literary art can work, and to provide a set of age-old eyes with which to look all the more precisely at the modern world.

Computer Programming as a Liberal Art

Monday, September 3rd, 2012

One of the college majors most widely pursued these days is computer science. This is largely because it’s generally seen as a ticket into a difficult and parsimonious job market. Specific computer skills are demonstrably marketable: one need merely review the help wanted section of almost any newspaper to see just how particular those demands are.

As a field of study, in other words, its value is generally seen entirely in terms of employability. It’s about training, rather than about education. Just to be clear: by “education”, I mean something that has to do with forming a person as a whole, rather than just preparing him or her for a given job, which I generally refer to as “training”. If one wants to become somewhat Aristotelian and Dantean, it’s at least partly a distinction between essence and function. (That these two are inter-related is relevant, I think, to what follows.) One sign of the distinction, however, is that if things evolve sufficiently, one’s former training may become irrelevant, and one may need to be retrained for some other task or set of tasks. Education, on the other hand, is cumulative. Nothing is ever entirely lost or wasted; each thing we learn provides us with a new set of eyes, so to speak, with which to view the next thing. In a broad and somewhat simplistic reduction, training teaches you how to do, while education teaches you how to think.

One of the implications of that, I suppose, is that the distinction between education and training has largely to do with how one approaches it. What is training for one person may well be education for another. In fact, in the real world, probably these two things don’t actually appear unmixed. Life being what it is, and given that God has a sense of humor, what was training at one time may, on reflection, turn into something more like education. That’s all fine. Neither education nor training is a bad thing, and one needs both in the course of a well-balanced life. And though keeping the two distinct may be of considerable practical value, we must also acknowledge that the line is blurry. Whatever one takes in an educational mode will probably produce an educational effect, even if it’s something normally considered to be training. If this distinction seems a bit like C. S. Lewis’s distinction between “using” and “receiving”, articulated in his An Experiment in Criticism, that’s probably not accidental. Lewis’s argument there has gone a long way toward forming how I look at such things.

Having laid that groundwork, therefore, I’d like to talk a bit about computer programming as a liberal art. Anyone who knows me or knows much about me knows that I’m not really a programmer by profession, and that the mathematical studies were not my strong suit in high school or college (though I’ve since come to make peace with them).

Programming is obviously not one of the original liberal arts. Then again, neither are most of the things we study under today’s “liberal arts” heading. The original liberal arts included seven: grammar, dialectic, and rhetoric — all of which were about cultivating precise expression (and which were effectively a kind of training for ancient legal processes), and arithmetic, geometry, music, and astronomy. Those last four were all mathematical disciplines: both music and astronomy bore virtually no relation to what is taught today under those rubrics. Music was not about pavanes or symphonies or improvisational jazz: it was about divisions of vibrating strings into equal sections, and the harmonies thereby generated. Astronomy was similarly not about celestial atmospheres or planetary gravitation, but about proportions and periodicity in the heavens, and the placement of planets on epicycles. Kepler managed to dispense with epicycles, which are now of chiefly historical interest.

In keeping with the spirit, if not the letter, of that original categorization, we’ve come to apply the term “liberal arts” today to almost any discipline that is pursued for its own sake — or at least not for the sake of any immediate material or financial advantage. Art, literature, drama, and music (of the pavane-symphony-jazz sort) are all considered liberal arts largely because they have no immediate practical application to the job of surviving in the world. That’s okay, as long as we know what we’re doing, and realize that it’s not quite the same thing.

While today’s economic life in the “information age” is largely driven by computers, and there are job openings for those with the right set of skills and certifications, I would suggest that computer programming does have a place in the education of a free and adaptable person in the modern world, irrespective of whether it has any direct or immediate job applicability.

I first encountered computer programming (in a practical sense) when I was in graduate school in classics. At the time (when we got our first computer, an Osborne I with 64K of memory and two drives with 92K capacity each), there was virtually nothing to do with classics that was going to be aided a great deal by computers or programming, other than using the very basic word processor to produce papers. That was indeed useful — but had nothing to do with programming from my own perspective. Still, I found Microsoft Basic and some of the other tools inviting and intriguing — eventually moving on to Forth, Pascal, C, and even some 8080 Assembler — because they allowed one to envision new things to do, and project ways of doing them.

Programming — originally recreational as it might have been — taught me a number of things that I have come to use at various levels in my own personal and professional life. Even more importantly, though, it has taught me things that are fundamental about the nature of thought and the way I can go about doing anything at all.

Douglas Adams, the author of the Hitchhiker’s Guide books, probably captured the most essential truth about programming in Dirk Gently’s Holistic Detective Agency:

“…if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve learned something about it yourself.”

I might add that not only have you yourself learned something about it, but you have, in the process, learned something about yourself.

Adams also wrote, “I am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand.” This is, of course, one of the drolleries about programming. The hidden benefit is that, once perfected, that tool, whatever it was, allows one to save ten seconds every time it is run. If one judges things and their needs rightly, one might be able to save ten seconds a few hundred thousand or even a few million times. At that point, the time spent on programming the tool will not merely save time, but may make possible things that simply could never have been done otherwise.
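The arithmetic behind that droll observation is worth making explicit. Here is a back-of-the-envelope sketch (the figures are illustrative, not Adams’s or mine):

```python
# How many runs does an automated task need before the time spent
# automating it pays for itself?
def break_even_runs(dev_seconds, saved_seconds_per_run):
    """Number of runs at which automation time equals time saved."""
    return dev_seconds / saved_seconds_per_run

# An eight-hour day of programming, versus ten seconds saved per run:
runs = break_even_runs(8 * 3600, 10)
print(runs)  # 2880.0 runs to break even
```

Past that break-even point, every further run is pure profit; a tool run a million times has repaid its day of development several hundred times over.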

One occasionally hears it said that a good programmer is a lazy programmer. That’s not strictly true — but the fact is that a really effective programmer is one who would rather do something once, and then have it take over the job of repeating things. A good programmer will use one set of tools to create other tools — and those will increase his or her effective range not two or three times, but often a thousandfold or more. Related to this is the curious phenomenon that a really good programmer is probably worth a few hundred merely adequate ones, in terms of productivity. The market realities haven’t yet caught up with this fact — and it may be that they never will — but it’s an interesting phenomenon.

Not only does programming require one to break things down into very tiny granular steps, but it also encourages one to come up with the simplest way of expressing those things. Economy of expression comes close to the liberal arts of rhetoric and dialectic, in its own way. Something expressed elegantly has a certain intrinsic beauty, even. Non-programmers are often nonplussed when they hear programmers talking about another programmer’s style or the beauty of his or her code — but the phenomenon is as real as the elegance of a Ciceronian period.

Pursuit of elegance and economy in programming also invites us to try looking at things from the other side of the process. When programming an early version of the game of Life for the Osborne, I discovered that simply inverting a certain algorithm (having each live cell increment the neighbor count of all its adjacent spaces, rather than having each space count its live neighbors) achieved an eight-to-tenfold improvement in performance. Once one has done this kind of thing a few times, one starts to look for such opportunities. They are not all in a programming context.
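For the curious, the inversion described above can be sketched briefly. This is not the original Osborne code, of course, but a minimal modern rendering of the same idea, assuming the standard Conway rules: only the (usually few) live cells do any work, pushing counts outward, instead of every cell on the board scanning its eight neighbors.

```python
# Conway's Game of Life, one generation, by the "inverted" method:
# each live cell increments its neighbors' counts, rather than each
# cell counting its live neighbors.
def step(live, width, height):
    """live: set of (x, y) live cells; returns the next generation."""
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue  # a cell is not its own neighbor
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    counts[(nx, ny)] = counts.get((nx, ny), 0) + 1
    # Standard rules: a live cell survives with 2 or 3 neighbors;
    # a dead cell with exactly 3 neighbors comes to life.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

On a sparse board the savings are obvious: the work is proportional to the number of live cells, not to the size of the grid, which is just the sort of eight-to-tenfold gain described above.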

There are general truths that one can learn from engaging in a larger programming project, too. I’ve come reluctantly to realize over the years that the problem in coming up with a really good computer program is seldom an inability to execute what one envisions: it’s much more likely to be a problem of executing what one hasn’t adequately envisioned in the first place. Not knowing what winning looks like, in other words, makes the game much harder to play. Forming a really clear plan first is going to pay dividends all the way down the line. One can find a few thousand applications for that principle every day, both in the computing world and everywhere else. Rushing into the production of something is almost always a recipe for disaster, a fact explored by Frederick P. Brooks in his brilliantly insightful (and still relevant) 1975 book, The Mythical Man-Month, which documents his own blunders as the head of the IBM System 360 project, and the costly lessons he learned from the process.

One of the virtues of programming as a way of training the mind is that it provides an objective “hard” target. One cannot make merely suggestive remarks to a computer and expect them to be understood. A computer is, in some ways, an objective engine of pure logic, and it is relentless and completely unsympathetic. It will do precisely what it’s told to do — no more and no less. Barring actual mechanical failure, it will do it over and over again exactly the same way. One cannot browbeat or cajole a computer into changing its approach. There’s a practical lesson there, and probably a moral one as well. People can be persuaded; reality just doesn’t work that way — which is probably just as well.

I am certainly not the first to have noted that computer programming can have this kind of function in educational terms. Brian Kernighan, someone well known to the community of Unix and C programmers over the years (he was a major part of the team that invented C and Unix), has argued that it’s precisely that in a New York Times article linked here. Donald Knuth, one of the magisterial figures of the first generation of programming, holds forth on its place as an art, too, here. In 2008, members of the faculties of Williams College and Pomona College (my own alma mater) collaborated on a similar statement available here. Another reflection on computer science and math in a pedagogical context is here. And of course Douglas Hofstadter in 1979 adumbrated some of the more important issues in his delightful and bizarre book, Gödel, Escher, Bach: An Eternal Golden Braid.

Is this all theory and general knowledge? Of course not. What one learns along the line here can be completely practical, too, even in a narrower sense. For me it paid off in ways I could never have envisioned when I was starting out.

When I was finishing my dissertation — an edition of the ninth-century Latin commentary of Claudius, Bishop of Turin, on the Gospel of Matthew — I realized that there was no practical way to produce a page format that would echo what normal classical and mediaeval text editions typically show on a page. Microsoft Word (which was what I was using at the time) supported footnotes — but typically these texts don’t use footnotes. Instead, the variations in manuscript readings are keyed not to footnote marks, but to the line numbers of the original text, and kept in a repository of textual variants at the bottom of the page (what is called in the trade an apparatus criticus). In addition, I wanted to have two further sets of notes at the bottom of the page, one giving the sources of the earlier church fathers that Claudius was quoting, and another giving specifically scriptural citations. I also wanted to mark in the margins where the foliation of the original manuscripts changed. Unsurprisingly, there’s really not a way to get Microsoft Word to do all that for you automatically. But with a bit of Pascal, I was able to write a page formatter that would take a compressed set of notes indicating all these things, and parcel them out to the right parts of the page, in a way that would be consistent with RTF and University Microfilms standards.

When, some years ago, we were setting Scholars Online up as an independent operation, I was able, using Javascript, PHP, and MySQL, to write a chat program that would serve our needs. It’s done pretty well since. It’s robust enough that it hasn’t seriously failed; we now have thousands of chats recorded, supporting various languages, pictures, audio and video files, and so on. I didn’t set out to learn programming to accomplish something like this. It was just what needed to be done.

Recently I had to recast my Latin IV class to correspond to the new AP curriculum definition from the College Board. (While it is not, for several reasons, a certified AP course, I’m using the course definition, on the assumption that a majority of the students will want to take the AP exam.) Among the things I wanted to do was to provide a set of vocabulary quizzes to keep the students ahead of the curve, and reduce the amount of dictionary-thumping they’d have to do en route. Using Lee Butterman’s useful and elegant NoDictionaries site, I was able to get a complete list of the words required for the passages in question from Caesar and Vergil; using a spreadsheet, I was able to sort and re-order these lists so as to catch each word the first time it appeared, and eliminate the repetitions; using regular expressions with a “grep” utility in my programming editor (BBEdit for the Macintosh) I was able to take those lists and format them into GIFT format files for importation into the Moodle, where they will be, I trust, reasonably useful for my students. That took me less than a day for several thousand words — something I probably could not have done otherwise in anything approaching a reasonable amount of time. For none of those tasks did I have any training as such. But the ways of thinking I had learned by doing other programming tasks enabled me to do these here.
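The last step of that pipeline, turning a cleaned-up word list into Moodle’s GIFT import format, is simple enough to sketch. What follows is not my actual grep patterns or word lists; the words, glosses, and question wording are purely illustrative, and the GIFT syntax shown is the basic short-answer form:

```python
# A minimal sketch of converting a vocabulary list into Moodle
# GIFT-format short-answer questions. Real lists (e.g. from Caesar
# or Vergil) would be much longer; these entries are made up.
def to_gift(vocab):
    """vocab: list of (word, gloss) pairs; returns GIFT-format text."""
    questions = []
    for word, gloss in vocab:
        # GIFT short-answer form:  ::title:: question { =answer }
        questions.append("::%s::What does '%s' mean? {=%s}"
                         % (word, word, gloss))
    # GIFT separates questions with a blank line.
    return "\n\n".join(questions)

sample = [("gallia", "Gaul"), ("arma", "arms")]
print(to_gift(sample))
```

A real converter would also need to escape GIFT’s special characters (`~`, `=`, `#`, `{`, `}`, `:`) in words and glosses, but the principle is the same: a few lines of scripting stand between a spreadsheet and several thousand ready-made quiz questions.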

Perhaps the real lesson here is that there is probably nothing — however mechanical it may seem to be — that cannot be in some senses used as a basis of education, and no education that cannot yield some practical fruit down the road a ways. That all seems consistent (to me) with the larger divine economy of things.