Archive for April, 2010

The True Test of Education

Saturday, April 24th, 2010

Sometime around 1969, standing in the breezeway of Balch Hall at Scripps College in Claremont, I typed “Echo Hello World” on the keyboard of the metal Texas Instruments paper terminal, saved the string as a text file named (with masterful originality) “ChristeHello” over a 300 baud phone line connection on the Caltech computer 35 miles away in Pasadena, typed “Execute ChristeHello”, and watched “Hello World” appear on the next line. Thus began my sometimes rapturous, sometimes contentious relationship with ARPANET, programming, and distance learning.

I hung around the computer lab and made friends with the senior geeks who performed their work-study duties by feeding the computers large stacks of buff-colored cards and fixing the magnetic tape leaders when they broke. If I brought food, I could get them to talk to me. They spoke a strange language full of acronyms and electronics terms, little of which made sense, but they did explain how to write simple BASIC instructions, and I eventually got the computer to calculate my astronomy lab results. I probably spent five hours programming successful code for every hour it would have taken me to do the homework the hard way (with a slide rule), but it was satisfying to finish the code at last and push the button and have the answer come out reliably, even if I was never going to run that particular program again.

I kept on writing code, conning system operators into giving me guest accounts on one system or another and asking what must have been not completely dumb questions, since they took the time to answer me. Programming is fun: it’s the only way for the truly lazy person to get by in the modern world. The software engineer’s motto is “Do it once, do it right, and never do it again”. I programmed my way through grad school for a couple of years, into NASA/JPL, and into the Rand Corporation, working with computers running IBM JCL 360, IBM 3300 HASP, and PDP11 operating systems. Eventually I wound up on a VAX780 running Berkeley UNIX 4.2, the forerunner of Solaris and the UNIX systems that now underlie Macintosh’s OS X systems. I wrote programs in whatever I could get to compile: in BASIC, Fortran, PL/1, VICAR, C, C++, and then I discovered databases and learned how to use the MarkIV, QUEL and SQL languages to manipulate massive amounts of data.

In that nearly twenty-five-year period, I never once took a formal programming course. If I wanted to learn a language for a new project, I got an account on the right machine (system administrators are always hungry and easy to persuade after a good meal), tried not to bring it down doing something I didn’t understand (in this I was not always successful, but luckily I was always forgiven), read lots of other people’s code, and asked flattering questions of those whose programming style I most admired. I had three incredible mentors in that time; any good software engineering skills I know are because Jackson, Ed, and Jim took the time to explain things to me, often many times, until I thought they made sense and I could translate the concept into code that I could maintain. All the remaining bad habits I have are my own fault for not listening to something I’m sure one of them told me at some time or another.

On one of the projects I worked on toward the end of this period, we hired a newly-minted college graduate with a Bachelor of Science degree in computer science. I was both excited and apprehensive. Here was someone who had actually studied this stuff for real, taken courses, learned how to do it right, passed tests even. I could learn from her — or maybe be replaced by her; I wasn’t sure which was more likely. After all, she had the degree and the professional accreditation that I conspicuously lacked. She joined our project meetings and rattled off proposals to “normalize our databases” and “modularize our code”. When we asked her how we were supposed to revise the code we had, she recited rules about entity-relationship diagramming as though it should be obvious to us how to implement the details of her industry-standard proposal.

Somebody finally asked her what programs she’d written, and she admitted that she had done some coding for several classes — exercises of a couple of dozen lines each demonstrating a mastery of a particular technique, but in complete isolation from any other program. She’d never actually had to put it all together to create a complex multiple-function system. She’d never worked with other programmers on a project, or integrated code written by different people with different styles into a single coherent executable program. While she had memorized the textbook and could identify concepts by name, she had never applied anything she’d learned to a real-world program, where the analysis it produced would be used to make decisions that could affect the jobs and lives of real people. Over these discussions, it became clear that she’d studied hard — to pass the test at the end of the course. She had great study skills and good test-taking skills. Her test scores were high, and her grades were correspondingly good. But she had no idea how to begin to analyze a problem that involved any parameters beyond those in her text, or how to formulate an approach that would help her craft a solution suited to a context she hadn’t seen.

To be fair, the problem was not with the student, but with the “educational” system she trusted, one that was (and still is) more focused on turning out workers than thinkers. She wanted a good job, in a well-paying field, and chose software programming because it suited her talents and interests. But what she received by way of “education” was really job training, the presentation of materials targeted toward producing an efficient practitioner of a set of processes with relation to a known set of problems. As job training, it worked well: she knew how to recognize certain situations and give them a name, and she knew how to apply a proven solution to the recognized problem efficiently.

As education, in the classic liberal arts sense of producing a clear-thinking individual, it failed miserably. Education is more than training. Yes, education must teach basic concepts, the terms of the field and the steps of the processes: these are the grammar of the topic and fundamental to any further work. Yes, education must teach skills in performing basic tasks efficiently. Certainly, education includes some level of training — but only as one aspect of its proper sphere.

An educational process must do far more than train; otherwise, it merely pays lip service to the rationale that it is “helping students develop their full potential”. This is a worthwhile goal: from a Christian point of view, helping students reach their potential is really helping them recognize, develop, and use their talents to the glory of God. Education should give them the context for the information they learn, and a sense of ethical responsibility for how that information is used. It should hone the students’ use of logical analysis and self-evaluation, so that students can recognize the shortcomings of their own work, without a test or teacher’s feedback. It should give the student self-confidence through experience, so that setbacks and failures to “get it right” the first time become an accepted and expected part of the educational process, not an excuse to opt out. It should encourage creativity, not penalize it for not fitting in one of four answers. It should result in joy in the knowing: knowledge is worth something in and of itself, and needs no “usefulness” for justification. In this context, a grade becomes a temporary and limited measure of progress on the way to reaching this educated state, nothing more. It is neither the end nor the means to the end.

Unfortunately, the organization of our actual educational system works more like job training than classical liberal arts education. Our standardized tests, which form the backbone of our “educational assessment system”, focus on basic information mastery and limited application skills. They cannot adequately assess a student’s ability to analyze complex situations, to think creatively, or even to recognize fuzzy but often fruitful relationships between ideas in different fields. At their worst, such standardized tests only determine whether the student is able to recognize the name of a concept (without necessarily any comprehension of the concept). At their best, they may push a student to recognize the correct outcome of an appropriate analysis of a situation (and to be fair, most standardized tests do include this aspect). These standard examinations can be excellent measures of effective training, and it is appropriate to use them this way, particularly in establishing basic control of material.

But because they fail to assess creative and insightful approaches to analysis and evaluation, when these exams become the end of “education” in themselves, they effectively discourage methods that do try to develop analysis, perspective, and creativity. Students have limited resources, and they want to put their efforts where they will pay off, so they often ask “will this be on the test?”. Teachers, whose effectiveness is measured by their students’ performance on these exams, teach to the test so their students perform well. The dependence on this kind of testing and evaluation limits our educational system, and prevents it from building on the foundation that this approach does create. We produce students who are proficient test takers, but, like my co-worker, not really well educated.

A recent issue of US News and World Report carried an article on “Surviving the American Makeover”. In it, Rick Newman stated that “The highest earners” in the new American economy “are well educated, but have strong tacit and cognitive skills that are difficult to teach in a classroom: informed intuition, judgment under pressure, the ability to solve problems that don’t have an obvious solution.” (p. 16, USNWR Volume 147, Number 3, March 2010)

Our goal as teachers must be to find ways to help students develop these cognitive skills, informed intuition, and especially judgment under pressure by providing courses that go beyond “basic training” and challenge them to analyze, experiment, create, and above all, try again if they don’t succeed the first time. We want this not because we want them to be “high earners” (although that isn’t necessarily a bad thing), but because the world needs people who can provide real, ethical solutions for complex problems, who will do the right thing whatever the pay, or the cost. We want to produce students who look at problems that don’t have an obvious solution, and rather than resorting to a standard example that won’t help, pawning off an easy but unethical solution, or giving up in confusion and despair, say “Well, not yet…”, roll up their sleeves, and go to work, preferably singing.

Latin pronunciation for the continuing student

Monday, April 19th, 2010

On bulletin boards and in magazines dealing with classical homeschooling, one question that arises over and over again is, “What sort of pronunciation should we use in teaching Latin?” The options usually boil down to two: the reconstructed classical pronunciation, and the Italianate ecclesiastical pronunciation. Both have their champions, and the discussions that follow in their defense usually generate more heat than light. A lot of the discussion is usually centered on which one is right.

Asking “Which pronunciation is the right one?” is an exercise in historical reductionism doomed to fail. One cannot define an entire spectrum from a single point, and the history of Latin as a living language extends for somewhat over two thousand years. Either is right, as far as it goes; neither is satisfactory for all occasions.

Typically, the most attention is given to this question by parents just starting out in Latin instruction. At this point, the question is more or less moot, and any real anxiety is out of proportion to its pedagogical significance. While learning forms — declensions and conjugations — it doesn’t matter much how you pronounce them, as long as you learn those forms, what they mean, and what they’re for. For practical purposes, therefore, my own suggestion is to pick one — whether purposefully or arbitrarily — and use it consistently for the first year or two. You’re probably better off choosing a pronunciation matching the kinds of texts your introductory course draws on. With something like Wheelock’s Latin Grammar, which draws most of its examples from classical authors, you probably want to go with a classical pronunciation. If you’re using a course like Henle’s, which is based on ecclesiastical texts and ecclesiastical authors, then it only makes sense to go with that as your pronunciation standard. If your chief reason for learning Latin at first is to be able to sing church music, that’s a good reason to start with an ecclesiastical pronunciation as well.

Later on, though, pronunciation will become significant, especially when one begins to deal with literary products. Poetry, in particular, is largely about the sounds of a language. I’ll discuss that a little bit later. First, however, it’s probably worth dispelling some of the widespread misinformation that gets circulated.

The one I’ve heard most frequently is, “There are no recordings of classical Latin speakers. It’s clearly impossible to know how the language was pronounced.” This is generally used as a way of dismissing the classical pronunciation, though a parallel argument could be used as easily to dismiss any other system. Unfortunately, those who make this argument are merely asserting that they don’t know how to figure something of this sort out. But there are those who do.

At the subtlest level, yes — there are things we don’t know. We’d give a lot to be able to plant even one microphone in the Forum to pick up just one of Cicero’s orations. But we actually do know, with fair accuracy, how the major inventory of language sounds was produced. Historical linguistics is a slow and painstaking process, but over its long history people really have taken those pains, and so there is now a substantial body of data available for analysis.

Detailing all those sources of information is beyond the scope of this discussion, but a few examples may suffice. We do have a few grammatical and literary discussions about mispronunciations, of course. These are at least somewhat interesting. But they usually document the egregiously odd — such as Catullus’ harangue against a certain Arrius, who added initial “h” sounds to a lot of words that should have begun with a vowel. Those are colorful, but provide less information than we might wish, and almost no information about what was normal. There are, moreover, relatively few of them.

Just as one might read novels and the publications of the popular press today without learning a great deal about how we pronounce English, one could stare at a page of Cicero for the next ten years and learn little or nothing about how Cicero pronounced it. It would help you very little in distinguishing classical from ecclesiastical pronunciations.

But those are literary texts, and literary texts are not the only tools of the discipline. The real treasures for the historical linguist are errors. Some of the papers I get from my students, for example, could provide more information about how we speak than a ten-year run of National Geographic or the New Yorker: those who write “I might of known” instead of “I might have known” are providing virtually irrefutable evidence that, in its auxiliary usage, “have” is normally pronounced much the same way as “of”. That will tell us something about the loss of the initial h; it will also tell us that in “of” the final f is like a v. The fact that one sees, with increasing frequency, comparative phrases formed with “then” rather than “than” illustrates the fact that in an unemphatic position (as these connective words almost always are), the vowel itself tends to settle down to about the same middle schwa sound (ə).

Our surviving evidence from the ancient world is (unsurprisingly) short on student papers, but it is not short of inscriptions scratched into stone of one sort or another. Some of these are quite elegant; others are primitive — the desperate efforts, for example, of a grieving parent who wants to memorialize his dead son or daughter as best he can. Often that best is riddled with misspellings. The inscriptions themselves are often rather moving, reaching across centuries with an uncommon universality, but in addition, almost every one of them tells us something about the language.

Anyone interested in the detailed conclusions about classical Latin, and the fastidious work that has gone into reconstructing it, would be well advised to take a look at W. Sidney Allen, Vox Latina. It’s fairly dry going, unless you have a philological bent, but it’s worth reading if you do. It argues every point with very solid evidence.

Of course the “we can’t know” argument is not the only one out there. Others are more belligerent and random. One of the more bizarre ones I’ve encountered over the last few years includes the reflection that “if it’s good enough for Dante, it’s good enough for me.” This sounds full of conviction, but substitutes triumphal ignorance for reason. Anyone even glancingly familiar with rhetorical fallacies will identify it as an appeal to inappropriate authority. Dante, writing a little more than 1300 years after Vergil (whom he regarded as his master), had no better direct access to recordings of Classical Latin than we do, but certainly lacked all the comparative evidence that has been marshalled over the last two centuries. To read Vergil as Dante did is probably a useful exercise, if you are interested in learning what Dante was hearing. It tells us virtually nothing about what Vergil was writing, however.

So does it matter what kind of pronunciation you use, and if so, why? To start with, no. It will obviously not affect your conversation with native Romans. It will probably not vastly affect your understanding of Latin texts. Some of the best classicists I have known have had very peculiar pronunciation. They seemed to get along. The English have had a long tradition of some of the finest classical scholarship in the world, coupled with some of the worst pronunciation imaginable.

But if you want to deal with authors on their own terms, you probably need ultimately to learn and use two (or perhaps more) different ways to pronounce Latin. Sure, you should start with one method while you’re learning the ropes. But if you really want to appreciate the Latin that was written over a space of a thousand years, you have to be ready to adapt. It’s not really that hard, and the fruits of the exercise are considerable.

What’s wrong with reading classical Latin as if it were Mediaeval Latin? It’s not a matter of its being morally wrong, I would argue, and if you can read and appreciate Cicero’s orations while reading them with a thick Italian or English accent, fine. But you will lose the music of the language, and especially with poetry, that’s important. Just as a brief illustrative case, let’s look at two consonants and a diphthong that are treated differently in Classical and Mediaeval pronunciations.

  • In classical Latin pronunciation, the letter C is invariably hard — like our K. It does not vary with position. In ecclesiastical Latin pronunciation, it will change to something like our CH sound (as in “church”) when followed by an I or an E.
  • Similarly, in classical Latin pronunciation, the letter T is invariably hard. In ecclesiastical Latin pronunciation, it will change to something like our S or TS sound when followed by an I or an E.
  • The diphthong AE in classical Latin is a true diphthong — beginning with A (as in “amen”) and gliding into an E or I sound — much like our word “eye”. In the ecclesiastical pronunciation, it is flattened to the equivalent of E — much like what we call a “long” A in modern English.

So in the classical pronunciation, the word “caelum” (heaven) comes out to something like “kylum”. In ecclesiastical pronunciation, it is going to be more like “chaylum”.
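Since the three rules above are purely mechanical, they can be sketched in a few lines of code. The following Python toy is strictly illustrative — it is not any standard transliteration scheme, and real Latin phonology involves far more rules than these three — but it shows how the same spelling diverges under the two systems:

```python
def pronounce(word, style="classical"):
    """Apply the C, T, and AE rules above, yielding a rough English-style
    pronunciation hint. A toy sketch, not a full phonological model."""
    w = word.lower()
    out = []
    i = 0
    while i < len(w):
        if w.startswith("ae", i):
            # Classical: true diphthong (as in "eye", rendered "y" here,
            # matching "kylum"); ecclesiastical: flattened to "ay".
            out.append("y" if style == "classical" else "ay")
            i += 2
        elif w[i] == "c":
            nxt = w[i + 1] if i + 1 < len(w) else ""
            # Ecclesiastical C softens to CH before E, I (or AE, which
            # sounds as E); classical C is invariably hard.
            if style == "ecclesiastical" and (nxt in "ei" or w.startswith("ae", i + 1)):
                out.append("ch")
            else:
                out.append("k")
            i += 1
        elif w[i] == "t":
            nxt = w[i + 1] if i + 1 < len(w) else ""
            # Ecclesiastical T softens toward TS before E, I.
            if style == "ecclesiastical" and nxt in "ei":
                out.append("ts")
            else:
                out.append("t")
            i += 1
        else:
            out.append(w[i])
            i += 1
    return "".join(out)

print(pronounce("caelum", "classical"))       # kylum
print(pronounce("caelum", "ecclesiastical"))  # chaylum
print(pronounce("cadentia", "ecclesiastical"))  # kadentsia
```

Running it on “caelum” reproduces exactly the “kylum”/“chaylum” split described above.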

Consider the implications in the following fragment from the beginning of Bk. II of Vergil’s Aeneid. It’s written in the ancient meter reserved for epic and didactic poetry, dactylic hexameter. The meter is quantitative, and the lines are unrhymed.

A few lines into the book, one encounters the remarkable lines:

…Et iam nox umida caelo
praecipitat, suadentque cadentia sidera somnos.

…And now dewy night from heaven
descends, and the sinking stars bid us to sleep.

Vergil achieves something remarkable here (and he knows it’s good: he quotes himself later in Bk. IV):

In a classical Latin pronunciation, the vowels are dark and muted, and the two words in the middle of the line, which contain an internal rhyme (suadentque cadentia), are followed by two words alliterating in S. The effect is lulling and hypnotic.

In an Italianate ecclesiastical pronunciation, all that is ruined. Praecipitat becomes something like praychippytot; cadentia becomes more like cadensia, which piles up one S-sound too many at the end of the line, so that the whole thing begins to hiss like a basket full of vipers.

Lest I seem to be exhibiting a bias in favor of the classical pronunciation, let me hasten to point out that one can achieve a similar train-wreck by reading mediaeval verse in the wrong way, too. Take the following example from the beginning of the monumental De contemptu mundi by Bernard of Cluny. It’s written in something also called dactylic hexameter, but it’s of a completely different sort. It’s qualitative (stress accent, rather than duration); its lines are rhymed internally (but always at word-end) at the end of the second and the fourth dactyls, and couplets are end-rhymed.

Hora novissima, tempora pessima sunt — vigilemus.
Ecce minaciter imminet arbiter ille supremus.
Imminet imminet ut mala terminet, aequa coronet,
Recta remuneret, anxia liberet, aethera donet.
Auferat aspera duraque pondera mentis onustae,
Sobria muniat, improba puniat, utraque iuste.
Ille piissimus, ille gravissimus ecce venit rex.
Surgat homo reus; instat homo deus, a patre iudex.
Surgite, currite simplice tramite, quique potestis;
Rex venit ocius ipseque conscius, ipseque testis.

To read this in a classical voice is to crush its rhymes: ocius and conscius in the last line there are meant to rhyme, but won’t, unless one follows the ecclesiastical norms for how to handle C; if one keeps a classical diphthong pronunciation of AE, the end-rhymes between onustae and iuste are obliterated. The driving, almost manic energy of Bernard’s apocalyptic lines drains away.

My point here isn’t to champion one form of pronunciation over another. It’s to recommend that a maturing Latinist — and I would include anyone who has done three or four years of Latin with Scholars Online — should learn to adapt his or her reading to the text at hand. If nothing else, it’s an act of humility before the material, and that is probably a good thing in and of itself.

The King of Quotations

Wednesday, April 7th, 2010

This summer I’m planning on teaching the second of my three Summer Shakespeare courses. Accordingly, I’ve been putting together a web site for it, and have been thinking about Shakespeare a good deal in general; in addition, our son recently played Hamlet in Minneapolis, and we were fortunate enough to get to see him in it.

Shakespeare is probably the single most revered author in the English language — the gold standard. He wrote brilliant plays containing intriguing situations, characters, and philosophical problems, of course, but most particularly he was a master of language. His words can still move, transform, and amaze us. Probably for this latter reason in particular, his plays have been mined, ever since they were written, as sources of pithy quotations, aphorisms, and the like.

The mere fact that Shakespeare wrote something, however, does not inoculate it against banal misuse. This is the more likely, inasmuch as Shakespeare is more often revered than read, and more often read than understood. Accordingly, one commonly encounters extracts taken out of context, presented as the wisdom of the ages condensed into lapidary iambic pentameters.

One particularly amusing — and common — example is from Hamlet I.iii: the final advice of the old courtier Polonius to his son Laertes, who is about to take ship to return to Paris:

This above all: to thine own self be true,
And it must follow, as the night the day,
Thou canst not then be false to any man.

This is prettily turned, with a certain rhetorical flourish on “true” and “false”. Being true to yourself sounds like a good idea, surely, especially if it leads to being true to others as well. The sentence is so often quoted out of context that one might well encounter it a dozen different places without ever realizing that it was apparently meant to characterize the speaker as either a buffoon or a villain or both.

There are a number of ways to understand Polonius, but none of them should particularly commend him to us. The simplest is as a mere self-important windbag. Surely he is at least that. He is constantly spouting florid phrases without any sense of proportion or context; he’s so wrapped up in his own rhetoric that he’s lost track of content. He distracts himself and derails his own discourse.

He can also plausibly be seen (as David Ball argues in his brilliant little Backwards and Forwards) as a much cagier fellow — perhaps relying on an affected persona of the buffoon to mask the fact that he is a cold behind-the-scenes manipulator, who manages, in the cynical pursuit of his own advancement, to destroy his son, his daughter, and himself, and to steer most of the events of the Danish court into pure disaster.

In either case, however, these three lines come as the consummation of a lengthy run of advice, almost all of which tells Laertes how to behave — but has nothing much to do with being true to himself in any sense of the term, either ancient or modern. Much of it has to do with creating an impression upon others — an impression that at least Polonius believes is a false one. Immediately after seeing his son off, he sends out a spy (Reynaldo, in a scene cut from almost every commercial production of the play) to keep an eye on him and find out — largely by slandering Laertes — what he may learn of him that is to his discredit. Polonius is not a trusting man — even so far as his own son (who deserves better) is concerned — and certainly he’s not one to be trusted himself. He is not true to himself; he is not true to anyone else.

That of course does not necessarily vitiate the advice: it probably remains a good idea to be true to oneself and so to others. But taking that as the sum and substance of the matter — a golden apothegm because The Bard said it — is largely to miss its point.

My point here is that any author — whether Shakespeare or anyone else — needs to be read with an eye to the whole. Any author can be the source of extracts that seem lofty and laudable, or reprehensible, without those things having any real relation to what the author was saying. Reading well, and reading charitably, involves pushing past those limitations and engaging with the text itself in context. The reader is the richer for it.