Archive for the ‘Technology’ Category

STEMs and Roots

Tuesday, February 2nd, 2016

Everywhere we see extravagant public handwringing about education. Something is not working. The economy seems to be the symptom that garners the most attention, and there are people across the political spectrum who want to fix it directly; but most seem to agree that education is at least an important piece of the solution. We must produce competitive workers for the twenty-first century, proclaim the banners and headlines; if we do not, the United States will become a third-world nation. We need to get education on the fast track — education that is edgy, aggressive, and technologically savvy. Whatever else it is, it must be up to date, it must be fast, and it must be modern. It must not be what we have been doing.

I’m a Latin teacher. If I were a standup comedian, that would be considered a punch line. In addition to Latin, I teach literature — much of it hundreds of years old. I ask students, improbably, to see it for what it is in itself, not just for what use they can make of it. What’s the point of that? one might ask. Things need to be made relevant to them, not the other way around, don’t they?

Being a Latin teacher, however (among other things), I have gone for a number of years now to the Summer Institute of the American Classical League, made up largely of Latin teachers across the country. One might expect them to be stubbornly resistant to these concerns — or perhaps blandly oblivious. That’s far from the case. Every year, in between the discussions of Latin and Greek literature and history, there are far more devoted to pedagogy: how to make Latin relevant to the needs of the twenty-first century, how to advance the goals of STEM education using classical languages, and how to utilize the available technology in the latest and greatest ways. What that technology does or does not do is of some interest, but the most important thing for many there is that it be new and catchy and up to date. Only that way can we hope to engage our ever-so-modern students.

The accrediting body that reviewed our curricular offerings at Scholars Online supplies a torrent of exhortation about preparing our students for twenty-first century jobs by providing them with the latest skills. It’s obvious enough that the skills they have now aren’t doing the trick, since so many people are out of work, and so many of those who are employed seem to be in dead-end positions. The way out of our social and cultural morass lies, we are told, in a focus on the STEM subjects: Science, Technology, Engineering, and Math. Providing students with job skills is the main business of education. They need to be made employable. They need to be able to become wealthy, because that’s how our society understands, recognizes, and rewards worth. We pay lip service, but little else, to other standards of value.

The Sarah D. Barder Fellowship organization to which I also belong is a branch of the Johns Hopkins University Center for Talented Youth. It’s devoted to gifted and highly gifted education. At their annual conference they continue to push for skills, chiefly in the scientific and technical areas, to make our students competitive in the emergent job market. The highly gifted ought to be highly employable and hence earn high incomes. That’s what it means, isn’t it?

The politicians of both parties have contrived to disagree about almost everything, but they seem to agree about this. In January of 2014, President Barack Obama commented, “…I promise you, folks can make a lot more, potentially, with skilled manufacturing or the trades than they might with an art history degree. Now, nothing wrong with an art history degree — I love art history. So I don’t want to get a bunch of emails from everybody. I’m just saying you can make a really good living and have a great career without getting a four-year college education as long as you get the skills and the training that you need.”

From the other side of the aisle, Florida Governor Rick Scott said, “If I’m going to take money from a citizen to put into education then I’m going to take that money to create jobs. So I want that money to go to degrees where people can get jobs in this state. Is it a vital interest of the state to have more anthropologists? I don’t think so.”

They’re both, of course, right. The problem isn’t that they have come up with the wrong answer. It isn’t even that they’re asking the wrong question. It’s that they’re asking only one of several relevant questions. They have drawn entirely correct conclusions from their premises. A well-trained plumber with a twelfth-grade education (or less) can make more money than I ever will as a Ph.D. That has been obvious for some time now. If I needed any reminding, the last time we required a plumber’s service, the point was amply reinforced: the two of them walked away in a day with about what I make in a month. It’s true, too, that a supply of anthropologists is not, on the face of things, serving the “compelling interests” of the state of Florida (or any other state, probably). In all fairness, President Obama said that he wasn’t talking about the value of art history as such, but merely its value in the job market. All the same, that he was dealing with the job market as the chief index of an education’s value is symptomatic of our culture’s expectations about education and its understanding of what it’s for.

The politicians haven’t created the problem; but they have bought, and are now helping to articulate further, the prevalent assessment of what ends are worth pursuing, and, by sheer repetition and emphasis, crowding the others out. I’m not at all against STEM subjects, nor am I against technologically competent workers. I use and enjoy technology. I am not intimidated by it. I teach online. I’ve been using the Internet for twenty-odd years. I buy a fantastic range of products online. I programmed the chat software I use to teach Latin and Greek, using PHP, JavaScript, and mySQL. I’m a registered Apple Developer. I think every literate person should know not only some Latin and Greek, but also some algebra and geometry. I even think, when going through Thucydides’ description of how the Plataeans determined the height of the wall the Thebans had built around their city, “This would be so much easier if they just applied a little trigonometry.” Everyone should know how to program a computer. Those are all good things, and help us understand the world we’re living in, whether we use them for work or not.

But they are not all that we need to know. So before you quietly determine that what I’m offering is just irrelevant, allow me to bring some news from the past. If that sounds contradictory, bear in mind that it’s really the only kind of news there is. All we know about anything at all, we know from the past, whether recent or distant. Everything in the paper or on the radio news is already in the past. Every idea we have has been formulated based on already-accumulated evidence and already-completed ratiocination. We may think we are looking at the future, but we aren’t: we’re at most observing the trends of the recent past and hypothesizing about what the future will be like. What I have to say is news, not because it’s about late-breaking happenings, but because it seems not to be widely known. The unsettling truth is that if we understood the past better and more deeply, we might be less sanguine about trusting the apparent trends of a year or even a decade as predictors of the future. They do not define our course into the infinite future, or even necessarily the short term — be they about job creation, technical developments, or weather patterns. We are no more able to envision the global culture and economy of 2050 than the independent bookseller in 1980 could have predicted that a company named Amazon would put him out of business by 2015.

So here’s my news: if the United States becomes a third-world nation (a distinct possibility), it will not be because of a failure in our technology, or even in our technological education. It will be because, in our headlong pursuit of what glitters, we have forgotten how to differentiate value from price: we have forgotten how to be a free people. Citizenship — not merely in terms of law and government, but the whole spectrum of activities involved in evaluating and making decisions about what kind of people to be, collectively and individually — is not a STEM subject. Our ability to articulate and grasp values, and to make reasoned and well-informed decisions at the polls, in the workplace, and in our families, cannot be transmitted by a simple, repeatable process. Nor can achievement in citizenship be assessed simply, or, in the short term, accurately at all. The successes and failures of the polity as a whole, and of the citizens individually, will remain for the next generation to identify and evaluate — if we have left them tools equal to the task. Our human achievement cannot be measured by lines of code, by units of product off the assembly line, or by GNP. Our competence in the business of being human cannot be certified like competence in Java or Oracle (or, for that matter, plumbing). Even a success does not necessarily hold out much prospect of employment or material advantage, because that was never what it was about in the first place. It offers only the elusive hope that we will have spent our stock of days with meaning — measured not by our net worth when we die, but by what we have contributed when we’re alive. The questions we encounter in this arena are not new ones, but rather old ones. If we lose sight of them, however, we will have left every child behind, for technocracy can offer nothing to redirect our attention to what matters.

Is learning this material of compelling interest to the state? That depends on what you think the state is. The state as a bureaucratic organism is capable of getting along just fine with drones that don’t ask any inconvenient questions. We’re already well on the way to achieving that kind of state. Noam Chomsky, ever a firebrand and not a man with whom I invariably agree, trenchantly pointed out, “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum — even encourage the more critical and dissident views. That gives people the sense that there’s free thinking going on, while all the time the presuppositions of the system are being reinforced by the limits put on the range of the debate.” He’s right. If we are to become unfree people, it will be because we gave our freedom away in exchange for material security or some other ephemeral reward — an illusion of safety and welfare, and those same jobs that President Obama and Governor Scott have tacitly accepted as the chief — or perhaps the only — real objects of our educational system. Whatever lies outside that narrow band of approved material is an object of ridicule.

If the state is the people who make it up, the question is subtly but massively different. Real education may not be in the compelling interest of the state qua state, but it is in the compelling interest of the people. It’s the unique and unfathomably complex amalgam that each person forges out of personal reflection, of coming to understand one’s place in the family, in the nation, and in the world. It is not primarily practical, and we should eschew it altogether, if our highest goal were merely to get along materially. The only reason to value it is the belief that there is some meaning to life beyond one’s bank balance and material comfort. I cannot prove that there is, and the vocabulary of the market has done its best to be rid of the idea. But I will cling to it while I live, because I think it’s what makes that life worthwhile.

Technical skills — job skills of any sort — are means, among others, to the well-lived life. They are even useful means in their place, and everyone should become as competent as possible. But as they are means, they are definitionally not ends in themselves. They can be mistakenly viewed as ends in themselves, and sold to the credulous as such, but the traffic is fraudulent, and it corrupts the good that is being conveyed. Wherever that sale is going on, it’s because the real ends are being quietly bought up by those with the power to keep them out of our view in their own interest.

Approximately 1900 years ago, Tacitus wrote of a sea change in another civilization that had happened not by cataclysm but through inattention to what really mattered. Describing the state of Rome at the end of the reign of Augustus, he wrote: “At home all was calm. The officials carried the old names; the younger men had been born after the victory of Actium; most even of the elder generation, during the civil wars; few indeed were left who had seen the Republic. It was thus an altered world, and of the old, unspoilt Roman character not a trace lingered.” It takes but a single generation to forget the work of ages.

But perhaps that’s an old story, and terribly out of date. I teach Latin, Greek, literature, and history, after all.

Computer Programming as a Liberal Art

Monday, September 3rd, 2012

One of the college majors most widely pursued these days is computer science. This is largely because it’s generally seen as a ticket into a difficult and parsimonious job market. Specific computer skills are demonstrably marketable: one need merely review the help wanted section of almost any newspaper to see just how particular those demands are.

As a field of study, in other words, its value is generally seen entirely in terms of employability. It’s about training, rather than about education. Just to be clear: by “education”, I mean something that has to do with forming a person as a whole, rather than just preparing him or her for a given job, which I generally refer to as “training”. If one wants to become somewhat Aristotelian and Dantean, it’s at least partly a distinction between essence and function. (That these two are inter-related is relevant, I think, to what follows.) One sign of the distinction, however, is that if things evolve sufficiently, one’s former training may become irrelevant, and one may need to be retrained for some other task or set of tasks. Education, on the other hand, is cumulative. Nothing is ever entirely lost or wasted; each thing we learn provides us with a new set of eyes, so to speak, with which to view the next thing. In a broad and somewhat simplistic reduction, training teaches you how to do, while education teaches you how to think.

One of the implications of that, I suppose, is that the distinction between education and training has largely to do with how one approaches it. What is training for one person may well be education for another. In fact, in the real world, probably these two things don’t actually appear unmixed. Life being what it is, and given that God has a sense of humor, what was training at one time may, on reflection, turn into something more like education. That’s all fine. Neither education nor training is a bad thing, and one needs both in the course of a well-balanced life. And though keeping the two distinct may be of considerable practical value, we must also acknowledge that the line is blurry. Whatever one takes in an educational mode will probably produce an educational effect, even if it’s something normally considered to be training. If this distinction seems a bit like C. S. Lewis’s distinction between “using” and “receiving”, articulated in his An Experiment in Criticism, that’s probably not accidental. Lewis’s argument there has gone a long way toward forming how I look at such things.

Having laid that groundwork, therefore, I’d like to talk a bit about computer programming as a liberal art. Anyone who knows me or knows much about me knows that I’m not really a programmer by profession, and that mathematics was not my strong suit in high school or college (though I’ve since come to make peace with it).

Programming is obviously not one of the original liberal arts. Then again, neither are most of the things we study under today’s “liberal arts” heading. The original liberal arts included seven: grammar, dialectic, and rhetoric — all of which were about cultivating precise expression (and which were effectively a kind of training for ancient legal processes), and arithmetic, geometry, music, and astronomy. Those last four were all mathematical disciplines: both music and astronomy bore virtually no relation to what is taught today under those rubrics. Music was not about pavanes or symphonies or improvisational jazz: it was about divisions of vibrating strings into equal sections, and the harmonies thereby generated. Astronomy was similarly not about celestial atmospheres or planetary gravitation, but about proportions and periodicity in the heavens, and the placement of planets on epicycles. Kepler managed to dispense with epicycles, which are now of chiefly historical interest.

In keeping with the spirit, if not the letter, of that original categorization, we’ve come to apply the term “liberal arts” today to almost any discipline that is pursued for its own sake — or at least not for the sake of any immediate material or financial advantage. Art, literature, drama, and music (of the pavane-symphony-jazz sort) are all considered liberal arts largely because they have no immediate practical application to the job of surviving in the world. That’s okay, as long as we know what we’re doing, and realize that it’s not quite the same thing.

While today’s economic life in the “information age” is largely driven by computers, and there are job openings for those with the right set of skills and certifications, I would suggest that computer programming does have a place in the education of a free and adaptable person in the modern world, irrespective of whether it has any direct or immediate job applicability.

I first encountered computer programming (in a practical sense) when I was in graduate school in classics. At the time (when we got our first computer, an Osborne I with 64K of memory and two drives with 92K capacity each), there was virtually nothing to do with classics that was going to be aided a great deal by computers or programming, other than using the very basic word processor to produce papers. That was indeed useful — but had nothing to do with programming from my own perspective. Still, I found Microsoft BASIC and some of the other tools inviting and intriguing — eventually moving on to Forth, Pascal, C, and even some 8080 Assembler — because they allowed one to envision new things to do, and project ways of doing them.

Programming — originally recreational as it might have been — taught me a number of things that I have come to use at various levels in my own personal and professional life. Even more importantly, though, it has taught me things that are fundamental about the nature of thought and the way I can go about doing anything at all.

Douglas Adams, the author of the Hitchhiker’s Guide books, probably caught its most essential truth in Dirk Gently’s Holistic Detective Agency:

“…if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve learned something about it yourself.”

I might add that not only have you yourself learned something about it, but you have, in the process, learned something about yourself.

Adams also wrote, “I am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand.” This is, of course, one of the drolleries about programming. The hidden benefit is that, once perfected, that tool, whatever it was, allows one to save ten seconds every time it is run. If one judges things and their needs rightly, one might be able to save ten seconds a few hundred thousand or even a few million times. At that point, the time spent on programming the tool will not merely save time, but may make possible things that simply could never have been done otherwise.
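To make the arithmetic behind that drollery explicit (the figures below are hypothetical illustrations of mine, not Adams’s): a full day at the keyboard is roughly 28,800 seconds, so a tool that saves ten seconds per use pays for itself after fewer than 3,000 uses, and a million uses returns on the order of 116 days of continuous time.

```javascript
// A back-of-the-envelope reckoning with hypothetical figures (mine, not Adams's).
const secondsSavedPerRun = 10;            // the "good ten seconds"
const secondsSpentProgramming = 8 * 3600; // one full day at the keyboard

const breakEvenRuns = Math.ceil(secondsSpentProgramming / secondsSavedPerRun);
const daysSavedPerMillionRuns = (1000000 * secondsSavedPerRun) / 86400;

console.log(`Break even after ${breakEvenRuns} runs`);                              // 2880
console.log(`A million runs returns ~${Math.round(daysSavedPerMillionRuns)} days`); // ~116
```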

One occasionally hears it said that a good programmer is a lazy programmer. That’s not strictly true — but the fact is that a really effective programmer is one who would rather do something once, and then have it take over the job of repeating things. A good programmer will use one set of tools to create other tools — and those will increase his or her effective range not two or three times, but often a thousandfold or more. Related to this is the curious phenomenon that a really good programmer is probably worth a few hundred merely adequate ones, in terms of productivity. The market realities haven’t yet caught up with this fact — and it may be that they never will — but it’s an interesting phenomenon.

Not only does programming require one to break things down into very tiny granular steps, but it also encourages one to come up with the simplest way of expressing those things. Economy of expression comes close to the liberal arts of rhetoric and dialectic, in its own way. Something expressed elegantly has a certain intrinsic beauty, even. Non-programmers are often nonplussed when they hear programmers talking about another programmer’s style or the beauty of his or her code — but the phenomenon is as real as the elegance of a Ciceronian period.

Pursuit of elegance and economy in programming also invites us to try looking at things from the other side of the process. When programming an early version of the game of Life for the Osborne, I discovered that simply inverting a certain algorithm (having each live cell increment the neighbor count of all its adjacent spaces, rather than having each space count its live neighbors) achieved an eight-to-tenfold improvement in performance. Once one has done this kind of thing a few times, one starts to look for such opportunities. They are not all in a programming context.
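For anyone curious about the shape of that inversion, here is a minimal sketch in JavaScript (not the original Osborne code, and the grid representation is my own assumption for illustration). Only the live cells do any counting work, which is where the savings comes from on a sparsely populated board:

```javascript
// A minimal sketch of the inverted neighbor-count idea, assuming a plain
// 2-D array of 0s and 1s; this is not the original Osborne program.
function nextGeneration(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  const counts = Array.from({ length: rows }, () => new Array(cols).fill(0));

  // Each LIVE cell pushes its influence outward, instead of every cell
  // polling its eight neighbors; dead cells contribute nothing.
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (!grid[r][c]) continue;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const nr = r + dr;
          const nc = c + dc;
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols) counts[nr][nc]++;
        }
      }
    }
  }

  // Conway's rules applied to the accumulated counts.
  return grid.map((row, r) =>
    row.map((alive, c) => (counts[r][c] === 3 || (alive && counts[r][c] === 2) ? 1 : 0))
  );
}
```

The gain is greatest when live cells are scarce, as they usually are once a Life board has run for a while.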

There are general truths that one can learn from engaging in a larger programming project, too. I’ve come reluctantly to realize over the years that the problem in coming up with a really good computer program is seldom an inability to execute what one envisions: it’s much more likely to be a problem of executing what one hasn’t adequately envisioned in the first place. Not knowing what winning looks like, in other words, makes the game much harder to play. Forming a really clear plan first is going to pay dividends all the way down the line. One can find a few thousand applications for that principle every day, both in the computing world and everywhere else. Rushing into the production of something is almost always a recipe for disaster, a fact explored by Frederick P. Brooks in his brilliantly insightful (and still relevant) 1975 book, The Mythical Man-Month, which documents his own blunders as the head of the IBM System 360 project, and the costly lessons he learned from the process.

One of the virtues of programming as a way of training the mind is that it provides an objective “hard” target. One cannot make merely suggestive remarks to a computer and expect them to be understood. A computer is, in some ways, an objective engine of pure logic, and it is relentless and completely unsympathetic. It will do precisely what it’s told to do — no more and no less. Barring actual mechanical failure, it will do it over and over again exactly the same way. One cannot browbeat or cajole a computer into changing its approach. There’s a practical lesson there, and probably a moral lesson too. People can be persuaded; reality just doesn’t work that way — which is probably just as well.

I am certainly not the first to have noted that computer programming can have this kind of function in educational terms. Brian Kernighan — someone well known to the community of Unix and C programmers over the years (he was part of the Bell Labs team that created C and Unix) — has argued that it’s precisely that in a New York Times article linked here. Donald Knuth, one of the magisterial figures of the first generation of programming, holds forth on its place as an art, too, here. In 2008, members of the faculties of Williams College and Pomona College (my own alma mater) collaborated on a similar statement available here. Another reflection on computer science and math in a pedagogical context is here. And of course Douglas Hofstadter in 1979 adumbrated some of the more important issues in his delightful and bizarre book, Gödel, Escher, Bach: An Eternal Golden Braid.

Is this all theory and general knowledge? Of course not. What one learns along the line here can be completely practical, too, even in a narrower sense. For me it paid off in ways I could never have envisioned when I was starting out.

When I was finishing my dissertation — an edition of the ninth-century Latin commentary of Claudius, Bishop of Turin, on the Gospel of Matthew — I realized that there was no practical way to produce a page format that would echo what normal classical and mediaeval text editions typically show on a page. Microsoft Word (which was what I was using at the time) supported footnotes — but typically these texts don’t use footnotes. Instead, the variations in manuscript readings are keyed not to footnote marks, but to the line numbers of the original text, and kept in a repository of textual variants at the bottom of the page (what is called in the trade an apparatus criticus). In addition, I wanted to have two further sets of notes at the bottom of the page, one giving the sources of the earlier church fathers that Claudius was quoting, and another giving specifically scriptural citations. I also wanted to mark in the margins where the foliation of the original manuscripts changed. Unsurprisingly, there’s really not a way to get Microsoft Word to do all that for you automatically. But with a bit of Pascal, I was able to write a page formatter that would take a compressed set of notes indicating all these things, and parcel them out to the right parts of the page, in a way that would be consistent with RTF and University Microfilms standards.
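To give a sense of the parcelling problem (this is a sketch of the idea only, not the original Pascal formatter, and it produces no RTF; the data shapes and page depth are invented for illustration), the heart of it is sorting notes keyed to line numbers onto the right page, and into the right register at the foot of that page:

```javascript
// A sketch of the parcelling step, with invented data shapes and page depth.
const LINES_PER_PAGE = 30;

// Each note: { line: <line number in the edited text>,
//              register: "apparatus" | "sources" | "scripture",
//              text: "..." }
function parcelNotes(notes) {
  const pages = new Map();
  for (const note of notes) {
    const page = Math.floor((note.line - 1) / LINES_PER_PAGE) + 1;
    if (!pages.has(page)) {
      pages.set(page, { apparatus: [], sources: [], scripture: [] });
    }
    // Each register keeps its notes keyed to line numbers, apparatus-style.
    pages.get(page)[note.register].push(`${note.line} ${note.text}`);
  }
  return pages; // page number -> the three registers for the foot of that page
}
```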

When, some years ago, we were setting Scholars Online up as an independent operation, I was able, using JavaScript, PHP, and MySQL, to write a chat program that would serve our needs. It’s done pretty well since. It’s robust enough that it hasn’t seriously failed; we now have thousands of chats recorded, supporting various languages, pictures, audio and video files, and so on. I didn’t set out to learn programming to accomplish something like this. It was just what needed to be done.

Recently I had to recast my Latin IV class to correspond to the new AP curriculum definition from the College Board. (While it is not, for several reasons, a certified AP course, I’m using the course definition, on the assumption that a majority of the students will want to take the AP exam.) Among the things I wanted to do was to provide a set of vocabulary quizzes to keep the students ahead of the curve, and reduce the amount of dictionary-thumping they’d have to do en route. Using Lee Butterman’s useful and elegant NoDictionaries site, I was able to get a complete list of the words required for the passages in question from Caesar and Vergil; using a spreadsheet, I was able to sort and re-order these lists so as to catch each word the first time it appeared, and eliminate the repetitions; using regular expressions with a “grep” utility in my programming editor (BBEdit for the Macintosh) I was able to take those lists and format them into GIFT format files for importation into the Moodle, where they will be, I trust, reasonably useful for my students. That took me less than a day for several thousand words — something I probably could not have done otherwise in anything approaching a reasonable amount of time. For none of those tasks did I have any training as such. But the ways of thinking I had learned by doing other programming tasks enabled me to do these here.
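By way of illustration, here is a sketch of that last transformation in JavaScript rather than grep patterns; the tab-separated input format and the question wording are assumptions of mine, but the GIFT short-answer syntax is what Moodle imports:

```javascript
// A sketch only: turn a tab-separated vocabulary list ("lemma<TAB>gloss",
// an assumed format) into Moodle GIFT short-answer items.
const wordList = `amo, amare, amavi, amatus\tto love
arma, armorum\tarms, weapons`;

// GIFT treats ~ = # { } : as special characters, so escape them in the data.
const escapeGift = (s) => s.replace(/([~=#{}:])/g, "\\$1");

const gift = wordList
  .split("\n")
  .map((line, i) => {
    const [lemma, gloss] = line.split("\t");
    return `::Vocab ${i + 1}:: What does "${escapeGift(lemma)}" mean? {=${escapeGift(gloss)}}`;
  })
  .join("\n\n");

console.log(gift); // save the output and import it into Moodle as GIFT
```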

Perhaps the real lesson here is that there is probably nothing — however mechanical it may seem to be — that cannot be in some senses used as a basis of education, and no education that cannot yield some practical fruit down the road a ways. That all seems consistent (to me) with the larger divine economy of things.

Autonomy of Means revisited: the Internet

Saturday, February 19th, 2011

Last May I wrote a piece for this blog entitled “Autonomy of Means and Education”. The choice of phrasing was drawn from Charles Williams’s “Bors to Elayne, on the King’s Coins”. I’ve recently had reason to revisit the question, from a different direction.

I’ve just finished reading Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains. Some may consider it ironic that I discovered this book at the recommendation of some friends via Facebook: it is an extended (and not particularly optimistic) meditation on how the Internet is “rewiring” our minds — making quantifiable and physically measurable changes in our brains — by the kinds of information it delivers, and the way it delivers it.

Carr’s main point is fairly straightforward, and very hard to refute from common experience: he contends that the rapid-fire interruption-machine that the Internet offers us tends to fragment our attention, perpetually redirect us to the superficial, and prevent us from achieving any of the continuous long-term concentration from which emerge real ideas, serious discourse, and, in the long view, civilization itself. Not only is it not conducive to such thinking in and of itself — it actually suppresses our capacity for such thinking even when we’re away from our computers. Carr doesn’t point fingers or lay particularly onerous burdens of blame at anyone’s door, though one is moved to wonder cui bono? — to whom is all this a benefit, and where is the money coming from? There is a curious, unquestioned positivist philosophy driving companies like Google that is not consistent, at least, with how I see myself in relation to my God and the other people in his world.

Carr supports his case with a dazzling array of synthetic arguments ranging from the philosophical to the neuropsychological. He makes a very convincing case for the plasticity of the human brain, even into adulthood — and for the notion that those capacities that get exercise tend to be enhanced through measurable growth and synaptic enhancement of specific areas of the brain. All this can happen in a remarkably short time (mere days or even hours). My own field is rather far removed from psychology, but what he says rings true with me — my ability to do almost any kind of mental activity really does improve with practice. Unused abilities, by the same token, can atrophy. That this happens is probably not very surprising to any of us; what is surprising is its extent and the objectivity with which it can be measured. I was intrigued to learn, for example, that one can identify particular developments characteristic of the brains of taxi-drivers, and that discernible physical differences distinguish the brains of readers of Italian, for example, from readers of English. We tend to think of language as largely convertible from one to another; it’s not necessarily so. Whether this has some other implications about why one ought to learn Latin or Greek is intriguing to me, but not something I’m going to chase down here.

Carr’s thesis, if it’s true, has serious consequences for us at Scholars Online. It has implications about who we are and how we do what we are doing. As a teacher who has found his calling trying to teach people to read carefully and thoughtfully, analytically and critically, with concentration and focus — via the Internet — I naturally feel torn. I like to believe that the format in which I’m pursuing that work is not itself militating against its success. It is at the very least a strong warning that we should examine how we work and why we do what we do the way we do it.

I do feel somewhat vindicated in the fact that we have never chosen to pursue each and every new technological gewgaw that came down the pike. Our own concern has always been for cautiously adopting appropriate technology. I still tend not to direct students to heavily linked hypertext documents (which, as Carr argues, provide vastly less benefit than they promise, with substantially lower retention than simple linear documents in prose); almost anything that requires the division or fragmentation of attention is an impediment to real learning. As I have said elsewhere in my discussions of the literature program, my main effort there has always been to teach students to read carefully and thoroughly — not just the mechanics of decoding text, but the skills of interpreting and understanding its meaning.

The book is not without a few technical flaws. Carr has either misread or misinterpreted some of the points in Paul Saenger’s Space Between Words: The Origins of Silent Reading. Many of his claims about Latin and the development of the manuscript are too facile, and some are simply incorrect. Saenger points out that in Classical Latin, word order makes relatively little syntactic difference. He’s using that distinction precisely. Carr apparently takes this to mean that, as a function of the way manuscripts were written and produced in late Antiquity and the early Middle Ages, there was less concern for discrete identification of word boundaries (likely to be true), and less concern for word order in a given text (completely preposterous). Yes, it’s true that Latin syntax does not rely as heavily as English does on word order; it’s not true that word order is without significance semantically. The fact that many of our survivals from ancient sources are poetic would clearly argue against this: if you rearrange the words in a line of Vergil, you will destroy the meter, if nothing else. Word order in poetry is essential for meter (something we can verify objectively); it’s also powerful poetically. Words echo each other only if they stand in a certain arrangement; a word left enjambed at the beginning of a new line can carry a potent poetical effect.

Of Horace, Friedrich Nietzsche said:

Bis heute habe ich an keinem Dichter dasselbe artistische Entzücken gehabt, das mir von Anfang an eine Horazische Ode gab. In gewissen Sprachen ist Das, was hier erreicht ist, nicht einmal zu wollen. Dies Mosaik von Worten, wo jedes Wort als Klang, als Ort, als Begriff, nach rechts und links und über das Ganze hin seine Kraft ausströmt, dies minimum in Umfang und Zahl der Zeichen, dies damit erzielte maximum in der Energie der Zeichen – das Alles ist römisch und, wenn man mir glauben will, vornehm par excellence.
(Götzen-Dämmerung, “Was ich den Alten verdanke”, 1)

To this day, I have had from no other poet the same artistic pleasure that one of Horace’s Odes gave me from the beginning. In some languages, what Horace accomplished here could not even be hoped for. This mosaic of words, where each word — [understood] as sound, as place, and as idea — exerts its influence to the right and left and over the whole, this economy in the extent and number of the signs, through which those signs receive their greatest power — that is all Roman and, to my way of thinking, supremely noble.
(Twilight of the Idols, “What I owe to the Ancients”, 1. Tr. my own.)

Nietzsche was a very strange philosopher (if that’s even the right term to describe him); I don’t hold with many of his ideas. But he was actually a pretty astute reader of Horace.

Cicero’s orations — not poetry — were similarly characterized by prose rhythms and semantic subtleties that could not possibly have been preserved were the scribes or copyists indifferent to word order. Whether we’re dealing with poetry or prose, word order is ultimately no less important in Latin than in English. It just has a different importance. Don’t let anyone tell you otherwise.

Carr also routinely refers to Socrates as an orator, which is certainly not how Socrates viewed himself. He correctly notes that Socrates eschewed writing, partly because (as is discussed in the Phaedrus, one of the weirder Platonic dialogues) the old Egyptian king in Socrates’ story claimed that it tended to weaken the memory. This is true, but it’s only one of Socrates’ reasons. He also disdained writing and oratory both because they were one-way forms of communication. What he valued (as can be seen elsewhere throughout the Platonic dialogues) is the give-and-take of two-way conversation: in the Greek, διαλέγεσθαι (dialegesthai) — the root of our own “dialogue” and “dialectic”. He believed that this exchange was uniquely capable of allowing people to dig out the truth.

In the Apology (which I’m now reading with some terrific students in Greek III), Socrates specifically and fairly extensively begs to be excused from having to talk like an orator. This is how the dialogue begins:

How you, men of Athens, have been affected by my accusers, I do not know; but I, for my part, almost forgot my own identity, so persuasively did they talk; and yet there is hardly a word of truth in what they have said. But I was most amazed by one of the many lies that they told—when they said that you must be on your guard not to be deceived by me, because I was a clever speaker. For I thought it the most shameless part of their conduct that they are not ashamed because they will immediately be convicted by me of falsehood by the evidence of fact, when I show myself to be not in the least a clever speaker, unless indeed they call him a clever speaker who speaks the truth; for if this is what they mean, I would agree that I am an orator—not after their fashion. Now they, as I say, have said little or nothing true; but you shall hear from me nothing but the truth. Not, however, men of Athens, speeches finely tricked out with words and phrases, as theirs are, nor carefully arranged, but you will hear things said at random with the words that happen to occur to me. For I trust that what I say is just; and let none of you expect anything else. For surely it would not be fitting for one of my age to come before you like a youngster making up speeches. And, men of Athens, I urgently beg and beseech you if you hear me making my defence with the same words with which I have been accustomed to speak both in the market place at the bankers tables, where many of you have heard me, and elsewhere, not to be surprised or to make a disturbance on this account. For the fact is that this is the first time I have come before the court, although I am seventy years old; I am therefore an utter foreigner to the manner of speech here. Hence, just as you would, of course, if I were really a foreigner, pardon me if I spoke in that dialect and that manner in which I had been brought up, so now I make this request of you, a fair one, as it seems to me, that you disregard the manner of my speech—for perhaps it might be worse and perhaps better—and observe and pay attention merely to this, whether what I say is just or not; for that is the virtue of a judge, and an orator’s virtue is to speak the truth.
(Plat. Apol., 17a-18a, tr. Harold North Fowler).

One of the things that struck me while I was reading the latter stretches of this book was the subject I raised last May: when a tool — any tool — becomes autonomous, we’re heading for trouble with it. We pour much of who and what we are into our tools, and the making of tools is apparently very much a part of our nature as human beings. We are homo faber — man the maker — as much as we are homo sapiens. That is, as I take it, a good thing. With our tools we have been able to do many things that are worth doing, and that could not have been done otherwise. But we must always hold our tools accountable to our higher purposes. The mere fact that one can do something with a given tool does not mean that it’s a good thing. They say the man with a hammer sees every problem as a nail. That adage still holds good. We can be empowered by our tools, but every one comes at a cost — a cost to us in terms of who we are and how we work, and what ends our work ultimately serves. There is some power in choosing not to use certain tools on certain occasions.