Words and Things

Humanity can be characterized as a tool-using and tool-creating species. The tools we create or borrow can obviously give us enormous power, for good or ill: they will, however, always do so at a cost. We sometimes mark that cost ahead of time and choose to pay it; at other times, we don’t fully appreciate what the tools have taken from us until we look back on how they have changed us.

Thinkers have noted this at different times about different things. Emerson remarks, “The civilized man has built a coach, but has lost the use of his feet.” Many of our other forms of transportation have enabled us to go places nobody could have gone in Emerson’s day, but it is equally true that we scarcely think of walking even to places we could easily reach on foot. Jared Diamond has argued that the Agricultural Revolution was in a sense a great mistake on the part of humanity. I don’t agree, but that’s a different matter.

This has been noted in fiction as well. Tolkien’s Sauron poured so much of himself into the One Ring that, when it was destroyed, not only did he lose the power it had conferred, but he was himself lessened or destroyed as well. The same was true of Fëanor and the Silmarils.

The whole might be framed up once again as an instance of the autonomy of means, but that’s not where I propose to go here. I’m interested in focusing on one tool in particular. Arguably the first and greatest tool mankind has learned to use is language. Language enables us to cooperate with our contemporaries; it also allows us to preserve our voices and our ideas beyond the bounds of our lifetimes. One of the reasons I continue to be fascinated by ancient languages is that, even at so great a distance in time and space, we can recover something real and meaningful about the lives of those long gone. There is nothing like it, and nothing that will substitute for it on our human journey. All the same, even our capacity for articulate speech, which many of us see as a vital gift of God, comes with responsibilities, limitations, and hidden costs.

As speakers, we have the sense — which may not be entirely true — that unless we can express or define something in words, it doesn’t really exist, or its existence is suspect. Whether that’s true or not, most of us have some experience of not knowing how to say what we feel or think at a given juncture. We’re constantly reminded of the limitations of our language. The cost there is merely our silence — frustrating, but probably not, in most situations, pernicious.

Language can deceive us in the opposite direction. We can believe that the fact that we can give something a name or a description ipso facto confirms its existence. This belief is sometimes called “reification” (a nineteenth-century Latinate formation meaning something like “thingification”), and it is arguably a fallacy of reasoning. The mere ability to conjure a name does not bring its referent into being. “Square circle” is something we can say, but there is not and never can be such a thing. We can talk about unicorns, but we have at least no evidence of their existence (though I suspect they are not intrinsically impossible, as the square circle is). Language helps us not only to communicate our thoughts but to think those thoughts in the first place; if we are not careful, though, we can rely upon it too much, and construct our edifices of thought on quicksand.

This can show up in social theory and argumentation. People talk about a great many things, and take the very talking as verification that what they are talking about is beyond question. Political parties treat the conjuration of the name as all the evidence they need for the reality. To the Marxist mind, ongoing class warfare is not just a phrase: it is seen as a reality of the human historical process. It’s very difficult work to dismantle such thinking.

One can make things up just by naming them. I could go on for hours — convincingly — about my sister, and most people who don’t know me won’t have any idea that there has never been anyone who qualified for that title. That’s the simple case. 

But assume goodwill, and that I’m not actually trying to deceive. We come up with names all the time to describe the hypotheticals we use to explain the phenomena that we actually see, and then come to rely on and believe in those constructs almost as fervently as the actual evidence. In science, people once talked about the luminiferous ether. Building from their categorical names and descriptions, they made any number of valid and intelligent inferences about it — valid in terms of the reasoning, but failing to account for the fact that it has never existed. Late-nineteenth-century physicists calculated how rigid the ether would have to be for light to propagate through it as quickly as it did. The calculations were good. The assumptions were not. The brontosaurus was my favorite dinosaur when I was a little boy. Alas, it now appears never to have existed — it’s a name given to a composite made up of parts of different creatures. The fossils, and the bones that produced them, were real: the construct paleontologists used to account for them is not.

In science, of course, there are rules and pragmatic methods to help us distinguish what exists from what does not. In other fields, the game is trickier. In linguistics, for the better part of a generation, Noam Chomsky’s transformational model of grammar held the field against all comers, at least within the academy. Its orthodoxies were almost impossible to challenge within the field (I know: I tried in an undergraduate linguistics course at Pomona College, and was regarded as something just shy of a lunatic). Chomsky claimed that language emerged, through a series of “transformations”, from a pre-verbal matrix of ideas framed as “deep structure” in the mind. It is no longer clear that deep structures exist as such at all, that there is any line of demarcation between the word and the idea, that the influence runs all in one direction, or that transformations actually exist. For all of Chomsky’s many insights into language, this is one claim that is no longer widely accepted. Arguably, yes — there must be some level of our consciousness at which ideas interact and form without relying entirely on a verbal medium, but the whole picture is fairly obviously a lot more nuanced and ambiguous. If nothing else, I know from my own experience that the words I find for my ideas influence the further formation of those ideas. It’s a cycle more than a one-way flow.

To claim the name of something as ipso facto proof of its existence, therefore, is often appealing, but it remains fallacious. Something may exist or it may not, and if it does, attaching a name to it makes it easier to refer to it; but the appearance of the name doesn’t make it exist. 

Whether one takes it as figurative or as the report of a specific event in a literal Eden, the Adamic naming function appears to be our human legacy. It’s not a simple one. It comes, moreover, with an obligation to exercise it honestly and to question it relentlessly. It behooves us to question the existence of anything for which we don’t have direct and incontrovertible evidence apart from the name. There are many things I have seen presented as established facts, on which we build or propose social policies and programs, but which I think may well be questionable at their core. Sometimes they are offered by those on the left, sometimes by those on the right. Skepticism is well-advised in either case. This is not to say that any of them doesn’t exist — but merely that existence requires more by way of demonstration than the conjuration of a name. Some of them are concepts dearly held by one or another party, and even questioning their legitimacy will cause a firestorm of recrimination on social media, and in many other places.

Which such concepts occur to you?

2 comments

  1. Certainly people are dangerous fools — and Eden-exiters, and Adam-despisers, and God-deriders — for confusing words and things. But why do people confuse words for things *more* than they confuse other signs for things? (By ‘sign’ I just mean ‘anything that can stand for something else’.)

    I think the reason people hypostasize words more than other signs is (partly) that words *are* ‘thingier’ than other signs. By ‘words are thingier’ I mean that words have a higher ‘implicit import quotient’ than other signs. By ‘implicit import quotient’ (which I hereby Name IQQ, thereby transmuting it into a Thing) I mean a measure of the ‘black-boxiness’ of the tool used. By ‘black-boxiness’ I mean the ratio of ‘stuff imported by using the tool’ to ‘stuff mastered by the tool-user’. For instance: in most cases, stdlib in C has a lower ‘implicit import quotient’ than PyTorch (a popular machine-learning library) does in Python, because (in most cases) the user of the former understands what stdlib is doing better than the user of the latter understands what PyTorch is doing. And I think (as a matter of n=1 empirical observation) people do in fact hypostasize PyTorch more than they hypostasize stdlib. That is, someone is (far) more likely to say something like ‘I used PyTorch to do x’ than ‘I used stdlib to do x’.

    (Of course many kinds of machine learning are entirely dependent on high ‘black-boxiness’ — as is perhaps natural selection in biological systems — because there is sufficient pressure for the effective output to outpace the theoretical armature. And I do think that the substitution of neural nets as ‘general function derivers’ for full-bore AGI is of a piece with the name/thing confusion that your post addresses.)

    Words, I think, are more like PyTorch, as non-word signs (like Rayleigh-scattered blue, or screams of pain, or traffic lights) are more like stdlib.

    This can be observed by noting degrees of ‘implicit import quotient’. When I say ‘banana’, an awful lot of thoughts explode into my reader/auditor’s mind. But note that the ‘implicit import quotient’ is quite high for both sender and receiver. I know only a minuscule fraction of the effect of my ‘banana’ utterance on the reader/auditor, and I know that this is the case. My reader/auditor is not predictably sure of the effect of ‘banana’ on their own mind, and also knows that I know very little of the effect of ‘banana’ on their mind. So it feels to both of us that ‘banana’ is less a medium between mind1 and mind2 and more of a ‘thing’ with its own (channel-terminus-independent) ‘essence’ — that is, less/more by comparison with (say) ‘k=mv^2/2’ (a non-word sign).

    So (I think) people (idiotically) hypostasize words because words want to be things more than other signs do. This is not so much ‘autonomy of the means’ as ‘momentum of the means’ (the same thing I think you’re capturing with your observation that your words also carry your thoughts).

  2. I think I track most of what you’re saying here. I’ve never wrestled with or used at all the PyTorch library, though I have indeed used stdlib with C (back some time ago when I was still using C at all).

    That being said — I’m not sure I’d be quite as hard on people for making this systematic mistake of reification (hypostasis, if you like) from word to implicit thing. I do think it’s wrong, and potentially disastrously so, but idiotic? I’m not sure. It’s part of the seductive slipperiness of word-use at all to equate the manipulation of the symbols with the manipulation of the ideas, and in turn the manipulation of ideas with the manipulation of the referents of those ideas. It’s one of those things that requires ongoing vigilance and one of the greatest (as I’m inclined to think) of human virtues, humility.
