Spurious fallacies linguists make: A response to Dr. Wallace

posted by Kris (with a “k”)

Preliminary remarks: The following is a response to a post Dr. Wallace wrote last week concerning three fallacies linguists commit. I am only responding to the first fallacy, the one about word meaning. For ease of following along, I encourage you to first read his post. (I’ve also included a screenshot at the end of this post capturing the relevant material from his post that I’ll be commenting on.)

The theory of word meaning (or lexical semantics) is a cumbersome realm to navigate. Explaining how we construct meaning and how we communicate is no small task. Even though it has only been a formal field of study since the 1960s, we’ve made great strides toward better understanding how meaning works.

When it comes to word meaning, a specific subfield of linguistics known as Cognitive semantics has made especially great progress. Nonetheless, the field is nuanced, and hypotheses are constantly being tested and refined.

Every so often a consensus is reached that can be summed up in a very general way. For example: words have no meanings; meanings have words. The heart of this saying is true, but if it is taken at face value and pressed, holes will surely be found. That’s because it’s not bullet-proof, nor is it meant to be. It is, rather, a friendly simplification of a more complex reality.

Another such saying has recently been discussed by Dr. Wallace: a word has no meaning apart from context. But if I’m being honest, several key points in his discussion strike me as off the mark. After careful reflection, I’ve boiled my reservations down to two.


1) He treats this saying as linguistic dogma.

What I mean by that is simple: Dr. Wallace seems to regard the adage as bullet-proof, as if linguists intend it to be understood that way. But as mentioned above, these pithy little sayings are helpful oversimplifications of complex truths. To treat them as anything else is to misunderstand the field from which they’ve sprouted. Consequently, Dr. Wallace places a load on this saying that it was never meant to bear and, in the process, misses the greater truth behind the maxim.

Although he makes specific claims throughout his post about what “linguists” say or believe, he never specifies which camp or individual(s) he is referring to. So I’m honestly not sure whom his charges are leveled against, especially since I don’t see linguists saying the things he says they say (and my expertise lies in lexical semantics, so I’ve read quite a few of them). [Update: Wallace has recently cited several scholars in the comments section on his blog, but it still feels like pulling teeth. The scholars are for the most part Greek scholars, not “linguists” as he calls them in his post (Darrell Bock?), and tend to be from decades past, hardly reflective of current opinion. Also, see the lively FB discussion, which I only discovered this morning (thanks, Chris!).] With that said, if I were given the opportunity to speak on behalf of the linguists to whom he is referring (at least the ones I’ve read), I might re-phrase the adage he references as follows:

a word has no meaning [within a context] apart from context


a word has no meaning [in any useful sense] apart from context

Shoot as many bullets as you want at this; you’ll have a hard time finding holes. This line of thinking is more congruent with how modern (Cognitive) linguists deal with semantics. Not surprisingly (and no offense to Dr. Wallace, who is a grammarian and text-critic, not a linguist), those who deal with word meaning do so with much more finesse than this adage suggests. In fact, the charges leveled against linguists with regard to semantics dissolve when one actually consults the literature; linguists (at least those in the Cognitive enterprise) are far from claiming that a word in isolation has no meaning outside of context. Rather, they claim that when a word is actually used within a context (and not just written on a chalkboard in isolation), its specific contribution is in large part determined by the surrounding context. In other words, their concern is not so much with the semantic content associated with a word outside of language use (which they acknowledge exists), but with the way word meaning operates in actual language use. Don’t believe me? See for yourself…[1]

“[…] my claim is not that words do not have stable semantic representations associated with them. I argue that they do, and refer to these as lexical concepts. Rather, my claim is that these lexical concepts provide access to encyclopedic knowledge—a semantic potential—which is constrained and determined by context. Thus, the semantic structure (lexical concept) that a word is conventionally associated with does not in fact equate with the word’s meaning. Word meaning, from this perspective, is always a function of a situated interpretation: the context in which any word is embedded and to which it contributes.”

–V. Evans, How Words Mean: Lexical Concepts, Cognitive Models and Meaning-Construction (Oxford, 2009) 23

I hope this makes it clear that linguists (at least not all of them) are not guilty of what Wallace is charging them with. Quite the contrary: they would probably agree with him! The adage, as interpreted by Dr. Wallace, is a fallacy, but not if it is understood as it’s meant to be heard.

On to the next point. If he has misunderstood the adage, then his extrapolation of its “logical conclusion” is necessarily flawed too. Let’s discuss why that’s the case.

2) His method of taking the adage to its logical conclusion is faulty.

In short, Dr. Wallace asserts that if a word means nothing apart from context, then we shouldn’t be able to know what any of the words in a sentence mean since each word’s semantic contribution is contingent upon the surrounding words. He gives an example substituting the semantic value of each word with an “X”.

Mary had a little lamb…

X X X X X…

On the face of it, this thought process makes sense. How can you identify a word’s meaning when its meaning is contingent upon other words whose meanings are similarly contingent? This is cyclical and wrong-headed, isn’t it?

In short, yes—it’s cyclical. But, no, this doesn’t mean it’s wrong.

The problem isn’t that this model of reasoning is cyclical; it’s that it isn’t cyclical enough. The way humans process meaning is more complicated than this. We cannot come to a sentence and conduct a static analysis of a word’s semantic contribution. In other words, we can’t pick one word out of the mix and say “X gets its meaning from Y and Z” on the assumption that Y and Z are stable. In reality, Y gets its meaning from X and Z at the same time that Z gets its meaning from X and Y.

Word meaning is a live event that happens before and during the time an utterance is made. Innumerable interdependent relationships transpire in milliseconds to help beget an informative utterance.

Before a word is ever mentioned, we carry around our own private dictionary of senses associated with a given form or expression. The words we use day in and day out are entrenched with various meaning potentials, some more than others, e.g., tip (gratuity, sharp point, piece of advice, end of something, baseball term) vs. lightbulb (thing, idea?). We can’t leave this living dictionary behind. We bring it with us into every conversation, and it is constantly being updated.
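If it helps, here is a toy sketch of my own (not anything from Wallace or Evans; the senses listed are illustrative, not exhaustive) of what such a “private dictionary” might look like: a mapping from word forms to sets of meaning potentials, constantly updated as we encounter new uses.

```python
# Toy model of a "private dictionary": word forms mapped to meaning potentials.
# The senses here are illustrative examples, not a real lexicon.
mental_lexicon = {
    "tip": {"gratuity", "sharp point", "piece of advice",
            "end of something", "baseball term"},
    "lightbulb": {"thing", "idea"},
}

# Some forms are entrenched with richer potential than others.
assert len(mental_lexicon["tip"]) > len(mental_lexicon["lightbulb"])

# The lexicon is living: a new encountered use gets added on the spot.
mental_lexicon["tip"].add("to knock over")  # as in "he tipped the vase"
```

The point of the sketch is only that the potentials exist prior to any particular utterance; which one gets activated is a separate, context-driven matter.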

Similarly, at the time a word is mentioned its semantic contribution to the communicative event is filtered through a number of non-lexical lenses (e.g., shared world knowledge, assumed shared world knowledge, gestures, etc.). In addition, there are obvious lexical constraints that one word might have on another (see footnote 2).

And so in a very real sense, we can’t say that a word ever means something outside of context, for context is ever-present—though it varies by degree.

She had a little heart.

Who is “she”? What event does “had” refer to? A past event in which she’s still alive, or one in which she’s dead? Is the reference to “heart” literal? Was she young, or was she full-grown with an abnormally small heart? Or is the expression “little heart” figurative, as in she had some courage remaining?

Even in this situation where there is arguably no context (nor the ability to answer any of the above questions), how is it that I am able to draft so many different scenarios?

Because there is context, at least some. It may not be the kind we often think about, but there is context nonetheless. For instance, for each word we can surmise various meaning potentials. And for the verbal construction, we can dream up multiple events that may be suggested by the temporal frame.[2]

It is apparent, then, that words do have meaning, but it is equally clear that the specifics of what they code are only accessible when a complete context is in view. When we hear something out of context, we use what we’ve got to try to sort out the ambiguity.

But like I said before, this is a live event. There’s no static piecemeal composition involved. Consequently, there can be no piecemeal decomposition. We can’t take the word “little” out of the sentence from above and make sense of it in light of the surrounding values, because just as these other values influence the semantic load of “little”, “little” similarly exerts a semantic constraint on the other forms.

So does a word have meaning apart from context? Yes: there are various potentials we can draw on. But at the same time, no, at least not in any practical sense. Meaning construction, then, involves a hyper-cyclical process in which one word feeds off the potential and activation of another word, a potential that it itself helps give rise to.
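To make the cyclicity concrete, here is another toy sketch of my own (the words, senses, and compatibility scores are all invented for illustration; real meaning construction is vastly richer). Each word’s active sense is re-selected in light of the senses currently active for the other words, and the loop repeats until the joint interpretation stabilizes:

```python
# Toy model of cyclical meaning construction: each word's activated sense
# depends on the senses currently activated for the other words, so we
# iterate until the joint construal stops changing.

potentials = {
    "little": ["small in size", "endearing"],
    "lamb":   ["young sheep", "meat dish"],
}

# Invented pairwise compatibility scores between senses (illustration only).
compat = {
    ("small in size", "young sheep"): 1,
    ("endearing", "young sheep"): 2,
    ("small in size", "meat dish"): 1,
    ("endearing", "meat dish"): 0,
}

def construe(potentials, compat, rounds=5):
    # Start from each word's first-listed (most entrenched) sense.
    active = {word: senses[0] for word, senses in potentials.items()}
    for _ in range(rounds):
        changed = False
        for word, senses in potentials.items():
            others = [s for w, s in active.items() if w != word]
            # Re-select this word's sense against the others' current senses.
            best = max(senses, key=lambda s: sum(
                compat.get((s, o), compat.get((o, s), 0)) for o in others))
            if best != active[word]:
                active[word] = best
                changed = True
        if not changed:  # the interpretation has stabilized
            break
    return active

print(construe(potentials, compat))
# → {'little': 'endearing', 'lamb': 'young sheep'}
```

Notice that “little” and “lamb” constrain each other simultaneously; neither is held fixed while the other is resolved, which is the sense in which the process is cyclical rather than piecemeal.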


Let’s close with a comment about Lego blocks.

Picture a white block, 2x4. You know the one: two dots wide, four dots long. We’ll say it’s one of the larger ones, not the flat ones (even though chances are you were already visualizing the first kind, since it’s the more prototypical). This block by itself is just a block. You could imagine a host of scenes in which it is more than just a block, but for now, while alone, it’s just a block with untapped potential. With a few more blocks, it could become the wall of a fort or a bedroom, or a piece of a car door or a rocket ship. But what would be required for all of this? More blocks working in concert with one another to activate any of these potentials.

I think word meaning is similar to this. In isolation, sure, there’s untapped potential, which is based on past experiences and uses. But the moment I start pairing words with other words, we begin to experience the magic of meaning. Some potentials are primed. Others are nixed. All the while a specific construal is crystallized in milliseconds.

[Screenshot of the relevant portion of Dr. Wallace’s post]

[1] Anyone interested in what a cutting-edge linguist has to say about word meaning should consult Vyvyan Evans, who has not only written a book on this very topic (How Words Mean: Lexical Concepts, Cognitive Models, and Meaning Construction, Oxford 2009) but also treats it at both a popular level (in his book The Language Myth: Why Language Is Not an Instinct, Cambridge 2014) and a scholarly one (e.g., “Lexical Concepts, Cognitive Models and Meaning Construction,” Cognitive Linguistics 17/4 (2006), 491–534), where you can find gems such as these:

  • [T]he ‘meaning’ associated with a word in any given utterance appears to be, in part, a function of the particular linguistic context in which it is embedded. Put another way, word ‘meaning’ is protean, its semantic contribution sensitive to and dependent on the context which it, in part, gives rise to… (2006:492)
  • The precise semantic contribution of any word is a function of the utterance context in which it is embedded, and, moreover, the sorts of (conceptual) knowledge these lexical entities provide access to, as I shall argue in detail. In other words, words don’t have ‘meanings’ in and of themselves. Rather meaning is a function of the utterance in which a word is embedded (2006:492)
  • What a word ‘means’, which is to say, which part of its encyclopedic knowledge potential is activated will always be a function of how it is being used in any given context (2006:493)

[2] Think about how much easier the sentence would be to interpret if the word “left” was inserted at the very end: She had a little heart left. Immediately, much of the fog is lifted.