A New Model for Mapping Meaning

Words do not have senses.

At least not in the sense we like to think they do.

Our understanding of the meaning of words is largely shaped by our interactions with dictionaries. A dictionary is a heuristic tool that helps a user learn the different meanings associated with certain expressions — the key word being “heuristic”.

Contrary to popular belief, the goal of lexicographers in creating a dictionary is not to lay down with pen and paper the conceptual semantic structure of a language's stock of words. Those in the guild of dictionary-making are painfully aware of the impossibility of that task. Instead, they posit artificial semantic demarcations (commonly referred to as dictionary senses) and freely admit that

[M]eanings and dictionary senses aren’t the same thing at all. Meanings exist in infinite numbers of discrete communicative events, while the senses in a dictionary represent lexicographers’ attempts to impose some order on this babel.
— Atkins and Rundell (2008:311)

This means that the semantic structure identified for a verb like "run" in a dictionary is more like a crutch than a real-world solution for representing how this verb can be used. But the benefit of a crutch is twofold: it recognizes that a problem exists and seeks to offer a temporary solution. We are entering a stage in lexicology, however, where this lexicographical crutch may be abandoned in the coming decade(s) for certain languages (or at least parts of them), like English and German. Let me explain.


Word-meaning is not distinct.

I alluded to a more robust understanding of word meaning above in the quote: "Meanings exist in infinite numbers of discrete communicative events…" (Atkins and Rundell 2008:311). In other words, meaning is not some static phenomenon that can be spliced into well-ordered categories. It is emergent, protean, and messy. Did I mention it was messy?

The implications of this understanding are beginning to be realized with the recent advances made by linguists who are applying corpus linguistic quantitative techniques to usage-based / cognitive semantic analysis. One of the heralds of this new and so-called “corpus-driven cognitive semantic” approach is Dylan Glynn, who openly admits that

“the assumption of discrete senses is something that is intuitively attractive. Indeed, like it is obvious that the world is flat, it would seem obvious that words have meanings and that we choose between those meanings in communication. Understood in these terms, senses are reified as discrete units. This naïve operationalisation may aid in language learning, dictionary writing and, typically, only comes amiss in inter-personal disputes. However, the evidence for discrete lexical senses is as naïvely sound as the horizon is evidence for the flatness of the world” (2014:119).

He goes on to explain that if we know the classical model of categorization — that of necessary-and-sufficient conditions — doesn't work at the conceptual level, then why would we assume it holds water at the word level in the form of distinct senses? Put differently, if we know the concept "bird" isn't made up of a fixed set of must-have attributes, why should we assume that the words we use to talk about birds encode distinct senses?

The answer is simple: we shouldn’t.

So if the relationship between different meanings of a word is fuzzy, just like concepts, then where does this leave us with talking about meaning at all? Why try to draw lines between certain usages if no lines exist?

Once again, the recent advances in corpus-driven cognitive semantics provide some promising ways forward.

Visualizing non-discrete lexical senses...

Instead of reducing the protean nature of lexical meaning to a static depiction of distinct senses, certain quantitative models allow the usages of a word to be visually depicted as two- or three-dimensional clusters of usage-features. These usage-features represent linguistic parameters of the word under scrutiny — be they phonological, morphological, collocational, semantic (including tense-aspect-mood), discourse-pragmatic, etc. These are then converted into data points through the researcher's quantitative method of choice. The final result is a full-bodied behavioral profile of the target lexeme.
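To make the idea concrete, here is a minimal sketch of a behavioral profile in miniature. The verb, the attestations, the ID tags, and the similarity threshold are all invented for illustration; real studies use hundreds of tags and proper clustering algorithms, but the logic is the same: tag each attestation with usage-features, measure how much the tag sets overlap, and let groups of usages emerge from the data rather than from a pre-assigned sense inventory.

```python
# Miniature behavioral profile with invented data. Each attestation of a
# hypothetical verb is tagged with ID tags (usage-features such as subject
# type, transitivity, broad event type).
from itertools import combinations

usages = {
    "u1": {"subj:human", "motion", "intransitive"},
    "u2": {"subj:human", "motion", "intransitive", "adv:fast"},
    "u3": {"subj:machine", "operate", "intransitive"},
    "u4": {"subj:machine", "operate", "obj:program"},
    "u5": {"subj:liquid", "motion", "intransitive"},
}

def jaccard(a, b):
    """Overlap of two tag sets: 1.0 = identical profiles, 0.0 = disjoint."""
    return len(a & b) / len(a | b)

# Pairwise similarities are the raw material for grouping usages.
sims = {(x, y): jaccard(usages[x], usages[y]) for x, y in combinations(usages, 2)}

# A crude single-link grouping at an arbitrary threshold: a usage joins the
# first existing group containing any sufficiently similar usage.
threshold = 0.4
clusters = []
for u in usages:
    for c in clusters:
        if any(sims.get((u, v), sims.get((v, u), 0)) >= threshold for v in c):
            c.add(u)
            break
    else:
        clusters.append({u})

print(clusters)  # the "motion" usages group apart from the "operate" usages
```

The point of the sketch is the direction of inference: the groups fall out of recurring configurations of features, and how many groups appear depends on the (researcher-chosen) threshold — there is no pre-given set of senses in the data.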

The first linguist to really make a dent in showcasing the utility of this approach was Stefan Gries (2006), who looked at the many uses of the verb "run", applying a behavioral profile approach to assess the prototype structure of this word's range of usages. He took 815 tokens of "run" and tagged each occurrence with 252 usage-features (ID tags), yielding roughly 205,000 data points (815 × 252 = 205,380), with which he was then able to test for significance and begin to postulate correlations between the formal distributional patterns and the semantic conceptual structure of the verb "run".

That was in 2006. Almost a decade later, Gries (2015:483) is able to affirm in a handbook article on polysemy that:

Across all three areas — C[ognitive] L[inguistics], corpus linguistics, and psycholinguistics — a consensus is emerging to assume a multidimensional semantic space in which usages or senses are located such that their spatial proximity reflects distributional and/or semantic similarity; cf., e.g., Gries (2010) and Taylor (2012) for cognitive/corpus linguistics and, Rodd et al. (2004: 89) for psycholinguistics. Thus, while integral to early C[ognitive] L[inguistics], the notion of distinct senses appears more of a descriptive device rather than a claim about psycho-linguistic reality. This conception does justice to the fact that the same word/sense — i.e., region of semantic space — can be accessed or traversed at different levels of resolution and from different angles/trajectories.

Gries continues to explain what he means with an example of how non-discrete lexical senses might be represented in three-dimensional semantic space.

A simple example is shown in Figure 1, which represents the same usages (as dots) in semantic space from three different angles. The left panel suggests there is one group of relatively coherent usages, maybe corresponding to one sense. However, the other two panels show the same usages from different angles (e.g., from different semantic/discourse contexts), and these panels give rise to two or four senses. That is, context facilitates sense recognition by imposing a particular view on, or trajectory through, stored exemplars, and the possibility of creativity is afforded by the speaker’s freedom to (i) approach the same region from different perspectives or (ii) see similarities between different point clouds in semantic space and exploit this linguistically by means of, say, analogy, or (iii) condense regions of space. (Gries 2015:482-83)
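Gries's figure can be imitated numerically. The sketch below uses invented coordinates: twelve data points sitting in four tight groups in a three-dimensional semantic space. "Viewed" along one dimension they collapse into a single apparent sense, projected onto two dimensions they look like two senses, and in the full space all four groups are visible — the same exemplars, different angles.

```python
# Toy version of Gries's point-cloud example, with invented coordinates:
# four tight groups of usages in a 3-D "semantic space".
centers = [(0, 0, 0), (0, 5, 0), (0, 0, 5), (0, 5, 5)]
offsets = [(-0.2, -0.2, -0.2), (0.0, 0.0, 0.0), (0.2, 0.2, 0.2)]
points = [tuple(c + o for c, o in zip(ctr, off)) for ctr in centers for off in offsets]

def count_groups(pts):
    """Count apparent clusters by snapping coordinates to a coarse grid."""
    return len({tuple(round(v) for v in p) for p in pts})

print(count_groups([(p[0],) for p in points]))       # x axis only → 1 apparent sense
print(count_groups([(p[0], p[1]) for p in points]))  # x-y plane   → 2 apparent senses
print(count_groups(points))                          # full 3-D    → 4 apparent senses
```

Nothing about the points themselves changes between the three counts; only the projection does — which is exactly the role Gries assigns to context in sense recognition.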


What's this mean for you and me?

In my own research, I’m still a) trying to figure out how to conduct some of these quantitative techniques and b) trying to figure out whether there’s any utility in applying them to ancient languages, like Biblical Hebrew. I’m sure that the limited corpus size will impose some significant limitations, but I would hate to walk away from these advances in Cognitive Semantics without trying to glean what I can from these more empirical methods of semantic inquiry.

Regardless of whether these methods can be partially or fully appropriated, the theoretical underpinnings reflected in their results can be enthusiastically adopted. Most significantly, this means that instead of trying to identify the distinct senses of a lexeme, we should look for established patterns of use, which can be operationalized as the “re-occurring configuration of features (or ID-tags)” (Glynn 2014:122). All in all, I like the way Glynn (2009:99) wraps up his essay in the co-edited volume New Directions in Cognitive Linguistics...

Cognitive Linguistics is a usage-based theory of language and one that assumes language is driven by our encyclopaedic knowledge of the world. In light of this, the kind of usage patterns that Quantitative Multifactorial methods identify offer important clues to the conceptual structures associated with linguistic forms. Although, presenting the results in terms that are typical of the cognitive research community still needs development, mapping the usage, and therefore meaning, of lexemes and constructions is precisely in keeping with the lexical semantic tradition developed by Lakoff (1987). The principal difference is that such quantitative results offer relative tendencies rather than ‘different meanings’. This, however, seeing the complex and varied nature of language, is arguably a more cognitively realistic approach to the description of the conceptual structure.
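Glynn's operationalization of a pattern of use as a "re-occurring configuration of features" has a very direct computational reading: once attestations are tagged, a pattern is simply a tag configuration that recurs. Here is a minimal sketch with invented tags and counts; the threshold of "more than once" is an illustrative choice, not part of Glynn's method.

```python
# Minimal sketch of "pattern of use" as a re-occurring configuration of
# ID tags. The attestations and tags are invented for illustration.
from collections import Counter

attestations = [
    {"subj:human", "motion"},
    {"subj:human", "motion"},
    {"subj:machine", "operate"},
    {"subj:human", "motion"},
    {"subj:liquid", "motion"},
    {"subj:machine", "operate"},
]

# Count how often each exact configuration of features recurs.
configs = Counter(frozenset(a) for a in attestations)

# Configurations attested more than once count here as established patterns
# of use; one-off configurations remain idiosyncratic usages, not "senses".
patterns = {tuple(sorted(cfg)): n for cfg, n in configs.items() if n > 1}
print(patterns)
```

The output is a set of relative tendencies with frequencies attached — which is precisely Glynn's contrast above between quantitative results and "different meanings".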

Bibliography

Atkins, B.T. & Rundell, M. (2008) The Oxford Guide to Practical Lexicography. Oxford: Oxford University Press.

Glynn, D. (2014) The Many Uses of run: Corpus-Based Methods and Socio-Cognitive Semantics. In D. Glynn & J. Robinson (eds.), Corpus Methods in Cognitive Semantics. Amsterdam: John Benjamins.

Glynn, D. (2009) Polysemy, Syntax, and Variation: A Usage-Based Method for Cognitive Semantics. In V. Evans & S. Pourcel (eds.), New Directions in Cognitive Linguistics (pp. 77-104). Amsterdam: John Benjamins.

Gries, S. (2015) Polysemy. In E. Dąbrowska & D.S. Divjak (eds.), Handbook of Cognitive Linguistics, 472-490. Berlin & Boston: Mouton De Gruyter.

Gries, S. (2006) Corpus-based methods and Cognitive Semantics: The many senses of to run. In S.Th. Gries & A. Stefanowitsch (eds.), Corpora in Cognitive Linguistics: Corpus-based approaches to syntax and lexis (pp.57–99). Berlin & New York: Mouton de Gruyter.