Identifying prototypes: What's your gut say?

In an article titled "Where does prototypicality come from?", Dirk Geeraerts (2007:176–77) identifies a number of factors that give rise to this phenomenon; and when it comes to identifying the prototypical usage of a word, he discusses how introspection can prove useful.

Before we can deal with the introspective evidence in favour of the prototypicality hypothesis, two preliminary questions have to be answered. In the first place, how trustworthy is the introspective methodology? The paradoxical fact of the matter is that it is exactly the unreliability of introspection that makes it interesting for our purposes. If introspection were able to yield a completely adequate picture of the facts of linguistic usage (which is doubtful), it would simply reduplicate the results reached in the previous paragraphs on the basis of a direct examination of linguistic usage. 
But given the presupposition that introspection yields only a partial insight into the semantic structure of the words that are investigated, we can also presuppose that it will be exactly the prototypical kinds of usage of those words, that reach the introspective consciousness of the language user. We can use the results of the introspective method as support for the prototypical hypothesis if we presuppose that prototypical kinds of usage (precisely because they are more salient than other applications) will more easily pass the threshold of conscious attention. Given this presupposition, the introspective judgements of native speakers may shed light on the question which kinds of usage are predominant within a certain concept. 
— Geeraerts

With ancient languages like Biblical Hebrew and Greek, this is of course irrelevant, as there are no native speakers. Such limitations in studying ancient languages should always be borne in mind; they provide further motivation to ground one's analysis in methods that are as empirical as possible. This means that replicability, cross-linguistic typological studies, and the like are the closest allies of the ancient-language inquirer.

Book Review: An Interpretive Lexicon of New Testament Greek

The following book review of An Interpretive Lexicon of New Testament Greek is divided into two major sections — (1) Areas of Praise As It Stands, and (2) Areas of Improvement for Future Editions — because although the review is weighted more heavily toward critical evaluation, I believe the volume has the potential to be a handy resource for students of the Greek New Testament for years to come as it increases in utility with future editions. It is the hope of this review that some of these areas might be taken into account. (For a more general overview of the Interpretive Lexicon, see the reviews by Exegetical Tools or Reading Acts.)

AREAS OF PRAISE AS IT STANDS

Synthesis

The volume immediately reminds me of Williams' Hebrew Syntax, which, especially in its earlier editions, was a skeletal outline of Hebrew grammar with heavy reference to other grammars. In fact, this was the selling point: not the descriptive content about a particular grammatical phenomenon but the ability to point you to more substantial works that treated the issue in question. In the same way, I see much of the value of the Interpretive Lexicon in its synthesis of existing resources, some standard (Wallace, Harris) and some less so (Fuller's list of logical relations and Piper's Arc project).

Size

Another notable benefit of the current edition is its size: smaller than the modern tablet and slightly larger than most (oversized) smartphones, the Interpretive Lexicon is easy to carry around with your physical GNT (assuming people still do that). Because of its size, I also expect most users will not feel intimidated about getting familiar with its contents and will find sifting through it for insight more inviting.

Subject Matter

When I first read about this little volume before it came out, I was excited to see one dedicated to guiding GNT readers through appreciating "the little words" I've spent so much time researching (at least in theoretical linguistics). But I was more excited about the angle from which the analysis was proposed to be conducted: discourse analysis. Although the Interpretive Lexicon never words it this way, it is essentially a handbook on cohesive ties. Of the two categories of cohesive ties, it is primarily concerned with relational ties (prepositions, adverbs, particles, conjunctions), though referential ties are also discussed (relative pronouns).

Aside from the general subject matter of the volume, the introduction to it is as much appreciated as it is needed. When a research angle is brought into a field that is new to it, a hold-your-hand guide becomes mandatory. And the broader discipline of discourse analysis is just that: a newer model that has been in the process of being adopted over the past few decades. It would have been much easier to opt out of such a guide, so the fact that one is provided — one comprising a healthy 20 percent of the volume — is commendable.

AREAS OF IMPROVEMENT FOR FUTURE EDITIONS

Layout / Structure

Each entry is organized at a macro level by the different grammatical functions the cohesive tie can serve (supplemented by attention to the case a particular function may occur with). Under this larger division, various usages are introduced by glosses; each gloss is then followed by a symbol that more specifically designates the target usage. And here lies the first critique: the entries should prioritize the relational symbols instead of the glosses.

The current structure is problematic for several reasons. Glosses are often mistakenly taken to communicate the usage/meaning of a form when in reality they merely offer a rough indication of how the source lexeme maps onto the target language. Thus, although glosses can be helpful heuristic indicators, giving them more spotlight in a lexicon may encourage a typical user to see more than what the lexicographer is trying to communicate. Aside from this tendency toward misunderstanding, leading with glosses is also unhelpful because a single gloss may have several different meanings.

An Interpretive Lexicon, p. 52

These issues could be dealt with if the different subsections of an entry began with a more specific, non-overlapping signal — like the relational symbols. Beginning the subsections in this way would frame whatever follows in terms of that particular usage, rather than leaving room for ambiguity. Moreover, this approach has the added bonus of reflecting the major focus of the Interpretive Lexicon: to detail the different coherence relations specified by particular cohesive ties.
 
Aside from these surface-level areas of improvement, the more significant critiques relate to the content and underlying linguistic theory of the lexicon.

Content & Theory

I have three primary critiques of the content and theory of the Interpretive Lexicon (each of which will be discussed in turn), though all of them can be traced back to a broader systemic issue, namely the lexicon's exclusive reliance on traditional lexicographical resources, which (in my mind) weakens the volume's assessment of cohesive ties.

  1. The Polysemy Fallacy
  2. The Omission of Key Concepts in Discourse Grammar and Analysis
  3. The Neglect of More Recent Contributions from Greek Discourse Scholars 

The Polysemy Fallacy

When looking at little function words, like those covered in the Interpretive Lexicon, linguists have (for a good number of decades) seen more meaning in them than is appropriate. Sandra (1998) called this overestimation the polysemy fallacy. 

To commit the polysemy fallacy is to exaggerate the number of distinct senses associated with a particular form vis-à-vis the mental representation of a native speaker.
— Tyler and Evans (2003:39–40)

This tendency is difficult to shake and is evident not only in the works of preeminent linguists, such as Lakoff, but also in the standard lexicons we consult for our Hebrew and Greek studies. It was no surprise, then, to find the polysemy fallacy committed in the Interpretive Lexicon when the different usages of prepositions, conjunctions, etc. are enumerated.

Interpretive Lexicon, p. 23

Let's take δέ as an example. The Interpretive Lexicon identifies 19 "logical relationships" (or coherence relations) that can be signaled by the lexemes considered. For δέ, the entry suggests it is able to signal 10 of these 19 relations. Such a spectrum of usages should give pause, especially when it is assigned to a form that is understood to often be untranslatable (unlike ἐν, for example, which is also assigned 10 relations). More will be said about δέ below, but for now it is enough to point out that the contribution of context is likely overlooked in such examples, and that δέ brings a different kind of constraint to the discourse. (I would raise similar concerns about the polysemy fallacy with other connectives such as γάρ, καί, μέν, and οὖν; as a side note, it is curious that νῦν is not treated and, likewise, that ὥσπερ lacks any relational symbol.)

Interpretive Lexicon, p. 33

Interpretive Lexicon, p. 34

The Omission of Key Concepts in Discourse Grammar and Analysis

In the introduction the authors are clear that the ins and outs of discourse analysis will not be discussed, but that one of the discipline's major concerns — that of "[a]ttempting to discern the logical relationships between propositions" (6) — will be a focal point for the Interpretive Lexicon.[1] With that said, I was surprised to find that several key concepts related to discourse analysis and grammar were not brought into the conversation, especially since they are so helpful in explaining the way connectives help to stitch together a discourse. Some of the concepts I have in mind include theme line (main line) and supportive (offline) material, development, segmentation, and discourse markers.

The tendency to commit the polysemy fallacy is often mitigated by an awareness of these parameters of discourse, which extend beyond the semantic properties (that may or may not be present) of a connective. They provide answers for how a connective is operating in the discourse without imputing the semantics of the immediate context into the cohesive tie, as though the tie brought those semantics to the related propositions (instead of the other way around). For example, with δέ there is little reason to see it as doing anything other than segmenting what follows as a distinct (or new) unit of information in the discourse, which entails that any other perceived logical relation or semantic constraint is really attributable to context — not something that δέ denotes, as the Interpretive Lexicon (and those it cites) claims.

Additionally, I wonder what the entry for γάρ might look like if the parameters of main line and offline were accounted for (perhaps the part-of-speech label "Conjunction" would rightfully be removed, as this connective operates at another level besides the grammar of the related propositions). I also wonder how the concepts of main line and offline, coupled with development, might have helped a user differentiate between the different functions of some of the (inferential) cohesive ties on pp. 34–37 (e.g., διά vs. διὰ τοῦτο vs. διό).

A related notion that is not accounted for is the fact that propositions are connected to one another at varying levels in a discourse. That is, a number of the cohesive ties covered in the Interpretive Lexicon operate at levels beyond the sentence (and some below it, like δέ), though the sentence is the only level I can see the authors address. This static evaluation necessarily entails that connectives which have the capacity to operate as discourse markers are not acknowledged in this respect (e.g., δέ or οὖν). As a result of this limited viewpoint of propositions (or of a discourse), the analyst is forced to think only in terms of how the connectives operate at the sentential level. This restriction perhaps encourages the polysemy fallacy by obscuring the higher (cognitive and discourse-pragmatic) roles a cohesive tie may play.

The Neglect of More Recent Contributions from Greek Discourse Scholars

An obvious corollary of the lexicon's reliance on strictly conventional resources (i.e., BDAG, Wallace, and Harris) is that more recent evaluations are not tapped. I am thinking primarily of the work of Stephen Levinsohn and Steve Runge, as both have produced resources that deal with a handful of cohesive ties in a more or less systematic and consistent fashion.

If the work of such scholars were included, it would no doubt prove to be problematic: Levinsohn and Runge often land at odds with how BDAG, Wallace, and Harris interpret certain “popular” Greek connectives. How are such dissonant voices then to be reflected in an interpretive lexicon for students whose primary concern is exegesis?

The difficulty of answering such a question does not constitute grounds for drawing only on models that are in harmony, or that have been well received over time. While the latter may certainly earn a resource its place in such a lexicon as secondary reference material, good standing over time does not invalidate new voices that challenge the status quo. And while, to be clear, the authors of the Interpretive Lexicon should not be held responsible as though they assume such a thing, the lexicon could certainly be strengthened (and likewise become a tool that advances scholarship) by assimilating these more recent insights into its evaluation of Greek cohesive ties.

As it stands, however, conventional theory is brought into play while more recent advances are left to ride the bench. I'm aware this is the most serious of my critiques, but I believe this decision is an unfortunate disservice to the users of the lexicon as well as to the broader field. After all, resources that push for practical application (which this lexicon both embraces and embodies) are the surest way for new insights to challenge and sharpen traditional understandings.

In its mildest form, a response to this area of improvement would be to give at least a head-nod in footnote format to those instances where the content of an entry is at odds with recent insights in Greek discourse grammar. For instance, regarding the conjunctive καί, a footnote explaining that it connects two items of equal syntactic status without specifying a semantic value for how the two units are related would be a more linguistically sensitive account than what is currently found in the footnote:

Because of the flexibility of καί, it can be used to signal a wide variety of logical relationships. The specific semantic (or logical) relationship signaled in each instance must be derived from the context.
— An Interpretive Lexicon, p. 56

I actually disagree wholeheartedly with the soundness of this footnote. In the first sentence, the authors seem to suggest that flexibility in usage entails polysemy, when in fact it can imply just the opposite (i.e., semantic bleaching). Similarly, in the first sentence the authors claim that καί signals (or "denotes", as they phrase it in the introduction) multiple logical relationships, when in fact καί is more like asyndeton in that it specifies nothing except that the two related items should be related somehow (whereas asyndeton leaves even that to be inferred). In the second sentence, the authors attempt to clarify that the extensive polysemy of καί is completely contingent upon the immediate context, when in fact you could remove καί and the coherence relation (or semantic/logical relationship) would still stand — because it is the context that makes the two coordinated units coherent; καί has nothing to do with achieving the specifics of the coherence relation.

Sorry for the extended digression into gritty details, but it is these extended descriptions in the footnotes that users will cling to as they try to make sense of the little words that stitch a discourse together. In the end — as a summary of this third critique — it only seems appropriate that a lexicon whose mission is to inform its users about Greek connectives from the perspective of discourse analysis (however covert that angle may be) would look to the work of discourse grammarians to contribute to this important conversation.

Technology

Aside from the previous point, one of the areas of improvement I would be most eager to see is the digitization of this volume. There are obvious benefits that come to mind immediately: hyperlinking to the resources pointed to (especially if those resources were also on a digitally mapped platform, e.g. Accordance or Logos); and, relatedly, the opportunity for every relational symbol to display a glossary definition on hover, which would make the resource much more user-friendly and remove the learning curve of having to keep in mind what NLR (no logical relationship), S-R (situation-response), or W-Ed (way-end) stand for.

More importantly for linguistic research, however, the ability to search and filter results would give the lexicon's users a significant advantage in applying its contents. For instance, the chance to oscillate between a semasiological and an onomasiological perspective would most certainly prove insightful; in other words, the ability to turn the lexicon (semasiology: from form to meaning) into a thesaurus (onomasiology: from meaning to form). More concretely, this would mean being able to choose a particular coherence relation (e.g. Cause) and then see all the cohesive ties in the lexicon that specify such a relation, as sketched below.
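
To make the idea concrete, here is a minimal Python sketch of how such an onomasiological "flip" could work in a digital edition. The data structure and the relation labels assigned to each form are hypothetical placeholders, not the lexicon's actual entries; the point is only that a form-to-relations mapping can be inverted into a relation-to-forms lookup.

# A minimal sketch (hypothetical data and relation labels, not the lexicon's
# actual entries) of flipping a semasiological lookup into an onomasiological one.

# Semasiological direction: cohesive tie -> coherence relations it can specify
lexicon = {
    "ἵνα": ["Purpose", "Result"],
    "ὥστε": ["Result", "Inference"],
    "γάρ": ["Ground/Reason"],
    "διό": ["Inference"],
}

# Onomasiological direction: coherence relation -> cohesive ties that specify it
def invert(lex):
    thesaurus = {}
    for form, relations in lex.items():
        for relation in relations:
            thesaurus.setdefault(relation, []).append(form)
    return thesaurus

thesaurus = invert(lexicon)
print(thesaurus["Result"])  # ['ἵνα', 'ὥστε'] -- every tie tagged with 'Result'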

CONCLUSION

There’s no doubt many students will be eager to hold this little lexicon in their hands as they scour the Greek New Testament. As it stands, I believe it provides a helpful survey and point of orientation for many of the coherence relations that can be signaled with many of the cohesive ties it covers. The areas of improvement I have raised are significant enough for me, however, to worry that its users will not be taken any closer to properly understanding some of the key connectives they encounter most, but instead be encouraged to think about them in conventional ways. But, as I have tried to frame my concerns, this is nothing future editions cannot amend. 


Thanks to Zondervan for sending me a review copy (which in no way influenced my assessment).


[1] Although the authors do not specify, this angle of discourse analysis is commonly referred to as rhetorical-structural analysis; see Noordman et al. (1999:138–140) or Taboada and Mann (2006) for an overview.

Discourse Matters: An Example with Word Order

PHOTO CREDIT: JESU MAFA

John 4:16 Λέγει αὐτῇ· Ὕπαγε φώνησον τὸν ἄνδρα σου καὶ ἐλθὲ ἐνθάδε. 17 ἀπεκρίθη ἡ γυνὴ καὶ εἶπεν αὐτῷ· Οὐκ ἔχω ἄνδρα. λέγει αὐτῇ ὁ Ἰησοῦς· Καλῶς εἶπας ὅτι Ἄνδρα οὐκ ἔχω· 18 πέντε γὰρ ἄνδρας ἔσχες, καὶ νῦν ὃν ἔχεις οὐκ ἔστιν σου ἀνήρ· τοῦτο ἀληθὲς εἴρηκας. (SBLGNT)
John 4:16 He said to her, “Go, call your husband and come here.” 17 The woman answered and said to him, “I do not have a husband.” Jesus said to her, “You have said rightly, ‘I do not have a husband,’ 18 for you have had five husbands, and the one whom you have now is not your husband; this you have said truthfully!” (LEB)

The conversation between Jesus and the Samaritan woman is a prime example of how two people can say the exact same thing, but mean something completely different. In verse 16, Jesus tells the woman to do something:

Call your husband and come here
— Jesus

But the woman responds:

I don’t have a husband
(Οὐκ ἔχω ἄνδρα)
— Samaritan Woman

In Jesus’ instruction, he discloses an assumption that the woman has a husband. But because this isn't true, the woman corrects him. 

Her assertion — that she has no husband — follows the standard word order of sentences in Greek (the verb is first). As such, there is nothing unusual about the way the information is structured. But we can't say the same for Jesus’ response:

You have said rightly, “I don’t have a husband”
(Καλῶς εἶπας ὅτι Ἄνδρα οὐκ ἔχω)
— Jesus

Now pay attention to the Greek, not the English. Notice that Jesus re-arranges the word order of what the Samaritan originally said. He mentions ἄνδρα (husband) before οὐκ ἔχω (I do not have), whereas the Samaritan had the order reversed. And recall that her arrangement reflects the standard (or unmarked) order of words; thus Jesus’ arrangement is considered marked (i.e., it's not the default setup). Because of this, even though Jesus repeats the exact same words as the Samaritan, he inevitably means something different.

While those who have heard the rest of the story know that she was being only partially truthful — she actually has had multiple husbands — when we read the story in English it's highly likely that we don't pick up on the fuller reality Jesus hints at in his rearrangement of the Samaritan woman's response. Instead, we tend to fill in the story after the explicit mention of five husbands in the following verse. But the point remains: in Greek, you don't have to wait until verse 18 to find out that something is up — that something is off in the Samaritan's claim.

By fronting ἄνδρα (husband) before the main verb Jesus draws more attention to this piece of information than would otherwise have been the case had it followed the verb. (In English we can accomplish a similar effect through stressed intonation.) The specific function achieved by the fronted information is determined by the context.

In this case, Jesus is able to draw more attention to a particular aspect of the Samaritan woman's claim for the sake of confirmation, namely that she does not have a (single) husband. Even though the Samaritan probably did not mean to imply (or leave open the fact) that she has had multiple husbands with the words Οὐκ ἔχω ἄνδρα (I do not have a husband), Jesus knows better and exploits the ambiguity in her response by reconfiguring the information structure. And yet, even if the extra attention on ἄνδρα (a husband) doesn't fully make sense at this point (to the Samaritan or the reader), it primes a situation where Jesus is able to affirm with equal force that it's not one but five husbands that she has had. For just as Jesus fronted ἄνδρα (husband) in verse 17, so he fronts πέντε ἄνδρας (five husbands) in verse 18. The attention drawn to the singular ἄνδρα (husband) thus functions as a foil for the plural mention in the next sentence. In a similar manner, we might make the same point in English with raised intonation: You don’t have A HUSBAND — you've got FIVE!

Though you probably knew before reading this post that it’s not what you say but how you say it that matters, now you should have a better understanding of how this principle can get fleshed out in Greek — even when the exact same words are repeated.

A New Model for Mapping Meaning

Words do not have senses.

At least in the sense we like to think they do.

Our understanding of the meaning of words is largely shaped by our interactions with dictionaries. A dictionary is a heuristic tool that helps a user learn the different meanings associated with certain expressions — the key word being “heuristic”.

Contrary to popular belief, the goal of lexicographers in creating a dictionary is not to lay down with pen and paper the conceptual semantic structure of a language's entire stock of words. Those in this guild of dictionary-making are painfully aware of the impossibility of that task. Instead, they posit artificial semantic demarcations (commonly referred to as dictionary senses) and freely admit that

[M]eanings and dictionary senses aren’t the same thing at all. Meanings exist in infinite numbers of discrete communicative events, while the senses in a dictionary represent lexicographers’ attempts to impose some order on this babel.
— Atkins and Rundell (2008:11)

This means that the semantic structure identified for a verb like "run" in a dictionary is more like a crutch than a real-world solution for representing how the verb can be used. But the benefit of a crutch is twofold: it recognizes that a problem exists and it offers a temporary solution. We are entering a stage in lexicology, however, where this lexicographical crutch may be abandoned in the coming decade(s) for certain languages (or at least parts of them), like English and German. Let me explain.


Word-meaning is not distinct.

I alluded to a more robust understanding of word meaning above in the quote: "Meanings exist in infinite numbers of discrete communicative events…" (Atkins and Rundell 2008:311). In other words, meaning is not some static phenomenon that can be spliced into well-ordered categories. It is emergent, protean, and messy. Did I mention it was messy?

The implications of this understanding are beginning to be realized with the recent advances made by linguists who are applying corpus linguistic quantitative techniques to usage-based / cognitive semantic analysis. One of the heralds of this new and so-called “corpus-driven cognitive semantic” approach is Dylan Glynn, who openly admits that

“the assumption of discrete senses is something that is intuitively attractive. Indeed, like it is obvious that the world is flat, it would seem obvious that words have meanings and that we choose between those meanings in communication. Understood in these terms, senses are reified as discrete units. This naïve operationalisation may aid in language learning, dictionary writing and, typically, only comes amiss in inter-personal disputes. However, the evidence for discrete lexical senses is as naïvely sound as the horizon is evidence for the flatness of the world” (2014:119).

He goes on to explain that if we know the classical model of categorization — that of necessary-and-sufficient conditions — doesn't work at the conceptual level, why would we assume it holds water at the word level, with distinct senses? Put differently, if we know the concept "bird" isn't made up of a fixed number of must-have attributes, why should we assume that the words we use to talk about "bird" encode distinct senses?

The answer is simple: we shouldn’t.

So if the relationship between different meanings of a word is fuzzy, just like concepts, then where does this leave us with talking about meaning at all? Why try to draw lines between certain usages if no lines exist?

Once again, the recent advances in corpus-driven cognitive semantics provide some promising ways forward.

Visualizing non-discrete lexical senses...

Instead of reducing the protean nature of lexical meaning to a static depiction of distinct senses, certain quantitative models allow the usages of a word to be visually depicted as two- or three-dimensional clusters of usage-features. These usage-features represent linguistic parameters of the word under scrutiny — be they phonological, morphological, collocational, semantic (including TAM), discourse-pragmatic, etc. — and are converted into data points through the researcher's quantitative method of choice. The final result is a full-bodied behavioral profile of the target lexeme.

The first linguist to really make a dent in showcasing the utility of this approach was Stefan Gries (2006), who looked at the many uses of the verb "run", applying a behavioral-profile approach to assess the prototype structure of the word's range of usages. He took 815 tokens of "run" and tagged each occurrence with 252 usage-features (ID tags), resulting in about 205,000 data points, with which he was then able to test for significance and begin to postulate about the correlations between the formal distribution patterns and the semantic conceptual structures of the verb "run".
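
For readers who want a feel for what a behavioral profile looks like in practice, here is a toy Python sketch of the workflow just described. The tokens, ID tags, and feature values are invented for illustration (they are not Gries's data), and the clustering step simply stands in for whatever quantitative method a researcher might actually choose.

# A toy sketch of the behavioral-profile workflow (invented data, not Gries's):
# tag each token with usage-features, turn the tags into numeric profiles, and
# cluster the profiles so that behaviorally similar usages group together.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical tagged tokens of "run": (token id, usage-feature tags)
tokens = [
    ("run_001", {"frame": "fast-motion", "subject_animate": 1, "transitive": 0}),
    ("run_002", {"frame": "fast-motion", "subject_animate": 1, "transitive": 0}),
    ("run_003", {"frame": "manage",      "subject_animate": 1, "transitive": 1}),
    ("run_004", {"frame": "operate",     "subject_animate": 0, "transitive": 1}),
]

# One-hot encode the categorical "frame" tag and keep the binary tags as-is
frames = sorted({tags["frame"] for _, tags in tokens})
def profile(tags):
    return [tags["subject_animate"], tags["transitive"]] + [
        int(tags["frame"] == f) for f in frames
    ]

matrix = np.array([profile(tags) for _, tags in tokens])

# Hierarchical clustering of the profiles; tokens that land in the same cluster
# are usages with similar behavioral profiles
clusters = fcluster(linkage(pdist(matrix), method="ward"), t=2, criterion="maxclust")
print(dict(zip((tid for tid, _ in tokens), clusters)))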

That was in 2006. Almost a decade later, Gries (2015:483) is able to affirm in a handbook article on polysemy that:

Across all three areas — C[ognitive] L[inguistics], corpus linguistics, and psycholinguistics — a consensus is emerging to assume a multidimensional semantic space in which usages or senses are located such that their spatial proximity reflects distributional and/or semantic similarity; cf., e.g., Gries (2010) and Taylor (2012) for cognitive/corpus linguistics and, Rodd et al. (2004: 89) for psycholinguistics. Thus, while integral to early C[ognitive] L[inguistics], the notion of distinct senses appears more of a descriptive device rather than a claim about psycho-linguistic reality. This conception does justice to the fact that the same word/sense — i.e., region of semantic space — can be accessed or traversed at different levels of resolution and from different angles/trajectories.

Gries continues to explain what he means with an example of how non-discrete lexical senses might be represented in three-dimensional semantic space.

A simple example is shown in Figure 1, which represents the same usages (as dots) in semantic space from three different angles. The left panel suggests there is one group of relatively coherent usages, maybe corresponding to one sense. However, the other two panels show the same usages from different angles (e.g., from different semantic/discourse contexts), and these panels give rise to two or four senses. That is, context facilitates sense recognition by imposing a particular view on, or trajectory through, stored exemplars, and the possibility of creativity is afforded by the speaker's freedom to (i) approach the same region from different perspectives or (ii) see similarities between different point clouds in semantic space and exploit this linguistically by means of, say, analogy, or (iii) condense regions of space. (Gries 2015:482–83)
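
The "different angles" image can be simulated in a few lines of code. The following sketch uses made-up points rather than any real semantic-space data: the same three-dimensional cloud of usage-points is projected onto different pairs of dimensions, and a crude clustering heuristic reports how many groupings each view appears to contain (four from one angle, two from the others).

# A made-up illustration of Gries's point: one and the same cloud of usage-points
# in a 3-D "semantic space" suggests different numbers of sense-like groupings
# depending on which two dimensions (i.e., which angle) you view it from.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Four small clouds that separate along dimensions 0 and 2 but overlap along 1
centers = np.array([[0, 0, 0], [0, 0, 4], [4, 0, 0], [4, 0, 4]])
points = np.vstack([c + rng.normal(scale=0.3, size=(25, 3)) for c in centers])

def apparent_groupings(dims):
    """Cluster a 2-D projection and estimate how many groups look distinct."""
    z = linkage(pdist(points[:, dims]), method="ward")
    # crude heuristic: cut the dendrogram where merge distances jump the most
    jumps = np.diff(z[:, 2])
    return len(z) - int(np.argmax(jumps))

print("dims 0 & 2:", apparent_groupings([0, 2]))  # ~4 groupings ('four senses')
print("dims 0 & 1:", apparent_groupings([0, 1]))  # ~2 groupings ('two senses')
print("dims 1 & 2:", apparent_groupings([1, 2]))  # ~2 groupings ('two senses')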

 

What's this mean for me and you?

In my own research, I'm still (a) trying to figure out how to conduct some of these quantitative techniques and (b) trying to figure out whether there is any utility in applying them to ancient languages, like Biblical Hebrew. I'm sure the limited corpus size will impose some significant limitations, but I would hate to walk away from these advances in Cognitive Semantics without trying to glean what I can from these more empirical methods of semantic inquiry.

Regardless of whether these methods can be partially or fully appropriated, the theoretical underpinnings signified by their results can be enthusiastically adopted. Most significantly, this means that instead of trying to identify the distinct senses of a lexeme we should look for established patterns of use, which can be operationalized as the "re-occurring configuration of features (or ID-tags)" (Glynn 2014:122). All in all, I like the way Glynn (2009:99) wraps up his essay in the co-edited volume New Directions in Cognitive Linguistics...

Cognitive Linguistics is a usage-based theory of language and one that assumes language is driven by our encyclopaedic knowledge of the world. In light of this, the kind of usage patterns that Quantitative Multifactorial methods identify offer important clues to the conceptual structures associated with linguistic forms. Although, presenting the results in terms that are typical of the cognitive research community still needs development, mapping the usage, and therefore meaning, of lexemes and constructions is precisely in keeping with the lexical semantic tradition developed by Lakoff (1987). The principal difference is that such quantitative results offer relative tendencies rather than ‘different meanings’. This, however, seeing the complex and varied nature of language, is arguably a more cognitively realistic approach to the description of the conceptual structure.

Bibliography                               

Atkins, B.T. & Rundell, M. (2008) The Oxford Guide to Practical Lexicography. Oxford: Oxford University Press.

Glynn, D. (2014) The Many Uses of run: Corpus-Based Methods and Socio-Cognitive Semantics. In D. Glynn & J. Robinson (eds.), Corpus Methods in Cognitive Semantics. Amsterdam: John Benjamins.

Glynn, D. (2009) Polysemy, Syntax, and Variation: A Usage-Based Method for Cognitive Semantics. In V. Evans & S. Pourcel (eds.), New Directions in Cognitive Linguistics (pp. 77-104). Amsterdam: John Benjamins.        

Gries, S. (2015) Polysemy. In E. Dąbrowska & D.S. Divjak (eds.), Handbook of Cognitive Linguistics (pp. 472–490). Berlin & Boston: Mouton de Gruyter.

Gries, S. (2006) Corpus-based methods and Cognitive Semantics: The many senses of to run. In S.Th. Gries & A. Stefanowitsch (eds.), Corpora in Cognitive Linguistics: Corpus-based approaches to syntax and lexis (pp. 57–99). Berlin & New York: Mouton de Gruyter.

Christo van der Merwe + Alex Andrason = Research you must read


Two people just put their minds together and produced something you're going to want to read: Alex Andrason and Christo van der Merwe. I've written with and worked alongside Alex on other topics, so I know firsthand the value he brings to the table.

Assuming you care about linguistics and Biblical Hebrew, Christo needs no introduction; but I will say, from my experience with him as my MA supervisor, that collaborative research with him is an energizing experience: he constantly positions himself as someone who can learn from you, though the reverse is often more the case.

So what happens when these two work together? Now you get a chance to see.

Benefits of a Principled Analysis of BH Prepositions

I recently received the final proof of an article accepted by Journal for Semitics. You can find a copy here. If you're interested in semantics, prepositions, methodology, or Biblical Hebrew, chances are you'll enjoy reading it.

Lyle, K. (2015) "Benefits of a principled analysis of Biblical Hebrew prepositions", Journal for Semitics 24/2, 403–426.

With the publication of this article, I've now had 3 articles published and given one SBL presentation for the year 2015. Now before you think I'm bragging, you should know something about the nature of getting published in a peer-reviewed journal—the process from submission to final proof can take a long time.

What I'm trying to say is that it's not that I was particularly productive in 2015 so much as that some mental gardening that began in 2013 finally blossomed into published pieces. I'm learning this timeframe isn't that uncommon, but I'll admit it took some getting used to—especially when some of the delays were simply out of my hands.

My first article submission and acceptance, with Hebrew Studies, spoiled me. It still took about 6 months, but there were no hiccups or extended delays. These last three were something of a different story. And with this most recent article, it's a story I'm looking forward to sharing in a future post—not because of any fault of JSem but because this article went through the wringer.

Three rejections, and a final acceptance.

Yes—I submitted it 4 times (granted, with increasingly significant revisions). Some might be embarrassed by these odds but my hope is that in sharing the story future scholars preparing their first articles could learn from my experiences and have a more successful go at it the first (or second) time around.

As a final note, you should know that 3 of the 4 papers I wrote were co-authored. As I read more and more in the field of Cognitive Linguistics, I'm struck by how many articles are multi-authored. And having done more of this lately, I can see why. It's so much more fun writing with a colleague (or more) and tackling a research question together! Not only is it more engaging, but the intellectual rigor doubles.

I admit I don't read standard biblical studies journals that often, but from what I've seen in the past, co-authored articles are a rarity while single-authored articles are the norm. I would love to see this change. Maybe it has.

Anyways, I've heard somewhere that a cord of a couple strands is stronger or something...

The Grammaticalization of בלי: Part 1 and 2 (finally published)


For the many thousands who are interested, both Part 1 and Part 2 of an article I co-authored with Alex Andrason have been published by JNSL and are available for download. In this two-parter we take a look at the Biblical Hebrew lexeme בלי and explore its fascinating grammatical evolution. Although the lexeme appears only 59 times in the Hebrew Bible, we get glimpses of multiple stages of its grammatical life: it begins as a noun and ends as a verbal negator, with preposition, semi-conjunction, conjunction, and negative affix in between. It really is remarkable that such a full story of grammaticalization can be told from such a sparse number of occurrences. But the data doesn't lie (and we try not to). ;)

A final comment concerns the label we use to describe this vast array of potential uses of בלי. Instead of referring to this potential as polysemy, we use the term heterosemy. Don't worry, I hadn't heard the term either before Alex introduced me to it when we started writing these articles. While polysemy is traditionally used to describe a lexeme that has multiple (more semantic) meanings, heterosemy is used to describe a lexeme that has multiple (more grammatical) functions. So, for instance, "google" is heterosemous in that it can be used as a noun ("Google owns everything") or as a verb ("Just google it"), while "sharp" is polysemous in that it can mean keen or pointy. To be clear, most lexemes can't be discussed from only one of these two angles (thank you, grammaticalization); many have both a semantic (polysemy) and a functional (heterosemy) potential. In addition to meaning different things, "sharp" can be used as both an adjective and an adverb. In our article we chose to focus on heterosemy, since this perspective was much more interesting: בלי unashamedly boasts a full spectrum of the grammaticalization process.

If you have any questions about the article, don't hesitate to ask in the comments section below. Enjoy!(?)