The plain meaning rule is out of favor with contracts academia. There is so little to say about it, nothing to theorize, and even less to test students on. Plain meaning? It’s such an unintelligent concept. Scriptures, poems, literature, love letters—they all have subtle meanings that can be imagined and read between the lines. Why not contracts?
Luckily, California rescued the contract world from that slight. Its courts rejected the plain meaning rule! We too now have a job to do: speculate about the meaning of contracts. California and Foucault told us that there is no such thing as a plain meaning of words, and so the meaning of the contract must be teased out not merely from the text but also from the context of the agreement. The so-called contextualist interpretation approach liberated our profession to develop surgical interpretation tools that advance various conceptions of what-the-parties-must-have-truly-intended. Precontractual conversations, relational norms, the parties’ interests and expectations, whatnot. So much richness beneath the text. Aside from a few dissenters, the contracts professoriate either ignores or deplores the plain meaning rule.
There is only one problem with this state of the art: it is divorced from the state of the law. American courts, by and large, regularly apply the plain meaning rule to interpret contracts. Words, courts strangely think, have meanings, and when common sense is not sufficient to discover that meaning, dictionaries and treatises can help.
There is so little “theory” behind the textualist/plain-meaning approach to interpretation that until recently there was no scholarship on it. Against the dozens, maybe hundreds, of articles on how to do contextualist interpretation, and the even more numerous articles on normative interpretation (namely, how to interpret contracts in ways that advance various social values), there were no creative ideas on how to improve the methodology of textualism. A few years ago, Stephen Mouritsen began to fill that void with a terrific article that introduced a method of corpus linguistics—an empirical way to choose between several plain meanings. I wrote an earlier Jot on that article and I loyally teach it to my 1Ls. They love it.
Now comes a major new contribution to the budding field of data-driven textualism. It asks not what the parties intended but rather what the words they chose typically mean. Yonathan Arbel and Dave Hoffman have teamed up to propose a method of “generative interpretation” of contracts, which statistically estimates the meaning of the text with the aid of artificial intelligence models.
For example, does the term “flood”—typically used in insurance contracts to describe an important exclusion—have a plain meaning that refers only to inundations that are naturally caused, or could it also cover human-caused ones? When courts want to answer this question (as they did, for example, after the Katrina floods), they use dictionaries and canons of textual interpretation. The AI algorithms Arbel & Hoffman use—large language models (LLMs)—tease out the meaning by looking at other texts that use this term. The algorithm is trained on vast bodies of existing text to learn complex statistical relationships between words and to discover the most common other words and meanings that relate to the contested phrase. It captures both semantic and syntactic relations to other words, along numerous dimensions. When assessing the meaning of a sentence, the LLM can also determine which word in the sentence sheds more light on its meaning. As a result, it offers a big improvement relative to corpus linguistics: the meaning of any word or phrase in a contract is not “stable” but rather changes based on the specific textual context in which it is embedded.
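To make the contextual point concrete, here is a minimal sketch of my own, not anything drawn from the article. It assumes only the freely available bert-base-uncased model (an encoder model rather than a generative LLM, but the contextual-embedding machinery is the same); the sentences and the helper function are illustrative:

```python
# A minimal sketch (mine, not the authors' code) of how a language model
# assigns context-dependent meaning to a word. Model name, sentences,
# and the helper function are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embedding_of(word: str, sentence: str) -> torch.Tensor:
    """Return the contextual vector the model assigns to `word` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, hidden_dim)
    position = inputs.input_ids[0].tolist().index(
        tokenizer.convert_tokens_to_ids(word))
    return hidden[position]

natural = embedding_of("flood", "the storm surge caused a flood across the parish")
manmade = embedding_of("flood", "the burst water main caused a flood in the basement")
policy = embedding_of("flood", "this policy excludes any loss caused by flood")

# Which ordinary usage is the policy's "flood" closer to?
cos = torch.nn.CosineSimilarity(dim=0)
print("policy vs. natural flood: ", cos(policy, natural).item())
print("policy vs. man-made flood:", cos(policy, manmade).item())
```

The toy comparison makes the paragraph’s point: the vector the model assigns to “flood” is not fixed but shifts with the surrounding words.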
Consider what this means for contract disputes. If a term like “flood” or “chicken” is disputed, the court could open a prompt window in a tool like ChatGPT, type in the contractual clause, and ask which of the contested meanings is more likely. The answer, sometimes given in probabilistic terms, will be instantaneous. No training is necessary, no software needs to be purchased, and the cost is negligible. If the contested meanings both appear in dictionaries, the court would have a method to rank them, and a quantitative measure of the likelihood of each. Moreover, terms that are otherwise deemed ambiguous, like the infamous “fair share of profits” in Varney v. Ditmars, could be deciphered by asking the model to rank the parties’ proposed meanings.
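What might such a query look like under the hood? Here is a hedged sketch, not the Arbel-Hoffman pipeline itself: it scores two candidate readings of a flood exclusion by the log-likelihood a small generative model assigns to each paraphrase, then converts the scores into a rough probabilistic ranking. GPT-2 appears only because it runs anywhere; the clause, the candidate readings, and the scoring heuristic are all my assumptions:

```python
# A hedged sketch of the "rank the contested meanings" workflow, not the
# authors' method. GPT-2, the clause, the readings, and the scoring
# heuristic are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `loss` is the mean negative log-likelihood per predicted token;
    # multiply back to approximate the total.
    return -out.loss.item() * (ids.shape[1] - 1)

clause = "This policy does not insure against loss caused by flood."
readings = {
    "natural causes only": clause + " Here, flood means an overflow of water from natural causes.",
    "any inundation": clause + " Here, flood means any inundation of water, natural or man-made.",
}

scores = torch.tensor([log_likelihood(text) for text in readings.values()])
probabilities = torch.softmax(scores, dim=0)  # crude, but yields a ranked answer
for label, p in zip(readings, probabilities):
    print(f"{label}: {p.item():.2f}")
```

A real deployment would use a far larger model and a less crude probability measure, but the workflow the paragraph describes is the same: paste the clause, propose the readings, and receive a ranked, quantified answer.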
The Arbel-Hoffman article is a proof of concept. With AI tools rapidly becoming available for popular use, the LLMs they rely on will keep improving, and using them to interpret contracts will become easy and convenient. This is why I will not bother commenting here on some of the practical issues they gloss over, perhaps too rapidly (e.g., can the method be manipulated by smart lawyers?). Before too long, I imagine, drafters of contracts will use AI tools to make sure in advance that the text they are writing has the meaning they intend.
Will courts follow? Will their opinions refer to AI models instead of, or in addition to, dictionaries, when interpreting text? I sure hope so, but I also recognize that the craft of law is more resistant to automation than many other skills. Law people comfortable with auto-pilots may not be ready for auto-judges. At the same time, textual contract interpretation is one of the more technical, apolitical, no-nonsense tasks within the judicial repertoire. If we want to begin a transformation toward artificial judicial intelligence somewhere, here is an area that seems less threatening to our quixotic notions of courthouse justice.






