Most law students are digital natives who have been using computers since grade school, while I, a baby boomer, remain an immigrant to the world of e-communication. Yet the old and new worlds may not be as different as they sometimes seem. Five years ago, publishers expected to replace hard copies with electronic casebooks, but it turns out that millennial students seem to learn best with a hybrid of electronic and hard copy materials that allows for interactive elements like online multiple-choice quizzes.
With exceptions like the Uniform Electronic Transactions Act, digital immigrants have left to the natives the task of figuring out how doctrine should treat computer-generated communications. If electronic communications enable transactions that have never occurred before in the hard copy world, lawyers, scholars and judges must figure out whether those transactions require new and special rules or fit within the old common law rules. Lauren Henry Scholz’s article Algorithmic Contracts, forthcoming in the Stanford Technology Law Review and available in draft form on SSRN, substantially contributes to this conversation by suggesting that old-fashioned agency principles can be repurposed to govern algorithmic contracts.
Scholz, a fellow at Yale’s Information Society Project, provides a taxonomy of algorithmic contracts, reviews their dangers, considers possible regulatory responses, and concludes that agency principles work best. Her core contention is that algorithms are not mere tools like calculators, nor the equivalent of form contracts, but instead quasi-animate actors that legal doctrine should treat like robots or human servants of the people and entities that put them into action.
Readers learn that proprietary algorithmic contracts generally determine price and other terms in high frequency securities trading, and increasingly facilitate dynamic pricing in consumer transactions such as the purchase of airline tickets. Much of this sounded familiar until I got to the section on Ethereum and smart contracts, which take automation to a new level. In this world of Bitcoin and other virtual methods of transacting, Scholz explains,
Blockchain technology, which can roughly be described as a decentralized database, enables “trustless” transactions: value exchanges over computer networks that can be verified, monitored, and enforced without centralized institutions. (P. 29.)
A “public ledger . . . records every transaction that has ever been made and will ever be made on the Bitcoin network” and distributes copies to users, who all agree to comply with the Bitcoin protocol. Apparently these agreements are “self-enforcing” via a “contract [that] is defined by the code and is also automatically being enforced by the code that defines it.” (P. 30.) Setting aside whether this Bitcoin protocol itself is a contract, Scholz’s analysis makes clear that contract theory and doctrine must develop tools to understand and regulate transactions that purport to be self-enforcing. (P. 30.)
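For readers who, like me, find the idea of a contract that enforces itself hard to picture, a toy sketch may help. The scenario, names, and code below are my own illustration, not Scholz’s example and not real Ethereum code; they simply show what it means for the same code both to define an exchange and to carry it out, with no court or intermediary involved.

```python
# Hypothetical illustration of a "self-enforcing" agreement: the terms
# exist only as code, and the code that defines the exchange is also
# the mechanism that executes it once the agreed condition is met.

class SelfEnforcingEscrow:
    """A buyer's payment is locked in the contract and released
    automatically upon delivery confirmation; no party's further
    consent (and no external enforcer) is required."""

    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer = buyer
        self.seller = seller
        self.price = price
        self.escrowed = 0
        self.delivered = False
        self.settled = False

    def deposit(self, amount: int) -> None:
        # The "performance" side: funds are locked in the contract itself.
        if amount != self.price:
            raise ValueError("deposit must equal the agreed price")
        self.escrowed = amount

    def confirm_delivery(self) -> None:
        # The triggering condition; settlement then follows automatically.
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        # Enforcement is just code: once the condition holds,
        # payment is released without any human intervention.
        if self.delivered and self.escrowed == self.price:
            self.settled = True


escrow = SelfEnforcingEscrow("buyer", "seller", price=100)
escrow.deposit(100)
escrow.confirm_delivery()
print(escrow.settled)  # True
```

The sketch also hints at the doctrinal puzzle Scholz identifies: once the exchange runs itself, it is unclear where offer, acceptance, and enforcement, as the common law understands them, actually occur.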
Not surprisingly, these opaque mechanisms are vulnerable to mistakes and fraud. Scholz suggests that algorithmic contracting may cause flash stock market crashes that yield losses in the millions, and the algorithms’ very opacity allows the banks and others who put the algorithm into operation to evade liability for losses they cause by asserting a defense along the lines of “the code made me do it.” That view, Scholz contends, mistakenly treats algorithms as mere tools. If instead algorithms are akin to servants—as she convincingly argues—then principles of respondeat superior bind those principals and create incentives for them both to monitor the algorithmic contracts and to refrain from mischief.
Moreover, Scholz argues, algorithmic agreements may not even be binding contracts, because the participants lack knowledge of the substance of the transaction needed to form the requisite mutual assent. In addition, the agreements lack consideration if the parties do not know the content of their offers or acceptances, because those promises can hardly induce each other. As a matter of law on the ground, she explains that the Commodity Futures Trading Commission
cannot pursue a successful case against companies that use algorithms to make the trades because the laws require either specific intent or outright recklessness. The algorithms are considered to be too attenuated from the intent of the companies who use them to rise to that level of intent. (P. 59.)
The analysis in Algorithmic Contracts helps regulated communities, consumers and businesses alike, ensure that the banks and other activators of the algorithms cannot treat the agreements as legally binding contracts or as mysterious communications beyond their control, whichever they find convenient.
Scholz joins other scholars who apply common law—including agency principles—to electronic transactions, including Danielle Citron’s Hate Crimes in Cyberspace (2014), Frank Pasquale’s The Black Box Society (2015), and William Reynolds & Juliet Moringiello’s The New Territorialism, 99 Cornell L. Rev. 1415 (2014). By providing specific ways forward for a variety of stakeholders, she both sounds the alarm and shows the way to an exit from the dangers of algorithmic contracting.