The Journal of Things We Like (Lots)
Noam Kolt, Governing AI Agents, 101 Notre Dame L. Rev. __ (forthcoming), available at SSRN (Feb. 11, 2025).

Let’s start with a cliché: Kolt’s article is a must-read conversation starter. AI agents are proliferating around us, and the opportunities created by these technologies seem infinite. And so are the legal problems. The more powerful the technology and the higher its potential to make our lives easier, the greater the risks of its use. As Kolt puts it, “productivity and efficiency gains may come at the cost of unintended outcomes.” He also provides great illustrations of such risks, ranging from hallucinations (Hello Air Canada!) to unethical behavior in pursuit of the set goals. Ultimately, somebody will have to foot the bill. And it won’t be the AI agent. But I am getting ahead of myself!

If I were to provide a short summary of the article, here it is: Kolt explores the governance challenges presented by autonomous AI agents. The latter differ significantly from language models in their ability to independently plan and execute complex tasks. While established legal and economic frameworks, particularly principal-agent theory and common law agency doctrine, provide insights into issues like information asymmetry, authority, and loyalty, Kolt shows how all such frameworks hit a wall when applied to non-human entities. Reinterpreting traditional theories and legal doctrines can only get us so far. Can we really speak of fiduciary duties with regard to software? Can we speak of “conflicts of interest” and “loyalty” – or should we speak of ill-defined objective functions, sloppy prompts, or simply bad programming? I can’t help but ask: who (or what!) is easier to control: a human or an AI agent? Of course, Kolt makes an important disclaimer: he uses structures, principles, and vocabulary developed in the common law of agency to shed light on the challenges involved in governing AI agents. He deploys the common law of agency as an analytic lens but does not directly examine the legal application of agency doctrine to AI agents. After all, the AI agent is not a discrete legal entity and cannot be held liable.

To address the governance concerns surrounding AI agents, Kolt proposes a three-pronged strategy: inclusivity, ensuring that AI agents serve a broad range of societal interests beyond those of a single user; visibility, advocating greater transparency into AI agent design and operation; and liability, establishing clear rules for accountability when harm occurs. Unsurprisingly, he also emphasizes the need for new technical and legal infrastructure to manage the burgeoning risks accompanying this technology. Let me take each prong in turn.

Regarding inclusivity, Kolt uses this concept to address the alignment problem, that is, the challenge of ensuring that an AI agent’s goals and operations remain compatible with human values. The alignment problem, however, raises its own (seemingly insurmountable) challenges, mainly deriving from the difficulty of defining, formalizing, and encoding human values into an AI system. Values are difficult to capture in code. Whose values are we speaking of anyway? The developer’s or the user’s? The shareholders’ or the consumers’? In practice, an AI agent may extrapolate initial human preferences to achieve the desired objective, while also leading to harmful consequences. For example, a system tasked with increasing shareholder value may target vulnerable consumers with higher prices. Ultimately, we must acknowledge that any technology that operates with a high degree of autonomy and at inhuman speed will preclude humans from evaluating whether its operations are responsible or ethical in real time.

Regarding visibility, Kolt refers to the ability to understand the AI agent’s operations. Such understanding is largely precluded by the fact that many AI agents operate as black boxes, so that neither those who developed them nor those who deployed them can fully comprehend their internal operations. This problem seems unsolvable, at least from a technical perspective. LLMs, which form the “brain” of AI agents, are inherently stochastic and unpredictable. We may be able to understand how the technology works in general, but we cannot determine the “reasoning” underlying the individual outputs or “decisions” made by the AI agent. Technical issues aside, we must be realistic. Can we assume that, if provided with adequate technical information about the capabilities of a given AI agent, the average user would actually familiarise themselves with such information?

Regarding liability, Kolt’s article leads straight into the broader and not-so-novel territory of liability for the operation of computer programs, or software in general. The cacophony of sensationalistic headlines about the imminent arrival of AGI or agentic AI (or whatever new term appears on the cover of Wired) must not distract us from the simple truth that AI agents are software: computer programs of varying complexity and reliability that interact with each other in unexpected ways. Computer programs do not always execute as intended or operate as instructed. Sometimes it is the “fault” of their creators (bad programming or training); sometimes it is their users who are to blame (using the technology without proper safeguards or for a purpose it was not intended for). Users of AI agents must understand the risks of their, well, use.

There is no sugarcoating it: the risks – at least for the time being – may outweigh the benefits. At present, AI agents should not be left to operate without ongoing human supervision. In order to achieve the set goals, such as making restaurant reservations or booking flights, they need login information as well as payment details. How many of us are willing to provide such information to a nascent technology that, as Kolt points out, operates as a black box and is inherently unpredictable? If we are willing to do so, do we deserve protection? We must also remember that software is always provided “as-is,” without many (if any!) guarantees as to its reliability and uptime. As indicated, notwithstanding the new label, AI agents are pieces of software, and software has a tendency to “disobey” the instructions of its users. The instructions themselves may also be unclear or riddled with programming errors. Who provided the instructions? Who decided on the degree of autonomy? Everything leads back to the old problem of “many hands” and the challenges of allocating liability among the many actors involved in developing and deploying the AI agent.

I love the paper for its appreciation of the complexities involved and for its attempt to use “time-tested analytic frameworks” to understand and characterize the tradeoffs arising from the use of AI agents: the economic theory of principal-agent problems and the common law of agency. After all, before setting out to create something new, we should first test existing legal and conceptual frameworks. I do not, however, share Kolt’s confidence in using the term AI agent without a gazillion caveats. After all, as Kolt himself acknowledges, the legal meaning of the term agent differs from its technical meaning. We must also remember that the concept of “autonomous agents” is not new; it concerns a particular way of thinking about how one component of a software program relates to other components. Kolt defines AI agents as “autonomous systems that can plan and execute complex tasks with only limited human involvement” and regards them as actors, not tools. There may be a fine line between tool and actor, a line that needs to be carefully drawn to make this distinction as clear as possible. After all, crossing this line is supposed to trigger a different legal regime and require a discrete governance framework. Yet Kolt leaves some questions underexplored. When does a tool become an actor? When does it act autonomously? What is autonomy, then? Is it absolute, or is it a question of degree? Kolt’s article confirms the importance of technical details as well as the difficulty of falling back on established legal principles without a full understanding of the complexities involved.

Download PDF
Cite as: Eliza Mik, AI Agents: Tools or … Actors?, JOTWELL (November 10, 2025) (reviewing Noam Kolt, Governing AI Agents, 101 Notre Dame L. Rev. __ (forthcoming), available at SSRN (Feb. 11, 2025)), https://contracts.jotwell.com/ai-agents-tools-or-actors/.