The Journal of Things We Like (Lots)
Cary Coglianese & Erik Lampmann, Contracting for Algorithmic Accountability, 6 Admin. L. Rev. Accord 175 (2021).

Algorithmic accountability is a pressing contemporary issue. Machine learning algorithms—often referred to as artificial intelligence (AI)—are used in decision-making by state and federal agencies, as well as in the private sector. The outcomes of AI-assisted decisions can be critical to the quality of life of affected people, yet the rationale for algorithmic decisions is often obscure. Algorithmic accountability is the process of assigning responsibility for the results of decision-making assisted by AI. In Contracting for Algorithmic Accountability, Cary Coglianese and Erik Lampmann argue that public procurement—that is, government contracting—is a tool to promote algorithmic accountability in governance and beyond.

Federal, state, and local agencies use machine learning algorithms to aid in many tasks, from forecasting crime to allocating social services. The algorithms are not always immediately successful, but there is great enthusiasm in developing AI for governmental decision-making due to the potential for efficiency and cost savings in the long run. However, most government entities do not have the expertise or resources to develop machine learning algorithms on their own. They must contract with private parties to create these tools for them through public procurement processes.

Procurement already has mechanisms for incorporating compliance with socially important goals, and these could be used to promote responsible public-sector use of AI tools. For example, interested parties can object to the terms or award of a federal government contract through a formal challenge called a bid protest. The terms of procurement contracts already include environmental and social goals, which has been shown to have the secondary benefit of diffusing social norms about best business practices in the private sector. Similarly, government contract terms could promote compliance with otherwise voluntary standards for the responsible use of AI. Several groups have developed standards for ethical AI that government procurement could incorporate.

Coglianese and Lampmann suggest that government contracts for AI services and tools go further than simply incorporating pre-existing general contracting language. To achieve algorithmic accountability, they offer general suggestions for what AI contracts should include: substantive privacy and security standards, mandatory audit processes, and transparency safeguards that would make public evaluation possible while limiting contractors’ ability to invoke trade secrets as a broad shield against scrutiny. Each bidding process is an opportunity to set standards tailored to the particular AI use the government body desires. Both the government and potential alternate contractors will have the means and incentives to hold AI providers to the terms of the contract, and public transparency will allow lawmakers and the rest of society to learn from government deployment of AI.

There is paralysis around regulatory action on algorithmic accountability. Many argue that governing by broad standards is undesirable, because compliance is expensive for regulated entities when regulatory action is not predictable. But clear, inflexible rules are also said to be unworkable because there is not enough knowledge about individual cases and applications to make clear rules that are not poorly designed or over- or under-inclusive. This leads to the familiar inaction that characterizes modern American regulation of the newly possible.

Coglianese and Lampmann’s proposal offers one way through this impasse. Trial and error in rule development in public law is often derided as a waste of resources. There is both a need to rely on experience to develop rules that reflect an understanding of technology, incorporate legal reasoning, and balance the interests of all stakeholders, and a hesitance to allow any discretion in the rules applied by judges and agencies to particular cases. While further regulatory action is likely needed in the AI accountability space, government procurement can be an important laboratory for developing workable rules promoting algorithmic accountability. This approach allows the public interest and public dollars to shape AI accountability rules, and it publicly subsidizes the process of determining how to deploy AI safely in society.

Cite as: Lauren Scholz, Government Contracts, Algorithms, and the Benefits of Trial and Error, JOTWELL (December 4, 2023) (reviewing Cary Coglianese & Erik Lampmann, Contracting for Algorithmic Accountability, 6 Admin. L. Rev. Accord 175 (2021)), https://contracts.jotwell.com/government-contracts-algorithms-and-the-benefits-of-trial-and-error/.