The reviewed article describes an experiment in which an audio chatbot was imbued either with a random voice or with a clone of the participant’s voice, and the participant either was or was not informed that they were communicating with a bot. The study found that people tended to trust the bot more when it imitated their voice, and that this effect was not influenced by whether participants were informed that their partner was a bot. While the article does not discuss any legal questions, it is submitted that it carries interesting implications for consumer law and regulation.
In recent decades, thousands of behavioral studies have documented numerous systematic and substantial deviations from the assumptions of economic rationality (Zamir & Teichman 2018). A very influential strand of scholarship has called for the use of behavioral insights not only to better understand deficiencies in human judgment and decision-making, but also as a means to mitigate those deficiencies through nudges—“low-cost, choice-preserving, behaviorally informed approaches to regulatory problems” (Sunstein 2014). While nudges are effective in some contexts, it has been persuasively argued that they are unlikely to be effective in business-to-consumer relationships, where firms are both able and motivated to undo their effect (Willis 2013). In fact, firms are quicker and more effective than legal policymakers at taking advantage of consumer heuristics and biases for their own interests. Such exploitative counterparts of nudges have been dubbed sludges. Unlike nudges, which aim to improve decision-making for the benefit of the decision-maker or society at large, sludges aim to benefit the entities that employ them. The proliferation of online transactions, the spread of personalized marketing techniques, and advancements in AI technology provide marketers with new opportunities to exploit consumer heuristics—and pose new challenges for legal policymakers.
Specifically, a large portion of customers’ interactions with firms—prior to purchasing the product or service, during the contracting process, and throughout the contractual relationship—are no longer conducted with human beings, but rather with chatbots. While most of these interactions are currently in writing, it is expected that with the advancement of AI models and other technologies, more and more of them will take place via spoken conversations. From the firms’ perspective, the effectiveness of such communication depends on the extent to which customers trust the honesty and reliability of the entity with which they communicate. Firms may enhance customers’ trust in chatbots by ensuring that the information the bots provide and the commitments they make are truthful and reliable. However, firms can also promote this goal by shaping other aspects of the interaction, such as manipulating cues that unconsciously enhance people’s trust. In particular, they may exploit the similarity-attraction effect, namely the attraction people feel toward others who are perceived as similar to themselves in some personal dimension.
In their thought-provoking paper (forthcoming in Management Science), Scott Schanke, Gordon Burtch, and Gautam Ray report the results of an experiment designed to examine whether the use of a voice clone enhances consumer trust. The participants in the online experiment—native English speakers from North America—were invited to take part in a Trust Game, a well-known paradigm in experimental economics.
In the basic form of the trust game, one player (A) is given a sum of money and can choose to send some, all, or none of it to the other player (B). The amount sent is then multiplied (usually tripled), and B decides how much of the multiplied amount to return to A. If both players are rational maximizers in the economic sense, that is, if each one of them cares only about their own self-interest, then A would assume that B would return nothing, and would therefore send nothing to B. However, if A and B could trust each other, they would both be better off by A sending the entire amount allocated to them, and B splitting the multiplied amount with A. Indeed, contrary to the prediction of standard economic analysis, experiments show that very often, A does send a considerable portion of the money to B, who then reciprocates by sending back an amount that exceeds what A initially sent (Berg et al. 1995).
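To make the incentive structure concrete, the following is a minimal sketch in Python of the standard game’s payoffs, assuming an illustrative $10 endowment and the usual tripling multiplier (the specific figures are mine, not the article’s):

```python
# Payoffs in the standard trust game (illustrative figures, not from the article).
ENDOWMENT = 10.0   # A's initial stake
MULTIPLIER = 3.0   # the amount A sends is tripled before reaching B

def payoffs(sent: float, returned: float) -> tuple[float, float]:
    """Return the final payoffs (A, B), given how much A sends and B returns."""
    pot = sent * MULTIPLIER
    assert 0 <= sent <= ENDOWMENT and 0 <= returned <= pot
    return ENDOWMENT - sent + returned, pot - returned

print(payoffs(0, 0))    # no trust: (10.0, 0.0), the "rational" equilibrium
print(payoffs(10, 15))  # full trust, even split of $30: (15.0, 15.0)
print(payoffs(10, 0))   # betrayed trust: (0.0, 30.0)
```

As the three calls illustrate, mutual trust leaves both players better off than the no-trust equilibrium, which is why A’s decision to send money serves as a behavioral measure of trust.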
In the version of the game used by Schanke and his colleagues, A could either send nothing (i.e., keep the $1.25 endowment) or send the entire endowment to B. B could then either keep the money (in which case B would receive $3.50, the multiplied amount, and A nothing) or roll a die. If the die came up 1, A received nothing and B received $2.50; if it came up 2, 3, 4, 5, or 6, A received $3.50 and B $2.50. Rolling the die thus guaranteed B $2.50, a $1.00 sacrifice relative to keeping the money, while giving A a five-in-six chance of receiving $3.50.
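A short expected-value calculation, based solely on the payoffs just described, shows why sending is attractive to an A who expects B to roll the die:

```python
# Expected payoffs in the variant used in the experiment (figures as described above).
P_GOOD = 5 / 6           # probability the die shows 2, 3, 4, 5, or 6

a_keeps = 1.25           # A's payoff from sending nothing
a_sends = P_GOOD * 3.50  # A's expected payoff if B rolls the die

print(f"A keeps the endowment: ${a_keeps:.2f}")
print(f"A sends and B rolls:   ${a_sends:.2f} in expectation")  # about $2.92

# B's side of the ledger: keeping yields $3.50, while rolling yields $2.50
# for certain, so honoring A's trust costs B exactly $1.00.
```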
All the participants in the experiment were assigned to the role of A. Before playing the game, they were asked to call a 1-800 number and read a consent statement; this recording enabled the experimenters to generate a message using a clone of the participant’s own voice (footnote 12 of the article provides a link to a website where one can listen to a sample of the recorded messages). Then, before deciding whether to send the money to B, each participant heard an oral message, ostensibly from player B, trying to convince them to send the money.
The participants were randomly assigned to one of four experimental conditions in a 2 × 2 factorial design: half of them were told that they had been paired with another human player, and half that they had been paired with an autonomous AI agent; independently, half of them heard player B’s message in a randomly selected voice, while the other half heard a message generated using a clone of their own voice.
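For readers who wish to picture the design, here is a minimal sketch of the 2 × 2 random assignment; the condition labels are mine, not the authors’:

```python
import random

# Hypothetical labels for the two crossed factors of the 2 x 2 design.
DISCLOSURE = ["human partner", "autonomous AI agent"]      # what participants were told
VOICE = ["randomly selected voice", "clone of own voice"]  # how B's message sounded

def assign_condition(rng: random.Random) -> tuple[str, str]:
    """Randomize each factor independently, yielding one of four cells."""
    return rng.choice(DISCLOSURE), rng.choice(VOICE)

rng = random.Random(0)
for _ in range(4):
    print(assign_condition(rng))
```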
The key results of this neatly executed experiment were that using a voice clone significantly increased participants’ inclination to trust player B (that is, to send them the money), while informing participants that they were paired with an autonomous agent, rather than a human being, had no discernible effect on their decision.
Such stylized experiments inevitably raise concerns about their external validity and the generalizability of their findings, and one should be careful not to draw far-reaching conclusions from a single study. Nevertheless, the results of the study make one wonder about the ramifications of the use of such manipulations by marketers. Large tech firms already hold vast quantities of people’s voice samples; such information is tradable and is already used for various commercial purposes (Turow 2021). There is thus every reason to believe that the studied manipulation is not a mere fantasy.
Compared to more worrying online manipulations (also known as dark patterns; see Luguri & Strahilevitz 2021), imbuing chatbots with the customer’s voice does not seem overly troubling. Indeed, it is unclear whether it warrants a regulatory response even in legal systems that protect consumers from large firms more effectively than US law does. There appears to be no sufficient ground to prohibit such practices altogether. As for disclosure duties, these are generally ineffective (Ben-Shahar & Schneider 2014; Zamir & Ayres 2020, pp. 284–302), and the results of the current experiment suggest that they are ineffective in the present context as well.
Note, however, that in the experiment the disclosure concerned the partner being a bot, not its imitation of the participant’s voice; disclosing the latter fact might have made a difference. Regardless, contracting parties should have a free-standing right to receive significant information, such as the fact that they are interacting with a bot that imitates their own voice, even if this information does not affect their behavior. At the very least, such disclosure may put customers on guard by revealing that the firm is willing to use manipulative techniques to gain their trust.
To be sure, voice cloning can serve valuable purposes as well as manipulative ones. The potential harms of such manipulations, however, are likely to extend beyond the market to the political sphere. It is therefore important to pay attention to these risks.