What if your face and voice lied—and you couldn’t stop it?
An article forthcoming in the Fordham Law Review, "When the Screen Lies: Navigating Privacy and Publicity in an AI World" by Nancy Menagh, offers a meticulously argued case for the creation of a federal right of publicity in the United States, prompted by the rise of generative AI in the entertainment industry. The article opens by situating the issue within the context of the 2023 SAG-AFTRA strike, where fears about AI-generated digital replicas replacing human performers catalyzed the walkout. Menagh uses this labor dispute to illuminate deeper legal, economic, and ethical challenges faced by performers whose likenesses are co-opted by studios and third-party content creators without consent. She tracks the rising use of GenAI to produce deepfakes and synthetic performers, showing how these technologies outpace the patchwork of current privacy and IP laws.
In Part I, the article examines how digital replicas are created, explaining the inputs (massive training datasets, often including copyrighted or publicly scraped content) and the outputs (realistic, AI-generated images, voices, and videos of real people). Menagh dissects the 2023 SAG-AFTRA/AMPTP Memorandum of Agreement, highlighting its provisions on digital replicas and synthetic performers. While the agreement was heralded for introducing consent requirements and compensation guidelines, Menagh argues it still leaves many gaps, particularly for non-union actors and non-commercial misuse. The analysis is grounded in legal doctrine but attuned to the very real dignitary and psychological harms these tools can inflict.
“There is something uniquely dehumanizing in having your own voice and image turned against you . . . To have your own voice used to speak the opposite of what you believe—and have no way to stop it . . .”
In Part II, Menagh surveys the patchwork of existing state laws and evaluates pending federal legislation, including the NO FAKES Act and the No AI FRAUD Act. She notes that while these proposals attempt to regulate deepfakes and unauthorized replicas, they fall short in scope and enforcement mechanisms. International comparisons, particularly to the EU's AI Act, highlight how transparency and accountability provisions can offer a more rights-respecting model. The U.S., by contrast, lacks uniformity: current protections often depend on jurisdiction, celebrity status, or contractual muscle.
The article’s central contribution arrives in Part III, where Menagh proposes a robust federal right of publicity as the best solution to AI’s encroachment on personhood. Her proposal includes clear licensing standards, limitations on transferability (to protect against coercive studio contracts), and regulatory oversight via a federal registry. This vision goes beyond commercial protection, aiming to safeguard the dignitary interests of performers and everyday individuals alike.