A plague of unauthorized AI-generated or manipulated content misusing celebrity likenesses and well-known intellectual property has spread since generative AI became accessible through consumer applications and open-source models.
Fake product endorsements, deepfake pornography, and chatbots replicating a celebrity’s voice and persona are currently the most prevalent forms of such abuse, sources told VIP+. Each exploits a celebrity’s name, image, and likeness – known as NIL – and all have the potential to damage personal and professional reputations and brand equity, cause significant emotional distress, and mislead or deceive susceptible consumers. It’s worth noting that NIL rights typically cover both facial and voice likeness.
Notorious examples of NIL abuse using generative AI have surfaced in news reports, with a growing list of celebrity victims that includes Taylor Swift, Tom Hanks, Steve Harvey, and Selena Gomez. Yet such reported breaches remain anecdotal evidence, the tip of the iceberg of a much larger problem.
VIP+ sources described this problem as increasing month over month, noting an “exponential” or “explosive” rise in the number of talent NIL breaches discovered online, with the uptick driven by a wave of widely accessible and easy-to-use generative AI creative tools hitting the market.