Personality Rights in the Era of Deepfakes and Synthetic Media

Abstract

Deepfakes and synthetic media erase the old boundary between representation and reality. When anyone can generate a video of you saying or doing something you never did, the legal question becomes: who owns your persona and who protects it? This piece explains what “personality rights” mean in practice, examines recent Indian litigation involving Bollywood personalities, compares how other jurisdictions are reacting, highlights gaps in Indian law, and sets out concrete, realistic reforms for doctrine, platforms, and courts.

Introduction

Here’s the blunt point: your face, voice, and likeness are now replicable commodities. Advances in generative AI let bad actors, and sometimes well-meaning creators, produce convincing videos, audio, and images that mimic a person’s identity. For celebrities the harms are reputational and commercial; for ordinary people they can be intimate, dangerous, or criminal. Law has to decide three things fast: (1) what legal interest protects a “persona,” (2) what remedies are available when that interest is violated, and (3) what obligations platforms and data holders should bear.

India’s courts have started to answer these questions. In 2025 we’ve seen a string of high-profile suits in which Bollywood stars sued platforms and content creators over AI-generated content; injunctions and takedowns followed, but core legal gaps remain. Below I unpack the terrain and offer practical fixes.

What are “personality rights”?

Personality rights, sometimes called the right of publicity, protect a person’s identity, including name, image, voice, signature, and distinctive likeness or persona, from unauthorized commercial and personal exploitation. They overlap with but are distinct from copyright, trademark, privacy, and defamation:

  • Copyright protects original expression, not the mere fact of a person’s face appearing in a work.
  • Trademark protects brand identifiers in commerce (less useful for private likenesses).
  • Privacy / dignity / reputation claims (including constitutional privacy under Article 21) can cover misuse, but they don’t neatly handle commercial exploitation.

In India, personality protection has developed case by case through judicial precedent rather than a dedicated statute. Courts rely on a mix of copyright, passing off, torts, and constitutional privacy to craft relief. That approach gives flexibility but leaves uncertainty about scope and remedies.

India’s recent courtroom flashpoints: what happened and why it matters

This year brought a cluster of celebrity lawsuits that crystallize the legal problem.

  • A-list actors filed suits against YouTube/Google and other platforms seeking takedowns, injunctions, and damages over sexually explicit or deceptive AI-generated videos and voice clones. Plaintiffs assert misuse of their likenesses and claim that the platforms facilitated distribution and, they allege, the training of downstream AI models.
  • Separate petitions led to urgent interim orders: courts ordered removal of specific deepfake material and restrained e-commerce listings and other commercial uses tied to unauthorized AI content. The Bombay High Court and Delhi High Court have issued such orders in 2025, signalling judicial willingness to grant injunctive, platform-directed relief.

These cases matter because courts are recognizing that identity misuse via AI creates immediate, sometimes irreparable harm, and that takedowns plus stoppage of commercial exploitation are proper first responses. But these interim fixes don’t resolve the deeper issues: who bears long-term liability, what counts as consent, and whether platforms must police training data.

Legal obstacles revealed by the cases

  1. No statutory “right of publicity.” India lacks a clear, standalone statutory interest that expressly grants control over commercial exploitation of likeness. Judges have been creative, but piecemeal remedies create unpredictability.
  2. Platform liability and scale. Platforms host millions of uploads daily. Courts can order takedowns, but identifying originators and preventing re-uploads is hard; platform enforcement is uneven and reactive.
  3. Training-data problem. Plaintiffs have begun to argue that platforms facilitating the hosting of images and videos (and not preventing large-scale public harvesting) create downstream risks: content may be used to train models that then generate new deepfakes. Courts are just starting to confront whether platform policies or practices amount to facilitation.
  4. Evidence and attribution. Proving a synthetic clip or voice was derived from a given dataset or model is technically challenging. Courts must grapple with technical proof standards and expert testimony.
  5. Global distribution and jurisdictional friction. Deepfakes can be hosted outside India but consumed inside India. Cross-border enforcement raises conflicts of law and practical hurdles.

How other jurisdictions are approaching the problem

  • United States: Many states recognize a right of publicity with statutory and common-law roots; several states criminalize deepfake use in specific contexts (e.g., election interference, pornographic deepfakes). Remedies include takedowns, damages, and statutory penalties.
  • European Union: The EU’s approach mixes data protection (GDPR), audiovisual regulations, and emerging AI rules. The AI Act adds duties for high-risk systems, and data protection law can constrain using personal data for model training without a lawful basis.
  • UK/Canada/Australia: Courts rely on privacy, harassment, and publicity-style torts; legislatures are considering targeted rules.

The comparison shows that jurisdictions vary, but two trends recur: (1) statutory recognition of personality/publicity interests makes enforcement faster and more predictable, and (2) platform obligations (notice-and-takedown plus proactive risk tools) play a central role.

Technical realities that must inform legal design

  • Detectability vs. indistinguishability. Not all synthetic media are detectable; some models produce content indistinguishable from real material. Legal rules that rely on easy technical proof will fail in hard cases.
  • Attribution science: Watermarking and provenance metadata can help. But watermarking works only if it was embedded when the content was created, and provenance depends on uploaders’ cooperation and interoperable metadata standards.
  • Model training opacity: Many models are trained on scraped public data. Deciding whether scraping public images constitutes lawful use for training depends on consent, data source, and local law.

Legal standards must be designed around these technical facts: require provenance metadata for uploaded media, recognize watermarking evidence where available, and set feasible evidentiary rules for courts.
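To make the provenance idea concrete, here is a minimal sketch, in Python, of what an upload-time provenance record could look like and how a platform might check it against the uploaded file. The field names and the make_provenance_record / verify_provenance helpers are hypothetical illustrations (loosely inspired by C2PA-style manifests), not an existing platform API or a statutory schema.

import hashlib
import json
from datetime import datetime, timezone

# Hypothetical provenance record a platform could require at upload time.
# Field names are illustrative only; they are not drawn from any statute
# or existing platform API.
REQUIRED_FIELDS = {"content_hash", "capture_or_generation_time", "tool", "uploader_id"}

def make_provenance_record(media_bytes: bytes, tool: str, uploader_id: str) -> dict:
    """Build a provenance record bound to the exact media bytes via a hash."""
    return {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "capture_or_generation_time": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                # e.g. camera model or generative-AI tool name
        "uploader_id": uploader_id,  # platform account identifier, where available
    }

def verify_provenance(media_bytes: bytes, record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is internally consistent."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "content_hash" in record and record["content_hash"] != hashlib.sha256(media_bytes).hexdigest():
        problems.append("content hash does not match the uploaded bytes (file altered after the record was made)")
    return problems

if __name__ == "__main__":
    media = b"...raw video bytes..."
    record = make_provenance_record(media, tool="ExampleCam 1.0", uploader_id="user-123")
    print(json.dumps(record, indent=2))
    print(verify_provenance(media, record) or "record consistent with media")

The design point the sketch illustrates is that the record is bound to the exact bytes by a hash, so any later alteration of the file is detectable even if the metadata itself is copied forward intact.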

Concrete reform proposals for India (practical, prioritized)

  1. Statutory recognition of personality/publicity rights (sui generis)
    • A narrow statute that protects unauthorized commercial use of name, image, voice, and distinctive persona for living persons and for a limited post-mortem period. This would provide clarity for damages, injunctive relief, and choice of law.
  2. Platform obligations: mandatory provenance and notice regimes
    • Require major platforms to implement standard provenance metadata (origin, upload date, uploader identity where available).
    • Strengthen notice-and-takedown with mandated timelines and automated repeat-infringer processes; require transparency reports on takedown compliance.
  3. Training-data transparency and limited consent rules
    • For commercial model training, impose disclosure obligations (what datasets were used, whether scraped public content was included) and give persons a private right to demand removal of their likeness from commercial training sets.
  4. Evidentiary and forensic capacity building
    • Create a central forensic lab or accredited expert panels to assist courts with technical verification (model provenance, watermark analysis).
  5. Interim and injunctive standards tailored to synthetic harms
    • Courts should apply low-threshold emergency relief (takedowns, interim injunctions) when synthetic content threatens safety or reputation, balanced with expedited hearings to avoid undue censorship.
  6. Criminal law guardrails
    • Targeted criminalization for deepfakes used for extortion, sexual exploitation, or election interference, not blanket bans that could chill speech.
  7. Public awareness and low-cost remedies
    • A public portal to report deepfake misuse and clear remedies for ordinary citizens, plus support for digital literacy initiatives.

These reforms blend doctrinal clarity with practical enforcement tools. They respect free expression while protecting personal dignity and legitimate commercial interests.

Conclusion

Deepfakes expose a simple legal truth: identity is both personal and commercial. India’s courts have begun filling the vacuum with ad hoc injunctions and creative readings of existing laws. That’s a necessary stopgap. What we need next is structure: a statutory framework for personality rights, platform obligations for provenance and takedown, transparency about training data, and technical capacity for courts.

If law is slow, it can still be smart. Targeted, technology-aware reforms will give people, celebrities and ordinary citizens alike, meaningful control over their personas without throwing out free expression or innovation. The alternative is messy: inconsistent rulings, delayed relief, and reputational harms that cannot be repaired.

Author: Hardika Dave. In case of any queries, please contact/write back to us at support@ipandlegalfilings.com or IP & Legal Filing.