Grammarly ‘Expert Review’ Sparks AI Persona Impersonation Lawsuit


By Agustin Giovagnoli / March 11, 2026

Grammarly’s AI-powered ‘Expert Review’ feature is at the center of an AI persona impersonation lawsuit that alleges the company used the names and identities of real professionals—without permission—to frame algorithmic feedback as human expertise [1][2][3]. The dispute matters for every team building or buying AI because it tests whether simulated expert personas can be commercialized without consent, and how such framing shapes user trust and accountability [1][2][3].

Quick summary: What the Grammarly ‘Expert Review’ lawsuit alleges

According to reporting and the emerging class-action claims, Grammarly’s feature generated AI feedback while presenting it as if it came from “the world’s great writers and thinkers” or from identifiable professionals, including living journalists and professors, none of whom agreed to participate [1][2][3]. Coverage also notes that the system imitated deceased professors and historic writers, who can neither consent nor correct misattributed views [2]. At stake are user trust, the ethics of AI-driven identity appropriation, and whether companies can legally turn recognizable personas into product features without prior authorization [1][2][3].

How the feature worked: simulated expert feedback framed as named professionals

The core issue is framing. Reports indicate the tool labeled AI-generated expert feedback as if it were authored by specific, recognizable figures, positioning the output as “expert-approved” guidance rather than generic AI assistance [1][2]. Critics argue this amounts to “authority theater,” in which borrowed credibility inflates user confidence in suggestions that are in fact generated by simulations, not actual individuals [1][2]. That framing blurs who is accountable for the advice and, for many content teams, raises practical questions about disclosure, provenance, and risk [1][2].

Documented examples and reporting

Coverage documents that The Verge staffers—including Nilay Patel, David Pierce, Sean Hollister, and Tom Warren—appeared as selectable experts inside the feature despite never consenting to participate [1][3]. Reporting also highlights the presence of deceased professors and well-known writers among the simulated personas, underscoring the absence of any meaningful consent mechanism and the impossibility of correction when misattribution occurs [2]. Together, these accounts form the factual basis for the class-action claims and the industry backlash [1][2][3].

Ethical concerns: identity appropriation and ‘authority theater’

Observers describe the practice as AI-driven identity appropriation: a person’s reputation, style, or voice packaged as a product capability without permission [1][2][3]. Ethically, critics say this can mislead users into trusting output because it is framed as coming from recognized authorities, obscuring the fact that it is simulated and algorithmic [1][2]. It may also pressure writers to conform to perceived expert styles, contributing to homogenization in writing and diluting originality, an especially fraught prospect for marketers, educators, and newsrooms that rely on distinctive voice and accountable editorial judgment [2].

AI persona impersonation lawsuit: legal issues companies and vendors should watch

The lawsuit surfaces several potential claims relevant to AI product and legal teams: right of publicity (use of a person’s name or likeness for commercial purposes without consent), misrepresentation, and deceptive trade practices related to how the feature presented authorship and authority [1][2][3]. Observers suggest the outcome could set precedent for whether and how companies may create, market, and monetize digital replicas or stylistic likenesses of identifiable professionals, especially when outputs are labeled as advice from named individuals rather than clearly disclosed as AI-generated [1][2][3]. For additional context on endorsements and claims, see the U.S. Federal Trade Commission’s Endorsement Guides on advertising disclosures.

Business and product implications for AI vendors and buyers

Beyond court filings, the reputational and procurement impact could be immediate. Vendor due diligence will likely emphasize:

  • Digital-replica consent: documented, revocable permission for any named or identifiable persona [1][2][3] (see the sketch after this list).
  • Clear labeling and disclosure for AI-generated expert feedback, avoiding ambiguity about who authored the advice [1][2].
  • Risk assessments around identity use, misattribution, and user deception—especially for enterprise rollouts [1][2][3].
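
To make the consent item concrete, here is a minimal sketch of consent-gated persona availability. The PersonaConsent record, its fields, and the gating function are hypothetical assumptions for illustration, not Grammarly’s implementation or a statement of what the law requires.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical consent record; names and fields are illustrative, not any vendor's schema.
@dataclass
class PersonaConsent:
    persona_name: str                      # the real person the persona is based on
    granted_at: datetime                   # when documented permission was obtained
    scope: str                             # e.g. "style emulation in editor UI"
    revoked_at: Optional[datetime] = None  # consent must stay revocable

def persona_is_available(consent: Optional[PersonaConsent]) -> bool:
    """Gate persona features on documented, unrevoked consent."""
    if consent is None:
        return False  # no record means no consent; default to unavailable
    return consent.revoked_at is None

# Usage: the persona disappears from the picker the moment consent is revoked.
record = PersonaConsent("Jane Doe", datetime.now(timezone.utc), "style emulation in editor UI")
assert persona_is_available(record)
record.revoked_at = datetime.now(timezone.utc)
assert not persona_is_available(record)
```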

Product teams evaluating generative features can benefit from governance playbooks and transparent UX patterns that make clear when guidance is simulated rather than human-authored.
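
One such UX pattern is a disclosure label that never implies a named person authored AI output. The sketch below is illustrative only; the function and its wording are assumptions, not vetted legal language or any vendor’s API.

```python
from typing import Optional

# Hypothetical disclosure labels; wording is illustrative, not vetted legal language.
def feedback_label(simulated: bool, persona_name: Optional[str] = None) -> str:
    """Return a UI label that never attributes AI output to a named person."""
    if not simulated and persona_name:
        return f"Reviewed by {persona_name} (human editor)"
    if persona_name:
        # Disclose the simulation explicitly instead of implying authorship.
        return (f"AI-generated suggestion in a style inspired by {persona_name}; "
                f"not written or endorsed by them")
    return "AI-generated suggestion"

# Usage: the label travels with every piece of feedback shown in the editor.
print(feedback_label(simulated=True, persona_name="Jane Doe"))
print(feedback_label(simulated=False, persona_name="Alex Kim"))
```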

Short-term outlook: what the lawsuit could change

In the near term, expect heightened scrutiny of any feature that simulates named experts or markets stylistic likeness as an on-demand persona [1][2][3]. The case could accelerate internal reviews, external audits, and product updates that prioritize consent, disclosures, and safeguards against impersonation. For AI product teams, the implications extend to branding, user education, and partner contracts—especially where “expert” framing influences high-stakes decisions in education, media, or enterprise communications [1][2][3]. If courts clarify boundaries, the resulting standards may shape product design and compliance roadmaps across the industry [1][2][3].

Checklist for buying or building AI tools that mimic human experts

  • Require explicit consent for any identifiable name, likeness, or persona used in product UI or marketing [1][2][3].
  • Avoid labeling simulated output as advice from a specific living or deceased person; use transparent, AI-first labels [1][2].
  • Implement audit logs for persona selection and output provenance [1][2] (see the sketch after this checklist).
  • Provide opt-outs and removal processes for individuals and organizations [1][2][3].
  • Conduct legal reviews focused on right of publicity, misrepresentation, and deceptive trade practices before launch [1][2][3].
  • Stress-test user trust: measure whether framing as “expert” materially changes user behavior or perceived accountability [1][2].
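
As a concrete illustration of the audit-log item above, here is a minimal sketch of an append-only provenance record. The JSON Lines layout, field names, and helper function are assumptions for illustration, not a known Grammarly or industry schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical provenance record; field names and JSON Lines layout are assumptions.
def log_persona_event(log_path: str, persona_id: str, consent_ref: str,
                      model_version: str, output_text: str) -> dict:
    """Append one audit entry per persona-labeled output."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "persona_id": persona_id,        # which persona the user selected
        "consent_ref": consent_ref,      # pointer to the governing consent record
        "model_version": model_version,  # which model produced the text
        # Store a digest rather than the text itself to keep the log lean and private.
        "output_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # JSON Lines: one record per line
    return entry

log_persona_event("persona_audit.jsonl", "persona-123", "consent-456",
                  "model-2026-03", "Example AI-generated feedback.")
```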

Sources

[1] Grammarly Caught Using Real Identities Without Consent
https://www.techbuzz.ai/articles/grammarly-caught-using-real-identities-without-consent

[2] Grammarly Expert Review Explained: AI Backlash, Risks …
https://www.junia.ai/blog/grammarly-expert-review

[3] Grammarly’s AI ‘Expert Review’ Simulates Writers Without Consent
https://www.techbuzz.ai/articles/grammarly-s-ai-expert-review-simulates-writers-without-consent
