What happens when a software tool sounds like your favorite columnist – without asking them first? That's the question at the center of a new class-action lawsuit against Grammarly, filed after it launched an "Expert Review" feature that simulated feedback in the style and name of well-known figures – journalist Julia Angwin, novelist Stephen King, and tech journalist Kara Swisher, among others – without their consent.
Alleged "synthetic endorsement" meets swift pushback
According to TechCrunch's reporting, Angwin filed a class-action complaint alleging violations of privacy and publicity rights after discovering her name and persona were used to deliver paid editorial feedback (the feature cost $144/year). Other figures, including AI ethicist Timnit Gebru, were reportedly on the list. The output wasn't great, either: Platformer's Casey Newton said the tool's imitation of Swisher offered only generic notes, which raises the question – why risk legal and reputational blowback for rote advice any competent editor could give?
Grammarly has disabled the feature. In a LinkedIn post cited by TechCrunch, Grammarly CEO Shishir Mehrotra apologized for how the rollout landed but defended the idea: "Imagine your professor sharpening your essay… a thoughtful critic challenging your arguments." The real Kara Swisher offered a blunter take via text to Newton: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you… Also, you suck."
Why this matters: a fast-forming line around identity and consent
For businesses, the risk is no longer theoretical. Generative tools can now convincingly simulate the tone, cadence, and perceived authority of recognizable people – crossing into what lawyers call the "right of publicity" and what brand managers will call "synthetic endorsement." It's not just a media problem: any company shipping generative features that evoke real people – experts, executives, creators, or customers – now faces concrete legal exposure and trust risk.
Counterweight: platforms move toward consent and detection
Contrast Grammarly's stumble with YouTube's escalating guardrails. The platform is piloting likeness detection for a vetted group of political figures, public officials, and journalists, letting them flag and request removal of unauthorized, AI-generated impersonations (akin to Content ID for faces). Eligible testers must verify their identity with a selfie and a government ID. YouTube says the program is designed to protect "the integrity of the public conversation" while balancing parody and critique under its policies. The company supports the proposed NO FAKES Act and plans, longer term, to block violating uploads before they go live; removal requests so far have been "very small."
Takeaway for leaders: consent flows, verification, and proactive detection are quickly becoming table stakes – not nice-to-haves – when likeness is in play.
Governance is the new competitive moat
Another data point: Amazon is tightening internal controls after AI-assisted code contributed to service incidents. Following a nearly six-hour retail-site disruption and a 13-hour interruption to an AWS cost calculator linked to its Kiro coding tool, Amazon moved to require senior engineers to sign off on AI-assisted changes. The company framed the incidents as "novel GenAI usage" lacking mature safeguards. The lesson is transferable: as AI systems generate more than text – feedback, code, designs – enterprises need documented review paths and escalation for any output that can materially affect users or brand.
Signal vs. noise: the "expert" problem isn't just legal
Open-source maintainers have watched a similar pattern unfold. ZDNET reports that AI has been both a force multiplier and a floodgate: Anthropic's tools helped Mozilla find more high-severity Firefox bugs in two weeks than typically surface in two months, yet projects like cURL and FFmpeg were deluged with low-quality, AI-generated "security reports" that burned volunteer time – so much so that cURL paused its bounty program. Linux creator Linus Torvalds says he's "much less interested in AI for writing code and far more excited" about AI for maintenance – automated patch checking and code review – where human accountability stays front and center.
If Grammarly's "Expert Review" felt hollow, that's the same signal-to-noise tension: simulated authority without accountable expertise yields generic output and real risk.
What leaders should do now
- Build consent into design: Model YouTube's approach with identity verification, opt-ins, and clear appeal processes before using anyone's name, face, or voice – real or implied.
- Create AI change-control gates: Borrow from Amazon's practice and require senior human sign-off for AI-influenced outputs that affect customers, content quality, or infrastructure.
- Aim AI where it's accretive: Follow Torvalds' guidance – use AI to maintain, review, and triage (e.g., Anthropic's multi-agent code review) rather than to impersonate authority.
Will courts accept "everyone else is doing it" as a defense for synthetic endorsements? Unlikely. The market is drawing a brighter line around identity, consent, and accountability. Companies that cross it may not just face lawsuits – they'll lose the audience trust that data-rich models can't buy back.

