Grammarly ditches "Expert Review" after expert rebellion and class action suit

Failing to live up to its superhuman name, Grammarly says it "fell short."

Failing to live up to its superhuman name, Superhuman, the once-defiant maker of Grammarly, was forced to eat a little AI crow today. After enlisting countless authors, writers, and journalists for its much-touted “Expert Review” feature entirely without their consent, the company has reversed course, prompted by a class action lawsuit. “Expert Review” allowed Grammarly subscribers to receive phony analysis generated by an LLM trained on the work of famous writers, living or dead, in an effort to “take your writing to the next level.” Of course, seeing as this is a tech company we’re talking about, and everything is just data for them to train their products on, Superhuman did so without the consent of its “leading professionals, authors, and subject-matter experts.”

Earlier today, Wired reported that Markup founder Julia Angwin is the only named plaintiff in a class action suit against Superhuman, seeking damages exceeding $5 million. “We think it’s a pretty straightforward case,” Angwin’s attorney told Wired, arguing that this type of behavior from tech companies is happening across society. “Lots of professionals who spend years, or in Julia’s case, decades, honing a skill or a trade, then see that their name or their skills are being appropriated by others without their consent.”

The feature received widespread condemnation from the authors who were non-consensually recruited for the program, including tech journalist Kara Swisher, AI blogger Casey Newton, and the staff of The Verge. The latter reached out to Grammarly, which informed them earlier this week that victims of identity theft could “opt out” of the program they never signed up for.

But apologies are a feature of the AI hype machine, not a bug, and so Superhuman CEO Shishir Mehrotra performed the modern Notes app apology: an extended, mealy-mouthed LinkedIn post. “Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices,” he wrote. “This kind of scrutiny improves our products, and we take it seriously. As context, the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we’ll rethink our approach going forward.”

So what did we learn here? For one thing, a future in which AI is firmly foisted upon users isn’t one that users have to stand for. Complaining about these products widely and loudly can help keep the slop trough from filling too quickly. And, as it turns out, a class action suit couldn’t hurt either.
