THE VERTEX.
TECHNOLOGY | 11 March 2026

Grammarly's AI Deception: When Automation Crosses Ethical Lines

Grammarly faces a class action lawsuit over its deceptive 'Expert Review' feature that falsely attributed AI suggestions to real academics. The shutdown highlights growing ethical concerns about AI transparency and consent.

The Editorial Team
The Vertex
5 min read
Source: www.wired.com
Grammarly, the popular writing assistance platform, has abruptly shut down its 'Expert Review' feature following a class action lawsuit that exposes troubling questions about AI transparency and intellectual property rights. The feature, which presented AI-generated editing suggestions as if they came from established authors and academics without their consent, represents a concerning trend in the tech industry's rush to automate human expertise.

The lawsuit, filed by users and potentially affected academics, centers on Grammarly's practice of attributing AI-generated feedback to real individuals without permission. This deceptive practice not only undermines the credibility of the review process but also raises significant ethical concerns about consent and attribution in the age of generative AI. The company's decision to shut down the feature suggests an acknowledgment of these fundamental problems.

This incident reflects a broader challenge facing the tech industry: the temptation to pass off AI-generated content as human-created. As companies race to integrate AI into their products, the line between helpful automation and deceptive practices becomes increasingly blurred. Grammarly's case serves as a cautionary tale about the importance of transparency and ethical considerations in AI development.

The implications extend beyond Grammarly. This controversy could trigger increased scrutiny of AI features across the tech sector, potentially leading to new regulations around AI attribution and consent. For users, it highlights the importance of critically evaluating AI-generated content and understanding its limitations. For the industry, it demonstrates that attempts to artificially enhance AI credibility through deception are likely to backfire, damaging both user trust and corporate reputation.