Business Wire
Published on: Feb 26, 2026
Trust is the currency of review platforms. And in 2025, that currency is under pressure from AI-generated content, coordinated review rings, and increasingly sophisticated scams.
Yelp Inc. (NYSE: YELP) says it’s fighting back—at scale.
In its newly released 2025 Trust & Safety Report, Yelp disclosed that it identified and filtered nearly half a million suspected AI-generated reviews, shut down more than 1.3 million user accounts for policy violations, and ramped up enforcement across scams, compensated reviews, and viral-driven review abuse.
The message is clear: as generative AI tools proliferate and moderation budgets tighten elsewhere, Yelp is positioning itself as the industry’s hardliner on authenticity.
AI-written reviews violate Yelp’s content guidelines. Reviews must reflect genuine, firsthand experiences—and users are prohibited from using third-party AI tools to draft them.
With AI writing tools now widely accessible, Yelp says it significantly expanded detection efforts in 2025, deploying new AI-powered systems to flag suspicious patterns. The result: nearly 500,000 reviews exhibiting characteristics of AI-generated content were filtered out by automated systems.
That’s a substantial volume, especially considering Yelp received approximately 22 million reviews globally in 2025.
Of those 22 million reviews:
About 70% were recommended by Yelp’s automated recommendation software
17% were not recommended
11% were removed by the User Operations team
2% were self-removed by users
Unlike platforms that lean heavily on community reporting, Yelp emphasizes that its recommendation engine operates independently. It evaluates every review using hundreds of signals related to quality, user behavior, and reliability—and cannot be overridden by employees or business owners.
In 2025, Yelp further tuned that system to demote reviews lacking sufficient detail or showing signs of undisclosed conflicts of interest.
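Yelp does not disclose how its recommendation engine works internally. As a purely illustrative sketch, a signal-weighted classifier of the kind described above might look like the following; every signal name, weight, and threshold here is a hypothetical stand-in, not Yelp's actual system.

```python
# Toy illustration (NOT Yelp's actual system): combine weighted quality and
# reliability signals into a recommend / not-recommend decision. All signal
# names and weights are hypothetical assumptions for this sketch.

def score_review(signals: dict) -> float:
    """Return a weighted score; higher means more likely to be recommended."""
    weights = {
        "account_age_days": 0.002,   # established accounts add credibility
        "detail_score": 1.5,         # richness of firsthand detail, scaled 0..1
        "prior_reviews": 0.05,       # history of activity on the platform
        "conflict_flag": -3.0,       # signs of an undisclosed conflict of interest
    }
    return sum(weights[name] * signals.get(name, 0) for name in weights)

def recommend(signals: dict, threshold: float = 1.0) -> bool:
    """Recommend a review only if its combined signal score clears the bar."""
    return score_review(signals) >= threshold

# A detailed review from an established account clears the threshold,
# while a thin review carrying a conflict-of-interest signal is demoted.
detailed = {"account_age_days": 400, "detail_score": 0.9, "prior_reviews": 12}
thin = {"account_age_days": 5, "detail_score": 0.2, "conflict_flag": 1}
```

A model like this also makes the 2025 tuning described above concrete: demoting low-detail or conflicted reviews amounts to adjusting weights such as `detail_score` and `conflict_flag`, without any human override path.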
In an era when AI can generate polished, convincing narratives in seconds, detail alone is no longer proof of authenticity. Yelp’s bet is that layered detection—automated plus human moderation—remains defensible at scale.
The fight against deceptive behavior extended beyond reviews.
Yelp closed over 1.3 million user accounts in 2025, a 138% increase from 2024. The surge was largely driven by airline phone support scams—an increasingly common tactic where fake support listings divert consumers seeking help.
Yelp’s systems identified and removed more than 889,800 fake phone support accounts tied to these schemes.
It also rejected more than 50,700 new business page submissions associated with spam-like behaviors—a 29% year-over-year increase. Many were concentrated in high-risk emergency service categories such as locksmiths, plumbing, roadside assistance, and garage door repair, where consumers are especially vulnerable during urgent situations.
Additionally, Yelp removed over 1,340 business pages linked to deceptive lead generators attempting to create fake listings to resell consumer inquiries.
Taken together, the data underscores how review platforms are increasingly battlegrounds for fraud beyond just fake five-star ratings.
Compensated and incentivized reviews remain a persistent challenge across the industry. Yelp says it proactively investigates both its own platform and external sites to infiltrate review-trading groups.
In 2025, the company:
Placed 128 Compensated Activity Alerts on business pages
Issued 363 Suspicious Review Activity Alerts tied to coordinated behavior
Closed nearly 2,000 accounts linked to review exchange rings (a 49% increase)
Yelp also reported sending more than 1,020 notifications to platforms including Meta Platforms (Facebook and Instagram), X Corp., LinkedIn, Reddit, TikTok, and Craigslist after identifying groups attempting to trade or purchase reviews.
According to Yelp, 60% of those reports resulted in action by the receiving platforms—a 62% increase from the previous year.
The company identified more than 1,100 suspicious groups, posts, or individuals tied to review trading, marking a 45% year-over-year rise.
In other words, the arms race is escalating.
Not all moderation challenges stem from scams. Social virality can distort review ecosystems just as quickly.
Yelp reported a 58% year-over-year increase in Media Attention Alerts and Unusual Activity Alerts placed on business pages following spikes in abnormal review behavior.
More than 80,000 reviews were removed in 2025 due to viral-driven activity. Of those cases, 75% stemmed from social media amplification that triggered waves of reviews from users without firsthand experiences.
Yelp placed:
Over 1,190 Unusual Activity Alerts
266 Public Attention Alerts related to accusations or discrimination claims
In some cases, the platform temporarily disabled review posting to prevent review bombing.
As social media outrage cycles accelerate, review platforms increasingly function as secondary battlegrounds. Yelp’s approach—temporary freezes and visible alerts—signals a more interventionist stance compared to platforms that rely solely on reactive moderation.
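The report does not describe how these alerts are triggered, but a spike in review volume against a page's own baseline is the natural signal. A minimal sketch of such a detector, assuming a simple mean-plus-standard-deviation rule rather than anything Yelp has confirmed using:

```python
# Toy sketch (an assumption, not Yelp's documented method): flag a business
# page for an unusual-activity alert when its daily review count spikes far
# above its recent baseline.
from statistics import mean, stdev

def unusual_activity(daily_counts: list[int], today: int, k: float = 3.0) -> bool:
    """Flag when today's count exceeds the baseline mean by k standard deviations."""
    baseline_mean = mean(daily_counts)
    baseline_sd = stdev(daily_counts) or 1.0  # guard against a zero-variance baseline
    return today > baseline_mean + k * baseline_sd

# A page averaging ~3 reviews a day that suddenly receives 40 is flagged;
# an ordinary day is not.
history = [2, 3, 2, 4, 3]
```

A real system would layer on many more signals (reviewer history, geographic spread, link-sharing patterns), but the core idea, comparing current behavior to a page's own baseline, is what turns a viral pile-on into an actionable alert.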
The report also highlights Yelp’s resistance to legal demands aimed at unmasking reviewers.
In 2025, the company says it successfully avoided producing personal information for 99% of the user accounts targeted by subpoenas or other legal requests from law enforcement, government entities, or private parties.
Yelp also placed six Questionable Legal Threat Alerts on business pages after identifying what it described as potential abuse of the legal system to silence reviews.
Legal pressure as a moderation tactic isn’t new—but Yelp’s data suggests it remains active, particularly when negative reviews threaten reputation.
While the report focuses on enforcement metrics, there’s a strategic layer beneath the numbers.
As generative AI accelerates content production and some platforms recalibrate trust-and-safety budgets, Yelp is leaning into moderation as a differentiator. By publicizing detection volumes and enforcement growth, it positions itself as a platform prioritizing authenticity over frictionless scale.
The challenge going forward won’t just be identifying AI-written content—it will be distinguishing increasingly sophisticated synthetic narratives from genuine human experiences.
Filtering half a million suspected AI reviews in one year is a strong signal. Whether that pace holds as generative models evolve will be the next test.
For now, Yelp is making its stance clear: authenticity isn’t optional—it’s the product.