YouTube Is Now Asking Users to Flag Videos That Feel Like AI Slop

YouTube is testing a new viewer prompt that asks whether videos feel like "AI slop," adding a crowdsourced layer to its fight against low-quality AI content flooding the platform.

Eliza Crichton-Stuart

Updated Mar 19, 2026

YouTube has a new weapon in its war against AI-generated garbage, and that weapon is you.

Starting around March 17, users on the YouTube mobile app began encountering a new pop-up prompt when rating videos. The question? "Does this feel like AI slop?" or "How much does this video feel like low-quality AI?" Responses range from "Not at all" to "Extremely," giving viewers a direct line to flag content they suspect is machine-generated and low-effort.

The Scale of the Problem YouTube Is Trying to Solve

This isn't YouTube panicking over nothing. The platform has been drowning in AI-generated content for a while now, and its existing automated and human review systems haven't kept pace. A study found that roughly 21% of the first 500 recommended videos served to a brand-new YouTube account were identified as AI slop, with another 33% falling into a broader "brainrot" category of low-substance, repetitive content.

The problem hits especially hard for younger audiences. An investigation by the New York Times found thousands of videos targeting children that claimed to be educational but were built primarily to hold attention with minimal actual effort or value. Experts flagged this as a genuine concern for young viewers' development.

YouTube currently doesn't ban creators from using AI tools, and there's no blanket requirement to disclose AI-generated content. The risk for creators is losing monetization if their content gets flagged as low quality, but enforcement has been inconsistent at best.

Crowdsourcing Detection, With Some Big Caveats

The new viewer prompt adds a third layer on top of YouTube's existing detection systems, essentially turning its massive audience into an informal moderation force. With over 2 billion logged-in users visiting the platform monthly, that's an enormous amount of potential feedback.

Here's the thing, though: YouTube hasn't explained how this data will actually be used. The company hasn't clarified how much weight viewer ratings will carry, whether they'll trigger automatic action against flagged content, or how the system will handle false positives where legitimate human-made content gets mislabeled.

There's also a thornier possibility, and it's not a fringe take: if Google feeds viewer-labeled "slop" data into its Veo video generation models, the end result could be AI that's specifically trained to avoid the patterns humans flag as low quality. Whether or not that's the intent, YouTube hasn't addressed it.

What This Means for Creators and Viewers

For creators, the stakes just got more visible. If viewer sentiment directly influences how content is ranked or monetized, even legitimate creators using AI tools as part of their workflow could find themselves on the wrong end of a negative rating.

For viewers, it's a genuinely interesting shift in how platform moderation works. Rather than trusting algorithms alone, YouTube is betting that the people actually watching have a better gut sense of what feels hollow and machine-generated.

Whether that bet pays off depends entirely on how YouTube uses the data it collects, which right now remains an open question.
