
AI slop, which used to simply be called spam, now includes impressive but overwhelming content, muddying its status as both a technical marvel and a source of digital pollution.
While advanced tools like OpenAI's text-to-video generator Sora 2 or Meta's Vibes feed aren't slop in themselves, like any generative tool they can flood the ecosystem with low-signal filler if abused. From fake viral clips to junk video inventory, slop affects every link in the chain: publishers losing traffic and trust, creators competing with low-effort content, and advertisers risking brand adjacency.
Whether additional AI tools are seen as a breakthrough or just another slop factory depends less on the tool itself and more on how people use it.
Underneath the hype around developments like Sora 2 and Meta Vibes, creators and brands are being thoughtful about whether to invest, drawing a distinction between useful AI adoption and plain slop.
Here’s a look at some of the nuances developing around it.
Myth 1: All AI slop is equal
AI slop has become a catch-all for low-quality AI content, because it’s fast, sticky shorthand. But that convenience hides the nuances that are emerging.
There is the total junk: the obvious MFA (made for advertising) spam and scraper sites with AI-generated listicles and plagiarized news. These are now filled with AI slop churned out as cheap inventory, and they're on the rise now that anyone can spin one up, a lot faster, using generative AI. It's helpful to think of MFA as the business model and slop as the content.
Deepsee.io, a fraud detection and site analysis platform, flagged at least 10,000 new slop sites each month in 2025, and that's a conservative estimate, according to data its co-founder Rocky Moss shared with Digiday.
“We see thousands of sites currently active in the past few months as directly stealing content from reputable news sources and promoting it on social media as their own using AI remixing tools, or just straight up plagiarism,” Moss said. And there seem to be no signs of that slowing down.
Then there is the gray zone: low-effort AI content that's technically fine but adds little value, such as generic lifestyle explainers, AI-voiced YouTube shorts or filler posts that border on clutter.
And now there is a third bucket, depending on consumer sentiment: AI personas, Sora-made clips and other creative experiments that divide people. To some they're innovation; to others, just another layer of slop. It's all a matter of perception.
Myth 2: AI-generated content is harmless fun
Sure, if you choose not to think about the consequences. Jamie Indigo, a technical SEO consultant, believes there will be a price to pay for consuming all this slop. “This unprecedented access to data is big tech’s dream. People are granting AI access to personal emails, calendars, photos and more,” she said. Her fear is that this data will at some point be used for surveillance.
Generative AI in all forms operates on the assumption of consent, noted Indigo. OpenAI pivoted Sora 2 to an opt-in, character-by-character model after backlash from content creators when it launched earlier this month, partly to reduce legal and PR risk. But Indigo believes content creators, especially women, should be concerned. She pointed to a 2023 study, which found that deepfake pornography makes up 98% of all deepfake videos online and that 99% of nonconsenting targets are women. And Sora 2 is reportedly being used to make nonconsensual fetish content. “We need to differentiate AI slop from propaganda, exploitation, and dilution of human connection,” she added.
Myth 3: Advertisers won’t touch it
In programmatic, ad dollars often end up on slop by accident, because supply paths can be opaque and inventory looks cheap and plentiful. Many MFA sites and low-quality AI farms are still monetized because they manage to slip past brand-safety checks, even though the content itself is junk.
In August, 70 percent of approximately 20,000 sites analyzed by Deepsee.io were flagged as content farms, 67 percent of the content on those sites was AI-generated, and 54 percent of those sites sent bid requests into the programmatic marketplace, according to data shared with Digiday.
That’s classic slop. This is where the nuance around what defines slop is becoming more important. Of 1,000 senior marketers recently surveyed by creator-first social agency Billion Dollar Boy, 77 percent said they planned to divert a larger proportion of advertising budgets from traditional creator content — content produced exclusively by humans — to generative AI-powered creator content in the next 12 months. But what are the guardrails for what’s slop or not?
Brands are juggling more channels, features and faster trend cycles than ever, with the same finite teams, stressed Walters of Billion Dollar Boy. That’s why they’re investing in tools that make content creation and distribution more efficient. The appeal isn’t just volume; it’s also the promise of smarter personalization at scale, he noted.
“Consumers do see value when these tools are being applied in a meaningful way, where there’s an intention behind it, where the aim is to further creativity, or to provide value where they don’t see the same level of value,” said Walters. “Where we get into sort of this AI slop territory is where the use has not really been considered.”
Myth 4: Quality AI-generated content is easy to do
Slop has existed for a long time; AI has just made it easier to produce. You no longer need to be good at coding to generate a fake content farm filled with slop, for instance.
AI creator and filmmaker Omar Karim believes there is a misconception that AI is easy to do. “To push an AI to its creative edges takes a particular type of patience,” he said. “Some clips can take days to perfect; you need taste and curation to pluck needles out of digital haystacks. To call it slop misses the point that it’s actually a radically new form of creative.”
Like any medium, it has the potential to be slop, making it even more vital that creators embrace and find their voice with AI so this new wave of creative potential can really find its force, he noted.
Some argue AI content is more nefarious now (or more respectful of readers’ short attention spans, depending on how you view it). The core difference from the spam of pre-generative AI days is that the stakes for listicles and other MFA content were fundamentally lower, per Indigo. “You wasted the user’s time, making them dive a dozen clicks deep to find out what the Buffy [The Vampire Slayer TV show] cast looks like now,” she said. Now, with more sophisticated tools available and the barrier to entry far lower, the dividing line between what’s good enough and what’s slop is blurring. “Truthfulness doesn’t matter in the attention economy — engagement does,” she said.
For publishers and creators, true quality still takes editorial judgment, context, fact-checking, IP clearance and creative craft. AI can accelerate drafts or assets, but without human oversight it still produces errors or simple junk. Low-effort content may be cheap and fast, but sustained audience trust (and its monetization) comes from content that feels original, reliable and worth paying attention to.
“Finding a unique style and aesthetic is vital when it comes to using AI as your creative medium; that is the core of any distinctive creator, and it’s how you can see the differentiation between people who use AI and artists that use AI,” added Karim.
Myth 5: Creators know the risks and are prepared
A week after Sora 2 launched in early October, creators, brand marketers and legal experts gathered at workshops hosted by BDB at its London and New York headquarters to discuss some of the complexities around ensuring responsible adoption of AI in the creator economy, according to BDB’s Walters.
Risks that bubbled to the surface in conversation included creators signing away rights for short-term gains.
AI enables 87% of creators to produce more content, and 44% of consumers agree that AI has increased content volume, but more doesn’t mean better, per the same BDB report. Ultimately, consumers aren’t rejecting AI but how it’s often used. They respond when AI enhances creativity, not when it’s used to churn out repetitive, low-quality output.
Iesha White, director of intelligence at digital advertising watchdog Check My Ads, stressed that despite the Sora 2 watermark (which can be removed by other programs), publishers and content creators will find it harder to file takedown requests when someone’s copyrighted content is reused for advertising purposes without permission. It could also harm influencers and content creators who often sign time-based contracts for their likeness and sponsored posts, she added.