There’s a peculiar dread coursing through the creative industries right now, the kind that doesn’t announce itself dramatically but seeps in quietly, like water damage. It’s November 2025, and we’re living in a world where a fake band called Velvet Sundown, completely fabricated by an algorithm, amassed over 500,000 monthly listeners on Spotify before anyone realized the members didn’t actually exist. Meanwhile, Spotify’s own detection tools suggest that 70% of AI-generated tracks on the platform are streamed fraudulently, often by bots clapping for other bots in an infinite loop of creative emptiness. And somewhere in a film festival, a director watched an AI-slop movie with lip-sync issues, impossible hands, and children rendered with unsettling cartoonishness, realizing that this was the moment culture officially hit the uncanny valley.
The discourse is intensifying, and it’s no longer confined to industry insiders wringing their hands over Zoom calls. The question everyone’s asking, whether AI content represents genuine creative limitation or merely serves as an excuse for artists lacking imagination, cuts deeper than it first appears. Because the real issue isn’t whether the technology works. It does, terrifyingly well. The issue is what happens to creativity, commerce, and the soul of art itself when effort becomes optional and authenticity becomes quaint.
The Symptom Everyone’s Seeing
Call it “AI slop”; the term has infected the culture with remarkable speed. It’s not just about quantity, though that alone is staggering: AI-generated domains jumped from 17,124 in June 2024 to 108,270 in May 2025. It’s about a specific aesthetic of soullessness that audiences have learned to recognize instinctively: the hollow melodicism, the derivative chord progressions, the way an AI-generated film looks just off enough to make your brain itch.
What’s particularly striking is that young people, supposedly the digital natives who should embrace this tech uncritically, are actively rejecting it. A recent study found that 55% of Gen Z participants aged 16-24 favored human-written articles over AI-generated alternatives, finding them more engaging and emotionally resonant. This generation, the ones who grew up swiping through endless content, whose attention spans supposedly shattered into digital confetti, has developed a sophisticated BS detector. They can feel inauthenticity the way previous generations could smell stale air.
The real evidence of AI slop’s cultural penetration came when John Oliver’s Last Week Tonight ran a segment on AI music that accumulated over 4.4 million YouTube views, featuring absurdist travesties like an AI song titled “I Caught Santa Claus Sniffing Cocaine.” The fact that this wasn’t even surprising anymore, that the internet barely shrugged, speaks to something more troubling than a technology problem. It speaks to cultural exhaustion.
The Creativity Argument: Tool vs. Crutch
Here’s where the conversation splinters into genuine disagreement. One camp argues that AI isn’t fundamentally different from Photoshop or synthesizers—tools that initially faced fierce resistance before becoming integrated into creative practice. Digital artists now routinely use AI as a collaborative partner, and certain production houses have begun equipping directors with AI capabilities to democratize filmmaking, potentially opening doors for creators who couldn’t previously afford expensive equipment.
But the counterargument has teeth. Mac DeMarco, a musician who’s watched the industry transform over nearly two decades, put it succinctly: “We’re in a weird place. The most important part of art is the human element.” The distinction DeMarco’s making matters. There’s a categorical difference between using a tool to realize your artistic vision and using a tool to bypass it entirely.
In 2024, over 200 music artists, including Billie Eilish, J Balvin, and Nicki Minaj, signed an open letter declaring the irresponsible use of AI in music to be “an assault on creativity.” The grievances weren’t merely protectionist; they centered on exploitation and deception. When AI training data consists of millions of copyrighted songs ingested without permission or compensation, when artists’ voices are deepfaked into songs they never recorded, when dead musicians are resurrected algorithmically to generate new content, the question shifts from “is this a valid tool?” to “is this ethical?”
That distinction is crucial, and it’s where the narrative gets complicated. The real problem isn’t that creative people are using AI; it’s that uncreative people are using AI to flood the zone, betting that quantity will eventually overwhelm quality in algorithms designed to prioritize engagement over artistry.
The Cultural Moment: Ozempic for Creativity
The cultural critic Shirley Marschall recently described AI’s effect on media as the “Ozempic of media”—slimming everything down, reshaping it artificially, removing the effort, delivering quick results with minimal input. It’s a brutal metaphor, but accurate: just as Ozempic offers a pharmaceutical shortcut to weight loss without addressing underlying metabolic or lifestyle issues, AI offers a technological shortcut to content creation without addressing the underlying absence of genuine creative impulse.
What’s insidious is the ecosystem that develops around this. When Spotify detects fraudulent AI streams, when production companies establish AI-specific rosters, when the platforms themselves profit from volume regardless of quality, the incentive structures invert completely. There’s money in slop, particularly when the slop can be generated at near-zero cost and monetized through mechanisms designed to reward volume.
The music industry is experiencing this viscerally. According to recent analysis, popular music has become increasingly homogenized, with studies of Billboard hits showing decreased harmonic complexity, shrinking dynamic range, and more formulaic structures. But here’s the uncomfortable part: this trend predates widespread AI adoption. The culprit isn’t merely artificial intelligence; it’s algorithmic curation itself. Spotify’s own recommendation systems incentivize music designed for “passive consumption” with shorter intros, fewer bridges, and predictable hooks. AI is just the logical endpoint of that system.
Similarly, the film industry faces below-the-line workers confronting an existential question. When virtual production tools and AI can handle storyboarding, makeup design, editing, and production design, what happens to the grip, the makeup artist, the storyboard artist who spent years developing craft? This isn’t hypothetical; it’s happening now, in 2025, with production companies experimenting with AI-generated visual effects and creative direction tools becoming increasingly refined.
Gen Z’s Suspicious Intelligence
Interestingly, the generation most comfortable with technology is simultaneously the most skeptical of technology’s creative output. Gen Z uses AI extensively (79% report having used AI tools), yet 41% report anxiety about it, and nearly half worry that AI will impair their ability to think critically.
This younger cohort represents something important: they’ve never known a world without algorithmic mediation, so they possess an intuitive understanding of its limitations. They recognize that algorithms are designed by humans with commercial incentives, that engagement metrics aren’t the same as meaning, and that authenticity, however performative, still matters more than efficiency.
What Gen Z is modeling, perhaps inadvertently, is a kind of creative responsibility. They’re drawn to creators who demonstrate intentionality, who make work that reflects genuine human experience rather than algorithmic probability distributions. In an interview about AI music, DeMarco also noted something crucial: “I think it’s about intention. If people want to climb the mountain, I guess you use the tools you can to get up there. I don’t know what you’re gonna find when you get up there, but hopefully you’re happy.”
That final phrase, “hopefully you’re happy,” captures something the AI-slop debate often misses. Creation, at its best, is supposed to satisfy something in the creator, some itch that can’t be scratched through mechanical means.
The Real Question
So is AI slop an excuse for a lack of creativity, or is it the inevitable result of systems that have already abandoned creativity in favor of commerce? The honest answer is both, and neither.
The technology isn’t inherently the villain. AI is, fundamentally, a tool, a very sophisticated one capable of replicating patterns with uncanny accuracy, but a tool nonetheless. Some artists will use it to enhance their work, to handle tedious tasks, to explore aesthetic possibilities they couldn’t access before. That’s legitimate.
But there’s a chasm between “AI as creative tool” and “AI as creative substitute,” and increasingly, commercial incentives are pushing everyone toward the substitute. When a startup can generate a thousand songs in a day and monetize them through bots, when a production company can render a scene in minutes rather than weeks, the pressure to adopt shortcuts becomes overwhelming. The question of whether AI is enabling laziness becomes academic when the financial incentives ruthlessly reward laziness.
The real cultural work ahead isn’t technological—it’s structural and legal. It’s about copyright enforcement protecting artists whose work was used to train AI without permission. It’s about regulatory frameworks that distinguish between AI as a tool and AI as an independent content creator. It’s about streaming platforms being held accountable for algorithmic fraud. It’s about record labels negotiating fair terms for artists whose voices are being deepfaked.
Musicians won’t solve this by refusing to engage with AI—the technology will continue regardless. Filmmakers won’t solve it by abandoning efficiency tools. But audiences, creators, and policymakers can collectively establish that certain lines exist: between innovation and exploitation, between augmentation and replacement, between democratization and theft.
In the end, the AI slop isn’t appearing because artists lack creativity. It’s appearing because someone built a machine that makes mediocrity profitable, then was shocked when the market responded by creating an infinite supply of mediocrity. The blame for that belongs not to the algorithm, but to the systems and incentives that encourage its mindless proliferation.
The question isn’t whether artists will use AI tools. They will, and some uses will be beautiful and innovative. The question is whether culture itself will develop the antibodies to resist the kind of aestheticized emptiness that masquerades as content. Whether audiences will continue to prefer authenticity over convenience. Whether, as one critic put it, we’ll ultimately recognize that “AI art does not provide anything that humans cannot already do better.”
We’re not there yet. We’re still in the liminal space where everything feels possible because the rules haven’t been written. But they will be written either by the people who care about art or by the people who just care about profit. Right now, it’s uncomfortably unclear which group will prevail.