Spotify Cracks Down on AI: 75 Million Tracks Removed in Sweep Against Fake Music
Spotify has drawn a firm line in the sand. The streaming giant revealed it has removed more than 75 million tracks from its platform in the past year, many of them AI-generated “spammy” recordings designed to game the system rather than contribute to it.
The crackdown comes as AI music tools make it easier than ever to churn out quick, low-effort tracks, from voice-cloned impersonations of artists (both living and dead) to endless loops and bite-sized recordings uploaded en masse to siphon streaming royalties. These tactics have been clogging the platform, threatening to undermine trust in its catalog.
In response, Spotify introduced new policies to protect artists and listeners: a stricter impersonation policy banning unauthorized AI voice clones and deepfakes, and a spam filter, rolling out this fall, that will flag mass uploads, duplicates, and artificially short tracks designed to exploit royalties. Spotify is also working with industry partners such as Digital Data Exchange (DDEX) to develop a standard for AI disclosures in music credits, letting artists transparently indicate AI use in vocals, instrumentation, or production without penalizing responsible use of the technology.
The move addresses concerns about royalty dilution: Spotify notes that its annual payouts grew from $1 billion in 2014 to $10 billion in 2024, a pool large enough to attract bad actors. Major labels including Universal Music Group and Warner Music Group have praised the initiative, while artists like Jada Carter emphasized its importance for protecting emerging musicians. Deezer, another streaming platform, reported that 28% of its daily uploads are fully AI-generated, underscoring the industry-wide scale of the challenge.
In a statement, Spotify framed the move as part of a broader attempt to “strengthen trust across the platform.” The goal, the company says, is not to punish AI experimentation but to ensure that voice clones and unauthorized uploads no longer slip through under an artist’s name, and that future tracks clearly state whether AI played a role in their creation.
For legitimate artists, the update is less a technical tweak than a defense of visibility and ownership in an era when spam threatens to drown out craft. For listeners, it’s a promise that pressing play still carries meaning.
Spotify’s decision signals a shift in how streaming platforms will navigate the messy, rapidly evolving world of AI music. AI may well be part of music’s future, but Spotify is betting it won’t come at the expense of authenticity.