
Sabotage, Settlements and Self-regulation: How Musicians Are Fighting Generative AI

June 8, 2025 8:00 am
Image by Frazao Studio Latino via Getty Images

In just a few years, AI has gone from a buzzword to a real threat for musicians. Some are fighting tech with tech; others are setting new legal benchmarks; and a few are teaming up with AI, but only on their terms.

At the heart of the AI-music debate is how generative models are built: by scraping massive amounts of existing music, often without permission. These systems ingest countless hours of audio to learn patterns and styles, producing new works that can sound uncannily close to real artists. Supporters argue this is no different to how humans learn (by being influenced), but some musicians see it as outright theft of intellectual property and a threat to the industry itself.

With minimal production costs, AI can churn out countless songs, flooding platforms and trading quality for quantity. Some see this as greater diversity; others worry it drowns out real creativity.


Sabotage

AI models rely on pattern recognition, making them vulnerable to targeted disruption. Tools like HarmonyCloak and Poisonify embed subtle, inaudible distortions in tracks, tricking AI training algorithms without affecting human listeners. Musicians like Benn Jordan have openly sabotaged their own tracks this way, though doing it well requires technical skill, computing power and time (up to two weeks for an album). As this space grows, a new micro-industry could emerge; developers are already offering “anti-AI” services to artists.
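The core idea behind these tools can be sketched in a few lines of Python. To be clear, this is a toy illustration of the general principle (adding low-level, high-frequency noise a listener is unlikely to notice), not HarmonyCloak's or Poisonify's actual method, which uses carefully optimised adversarial perturbations; all names and values here are illustrative.

```python
import numpy as np

SAMPLE_RATE = 44_100   # CD-quality sample rate (Hz)
PERTURB_DB = -60.0     # perturbation level, far below the music itself

def add_inaudible_perturbation(audio: np.ndarray, seed: int = 0) -> np.ndarray:
    """Add low-level noise concentrated above ~15 kHz, where most adult
    listeners hear little, but where a model training on the audio
    still picks up the extra energy. Purely a conceptual sketch."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(audio.shape)
    # Keep only high-frequency content by masking the spectrum.
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(audio.size, d=1.0 / SAMPLE_RATE)
    spectrum[freqs < 15_000] = 0.0
    hf_noise = np.fft.irfft(spectrum, n=audio.size)
    # Scale the noise to the target level relative to the track's peak.
    scale = np.max(np.abs(audio)) * 10 ** (PERTURB_DB / 20)
    hf_noise *= scale / (np.max(np.abs(hf_noise)) + 1e-12)
    return audio + hf_noise

# One second of a 440 Hz tone standing in for a real track.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
track = 0.5 * np.sin(2 * np.pi * 440 * t)
poisoned = add_inaudible_perturbation(track)

# The waveform barely changes from a listener's point of view.
print(np.max(np.abs(poisoned - track)))
```

Real tools go much further than this, shaping the perturbation against specific training pipelines, which is part of why doing it well demands the computing power and time the article describes.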

Settlements

Major labels like Universal, Sony, and Warner are suing AI music startups, claiming their material was used without permission to train models. The RIAA leads the push for damages and licensing agreements so artists get paid when their work is used in AI-generated content. Legal responses differ worldwide: US courts are testing the limits of "fair use," while the UK is debating opt-out laws. Streaming platforms like Spotify also battle artificial streaming by bots, but enforcement remains patchy.

In Australia, groups like ARIA and the Australian Copyright Council are calling for clearer rules and fairer compensation. Under current law, only works created by humans are protected. But if an AI directly copies a substantial part of a work, that’s still likely to be copyright infringement.


Self-regulation

Self-regulation might sound small-scale, but when thousands of musicians start setting their own rules, it builds momentum. Artists like Holly Herndon are opting in on their own terms, building AI models of their own voices and publishing on platforms that allow for creative control and consent.

Most importantly, musicians are setting ethical boundaries for how they use and support AI. Many advocate for a human-first future in creativity and choose not to engage with platforms that won’t disclose training data.

Initiatives like Have I Been Trained offer a reverse search of training datasets and an opt-out mechanism via a “Do Not Train” registry. However, opting out doesn’t stop organizations from scraping work anyway.

For others, it’s a chance to break these systems. A notable case involves Michael Smith, who was charged with wire fraud conspiracy, wire fraud, and money laundering conspiracy after allegedly generating countless AI-created tracks and using bot accounts to stream them, amassing over $10 million in royalties over seven years. 


What’s striking is how long this scheme ran before it was shut down. Platforms like Spotify have terms of use that explicitly ban artificial streaming (which includes using bots to manipulate play counts) but enforcement is another story. Streaming fraud is hard to catch in real time, especially when done at scale and disguised within legitimate traffic. Many believe these platforms were slow to act because they benefit from more content and streams, regardless of the source.

In any case, resisting the technology and engaging with it ethically aren’t mutually exclusive.

Closing Thoughts

As AI continues to embed itself deeper into the world of music, the questions grow more complex and more human. What does it mean for something to be truly “original” in an age when machines can remix the sum of human culture in seconds? Is a song less valuable because it was composed by code rather than lived experience? And if AI can democratise creation, does that threaten or enrich the art we value?

These aren’t questions with simple answers. They cut to the core of how we define creativity, ownership, and the purpose of art production and consumption. As artists, listeners, and citizens, we each have to decide what kind of musical future we want to support.
