Ghostwriter in the Machine: The State of Generative AI Music in 2025, and Beyond

HAL, open the slop-bay doors

Article photo edited by author, original photo: Lars Frantzen, licensed under CC BY-SA 4.0

All of this has been very embarrassing for the sci-fi authors of our recent history, hasn’t it. We were all so wrapped up in guessing whether computers will learn to hate us and kill us, or by some reprieve learn to love us and have sex with us, that no one thought to ask if the computer could make a passable beat. Well, it’s possible that one author somewhere might have predicted it, but I can’t check in any libraries right now. To circumvent training data-scraping, this article is presently being scratched into an oak tree which stands near the perimeter of the Unabomber-esque cabin I have moved to, and I am using the power of positive thinking to will the words onto our website. I’m hoping this works, as it will be the only means I have left to contact the outside world, and I’ll need to ask someone soon how to empty a septic tank safely.

The following article about the state of the music industry and AI as of 2025 uses information I have gleaned from reading a combined weight of 300 kg of tea leaves, and any pain expressed about the state of things can be chalked up to just how little I enjoy drinking tea. Speaking of septic tanks filled with tea:

The UK Government 

While the UK’s Labour government have been busy kowtowing to fascism as of late, they’ve also found some free time to be desperately horny to give AI companies free rein to do whatever they please with copyrighted IP. This is in keeping with the great British tradition of doormatting to private companies. The currently proposed opt-out model would mean artists would individually have to ask each new private equity-funded startup that pops up, nicely, if it could stop stealing their shit. And while I’ll always have time for it, I don’t think anyone else has time to make their life a whack-a-mole of polo-shirted business-school misanthropes who keep the CliffsNotes page of The Fountainhead in their bookmarks bar.

The pushback to all this has put our nation’s jingoist rags in lockstep with half of the surviving Beatles. Out of this furore has come the protest album Is This What We Want?, co-crediting Kate Bush, Damon Albarn, Hans Zimmer, and a thousand other artists with 47 minutes and 17 seconds of total silence. Beyond the existentialist point about what would be left musically in a Britain where artists aren’t properly protected, it has the added benefit of demonstrating that musicians, for all their egos, can work together if they all simply shut the fuck up for a while.

So, what vision of the musical future is this all at the cost of? What do the promised soulless chimeras, born from the stolen blood of the unwilling, look like? Well, we’ve had our first major glimpse of this future this past week, and so far it looks like what you’d get if Blossoms spent all of their tour budget on Dermablend.

I Have No Mouth, and I Must Make Shit Dad Rock

The Velvet Sundown (Photo credit: The Velvet Sundown (and the souls of the damned consumed to give them corporeal form))

The Velvet Sundown are allegedly total constructs of a computer’s imagination. Alternatively, they are misunderstood forerunners of disappearing mic cable technology, and a great source of representation for people who only some days wake up with five fingers on each hand. These very real boys racked up a couple hundred thousand streams on Spotify alone before any Blade Runners noticed, but not before their directionless psych-rock found its way onto prominent TV soundtrack and Vietnam War music playlists. Kissinger didn’t generate enough horrors himself, apparently. If you aren’t listening out for the confusing directionlessness of their tracks, or the overly consistent vibrato, it frankly is possible to come across some of their songs in the wild and believe they’re the real product of deeply uninspiring, yet human, minds.

This, more than any guess I’ve seen before, is the bar for what I believe will come pouring through the floodgates and onto your favourite streaming services. Why would it ever get better than this?

Could a great song be made with AI alone? Like, fucking maybe, but the quality bottleneck there is the tepid prick who is writing the prompts to the magical music machine in the first place. Why would the vast, vast majority test the output beyond what is passable, if, to start with, their dim spark of creativity only moves their fingers to type a comma-separated list of nouns and adjectives? Why would going for quality beyond what sounds passably human ever matter to someone who wants the absolute path-of-least-resistance end result? The least amount of vulnerability? You can’t merely hear beyond that; you need to be able to listen too.

If that’s elitism, then that’s elitism. The paths to music creation have been so beautifully broken down by technology in the last century that the barrier to entry for creating the next hit song is nowadays just a half-decent laptop, your own time, your intentions. This is a clear leap forwards in accessibility, yes. But accessibility ceases to be an innate force for good in this misappropriation of the concept, where what’s being torn down is not a barrier at all, but our own human expression. The finished product of the art is nothing if it’s arrived at with such extreme dilution, or outright circumvention, of intention, of decision, and of vulnerability.

Shall we come back to that in a bit?

Let’s come back to that in a bit. But even then, that was an argument which assumes somewhat good intentions on the part of AI “““““creatives”””””, and the sheer quantity far more suggests that this is a tide of cynical cash-grabbing attempts, pure and simple.

Deezer believes that around 18% of the music uploaded to their platform each day is fully AI-generated, and that’s only what they can detect. That is quite a lot! We’re only a year and a bit into all this! While Deezer have flagged The Velvet Sundown as probable AI creations, Spotify have as yet given no indication to their customers that anything is amiss.

As legislative proposals continue to get ping-ponged between our country’s favourite cockroaches, the other big players throwing their hats into the ring against the government proposals are the big three mega-labels, Universal Music Group, Sony Music Entertainment, and Warner Music Group, who between them account for 76.7% of record industry market share. Though yes, it makes sense that the profit motive would align these big old money-makers with the protection of artist IP, this should not be mistaken for AI scepticism on their part.

Almost a year after the big three smacked a lawsuit on AI music generators Suno and Udio, they are opening up their minds and wallets to licensing agreement talks. With the precedent of previous industry deals where press releases bandy about the term “artist-centric principles”, we’ll have to wait and see what share of this pie will be left for those artists still sitting outside the warm conglomerate embrace.

For now, it remains up in the air whether these companies of doubtful artistic merit will find any sort of deal with the big dogs of the music industry, and in the meantime they’ve got the industry’s relative chihuahuas biting at their ankles.

With all this in mind, let’s come back to something a little more fundamental to what’s going on here. If a generative AI company even wanted to, or were compelled by law, could they get an artist’s music out of their system?

The answer is, and always has been, no.

Arguing over whether we should be able to tell AI companies to get our data out of their training sets, while a noble question to ask, lets them sweep under the rug one limitation they’d prefer to keep quiet: they can’t get your data out. As of now, and without demolishing their own business model, they simply don’t know how to.

When generative AIs are doing their thing, they aren’t leafing through an index of data to reference: there’s no library where any given book can simply be removed, no call list where your number can be deleted with a keypress. Training data starts as discrete files, yes, in this instance audio files, but the diffusion process breaks each one down into its patterns, into its similarities and differences to the rest of the dataset.

To grossly, grossly oversimplify, these shape a gargantuan stencil of ever-increasing complexity, with which portions of its patterns can be retraced whenever needed. However much this stencil evolves through training and retraining, the influence of your data is fixed in there, in a sea of terabytes upon terabytes of other data points. That is not to say it has necessarily been blended beyond recognition, as similar models continue to recall perfect copies of real-world, discrete and specific information.

Deleting an audio file is easy. Working out where, in an ocean of numbers assigned to parameters, the potential for an exact recreation of your art exists has so far proven simply impossible.
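If you want the shape of that problem in code, here is a deliberately crude toy sketch: not a real diffusion model, not anyone’s actual system, and every name in it made up. The “model” is just one shared list of numbers that every training example nudges a little, so there is no per-song record left to delete afterwards, only retraining from scratch without the song.

```python
# Toy sketch only: a "model" that is one shared weight vector, nothing like a
# real diffusion model. All names and numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
songs = {name: rng.normal(size=8) for name in ["song_a", "song_b", "song_c"]}

weights = np.zeros(8)  # the entire "model" lives in this one vector

# "Training": every song nudges the same shared weights a little.
for _ in range(1000):
    for clip in songs.values():
        weights += 0.01 * (clip - weights)

# There is no per-song entry inside `weights` to delete; every song's influence
# is blended into the same numbers. The only way to "remove" song_a is to
# retrain from scratch without it.
weights_without_a = np.zeros(8)
for _ in range(1000):
    for name, clip in songs.items():
        if name != "song_a":
            weights_without_a += 0.01 * (clip - weights_without_a)

print(weights, weights_without_a, sep="\n")
```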

Well, there is one solution here: roll back the AI model to before the music went in. And this can’t ever be allowed to happen. Training these models is so expensive that AI companies would cease to exist if they had to roll back every time they infringed on someone’s copyright, even if that is the right thing to do. What if your life’s work and livelihood was absorbed three months ago? That’s a couple of billion dollars off the valuation of a private company, a huge knock to their standing in this perpetual AI arms race, and your willing consent, the ownership of your work, will never be prioritised above that.

This is what’s at stake for AI companies right now, the bind they have placed themselves within, and why their survival depends on them carrying on doing exactly what they want. This is why they are backed into a corner, and it’s either fighting tooth and nail to do whatever they like, or conceding licensing agreements to the boys with enough money to ask in a way they notice. Notice that in either of their solutions, nobody gets to say no to their art being misappropriated if it happens to find its way into the dataset. Once they’ve got you, they’ve got you.

So this is where we stand, and what’s left to find out. 

What’s left to find out

Generative AI, in its many forms, is a modern miracle. But its application is being steered by the same beast we have come to know and love, Silicon Valley company valuations. Yes, capitalism is often a vehicle for art’s creation, but what puts the two at odds, art and profit, is capitalism’s blindness to the value of art beyond its potential for profit.

John Berger writes in Steps Towards a Small Theory of the Visible: “If the pursuit of profit is considered as the only means of salvation for mankind, turnover becomes the absolute priority, and, consequently, the existent has to be disregarded or ignored or suppressed”. Art in all its forms stands defiant against this, and demands a value beyond profit, a value for life.

This is why this seeming end-point of art’s tools, a machine which makes the art with the human circumvented, was one of the first things profit could imagine for this technology. The question being asked of us is not whether the machines can make art now, or whether the machines are finally alive. It’s, once again, “are we as dead as the machines?”.

I have faith that we aren’t resigned to accepting that here. We don’t even have to dismantle all of capitalism to save music juuuuuuust yet (though I am free next Tuesday if you are). What they still need on board is the audience, and the one simple thing I can count on with my life, the thing that refuses to let me lie down and accept the inevitability of AI music swallowing up culture, no matter its volume, is this: music fans are moralising pricks, obsessed with authenticity. Perhaps this is what all that training in paranoid nepo-baby and ghostwriter accusations has been for. One quick extra note: stop ceding inevitability to a shit future which hasn’t even happened yet.

If music audiences refuse AI music, if you refuse it too, then we can leave the machines to talk to themselves, to hear only their own soulless chatter. 

And we get to keep listening to each other.