Spotify: droppin' them fake beats
Infinite streams of machine-generated music are here to assault your remaining senses.
A while back, I made a rather fatalistic prediction: that barring a radical intervention, in a decade or two, most interactions on the internet will be faked. It’s not pessimism; it’s that our aggregate capacity for human-to-human interaction is inherently capped. In contrast, the ability to generate human-like text, images, and audio is now almost infinitely scalable — and from customer support, to marketing, to cybercrime, there are powerful incentives to crank it up to eleven without being upfront with you.
The most popular article on this blog is still my 2022 entry about a machine-generated book I accidentally bought on Amazon. I made the discovery before the release of ChatGPT; since then, machine-generated books, articles, and imagery have swarmed the web. If you want to find real photos on Google Images, the before:2022-01-01 operator is a godsend. The contagion is also spreading to the physical world. For example, off-brand jigsaw puzzles for sale on Amazon now routinely feature misshapen children and animals:
A phenomenon that has gotten much less attention is generative music; machine-generated songs can be created on platforms such as Suno, possibly from nothing more than a single-sentence prompt outlining the desired style and lyrical themes. As with generated images, the technology is impressive; the results are not quite there, but if it’s just playing in the background, you will probably miss the cues.
And so, it was only a matter of time before this band started automatically playing for me on Spotify in the platform’s personalized “release radar” lineup:
“Huh”, I thought to myself. This is some seriously bland, autotuned symphonic metal. And what’s up with that ultra-generic description and the AI-quality cover image?
My suspicions aroused, I went to the band’s YouTube profile — 4.4k subscribers! — and discovered a series of videos consisting almost exclusively of stock footage and AI-generated images:
The channel’s description featured this bizarre, ChatGPT-esque passage:
“Disclaimer: Our channel is a space of respect and open-minded exploration into the themes of magic and the occult within the scope of the songtexts. We encourage positive engagement and community spirit in the comments section.”
The vocals sounded inconsistent, the lyrics were comically kitschy, and the appearance of a woman who I presumed was the lead singer changed from clip to clip.
At that point, I posted an exasperated rant on Mastodon; a reader by the name of @moirearty quickly uncovered this remark under one of the early clips:
Credit where credit is due: the author explained what’s going on when asked, and included a passing mention of AI in the below-the-fold summary on Spotify.
I wasn’t looking for any more trouble, but soon thereafter, another recommendation followed — this time, a made-up band with a four-fingered guitarist and no mention of AI anywhere:
It followed a very similar template of bad, inconsistent music combined with evidently auto-generated images and text:
The bottom line is that surreptitious non-human music is here. In the era of automatic playlists and algorithmic feeds, such cheaply-generated infinite shovelware can monetize your behavior without meaningful consent — and without giving you anything worthwhile in return.
I should note that I’m not a neo-Luddite: I don’t mind people using modern tools. I was there when the digital photography revolution happened, and I shed no tears for film. But if the bulk of the creative output comes from a black box, I don’t think it’s right to pretend it’s human work. And the societal externalities of minimal-effort generative content can’t easily be wished away.
PS. Although I included enough info to find the YouTube channel of the "band", please don't harass the creator. While I think they should be more forthcoming in labeling the content as synthetic, it's ultimately up to platforms to give us better control.
I'll also address this HN quip about my HN quip:
>>> Despite some HN quips, AI text detectors are pretty dependable
>> I have never tried AI text detectors, but my impression was that they were considered unreliable?
> They are. The author is wrong.
There is some HN lore rooted in public complaints from people who were sacked from their jobs because someone investigated their writing and it consistently failed such automated checks. The complaints fit neatly into several recurring narratives, so I don't think the reporting gets seriously challenged.
I was quite skeptical of the tooling, but having played with it extensively, I believe the apps tend to be unreliable mostly in the sense that there are prompting tricks to evade detection. I was not able to consistently trigger false positives on human writing - regardless of the writer's language proficiency or style. Further, for the handful of stories where the fired persons' writing was public, it was pretty clear to me that they weren't telling the whole story. In essence: the tech, while not perfect, is pretty good.
[Edit: in a comment below, Ruslan gives an example of a human email that triggered the tool for him, so caveat emptor.]
On the AI text detectors: to be fair, my experience differs. I've tried https://quillbot.com on a few emails I drafted recently without using ChatGPT (though I did try my best to make them formal), and the detector claimed the text was 100% AI-generated. Even when I typed up a less thoughtful email in a fairly casual style, it still reported the same: 100% AI-generated.
This could mean only one of two things: either AI detectors are indeed not reliable yet (at least this particular one)... Or I'm an AI! 🤖😬
For real though, it seems to me like it's biased when it stumbles upon cliché phrases like "we'll do our best to provide a solution most valuable to your company". If it finds a few of those in the text, it claims the whole thing is AI-generated, when in reality it's just a regular human brain distorted by years of corporate written communication :D
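The hunch above, that a detector might key on stock corporate phrases, can be illustrated with a toy heuristic. To be clear, this is purely a hypothetical sketch, not how QuillBot or any real detector actually works; the `CLICHES` list and `cliche_score` function are made up for illustration:

```python
# Hypothetical sketch: a naive "detector" that scores text by how many
# stock corporate phrases it contains. Real detectors use statistical
# models, but this shows how a phrase-based bias could misfire on
# perfectly human (if corporate-flavored) writing.
CLICHES = [
    "we'll do our best",
    "most valuable to your company",
    "please don't hesitate to reach out",
    "going forward",
]

def cliche_score(text: str) -> float:
    """Return the fraction of known cliches present in the text."""
    lowered = text.lower()
    hits = sum(1 for phrase in CLICHES if phrase in lowered)
    return hits / len(CLICHES)

email = ("We'll do our best to provide a solution most valuable "
         "to your company going forward.")
print(cliche_score(email))  # 3 of 4 phrases match -> 0.75
```

A human steeped in corporate email habits would score high here through no fault of their own, which is exactly the failure mode described in the comment.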