How to Tell If a Song Was Made Using Artificial Intelligence

This post is part of Lifehacker’s Artificial Intelligence Debunked series. We explore six different types of AI-generated media and highlight the common quirks, by-products, and telltale signs that will help you distinguish artificial content from human-created content.

Of all the types of AI-generated content, AI music may be the strangest. It doesn’t seem like you should be able to ask a computer to create a complete song out of nothing the way you ask ChatGPT to write you an essay, but it’s true: apps like Suno can create a song from just a quick prompt, complete with vocals, instrumentals, melody, and rhythm, some of it a little too convincing. The better this technology gets, the harder it will be to detect AI music when you stumble across it.

In fact, it’s already quite difficult. Sure, there are obvious examples (as good as they are, no one thinks Plankton actually sings all those covers), but there are plenty of AI-generated songs that are all but guaranteed to fool casual listeners. Instrumental electronic music, which already sounds digital to begin with, is particularly hard to pick out, and it raises plenty of ethical questions as well as concerns about the future of the music industry.

However, let’s put that aside and focus on the task at hand: recognizing AI music when you hear it in the wild.

How does AI music generation work?

It seems like magic: you describe a song in a few words, and an AI tool can generate the entire thing, vocals and all. But it’s really a product of machine learning.

Like all AI generators, AI music generators are built on models trained on massive amounts of data. These particular models are trained on musical samples, learning the relationships between the sounds of different instruments, vocals, and rhythms. Programs that create AI covers, for example, are trained on the voice of a specific artist: feed them enough samples of that artist’s voice, and the program maps that voice onto the vocal track you want to reproduce. If the model is well trained and you’ve given it enough voice data, the result can be a surprisingly convincing AI cover.
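
Suno’s actual system is proprietary and vastly more sophisticated, but a deliberately tiny sketch can illustrate the core idea: learn statistical relationships from training audio, then generate new output by sampling from them. The toy below uses simple bigram counts over quantized samples in place of a neural network:

```python
# A deliberately tiny illustration of the idea behind generative audio
# models: learn statistical relationships from training audio, then sample
# new audio from those learned relationships. Real systems like Suno use
# large neural networks over learned audio tokens; this toy uses bigram
# counts over crudely quantized samples.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": a noisy sine wave, quantized into 16 discrete levels.
t = np.linspace(0, 4 * np.pi, 2000)
training_audio = np.sin(t) + 0.1 * rng.standard_normal(t.size)
tokens = np.digitize(training_audio, np.linspace(-1.2, 1.2, 15))

# "Training": count how often each token follows each other token.
counts = np.ones((16, 16))  # add-one smoothing so every transition is possible
for prev, nxt in zip(tokens[:-1], tokens[1:]):
    counts[prev, nxt] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# "Generation": sample a new sequence token by token from the learned
# transition probabilities -- the model only knows what it was trained on.
generated = [tokens[0]]
for _ in range(500):
    generated.append(rng.choice(16, p=probs[generated[-1]]))
print(generated[:20])
```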

This is all an overly simplistic picture, but the important thing to remember is that these “new” songs are only possible because of a huge dataset of other sounds and songs. Whether the entire song was AI-generated or just the vocals, the models produce their output based on their prior training. While many of these results are impressive, there are some quirks you may notice if you pay attention:

Audio glitches and artifacts

Most generative AI products carry artifacts or inconsistencies that can give away their digital origin. AI music is no different: the audio these models generate can sound very convincing, but listen closely and you’ll hear some oddities here and there.

Take Suno’s “Ain’t Got a Nickel Ain’t Got a Dime” for example. This is the type of AI output that should rightfully scare you, as it will likely lead many people to believe it’s real. But focus on the vocals: the “singer’s” voice wavers constantly, but not in the way you’d expect from a human. It modulates, almost like Auto-Tune, but sounds more robotic than digital. Once you learn to listen for this sound, you’ll hear it crop up in many AI songs. (Though, I’ll begrudgingly admit, that chorus is catchy as hell.)

Here’s another example, “Stone,” which is perhaps even scarier than the first: there are moments in this song, especially the line “I know it, but what should I do,” that sound very real. But right after that line, you can hear some of the same modulation issues as above, starting with “oh my love.” Soon after, a stranger glitch occurs: the singer and band seem to hit the wrong note at the same time.

Perhaps even more telling, the second “chorus” falls apart. It starts with the same lyrics, right down to the line “I know it, but what should I do,” but halfway through it switches to “I know it, one day I,” drifting into the lyrics of a different verse. The AI also doesn’t seem to remember what the original chorus sounded like, so it makes up a new melody. This second attempt is nowhere near as realistic as the first.

Trust your instincts here: so many modern vocal tracks are digitally processed that it can be hard to distinguish these glitches and modulations from real human voices. But if something sounds just a little too uncanny, it could be a robot singing.
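
If your ears can’t decide, you can also look at the track. This is a minimal sketch, not a detector: it assumes you have a local copy of the song (the filename “suspect_song.mp3” is a placeholder) and the Python libraries librosa and matplotlib installed. Hard vertical seams or wobbling, smeared harmonics in the spectrogram can make the glitches described above easier to see than to hear:

```python
# Minimal sketch: visualize a track's spectrogram to look for generation
# artifacts (abrupt discontinuities, smeared or wobbling harmonics).
# Assumes librosa and matplotlib are installed; "suspect_song.mp3" is a
# placeholder filename.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("suspect_song.mp3", sr=None)
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

plt.figure(figsize=(12, 5))
librosa.display.specshow(D, sr=sr, x_axis="time", y_axis="log")
plt.colorbar(format="%+2.0f dB")
plt.title("Log-frequency spectrogram")
plt.show()
```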

Poor sound quality

If you use a modern streaming service and decent headphones, you’re probably used to extremely high-quality playback. AI-generated music, on the other hand, often sounds like a classic low-bitrate MP3. It isn’t crisp; instead, it’s often fuzzy, tinny, and flat.

You can hear what I mean in most of the samples Soundful offers: browse the options, and while you might not think twice hearing one in the background of a YouTube video, note that none of them is particularly clear. Loudly’s samples are slightly higher quality, but they suffer from the same effect, as if each track had been compressed into a low-quality format. Even many of Suno’s tracks, the best AI songs so far, sound like they were downloaded via Napster. (Although they do seem to have bass drops figured out.)
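
If you want a rough numeric check rather than an eyeball (or ear) test, spectral rolloff is one imperfect proxy for that compressed sound: heavily compressed audio often has little energy at the top of the spectrum. A minimal sketch, again assuming librosa and a placeholder filename; the 15 kHz threshold is an assumption for illustration, not an established cutoff:

```python
# Rough heuristic sketch: compressed or low-quality generated audio often
# has little high-frequency energy, which shows up as a consistently low
# spectral rolloff. One imperfect hint, not proof. "suspect_song.mp3" is
# a placeholder filename; the 15 kHz threshold is an assumption.
import librosa
import numpy as np

y, sr = librosa.load("suspect_song.mp3", sr=None)

# Frequency below which 99% of the spectral energy sits, per frame.
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.99)
median_rolloff = float(np.median(rolloff))

print(f"Median 99% spectral rolloff: {median_rolloff:.0f} Hz")
if median_rolloff < 15000:
    print("Little high-frequency content -- consistent with heavy "
          "compression or low-quality generation (not proof of either).")
```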

Of course, there is a whole lo-fi genre of music that deliberately strives for a “low-quality” sound. This is just one clue to weigh when deciding whether a track was created by AI.

Lack of passion

AI may be able to generate vocals, even relatively realistic ones, but it’s still not perfect. The technology still struggles to produce vocals with realistic variation. Call it a lack of passion.

Listen to the song “Back To The Start.” Beyond the generally robotic sound of the voice, it never goes anywhere. Most words are sung in the same tone: poppy and light, sure, but a little muted, almost boring.
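
You can even put a rough number on that flatness by estimating how much the pitch actually moves over the course of the track. A hedged sketch, assuming librosa and a placeholder filename; it treats the whole mix as the vocal line, whereas a real analysis would separate the vocal stem first:

```python
# Rough sketch: quantify "flat" vocals by measuring pitch variation.
# Treats the whole mix as the vocal line (a real analysis would separate
# the vocal stem first). "suspect_song.mp3" is a placeholder filename.
import librosa
import numpy as np

y, sr = librosa.load("suspect_song.mp3", sr=None)

# Frame-by-frame fundamental-frequency estimate over a typical vocal range.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]

# Express variation in semitones so the number is key-independent;
# a very small spread suggests monotone delivery (human or not).
semitones = 12 * np.log2(voiced_f0 / np.median(voiced_f0))
print(f"Pitch spread (std dev): {semitones.std():.2f} semitones")
```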

However, this is one area where AI results are improving, with Suno in particular producing vocals with realistic variation (though not always). Even Plankton has some passion in his voice when he loses it to Chappell Roan:

Another thing to listen for: AI singers often sound “out of breath,” with many words coming out not quite fully formed. I’m not sure what causes this phenomenon, but I’ve noticed it in a lot of AI vocals. Just listen to poor Frank Sinatra struggling through every word of his Dua Lipa cover:

Does the song even make sense?

When I write about AI, I keep coming back to one specific point: AI doesn’t actually “know” anything. These generative models are trained to find relationships, and their output is simply the result of the connections they’ve learned.

So these songs are not proof that AI actually understands how to make music or how music is supposed to work. Training doesn’t make these models good lyricists or expert melody writers; rather, a model generates content based on its prior training, without any critical judgment. The end result is often compelling on first listen, but play it again, or listen with a discerning ear, and things can fall apart. When you come across a song you suspect was made by AI, think through its individual elements: Do the lyrics actually make sense? Does the music flow logically?

You don’t need to be a music expert to spot these things. Consider the “Stone” example above: Suno seems to have “forgotten” what the original chorus was supposed to sound like, and ends up mangling the lyrics it laid out early on. The first verse is also a melodic mess, especially the quirky “without thinking about you” line. Not to mention, the verse is short and transitions almost immediately into the chorus. It’s impressive how “good” the AI’s output is, but that doesn’t make the song good.

Who “sings”?

AI-generated celebrity covers can be impressive, and they often sound just like the singers they imitate. But the fact that a song uses a famous voice can itself be a clue: if Taylor Swift covered Sabrina Carpenter, it would be news, not a stray YouTube video or Instagram reel. When a major artist actually releases music, you’ll find it on a streaming platform like Apple Music or Spotify, or at least get some kind of confirmation from the artist that they really recorded the cover.
