What does this mean for music as we know it?
In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor – Udio – arrived on the scene.
I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.
The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.
After playing with Suno and Udio, I’ve been thinking about what it is exactly they change – and what they might mean not only for the way professionals and amateur artists create music, but the way all of us consume it.
Expressing emotion without feeling it
Generating audio from text prompts in itself is nothing new. However, Suno and Udio have made an obvious development: from a simple text prompt, they generate song lyrics (using a ChatGPT-like text generator), feed them into a generative voice model, and integrate the “vocals” with generated music to produce a coherent song segment.
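The staged pipeline described above – prompt to lyrics, lyrics to vocals, vocals integrated with generated music – can be sketched in schematic Python. To be clear, none of these function names correspond to Suno's or Udio's actual APIs; they are hypothetical placeholders showing only the flow of data between the stages.

```python
# Hypothetical sketch of the three-stage text-to-song pipeline.
# These functions are placeholders, not real Suno/Udio APIs;
# each stage would in reality call a large generative model.

def generate_lyrics(prompt: str) -> str:
    """Stage 1: a ChatGPT-like text model turns a short prompt into lyrics."""
    return f"[lyrics about: {prompt}]"

def synthesise_vocals(lyrics: str, style: str) -> str:
    """Stage 2: a generative voice model 'sings' the lyrics."""
    return f"[{style} vocal take of {lyrics}]"

def generate_backing(style: str) -> str:
    """Stage 3: a music model produces an instrumental track."""
    return f"[{style} instrumental]"

def make_song(prompt: str, style: str = "pop") -> dict:
    lyrics = generate_lyrics(prompt)
    vocals = synthesise_vocals(lyrics, style)
    backing = generate_backing(style)
    # Integration step: the vocals and backing are aligned into one
    # coherent song segment -- the "small but remarkable feat".
    return {"lyrics": lyrics, "vocals": vocals, "backing": backing}

song = make_song("a birthday song for my father", style="folk")
```

The point of the sketch is that the novelty lies less in any single stage than in the integration step that binds them into a coherent song.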
This integration is a small but remarkable feat. The systems are very good at making up coherent songs that sound expressively “sung” (there I go anthropomorphising).
The effect can be uncanny. I know it’s AI, but the voice can still cut through with emotional impact. When the music performs a perfectly executed end-of-bar pirouette into a new section, my brain gets some of those little sparks of pattern-processing joy that I might get listening to a great band.
To me this highlights something sometimes missed about musical expression: AI doesn’t need to experience emotions and life events to successfully express them in music that resonates with people.
An everyday language
Like other generative AI products, Suno and Udio were trained on vast amounts of existing work by real humans – and there is much debate about those humans’ intellectual property rights.
Nevertheless, these tools may mark the dawn of mainstream AI music culture. They offer new forms of musical engagement that people will just want to use, to explore, to play with and actually listen to for their own enjoyment.
AI capable of “end to end” music creation is arguably not technology for makers of music, but for consumers of music. For now it remains unclear whether users of Udio and Suno are creators or consumers – or whether the distinction is even useful.
A long-observed phenomenon in creative technologies is that as something becomes easier and cheaper to produce, it is used for more casual expression. As a result, the medium goes from an exclusive high art form to more of an everyday language – think what smartphones have done to photography.
So imagine you could send your father a professionally produced song all about him for his birthday, with minimal cost and effort, in a style of his preference – a modern-day birthday card. Researchers have long considered this eventuality, and now we can do it. Happy birthday, Dad!
Can you create without control?
Whatever these systems have achieved and may achieve in the near future, they face a glaring limitation: the lack of control.
Text prompts are often not much good as precise instructions, especially in music. So these tools are fit for blind search – a kind of wandering through the space of possibilities – but not for accurate control. (That’s not to diminish their value: blind search can be a powerful creative force.)
Viewed from my perspective as a practising music producer, things look very different. Although Udio’s about page says “anyone with a tune, some lyrics, or a funny idea can now express themselves in music”, I don’t feel I have enough control to express myself with these tools.
I can see them being useful to seed raw materials for manipulation, much like samples and field recordings. But when I’m seeking to express myself, I need control.
Using Suno, I had some fun finding the most gnarly dark techno grooves I could get out of it. The result was something I would absolutely use in a track.
But I found I could also just gladly listen. I felt no compulsion to add anything or manipulate the result to add my mark.
Many jurisdictions have also declared that you won’t be awarded copyright for something just because you prompted it into existence with AI.
For a start, the output depends just as much on everything that went into the AI – including the creative work of millions of other artists.
Arguably, you didn’t do the work of creation. You simply requested it.
New musical experiences in the no-man’s land between production and consumption
So Udio’s declaration that anyone can express themselves in music is an interesting provocation. The people who use tools like Suno and Udio may be considered more consumers of music AI experiences than creators of music AI works. Or, as with many technological shifts, we may need to come up with new concepts for what they’re doing.
A shift to generative music may draw attention away from current forms of musical culture, just as the era of recorded music saw the diminishing (but not death) of orchestral music, which was once the only way to hear complex, timbrally rich and loud music. If engagement in these new types of music culture and exchange explodes, we may see reduced engagement in the traditional music consumption of artists, bands, radio and playlists.
While it is too early to tell what the impact will be, we should be attentive. The effort to defend existing creators’ intellectual property protections, a significant moral rights issue, is part of this equation.
But even if it succeeds, I believe it won’t fundamentally address this potentially explosive shift in culture. Historically, claims that a new kind of music is inferior have done little to halt cultural change, as with techno or, long ago, jazz. Government AI policies may need to look beyond these issues to understand how music works socially, and to ensure that our musical cultures are vibrant, sustainable, enriching and meaningful for both individuals and communities.
(Writer is an Associate Professor at the School of Art & Design, University of New South Wales, Sydney; https://theconversation.com/)
© 2024 Hyderabad Media House Limited/The Hans India. All rights reserved.