Music and AI: an unplayable score?


The year 2023 was marked by considerable media coverage of AI, notably with the arrival of ChatGPT and Midjourney and, in music, musical deepfakes and other AI covers. The song "Heart on My Sleeve" is the most resounding example: it features the voices of Drake and The Weeknd, although neither of them recorded it. Their voices were in fact imitated using AI, with a precision that makes them difficult to distinguish from the originals. The quality of the song, the popularity of both artists and the media bubble around AI made it go viral very quickly, before it was removed from streaming platforms.

“Heart on My Sleeve,” Ghostwriter (2023).

Some see this as a harbinger of the problems raised when a technical innovation develops erratically, before the law needed to regulate its use is in place. Others perceive the beginnings of an unprecedented transformation of every aspect of music: its practice, its production, its consumption, its economy, its social worlds and its aesthetics.

An increasingly accessible practice

The practice of musical AI, which grew out of research in computer music, has become increasingly accessible since the 2010s. Start-ups have turned this research into automatic composition tools and brought them to market. The GAFAM were quick to follow, with Google developing its Magenta suite of tools, then MusicLM, a text-to-audio model similar to Meta's MusicGen. These applications generate music audio files from text prompts, much as Midjourney or DALL-E generate images.
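To make the prompt-to-audio workflow concrete, here is a minimal sketch using the Hugging Face transformers interface to Meta's MusicGen. The checkpoint is the publicly released facebook/musicgen-small; the prompt text and output filename are arbitrary examples.

```python
# Minimal text-to-audio sketch with Meta's MusicGen via Hugging Face transformers.
# Requires: pip install transformers scipy torch
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# The prompt plays the same role as in Midjourney or DALL-E, but the output is sound.
inputs = processor(
    text=["a warm lo-fi hip-hop beat with mellow piano"],  # arbitrary example prompt
    padding=True,
    return_tensors="pt",
)

# Each generated token corresponds to a slice of audio; more tokens, longer clip.
audio_values = model.generate(**inputs, max_new_tokens=256)

# Write the waveform to disk at the model's native sampling rate.
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("generated.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```

A few lines of code and a sentence of text are enough to produce a short instrumental, which is precisely what makes these tools so accessible.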

Current tools are a continuation of the digital shift and make music production more accessible, but the black-box problem remains: how they work is still a mystery to the general public. Bernard Stiegler pointed to the proletarianization of knowledge in the digital age, and AI music is no exception: most users of these tools do not know how they are designed.

Symbolic music and audio generation

From a purely technical point of view, we distinguish two areas in musical AI: symbolic music generation and audio generation. Symbolic generation produces musical scores or sequences of notes; DeepBach, for example, automatically generates chorales in the style of Bach. Audio generation produces music directly as an audio file, as with the text-to-audio models Stable Audio or Riffusion. In both cases, the dominant approach relies on deep neural networks.
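The symbolic paradigm can be illustrated without any deep learning machinery. The toy sketch below is not DeepBach, which relies on deep neural networks, but a first-order Markov chain over MIDI pitch numbers; it shows the key point that a symbolic model manipulates sequences of notes rather than waveforms. The corpus is invented for the example.

```python
# Toy symbolic music generation: a first-order Markov chain over MIDI pitches.
import random
from collections import defaultdict

# Invented example corpus: MIDI pitch numbers of a short melody (60 = middle C).
corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# Build the transition table: each pitch maps to the pitches observed after it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start=60, length=16):
    """Sample a new note sequence by walking the transition table."""
    seq = [start]
    for _ in range(length - 1):
        choices = transitions.get(seq[-1])
        if not choices:  # dead end: no observed continuation
            break
        seq.append(random.choice(choices))
    return seq

print(generate())  # e.g. [60, 64, 67, 65, 64, 60, 62, ...]
```

Systems like DeepBach replace this simple transition table with a deep neural network trained on hundreds of scores, but the object being modeled, a sequence of symbolic note events, is the same.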

Audio generation has a wide range of applications: music creation, speech synthesis, noise removal, even audio restoration. Using these techniques, The Beatles were able to harness the voice of the late John Lennon to complete their final song. Until then, the quality of Lennon's demo recording had been too poor to use; source separation techniques made it possible to extract his voice from the surrounding noise.
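The separation used for the Lennon demo relied on a bespoke deep-learning model, but the underlying idea, splitting one mixed recording into distinct components, can be illustrated with a classical technique: librosa's harmonic/percussive source separation, which divides a spectrogram into sustained and transient parts via median filtering. A minimal sketch, with a hypothetical input filename:

```python
# Classical source separation sketch (not the Beatles' deep-learning pipeline).
# Requires: pip install librosa soundfile
import librosa
import soundfile as sf

# Load a mixed recording; "old_demo_tape.wav" is a hypothetical filename.
y, sr = librosa.load("old_demo_tape.wav", sr=None)

# Split into harmonic (sustained, e.g. voice and piano) and percussive
# (transient, e.g. clicks and drums) components via spectrogram median filtering.
y_harmonic, y_percussive = librosa.effects.hpss(y)

# Save each component as its own file.
sf.write("harmonic.wav", y_harmonic, sr)
sf.write("percussive.wav", y_percussive, sr)
```

Modern deep-learning separators follow the same input/output contract, one mixture in, several stems out, but learn far more selective masks, which is what makes isolating a single voice from a noisy mono cassette feasible.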

What consequences for the music sector?

What can we say about the transformation of the economics of music production? With companies now offering custom, automated music production, is the place of musicians under threat? As these technologies become ever more portable and accessible, producing professional-quality music keeps getting easier. Today, users of the Boomy app, for example, can select a few parameters and generate, in seconds, an instrumental that they can then rearrange, rework or record vocals over. BandLab's SongStarter application can generate a song from lyrics and emojis.

Automated, AI-enabled composition will very quickly lead to a massive influx of music, and music industry professionals are worried, particularly as financial analysts predict a dilution of market share. Real-time personalized music generation, offered by certain start-ups, is already at work in the video game and relaxation industries.

Furthermore, AI voice synthesis is already used by professional songwriters to place their work with high-profile artists. It was already common practice to hire performers who imitate the voices of star artists in order to sell them a song; today, record labels use AI to let star artists hear what a song would sound like with their own voice on it. This, in fact, was what "Heart on My Sleeve" was originally intended for. Finally, source separation would allow record companies that own albums recorded before multi-track mixing to sell individual parts of a song, the vocal alone or only the instrumental parts, for film or advertising, for example.

Legal issues

The legal issues raised by musical AI are questions of intellectual property, and copyright plays out here on two levels. Can the databases used to feed the algorithms include works protected by copyright? And can the result be considered a work of the mind?

Lawyers respond that intellectual property law protects the creation of forms, not a style or a way of creating. The AI thus merely borrows style without ever retaining the form of a work; it deconstructs content to extract trends and recombine them. On these questions, some institutions are moving ahead of the law: in 2019, Sacem officially recognized AIVA, a musical AI program, as the composer of the album Genesis.

The law seems to be addressing these questions only now. Universal Music Group and other music companies are suing the artificial intelligence company Anthropic PBC for using copyrighted song lyrics to "train" its software, the first major lawsuit in what is expected to be a key legal battle over the future of musical artificial intelligence.

Along the same lines, a bill titled the "No Fakes Act" has been introduced in the US Senate. It aims to prevent the creation of "digital replicas" of an artist's image, voice or visual likeness without their permission. Shortly afterwards, YouTube announced that it would give labels the ability to have "synthetic" content removed, while requiring creators of AI covers to declare their imitations.

The IFPI (International Federation of the Phonographic Industry) surveyed 43,000 people in 26 countries and came to the conclusion that 76% of respondents “believe that an artist’s music or voices should not be used or ingested by AI without permission,” and 74% think “AI should not be used to clone or impersonate an artist without permission.”

Imitation or creation?

What aesthetic value can we give to music created by AI? Works created by AI rest on machine learning techniques that operate by exploiting large volumes of data; they are therefore inseparable from the data sets used in their production.

In line with musical works built on borrowing, quotation or sampling, we must consider the reinterpretative, readaptive, even imitative dimension of AI-generated music. In the continuity of retromania, the term Simon Reynolds coined for the constant reinterpretation of the codes of past music in contemporary production, AI makes this reinjection of the past far more realistic. Witness that last Beatles song, using the voice of the late John Lennon recorded decades earlier. The concept of hauntology, developed by Derrida and taken up by Mark Fisher, resonates through these pieces that bring the past back into the present.

But how can a work produced by a tool that reproduces and imitates be original? If the AI is content merely to imitate, we can at most salute the fidelity of its imitation and the innovative character of the tool. And if the end of originality and musical expressiveness is often feared, the answer may be that these notions themselves need to be rethought: originality simply moves elsewhere in the creative process. Artists like Oneohtrix Point Never or Holly Herndon use these techniques with critical distance, as genuine means of serving their subjectivity, to offer singular works of strong emotional expressiveness.

Author Bio: Paul JF Fleury is a doctoral student in musicology at Rennes 2 University.
