See How This AI Improvised With A Musician To Create A Musical Dialogue In Real-Time



A performer, David Dolan (Guildhall School of Music), and an AI created by programmer-composer Oded Ben-Tal (Kingston University) created a musical dialogue together at the Guildhall School of Music in January 2022

David Dolan and Oded Ben-Tal.

On February 3, 2023, a musician and artificial intelligence (AI) created by a composer-programmer took the stage at the Guildhall School of Music and Drama in London to create a musical dialogue.

The performance was sponsored by the Arts and Humanities Research Council (AHRC) and based on the AI research of Dr. Oded Ben-Tal from Kingston University, who leads an AHRC-funded research project that looks at the relationship between music and data.

Ben-Tal was joined on stage by performer-improviser Professor David Dolan from the Guildhall School. The live 75-minute performance featured musical improvisations between Dolan and the AI. The result was a musical dialogue between them where the computer listened to Dolan performing and then contributed musical material in real-time.

Ben-Tal says the computer handled the moment-to-moment generation of music automatically, and that the AI system performs duo improvisations in real time within tonal, modal and extended-tonal contexts.

“When performing this, I monitor the system as it performs with David and adjust things, making executive-type decisions about how the system will operate over larger time spans, say above 30 seconds – the larger-scale narrative of an improvisation,” said Ben-Tal. “The output is a duo improvisation between David, who performs on the piano with no pre-planned elements, and a semi-autonomous AI musical performer that produces sound directly through speakers.”

Ben-Tal says the system has two main parts – one that listens and the other that generates the music.

“We recorded David improvising solo on the piano, and I used these recordings for the initial development of the system, iteratively improving how the system ‘listens’ and how it responds,” said Ben-Tal. “Both the listening and response parts encode aspects of my musical thinking as a composer – the things I want to hear because I judge them to be beautiful, expressive and imaginative.”
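As a rough illustration of that two-part shape, the Python sketch below pairs a “listener” that accumulates musical events inferred from the pianist with a “responder” that recombines recently heard material into an answering phrase. The names, data structures and crude random recombination are illustrative assumptions made for this article, not the actual system Ben-Tal built.

    from dataclasses import dataclass
    import random

    @dataclass
    class NoteEvent:
        pitch: int       # MIDI note number
        duration: float  # seconds
        velocity: int    # loudness, 0-127

    class Listener:
        """Accumulates musical material inferred from the human performer."""
        def __init__(self) -> None:
            self.heard: list[NoteEvent] = []

        def hear(self, event: NoteEvent) -> None:
            self.heard.append(event)

    class Responder:
        """Builds a short reply by recombining recently heard material."""
        def respond(self, heard: list[NoteEvent], length: int = 8) -> list[NoteEvent]:
            if not heard:
                return []
            recent = heard[-32:]                         # favour the most recent phrases
            picks = random.choices(recent, k=length)     # recombine rather than replay
            shift = random.choice([-12, -5, 0, 7, 12])   # occasional transposition
            return [NoteEvent(n.pitch + shift, n.duration, n.velocity) for n in picks]

    # One exchange of the dialogue: the system "hears" a phrase, then answers.
    listener, responder = Listener(), Responder()
    for note in [NoteEvent(60, 0.5, 80), NoteEvent(64, 0.5, 75), NoteEvent(67, 1.0, 85)]:
        listener.hear(note)
    print(responder.respond(listener.heard))

In the real performance the response is rendered as sound through speakers and shaped by Ben-Tal's live adjustments; the sketch only shows where the “listening” and “responding” responsibilities sit.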

“I invited Dolan to contribute towards an interdisciplinary workshop that brought in researchers from different fields – music, computer science, engineering, neuroscience,” said Ben-Tal. “David heard about my past work using machine listening to enable real-time musical dialogue between human performers and computer systems, and he asked if we could work on something together.”

But Ben-Tal said David was convinced he would prove that a computer cannot improvise with human musicians.

“I think he was pleasantly surprised when we met in the studio, hearing musical output from the computer which made musical sense,” said Ben-Tal. “The fact that these sounds were musically relevant and meaningful meant that we were able to meet almost every month to improvise together and develop this collaboration.”

Dolan says that, for his part, the reason he participated was, and still is, curiosity.

“I have performed classical improvisations with world-class colleagues, but never with a non-human partner,” said Dolan. “As Oded says, before we started I found the idea impossible, and the fact that the computer/AI was able to produce musical lines with musical meaning and enough common elements for the duo to make sense was intriguing.”

New technology, new musical opportunities

Ben-Tal believes that new technology can bring new musical opportunities, just as the piano, tape machines and vinyl records did before.

“AI has been applied to music in the past, but developments in computing power and computing techniques make AI much more interesting now,” said Ben-Tal. “Of particular importance for this collaboration are developments in music informatics – the ability to extract musically-relevant data from sound.”

“We refer to this as machine listening, but like the term machine learning, the relationship to the human type is very partial,” adds Ben-Tal. “Machines don’t learn like humans. Machines don’t listen like humans.”

For the performance, they used simple microphones to get the sound of the piano into the computer. The AI analyzed that signal, drawing musical inferences and trying to make musical sense of the input.

“This data conversion process – from audio signal to musical material – is analogous to our own listening: listeners’ brains transform sound waves into music,” said Ben-Tal. “The AI’s version is much less sophisticated; much more primitive and simplistic.”

“But the material inferred from David, in real time, is used by the AI to generate musical responses. The material is recombined in inventive ways to create a sense of musical dialogue,” said Ben-Tal.
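To give a flavour of what that conversion from audio signal to musical material can look like, here is a hedged sketch using the open-source librosa library: it loads a recording, detects note onsets, and pairs each onset with an approximate pitch. The file name is a placeholder, and the monophonic pitch tracker used here is far cruder than the machine listening described above – real piano material is polyphonic.

    import librosa
    import numpy as np

    # Load a (placeholder) solo piano recording at its native sample rate.
    y, sr = librosa.load("solo_piano.wav", sr=None)

    # Detect note onsets, in seconds.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    # Rough fundamental-frequency track (monophonic, unlike real piano playing).
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    frame_times = librosa.times_like(f0, sr=sr)

    # Pair each onset with the nearest pitch estimate to form simple "musical material".
    events = []
    for t in onsets:
        i = int(np.argmin(np.abs(frame_times - t)))
        if voiced[i] and not np.isnan(f0[i]):
            events.append({"onset": float(t),
                           "midi_pitch": int(round(librosa.hz_to_midi(f0[i])))})

    print(events[:10])

A list of events like this is the kind of raw material a generative component could then recombine into responses, as Ben-Tal describes.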

Dolan says he didn’t prepare anything special for this project and performance. “I have been developing and performing classical improvisation for the last 35 years as part of my concert activity, but, as I said, always with human performing artists.”

Ben-Tal says that this type of performance is part of an ongoing creative-research project.

Dolan says there are more concerts planned.

“But, more importantly, more rehearsals, which are also research sessions, through which the AI improves and Oded feeds in more refined elements related to my musical language,” said Dolan.

Musical meaning and creativity

“We continue to develop our improvisation musically as we consider the interesting questions it raises,” said Ben-Tal. “These include questions about musical structure and musical meaning that music theorists have been pondering for many years, which we now have an opportunity to revisit, bringing some of the musical knowledge accumulated over the years into the domain of AI engineering.”

Ben-Tal adds that other important questions that arise relate to the meaning of creativity, the creative process and how new technology might be integrated meaningfully and usefully into that.

“For the moment this is not only very far from a duo between two expert human performers, but also a totally different game,” said Dolan. “As a human, I have to limit my musical freedom and my bursts of spontaneous expressive gestures and ideas so that the AI can stay on board, enabling the duo to keep making sense in real time. Where this will go as the system improves is the key question, and that curiosity keeps me going.”

Ben-Tal says he sees AI as a co-creative tool.

“Used as part of a creative process, AI has multiple uses: it can accomplish some creative tasks within a larger project, or act as a partner that throws interesting and unexpected ideas into the mix, leading to further inspiration,” added Ben-Tal.
