Music and other live performance arts have always been at the cutting edge of technology, so it’s no surprise that artificial intelligence and machine learning are now pushing their boundaries.
As AI’s ability to manage key elements of the creative process continues to evolve, should artists be worried about the machines taking over? Probably not, says Douglas Eck, research scientist at Google’s Magenta.
“Musicians and artists are going to grab what works for them and I predict that the music that will be made will be misunderstood by many people,” Eck said at the Sónar+D event last week in Barcelona.
At the event, which is twinned with the Sónar dance music festival, Google held an AI demonstration where Eck showed a series of basic yet impressive musical clips produced using a machine learning model able to predict which note should come next.
The Magenta project has been running for just over a year and aims to discover whether machine learning can create “compelling” creative works. “Our research is focused on sequence generation,” Eck says. “We’re always looking to build models that can listen to what musicians are doing. From that we can extend a piece of music that a musician’s created or maybe add a voice”.
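To make the idea of next-note prediction concrete, here is a deliberately minimal sketch: a first-order Markov chain that learns which MIDI pitch tends to follow which, then extends a seed melody. Magenta’s actual models are neural sequence models and far more capable; this toy, with its invented `train`/`extend` helpers and made-up corpus, only illustrates the basic principle of continuing a musician’s phrase.

```python
from collections import Counter, defaultdict

# Toy next-note predictor: a first-order Markov chain over MIDI pitches.
# (Magenta's real models are neural networks; this only illustrates
# the idea of predicting the next note from what came before.)

def train(melodies):
    """Count, for each pitch, which pitches tend to follow it."""
    transitions = defaultdict(Counter)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, pitch):
    """Return the most frequently observed follower of `pitch`."""
    followers = transitions.get(pitch)
    if not followers:
        return pitch  # no data for this pitch: just repeat it
    return followers.most_common(1)[0][0]

def extend(transitions, seed, length):
    """Continue a seed melody by repeatedly predicting the next note."""
    melody = list(seed)
    for _ in range(length):
        melody.append(predict_next(transitions, melody[-1]))
    return melody

# A few hypothetical C-major fragments as MIDI pitches (60 = middle C).
corpus = [[60, 62, 64, 65, 67], [60, 62, 64, 62, 60], [67, 65, 64, 62, 60]]
model = train(corpus)
print(extend(model, [60, 62], 4))
```

Even this crude statistical model captures the collaborative framing Eck describes: the musician supplies the seed phrase, and the model proposes a continuation the musician is free to keep, edit, or discard.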
Just as the drum machine was loathed and feared by many when it first hit the mainstream in the 1970s, AI’s role in the creation of art has sparked similar fears among critics. Eck, who admits he was initially among the drum machine haters, explains that it took an entire generation of musicians to adopt the technology and figure out how to move it forward without putting good drummers out of work. He envisages a similar process of misunderstanding and eventual acceptance for AI-based music tools.
Given its flexible nature, it’s likely that musicians and other artists of the future will all use AI differently, according to Freya Murray, program manager at Google Arts & Culture Lab.
“Some will collaborate with machine learning, others will use it as a tool and for others it will be their creative process and that’s the case throughout the history of art,” she said.
“In the creative process, it can provide that stimulus to take you in a direction you might not have gone before”. AI will also have an important role in art education, says Murray.
Also at Sónar+D was Abbey Road Red, the legendary studio’s tech incubator. Jon Eades, who heads up the scheme, agrees that the dawn of AI in music is a good thing.
“In the same way that Instagram has democratized the process of taking and editing photos, we’ll see a similar progression towards making more people musical creators – using assistive AI to help people make good music,” he said at a recent talk on AI at the London studio. “I don’t think we’ll see a complete replacement of composers with computers but I do think there are going to be big shifts. We’ve already seen passable results in a lot of areas”.
The move to AI-based music creation tools will be “as big a technological shift as the digitization of music,” he predicted, albeit cautiously.
Abbey Road Red recently announced its latest intake of startups for its mentoring scheme, including AI Music, a company that plans to use artificial intelligence to transform music “from a static process of a one-directional interaction, to one of a universal dynamic co-creation”. Applications for the next wave of hopefuls are now open until July 7th.
While machines may not replace composers anytime soon, they’re certainly catching up. This week, a marimba-playing robot called Shimon composed its own music for the first time. Developed by the Georgia Institute of Technology, the musical ’bot was fed more than 5,000 complete songs and two million motifs, riffs and short passages of music, and was then asked to produce its own composition.
However, Freya Murray says robo-composers simply can’t compete with the human touch, explaining: “Our ability to imagine and create is at the core of what makes us human, and artists will continue to express the world we live in, and imagined worlds.”
Via Wired UK