Where Is Music Taking Tech? Abbey Road Red's Karim Fanous Takes from BBC Introducing LIVE

A Positive Spin on Artificial Intelligence and Music by Abbey Road Red Innovation Manager, Karim Fanous

There is a lot of negativity out there, but if we want to spin the proverbial AI vinyl with efficacy, we need to not fear the [robot] reaper. Sure, the music robots are coming. In fact, they're already here. But are you going to fear substitution (think synthesiser)? Or are you going to ask how much this new gizmo can do for you, and how far you can push it to its limits (think Beatles), rather than packing up your pedal case and throwing in the creativity towel?

Yesterday I found myself at BBC Introducing, sharing the stage with Bobby Friction and Nikki Lambert to discuss where music and technology are going as an awkward, or symbiotic, pairing. It's an exciting subject, it was a fun discussion, and I'd like to summarise some of the main points I took away about artificial intelligence and music in as simple terms as possible.

Caveat: this is not a technical piece, and I won't go into detail about technology or startups; it's just a nice way of getting one's imagination around the topic.

Tech has been playing a part in music since well before the first guitar amplifier was created and Bob Dylan ripped up the rulebook. In fact, although the synth was popularised by the wonderful Moog line in the 60s, its electronic forebear the Theremin was created as far back as 1920! How many new genres and remarkable records have been made using this electronic instrument? (The synth, that is, not the Theremin, although I'm sure there are a few of those out there too!)

Fear of substitution is more damaging to our creativity than substitution itself. We can sit around all day worrying about the potential damage that tech might cause, but what's the point? When The Beatles were offered the REDD desks made by Abbey Road's Recording Engineering Department, from which our incubator, Red, derives its name, did they say, 'No, I don't want to use the four simultaneous channels this desk can give me'? Nope! They said, 'Give me more!' They pushed the technology to its limits and beyond.

The question you could be asking isn't 'when will my creativity become redundant?' but 'what will my creativity become?' Artificial intelligence, algorithms, machine learning: these technologies will power tools that fold into your creative process and the layers and structure of that process, in the same way that you insert a plug-in into a DAW channel strip or patch an analogue compressor into your digital set-up. The more fun questions to explore are: how will you use the tools? Where will they sit in your process?

The process of creating and listening to music is linked to our human selves and nature: it's relational and social, tied to the stories of our daily lives, and artificial intelligence doesn't replace that. But it could enhance it. When we create a piece of music, we put our heart and soul into conceiving, performing and making choices about production and the settings on our tools, whether pedals, analogue desks or plug-ins. Our creative moments are driven by our conscious thoughts, subconscious thoughts, feelings, mood, surroundings, relationships with other creators and so much more. The same goes for our listening moments: whenever we experience something, actively or passively, there is always context around us and the experience. None of this feels directly interchangeable with an AI or a piece of music tech. But you can use those tools in the process.

If you don't want to compete with AI, don't! You don't have to use it. But at some point you are going to have to compete for share of ear with other writers and performers who are using it.

There are interesting experiments afoot. Do some research on Holly Herndon and her AI, Spawn, or on how Benoit Carré is using François Pachet's Flow Machines technology to interpret music and to create and edit the building blocks of tracks. These are two great examples at opposite poles: one creator 'birthing' an AI to interpret her vocals and perform its interpretations back for her to use, and another using AI-driven tools to pragmatically interpret and layer music like building blocks.
The tools are out there, so how are you going to use them? Two examples from our own stable of alumni are Vochlea's Dubler microphone and LifeScore's adaptive music platform. Vochlea's intelligent microphone can recognise what you are vocalising in realtime and output it as the corresponding sound, with the nuances of the performance captured. So you could be beatboxing one second and making a trumpet noise the next, and it would recognise and output both in realtime. It can be used as a writing, recording or performing instrument. Check out the video here.

At the other end of the spectrum, LifeScore's adaptive music platform takes human performances – high-quality input composed and performed by humans – and uses them as the foundation for evolving soundtracks triggered by data inputs, whether movement, weather or something else. So, for example, you could be turning left or increasing speed, and the music will change in response, using the recordings to output an adaptive soundtrack with integrity.
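To make the idea concrete (and only as a toy illustration, not a description of LifeScore's actual technology), here is a minimal Python sketch of how an adaptive system might map live data inputs to a layered set of pre-recorded, human-performed stems. All the names and thresholds here are invented for the example:

```python
# Toy sketch of adaptive music: context data in, a set of
# human-performed stems out. Purely illustrative; not LifeScore's API.

def choose_stems(speed_kmh, weather):
    """Map driving context to stem names to layer over a foundation track."""
    stems = ["base_pad"]  # the always-present human-performed foundation
    if speed_kmh > 80:
        stems.append("driving_percussion")  # high speed: energetic layer
    elif speed_kmh > 30:
        stems.append("light_rhythm")        # moderate speed: gentler pulse
    if weather == "rain":
        stems.append("soft_strings")        # rainy mood: warmer texture
    return stems

print(choose_stems(90, "rain"))   # fast driving in the rain
print(choose_stems(10, "clear"))  # slow, clear conditions
```

The point of the sketch is that the creative material itself stays human-made; the machine only decides, moment to moment, which of those human performances to bring in and out.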

Both pieces of tech marry machine and human creativity: one as an instrument, the other as an adaptive music platform built on high-quality, human-created and human-performed inputs. A third, Humtap, is exploring fully generative composition, enabling people to create fully produced tracks by humming, tapping and choosing a genre.

All of these tools could find a place in your creative process or set-up, whether as instruments or writing aids. The question is: what do you need from AI and music tech, and how do you want, or not want, to use it?
