Abbey Road Red talk to Benoît Carré

20th November 2019
Following a trip to see the premiere of his second album under his artificial-intelligence-exploring SKYGGE moniker, we had a chat with Benoît Carré about his method, his attitude to artificial intelligence, and the line where human meets, and uses, machine.
 

What have you learned in general about how AI will become a part of your creative process in the future?

There are many things to say about making music with these kinds of tools. There are two main ways of using them. The first is to treat the tool as something that solves the fear of the blank page: something to get you started, an idea. You can use it in that way and it's very useful.

The second use case is that you have something, an idea or a mood and you need the system to give you feedback on it and to help you organise your own creative mess and creative confusion. Because when you create you are always operating on the boundaries of control and confusion, on the edge. And I think that in the future AI will help provide some kind of structure to what you give to it.

And how do you see it folding into the creative process?

You can use AI for generating sounds. You could use it as a plug-in at a certain moment in your production process. Say you want a Fender Rhodes in the style of Chick Corea: you can ask it to generate the sound for you. It would help you enrich the album with a sound you couldn't easily explore yourself.

Or you could use it as a compositional aid, to explore a thousand possibilities (you still have to click a thousand times, and listen to a thousand results). You click your mouse to generate a sound or motif, and if it doesn't work you can try another one instantly. There are so many things you can try!
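
As a rough illustration of that generate-and-audition loop, here is a minimal Python sketch. The `generate_motif` function and the variety filter are hypothetical stand-ins for whatever generative model a tool like this uses; they are not part of Flow Machines or SKYGGE's actual workflow.

```python
import random

def generate_motif(seed: int) -> list[int]:
    """Hypothetical stand-in for a generative model: returns a short
    melodic motif as MIDI note numbers."""
    rng = random.Random(seed)
    scale = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, one octave
    return [rng.choice(scale) for _ in range(8)]

kept = []
for seed in range(1000):                # "click a thousand times"
    motif = generate_motif(seed)
    # In a real session you would audition each result by ear; this crude
    # stand-in filter just keeps motifs with a minimum of pitch variety.
    if len(set(motif)) >= 5:
        kept.append((seed, motif))

print(f"kept {len(kept)} of 1000 candidates for closer listening")
```

The point of the loop is cheapness: each candidate costs one click (here, one seed), so the musician can discard freely and keep only what inspires.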

AI can also be used in the post-production process. For example, you have an album with ten tracks and separated stems, and you give it to a system that reorganises or reinterprets the whole album, making a kind of mash-up that changes it. The possibilities are so vast it's hard to summarise them; they are endless.
 

People are worried about AI removing the need for humans in the creative process – what would you say to them?

I think that in the future there may be a category named AI Music, a genre; it's totally possible. Music that's adapted to the hour of the day, the mood, the weather. This technology exists already, and it will exist as its own category. But it won't take the human part away, I think. There will always be people who sing and play guitars. AI won't change that. But it may create new genres. From the creative point of view AI can bring new kinds of sounds and maybe new melodies and harmonies, but these new sounds and harmonies will always be created by humans. Just as samplers helped create hip hop, there will be a hip-hop moment for AI, or something like that!

But we are at the beginning of these experiments, so it's very hard to say what will happen.
 

What are your next moves?

I would like to make a pop album. To go back to songwriting, to hide the AI, and not to talk about it or pitch it when I release the tracks. For the EP I've just released I made videos that explain my process in detail. But when you've done something, you want to do the opposite. So I'd like to write pop songs, sing those songs, and make an album as an independent artist, as if AI were just an instrument like any other, not at the centre of my work. It's the only way for me to renew my approach to creating with AI.
 

How did you begin your journey with AI?

François [Pachet] was interested in my way of composing songs. He is a fan of Paul McCartney – he likes unexpected changes in the harmonies of songs, and McCartney's songs are captivating in terms of harmony and melody. François was interested in my tracks; I've written songs for many singers and was in a band called Lilicub. We had a hit in the late '90s. François invited me into his lab to play with his tools and give feedback about how to interact with them.

In 2015 he asked me to participate in the Flow Machines project. It had been going for two years and was focused on generative composition and sound production. I helped the team to develop the tool from a musician’s point of view.

I worked with the researchers and suggested changes to the modules to make them more musician-friendly. I proposed new features that came out of my experience as a user of the tool. It's really exciting to build tools with researchers who take your needs, and sometimes your weird ideas, seriously! And then I made a song called Ballad of Mr Shadow.

That was the starting point for making music with AI, and it helped me explore a part of my creativity I didn't really know. This strange-sounding song was very musical to me. I wanted to share the experience with other musicians, so I invited many of them to explore with me, and it was really cool to have them bring in their own methods and create something together.

What do you mean by "musician friendly" when you talk about the tools?

I mean really insisting on the interactive part of the tool. Researchers have to know how musicians interact with instruments. Otherwise they create tools that generate music on their own, but what I realised is that what matters is how the musician feeds the machine. So we improved this part of the work by, for example, creating an interface. The idea was not to create an interface that was too sophisticated, but to expose raw parameters that allowed the musician to have control over the process. For example, "I want long notes, I want variation, I want many notes."
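
To make the idea concrete, here is a minimal sketch of what such a raw-parameter interface could look like. The `GenerationParams` fields and the `generate_bar` function are illustrative assumptions, not the real Flow Machines parameters.

```python
from dataclasses import dataclass
import random

@dataclass
class GenerationParams:
    """Raw, musician-facing controls of the kind Carré describes.
    These names are illustrative, not actual Flow Machines parameters."""
    note_length: float = 0.5   # duration in beats; larger means "long notes"
    density: int = 8           # notes per bar; higher means "many notes"
    variation: float = 0.3     # 0 = repetitive, 1 = maximal variation

def generate_bar(params: GenerationParams,
                 rng: random.Random) -> list[tuple[int, float]]:
    """Hypothetical generator: returns (MIDI pitch, duration) pairs for one bar."""
    scale = [60, 62, 64, 65, 67, 69, 71, 72]
    notes = []
    pitch = rng.choice(scale)
    for _ in range(params.density):
        # More variation means more frequent melodic jumps.
        if rng.random() < params.variation:
            pitch = rng.choice(scale)
        notes.append((pitch, params.note_length))
    return notes

# "I want long notes, I want variation, I want many notes."
bar = generate_bar(GenerationParams(note_length=1.0, variation=0.8, density=12),
                   random.Random(42))
print(bar)
```

The design point is that the musician steers with a few coarse, musically meaningful knobs rather than a sophisticated model-facing interface.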

What was important was that the musicians and I could feel that we kept control of the result. That we led the machine to serve our goal of creating something that we really liked and found inspiring. The other important thing was for the researchers to understand that we don't need 32 bars perfectly composed; we just need some material that inspires us to create something new.

Those were the two main things that the researchers and I highlighted during my residency as a musician at the Sony Computer Science Laboratory. Then part of that team moved to Spotify, where we pursue our mission of creating tools for musicians. We think that flexible tools that let musicians use the technology for specific tasks are a good approach, rather than creating ready-made music with a big red button. We don't believe that's a tool a musician would really like.
 

Will you make these tools publicly available?

Yes, there is a project to transfer the prototype tools to Soundtrap. I can't say exactly how long it will take, because the prototypes need to be reshaped before they're available as a product.

We believe in interactions adapted to different types of pop composition strategies. Each musician has their own method: some like to start with a melody or a chord sequence, others with a rhythm or just a bassline. There are many ways to compose nowadays, and the more flexible AI tools are to use, the more useful they will be for artists.
 
Thank you to Benoît Carré for taking the time to speak to us.

Check out SKYGGE’s YouTube channel for videos explaining how he created each song and more.


Listen to SKYGGE’s two records:

Hello World, Music Composed With Artificial Intelligence

American Folk Songs
