The tools are out there; how are you going to use them? Two examples from our own stable of alums are Vochlea's Dubler microphone and LifeScore's adaptive music platform. Vochlea's intelligent microphone recognises what you are vocalising in real time and outputs it as the corresponding sound, with the nuances of the performance captured. You could be beatboxing one second and making a trumpet noise the next, and it would recognise and output each in real time. It can be used as a writing, recording or performing instrument. Check out the video here. At the other end of the spectrum, LifeScore's adaptive music platform takes human performances – high-quality input composed and performed by humans – and uses them as the foundation for evolving soundtracks triggered by data inputs such as movement or weather. You could be turning left or increasing speed, for example, and the music changes in response, using the recordings to output an adaptive soundtrack with integrity.
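To make the adaptive idea concrete, here is a minimal sketch of how data inputs might select and blend pre-recorded, human-performed stems. This is not LifeScore's actual engine: the telemetry fields, stem names and mixing rule are all assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical telemetry frame; the real platform's inputs are unknown to us.
@dataclass
class Telemetry:
    speed_kmh: float
    turning: str  # "left", "right" or "straight"

# Pre-recorded, human-performed stems (file names are placeholders).
STEMS = {
    "calm":    "strings_calm.wav",
    "driving": "percussion_driving.wav",
    "turn":    "brass_flourish.wav",
}

def choose_mix(t: Telemetry) -> dict[str, float]:
    """Map one telemetry frame to per-stem gains (0.0-1.0).

    Faster driving fades the percussion stem up and the calm stem
    down; a turn briefly brings in a flourish. The thresholds are
    illustrative, not taken from any real product.
    """
    intensity = min(t.speed_kmh / 120.0, 1.0)  # normalise speed to 0-1
    return {
        "calm":    1.0 - intensity,
        "driving": intensity,
        "turn":    0.6 if t.turning != "straight" else 0.0,
    }

if __name__ == "__main__":
    frame = Telemetry(speed_kmh=90.0, turning="left")
    for stem, gain in choose_mix(frame).items():
        print(f"{STEMS[stem]}: gain {gain:.2f}")
```

The point of the sketch is the design shape rather than the numbers: because every stem is a human performance, the system's "generative" work is limited to deciding when and how loudly each recording plays, which is what keeps the output musical.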
Both pieces of tech marry machine and human creativity, one as an instrument, the other as an adaptive music platform built on high-quality, human-created and human-performed inputs. A third, Humtap, is exploring fully generative composition, enabling people to create fully produced tracks by humming, tapping and choosing a genre.
All of these tools could find a place in your creative process or set-up, whether as instruments or writing aids. The question is: what do you need from AI and music tech, and how do you want, or not want, to use it?