Abbey Road Red On Music Tech Trends 2017
Wednesday, December 21, 2016
As 2016 draws to a merry close and we begin to peer over the hedge into 2017, Jon Eades from the Red team gives us a quick run through of some of the technology trends worth keeping an eye on over the coming year and beyond.
No self-respecting article on ‘the trends to watch in 2017’ could start with anything but the all-encompassing umbrella trend that is Artificial Intelligence (AI).
It has become impossible to ignore the discussion around AI over the past couple of years, with a significant increase in mainstream media coverage and commentary on the topic coming from many well-known public figures.
As often tends to be the case with technologies that reach mainstream attention, AI is not a new area of development; it has arguably been a research topic among computer scientists since the birth of computing itself. And while the recent surge of interest is creating a spike in hype that will likely cool off into 2017, there now seems to be general agreement among experts that intelligent computer systems will play an increasingly significant role in our day-to-day lives throughout 2017 and well beyond.
The debate around AI raises some deeply philosophical questions about the nature of intelligence and of human consciousness. Within that debate, creativity is often held up as the one human attribute that is surely impossible to replicate. Tempting as it is to dive into that part of the debate here, it is a long conversation best saved for another day.
Instead, putting that top-down theoretical and philosophical debate to one side for a moment, let’s take a pragmatic, bottom-up look at some of the movements in the music and AI space so far, for some hints at what might be to come.
Machine Learning & Algorithmic Composition
While it is well documented that people have been trying for many years to create computer programs that can autonomously compose music, most of the early attempts tended to result in either avant-garde or somewhat mechanical-sounding music. Simply put, it has historically proven very difficult to write a computer program that takes all the necessary rules into account to create intricate and aesthetically pleasing musical patterns (which is, of course, a matter of taste).
More recently, however, with the advent of new machine learning techniques, developers have been able to take new approaches. Instead of trying to create rule-based systems, machine learning allows developers to design systems that use pattern recognition to ‘learn’ the structures that typify specific musical styles. By running machine learning algorithms on large sets of training data (tens, if not hundreds of thousands of songs), the algorithms discover the patterns themselves, which in turn allows them to write music in a similar style.
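To make the idea concrete, here is a deliberately minimal sketch of learning-patterns-then-generating, using a first-order Markov chain over note names. This is an illustration of the principle only, not how any of the systems mentioned here actually work; the toy corpus and function names are ours:

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Count which note tends to follow which across a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Random-walk the learned transitions to 'compose' a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break
        melody.append(rng.choice(options))
    return melody

# Toy corpus: note names standing in for real training data
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E"],
    ["G", "E", "D", "C"],
]
model = train_markov(corpus)
print(generate(model, "C", 8, seed=1))
```

Real systems replace the transition table with deep neural networks trained on vastly larger corpora, but the learn-from-examples-then-sample shape is the same.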
These recent advances have enabled algorithmic composition systems to finally produce music that is pleasant and acceptable in certain situations. Vague as that sounds, it is critical: while the debate carries on about whether creativity can or can’t theoretically be synthesised, the results suggest that, like it or not, it is starting to happen already.
Commercially it is still early days, but early movers such as London-based JukeDeck are making significant progress and doing an excellent job of stimulating interest in the field.
Within the research community, the pace of development continues to accelerate. Google’s recently launched Magenta Project has attracted a lot of interest by making much of its work open source, as has the work of the team at the Sony Computer Science Lab.
As the quality of the output from these systems continues to improve, throughout 2017 we expect to hear stories of wholly or partially algorithmically composed music being used in more and more situations, starting with short-form online videos, then adverts, games, documentaries, films and (perhaps not in 2017) even mainstream pop music.
Audio Feature Extraction & Music Information Retrieval
Music Information Retrieval (MIR) is a term likely familiar only to researchers and audio developers. The low-level technology thread centres on audio feature extraction algorithms, which are designed to listen to audio and extract musical information such as chords, instrumentation and so on. This extracted information can then be used to drive new applications.
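As a tiny taste of what ‘extracting musical information from audio’ means in practice, the sketch below estimates the dominant pitch of a signal from its magnitude spectrum. Production MIR systems use far richer features (chroma, onsets, timbre descriptors) and dedicated libraries, but the listen-then-measure idea is the same; the synthesised tone stands in for real audio:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Estimate the strongest frequency component via the magnitude spectrum."""
    windowed = signal * np.hanning(len(signal))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthesise one second of A4 (440 Hz) as stand-in 'audio'
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(dominant_frequency(tone, sr))  # ~440 Hz
```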
While the use of feature extraction is already relatively well established in music discovery and consumption, the creative opportunities in music production are still being explored.
Early movers in the music production space include companies such as LandR and CloudBounce, who use the results of MIR to control audio effects, enabling them to offer a fully automated alternative to traditional mastering. Simply put, these systems listen to audio and decide which audio effects to apply, emulating (in part) the role of the mastering engineer.
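The simplest possible version of that analyse-then-process loop is loudness normalisation: measure a level, then apply make-up gain. This sketch is ours and vastly simpler than what any real mastering service does (which involves EQ, multiband compression, limiting and more), but it shows the shape of the idea:

```python
import numpy as np

def normalise_loudness(audio, target_rms=0.1):
    """'Listen' (measure the RMS level), then 'act' (apply make-up gain)."""
    rms = np.sqrt(np.mean(audio ** 2))
    if rms == 0:
        return audio  # silence: nothing to normalise
    return audio * (target_rms / rms)

# A quiet sine wave standing in for an unmastered track
sr = 44100
quiet = 0.01 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
loud = normalise_loudness(quiet)
print(np.sqrt(np.mean(loud ** 2)))  # now at the target RMS of 0.1
```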
iZotope’s recently released Neutron plugin, by contrast, offers a hybrid approach that is partly automatic and partly user-controlled, resulting in a plugin that looks rather more like the tools engineers are used to using.
In 2017 we predict that music production software will continue to become more ‘intelligent’, listening in and assisting with the creative and administrative parts of music creation. While fully autonomous systems may be attractive in some applications, we predict that most intelligent systems for music creation will end up being at least partly user-controlled. And if history has taught us anything, these tools will likely be creatively re-appropriated in ways that haven’t yet been conceived of.
Personalisation
Another topic sometimes included under the umbrella of AI is the ongoing trend towards personalisation. Over the past few years we have seen personalisation increase in a number of areas: our devices and platforms now learn about our behaviour and preferences and react accordingly. For example, the content Facebook serves to your news feed is influenced heavily by your profile and behaviour, your experience of Google’s applications and services works the same way, and your smart devices are constantly making small adjustments to the idiosyncrasies of your interactions.
In 2016 we saw the continued proliferation of automatically personalised playlists (e.g. Spotify’s hugely popular Discover Weekly). This personalisation is being enabled by advances in machine learning and MIR, as mentioned above, but also by trends inferred from behavioural and social data.
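One classic way behavioural data drives personalisation is collaborative filtering: find listeners whose play history resembles yours and recommend what they play that you haven’t heard. This toy sketch (our own illustration, not how any streaming service actually works) uses cosine similarity over play counts:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two play-count vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rows = listeners, columns = play counts per track (toy data)
plays = np.array([
    [5, 0, 3, 0],   # you
    [4, 0, 2, 1],   # listener with similar taste
    [0, 5, 0, 4],   # listener with different taste
])
sims = [cosine(plays[0], plays[i]) for i in (1, 2)]
neighbour = plays[1] if sims[0] > sims[1] else plays[2]
# Recommend tracks the most similar listener plays that you haven't heard
recs = [t for t in range(plays.shape[1]) if plays[0][t] == 0 and neighbour[t] > 0]
print(recs)
```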
But what about even more granular personalisation? Personalisation not just at the artist and song level, but at the compositional level?
Artists and developers have been experimenting in this field for some time. Early pioneers such as RJDJ, and many others since, have created mobile applications that mixed interactive and reactive elements to deliver personalised musical experiences. Ultimately these apps have mostly seen relatively limited long-term use, but as we become surrounded by more and more sensors, and as the power of smartphones increases along with the quality of algorithmically composed music (as discussed above), the creative opportunities continue to grow.
Can’t imagine a world where all the music you listen to is tailored specifically to you? Us neither. But how about a system that reacts to your biometric data and creates adaptive music perfectly tailored to help you drift off to sleep, or to help you meditate, or to help you concentrate at work, or to keep you awake whilst driving?
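A purely hypothetical sketch of the kind of mapping such a system might start from: take a biometric reading (heart rate), pick a goal, and derive a musical tempo. The offsets and function names here are invented for illustration; a real adaptive-music engine would be enormously more sophisticated:

```python
def target_tempo(heart_rate_bpm, goal="sleep"):
    """Map a heart-rate reading to a musical tempo (hypothetical rule of thumb:
    sit below the heart rate to wind down, above it to energise)."""
    offsets = {"sleep": -15, "focus": 0, "drive": +20}
    tempo = heart_rate_bpm + offsets.get(goal, 0)
    return max(40, min(tempo, 180))  # clamp to a sensible musical range

print(target_tempo(70, "sleep"))  # 55 bpm: slower than the listener's pulse
print(target_tempo(62, "drive"))  # 82 bpm: faster, to keep them alert
```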
Virtual Reality
One of the undeniable mega trends of 2016 was virtual reality (VR). Despite the noise, analysts predict device penetration still has a relatively long way to go, so we fully expect to see some exciting VR developments in 2017.
Specifically for music, while there are no doubt clear uses for VR in distributing music-related content and transporting viewers to live events, we think perhaps the most interesting developments will come from the intersection of VR with algorithmic composition and reactive, personalised music. Developers have already started experimenting with immersive, game-like interactive musical landscapes that let you create and manipulate sound, resulting in experiences that are part music-making, part gameplay, part education and part just downright fun.
As adoption of Virtual Reality continues and VR headsets start to find their place in everyday life, and as the move towards 360° video online continues, we predict a growing appetite for music mixed in surround or 3D formats. While surround sound for music failed to become widely adopted in the 1990s, largely due to the inconvenience of setting up multiple wired speakers, companies such as Abbey Road Red graduates OSSIC are now creating headphones that deliver full spherical sound, and wireless speaker manufacturers such as SONOS are creating more convenient speaker-based solutions, so we predict that the market for digitally delivered surround sound music content will increase.
Interested in music tech? Get involved and follow Red's news on Twitter @AbbeyRoadRed