Spatial Audio Forum at Imperial College London

9th October 2019
Last week we held the Abbey Road Spatial Audio Forum at the esteemed Imperial College London, hosted by the awesome Dr Lorenzo Picinali, Senior Lecturer in Audio Experience Design. This was a detour from the normal format of our meet-ups, as we decided to focus on a show-and-tell. The more practical approach gave the creative souls of the forum a chance to show off their fabulous work to other members.
 
 
We had a cracking team join the event, including the BBC, L-Acoustics, Delta Soundworks, University of York, University of Huddersfield, 1618 Digital, Muki Kulhan and, of course, our most accommodating hosts, Imperial College.

Dr Lorenzo Picinali and his students showed the 3D Tune-In Toolkit Test Application. The 3D Tune-In Toolkit is a custom open-source C++ library for audio spatialisation and for hearing loss and hearing aid simulation. It can be downloaded from the GitHub releases page, both as a standalone test application (including a GUI, available for macOS, Windows and Linux) and as a VST plugin (macOS and Windows).
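For anyone wondering what "audio spatialisation" involves under the hood, here is a minimal sketch of the core operation a binaural engine performs: convolving a mono source with the pair of head-related impulse responses (HRIRs) measured for the source's direction. This is an illustration only, not the Toolkit's actual API (that is documented on its GitHub page); a real engine like the 3D Tune-In Toolkit uses fast partitioned convolution and interpolates HRIRs between measured directions.

```cpp
#include <cstddef>
#include <vector>

// Naive time-domain convolution, y = x * h (assumes non-empty inputs).
std::vector<float> convolve(const std::vector<float>& x,
                            const std::vector<float>& h)
{
    std::vector<float> y(x.size() + h.size() - 1, 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < h.size(); ++k)
            y[n + k] += x[n] * h[k];
    return y;
}

struct BinauralOutput { std::vector<float> left, right; };

// Binaural spatialisation in its simplest form: filter the mono source
// with the left- and right-ear HRIRs for the desired direction.
BinauralOutput spatialise(const std::vector<float>& mono,
                          const std::vector<float>& hrirLeft,
                          const std::vector<float>& hrirRight)
{
    return { convolve(mono, hrirLeft), convolve(mono, hrirRight) };
}
```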

Also on show from Imperial was PlugSonic, a web- and mobile-based application for creating interactive 3D binaural soundscapes through a simple web browser. The creator can export a soundscape in a custom format and import it into a mobile app, which uses the device's sensors and camera (through Apple ARKit 3) to let the user navigate the soundscape with six degrees of freedom, without relying on external trackers.

There was also a demo of binaural reverb evaluation. More information about this study and demo can be found here.

I decided to show a project Abbey Road recorded with composer Stephen Barton in Studio Two: a string octet playing George Martin's score to Eleanor Rigby. Captured with a mixture of Ambisonics, spatial arrays and spot microphones, the recording was mixed in Unity with basic graphical content so the user can walk around the string octet as they play. Six degrees of freedom lets you get up close and personal with the musicians: you can hear the bows on the strings as they play, or simply stand in the middle of the room and hear the players all around you. It is an example of how artists can present their work to fans in new, exciting and intimate ways.
 
 
Chris Pike and the BBC R&D team showed three spatial audio projects.

First, they showed the tool they developed to allow efficient live production of binaural mixes at the BBC Proms.

They also gave a preview of the EAR Production Suite, a set of tools for producing spatial and personalisable audio with the Audio Definition Model (ADM) standard. This is being developed in collaboration with the IRT and was also shown on the EBU stand at IBC. More info here.

Finally, Kristian Hentschel demonstrated his work on delivering spatial audio over an ad-hoc array of personal devices such as laptops, tablets, and phones. He demo’d the Vostok-K Incident audio drama, which was created through the S3A Future Spatial Audio project, as well as some new tools for quickly prototyping such experiences for “orchestrated” personal devices. More info here.
 
 
The legendary Dr Hyunkook Lee from the University of Huddersfield showed a demo of his VHAP (Virtual Hemispherical Amplitude Panning) plugin.

VHAP is a novel method for creating an elevated phantom source on a virtual upper hemisphere with only four ear-height loudspeakers. It exploits a psychoacoustic phenomenon known as the phantom image elevation effect. A set of constant-power gain coefficients is applied to the loudspeakers at ±90° and 0° for panning to a target azimuth and elevation in the front region, and to those at ±90° and 180° for panning in the back region. The VHAP VST plugin supports both loudspeaker and binaural rendering of phantom images over the virtual upper hemisphere. In binaural mode, the perceived distance of an elevated image can also be controlled. The VHAP method can be used with a subset of the loudspeakers in a conventional 7.1 or 5.1 surround layout to create "virtual height loudspeakers".
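To give a flavour of the constant-power idea, here is a minimal C++ sketch of a four-speaker hemisphere panner. The gain law below is an illustrative assumption of mine (project the target direction onto the speaker axes, feed height energy equally to the two side speakers, then normalise to unit power); the actual VHAP coefficients are those published in Dr Lee's papers.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Illustrative constant-power panner over four ear-height speakers at
// 0, +90, -90 and 180 degrees azimuth, in the spirit of VHAP. Elevation
// is encoded by sending energy equally to the two side speakers, since
// a lateral phantom image is perceived as elevated (the phantom image
// elevation effect). NOTE: this gain law is an assumption for
// illustration, not the published VHAP coefficients.
std::array<float, 4> hemispherePan(float azimuthDeg, float elevationDeg)
{
    constexpr float kDegToRad = 3.14159265358979f / 180.0f;
    const float az = azimuthDeg * kDegToRad;   // 0 = front, positive = left
    const float el = elevationDeg * kDegToRad; // 0 = ear height, 90 = zenith

    // Unit direction vector of the target phantom source.
    const float x = std::cos(el) * std::cos(az); // front(+)/back(-)
    const float y = std::cos(el) * std::sin(az); // left(+)/right(-)
    const float z = std::sin(el);                // up

    // Raw gains: median-plane energy goes to the 0- or 180-degree
    // speaker, lateral energy to one side, height energy to both sides.
    float g[4] = {
        std::max(x, 0.0f),               // front,  0 deg
        std::max(y, 0.0f) + 0.5f * z,    // left,  +90 deg
        std::max(-y, 0.0f) + 0.5f * z,   // right, -90 deg
        std::max(-x, 0.0f)               // back,  180 deg
    };

    // Normalise so the squared gains sum to 1 (constant power).
    const float power = g[0]*g[0] + g[1]*g[1] + g[2]*g[2] + g[3]*g[3];
    const float norm = power > 0.0f ? 1.0f / std::sqrt(power) : 0.0f;
    return { g[0]*norm, g[1]*norm, g[2]*norm, g[3]*norm };
}
```

At the zenith this yields equal gains on the two side speakers and nothing in the median plane, which is exactly the condition that triggers the elevation effect the method relies on.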
 
 
Dr Gavin Kearney and his students from the University of York showed a new VR listening test framework and some very interesting and creative 3D panning tools driven by a phone's gyroscope. Yes, you can pan audio around your head using your phone – who knew?

Perceptual evaluation plays a significant role in the development of spatial audio and future content production frameworks, for example in the assessment of recording techniques and reproduction schemes, binaural rendering algorithms, Head-Related Transfer Function (HRTF) datasets, virtual soundscapes and room acoustics. The Spatial Audio Listening Test Environment (SALTE) is listening test software built to aid that research and development: it consists of a dedicated audio rendering engine and a virtual reality interface for conducting spatial audio listening experiments. The renderer provides controlled, head-tracked playback of Ambisonic scenes (up to 7th order) over headphones, with custom HRTFs loaded from separate WAV or SOFA files. The SALTE framework can be freely obtained from the University of York AudioLab's GitHub page.
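As a small illustration of the head-tracking side of such a renderer: for first-order Ambisonics, compensating listener yaw is just a 2D rotation of the horizontal scene components. The sketch below is the generic textbook operation, not SALTE's own code, and higher orders (SALTE goes up to 7th) require full spherical-harmonic rotation matrices.

```cpp
#include <cmath>

// Yaw rotation of a first-order Ambisonic (B-format) frame -- the basic
// operation behind head-tracked headphone playback. When the listener's
// head turns by 'yaw', the renderer rotates the scene by '-yaw' so that
// sources stay fixed in the room. W (omni) and Z (height) are unchanged
// under yaw; the horizontal dipoles X (front) and Y (left) rotate like a
// 2D vector. Applied per sample or per block before binaural decoding.
void rotateFoaYaw(float& x, float& y, float yawRadians)
{
    const float c = std::cos(yawRadians);
    const float s = std::sin(yawRadians);
    const float xr = c * x - s * y;
    const float yr = s * x + c * y;
    x = xr;
    y = yr;
}
```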

Ana Monte and Dan Deboy, the founders of Delta Soundworks, very kindly caught a 6am flight from their base in Sandhausen, Germany to be with us. I think it was well worth it: not only were they a huge amount of fun to be around, but they also showed some stunning VR projects. In one of them you place yourself inside the human heart and experience spatial audio representing what a doctor would hear through a stethoscope for various conditions.

Learning and training in virtual reality is more immersive than any medium before it, making complex information easier to understand. Spatial audio plays an important role as an emotional trigger in such VR applications.

The pediatric cardiologists at Lucile Packard Children's Hospital Stanford are using virtual reality technology to explain complex congenital heart defects, which are some of the most difficult medical conditions to teach and understand. For applications like the Stanford Virtual Heart, sound is as important as, if not more important than, the visuals in helping students develop their diagnostic skills.

It was a very creative example of how VR is used for educational purposes in ways not previously possible. Check it out here.
 
 
It was a real pleasure seeing so many members of the forum getting together and showing off their amazing work. The room was buzzing and everyone seemed to be riffing off one another – who knows what seeds may have been planted for future projects.

I want to say a huge thank you to everyone who came and a special thank you to Dr Lorenzo Picinali for having us. It was inspiring stuff as always.
 
 
