Mirek Stiles talks us through how to record, mix and render 6 Degrees of Freedom for a band in VR

20th November 2018
The recording team in Studio Three, from L to R: Jess Stubbs, David Rivas Mendez, Gavin Kearney, Calum Armstrong and Mirek Stiles.

Earlier this summer, Abbey Road had the pleasure of once again hosting the fine team from the University of York, led by Dr. Gavin Kearney, along with one of his students working on his final project, David Rivas Mendez. David's research focused on how to record, mix and render 6 Degrees of Freedom for a band in VR. This expanded on our last collaboration with the University of York, which covered 3 Degrees of Freedom: in other words, recording and mixing for VR, but listening from a static position. The new research project takes that concept further by enabling the user to walk around the band in VR as they play in Studio Three at Abbey Road. Quite a challenge…
 
A four-piece jazz band was chosen, as they would most likely balance themselves well in the room, as opposed to a rock band, where the drums could overwhelm the overall room sound. The band was set out in a square configuration with the usual close miking, along with two First Order Ambisonic (FOA) microphones per musician to create sound zones. The centre of the room was captured with a Higher Order Ambisonic (HOA) microphone for directional information at head height, and a Hamasaki cube 3 metres in height to capture the overall room sound.

For reference, we captured a video of each performance using the new GoPro Fusion 360° camera, but as the idea of this project was to allow the user to walk through the room, a static camera angle was not going to cut the mustard. We could have used LiDAR and volumetric capture techniques to achieve this, but as the focus was on sound, the expense of such techniques seemed beyond the project's scope. Instead, David decided to mock the studio up as a graphic model using Blender. This process involved David spending a day measuring every panel and corner of Studio Three to ensure he could recreate realistic dimensions in the virtual world.
 
 
The main Unity scene editor window. Each sound source is represented by the white speaker graphics.

Once the graphic model was complete, David imported the render into the game engine Unity. Avatars for each musician were then added into the scene, to which mono and stereo audio pre-mixes of the close microphones could be attached as audio objects. Next, we added the FOA microphone zones, the centre HOA microphone and the Hamasaki cube. The HOA recording had to be downmixed to FOA, as unfortunately Unity currently only supports importing Ambisonic audio files up to first order. There are third-party options to get around this limitation, but the time frame for our project did not allow us to fully explore them, and I hope Unity will rethink this limitation in the future.
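For readers curious what attaching a close-mic pre-mix as an audio object looks like in practice, here is a minimal Unity C# sketch. This is my own illustration rather than code from the project; the component name and clip field are hypothetical.

```csharp
using UnityEngine;

// Minimal sketch: attach a close-mic pre-mix to a musician avatar as a
// spatialized audio object. Add this component to the avatar GameObject
// and assign the mono or stereo pre-mix clip in the Inspector.
public class MusicianSource : MonoBehaviour
{
    public AudioClip closeMicPreMix; // e.g. the bass close-mic pre-mix (hypothetical)

    void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = closeMicPreMix;
        source.spatialize = true;   // route through the project's spatializer plugin
        source.spatialBlend = 1.0f; // fully 3D: rendered from the avatar's position
        source.Play();
    }
}
```

In a real session you would also want all of these sources to start in sample-accurate sync, so a single controller calling `AudioSource.PlayScheduled` on every source at the same DSP time is a safer pattern than independent `Play()` calls.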

For spatialisation, David decided to use the Google Resonance Audio plugin for Unity, as this offered a flexible set of tools and produced highly realistic sound environments in which to build our 6DOF world. There are many parameters in Resonance that greatly affect the end results and may at first be unfamiliar to traditional music engineers, including Min/Max Distance, Volume Rolloff, Listener Directivity, Source Directivity and Occlusion, to name just a few.
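To make those parameters a little more concrete, here is a hedged sketch of where they live in Unity. Distance attenuation is driven by the standard AudioSource settings; directivity and occlusion sit on Resonance's companion component. The ResonanceAudioSource field names below are from the Resonance Audio Unity SDK, but verify them against the SDK version you install, and the values shown are arbitrary examples.

```csharp
using UnityEngine;

// Sketch of tuning a single spatialized source, assuming Resonance Audio
// has been selected as the project's spatializer plugin.
public class SourceTuning : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();

        // Distance attenuation: full volume inside minDistance,
        // rolling off towards silence at maxDistance.
        source.minDistance = 1.0f;   // metres
        source.maxDistance = 15.0f;
        source.rolloffMode = AudioRolloffMode.Logarithmic;

        // Resonance-specific behaviour lives on its companion component.
        // Field names per the Resonance Audio Unity SDK; check your version.
        var resonance = GetComponent<ResonanceAudioSource>();
        if (resonance != null)
        {
            resonance.directivityAlpha = 0.5f;     // cardioid-like radiation pattern
            resonance.directivitySharpness = 2.0f; // narrows the pattern's lobe
            resonance.occlusionEnabled = true;     // attenuate when geometry blocks the source
        }
    }
}
```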

For the final Virtual Reality experience, we decided to use the Vive/SteamVR workflow, as the Vive system with its room-scale sensors lent itself to 6DOF in a very intuitive and flexible way.
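On the audio side, very little glue is needed for this to work: Unity's AudioListener just has to sit on the head-tracked camera, which SteamVR drives from the headset pose, and the spatializer re-renders every source relative to it as you walk. A small hypothetical sanity check, assuming the HMD camera is tagged MainCamera as in the SteamVR camera rig prefab:

```csharp
using UnityEngine;

// Ensure the head-tracked camera carries the AudioListener, so the 6DOF
// audio render follows the listener's actual position in the room.
public class ListenerCheck : MonoBehaviour
{
    void Start()
    {
        Camera head = Camera.main; // the HMD camera in a SteamVR rig
        if (head != null && head.GetComponent<AudioListener>() == null)
        {
            head.gameObject.AddComponent<AudioListener>();
        }
    }
}
```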

The end results for this joint research project have been nothing short of stunning. The experience is very realistic, and it really does sound as if you are in the room walking around with the band as they play. This project has opened up a lot of questions that need to be explored (that’s the nature of R&D) and also opened my eyes to new workflow opportunities for the studios.
 
The main message I would like to communicate through this post is that, from an independent music maker's point of view, you can go ahead and download Unity and Google Resonance for free and start experimenting with this technology right now. You can just use normal mono and stereo sound sources to give it a go: you don't need fancy Ambisonic or spatial array audio files. Although the learning curve might seem steep at first, it's surprising how quickly you can absorb these unfamiliar workflows as almost second nature once you play around with them. There are some amazing tutorial videos on YouTube for Unity, Resonance and SteamVR, so you never really feel alone, and if you do get stuck, help is usually at hand.

I hope this post inspires you to explore beyond your comfort zone, as we need more musicians and producers pushing the boundaries with this technology – have fun!
 
 
