[Figure: The main Unity scene editor window. Each sound source is represented by the white speaker graphics.]

Once the graphic model had been completed, David imported the render into the game engine, Unity. Avatars for each musician were then added to the scene, and mono and stereo pre-mixes of the close microphones were attached to them as audio objects. Next, we added the FOA microphone zones, the centre HOA microphone and the Hamasaki cube. The HOA microphone had to be downmixed to FOA because, unfortunately, Unity currently only supports the import of ambisonic audio files up to first order. There are third-party options to work around this limitation, but the time frame for our project did not allow us to explore them fully, and I hope Unity will rethink this limitation in the future.
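To illustrate how a close-microphone pre-mix might be attached to a musician's avatar as a spatialised audio object, here is a minimal Unity C# sketch. The component and field names (MusicianSource, closeMicPremix) are hypothetical; only the standard UnityEngine.AudioSource API is used.

```csharp
using UnityEngine;

// Hypothetical helper: attaches a close-microphone pre-mix to a musician's
// avatar so it plays back as a spatialised point-source audio object.
public class MusicianSource : MonoBehaviour
{
    // Assign the mono/stereo pre-mix clip in the Inspector.
    public AudioClip closeMicPremix;

    void Awake()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = closeMicPremix;
        source.loop = true;
        source.spatialBlend = 1.0f; // fully 3D: avatar position drives panning and attenuation
        source.spatialize = true;   // hand the source to the spatializer plugin
        source.Play();
    }
}
```

Attach one such component to each avatar and the spatializer takes care of positioning the pre-mix as the listener moves.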
For spatialisation, David decided to use the Google Resonance Unity plugin, as it offered a flexible set of tools and created highly realistic sound environments for our 6DOF world. Resonance has many parameters that greatly affect the end result and may at first be unfamiliar to traditional music sound engineers, including Min/Max Distance, Volume Roll-off, Listener Directivity, Source Directivity and Occlusion, to name just a few.
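To give a feel for these parameters, here is a rough sketch of tuning one source. The distance and roll-off settings use Unity's standard AudioSource API; the directivity and occlusion fields are set on the ResonanceAudioSource component from the Resonance Audio SDK, and those field names are assumptions that should be checked against the installed SDK version.

```csharp
using UnityEngine;

// Sketch of tuning the spatialisation parameters named above for one source.
public class SourceTuning : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();

        // Min/Max Distance and Volume Roll-off: how level falls away
        // as the listener walks off from the source.
        source.minDistance = 0.5f;                         // full level inside 0.5 m
        source.maxDistance = 15f;                          // inaudible beyond 15 m
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural inverse-distance feel

        // Source Directivity and Occlusion (Resonance Audio SDK component;
        // field names assumed from the SDK, verify against your version).
        var resonance = GetComponent<ResonanceAudioSource>();
        if (resonance != null)
        {
            resonance.directivityAlpha = 0.5f;   // cardioid-like radiation pattern
            resonance.directivitySharpness = 2f; // how tightly the pattern focuses
            resonance.occlusionEnabled = true;   // attenuate when geometry blocks the path
        }
    }
}
```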
For the final Virtual Reality experience, we decided to use the Vive/SteamVR workflow, as the Vive system, with its flexible room sensors, lent itself to 6DOF in a very intuitive way.
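For 6DOF audio, the essential link is that the scene's AudioListener rides on the head-tracked HMD camera, so that physically walking around the room moves the listening point through the mix. A minimal sketch, assuming the SteamVR [CameraRig] prefab, whose eye camera is tagged "MainCamera" by default:

```csharp
using UnityEngine;

// Ensures the scene's AudioListener sits on the head-tracked camera, so
// walking around the tracked space (6DOF) moves the listening point with you.
public class ListenerOnHmd : MonoBehaviour
{
    void Start()
    {
        Camera hmdCamera = Camera.main; // the tracked eye camera
        if (hmdCamera != null && hmdCamera.GetComponent<AudioListener>() == null)
        {
            hmdCamera.gameObject.AddComponent<AudioListener>();
        }
    }
}
```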
The end results for this joint research project have been nothing short of stunning. The experience is very realistic, and it really does sound as if you are in the room walking around with the band as they play. This project has opened up a lot of questions that need to be explored (that’s the nature of R&D) and also opened my eyes to new workflow opportunities for the studios.