
IBC: Surrounded by sound

By Simon Ray

I came to the 2016 IBC Show in Amsterdam at the start of a period of consolidation at Goldcrest in London. We had just gone through three years of expansion, upgrading, building and installing. Our flagship Dolby Atmos sound mixing theatre had just finished its first feature, Jason Bourne, and the DI department had recently upgraded to offer 4K and HDR.

I didn’t have a particular area to research at the show, but there were two things that struck me almost immediately on arrival: the lack of drones and the abundance of VR headsets.

Goldcrest’s Atmos mixing stage.

360 audio is an area I knew a little about (we had provided a binaural DTS Headphone:X mix at the end of the Jason Bourne project), but there was so much more to learn.

Happily, my first IBC meeting was with Fraunhofer, where I was updated on some of the developments they have made in production, delivery and playback of immersive and 360 sound. Of particular interest was their Cingo technology. This is a playback solution that lives in devices such as phones and tablets and can already be found in products from Google, Samsung and LG. This technology renders 3D audio content onto headphones and can incorporate head movements. That means a binaural render that gives spatial information to make the sound appear to be originating outside the head rather than inside, as can be the case when listening to traditionally mixed stereo material.

For feature films, for example, this might mean taking the 5.1 home theatrical mix and rendering it into a binaural signal to be played back on headphones, giving the listener the experience of always sitting in the sweet spot of a surround sound speaker set-up. Cingo can also support content with a height component, such as 9.1 and 11.1 formats, and add that into the headphone stream as well to make it truly 3D. I had a great demo of this and it worked very well.
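
Cingo's internals aren't public, but the virtual-loudspeaker approach described here can be sketched: each channel of the surround mix is convolved with a head-related impulse response (HRIR) pair measured for that speaker's position, and the results are summed into two ears. The function below is a generic illustration of that idea, not Fraunhofer's implementation; the HRIRs in the usage example are placeholder one-tap filters, not measured data.

```python
import numpy as np

def render_binaural(channels, hrirs):
    """Render a multichannel mix to 2-channel binaural audio.

    channels: dict mapping speaker name -> mono signal (1-D array)
    hrirs:    dict mapping speaker name -> (left_hrir, right_hrir),
              the HRIR pair for that virtual speaker position.
    Returns a (2, N) array: left and right ear signals.
    """
    n_sig = max(len(sig) for sig in channels.values())
    n_hrir = max(len(h[0]) for h in hrirs.values())
    out = np.zeros((2, n_sig + n_hrir - 1))
    for name, sig in channels.items():
        hl, hr = hrirs[name]
        # Convolve each channel with the ear filters for its position
        # and accumulate into the two output ears.
        out[0, :len(sig) + len(hl) - 1] += np.convolve(sig, hl)
        out[1, :len(sig) + len(hr) - 1] += np.convolve(sig, hr)
    return out

# Placeholder HRIRs: a real renderer would use measured filters per
# speaker angle. Here "L" leaks half-level into the opposite ear.
channels = {"L": np.array([1.0, 0.0]), "R": np.array([0.0, 1.0])}
hrirs = {"L": (np.array([1.0]), np.array([0.5])),
         "R": (np.array([0.5]), np.array([1.0]))}
binaural = render_binaural(channels, hrirs)
```

A height layer (9.1, 11.1) fits the same scheme: the overhead channels simply get HRIR pairs for elevated virtual speaker positions.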

I was impressed that Fraunhofer had also created a tool for making immersive content: a plug-in called Cingo Composer, available in both VST and AAX formats, that runs in Pro Tools, Nuendo and other DAWs to aid the creation of 3D content. For example, content could be mixed and automated in an immersive soundscape and then rendered into an FOA (First Order Ambisonics, or B-format) four-channel file that could accompany 360 video on VR headsets with headtracking.
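
The panning stage of such a tool can be sketched generically (this is not Cingo Composer's actual algorithm, just the standard first-order encoding it would need to produce). A mono source at a given azimuth and elevation becomes four FOA channels; I use the AmbiX convention here (ACN channel order W, Y, Z, X with SN3D normalisation), which is what most VR video players expect.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into 4-channel first-order ambisonics
    (AmbiX: ACN order W, Y, Z, X; SN3D normalisation).

    Azimuth is counter-clockwise from straight ahead; elevation is
    upward from the horizontal plane.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono                              # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)    # left/right figure-of-eight
    z = mono * np.sin(el)                 # up/down figure-of-eight
    x = mono * np.cos(az) * np.cos(el)    # front/back figure-of-eight
    return np.stack([w, y, z, x])

# A source panned hard left (azimuth 90 degrees) puts all its
# directional energy into the Y channel.
foa_left = encode_foa(np.ones(4), 90.0, 0.0)
```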

After Fraunhofer, I went straight to DTS to catch up with what they were doing. We had recently completed some immersive DTS:X theatrical, home theatrical and, as mentioned above, headphone mixes using the DTS tools, so I wanted to see what was new. There were some nice updates to the content creation tools, players and renderers and a great demo of the DTS decoder doing some live binaural decoding and headtracking.

With immersive and 3D audio being the exciting new thing, there were other interesting products on display in this area. In the Future Zone, Sennheiser was showing their Ambeo VR mic (see picture, right). This is an ambisonic microphone with four capsules arranged in a tetrahedron, whose outputs make up the A-format. Sennheiser also provides a proprietary A-to-B format encoder, available as a VST or AAX plug-in on Mac and Windows, that processes the four microphone outputs into the W, X, Y and Z signals (the B-format).

From the B-format it is possible to recreate the 3D soundfield, but you can also derive any number of first-order microphones pointing in any direction in post! The demo (with headtracking and 360 video) of a man speaking by the fireplace was recorded just using this mic and was the most convincing of all the binaural demos I saw (heard!).
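
The core of the A-to-B conversion is a simple sum-and-difference matrix on the four capsule signals. The sketch below uses the standard tetrahedral capsule naming (my labels, not Sennheiser's); a real encoder like Sennheiser's plug-in also applies frequency-dependent correction filters to compensate for capsule spacing, which I've omitted.

```python
import numpy as np

def a_to_b(lfu, rfd, lbd, rbu):
    """Convert tetrahedral A-format capsule signals to B-format.

    Capsules: lfu = left-front-up, rfd = right-front-down,
              lbd = left-back-down, rbu = right-back-up.
    Returns (w, x, y, z). Correction filtering for the physical
    capsule spacing is omitted from this sketch.
    """
    w = 0.5 * (lfu + rfd + lbd + rbu)   # omni (pressure)
    x = 0.5 * (lfu + rfd - lbd - rbu)   # front minus back
    y = 0.5 * (lfu - rfd + lbd - rbu)   # left minus right
    z = 0.5 * (lfu - rfd - lbd + rbu)   # up minus down
    return w, x, y, z
```

Deriving a "virtual microphone" in post is then just a weighted sum of W, X, Y and Z for the direction and pattern you want.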

Still in the Future Zone, for creating brand-new content I visited the makers of the Spatial Audio Toolbox, which is similar to Fraunhofer's Cingo Composer. B-Com's Spatial Audio Toolbox contains VST plug-ins (soon to be AAX) that let you create an HOA (higher order ambisonics) encoded 3D sound scene from standard mono, stereo or surround sources (using HOA Pan) and then listen to that scene on headphones (using Render Spk2Bin).

The demo we saw at the stand was impressive and included headtracking. The plug-ins themselves were running on a Pyramix on the Merging Technologies stand in Hall 8. It was great to get my hands on some “live” material and play with the 3D panning and hear the effect. It was generally quite effective, particularly in the horizontal plane.
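
One reason ambisonics suits headtracked demos like this is that rotating the whole scene is cheap: for first order, a head turn is just a small matrix applied to the X and Y channels. The sketch below is a generic illustration of that step, not B-Com's implementation; in a headtracked renderer you would counter-rotate the scene by the head's yaw so sources stay fixed in the world as the listener turns.

```python
import numpy as np

def rotate_foa_yaw(w, x, y, z, yaw_deg):
    """Rotate a first-order B-format scene about the vertical axis.

    A source at azimuth a encodes as x = cos(a), y = sin(a); rotating
    the scene by yaw_deg moves it to azimuth a + yaw_deg. W (omni)
    and Z (height) are unaffected by a rotation about the z axis.
    For headtracking, pass the negative of the measured head yaw.
    """
    c = np.cos(np.radians(yaw_deg))
    s = np.sin(np.radians(yaw_deg))
    return w, x * c - y * s, x * s + y * c, z
```

Higher orders rotate just as exactly, only with larger per-order matrices, which is why HOA scenes can be re-oriented on a phone at frame rate before the binaural decode.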

I found all this binaural and VR stuff exciting. I am not sure exactly how and if it might fit into a film workflow, but it was a lot of fun playing! The idea of rendering a 3D soundfield into a binaural signal has been around for a long time, with mixed success (I even dedicated months of my final year at university to a project on that very subject). It is exciting to see that today's mobile devices contain the processing power to render the binaural signal on the fly. Combine that with VR video and headtracking, and the ability to feed that head movement into the rendering process, and you have an offering that is very impressive when demonstrated.

I will be interested to see how content creators, specifically in the film area, use this (or don’t). The recreation of the 3D surround sound mix over 2-channel headphones works well, but whether headtracking gets added to this or not remains to be seen. If the sound is matched to video that’s designed for an immersive experience, then it makes sense to track the head movements with the sound. If not, then I think it would be off-putting. Exciting times ahead anyway.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.