
D-Cinema Summit: standardization of immersive sound formats

By Mel Lambert

“Our goal is to develop an interoperative audio-creation workflow and a single DCP that can be used to render to whatever playback format – Dolby Atmos, Barco/Auro 3D, DTS:X/MDA – has been installed in the exhibition space,” stated Brian Vessa, chairman of SMPTE Technology Committee 25CSS, which is considering a common standardized method for delivering immersive audio to cinemas. Vessa, who also serves as executive director of Digital Audio Mastering at Sony Pictures Entertainment, was speaking at this past weekend’s joint SMPTE/NAB Technology Summit on Cinema during a session focused on immersive sound formats, chaired by sound specialist Julian Pinn; also on the panel was Will Files, a leading sound designer from Skywalker Sound best known for his work on Dawn of the Planet of the Apes, Thor: The Dark World, Star Trek Into Darkness, Brave and Mud.

“Standardization will foster innovation,” Vessa stressed. “Sound is a big part of the moviegoing experience,” Files confirmed. “It is worth the time and effort to get it right, because immersive sound can add to that experience.” To date, some 300 films have been re-recorded in immersive formats, and currently can be seen on more than 1,000 screens around the world.

As Vessa explained, SMPTE has been actioned by Digital Cinema Initiative – a joint venture of Disney, Fox, Paramount, Sony Pictures, Universal and Warner Bros. film studios – to develop an open-format, object-based playback standard for immersive audio, allowing the same Digital Cinema Package/DCP carrying an object-based soundtrack to play back on any immersive sound-equipped theater anywhere in the world.

“That single, interoperable distribution file format will carry object-based audio essence for use within the D-cinema architecture,” Vessa explained. “Currently there are three distribution formats. In addition to Dolby and Auro, there is the Multi-Dimensional Audio (MDA) format, an uncompressed PCM sound format that derives from research initiated at SRS Labs and refined by DTS, and which is the distribution format for DTS:X. All object-based immersive surround formats enable a variable number of sound objects within three-dimensional space to be addressed individually during the re-recording process,” Vessa stated, “with metadata that identifies where each object is located.”

From left: Session chair Julian Pinn, with Sony Pictures Entertainment’s Brian Vessa and Will Files from Skywalker Sound.

“Multiple formats, including the new 12-channel IMAX, mean that we need more time on the dub stage,” Files continued. “We might final a mix in two or three weeks and then spend another two weeks making all the other formats. But there are major technical differences between [the speaker layouts of] current immersive formats that affect the creative decisions we make during a mix,” the sound designer/re-recording mixer advised. “Auro involves a pair of five-channel arrays on two different levels, designed to operate at [a reference level of] 82 dBC, with no bass management, while [object-based] Atmos utilizes a single surround layer with individually addressable speakers, with the extra surround speakers in the front third of the auditorium, plus two rows of overhead speakers; all surrounds operate at 85 dBC. Also, Atmos specifies full-range speakers for the surrounds, with bass management.” All of which means that a mix made in one format might not necessarily translate easily to another.

“Immersive sound is not as simple as 5.1- or 7.1-channel mixing,” Files concluded. “We need Dolby and Auro consultants at the end of the project to help us achieve the results we are seeking. But the uptake of immersive has been twice as fast as 5.1-channel, which says something for the brand shepherding [by Dolby and Barco] of their respective formats.”

“It is important,” Pinn summarized, “for such a standards initiative to be mindful that the artists are often best placed to optimize a sound mix for the varying sound formats. New challenges are likely to result from such work.”

Other topics covered during the well-attended two-day summit included workflows and post tools for high frame rate projects, distributed post-production, high dynamic range shoots, plus on-set ingest and display workflows. Andy Maltz, managing director of the Academy of Motion Picture Arts and Sciences’ Science and Technology Council, provided an update on Academy Color Encoding System/ACES, a digital production standard for image interchange, color management and long-term archiving. As Maltz explained, a production-ready version – ACES 1.0 – is now being integrated into film and TV production and post hardware, with support from 22 partners. “The film/TV industry needs new ‘wiring’,” he stressed. “ACES has been innovated by our industry for our industry, and represents ‘Color Science in a Box’ for smaller productions.”

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Main Photo Caption: From left: Sony Pictures Entertainment’s Brian Vessa, session chair Julian Pinn and Will Files from Skywalker Sound.
