
‘Future of Audio Tech’ confab tackles acoustics, loudness, more

By Mel Lambert

Organized by the Audio Engineering Society, “The Future of Audio Entertainment Technology: Cinema, Television and the Internet” conference addressed the myriad challenges facing post professionals working in the motion picture and home delivery industries. Co-chaired by Dr. Sean Olive and Brian McCarty, and held at the TCL Chinese Theatre in Hollywood in early March, the three-day gathering comprised several keynote addresses, workshops and papers sessions.

In addition to sponsorship from Dolby, Harman, Auro3D, Avid, Sennheiser, DTS, NBC Universal Studio Post, MPSE and SMPTE, the event attracted a reported 155 attendees.

Referencing a report last year in The Hollywood Reporter that more than 350 different distribution versions were needed for the release of Marvel Studios’ Captain America: The Winter Soldier, McCarty argued that the film industry now requires a single, interoperable standard for soundtracks. As he stated, SMPTE Technical Committee 25CSS is currently developing an open standard for object-based delivery of immersive audio to cinemas, independent of the theater’s playback configuration.

A subsequent workshop, “Cinema Immersive Audio Delivery Standards,” was chaired by Sony Pictures Entertainment’s Brian Vessa, chairman of the SMPTE TC 25CSS. The committee received proposals in September 2013 from Dolby Labs and the MDA Cinema Proponents Group, which includes DTS, QSC, Barco and Auro Technologies, “and plans to develop an open standard for a single immersive mix that would work on any playback format,” Vessa stressed.

A companion workshop titled “Integrating Object-, Scene- and Channel-Based Immersive Audio for Delivery to the Home” enabled Auro Technologies, Dolby and the MPEG-H Audio Alliance to outline the virtues of their respective encoding and delivery schemes. The in-development ATSC 3.0 standard for DTV transmission over terrestrial, cable and satellite networks — which is scheduled to be implemented within two to five years — will offer multichannel immersive capabilities, including enhanced surround sound with height channels plus additional dialog and commentary tracks.

As a prelude to his opening day keynote speech (our main image), Avid chairman/CEO Louis Hernandez Jr. stated, “Our business model is now under tremendous pressure,” with online access to entertainment media and a growing emphasis on collaborative production. “With dollars moving away from our creative [community] to the delivery industry, how you monetize an asset has changed dramatically. We need to reward the creative focus.”

He confirmed that some 10,000 Avid Pro Tools and Media Composer workstations are currently being used at post facilities. “Everyone has to develop flexible workflows,” said Hernandez Jr., adding that Avid Everywhere’s collaborative, end-to-end ecosphere was developed to deliver connectivity opportunities based on the firm’s MediaCentral Platform.

During a keynote address “Acoustics for the Theater and Home — Moving Forward on a Foundation of Common Acoustical Science,” Dr. Floyd Toole provided an overview of the ways in which re-recording stages can ensure consistent frequency-response and dynamic-range performance. Citing studies of loudspeaker directivity and room-response parameters, Toole mentioned a recent SMPTE report, called TC-25CSS B-Chain Frequency and Temporal Response Analysis of Theatres and Dubbing Stages, which revealed dramatic performance differences between dub stages and film theaters.

In her paper titled “Predicting the In-Room Response of Cinemas from Anechoic Loudspeaker Data,” Linda Gedemer from the University of Salford, UK, concluded that measurement and calibration of film theaters should focus on the targeted loudspeakers’ anechoic test performance in addition to determining their interaction with the playback environment. Toole stated that the film industry’s X-Curve “is only used within dub stages and motion picture theaters. It fits no pattern in natural acoustics or normal listening.”
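For readers unfamiliar with it, the X-Curve Toole criticizes is, in broad strokes, a flat in-room response through the midband that rolls off at roughly 3 dB per octave above 2 kHz. A minimal sketch of that high-frequency target follows — it ignores the standard’s low-frequency behavior and tolerance windows, and the function name is mine, not from the spec:

```python
import math

def x_curve_target_db(freq_hz):
    """Rough X-curve target level in dB, relative to the midband:
    flat up to the 2 kHz hinge point, then about -3 dB per octave.
    (A sketch only; see SMPTE ST 202 for the actual tolerances.)"""
    if freq_hz <= 2000.0:
        return 0.0
    # each doubling of frequency above 2 kHz loses ~3 dB
    return -3.0 * math.log2(freq_hz / 2000.0)

for f in (1000, 2000, 4000, 8000, 16000):
    print(f"{f:>6} Hz: {x_curve_target_db(f):+.1f} dB")
```

So a theater aligned to the X-Curve is roughly 9 dB down at 16 kHz relative to the midband, which is the mismatch with “normal listening” rooms that Toole is pointing at.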

“Statistically, there is as much speech intelligibility below 1.8 kHz as above,” stated acoustics expert Peter Mapp during his workshop, “Speech and Dialog Intelligibility in Audio Entertainment — Great Picture — Pity about the Dialogue.” Citing long-term investigations into the Speech Transmission Index (STI), which describes the ability of systems to positively or negatively impact speech intelligibility, Mapp explained that, for immersive sound formats such as Dolby Atmos and Auro3D, “[our hearing] is able to detect a lateral shift of just 1-2 degrees and a discrepancy between visual and auditory voice locations of around 11 degrees.” When that difference reaches about 20 degrees, “the effect can be annoying; it becomes so distracting and fatiguing that intelligibility is lost.”

Several sessions addressed the importance of loudness control for film and TV. Netherlands-based Eelco Grimm from HKU University and Grimm Audio stated that, in an attempt to reduce complaints from cinema audiences about loud soundtracks, many European theaters are lowering the standard Dolby playback levels from “7” to between 4.0 and 5.5, with resultant reduction in dynamic range and loss of dialog intelligibility. To ensure consistency, European re-recording stages also are lowering playback levels which, Grimm warned, can lead to further loudness incompatibilities between films mixed in different parts of the world.
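To put Grimm’s fader numbers in perspective: on typical Dolby cinema processors, each fader step in the 4–10 range is commonly cited as about 3.33 dB, with “7” as the 85 dB SPL alignment reference. Under that assumption — the per-step figure and helper name are mine, not from the talk; check the processor manual — the reductions he describes work out as follows:

```python
def fader_attenuation_db(fader, db_per_step=10.0 / 3.0):
    """Attenuation relative to the Dolby reference fader setting of 7.
    Assumes the commonly cited ~3.33 dB per step over the 4-10 fader
    range; actual scaling varies by processor model."""
    return (7.0 - fader) * db_per_step

for setting in (5.5, 4.0):
    print(f"fader {setting}: about {fader_attenuation_db(setting):.0f} dB down")
```

That is, the European theaters Grimm surveyed are playing soundtracks roughly 5 to 10 dB below the level at which they were mixed — enough to push quiet dialog toward inaudibility.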

Lon Bender

Lon Bender, co-founder of HIPFLix with former Soundelux partner Wylie Stateman, and supervising sound editor/sound designer with Formosa Group, demonstrated (left) his firm’s custom remixing process to improve intelligibility of film soundtracks for home viewers. He showed scenes from a re-mastered version of director John McTiernan’s The Thomas Crown Affair with enhanced dialog levels against background music and sound effects. “As our population ages,” Bender explained, “there is an increasing need for home entertainment that’s accessible to individuals with hearing deficiencies. People who cannot hear the dialog lose track of the most important element: the story.”

The audio equivalent of large-type books, the HIPFLix Clarity Algorithm offers clearer dialog by reducing competing sounds. Loud and quiet passages are also balanced to match the dialog level more closely, so viewers do not need to constantly adjust playback levels. “We use EQ, compression and limiting to carve a clearer space for the dialog,” Bender stated.
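The general idea of “carving space” for dialog can be illustrated with a toy ducking pass — this is my own illustration of the concept, not HIPFLix’s proprietary Clarity Algorithm: wherever the dialog track carries energy, the competing music-and-effects bed is attenuated by a fixed amount before the two are summed.

```python
def duck_background(dialog, background, threshold=0.05, duck_db=-6.0):
    """Toy dialog-clarity pass: attenuate the music-and-effects bed
    by duck_db wherever the dialog sample exceeds the threshold.
    (Illustrative only; a real system would smooth the envelope and
    use frequency-selective EQ rather than broadband gain.)"""
    duck_gain = 10 ** (duck_db / 20.0)  # -6 dB is ~0.5 in linear gain
    mixed = []
    for d, b in zip(dialog, background):
        gain = duck_gain if abs(d) > threshold else 1.0
        mixed.append(d + b * gain)
    return mixed

dialog = [0.0, 0.0, 0.5, 0.5, 0.0]   # a short dialog burst
background = [0.2] * 5               # steady music-and-effects bed
mix = duck_background(dialog, background)
```

During the burst the bed sits about 6 dB lower than elsewhere, which is the “clearer space” Bender describes — without touching the overall program level.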

During the workshop “Opportunities and Challenges in The Transition to Streamed Delivery of Audio Content,” Dolby’s Jeffrey Riedmiller stated that “subscription services [for media delivery] are increasing while downloads are decreasing; physical media is still significant in a few countries, including Germany and Japan.” Starz Entertainment’s Sean Richardson explained that “60 percent of peak access via the Internet is for realtime entertainment from Netflix and other OTT services, while mobile users access shorter-duration media from YouTube and the like.”

Over-the-top suppliers “are now looking at software-based solutions to offer enhanced flexibility for their targeted platforms,” Riedmiller stated. “Since bandwidth is limited, bit-reduction is a valuable [function] with more efficient compression schemes. The Dolby creed is: ‘Create Once, Play Anywhere.’”

Nuno Fonseca talking about particle systems.

As part of a workshop called “Applications and Challenges of Object-Based Broadcasting,” Nuno Fonseca from the Polytechnic Institute of Leiria, Portugal, outlined a sound-design technique based on a particle system for preparing object-based immersive soundtracks — a technique similar to that used by VFX software to create smoke, fire, explosions and debris trails. “We can spread particles over space,” Fonseca stated, “with a virtual microphone placed in the middle to capture the resultant sound.”
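A minimal sketch of that virtual-microphone idea, under my own simplifying assumptions (each particle is an omnidirectional point source, and the microphone applies simple 1/distance amplitude attenuation — Fonseca’s actual renderer is certainly more sophisticated):

```python
import math
import random

def virtual_mic_gains(particles, mic=(0.0, 0.0, 0.0)):
    """For each particle position (x, y, z), return the amplitude gain
    heard at a virtual microphone, using 1/distance attenuation.
    (A sketch of the concept, not Fonseca's implementation.)"""
    gains = []
    for pos in particles:
        dist = math.dist(pos, mic)
        gains.append(1.0 / max(dist, 1e-6))  # avoid divide-by-zero at the mic
    return gains

# scatter 100 'debris' particles through a 10 m cube around the mic
random.seed(1)
cloud = [(random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5))
         for _ in range(100)]
gains = virtual_mic_gains(cloud)
```

Summing each particle’s source signal, scaled by its gain (and, in a real system, delayed and panned by its direction), yields the composite “swarm” sound at the listening position.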

During the “How to Make Big Small — Can We Really Bring Immersive Sound to the Home?” workshop, chaired by Dr. Francis Rumsey, who heads up the AES Technical Council, Frank Melchior from the BBC’s R&D department in the UK described a “baseline renderer” that can be used with new file and streaming formats for next-generation immersive audio systems to generate, for example, variable-length programming using spatial audio scenes.

Fellow workshop participant Brian Vessa focused on remastering cinema mixes for the home, which involves “lowering the monitoring level for domestic mixes, with a reduced dynamic range and a revised balance between DME [dialog, music and effects] elements, to prevent the loss of subtle details, including quiet dialog, Foley, backgrounds and music. It should be a creative process, not an automated one,” he stressed.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service. Follow him on Twitter @MelLambertLA. Thanks to Mel for the photos from the event as well.
