SMPTE 2015: the impact of immersive audio formats

By Mel Lambert

“In the very near future, immersive audio will be everywhere,” stated William Redmann, Technicolor’s director of standards for immersive media technologies. “It will enhance our dramas, seat us in sports venues and put us on the field [for outdoor events]. Now it’s just a matter of making it happen!”

Technicolor’s technologist was chairing a fascinating session during the SMPTE Technical Conference & Exhibition, held at Loews Hotel in the heart of Hollywood in late October, focusing on new sound capture and production techniques for cinema and broadcast that will be crucial for the effective authoring of object-based immersive audio.

“Support of these premium experiences requires new mixing skills, tools, tests, monitors and a pervasive respect for the artist’s intent at all levels of presentation… and that includes legacy formats,” Redmann stressed. “Nobody is predicting that stereo will disappear!”

Steven Silva

Steven Silva, VP of technology and strategy at Twentieth Century Fox, provided a succinct overview of “Object-Based Audio for Live TV Production,” with a focus on the development of the new ATSC 3.0 specification for terrestrial TV broadcasting, which is expected to include multichannel immersive opportunities, using object-based and scene-based audio to enhance the consumer experience.

“The new format will offer qualitative improvements over the current ATSC standard,” Silva stressed, “with more than the current six-channel configuration at lower bit rates, yet with enhanced loudness and dynamic-range capabilities.”

Focusing on scene-based audio, Dr. Nils Peters (pictured in our main image) — a staff research engineer at Qualcomm and co-chair of the AES technical committee on spatial audio — presented an interesting overview of Higher Order Ambisonics (HOA), a technology that can be used to create “holistic descriptions of captured sound scenes, independent from particular loudspeaker layouts.”

As Peters explained, immersive sound is carried by a series of compressed/uncompressed digital channels that contain predominant sounds and companion ambiences. “This is different from conventional channel-based formats, which send one signal for each loudspeaker output; up- or down-mixing would be needed if another speaker configuration is used.” Scene-based audio is said to enable immersive sound with objects at bit rates comparable to current formats.
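Peters’ point about layout independence can be illustrated with first-order Ambisonics, the simplest case of HOA. The sketch below is a hypothetical illustration (not Qualcomm’s implementation): a mono source is encoded into four B-format channels that describe the sound field itself, and any loudspeaker direction can then sample that field at playback time.

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono source into first-order B-format (W, X, Y, Z).

    The four channels describe the captured sound scene itself,
    independent of any loudspeaker layout (traditional B-format
    convention, with W scaled by 1/sqrt(2)).
    """
    w = signal / np.sqrt(2.0)                         # omnidirectional
    x = signal * np.cos(azimuth) * np.cos(elevation)  # front/back
    y = signal * np.sin(azimuth) * np.cos(elevation)  # left/right
    z = signal * np.sin(elevation)                    # up/down
    return np.stack([w, x, y, z])

def decode_foa(bformat, azimuth, elevation):
    """Sample the encoded field for one loudspeaker at a given direction
    (a basic 'sampling' decoder; any speaker layout can be fed this way)."""
    w, x, y, z = bformat
    return 0.5 * (np.sqrt(2.0) * w
                  + x * np.cos(azimuth) * np.cos(elevation)
                  + y * np.sin(azimuth) * np.cos(elevation)
                  + z * np.sin(elevation))

# A 440Hz tone placed 90 degrees to the left, at ear level:
sr = 48000
t = np.arange(sr) / sr
mono = np.sin(2 * np.pi * 440 * t)
bformat = encode_foa(mono, azimuth=np.pi / 2, elevation=0.0)

# A speaker aimed straight at the source recovers the full signal;
# the same four channels could drive any other layout without remixing.
left = decode_foa(bformat, np.pi / 2, 0.0)
```

Higher orders simply add more spherical-harmonic channels for finer spatial resolution, which is why HOA scales to dense immersive layouts without a channel-per-speaker transmission format.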

Nuno Fonseca, a professor at the Polytechnic Institute of Leiria, Portugal, and an invited professor at the Lisbon School of Music, described his ongoing research into enhanced sound design and 3D mixing, including a proprietary Sound Particles app that is reportedly being evaluated at several Hollywood studios. As Fonseca stressed, VFX software already generates moving 3D objects within a virtual space, with a moving virtual camera rendering that scene. “Unfortunately, the same concept is not used for audio post production,” he stated. Instead of using a conventional DAW, in reality the audio equivalent of video editing software, Fonseca is developing a particle-based program for creating virtual 3D audio scenes.

Brian Claypool

The “Listening Test Methodology for Object-Based Audio Rendering Interoperability” session comprised a three-way presentation from current innovators of immersive sound systems. As Brian Claypool, senior director of strategic business development at Barco Audio Technologies, explained, work is progressing on the development of listening tests that will ensure the “preservation and representation of artistic intent.” These criteria are the basic focus of SMPTE Technology Committee 25CSS, which is developing an interoperable audio-creation workflow and a single DCP that can be used regardless of which playback system (Dolby Atmos, Barco Auro 3D or DTS MDA) is installed in the exhibition space.

Other participants included Dr. Markus Mehnert, head of technology at Barco Audio Technologies, which markets the Auro 3D format to the motion picture community, and Bert Van Daele, chief technology officer with Auro Technologies NV, which, like Barco, is based in Belgium. “These subjective, objective and combined tests will check the render compatibility of competitive immersive sound systems replaying the proposed SMPTE single-file [interoperable] format,” Van Daele stressed. “We are working on test procedures that will check object size/spread, object positions and other parameters to retain the director’s artistic intent.”

In “Monitoring and Authoring of 3D Immersive Next-Generation Audio Formats,” Peter Poers, managing director of marketing and sales at Junger Audio, Germany, stressed that easy adoption of next-generation immersive formats will require changes in audio production workflows to accommodate additional audio channels and object-based content. “Monitoring the audio material, along with authoring and verification of dynamic metadata, will become a new challenge,” Poers emphasized. New procedures for managing object-based content need to be established, along with personalization services that let consumers select alternative audio objects, such as commentator languages, and control loudness.

Mel Lambert is principal of LA-based Content Creators. He can be reached at mel.lambert@content-creators.com and followed on Twitter @MelLambertLA.

Colorfront demos UHD HDR workflows at SMPTE 2015

Colorfront used the SMPTE 2015 Conference in Hollywood to show off the capabilities of its upcoming 2016 products supporting UHD/HDR workflows. New products include the Transkoder 2016 and On-Set Dailies 2016. Upgrades allow for faster, more flexible processing of the latest UHD HDR camera, color, editorial and deliverables formats for digital cinema, high-end episodic TV and OTT Internet entertainment channels.

Colorfront’s Bruno Munger filled us in on some of the highlights:
·   Transkoder and On-Set Dailies feature Colorfront Engine, an ACES-compliant, HDR-managed color pipeline that enables on-set look creation and ensures the color fidelity of UHD/HDR materials and metadata through the camera-to-post chain. Colorfront Engine supports the full dynamic range and color gamut of the latest digital camera formats and maps them into industry-standard deliverables, such as the latest IMF specs, AS-11 DPP and HEVC, at a variety of brightness, contrast and color ranges on current display devices.
·   The mastering toolset for Transkoder 2016 is enhanced with new statistical analysis tools for immediate HDR data graphing. Highlights include MaxCLL and MaxFALL calculations, plus HDR mastering tools with tone and gamut mapping for a variety of target color spaces, including Rec. 2020, P3 D65 and XYZ, as well as the PQ curve and BBC-NHK Hybrid Log Gamma.
·    New for Transkoder 2016 are tools to concurrently color grade HDR and SDR UHD versions, cutting down the complexity, time and cost of delivering multiple masters at once.
·    Transkoder 2016 will output simultaneous, realtime grades on 4K 60p material to dual Sony BVM-X300 OLED broadcast monitors, concurrently processing HDR (SMPTE ST 2084 PQ, Rec. 2020, 1,000 nits) and SDR (Rec. 709, 100 nits), while visually graphing MaxFALL/MaxCLL light values per frame.
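The MaxCLL/MaxFALL values graphed above have simple definitions (per CTA-861.3): a pixel’s light level is the maximum of its R, G, B components; MaxCLL is the brightest pixel in the clip, and MaxFALL is the highest per-frame average. A minimal sketch, assuming frames already converted to linear light in nits (the function name and synthetic frames are illustrative, not Colorfront’s tooling):

```python
import numpy as np

def max_cll_fall(frames):
    """Compute (MaxCLL, MaxFALL) for a clip.

    `frames` is an iterable of (H, W, 3) arrays of linear light in
    cd/m^2 (nits). Per CTA-861.3, each pixel's content light level is
    max(R, G, B); MaxCLL is the brightest pixel anywhere in the clip,
    MaxFALL the brightest frame-average light level.
    """
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        pixel_levels = frame.max(axis=-1)        # per-pixel max(R, G, B)
        max_cll = max(max_cll, pixel_levels.max())
        max_fall = max(max_fall, pixel_levels.mean())
    return max_cll, max_fall

# Two tiny synthetic frames: one dim, one with a 1,000-nit highlight.
dim = np.full((2, 2, 3), 50.0)
bright = np.full((2, 2, 3), 100.0)
bright[0, 0] = [1000.0, 200.0, 100.0]
cll, fall = max_cll_fall([dim, bright])  # → (1000.0, 325.0)
```

In practice the heavy lifting is decoding each frame out of the PQ (ST 2084) transfer curve into nits before this analysis; the statistics themselves are just the running max and mean shown here.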

Advanced dailies toolset enhancements include:
·    Support for the latest camera formats, including full Panasonic Varicam35 VRAW, AVC Intra 444, 422 and LT support, Canon EOS C300 Mark II with new Canon Log2 Gamma, ARRI Alexa 65 and Alexa SXT, Red Weapon, Sony XAVC and the associated image metadata from all of these.
·    The new Multi-view Dailies capability for On-Set Dailies 2016, which allows concurrent, realtime playback and color grading of all cameras and camera views.
·    Transwrapping, which allows essence data (the RAW, compressed audio/video and metadata inside a container such as MXF or MOV) to pass through the transcoding process without re-encoding, enabling frame-accurate insert editing on closed digital deliverables. This workflow can be a great time saver in day-to-day production, allowing Transkoder users to quickly generate new masters from changed and versioned content in the major mastering formats, such as IMF, DCI and ProRes, and to efficiently trim camera-original media for VFX pulls and final conform from ARRI, Red and Sony cameras.