Tag Archives: Dolby Atmos

Sony Pictures Post adds home theater dub stage

By Mel Lambert

Reacting to the increasing popularity of home theater systems that offer immersive sound playback, Sony Pictures Post Production has added a new mix stage to accommodate next-generation consumer audio formats.

Located in the landmark Thalberg Building on the Sony Pictures lot in Culver City, the new Home Theater Immersive Mix Stage features a flexible array of loudspeakers that can accommodate not only Dolby Atmos and Barco Auro-3D immersive consumer formats, but also other configurations as they become available, including DTS:X, as well as conventional 5.1- and 7.1-channel legacy formats.

The new room has already seen action on an Auro-3D consumer mix for director Paul Feig’s Ghostbusters, as well as mixes of director Antoine Fuqua’s The Magnificent Seven in both Atmos and Auro-3D. It is scheduled to handle home theater mixes for director Morten Tyldum’s new sci-fi drama Passengers, which will be overseen by Kevin O’Connell and Will Files, the re-recording mixers who worked on the theatrical release.

L-R: Nathan Oishi; Diana Gamboa, director of Sony Pictures Post Sound; Kevin O’Connell, re-recording mixer on ‘Passengers’; and Tom McCarthy.

“This new stage keeps us at the forefront in immersive sound, providing an ideal workflow and mastering environment for home theaters,” says Tom McCarthy, EVP of Sony Pictures Post Production Services. “We are empowering mixers to maximize the creative potential of these new sound formats, and deliver rich, enveloping soundtracks that consumers can enjoy in the home.”

Reportedly, Sony is one of the few major post facilities that currently can handle both Atmos and Auro-3D immersive formats. “We intend to remain ahead of the game,” McCarthy says.

The consumer mastering process involves repurposing original theatrical release soundtrack elements for a smaller domestic environment at reduced playback levels suitable for Blu-ray, 4K Ultra HD disc and digital delivery. The Home Atmos format uses a 7.1.4 configuration: a horizontal array of seven loudspeakers — three up front, two side channels and two rear surrounds — plus four overhead/height channels and a subwoofer/LFE channel. The consumer Auro-3D format, in essence, involves a pair of 5.1-channel loudspeaker arrays — left, center, right plus two rear surround channels — located one above the other, with all speakers approximately six feet from the listening position.
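As a rough sketch of the configuration described above, the 7.1.4 home Atmos bed can be written out as a simple layout table. The channel names follow common practice, and the azimuth/elevation angles are nominal illustrative assumptions, not Dolby's published placement specification:

```python
# Sketch of a 7.1.4 home Atmos layout: channel name -> (azimuth deg, elevation deg).
# Azimuth 0 = straight ahead, positive = listener's left. Angles are nominal
# assumptions for illustration, not a published Dolby specification.
ATMOS_7_1_4 = {
    "L": (30, 0), "C": (0, 0), "R": (-30, 0),   # three up front
    "Lss": (90, 0), "Rss": (-90, 0),            # two side channels
    "Lrs": (150, 0), "Rrs": (-150, 0),          # two rear surrounds
    "Ltf": (45, 45), "Rtf": (-45, 45),          # front overhead/height pair
    "Ltr": (135, 45), "Rtr": (-135, 45),        # rear overhead/height pair
    "LFE": (0, 0),                              # subwoofer/LFE channel
}

# Count the groups named in the format: 7 ear-level beds plus 4 heights plus LFE.
ear_level = [ch for ch, (_, el) in ATMOS_7_1_4.items() if el == 0 and ch != "LFE"]
overhead = [ch for ch, (_, el) in ATMOS_7_1_4.items() if el > 0]
```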

Formerly an executive screening room, the new 600-square-foot stage is designed to replicate the dimensions and acoustics of a typical home-theater environment. According to the facility’s director of engineering, Nathan Oishi, “The room features a 24-fader Avid S6 control surface console with Pan/Post modules. The four in-room Avid Pro Tools HDX 3 systems provide playback and record duties via Apple 12-Core Mac Pro CPUs with MADI interfaces and an 8TB Promise Pegasus hard disk RAID array, plus a wide array of plug-ins. Picture playback is from a Mac Mini and Blackmagic HD Extreme video card with a Brainstorm DCD8 Clock for digital sync.”

An Avid/DAD AX32 Matrix controller handles monitor assignments, which then route to a BSS BLU 806 programmable EQ that handles all standard B-chain duties for distribution to the room’s loudspeaker array. This comprises a total of 13 JBL LSR-708i two-way loudspeakers and two JBL 4642A dual-15-inch subwoofers powered by Crown DCI Series networked amplifiers. Atmos panning within Pro Tools is accommodated by the familiar Dolby Rendering and Mastering Unit (RMU).

During September’s “Sound for Film and Television Conference,” Dolby’s Gary Epstein demo’d Atmos. ©2016 Mel Lambert.

“A Delicate Audio custom truss system, coupled with Adaptive Technologies speaker mounts, enables the near-field monitor loudspeakers to be re-arranged and customized as necessary,” adds Oishi. “Flexibility is essential, since we designed the room to seamlessly and fully support both Dolby Atmos and Auro formats, while building in sufficient routing, monitoring and speaker flexibility to accommodate future immersive formats. Streaming and VR deliverables are upon us, and we will need to stay nimble enough to quickly adapt to new specifications.”

Regarding the choice of a mixing controller for the new room, McCarthy says that he is committed to integrating more Avid S6 control surfaces into the facility’s workflow, as witnessed by their current use within several theatrical stages on the Sony lot. “Our talent is demanding it,” he states. “Mixing in the box lets our editors and mixers keep their options open until print mastering. It’s a more efficient process, both creatively and technically.”

The new Immersive Mix Stage will also be used as a “Flex Room” for Atmos pre-dubs when other stages on the lot are occupied. “We are also planning to complete a dedicated IMAX re-recording stage early next year,” reports McCarthy.

“As home theaters grow in sophistication, consumers are demanding immersive sound, ultra HD resolution and high-dynamic range,” says Rich Berger, SVP of digital strategy at Sony Pictures Home Entertainment. “This new stage allows our technicians to more closely replicate a home theater set-up.”

“The Sony mix stage adds to the growing footprint of Atmos-enabled post facilities and gives the Hollywood creative community the tools they need to deliver an immersive experience to consumers,” states Curt Behlmer, Dolby’s SVP of content solutions and industry relations.

Adds Auro Technologies CEO Wilfried Van Baelen, “Having major releases from Sony Pictures Home Entertainment incorporate Auro-3D helps bring this immersive experience to consumers, ensuring they are able to enjoy films as the creator intended.”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Deepwater Horizon’s immersive mix via Twenty Four Seven Sound

By Jennifer Walden

The Peter Berg-directed film Deepwater Horizon, in theaters now, opens on a black screen with recorded testimony from real-life Deepwater Horizon crew member Mike Williams recounting his experience of the disastrous oil spill that began April 20, 2010 in the Gulf of Mexico.

“This documentary-style realism moves into a wide, underwater immersive soundscape. The transition sets the music and sound design tone for the entire film,” explains Eric Hoehn, re-recording mixer at Twenty Four Seven Sound in Topanga Canyon, California. “We intentionally developed the immersive mixes to drop the viewer into this world physically, mentally and sonically. That became our mission statement for the Dolby Atmos design on Deepwater Horizon. Dolby empowered us with the tools and technology to take the audience on this tightrope journey between anxiety and real danger. The key is not to push the audience into complete sensory overload.”

L-R: Eric Hoehn and Wylie Stateman.  Photo Credit: Joe Hutshing

The 7.1 mix on Deepwater Horizon was crafted first with sound designer Wylie Stateman and re-recording mixers Mike Prestwood Smith (dialogue/music) and Dror Mohar (sound effects) at Warner Bros. in New York City. Then Hoehn mixed the immersive versions, but it wasn’t just a technical upmix. “We spent four weeks mixing the Dolby Atmos version, teasing out sonic story-point details such as the advancing gas pressure, fire and explosions,” Hoehn explains. “We wanted to create a ‘wearable’ experience, where your senses actually become physically involved with the tension and drama of the picture. At times, this movie is very much all over you.”

The setting for Deepwater Horizon is interesting in that the vertical landscape of the 25-story oil rig is more engrossing than the horizontal landscape of the calm sea. This dynamic afforded Hoehn the opportunity to really work with the overhead Atmos environment, making the audience feel as though they’re experiencing the story and not just witnessing it. “The story takes place 40 miles out at sea on a floating oil drilling platform. The challenge was to make this remote setting experiential for the audience,” Hoehn explains. “For visual artists, the frame is the boundary. For us, working in Atmos, the format extends the boundaries into the auditorium. We wanted the audience to feel as if they too were trapped with our characters aboard the Deepwater Horizon. The movement of sound into the theater mirrors the disorientation and confusion they’re seeing on screen, making the story more immediate and disturbing.”

In their artistic approach to the Atmos mix, Stateman and sound effects designers Harry Cohen and Sylvain Lasseur created an additional sound design layer — specific Atmos objects that help to reinforce the visuals by adding depth and weight via sound. For example, during a sequence after a big explosion and blowout, Mike Williams (Mark Wahlberg) wakes up with a pile of rubble and a broken door on top of him. Twisted metal, confusing announcements and alarms were designed from scratch to become objects that added detail to the space above the audience. “I think it’s one of the most effective Atmos moments in the film. You are waking up with Williams in the aftermath of this intense, destructive sequence. The entire rig is overwhelmed by off-stage explosions, twisting metal, emergency announcements and hissing steam. Things are falling apart above you and around you,” details Hoehn.

Hoehn shares another example: during a scene on the drill deck they created sound design objects to describe the height and scale of the 25-story oil derrick. “We put those sounds into the environment by adding delays and echoes that make it feel like those sounds are pinging around high above you. We wanted the audience to sense the vertical layers of the Deepwater Horizon oil rig,” says Hoehn, who created the delays and echoes using a multichannel delay plug-in called Slapper by The Cargo Cult. “I had separate mix control over the objects and the acoustic echoes applied. I could put the discrete echoes in distinct places in the Atmos environment. It was an agitative design element. It was designed to make the audience feel oriented and at the same time disoriented.”
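The idea behind a multichannel multitap delay like the one Hoehn describes can be sketched in a few lines: each tap produces its own delayed, attenuated copy of the dry sound, and each copy feeds a distinct output position so the echoes can be placed discretely in the immersive field. The tap times, gains and channel labels below are invented for illustration; Slapper's actual parameters and algorithm are not described in the article:

```python
def multitap_delay(dry, taps, sample_rate=48000):
    """Return one delayed/attenuated copy of `dry` per tap.

    `taps` is a list of (delay_seconds, gain, channel_label) tuples; each
    tap becomes its own output channel, so a mixer can route the discrete
    echoes to distinct positions in an immersive speaker layout.
    """
    outputs = {}
    for delay_s, gain, label in taps:
        offset = int(round(delay_s * sample_rate))
        # Prepend `offset` samples of silence, then the attenuated signal.
        outputs[label] = [0.0] * offset + [s * gain for s in dry]
    return outputs

# A unit impulse "ping" sent to three hypothetical overhead echo positions.
impulse = [1.0] + [0.0] * 9
echoes = multitap_delay(impulse, [(0.001, 0.7, "top_front"),
                                  (0.002, 0.5, "top_mid"),
                                  (0.003, 0.35, "top_rear")])
```

Each echo lands progressively later and quieter, which is the basic recipe for suggesting sounds pinging around high above the listener.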

The additional sounds they created were not an attempt to reimagine the soundtrack, but rather a means of enhancing what was there. “We were deliberate about what we added,” Hoehn explains. “As a team we strived to maximize the advantages of an Atmos theater, which allows us to keep a film mentally, physically and sonically intense. That was the filmmaker’s primary goal.”

The landscape in Deepwater Horizon doesn’t just tower over the audience; it extends under them as well. The underwater scenes were an opportunity to feature the music since these “sequences don’t contain metal banging and explosions. These moments allow the music to give an emotional release,” says Hoehn.

Hoehn explains that the way music exists in Atmos is sort of like a big womb of sound; it surrounds the audience. The underwater visuals depict the catastrophic failure of the blowout preventer — a valve that can close off the well and prevent an uncontrolled flow of oil — and the music punctuates this emotional and pivotal point in the film. It gives a sense of calm that contrasts what’s happening on screen. Sonically, it’s also a contrast to the stressful soundscape happening on-board the rig. Hoehn says, “It’s good for such an intense film and story to have moments where you can find comfort, and I think that is where the music provides such emotional depth. It provides that element of comfort between the moments where your senses are being flooded. We played with dynamic range, going to silence and using the quiet to heighten the anticipation of a big release.”

Hoehn mixed the Atmos version in Twenty Four Seven Sound’s Dolby Atmos lab, which uses an Avid S6 console running Pro Tools 12 and features Meyer Acheron mains and 26 JBL AC28 monitors for the surrounds and overheads. It is an environment designed to provide sonic precision so that when the mixer turns a knob or pushes a fader, the change can instantly be heard. “You can feel your cause-and-effect happen immediately. Sometimes when you’re in a bigger room, you are battling the acoustics of the space. It’s helpful to work under a magnifying glass, particularly on a soundtrack that is as detailed as Deepwater Horizon’s,” says Hoehn.

Hoehn spent a month on the Atmos mix, which served as the basis for the other immersive formats, such as the IMAX 5 and IMAX 12 mixes. “The IMAX versions maintain the integrity of our Atmos design,” says Hoehn, “A lot of care had to be taken in each of the immersive versions to make sure the sound worked in service of the storytelling process.”

Bring On VR
In addition to the theatrical release, Hoehn discussed the prospect of a Deepwater Horizon VR experience. “Working with our friends at Dolby, we’re looking at virtual reality and experimenting with sequences from Deepwater Horizon. We are working to convert the Atmos mix to a headset, virtual sound environment,” says Hoehn. He explains that binaural sound, or surround sound in headphones, presents its own design challenges; it’s not just a direct lift of the 7.1 or Atmos mix.

“Atmos mixing for a theatrical sound pressure environment is different than the sound pressure environment in headphones,” explains Hoehn. “It’s a different sound pressure that you have to design for, and the movement of sounds needs to be that much more precise. Your brain needs to track movement and so maybe you have less objects moving around. Or, you have one sound object hand off to another object and it’s more of a parade of sound. When you’re in a theater, you can have audio coming from different locations and your brain can track it a lot easier because of the fixed acoustical environment of a movie theater. So that’s a really interesting challenge that we are excited to sink our teeth into.”

Jennifer Walden is a New Jersey-based audio engineer and writer.

IBC: Surrounded by sound

By Simon Ray

I came to the 2016 IBC Show in Amsterdam at the start of a period of consolidation at Goldcrest in London. We had just gone through three years of expansion, upgrading, building and installing. Our flagship Dolby Atmos sound mixing theatre finished its first feature, Jason Bourne, and the DI department recently upgraded to offer 4K and HDR.

I didn’t have a particular area to research at the show, but there were two things that struck me almost immediately on arrival: the lack of drones and the abundance of VR headsets.

Goldcrest’s Atmos mixing stage.

360 audio is an area I knew a little about, and we did provide a binaural DTS Headphone X mix at the end of Jason Bourne, but there was so much more to learn.

Happily, my first IBC meeting was with Fraunhofer, where I was updated on some of the developments they have made in the production, delivery and playback of immersive and 360 sound. Of particular interest was their Cingo technology. This is a playback solution that lives in devices such as phones and tablets and can already be found in products from Google, Samsung and LG. It renders 3D audio content to headphones and can incorporate head movements, producing a binaural render with spatial cues that make the sound appear to originate outside the head rather than inside it, as can be the case when listening to traditionally mixed stereo material.

For feature films, for example, this might mean taking the 5.1 home theatrical mix and rendering it into a binaural signal to be played back on headphones, giving the listener the experience of always sitting in the sweet spot of a surround sound speaker set-up. Cingo can also support content with a height component, such as 9.1 and 11.1 formats, and add that into the headphone stream as well to make it truly 3D. I had a great demo of this and it worked very well.
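Conceptually, this kind of rendering amounts to convolving each speaker feed with a head-related impulse response (HRIR) for that speaker's position and summing the results into two ear signals. The sketch below shows only that core idea; the two-tap HRIRs are toy placeholders, and a real renderer such as Cingo uses measured responses plus room modelling:

```python
def convolve(x, h):
    """Direct-form FIR convolution (output length len(x) + len(h) - 1)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def binaural_render(channels, hrirs):
    """Mix speaker feeds down to (left_ear, right_ear) sample lists.

    `channels` maps channel name -> samples; `hrirs` maps channel
    name -> (left_ear_impulse_response, right_ear_impulse_response).
    """
    length = max(len(sig) + max(len(hrirs[ch][0]), len(hrirs[ch][1])) - 1
                 for ch, sig in channels.items())
    left, right = [0.0] * length, [0.0] * length
    for ch, sig in channels.items():
        for ear, ir in zip((left, right), hrirs[ch]):
            for i, v in enumerate(convolve(sig, ir)):
                ear[i] += v
    return left, right

# Toy example: a right-surround source arrives louder and earlier at the
# right ear than at the left (placeholder HRIRs, not measured data).
mix = {"Rs": [1.0, 0.5]}
hrirs = {"Rs": ([0.0, 0.3], [0.8, 0.1])}
left, right = binaural_render(mix, hrirs)
```

The interaural level and time differences baked into the impulse responses are what keep the virtual speaker sitting in the sweet spot as the mix plays back.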

I was impressed that Fraunhofer had also created a tool for making immersive content: a plug-in called Cingo Composer, available in both VST and AAX formats, that runs in Pro Tools, Nuendo and other DAWs to aid the creation of 3D content. For example, content could be mixed and automated in an immersive soundscape and then rendered into an FOA (First Order Ambisonics, or B-Format) four-channel file for playback with a 360 video on VR headsets with headtracking.
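The core of FOA panning is compact enough to sketch: a mono source at a given azimuth and elevation is weighted onto the four B-format channels. This sketch uses the traditional FuMa convention, with W carrying the omni component at -3 dB; whichever convention Cingo Composer uses internally is not stated in the article:

```python
import math

def foa_encode(sample, azimuth_deg, elevation_deg):
    """Pan one mono sample to first-order B-format (W, X, Y, Z).

    FuMa convention: azimuth 0 = front, positive = counter-clockwise
    (listener's left); W is the omni component attenuated by 1/sqrt(2).
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(az) * math.cos(el)  # front/back component
    y = sample * math.sin(az) * math.cos(el)  # left/right component
    z = sample * math.sin(el)                 # up/down component
    return w, x, y, z

# A full-scale sample dead ahead lands entirely in W and X.
w, x, y, z = foa_encode(1.0, 0.0, 0.0)
```

Because the direction is encoded as these continuous weights rather than as discrete speaker feeds, a headtracking renderer can rotate the whole soundfield by rotating (X, Y, Z) before decoding.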

After Fraunhofer, I went straight to DTS to catch up with what they were doing. We had recently completed some immersive DTS:X theatrical, home theatrical and, as mentioned above, headphone mixes using the DTS tools, so I wanted to see what was new. There were some nice updates to the content creation tools, players and renderers and a great demo of the DTS decoder doing some live binaural decoding and headtracking.

With immersive and 3D audio being the exciting new things, there were other interesting products on display relating to this area. In the Future Zone, Sennheiser was showing their Ambeo VR mic (see picture, right). This is an ambisonic microphone with four capsules arranged in a tetrahedron, whose outputs make up the A-format. They also provide a proprietary A-to-B format encoder that can run as a VST or AAX plug-in on Mac and Windows to process the outputs of the four microphones into the W, X, Y, Z signals (the B-format).

From the B-Format it is possible to recreate the 3D soundfield, but you can also derive any number of first-order microphones pointing in any direction in post! The demo (with headtracking and 360 video) of a man speaking by the fireplace was recorded just using this mic and was the most convincing of all the binaural demos I saw (heard!).
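The A-to-B conversion at the heart of this is a fixed sum/difference matrix. The sketch below assumes the standard tetrahedral capsule ordering (front-left-up, front-right-down, back-left-down, back-right-up) and omits the capsule-spacing equalization a real encoder such as Sennheiser's applies:

```python
def a_to_b(flu, frd, bld, bru):
    """Convert tetrahedral A-format capsule samples to B-format (W, X, Y, Z).

    Sum/difference matrix only; a real A-to-B encoder also filters to
    compensate for the finite spacing between the capsules.
    """
    w = flu + frd + bld + bru   # omnidirectional pressure
    x = flu + frd - bld - bru   # front/back figure-of-eight
    y = flu - frd + bld - bru   # left/right figure-of-eight
    z = flu - frd - bld + bru   # up/down figure-of-eight
    return w, x, y, z

# Identical pressure at all four capsules encodes as a pure omni signal:
w, x, y, z = a_to_b(0.25, 0.25, 0.25, 0.25)
```

Deriving a virtual first-order microphone in post then amounts to taking a weighted sum of W with the (X, Y, Z) components pointed in the desired direction, which is why the pointing can be chosen after the recording.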

Still in the Future Zone, for creating brand new content I visited the makers of the Spatial Audio Toolbox, which is similar to the Cingo Composer tool from Fraunhofer. B-Com’s Spatial Audio Toolbox contains VST plug-ins (soon to be AAX) that enable you to create an HOA (higher order ambisonics) encoded 3D sound scene from standard mono, stereo or surround sources (using HOA Pan) and then listen to this sound scene on headphones (using Render Spk2Bin).

The demo we saw at the stand was impressive and included headtracking. The plug-ins themselves were running on a Pyramix on the Merging Technologies stand in Hall 8. It was great to get my hands on some “live” material and play with the 3D panning and hear the effect. It was generally quite effective, particularly in the horizontal plane.

I found all this binaural and VR stuff exciting. I am not sure exactly how and if it might fit into a film workflow, but it was a lot of fun playing! The idea of rendering a 3D soundfield into a binaural signal has been around for a long time (I even dedicated months of my final year at university to writing a project on that very subject quite a long time ago) but with mixed success. It is exciting to see now that today’s mobile devices contain the processing power to render the binaural signal on the fly. Combine that with VR video and headtracking, and the ability to add that information into the rendering process, and you have an offering that is very impressive when demonstrated.

I will be interested to see how content creators, specifically in the film area, use this (or don’t). The recreation of the 3D surround sound mix over 2-channel headphones works well, but whether headtracking gets added to this or not remains to be seen. If the sound is matched to video that’s designed for an immersive experience, then it makes sense to track the head movements with the sound. If not, then I think it would be off-putting. Exciting times ahead anyway.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

Deluxe Toronto adds Dolby Atmos theater, Steve Foster joins sound team

Steve Foster, a 25-year veteran of the sound industry, has joined Deluxe Toronto as a senior re-recording mixer. Foster’s first project at Deluxe Toronto will be the second season of the Syfy series The Expanse.

Foster comes to Deluxe Toronto from Technicolor Toronto, formerly Toronto’s Sounds Interchange, where he helped establish the long-form audio and ADR departments. He also wrote the score for the ‘90s thriller Killer Image. Other credits include Narcos, Rolling Stones: At the Max and the TV series Hannibal. He earned a Gemini Award for Best Sound in a Dramatic Program on Everest, a Genie Award for Overall Sound on Passchendaele and four Motion Picture Sound Editors Golden Reel Awards for sound editing and ADR on various episodics.

In other news, Deluxe Toronto has also extended its capabilities, adding a new Dolby Atmos mixing theater geared toward episodic production. Its equipment and layout are identical to the studio’s existing three episodic sound theaters, allowing consistent and flexible review sessions for the 10 to 12 projects simultaneously flowing through Deluxe Toronto. The facility also houses a large theatrical mix theater with 36-channel Dolby Atmos sound, and a soundstage for ADR recording.

Our Main Image: (L-R) Steve Foster, Mike Baskerville, Christian T. Cooke.

Call of the Wild — Tarzan’s iconic yell

By Jennifer Walden

For many sound enthusiasts, Tarzan’s iconic yell is the true legend of that story. Was it actually actor Johnny Weissmuller performing the yell? Or was it a product of post sound magic involving an opera singer, a dog, a violin and a hyena played backwards, as MGM Studios claims? Whatever the origin, it doesn’t impact how recognizable that yell is, and this fact wasn’t lost on the filmmakers behind the new Warner Bros. movie The Legend of Tarzan.

The updated version is not a far cry from the original, but it is more guttural and throaty, and less like a yodel. It has an unmistakable animalistic quality. While we may never know the true story behind the original Tarzan yell, postPerspective went behind the scenes to learn how the new one was created.

Supervising sound editor/sound designer Glenn Freemantle and sound designer/re-recording mixer Niv Adiri at Sound24, a multi-award-winning audio post company located on the lot of Pinewood Film Studios in Buckinghamshire, UK, reveal that they went through numerous iterations of the new Tarzan yell. “We had quite a few tries on that but in the end it’s quite a simple sound. It’s actor Alexander Skarsgård’s voice and there are some human and animal elements, like gorillas, all blended together in it,” explains Freemantle.

Since the new yell always plays in the distance, it needed to feel powerful and raw, as though Tarzan is waking up the jungle. To emphasize this, Freemantle says, “We have animal sounds rushing around the jungle after the Tarzan yell, as if he is taking control of it.”

The jungle itself is a marvel of sight and sound. Freemantle notes that everything in the film, apart from the actors on screen, was generated afterward — the Congo, the animals, even the villages and people, a harbor with ships and an action sequence involving a train. Everything.

The film was shot on a back lot of Warner Bros. Studios in Leavesden, UK, so making the CGI-created Congo feel like the real deal was essential. They wanted the Congo to feel alive, and have the sound change as the characters moved through the space. Another challenge was grounding all the CG animals — the apes, wildebeests, ostriches, elephants, lions, tigers, and other animals — in that world.

When Sound24 first started on the film, a year and a half before its theatrical release, Freemantle says there was very little to work with visually. “Basically it was right from the nuts and bolts up. There was nothing there, nothing to see in the beginning apart from still pictures and previz. Then all the apes, animals and jungles were put in and gradually the visuals were built up. We were building temp mixes for the editors to use in their cut, so it was like a progression of sound over time,” he says.

Sound24’s sound design got increasingly detailed as the visuals presented more details. They went from building ambient background for different parts of Africa — from the deep jungle to the open plains — at different times of the day and night to covering footsteps for the CG gorillas. The sound design team included Ben Barker, Tom Sayers, and Eilam Hoffman, with sound effects editing by Dan Freemantle and Robert Malone. Editing dialogue and ADR was Gillian Dodders. Foley was recorded at Shepperton Studios by Foley mixer Glen Gathard.

Capturing Sounds
Since capturing their own field recordings in the Congo would have proved too challenging, Sound24 opted to source sound recordings authentic to that area. They also researched and collected the best animal sounds they could find, which were particularly useful for the gorilla design.

Sound24’s sound design team designed the gorillas to have a range of reactions, from massive roars and growls to smaller grunts and snorts. They cut and layered different animal sounds, including processed human vocalizations, to create a wide range of gorilla sounds.

There were three main gorillas, and each sounds a bit different, but the most domineering of all was Akut. During a fight between Akut and Tarzan, Adiri notes that in the mix, they wanted to communicate Akut’s presence and power through sound. “We tried to create dynamics within Akut’s voice so that you feel that he is putting a lot of effort into the fight. You see him breathing hard and moving, so his voice had to have his movement in it. We had to make it dynamic and make sure that there was space for the hits, and the falls, and whatever is happening visually. We had to make sure that all of the sounds are really tied to the animal and you feel that he’s not some super ape, but he’s real,” Adiri says. They also designed sounds for the gang of gorillas that came to egg on Akut in his fight.

The Mix
All the effects, Foley and backgrounds were edited and premixed in Avid Pro Tools 11. Since Sound24 had been working on The Legend of Tarzan for over a year, keeping everything in the box allowed them to update their session over time and still have access to previous elements and temp mixes. “The mix was evolving throughout the sound editorial process. Once we had that first temp mix we just kept working with that, remixing sounds and reworking scenes but it was all done in the box up until the final mix. We never started the mix from scratch on the dub stage,” says Adiri.

For the final Dolby Atmos mix at Warner Bros. De Lane Lea Studios in London, Adiri and Freemantle brought their Avid S6 console to the studio. “That surface was brilliant for us,” says Adiri, who mixed the effects/Foley/backgrounds. He shared the board with re-recording mixer Ian Tapp, on dialogue/music.

Adiri feels the Atmos surround field worked best for quiet moments, like during a wide aerial shot of the jungle where the camera moves down through the canopy to the jungle floor. There he was able to move through layers of sounds, from the top speakers down, and have the ambience change as the camera’s position changed. Throughout the jungle scenes, he used the Atmos surrounds to place birds and distant animal cries, slowly panning them around the theater to make the audience feel as though they are surrounded by a living jungle.

He also likes to use the overhead speakers for rain ambience. “It’s nice to use them in quieter scenes when you can really feel the space, moving sounds around in a more subliminal way, rather than using them to be in-your-face. Rain is always good because it’s a bright sound. You know that it is coming from above you. It’s good for that very directional sort of sound.”

Ambience wasn’t the only sound that Adiri worked with in Atmos. He also used it to pan the sounds of monkeys swinging through the trees and soaring overhead, and for Tarzan’s swinging. “We used it for these dynamic moments in the storytelling rather than filling up those speakers all the time. For the moments when we do use the Atmos field, it’s striking and that becomes a moment to remember, rather than just sound all the time,” concludes Freemantle.

Jennifer Walden is a New Jersey-based writer and audio engineer. 

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let’s consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which maps each sample to the nearest of a finite set of levels, each represented by a binary code. As the bit depth used to represent a finite range increases, the levels sit closer together, which directly reduces the quantization error. The number of possible values is 2^X, where X is the number of bits available, so a 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to a difference in dynamic range: it is the same range of values, represented with a quantization accuracy that increases with the number of bits used.
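The arithmetic is easy to check with a short sketch:

```python
def quantization_levels(bits):
    """Number of representable code values for a given bit depth (2^X)."""
    return 2 ** bits

def quantization_step(bits, full_scale=1.0):
    """Spacing between adjacent levels over a fixed full-scale range.

    This spacing bounds the quantization error: the worst-case error for
    a rounding quantizer is half a step.
    """
    return full_scale / (2 ** bits)

# Same range, different accuracy: 10 bits gives 4x the values of 8 bits,
# and therefore one quarter of the quantization step.
ratio = quantization_levels(10) / quantization_levels(8)
```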

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system specifically for this application that has been standardized as SMPTE ST-2084, SMPTE’s standard for an electro-optical transfer function (EOTF) based on a perceptual quantizer (PQ). This builds on work from the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ curve allows video to be encoded and displayed across a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a characteristic of human vision: our eyes are less sensitive to changes in highlights than they are to changes in shadows.
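For reference, the PQ curve is a closed-form pair of equations. The sketch below uses the constants published in ST-2084 to map a normalized code value to luminance in nits and back; it is a bare transfer-function sketch, not a full Dolby Vision signal chain:

```python
# SMPTE ST-2084 perceptual quantizer constants.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875
PEAK = 10000.0           # nits

def pq_eotf(code):
    """Normalized non-linear code value [0, 1] -> luminance in nits."""
    e = code ** (1 / M2)
    return PEAK * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

def pq_inverse_eotf(nits):
    """Luminance in nits -> normalized code value [0, 1]."""
    y = (nits / PEAK) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# Full-scale code reaches 10,000 nits, while 100 nits already sits at
# roughly half scale: PQ spends its codes where the eye is most sensitive.
```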

Previous display systems, referred to as SDR, or Standard Dynamic Range, usually use 8 bits. Even at 10 bits, SD and HD video are specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision has a nit range that is 100 times greater than what we have typically been seeing from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video.  For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether or not the TV you are using is capable of SDR, HDR10 or Dolby Vision.
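The layer arithmetic can be sketched abstractly: the decoder predicts an HDR image from the SDR base layer and adds a small residual carried in the enhancement layer, so only the residual costs extra bandwidth. The tone-map and prediction functions below are toy stand-ins; the real system uses per-scene metadata and far more sophisticated mapping:

```python
def encode_layers(hdr_frame, tone_map, predict):
    """Split an HDR frame into an SDR base layer and an HDR residual."""
    base = [tone_map(v) for v in hdr_frame]        # what SDR sets display
    residual = [h - predict(b) for h, b in zip(hdr_frame, base)]
    return base, residual

def decode_hdr(base, residual, predict):
    """Reconstruct the HDR frame from the two layers."""
    return [predict(b) + r for b, r in zip(base, residual)]

# Toy stand-ins: clip highlights to 100 nits for SDR; predict by identity.
tone_map = lambda nits: min(nits, 100.0)
predict = lambda nits: nits

hdr = [0.5, 80.0, 650.0, 4000.0]                   # pixel values in nits
base, residual = encode_layers(hdr, tone_map, predict)
restored = decode_hdr(base, residual, predict)
```

An SDR-only decoder simply plays the base layer and ignores the residual, which is the backwards compatibility the paragraph above describes.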

Broadcasting live events in Dolby Vision is currently a challenge, and not only because existing HDTV infrastructure cannot carry the signal; there are also unresolved issues in adapting the Dolby Vision process itself to live production. Dolby is working on these issues, but it is not proposing an entirely new plant for live events. Some signal paths will be replaced, though the underlying infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has gone into Dolby Vision. While it faces competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log-Gamma, or HLG), it has an advantage: several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby's continued R&D into the issues around live broadcast yields a solution that broadcasters can successfully implement, Dolby Vision may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

London’s Halo adds dubbing suite

Last month, London’s Halo launched a dubbing suite, Studio 5, at its Noel Street facility. The studio is suited for TV mix work across all genres, as well as for DCP 5.1 and 7.1 theatrical projects, or as a pre-mix room for Halo’s Dolby Features licensed Studios 1 and 3. The new room is also pre-wired for Dolby Atmos.

The new studio features an HDX2 Pro Tools 12|HD system, a 24-fader Avid S6 M40 and a custom Dynaudio 7.1 speaker system. This is all routed via a Colin Broad TMC-1 Penta-controlled DAD AX32 digital audio matrix for maximum versatility and future scalability. Picture playback from Pro Tools is provided by an AJA Kona LHi card via a Barco 2K digital projector.

In addition, Halo has built a dedicated 5.1 audio editing room for its recently arrived head of sound editorial, Jay Price. Situated directly adjacent to the new studio, the room features a Pro Tools 12|HD Native system and 5.1 Dynaudio Air 6 speakers.

Jigsaw24 and CB Electronics supplied the hardware and installation know-how. Level Acoustic handled the room design, and Munro Acoustics provided the custom speaker system.

Warsaw’s Dreamsound develops new spin on Dolby Atmos

This Poland-based studio offers full-service post and a newly installed Atmos mix stage.

By Mel Lambert

Back in 2012, when the owners of Warsaw, Poland-based Dreamsound Studios were contemplating a new re-recording stage, they were invited to Dolby’s European headquarters in Wootton Bassett in England to evaluate the Atmos immersive sound system, which they hoped to install. “But we also decided to conduct our own studies into the correlation between Atmos panning and the localization of phantom sound sources,” recalls Dreamsound co-partner Marcin Kasiński.

Using various samples of filtered pink noise directed at experienced listeners from nine targeted locations around a central seating area, Kasiński and his partner Kacper Habisiak — both graduates from The Frederic Chopin University of Music’s sound engineering department, and experienced editors and re-recording mixers as well as accomplished musicians — discovered that their test audience could more easily identify sound coming from the front quadrant and rear corners and less easily from the sides.

Marcin Kasiński (left) and Kacper Habisiak, flanking Pavel Stverak, sound consultant with Dolby.

During a workshop at the recent AES Convention in Warsaw, Kasiński and Habisiak presented a paper on their findings, reporting that the implications for immersive sound mixing are immediately obvious: localization of height information is enhanced at high rather than low frequencies.

Their follow-up tests will be with real sound samples rather than tones, and in addition to the correlation between Atmos objects and on-screen images, Kasiński says they plan to test dynamic sounds that move from one loudspeaker channel to another. They also want to open up their evaluation sessions to include non-trained listeners, which will more closely mimic an average movie-going audience.

Editorial & Re-Recording
In addition to their now-up-and-running large Dolby-certified Atmos stage, Dreamsound comprises a quartet of 5.1-channel sound editing rooms, one of which serves as a pre-mix and broadcast-mix area and features a 24-fader Avid ICON D-Command console. “We also have a 1,100-square-foot Foley stage, which is also used for ADR and walla recording,” says Kasiński. The new Atmos re-recording stage features a 32-fader Avid ICON D-Control ES console connecting to a trio of JBL ScreenArray cabinets located behind the screen and multiple SC12 and SC8 surround loudspeakers mounted on the ceiling, side and rear walls, plus model 4632 18-inch subwoofers. Crown DSi and XLS Series amplifiers power the 32-speaker system.

“Because JBL speakers are the standard in Polish cinemas, and they match perfectly with Crown amplifiers, they were a logical choice of playback components in our Dolby Atmos screening room,” Habisiak says. “Equally important, the performance of these speakers and amplifiers meet Dolby’s licensing requirements, which are extremely stringent regarding specifications that include sound pressure levels, frequency response, coverage relative to room size and other parameters.”

Video playback is handled by a Christie CP2220 2K DCI projector and a 19-foot wide Harkness mini-perforated projection screen. The room also includes three Avid Pro Tools HD playback/record machines, a Lexicon 960L 5.1 reverb unit, Cedar DNS One noise-reduction system and a wide range of Pro Tools plug-ins.

Background, Philosophy & Work
“During our studies [at Frederic Chopin University] we started to work in sound post production. We also got to work on feature film projects,” explains Kasiński. “Since Warsaw is the center of the Polish film industry, we wanted to create a company that could provide the best possible sound services.”

The partners point out that along with some innovative technology, Dreamsound is at its core a creative team of film enthusiasts. “We have managed to gather together a great group of sound editors, Foley artists and mixers,” shares Kasiński.

During the past six years Dreamsound has worked with many acclaimed Polish directors, including Malgorzata Szumowska, Agnieszka Holland, Jerzy Hoffman and Wladyslaw Pasikowski. Last year they won an MPSE Golden Reel Award for Best Sound Editing in a Feature Documentary for Powstanie Warszawskie [Warsaw Uprising], directed by Jan Komasa.

This year, Dreamsound expects to work on six feature films and two TV series — one for Polish TV and one for Polish HBO — in addition to handling re-recording for other supervising sound editors. “While we specialize in Polish-language productions, we have also worked on some foreign films, including one in half-Mandarin and half-English,” reports Kasiński.

Foley

Dreamsound prides itself on being a full-service audio post house, offering post sound editing, Foley, theatrical mix and broadcast deliverables. “We have also fully adopted an American workflow for sound post,” explains Kasiński. “So we follow the same standards and speak the same language as our friends abroad. For example, we recently recorded Foley for a French film studio and handled a remote ADR session for a Japanese studio — there was always fluent and creative collaboration.”

All of that aside, the co-owners readily concede that it may be too early to talk about the success of the new Atmos stage. “Although we haven’t mixed any movies in Atmos yet, there are some productions on the horizon,” says Kasiński. “We are waiting for more Polish cinemas to install Atmos systems. Immersive sound has opened a new chapter for movie soundtracks. We only have to wait until DCI or SMPTE establishes an open standard for immersive audio. Time will tell. For sure, we don’t want to rest on our laurels. We are ready to provide the best possible sound services for clients from around the world.”

Creating sounds, mix, more for ‘The Hunger Games: Mockingjay, Part 1’

By Jennifer Walden

It may be called The Hunger Games, but in Mockingjay, Part 1, the games are over. Life for the people of Panem, outside The Capitol, is about rebellion, war and survival. Supervising sound editor/sound designer/re-recording mixer Jeremy Peirson, at Warner Bros. Sound in Burbank, has worked with director Francis Lawrence on both Catching Fire and Mockingjay, Part 1.

Without the arena and its sinister array of “horrors” (for those who don’t remember Catching Fire, those horrors, such as blood rain, acid fog, carnivorous monkeys and lightning storms, were released every hour in the arena), Mockingjay, Part 1 is not nearly as diverse, according to Peirson. “Catching Fire was such a huge story between The Capitol and all the various Districts.”

Dolby bringing Atmos to homes… are small post houses next?

By Robin Shore

Last month Dolby announced that its groundbreaking Atmos surround sound format will soon be available outside of commercial cinemas. By sometime early next year, consumers will be able to buy special Atmos-enabled A/V receivers and speakers for their home theater systems.

I recently had the chance to demo a prototype of an Atmos home system at an event hosted at Dolby’s New York offices.

A brief overview for those who might not be totally familiar with this technology: Atmos is Dolby’s latest surround sound format. It includes overhead speakers that allow sounds to be panned above the audience. Rather than using a traditional track-based paradigm, Atmos mixes are object-oriented. An Atmos mix contains up to 128 audio objects, each with its own positional metadata.
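The object paradigm can be illustrated with a toy renderer that turns an object's 3-D position into per-speaker gains at playback time. The speaker layout and the distance-based panning law below are mine for illustration only; Dolby's actual renderer is proprietary and considerably more sophisticated.

```python
import math

# Toy object renderer, illustrative only. Each speaker is a named
# (x, y, z) position; an audio object carries its own position as
# metadata, and the renderer derives per-speaker gains from it.

SPEAKERS = {
    "L":    (-1.0,  1.0, 0.0), "C": (0.0, 1.0, 0.0), "R": (1.0, 1.0, 0.0),
    "Ls":   (-1.0, -1.0, 0.0), "Rs": (1.0, -1.0, 0.0),
    "TopL": (-1.0,  0.0, 1.0), "TopR": (1.0, 0.0, 1.0),
}

def render_gains(obj_pos, speakers=SPEAKERS):
    """Distance-based gains, power-normalized so total energy is 1."""
    weights = {}
    for name, pos in speakers.items():
        d = math.dist(obj_pos, pos)
        weights[name] = 1.0 / (d * d + 1e-6)   # closer speaker -> more gain
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {name: w / norm for name, w in weights.items()}

# An overhead object pulls most of its energy into the height speakers.
gains = render_gains((0.0, 0.0, 1.0))
```

Because the mix stores positions rather than channel assignments, the same Atmos master can be rendered to a 64-speaker cinema or a 7.1.4 living room: the renderer simply recomputes gains against whatever speaker layout is present.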