Tag Archives: Dolby Atmos

Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get… maybe carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment, and the cuts endured don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season; he was also nominated for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.
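The muted-to-full-fidelity transition Cook describes is, in signal terms, a low-pass filter being opened up. As a rough, hypothetical illustration (not the show’s actual processing chain), a simple one-pole low-pass in pure Python shows how bright detail gets “muted” while the low rumble of water and rowing passes through:

```python
import math

def one_pole_lowpass(samples, sr, cutoff_hz):
    """Simple one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

sr = 48000  # one second of test audio at 48 kHz
low = [math.sin(2 * math.pi * 200 * n / sr) for n in range(sr)]    # water rumble
high = [math.sin(2 * math.pi * 8000 * n / sr) for n in range(sr)]  # bright detail

# With an 800 Hz cutoff, the 8 kHz tone is attenuated ~20 dB,
# while the 200 Hz tone passes nearly intact.
muffled_high = one_pole_lowpass(high, sr, 800)
muffled_low = one_pole_lowpass(low, sr, 800)
```

“Opening up” to full fidelity is then just sweeping the cutoff upward (or crossfading to the unfiltered track) over the length of the transition.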

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had a really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple of different dialects; that really helped to sell the location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattly world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile compared to the grittiness of the black-ops site in Yemen, with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have on- and off-screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out and added more elements to expand the sound into the surrounds. The show was mixed in Dolby Atmos for the home, so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them even farther than we had. Then we stripped out some to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

Were the guns original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

With the opening sequence’s bombs, there was debate on whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, but in reality you’d hear an explosion that happened miles away well after you’d see it.
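The physics behind that debate is simple arithmetic: sound travels at roughly 343 m/s in dry air, so an explosion seen from miles away arrives seconds late. A quick back-of-the-envelope calculation:

```python
SPEED_OF_SOUND_M_S = 343.0   # dry air at ~20 °C
METERS_PER_MILE = 1609.344

def blast_delay_seconds(distance_miles):
    """Time between seeing a distant explosion and hearing it."""
    return distance_miles * METERS_PER_MILE / SPEED_OF_SOUND_M_S

# An explosion two miles away arrives almost ten seconds late.
print(round(blast_delay_seconds(2), 1))  # ~9.4
```

That gap is real, but playing it literally on screen often feels wrong to an audience, which is exactly the compromise Cook describes.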

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all; it’s rarely just one person. It does take a village, and we had great support from the producers. They were very intentional about sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed, and that was really appreciated.

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super good sounding episodes; it’s a great sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how the technology has changed so much. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used Atmos mixing; they really used the sound field and used extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Sundance: Audio post for Honey Boy and The Death of Dick Long

By Jennifer Walden

Brent Kiser, an Emmy award-winning supervising sound editor/sound designer/re-recording mixer at LA’s Unbridled Sound, is no stranger to the Sundance Film Festival. His resume includes such Sundance premieres as Wild Wild Country, Swiss Army Man and An Evening with Beverly Luff Linn.

He’s the only sound supervisor to work on two films that earned Dolby fellowships: Swiss Army Man back in 2016 and this year’s Honey Boy, which premiered in the US Dramatic Competition. Honey Boy is a biopic of actor Shia LaBeouf’s damaging Hollywood upbringing.

Brent Kiser (in hat) and Will Files mixing Honey Boy.

Also showing this year, in the Next category, was The Death of Dick Long. Kiser and his sound team once again collaborated with director Daniel Scheinert. For this dark comedy, the filmmakers used sound to help build tension as a group of friends tries to hide the truth of how their buddy Dick Long died.

We reached out to Kiser to find out more.

Honey Boy was part of the Sundance Institute’s Feature Film Program, which is supported by several foundations including the Ray and Dagmar Dolby Family Fund. You mentioned that this film earned a grant from Dolby. How did that grant impact your approach to the soundtrack?
For Honey Boy, Dolby gave us the funds to finish in Atmos. It allowed us to bring MPSE award-winning re-recording mixer Will Files on to mix the effects while I mixed the dialogue and music. We mixed at Sony Pictures Post Production on the Kim Novak stage. We got time and money to be on a big stage for 11 days — a five-day pre-dub and six-day final mix.

That was huge because the film opens up with these massive-robot action/sci-fi sound sequences and it throws the audience off the idea of this being a character study. That’s the juxtaposition, especially in the first 15 to 20 minutes. It’s blurring the reality between the film world and real life for Shia because the film is about Shia’s upbringing. Shia LaBeouf wrote the film and plays his father. The story focuses on the relationship of young actor Otis Lort (Lucas Hedges) and his alcoholic father James.

The story goes through Shia’s time on Disney Channel’s Even Stevens series and then on Transformers, and looks at how this lifestyle had an effect on him. His father was an ex-junkie, a sex offender and an ex-rodeo clown, and would just push his son. By age 12, Shia was drinking, smoking weed and smoking cigarettes — all supplied to him by his dad. Shia is isolated and doesn’t have too many friends. He’s not around his mother that much.

This year is the first year that Shia has been sober since age 12. So this film is one big therapeutic movie for him. The director Alma Har’el comes from an alcoholic family, so she’s able to understand where Shia is coming from. Working with Alma is great. She wants to be in every part of the process — pick each sound and go over every bit to make sure it’s exactly what she wants.

Honey Boy director Alma Har’el.

What were director Alma Har’el’s initial ideas for the role of sound in Honey Boy?
They were editing this film for six months or more, and I came on board around mid-edit. I saw three different edits of the film, and they were all very different.

Finally, they settled on a cut that felt really nice. We had spotting sessions before they locked, and we were working on creating the environment of the motel where Otis and James were staying. We were also working on creating the sound of Otis being on-set. It had to feel like we were watching a film, and when someone screams, “Cut!” it had to feel like we go back into reality. Being able to play with those juxtapositions in a sonic way really helped out. We would give it a cinematic sound and then pull back into a cinéma vérité-type sound. That was the big sound motif in the movie.

We worked really closely with composer Alex Somers. He developed this little crank sound that helped to signify Otis’ dreams and the turning of events. It makes it feel like Otis is a puppet with all his acting jobs.

There’s also a harness motif. In the very beginning you see adult Otis (Lucas Hedges) standing in front of a plane that has crashed and then you hear things coming up behind him. They are shooting missiles at him and they blow up and he gets yanked back from the explosions. You hear someone say, “Cut!” and he’s just dangling in a body harness about 20 feet up in the air. They reset, pull him down and walk him back. We go through a montage of his career, the drunkenness and how crazy he was, and then him going to therapy.

In the session, he’s told he has PTSD caused by his upbringing and he says, “No, I don’t.” It kicks to the title and then we see young Otis (Noah Jupe) sitting there waiting, and he gets hit by a pie. He then gets yanked back by that same harness, and he dangles for a little while before they bring him down. That is how the harness motif works.

There’s also a chicken motif. Growing up, Otis has a chicken named Henrietta La Fowl, and during the dream sequences the chicken leads Otis to his father. So we had to make a voice for the chicken. We had to give the chicken a dreamy feel. And we used the old-school Yellow Sky wind to give it a Western-feel and add a dreaminess to it.

On the dub stage with director Alma Har’el and her team, plus Will Files (front left) and Andrew Twite (front right).

Andrew Twite was my sound designer. He was also with me on Swiss Army Man. He was able to make some rich and lush backgrounds for that. We did a lot of recording in our neighborhood of Highland Park, which is much like Echo Park where Shia grew up and where the film is based. So it’s Latin-heavy communities with taco trucks and that fun stuff. We gave it that gritty sound to show that, even though Otis is making $8,000 a week, they’re still living on the other side of the tracks.

When Otis is in therapy, it feels like Malibu. It’s nicer, quieter, and not as stressful versus the motel when Otis was younger, which is more pumped up.

My dialogue editor was Elliot Thompson, and he always does a great job for me. The production sound mixer Oscar Grau did a phenomenal job of capturing everything at all moments. There was no MOS (picture without sound). He recorded everything and he gave us a lot of great production effects. The production dialogue was tricky because in many of the scenes young Otis isn’t wearing a shirt and there are no lav mics on him. Oscar used plant mics and booms and captured it all.

What was the most challenging scene for sound design on Honey Boy?
The opening, the intro and the montage right up front were the most challenging. We recut the sound for Alma several different ways. She was great and always had moments of inspiration. We’d try different approaches and the sound would always get better, but we were on a time crunch and it was difficult to get all of those elements in place in the way she was looking for.

Honey Boy on the mix stage at Sony’s Kim Novak Theater.

In the opening, you hear the sound of this mega-massive robot (an homage to a certain film franchise that Shia has been part of in the past, wink, wink). You hear those sounds coming up over the production cards on a black screen. Then it cuts to adult Otis standing there as we hear this giant laser gun charging up. Otis goes, “No, no, no, no, no…” in that quintessential Shia LaBeouf way.

Then, there’s a montage over Missy Elliott’s “My Struggles,” and the footage goes through his career. It’s a music video montage with sound effects, and you see Otis on set and off set. He’s getting sick, and then he’s stuck in a harness, getting arrested in the movie and then getting arrested in real life. The whole thing shows how his life is a blur of film and reality.

What was the biggest challenge in regards to the mix?
The most challenging aspect of the mix, on Will [Files]’s side of the board, was getting those monsters in the pocket. Will had just come off of Venom and Halloween so he can mix these big, huge, polished sounds. He can make these big sound effects scenes sound awesome. But for this film, we had to find that balance between making it sound polished and “Hollywood” while also keeping it in the realm of indie film.

There was a lot of back and forth to dial-in the effects, to make it sound polished but still with an indie storytelling feel. Reel one took us two days on stage to get through. We even spent some time on it on the last mix day as well. That was the biggest challenge to mix.

The rest of the film is more straightforward. The challenge on dialogue was to keep it sounding dynamic instead of smoothed out. A lot of Shia’s performance plays in the realm of vocal dynamics. We didn’t want to make the dialogue lifeless. We wanted to have the dynamics in there, to keep the performance alive.

We mixed in Atmos and panned sounds into the ceiling. I took a lot of the composer’s stems and remixed those in Atmos, spreading all the cues out in a pleasant way and using reverb to help glue it together in the environment.


The Death of Dick Long

Let’s look at another Sundance film you’ve worked on this year. The Death of Dick Long is part of the Next category. What were director Daniel Scheinert’s initial ideas for the role of sound on this film?
Daniel Scheinert always shows up with a lot of sound ideas, and most of those were already in place because of picture editor Paul Rogers from Parallax Post (which is right down the hall from our studio Unbridled Sound). Paul and all the editors at Parallax are sound designers in their own right. They’ll give me an AAF of their Adobe Premiere session and it’ll be 80 tracks deep. They’re constantly running down to our studio like, “Hey, I don’t have this sound. Can you design something for me?” So, we feed them a lot of sounds.

The Death of Dick Long

We played with the bug sounds the most. They shot in Alabama, where both Paul and Daniel are from, so there were a lot of cicadas and bugs. It was important to make the distinction of what the bugs sounded like in the daytime versus what they sounded like in the afternoon and at night. Paul did a lot of work to make sure that the balance was right, so we didn’t want to mess with that too much. We just wanted to support it. The backgrounds in this film are rich and full.

This film is crazy. It opens with a Creed song and ends with a Nickelback song, as a sort of joke. They wanted to show a group of guys who never really made much of themselves. These guys are in a band called Pink Freud, and they have band practice.

The film starts with them doing dumb stuff, like setting off fireworks and catching each other on fire — just messing around. Then it cuts to Dick (Daniel Scheinert) in the back of a vehicle and he’s bleeding out. His friends just dump him at the hospital and leave. The whole mystery of how Dick dies unfolds throughout the course of the film. The two main guys are Earl (Andre Hyland) and Zeke (Michael Abbott, Jr.).

The Foley on this film — provided by Foley artist John Sievert of JRS Productions — plays a big role. Often, Foley is used to help us get in and out of the scene. For instance, the police are constantly showing up to ask more questions and you hear them sneaking in from another room to listen to what’s being said. There’s a conversation between Zeke and his wife Lydia (Virginia Newcomb) and he’s asking her to help him keep information from the police. They’re in another room but you hear their conversation as the police are questioning Dick Long’s wife, Jane (Jess Weixler).

We used sound effects to help increase the tension when needed. For example, there’s a scene where Zeke is doing the laundry and his wife calls saying she’s scared because there are murderers out there, and he has to come and pick her up. He knows it’s him but he’s trying to play it off. As he is talking to her, Earl is in the background telling Zeke what to say to his wife. As they’re having this conversation, the washing machine out in the garage keeps getting louder and it makes that scene feel more intense.

Director Daniel Scheinert (left) and Puddle relaxing during the mix.

“The Dans” — Scheinert and Daniel Kwan — are known for Swiss Army Man. That film used sound in a really funny way, but it was also relevant to the plot. Did Scheinert have the same open mind about sound on The Death of Dick Long? Also, were there any interesting recording sessions you’d like to talk about?
There were no farts this time, and it was a little more straightforward. Manchester Orchestra did the score on this one too, but it’s also more laid back.

For this film, we really wanted to depict a rural Alabama small-town feel. We did have some fun with a few PA announcements, but you don’t hear those clearly. They’re washed out. Earl lives in a trailer park, so there are trailer park fights happening in the background to make it feel more like Jerry Springer. We had a lot of fun doing that stuff. Sound effects editor Danielle Price cut that scene, and she did a really great job.

What was the most challenging aspect of the sound design on The Death of Dick Long?
I’d say the biggest things were the backgrounds, engulfing the audience in this area and making sure the bugs feel right. We wanted to make sure there was off-screen movement in the police station and other locations to give them all a sense of life.

The whole movie was about creating a sense of intensity. I remember showing it to my wife during one of our initial sound passes, and she pulled the blanket over her face while she was watching it. By the end, only her eyes were showing. These guys keep messing up and it’s stressful. You think they’re going to get caught. So the suspense that the director builds in — not being serious but still coming across in a serious manner — is amazing. We were helping them to build that tension through backgrounds, music and dropouts, and pushing certain everyday elements (like the washing machine) to create tension in scenes.

What scene in this film best represents the use of sound?
I’d say the laundry scene. Also, in the opening scene you hear the band playing in the garage and the perspective slowly gets closer and closer.

During the film’s climax, when you find out how Dick dies, we’re pulling down the backgrounds that we created. For instance, when you’re in the bedroom you hear their crappy fan. When you’re in the kitchen, you hear the crappy compressor on the refrigerator. It’s all about playing up these “bad” sounds to communicate the hopelessness of the situation they are living in.

I want to shout out all of my sound editors for their exceptional work on The Death of Dick Long. There was Jacob “Young Thor” Flack and Elliot Thompson, and Danielle Price who did amazing backgrounds. Also, a shout out to Ian Chase for help on the mix. I want to make sure they share the credit.

I think there needs to be more recognition of the contribution of sound and the sound departments on a film. It’s a subject that needs to be discussed, particularly in these somber days following the death of Oscar-winning re-recording mixer Gregg Rudloff. He was the nicest guy ever. I remember being an intern on the sound stage and he always took the time to talk to us and give us advice. He was one of the good ones.

When post sound gets a credit after the on-set caterers, it doesn’t do us justice. On Swiss Army Man, The Dans initially wanted to give me my own title card that said, “Supervising Sound Editor Brent Kiser,” but the Directors Guild took it away. They said it wasn’t appropriate. Their reasoning is that if they give it to one person, then they’ll have to give it to everybody. I get it — the visual effects department is new on the block. They wrote their contract knowing what was going on, so they get a title card. But try watching a film on mute and then talk to me about the importance of sound. That needs to start changing, for the sheer fact of burnout and legacy.

At the end of the day, you worked so hard to get these projects done. You’re taking care of someone else’s baby and helping it to grow up to be this great thing, but then we’re only seen as the hired help. Or, we never even get a mention. There is so much pressure and stress on the sound department, and I feel we deserve more recognition for what we give to a film.



Pixelogic London adds audio mix, digital cinema theaters

Pixelogic has added new theaters and production suites to its London facility, which offers creation and mastering of digital cinema packages and theatrical screening of digital cinema content, as well as feature and episodic audio mixing.

Pixelogic’s London location now features six projector-lit screening rooms: three theaters and three production suites. Purpose-built from the ground up, the theaters offer HDR picture and immersive audio technologies, including Dolby Atmos and DTS:X.

The equipment offered in the three theaters includes Avid S6 and S3 consoles and Pro Tools systems that support a wide range of theatrical mixing services, complemented by two new ADR booths.

Sound Lounge Film+Television adds Atmos mixing, Evan Benjamin

Sound Lounge’s Film + Television division, which provides sound editorial, ADR and mixing services for episodic television, features and documentaries, is upgrading its main mix stage to support editing and mixing in the Dolby Atmos format.

Sound Lounge Film + Television division EP Rob Browning says that the studio expects to begin mixing in Dolby Atmos by the beginning of next year and that will allow it to target more high-end studio features. Sound Lounge is also installing a Dolby Atmos Mastering Suite, a custom hardware/software solution for preparing Dolby Atmos content for Blu-ray and streaming release.

It has also added veteran supervising sound editor, designer and re-recording mixer Evan Benjamin to its team. Benjamin is best known for his work in documentaries, including the feature doc RBG, about Supreme Court Justice Ruth Bader Ginsburg, as well as documentary series for Netflix, Paramount Network, HBO and PBS.

Benjamin is a 20-year industry veteran with credits on more than 130 film, television and documentary projects, including Paramount Network’s Rest in Power: The Trayvon Martin Story and HBO’s Baltimore Rising. Additionally, his credits include Time: The Kalief Browder Story, Welcome to Leith, Joseph Pulitzer: Man of the People and Moynihan.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.

London’s LipSync upgrades studio, adds Dolby Atmos

LipSync Post, located in London’s Soho, has upgraded its studio with Dolby Atmos and installed a new control system. To accomplish this, LipSync teamed up with HHB Communications’ Scrub division to create a hybrid dual Avid S6 and AMS Neve DFC3D desk while also upgrading the room to create Dolby Atmos mixes with a new mastering unit. Now that the upgrade to Theatre 2 is complete, LipSync plans to upgrade Theatre 1 this summer.

The setup offers the best of both worlds: full access to the classic Neve DFC sound along with more hands-on control of Avid Pro Tools automation via the S6 desks. As more projects are mixed exclusively “in the box,” LipSync streamlined its workflow by installing the S6s within the same frame as the DFC, with custom furniture created by Frozen Fish Design. This dual-operator configuration frees the mix engineers to work on separate Pro Tools systems simultaneously for fast and efficient turnaround to meet crucial project deadlines.

“The move into extended surround formats like Dolby Atmos is very exciting,” explains LipSync senior re-recording mixer Rob Hughes. “We have now completed our first feature mix in the refitted theater (Vita & Virginia directed by Chanya Button). It has a very detailed, involved soundtrack and the new system handled it with ease.”

Nugen adds 3D Immersive Extension to Halo Upmix

Nugen Audio has updated its Halo Upmix with a new 3D Immersive Extension, adding further options beyond the existing Dolby Atmos bed track capability. The 3D Immersive Extension now provides ambisonic-compatible output as an alternative to channel-based output for VR, game and other immersive applications. This makes it possible to upmix, re-purpose or convert channel-based audio for an ambisonic workflow.

With this 3D Immersive Extension, Halo fully supports Avid’s newly announced Pro Tools 12.8, now with native 7.1.2 stems for Dolby Atmos mixing. The combination of Pro Tools 12.8 and the Halo 3D Immersive Extension can provide a more fluid workflow for audio post pros handling multi-channel and object-based audio formats.

Halo Upmix is available immediately at a list price of $499 for both OS X and Windows, with support for Avid AAX, AudioSuite, VST2, VST3 and AU formats. The new 3D Immersive Extension replaces the Halo 9.1 Extension and can now be purchased for $199. Owners of the existing Halo 9.1 Extension can upgrade to the Halo 3D Immersive Extension for no additional cost. Support for native 7.1.2 stems in Avid Pro Tools 12.8 is available on launch.

Sony Pictures Post adds home theater dub stage

By Mel Lambert

Reacting to the increasing popularity of home theater systems that offer immersive sound playback, Sony Pictures Post Production has added a new mix stage to accommodate next-generation consumer audio formats.

Located in the landmark Thalberg Building on the Sony Pictures lot in Culver City, the new Home Theater Immersive Mix Stage features a flexible array of loudspeakers that can accommodate not only Dolby Atmos and Barco Auro-3D immersive consumer formats, but also other configurations as they become available, including DTS:X, as well as conventional 5.1- and 7.1-channel legacy formats.

The new room has already seen action on an Auro-3D consumer mix for director Paul Feig’s Ghostbusters and director Antoine Fuqua’s Magnificent Seven in both Atmos and Auro-3D. It is scheduled to handle home theater mixes for director Morten Tyldum’s new sci-fi drama Passengers, which will be overseen by Kevin O’Connell and Will Files, the re-recording mixers who worked on the theatrical release.

L-R: Nathan Oishi; Diana Gamboa, director of Sony Pictures Post Sound; Kevin O’Connell, re-recording mixer on ‘Passengers’; and Tom McCarthy.

“This new stage keeps us at the forefront in immersive sound, providing an ideal workflow and mastering environment for home theaters,” says Tom McCarthy, EVP of Sony Pictures Post Production Services. “We are empowering mixers to maximize the creative potential of these new sound formats, and deliver rich, enveloping soundtracks that consumers can enjoy in the home.”

Reportedly, Sony is one of the few major post facilities that currently can handle both Atmos and Auro-3D immersive formats. “We intend to remain ahead of the game,” McCarthy says.

The consumer mastering process involves repurposing original theatrical release soundtrack elements for a smaller domestic environment at reduced playback levels suitable for Blu-ray, 4K Ultra HD disc and digital delivery. The home Atmos format involves a 7.1.4 configuration, with a horizontal array of seven loudspeakers — three up front, two side channels and two rear surrounds — in addition to four overhead/height channels and a subwoofer/LFE channel. The consumer Auro-3D format, in essence, involves a pair of 5.1-channel loudspeaker arrays — left, center, right plus two rear surround channels — located one above the other, with all speakers approximately six feet from the listening position.

Formerly an executive screening room, the new 600-square-foot stage is designed to replicate the dimensions and acoustics of a typical home-theater environment. According to the facility’s director of engineering, Nathan Oishi, “The room features a 24-fader Avid S6 control surface console with Pan/Post modules. The four in-room Avid Pro Tools HDX 3 systems provide playback and record duties via Apple 12-Core Mac Pro CPUs with MADI interfaces and an 8TB Promise Pegasus hard disk RAID array, plus a wide array of plug-ins. Picture playback is from a Mac Mini and Blackmagic HD Extreme video card with a Brainstorm DCD8 Clock for digital sync.”

An Avid/DAD AX32 Matrix controller handles monitor assignments, which then route to a BSS BLU 806 programmable EQ that handles all standard B-chain duties for distribution to the room’s loudspeaker array. The array comprises a total of 13 JBL LSR-708i two-way loudspeakers and two JBL 4642A dual-15 subwoofers powered by Crown DCI Series networked amplifiers. Atmos panning within Pro Tools is accommodated by the familiar Dolby Rendering and Mastering Unit (RMU).

During September’s “Sound for Film and Television Conference,” Dolby’s Gary Epstein demo’d Atmos. ©2016 Mel Lambert.

“A Delicate Audio custom truss system, coupled with Adaptive Technologies speaker mounts, enables the near-field monitor loudspeakers to be re-arranged and customized as necessary,” adds Oishi. “Flexibility is essential, since we designed the room to seamlessly and fully support both Dolby Atmos and Auro formats, while building in sufficient routing, monitoring and speaker flexibility to accommodate future immersive formats. Streaming and VR deliverables are upon us, and we will need to stay nimble enough to quickly adapt to new specifications.”

Regarding the choice of a mixing controller for the new room, McCarthy says that he is committed to integrating more Avid S6 control surfaces into the facility’s workflow, witnessed by their current use within several theatrical stages on the Sony lot. “Our talent is demanding it,” he states. “Mixing in the box lets our editors and mixers keep their options open until print mastering. It’s a more efficient process, both creatively and technically.”

The new Immersive Mix Stage will also be used as a “Flex Room” for Atmos pre-dubs when other stages on the lot are occupied. “We are also planning to complete a dedicated IMAX re-recording stage early next year,” reports McCarthy.

“As home theaters grow in sophistication, consumers are demanding immersive sound, ultra HD resolution and high-dynamic range,” says Rich Berger, SVP of digital strategy at Sony Pictures Home Entertainment. “This new stage allows our technicians to more closely replicate a home theater set-up.”

“The Sony mix stage adds to the growing footprint of Atmos-enabled post facilities and gives the Hollywood creative community the tools they need to deliver an immersive experience to consumers,” states Curt Behlmer, Dolby’s SVP of content solutions and industry relations.

Adds Auro Technologies CEO Wilfried Van Baelen, “Having major releases from Sony Pictures Home Entertainment incorporating Auro-3D helps provide this immersive experience to consumers, ensuring they are able to enjoy films as the creator intended.”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Deepwater Horizon’s immersive mix via Twenty Four Seven Sound

By Jennifer Walden

The Peter Berg-directed film Deepwater Horizon, in theaters now, opens on a black screen with recorded testimony from real-life Deepwater Horizon crew member Mike Williams recounting his experience of the disastrous oil spill that began April 20, 2010 in the Gulf of Mexico.

“This documentary-style realism moves into a wide, underwater immersive soundscape. The transition sets the music and sound design tone for the entire film,” explains Eric Hoehn, re-recording mixer at Twenty Four Seven Sound in Topanga Canyon, California. “We intentionally developed the immersive mixes to drop the viewer into this world physically, mentally and sonically. That became our mission statement for the Dolby Atmos design on Deepwater Horizon. Dolby empowered us with the tools and technology to take the audience on this tightrope journey between anxiety and real danger. The key is not to push the audience into complete sensory overload.”

L-R: Eric Hoehn and Wylie Stateman. Photo Credit: Joe Hutshing

The 7.1 mix on Deepwater Horizon was crafted first with sound designer Wylie Stateman and re-recording mixers Mike Prestwood Smith (dialogue/music) and Dror Mohar (sound effects) at Warner Bros. in New York City. Then Hoehn mixed the immersive versions, but it wasn’t just a technical upmix. “We spent four weeks mixing the Dolby Atmos version, teasing out sonic story-point details such as the advancing gas pressure, fire and explosions,” Hoehn explains. “We wanted to create a ‘wearable’ experience, where your senses actually become physically involved with the tension and drama of the picture. At times, this movie is very much all over you.”

The setting for Deepwater Horizon is interesting in that the vertical landscape of the 25-story oil rig is more engrossing than the horizontal landscape of the calm sea. This dynamic afforded Hoehn the opportunity to really work with the overhead Atmos environment, making the audience feel as though they’re experiencing the story and not just witnessing it. “The story takes place 40 miles out at sea on a floating oil drilling platform. The challenge was to make this remote setting experiential for the audience,” Hoehn explains. “For visual artists, the frame is the boundary. For us, working in Atmos, the format extends the boundaries into the auditorium. We wanted the audience to feel as if they too were trapped with our characters aboard the Deepwater Horizon. The movement of sound into the theater adds to the sense of disorientation and confusion that they’re viewing on screen, making the story more immediate and disturbing.”

In their artistic approach to the Atmos mix, Stateman and sound effects designers Harry Cohen and Sylvain Lasseur created an additional sound design layer — specific Atmos objects that help to reinforce the visuals by adding depth and weight via sound. For example, during a sequence after a big explosion and blow out, Mike Williams (Mark Wahlberg) wakes up with a pile of rubble and a broken door on top of him. Twisted metal, confusing announcements and alarms were designed from scratch to become objects that added detail to the space above the audience. “I think it’s one of the most effective Atmos moments in the film. You are waking up with Williams in the aftermath of this intense, destructive sequence. The entire rig is overwhelmed by off-stage explosions, twisting metal, emergency announcements and hissing steam. Things are falling apart above you and around you,” details Hoehn.

Hoehn shares another example: during a scene on the drill deck they created sound design objects to describe the height and scale of the 25-story oil derrick. “We put those sounds into the environment by adding delays and echoes that make it feel like those sounds are pinging around high above you. We wanted the audience to sense the vertical layers of the Deepwater Horizon oil rig,” says Hoehn, who created the delays and echoes using a multichannel delay plug-in called Slapper by The Cargo Cult. “I had separate mix control over the objects and the acoustic echoes applied. I could put the discrete echoes in distinct places in the Atmos environment. It was an agitative design element. It was designed to make the audience feel oriented and at the same time disoriented.”
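At its core, a multichannel delay of this kind is a bank of taps, each with its own delay time and gain, whose outputs can then be routed to separate points in the room. The sketch below is a generic multi-tap delay to illustrate the technique, not The Cargo Cult’s Slapper plug-in:

```python
def multitap_delay(signal, taps, sample_rate=48000):
    """Sum delayed, attenuated copies of `signal`; each tap is (seconds, gain)."""
    out = [0.0] * (len(signal) + max(int(d * sample_rate) for d, _ in taps))
    for delay_s, gain in taps:
        offset = int(delay_s * sample_rate)
        for i, s in enumerate(signal):
            out[i + offset] += s * gain
    return out

# One impulse plus two echoes "pinging" back at decreasing levels.
echoes = multitap_delay([1.0], [(0.0, 1.0), (0.1, 0.5), (0.25, 0.25)])
assert echoes[0] == 1.0 and echoes[4800] == 0.5 and echoes[12000] == 0.25
```

In an Atmos mix, each tap’s output would feed a separate object so the discrete echoes can be placed at distinct positions above and around the audience.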

The additional sounds they created were not an attempt to reimagine the soundtrack, but rather a means of enhancing what was there. “We were deliberate about what we added,” Hoehn explains. “As a team we strived to maximize the advantages of an Atmos theater, which allows us to keep a film mentally, physically and sonically intense. That was the filmmaker’s primary goal.”

The landscape in Deepwater Horizon doesn’t just tower over the audience; it extends under them as well. The underwater scenes were an opportunity to feature the music since these “sequences don’t contain metal banging and explosions. These moments allow the music to give an emotional release,” says Hoehn.

Hoehn explains that the way music exists in Atmos is sort of like a big womb of sound; it surrounds the audience. The underwater visuals depict the catastrophic failure of the blowout preventer — a valve that can close off the well and prevent an uncontrolled flow of oil — and the music punctuates this emotional and pivotal point in the film. It gives a sense of calm that contrasts with what’s happening on screen. Sonically, it’s also a contrast to the stressful soundscape on board the rig. Hoehn says, “It’s good for such an intense film and story to have moments where you can find comfort, and I think that is where the music provides such emotional depth. It provides that element of comfort between the moments where your senses are being flooded. We played with dynamic range, going to silence and using the quiet to heighten the anticipation of a big release.”

Hoehn mixed the Atmos version in Twenty Four Seven Sound’s Dolby Atmos lab, which uses an Avid S6 console running Pro Tools 12 and features Meyer Acheron mains and 26 JBL AC28 monitors for the surrounds and overheads. It is an environment designed to provide sonic precision so that when the mixer turns a knob or pushes a fader, the change can instantly be heard. “You can feel your cause-and-effect happen immediately. Sometimes when you’re in a bigger room, you are battling the acoustics of the space. It’s helpful to work under a magnifying glass, particularly on a soundtrack that is as detailed as Deepwater Horizon’s,” says Hoehn.

Hoehn spent a month on the Atmos mix, which served as the basis for the other immersive formats, such as the IMAX 5 and IMAX 12 mixes. “The IMAX versions maintain the integrity of our Atmos design,” says Hoehn. “A lot of care had to be taken in each of the immersive versions to make sure the sound worked in service of the storytelling process.”

Bring On VR
In addition to the theatrical release, Hoehn discussed the prospect of a Deepwater Horizon VR experience. “Working with our friends at Dolby, we’re looking at virtual reality and experimenting with sequences from Deepwater Horizon. We are working to convert the Atmos mix to a headset, virtual sound environment,” says Hoehn. He explains that binaural sound or surround sound in headphones presents its own design challenges; it’s not just a direct lift of the 7.1 or Atmos mix.

“Atmos mixing for a theatrical sound pressure environment is different than the sound pressure environment in headphones,” explains Hoehn. “It’s a different sound pressure that you have to design for, and the movement of sounds needs to be that much more precise. Your brain needs to track movement and so maybe you have less objects moving around. Or, you have one sound object hand off to another object and it’s more of a parade of sound. When you’re in a theater, you can have audio coming from different locations and your brain can track it a lot easier because of the fixed acoustical environment of a movie theater. So that’s a really interesting challenge that we are excited to sink our teeth into.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

IBC: Surrounded by sound

By Simon Ray

I came to the 2016 IBC Show in Amsterdam at the start of a period of consolidation at Goldcrest in London. We had just gone through three years of expansion, upgrading, building and installing. Our flagship Dolby Atmos sound mixing theatre finished its first feature, Jason Bourne, and the DI department recently upgraded to offer 4K and HDR.

I didn’t have a particular area to research at the show, but there were two things that struck me almost immediately on arrival: the lack of drones and the abundance of VR headsets.

Goldcrest’s Atmos mixing stage.

360 audio is an area I knew a little about, and we did provide a binaural DTS Headphone X mix at the end of Jason Bourne, but there was so much more to learn.

Happily, my first IBC meeting was with Fraunhofer, where I was updated on some of the developments they have made in production, delivery and playback of immersive and 360 sound. Of particular interest was their Cingo technology. This is a playback solution that lives in devices such as phones and tablets and can already be found in products from Google, Samsung and LG. This technology renders 3D audio content onto headphones and can incorporate head movements. That means a binaural render that gives spatial information to make the sound appear to be originating outside the head rather than inside, as can be the case when listening to traditionally mixed stereo material.

For feature films, for example, this might mean taking the 5.1 home theatrical mix and rendering it into a binaural signal to be played back on headphones, giving the listener the experience of always sitting in the sweet spot of a surround sound speaker set-up. Cingo can also support content with a height component, such as 9.1 and 11.1 formats, and add that into the headphone stream as well to make it truly 3D. I had a great demo of this and it worked very well.

I was impressed that Fraunhofer had also created a tool for creating immersive content: a plug-in called Cingo Composer, available in both VST and AAX formats. It could run in Pro Tools, Nuendo and other DAWs to aid the creation of 3D content. For example, content could be mixed and automated in an immersive soundscape and then rendered into a four-channel FOA (First Order Ambisonics, or B-format) file to accompany 360 video played back on VR headsets with headtracking.
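First-order ambisonic encoding of this kind comes down to four panning gains per mono source. As a hedged illustration, here are the textbook B-format panning equations (traditional FuMa convention, with the omni W channel attenuated by 1/sqrt(2)); this is the general technique, not Fraunhofer’s implementation:

```python
import math

def foa_encode(sample, azimuth_deg, elevation_deg):
    """Pan a mono sample to first-order B-format W/X/Y/Z (FuMa convention)."""
    az = math.radians(azimuth_deg)    # 0 = front, positive = toward the left
    el = math.radians(elevation_deg)  # positive = up
    w = sample / math.sqrt(2.0)                # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)   # front-back figure-of-eight
    y = sample * math.sin(az) * math.cos(el)   # left-right figure-of-eight
    z = sample * math.sin(el)                  # up-down figure-of-eight
    return w, x, y, z

# A source straight ahead lands entirely in W and X.
w, x, y, z = foa_encode(1.0, 0.0, 0.0)
assert abs(x - 1.0) < 1e-9 and abs(y) < 1e-9 and abs(z) < 1e-9
```

Because the four channels encode the whole soundfield rather than speaker feeds, the decoder can rotate the scene against the listener’s head movements before rendering, which is what makes headtracking workable.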

After Fraunhofer, I went straight to DTS to catch up with what they were doing. We had recently completed some immersive DTS:X theatrical, home theatrical and, as mentioned above, headphone mixes using the DTS tools, so I wanted to see what was new. There were some nice updates to the content creation tools, players and renderers and a great demo of the DTS decoder doing some live binaural decoding and headtracking.

With immersive and 3D audio being the exciting new things, there were other interesting products on display in this area. In the Future Zone, Sennheiser was showing its Ambeo VR mic, an ambisonic microphone with four capsules arranged in a tetrahedron, which make up the A-format. Sennheiser also provides a proprietary A-to-B format encoder that can run as a VST or AAX plug-in on Mac and Windows to process the outputs of the four microphones into the W, X, Y and Z signals (the B-format).

From the B-Format it is possible to recreate the 3D soundfield, but you can also derive any number of first-order microphones pointing in any direction in post! The demo (with headtracking and 360 video) of a man speaking by the fireplace was recorded just using this mic and was the most convincing of all the binaural demos I saw (heard!).
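At its simplest, the A-to-B conversion is a fixed sum-and-difference matrix over the four capsule signals. A sketch of the textbook matrix, assuming the usual front-left-up/front-right-down/back-left-down/back-right-up capsule layout (Sennheiser’s proprietary encoder also compensates for capsule spacing and frequency response, which this toy version ignores):

```python
def a_to_b(flu, frd, bld, bru):
    """Convert tetrahedral A-format capsule samples to B-format W/X/Y/Z.
    Capsules: front-left-up, front-right-down, back-left-down, back-right-up."""
    w = flu + frd + bld + bru   # omnidirectional pressure
    x = flu + frd - bld - bru   # front-back figure-of-eight
    y = flu - frd + bld - bru   # left-right figure-of-eight
    z = flu - frd - bld + bru   # up-down figure-of-eight
    return w, x, y, z

# Equal pressure on all four capsules is pure W: no directional component.
assert a_to_b(0.5, 0.5, 0.5, 0.5) == (2.0, 0.0, 0.0, 0.0)
```

Deriving a virtual first-order mic pointing in any direction, as described above, is then just a weighted sum of W, X, Y and Z.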

Still in the Future Zone, for creating brand-new content, I visited the makers of the Spatial Audio Toolbox, which is similar to Fraunhofer’s Cingo Composer. B-Com’s Spatial Audio Toolbox contains VST plug-ins (soon to be AAX) that let you create an HOA (higher-order ambisonics) encoded 3D sound scene from standard mono, stereo or surround sources (using HOA Pan) and then listen to this sound scene on headphones (using Render Spk2Bin).

The demo we saw at the stand was impressive and included headtracking. The plug-ins themselves were running on a Pyramix on the Merging Technologies stand in Hall 8. It was great to get my hands on some “live” material and play with the 3D panning and hear the effect. It was generally quite effective, particularly in the horizontal plane.

I found all this binaural and VR stuff exciting. I am not sure exactly how, or whether, it might fit into a film workflow, but it was a lot of fun playing! The idea of rendering a 3D soundfield into a binaural signal has been around for a long time (I even dedicated months of my final year at university to a project on that very subject), but with mixed success. It is exciting to see that today’s mobile devices contain the processing power to render the binaural signal on the fly. Combine that with VR video and headtracking, and the ability to feed that information into the rendering process, and you have an offering that is very impressive when demonstrated.

I will be interested to see how content creators, specifically in the film area, use this (or don’t). The recreation of the 3D surround sound mix over 2-channel headphones works well, but whether headtracking gets added to this or not remains to be seen. If the sound is matched to video that’s designed for an immersive experience, then it makes sense to track the head movements with the sound. If not, then I think it would be off-putting. Exciting times ahead anyway.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

Deluxe Toronto adds Dolby Atmos theater, Steve Foster joins sound team

Steve Foster, a 25-year veteran of the sound industry, has joined Deluxe Toronto as a senior re-recording mixer. Foster’s first project at Deluxe Toronto will be the second season of the Syfy series The Expanse.

Foster comes to Deluxe Toronto from Technicolor Toronto, formerly Toronto’s Sounds Interchange, where he helped establish the long-form audio and ADR departments. He also wrote the score for the ’90s thriller Killer Image. Other credits include Narcos, Rolling Stones: At the Max and the TV series Hannibal. He earned a Gemini Award for Best Sound in a Dramatic Program on Everest, a Genie Award for Overall Sound on Passchendaele and four Motion Picture Sound Editors Golden Reel Awards for sound editing and ADR on various episodics.

In other news, Deluxe Toronto has also extended its capabilities, adding a new Dolby Atmos mixing theater geared toward episodic production to its facility. It features equipment and layout identical to the studio’s existing three episodic sound theaters, allowing for consistent and flexible review sessions for all of the 10 to 12 projects simultaneously flowing through Deluxe Toronto. The facility also houses a large theatrical mix theater with 36-channel Dolby Atmos sound, and a soundstage for ADR recording.

Our Main Image: (L-R) Steve Foster, Mike Baskerville, Christian T. Cooke.

Call of the Wild — Tarzan’s iconic yell

By Jennifer Walden

For many sound enthusiasts, Tarzan’s iconic yell is the true legend of that story. Was it actually actor Johnny Weissmuller performing the yell? Or was it a product of post sound magic involving an opera singer, a dog, a violin and a hyena played backwards as MGM Studios claims? Whatever the origin, it doesn’t impact how recognizable that yell is, and this fact wasn’t lost on the filmmakers behind the new Warner Bros. movie The Legend of Tarzan.

The updated version is not a far cry from the original, but it is more guttural and throaty, and less like a yodel. It has an unmistakable animalistic quality. While we may never know the true story behind the original Tarzan yell, postPerspective went behind the scenes to learn how the new one was created.

Supervising sound editor/sound designer Glenn Freemantle and sound designer/re-recording mixer Niv Adiri at Sound24, a multi-award winning audio post company located on the lot of Pinewood Film Studios in Buckinghamshire, UK, reveal that they went through numerous iterations of the new Tarzan yell. “We had quite a few tries on that but in the end it’s quite a simple sound. It’s actor Alexander Skarsgård’s voice and there are some human and animal elements, like gorillas, all blended together in it,” explains Freemantle.

Since the new yell always plays in the distance, it needed to feel powerful and raw, as though Tarzan is waking up the jungle. To emphasize this, Freemantle says, “We have animal sounds rushing around the jungle after the Tarzan yell, as if he is taking control of it.”

The jungle itself is a marvel of sight and sound. Freemantle notes that everything in the film, apart from the actors on screen, was generated afterward — the Congo, the animals, even the villages and people, a harbor with ships and an action sequence involving a train. Everything.

The film was shot on a back lot of Warner Bros. Studios in Leavesden, UK, so making the CGI-created Congo feel like the real deal was essential. They wanted the Congo to feel alive, with the sound changing as the characters moved through the space. Another challenge was grounding all the CG animals — the apes, wildebeests, ostriches, elephants, lions, tigers and other animals — in that world.

When Sound24 first started on the film, a year and a half before its theatrical release, Freemantle says there was very little to work with visually. “Basically it was right from the nuts and bolts up. There was nothing there, nothing to see in the beginning apart from still pictures and previz. Then all the apes, animals and jungles were put in and gradually the visuals were built up. We were building temp mixes for the editors to use in their cut, so it was like a progression of sound over time,” he says.

Sound24’s sound design got increasingly detailed as the visuals presented more details. They went from building ambient background for different parts of Africa — from the deep jungle to the open plains — at different times of the day and night to covering footsteps for the CG gorillas. The sound design team included Ben Barker, Tom Sayers, and Eilam Hoffman, with sound effects editing by Dan Freemantle and Robert Malone. Editing dialogue and ADR was Gillian Dodders. Foley was recorded at Shepperton Studios by Foley mixer Glen Gathard.

Capturing Sounds
Since capturing their own field recordings in the Congo would have proved too challenging, Sound24 opted to source sound recordings authentic to that area. They also researched and collected the best animal sounds they could find, which were particularly useful for the gorilla design.

Sound24’s sound design team designed the gorillas to have a range of reactions, from massive roars and growls to smaller grunts and snorts. They cut and layered different animal sounds, including processed human vocalizations, to create a wide range of gorilla sounds.

There were three main gorillas, and each sounds a bit different, but the most domineering of all was Akut. During a fight between Akut and Tarzan, Adiri notes that in the mix, they wanted to communicate Akut’s presence and power through sound. “We tried to create dynamics within Akut’s voice so that you feel that he is putting a lot of effort into the fight. You see him breathing hard and moving, so his voice had to have his movement in it. We had to make it dynamic and make sure that there was space for the hits, and the falls, and whatever is happening visually. We had to make sure that all of the sounds are really tied to the animal and you feel that he’s not some super ape, but he’s real,” Adiri says. They also designed sounds for the gang of gorillas that came to egg on Akut in his fight.

The Mix
All the effects, Foley and backgrounds were edited and premixed in Avid Pro Tools 11. Since Sound24 had been working on The Legend of Tarzan for over a year, keeping everything in the box allowed them to update their session over time and still have access to previous elements and temp mixes. “The mix was evolving throughout the sound editorial process. Once we had that first temp mix we just kept working with that, remixing sounds and reworking scenes, but it was all done in the box up until the final mix. We never started the mix from scratch on the dub stage,” says Adiri.

For the final Dolby Atmos mix at Warner Bros. De Lane Lea Studios in London, Adiri and Freemantle brought their Avid S6 console to the studio. “That surface was brilliant for us,” says Adiri, who mixed the effects/Foley/backgrounds. He shared the board with re-recording mixer Ian Tapp, who handled dialogue/music.

Adiri feels the Atmos surround field worked best for quiet moments, like during a wide aerial shot of the jungle where the camera moves down through the canopy to the jungle floor. There he was able to move through layers of sounds, from the top speakers down, and have the ambience change as the camera’s position changed. Throughout the jungle scenes, he used the Atmos surrounds to place birds and distant animal cries, slowly panning them around the theater to make the audience feel as though they are surrounded by a living jungle.

He also likes to use the overhead speakers for rain ambience. “It’s nice to use them in quieter scenes when you can really feel the space, moving sounds around in a more subliminal way, rather than using them to be in-your-face. Rain is always good because it’s a bright sound. You know that it is coming from above you. It’s good for that very directional sort of sound.”

Ambience wasn’t the only sound that Adiri worked with in Atmos. He also used it to pan the sounds of monkeys swinging through the trees and soaring overhead, and for Tarzan’s swinging. “We used it for these dynamic moments in the storytelling rather than filling up those speakers all the time. For the moments when we do use the Atmos field, it’s striking and that becomes a moment to remember, rather than just sound all the time,” concludes Freemantle.

Jennifer Walden is a New Jersey-based writer and audio engineer. 

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let’s consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which determines which bits are active or on (value = 1) and which are inactive or off (value = 0). As the bit depth used to represent a finite range increases, each value is represented in finer detail, which directly reduces the quantization error. The number of possible values is 2^X, where X is the number of bits available. A 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to a difference in dynamic range; it is the same range of values, quantized with an accuracy that increases as the number of bits increases.
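The arithmetic is easy to check. A minimal sketch in plain Python of a generic uniform quantizer over a normalized range (an illustration of the principle, not any Dolby implementation):

```python
def quantize(x, bits):
    """Round a value in [0.0, 1.0] to the nearest of 2**bits code values."""
    levels = 2 ** bits
    code = round(x * (levels - 1))   # integer code word
    return code / (levels - 1)       # reconstructed value

# A 10-bit signal has four times as many code values as an 8-bit signal...
assert 2 ** 10 == 4 * 2 ** 8

# ...and, over the same range, a smaller quantization error for the same input.
x = 0.4242
assert abs(x - quantize(x, 10)) < abs(x - quantize(x, 8))
```

Each extra bit doubles the number of code values and roughly halves the worst-case quantization error, without changing the range being represented.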

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system for encoding high-dynamic-range video that has since been standardized as SMPTE ST-2084, SMPTE’s standard for an electro-optical transfer function (EOTF) based on a perceptual quantizer (PQ). It builds on research from the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ process allows video to be encoded and displayed across a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a characteristic of human vision: our eyes are less sensitive to changes in highlights than they are to changes in shadows.
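For the curious, the PQ curve itself is compact enough to sketch. Below is an illustrative implementation of the ST-2084 EOTF, using the constants published in the standard (the function name is our own; treat this as a sketch rather than a reference implementation):

```python
# SMPTE ST-2084 (PQ) EOTF constants, as published in the standard.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.8438
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.8516
C3 = 2392 / 4096 * 32    # ~18.6875
PEAK_NITS = 10_000.0

def pq_eotf(signal: float) -> float:
    """Map a normalized PQ code value in [0, 1] to absolute luminance in nits."""
    e = signal ** (1 / M2)
    return PEAK_NITS * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

print(pq_eotf(0.0))  # 0.0 nits (black)
print(pq_eotf(1.0))  # 10000.0 nits (peak)
```

Because the curve is steep in the shadows and shallow in the highlights, code values are spent where the eye is most sensitive, which is precisely the perceptual trick that lets 12 bits cover a 10,000-nit range.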

Previous display systems, referred to as SDR, or Standard Dynamic Range, are usually 8-bit. Even at 10 bits, SD and HD video are specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision’s nit range is 100 times greater than what we have typically been seeing from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video.  For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether or not the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events in Dolby Vision is currently a challenge, and not only because existing HDTV cannot carry the different signal; there are also issues with adapting the Dolby Vision process itself for live broadcasting. Dolby is working on these issues, but it is not proposing an entirely new system for Dolby Vision at live events. Some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has gone into Dolby Vision. While Dolby Vision faces competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log Gamma, or HLG), it has an advantage: several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby’s continued R&D investment in solving the live-broadcast issues yields a solution that broadcasters can successfully implement, Dolby Vision may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

London’s Halo adds dubbing suite

Last month, London’s Halo launched a dubbing suite, Studio 5, at its Noel Street facility. The studio is suited for TV mix work across all genres, as well as for DCP 5.1 and 7.1 theatrical projects, or as a pre-mix room for Halo’s Dolby Features licensed Studios 1 and 3. The new room is also pre-wired for Dolby Atmos.

The new studio features an HDX2 Pro Tools 12|HD system, a 24-fader Avid S6 M40 and a custom Dynaudio 7.1 speaker system. This is all routed via a Colin Broad TMC-1-Penta-controlled DAD AX32 digital audio matrix for maximum versatility and future scalability. Picture playback from Pro Tools is provided by an AJA Kona LHi card via a Barco 2K digital projector.

In addition, Halo has built a dedicated 5.1 audio editing room for their recently arrived head of sound editorial, Jay Price, to work from. Situated directly adjacent to the new studio, the room features a Pro Tools 12|HD Native system and 5.1 Dynaudio Air 6 speakers.

Jigsaw24 and CB Electronics supplied the hardware and the installation know-how. Level Acoustic handled the room design, and Munro Acoustics provided the custom speaker system.

Warsaw’s Dreamsound develops new spin on Dolby Atmos

This Poland-based studio offers full-service post and a newly installed Atmos mix stage.

By Mel Lambert

Back in 2012, when the owners of Warsaw, Poland-based Dreamsound Studios were contemplating a new re-recording stage, they were invited to Dolby’s European headquarters in Wootton Bassett in England to evaluate the Atmos immersive sound system, which they hoped to install. “But we also decided to conduct our own studies into the correlation between Atmos panning and the localization of phantom sound sources,” recalls Dreamsound co-partner Marcin Kasiński.

Using various samples of filtered pink noise directed at experienced listeners from nine targeted locations around a central seating area, Kasiński and his partner Kacper Habisiak — both graduates from The Frederic Chopin University of Music’s sound engineering department, and experienced editors and re-recording mixers as well as accomplished musicians — discovered that their test audience could more easily identify sound coming from the front quadrant and rear corners and less easily from the sides.

Marcin Kasiński (left) and Kacper Habisiak, flanking Pavel Stverak, a sound consultant with Dolby.

During a workshop at the recent AES Convention in Warsaw, Kasiński and Habisiak presented a paper on their findings, reporting that the implications for immersive sound mixing are immediately obvious, with localization of height information being enhanced at high rather than low frequencies.

Their follow-up tests will be with real sound samples rather than tones, and in addition to the correlation between Atmos objects and on-screen images, Kasiński says they plan to test dynamic sounds that move from one loudspeaker channel to another. They also want to open up their evaluation sessions to include non-trained listeners, which will more closely mimic an average movie-going audience.

Editorial & Re-Recording
In addition to their now-up-and-running large Dolby-certified Atmos stage, Dreamsound comprises a quartet of 5.1-channel sound editing rooms, one of which serves as a pre-mix and broadcast-mix area and features a 24-fader Avid ICON D-Command console. “We also have a 1,100-square-foot Foley stage, which is also used for ADR and walla recording,” says Kasiński. The new Atmos re-recording stage features a 32-fader Avid ICON D-Control ES console connecting to a trio of JBL ScreenArray cabinets located behind the screen and multiple SC12/SC8 surround loudspeakers mounted on the ceiling, side and rear walls, plus model 4632 18-inch subwoofers. Crown DSi and XLS Series amplifiers power the 32-speaker system.

“Because JBL speakers are the standard in Polish cinemas, and they match perfectly with Crown amplifiers, they were a logical choice of playback components in our Dolby Atmos screening room,” Habisiak says. “Equally important, the performance of these speakers and amplifiers meets Dolby’s licensing requirements, which are extremely stringent regarding specifications that include sound pressure levels, frequency response, coverage relative to room size and other parameters.”

Video playback is handled by a Christie CP2220 2K DCI projector and a 19-foot wide Harkness mini-perforated projection screen. The room also includes three Avid Pro Tools HD playback/record machines, a Lexicon 960L 5.1 reverb unit, Cedar DNS One noise-reduction system and a wide range of Pro Tools plug-ins.

Background, Philosophy & Work
“During our studies [at Frederic Chopin University] we started to work in sound post production. We also got to work on feature film projects,” explains Kasiński. “Since Warsaw is the center of the Polish film industry, we wanted to create a company that could provide the best possible sound services.”

The partners point out that along with some innovative technology, Dreamsound is at its core a creative team of film enthusiasts. “We have managed to gather together a great group of sound editors, Foley artists and mixers,” shares Kasiński.

During the past six years Dreamsound has worked with many acclaimed Polish directors, including Malgorzata Szumowska, Agnieszka Holland, Jerzy Hoffman and Wladyslaw Pasikowski. Last year they won an MPSE Golden Reel Award for Best Sound Editing in a Feature Documentary for Powstanie Warszawskie [Warsaw Uprising], directed by Jan Komasa.

This year, Dreamsound expects to work on six feature films and two TV series — one for Polish TV and one for Polish HBO — in addition to handling re-recording for other supervising sound editors. “While we specialize in Polish-language productions, we have also worked on some foreign films, including one in half-Mandarin and half-English,” reports Kasiński.

Foley

Dreamsound prides itself on being a full-service audio post house, offering post sound editing, Foley, theatrical mix and broadcast deliverables. “We have also fully adopted an American workflow for sound post,” explains Kasiński. “So we follow the same standards and speak the same language as our friends abroad. For example, we recently recorded Foley for a French film studio and handled a remote ADR session for a Japanese studio — there was always fluent and creative collaboration.”

All of that aside, the co-owners readily concede that it may be too early to talk about the success of the new Atmos stage. “Although we haven’t mixed any movies in Atmos yet, there are some productions on the horizon,” says Kasiński. “We are waiting for more Polish cinemas to install Atmos systems. Immersive sound has opened a new chapter for movie soundtracks. We only have to wait until DCI or SMPTE establishes an open standard for immersive audio. Time will tell. For sure, we don’t want to rest on our laurels. We are ready to provide the best possible sound services for clients from around the world.”

Creating sounds, mix, more for ‘The Hunger Games: Mockingjay, Part 1’

By Jennifer Walden

It may be called The Hunger Games, but in Mockingjay, Part 1, the games are over. Life for the people of Panem, outside The Capitol, is about rebellion, war and survival. Supervising sound editor/sound designer/re-recording mixer Jeremy Peirson, at Warner Bros. Sound in Burbank, has worked with director Francis Lawrence on both Catching Fire and Mockingjay, Part 1.

Without the arena and its sinister array of “horrors” (for those who don’t remember Catching Fire, those horrors, such as blood rain, acid fog, carnivorous monkeys and lightning storms, were released every hour in the arena), Mockingjay, Part 1 is not nearly as diverse, according to Peirson. “Catching Fire was such a huge story between The Capitol and all the various Districts…”

Dolby bringing Atmos to homes… are small post houses next?

By Robin Shore

Last month Dolby announced that its groundbreaking Atmos surround sound format will soon be available outside of commercial cinemas. By sometime early next year, consumers will be able to buy special Atmos-enabled A/V receivers and speakers for their home theater systems.

I recently had the chance to demo a prototype of an Atmos home system at an event hosted at Dolby’s New York offices.

A brief overview for those who might not be totally familiar with this technology: Atmos is Dolby’s latest surround sound format. It includes overhead speakers, which allow sounds to be panned above the audience. Rather than using a traditional track-based paradigm, Atmos mixes are object-oriented. An Atmos mix contains up to 128 audio objects, each with metadata describing its position in the room.