True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That challenge is a big part of why the sound editing and mixing on Season 3 of HBO’s True Detective have been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work: supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; and re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on-set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction, in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, because we had so much close-up, full-frame face dialogue and we didn’t want to distract from the story and the great performances the actors were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.
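Neither iZotope nor FabFilter publishes the internals of the tools Kohut mentions, but the basic de-essing move he describes (detect energy in the sibilant band, then duck just that band when it gets too hot) can be sketched in a few lines. The Python below is a toy illustration built on assumed settings for the band edges, threshold and ratio; it is not the plugins’ processing, nor the settings used on the show.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def de_ess(audio, sr, lo=5000.0, hi=9000.0, threshold_db=-30.0, ratio=4.0):
    """Toy de-esser: duck the sibilant band when its envelope gets too hot.

    `audio` is a mono float array in [-1, 1]. The band edges, threshold and
    ratio are illustrative defaults, not values from the True Detective mix.
    """
    # Split the signal into a sibilant band and everything else.
    sos_band = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos_band, audio)
    rest = audio - band  # crude complement; a real de-esser uses a proper crossover

    # Follow the band's level with a simple peak envelope (~10 ms release).
    alpha = np.exp(-1.0 / (0.010 * sr))
    env = np.zeros_like(band)
    level = 0.0
    for i, x in enumerate(np.abs(band)):
        level = max(x, alpha * level)
        env[i] = level
    env_db = 20.0 * np.log10(env + 1e-9)

    # Compressor-style gain: above threshold, pull the band down by part of the overshoot.
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)

    # Recombine the ducked sibilant band with the untouched remainder.
    return rest + band * gain
```

A hardware or plugin de-esser would use a phase-matched crossover or a dynamic EQ band rather than the crude band subtraction above, but the gain computer, a compressor keyed only by sibilant-band energy, is the same idea.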

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War and Modern Memory”? You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War and Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War and Modern Memory”? Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Blade Runner 2049’s dynamic and emotional mix

By Jennifer Walden

“This film has more dynamic range than any movie we’ve ever mixed,” explains re-recording mixer Doug Hemphill of the Blade Runner 2049 soundtrack. He and re-recording mixer Ron Bartlett, from Formosa Group, worked with director Denis Villeneuve to make sure the audio matched the visual look of the film. From the pounding sound waves of Hans Zimmer and Benjamin Wallfisch’s score to the overwhelming wash of Los Angeles’s street-level soundscape, there’s massive energy in the film’s sonic peaks.

L-R: Ron Bartlett, Denis Villeneuve, Joe Walker, Ben Wallfisch and Doug Hemphill. Credit: Clint Bennett

The first time K (Ryan Gosling) arrives in Los Angeles in the film, the audience is blasted with a Vangelis-esque score that is reminiscent of the original Blade Runner, and that was ultimately the goal there: to envelop the audience in the Blade Runner experience. “That was our benchmark for the biggest, most enveloping sound sequence — without being harsh or loud. We wanted the audience to soak it in. It was about filling out the score, using all the elements in Hans Zimmer’s and Ben Wallfisch’s arsenal there,” says Bartlett, who handled the dialogue and music in the mix.

He and Villeneuve went through a wealth of musical elements — all of which were separated so Villeneuve could pick the ones he liked. His preference gravitated toward the analog synth sounds, like the Yamaha CS-80, which composer Vangelis used in his 1982 Blade Runner composition. “We featured those synth sounds throughout the movie,” says Bartlett. “I played with the spatial aspects, spreading certain elements into the room to envelop you in the score. It was very immersive that way.”

Bartlett notes that initially there were sounds from the original Blade Runner in their mix, like huge drum hits from the original score that were converted into 7.1 versions by supervising sound editor Mark Mangini at Formosa Group. Bartlett used those drum hits as punctuation throughout the film, for scene changes and transitions. “Those hits were everywhere. Actually, they’re the first sound in the movie. Then you can hear those big drum hits in the Vegas walk. That Vegas walk had another score with it, but we kept stripping it away until we were down to just those drum hits. It’s so dramatic.”

But halfway into the final mix for Blade Runner 2049, Mangini phoned Bartlett to tell him that the legal department said they couldn’t use any of those sounds from the original film. They’d need to replace them immediately. “Since I’m a percussionist, Mark asked if I could remake the drum hits. I stayed up until 3am and redid them all in my studio in 7.1, and then brought them in and replaced them throughout the movie. Mark had to make all these new spinner sounds and replace those in the film. That was an interesting moment,” reveals Bartlett.

Sounds of the City
Los Angeles 2049 is a multi-tiered city. Each level offers a different sonic experience. The zen-like prayer that’s broadcast at the top level gradually transforms into a cacophony the closer one gets to street-level. Advertisements, announcements, vehicles, music from storefronts and vending machine sounds mix with multi-language crowds — there’s Russian, Vietnamese, Korean, Japanese, and the list goes on. The city is bursting with sound, and Hemphill enhanced that experience by using Cargo Cult’s Spanner on the crowd effects during the scene where K sits outside Bibi’s Bar, spreading the crowds around the theater to “give the audience a sense of this crush of humanity,” he says.

The city experience could easily be chaotic, but Hemphill and Bartlett made careful choices on the stage to “rack the focus” — determining for the audience what they should be listening to. “We needed to create the sense that you’re in this overpopulated city environment, but it still had to make sense. The flow of the sound is like ‘musique concrète.’ The sounds have a rhythm and movement that’s musical. It’s not random. There’s a flow,” explains Hemphill, who has an Oscar for his work on The Last of the Mohicans.

Bartlett adds that their goal was to keep a sense of clarity as the camera traveled through the street scene. If there was a big holographic ad in the foreground, they’d focus on that, and as the camera panned away another sound would drive the mix. “We had to delete some of the elements and then move sounds around. It was a difficult scene and we took a long time on it, but we’re happy with the clarity.”

On the quiet end of the spectrum, the film’s soundtrack shines. Spaces are defined with textural ambiences and handcrafted reverbs. Bartlett worked with a new reverb called DSpatial created by Rafael Duyos. “Mark Mangini and I helped to develop DSpatial. It’s a very unique reverb,” says Bartlett.

According to the website, DSpatial Reverb is a space modeler and renderer that offers 48 decorrelated outputs. It doesn’t use recorded impulse responses; instead it uses modeled IRs. This allows the user to select and tweak a series of parameters, like surface texture and space size, to model the acoustic and physical characteristics of any room. “It’s a decorrelated reverb, meaning you can add as many channels as you like and pan them into every Dolby Atmos speaker that is in the room. That wasn’t the only reverb we used, but it was the main one we used in specific environments in the film,” says Bartlett.
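DSpatial’s modeling is proprietary, so the sketch below is only a toy illustration of what decorrelated outputs buy you: when every speaker feed is convolved with its own statistically independent tail, the reverb stays diffuse around the room instead of collapsing into a single phantom image. The synthetic noise-burst IRs, channel count and decay time are assumptions for the example, not anything taken from the plugin or the mix.

```python
import numpy as np
from scipy.signal import fftconvolve

def decorrelated_reverb(dry, sr, n_channels=8, rt60=2.5, seed=0):
    """Toy decorrelated reverb: one independent synthetic IR per output channel.

    Each IR is exponentially decaying white noise; using independent noise per
    channel keeps the tails decorrelated. rt60 and channel count are
    illustrative, not DSpatial parameters.
    """
    rng = np.random.default_rng(seed)
    n_ir = int(rt60 * sr)
    t = np.arange(n_ir) / sr
    decay = 10.0 ** (-3.0 * t / rt60)  # -60 dB over rt60 seconds

    outputs = []
    for _ in range(n_channels):
        ir = rng.standard_normal(n_ir) * decay
        ir /= np.sqrt(np.sum(ir ** 2))  # roughly equal tail energy per channel
        outputs.append(fftconvolve(dry, ir))
    return np.stack(outputs)  # shape: (n_channels, len(dry) + n_ir - 1)
```

Feeding the same IR to every output would give a tail that images as one correlated source; generating an independent tail per channel is the property those 48 decorrelated outputs are after.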

In combination with DSpatial, Bartlett used Audio Ease’s Altiverb, FabFilter reverbs and Cargo Cult’s Slapper delay to help create the multifaceted reflections that define the spaces on-screen so well. “We tried to make each space different,” says Bartlett. “We tried to evoke an emotion through the choices of reverbs and delays. It was never just one reverb or delay. I used two or three. It was very interesting creating those textures and creating those rooms.”

For example, in the Wallace Corporation building, Niander Wallace’s (Jared Leto) private office is a cold, lonely space. Water surrounds a central platform; reflections play on the imposing stone walls. “The way that Roger Deakins lit it was just stunning,” says Bartlett. “It really evoked a cool emotion. That’s what is so intangible about what we do, creating those emotions out of sound.” In addition to DSpatial, Altiverb and FabFilter reverbs, he used Cargo Cult’s Slapper delay, which “added a soft rolling, slight echo to Jared Leto’s voice that made him feel a little more God-like. It gave his voice a unique presence without being distracting.”

Another stunning example of Bartlett’s reverb work was K’s entrance into Rick Deckard’s (Harrison Ford) casino hideout. The space is dead quiet; then K opens the door, and the sound rings out and slowly dissipates. It conveys the feeling that this is a vast, isolated, and empty space. “It was a combination of three reverbs and a delay that made that happen, so the tail had a really nice shine to it,” says Bartlett.
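The actual plugin chains and settings from that scene aren’t documented, but the general recipe Bartlett describes, several reverbs of different lengths layered in parallel plus a discrete slap echo, is easy to sketch. Every number below (the decay times, the 120 ms slap, the mix levels) is invented for illustration rather than taken from the film.

```python
import numpy as np
from scipy.signal import fftconvolve

def synth_ir(sr, rt60, seed):
    """Exponentially decaying noise, standing in for a convolution reverb IR."""
    rng = np.random.default_rng(seed)
    n = int(rt60 * sr)
    t = np.arange(n) / sr
    ir = rng.standard_normal(n) * 10.0 ** (-3.0 * t / rt60)
    return ir / np.sqrt(np.sum(ir ** 2))

def big_empty_space(dry, sr):
    """Layer three reverbs of different lengths plus a single slap echo.

    All times and levels are illustrative guesses, not the mix's settings.
    """
    layers = [
        (0.50, fftconvolve(dry, synth_ir(sr, 1.2, seed=1))),  # short "room" body
        (0.35, fftconvolve(dry, synth_ir(sr, 3.0, seed=2))),  # mid-length tail
        (0.25, fftconvolve(dry, synth_ir(sr, 6.0, seed=3))),  # long, slowly dissipating tail
    ]
    out_len = max(len(wet) for _, wet in layers)
    mix = np.zeros(out_len)
    for gain, wet in layers:
        mix[: len(wet)] += gain * wet

    # One ~120 ms slap echo so the onset "rings out" before the tails take over.
    d = int(0.120 * sr)
    mix[d : d + len(dry)] += 0.3 * dry

    # Add the dry signal back in.
    mix[: len(dry)] += dry
    return mix
```

The short layer carries the body of the sound, the long layer supplies the slowly dissipating tail, and the slap provides the initial ring when the door opens.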

One of the most difficult rooms to find artistically, says Bartlett, was that of the memory maker, Dr. Ana Stelline (Carla Juri). “Everyone had a different idea of what that dome might sound like. We experimented with four or five different approaches to find a good place with that.”

The reverbs that Bartlett creates are never static. They change to fit the camera perspective. Bartlett needed several different reverb and delay processing chains to define how Dr. Stelline’s voice would react in the environment. For example, “There are some long shots, and I had a longer, more distant reverb. I bled her into the ceiling a little bit in certain shots so that in the dome it felt like the sound was bouncing off the ceiling and coming down at you. When she gets really close to the glass, I wanted to get that resonance of her voice bouncing off of the glass. Then when she’s further in the dome, creating that birthday memory, there is a bit broader reverb without that glass reflection in it,” he says.

On K’s side of the glass, the reverb is tighter to match the smaller dimensions and less reflective characteristics of that space. “The key to that scene was to not be distracting while going in and out of the dome, from one side of the glass to the other,” says Bartlett. “I had to treat her voice a little bit so that it felt like she was behind the glass, but if she was way too muffled it would be too distracting from the story. You have to stay with those characters in the story, otherwise you’re doing a disservice by trying to be clever with your mixing.

“The idea is to create an environment so you don’t feel like someone mixed it. You don’t want to smell the mixing,” he continues. “You want to make it feel natural and cool. If we can tell when we’ve made a move, then we’ll go back and smooth that out. We try to make it so you can’t tell someone’s mixing the sound. Instead, you should just feel like you’re there. The last thing you want to do is to make something distracting. You want to stay in the story. We are all about the story.”

Mixing Tools
Bartlett and Hemphill mixed Blade Runner 2049 at Sony Pictures Post in the William Holden Theater using two Avid S6 consoles running Avid Pro Tools 12.8.2, which features complete Dolby Atmos integration. “It’s nice to have Atmos panners on each channel in Pro Tools. You just click on the channel and the panner pops up. You don’t want to go to just one panner with one joystick all the time so it was nice to have it on each channel,” says Bartlett.

Hemphill feels the main benefit of having the latest gear — the S6 consoles and the latest version of Pro Tools — is that it gives them the ability to carry their work forward. “In times past, before we had this equipment and this level of Pro Tools, we would do temp dubs and then we would scrap a lot of that work. Now, we are working with main sessions all the way from the temp mix through to the final. That’s very important to how this soundtrack was created.”

For instance, the dialogue required significant attention due to the use of practical effects on set, like weather machines for rain and snow. All the dialogue work they did during the temp dubs was carried forward into the final mix. “Production sound mixer Mac Ruth did an amazing job while working in those environments,” explains Bartlett. “He gave us enough to work with, and we were able to use iZotope RX 6 to take out noise that was distracting. We were careful not to dig into the dialogue too much because when you start pulling out too many frequencies, you ruin the timbre and quality of the dialogue, the humanness.”
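iZotope doesn’t publish RX 6’s algorithms, but the tradeoff Bartlett describes shows up even in a bare-bones spectral gate: the harder each frequency band is pulled toward silence, the more of the voice’s timbre goes with the noise. The sketch below is a toy under assumed parameters, not anything RX ships; it makes the point by capping how far any bin can be attenuated.

```python
import numpy as np
from scipy.signal import stft, istft

def gentle_denoise(audio, sr, noise_clip, max_reduction_db=10.0, over_subtract=1.5):
    """Toy spectral gate with a capped reduction depth.

    `noise_clip` is a noise-only stretch of the same recording. The cap is the
    point of the example: never attenuate a bin by more than ~10 dB, so the
    voice keeps its timbre. All numbers are illustrative, not RX settings.
    """
    nper = 1024
    _, _, spec = stft(audio, fs=sr, nperseg=nper)
    _, _, noise_spec = stft(noise_clip, fs=sr, nperseg=nper)

    # Per-bin noise magnitude estimate from the noise-only clip.
    noise_mag = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

    # Spectral-subtraction-style gain, then clamp so reduction never exceeds the cap.
    mag = np.abs(spec)
    gain = np.maximum(mag - over_subtract * noise_mag, 0.0) / (mag + 1e-12)
    floor = 10.0 ** (-max_reduction_db / 20.0)
    gain = np.clip(gain, floor, 1.0)

    _, cleaned = istft(spec * gain, fs=sr, nperseg=nper)
    return cleaned
```

Raising max_reduction_db buys a quieter background at the cost of a thinner, more processed-sounding voice, which is the digging-in the mixers were trying to avoid.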

One dialogue-driven scene that made a substantial transformation from temp dub to final mix was the underground sequence in which Freysa (Hiam Abbass) makes a revelation about the replicant child. “The actress was talking in this crazy accent and it was noisy and hard to understand what was happening. It’s a very strong expositional moment in the movie. It’s a very pivotal point,” says Bartlett. They looped the actress for that entire scene and worked to get her ADR performance to sound natural in context with the other sounds. “That scene came such a long way, and it really made the movie for me. Sometimes you have to dig a little deeper to tell the story properly, but we got it. When K sits down in the chair, you feel the weight. You feel that he’s crushed by that news. You really feel it because the setup was there.”

Blade Runner 2049 is ultimately a story that questions the essence of human existence. While equipment and technique were an important part of the post process, in the end it was all about conveying the emotion of the story through the soundtrack.

“With Denis [Villeneuve], it’s very much feel-based. When you hear a sound, it brings to mind memories immediately. Denis is the type of director that is plugged into the emotionality of sound usage. The idea more than anything else is to tell the story, and the story of this film is what it means to be a human being. That was the fuel that drove me to do the best possible work that I could,” concludes Hemphill.


Jennifer Walden is a New Jersey-based writer and audio engineer. Follow her on Twitter @audiojeney.

Sony Pictures Post, Imageworks, Colorworks lend a hand for ‘Spider-Man 2’

The visual effects, sound and post teams from Sony Pictures Entertainment teamed up on the post production workflow for Columbia Pictures’ The Amazing Spider-Man 2.

Sony Pictures Imageworks provided more than 1,000 visual effects shots; Sony Pictures Post Production provided sound, marking its first project mixed in Dolby Atmos; and Colorworks handled film scanning, conforming and color grading, all in 4K. An innovative workflow gave the teams unprecedented access to data associated with the production, enabling close, simultaneous collaboration. The result was improved efficiency and a more spectacular cinematic experience for audiences around the world.

“Our groups have developed a close, creative partnership, aided by remarkable new technologies that allowed them to work across disciplines as one team,” said Randy Lake, EVP/GM, Digital Production Services. “The results are clearly evident in The Amazing Spider-Man 2, which established new standards for technical innovation.”


Working with senior visual effects supervisor Jerome Chen, Sony Pictures Imageworks created a variety of visual effects that blend seamlessly with live-action stunts and performances. Artists created three new villains — Electro, Rhino and Goblin — and developed fully digital CG environments representing New York’s Times Square, Manhattan skyscrapers, a next-generation hydroelectric plant and an art deco-era clock tower, among many other one-off effects seen throughout the film.

The film’s sound team was led by sound supervisors/designers Addison Teague and Eric Norris, and re-recording mixers Paul Massey and David Giammarco. Working in the newly renovated William Holden Theater on the Sony Pictures Entertainment lot, Massey and Giammarco mixed the soundtrack natively in Dolby Atmos and finished in Atmos, Auro and 5.1 formats, a combination that had never been done before. They used a Harrison MPC4D console with an Xrange engine.


At Colorworks, film dailies — amounting to more than 1.5 million feet of 35mm film — were scanned to 4K digital format prior to editorial. The film later returned to Colorworks for conforming, final color grading (in 4K) and mastering in 2D and stereo 3D.

Collaboration between the teams was facilitated by Sony Pictures Entertainment’s Production Backbone, the studio’s shared-storage environment. Original production elements, along with associated metadata — cumulatively amounting to more than 2.4 petabytes — were stored on the backbone, where they were accessible to sound and picture editors, visual effects artists and others, as needed and in appropriate file formats. The backbone also simplified delivery of elements to external visual effects vendors, who accessed it through secure, high-speed connections.

“Scanning all of the original film elements at 4K allowed us to work at the highest quality and bring a true digital workflow to the production,” explained Bill Baggelaar, Senior VP of Technology for Colorworks and Post Production. “That saved significant time. The backbone not only provided access to data, it also tracked progress. If an editor or a visual effects artist made a change, it was quickly available to all the other members of the team, which was increasingly important for quick turnaround on a dynamic film like The Amazing Spider-Man 2.”