
The gritty and realistic sounds of Joker

By Jennifer Walden

The grit of Gotham City in Warner Bros.’ Joker is painted on in layers, but not in broad strokes of sound. Distinct details are meticulously placed around the Dolby Atmos surround field, creating a soundtrack that is full but not crowded and muddy — it’s alive and clear. “It’s critical to try to create a real feeling world so Arthur (Joaquin Phoenix) is that much more real, and it puts the audience in a place with him,” says re-recording mixer Tom Ozanich, who mixed alongside Dean Zupancic at Warner Bros. Sound in Burbank on Dub Stage 9.

L-R: Tom Ozanich, Unsun Song and Dean Zupancic on Dub Stage 9. Photo: Michael Dressel.

One main focus was to make a city that was very present and oppressive. Supervising sound editor Alan Robert Murray created specific elements to enhance this feeling, while dialogue supervisor Kira Roessler created loop group crowds and callouts that Ozanich could sprinkle throughout the film.

During the street scene near the beginning of the film, Arthur is dressed as a clown and dancing on the sidewalk, spinning a “Going Out of Business” sign. Traffic passes to the left and pedestrians walk around Arthur, who is on the right side of the screen. The Atmos mix reflects that spatiality.

“There are multiple layers of sounds, like callouts of group ADR, specific traffic sounds and various textures of air and wind,” says Zupancic. “We had so many layers that afforded us the ability to play sounds discretely, to lean the traffic a little heavier into the surrounds on the left and use layers of voices and footsteps to lean discretely to the right. We could play very specific dimensions. We just didn’t blanket a bunch of sounds in the surrounds and blanket a bunch of sounds on the front screen. It was extremely important to make Gotham seem gritty and dirty with all those layers.”

The sound effects and callouts didn’t always happen conveniently between lines of principal dialogue. Director Todd Phillips wanted the city to be conspicuous… to feel disruptive. Ozanich says, “We were deliberate with Todd about the placement of literally every sound in the movie. There are a few spots where the callouts were imposing (but not quite distracting), and they certainly weren’t pretty. They didn’t occur in places where it doesn’t matter if someone is yelling in the background. That’s not how it works in real life; we tried to make it more like real life and let these voices crowd in on our main characters.”

Every space feels unique with Gotham City filtering in to varying degrees. For example, in Arthur’s apartment, the city sounds distant and benign. It’s not as intrusive as it is in the social worker’s (Sharon Washington) office, where car horns punctuate the strained conversation. Zupancic says, “Todd was very in tune with how different things would sound in different areas of the city because he grew up in a big city.”

Arthur’s apartment was further defined by director Phillips, who shared specifics like: The bedroom window faces an alley so there are no cars, only voices, and the bathroom window looks out over a courtyard. The sound editorial team created the appropriate tracks, and then the mixers — working in Pro Tools via Avid S6 consoles — applied EQ and reverb to make the sounds feel like they were coming from those windows three stories above the street.

In the Atmos mix, the clarity of the film’s apposite reverbs and related processing simultaneously helped to define the space on-screen and pull the sound into the theater to immerse the audience in the environment. “Tom [Ozanich] did a fabulous job with all of the reverbs and all of the room sound in this movie,” says Zupancic. “His reverbs on the dialogue in this movie are just spectacular and spot on.”

For instance, Arthur is waiting in the green room before going on the Murray Franklin Show. Voices from the corridor filter through the door, and when Murray (Robert De Niro) and his stage manager open it to ask Arthur what’s with the clown makeup, the filtering changes on the voices. “I think a lot about the geography of what is happening, and then the physics of what is happening, and I factor all of those things together to decide how something should sound if I were standing right there,” explains Ozanich.

Zupancic says that Ozanich’s reverbs are actually multistep processes. “Tom’s not just slapping on a reverb preset. He’s dialing in and using multiple delays and filters. That’s the key. Sounds of things change in reality — reverbs, pitches, delays, EQ — and that is what you’re hearing in Tom’s reverbs.”

“I don’t think of reverb generically,” elaborates Ozanich, “I think of the components of it, like early reflections, as a separate thought related to the reverb. They are interrelated for sure, but that separation may be a factor of making it real.”
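To make that idea concrete, here is a minimal Python sketch that treats early reflections as a few discrete delay taps, separate from a diffuse tail (here a single feedback comb filter). The tap times, gains and sample rate are illustrative placeholders, not values from the actual mix:

```python
import numpy as np

def early_reflections(x, sr, taps=((0.011, 0.6), (0.023, 0.45), (0.037, 0.3))):
    """Sum a few discrete delayed copies of the input: the 'separate
    thought' of early reflections, distinct from the reverb tail."""
    out = np.copy(x)
    for delay_s, gain in taps:
        d = int(delay_s * sr)
        delayed = np.zeros_like(x)
        delayed[d:] = x[:-d] * gain
        out += delayed
    return out

def comb_tail(x, sr, delay_s=0.050, feedback=0.5):
    """A single feedback comb filter standing in for the diffuse tail."""
    d = int(delay_s * sr)
    y = np.copy(x)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y

sr = 8000
impulse = np.zeros(sr)
impulse[0] = 1.0
# Blend the two components independently, as the quote suggests.
wet = early_reflections(impulse, sr) + 0.4 * comb_tail(impulse, sr)
```

Because the two components are computed separately, each can be EQ’d, delayed or balanced on its own, which is the kind of flexibility Ozanich is describing.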

One reason the reverbs were so clear is because Ozanich mixed Joker’s score — composed by Hildur Guðnadóttir — wider than usual. “The score is not a part of the actual world, and my approach was to separate the abstract from the real,” explains Ozanich. “In Arthur’s world, there’s just a slight difference between the actual world, where the physical action is taking place, and Arthur’s headspace where the score plays. So that’s intended to have an ever-so-slight detachment from the real world, so that we experience that emotionally and leave the real space feeling that much more real.”

Atmos allows for discrete spatial placement, so Ozanich was able to pull the score apart, pull it into the theater (so it’s not coming from just the front wall), and then EQ each stem to enhance its defining characteristic — what Ozanich calls “tickling the ear.”

“When you have more directionality to the placement of sound, it pulls things wider because rather than it being an ambiguous surround space, you’re now feeling the specificity of something being 33% or 58% back off the screen,” he says.
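One way to picture those percentages is as a constant-power crossfade between the screen channels and the surrounds. This is only an illustrative model of "percent back off the screen"; it is not how a Dolby Atmos renderer actually computes object gains:

```python
import math

def screen_to_surround_gains(depth_pct):
    """Constant-power crossfade driven by how far back off the screen
    a sound sits: 0 is fully on the front wall, 100 is fully in the
    surrounds. Returns (front_gain, surround_gain)."""
    theta = (depth_pct / 100.0) * (math.pi / 2)
    return math.cos(theta), math.sin(theta)

# A sound "33% back off the screen" still leans toward the front wall.
front, surround = screen_to_surround_gains(33)
```

The squared gains always sum to one, so perceived loudness stays constant as a sound moves off the screen and into the room.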

Pulling the score away from the front and defining where it lived in the theater space gave more sonic real estate for the sounds coming from the L-C-Rs, like the distinct slap of a voice bouncing off a concrete wall or Foley sounds like the delicate rustling scratches of Arthur’s fingertips passing over a child’s paintings.

One of the most challenging scenes to mix in terms of effects was the bus ride, in which Arthur makes funny faces at a little boy, trying to make him laugh, only to be admonished by the boy’s mother. Director Phillips and picture editor Jeff Groth had very specific ideas about how that ‘70s-era bus should sound, and Zupancic wanted those sounds to play in the proper place in the space to achieve the director’s vision. “Buses of that era had an overhead rack where people could put packages and bags; we spent a lot of time getting those specific rattles where they should be placed, and where the motor should be and how it would sound from Arthur’s seat. It wasn’t a hard scene to mix; it was just complex. It took a lot of time to get all of that right. Now, the scene just goes by and you don’t pay attention to the little details; it just works,” says Zupancic.

Ozanich notes the opening was a challenging scene as well. The film begins in the clowns’ locker room. There’s a radio broadcast playing, clowns playing cards, and Arthur is sitting in front of a mirror applying his makeup. “Again, it’s not a terribly complex scene on the surface, but it’s actually one of the trickiest in the movie because there wasn’t a super clear lead instrument. There wasn’t something clearly telling you what you should be paying attention to,” says Ozanich.

The scene went through numerous iterations. One version had source music playing the whole time. Another had bits of score instead. There are multiple competing elements, like the radio broadcast and the clowns playing cards and sharing anecdotes. All those voices compete for the audience’s ear. “If it wasn’t tilted just the right way, you were paying attention to the wrong thing or you weren’t sure what you should be paying attention to, which became confusing,” says Ozanich.

In the end, the choice was made to pull out all the music and then shift the balance from the radio to the clowns as the camera passes by them. It then goes back to the radio briefly as the camera pushes in closer and closer on Arthur. “At this point, we should be focusing on Arthur because we’re so close to him. The radio is less important, but because you hear this voice it grabs your attention,” says Ozanich.

The problem was there were no production sounds for Arthur there, nothing to grab the audience’s ear. “I said, ‘He needs to make sound. It has to be subtle, but we need him to make some sound so that we connect to him and feel like he is right there.’ So Kira found some sounds of Joaquin from somewhere else in the film, and Todd did some stuff on a mic. We put the Foley in there and we cobbled together all of these things,” says Ozanich. “Now, it unquestionably sounds like there was a microphone open in front of him and we recorded that. But in reality, we had to piece it all together.”

“It’s a funny little dichotomy of what we are trying to do. There are certain things we are trying to make stick on the screen, to make you buy that the sound is happening right there with the thing that you’re looking at, and then at the same time, we want to pull sounds off of the screen to envelop the audience and put them into the space and not be separated by that plane of the screen,” observes Ozanich.

The Atmos mix on Joker is a prime example of how effective that dichotomy can be. The sounds of the environments, like standing on the streets of Gotham or riding the subway car, are distinct, dynamic and ever-changing, and the sounds emanating from the characters are realistic and convincing. All of this serves to pull the audience into the story and get them emotionally invested in the tale of this sad, psychotic clown.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

The Emmy-nominated sound editing team’s process on HBO’s Vice Principals

By Jennifer Walden

HBO’s comedy series Vice Principals — starring Danny McBride and Walton Goggins as two rival vice principals of North Jackson High School — really went wild for the Season 2 finale. Since the school’s mascot is a tiger, they hired an actual tiger for graduation day, which wreaked havoc inside the school. (The tiger was part real and part VFX, but you’d never know thanks to the convincing visuals and sound.)

The tiger wasn’t the only source of mayhem. There was gunfire and hostages, a car crash and someone locked in a cage — all in the name of comedy.

George Haddad

Through all the bedlam, it was vital to have clean and clear dialogue. The show’s comedy comes from the jokes that are often ad-libbed and subtle.

Here, Warner Bros. Sound supervising sound editor George Haddad, MPSE, and dialogue/ADR editor Karyn Foster talk about what went into the Emmy-nominated sound editing on the Vice Principals Season 2 finale, “The Union Of The Wizard & The Warrior.”

Of all the episodes in Season 2, why did you choose “The Union of the Wizard & The Warrior” for award consideration?
George Haddad: Personally, this was the funniest episode — whether that’s good for sound or not. They just let loose on this one. For a comedy, it had so many great opportunities for sound effects, walla, loop group, etc. It was the perfect match for award consideration. Even the picture editor said beforehand that this could be the one. Of course, we don’t pay too much attention to its award potential; we focus on the sound first. But, sure enough, as we went through it, we all agreed that this could be it.

Karyn Foster: This episode was pretty dang large, with the tiger and the chaos that the tiger causes.

In terms of sound, what was your favorite moment in this episode? Why?
Haddad: It was during the middle of the show when the tiger got loose from the cage and created havoc. It’s always great for sound when an animal gets loose. And it was particularly fun because of the great actors involved. This had comedy written all over it. You know no one is going to die, just because of the nature of the show. (Actually, the tiger did eat the animal handler, but he kind of deserved it.)

Karyn Foster

I had a lot of fun with the tiger and we definitely cheated reality there. That was a good sound design sequence. We added a lot of kids screaming and adults screaming. The teachers were even more scared than the students, so it was funny. It was a perfect storm for sound effects and dialogue.

Foster: My favorite scene was when Lee [Goggins] is on the ground after the tiger mauls his hand and he’s trying to get Neal [McBride] to say, “I love you.” That scene was hysterical.

What was your approach to the tiger sounds?
Haddad: We didn’t have production sound for the tiger, as the handler on-set kept a close watch on the real animal. Then in the VFX, we had the tiger jumping, scratching with its paws, roaring…

I looked into realistic tiger sounds, and they’re not the type of animal you’d think would roar or snarl — sounds we are used to having for a lion. We took some creative license and blended sounds together to make the tiger a little more ferocious, but not too scary. Because, again, it’s a comedy so we needed to find the right balance.

What was the most challenging scene for sound?
Haddad: The entire cast was in this episode, during the graduation ceremony. So you had 500 students and a dozen of the lead cast members. That was pretty full, in terms of sound. We had to make it feel like everyone is panicking at the same time while focusing on the tiger. We had to keep the tension going, but it couldn’t be scary. We had to keep the tone of the comedy going. That’s where the balance was tricky and the mixers did a great job with all the material we gave them. I think they found the right tone for the episode.

Foster: For dialogue, the most challenging scene was when they are in the cafeteria with the tiger. That was a little tough because there are a lot of people talking and there were overlapping lines. Also, it was shot in a practical location, so there was room reflection on the production dialogue.

A comedy series is all about getting a laugh. How do you use sound to enhance the comedy in this series?
Haddad: We take the lead off of Danny McBride. Whatever his character is doing, we’re not going to try to go over the top just because he and his co-stars are brilliant at it. But, we want to add to the comedy. We don’t go cartoonish. We try to keep the sounds in reality but add a little bit of a twist on top of what the characters are already doing so brilliantly on the screen.

Quite frankly, they do most of the work for us and we just sweeten what is going on in the scene. We stay away from any of the classic Hanna-Barbera cartoon sound effects. It’s not that kind of comedy, but at the same time we will throw a little bit of slapstick in there — whether it’s a character falling or slipping or it’s a gun going off. For the gunshots, I’ll have the bullet ricochet and hit a tree just to add to the comedy that’s already there.

A comedy series is all about the dialogue and the jokes. What are some things you do to help the dialogue come through?
Haddad: The production dialogue was clean overall, and the producers don’t want to change any of the performances, even if a line is a bit noisy. The mixers did a great job in making sure that clarity was king for dialogue. Every single word and every single joke was heard perfectly. Comedy is all about timing.

We were fortunate because we get clean dialogue and we found the right balance of all the students screaming and the sounds of panicking when the tiger created havoc. We wanted to make sure that Danny and his co-stars were heard loud and clear because the comedy starts with them. Vice Principals is a great and natural sounding show for dialogue.

Foster: Vice Principals was a pleasure to work on because the dialogue was in good shape. The editing on this episode wasn’t difficult. The lines went together pretty evenly.

We basically work with what we’ve been given. It’s all been chosen for us and our job is to make it sound smooth. There’s very minimal ADR on the show.

In terms of clarification, we make sure that any lines that really need to be heard are completely separate, so when it gets to the mix stage the mixer can push that line through without having to push everything else.

As far as timing, we don’t make any changes. That’s a big fat no-no for us. The picture editor and showrunners have already decided what they want and where, and we don’t mess with that.

There were a large number of actors present for the graduation ceremony. Was the production sound mixer able to record those people in that environment? Or, was that sound covered in loop?
Haddad: There are so many people in the scene, and that can be challenging to do solely in loop group. We did multiple passes with the actors we had in loop. We also had the excellent sound library here at Warner Bros. Sound. I also captured recordings at my kids’ high school. So we had a lot of resource material to pull from, and we were able to build out that scene nicely. What we see on-camera, with the number of students and adults, we were able to represent through sound.

As for recording at my kids’ high school, I got permission from the principal but, of course, my kids were embarrassed to have their dad at school with his sound equipment. So I tried to stay covert. The microphones were placed up high, in inconspicuous places. I didn’t ask any students to do anything. We were like chameleons — we came and set up our equipment and hit record. I had Røde microphones because they were easy to mount on the wall and easy to hide. One was a Røde VideoMic and the other was their NTG1 microphone. I used a Roland R-26 recorder because it’s portable and I love the quality. It’s great for exterior sounds too because you don’t get a lot of hiss.

We spent a couple hours recording and we were lucky enough to get material to use in the show. I just wanted to catch the natural sound of the school. There are 2,700 students, so it’s an unusually high student population and we were able to capture that. We got lucky when kids walked by laughing or screaming or running to the next class. That was really useful material.

Foster: There was production crowd recorded for most of the episodes that had pep rallies and events. They took the time to record some specific takes. When you’re shooting group on the stage, you’re limited to the number of people you have. You have to do multiple takes to try to mimic that many people.

Can you talk about the tools you couldn’t have done without?
Haddad: This show has a natural sound, so we didn’t use pitch shifting or reverb or other processing like we’d use on a show like Gotham, where we do character vocal treatments.

Foster: I would have to say iZotope RX 6. That tool for a dialogue editor is one that you can’t live without. There were some challenging scenes on Vice Principals, and the production sound mixer Christof Gebert did a really good job of getting the mics in there. The iso-mics were really clean, and that’s unusual these days. The dialogue on the show was pleasant to work on because of that.

What makes this show challenging in terms of dialogue is that it’s a comedy, so there’s a lot of ad-libbing. With ad-libbing, there are no other takes to choose from. So if there’s a big clunk on a line, you have to make that work. With RX 6, you can minimize the clunk on a line or get rid of it. If those lines are ad-libs, they don’t want to have to loop them. The ad-libbing makes the show great, but it also makes the dialogue editing a bit more complicated.

Any final thoughts you’d like to share on Vice Principals?
Haddad: We had a big crew because the show was so busy. I was lucky to get some of the best here at Warner Bros. Sound. They helped to make the show sound great, and we’re all very proud of it. We appreciate our peers selecting Vice Principals for Emmy nomination. That to us was a great feeling, to have all of our hard work pay off with an Emmy nomination.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

AMC’s ‘Preacher’: Creating a sound path for the series

By Jennifer Walden

When I heard that Seth Rogen and Evan Goldberg developed a TV series for AMC based on the comic book series penned by Garth Ennis, I was immediately hooked. Would Preacher be like Pineapple Express? Or more like This Is The End?

Turns out it’s more like This is the End meets Breaking Bad, thanks to Preacher co-writer/executive producer Sam Catlin, who held those same titles on the long-running Breaking Bad series. But Catlin isn’t the only Breaking Bad alum involved in Preacher; composer Dave Porter and picture editor Kelley Dixon (editor on Preacher’s pilot) also had a hand in the series.

Michael Babcock

Handling the pilot’s sound was supervising sound editor/sound designer Michael Babcock at Warner Bros. in Burbank — a surprising name to find attached to a TV series, because these days he regularly works on feature films. But when you hear that those films include the Rogen/Goldberg offerings The Interview, This is the End and Neighbors, you understand the jump to TV for this series.

“Seth and Evan are really fun to work with because they come up with original storylines,” explains Babcock. “One reason I wanted to dip my toe back into the TV world was because they were developing this series. It was an excuse for me to make cool sounds for them.”

Although Babcock’s schedule only allowed him to supervise the pilot’s sound, he is still able to contribute sound design on episodes. Richard Yawn took over as supervising sound editor for the rest of the season.

In The Beginning
Preacher’s pilot opens on the words “Outer Space” in bold block type, superimposed over a retro representation of our solar system through which a ball of light flies. This ball eventually crashes on Earth, in the heart of Africa, into the body of a preacher. Sound-wise, the outer-space scene is carried by sound design, with no music or voiceover. Air raid sirens, awash in reverb, act as an underscore while sand-sprinkled sci-fi whooshes accompany the supernatural entity’s flight through the ether.

Rogen and Goldberg were very involved in the sound, says Babcock. “The sound all came out of their heads. They had all of these ideas and themes of what they envisioned these things would sound like.”

Babcock based his sound palette on the themes that Rogen and Goldberg described, like baby sounds, water and heartbeats. “I kept trying to build on their themes. It helps me as a sound designer when a director asks for a specific emotion or sound as their direction,” he says. “For example, if the scene was in a dark room and we needed a dark tone, then I’d stick with slowing down heartbeats or underwater ambience and rumbling. I just tried to stick with those themes.”

The concept for the supernatural entity — which came from Rogen and Goldberg — was that it was like a baby being born. So Babcock designed its sound using baby-related elements, like an ultrasound heartbeat, layered with reversed or manipulated baby vocalizations. When the entity possesses a host, like that first preacher in Africa, it exists inside that person. Using that idea, Babcock worked with womb-related sounds, like underwater ambiences that he slowed and pitched.
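Slowing and pitching an ambience in one move is classic varispeed: resample the audio so it plays back slower, and the pitch drops by the same ratio. A rough Python sketch using linear-interpolation resampling (the sine tone simply stands in for an underwater ambience recording):

```python
import numpy as np

def slow_and_pitch(x, factor):
    """Varispeed-style slowdown: stretch the signal to factor times its
    length, which also lowers the pitch by that ratio (factor=2.0 is
    half speed and one octave down)."""
    n_out = int(len(x) * factor)
    src_idx = np.linspace(0, len(x) - 1, n_out)
    return np.interp(src_idx, np.arange(len(x)), x)

sr = 8000
t = np.arange(sr) / sr
ambience = np.sin(2 * np.pi * 440 * t)   # stand-in for a recorded ambience
womb = slow_and_pitch(ambience, 2.0)     # half speed, one octave down
```

A dedicated pitch-shift plug-in can change pitch without changing duration; the varispeed approach shown here changes both together, which suits the murky, dragged-out quality being described.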

Throughout the pilot, the entity is searching for the perfect host. It tries out different religious figures, including Tom Cruise, and then explodes them if they’re not a match. It eventually ends up inside Preacher’s protagonist, Jesse (Dominic Cooper), a small-town preacher who possessed a dark side long before the entity possesses him.

“The entity is not just this evil demon thing; there’s more to it. It’s basically growing up as the season goes along, so that’s why Seth and Evan decided to use baby sounds,” explains Babcock. (Those that watched the Directors Commentary version of the pilot will remember that Rogen and Goldberg noted the entity’s sound as a clue for the season.)

Babcock describes the scene in which Jesse is in the church at night, right before the entity finds him. It blows open the doors and knocks the church pews aside as it moves down the aisle. As the entity slams into Jesse, he’s thrown across the room and into the wall. In keeping with the water theme, Babcock says, “I used depth charge sounds for the pews being forced aside. That scene was actually a lot of fun because that’s where I got a chance to really hone in on what had become the entity heartbeat sound.”

God-Like Sound Design
Following his work on the pilot, Babcock’s main focus for sound design on the other episodes relates to the preacher Jesse, and, in particular, his voice. “Jesse goes into the voice of God mode where he’s channeling this entity,” explains Babcock. To create the vocal effect, Babcock starts with the production dialogue in his Avid Pro Tools 12 session. He runs it through the Waves Renaissance Bass (RBass) plug-in to create a richer low end sound by adding a bit of chorusing. Then, if the lines need more of a rumble, Babcock runs them through Avid’s Pro Subharmonic plug-in. Next, he adds in shaking and wave rumbling sound effects to hit each syllable, the amount depending on how intense or aggressive Jesse needs to be. “It’s a process we are calling ‘the kitchen sink,’ so they’ll say, ‘on this one it needs the kitchen sink.’”
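As a rough illustration of that kind of multistage chain (this is not the RBass or Pro Subharmonic algorithm; the filter coefficient, chorus delay and gains are all invented for the sketch), one can combine a chorused low band with a crude octave-down subharmonic layer:

```python
import numpy as np

def lowpass(x, alpha=0.05):
    """One-pole lowpass to isolate a low band (illustrative cutoff)."""
    y = np.zeros_like(x)
    acc = 0.0
    for n, s in enumerate(x):
        acc += alpha * (s - acc)
        y[n] = acc
    return y

def octave_down(x):
    """Crude subharmonic layer: stretch to double length (halving the
    pitch), then truncate back to the original duration."""
    idx = np.linspace(0, len(x) - 1, 2 * len(x))
    return np.interp(idx, np.arange(len(x)), x)[:len(x)]

def voice_of_god(dialogue, chorus_delay=60, chorus_gain=0.5, sub_gain=0.6):
    """Thicken the low end with a short chorus-style delay, then add a
    subharmonic layer under the dry dialogue."""
    low = lowpass(dialogue)
    chorused = np.copy(low)
    chorused[chorus_delay:] += chorus_gain * low[:-chorus_delay]
    return dialogue + chorused + sub_gain * octave_down(low)

sr = 8000
t = np.arange(sr) / sr
line = np.sin(2 * np.pi * 200 * t)  # stand-in for a line of dialogue
processed = voice_of_god(line)
```

In the real chain, the layered shaking and wave-rumble effects would be cut against each syllable by hand; this sketch only covers the tonal thickening.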

The pilot offered Babcock numerous sound design opportunities. There is blood and gore for the African preacher blowing up, and for the cow that gets rapidly devoured by the vampire, Cassidy (Joseph Gilgun). There’s even a subtle cow death that occurs off-screen in the slaughterhouse scene when Jesse visits Betsy Schenck (Jamie Anne Allman). “In that scene, there is a door opening, and in the time it takes for the door to open and close there is a cow moo and then a gun shot. I don’t know if people have picked up on it because it only happens in one place, but that is the humor of Seth and Evan,” comments Babcock.

There was hand-to-hand combat to design, like for Cassidy’s confrontation on the plane, which he also manages to set on fire. Then there is the Tarantino-esque fight scene where Tulip (Ruth Negga) neutralizes her assailants as her car plows through a cornfield. “We had a bunch of recordings that we did for Interstellar where they wiped out a bunch of cornfields in the film. Sound designer Richard King literally drove a truck through a farmer’s cornfield, after they had harvested the crop, and recorded all of that corn being mowed down. I borrowed those recordings to use for the car fight, to go all around in the surrounds,” says Babcock.

Much of Babcock’s sound design sets the tone for the rest of the season. An example of a recurring sound is the church ambience. Babcock used wooden boat creaks and placed them around the room in the 5.1 environment. “They are slowed down so it has this creaking, breathing feel to it. That’s the sound they’re using at night when the church is empty,” he says.

Final Mix
Preacher’s final 5.1 mix was done at Sony Pictures Studios, with Deb Adair handling music/dialogue and Ian Herzon taking on sound effects/Foley/backgrounds. Since Rogen and Goldberg come from the feature film world, it’s no surprise that they wanted the Preacher pilot to sound as dynamic and impactful as a theatrical release. That can be difficult to achieve when dealing with television sound specs.

“This is the kind of show where the story needs to be supported by some pretty heavy dynamics to be quiet and loud. I think of all the things that you have to deal with on a creative level… dealing with broadcast spec is just as challenging because Seth and Evan want the show to look theatrical, and they want it to sound theatrical, too,” concludes Babcock.

If you haven’t already, you can check out Preacher on AMC, Sundays at 9/8c.

Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney.

Sound Design for ‘The Hunger Games: Mockingjay — Part 2’

Warner Bros. Sound re-teams with director Francis Lawrence for the final chapter

By Jennifer Walden

It’s the final installment of The Hunger Games, and all the cards are on the table. Katniss Everdeen encourages all the districts to band together and turn against the Capitol, but President Snow is ready for their attack.

In true Hunger Games-style, he decides to broadcast the invasion and has rigged the city to be a maze full of traps, called pods, which unleash deadly terrors on the rebel attackers. The pods trigger things like flamethrowers, a giant wave of toxic oil, a horde of subhuman creatures called “mutts,” heat-lasers and massive machine guns, all of which are brought to life on-screen thanks to the work of the visual effects team led by VFX supervisor Charles Gibson.

Warner Bros. Sound supervising sound editor/re-recording mixer Jeremy Peirson, who began working with director Francis Lawrence on The Hunger Games franchise during Catching Fire, knew what to expect in terms of VFX. He and Lawrence developed a workflow where Peirson was involved very early on, with a studio space set up in the cutting room.

Picture and Sound Working Together
Without much in the way of VFX early in the post process, Lawrence relied on Peirson’s sound design to help sell the idea of what was happening on-screen. It’s like a rough pencil sketch of how the scene might sound. As the visuals started coming in, Peirson redesigned, refined and recorded more elements to better fit the scene. “As we move through the process, sometimes the ideas change,” he explains. “Unfortunately, sound is usually the last step before we finish the film. The visual effects were coming in pretty late in the game and sometimes we got surprised, and they’re completely different. All the work we did in trying to prepare ourselves for the final version changed. You just have to roll with it basically.”

Despite having to rework a scene four or five times, there were advantages to this workflow. One was having constant input from director Lawrence. He was able to hear the sound take shape from a very rough point, and guide Peirson’s design. “Francis popped in a couple times a day to listen to what I was doing. He’d say, ‘Yes this is the right direction’ or ‘No, I was thinking more purple or more bold.’ It allowed for this unique situation where we could fine-tune how the movie is going to sound starting very early in the process,” he says.

Jeremy Peirson

Another advantage to being embedded with the picture department is that sound is able to inform how the picture is cut. “Sometimes they will give me a scene and ask me to quickly create the sound for it so they can re-cut the scene to make it better. That’s always a fun collaboration, when the picture department and sound department can work so closely together,” Peirson states.

The Gun Pod
One of Peirson’s most challenging “pods” to design sound for was the gun pod, where two .50 caliber machine guns were blasting away a concrete archway, causing it to collapse. Peirson needed to build detail and clarity into a scene that had bullets and rubble spraying everywhere. To do this, he spent hours recording specific, individual impacts. “I bought a bunch of brick and tile of various different kinds, and I took a 12-pound shot-put, raised it up about 10 feet and dropped it onto these things to get individual impacts, as well as clatter and debris.”

In the edit, he finessed the rhythm of the impacts, spacing them out so there was a distinguishable variety of sounds and it wasn’t just a wash. “It’s not a single note of sound,” he says. “It was a wide palette of impacts. Each individual impact was hand placed throughout the whole sequence. I tried to differentiate the sound of the wall from the pavement and the grass, the stairs and the metal pole which happened to be in that particular area.”

For Mockingjay — Part 1, Peirson, sound recordist John Fasal and sound designer Bryan O. Watkins did a bullet-by and bullet-ricochet recording session. All of that material came into play for Mockingjay — Part 2, along with new material, such as the gun sounds captured by Peirson, Fasal, Watkins and sound designer Mitch Osias.

For one of their gun recording sessions, Peirson notes they headed to an industrial park where they were able to capture the gun sounds in a mock-urban environment that would match the acoustics of the city streets on-screen. “We wanted to know how the guns would echo off the buildings and down the alleys — how that would sound from various distances.”

They took it one step further by recording gun sounds inside a warehouse that simulated the underground subway environment in the film. “We were able to record them in different ways, putting the guns in certain spots in the warehouse so we could get a tighter, closer feel that sounded very different from an outside perspective,” he says.

With four recordists, they were able to capture 26 individual sets of recordings for each gunshot — some mono, some stereo and some quad recordings. “We used a large range of mics, everything from Neumann to Schoeps to Sennheiser to AKG. You name it and we probably used it.”


When building a gun sound in the edit, Peirson started by selecting a close-up gunshot, then he added an acoustic flavor to that gun. “We didn’t always pick the same type of gun for the acoustic response,” he explains. “It was a lot of hand-cutting to make sure everything was in sync since certain guns fire at different rates; some fire faster and some are slower, but they had to be in the same range as the initial close-up sound.”
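The layering Peirson describes, a close-up gunshot repeated at the weapon's fire rate with an acoustic-response layer trailing each shot, can be illustrated with a minimal sketch. This is not his actual workflow or tools; the fire rate, echo delay and gain values here are invented for illustration.

```python
import numpy as np

SR = 48_000  # sample rate in Hz

def build_burst(close_shot, rate_hz, n_shots, slap,
                slap_delay_s=0.06, slap_gain=0.5, sr=SR):
    """Lay out a burst of gunfire: the close-up recording repeats at the
    weapon's fire rate, and an 'acoustic response' layer (a slap/echo
    taken from a different recording) trails each shot by a fixed delay."""
    interval = int(sr / rate_hz)    # samples between successive shots
    delay = int(sr * slap_delay_s)  # offset of the acoustic layer
    total = interval * n_shots + delay + max(len(close_shot), len(slap))
    out = np.zeros(total)
    for i in range(n_shots):
        start = i * interval
        out[start:start + len(close_shot)] += close_shot
        echo = start + delay
        out[echo:echo + len(slap)] += slap_gain * slap
    return out

# Stand-in material: short decaying-noise bursts instead of real recordings.
rng = np.random.default_rng(0)
close = rng.standard_normal(2000) * np.exp(-np.arange(2000) / 300.0)
slap = rng.standard_normal(6000) * np.exp(-np.arange(6000) / 1500.0)
burst = build_burst(close, rate_hz=12, n_shots=6, slap=slap)
```

Changing `rate_hz` per gun is this sketch's stand-in for the hand-cutting Peirson describes: a faster-firing weapon shortens the interval between close-up hits while the echo layer keeps its fixed offset behind each one.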

Another challenge was designing the mutts — the subhuman lizard-like creatures that inhabit the underground area. Peirson says, “Anytime you have creatures — and we had a lot of creatures — you can design the perfect sound for each one, but how do you sell the difference between all of these creatures when you’re surrounded by 30 or 40 of them?”

Even though there may have been a large group of mutts, within that the characters were only fighting a few of them at any given time. They needed to sound the same, yet different. Peirson’s design also had to factor in how the sound would work against the music, and it had to evolve with the VFX as well.

As the re-recording mixer on the effects, Peirson was able to mix a sound as he was designing it. If something wasn’t working, he could get rid of it right away. “I didn’t need to carry it around and then pick and choose later. By the time we got to the stage, we had the opportunity to refine the whole sonic palette so we only had what we wanted.”

He found that moving to the larger space of the dub stage and hearing how the sound design plays with the music generated new ideas for sound. “We added a bit of a different flavor to help the sound cut through, or we added a little bit of detail that was getting lost in the music.”

Since composer James Newton Howard scored all four films in The Hunger Games series, Peirson had a wealth of demos and themes to reference when designing the sound. They were a good indication of what frequency range he could work within and still have the effects cut through the music. “We had an idea of how it would sound, but when you get that fully recorded score, it’s a totally different ballgame in terms of scope. It kicks that demo up a huge notch.”


The Mix
Peirson and re-recording mixer Skip Lievsay — who worked on dialogue and music — crafted the final mix first in Dolby Atmos on Warner Bros. Stage 6 in Burbank, using three Avid ICONs. “This was a completely in-the-box virtual mix,” says Peirson. “We had sound effects on one Pro Tools system, dialogue on another and music on a third system. My sound effects session, which had close to 730 tracks, was a completely virtual mix, meaning there were no physically recorded pre-dubs.”

Using the final Atmos mix as their guide, Peirson and Lievsay then mixed the film in Barco Auro-3D, DTS:X, IMAX and IMAX 12.0, plus 7.1, 5.1 and two-track. “That’s every single format that I know of right now for film,” he concludes. “It was an interesting exercise in seeing the difference between all those formats.”

With Oscar season getting into full swing, we wouldn’t be surprised if the sound team on Mockingjay — Part 2 gets a nod.

‘The 33’: surrounding the audience in sound via Atmos

Warner Bros. and Formosa combine to bring people underground, sonically

By Jennifer Walden

Few films seem so perfectly suited for a playback system that incorporates overhead speakers as Warner Bros.’ The 33. Director Patricia Riggen’s subterranean story, based on the true account of 33 Chilean miners trapped underground for 69 days, is a natural fit for the Dolby Atmos system.

While some mixers have said that putting sound in the ceiling can actually make the mix feel boxed in because the height of the environment is being defined, that actually worked perfectly for The 33. The miners are trapped under a couple thousand feet of solid rock, and with Atmos’ overhead speakers the audience can experience that feeling of isolation too.

Formosa Group’s supervising sound editor Mark Stoeckinger and re-recording mixer Martyn Zub, along with WB Sound re-recording mixer Mike Prestwood Smith, were able to define the miners’ space underground by applying convolution reverbs, via Audio Ease’s Altiverb, to the dialogue, effects and Foley.

The convolution reverbs were created using impulse responses that Stoeckinger, sound designer Alan Rankin and librarian/field recordist Charlie Campagna captured during their trip to California Caverns, located in the Sierra Nevada foothills in California’s historic Gold Country. The impulse responses — recordings that represent how a space reacts to sound — were recorded inside the cavern and mimicked the space of the mine on-screen. “The interesting thing about a mine is that it sounds like there is a lot of reverb on the words, or whatever the sound is that excites the reverb, but it doesn’t have much of a tail at all,” notes Stoeckinger. When applied to the dialogue and sound effects, Zub says, “Those impulse responses gave us a truer sense of the space.”
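Applying an impulse response means convolution: the dry signal is convolved with the recorded response of the space, so every sample of the source excites a scaled, delayed copy of the room's decay. A toy sketch of the idea (this is not Altiverb; the decay constants are invented to mimic the "lots of reverb, almost no tail" character Stoeckinger describes):

```python
import numpy as np

SR = 48_000  # sample rate in Hz

def toy_mine_ir(sr=SR, length_s=0.4):
    """Fabricated impulse response: a strong direct spike plus dense early
    reflections that decay very fast, approximating a space that sounds
    reverberant on the attack but has almost no lingering tail."""
    n = int(sr * length_s)
    t = np.arange(n) / sr
    rng = np.random.default_rng(1)
    ir = rng.standard_normal(n) * np.exp(-t / 0.05)  # ~50 ms decay constant
    ir[0] = 1.0  # the direct sound
    return ir / np.max(np.abs(ir))

def convolve_reverb(dry, ir):
    """Convolution reverb: output length is len(dry) + len(ir) - 1."""
    wet = np.convolve(dry, ir)
    return wet / np.max(np.abs(wet))  # normalize to avoid clipping

dry = np.zeros(SR // 10)  # 100 ms of silence...
dry[0] = 1.0              # ...with a single click to excite the "space"
wet = convolve_reverb(dry, toy_mine_ir())
```

A real session would load an impulse response recorded in the cavern rather than synthesizing one, and a plug-in like Altiverb performs the same convolution via FFT for speed, but the operation is the one shown here.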

Smith adds, “We were able to make anyone or anything sound like they were in those spaces.”

In addition to impulse responses, Stoeckinger and his team “world-ized” numerous effects inside the cavern, like explosions, footsteps, drills, jackhammers and rock falls, by playing them back via a large PA speaker and then recording how it sounded in the space using Sound Devices 788T and 702 digital recorders, as well as a Zoom H4.

L-R: Mike Prestwood Smith, Martyn Zub and Mark Stoeckinger

“We had a MacBook Pro with a Pro Tools session loaded with the sounds we wanted to world-ize. We set up all the equipment, hit play and then we’d leave so we didn’t contaminate the recording,” explains Stoeckinger. “After one or two turns in the cave, you couldn’t hear a thing. Even though we were blasting sounds 100 feet away, it was really quiet. It was the weirdest thing. We would think it stopped playing, so we would sneak around the corner and all of a sudden be blasted with sound.”

Before the mine collapses, sounds of the working mine come drifting through the tunnels. Mixing in Atmos on Audio Head’s Stage B, Zub, who handled the sound effects/backgrounds/Foley in the mix, was able to place machines and drills throughout the space, above and beside the audience. “You can hear it through the rock. That was a huge advantage to mixing in Dolby Atmos. There is activity all around you. Then, when the mine collapses, it just goes dead silent. In the Atmos version, the difference is obvious. It really plays beautifully, from being active and then really feeling the quiet, confined space that these guys are trapped in,” says Zub.

One of Smith’s favorite scenes to mix comes after the mine collapse, where the characters, with only their headlamps in the pitch black, discover the scale of the situation. “It’s nearly all ADR and we were able to position it in the space so convincingly that it actually feels as though you are there with them. In many ways the Atmos system is most effective when you have a quiet space to mix within,” says Smith.

He used convolution reverbs made with Stoeckinger’s custom recorded impulse responses, as well processing via FabFilter and Waves plug-ins. “The impulse responses, in combination with the full frequency surround speakers of the Atmos system, allowed us to really play with perspective and distance in a way that we had never been able to do before.”

Most of the sound after the collapse comes from the reverb return on the dialogue and effects, in addition to rumbly cracking sounds indicative of the mine’s continuing instability. “There were different cracks that precede the collapse of the mine, and after the collapse, those sounds continue,” says Zub. “They give the sense that this mine isn’t stable. Everybody can hear it moving and so they’re on edge the whole time.”

looking up

The Drill
As the miners are stuck below, several attempts are made to locate and free them. One such effort comes particularly close. For the Atmos mix of this “near miss” scene, the sound of the drill begins overhead and travels down the left wall before disappearing below the floor. “It feels like their salvation. You really feel like you’re in the mine as that is happening,” says Stoeckinger.

Zub adds, “The drill sound is right there in your face. It goes from a sound that’s so loud to very quiet. Director Patricia Riggen gave us space and room to pull out all the music and just let the effects be prominent through those areas, which I think was truly effective.”

When all hope seems lost, and even the group’s leader Mario Sepúlveda (Antonio Banderas) has given up hope, drops of water start to fall on his upturned face. Using the Atmos overheads, Zub slowly brought in the sound of the drill. “It’s slowly approaching. It starts off so quiet and then it gets really loud. You can really feel the sound of the drill all around you. It was a nice thing to be able to do with the sound mix, to have those dynamics,” he says. “I’ve been in a mine and it really is pitch black when the lights go out. It’s a scary environment and to imagine what those guys went through down in the mine, being down there for such a long period of time, is just astronomical really. In the soundtrack, we really tried to capture that feeling of loneliness.”

Creating sounds, mix, more for ‘The Hunger Games: Mockingjay, Part 1’

By Jennifer Walden

It may be called The Hunger Games, but in Mockingjay, Part 1, the games are over. Life for the people of Panem, outside The Capitol, is about rebellion, war and survival. Supervising sound editor/sound designer/re-recording mixer Jeremy Peirson, at Warner Bros. Sound in Burbank, has worked with director Francis Lawrence on both Catching Fire and Mockingjay, Part 1.

Without the arena and its sinister array of “horrors” (for those who don’t remember Catching Fire, those horrors, such as blood rain, acid fog, carnivorous monkeys and lightning storms, were released every hour in the arena), Mockingjay, Part 1 is not nearly as diverse, according to Peirson. “Catching Fire was such a huge story between The Capitol and all the various Districts.”

Creating Under the Dome’s sound experience

By Jennifer Walden

Imagine living your life under an invisible dome that offers no escape, seeing the same people in the same town day after day… oh, and the “prison” you call home has supernatural powers that might or might not be evil. That’s what the residents of Chester’s Mill, the fictional town in Under the Dome, have to contend with every day on CBS’s sophomore offering based on a Stephen King novel of the same name. Then imagine what that would sound like. Would there be echoes? Would the sounds be magnified? Dulled?

Walter Newman, supervising sound editor at Burbank’s Warner Bros. Sound, is currently working on Season 2 of Under the Dome, which premieres June 30 on CBS with an episode written by King himself.


Creating ever-changing environments for CW’s ‘The 100’

By Jennifer Walden

The CW’s post-apocalyptic drama The 100 follows 100 juvenile convicts sent back to Earth 97 years after a nuclear war. They are sent back from a space station called The Ark, which is dying: resources are diminishing and life support systems are starting to fail.

Viewers get to see both very distinctive environments and the struggles the inhabitants face. The Ark represents a sense of decay, and the Earth represents the hope of rebirth for humanity.

It was up to sound effects editor Peter Lago and supervising sound editor Charlie Crutcher, MPSE, to come up with what decay and rebirth sound like. They work out of Warner Bros. Studios in Burbank.
