
Skywalker’s Michael Semanick: Mixing SFX for Star Wars: The Last Jedi

By Jennifer Walden

Oscar-winning re-recording mixer Michael Semanick from Skywalker Sound mixed the sound effects, Foley and backgrounds on Star Wars: The Last Jedi, which has earned an Oscar nomination for Sound Mixing.

Technically, this is not Semanick’s first experience with the Star Wars franchise — he’s credited as an additional mixer on Rogue One — but on The Last Jedi he was a key figure in fine-tuning the film’s soundtrack. He worked alongside re-recording mixers Ren Klyce and David Parker, and with director Rian Johnson, to craft a soundtrack that was bold and dynamic.

Recently, Semanick shared his story of what went into mixing the sound effects on The Last Jedi. He mixed at Skywalker in Nicasio, California, on the Kurosawa Stage. (And come back next week for our interview with Ren Klyce about mixing the music and sounds.)

You have all of these amazing elements — Skywalker’s effects, John Williams’s score and the dialogue. How do you bring clarity to what could potentially be a chaotic soundtrack?
Yes, there are a lot of elements that come in, and you have to balance these things. It’s easy on a film like this to get bombastic and assault the audience, but that’s one of the things that Rian didn’t want to do. He wanted to create dynamics in the track and to get really quiet so that when it does get loud it’s not overly loud.

So when creating that I have to look at all of the elements coming in and see what we’re trying to do in each specific scene. I ask myself, “What’s this scene about? What’s this storyline? What’s the music doing here? Is that the thread that takes us to the next scene or to the next place? What are the sound effects? Do we need to hear these background sounds, or do we need just the hard effects?”

Essentially, it’s me trying to figure out how many frequencies are available and how much dialogue has to come through so the audience doesn’t lose the thread of the story. It’s about deciding when it’s right to feature the sound effects or take the score down to feature a big explosion and then bring the score back up.

It’s always a balancing act, and it’s easy to get overwhelmed and throw it all in there. I might need a line of dialogue to come through, so the backgrounds go. I don’t want to distract the audience. There is so much happening visually in the film that you can’t put sound on everything. Otherwise, the audience wouldn’t know what to focus on. At least that’s my approach to it.

How did you work with the director?
As we mixed the film with Rian, we found what types of sounds defined the film and what types of moments defined the film in terms of sound. For example, by the time you reach the scene when Vice Admiral Holdo (Laura Dern) jumps to hyperspace into the First Order’s fleet, everything goes really quiet. The sound there doesn’t go completely out — it feels like it goes out, but there’s sound. As soon as the music peaks, I bring in a low space tone. Well, if there was a tone in space, I imagine that is what it would sound like. So there is sound constantly through that scene, but the quietness goes on for a long time.

One of the great things about that scene was that it was always designed that way. While I noted how great that scene was, I didn’t really get it until I saw it with an audience. They became the soundtrack, reacting with gasps. I was at a screening in Seattle, and when we hit that scene you could hear that the people were just stunned, and one guy in the audience went, “Yeah!”

There are other areas in the film where we go extremely quiet or take the sound out completely. For example, when Rey (Daisy Ridley) and Kylo Ren (Adam Driver) first force-connect, the sound goes out completely… you only hear a little bit of their breathing. There’s one time when the force connection catches them off guard — when Kylo had just gotten done working out and Rey was walking somewhere — we took the sound completely out while she was still moving.

Rian loved it because when we were working on that scene we were trying to get something different. We used to have sound there, all the way through the scene. Then Rian said, “What happens if you just start taking some of the sounds out?” So, I started pulling sounds out and sure enough, when I got the sound all the way out — no music, no sounds, no backgrounds, no nothing — Rian was like, “That’s it! That just draws you in.” And it does. It pulls you into their moment. They’re pulled together even though they don’t want to be. Then we slowly brought it back in with their breathing, a little echo and a little footstep here or there. Having those types of dynamics worked into the film helped the scene at the end.

Rian shot and cut the picture so we could have these moments of quiet. It was already set up, visually and story-wise, to allow that to happen. When Rey goes into the mirror cave, it’s so quiet. You hear all the footsteps and the reverbs and reflections in there. The film lent itself to that.

What was the trickiest scene to mix in terms of the effects?
The moment Kylo Ren and Rey touch hands via the force connection. That was a real challenge. They’re together in the force connection, but they weren’t together physically. We were cutting back and forth from her place to Kylo Ren’s place. We were hearing her campfire and her rain. It was a very delicate balance between that and the music. We could have had the rain really loud and the music blasting, but Rian wanted the rain and fire to peel away as their hands were getting closer. It was so quiet and when they did touch there was just a bit of a low-end thump. Having a big sound there just didn’t have the intimacy that the scene demanded. It can be so hard to get the balance right to where the audience is feeling the same thing as the characters. The audience is going, “No, oh no.” You know what’s going to come, but we wanted to add that extra tension to it sonically. For me, that was one of the hardest scenes to get.

What about the action scenes?
They are tough because they take time to mix. You have to decide what you want to play. For example, when the ships are exploding as they’re trying to get away before Holdo rams her ship into the First Order’s, you have all of that stuff falling from the ceiling. We had to pick our moments. There’s all of this fire in the background and TIE fighters flying around, and you can’t hear them all or it will be a jumbled mess. I can mix those scenes pretty well because I just follow the story point. We need to hear this to go with that. We have to have a sound of falling down, so let’s put that in.

Is there a scene you had fun with?
The fight in Snoke’s (Andy Serkis) room, between Rey and Kylo Ren. That was really fun because it was like wham-bam, and you have the lightsaber flying around. In those moments, like when Rey throws the lightsaber, we drop the sound out for a split second so when Kylo turns it on it’s even more powerful.

That scene was the most fun, but the trickiest one was that force-touch scene. We went over it a hundred different ways, to just get it to feel like we were with them. For me, if the sound calls too much attention to itself, it’s pulling you out of the story, and that’s bad mixing. I wanted the audience to lean in and feel those hands about to connect. When you take the sound out and the music out, then it’s just two hands coming together slowly. It was about finding that balance to make the audience feel like they’re in that moment, in that little hut, and they’re about to touch and see into each other’s souls, so to speak. That was a challenge, but it was fun because when you get it, and you see the audience react, everyone feels good about that scene. I feel like I did something right.

What was one audio tool that you couldn’t live without on this mix?
For me, it was the AMS Neve DFC Gemini console. All the sounds came into that. The console was like an instrument that I played. I could bring any sound in from any direction, and I could EQ it and manipulate it. I could put reverb on it. I could give the director what he wanted. My editors were cutting the sound, but I had to have that console to EQ and balance the sounds. Sometimes it was about EQing frequencies out to make a sound fit better with other sounds. You have to find room for the sounds.

I could move around on it very quickly. I had Rian sitting behind me saying, “What if you roll back and adjust this or try that.” I could ease those faders up and down and hit it just right. I know how to use it so well that I could hear stuff ahead of what I was doing.

The Neve DFC was invaluable. I could take all the different sound formats and sample rates and it all came through the console, and in one place. It could blend all those sources together; it’s a mixing bowl. It brought all the sounds together so they could all talk to each other. Then I manipulated them and sent them out and that was the soundtrack — all driven by the director, of course.
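
Semanick’s point above about EQing frequencies out so sounds can coexist is a standard frequency-carving move. As a rough illustration only (the 2 kHz center frequency, the Q and the function name below are invented for this sketch, not settings from the film), notching a band out of an effects element leaves room for dialogue that lives in the same range:

    import numpy as np
    from scipy.signal import iirnotch, lfilter

    def carve_room(effects, sr, center_hz=2000.0, q=2.0):
        """Notch a band out of an effects track so another element (say,
        dialogue) occupying the same frequencies can read more clearly.
        The center frequency and Q are illustrative values only."""
        b, a = iirnotch(center_hz, q, fs=sr)
        return lfilter(b, a, effects)

    # Hypothetical usage: duck the effects around the dialogue's presence
    # range, then sum the two elements.
    # mix = dialogue + carve_room(effects, sr=48000)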

Can you talk about working with the sound editor?
The editors are my right-hand people. They can shift things and move things and give me another sound. Maybe I need one with more mid-range because the one in there isn’t quite reading. We had a lot of that. Trying to get those explosions to work and to come through John Williams’ score, sometimes we needed something with more low-end and more thump or more crack. There was a handoff in some scenes.

On The Last Jedi, I had sound effects editor Jon Borland with me on the stage. Bonnie Wild had started the project and had prepped a lot of the sounds for several reels — she and Jon and Ren Klyce, who oversaw the whole thing. But Jon was my go-to person on the stage. He did a great job. It was a bit of a daunting task, but Jon is young and wants to learn and gave it everything he had. I love that.

What format was the main mix?
Everything was done in Atmos natively, then we downmixed to 7.1 and 5.1 and all the other formats. We were very diligent about having the downmixed versions match the Atmos mix the best that they could.
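
For readers curious what a fold-down step involves, here is a minimal, hypothetical sketch of reducing a 7.1 bed to 5.1. The channel names, the roughly -3 dB rear-surround gain and the normalization are illustrative assumptions, not the settings used on the film; an Atmos renderer handles the real downmix, which the team then checks against the Atmos master by ear.

    import numpy as np

    def fold_71_to_51(ch, rear_gain=0.707):
        """Fold a 7.1 bed (L, R, C, LFE, Lss, Rss, Lrs, Rrs) into 5.1
        (L, R, C, LFE, Ls, Rs) by summing the rear surrounds into the
        side surrounds at roughly -3 dB. Values are illustrative only."""
        out = {
            "L":   ch["L"],
            "R":   ch["R"],
            "C":   ch["C"],
            "LFE": ch["LFE"],
            "Ls":  ch["Lss"] + rear_gain * ch["Lrs"],
            "Rs":  ch["Rss"] + rear_gain * ch["Rrs"],
        }
        # Guard against clipping introduced by the summing.
        peak = max(np.max(np.abs(sig)) for sig in out.values())
        if peak > 1.0:
            out = {name: sig / peak for name, sig in out.items()}
        return out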

Any final thoughts you’d like to share?
I’m so glad that Rian chose me to be part of the mix. This film was a lot of fun and a real collaborative effort. Rian is the one who really set that tone. He wanted to hear our ideas and see what we could do. He wasn’t sold on one thing. If something wasn’t working, he would try things out until it did. It was literally sorting out frequencies and getting transitions to work just right. Rian was collaborative, and that creates a room of collaboration. We wanted a great track for the audience to enjoy… a track that went with Rian’s picture.

Super Bowl: Sound Lounge’s audio post for Pepsi, NFL and more

By Jennifer Walden

Super Bowl Sunday is an unofficial national holiday in this country, with almost as much excitement for the commercials that air as for the actual game. And regardless of which teams are playing, New York’s advertising and post communities find themselves celebrating, because they know the work they are providing will be seen by millions and talked about repeatedly in offices and on social media. To land a Super Bowl ad is a pretty big deal, and audio post facility Sound Lounge has landed seven!

Tom Jucarone

In this story, president/mixer/sound designer Tom Jucarone, mixer/sound designer Rob DiFondi and mixer/sound designer Glen Landrum share details on how they helped to craft the Super Bowl ads for Pepsi, E*Trade, the NFL and more.

Pepsi This is the Pepsi via Pepsi’s in-house creative team
This spot looks at different Pepsi products through the ages and features different pop-culture icons — like Cindy Crawford — who have endorsed Pepsi over the years. The montage-style ad is narrated by Jimmy Fallon.

Sonically, what’s unique about this spot?
Jucarone: What’s unique about this spot is the voiceover — it’s Jimmy Fallon. Sound-wise, the spot was about him and the music more than anything else. The sound effects were playing a very secondary role.

Pepsi had a really interesting vision of how they wanted Jimmy to sound. We spent a lot of time making his voice work well against the music. The Pepsi team wanted Fallon’s voice to have a fullness yet still be bright enough to cut through the heavy-duty music track. They wanted his voice to sound big and full, but without losing the personality.

What tools helped?
I used a few plug-ins on his voice. Obviously, there was some EQ but I also used one plug-in called MaxxBass by Waves, which is a bass enhancement plug-in. With that, I was able to manipulate where on the low-end I could affect his voice with more fullness. Then we added a touch of reverb to make it a bit bigger. For that, I used Audio Ease’s Altiverb but it’s very slight.
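
MaxxBass-type processors are generally described as psychoacoustic bass enhancers: they add upper harmonics of the low band so the ear perceives more fullness, even on small speakers. The sketch below illustrates that general idea only; it is not the Waves algorithm, and the crossover, drive and mix amounts are made-up values.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def bass_enhance(x, sr, crossover_hz=120.0, drive=2.0, mix=0.3):
        """Generic psychoacoustic bass-enhancement sketch: isolate the low
        band, generate harmonics with a soft waveshaper, and blend the new
        content back in above the crossover. Values are illustrative."""
        lowpass = butter(4, crossover_hz, btype="low", fs=sr, output="sos")
        highpass = butter(2, crossover_hz, btype="high", fs=sr, output="sos")
        low_band = sosfilt(lowpass, x)
        harmonics = np.tanh(drive * low_band)      # nonlinearity adds upper harmonics
        harmonics = sosfilt(highpass, harmonics)   # keep only the generated content
        return x + mix * harmonics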

Persil Game-Time Stain-Time via DDB New York
In this spot, there’s a time-out during the big game and an announcer on TV taps on the television’s glass — from the inside. He points out a guacamole stain on one viewer’s shirt, then comes through the TV and offers up a jug of laundry detergent. The man’s shirt flies off, goes into the washer and comes out perfect. Suddenly, the shirt is back on the viewer’s body and the announcer returns to inside the TV.

Sonically, what’s unique about this spot?
Jucarone: It’s an interesting spot because it’s so totally different from what you’d expect to see during the Super Bowl. It’s this fun, little quirky spot. This guy comes out of the TV and turns all these people onto this product that cleans their clothes. There was no music, just a few magical sound effects. It’s a dialogue-driven spot, so the main task there was to clean up the dialogue and make it clear.

iZotope is my go-to tool for dialogue clean up. I love that program. There are so many different ways to attack the clean up. I’m working on a spot now that has dialogue that is basically not savable, but I think I can save it with iZotope. It’s a great tool — one of the best ones to have. I used RX 6 a lot on the Persil spot, particularly for this one guy who whispers, “What is going on?” The room tone was pretty heavy on that line, and it was one of the funniest lines, so we really wanted that one to be clear.

The approach to all these spots was to find out what unique sonic pieces are important to the story, and those are the ones you want to highlight. Back before the CALM Act, everyone was trying to make their commercial louder than everybody else’s. Now that we have that regulation, we’re a bit more open to making a spot more cinematic. We have a greater opportunity for storytelling.

E*Trade This is Getting Old via MullenLowe
In this spot, a collection of senior citizens sing about still being in the workforce — “I’m eighty-five, and I want to go home.” It’s set to the music of Harry Belafonte’s song “Day-O.” From lifeguard to club DJ, their careers are interesting, sure, but they really want nothing more than to retire.

Sonically, what’s unique about this spot?
Jucarone: That spot was difficult because of all the different voices involved, all the different singers. The agency worked with mixer Rob DiFondi and me on this one. Rob did the final mix.

The spot has a music track with solo and group performances. They had recorded the performers at a recording studio and then brought those tracks to us as a playlist of roughly 20 different versions. There were multiple people with multiple different versions, and the challenge was going through all of those to find the most unique and funniest voices for each person. So that took some time. Then, we had to match all of those voices so they sounded similar in tone. We had to re-mix each voice as we found it and used it because it wasn’t already processed. Then we had to also craft the group.
I worked with the agency to get the solo performances finalized and then Rob, the other mixer on it, took over and created the group performances. He had to combine all of these singular voices to make it sound like they were all singing together in a group, which was pretty difficult. It turned out to be a very complex session. We had multiple versions because they wanted to have choices after the fact.

What tools did you use?
There were a couple of different reverbs that really helped on this spot. We used the Waves Renaissance Reverb, and Avid’s Reverb One. We used a fun analog modeling EQ called Waves V-EQ4, which is modeled after a Neve 1081 console EQ. We wanted the individual voices to sound like they were singing together, one after another.

Any particular challenge?
DiFondi: My big job was the background chorus. We had to make a group of eight elderly background singers sound much larger. The problem there was layering the same eight people four times doesn’t net you the same as having 32 individuals. So what I did was treat each track separately. I varied the timing of each layer and I put each one in a separate room using different reverb settings and in the end that gave us the sound of a much larger chorus though we had only eight people.
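
DiFondi’s description, the same eight voices stacked several times with varied timing and different room treatments, maps onto a standard chorus-building trick. Below is a rough sketch of the idea, assuming each voice is a mono NumPy array at the same sample rate; the delay spreads and the crude multi-tap “reverb” are invented values, not his actual settings.

    import numpy as np

    def build_chorus(voices, sr, layers=4, seed=None):
        """Stack the same small group of singers several times, nudging each
        copy in time and giving it a slightly different 'room' so the copies
        stop sounding identical. All timing/decay values are illustrative."""
        rng = np.random.default_rng(seed)
        out = np.zeros(max(len(v) for v in voices) + sr)   # headroom for the offsets
        for _ in range(layers):
            offset = int(rng.uniform(0.005, 0.040) * sr)    # 5-40 ms timing spread
            decay = rng.uniform(0.2, 0.5)                   # each layer gets its own "room"
            spacing = int(rng.uniform(0.02, 0.05) * sr)
            for v in voices:
                wet = v.copy()
                for k in range(1, 4):                        # crude multi-tap reverb tail
                    wet[k * spacing:] += (decay ** k) * v[: len(v) - k * spacing]
                out[offset:offset + len(wet)] += wet / layers
        return out / np.max(np.abs(out))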

Super Bowl NFL Celebrations to Come via Grey New York
NY Giants players Eli Manning and Odell Beckham, Jr. re-enact the famous last dance from Dirty Dancing, including the legendary lift at the end. The spot starts out realistic with on-camera dialogue for Eli and Odell during a team practice, but then it transitions into more of a music video as the players get wrapped up in the dance.

Sonically, what’s unique about this spot?
Landrum: There was lots of mastering on the main music track to get it to pop and be loud on-air. I used the iZotope Neutron for my music track mastering. I love that plug-in and have been using it and learning more about it. It has great multi-band compression, and the exciter is a cool addition to really finesse frequencies.

I think the most interesting part of the process was working with the director and agency creatives and producers to edit the music to match the storyboard they had before the actual shoot. We cut a few versions of varying lengths to give some flexibility. They used the music edits on-set so the guys could dance to it. I thought this was so smart because they would know what’s working and what isn’t while on-set and could adjust accordingly. I know they had a short shoot day so this had to help.

Everything worked out perfectly. I think they edited in less than a week (editor Geoff Hounsell from Arcade Edit, NY) and we mixed in a day or less. The creatives and producers involved with this spot and the NFL account are an awesome group. They make decisions and get it done and the result was amazing. Also, our expert team of producers here made the process smooth as silk during the stressful Super Bowl time.


Jennifer Walden is a New Jersey-based audio engineer and writer.  Follow her on Twitter @AudioJeney.


Oscar Watch: The Shape (and sound) of Water

Post production sound mixers Christian Cooke and Brad Zoern, who are nominated (with production mixer Glen Gauthier) for their work on Fox’s The Shape of Water, have sat side-by-side at mixing consoles for nearly a decade. The frequent collaborators, who handle mixing duties at Deluxe Toronto, faced an unusual assignment given that the film’s two lead characters never utter a single word of actual dialogue. In The Shape of Water, which has been nominated for 13 Academy Awards, Elisa (Sally Hawkins) is mute and the creature she falls in love with makes undefined sounds. This creative choice placed more than the usual amount of importance on the rest of the soundscape to support the story.

L-R: Nathan Robitaille, J. Miles Dale, Brad Zoern, director Guillermo del Toro, Christian Cooke, Nelson Ferreira, Filip Hosek, Cam McLauchlin, video editor Sidney Wolinsky, Rob Hegedus, Doug Wilkinson.

Cooke, who focused on dialogue and music, and Zoern, who worked with effects, backgrounds and Foley, knew from the start that their work would need to fit into the unique and delicate tone that infused the performances and visuals. Their work began, as always, with pre-dubs followed by three temp mixes of five days each, which allowed for discussion and input from director Guillermo del Toro. It was at the premixes that the mixers got a feel for del Toro’s conception for the film’s soundtrack. “We were more literal at first with some of the sounds,” says Zoern. “He had ideas about blending effects and music. By the time we started on the five-week-long mix, we had a very clear idea about what he was looking for.”

The final mix took place in one of Deluxe Toronto’s five stages, which have identical acoustic qualities and the same Avid Pro Tools-based Harrison MP4D/Avid S6 hybrid console, JBL M2 speakers and Crown amps.

The mixers worked to shape sonic moments that do more than represent “reality”; they create mood and tension. This includes key moments such as the sound of a car’s windshield wipers that build in volume until they take over the track in the form of a metronome-like beat, underlining the tension of the moment. One pivotal scene finds Richard Strickland (Michael Shannon) paying a visit to Zelda Fuller (Octavia Spencer). As Strickland speaks, Zelda’s husband Brewster (Martin Roach) watches television. “It was an actual mono track from a real show,” Cooke explains. “It starts out sounding roomy and distant, as it would really have sounded. As the scene progresses, it expands, getting more prominent and spreading out around the speakers [for the 5.1 version]. By the end of the scene, the audio from the TV has become something totally different from what it was at the start, and then we melded that seamlessly into Alexandre Desplat’s score.”

Beyond the aesthetic work of building a sound mix, particularly one so fluid and expressionistic, post production mixers must also collaborate on a large number of technical decisions during the mix to ensure the elements have the right amount of emotional punch without calling attention to themselves. Individual sounds, even specific frequencies, vie for audience attention and the mixers orchestrate and layer them.

“It’s raining outside when they come into the room,” Zoern notes about the above scene. “We want to initially hear the sound of the rain to have a context for the scene. You never just want dialogue coming out of nowhere; it needs to live in a space. But then we pull that back to focus on the dialogue, and then the [augmented] audio from the TV gains prominence. During the final mix, Chris and I are always working together, side by side, to meld the hundreds of sounds the editors have built in a way that reflects the story and mood of the film.”

“We’re like an old married couple,” Cooke jokes. “We finish each other’s sentences. But it’s very helpful to have that kind of shorthand in this job. We’re blending so many pieces together and if people notice what we’ve done, we haven’t done our jobs.”


Super Bowl: Heard City’s audio post for Tide, Bud and more

By Jennifer Walden

New York audio post house Heard City put their collaborative workflow design to work on the Super Bowl ad campaign for Tide. Philip Loeb, partner/president of Heard City, reports that their facility is set up so that several sound artists can work on the same project simultaneously.

Loeb also helped to mix and sound design many of the other Super Bowl ads that came to Heard City, including ads for Budweiser, Pizza Hut, Blacture, Tourism Australia and the NFL.

Here, Loeb and mixer/sound designer Michael Vitacco discuss the approach and the tools that their team used on these standout Super Bowl spots.

Philip Loeb

Tide’s It’s a Tide Ad campaign via Saatchi & Saatchi New York
Is every Super Bowl ad really a Tide ad in disguise? A string of commercials touting products from beer to diamonds, and even a local ad for insurance, is interrupted by David Harbour (of Stranger Things fame). He declares that those ads are actually just Tide commercials, as everyone is wearing such clean clothes.

Sonically, what’s unique about this spot?
Loeb: These spots, four in total, involved sound design and mixing, as well as ADR. One of our mixers, Evan Mangiamele, conducted an ADR session with David Harbour, who was in Hawaii, and we integrated that into the commercial. In addition, we recorded a handful of different characters for the lead-ins for each of the different vignettes because we were treating each of those as different commercials. We had to be mindful of a male voiceover starting one and then a female voiceover starting another so that they were staggered.

There was one vignette for Old Spice, and since the ads were for P&G, we did get the Old Spice mnemonic and we did try something different at the end — with one version featuring the character singing the mnemonic and one of him whistling it. There were many different variations and we just wanted, in the end, to get part of the mnemonic into the joke at the end.

The challenge with the Tide campaign, in particular, was to make each of these vignettes feel like it was a different commercial and to treat each one as such. There’s an overall mix level that goes into that but we wanted certain ones to have a little bit more dynamic range than the others. For example, there is a cola vignette that’s set on a beach with people taking a selfie. David interrupts them by saying, “No, it’s a Tide ad.”

For that spot, we had to record a voiceover that was very loud and energetic to go along with a loud and energetic music track. That vignette cuts into the “personal digital assistant” (think Amazon’s Alexa) spot. We had to be very mindful of these ads flowing into each other while making it clear to the viewer that these were different commercials with different products, not one linear ad. Each commercial required its own voiceover, its own sound design, its own music track, and its own tone.

One vignette was about car insurance featuring a mechanic in a white shirt under a car. That spot isn’t letterbox like the others; it’s 4:3 because it’s supposed to be a local ad. We made that vignette sound more like a local ad; it’s a little over-compressed, a little over-equalized and a little videotape sounding. The music is mixed a little low. We wanted it to sound like the dialogue is really up front so as to get the message across, like a local advertisement.

What’s your workflow like?
Loeb: At Heard City, our workflow is unique in that we can have multiple mixers working on the same project simultaneously. This collaborative process makes our work much more efficient, and that was our original intent when we opened the company six years ago. The model came to us by watching the way that the bigger VFX companies work. Each artist takes a different piece of the project and then all of the work is combined at the end.

We did that on the Tide campaign, and there was no other way we could have done it due to the schedule. Also, we believe this workflow provides a much better product. One sound artist can be working specifically on the sound design while another can be mixing. So as I was working on mixing, Evan was flying in his sound design to me. It was a lot of fun working on it like that.

What tools helped you to create the sound?
One plug-in we’re finding to be very helpful is the iZotope Neutron. We put that on the master bus and we have found many settings that work very well on broadcast projects. It’s a very flexible tool.

Vitacco: The Neutron has been incredibly helpful overall in balancing out the mix. There are some very helpful custom settings that have helped to create a dynamic mix for air.

Tourism Australia Dundee via Droga5 New York
Danny McBride and Chris Hemsworth star in this movie-trailer-turned-tourism-ad for Australia. It starts out as a movie trailer for a new addition to the Crocodile Dundee film franchise — well, rather, a spoof of it. There’s epic music featuring a didgeridoo and title cards introducing the actors and setting up the premise for the “film.” Then there’s talk of miles of beaches and fine wine and dining. It all seems a bit fishy, but finally Danny McBride confirms that this is, in fact, actually a tourism ad.

Sonically, what’s unique about this spot?
Vitacco: In this case, we were creating a fake movie trailer that’s a misdirect for the audience, so we aimed to create sound design that was big and epic but also authentic to the location of the “film.”

One of the things that movie trailers often draw upon is a consistent mnemonic to drive home a message. So I helped to sound design a consistent mnemonic for each of the title cards that come up.

For this I used some Native Instruments toolkits, like “Rise & Hit” and “Gravity,” and Tonsturm’s Whoosh software to supplement some existing sound design to create that consistent and branded mnemonic.

In addition, we wanted to create an authentic sonic palette for the Australian outback where a lot of the footage was shot. I had to be very aware of the species of animals and insects that were around. I drew upon sound effects that were specifically from Australia. All sound effects were authentic to that entire continent.

Another factor that came into play was that anytime you are dealing with a spot that has a lot of soundbites, especially ones recorded outside, there tends to be a lot of noise reduction taking place. I didn’t have to hit it too hard because everything was recorded very well. For cleanup, I used the iZotope RX 6 — both the RX Connect and the RX Denoiser. I relied on that heavily, as well as the Waves WNS plug-in, just to make sure that things were crisp and clear. That allowed me the flexibility to add my own ambient sound and have more control over the mix.

Michael Vitacco

In RX, I really like to use the Denoiser instead of the Dialogue Denoiser tool when possible. I’ll pull out the handles of the production sound and grab a long sample of noise. Then I’ll use the Denoiser because I find that works better than the Dialogue Denoiser.
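
The workflow Vitacco describes, grabbing a long stretch of pure room tone from the handles and then denoising against it, is essentially what spectral subtraction does. Below is a bare-bones sketch of that idea for illustration; it is not iZotope’s algorithm, and the frame size, reduction and floor values are assumptions.

    import numpy as np
    from scipy.signal import stft, istft

    def denoise(dialogue, noise_sample, sr, reduction=1.0, floor=0.05):
        """Spectral-subtraction sketch: learn an average noise spectrum from a
        long sample of room tone (the 'handles'), then subtract it from each
        frame of the dialogue. Parameters are illustrative, not RX settings."""
        _, _, noise_spec = stft(noise_sample, fs=sr, nperseg=1024)
        noise_mag = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

        _, _, dial_spec = stft(dialogue, fs=sr, nperseg=1024)
        mag, phase = np.abs(dial_spec), np.angle(dial_spec)
        cleaned = np.maximum(mag - reduction * noise_mag, floor * mag)

        _, y = istft(cleaned * np.exp(1j * phase), fs=sr, nperseg=1024)
        return y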

Budweiser Stand By You via David Miami
The phone rings in the middle of the night. A man gets out of bed, prepares to leave and kisses his wife good-bye. His car radio announces that a natural disaster is affecting thousands of families who are in desperate need of aid. The man arrives at a Budweiser factory and helps to organize the production of canned water instead of beer.

Sonically, what’s unique about this spot?
Loeb: For this spot, I did a preliminary mix where I handled the effects, the dialogue and the music. We set the preliminary tone for that as to how we were going to play the effects throughout it.

The spot starts with a husband and wife asleep in bed and they’re awakened by a phone call. Our sound focused on the dialogue and effects upfront, and also the song. I worked on this with another fantastic mixer here at Heard City, Elizabeth McClanahan, who comes from a music background. She put her ears to the track and did an amazing job of remixing the stems.

On the master track in the Pro Tools session, she used iZotope’s Neutron, as well as the FabFilter Pro-L limiter, which helps to contain the mix. One of the tricks on a dynamic mix like that — which starts off with that quiet moment in the morning and then builds with the music in the end — is to keep it within the restrictions of the CALM Act and other specifications that stipulate dynamic range and not just average loudness. We had to be mindful of how we were treating those quiet portions and the lower portions so that we still had some dynamic range but we weren’t out of spec.
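
Loeb’s point is that broadcast delivery is judged on measured loudness and dynamic range, not just peaks; the CALM Act points US broadcasters to the ATSC A/85 practice, which uses ITU-R BS.1770 loudness. The sketch below is only a rough, unweighted proxy for that kind of check, with an invented window length; it is not a compliant meter.

    import numpy as np

    def level_report(x, sr, win_s=3.0):
        """Crude loudness/dynamic-range check: short-term RMS in dBFS over
        overlapping 3-second windows. Real broadcast specs use K-weighted,
        gated BS.1770 loudness; this is only a simplified illustration."""
        win = int(win_s * sr)
        hop = win // 2
        frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
        levels = np.array([20 * np.log10(np.sqrt(np.mean(f ** 2)) + 1e-12)
                           for f in frames])
        return {
            "average_dBFS": float(np.mean(levels)),
            "loudest_dBFS": float(np.max(levels)),
            "quietest_dBFS": float(np.min(levels)),
            "range_dB": float(np.max(levels) - np.min(levels)),
        }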


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @AudioJeney.


Genelec intros The Ones series of near-field monitors

Genelec is now offering point source monitoring with The Ones series, featuring the established 8351 three-way monitor along with the new 8341 and 8331. These small three-way coaxial monitors are housed in enclosures no larger than a traditional two-way Genelec 8040 or 8030. Their coaxial driver design provides accurate imaging and improved sound quality both on- and off-axis, vertically as well as horizontally. Also, there are no visible woofers.

Like the 8351, both the 8341 and 8331 can be orientated horizontally or vertically using an adjustable IsoPod base for isolation. But while the 8341 and 8331 both echo the 8351 in form and function, the new models have been entirely reengineered and feature ultra-compact dimensions: 13.78 in. x 9.33 in. x 9.57 in. [350 mm x 237 mm x 243 mm] for the 8341, and 11.77 in. x 7.44 in. x 8.70 in. [299 mm x 189 mm x 212 mm] for the 8331.

Innovations include a motor assembly that sees both the midrange and the tweeter share the same compact magnet system, reducing size and weight with no reduction in response. The midrange coaxial driver cone is now composed of concentric sections, optimizing midrange linearity — as does the DCW, which covers the entire front face of the enclosure. Despite the size of the 8341 and 8331, each unit incorporates three stages of dedicated Class D amplification.

Short-term maximum output capacity is 110 dB SPL for the 8341 and 104 dB SPL for the 8331 (both at 1 m), with accuracy better than ±1.5 dB. Frequency response extends down to 38 Hz for the 8341 and 45 Hz for the 8331 (-6 dB), and beyond 40 kHz for both the analog and digital inputs.

The coaxial design allows for ultra-near-field listening, creating a dramatic improvement in the direct sound-to-reverberant sound ratio and further reducing the room’s influence while monitoring. The listening distance may be as short as 16 inches, with no loss of precision.

The Ones were recently used by Richard Chycki for his latest project, a 5.1 mix of The Tragically Hip – A National Celebration.


Capturing Foley for Epix’s Berlin Station

Now in its second season on Epix, the drama series Berlin Station centers on undercover agents, diplomats and whistleblowers inhabiting a shadow world inside the German capital.

Leslie Bloome

Working under the direction of series supervising sound editor Ruy Garcia, Westchester, New York-based Foley studio Alchemy Post Sound is providing Berlin Station with cinematic sound. Practical effects, like the clatter of weapons and clinking glass, are recorded on the facility’s main Foley stage. Certain environmental effects are captured on location at sites whose ambience is like the show’s settings. Interior footsteps, meanwhile, are recorded in the facility’s new “live” room, a 1,300-square-foot space with natural reverb that’s used to replicate the environment of rooms with concrete, linoleum and tile floors.

“Garcia wants a soundtrack with a lot of detail and depth of field,” explains lead Foley artist and Alchemy Post founder Leslie Bloome. “So, it’s important to perform sounds in the proper perspective. Our entire team of editors, engineers and Foley artists needs to be on point regarding the location and depth of field of the sounds we’re recording. Our aim is to make every setting feel like a real place.”

A frequent task for the Foley team is to come up with sounds for high-tech cameras, surveillance equipment and other spy gadgetry. Foley artist Joanna Fang notes that sophisticated wall safes appear in several episodes, each one featuring differing combinations of electronic, latch and door sounds. She adds that in one episode a character has a microchip concealed in his suit jacket and the Foley team needed to invent the muffled crunch the chip makes when the man is frisked. “It’s one of those little ‘non-sounds’ that Foley specializes in,” she says. “Most people take it for granted, but it helps tell the story.”

The team is also called on to create Foley effects associated with specific exterior and interior locations. This can include everything from seedy safe houses and bars to modern office suites and upscale hotel rooms. When possible, Alchemy prefers to record such effects on location at sites closely resembling those pictured on-screen. Bloome says that recording things like creaky wood floors on location results in effects that sound more real. “The natural ambiance allows us to grab the essence of the moment,” he explains, “and keep viewers engaged with the scene.”

Footsteps are another regular Foley task. Fang points out that there is a lot of cat-and-mouse action with one character following another or being pursued, and the patter of footsteps adds to the tension. “The footsteps are kind of tough,” she says. “Many of the characters are either diplomats or spies and they all wear hard-soled shoes. It’s hard to build contrast, so we end up creating a hierarchy: dark, powerful heels for strong characters, lighter shoes for secondary roles.”

For interior footsteps, large theatrical curtains are used to adjust the ambiance in the live stage to fit the scene. “If it’s an office or a small room in a house, we draw the curtains to cut the room in half; if it’s a hotel lobby, we open them up,” Fang explains. “It’s amazing. We’re not only creating depth and contrast by using different types of shoes and walking surfaces, we’re doing it by adjusting the size of the recording space.”

Alchemy edits their Foley in-house and delivers pre-mixed and synced Foley that can be dropped right into the final mix seamlessly. “The things we’re doing with location Foley and perspective mixing are really cool,” says Foley editor and mixer Nicholas Seaman. “But it also means the responsibility for getting the sound right falls squarely on our shoulders. There is no ‘fix in the mix.’ From our point of view, the Foley should be able to stand on its own. You should be able to watch a scene and understand what’s going on without hearing a single line of dialogue.”

The studio used Neumann U87 and KMR81 microphones, a Millennia mic-pre and Apogee converter, all recorded into Avid Pro Tools on a C24 console. In addition to recording a lot of guns, Alchemy also borrowed a Doomsday prep kit for some of the sounds.

The challenge to deliver sound effects that can stand up to that level of scrutiny keeps the Foley team on its toes. “It’s a fascinating show,” says Fang. “One moment, we’re inside the station with the usual office sounds and in the next edit, we’re in the field in the middle of a machine gun battle. From one episode to the next, we never know what’s going to be thrown at us.”


Coco’s sound story — music, guitars and bones

By Jennifer Walden

Pixar’s animated Coco is a celebration of music, family and death. In the film, a young Mexican boy named Miguel (Anthony Gonzalez) dreams of being a musician just like his great-grandfather, even though his family is dead-set against it. On the evening of Día de los Muertos (the Mexican holiday called Day of the Dead), Miguel breaks into the tomb of legendary musician Ernesto de la Cruz (Benjamin Bratt) and tries to steal his guitar. The attempted theft transforms Miguel into a spirit, and as he flees the tomb he meets his deceased ancestors in the cemetery.

Together they travel to the Land of the Dead where Miguel discovers that in order to return to life he must have the blessing of his family. The matriarch, great-grandmother Mamá Imelda (Alanna Ubach) gives her blessing with one stipulation, that Miguel can never be a musician. Feeling as though he cannot live without music, Miguel decides to seek out the blessing of his musician great-grandfather.

Music is intrinsically tied to the film’s story, and therefore to the film’s soundtrack. Ernesto de la Cruz’s guitar is like another character in the film. The Skywalker Sound team handled all the physical guitar effects, from subtle to destructive. Although they didn’t handle any of the music, they covered everything from fret handling and body thumps to string breaks and smashing sounds. “There was a lot of interaction between music and effects, and a fine balance between them, given that the guitar played two roles,” says supervising sound editor/sound designer/re-recording mixer Christopher Boyes, who was just nominated for a CAS award for his mixing work on Coco. His Skywalker team on the film included co-supervising sound editor J.R. Grubbs, sound effects editors Justin Doyle and Jack Whittaker, and sound design assistant Lucas Miller.

Boyes bought a beautiful guitar from a pawn shop in Petaluma near their Northern California location, and he and his assistant Miller spent a day recording string sounds and handling sounds. “Lucas said that one of the editors wanted us to cut the guitar strings,” says Boyes. “I was reluctant to cut the strings on this beautiful guitar, but we finally decided to do it to get the twang sound effects. Then Lucas said that we needed to go outside and smash the guitar. This was not an inexpensive guitar. I told him there was no way we were going to smash this guitar, and we didn’t! That was not a sound we were going to create by smashing the actual guitar! But we did give it a couple of solid hits just to get a nice rhythmic sound.”

To capture the true essence of Día de los Muertos in Mexico, Boyes and Grubbs sent effects recordists Daniel Boyes, Scott Guitteau, and John Fasal to Oaxaca to get field recordings of the real 2016 Día de los Muertos celebrations. “These recordings were essential to us and director Lee Unkrich, as well as to Pixar, for documenting and honoring the holiday. As such, the recordings formed the backbone of the ambience depicted in the track. I think this was a crucial element of our journey,” says Boyes.

Just as the celebration sound of Día de los Muertos was important, so too was the sound of Miguel’s town. The team needed to provide a realistic sense of a small Mexican town to contrast with the phantasmagorical Land of the Dead, and the recordings that were captured in Mexico were a key building block for that environment. Co-supervising sound editor Grubbs says, “Those recordings were invaluable when we began to lay the background tracks for locations like the plaza, the family compound, the workshop, and the cemetery. They allowed us to create a truly rich and authentic ambiance for Miguel’s home town.”

Bone Collecting
Another prominent set of sounds in Coco is the bones. Boyes notes that director Unkrich had specific guidelines for how the bones should sound. Characters like Héctor (Gael García Bernal), who are stuck in the Land of the Dead and are being forgotten by those still alive, needed to have more rattle-y sounding bones, as if the skeleton could come apart easily. “Héctor’s life is about to dissipate away, just as we saw with his friend Chicharrón [Edward James Olmos] on the docks, so their skeletal structure is looser. Héctor’s bones demonstrated that right from the get-go,” he explains.

In contrast, if someone is well remembered, such as de la Cruz, then the skeletal structure should sound tight. “In Miguel’s family, Papá Julio [Alfonso Arau] comically bursts apart many times, but he goes back together as a pretty solid structure,” explains Boyes. “Lee [Unkrich] wanted to dig into that dynamic first of all, to have that be part of the fabric that tells the story. Certain characters are going to be loose because nobody remembers them and they’re being forgotten.”

Creating the bone sounds was the biggest challenge for Boyes as a sound designer. Unkrich wanted to hear the complexity of the bones, from the clatter and movement down to the detail of cartilage. “I was really nervous about the bones challenge because it’s a sound that’s not easily embedded into a track without calling attention to itself, especially if it’s not done well,” admits Boyes.

Boyes started his bone sound collection by recording a mobile he built using different elements, like real bones, wooden dowels, little stone chips and other things that would clatter and rattle. Then one day Boyes stumbled onto an interesting bone sound while making a coconut smoothie. “I cracked an egg into the smoothie and threw the eggshell into the empty coconut hull and it made a cool sound. So I played with that. Then I was hitting the coconut on concrete, and from all of those sources I created a library of bone sounds.” Foley also contributed to the bone sounds, particularly for the literal, physical movements, like walking.

According to Grubbs, the bone sounds were designed and edited by the Skywalker team and then presented to the directors over several playbacks. The final sound of the skeletons is a product of many design passes, which were carefully edited in conjunction with the Foley bone recordings and sometimes used in combination with the Foley.

L-R: J.R. Grubbs and Chris Boyes

Because the film is so musical, the bone tracks needed to have a sense of rhythm and timing. To hit moments in a musical way, Boyes loaded bone sounds and other elements into Native Instruments’ Kontakt and played them via a MIDI keyboard. “One place for the bones that was really fun was when Héctor went into the security office at the train station,” says Boyes.

“Héctor comes apart and his fingers do a little tap dance. That kind of stuff really lent to the playfulness of his character and it demonstrated the looseness of his skeletal structure.”

From a sound perspective, Boyes feels that Coco is a great example of how movies should be made. During editorial, he and Grubbs took numerous trips to Pixar to sit down with the directors and the picture department. For several months before the final mix, they played sequences for Unkrich that they wanted to get direction on. “We would play long sections of just sound effects, and Lee — being such a student of filmmaking and being an animator — is quite comfortable with diving down into the nitty-gritty of just simple elements. It was really a collaborative and healthy experience. We wanted to create the track that Lee wanted and wanted to make sure that he knew what we were up to. He was giving us direction the whole way.”

The Mix
Boyes mixed alongside re-recording mixer Michael Semanick (music/dialogue) on Skywalker’s Kurosawa Stage. They mixed in native Dolby Atmos on a DFC console. While Boyes mixed, effects editor Doyle handled last-minute sound effects needs on the stage, and Grubbs ran the logistics of the show. Grubbs notes that although he and Boyes have worked together for a long time this was the first time they’ve shared a supervising credit.

“J.R. [Grubbs] and I have been working together for probably 30 years now,” says Boyes. “He always helped to run the show in a very supervisory way, so I just felt it was time he started getting credit for that. He’s really kept us on track, and I’m super grateful to him.”

One helpful audio tool for Boyes during the mix was the Valhalla Room reverb, which he used on Miguel’s footsteps inside de la Cruz’s tomb. “Normally, I don’t use plug-ins at all when I’m mixing. I’m a traditional mixer who likes to use a console and TC Electronic’s TC 6000 and the Lexicon 480 reverb as outboard gear. But in this one case, the Valhalla Room plug-in had a preset that really gave me a feeling of the stone tomb.”

Unkrich allowed Semanick and Boyes to have a first pass at the soundtrack to get it to a place they felt was playable, and then he took part in the final mix process with them. “I just love Lee’s respect for us; he gives us time to get the soundtrack into shape. Then, he sat there with us for 9 to 10 hours a day, going back and forth, frame by frame at times and section by section. Lee could hear everything, and he was able to give us definitive direction throughout. The mix was achieved by and directed by Lee, every frame. I love that collaboration because we’re here to bring his vision and Pixar’s vision to the screen. And the best way to do that is to do it in the collaborative way that we did,” concludes Boyes.


Jennifer Walden is a New Jersey-based audio engineer and writer.


The 54th annual CAS Award nominees

The Cinema Audio Society announced the nominees for the 54th Annual CAS Awards for Outstanding Achievement in Sound Mixing. There are seven creative categories for 2017, and the Outstanding Product nominations were revealed as well.

Here are this year’s nominees:


Motion Picture – Live Action

Baby Driver

Production Mixer – Mary H. Ellis, CAS

Re-recording Mixer – Julian Slater, CAS

Re-recording Mixer – Tim Cavagin

Scoring Mixer – Gareth Cousins, CAS

ADR Mixer – Mark Appleby

Foley Mixer – Glen Gathard

Dunkirk

Production Mixer – Mark Weingarten, CAS

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Gary Rizzo, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Thomas J. O’Connell

Foley Mixer – Scott Curtis

Star Wars: The Last Jedi

Production Mixer – Stuart Wilson, CAS

Re-recording Mixer – David Parker

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Ren Klyce

Scoring Mixer – Shawn Murphy

ADR Mixer – Doc Kane, CAS

Foley Mixer – Frank Rinella

The Shape of Water

Production Mixer – Glen Gauthier

Re-recording Mixer – Christian T. Cooke, CAS

Re-recording Mixer – Brad Zoern, CAS

Scoring Mixer – Peter Cobbin

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Peter Persaud, CAS

Wonder Woman

Production Mixer – Chris Munro, CAS

Re-recording Mixer – Chris Burdon

Re-recording Mixer – Gilbert Lake, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Nick Kray

Foley Mixer – Glen Gathard

 

Motion Picture – Animated


Cars 3

Original Dialogue Mixer – Doc Kane, CAS

Re-recording Mixer – Tom Meyers

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Nathan Nance

Scoring Mixer – David Boucher

Foley Mixer – Blake Collins

Coco

Original Dialogue Mixer – Vince Caro

Re-recording Mixer – Christopher Boyes

Re-recording Mixer – Michael Semanick

Scoring Mixer – Joel Iwataki

Foley Mixer – Blake Collins

Despicable Me 3

Original Dialogue Mixer – Carlos Sotolongo

Re-recording Mixer – Randy Thom, CAS

Re-recording Mixer – Tim Nielson

Re-recording Mixer – Brandon Proctor

Scoring Mixer – Greg Hayes

Foley Mixer – Scott Curtis

Ferdinand

Original Dialogue Mixer – Bill Higley, CAS

Re-recording Mixer – Randy Thom, CAS

Re-recording Mixer – Lora Hirschberg

Re-recording Mixer – Leff Lefferts

Scoring Mixer – Shawn Murphy

Foley Mixer – Scott Curtis

The Lego Batman Movie

Original Dialogue Mixer – Jason Oliver

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Wayne Pashley

Scoring Mixer – Stephen Lipson

Foley Mixer – Lisa Simpson

 

Motion Picture – Documentary

An Inconvenient Sequel: Truth to Power

Production Mixer – Gabriel Monts

Re-recording Mixer – Kent Sparling

Re-recording Mixer – Gary Rizzo, CAS

Re-recording Mixer – Zach Martin

Scoring Mixer – Jeff Beal

Foley Mixer – Jason Butler


Eric Clapton: Life in 12 Bars

Re-recording Mixer – Tim Cavagin

Re-recording Mixer – William Miller

ADR Mixer – Adam Mendez, CAS

Gaga: Five Feet Two

Re-recording Mixer – Jonathan Wales, CAS

Re-recording Mixer – Jason Dotts

Jane

Production Mixer – Lee Smith

Re-recording Mixer – David E. Fluhr, CAS

Re-recording Mixer – Warren Shaw

Scoring Mixer – Derek Lee

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Ryan Maguire

Long Strange Trip

Production Mixer – David Silberberg

Re-recording Mixer – Bob Chefalas

Re-recording Mixer – Jacob Ribicoff

 

Television Movie Or Mini-Series

Big Little Lies: “You Get What You Need”

Production Mixer – Brendan Beebe, CAS

Re-recording Mixer – Gavin Fernandes, CAS

Re-recording Mixer – Louis Gignac

Black Mirror: “USS Callister”

Production Mixer – John Rodda, CAS

Re-recording Mixer – Tim Cavagin


Re-recording Mixer – Dafydd Archard

Re-recording Mixer – Will Miller

ADR Mixer – Nick Baldock

Foley Mixer – Sophia Hardman

Fargo: “The Narrow Escape Problem”

Production Mixer – Michael Playfair, CAS

Re-recording Mixer – Kirk Lynds, CAS

Re-recording Mixer – Martin Lee

Scoring Mixer – Michael Perfitt

Sherlock: “The Lying Detective”

Production Mixer – John Mooney, CAS

Re-recording Mixer – Howard Bargroff

Scoring Mixer – Nick Wollage

ADR Mixer – Peter Gleaves, CAS

Foley Mixer – Jamie Talbutt

Twin Peaks: “Gotta Light?”

Production Mixer – Douglas Axtell

Re-recording Mixer – Dean Hurley

Re-recording Mixer – Ron Eng

 

Television Series – 1-Hour

Better Call Saul: “Lantern”

Production Mixer – Phillip W. Palmer, CAS

Re-recording Mixer – Larry B. Benjamin, CAS

Re-recording Mixer – Kevin Valentine

ADR Mixer – Matt Hovland

Foley Mixer – David Michael Torres, CAS

Game of Thrones: “Beyond the Wall”


Production Mixer – Ronan Hill, CAS

Production Mixer – Richard Dyer, CAS

Re-recording Mixer – Onnalee Blank, CAS

Re-recording Mixer – Mathew Waters, CAS

Foley Mixer – Brett Voss, CAS

Stranger Things: “The Mind Flayer”

Production Mixer – Michael P. Clark, CAS

Re-recording Mixer – Joe Barnett

Re-recording Mixer – Adam Jenkins

ADR Mixer – Bill Higley, CAS

Foley Mixer – Anthony Zeller, CAS

The Crown: “Misadventure”

Production Mixer – Chris Ashworth

Re-recording Mixer – Lee Walpole

Re-recording Mixer – Stuart Hilliker

Re-recording Mixer – Martin Jensen

ADR Mixer – Rory de Carteret

Foley Mixer – Philip Clements

The Handmaid’s Tale: “Offred”

Production Mixer – John J. Thomson, CAS

Re-recording Mixer – Lou Solakofski

Re-recording Mixer – Joe Morrow

Foley Mixer – Don White

 

Television Series – 1/2 Hour

Ballers: “Yay Area”

Production Mixer – Scott Harber, CAS

Re-recording Mixer – Richard Weingart, CAS

Re-recording Mixer – Michael Colomby, CAS

Re-recording Mixer – Mitch Dorf

Black-ish: “Juneteenth, The Musical”

Production Mixer – Tom N. Stasinis, CAS

Re-recording Mixer – Peter J. Nusbaum, CAS

Re-recording Mixer – Whitney Purple

Modern Family: “Lake Life”

Production Mixer – Stephen A. Tibbo, CAS

Re-recording Mixer – Dean Okrand, CAS

Re-recording Mixer – Brian R. Harman, CAS

Silicon Valley: “Hooli-Con”

Production Mixer – Benjamin A. Patrick, CAS

Re-recording Mixer – Elmo Ponsdomenech

Re-recording Mixer – Todd Beckett

Veep: “Omaha”

Production Mixer – William MacPherson, CAS

Re-recording Mixer – John W. Cook II, CAS

Re-recording Mixer – Bill Freesh, CAS

 

Television Non-Fiction, Variety Or Music Series Or Specials

American Experience: “The Great War – Part 3”

Production Mixer – John Jenkins

Re-Recording Mixer – Ken Hahn

Anthony Bourdain: Parts Unknown: “Oman”

Re-Recording Mixer – Benny Mouthon, CAS


Deadliest Catch: “Last Damn Arctic Storm”

Re-Recording Mixer – John Warrin

Rolling Stone: “Stories from the Edge”

Production Mixer – David Hocs

Production Mixer – Tom Tierney

Re-Recording Mixer – Tom Fleischman, CAS

Who Killed Tupac?: “Murder in Vegas”

Production Mixer – Steve Birchmeier

Re-Recording Mixer – John Reese

 

Nominations For Outstanding Product – Production

DPA – DPA Slim

Lectrosonics – Duet Digital Wireless Monitor System

Sonosax – SX-R4+

Sound Devices – MixPre-10T Recorder

Zaxcom – ZMT3-Phantom

 

Nominations For Outstanding Product – Post Production

Dolby – Dolby Atmos Content Creation Tools

FabFilter – Pro Q2 Equalizer

Exponential Audio – R4 Reverb

iZotope – RX 6 Advanced

Todd-AO – Absentia DX

The Awards will be presented at a ceremony on February 24 at the Omni Los Angeles Hotel at California Plaza. This year’s CAS Career Achievement Award will be presented to re-recording mixer Anna Behlmer, the CAS Filmmaker Award will be given to Joe Wright and the Edward J. Greene Award for the Advancement of Sound will be presented to Tomlinson Holman, CAS. The Student Recognition Award winner will also be named and will receive a cash prize.

Main Photo: Wonder Woman


Mixing the sounds of history for Marshall

By Jennifer Walden

Director Reginald Hudlin’s courtroom drama Marshall tells the story of Thurgood Marshall (Chadwick Boseman) during his early career as a lawyer. The film centers on a case Marshall took in Connecticut in the early 1940s. He defended a black chauffeur named Joseph Spell (Sterling K. Brown) who was charged with attempted murder and sexual assault of his rich, white employer Eleanor Strubing (Kate Hudson).

At that time, racial discrimination and segregation were widespread even in the North, and Marshall helped to shed light on racial inequality by taking on Spell’s case and making sure he got a fair trial. It’s a landmark court case that is not only of huge historical consequence but is still relevant today.

Mixers Anna Behlmer and Craig Mann

“Marshall is so significant right now with what’s happening in the world,” says Oscar-nominated re-recording mixer Anna Behlmer, who handled the effects on the film. “It’s not often that you get to work on a biographical film of someone who lived and breathed and did amazing things as far as freedom for minorities. Marshall began the NAACP [Legal Defense Fund] and argued Brown v. Board of Education to stop the segregation of the schools. So, in that respect, I felt the weight and the significance of this film.”

Oscar-winning supervising sound editor/re-recording mixer Craig Mann handled the dialogue and music. Behlmer and Mann mixed Marshall in 5.1 surround on a Euphonix System 5 console on Stage 2 at Technicolor at Paramount in Hollywood.

In the film, crowds gather on the steps outside the courthouse — a mixture of supporters and opponents shouting their opinions on the case. When dealing with shouting crowds in a film, Mann likes to record the loop group for those scenes outside. “We recorded in Technicolor’s backlot, which gives a nice slap off all the buildings,” says Mann, who miked the group from two different perspectives to capture the feeling that they’re actually outside. For the close-mic rig, Mann used an L-C-R setup with two Schoeps CMC641s for left and right and a CMIT 5U for center, feeding into a Tascam HS-P82 8-channel recorder.

“We used the CMIT 5U mic because that was the production boom mic and we knew we’d be intermingling our recordings with the production sound, because they recorded some sound on the courthouse stairs,” says Mann. “We matched that up so that it would anchor everything in the center.”

For the distant rig, Mann went with a Sanken CSS-5 set to record in stereo, feeding a Sound Devices 722. Since they were running two setups simultaneously, Mann says they beeped everyone with a bullhorn to get slate sync for the two rigs. Then to match the timing of the chanting with production sound, they had a playback rig with eight headphone feeds out to chosen leaders from the 20-person loop group. “The people wearing headphones could sync up to the production chanting and those without headphones followed along with the people who had them on.”
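
For readers who want to see how that kind of two-rig alignment works in principle, here is a minimal Python sketch that lines two recordings up on a shared slate beep using cross-correlation. The file names, the mono assumption and the 10-second search window are illustrative, not details from the Marshall session.

```python
import numpy as np
import soundfile as sf
from scipy.signal import correlate

# Hypothetical mono WAVs from the two rigs, same sample rate assumed.
close_rig, sr = sf.read("close_rig_center.wav")
distant_rig, sr2 = sf.read("distant_rig_left.wav")
assert sr == sr2, "both rigs must share a sample rate"

# Search only the head of each file, where the slate beep is assumed to land.
window = int(10 * sr)
corr = correlate(close_rig[:window], distant_rig[:window], mode="full")
lag = int(np.argmax(np.abs(corr))) - (window - 1)

# Positive lag: the distant rig started later, so pad its head with silence;
# negative lag: it started earlier, so trim the surplus samples.
if lag > 0:
    aligned = np.concatenate([np.zeros(lag), distant_rig])
else:
    aligned = distant_rig[-lag:]

sf.write("distant_rig_left_aligned.wav", aligned, sr)
```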

Inside the courtroom, the atmosphere is quiet and tense. Mann recorded the loop group (inside the studio this time) reacting as non-verbally as possible. “We wanted to use the people in the gallery as a tool for tension. We do all of that without being too heavy handed, or too hammy,” he says.

Sound Effects
On the effects side, the Foley — provided by Foley artist John Sievert and his team at JRS Productions in Toronto — was a key element in the courtroom scenes. Each chair creak and paper shuffle plays to help emphasize the drama. Behlmer references a quiet scene in which Thurgood is arguing with his other attorney defending the case, Sam Friedman (Josh Gad). “They weren’t arguing with their voices. Instead, they were shuffling papers and shoving things back and forth. The defendant even asks if everything is ok with them. Those sounds helped to convey what was going on without them speaking,” she says.

You can hear the chair creak as Judge Foster (James Cromwell) leans forward and raises an eyebrow and hear people in the gallery shifting in their seats as they listen to difficult testimony or shocking revelations. “Something as simple as people shifting on the bench to underscore how uncomfortable the moment was, those sounds go a long way when you do a film like this,” says Behlmer.

During the testimony, there are flashback sequences that illustrate each person’s perception of what happened during the events in question. The flashback effect is partially created through the picture (the flashbacks are colored differently) and partially through sound. Mann notes that early on, they made the decision to omit most of the sounds during the flashbacks so that the testimony wouldn’t be overshadowed.

“The spoken word was so important,” adds Behlmer. “It was all about clarity, and it was about silence and tension. There were revelations in the courtroom that made people gasp and then there were uncomfortable pauses. There was a delicacy with which this mix had to be done, especially with regards to Foley. When a film is really quiet and delicate and tense, then every little nuance is important.”

Away from the courthouse, the film has a bit of fun. There’s a jazz club scene in which Thurgood and his friends cut loose for the evening. A band and a singer perform on stage to a packed club. The crowd is lively. Men and women are talking and laughing and there’s the sound of glasses clinking. Behlmer mixed the crowds by following the camera movement to reinforce what’s on-screen.

Music
On the music side, Mann’s challenge was to get the brass — the trumpet and trombone — to sit in a space that didn’t interfere too much with the dialogue. On the other hand, Mann still wanted the music to feel exciting. “We had to get the track all jazz-clubbed up. It was about finding a reverb that was believable for the space. It was about putting the vocals and brass upfront and having the drums and bass be accompaniment.”

Having the stems helped Mann not only mix the music against the dialogue but also fit the music to the image on-screen. During the performance, the camera is close-up and sweeping along the band. Mann used the music stems to pan the instruments to match the scene. The shot cuts away from the performance to Thurgood and his friends at a table in the back of the club. Using the stems, Mann could duck out of the singer’s vocals and other louder elements to make way for the dialogue. “The music was very dynamic. We had to be careful that it didn’t interfere too much with the dialogue, but at the same time we wanted it to play.”
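
As a rough illustration of that kind of stem ducking (the general technique, not Mann’s console moves), here is a short Python sketch that lowers a music stem whenever a dialogue track crosses a level threshold, with smoothed attack and release so the change does not pump. The threshold, reduction and timing values are placeholders.

```python
import numpy as np

def duck_stem(stem, dialogue, sr, threshold_db=-45.0, reduction_db=-8.0,
              attack_s=0.05, release_s=0.4):
    """Duck `stem` under `dialogue` (both mono arrays of equal length)."""
    hop = int(0.02 * sr)  # ~20 ms level-measurement hops
    n_hops = len(dialogue) // hop
    rms = np.array([np.sqrt(np.mean(dialogue[i * hop:(i + 1) * hop] ** 2) + 1e-12)
                    for i in range(n_hops)])
    active = 20 * np.log10(rms) > threshold_db

    # Target gain per hop: unity, or a fixed reduction while dialogue is present.
    target = np.where(active, 10 ** (reduction_db / 20), 1.0)

    # One-pole smoothing: quick attack into the duck, slower release out of it.
    gain = np.empty_like(target)
    g = 1.0
    for i, t in enumerate(target):
        tau = attack_s if t < g else release_s
        alpha = np.exp(-hop / (tau * sr))
        g = alpha * g + (1 - alpha) * t
        gain[i] = g

    # Expand the hop-rate gain to sample rate and apply it to the stem.
    gain_samples = np.repeat(gain, hop)
    gain_samples = np.pad(gain_samples, (0, len(stem) - len(gain_samples)),
                          mode="edge")
    return stem * gain_samples
```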

On the score, Mann used Exponential Audio’s R4 reverb to set the music back into the mix. “I set it back a bit farther than I normally would have just to give it some space, so that I didn’t have to turn it down for dialogue clarity. It got it to shine but it was a little distant compared to what it was intended to be.”

Behlmer and Mann feel the mix was pretty straightforward. Their biggest obstacle was the schedule. The film had to be mixed in just ten days. “I didn’t even have pre-dubs. It was just hang and go. I was hearing everything for the first time when I sat down to mix it — final mix it,” explains Behlmer.

With Mann working the music and dialogue faders, co-supervising sound editor Bruce Tanis was supplying Behlmer with elements she needed during the final mix. “I would say Bruce was my most valuable asset. He’s the MVP of Marshall for the effects side of the board,” she says.

On the dialogue side, Mann says his gear MVP was iZotope RX 6. With so many quiet moments, the dialogue was exposed. It played prominently, without music or busy backgrounds to help hide any flaws. And the director wanted to preserve the on-camera performances so ADR was not an option.

“We tried to use alts to work our way out of a few problems, and we were successful. But there were a few shots in the courtroom that began as tight shots on the boom and then cut wide, so the boom had to pull back and we had to jump onto the lavs there,” concludes Mann. “Having iZotope to help tie those together, so that the cut was imperceptible, was key.”
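
To show the general idea behind tying a lav to a boom (only the concept; this is not iZotope’s algorithm), here is a small Python sketch that derives a static matching EQ curve from the long-term spectra of the two mics and applies it to the lav. The FFT size and the 6 dB limit are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def match_lav_to_boom(lav, boom, sr, n_fft=2048, max_db=6.0):
    """Derive a smooth, static matching curve from long-term spectra and
    apply it to the lav so it cuts against the boom more cleanly."""
    _, _, L = stft(lav, sr, nperseg=n_fft)
    _, _, B = stft(boom, sr, nperseg=n_fft)

    lav_spec = np.mean(np.abs(L), axis=1) + 1e-9    # long-term lav spectrum
    boom_spec = np.mean(np.abs(B), axis=1) + 1e-9   # long-term boom spectrum

    # Gain curve that pushes the lav toward the boom, limited to +/- max_db
    # so the voice is never carved up just to chase the target.
    gain_db = np.clip(20 * np.log10(boom_spec / lav_spec), -max_db, max_db)
    gain = 10 ** (gain_db / 20)

    # Apply the same curve to every lav frame and resynthesize.
    _, matched = istft(L * gain[:, None], sr, nperseg=n_fft)
    return matched[:len(lav)]
```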


Jennifer Walden is a NJ-based audio engineer and writer. Follow her on Twitter @audiojeney.

Blade Runner 2049’s dynamic and emotional mix

By Jennifer Walden

“This film has more dynamic range than any movie we’ve ever mixed,” explains re-recording mixer Doug Hemphill of the Blade Runner 2049 soundtrack. He and re-recording mixer Ron Bartlett, from Formosa Group, worked with director Denis Villeneuve to make sure the audio matched the visual look of the film. From the pounding sound waves of Hans Zimmer and Benjamin Wallfisch’s score to the overwhelming wash of Los Angeles’s street-level soundscape, there’s massive energy in the film’s sonic peaks.

L-R: Ron Bartlett, Denis Villeneuve, Joe Walker, Ben Wallfisch and Doug Hemphill. Credit: Clint Bennett

The first time K (Ryan Gosling) arrives in Los Angeles in the film, the audience is blasted with a Vangelis-esque score that is reminiscent of the original Blade Runner, and that was ultimately the goal there — to envelop the audience in the Blade Runner experience. “That was our benchmark for the biggest, most enveloping sound sequence — without being harsh or loud. We wanted the audience to soak it in. It was about filling out the score, using all the elements in Hans Zimmer’s and Ben Wallfisch’s arsenal there,” says Bartlett, who handled the dialogue and music in the mix.

He and Villeneuve went through a wealth of musical elements — all of which were separated so Villeneuve could pick the ones he liked. His preference gravitated toward the analog synth sounds, like the Yamaha CS-80, which composer Vangelis used in his 1982 Blade Runner composition. “We featured those synth sounds throughout the movie,” says Bartlett. “I played with the spatial aspects, spreading certain elements into the room to envelop you in the score. It was very immersive that way.”

Bartlett notes that initially there were sounds from the original Blade Runner in their mix, like huge drum hits from the original score that were converted into 7.1 versions by supervising sound editor Mark Mangini at Formosa Group. Bartlett used those drum hits as punctuation throughout the film, for scene changes and transitions. “Those hits were everywhere. Actually, they’re the first sound in the movie. Then you can hear those big drum hits in the Vegas walk. That Vegas walk had another score with it, but we kept stripping it away until we were down to just those drum hits. It’s so dramatic.”

But halfway into the final mix for Blade Runner 2049, Mangini phoned Bartlett to tell him that the legal department said they couldn’t use any of those sounds from the original film. They’d need to replace them immediately. “Since I’m a percussionist, Mark asked if I could remake the drum hits. I stayed up until 3am and redid them all in my studio in 7.1, and then brought them in and replaced them throughout the movie. Mark had to make all these new spinner sounds and replace those in the film. That was an interesting moment,” reveals Bartlett.

Sounds of the City
Los Angeles 2049 is a multi-tiered city. Each level offers a different sonic experience. The zen-like prayer that’s broadcast at the top level gradually transforms into a cacophony the closer one gets to street-level. Advertisements, announcements, vehicles, music from storefronts and vending machine sounds mix with multi-language crowds — there’s Russian, Vietnamese, Korean, Japanese, and the list goes on. The city is bursting with sound, and Hemphill enhanced that experience by using Cargo Cult’s Spanner on the crowd effects during the scene where K is sitting outside of Bibi’s Bar to put the crowds around the theater and “give the audience a sense of this crush of humanity,” he says.

The city experience could easily be chaotic, but Hemphill and Bartlett made careful choices on the stage to “rack the focus” — determining for the audience what they should be listening to. “We needed to create the sense that you’re in this overpopulated city environment, but it still had to make sense. The flow of the sound is like ‘musique concrète.’ The sounds have a rhythm and movement that’s musical. It’s not random. There’s a flow,” explains Hemphill, who has an Oscar for his work on The Last of the Mohicans.

Bartlett adds that their goal was to keep a sense of clarity as the camera traveled through the street scene. If there was a big, holographic ad in the forefront, they’d focus on that, and as the scene panned away another sound would drive the mix. “We had to delete some of the elements and then move sounds around. It was a difficult scene and we took a long time on it but we’re happy with the clarity.”

On the quiet end of the spectrum, the film’s soundtrack shines. Spaces are defined with textural ambiences and handcrafted reverbs. Bartlett worked with a new reverb called DSpatial created by Rafael Duyos. “Mark Mangini and I helped to develop DSpatial. It’s a very unique reverb,” says Bartlett.

According to the website, DSpatial Reverb is a space modeler and renderer that offers 48 decorrelated outputs. It doesn’t use recorded impulse responses; instead it uses modeled IRs. This allows the user to select and tweak a series of parameters, like surface texture and space size, to model the acoustic and physical characteristics of any room. “It’s a decorrelated reverb, meaning you can add as many channels as you like and pan them into every Dolby Atmos speaker that is in the room. That wasn’t the only reverb we used, but it was the main one we used in specific environments in the film,” says Bartlett.
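
That description maps onto a familiar DSP idea: synthesize each impulse response from parameters instead of sampling a room, and give every output its own decorrelated IR. The Python sketch below shows only that general idea with made-up parameters; it is not DSpatial’s implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def modeled_ir(sr, rt60=1.8, predelay_s=0.02, length_s=3.0, seed=0):
    """One synthetic IR: exponentially decaying noise, parameterized by a
    decay time and a pre-delay standing in for 'space size'."""
    rng = np.random.default_rng(seed)
    n = int(length_s * sr)
    t = np.arange(n) / sr
    envelope = 10 ** (-3.0 * t / rt60)          # -60 dB after rt60 seconds
    ir = rng.standard_normal(n) * envelope
    ir = np.concatenate([np.zeros(int(predelay_s * sr)), ir])
    return ir / np.max(np.abs(ir))

def decorrelated_returns(dry, sr, n_outputs=8, **params):
    """Render the same dry signal through n_outputs IRs that differ only by
    their noise seed, giving decorrelated feeds to spread around the room."""
    return [fftconvolve(dry, modeled_ir(sr, seed=i, **params), mode="full")
            for i in range(n_outputs)]
```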

In combination with DSpatial, Bartlett used Audio Ease’s Altiverb, FabFilter reverbs and Cargo Cult’s Slapper delay to help create the multifaceted reflections that define the spaces on-screen so well. “We tried to make each space different,” says Bartlett. “We tried to evoke an emotion through the choices of reverbs and delays. It was never just one reverb or delay. I used two or three. It was very interesting creating those textures and creating those rooms.”

For example, in the Wallace Corporation building, the private office of Niander Wallace (Jared Leto) is a cold, lonely space. Water surrounds a central platform; reflections play on the imposing stone walls. “The way that Roger Deakins lit it was just stunning,” says Bartlett. “It really evoked a cool emotion. That’s what is so intangible about what we do, creating those emotions out of sound.” In addition to DSpatial, Altiverb and FabFilter reverbs, he used Cargo Cult’s Slapper delay, which “added a soft rolling, slight echo to Jared Leto’s voice that made him feel a little more God-like. It gave his voice a unique presence without being distracting.”
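
A plain feedback delay along those lines is simple to sketch in Python; the 90 ms time, feedback amount and wet level below are guesses for illustration, not the Slapper settings used on the film.

```python
import numpy as np

def slap_delay(voice, sr, delay_s=0.09, feedback=0.25, wet=0.2):
    """Soft, rolling echo: a single delay line fed back on itself,
    mixed quietly under the dry voice."""
    d = int(delay_s * sr)
    out = np.copy(voice).astype(float)
    buf = np.zeros(len(voice) + d)
    for i, x in enumerate(voice):
        echo = buf[i]                      # what comes back out of the line
        buf[i + d] += x + feedback * echo  # feed input plus echo back in
        out[i] += wet * echo               # blend the echo under the dry voice
    return out
```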

Another stunning example of Bartlett’s reverb work was K’s entrance into Rick Deckard’s (Harrison Ford) casino hideout. The space is dead quiet; then K opens the door and the sound rings out and slowly dissipates. It conveys the feeling that this is a vast, isolated, and empty space. “It was a combination of three reverbs and a delay that made that happen, so the tail had a really nice shine to it,” says Bartlett.

One of the most difficult rooms to find artistically, says Bartlett, was that of the memory maker, Dr. Ana Stelline (Carla Juri). “Everyone had a different idea of what that dome might sound like. We experimented with four or five different approaches to find a good place with that.”

The reverbs that Bartlett creates are never static. They change to fit the camera perspective. Bartlett needed several different reverb and delay processing chains to define how Dr. Stelline’s voice would react in the environment. For example, “There are some long shots, and I had a longer, more distant reverb. I bled her into the ceiling a little bit in certain shots so that in the dome it felt like the sound was bouncing off the ceiling and coming down at you. When she gets really close to the glass, I wanted to get that resonance of her voice bouncing off of the glass. Then when she’s further in the dome, creating that birthday memory, there is a bit broader reverb without that glass reflection in it,” he says.
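
One way to picture that per-shot behavior is as an automation value driving a blend of pre-rendered reverb returns, as in the hedged Python sketch below. The gain laws are arbitrary; in practice this kind of move is ridden by hand on the console.

```python
import numpy as np

def perspective_blend(dry, tight_verb, broad_verb, distance):
    """Mix a dry voice against a tight and a broad reverb return according
    to a 0..1 shot-distance value: close-ups stay mostly dry, long shots
    lean on the longer, more distant reverb."""
    d = float(np.clip(distance, 0.0, 1.0))
    n = min(len(dry), len(tight_verb), len(broad_verb))
    return ((1.0 - 0.4 * d) * dry[:n]           # never fully lose the voice
            + 0.5 * (1.0 - d) * tight_verb[:n]  # tight room for close shots
            + 0.6 * d * broad_verb[:n])         # broad dome for long shots
```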

On K’s side of the glass, the reverb is tighter to match the smaller dimensions and less reflective characteristics of that space. “The key to that scene was to not be distracting while going in and out of the dome, from one side of the glass to the other,” says Bartlett. “I had to treat her voice a little bit so that it felt like she was behind the glass, but if she was way too muffled it would be too distracting from the story. You have to stay with those characters in the story, otherwise you’re doing a disservice by trying to be clever with your mixing.

“The idea is to create an environment so you don’t feel like someone mixed it. You don’t want to smell the mixing,” he continues. “You want to make it feel natural and cool. If we can tell when we’ve made a move, then we’ll go back and smooth that out. We try to make it so you can’t tell someone’s mixing the sound. Instead, you should just feel like you’re there. The last thing you want to do is to make something distracting. You want to stay in the story. We are all about the story.”

Mixing Tools
Bartlett and Hemphill mixed Blade Runner 2049 at Sony Pictures Post in the William Holden Theater using two Avid S6 consoles running Avid Pro Tools 12.8.2, which features complete Dolby Atmos integration. “It’s nice to have Atmos panners on each channel in Pro Tools. You just click on the channel and the panner pops up. You don’t want to go to just one panner with one joystick all the time so it was nice to have it on each channel,” says Bartlett.

Hemphill feels the main benefit of having the latest gear — the S6 consoles and the latest version of Pro Tools — is that it gives them the ability to carry their work forward. “In times past, before we had this equipment and this level of Pro Tools, we would do temp dubs and then we would scrap a lot of that work. Now, we are working with main sessions all the way from the temp mix through to the final. That’s very important to how this soundtrack was created.”

For instance, the dialogue required significant attention due to the use of practical effects on set, like weather machines for rain and snow. All the dialogue work they did during the temp dubs was carried forward into the final mix. “Production sound mixer Mac Ruth did an amazing job while working in those environments,” explains Bartlett. “He gave us enough to work with and we were able to use iZotope RX 6 to take out noise that was distracting. We were careful not to dig into the dialogue too much because when you start pulling out too many frequencies, you ruin the timbre and quality of the dialogue — the humanness.”
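
As an illustration of that “don’t dig in too much” principle (the concept only, not iZotope’s algorithm), here is a Python sketch of spectral gating with a hard cap on how many dB it will ever pull out of any bin. The room-tone clip and the 10 dB cap are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def gentle_denoise(audio, sr, noise_clip, max_reduction_db=10.0, n_fft=2048):
    """Broadband noise reduction with a reduction ceiling, so the timbre of
    the voice is never carved away in pursuit of silence."""
    # Noise profile from a dialogue-free clip (room tone between lines).
    _, _, noise_frames = stft(noise_clip, sr, nperseg=n_fft)
    noise_profile = np.mean(np.abs(noise_frames), axis=1, keepdims=True)

    _, _, frames = stft(audio, sr, nperseg=n_fft)
    mag = np.abs(frames)

    # Spectral-subtraction gain, then a floor on how far any bin can be cut.
    gain = np.clip((mag - noise_profile) / (mag + 1e-9), 0.0, 1.0)
    gain = np.maximum(gain, 10 ** (-max_reduction_db / 20))

    _, cleaned = istft(frames * gain, sr, nperseg=n_fft)
    return cleaned[:len(audio)]
```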

One dialogue-driven scene that made a substantial transformation from temp dub to final mix was the underground sequence in which Freysa (Hiam Abbass) makes a revelation about the replicant child. “The actress was talking in this crazy accent and it was noisy and hard to understand what was happening. It’s a very strong expositional moment in the movie. It’s a very pivotal point,” says Bartlett. They looped the actress for that entire scene and worked to get her ADR performance to sound natural in the context of the other sounds. “That scene came such a long way, and it really made the movie for me. Sometimes you have to dig a little deeper to tell the story properly but we got it. When K sits down in the chair, you feel the weight. You feel that he’s crushed by that news. You really feel it because the setup was there.”

Blade Runner 2049 is ultimately a story that questions the essence of human existence. While equipment and technique were an important part of the post process, in the end it was all about conveying the emotion of the story through the soundtrack.

“With Denis [Villeneuve], it’s very much feel-based. When you hear a sound, it brings to mind memories immediately. Denis is the type of director that is plugged into the emotionality of sound usage. The idea more than anything else is to tell the story, and the story of this film is what it means to be a human being. That was the fuel that drove me to do the best possible work that I could,” concludes Hemphill.


Jennifer Walden is a NJ-based writer and audio engineer. Follow her on Twitter @audiojeney.