
Capturing realistic dialogue for The Front Runner

By Mel Lambert

Early on in his process, The Front Runner director Jason Reitman asked frequent collaborator and production sound mixer Steve Morrow, CAS, to join the production. “It was maybe inevitable that Jason would ask me to join the crew,” says Morrow, who has worked with the director on Labor Day, Up in the Air and Thank You for Smoking. “I have been part of Jason’s extended family for at least 10 years — having worked with his father Ivan Reitman on Draft Day — and know how he likes to work.”

Steve Morrow

This Sony Pictures film was co-written by Reitman, Matt Bai and Jay Carson, and based on Bai’s book, “All the Truth Is Out.” The Front Runner follows the rise and fall of Senator Gary Hart during his unsuccessful run for the 1988 Democratic presidential nomination, when he was famously caught having an affair with the much younger Donna Rice. Despite capturing the imagination of young voters and being considered the overwhelming front runner for the nomination, Hart saw his campaign derailed by the affair.

It stars Hugh Jackman as Gary Hart, Vera Farmiga as his wife Lee, J.K. Simmons as campaign manager Bill Dixon and Alfred Molina as the Washington Post’s executive editor, Ben Bradlee.

“From the first read-through of the script, I knew that we would be faced with some production challenges,” recalls Morrow, a 20-year industry veteran. “There were a lot of ensemble scenes with the cast talking over one another, and I knew from previous experience that Jason doesn’t like to rely on ADR. Not only is he really concerned about the quality of the sound we secure from the set — and gives the actors space to prepare — but Jason’s scripts are always so well-written that they shouldn’t need replacement lines in post.”

Ear Candy Post’s Perry Robertson and Scott Sanders, MPSE, served as co-supervising sound editors on the project, which was re-recorded on Deluxe Stage 2 — the former Glen Glenn Sound facility — with Chris Jenkins handling dialogue and music and Jeremy Peirson, CAS, overseeing sound effects. Sebastian Sheehan Visconti was sound effects editor.

With as many as two dozen actors in a busy scene, Morrow soon realized that he would have to mic all of the key campaign team members. “I knew that we were shooting a political film like [Alan J. Pakula’s] All the President’s Men or [Michael Ritchie’s] The Candidate, so I referred back to the multichannel techniques pioneered by Jim Webb and his high-quality dialogue recordings. I elected to use up to 18 radio mics for those ensemble scenes,” including Reitman’s long opening sequence in which the audience learns who the key participants are on the campaign trail, “while recording each actor on a separate track, together with a guide mono mix of the key participants for picture editor Stefan Grube.”

Reitman is well known for his films’ elaborate opening title sequences and often highly subjective narration from a main character. His motion pictures typically revolve around characters who are brashly self-confident, but then begin to rethink their lives and responsibilities. He is also reported to be a fan of ‘70s-style cinéma vérité, which uses a meandering camera and overlapping dialogue to draw the audience into an immersive reality. The Front Runner’s soundtrack is layered with dialogue, together with a constant hum of conversation — from the principals to the press and campaign staff. Since Bai and Carson have written political speeches, Reitman had them on set to ensure that conversations sounded authentic.

Even though there might be four or so key participants speaking in a scene, “Jason wants to capture all of the background dialogue between working press and campaign staff, for example,” Morrow continues.

“He briefed all of the other actors on what the scene was about so they could develop appropriate conversations and background dialogue while the camera roamed around the room. In other words, if somebody was on set they got a mic — one track per actor. In addition to capturing everything, Jason wanted me to have fun with the scene; he likes a solid mix for the crew, dailies and picture editorial, so I gave him the best I could get. And we always had the ability to modify it later in post production from the iso mic channels.”

Morrow recorded the pre-fader individual tracks between 10dB and 15dB lower than the main mix, “which I rode hot, knowing that we could go back and correct it in post. Levels on that main mix were within ±5dB most of the time,” he says. Assisting Morrow during the 40-day shoot, which took place in and around Atlanta and Savannah, were Collin Heath and Craig Dollinger, who also served as the boom operator on a handful of scenes.
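To put those numbers in context, here is a minimal sketch (illustrative figures only, not Morrow’s actual settings) of what those decibel offsets mean as linear amplitude scaling:

def db_to_linear(db):
    """Convert a decibel offset to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

# Iso tracks recorded 10-15dB below the main mix leave post plenty of headroom.
for pad_db in (-10, -15):
    print(f"{pad_db}dB pad -> x{db_to_linear(pad_db):.3f} amplitude")

# A mix ridden within +/-5dB swings between roughly 0.56x and 1.78x amplitude.
for ride_db in (-5, 5):
    print(f"{ride_db:+}dB ride -> x{db_to_linear(ride_db):.2f} amplitude")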

The mono production mix was also useful for the camera crew, says Morrow. “They sometimes had problems understanding the dramatic focus of a particular scene. In other words, ‘Where does my eye go?’ When I fed my mix to their headphones they came to understand which actors we were spotlighting from the script. This allowed them to follow that overview.”

Production Tools
Morrow used a Midas M32R digital console that features 16 rear-panel inputs and 16 more inputs via a stage box that connects to the M32R over a Cat-5 cable. The console provided pre-fader and mixed outputs to Morrow’s pair of 64-track Sound Devices 970 recorders — a main and a parallel backup — via Audinate Dante digital ports. “I also carried my second M32R mixer as a spare,” Morrow says. “I turned over the Compact Flash media at the end of each day’s shooting and retained the contents of the 970’s internal 1TB SSDs and external back-up drives until the end of post, just in case. We created maybe 30GB of data per recorder per day.”
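As a rough sanity check on that 30GB figure, the sketch below estimates one recorder’s daily data from track count, sample rate, bit depth and hours of actual rolling. The 48kHz/24-bit settings and the hours are assumptions for illustration, not confirmed session parameters:

def daily_storage_gb(tracks, hours_rolling, sample_rate=48_000, bit_depth=24):
    """Estimate uncompressed multitrack WAV data for one recorder per day."""
    bytes_per_second = tracks * sample_rate * (bit_depth // 8)
    return bytes_per_second * hours_rolling * 3600 / 1e9

# Assumed example: 18 iso tracks plus a guide mix, about three hours of rolling.
print(f"~{daily_storage_gb(tracks=19, hours_rolling=3):.1f} GB per recorder per day")

With those assumed settings, the estimate lands right around the 30GB per recorder per day that Morrow cites.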

Color coding helps Morrow mix dialogue more accurately.

For easy level checking, the two recorders with front-panel displays were mounted on Morrow’s production sound cart directly above his mixing console. “When I can, I color code the script to highlight the dialogue of key characters in specific scenes,” he says. “It helps me mix more accurately.”

RF transmitters comprised two dozen Lectrosonics SSM Micro belt-pack units — Morrow bought six or seven more for the film — linked to a bank of Lectrosonics Venue2 modular four-channel and three-channel VR receivers. “I used my collection of Sanken COS-11D miniature lavalier microphones for the belt packs. They are my go-to lavs with clean audio output and excellent performance. I also have some DPA lavaliers, if needed.”

With 20+ RF channels simultaneously in use within metropolitan centers, frequency coordination was an essential chore to ensure consistent operation for all radio systems. “The Lectrosonics Venue receivers can auto-assign radio-mic frequencies,” Morrow explains. “The best way to do this is to have everything turned off, and then one by one let the system scan the frequency spectrum. When it finds a good channel, you assign it to the first microphone and then repeat that process for the remaining transmitters. I try to keep up with FCC deliberations [on diminishing RF spectrum space], but realize that the companies that manufacture this equipment also need to be more involved. So, together, I feel good that we’ll have the separation we all need for successful shoots.”
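The scan-then-assign routine Morrow describes is essentially a greedy search. The sketch below is only an illustration of that workflow, not Lectrosonics’ actual algorithm; the frequencies, noise figures and spacing rule are invented for the example, and a real coordination pass also has to steer around intermodulation products, which this toy version ignores:

def assign_frequencies(candidates_mhz, noise_floor_dbm, max_noise_dbm=-95.0,
                       min_spacing_mhz=0.4, channels_needed=20):
    """Greedy one-by-one assignment, loosely mirroring a scan-and-assign workflow."""
    assigned = []
    for freq, noise in zip(candidates_mhz, noise_floor_dbm):
        if noise > max_noise_dbm:
            continue  # channel too noisy for reliable operation
        if any(abs(freq - used) < min_spacing_mhz for used in assigned):
            continue  # too close to a transmitter that is already on the air
        assigned.append(freq)
        if len(assigned) == channels_needed:
            break
    return assigned

# Hypothetical scan results (MHz) and measured noise floors (dBm).
scan = [541.2, 541.5, 542.1, 543.0, 543.3, 544.8, 546.0]
noise = [-110, -101, -88, -107, -104, -99, -112]
print(assign_frequencies(scan, noise, channels_needed=5))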

Morrow’s setup.

Morrow also made several location recordings on set. “I mounted a couple of lavaliers on bumpers to secure car-bys and other sounds for supervising sound editor Perry Robertson, as well as backgrounds in the house during a Labor Day gathering. We also recorded Vera Farmiga playing the piano during one scene — she is actually a classically-trained pianist — using a DPA Model 5099 microphone (which I also used while working on A Star Is Born). But we didn’t record much room tone, because we didn’t find it necessary.”

During scenes at a campaign rally, Morrow provided a small PA system that comprised a couple of loudspeakers mounted on a balcony and a vocal microphone on the podium. “We ran the system at medium levels, simply to capture the reverb and ambience of the auditorium,” he explains, “but not so much that it caused problems in post production.”

Summarizing his experience on The Front Runner, Morrow offers that Reitman and his producing partner, Helen Estabrook, bring a team spirit to their films. “The set is a highly collaborative environment. We all hang out with one another and share birthdays together. In my experience, Jason’s films are always from the heart. We love working with him 120%. The low point of the shoot is going home!”


Mel Lambert has been involved with production and post on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Sound Lounge Film+Television adds Atmos mixing, Evan Benjamin

Sound Lounge’s Film + Television division, which provides sound editorial, ADR and mixing services for episodic television, features and documentaries, is upgrading its main mix stage to support editing and mixing in the Dolby Atmos format.

Sound Lounge Film + Television division EP Rob Browning says that the studio expects to begin mixing in Dolby Atmos by the beginning of next year, which will allow it to target more high-end studio features. Sound Lounge is also installing a Dolby Atmos Mastering Suite, a custom hardware/software solution for preparing Dolby Atmos content for Blu-ray and streaming release.

It has also added veteran supervising sound editor, designer and re-recording mixer Evan Benjamin to its team. Benjamin is best known for his work in documentaries, including the feature doc RBG, about Supreme Court Justice Ruth Bader Ginsburg, as well as documentary series for Netflix, Paramount Network, HBO and PBS.

Benjamin is a 20-year industry veteran with credits on more than 130 film, television and documentary projects, including Paramount Network’s Rest in Power: The Trayvon Martin Story and HBO’s Baltimore Rising. Additionally, his credits include Time: The Kalief Browder Story, Welcome to Leith, Joseph Pulitzer: Man of the People and Moynihan.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial, a man pumping his own gas at a Sunoco station checks his phone. Birds chirp and traffic hums in the background when suddenly a robotic female voice comes from the pump itself, asking what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman’s voice coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my Soundminer server. I added some Foley sound effects of the man handling the gas pump too.
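FutzBox is a commercial plug-in, so the following is only a rough, generic approximation of the underlying technique: band-limiting a voice and adding gentle saturation so it reads as a small gas-pump speaker. The corner frequencies, drive amount and file names are arbitrary assumptions for the sketch:

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def futz_voice(infile, outfile, low_hz=350.0, high_hz=3500.0, drive=4.0):
    """Band-limit and softly saturate a voice to mimic a small, cheap speaker."""
    audio, rate = sf.read(infile)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold to mono
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=rate, output="sos")
    narrow = sosfilt(sos, audio)
    crunched = np.tanh(drive * narrow) / np.tanh(drive)  # gentle saturation
    sf.write(outfile, crunched, rate)

# futz_voice("vo_clean.wav", "vo_pump_speaker.wav")  # hypothetical file names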

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and didn’t match from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used its ambience match to create a seamless background ambience. RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.

Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24-by-13 projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music for brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design and engineering on projects for the Ad Council and on Resistance Radio, the Amazon Studios campaign for The Man in the High Castle. That work collectively won multiple Cannes Lions, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.

Cinema Audio Society sets next awards date and timeline

The Cinema Audio Society (CAS) will be holding its 55th Annual CAS Awards on Saturday, February 16, 2019 at the InterContinental Los Angeles Downtown in the Wilshire Grand Ballroom. The CAS Awards recognize outstanding sound mixing in film and television as well as outstanding products for production and post. Recipients for the CAS Career Achievement Award and CAS Filmmaker Award will be announced later in the year.

The InterContinental Los Angeles Downtown is a new venue for the awards. They were held at the Omni Los Angeles Hotel at California Plaza last year.

The timeline for the awards is as follows:
• Entry submission form will be available online on the CAS website on Thursday, October 11, 2018.
• Entry submissions are due online by 5:00pm PST on Thursday, November 15, 2018.
• Outstanding product entry submissions are due online by 5:00pm PST on Friday December 7, 2018.
• Nomination ballot voting begins online on Thursday, December 13, 2018.
• Nomination ballot voting ends online at 5:00pm PST on Thursday, January 3, 2019.
• Final nominees in each category will be announced on Tuesday, January 8, 2019.
• Final voting begins online on Thursday, January 24, 2019.
• Final voting ends online at 5:00pm PST on Wednesday, February 6, 2019.

 

Behind the Title: PlushNYC partner/mixer Mike Levesque, Jr.

NAME: Michael Levesque, Jr.

COMPANY: PlushNYC

CAN YOU DESCRIBE YOUR COMPANY?
We provide audio post production services.

WHAT’S YOUR JOB TITLE?
Partner/Mixer/Sound Designer

WHAT DOES THAT ENTAIL?
The foundation of it all for me is that I’m a mixer and a sound designer. I became a studio owner/partner organically because I didn’t want to work for someone else. The core of my role is giving my clients what they want from an audio post perspective. The other parts of my job entail managing the staff, working through technical issues, empowering senior employees to excel in their careers and coaching junior staff when given the opportunity.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Every day I find myself being the janitor in many ways! I’m a huge advocate of leading by example, and I feel that no task is too mundane for any team member to take on. So I don’t cast shade on picking up a mop or broom, and I also handle everything else above that. I’m part of a team, and everyone on the team participates.

During our latest facility remodel, I took a very hands-on approach. As a bit of a weekend carpenter, I naturally gravitate toward building things, and that was no different in the studio!

WHAT TOOLS DO YOU USE?
Avid Pro Tools. I’ve been operating on Pro Tools since 1997 and was one of the early adopters. Initially, I started out on analog ¼-inch tape and later moved to the digital editing system SSL ScreenSound. I’ve been using Pro Tools since its humble beginnings, and that is my tool of choice.

WHAT’S YOUR FAVORITE PART OF THE JOB?
For me, my favorite part about the job is definitely working with the clients. That’s where I feel I am able to put my best self forward. In those shoes, I have the most experience. I enjoy the conversation that happens in the room, the challenges that I get from the variety of projects and working with the creatives to bring their sonic vision to life. Because of the amount of time I spend in the studio with my clients, one of the great results, besides the work, is wonderful long-term friendships. You get to meet a lot of different people and experience a lot of different walks of life, and that’s incredibly rewarding for me.

WHAT’S YOUR LEAST FAVORITE?
We’ve been really lucky to have regular growth over the years, but the logistics of that can be challenging at times. Expansion in NYC is a constant uphill battle!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The train ride in. With no distractions, I’m able to get the most work done. It’s quiet and allows me to be able to plan my day out strategically while my clarity is at its peak. That way I can maximize my day and analyze and prioritize what I want to get done before the hustle and bustle of the day begins.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I weren’t a mixer/sound designer, I would likely be a general contractor or in a role where I was dealing with building and remodeling houses.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started when I was 19 and I knew pretty quickly that this was the path for me. When I first got into it, I wanted to be a music producer. Being a novice musician, it was very natural for me.

Borgata

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I recently worked on a large-scale project for Frito-Lay, a project for ProFlowers and Shari’s Berries for Valentine’s Day, a spot for Massage Envy and a campaign for the Broadway show Rocktopia. I’ve also worked on a number of projects for Vevo, including pieces for The World According To… series for artists — that includes a recent one with Jaden Smith. I also recently worked on a spot with SapientRazorfish New York for Borgata Casino that goes on a colorful, dreamlike tour of the casino’s app.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Back in the early 2000s, I mixed a DVD box set called Journey Into the Blues, a PBS film series from Martin Scorsese that won a Grammy for Best Historical Album and Best Album Notes.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– My cell phone to keep me connected to every aspect of life.
– My Garmin GPS Watch to help me analytically look at where I’m performing in fitness.
– Pro Tools to keep the audio work running!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m an avid triathlete, so personal wellness is a very big part of my life. Training daily is a really good stress reliever, and it allows me to focus both at work and at home with the kids. It’s my meditation time.

Michael Semanick: Mixing SFX, Foley for Star Wars: The Last Jedi

By Jennifer Walden

Oscar-winning re-recording mixer Michael Semanick from Skywalker Sound mixed the sound effects, Foley and backgrounds on Star Wars: The Last Jedi, which has earned an Oscar nomination for Sound Mixing.

Technically, this is not Semanick’s first experience with the Star Wars franchise — he’s credited as an additional mixer on Rogue One — but on The Last Jedi he was a key figure in fine-tuning the film’s soundtrack. He worked alongside re-recording mixers Ren Klyce and David Parker, and with director Rian Johnson, to craft a soundtrack that was bold and dynamic. (Look for next week’s Star Wars story, in which re-recording mixer Ren Klyce talks about his approach to mixing John Williams’ score.)

Michael Semanick

Recently, Semanick shared his story of what went into mixing the sound effects on The Last Jedi. He mixed at Skywalker in Nicasio, California, on the Kurosawa Stage.

You had all of these amazing elements — Skywalker’s effects, John Williams’ score and the dialogue. How did you bring clarity to what could potentially be a chaotic soundtrack?
Yes, there are a lot of elements that come in, and you have to balance these things. It’s easy on a film like this to get bombastic and assault the audience, but that’s one of the things that Rian didn’t want to do. He wanted to create dynamics in the track and get really quiet so that when it does get loud it’s not overly loud.

So when creating that I have to look at all of the elements coming in and see what we’re trying to do in each specific scene. I ask myself, “What’s this scene about? What’s this storyline? What’s the music doing here? Is that the thread that takes us to the next scene or to the next place? What are the sound effects? Do we need to hear these background sounds, or do we need just the hard effects?”

Essentially, it’s me trying to figure out how many frequencies are available and how much dialogue has to come through so the audience doesn’t lose the thread of the story. It’s about deciding when it’s right to feature the sound effects or take the score down to feature a big explosion and then bring the score back up.

It’s always a balancing act, and it’s easy to get overwhelmed and throw it all in there. I might need a line of dialogue to come through, so the backgrounds go. I don’t want to distract the audience. There is so much happening visually in the film that you can’t put sound on everything. Otherwise, the audience wouldn’t know what to focus on. At least that’s my approach to it.

How did you work with the director?
As we mixed the film with Rian, we found what types of sounds defined the film and what types of moments defined the film in terms of sound. For example, by the time you reach the scene when Vice Admiral Holdo (Laura Dern) jumps to hyperspace into the First Order’s fleet, everything goes really quiet. The sound there doesn’t go completely out — it feels like it goes out, but there’s sound. As soon as the music peaks, I bring in a low space tone. Well, if there was a tone in space, I imagine that is what it would sound like. So there is sound constantly through that scene, but the quietness goes on for a long time.

One of the great things about that scene was that it was always designed that way. While I noted how great that scene was, I didn’t really get it until I saw it with an audience. They became the soundtrack, reacting with gasps. I was at a screening in Seattle, and when we hit that scene you could hear that people were just stunned; one guy in the audience went, “Yeah!”

There are other areas in the film where we go extremely quiet or take the sound out completely. For example, when Rey (Daisy Ridley) and Kylo Ren (Adam Driver) first force-connect, the sound goes out completely… you only hear a little bit of their breathing. There’s one time when the force connection catches them off guard — when Kylo had just gotten done working out and Rey was walking somewhere — we took the sound completely out while she was still moving.

Rian loved it because when we were working on that scene we were trying to get something different. We used to have sound there, all the way through the scene. Then Rian said, “What happens if you just start taking some of the sounds out?” So, I started pulling sounds out and sure enough, when I got the sound all the way out — no music, no sounds, no backgrounds, no nothing — Rian was like, “That’s it! That just draws you in.” And it does. It pulls you into their moment. They’re pulled together even though they don’t want to be. Then we slowly brought it back in with their breathing, a little echo and a little footstep here or there. Having those types of dynamics worked into the film helped the scene at the end.

Rian shot and cut the picture so we could have these moments of quiet. It was already set up, visually and story-wise, to allow that to happen. When Rey goes into the mirror cave, it’s so quiet. You hear all the footsteps and the reverbs and reflections in there. The film lent itself to that.

What was the trickiest scene to mix in terms of the effects?
The moment Kylo Ren and Rey touch hands via the force connection. That was a real challenge. They’re together in the force connection, but they weren’t together physically. We were cutting back and forth from her place to Kylo Ren’s place. We were hearing her campfire and her rain. It was a very delicate balance between that and the music. We could have had the rain really loud and the music blasting, but Rian wanted the rain and fire to peel away as their hands were getting closer. It was so quiet and when they did touch there was just a bit of a low-end thump. Having a big sound there just didn’t have the intimacy that the scene demanded. It can be so hard to get the balance right to where the audience is feeling the same thing as the characters. The audience is going, “No, oh no.” You know what’s going to come, but we wanted to add that extra tension to it sonically. For me, that was one of the hardest scenes to get.

What about the action scenes?
They are tough because they take time to mix. You have to decide what you want to play. For example, when the ships are exploding as they’re trying to get away before Holdo rams her ship into the First Order’s, you have all of that stuff falling from the ceiling. We had to pick our moments. There’s all of this fire in the background and TIE fighters flying around, and you can’t hear them all or it will be a jumbled mess. I can mix those scenes pretty well because I just follow the story point. We need to hear this to go with that. We have to have a sound of falling down, so let’s put that in.

Is there a scene you had fun with?
The fight in Snoke’s (Andy Serkis) room, between Rey and Kylo Ren. That was really fun because it was like wham-bam, and you have the lightsaber flying around. In those moments, like when Rey throws the lightsaber, we drop the sound out for a split second so when Kylo turns it on it’s even more powerful.

That scene was the most fun, but the trickiest one was that force-touch scene. We went over it a hundred different ways, to just get it to feel like we were with them. For me, if the sound calls too much attention to itself, it’s pulling you out of the story, and that’s bad mixing. I wanted the audience to lean in and feel those hands about to connect. When you take the sound out and the music out, then it’s just two hands coming together slowly. It was about finding that balance to make the audience feel like they’re in that moment, in that little hut, and they’re about to touch and see into each other’s souls, so to speak. That was a challenge, but it was fun because when you get it, and you see the audience react, everyone feels good about that scene. I feel like I did something right.

What was one audio tool that you couldn’t live without on this mix?
For me, it was the AMS Neve DFC Gemini console. All the sounds came into that. The console was like an instrument that I played. I could bring any sound in from any direction, and I could EQ it and manipulate it. I could put reverb on it. I could give the director what he wanted. My editors were cutting the sound, but I had to have that console to EQ and balance the sounds. Sometimes it was about EQing frequencies out to make a sound fit better with other sounds. You have to find room for the sounds.

I could move around on it very quickly. I had Rian sitting behind me saying, “What if you roll back and adjust this or try that.” I could ease those faders up and down and hit it just right. I know how to use it so well that I could hear stuff ahead of what I was doing.

The Neve DFC was invaluable. I could take all the different sound formats and sample rates and it all came through the console, and in one place. It could blend all those sources together; it’s a mixing bowl. It brought all the sounds together so they could all talk to each other. Then I manipulated them and sent them out and that was the soundtrack — all driven by the director, of course.

Can you talk about working with the sound editor?
The editors are my right-hand people. They can shift things and move things and give me another sound. Maybe I need one with more mid-range because the one in there isn’t quite reading. We had a lot of that. Trying to get those explosions to work and to come through John Williams’ score, sometimes we needed something with more low-end and more thump or more crack. There was a handoff in some scenes.

On The Last Jedi, I had sound effects editor Jon Borland with me on the stage. Bonnie Wild had started the project and prepped a lot of the sounds for several reels, along with Jon and Ren Klyce, who oversaw the whole thing. But Jon was my go-to person on the stage. He did a great job. It was a bit of a daunting task, but Jon is young and wants to learn and gave it everything he had. I love that.

What format was the main mix?
Everything was done in Atmos natively, then we downmixed to 7.1 and 5.1 and all the other formats. We were very diligent about having the downmixed versions match the Atmos mix the best that they could.
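Dolby’s own renderer handles the Atmos fold-downs, so the following is only a generic sketch of what a channel-based 7.1-to-5.1 downmix involves: the side and rear surround pairs get summed into the 5.1 surrounds, here with an assumed -3dB on each contribution. The gains and channel layout are illustrative, not the deliverable spec used on the film:

import numpy as np

def downmix_71_to_51(ch):
    """Fold a dict of 7.1 channel arrays into 5.1 (generic example, not a Dolby spec).

    Expected keys: L, R, C, LFE, Lss, Rss (side surrounds), Lrs, Rrs (rear surrounds).
    """
    g = 10 ** (-3 / 20)  # assumed -3dB per surround contribution
    return {
        "L": ch["L"], "R": ch["R"], "C": ch["C"], "LFE": ch["LFE"],
        "Ls": g * (ch["Lss"] + ch["Lrs"]),
        "Rs": g * (ch["Rss"] + ch["Rrs"]),
    }

# One second of silence per channel at 48kHz, just to show the layout.
silent = {k: np.zeros(48_000) for k in ("L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs")}
print(sorted(downmix_71_to_51(silent)))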

Any final thoughts you’d like to share?
I’m so glad that Rian chose me to be part of the mix. This film was a lot of fun and a real collaborative effort. Rian is the one who really set that tone. He wanted to hear our ideas and see what we could do. He wasn’t sold on one thing. If something wasn’t working, he would try things out until it did. It was literally sorting out frequencies and getting transitions to work just right. Rian was collaborative, and that creates a room of collaboration. We wanted a great track for the audience to enjoy… a track that went with Rian’s picture.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Oscar Watch: The Shape (and sound) of Water

Post production sound mixers Christian Cooke and Brad Zoern, who are nominated (with production mixer Glen Gauthier) for their work on Fox Searchlight’s The Shape of Water, have sat side-by-side at mixing consoles for nearly a decade. The frequent collaborators, who handle mixing duties at Deluxe Toronto, faced an unusual assignment given that the film’s two lead characters never utter a single word of actual dialogue. In The Shape of Water, which has been nominated for 13 Academy Awards, Elisa (Sally Hawkins) is mute and the creature she falls in love with makes undefined sounds. This creative choice placed more than the usual amount of importance on the rest of the soundscape to support the story.

L-R: Nathan Robitaille, J. Miles Dale, Brad Zoern, director Guillermo del Toro, Christian Cooke, Nelson Ferreira, Filip Hosek, Cam McLauchlin, picture editor Sidney Wolinsky, Rob Hegedus, Doug Wilkinson.

Cooke, who focused on dialogue and music, and Zoern, who worked with effects, backgrounds and Foley, knew from the start that their work would need to fit into the unique and delicate tone that infused the performances and visuals. Their work began, as always, with pre-dubs followed by three temp mixes of five days each, which allowed for discussion and input from director Guillermo del Toro. It was at the premixes that the mixers got a feel for del Toro’s conception for the film’s soundtrack. “We were more literal at first with some of the sounds,” says Zoern. “He had ideas about blending effects and music. By the time we started on the five-week-long mix, we had a very clear idea about what he was looking for.”

The final mix took place in one of Deluxe Toronto’s five stages, which have identical acoustic qualities and the same Avid Pro Tools-based Harrison MP4D/Avid S6 hybrid console, JBL M2 speakers and Crown amps.

The mixers worked to shape sonic moments that do more than represent “reality”; they create mood and tension. This includes key moments such as the sound of a car’s windshield wipers that build in volume until they take over the track in the form of a metronome-like beat underlining the tension of the moment. One pivotal scene finds Richard Strickland (Michael Shannon) paying a visit to Zelda Fuller (Octavia Spencer). As Strickland speaks, Zelda’s husband Brewster (Martin Roach) watches television. “It was an actual mono track from a real show,” Cooke explains. “It starts out sounding roomy and distant as it would really have sounded. As the scene progresses, it expands, getting more prominent and spreading out around the speakers [for the 5.1 version]. By the end of the scene, the audio from the TV has become something totally different from what it was at the start, and then we melded it seamlessly into Alexandre Desplat’s score.”

Beyond the aesthetic work of building a sound mix, particularly one so fluid and expressionistic, post production mixers must also collaborate on a large number of technical decisions during the mix to ensure the elements have the right amount of emotional punch without calling attention to themselves. Individual sounds, even specific frequencies, vie for audience attention and the mixers orchestrate and layer them.

“It’s raining outside when they come into the room,” Zoern notes about the above scene. “We want to initially hear the sound of the rain to have a context for the scene. You never just want dialogue coming out of nowhere; it needs to live in a space. But then we pull that back to focus on the dialogue, and then the [augmented] audio from the TV gains prominence. During the final mix, Chris and I are always working together, side by side, to meld the hundreds of sounds the editors have built in a way that reflects the story and mood of the film.”

“We’re like an old married couple,” Cooke jokes. “We finish each other’s sentences. But it’s very helpful to have that kind of shorthand in this job. We’re blending so many pieces together and if people notice what we’ve done, we haven’t done our jobs.”

Super Bowl: Heard City’s audio post for Tide, Bud and more

By Jennifer Walden

New York audio post house Heard City put their collaborative workflow design to work on the Super Bowl ad campaign for Tide. Philip Loeb, partner/president of Heard City, reports that their facility is set up so that several sound artists can work on the same project simultaneously.

Loeb also helped to mix and sound design many of the other Super Bowl ads that came to Heard City, including ads for Budweiser, Pizza Hut, Blacture, Tourism Australia and the NFL.

Here, Loeb and mixer/sound designer Michael Vitacco discuss the approach and the tools that their team used on these standout Super Bowl spots.

Philip Loeb

Tide’s It’s a Tide Ad campaign via Saatchi & Saatchi New York
Is every Super Bowl ad really a Tide ad in disguise? A string of commercials touting products from beer to diamonds, and even a local ad for insurance, are interrupted by David Harbour (of Stranger Things fame). He declares that those ads are actually just Tide commercials, as everyone is wearing such clean clothes.

Sonically, what’s unique about this spot?
Loeb: These spots, four in total, involved sound design and mixing, as well as ADR. One of our mixers, Evan Mangiamele, conducted an ADR session with David Harbour, who was in Hawaii, and we integrated that into the commercial. In addition, we recorded a handful of different characters for the lead-ins for each of the different vignettes because we were treating each of those as different commercials. We had to be mindful of a male voiceover starting one and then a female voiceover starting another so that they were staggered.

There was one vignette for Old Spice, and since the ads were for P&G, we did get the Old Spice mnemonic and we did try something different at the end — with one version featuring the character singing the mnemonic and one of him whistling it. There were many different variations, and in the end we just wanted to get part of the mnemonic into the joke.

The challenge with the Tide campaign, in particular, was to make each of these vignettes feel like it was a different commercial and to treat each one as such. There’s an overall mix level that goes into that but we wanted certain ones to have a little bit more dynamic range than the others. For example, there is a cola vignette that’s set on a beach with people taking a selfie. David interrupts them by saying, “No, it’s a Tide ad.”

For that spot, we had to record a voiceover that was very loud and energetic to go along with a loud and energetic music track. That vignette cuts into the “personal digital assistant” (think Amazon’s Alexa) spot. We had to be very mindful of these ads flowing into each other while making it clear to the viewer that these were different commercials with different products, not one linear ad. Each commercial required its own voiceover, its own sound design, its own music track, and its own tone.

One vignette was about car insurance, featuring a mechanic in a white shirt under a car. That spot isn’t letterbox like the others; it’s 4:3 because it’s supposed to be a local ad. We made that vignette sound more like a local ad; it’s a little over-compressed, a little over-equalized and a little videotape-sounding. The music is mixed a little low. We wanted it to sound like the dialogue is really up front so as to get the message across, like a local advertisement.

What’s your workflow like?
Loeb: At Heard City, our workflow is unique in that we can have multiple mixers working on the same project simultaneously. This collaborative process makes our work much more efficient, and that was our original intent when we opened the company six years ago. The model came to us by watching the way that the bigger VFX companies work. Each artist takes a different piece of the project and then all of the work is combined at the end.

We did that on the Tide campaign, and there was no other way we could have done it due to the schedule. Also, we believe this workflow provides a much better product. One sound artist can be working specifically on the sound design while another can be mixing. So as I was working on mixing, Evan was flying in his sound design to me. It was a lot of fun working on it like that.

What tools helped you to create the sound?
One plug-in we’re finding to be very helpful is the iZotope Neutron. We put that on the master bus and we have found many settings that work very well on broadcast projects. It’s a very flexible tool.

Vitacco: The Neutron has been incredibly helpful overall in balancing out the mix. There are some very helpful custom settings that have helped to create a dynamic mix for air.

Tourism Australia Dundee via Droga5 New York
Danny McBride and Chris Hemsworth star in this movie-trailer-turned-tourism-ad for Australia. It starts out as a movie trailer for a new addition to the Crocodile Dundee film franchise — well, rather, a spoof of it. There’s epic music featuring a didgeridoo and title cards introducing the actors and setting up the premise for the “film.” Then there’s talk of miles of beaches and fine wine and dining. It all seems a bit fishy, but finally Danny McBride confirms that this is, in fact, a tourism ad.

Sonically, what’s unique about this spot?
Vitacco: In this case, we were creating a fake movie trailer that’s a misdirect for the audience, so we aimed to create sound design that was both in the vein of being big and epic and also authentic to the location of the “film.”

One of the things that movie trailers often draw upon is a consistent mnemonic to drive home a message. So I helped to sound design a consistent mnemonic for each of the title cards that come up.

For this I used some Native Instruments toolkits, like “Rise & Hit” and “Gravity,” and Tonsturm’s Whoosh software to supplement some existing sound design to create that consistent and branded mnemonic.

In addition, we wanted to create an authentic sonic palette for the Australian outback where a lot of the footage was shot. I had to be very aware of the species of animals and insects that were around. I drew upon sound effects that were specifically from Australia. All sound effects were authentic to that entire continent.

Another factor that came into play was that anytime you are dealing with a spot that has a lot of soundbites, especially ones recorded outside, there tends to be a lot of noise reduction taking place. I didn’t have to hit it too hard because everything was recorded very well. For cleanup, I used the iZotope RX 6 — both the RX Connect and the RX Denoiser. I relied on that heavily, as well as the Waves WNS plug-in, just to make sure that things were crisp and clear. That allowed me the flexibility to add my own ambient sound and have more control over the mix.

Michael Vitacco

In RX, I really like to use the Denoiser instead of the Dialogue Denoiser tool when possible. I’ll pull out the handles of the production sound and grab a long sample of noise. Then I’ll use the Denoiser because I find that works better than the Dialogue Denoiser.
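The “grab a long sample of noise, then denoise” workflow described here is, at its core, spectral subtraction. RX’s algorithms are far more sophisticated, but a bare-bones sketch of the underlying idea looks like this (the frame sizes and over-subtraction factor are arbitrary assumptions):

import numpy as np

def spectral_subtract(dialogue, noise_sample, frame=2048, hop=512, strength=1.5):
    """Very basic spectral subtraction: learn an average noise magnitude spectrum
    from a noise-only clip, then subtract it from the dialogue frame by frame."""
    window = np.hanning(frame)

    def frames(x):
        n = 1 + max(0, len(x) - frame) // hop
        return np.stack([x[i * hop:i * hop + frame] * window for i in range(n)])

    # Average magnitude spectrum of the noise-only sample.
    noise_mag = np.abs(np.fft.rfft(frames(noise_sample), axis=1)).mean(axis=0)

    out = np.zeros(len(dialogue))
    norm = np.zeros(len(dialogue))
    for i, frm in enumerate(frames(dialogue)):
        spec = np.fft.rfft(frm)
        mag = np.maximum(np.abs(spec) - strength * noise_mag, 0.0)
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
        out[i * hop:i * hop + frame] += clean * window   # overlap-add
        norm[i * hop:i * hop + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)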

Budweiser Stand By You via David Miami
The phone rings in the middle of the night. A man gets out of bed, prepares to leave and kisses his wife good-bye. His car radio announces that a natural disaster is affecting thousands of families who are in desperate need of aid. The man arrives at a Budweiser factory and helps to organize the production of canned water instead of beer.

Sonically, what’s unique about this spot?
Loeb: For this spot, I did a preliminary mix where I handled the effects, the dialogue and the music. That set the preliminary tone for how we were going to play the effects throughout the spot.

The spot starts with a husband and wife asleep in bed, and they’re awakened by a phone call. Our sound focused on the dialogue and effects up front, and also the song. I worked on this with another fantastic mixer here at Heard City, Elizabeth McClanahan, who comes from a music background. She put her ears to the track and did an amazing job of remixing the stems.

On the master track in the Pro Tools session, she used iZotope’s Neutron, as well as the FabFilter Pro-L limiter, which helps to contain the mix. One of the tricks on a dynamic mix like that — which starts off with that quiet moment in the morning and then builds with the music at the end — is to keep it within the restrictions of the CALM Act and other specifications that stipulate dynamic range and not just average loudness. We had to be mindful of how we were treating those quiet portions and the lower portions so that we still had some dynamic range but we weren’t out of spec.
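For context, the CALM Act points US broadcasters to ATSC A/85, which targets an integrated loudness of -24 LKFS. A quick measurement check along those lines can be sketched with the open-source pyloudnorm library; this is an illustration of the BS.1770 measurement itself, not Heard City’s actual QC chain, and the tolerance value and file name are assumptions:

import soundfile as sf
import pyloudnorm as pyln

def check_broadcast_loudness(path, target_lkfs=-24.0, tolerance_db=2.0):
    """Measure integrated loudness (ITU-R BS.1770) and compare to a -24 LKFS target."""
    audio, rate = sf.read(path)
    meter = pyln.Meter(rate)  # K-weighted loudness meter
    loudness = meter.integrated_loudness(audio)
    in_spec = abs(loudness - target_lkfs) <= tolerance_db
    print(f"{path}: {loudness:.1f} LKFS ({'within' if in_spec else 'outside'} tolerance)")
    return in_spec

# check_broadcast_loudness("stand_by_you_mix.wav")  # hypothetical file name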


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @AudioJeney.