Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems the job title of “editor” changes. The editor is no longer just the person responsible for shaping the story of the show but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important it was to send out a properly mixed and leveled offline cut. Whether it was a rough cut, fine cut or locked cut, the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor, but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me, this means finding ways to take my cuts from middle-of-the-road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and while they really are that easy, you can also fine-tune the audio if you like. The Era 4 Pro plugins not only work with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise, my system was able to toggle each plugin off and on without any issue, and playback was seamless with all of the plugins applied. Granted, I was only playing back video, though sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone with some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (though doing this can potentially require rendering). In addition to an output gain setting, there is “Diff,” which plays only the parts De-Esser is affecting. If you want to just try the “one button” approach, the Processing dial is really all you need to touch; in realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount and then back it off 5% or 10%.
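
If you’re curious what’s under that one knob, a de-esser is essentially a compressor focused on the sibilant band. Here’s a minimal split-band sketch in Python — purely illustrative, not Accusonus’ actual algorithm — where the hypothetical `processing` argument stands in for the one-knob Processing dial:

```python
# Minimal split-band de-esser sketch (illustrative only, not Accusonus'
# algorithm). Assumes a mono float signal `x` at sample rate `sr`;
# `processing` (0..1) stands in for the one-knob Processing dial.
import numpy as np
from scipy.signal import butter, sosfilt

def de_ess(x, sr, split_hz=5000.0, threshold=0.05, processing=0.8):
    # Split the signal into a low band and a sibilant high band.
    low = sosfilt(butter(4, split_hz, btype="lowpass", fs=sr, output="sos"), x)
    high = sosfilt(butter(4, split_hz, btype="highpass", fs=sr, output="sos"), x)

    # Follow the high band's envelope with a simple peak-hold smoother.
    env = np.abs(high)
    decay = np.exp(-1.0 / (0.005 * sr))  # roughly a 5 ms release
    for i in range(1, len(env)):
        env[i] = max(env[i], decay * env[i - 1])

    # Where the envelope exceeds the threshold, turn the band down;
    # a higher `processing` value digs deeper into the sibilance.
    over = np.maximum(env / threshold, 1.0)
    gain = over ** (-processing)
    return low + high * gain  # a "Diff"-style monitor would be high * (1 - gain)
```

Dialing the processing back 5% or 10%, as I do, simply shallows that gain curve so a hint of natural sibilance survives.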

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants the one-touch processing but also the ability to dive into the specific audio spectrum being affected and see how the falloff is being performed. In addition, there are presets such as male vocals, female speech, etc., to jump immediately to where you need help. I personally find the De-Esser Pro more useful than the De-Esser because I can really shape the processing. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with the Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of the Era-D, funnily enough it’s the only plugin not described by its own title: It is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — with a different percentage of processing applied to each region — but can also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use two mics to fix a suboptimal recording on one of the two mics, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It’s possible to only use one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Bundle Pro — and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Bundle Pro is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, there are five buttons that let you focus where the processing occurs: all frequencies (flat), high frequencies, low frequencies, high and low frequencies and mid frequencies. I love clicking the power button to hear the differences — with and without the noise removal — but also dialing the knob around to really get the noise removed without going overboard. Whether removing noise in video or audio, there is a fine art to noise reduction, and the Era 4 Noise Remover makes it easy … even for an online editor.
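
As a rough mental model — and only that — a processing knob plus frequency-focus buttons can be pictured as spectral gating with a weighting mask. The sketch below is generic spectral subtraction, not Accusonus’ algorithm; the mask bands and knob scaling are invented:

```python
# Generic spectral-gating sketch of a one-knob noise reducer with a
# frequency "focus" -- not Accusonus' algorithm; bands/scaling invented.
import numpy as np
from scipy.signal import stft, istft

def reduce_noise(x, sr, processing=0.6, focus="flat"):
    f, _, X = stft(x, fs=sr, nperseg=1024)
    mag = np.abs(X)
    # Crude noise-floor estimate: the quietest frames per frequency bin.
    floor = np.percentile(mag, 10, axis=1, keepdims=True)
    masks = {  # loose analogs of the five focus buttons
        "flat": np.ones_like(f),
        "high": (f > 4000).astype(float),
        "low": (f < 300).astype(float),
        "highlow": ((f > 4000) | (f < 300)).astype(float),
        "mid": ((f >= 300) & (f <= 4000)).astype(float),
    }
    w = masks[focus][:, None]
    # Subtract the floor where focused, scaled by the one knob, and keep
    # a little of the original to avoid robotic over-processing.
    out = np.maximum(mag - 2.0 * processing * w * floor, 0.1 * mag)
    _, y = istft(out * np.exp(1j * np.angle(X)), fs=sr, nperseg=1024)
    return y
```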

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus also has included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one disappears because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes — tight and normal — that help normalize your dialog. Think of the tight mode as being much more distinctive than a normal interview conversation; Accusonus describes tight as a more focused “radio” sound. The Emphasis button helps address issues when the speaker turns away from a microphone and introduces tonal problems, and breath control is a simple, dedicated control for taming audible breaths.

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Bundle Pro are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore audio lost to clipping. If you recorded audio at high gain and it came out horribly, then it’s probably been clipped. De-Clipper tries to salvage this clipped audio by recreating the overly saturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That’s when you should try De-Clipper. There are two modes: one for normal/standard use and one for trickier cases that take a little more processing power.

The final plugin, Plosive Remover, focuses on artifacts typically caused by “p” and “b” sounds. This can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops will easily be repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only what is being affected. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to mess with. The Plosive Remover is another amazing plugin that, when you need it, does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, along with the installers — the same place to go if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Behind the Title: One Thousand Birds sound designer Torin Geller

Initially interested in working in a music studio, this sound pro got a taste of audio post — and there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects — dialogue editing, sound design and mixing — and I also help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story-tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister, who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how that interplay can work in abstract, interesting ways in animation that aren’t necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.

The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb, and, as you can imagine, the sounds of those neighborhoods vary.

                           
Eric Hirsch and David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances’ friend Diane (Molly Shannon) has taken up residence in a Manhattan high-rise, and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whir of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies, and it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So, I surround myself with smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
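
Auto-Align Post’s actual processing is proprietary and time-varying, but the core problem it solves — the lav and the boom capturing the same voice at slightly different times, so summing them comb-filters — can be illustrated with a simple cross-correlation sketch (a toy example of the underlying idea, not Sound Radix’s method):

```python
# Toy static alignment of a lav to a boom via cross-correlation.
# Auto-Align Post itself is proprietary and tracks drift over time;
# this only illustrates the underlying idea.
import numpy as np
from scipy.signal import correlate

def sum_aligned(boom, lav, sr, max_ms=20.0):
    max_lag = int(sr * max_ms / 1000)
    corr = correlate(boom, lav, mode="full")
    lags = np.arange(-(len(lav) - 1), len(boom))
    keep = (lags >= -max_lag) & (lags <= max_lag)
    lag = lags[keep][np.argmax(corr[keep])]  # positive: lav arrives early
    aligned = np.roll(lav, lag)  # np.roll wraps around; fine for a sketch
    return boom + aligned  # summed without comb-filter cancellation
```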

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
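
To make the gain-staging idea concrete, here is a hedged sketch of several gentle compression passes in series. The numbers are invented, and a real compressor would have attack and release ballistics this toy version omits:

```python
# Serial "gain staging" sketch: several gentle taps instead of one
# heavy squash. Instantaneous gain computer for clarity (no attack/
# release); all threshold, ratio and makeup values are invented.
import numpy as np

def compress(x, threshold_db=-18.0, ratio=1.5, makeup_db=1.0):
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20)

def staged_chain(x, stages=5):
    # Each stage only trims a few dB, then adds a little makeup punch.
    for _ in range(stages):
        x = compress(x)
    return x
```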

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
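
What Cook describes doing by hand is, in signal terms, side-chain ducking: the dialogue’s envelope drives the background’s gain. A rough sketch, with invented depth and release values:

```python
# Side-chain ducking sketch: the TV/background track dips whenever the
# foreground dialogue carries energy. Depth and release are invented.
import numpy as np

def duck(background, dialogue, sr, depth_db=-9.0, release_s=0.25):
    env = np.abs(dialogue)
    decay = np.exp(-1.0 / (release_s * sr))  # smooth the recovery
    for i in range(1, len(env)):
        env[i] = max(env[i], decay * env[i - 1])
    amount = env / (env.max() + 1e-9)  # crude 0..1 sidechain level
    gain = 10 ** ((depth_db * amount) / 20)  # full depth at loudest speech
    return background * gain
```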

Going back to what has changed over the last three years, one thing is that we have more time per episode to mix the show. We got more and more time from the first mix to the last — by the end, twice as much time to mix the show.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.
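
That chain — band-limit, a touch of distortion, then a pre-delayed hall — is easy to caricature in code. The sketch below uses decaying echoes as a crude stand-in for a real convolution reverb, and every value is a guess rather than Cook’s settings:

```python
# Caricature of a PA "futz": small-speaker band-limiting, gentle
# distortion, then a pre-delayed, decaying tail standing in for a real
# hall reverb. All frequencies, drive and mix values are guesses.
import numpy as np
from scipy.signal import butter, sosfilt

def pa_futz(x, sr, lo=300.0, hi=4000.0, predelay_s=0.04, wet=0.5):
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    driven = np.tanh(3.0 * sosfilt(sos, x)) / 3.0  # PA-style saturation
    tail = np.zeros_like(driven)
    for k, g in enumerate([0.6, 0.35, 0.2, 0.1], start=1):
        d = int(sr * predelay_s * k)  # pre-delay, then spaced echoes
        tail[d:] += g * driven[: len(driven) - d]
    return (1 - wet) * driven + wet * tail
```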

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then there was the level of all the loop group specifics and chanting — the ramp up of the chanting from zero to full volume — which we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.
There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From the range of audio restoration tools within RX to the measurement and visualization tools in Ozone to creative VST effects and instruments like Iris, Breaktweaker and DDLY… iZotope has shown time and time again that it knows what audio post pros need.

iZotope breaks its products out into categories aimed at different levels of professionalism by providing Essential, Standard and Advanced tiers. This lowers the barrier to entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut down session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules, there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses its modules to make your track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking an instrument just to get it to sound like itself. I believe the philosophy behind this feature is that the creative energy you would otherwise spend tweaking can now be reserved for other tasks in completing your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it will set the level of each track and then provide you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and you’re left with a much more manageable session. Depending on your workflow, getting your gain staging set up correctly might be an arduous and repetitive task, and this tool streamlines and simplifies it considerably.
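
iZotope hasn’t published what the Balance model actually listens for, but the shape of the job — measure every Relay-carrying track, then trim everything relative to the Focus — can be sketched with plain loudness measurement. The default offsets below are invented for illustration, not iZotope’s values:

```python
# Speculative sketch of a Balance-style pass: compute trim gains for
# each track relative to the chosen Focus track. iZotope's actual
# machine-learning analysis is far more involved; this only shows the
# input/output shape of the problem.
import numpy as np

def rms_db(x):
    # Simple RMS loudness in dB (a real tool would use LUFS-style weighting).
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def balance(tracks, focus, offsets_db=None):
    """tracks: dict of name -> mono numpy array; focus: name of the
    Focus track; offsets_db: desired level of each track vs. the Focus."""
    offsets_db = offsets_db or {}
    ref = rms_db(tracks[focus])
    trims = {}
    for name, audio in tracks.items():
        target = ref + offsets_db.get(name, -6.0)  # default: 6 dB under Focus
        trims[name] = target - rms_db(audio)       # trim gain in dB
    trims[focus] = 0.0                             # leave the Focus alone
    return trims
```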

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since that’s a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other post audio scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this process happens at the beginning of the mix, you are provided with a session whose gain staging is already prepped, so you can start making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1

S1

Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback, including switchable display from channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listen back and speaker sources/levels for Foley/ADR recording plus Dolby Atmos and other formats of immersive audio monitoring. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, with complex tasks — editing plugins, writing session automation and more — available from a single key press. Adding joysticks, PEC/Direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation plus project automation and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and numeric keypad. The Channel Strip Module (CSM) handles control-track levels, plugins and other parameters through eight channel faders, 32 top-lit knobs (four per channel) plus other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Dependent upon the number of Channel Strip Modules and other options, a customized S4 surface can be housed in a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame, reaching up to an S4 24-fader, five-foot base system that includes three CSMs, one MTM, one MAM and filler panels/plates in a five-foot frame.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Skywalker Sound’s audio post mix for Toy Story 4

By Jennifer Walden

Pixar’s first feature-length film, 1995’s Toy Story, was a game-changer for animated movies. There was no going back after that blasted onto screens and into the hearts of millions. Fast-forward 24 years to the franchise’s fourth installment — Toy Story 4 — and it’s plain to see that Pixar’s approach to animated fare hasn’t changed.

Visually, Toy Story 4 brings so much to the screen, with its near-photorealistic imagery, interesting camera angles and variations in depth of field. “It’s a cartoon, but not really. It’s a film,” says Skywalker Sound’s Oscar-winning re-recording mixer Michael Semanick, who handled the effects/music alongside re-recording mixer Nathan Nance on dialogue/Foley.

Nathan Nance

Here, Semanick and Nance talk about their approach to mixing Toy Story 4, how they use reverb and Foley to bring the characters to life, and how they used the Dolby Atmos surround field to make the animated world feel immersive. They also talk about mixing the stunning rain scene, the challenges of mixing the emotional carnival scenes near the end and mixing the Bo Peep and Woody reunion scene.

Is your approach to mixing an animated film different from how you’d approach the mix on a live-action film? Mix-wise, what are some things you do to make an animated world feel like a real place?
Nathan Nance: The approach to the mix isn’t different. No matter if it’s an animated movie or a live-action movie, we are interested in trying to complement the story and direct the viewer’s attention to whatever the director wants their attention to be on.

With animation, you’re starting with just the ADR, and the approach to the whole sound job is different because you have to pick and choose every single sound and really create those environments. Even with the dialogue, we’re creating spaces with reverb (or lack of reverb) and helping the emotions of the story in the mix. You might not have the same options in a live-action movie.

Michael Semanick

Michael Semanick: I don’t approach a film differently. Live action or animated, it comes down to storytelling. In today’s world, some of these live-action movies are like animated films. And the animated films are like live-action. I’m not sure which is which anymore.

Whether it’s live action or animation, the sound team is creating the environments. For live-action, they’re often shooting on a soundstage or they’re shooting on greenscreen, and the sound team creates those environments. For live-action films, they try to get the location to be as quiet as it can be to get the dialogue as clean as possible. So, the sound team is only working with dialogue and ADR.

It’s like an animation in that they need to recreate the entire environment. The production sound mixer is trying to capture the dialogue and not the extraneous sounds. The production sound mixer is there to capture the performance from the actors on that day at that time. Sometimes there are production effects, but the post sound team still preps the scene with sound effects, Foley and loop group. Then on the dub stage, we choose how much of that to put in.

For an animated film, they do the same thing. They prep a whole bunch of sounds and then on the dub stage we decide how busy we want the scene to be.

How do you use reverb to help define the spaces and make the animated world feel believable?
Semanick: Nathan really sets the tone when he’s doing the dialogue, defining how the environments and different spaces are going to sound. That works in combination with the background ambiences. It’s really the voice bouncing off objects that gives you the sense of largeness and depth of field. So reverb is really important in establishing the size of the room and also outdoors — how your voice slaps off a building versus how it slaps off of trees or mountains. Reverb is a really essential tool for creating the environments and spaces that you want to put your actors or characters in.

Nance: You can use reverb to try and make the spaces sound “real” — whatever that means for cinema. Or, you can use it to create something that’s more emotional or has a certain vibe. Reverb is really important for making the dry dialogue sound believable, especially in these Pixar films. They are all in on the environments they’ve created. They want it to sound real and really put the viewer there. But then, there are moments when we use reverb creatively to push the moment further and add to the emotional experience.

What are some other things you do mix-wise to help make this animated world feel believable?
Semanick: The addition of Foley helps ground a lot of the animation. Those natural sounds, like footsteps and movements, we take for granted — just walking down the street or sitting in a restaurant. Those become a huge part of these films. The Foley helps to ground the animation. It gives it life, something to hold onto.

Foley is a big part of making the animated world feel believable. You have Foley artists performing to the actual picture, and the way they put a cup down or how they come to a stop adds character to the sound. It can make it sound more human, more real. Really good Foley artists can become the character. They pick up on the nuances — like how the character drags their feet or puts down a cup. All those little things we take for granted but they are all part of our character. Maybe the way you hold a wine glass and set it down is different from how I would do it. So good Foley artists tune into that right away, and they’ll match it with their performance. They’ll put one edge of the cup down and then the other if that’s how the character does it. So Foley helps to ground a lot of the animation and the VFX to reality. It adds realism. Give it up for the Foley artists!

Nance: So many times the sounds that are in Foley are the ones we recognize and take for granted. You hear those little sounds and think, yeah, that’s exactly what that sounds like. It’s because the Foley artists perform it and these are sounds that you recognize from everyday life. That adds to the realism, like Michael said.

Mix-wise, it must have been pretty difficult to push the subtle sounds through a full mix, like the sounds of the little spork named Forky. What are some techniques and sound tools that help you to get these character sounds to cut through?
Semanick: Director Josh Cooley was very particular about the sounds Forky was going to make. Supervising sound editors Ren Klyce and Coya Elliott and their team went out and got a big palette of sounds for different things.

We weeded through them here with Josh and narrowed it down. Josh then kind of left it up to me. He said he just wanted to hear Forky when he needed to hear him and then not ever have to think about it. The problem with Forky is that if there’s too much sound for him then you’re constantly watching what he’s doing as opposed to listening to what he’s saying. I was very diligent about weeding things out a lot of the time and adding sounds in for the eye movements and other tiny, specific sounds. But there’s not much sound in there for him. It’s just the voice because often his sounds were getting in the way of the dialogue and being distracting. We were very diligent about choosing what to hear and not to hear. Josh was very particular about what those sounds should be. He had been working with Ren on those for a couple months.

In balancing a film (and particularly Toy Story 4, with so many characters and so much going on), you have to really pick and choose sounds. You don’t want to pull the audience’s attention in a direction you don’t want. That was one of the main things for Forky — getting his sounds right.

The opening rain scene was stunning! What was your approach to mixing that scene? How did you use the Dolby Atmos surround field to enhance it?
Semanick: That was a tough scene to mix. There is a lot of rain coming down and the challenge was how to get clarity out of the scene and make sure the audience can follow what was happening. So the scene starts out with rain sounds, but during the action sequence there’s actually no rain in the track.

Amazingly, your human ears and your brain fill in that information. I establish the rain and then when the action starts I literally pull all of the rain out. But your mind puts the rain there still. You think you hear it but it’s actually not there. When the track gets quiet all of a sudden, I bring the rain back up so you never miss the rain. No one has ever said anything about not hearing the rain.

I love the sound of rain; don’t get me wrong. I love the sound of rain on windows, rain on cars, rain on metals… Ren and his team did such an amazing job with that. We had a huge palette of rain. But there’s a certain point in the scene where we need the audience to focus on all of the action that’s happening, what’s really going on.

There’s Woody and Slinky Dog being stretched and RC in the gutter, and all this. So when I put all of the sounds up there you couldn’t make out anything. It was confusing. So I pulled all of the rain out. Then we put in all of the specific sounds. We made sure all of the dialogue, music and sounds worked together so the audience could follow the action. Then I went back through and added the rain back in. When we didn’t need it, I drifted it out. And when we needed it, I brought it back in. It took a lot of time to do that and some careful balancing to make it work.

That was a fun thing to do, but it took time. We’re working on a movie that kids and adults are going to see. We didn’t want to make it too loud. We wanted to make it comfortable. But it’s an action scene, so you want it to be exciting. And it had to work with the music. We were very careful about how loud we made things. When things started to hurt, we pulled it all back. We were diligent about keeping control of the volume and getting those balances was very difficult. We don’t want to make it too quiet, but it’s exciting. If we make it too loud then that pushes you away and you don’t pay attention.

That scene was fun in Dolby Atmos. I had the rain all around the theater, in the ceiling. But it does go away and comes back in when needed. It was a fun thing to do.

Did you have a favorite scene for mixing in Atmos?
Semanick: One of my favorite scenes for Atmos was when Bo Peep takes Woody to the top of the carousel and she asks why Woody would ever want to stay with one kid when you can have all of this. I do a subtle thing with the music — there are a few times in the film where I do this — where I pull the music forward as they’re climbing to the top of the carousel. There’s no music in the surrounds or the tops. I pull it so far forward that it’s almost mono.

Then, as they pop up from atop the carousel and the camera sweeps around, I let the music open up. I bloom it into the surrounds and into the overheads. I bloom it really hard with the camera moves. If you’re paying attention, you will feel the music sweep around you. You’re just supposed to feel it, not to really know that it happened. That’s one of the mixing techniques that I learned over the years. The picture editor, Axel Geddes, would ask me to make it “magical” and put more “magic” into it. I started to interpret that as: fill up the surrounds more.
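
For the technically curious, that “bloom” can be thought of as two automation curves: one widening the front image out of near-mono, one opening the surround sends just behind it. A toy mid/side sketch — my own illustration with invented ramp times and channel handling, not Semanick’s actual console moves:

```python
# Toy "bloom" sketch: music starts collapsed toward the front (nearly
# mono), then the width and surround sends ramp open on the camera
# move. Ramp times, curves and channel layout are invented.
import numpy as np

def bloom_gains(n, sample_rate, start_s, end_s):
    """Per-sample gain curves for front width and surround sends."""
    t = np.arange(n) / sample_rate
    ramp = np.clip((t - start_s) / (end_s - start_s), 0.0, 1.0)
    width = 0.1 + 0.9 * ramp  # almost mono at first, full stereo after
    surround = ramp ** 2      # surrounds bloom in a touch later
    return width, surround

def apply_bloom(left, right, sample_rate, start_s=10.0, end_s=14.0):
    width, surr = bloom_gains(len(left), sample_rate, start_s, end_s)
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width  # narrow the image, then widen
    fl, fr = mid + side, mid - side      # front pair
    sl, sr = fl * surr, fr * surr        # surround sends open up
    return fl, fr, sl, sr
```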

One of the best parts of Atmos is that you have surrounds that are the same as the front speakers so the sound doesn’t fall off. It’s more full-range because it has bass management toward the back. That helps me, mix-wise, to really bring the sound into the room and fill the room out when I need to do that. There are a few scenes like that and Nathan would look at me funny and say, “Wow, I really hear it.”

We’re so concentrated on the sound. I’m just hoping that the audience will feel it wrap around them and give them a good sense of warmth. I’m trying to help push the emotional content. The music was so good. Randy Newman did a great job on a lot of the music. It really helped the story and I wanted to help that be the best it could be emotionally. It was already there, but I just wanted to give that little extra. Pulling the music into the front and then pushing out into the whole theater gave the music an emotional edge.

Nance: There are a couple of fun Atmos moments for effects, like when they’re in the dark closet and the sound is happening all around, or when Woody wakes up from his voice box removal surgery. Michael was bringing the sewing machine right up into the overheads. We have the pull string floating around the room and into the ceiling. Those two moments were a pretty cool use of the point-source and enveloping capabilities of Atmos.

What was the most challenging scene to mix? Why?
Nance: The whole scene with the lost girl and Gabby all the way through the toys’ goodbyes. That was two full sections, but we get so quiet even though there’s a huge carnival happening. It was a huge cheat. It took a lot of work to get into these quiet, delicate moments where we take everything out, all the backgrounds, and it’s very simple. Michael pulled the music forward in some of those spots and the whole mix becomes very simple and quiet. You’re almost holding your breath in these different moments with the goodbyes. Sometimes we think of the really loud, bombastic scenes as being tough. And they were! The escape from the antique store took quite a lot of work to balance and shape. But I think the quiet, delicate scenes take more work because they take more shaping.

Semanick: I agree. Those areas were very difficult. There was a whole carnival going on and I had to strip it all down. I had my moments. When they’re together above the carnival, it looks beautiful up there. The carnival rides behind them are blurry and we didn’t need to hear the sounds. We heard them before. We know what they sound like. Plus, that moment was with the toys. We were just with them. The whole world has dissolved, and the sound of the world too. You see the carnival back there, but you’re not really paying attention to it. You’re paying attention to Woody and Bo Peep or Gabby and the lost girl.

Another interesting scene was when Woody and Forky first walk through the antique store. It was interesting how the tones in each place change and the reverbs on the voices change in every single room. Those scenes were interesting. The challenge was how to establish the antique store. It’s very quiet, so we were very specific on each cut. Where are they? What’s around them? How high is the camera sitting? You start looking closely at the scene. I was able to do things with Atmos, put things in the ceiling.

What scene went through the most evolution mix-wise? What were some of the different ways you tried mixing it? Ultimately, why did you go with the way it’s mixed in the final?
Semanick: There’s a scene when Woody and Bo Peep reunite on the playground. A little girl picks up Woody and she has Bo Peep in her hands. They meet again for the first time. That scene went through changes musically and dialogue-wise. What do we hear? How much of the girl do we hear before we see Bo Peep and Woody looking at each other? We tried several different ways. There were many opinions that came in on that. When does the music bloom? When does it fill the room out? Is the score quite right? They recut the score. They had a different version.

That scene went through quite a bit of ups and downs. We weren’t sure which way to go. Ultimately, Josh was happy with it, and it plays well.

There was another version of Randy’s score that I liked. But, it’s not about what I like. It’s about how the overall room feels — if everybody feels like it’s the best that we can do. If that’s yes, then that’s the way it goes. I’ll always speak up if I have ideas. I’ll say, “Think about this. Think about that.”

That scene went through some changes, and I’m still on the fence. It works great, but I know there’s another version of the music that I preferred. I’ll just have to live with that.

Nance: We just kept trying things out on that scene until we had it feeling good, like it was hitting the right beats. We had to figure out what the timing was, what would have the most emotional impact. That’s why we tried out so many different versions.

Semanick: That’s a big moment in the film. It’s what starts the back half of the film. Woody gets reacquainted with Bo Peep and then we’re off to the races.

What console did you mix Toy Story 4 on and why?
Semanick: We both mixed on the Neve DFC. It’s my console of choice. I love the console; I love the way it sounds. I love that it has separate automation. There’s the editor’s automation that they did. I can change my automation and that doesn’t affect their automation. It’s the best of both worlds. It runs really smoothly. It’s one of the best sounding consoles around.

Nance: I really enjoy working on the Neve DFC. It’s my console of choice when there’s the option.

Semanick: There are a lot of different consoles and control surfaces you can use now, but I’m used to the DFC. I can really play the console as a musical instrument. It’s like a performance. I can perform these balances. I can grab knobs and change EQ or add reverb and pull things back. It’s like a performance and that console seems the most reliable one for me. I know it really well. It helps when you know your instrument.

Any final thoughts you’d like to share on mixing Toy Story 4?
Semanick: With these Pixar films, I get to benefit from the great storytelling and what they’ve done visually. Pixar pores over every aspect of these films: the cinematography, the lighting, the character development, the costumes, the set design. They spend so many hours debating how everything is going to look.

So, on the sound side, it’s about matching what they’ve done. How can I help support it? It’s amazing to me how much time they spend on these films. It’s hardcore filmmaking. It’s a cartoon, but not really. It’s a film, and it’s a really good film. You look at all the aspects of it, like how the camera moves. It’s not a real camera, but you’re watching through the lens, seeing the camera angles, where and how they place the camera. They have to debate all that.

One of the hardest scenes for them must have been when Bo Peep and Woody are in the antique store and they turn and look at all the chandeliers. It was gorgeous, a beautiful shot. I bloom the music out there, around the theater. That was a delicate scene. When you look at the filmmaking they’re doing there and the reflections of the lights, you know they’re good. They’re really good.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

KRK intros audio tools app to help Rokit G4 monitor setup

KRK Systems has introduced the KRK Audio Tools App for iOS and Android. This free suite of professional studio tools includes five professional analysis-based components that work with any monitor setup, and one tool (EQ Recommendation) that helps acclimate the new KRK Rokit G4 monitors to their individual acoustic environment.

In addition to the EQ Recommendation tool, the app also includes a Spectrum Real Time Analyzer (RTA), Level Meter, Delay and Polarity Analyzers, as well as a Monitor Align tool that helps users set their monitor positioning more accurately to their listening area. Within the app is a sound generator giving the user sound analysis options of sine, continuous sine sweep, white noise and pink noise—all of which can help the analysis process in different conditions.
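Those generator options map onto standard audio test signals. As a rough illustration only (this is a NumPy sketch, not KRK’s implementation), the four signals might be synthesized like this:

```python
import numpy as np

SR = 48000  # sample rate in Hz

def sine(freq=1000.0, seconds=5.0):
    """Steady sine tone at one frequency."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t)

def sine_sweep(f0=20.0, f1=20000.0, seconds=10.0):
    """Continuous logarithmic sine sweep from f0 to f1, the classic
    stimulus for measuring a frequency response."""
    t = np.arange(int(SR * seconds)) / SR
    k = np.log(f1 / f0) / seconds
    return np.sin(2 * np.pi * f0 * (np.exp(k * t) - 1) / k)

def white_noise(seconds=5.0):
    """Equal energy per hertz."""
    return np.random.uniform(-1.0, 1.0, int(SR * seconds))

def pink_noise(seconds=5.0):
    """Equal energy per octave: shape white noise by 1/sqrt(f)."""
    n = int(SR * seconds)
    spectrum = np.fft.rfft(np.random.uniform(-1.0, 1.0, n))
    freqs = np.fft.rfftfreq(n, 1.0 / SR)
    spectrum[1:] /= np.sqrt(freqs[1:])   # power falls 3 dB per octave
    x = np.fft.irfft(spectrum, n)
    return x / np.max(np.abs(x))         # normalize to full scale
```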

“We wanted to build something game-changing for the new Rokit G4 line that enables our users to achieve better final mixes overall,” explains Rich Renken, product manager for the pro audio division of Gibson Brands, which owns KRK. “In terms of critical listening, the G4 monitors are completely different and a major upgrade from the previous G3 line. Our intentions with the EQ Recommendation tool are to suggest a flatter condition and help get the user to a better starting point. Ultimately, it still comes down to preference and using your musical ear, but it’s certainly great to have this feature available along with the others in the app.”

Five of the app tools work with any monitor setup. This includes the Level Meter, which assists with monitor level calibration to ensure all monitors are at the same dB level, as well as the Delay Analysis feature that helps calculate the time from each monitor to the user’s ears. Additionally, the app’s Polarity function is used to verify the correct wiring of monitors, minimizing the bass loss and skewed stereo imaging that result from monitors being out of phase, while the Spectrum RTA and Sound Generator are made for finding nuances in any environment.

Also included is a Monitor Alignment feature, which is used to determine the best placement of multiple monitors within proximity. This is accomplished by placing a smart device on each monitor separately and then rotating it to the correct angle. A sixth tool, exclusive to Rokit G4 users, is the EQ Recommendation tool that helps acclimate monitors to an environment by analyzing the app-generated pink noise and subsequently suggesting the best EQ preset, which is set manually on the back of the G4 monitors.

Creating Foley for FX’s Fosse/Verdon

Alchemy Post Sound created Foley for Fosse/Verdon, FX’s miniseries about choreographer Bob Fosse (Sam Rockwell) and his collaborator and wife, the singer/dancer Gwen Verdon (Michelle Williams). Working under the direction of supervising sound editors Daniel Timmons and Tony Volante, Foley artist Leslie Bloome and his team performed and recorded hundreds of custom sound effects to support the show’s dance sequences and add realistic ambience to its historic settings.

Spanning five decades, Fosse/Verdon focuses on the romantic and creative partnership between Bob Fosse and Gwen Verdon. The former was a visionary filmmaker and one of the theater’s most influential choreographers and directors, while the latter was one of the greatest Broadway dancers of all time.

Given the subject matter, it’s hardly surprising that post production sound was a crucial element in the series. For its many musical scenes, Timmons and Volante were tasked with conjuring intricate sound beds to match the choreography and meld seamlessly with the score. They also created dense soundscapes to back the very distinctive environments of film sets and Broadway stages, as well as a myriad of other exterior and interior locations.

For Timmons, the project’s mix of music and drama posed significant creative challenges but also a unique opportunity. “I grew up in upstate New York and originally hoped to work in live sound, potentially on Broadway,” he recalls. “With this show, I got to work with artists who perform in that world at the highest level. It was not so much a television show as a blend of Broadway music, Broadway acting and television. It was fun to collaborate with people who were working at the top of their game.”

The crew drew on an incredible mix of sources in assembling the sound. Timmons notes that to recreate Fosse’s hacking cough (a symptom of his overuse of prescription medicine), they pored over audio stems from the classic 1979 film All That Jazz. “Roy Scheider, who played Bob Fosse’s alter ego in the film, was unable to cough like him, so Bob went into a recording studio and did some of the coughing himself,” Timmons says. “We ended up using those old recordings along with ADR of Sam Rockwell. When Bob’s health starts to go south, some of the coughing you hear is actually him. Maybe I’m superstitious, but for me it helped to capture his identity. I felt like the spirit of Bob Fosse was there on the set.”

A large portion of the post sound effects were created by Alchemy Post Sound. Most notably, Foley artists meticulously reproduced the footsteps of dancers. Foley tap dancing can be heard throughout the series, not only in musical sequences, but also in certain transitions. “Bob Fosse got his start as a tap dancer, so we used tap sounds as a motif,” explains Timmons. “You hear them when we go into and out of flashbacks and interior monologues.” Along with Bloome, Alchemy’s team included Foley artist Joanna Fang, Foley mixers Ryan Collison and Nick Seaman, and Foley assistant Laura Heinzinger.

Ironically, Alchemy had to avoid delivering sounds that were “too perfect.”  Fang points out that scenes depicting musical performances from films were meant to represent the production of those scenes rather than the final product. “We were careful to include natural background sounds that would have been edited out before the film was delivered to theaters,” she explains, adding that those scenes also required Foley to match the dancers’ body motion and costuming. “We spent a lot of time watching old footage of Bob Fosse talking about his work, and how conscious he was not just of the dancers’ footwork, but their shuffling and body language. That’s part of what made his art unique.”

Foley production was unusually collaborative. Alchemy’s team maintained a regular dialogue with the sound editors and were continually exchanging and refining sound elements. “We knew going into the series that we needed to bring out the magic in the dance sequences,” recalls production Foley editor Jonathan Fuhrer. “I spoke with Alchemy every day. I talked with Ryan and Nick about the tonalities we were aiming for and how they would play in the mix. Leslie and Joanna had so many interesting ideas and approaches; I was ceaselessly amazed by the thought they put into performances, props, shoes and surfaces.”

Alchemy also worked hard to achieve realism in creating sounds for non-musical scenes. That included tracking down props to match the series’ different time periods. For a scene set in a film editing room in the 1950s, the crew located a 70-year-old Steenbeck flatbed editor to capture its unique sounds. As musical sequences involved more than tap dancing, the crew assembled a collection of hundreds of pairs of shoes to match the footwear worn by individual performers in specific scenes.

Some sounds undergo subtle changes over the course of the series relative to the passage of time. “Bob Fosse struggled with addictions and he is often seen taking anti-depression medication,” notes Seaman. “In early scenes, we recorded pills in a glass vial, but for scenes in later decades, we switched to plastic.”

Such subtleties add richness to the soundtrack and help cement the character of the era, says Timmons. “Alchemy fulfilled every request we made, no matter how far-fetched,” he recalls. “The number of shoes that they used was incredible. Broadway performers tend to wear shoes with softer soles during rehearsals and shoes with harder soles when they get close to the show. The harder soles are more strenuous. So the Foley team was always careful to choose the right shoes depending on the point in rehearsal depicted in the scene. That’s accuracy.”

The extra effort also resulted in Foley that blended easily with other sound elements, dialogue and music. “I like Alchemy’s work because it has a real, natural and open sound; nothing sounds augmented,” concludes Timmons. “It sounds like the room. It enhances the story even if the audience doesn’t realize it’s there. That’s good Foley.”

Alchemy used Neumann KMR 81 and U 87 mics, Millennia mic pres and Apogee converters, with a C24 mixer feeding Avid Pro Tools.

Behind the Title: Cinematic Media head of sound Martin Hernández

This audio post pro’s favorite part of the job is the start of a project — having a conversation with the producer and the director. “It’s exciting, like any new relationship,” he says.

Name: Martin Hernández

Job Title: Supervising Sound Editor

Company: Mexico City’s Cinematic Media

Can you describe Cinematic Media and your role there?
I lead a new sound post department at Cinematic Media, Mexico’s largest post facility focused on television and cinema. We take production sound through the full post process: effects, backgrounds, music editing… the whole thing. We finish the sound on our mix stages.

What would surprise people most about what you do?
We want the sound to go unnoticed. The viewer shouldn’t be aware that something has been added or is unnatural. If the viewer is distracted from the story by the sound, it’s a lousy job. It’s like an actor whose performance draws attention to himself. That’s bad acting. The same applies to every aspect of filmmaking, including sound. Sound needs to help the narrative in a subjective and quiet way. The sound should be unnoticed… but still eloquent. When done properly, it’s magical.

Hernández has been working on Easy for Netflix.

What’s your favorite part of the job?
Entering the project for the first time and having a conversation with the team: the producer and the director. It’s exciting, like any new relationship. It’s beautiful. Even if you’re working with people you’ve worked with before, the project is newborn.

My second favorite part is the start of sound production, when I have a picture but the sound is a blank page. We must consider what to add. What will work? What won’t? How much is enough or too much? It’s a lot like cooking. The dish might need more of this spice and a little less of that. You work with your ingredients, apply your personal taste and find the right flavor. I enjoy cooking sound.

What’s your least favorite part of the job?
Me.

What do you mean?
I am very hard on myself. I only see my shortcomings, which are, to tell you the truth, many. I see my limitations very clearly. In my perception of things, it is very hard to get where I want to go. Often you fail, but every once in a while, a few things actually work. That’s why I’m so stubborn. I know I am going to have a lot of misses, so I do more than expected. I will shoot three or four times, hoping to hit the mark once or twice. It’s very difficult for me to work with me.

What is your most productive time of the day?
In the morning. I’m a morning person. I work from my own place, very early, like 5:30am. I wake up thinking about things that I left behind in the session. It’s useless to remain in bed, so I go to my studio and start working on these ideas. It’s amazing how much you can accomplish between 6am and 9am. You have no distractions. No one’s calling. No emails. Nothing. I am very happy working in the mornings.

If you didn’t have this job, what would you be doing?
That’s a tough question! I don’t know anything else. Probably, I would cook. I’d go to a restaurant and offer myself as an intern in the kitchen.

For most people I know, their career is not something they’ve chosen; it was embedded in them when they were born. It’s a matter of realizing what’s there inside you and embracing it. I never, in my wildest dreams, expected to be doing this work.

When I was young, I enjoyed watching films, going to the movies, listening to music. My earliest childhood memories are sound memories, but I never thought that would be my work. It happened by accident. Actually, it was one accident after another. I found myself working with sound as a hobby. I really liked it, so I embraced it. My hobby then became my job.

So you knew early on that audio would be your path?
I started working in radio when I was 20. It happened by chance. A neighbor told me about a radio station that was starting up from scratch. I told my friend from school, Alejandro Gonzalez Iñárritu, the director. Suddenly, we’re working at a radio station. We’re writing radio pieces and doing production sound. It was beautiful. We had our own on-air, live shows. I was on in the mornings. He did the noon show. Then he decided to make films and I followed him.

Easy

What are some of your recent projects?
I just finished a series for Joe Swanberg, the third season of Easy. It’s on Netflix. It’s the fourth project I’ve done with Joe. I’ve also done two shows here in Mexico. The first one is my first full-time job as supervisor/designer for Argos, the company led by Epigmenio Ibarra. Yankee is our first series together for Netflix, and we’re cutting another one to be aired later in the year. It’s very exciting for me.

Is there a project that you’re most proud of?
I am very proud of the results that we’ve been getting on the first two series here in Mexico. We built the sound crew from scratch. Some are editors I’ve worked with before, but we’ve also brought in new talent. That’s a very joyful process. Finding talent is not easy, but once you do, it’s very gratifying. I’m also proud of this work because the quality is very good. Our clients are happy, and when they’re happy, I’m happy.

What pieces of technology can you not live without?
Avid Pro Tools. It’s the universal language for sound. It allows me to share sound elements and sessions from all over the world, just like we do locally, between editing and mixing stages. The second is my converter. We are using the Red system from Focusrite. It’s a beautiful machine.

This is a high-stress job with deadlines and client expectations. What do you do to de-stress from it all?
Keep working.

Andy Greenberg on One Union Recording’s fire and rebuild

San Francisco’s One Union Recording Studios has been serving the sound needs of ad agencies, game companies, TV and film producers, and corporate media departments in the Bay Area and beyond for nearly 25 years.

In the summer of 2017, the facility was hit by a terrible fire that affected all six of its recording studios. The company, led by president John McGleenan, immediately began an ambitious rebuilding effort, which it completed earlier this year. One Union Recording is now back up to full operation and its five recording studios, outfitted with the latest sound technologies including Dolby Atmos capability, are better than ever.

Andy Greenberg is One Union Recording’s facility engineer and senior mix engineer; he works alongside engineers Joaby Deal, Eben Carr, Matt Wood and Isaac Olsen. We recently spoke with Greenberg about the company’s rebuild and plans for the future.

Rebuilding the facility after the fire must have been an enormous task.
You’re not kidding. I’ve worked at One Union for 22 years, and I’ve been through every growth phase and upgrade. I was very proud of the technology we had in place in 2017. We had six rooms, all cutting-edge. The software was fully up to date. We had few if any technical problems and zero downtime. So, when the fire hit, we were devastated. But John took a very business-oriented approach to it, and within a few days he was formulating a plan. He took it as an opportunity to implement new technology, like Dolby Atmos, and to grow. He turned sadness into enthusiasm.

How did the facility change?
Ironically, the timing was good. A lot of new technology had just come out that I was very excited about. We were able to consolidate what were large systems into smaller units while increasing quality 10-fold. We moved leaps and bounds beyond where we had been.

Prior to the fire, we were running Avid Pro Tools 12.1. Now we’re on Pro Tools Ultimate. We had just purchased four Avid/Euphonix System 5 digital audio consoles with extra DSP in March of 2017 but had not had time to install them before the fire due to bookings. These new consoles are super powerful. Our number of inputs and outputs quadrupled. The routing power and the bus power are vastly improved. It’s phenomenal.

We also installed Avid MTRX, an expandable interface designed in Denmark and very popular now, especially for Atmos. The box feels right at home with the Avid S5 because it’s MADI and takes the physical outputs of our Pro Tools systems up to 64 or 128 channels.

That’s a substantial increase.
A lot of delivered projects use from two to six channels. Complex projects might go to 20. Being able to go far beyond that increases the power and flexibility of the studio tremendously. And then, of course, our new Atmos room requires that kind of channel count to work in immersive surround sound.

What do you do for data storage?
Even before the fire, we had moved to a shared storage network solution. We had a very strong infrastructure and workflow in terms of data storage, archiving and the ability to recall sessions. Our new infrastructure includes 40TB of active storage of client data. Forty terabytes is not much for video, but for audio, it’s a lot. We also have 90TB of instantly recallable data.

We have client data archived back 25 years, and we can have anything online in any room in just a few minutes. It’s literally drag and drop. We pride ourselves on maintaining triple redundancy in backups. Even during the fire, we didn’t lose any client data because it was all backed up on tape and off site. We take backup and data security very seriously. Backups happen automatically every day…  actually every three hours.

What are some of the other technical features of the rebuilt studios?
There’s actually a lot. For example, our rooms — including the two Dolby-certified Atmos rooms — have new Genelec SAM studio monitors. They are “smart” speakers that are self-tuning. We can run some test tones and in five minutes the rooms are perfectly tuned. We have custom tunings set up for 5.1 and Atmos. We can adjust the tuning via computer, and the speakers have built-in DSP, so we don’t have to rely on external systems.

Another cool technology that we are using is Dante, which is part of the Avid MTRX interface. Dante is basically audio-over-IP, or audio-over-Cat6. It essentially replaced our AES router. We were one of the first facilities in San Francisco to have a full audio AES router, and it was very strong for us at the time. It was a 64×64 stereo-paired AES router. It has been replaced by the MTRX interface box, which has, believe it or not, a three-inch by two-inch card that handles 64×64 routing per room. So each room now has the 64×64 routing capacity that once served the entire facility.

We use Dante to route secondary audio, like our ISDN and web-based IP communication devices. We can route signals from room to room and over the web securely. It’s seamless, and it comes up literally into your computer. It’s amazing technology. The other day, I did a music session and used a 96K sample rate, which is very high. The quality of the headphone mix was astounding. Everyone was happy, and it took just one quick setting and we were off and running. The sound is fantastic, with no noise and no latency problems. It’s super-clean, super-fast and easy to use.

What about video monitoring?
We have 4K monitors and 4K projection in all the rooms via Sony XBR 55A1E Bravia OLED monitors, Sony VPL-VW885ES True 4K Laser Projectors and a DLP 4K550 projector. Our clients appreciate the high-quality images and the huge projection screens.

Sound Lounge ups Becca Falborn to EP 

New York’s Sound Lounge, an audio post house that provides sound services for advertising, television and feature films, has promoted Becca Falborn to executive producer.

In her new role, Falborn will manage the studio’s advertising division and supervise its team of producers. She will also lead client relations and sales. Additionally, she will manage Sound Lounge Everywhere, the company’s remote sound services offering, which currently operates in Boston and Boulder, Colorado.

“Becca is a smart, savvy and passionate producer, qualities that are critical to success in her new role,” said Sound Lounge COO and partner Marshall Grupp. “She has developed an excellent rapport with our team of mixers and clients and has consistently delivered projects on time and on budget, even under the most challenging circumstances.”

Falborn joined Sound Lounge in 2017 as a producer and was elevated to senior producer last year. She has produced voiceover recordings, sound design, and mixing for many advertising projects, including seven out of the nine spots produced by Sound Lounge that debuted during this year’s Super Bowl telecast.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including past positions with the post house Nice Shoes and the marketing agency Hogarth Worldwide.

Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

Peaks or valleys in that line indicate unwanted boosts or cuts at certain frequencies, and there is a good reason you don’t want them in your monitoring system. If your speakers have peaks from the hundred-cycle mark on down, you get boominess. At 250 to 350 cycles you get mud. At around a thousand cycles you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.

Before

After

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarworks’ calibration mic and software come in. They give you a way to sonically flatten your room by taking a speaker measurement. This produces a response chart based upon the acoustics of your room. You apply the correction using the plugin in your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.
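Sonarworks’ actual algorithm is proprietary, but the underlying idea of room correction is simple to sketch: invert the measured response, and don’t try to boost your way out of deep room nulls. Here is a minimal, hypothetical version in Python (the clamp values are my assumptions, not Sonarworks’ numbers):

```python
import numpy as np

def correction_curve(measured_db, max_boost_db=6.0, max_cut_db=12.0):
    """Given a measured room response in dB per frequency band, return
    the inverse EQ that flattens it. Boost is clamped because deep,
    narrow dips are usually room cancellations that EQ cannot fill."""
    target_db = np.median(measured_db)       # aim for the typical level
    correction = target_db - measured_db     # peaks become cuts, dips become boosts
    return np.clip(correction, -max_cut_db, max_boost_db)

# A room peak becomes roughly the same-sized cut; a -15 dB null is only
# partially filled because of the boost clamp:
measured = np.array([0.0, 4.0, -2.0, -15.0])
print(correction_curve(measured))   # -> [-1. -5.  1.  6.]
```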

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since there are over 30,000 locations that use Sonarworks, you can send out your finished mix with the Sonarworks plugin removed; the receiving room will have different acoustics and its own calibration setting. The mastering lab you use will then be hearing your mix on its own Sonarworks acoustically flat system… just as you mixed it.

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “Review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed at the difference that the Sonarworks software made. It opened my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on my Mac Pro 6-core trash can running macOS High Sierra with 64GB of RAM and 12GB of RAM on the D700 video cards; a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and CFast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Komplete Kontrol S25 keyboard; and a Focusrite Clarett 4Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.

After fire, SF audio house One Union is completely rebuilt

San Francisco-based audio post house One Union Recording Studios has completed a total rebuild of its facility. It features five all-new, state-of-the-art studios designed for mixing, sound design, ADR, voice recording and other sound work.

Each studio offers Avid/Euphonix digital mixing consoles, Avid MTRX interface systems, the latest Pro Tools Ultimate software, and robust monitoring and signal processing gear. All studios have dedicated, large voice recording booths. One is certified for Dolby Atmos sound production. The facility’s infrastructure and central machine room are also all new.

One Union began its reconstruction in September 2017 in the aftermath of a fire that affected the entire facility. “Where needed, we took the building back to the studs,” says One Union president/owner John McGleenan. “We pulled out, removed and de-installed absolutely everything and started fresh. We then rebuilt the studios and rewired the whole facility. Each studio now has new consoles, speakers, furniture and wiring, and all are connected to new machine rooms. Every detail has been addressed and everything is in its proper place.”

During the 18 months of reconstruction, One Union carried on operations on a limited basis while maintaining its full staff. That included its team of engineers, Joaby Deal, Eben Carr, Andy Greenberg, Matt Wood and Isaac Olsen, who worked continuously and remain in place.

Reconstruction was managed by LA-based Yanchar Design & Consulting Group. All five studios feature Avid/Euphonix System 5 digital audio consoles, Pro Tools 2018 and Avid MTRX with Dante interface systems. Studio 4 adds Dolby Atmos capability with a full Atmos Production Suite as well as an Atmos RMU. Studio 5, the facility’s largest recording space, has two MTRX systems, with a total of more than 240 analog, MADI and Dante outputs (256 inputs), integrated with a nine-foot Avid/Euphonix console. It also features a 110-inch retractable projection screen in the control room and a 61-inch playback monitor in its dedicated voice booth. Among other things, the central machine room includes a 300TB LTO archiving system.

John McGleenan

The facility was also rebuilt with an eye toward avoiding production delays. “All of the equipment is enterprise-grade and everything is redundant,” McGleenan notes. “The studios are fed by a dual power supply and each is equipped with dual devices. If some piece of gear goes down, we have a redundant system in place to keep going. Additionally, all our critical equipment is hot-swappable. Should any component experience a catastrophic failure, it will be replaced by the manufacturer within 24 hours.”

McGleenan adds that redundancy extends to broadband connectivity. To avoid outages, the facility is served by two 1Gig fiber optic connections provided by different suppliers. WiFi is similarly available through duplicate services.

One Union Recording was founded by McGleenan, a former advertising agency executive, in 1994 and originally had just one sound studio. More studios were soon added as the company became a mainstay sound services provider to the region’s advertising industry.

In recent years, the company has extended its scope to include corporate and branded media, television, film and games, and built a client base that extends across the country and around the world.

Recent work includes commercials for Mountain Dew and carsharing company Turo, the television series Law and Order SVU and Grand Hotel, and the game The Grand Tour.

Providing audio post for Three Identical Strangers documentary

By Randi Altman

It is a story that those of us who grew up in the New York area know well. Back in the ‘80s, triplet brothers separated at birth were reunited, after two of them attended the same college within a year of each other — with one being confused for the other. A classmate figured it out and their story was made public. Enter brother number three.

It’s an unbelievable story that at the time was considered to be a heart-warming tale of lost brothers — David Kellman, Bobby Shafran and Eddy Galland — who found each other again at the age of 19. But heart-warming turned heart-breaking when it was discovered that the triplets were part of a calculated, psychological research project. Each brother was intentionally placed in different levels of economic households, where they were “checked in on” over the years.

L-R: Chad Orororo, Nas Parkash and Kim Tae Hak

Last year, British director Tim Wardle told the story in his BAFTA-nominated documentary, Three Identical Strangers, produced by Raw TV. For audio post production, Wardle called on dialogue editor and re-recording mixer Nas Parkash, sound effects editor Kim Tae Hak and Foley and archive FX editor Chad Orororo, all from London-based post house Molinare. The trio was nominated for an MPSE Award earlier this year for their work on the film.

We recently reached out to the team to ask about workflow on this compelling work.

When you first started on Three Identical Strangers, did you realize then how powerful a film it was going to be?
Nas Parkash: It was after watching the film for the first time that we realized it was going to be a seminal film. It’s an outrageous story — the likes of which we hadn’t come across before. We as a team have been fortunate to work on a broad range of documentary features, but this one has stuck out, probably because of its unpredictability and sheer number of plot twists.

Chad Orororo: I agree. It was quite an exciting moment to watch an offline cut and instantly know that it was going to be a phenomenal project. The great thing about having this reaction was that the pressure was fused with excitement, which is always a win-win. Especially as the storytelling had so much charisma.

Kim Tae Hak: When the doc was first mentioned, I had no idea about their story, but soon after viewing the first cut I realized that this would be a great film. The documentary is based on an unbelievable true story — it evokes a lot of mixed feelings, and I wanted to ensure that every single sound effect element reflected those emotions and actions.

How early did you get involved in the project?
Tae Hak: I got to start working on the SFX as soon as the picture was locked and available.

Parkash: We had a spotting session a week before we started, with director Tim Wardle and editor Michael Harte, where we watched the film in sections and made notes. This helped us determine what the emotion in each scene should be, which is important when you’ve come to a film cold. They had been living with the edit, evolving it over months, so it was important to get up to speed with their vision as quickly as possible.

Courtesy of Newsday

Documentary audio often comes from many different sources and in varying types of quality. Can you talk about that and the challenges related to that?
Parkash: The audio quality was pretty good. The interview recordings were clean and on mic. We had two mics for every interview, but I went with the boom every time, as it sounded nicer, albeit more ambient, but with atmospheres that bedded in nicely.

Even the archive clips, such as those from the Phil Donahue Show, were good. Funnily enough, the more recent archive material is, the worse it tends to sound. 1970s material on the whole seems to have been preserved quite well, whereas stuff from the 1990s can be terrible.

Any technical challenges on the project?
Parkash: The biggest challenge for me was mixing in commercial music with vocals underneath interview dialogue. It had to be kept at a loud enough level to retain impact in the cinema, but low enough that it didn’t fight with the interview dialogue. The biggest deliberation was to what degree should we use sound effects in the drama recon — do we fully fill or just go with dialogue and music? In the end it was judged on a case-by-case basis.

How was Foley used within the doc?
Orororo: The Foley covered everything that you see on screen — all of the footsteps, clothing movement, shaving and breathing. You name it. It’s in there somewhere. My job was to add a level of subtle actuality, especially during the drama reconstruction scenes.

These scenes took quite a bit of work to get right because they had to match the mood of the narration. For example, the coin spillage during the telephone box scene required a specific number of coins on the right surface. It took numerous takes to get right because you can’t exactly control how objects fall, and the texture also changes depending on the height from which you drop an object. So generally, there’s a lot more to consider when recording Foley than people may assume.

Unfortunately, there were a few scenes where Foley was completely dropped (mainly on the archive material), but this is something that usually happens. The shape of the overall mix always wins out over the individual elements that contribute to it. Teamwork makes the dream work, as they say, and I really think that showed in the final result.

Parkash: We did have sync sound recorded on location, but we decided it would be better to re-record at a higher fidelity. Some of it was noisy or didn’t sound cinematic enough. When it’s cleaner sound, you can make more of it.

What about the sound effects? Did you use a library or your own?
Parkash: Kim has his own extensive sound effects library. We also have our own personal ones, plus Molinare’s. Anything we can’t find, we’ll go out and record. Kim has a Zoom recorder, and his breathing has been featured on many films now (laughs).

Tae Hak: I mainly used my own SFX library. I’m always building up my own FX library, which I can apply instantly to any type of motion picture. I then tweak by applying various software plugins, such as Pitch ’n Time Pro, Altiverb and many more.

As a brief example of how I completed the sound design for the opening title: the first thing I did was look specifically for realistic heartbeats of six-month infants. After successfully collecting some natural heartbeats, I blended them with other synthetic elements, varying the pitch slightly between them (for the three babies) and applying effects such as chorus and reverb so each heartbeat had a slightly different texture. It was a bit tricky to make them distinct but still the same (like identical triplets).

The three heartbeats were panned across the front three speakers in order to create as much separation and clarity as possible. Once I was happy with the heartbeats as a foundation, I added other elements, such as underwater and ambiguous liquid sounds. It was important for this sequence to build in a dramatic way, starting as mono and gradually filling the 5.1 space before a hard cut into the interview room.
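Tae Hak’s real chain ran through plugins like Pitch ’n Time Pro plus chorus and reverb, but the core layering move is easy to sketch. Below is a hypothetical Python illustration: three near-identical copies of one heartbeat recording, slightly detuned and spread across the front left, center and right channels. The detune amounts and the crude resampling pitch shift are my assumptions, not his settings.

```python
import numpy as np
from scipy.signal import resample

def detune(x, semitones):
    """Crude pitch shift by resampling; the duration changes slightly
    too, which only adds to the 'same but different' effect here."""
    factor = 2.0 ** (-semitones / 12.0)
    return resample(x, int(len(x) * factor))

def triplet_heartbeats(heartbeat):
    """Three near-identical copies of one heartbeat, slightly detuned
    and laid across the front three speakers (columns: L, C, R)."""
    voices = [detune(heartbeat, s) for s in (-0.3, 0.0, 0.3)]
    n = min(len(v) for v in voices)
    out = np.zeros((n, 3))
    for ch, v in enumerate(voices):
        out[:, ch] = v[:n]
    return out
```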

Can you talk about working with director Tim Wardle?
Tae Hak: Tim was fantastic and very supportive throughout the project. As an FX editor, I had less face-to-face time with him than Nas did, but we had a spotting session together before the first day of work, and we also talked about our sound design approach over the phone, especially for the opening title and the aforementioned sound of the triplets’ heartbeats.

Orororo: Tim was great to work with! He’s a very open-minded director who also trusts in the talent that he’s working with, which can be hard to come by especially on a project as important as Three Identical Strangers.

Parkash: Tim and editor Michael Harte were wonderful to work with. The best aspects of working in this industry are the people you meet and the friendships you make. They are both cinephiles, who cited numerous other films and directors in order to guide us through the process — “this scene should feel like this scene from such and such movie.” But they were also open to our suggestions and willing to experiment with different approaches. It felt like a collaboration, and I remember having fun in those intense few weeks.

How much stock footage versus new footage was shot?
Parkash: It was all pretty much new — the sit-down interviews, drama recon and the GVs (b-roll). The archive material was obviously cleared from various sources. The home movie footage came mute, so we rebuilt the sound, but upon review decided that it was better left mute. It tends to change the audience’s perspective of the material depending on whether you hear the sound or not. Without it, it feels more like you’re looking upon the subjects, as opposed to being with them.

What kind of work went into the new interviews?
Parkash: EQ, volume automation, de-essing, noise reduction, de-reverb, reverb, mouth de-click — iZotope RX 6 software, basically. We’ve become quite reliant upon this software for unifying our source material into something consistent and achieving a quality good enough to stand up in the cinema, at theatrical level.

What are you all working on now at Molinare?
Tae Hak: I am working on a project about football (soccer for Americans) as the FX editor. I can’t name it yet, but it’s a six-episode series for Amazon Prime. I’m thoroughly enjoying the project, as I am a football fan myself. It’s filmed across the world, including Russia where the World Cup was held last year. The story really captures the beautiful game, how it’s more than just a game, and its impact on so much of the global culture.

Parkash: We’ve just finished a series for Discovery ID, about spouses who kill each other. I’m also working on the football series that Kim mentioned for Amazon Prime. So, murder and footy! We are lucky to work on such varied, high-quality films, one after another.

Orororo: Surprisingly, I’m also working on this football series (smiles). I work with Nas fairly often and we’ve just finished up on an evocative, feature-length TV documentary that follows personal accounts of people who have survived massacre attacks in the US.

Molinare has revered creatives everywhere you look, and I’m lucky enough to be working with one of the sound greats — Greg Gettens — on a new HBO Channel 4 documentary. However, it’s quite secret so I can’t say much more, but keep your eyes peeled.

Main Image: Courtesy of Neon


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Hulu’s PEN15: Helping middle school sound funny

By Jennifer Walden

Being 13 years old once was hard enough, but the creators of the Hulu series PEN15 have relived that uncomfortable age — braces and all — a second time for the sake of comedy.

James Parnell

Maya Erskine and Anna Konkle might be in their 30s, but they convincingly play two 13-year-old BFFs journeying through the perils of 7th grade. And although they’re acting alongside actual teenagers, it’s not Strangers With Candy grown-up-interfacing-with-kids kind of weird — not even during the “first kiss” scene. The awkwardness comes from just being 13 and having those first-time experiences of drinking, boyfriends, awkward school dances and even masturbation (the topic of focus in Episode 3). Erskine, Konkle and co-showrunner Sam Zvibleman hilariously capture all of that cringe-worthy coming-of-age content in their writing on PEN15.

The show is set in the early 2000s, a time when dial-up Internet and the Sony Discman were prevailing technology. The location is a nondescript American suburb that is relatable in many ways to many people, and that is one way the show transports the audience back to their early teenage years.

At Monkeyland Audio in Glendale, California, supervising sound editor/re-recording mixer James Parnell and his team worked hard to capture that almost indescribable nostalgic essence that the showrunners were seeking. Monkeyland was responsible for all post sound editorial, including Foley, ADR, final 5.1 surround mixing and stereo fold-downs for each episode. Let’s find out more from Parnell.

I happened to watch Episode 3, “Ojichan,” with my mom, and it was completely awkward. It epitomized the growing pains of the teenage years, which is what this series captures so well.
Well, that was an awkward one to mix as well. Maya (Erskine) and Anna (Konkle) were in the room with me while I was mixing that scene! Obviously, the show is an adult comedy that targets adults. We all ended up joking about it during the mix — especially about the added Foley sound that was recorded.

The beauty of this show is that it has the power to take something that might otherwise be thought of as, perhaps, inappropriate for some, and humanize it. All of us went through that period in our lives and I would agree that the show captures that awkwardness in a perfect and humorous way.

The writers/showrunners also star. I’m sure they were equally involved with post as well as other aspects of the show. How were they planning to use sound to help tell their story?
Parnell: In terms of the post schedule, I was brought on very early. We were doing spotting sessions to pre-locked picture for Episodes 1 and 3. From the get-go, they were very specific about how they wanted the show to sound. I got the vibe that they were going for that Degrassi/Afterschool Special feeling but kept in the year 2000 — not the original Degrassi of the early ‘90s.

For example, they had a very specific goal for what they wanted the school to sound like. The first episode takes place on the first day of 7th grade and they asked if we could pitch down the school bell so it sounds clunky and have the hallways sound sparse. When class lets out, the hallway should sound almost like a relief.

Their direction was more complex than “see a school hallway, hear a school hallway.” They were really specific about what the school should sound like and specific about what the girls’ neighborhoods should sound like — Anna’s family in the show is a bit better off than Maya’s family so the neighborhood ambiences reflect that.

What were some specific sounds you used to capture the feel of middle school?
The show is set in 2000, and they had some great visual cues as throwbacks. In Episode 4 “Solo,” Maya is getting ready for the school band recital and she and her dad (a musician who’s on tour) are sending faxes back and forth about it. So we have the sound of the fax machine.

We tried to support the amazing recordings captured by the production sound team on-set by adding in sounds that lent a non-specific feeling to the school. This doesn’t feel like a California middle school; it could be anywhere in America. The same goes for the ambiences. We weren’t using California-specific birds. We wanted it to sound like Any Town, USA so the audience could connect with the location and the story. Our backgrounds editor G.W. Pope did a great job of crafting those.

For Episode 7, “AIM,” the whole thing revolves around Maya and Anna’s AOL instant messenger experience. The creatives on the show were dreading that episode because all they were working with was temp sound. They had sourced recordings of the AOL sound pack to drop into the video edit. The concern was how some of the Hulu execs would take it because the episode mostly takes place in front of a computer, while they’re on AOL chatting with boys and with each other. Adding that final layer of sound and then processing on the mix stage helped what might otherwise feel like a slow edit and a lagging episode.

The dial-up sounds, AOL sign-on sounds and instant messenger sounds we pulled from library. This series had a limited budget, so we didn’t do any field recordings. I’ve done custom recordings for higher-budget shows, but on this one we were supplementing the production sound. Our sound designer on PEN15 was Xiang Li, and she did a great job of building these scenes. We had discussions with the showrunners about how exactly the fax and dial-up should sound. This sound design is a mixture of Xiang Li’s sound effects editorial with composer Leo Birenberg’s score. The song is a needle drop called “Computer Dunk.” Pretty cool, eh?

For Episode 4, “Solo,” was the middle school band captured on-set? Or was that recorded in the studio?
There was production sound recorded but, ultimately, the music was recorded by the composer Leo Birenberg. In the production recording, the middle school kids were actually playing their parts but it was poorer than you’d expect. The song wasn’t rehearsed so it was like they were playing random notes. That sounded a bit too bad. We had to hit that right level of “bad” to sell the scene. So Leo played individual instruments to make it sound like a class orchestra.

In terms of sound design, that was one of the more challenging episodes. I got a day to mix the show before the execs came in for playback. When I mixed it initially, I mixed in all of Leo’s stems — the brass, percussion, woodwinds, etc.

Anna pointed out that the band needed to sound worse than how Leo played it, more detuned and discordant. We ended up stripping out instruments and pitching down parts, like the flute part, so that it was in the wrong key. It made the whole scene feel much more like an awkward band recital.

During the performance, Maya improvises a timpani solo. In real life, Maya’s father is a professional percussionist here in LA, and he hooked us up with a timpani player who re-recorded the part, matching note-for-note what she played on-screen. It sounded really good, but we ended up sticking with production sound because it was Maya’s unique performance that made that scene work. So even though we went to the extreme of hiring a professional percussionist to re-perform the part, we ultimately decided to stick with production sound.

What were some of the unique challenges you had in terms of sound on PEN15?
On Episode 3, “Ojichan,” Maya is going through this process of “self-discovery” and she’s disconnecting her friendship from Anna. There’s a scene where they’re watching a video in class and Anna asks Maya why she missed the carpool that morning. That scene was like mixing a movie inside a show. I had to mix the movie, then futz that, and then mix that into the scene. On the close-ups of the 4:3 old-school television the movie would be less futzed and more like you’re in the movie, and then we’d cut back to the girls and I’d have to futz it. Leo composed 20 different stems of music for that wild life video. Mixing that scene was challenging.
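For readers unfamiliar with the term, “futzing” means degrading audio so it reads as coming from a small speaker, here a 4:3 TV. Parnell doesn’t name the tool he used; the snippet below is just a hypothetical band-pass sketch of the idea in Python, with the blend amount automated against the cuts between the TV close-ups and the shots of the class. The band edges and filter order are arbitrary choices of mine.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def futz(x, sr=48000, lo=300.0, hi=3500.0, amount=1.0):
    """Band-pass the signal to mimic a small TV speaker, then blend
    between full-range and futzed. amount near 1.0 suits cutaways to
    the kids in the room; amount near 0 suits close-ups 'inside' the
    movie they are watching."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    narrow = sosfilt(sos, x)
    return (1.0 - amount) * x + amount * narrow
```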

Then there was the Wild Things film in Episode 8, “Wild Things.” A group of kids go over to Anna’s boyfriend’s house to watch Wild Things on VHS. That movie was risqué, so if you had an older brother or older cousin, then you might have watched it in middle school. That was a challenging scene because everyone had a different idea of how the den should sound, how futzed the movie dialogue should be, how much of the actual film sound we could use, etc. There was a specific feel to the “movie night” that the producers were looking for. The key was mixing the movie into the background and bringing the awkward flirting/conversation between the kids forward.

Did you have a favorite scene for sound?
The season finale is one of the bigger episodes. There’s a middle school dance and so there’s a huge amount of needle-drop songs. Mixing the music was a lot of fun because it was a throwback to my youth.

Also, the "AIM" episode ended up being fun to work on — even though everyone was initially worried about it. I think the sound really brought that episode to life. Sound lent itself to that episode more than any other element.

The first episode was fun too. It was the first day of school and we see the girls getting ready at their own houses, getting into the carpool and then taking their first step, literally, together toward the school. There we dropped out all the sound and just played the Lit song "My Own Worst Enemy," which gets cut off abruptly when someone on rollerblades hops in front of the girls. Then they talk about one of their classmates who grew boobs over the summer, and we have a big sound design moment when that girl turns around, and then there's another needle-drop track, "Get the Job Done." It's all specifically choreographed with sound.

The series' music supervisor, Tiffany Anders, did an amazing job of picking out the big needle-drops. We have a Nelly song for the middle school dance, we have songs from The Cranberries and Lit, and a whole bunch more that fit the era and age group. Tiffany did fantastic work and was great to work with.

What were some helpful sound tools that you used on PEN15?
Our dialogue editor's a huge fan of iZotope's RX 7, as am I. Here at Monkeyland, we're on the beta-testing team for iZotope. The products they make are amazing. It's kind of like voodoo. You can take a noisy recording and, with a click of a button, pretty much erase the issues and save the dialogue. Within that tool palette, there are a lot of ways to fix a whole host of problems.

I'm a huge fan of Audio Ease's Altiverb, which came in handy on the season finale. In order to create the feeling of being in a middle school gymnasium, I ran the needle-drop songs through Altiverb. There are some amazing reverb settings that let you control the levels going specifically to the surround speakers. You can literally EQ the reverb, taking out the 200Hz that would otherwise make the music sound boomier than desired.

The lobby at Monkeyland is a large cinder-block room with super-high ceilings. It has acoustics similar to a middle school gymnasium. So, we captured a few impulse responses (IR), and I used those in Altiverb on a few lines of dialogue during the school dance in the season finale. I used that on a few of the songs as well. Like, when Anna’s boyfriend walks into the gym, there was supposed to be a Limp Bizkit needle-drop but that ended up getting scrapped at the last minute. So, instead there’s a heavy-metal song and the IR of our lobby really lent itself to that song.
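
Altiverb does all of this inside a plugin, but the underlying technique, convolving a dry recording with a captured impulse response, can be sketched in a few lines of Python. The file names below are hypothetical, both files are assumed to be mono WAVs at the same sample rate, and this illustrates convolution reverb in general rather than Altiverb's actual processing:

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, sr = sf.read('dialogue_line.wav')             # hypothetical dry dialogue
    ir, ir_sr = sf.read('lobby_impulse_response.wav')  # hypothetical captured IR
    assert sr == ir_sr, 'resample the IR to the dialogue rate first'

    wet = fftconvolve(dry, ir)   # stamp the room onto the dry line
    wet /= np.abs(wet).max()     # normalize to avoid clipping

    # Blend wet against dry, like a plugin's wet/dry knob.
    mix = 0.7 * dry + 0.3 * wet[:len(dry)]
    sf.write('dialogue_in_gym.wav', mix, sr)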

The show was a simple single-card Pro Tools HD mix — 256 tracks max. I'm a huge fan of Avid and the new Pro Tools 2018. My dialogue chain features Avid's Channel Strip; McDSP SA-2; Waves De-Esser (typically bypassed unless needed); McDSP 6030 Leveling Amplifier, which does a great job of handling extremely loud dialogue and keeping it from distorting; and Waves WNS.

On staff, we have a fabulous ADR mixer named Jacob Ortiz. The showrunners were really hesitant to record ADR, and whenever we could salvage the production dialogue we did. But when we needed ADR, Jacob did a great job of cueing that, and he uses the Sound In Sync toolkit, including EdiCue, EdiLoad and EdiMarker.

Any final thoughts you’d like to share on PEN15?
Yes! Watch the show. I think it’s awesome, but again, I’m biased. It’s unique and really funny. The showrunners Maya, Anna and Sam Zvibleman — who also directed four episodes — are three incredibly talented people. I was honored to be able to work with them and hope to be a part of anything they work on next.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Spider-Man Into the Spider-Verse: sound editors talk ‘magical realism’

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse isn’t your ordinary Spider-Man movie, from its story to its look to its sound. The filmmakers took a familiar story and turned it on its head a bit, letting audiences know that Spider-Man isn’t just one guy wearing that mask… or even a guy, or even from this dimension.

The film focuses on Miles Morales, a teenager from Brooklyn, struggling with all things teenager while also dealing with the added stress of being Spider-Man.

Geoff Rubay

Audio played a huge role in this story, and we recently reached out to Sony supervising sound editors Geoff Rubay and Curt Schulkey to dig in a bit deeper. The duo recently won an MPSE Award for Outstanding Achievement in Sound Editing — Feature Animation… industry peers recognizing the work that went into creating the sound for this stylized world.

Let's find out more about the sound process on Spider-Man: Into the Spider-Verse, which won the Academy Award for Best Animated Feature.

What do you think is the most important element of this film’s sound?
Curt Schulkey: It is fun, it is bold, it has style and it has attitude. It has energy. We did everything we could to make the sound as stylistic and surprising as the imagery. We did that while supporting the story and the characters, which are the real stars of the movie. We had the opportunity to work with some incredibly creative filmmakers, and we did our best to surprise and delight them. We hope that audiences like it too.

Geoff Rubay: For me, it’s the fusion of the real and the fantastic. Right from the beginning, the filmmakers made it clear that it should feel believable — grounded — while staying true to the fantastic nature of the visuals. We did not hold back on the fantastic side, but we paid close attention to the story and made sure we were supporting that and not just making things sound awesome.

Curt Schulkey

How early did your team get involved in the film?
Rubay: We started on an SFX pre-design phase in late February for about a month. The goal was to create sounds for the picture editors and animators to work with. We ended up doing what amounted to a temp mix of some key sequences. One of the things we explored was the "Super Collider." We only worked on the first sequence for the collider, but the idea was that the material could be recycled by the picture department and used in the early temp mixes until the final visuals arrived.

Justin Thompson, the production designer, was very generous with his time and resources early on. He spent several hours showing us work-in-progress visuals and concept art so that we would know where visuals would eventually wind up. This was invaluable. We were able to work on sounds long before we saw them as part of the movie. In the temp mix phase, we had to hold back or de-emphasize some of those elements because they were not relevant yet. In some cases, the sounds would not work at all with the storyboards or un-lit animation that was in the cut. Only when the final lit animation showed up would those sounds make sense.

Schulkey: I came onto the film in May, about 9.5 months before completion. We were neck-deep in changes throughout our work. We were involved in the creation of sounds from the very first studio screening, through previews and temp mixes, right on to the end of the final mix. This sometimes gave us the opportunity to create sounds in advance of the images, or to influence the development of imagery and timing. Because they were so involved in building the movie, the directors did not always have time to discuss their needs with us, so we would speculate on what kinds of sounds they might need or want for events that they were molding visually. As Geoff said, the time that Justin Thompson spent with us was invaluable. The temp-mix process often gave us the opportunity to audition creations for the directors and producers.

What sort of direction did you receive from the directors?
Schulkey: Luckily, because of our previous experiences with producers Chris Miller and Phil Lord and editor Bob Fisher, we had a pretty good idea of their tastes and sensitivities, so our first attempts were usually pointed in the right direction. The three directors — Bob Persichetti, Peter Ramsey and Rodney Rothman — also provided input, so we were rich with direction.

As with all movies, we had hundreds of side discussions with the directors along the way about details, nuances, timing and so on. I think that the most important overall direction we got from the filmmakers was related to the dynamic arc of the movie. They wanted the soundtrack to be forceful but not so much that it hurt. They wanted it to breathe — quiet in some spots, loud in others, and they wanted it to be fun. So, we had to figure out what “fun” sounds like.

Rubay: This will sound strange, but we never did a spotting session for the movie. We just started our work and got feedback when we showed sequences or did temp mixes. Phil called when we started the pre-design phase and gave us general notes about tone and direction. He made it clear he did not want us to hold back, but he wanted to keep the film grounded. He explained the importance of the various levels of technology of different characters.

Peni Parker is from the 31st century, so her robot sidekick needed to sound futuristic. Scorpion is a pile of rusty metal. Prowler’s tech is appropriated from his surroundings and possibly with some help from Kingpin. We discussed the sound of previous Spider-Man movies and asked how much we needed to stay true to established sounds from those films. The direction was “not at all unless it makes sense.” We endeavored to make Peter Parker’s web-slings sound like the previous films. After that, we just “went for it.”

How was working on a film like this different than working on something live-action? Did it allow you more leeway?
Schulkey: In a live-action film, most or all of the imagery is shot before we begin working. Many aspects of the sound are already stamped in. On this film, we had a lot more creative involvement. At the start, a good percentage of the movie was still in storyboards, so if we expanded or contracted the timing of an event, the animators might adjust their work to fit the sounds. As the visual elements developed, we began creating layers of sound to support them.

For me, one of the best parts of an animated film’s soundtrack is that no sounds are imposed by the real world, as is often the case in live-action productions. In live-action, if a dialogue scene is shot on a city street in Brooklyn, there is a lot of uninteresting traffic noise built into the dialogue recordings.

Very few directors (or actors) want to lose the spontaneity of the original performance by re-recording dialogue in a studio, so we tweak, clean and process the dialogue to lessen unwanted noise, sometimes diminishing the quality of the recording. We sometimes make compromises with sound effects and music to support a not-so-ideal dialogue track. In an animated film, we don’t have that problem. Sound effects and ambiences can shine without getting in the way. This film has very quiet moments, which feel very natural and organic. That’s a pleasure to have in the movie.

Rubay: Everything Curt said! You have quite a bit of freedom because there is no “production track.” On the flip side, every sound that is added is just that — added. You have to be aware of that; more is not always better.

Spider-Man: Into the Spider-Verse is an animated film with a unique visual style. At times, we played the effects straight, as we might in a live-action picture, to ground it. Other times, we stripped away any notion of “reality.” Sometimes we would do both in the same scene as we cut from one angle to the next. Chris and Phil have always welcomed hard right angle turns, snapping sounds off on a cut or mixing and matching styles in close proximity. They like to do whatever supports the story and directs the audience. Often, we use sound to make your eye notice one thing or look away from another. Other times, we expand the frame, adding sounds outside of what you can see to further enhance the image.

There are many characters in the film. Can you talk about helping to create personality for each?
Rubay: There was a lot of effort made to differentiate the various “spider people” from each other. Whether it was through their web-slings or inherent technology, we were directed to give as much individual personality as possible to each character. Since that directive was baked in from the beginning, every department had it in mind. We paid attention to every visual cue. For example, Miles wears a particular pair of shoes — Nike Air Jordan 1s. My son, Alec Rubay, who was the Foley supervisor, is a real sneakerhead. He tracked down those shoes — very rare — and we recorded them, capturing every sound we could. When you hear Miles’s shoes squeak, you are hearing the correct shoes. Those shoes sound very specific. We applied that mentality wherever possible.

Schulkey: We took the opportunity to exploit the fact that some characters are from different universes in making their sound signatures different from one another. Spider-Ham is from a cartoon universe, so many of the sounds he makes are cartoon sounds. Sniffles, punches, swishes and other movements have a cartoon sensibility. Peni Parker, the anime character, is in a different sync than the rest of the cast, and her voice is somewhat more dynamic. We experimented with making Spider-Man Noir sound like he was coming from an old movie soundtrack, but that became obnoxious, so we abandoned the idea. Nicolas Cage was quite capable of conveying that aspect of the character without our help.

Because we wanted to ground characters in the real world, a lot of effort was put into attaching their voices to their images. Sync, of course, is essential, as is breathing. Characters in most animated films don’t do much breathing, but we added a lot of breaths, efforts and little stutters to add realism. That had to be done carefully. We had a very special, stellar cast and we wanted to maintain the integrity of their performances. I think that effort shows up nicely in some of the more intimate, personal scenes.

To create the unique look of this movie, the production sometimes chose to animate sections of the film "on twos." That means mouth movements change every other frame rather than every frame, so sync can be harder than usual to pinpoint. I worked closely with director Bob Persichetti to get the dialogue into its best possible sync, doing careful reviews and special adjustments, as needed, on all dialogue in the film.

The main character in this Spider-Man thread is Miles Morales, a brilliant African-American/Puerto Rican Brooklyn teenager trying to find his way in his multi-cultural world. We took special care to show his Puerto Rican background with added Spanish-language dialogue from Miles and his friends. That required dialect coaches, special record sessions and thorough review.

The group ADR required a different level of care than most films. We created voices for crowds, onlookers and the normal “general” wash of voices for New York City. Our group voices covered many very specific characters and were cast in detail by our group leader, Caitlin McKenna. We took a very realistic approach to crowd activity. It had to be subtler than most live-action films to capture the dry nonchalance of Miles Morales’s New York.

Would you describe the sounds as realistic? Fantastical? Both?
Schulkey: The sounds are fantastically realistic. For my money, I don't want the sounds in my movie to seem fantastical. I see our job as creating an illusion for the audience — the illusion that they are hearing what they are seeing, and that what they are seeing is real. This is an animated film, where nothing is actually real but everything has its own reality. The sounds need to live in the world we are watching. When something fantastical happens in the movie's reality, we had to support that illusion, and we sometimes got to do fun stuff. I don't mean to say that all sounds had to be realistic.

For example, we surmised that an actual supercollider firing up below the streets of Brooklyn would sound like 10,000 computer fans. Instead, we put together sounds that supported the story we were telling. The ambiences were as authentic as possible, including subway tunnels, Brooklyn streets and school hallways. Foley here was a great tool for giving reality to animated images. When Miles walks into the cemetery at night, you hear his footsteps on snow and sidewalk, gentle cloth movements and other subtle touches. This adds to a sense that he’s a real kid in a real city. Other times, we were in the Spider-Verse and our imagination drove the work.

Rubay: The visuals led the way, and we did whatever they required. There are some crazy things in this movie. The supercollider is based on a real thing so we started there. But supercolliders don’t act as they are depicted in the movie. In reality, they sound like a giant industrial site, fans and motors, but nothing so distinct or dramatic, so we followed the visuals.

Spider-sense is a kind of magical realism that supports, informs, warns, communicates, etc. There is no realistic basis for any of that, so we went with directions about feelings. Some early words of direction were “warm,” “organic,” “internal” and “magical.” Because there are no real sounds for those words, we created sounds that conveyed the emotional feelings of those ideas to the audience.

The portals that allow spider-people to move between dimensions are another example. Again, there was no real-world event to link to. We saw the visuals and assumed it should be a pretty big deal, real "force of nature" stuff. However, it couldn't simply be big. We took big, energetic sounds and glued them onto what we were seeing. Of course, sometimes people are talking at the same time, so we shifted the frequency center of the moment to clear space for the dialogue. As music is almost always playing, we had to look for opportunities within the spaces it left.


Can you talk about working on the action scenes?
Rubay: For me, when the action starts, the sound had to be really specific. There is dialogue for sure. The music is often active. The guiding philosophy for me at that point is not "Keep adding until there is nothing left to add"; rather, it's "We're done when there is nothing left to strip out." Busy action scene? Broom the backgrounds away. Usually, we don't even cut BGs in a busy action scene, but, if we do, we do so with a skeptical eye. How can we make it more specific? Also, I keep a keen eye on "scale." One wrong, small detail sound, no matter how cool or interesting, will get the broom if it throws off the scale. Sometimes everything might be sounding nice and big; impressive but not loud, just big. And then some small detail creeps in and spoils it. I am constantly looking out for that.

The "Prowler Chase" scene was a fun exploration. There are times where the music takes over and runs; we pull out every sound we can. Other times, the sound effects blow over everything. It is a matter of give and take. There is a truck/car/Prowler-motorcycle crash that turns into a suspended slo-mo moment. We had to decide which sounds to play where and when. Its stripped-down nature made it one of my favorite moments in the picture.

Can you talk about the multiple universes?
Rubay: The multiverse presented many challenges. It usually manifested itself as a portal or something we move between. The portals were energetic and powerful. The multiverse “place” was something that we used as a quiet place. We used it to provide contrast because, usually, there was big action on either side.

A side effect of the multiple universes interacting was a buildup or collision/overlap. When universes collide or overlap, matter from each tries to occupy the same space. Visually, this created some very interesting moments. We referred to the multi-colored prismatic-looking stuff as “Picasso” moments. The supporting sound needed to convey “force of nature” and “hard edges,” but couldn’t be explosive, loud or gritty. Ultimately, it was a very multi-layered sound event: some “real” sounds teamed with extreme synthesis. I think it worked.

Schulkey: Some of the characters in the movie are transported from another dimension into the dimension of the movie, but their bodies rebel, and from time to time their molecules try to jump back to their native dimension, causing "glitching." Using a combination of plug-ins, blending, editing and panning, we developed a signature sound that signaled glitching throughout the movie, applied individually for each iteration.

What stands out in your mind as the most challenging scenes audio wise?
Rubay: There is a very quiet moment between Miles and his dad when dad is on one side of the door and Miles is on the other. It's a very quiet, tender one-way conversation. When a movie gets that quiet, every sound counts. Every detail has to be perfect.

What about the Dolby Atmos mix? How did that enhance the film? Can you give a scene or two as an example?
Schulkey: This film was a native Atmos mix, meaning that the primary final mix was directly in the Atmos format, as opposed to making a 7.1 mix and then going back to re-mix sections using the Atmos format.

The native Atmos mix allowed us a lot more sonic room in the theater. This is an extremely complex and busy mix, heavily driven by dialogue. By moving the score out into the side and surround speakers — away from the center speaker — we were able to make the dialogue clearer and still have a very rich and exciting score. Sonic movement is much more effective in this format. When we panned sounds around the room, it felt more natural than in other formats.

Rubay: Atmos is fantastic. Being able to move sounds vertically creates so much space, so much interest, that might otherwise not be there. Also, the level and frequency response of the surround channels makes a huge difference.

You guys used Avid Pro Tools for editing, can you mention some other favorite tools you employed on this film?
Schulkey: The Delete key and the Undo key.

Rubay: Pitch 'n' Time, Envy, reverbs by Exponential Audio, and recording rigs and microphones of all sorts.

What haven’t I asked that’s important?
Our crew! Just in case anyone thinks this can be done by two people, it can’t.
– re-recording mixers Michael Semanick and Tony Lamberti
– sound designer John Pospisil
– dialogue editors James Morioka and Matthew Taylor
– sound effects editors David Werntz, Kip Smedley, Andy Sisul, Chris Aud, Donald Flick, Benjamin Cook, Mike Reagan and Ando Johnson
– Foley mixer Randy Singer
– Foley artists Gary Hecker, Michael Broomberg and Rick Owens

CAS and MPSE honor audio post pros and their work

By Mel Lambert

With a BAFTA win and high promise for the upcoming Oscar Awards, the sound team behind Bohemian Rhapsody secured a clean sweep at both the Cinema Audio Society (CAS) and Motion Picture Sound Editors (MPSE) ceremonies here in Los Angeles last weekend.

Paul Massey

The 55th CAS Awards also honored sound mixer Lee Orloff with a Cinema Audio Society Career Achievement Award, while director Steven Spielberg received its Cinema Audio Society Filmmaker Award. And at the MPSE Awards, director Antoine Fuqua accepted the 2019 Filmmaker Award, while supervising sound editor Stephen H. Flick secured the MPSE Career Achievement honor.

Re-recording mixer Paul Massey — accepting the CAS Award for Outstanding Sound Mixing Motion Picture-Live Action on behalf of his fellow dubbing mixers Tim Cavagin and Niv Adiri, together with production mixer John Casali — thanked Bohemian Rhapsody's co-executive producers and band members Roger Taylor and Brian May for "trusting me to mix the music of Queen."

The film topped a nominee field that also included A Quiet Place, A Star Is Born, Black Panther and First Man; for several years running, the CAS winner in the feature-film category has also gone on to secure the Oscar for sound mixing.

Isle of Dogs secured a CAS Award in the animation category, which also included Incredibles 2, Ralph Breaks the Internet, Spider-Man: Into the Spider-Verse and The Grinch. The sound-mixing team included original dialogue mixer Darrin Moore and re-recording mixers Christopher Scarabosio and Wayne Lemmer, together with scoring mixers Xavier Forcioli and Simon Rhodes and Foley mixer Peter Persaud.

Free Solo won a documentary award for production mixer Jim Hurst and re-recording mixers Tom Fleischman and Ric Schnupp, together with scoring mixer Tyson Lozensky, ADR mixer David Boulton and Foley mixer Joana Niza Braga.

Finally, American Crime Story: The Assassination of Gianni Versace (Part 1, "The Man Who Would Be Vogue"), The Marvelous Mrs. Maisel ("Vote For Kennedy, Vote For Kennedy") and Anthony Bourdain: Parts Unknown ("Bhutan") won CAS Awards within various broadcast sound categories.

Steven Spielberg and Bradley Cooper

The CAS Filmmaker Award was presented to Steven Spielberg by fellow director Bradley Cooper. This followed tributes from regular members of Spielberg’s sound team, including production sound mixer Ron Judkins plus re-recording mixers Andy Nelson and Gary Rydstrom, who quipped: “We spent so much money on Jurassic Park that [Steven] had to shoot Schindler’s List in black & white!”

“Through your talent, [sound editors and mixers] allow the audience to see with their ears,” Spielberg acknowledged, while stressing the full sonic and visual impact of a theatrical experience. “There’s nothing like a big, dark theater,” he stated. He added that he still believes that movie theaters are the best environment in which to fully enjoy his cinematic creations.

Upon receiving his Career Achievement Award from sound mixer Chris Noyes and director Dean Parisot, production sound mixer Lee Orloff acknowledged the close collaboration that needs to exist between members of the filmmaking team. “It is so much more powerful than the strongest wall you could build,” he stated, recalling a 35-year career that spans nearly 80 films.

Lee Orloff

Outgoing CAS president Mark Ulano presented the President’s Award to leading Foley mixer MaryJo Lang, while the CAS Student Award went to Anna Wozniewicz of Chapman University. Finalists included Maria Cecilia Ayalde Angel of Pontificia Universidad Javeriana, Bogota, Allison Ng of USC, Bo Pang of Chapman University and Kaylee Yacono of Savannah College of Art and Design.

Finally, the CAS Outstanding Product Awards went to Dan Dugan Sound Design for its Dugan Automixing in the Sound Devices 633 Compact Mixer, and to iZotope for its RX 7 audio repair software.

The CAS Awards ceremony was hosted by comedian Michael Kosta.


Motion Picture Sound Editors Awards

During the 66th Annual Golden Reels, outstanding achievement in sound editing awards were presented in 23 categories, encompassing feature films, long- and short-form television, animation, documentaries, games, special venue and other media.

The Americans, Atlanta, The Marvelous Mrs. Maisel and Westworld figured prominently among the honored TV series.

Following introductions by re-recording mixer Steve Pederson and supervising sound editor Mandell Winter, director/producer Michael Mann presented the 2019 MPSE Filmmaker Award to Antoine Fuqua, while Academy Award-winning supervising sound editor Ben Wilkins presented the MPSE Career Achievement Award to fellow supervising sound editor Stephen H. Flick, who also serves as professor of cinematic arts at the University of Southern California.

Antoine Fuqua

“We celebrate the creation of entertainment content that people will enjoy for generations to come,” MPSE president Tom McCarthy stated in his opening address. “As new formats appear and new ways to distribute content are developed, we need to continue to excel at our craft and provide exceptional soundtracks that heighten the audience experience.”

As Pederson stressed during his introduction to the MPSE Filmmaker Award, Fuqua "counts on sound to complete his vision [as a filmmaker]." "His films are stylish and visceral," added Winter, who along with Pederson has worked on a dozen films for the director during the past two decades.

"He is a director who trusts his own vision," Winter confirmed. "Antoine loves a layered soundtrack. And ADR has to be authentic and true to his artistic intentions. He is a bona fide storyteller."

Four-time Oscar-nominee Mann stated that the honored director “always elevates everything he touches; he uses sound design and music to its fullest extent. [He is] a director who always pushes the limits, while evolving his art.”

Pre-recorded tributes to Fuqua came from actor Chris Pratt, who starred in The Magnificent Seven (2016). "Nobody deserves [this award] more," he stated. Actor Mark Wahlberg, who starred in Shooter (2007), and producer Jerry Bruckheimer were also featured.

Stephen Hunter Flick

During his 40-year career in the motion picture industry, while working on some 150 films, Stephen H. Flick has garnered two Oscar wins, for Speed (1994) and Robocop (1987), together with nominations for Total Recall (1990), Die Hard (1988) and Poltergeist (1982).

The award for Outstanding Achievement in Sound Editing – Animation Short Form went to Overwatch – Reunion from Blizzard Entertainment, headed by supervising sound editor Paul Menichini. The Non-Theatrical Animation Long Form award went to NextGen from Netflix, headed by supervising sound editors David Acord and Steve Slanec.

The Feature Animation award went to the Oscar-nominated Spider-Man: Into the Spider-Verse from Sony Pictures Entertainment/Marvel, headed by supervising sound editors Geoffrey Rubay and Curt Schulkey. The Non-Theatrical Documentary award went to Searching for Sound — Islandman and Veyasin from Karga Seven Pictures/Red Bull TV, headed by supervising sound editor Suat Ayas. Finally, the Feature Documentary award was a tie between Free Solo from National Geographic Documentary Films, headed by supervising sound editor Deborah Wallach, and They Shall Not Grow Old from Wingnut Films/Fathom Events/Warner Bros., headed by supervising sound editors Martin Kwok, Brent Burge, Melanie Graham and Justin Webster.

The Outstanding Achievement in Sound Editing — Music Score award also went to Spider-Man: Into the Spider-Verse, with music editors Katie Greathouse and Catherine Wilson, while the Musical award went to Bohemian Rhapsody from GK Films/Fox Studios, with supervising music editor John Warhurst and music editor Neil Stemp. The Dialogue/ADR award also went to Bohemian Rhapsody, with supervising ADR/dialogue editors Nina Hartston and Jens Petersen, while the Effects/Foley award went to A Quiet Place from Paramount Pictures, with supervising sound editors Ethan Van der Ryn and Erik Aadahl.

The Student Film/Verna Fields Award went to Facing It from National Film and Television School, with supervising sound designer/editor Adam Woodhams.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Karol Urban is president of CAS, others named to board

As a result of the Cinema Audio Society Board of Directors election, Karol Urban will replace CAS president Mark Ulano, whose term has come to an end. Steve Venezia will replace treasurer Peter Damski, who opted not to run for re-election.

"I am so incredibly honored to have garnered the confidence of our esteemed members," says Urban. "After years of serving under different presidents and managing the content for the CAS Quarterly, I have learned so much about the achievements, interests, talents and concerns of our membership. I am excited to be given this new platform to celebrate the achievements of, and herald new opportunities to serve, this incredibly dynamic and talented community."

For 2019, the Executive Committee will include newly elected Urban and Venezia, as well as VP Phillip W. Palmer, CAS, and secretary David J. Bondelevitch, CAS, who were not up for election.

The incumbent CAS Board of Directors members (production) who were re-elected are Peter J. Devlin, CAS; Lee Orloff, CAS; and Jeffrey W. Wexler, CAS. They will be joined by newly elected Amanda Beggs, CAS, and Mary H. Ellis, CAS, who are taking the seats of outgoing board members Chris Newman, CAS, and Lisa Pinero, CAS.

Incumbent board members (post production) who were re-elected are Bob Bronow, CAS, and Mathew Waters, CAS. They will be joined by newly elected board members Onnalee Blank, CAS, and Mike Minkler, CAS, who will be taking the seats of Urban and Steve Venezia, CAS, now that they are officers.

Continuing to serve, as their terms were not up for re-election, are Willie Burton, CAS, and Glen Trew, CAS (production), along with Tom Fleischman, CAS; Doc Kane, CAS; Sherry Klein, CAS; and Marti Humphrey, CAS (post production).

The new board will be installed at the 55th Annual CAS Awards on Saturday, February 16.

Audio post pro Julienne Guffain joins Sonic Union

NYC-based audio post studio Sonic Union has added sound designer/mix engineer Julienne Guffain to its creative team. Working across Sonic Union's Bryant Park and Union Square locations, Guffain brings over a decade of experience in audio post production to her new role. She has worked on television, film and branded projects for clients such as Google, Mountain Dew, American Express and Cadillac, among others.

A Virginia native, Guffain came to Manhattan to attend New York University's Tisch School of the Arts. She found herself drawn to sound in film, and it was at NYU where she cut her teeth as a Foley artist and mixer on student films and independent projects. She landed her first industry gig at Hobo Audio, working with clients such as The History Channel and The Discovery Channel, and mixing the Emmy-winning television documentary series "Rising: Rebuilding Ground Zero."

Making her way to Crew Cuts, she began lending her talents to a wide range of spot and brand projects, including the documentary feature “Public Figure,” which examines the psychological effects of constant social media use. It is slated for a festival run later this year.


Quick Chat: Crew Cuts’ Nancy Jacobsen and Stephanie Norris

By Randi Altman

Crew Cuts, a full-service production and post house, has been a New York fixture since 1986. Originally established as an editorial house, it has added services over the years, as the industry evolved, to target all aspects of the workflow.

This independently owned facility is run by executive producer/partner Nancy Jacobsen, senior editor/partner Sherri Margulies Keenan and senior editor/partner Jake Jacobsen. While commercial spots might be in their wheelhouse, their projects vary and include social media, music videos and indie films.

We decided to reach out to Nancy Jacobsen, as well as EP of finishing Stephanie Norris, to find out about trends, recent work and succeeding in an industry and city that isn’t always so welcoming.

Can you talk about what Crew Cuts provides and how you guys have evolved over the years?
Jacobsen: We pretty much do it all. We have 10 offline editors as well as artists working in VFX, 2D/3D animation, motion graphics/design, audio mix and sound design, VO record, color grading, title treatment, advanced compositing and conform. Two of our editors double as directors.

In the beginning, Crew Cuts primarily offered only editorial. As the years went by and the industry climate changed, we began to cater to the needs of clients and slowly built out our entire finishing department. We started with some minimal graphics work and one staff artist in 2008.

In 2009, we expanded the team to include graphics, conform and audio mix. From there we just continued to grow and expand our department to the full finishing team we have today.

As a woman owner of a post house, what challenges have you had to overcome?
Jacobsen: When I started in this business, the industry was very different. I made less money than my male counterparts, and it took me twice as long to be promoted because I am a woman. I have since seen great change, where women are leading post houses and production houses and are finally getting the recognition they deserve for their hard work. Unfortunately, I had to "wait it out" and silently work harder than the men around me. This has paid off for me, and now I can help women get the credit they rightly deserve.

Do you see the industry changing and becoming less male-dominated?
Jacobsen: Yes, the industry is definitely becoming less male-dominated. In the current climate, with the birth of the #metoo movement and specifically in our industry with the birth of Diet Madison Avenue (@dietmadisonave), we are seeing a lot more women step up and take on leading roles.

Are you mostly a commercial house? What other segments of the industry do you work in?
Jacobsen: We are primarily a commercial house. However, we are not limited to just broadcast and digital commercial advertising. We have delivered specs for everything from the Godzilla screen in Times Square to :06 spots on Instagram. We have done a handful of music videos and also handle a ton of B2B videos for in-house client meetings, etc., as well as banner ads for conferences and trade shows. We’ve even worked on display ads for airports. Most recently, one of our editors finished a feature film called Public Figure that is being submitted around the film festival circuit.

What types of projects are you working on most often these days?
Jacobsen: The industry is all over the place. The current climate is very messy right now. Our projects are extremely varied. It’s hard to say what we work on most because it seems like there is no more norm. We are working on everything from sizzle pitch videos to spots for the Super Bowl.

What trends have you seen over the last year, and where do you expect to be in a year?
Jacobsen: Over the last year, we have noticed that the work comes from every angle. Our typical client is no longer just the marketing agency. It is also the production company, network, brand, etc. In a year we expect to be doing more production work. Seeing as how budgets are much smaller than they used to be and everyone wants a one-stop shop, we are hoping to stick with our gut and continue expanding our production arm.

Crew Cuts has beefed up its finishing services. Can you talk about that?
Stephanie Norris: We offer a variety of finishing services — from sound design to VO record and mix, compositing to VFX, 2D and 3D motion graphics and color grading. Our fully staffed in-house team loves the visual effects puzzle and enjoys working with clients to help interpret their vision.

Can you name some recent projects and the services you provided?
Norris: We just worked on a new campaign for New Jersey Lottery in collaboration with Yonder Content and PureRed. Brian Neaman directed and edited the spots. In addition to editorial, Crew Cuts also handled all of the finishing, including color, conform, visual effects, graphics, sound design and mix. This was one of those all-hands-on-deck projects. Keeping everything under one roof really helped us to streamline the process.

New Jersey Lottery

Working with Brian to carefully plan the shooting strategy, we filmed a series of plate shots as elements that could later be combined in post to build each scene. We added falling stacks of cash to the reindeer as he walks through the loading dock and incorporated CG inflatable decorations into a warehouse holiday lawn scene. We also dramatically altered the opening and closing exterior warehouse scenes, allowing one shot to work for multiple seasons. Keeping lighting and camera positions consistent was mission-critical, and having our VFX supervisor, Dulany Foster, on set saved us hours of work down the line.

For the New Jersey Lottery holiday spots, the Crew Cuts CG team, led by our creative director, Ben McNamara, created a 3D inflatable display of lottery tickets. This was something that proved too costly and time-consuming to manufacture and shoot practically. After the initial R&D, our team created a few different CG inflatable simulations prior to the shoot, and Dulany was able to mock them up live while on set. Creating the simulations was crucial for giving the art department reference while building the set, and it also helped when shooting the plates needed to composite the scene together.

Ben and his team focused on the physics of the inflation, while also making sure the fabric simulations, textures and lighting blended seamlessly into the scene — it was important that everything felt realistic. In addition to the inflatables, our VFX team turned the opening and closing sunny, summer shots of the warehouse into a December winter wonderland thanks to heavy compositing, 3D set extension and snow simulations.

New Jersey Lottery

Any other projects you’d like to talk about?
Jacobsen: We are currently working on a project here that we are handling soup to nuts from production through finishing. It was a fun challenge to take on. The spot contains a hand model on a greenscreen showing the audience how to use a new product. The shoot itself took place here at Crew Cuts. We turned our common area into a stage for the day and were able to do so without interrupting any of the other employees and projects going on.

We are now working on editorial and finishing. The edit is coming along nicely. What really drives the piece is the graphic icons. Our team is having a lot of fun designing these elements and implementing them into the spot. We are proud that we budgeted wisely to accommodate all of the project's needs, so we could handle everything and still turn a profit. It was so much fun to work in a different setting for the day, and the project has been very successful so far. Clients are happy and so are we.

Main Image: (L-R) Stephanie Norris and Nancy Jacobsen

Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and artists. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company's mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to its crew, Shindig has brought on on-site composer Austin Shupe, a former colleague from Hum. Along with Shindig's in-house composers, the team draws on a large pool of freelance talent, matching the genre and/or style best suited to a project.

Shindig's licensing arm has launched a searchable boutique online music library. Upgrading its existing catalogue of compositions, the studio has now tagged all the tracks in a simple, searchable manner on its website, providing new direct access for producers, creatives and editors.

Shindig's executive team includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O'Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work ranges from recording five mariachi pieces for El Pollo Loco with Vitro, to working with multiple composers to craft five decades of music for Honda's Evolution commercial via Muse, to orchestrating a virtuoso piano/violin duo's cover of Twisted Sister's "I Wanna Rock" for a Mitsubishi spot out of BSSP.

Quick Chat: Digital Arts’ Josh Heilbronner on Audi, Chase spots

New York City's Digital Arts provided audio post on a couple of 30-second commercial spots that presented sound designer/mixer Josh Heilbronner with some unique audio challenges: Audi's Night Watchman, via agency Venables Bell & Partners in New York, and Chase's Mama Said Knock You Out, featuring Serena Williams, from agency Droga5 in New York.

Josh Heilbronner

Heilbronner, who has been sound designing and mixing for broadcast and film for almost 10 years, has worked on everything from large fashion brands like Nike and J Crew to Fortune 500 companies like General Electric, Bank of America and Estee Lauder. He has also mixed promos and primetime broadcast specials for USA Network, CBS and ABC Television. In addition to commercial VO recording, editing and mixing, Heilbronner has a growing credit list of long-form documentaries and feature films, including The Broken Ones, Romance (In the Digital Age), Generation Iron 2, The Hurt Business and Giving Birth in America (a CNN special series).

We recently reached out to Heilbronner to find out more about these two very different commercial projects and how he tackled each.

Both Audi and Chase are very different assignments from an audio perspective. How did these projects come your way?
On Audi, we were asked to be part of their new 2019 A7 campaign, which follows a security guard patrolling the Audi factory in the middle of the night. It's sort of James Bond meets Night at the Museum. The factory is full of otherworldly rooms built to put the cars through their paces (extreme cold, isolation, etc.). Q Department did a great job crafting the sounds of those worlds and really bringing the viewer into the factory. Agency Venables Bell & Partners was looking to really pull everything together tightly and have the dialogue land up-front, while still maintaining the wonderfully lush and dynamic music and sound design that had been laid down already.

The Chase Serena campaign is an impact-driven series of spots. Droga5 has a great reputation for putting together cinematic spots and this is no exception. Drazen Bosnjak from Q Department originally reached out to see if I would be interested in mixing this one because one of the final deliverables was the Jumbotron at the US Open in Arthur Ashe Stadium.

Digital Arts has a wonderful Dolby-approved 7.1 4K theater, so we were able to really get a sense of what the finals would sound and look like up on the big screen.

Did you have any concerns going into the project about what would be required creatively or technically?
For Audi, our biggest challenge was the tight deadline. We mixed in New York, but we had three different time zones in play, so getting approvals could sometimes be difficult. With Chase, the amount of content for this campaign was large. We needed to deliver finals for broadcast, social media (Snapchat, Instagram, Facebook, Twitter), Jumbotron and cinema. Making sure they played back as loud and crisp as they could on all those platforms was a major focus.

What was the most challenging aspect for you on the project?
As with a lot of production audio, the noise on set was pretty extreme. For Audi, they had to film the night watchman walking in different spaces, delivering the copy at a variety of volumes. It all needed to gel together as if he were in one smaller room talking directly to the camera, as if he were a narrator. We didn't have access to re-record him, so we had to use a few different denoise tools, such as iZotope RX6, Brusfri and Waves WNS, to clear out the clashing room tones.

The biggest challenge on Chase was the dynamic range and power of these spots. Serena's beautifully hushed whisper narration is surrounded by impactful bass drops, cinematic hits and lush ambiences. Reining all that in, building to a climax and still having her narration be the focus was a game of cat and mouse. Also, broadcast standards are a bit restrictive when it comes to large impacts, so finding the right balance was key.

Any interesting technology or techniques that you used on the project?
I mainly use Avid Pro Tools Ultimate 2018. They have made some incredible advancements — you can now do everything on one machine, all in the box. I can have 180 tracks running in a surround session and still print every deliverable (5.1, stereo, stems etc.) without a hiccup.

I've been using Penteo 7 Pro for stereo-to-5.1 upmixing. It does a fantastic job filling in the surrounds, but also folds down to stereo nicely (and passes QC). Spanner is another useful tool when working with all sorts of channel counts. It allows me to down-mix, rearrange channels and route audio to the correct buses easily.
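
Penteo's algorithm is proprietary, but the stereo fold-down any upmix has to survive is well known: the ITU-style downmix, which sums the center and surround channels into the front pair at roughly -3dB. A minimal Python sketch, assuming six equal-length channel arrays:

    import numpy as np

    def fold_down_5_1(L, R, C, LFE, Ls, Rs, c_gain=0.7071, s_gain=0.7071):
        """ITU-style 5.1-to-stereo fold-down; the LFE is conventionally dropped."""
        lo = L + c_gain * C + s_gain * Ls
        ro = R + c_gain * C + s_gain * Rs
        # Normalize so the summed signal cannot clip.
        peak = max(np.abs(lo).max(), np.abs(ro).max(), 1.0)
        return lo / peak, ro / peak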

First Man: Historical fiction meets authentic sound

By Jennifer Walden

Historical fiction is not a rigidly factual account, but rather an interpretation. Fact and fiction mix to tell a story in a way that helps people connect with the past. In director Damien Chazelle’s film First Man, audiences experience his vision of how the early days of space exploration may have been for astronaut Neil Armstrong.

Frank A. Montaño

The uncertainty of reaching the outer limits of Earth's atmosphere, the near disasters and mistakes that led to the loss of several lives, the ultimate success of landing on the moon: all of it is presented so viscerally that the audience feels as though they are riding along with Armstrong.

While First Man is not a documentary, there are factual elements in the film, particularly in the sound. “The concept was to try to be true to the astronauts’ sonic experience. What would they hear?” says effects re-recording mixer Frank A. Montaño, who mixed the film alongside re-recording mixer Jon Taylor (on dialogue/music) in the Alfred Hitchcock Theater at Universal Studios in Los Angeles.

Supervising sound editors Ai-Ling Lee (who also did re-recording mixing on the film) and Milly Iatrou were in charge of designing a soundtrack that was both authentic and visceral — a mix of reality and emotionality. When Armstrong (Ryan Gosling) and Dave Scott (Christopher Abbott) are being shot into space on a Gemini mission, everything the audience hears may not be completely accurate, but it’s meant to produce the accurate emotional response — i.e., fear, uncertainty, excitement, anxiety. The sound helps the audience to connect with the astronauts strapped into that handcrafted space capsule as it rattles and clatters its way into space.

As for the authentic sounds related to the astronauts’ experience — from the switches and toggles to the air inside the spacesuits — those were collected by several members of the post sound team, including Montaño, who by coincidence is an avid fan of the US space program and full of interesting facts on the subject. Their mission was to find and record era-appropriate NASA equipment and gear.

Recording
Starting at ILC Dover in Frederica, Delaware — original manufacturers of spacesuits for the Apollo missions — Montaño and sound effects recordist Alex Knickerbocker recorded a real A7L-B, which, says Montaño, is the second revision of the Apollo suit. It was actually worn by astronaut Paul Weitz, although it wasn't the one he wore in space. "ILC Dover completely opened up to us, and were excited for this to happen," says Montaño.

They spent eight hours recording every detail of the suit, like the umbilicals snapping in and out of place, and gloves and helmet (actually John Young’s from Apollo 10) locking into the rings. “In the film, when you see them plug in the umbilical for water or air, that’s the real sound. When they are locking the bubble helmet on to Neil’s suit in the clean room, that’s the real sound,” explains Montaño.

They also captured the internal environment of the spacesuit, which had never been officially documented before. "We could get hours of communications — that was easy — but there was no record of what those astronauts [experienced inside those] spacesuits for that many hours, and how those things kept them alive," says Montaño.

Back at Universal on the Hitchcock stage, Taylor and mix tech Bill Meadows were receiving all the recorded sounds from Montaño and Knickerbocker, who were still at ILC Dover. “We weren’t exactly in the right environment to get these recordings, so JT [Jon Taylor] and Bill let us know if it was a little too live or a little too sharp, and we’d move the microphones or try different microphones or try to get into a quieter area,” says Montaño.

Next, Montaño and Knickerbocker traveled to the US Space and Rocket Center in Huntsville, Alabama, where the Saturn V rocket was developed. "This is where Wernher von Braun (chief architect of the Saturn V rocket) was based out of, so they have a huge Apollo footprint," says Montaño. There they got to work inside a Lunar Excursion Module (LEM) simulator, which, according to Montaño, was one of only two made for training. "All Apollo astronauts trained in these simulators, including Neil and Buzz, so it was under plexiglass, as it was only for observation. But they opened it up to us. We got to go inside the LEM and flip all the switches, dials and knobs and record them. It was historic. This has never been done before and we were so excited to be there," says Montaño.

Additionally, they recorded a DSKY (Display and Keypad) flight guidance computer used by the crew to communicate with the LEM computer. This can be seen during the sequence of Buzz (Corey Stoll) and Neil landing on the moon. “It has this big numeric keypad, and when Buzz is hitting those switches it’s the real sound. When they flip all those switch banks, all those sounds are the real deal,” reports Montaño.

Other interesting recording adventures include the Cosmosphere in Hutchinson, Kansas, where they recorded all the switches and buttons of the original control flight consoles from Mission Control at the Johnson Space Center (JSC). At Edwards Air Force Base in Southern California, they recorded Joe Walker's X-15 suit, capturing the movement and helmet sounds.

The team also recorded Beta cloth at the Space Station Museum in Novato, California, which is the white-colored, fireproof silica fiber cloth used for the Apollo spacesuits. Gene Cernan’s (Apollo 17) connector cover was used, which reportedly sounds like a plastic bag or hula skirt.

Researching
They also recreated sounds based on research. For example, they recorded an approximation of lunar boots on the moon's surface, but not from the exterior perspective of the boots. What would boots on the lunar surface sound like from inside the spacesuit? First, they did the research to find the right silicone used during that era. Then Frank Cuomo, who is a post supervisor at Universal, created a unique pair of lunar boots based on Montaño's idea of having ports above the soles, into which they could insert lav mics. "Frank happens to do this as a hobby, so I bounced this idea for the boots off of him and he actually made them for us," says Montaño.

Next, they researched what the lunar surface was made of. Their path led to NASA’s Ames Research Center where they have an eight-ton sandbox filled with JSC-1A lunar regolith simulant. “It’s the closest thing to the lunar surface that we have on earth,” he explains.

He strapped on the custom-made boots and walked on this "lunar surface" while Knickerbocker and sound effects recordist Peter Brown captured it with numerous different mics, including a hydrophone placed on the surface, "which gave us a thuddy, non-pitched/non-fidelity-altered sound that was the real deal," says Montaño. "But what worked best, to get that interior sound, were the lav mics inside those ports on the soles."

While the boots on the lunar surface sound ultimately didn’t make it into the film, the boots did come in handy for creating a “boots on LEM floor” sound. “We did a facsimile session. JT (Taylor) brought in some aluminum and we rigged it up and got the silicone soles on the aluminum surface for the interior of the LEM,” says Montaño.

Jon Taylor

Another interesting sound they recreated was the low-fuel alarm inside the LEM. According to Montaño, their research uncovered a document specifying the alarm: a square wave, running from 750 cycles to 2,000 cycles. “The sound got a bit tweaked out just for excitement purposes. You hear it on their powered descent, when they’re coming in for a landing on the moon, and they’re low on fuel and 20 seconds from a mandatory abort.”
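That spec is enough to rough out the alarm in a few lines of code. Here is a minimal synthesis sketch in Python, based only on the details quoted above; the 2.5Hz rate at which it alternates between the two frequencies is an assumption for illustration, not something from the team’s research.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import square

SR = 48000          # sample rate in Hz
DUR = 2.0           # seconds of alarm to generate
ALT_RATE = 2.5      # toggle rate between the two tones (assumed, not documented)

t = np.arange(int(SR * DUR)) / SR
# The document gives the endpoints (750 and 2,000 cycles) and the square
# waveform; how the alarm moved between them is a guess here.
freq = np.where(square(2 * np.pi * ALT_RATE * t) > 0, 750.0, 2000.0)
phase = 2 * np.pi * np.cumsum(freq) / SR   # integrate frequency to get phase
alarm = 0.3 * square(phase)                # square wave at a modest level

wavfile.write("lem_low_fuel_alarm.wav", SR, alarm.astype(np.float32))
```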

Altogether, the recording process was spread over nearly a year, with about 98% of their recorded sounds making it into the final soundtrack. Taylor says, “The locking of the gloves, and the locking and handling of the helmet that belonged to John Young, will live forever. It was an honor to work with that material.”

Montaño adds, “It was good to get every angle that we could, for all the sounds. We spent hours and hours trying to come up with these intangible pieces that only a handful of people have ever heard, and they’re in the movie.”

Helmet Comms
To recreate the comms sound of the transmissions back and forth between NASA and the astronauts, Montaño and Taylor took a practical approach. Instead of relying on plug-ins for futz and reverb, they built a 4-foot-by-3-foot isolated enclosure on wheels, deadened with acoustical foam and featuring custom-fit brackets inside to hold either a high-altitude helmet (to replicate dialogue for the X-15 and the Gemini missions) or a bubble helmet (for the Apollo missions).

Each helmet was recorded independently using its own two-way coaxial car speaker and a set of microphones strapped to mini tripods that were set inside each helmet in the enclosure. The dialogue was played through the speaker in the helmet and sent back to the console through the mics. Taylor says, “It would come back really close to being perfectly in sync. So I could do whatever balance was necessary and it wouldn’t flange or sound strange.”

By adjusting the amount of helmet feed in relation to the dry dialogue, Taylor was able to change the amount of “futz.” If a scene was sonically dense, or dialogue clarity wasn’t an issue (such as the tech talk exchanges between Houston and the astronauts), then Taylor could push the futz further. “We were constantly changing the balance depending on what the effects and music were doing. Sometimes we could really feel the helmet and other times we’d have to back off for clarity’s sake. But it was always used, just sometimes more than others.”
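Conceptually, the futz balance Taylor describes is a wet/dry blend between the clean dialogue and the helmet re-record. Here is a toy sketch of that idea in Python; `dry` and `helmet` are hypothetical, already-synced mono arrays, and nothing here reflects the actual console workflow.

```python
import numpy as np

def futz_balance(dry: np.ndarray, helmet: np.ndarray, amount: float) -> np.ndarray:
    """Blend clean dialogue with the helmet-speaker re-record.

    amount=0.0 is all clean dialogue; amount=1.0 is all helmet "futz".
    Assumes both tracks are the same length and already in sync, which
    the article notes the helmet rig delivered almost perfectly.
    """
    amount = float(np.clip(amount, 0.0, 1.0))
    return (1.0 - amount) * dry + amount * helmet

# Push the futz harder for tech-talk exchanges, pull it back for clarity:
# mixed = futz_balance(dry, helmet, amount=0.7)
```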

Density and Dynamics
The challenge of the mix on First Man was to keep the track dynamic and not let the sound get too loud until it absolutely needed to. This made the launches feel powerful and intense. “If everything were loud up to that point, it just wouldn’t have the same pop,” says Taylor. “The director wanted to make sure that when we hit those rockets they felt huge.”

One way to support the dynamics was choosing how to make the track appropriately less dense. For example, during the Gemini launch there are the sounds of the rocket’s different stages as it blasts off and breaks through the atmosphere, and there’s the sound of the space capsule rattling and metal groaning. On top of that, there’s Neil’s voice reading off various specs.

“When it comes to that kind of density sound-wise, you have to decide should we hear the actors? Are we with them? Do we have to understand what they are saying? In some cases, we just blew through that dialogue because ‘RCS Breakers’ doesn’t mean anything to anybody, but the intensity of the rocket does. We wanted to keep that energy alive, so we drove through the dialogue,” says Montaño. “You can feel that Neil’s calm, but you don’t need to understand what he’s saying. So that was a trick in the balance; deciding what should be heard and what we can gloss over.”

Another helpful factor was that the film’s score, by composer Justin Hurwitz, wasn’t bombastic. During the rocket launches, it wasn’t fighting for space in the mix. “The direction of the music is super supportive and it never had to play loud. It just sits in the pocket,” says Taylor. “The Gemini launch didn’t have music, which really allowed us to take advantage of the sonic structure that was built into the layers of sound effects and design for the takeoff.”

Without competition from the music and dialogue, the effects could really take the lead and tell the story of the Gemini launch. The camera stays close-up on Neil in the cockpit and doesn’t show an exterior perspective (as it does during the Apollo launch sequence). The audience’s understanding of what’s happening comes from the sound. You hear the “bbbbbwhoop” of the Titan II missile during ignition, and hear the liftoff of the rocket. You hear the point at which they go through maximum dynamic pressure, characterized by the metal rattling and groaning inside the capsule as it’s subjected to extreme buffeting and stress.

Next you hear the first stage cut-off and the initial boosters break away followed by the ignition of the second stage engine as it takes over. Then, finally, it’s just the calmness of space with a few small metal pings and groans as the capsule settles into orbit.

Even though it’s an intense sequence, all the details come through in the mix. “Once we got the final effects tracks, as usual, we started to add more layers and more detail work. That kind of shaping is normal. The Gemini launch builds to that moment when it comes to an abrupt stop sonically. We built it up layer-wise with more groan, more thrust, more explosive/low-end material to give it some rhythm and beats,” says Montaño.

Although the rocket sounds like it’s going to pieces, Neil doesn’t sound like he’s going to pieces. He remains buttoned-up and composed. “The great thing about that scene was hearing the contrast between this intense rocket and the calmness of Neil’s voice. The most important part of the dialogue there was that Neil sounded calm,” says Taylor.

Apollo
Visually, the Apollo launch was handled differently in the film. There are exterior perspectives, but even though the camera shows the launch from various distances, the sound maintains its perspective — close as hell. “We really filled the room up with it the whole time, so it always sounds large, even when we are seeing it from a distance. You really feel the weight and size of it,” says Montaño.

The rocket that launched the Apollo missions was the most powerful ever created: the Saturn V. Recreating that sound was a big job and came with a bit of added pressure from director Chazelle. “Damien [Chazelle] had spoken with one of the Armstrong sons, Mark, who said he’s never really felt or heard a Saturn V liftoff correctly in a film. So Damien threw it our way. He threw down the gauntlet and challenged us to make the Armstrong family happy,” says Montaño.

Field recordists John Fasal and Skip Longfellow were sent to record the launch of the world’s second largest rocket — SpaceX’s Falcon Heavy. They got as close as they could to the rocket, which generated 5.5 million pounds of thrust. They also recorded it at various distances farther away. This was the biggest component of their Apollo launch sound for the film. It’s also bolstered by recordings that Lee captured of various rocket liftoffs at Vandenberg Air Force Base in California.

But recreating the world’s most powerful rocket required some mega recordings that regular mics just couldn’t produce. So they headed over to the Acoustic Test Chamber at JPL in Pasadena, which is where NASA sonically bombards and acoustically excites hardware before it’s sent into space. “They simulate the conditions of liftoff to see if the hardware fails under that kind of sound pressure,” says Montaño. They do this by “forcing nitrogen gas through this six-inch hose that goes into a diaphragm that turns that gas into some sort of soundwave, like pink noise. There are four loudspeakers bolted to the walls of this hard-shelled room, and the speakers are probably about 4 feet by 4 feet. It goes up to 153dB in there; that’s max.” (Fun fact: The sound team wasn’t able to physically be in the room to hear the sound since the gas would have killed them. They could only hear the sound via their recordings.)

The low-end energy of that sound was a key element in their Apollo launch. So how do you capture the most low-end possible from a high-SPL source? Taylor had an interesting solution: using a 10-inch bass speaker as a microphone. “Years ago, while reading a music magazine, I discovered this method of recording low-end using a subwoofer or any bass speaker. If you have a 10-inch speaker as a mic, you’re going to be able to capture much more low-end. You may even be able to get as low as 7Hz,” Taylor says.

Montaño adds, “We were able to capture another octave lower than we’d normally get. The sounds we captured really shook the room, really got your chest cavity going.”

For the rocket sequences — the X-15 flight, the Gemini mission and the Apollo mission — their goal was to craft an experience the audience could feel. It was about energy and intensity, but also clarity.

Taylor concludes, “Damien’s big thing — which I love — is that he is not greedy when it comes to sound. Sometimes you get a movie where everything has to be big. Often, Damien’s notes were for things to be lower, to lower sounds that weren’t rocket affiliated. He was constantly making sure that we did what we could to get those rocket scenes to punch, so that you really felt it.”


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

A Star is Born: Live vocals, real crowds and venues

By Jennifer Walden

Warner Bros. Pictures’ remake of A Star is Born stars Bradley Cooper as Jackson Maine, a famous musician with a serious drinking habit who stumbles onto singer/songwriter Ally (Lady Gaga) at a drag bar where she’s giving a performance. Jackson is taken by her raw talent and their chance meeting turns into something more. With Jackson’s help, Ally becomes a star but her fame is ultimately bittersweet.

Jason Ruder

Aside from Lady Gaga and Bradley Cooper (who also directed and co-wrote the screenplay), the other big star of this film is the music. Songwriting started over two years ago. Cooper and Gaga collaborated with several other songwriters along the way, like Lukas Nelson (son of Willie Nelson), Mark Ronson, Hillary Lindsey and DJ White Shadow.

According to supervising music editor/re-recording mixer Jason Ruder from 2 Pop Music — who was involved with the film from pre-production through post — the lyrics, tempo and key signatures were even changing right up to the day of the shoot. “The songwriting went to the 11th hour. Gaga sort of works in that fashion,” says Ruder, who witnessed her process first-hand during a sound check at Coachella. (2 Pop Music is located on the Warner Bros. lot in Burbank.)

Before each shoot, Ruder would split out the pre-recorded instrumental tracks, reference vocals and have them ready for playback, but there were days when he would get a call from Gaga’s manager as he was driving to the set. “I was told that she had gone into the studio in the middle of the night and made changes, so there were all new pre-records for the day. I guess she could be called a bit of a perfectionist, always trying to make it better.

“On the final number, for instance, it was only a couple hours before the shoot and I got a message from her saying that the song wasn’t final yet and that she wanted to try it in three different keys and three different tempos just to make sure,” continues Ruder. “So there were a lot of moving parts going into each day. Everyone that she works with has to be able to adapt very quickly.”

Since the music is so important to the story, here’s what Cooper and Gaga didn’t want — they start singing and the music suddenly switches over to a slick, studio-produced track. That concern was the driving force behind the production and post teams’ approach to the on-camera performances.

Recording Live Vocals
All the vocals in A Star is Born were recorded live on-set, and those live vocals are what’s heard in the film’s final mix. To pull this off, Ruder and the production sound team did a stage test at Warner Bros. to see if it was possible. They had a pre-recorded track of the band, which they played back on the stage. First, Cooper and Gaga sang live. Then they tried the song again, with Cooper and Gaga miming along to pre-recorded vocals. Ruder took the material back to his cutting room and built a quick version of both. The comparison solidified their decision. “Once we got through that test, everyone was more confident about doing the live vocals. We felt good about it,” he says.

Their first shoot for the film was at Coachella, on a weekday since there were no performances. They were shooting a big, important concert scene for the film and only had one day to get it done. “We knew that it all had to go right,” says Ruder. It was their first shot at live vocals on-set.

Neither the music nor the vocals were amplified through the stage’s speaker system since song security was a concern — they didn’t want the songs leaked before the film’s release. So everything was done through headphone mixes. This way, even those in the crowd closest to the stage couldn’t hear the melodies or lyrics. Gaga is a seasoned concert performer, comfortable with performing at concert volume. She wasn’t used to having the band muted and the vocals live (though not amplified), so some adjustments needed to be made. “We ended up bringing her in-ear monitor mixer in to help consult,” explains Ruder. “We had to bring some of her touring people into our world to help get her perfectly comfortable so she could focus on acting and singing. It worked really well, especially later for ‘Arizona Sky,’ where she had to play the piano and sing. Getting the right balance in her ear was important.”

As for Jackson Maine’s band on-screen, those were all real musicians and not actors — it was Lukas Nelson’s band. “They’re used to touring together. They’re very tight and they’re seasoned musicians,” says Ruder. “Everyone was playing and we were recording their direct feeds. So we had all the material that the musicians were playing. For the drums, those had to be muted because we didn’t want them bleeding into the live vocals. We were on-set making sure we were getting clean vocals on every take.”

Real Venues, Real Reverbs
Since the goal from the beginning was to create realistic-sounding concerts, Ruder decided to capture impulse responses at every performance location — from big stages like Coachella to much smaller venues — and use those to create reverbs in Audio Ease’s Altiverb.

The challenge wasn’t capturing the IRs, but rather, trying to convince the assistant director on-set that they needed to be captured. “We needed to quiet the whole set for five or 10 minutes so we could put up some mics and shoot these tones through the spaces. This all had to be done on the production clock, and they’re just not used to that. They didn’t understand what it was for and why it was important — it’s not cheap to do that during production,” explains Ruder.

Those IRs were like gold during post. They allowed the team to recreate spaces like the main stage at Coachella, the Greek Theatre and the Shrine Auditorium. “We were able to manufacture our own reverbs that were pretty much exactly what you would hear if you were standing there. For Coachella, because it’s so massive, we weren’t sure if they were going to come out, but it worked. All the reverbs you hear in the film are completely authentic to the space.”
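Altiverb belongs to the family of convolution reverbs: the dry signal is convolved with the measured impulse response of a space, so the venue’s own decay is stamped onto the recording. A bare-bones sketch of the same principle in Python, assuming mono files; the file names are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder files: a dry vocal and an impulse response captured in the venue
sr_v, vocal = wavfile.read("dry_vocal.wav")
sr_i, ir = wavfile.read("venue_ir.wav")
assert sr_v == sr_i, "resample first if the rates differ"

vocal = vocal.astype(np.float64)
ir = ir.astype(np.float64)

wet = fftconvolve(vocal, ir)            # the reverb tail comes from the IR
wet /= np.max(np.abs(wet)) + 1e-12      # normalize to avoid clipping

# Blend to taste, padding the dry track to the wet length
dry = vocal / (np.max(np.abs(vocal)) + 1e-12)
dry = np.pad(dry, (0, len(wet) - len(dry)))
mix = 0.7 * dry + 0.3 * wet
wavfile.write("vocal_in_venue.wav", sr_v, mix.astype(np.float32))
```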

Live Crowds
Oscar-winning supervising sound editor Alan Murray at Warner Bros. Sound was also capturing sound at the concert performances, but his attention was away from the stage and into the crowd. “We had about 300 to 500 people at the concerts, and I was able to get clean reactions from them since I wasn’t picking up any music. So that approach of not amplifying the music worked for the crowd sounds too,” he says.

Production sound mixer Steven Morrow had set up mics in and around the crowd and recorded those to a multitrack recorder while Murray had his own mic and recorder that he could walk around with, even capturing the crowds from backstage. They did multiple recordings for the crowds and then layered those in Avid Pro Tools in post.

Alan Murray

“For Coachella and Glastonbury, we ended up enhancing those with stadium crowds just to get the appropriate size and excitement we needed,” explains Murray. They also got crowd recordings from one of Gaga’s concerts. “There was a point in the ‘Arizona Sky’ scene where we needed the crowd to yell, ‘Ally!’ Gaga was performing at Fenway Park in Boston and so Bradley’s assistant called there and asked Gaga’s people to have the crowd do an ‘Ally’ chant for us.”

Ruder adds, “That’s not something you can get on an ADR stage. It needed to have that stadium feel to it. So we were lucky to get that from Boston that night and we were able to incorporate it into the mix.”

Building Blocks
According to Ruder, they wanted to make sure the right building blocks were in place when they went into post. Those blocks — the custom recorded impulse responses, the custom crowds, the live vocals, the band’s on-set performances, and the band’s unprocessed studio tracks that were recorded at The Village — gave Ruder and the re-recording mixers ultimate flexibility during the edit and mix to craft on-scene performances that felt like big, live concerts or intimate songwriting sessions.

Even with all those bases covered, Ruder was still worried about it working. “I’ve seen it go wrong before. You get tracks that just aren’t usable, vocals that are distorted or noisy. Or you get shots that don’t work with the music. There were those guitar playing shots…”

A few weeks after filming, while Ruder was piecing all the music together in post, he realized that they got it all. “Fortunately, it all worked. We had a great DP on the film and it was clear that he was capturing the right shots. Once we got to that point in post, once we knew we had the right pieces, it was a huge relief.”

Relief gave way to excitement when Ruder reached the dub stage — Warner Bros. Stage 10. “It was amazing to walk into the final mix knowing that we had the material and the flexibility to pull this off,” he says.

In addition to using Altiverb for the reverbs, Ruder used Waves plug-ins, such as the Waves API Collection, to give the vocals and instrumental tracks a live concert sound. “I tend to use plug-ins that emulate more of a tube sound to get punchier drums and that sort of thing. We used different 5.1 spreaders to put the music in a 5.1 environment. We changed the sound to match the picture, so we dried up the vocals on close-ups so they felt more intimate. We had tons and tons of flexibility because we had clean vocals and raw guitars and drum tracks.”

All the hard work paid off. In the film, Ally joins Jackson Maine on stage to sing a song she wrote called “Shallow.” For Murray and Ruder, this scene portrays everything they wanted to achieve for the performances in A Star is Born. The scene begins outside the concert, as Ally and her friend get out of the car and head toward the stage. The distant crowd and music reverberate through the stairwell as they’re led up to the backstage area. As they get closer, the sound subtly changes to match their proximity to the band. On stage, the music and crowd are deafening. Jackson begins to play guitar and sing solo before Ally finds the courage to join in. They sing “Shallow” together and the crowd goes crazy.

“The whole sequence was timed out perfectly, and the emotion we got out of them was great. The mix there was great. You felt like you were there with them. From a mix perspective, that was probably the most successful moment in the film,” concludes Ruder.


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

Report: Sound for Film & TV conference focuses on collaboration

By Mel Lambert

The 5th annual Sound for Film & TV conference was once again held at Sony Pictures Studios in Culver City, in cooperation with the Motion Picture Sound Editors, the Cinema Audio Society and Mix magazine. The one-day event, which attracted some 650 attendees, featured a keynote address from veteran sound designer Scott Gershin, together with a broad cross section of panel discussions on virtually all aspects of contemporary sound and post production. Co-sponsors included Audionamix, Sound Particles, Tonsturm, Avid, Yamaha-Steinberg, iZotope, Meyer Sound, Dolby Labs, RSPE, Formosa Group and Westlake Audio.

With film credits that include Pacific Rim and The Book of Life, keynote speaker Gershin focused on advances in immersive sound and virtual reality experiences. Having recently joined Sound Lab at Keywords Studios, the sound designer and supervisor emphasized that “a single sound can set a scene,” ranging from a subtle footstep to an echo-laden yell of terror. “I like to use audio to create a foreign landscape, and produce immersive experiences,” he said, stressing that “dialog forms the center of attention, with music that shapes a scene emotionally and sound effects that glue the viewer into the scene.” He concluded, “It is our role to develop a credible world with sound.”

The Sound of Streaming Content — The Cloverfield Paradox
Avid-sponsored panels within the Cary Grant Theater included an overview of OTT techniques titled “The Sound of Streaming Content,” which was moderated by Ozzie Sutherland, a production sound technology specialist with Netflix. Focusing on sound design and re-recording of the recent Netflix/Paramount Pictures sci-fi mystery The Cloverfield Paradox from director Julius Onah, the panel included supervising sound editor/re-recording mixer Will Files, co-supervising sound editor/sound designer Robert Stambler and supervising dialog editor/re-recording mixer Lindsey Alvarez. Files and Stambler have collaborated on several projects with director J. J. Abrams through Abrams’ Bad Robot production company, including Star Trek Into Darkness (2013), Star Wars: The Force Awakens (2015) and 10 Cloverfield Lane (2016), as well as Venom (2018).

The Sound of Streaming Content panel: (L-R) Ozzie Sutherland, Will Files, Robert Stambler and Lindsey Alvarez

“Our biggest challenge,” Files readily acknowledged, “was the small crew we had on the project; initially, it was just Robby [Stambler] and me for six months. Then Star Wars: The Force Awakens came along, and we got busy!” “Yes,” confirmed Stambler, “we spent between 16 and 18 months on post production for The Cloverfield Paradox, which gave us plenty of time to think about sound; it was an enlightening experience, since everything happens off-screen.” The film, starring Gugu Mbatha-Raw, David Oyelowo and Daniel Brühl, follows a team of scientists orbiting a planet on the brink of war, trying to solve an energy crisis that culminates in a dark alternate reality.

Having screened a pivotal scene from the film in which the spaceship’s crew discovers the effects of interdimensional travel while hearing strange sounds in a corridor, Alvarez explained how the complex dialog elements came into play, “That ‘Woman in The Wall’ scene involved a lot of Mandarin-language lines, 50% of which were re-written to modify the story lines and then added in ADR.” “We also used deep, layered sounds,” Stambler said, “to emphasize the screams,” produced by an astronaut from another dimension that had become fused with the ship’s hull. Continued Stambler, “We wanted to emphasize the mystery as the crew removes a cover panel: What is behind the wall? Is there really a woman behind the wall?” “We also designed happy parts of the ship and angry parts,” Files added. “Dependent on where we were on the ship, we emphasized that dominant flavor.”

Files explained that the theatrical mix for The Cloverfield Paradox in Dolby Atmos immersive surround took place at producer Abrams’ Bad Robot screening theater, with a temporary Avid S6 M40 console. Files also mixed the first Atmos film, Brave, back in 2012. “J. J. [Abrams] was busy at the time,” Files said, “but wanted to be around and involved,” as the soundtrack took shape. “We also had a sound-editorial suite close by,” Stambler noted. “We used several futz elements from the Mission Control scenes as Atmos objects,” added Alvarez.

“But then we received a request from Netflix for a near-field Atmos mix” that could be used for over-the-top streaming, recalled Files. “So we lowered the overall speaker levels, and monitored on smaller speakers to ensure that we could hear the dialog elements clearly. Our Atmos balance also translated seamlessly to 5.1- and 7.1-channel delivery formats.”

“I like mixing in Native Atmos because you can make final decisions with creative talent in the room,” Files concluded. “You then know that everything will work in 5.1 and 7.1. If you upmix to Atmos from 7.1, for example, the creatives have often left by the time you get to the Atmos mix.”
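The 5.1 and 7.1 “translation” Files mentions is the mechanical part of that process: a fold-down sums the extra surround pairs of the 7.1 bed into the single surround pair of 5.1. A schematic sketch in Python; the channel order and the -3dB pad are common conventions, not details taken from this mix.

```python
import numpy as np

PAD = 10 ** (-3 / 20)   # -3 dB pad applied when summing the surround pairs

def downmix_71_to_51(ch: np.ndarray) -> np.ndarray:
    """Fold an (8, n) array of 7.1 stems down to (6, n) 5.1 stems.

    Assumed channel order: L, R, C, LFE, Lss, Rss, Lrs, Rrs in;
    L, R, C, LFE, Ls, Rs out.
    """
    L, R, C, LFE, Lss, Rss, Lrs, Rrs = ch
    Ls = PAD * (Lss + Lrs)   # side + rear surrounds into left surround
    Rs = PAD * (Rss + Rrs)   # side + rear surrounds into right surround
    return np.stack([L, R, C, LFE, Ls, Rs])
```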

The Sound and Music of Director Damien Chazelle’s First Man
The series of “Composers Lounge” presentations held in the Anthony Quinn Theater, sponsored by SoundWorks Collection and moderated by Glenn Kiser from The Dolby Institute, included “The Sound and Music of First Man” with sound designer/supervising sound editor/SFX re-recording mixer Ai-Ling Lee, supervising sound editor Mildred Iatrou Morgan, SFX re-recording mixer Frank Montaño, dialog/music re-recording mixer Jon Taylor, composer Justin Hurwitz and picture editor Tom Cross. First Man takes a close look at the life of astronaut Neil Armstrong and the space mission that led him to become the first man to walk on the moon in July 1969. It stars Ryan Gosling, Claire Foy and Jason Clarke.

Having worked with the film’s director, Damien Chazelle, on two previous outings — La La Land (2016) and Whiplash (2014) — Cross advised that he likes to have sound available on his Avid workstation as soon as possible. “I had some rough music for the big action scenes,” he said, “together with effects recordings from Ai-Ling [Lee].” The latter included some of the SpaceX rockets, plus recordings of space suits and other NASA artifacts. “This gave me a sound bed for my first cut,” the picture editor continued. “I sent that temp track to Ai-Ling for her sound design and SFX, and to Milly [Iatrou Morgan] for dialog editorial.”

A key theme for the film was its documentary style. Taylor recalled, “That guided the shape of the soundtrack and the dialog pre-dubs. They had a cutting room next to the Hitchcock Theater [at Universal Studios, used for pre-dub mixes and finals] so that we could monitor progress.” There were no temp mixes on this project.

“We had a lot of close-up scenes to support Damien’s emotional feel, and used sound to build out the film,” Cross noted. “Damien watched a lot of NASA footage shot on 16 mm film, and wanted to make our film [immersive] and personal, using Neil Armstrong as a popular icon. In essence, we were telling the story as if we had taken a 16 mm camera into a capsule and shot the astronauts into space. And with an Atmos soundtrack!”

“We pre-scored the soundtrack against animatics in March 2017,” commented Hurwitz. “Damien [Chazelle] wanted to storyboard to music and use that as a basis for the first cut. I developed some themes on a piano and then full orchestral mock-ups for picture editorial. We then re-scored the film after we had a locked picture.” “We developed a grounded, gritty feel to support the documentary style that was not too polished,” Lee continued. “For the scenes on Earth we went for real-sounding backgrounds, Foley and effects. We also narrowed the mix field to complement the narrow image but, in contrast, opened it up for the set pieces to surround the audience.”

“The dialog had to sound how the film looked,” Morgan stressed. “To create that real-world environment I often used the mix channel for dialog in busy scenes like mission control, instead of the [individual] lavalier mics with their cleaner output. We also miked everybody in Mission Control – maybe 24 tracks in all.” “And we secured as many authentic sound recordings as we could,” Lee added. “In order to emphasize the emotional feel of being inside Neil Armstrong’s head space, we added surreal and surprising sounds like an elephant roar, lion growl or animal stampede to these cockpit sequences. We also used distortion and over-modulation to add ‘grit’ and realism.”

“It was a Native Atmos mix,” advised Montaño. “We used Atmos to reflect what the picture showed us, but not in a gimmicky way.” “During the rocket launch scenes,” Lee offered, “we also used the Atmos full-range surround channels to place many of the full-bodied, bombastic rocket roars and explosions around the audience.” “But we wanted to honor the documentary style,” Taylor added, “by keeping the music within the front LCR loudspeakers, and not coming too far out into the surrounds.”

“A Star Is Born” panel: (L-R) Steve Morrow, Dean Zupancic and Nick Baxter

The Sound of Director Bradley Cooper’s A Star Is Born
A subsequent panel discussion in the “Composers Lounge” series, again moderated by Kiser, focused on “The Sound of A Star Is Born,” with production sound mixer Steve Morrow, music production mixer Nick Baxter and re-recording mixer Dean Zupancic. The film is a retelling of the classic tale of a musician – Jackson Maine, played by Cooper – who helps a struggling singer find fame, even as age and alcoholism send his own career into a downward spiral. Morrow recounted that the director’s costar, Lady Gaga, insisted that all vocals be recorded live.

“We arranged to record scenes during concerts at the Stagecoach 2017 Festival,” the production mixer explained. “But because these were new songs that would not be heard in the film until 18 months later, [to prevent unauthorized bootlegs] we had to keep the sound out of the PA system, and feed a pre-recorded band mix to on-stage wedges or in-ear monitors.” “We had just a handful of minutes before Willie Nelson was scheduled to take the stage,” Baxter added, “and so we had to work quickly” in front of an audience of 45,000 fans. “We rolled on the equipment, hooked up the microphones, connected the monitors and went for it!”

To recreate the sound of real-world concerts, Baxter made impulse-response recordings of each venue – in stereo as well as 5.1- and 7.1-channel formats. “To make the soundtrack sound totally live,” Morrow continued, “at Coachella Festival we also captured the IR sound echoing off nearby mountains.” Other scenes were shot during Lady Gaga’s “Joanne” Tour in August 2017, while the tour stopped in Los Angeles, and others in the Palm Springs Convention Center, where Cooper’s character is seen performing at a pharmaceutical convention.

“For scenes filmed at the Glastonbury Festival in the UK in front of 110,000 people,” Morrow recalled, “we had been allocated just 10 minutes to record parts for two original songs — ‘Maybe It’s Time’ and ‘Black Eyes’ — ahead of Kris Kristofferson’s set. But then we were told that, because the concert was running late, we only had three minutes. So we focused on securing 30 seconds of guitar and vocals for each song.”

During a scene shot in a parking lot outside a food market where Lady Gaga’s character sings a cappella, Morrow advised that he had four microphones on the actors: “Two booms, top and bottom, for Bradley Cooper’s voice, and lavalier mics; we used the boom track when Lady Gaga (as Ally) belted out. I always had my hand on the gain knob! That was a key scene because it established for the audience that Ally can sing.”

Zupancic noted that first-time director Cooper was intimately involved in all aspects of post production, just as he was in production. “Bradley Cooper is a student of film,” he said. “He worked closely with supervising sound editor Alan Robert Murray on the music and SFX collaboration.” The high-energy Atmos soundtrack was realized at Warner Bros. Studio Facilities’ post production facility in Burbank; additional re-recording mixers included Michael Minkler, Matthew Iadarola and Jason King, who also handled SFX editing.

An Avid session called “Monitoring and Control Solutions for Post Production with Immersive Audio” featured the company’s senior product specialist, Jeff Komar, explaining how Pro Tools with an S6 controller and an MTRX interface can manage complex immersive audio projects. A MIX panel entitled “Mixing Dialog: The Audio Pipeline,” moderated by Karol Urban from the Cinema Audio Society, brought together re-recording mixers Gary Bourgeois and Mathew Waters with production mixer Phil Palmer and sound supervisor Andrew DeCristofaro. “The Business of Immersive,” moderated by Gadget Hopkins, EVP with Westlake Pro, addressed immersive audio technologies, including Dolby Atmos, DTS and Auro-3D; other key topics included outfitting a post facility, new distribution paradigms and ROI while future-proofing a stage.

A companion “Parade of Carts & Bags,” presented by Cinema Audio Society in the Barbra Streisand Scoring Stage, enabled production sound mixers to show off their highly customized methods of managing the tools of their trade, from large soundstage productions to reality TV and documentaries.

Finally, within the Atmos-equipped William Holden Theater, the regular “Sound Reel Showcase,” sponsored by Formosa Group, presented eight-minute reels from films likely to be in consideration for a Best Sound Oscar, MPSE Golden Reel and CAS Awards, including A Quiet Place (Paramount) introduced by Erik Aadahl, Black Panther introduced by Steve Boeddeker, Deadpool 2 introduced by Martyn Zub, Mile 22 introduced by Dror Mohar, Venom introduced by Will Files, Goosebumps 2 introduced by Sean McCormack, Operation Finale introduced by Scott Hecker, and Jane introduced by Josh Johnson.

Main image: The Sound of First Man panel — Ai-Ling Lee (left), Mildred Iatrou Morgan & Tom Cross.

All photos copyright of Mel Lambert


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.


Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24-by-13-foot projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.

Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios to be in and out of sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that, and how it allows me to interact with folks at the studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how there is always so much to learn about our industry through how folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS, but I’d like to have the ability to say if the world lost power tomorrow, I’d be okay in the woods. I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music for brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for the Ad Council and on Resistance Radio, a promotion for Amazon Studios’ The Man in the High Castle. That work collectively won multiple Cannes Lion, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.

Sim Post NY expands audio offerings, adds five new staffers

Sim Post in New York is in growth mode. They recently expanded their audio for TV and film services and boosted their post team with five new hires. Following the recent addition of a DI theater to its New York location, Sim is building three audio suites, a voiceover room and support space for the expanded audio capabilities.

Primetime Emmy award-winner Sue Pelino joins Sim as a senior re-recording mixer. Over her career, Pelino has been nominated for 10 Primetime Emmy Awards, most recently winning her third Emmy in 2017 for Outstanding Sound Mixing for her work on the 2017 Rock & Roll Hall of Fame Induction Ceremony (HBO). Project highlights include performance series such as VH1 Sessions at West 54th, Tony Bennett: An American Classic, Alicia Keys — Unplugged, Tupac: Resurrection and Elton John: The Red Piano.

Dan Ricci also joins the Sim audio department as a re-recording mixer. A graduate of the Berklee College of Music, Ricci previously worked at Sony Music; his credits include Comedians in Cars Getting Coffee and the Grammy-nominated Netflix special Jerry Before Seinfeld. He has worked extensively with Dolby Atmos and with the immersive technologies involved in VR content creation.

Ryan Schumer completes Sim New York’s audio department as an assistant audio engineer. Schumer has a bachelor’s degree in Jazz Commercial Music, with a concentration in audio recording technology, from Five Towns College on Long Island.

Stephanie Pacchiano joins Sim as a finishing producer, following a 10-year stint at Broadway Video where she provided finishing and delivery services for a robust roster of clients. Highlights include Jerry Seinfeld’s Comedians in Cars Getting Coffee, Atlanta, Portlandia, Documentary Now! and delivering Saturday Night Live to over 25 domestic and international platforms.

Kassie Caffiero joins Sim as VP, business development, east coast sales. She brings with her over 25 years of post experience. A graduate of Queens College with a degree in communication arts, Caffiero began her post career in the mid-1980s working on CBS TV series. Her experience managing the scheduling, operations and sales departments at major post facilities led her to the role of VP of post production at Sony Music Studios in New York City for 10 years. This was followed by a stint at Creative Group in New York for five years and, most recently, Broadway Video, also in New York, for six years.

Sim Post, a division of Sim, provides end-to-end solutions for TV and feature film production and post production in LA, Vancouver, Toronto, New York and Atlanta.

Netflix’s Lost in Space: New sounds for a classic series

By Jennifer Walden

Netflix’s Lost in Space series, a remake of the 1965 television show, is a playground for sound. In the first two episodes alone, the series introduces at least five unique environments, including an alien planet, a whole world of new tech — from wristband communication systems to medical analysis devices — new modes of transportation, an organic-based robot lifeform and its correlating technologies, a massive explosion in space and so much more.

It was a mission not easily undertaken, but if anyone could manage it, it was four-time Emmy Award-winning supervising sound editor Benjamin Cook of 424 Post in Culver City. He’s led the sound teams on series like Starz’s Black Sails, Counterpart and Magic City, as well as HBO’s The Pacific, Rome and Deadwood, to name a few.

Benjamin Cook

Lost in Space was a reunion of sorts for members of the Black Sails post sound team. Making the jump from pirate ships to spaceships were sound effects editors Jeffrey Pitts, Shaughnessy Hare, Charles Maynes, Hector Gika and Trevor Metz; Foley artists Jeffrey Wilhoit and Dylan Tuomy-Wilhoit; Foley mixer Brett Voss; and re-recording mixers Onnalee Blank and Mathew Waters.

“I really enjoyed the crew on Lost in Space. I had great editors and mixers — really super-creative, top-notch people,” says Cook, who also had help from co-supervising sound editor Branden Spencer. “Sound effects-wise there was an enormous amount of elements to create and record. Everyone involved contributed. You’re establishing a lot of sounds in those first two episodes that are carried on throughout the rest of the season.”

Soundscapes
So where does one begin on such a sound-intensive show? The initial focus was on the soundscapes, such as the sound of the alien planet’s different biomes, and the sound of different areas on the ships. “Before I saw any visuals, the showrunners wanted me to send them some ‘alien planet sounds,’ but there is a huge difference between Mars and Dagobah,” explains Cook. “After talking with them for a bit, we narrowed down some areas to focus on, like the glacier, the badlands and the forest area.”

For the forest area, Cook began by finding interesting snippets of animal, bird and insect recordings, like a single chirp or little song phrase that he could treat with pitching or other processing to create something new. Then he took those new sounds and positioned them in the sound field to build up beds of creatures to populate the alien forest. In that initial creation phase, Cook designed several tracks, which he could use for the rest of the season. “The show itself was shot in Canada, so that was one of the things they were fighting against — the showrunners were pretty conscious of not making the crash planet sound too Earthly. They really wanted it to sound alien.”
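One generic way to turn a single chirp into something new, in the spirit of the pitching Cook describes (though not necessarily his toolchain), is varispeed-style resampling, where pitch and duration shift together and help disguise the source. A small Python sketch with a placeholder file name:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

sr, chirp = wavfile.read("bird_chirp.wav")   # placeholder mono snippet
chirp = chirp.astype(np.float64)

semitones = -7                               # pitch down a fifth
ratio = 2 ** (semitones / 12)
# Varispeed: stretch the snippet, then play it back at the original rate.
# Pitch and duration change together, like slowing down a tape.
shifted = resample(chirp, int(len(chirp) / ratio))

shifted /= np.max(np.abs(shifted)) + 1e-12
wavfile.write("alien_chirp.wav", sr, shifted.astype(np.float32))
```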

Another huge aspect of the series’ sound is the communication systems. The characters talk to each other through the headsets in their spacesuit helmets, and through wristband communications. Each family has their own personal ship, called a Jupiter, which can contact other Jupiter ships through shortwave radios. They use the same radios to communicate with their all-terrain vehicles, called rovers. Cook notes these ham radios had an intentional retro feel. The Jupiters can send/receive long-distance transmissions from the planet’s surface to the main ship, called Resolute, in space. The families can also communicate with their Jupiters’ onboard systems.

Each mode of communication sounds different and was handled differently in post. Some processing was handled by the re-recording mixers, and some was created by the sound editorial team. For example, in Episode 1 Judy Robinson (Taylor Russell) is frozen underwater in a glacial lake. Whenever the shot cuts to Judy’s face inside her helmet, the sound is very close and claustrophobic.

Judy’s voice bounces off the helmet’s face-shield. She hears her sister through the headset and it’s a small, slightly futzed speaker sound. The processing on both Judy’s voice and her sister’s voice sounds very distinct, yet natural. “That was all Onnalee Blank and Mathew Waters,” says Cook. “They mixed this show, and they both bring so much to the table creatively. They’ll do additional futzing and treatments, like on the helmets. That was something that Onna wanted to do, to make it really sound like an ‘inside a helmet’ sound. It has that special quality to it.”

On the flipside, the ship’s voice was a process that Cook created. Co-supervisor Spencer recorded the voice actor’s lines in ADR and then Cook added vocoding, EQ futz and reverb to sell the idea that the voice was coming through the ship’s speakers. “Sometimes we worldized the lines by playing them through a speaker and recording them. I really tried to avoid too much reverb or heavy futzing knowing that on the stage the mixers may do additional processing,” he says.
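For flavor, here is a loose sketch of that kind of ship-voice chain in Python. True vocoding is more involved, so a ring modulator stands in for it here, followed by a band-pass “small speaker” futz; the carrier frequency and band edges are arbitrary choices, and the input file name is a placeholder.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, voice = wavfile.read("adr_line.wav")   # placeholder mono ADR line
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice)) + 1e-12

# Stand-in for the vocode step: ring modulation adds a metallic sheen
t = np.arange(len(voice)) / sr
robot = voice * np.sin(2 * np.pi * 180.0 * t)   # 180 Hz carrier is arbitrary

# "EQ futz": band-limit to a small-speaker range, roughly 300 Hz to 3 kHz
sos = butter(4, [300, 3000], btype="bandpass", fs=sr, output="sos")
futzed = sosfilt(sos, robot)

wavfile.write("ship_voice.wav", sr, futzed.astype(np.float32))
```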

In Episode 1, Will Robinson (Maxwell Jenkins) finds himself alone in the forest. He tries to call his father, John Robinson (Toby Stephens — a Black Sails alumnus as well), via his wristband comm system, but the transmission is interrupted by a strange, undulating, vocal-like sound. It’s interference from an alien ship that had crashed nearby. Cook notes that the interference sound required thorough experimentation. “That was a difficult one. The showrunners wanted something organic and very eerie, but it also needed to be jarring. We did quite a few versions of that.”

For the main element in that sound, Cook chose whale sounds for their innate pitchy quality. He manipulated and processed the whale recordings using Symbolic Sound’s Kyma sound design workstation.

The Robot
Another challenging set of sounds were those created for Will Robinson’s Robot (Brian Steele). The Robot makes dying sounds, movement sounds and face-light sounds when it’s processing information. It can transform its body to look more human. It can use its hands to fire energy blasts or as a tool to create heat. It says, “Danger, Will Robinson,” and “Danger, Dr. Smith.” The Robot is sometimes a good guy and sometimes a bad guy, and the sound needed to cover all of that. “The Robot was a job in itself,” says Cook. “One thing we had to do was to sell emotion, especially for his dying sounds and his interactions with Will and the family.”

One of Cook’s trickiest feats was to create the proper sense of weight and movement for the Robot, and to portray the idea that the Robot was alive and organic but still metallic. “It couldn’t be earthly technology. Traditionally for robot movement you will hear people use servo sounds, but I didn’t want to use any kind of servos. So, we had to create a sound with a similar aesthetic to a servo,” says Cook. He turned to the Robot’s Foley sounds, and devised a processing chain to heavily treat those movement tracks. “That generated the basic body movement for the Robot and then we sweetened its feet with heavier sound effects, like heavy metal clanking and deeper impact booms. We had a lot of textures for the different surfaces like rock and foliage that we used for its feet.”

The Robot’s face lights change color to let everyone know if it’s in good-mode or bad-mode. But there isn’t any overt sound to emphasize the lights as they move and change. If the camera is extremely close-up on the lights, then there’s a faint chiming or tinkling sound that accentuates their movement. Overall though, there is a “presence” sound for the Robot, an undulating tone that’s reminiscent of purring when it’s in good-mode. “The showrunners wanted a kind of purring sound, so I used my cat purring as one of the building block elements for that,” says Cook. When the Robot is in bad-mode, the sound is anxious, like a pulsing heartbeat, to set the audience on edge.

It wouldn’t be Lost in Space without the Robot’s iconic line, “Danger, Will Robinson.” Initially, the showrunners wanted that line to sound as close to the original 1960s delivery as possible. “But then they wanted it to sound unique too,” says Cook. “One comment was that they wanted it to sound like the Robot had metallic vocal cords. So we had to figure out ways to incorporate that into the treatment.” The vocal processing chain used several tools, from EQ, pitching and filtering to modulation plug-ins like Waves Morphoder and Dehumaniser by Krotos. “It was an extensive chain. It wasn’t just one particular tool; there were several of them,” he notes.

There are other sound elements that tie into the original 1960s series. For example, when Maureen Robinson (Molly Parker) and husband John are exploring the wreckage of the alien ship, they discover a virtual map room that lets them see into the solar system where they’ve crashed and into the galaxy beyond. The sound design during that sequence features sound material from the original show. “We treated and processed those original elements until they’re virtually unrecognizable, but they’re in there. We tried to pay tribute to the original when we could, when it was possible,” says Cook.

Other sound highlights include the Resolute exploding in space, which caused massive sections of the ship to break apart and collide. For that, Cook says contact microphones were used to capture the sound of tin cans being ripped apart. “There were so many fun things in the show for sound. From the first episode with the ship crash and it sinking into the glacier to the black hole sequence and the Robot fight in the season finale. The show had a lot of different challenges and a lot of opportunities for sound.”

Lost in Space was mixed in the Anthony Quinn Theater at Sony Pictures in 7.1 surround. Interestingly, the show was delivered in Dolby’s Home Atmos format. Cook explains, “When they booked the stage, the producers weren’t sure if we were going to do the show in Atmos or not. That was something they decided to do later, so we had to figure out a way to do it.”

They mixed the show in Atmos while referencing the 7.1 mix and then played those mixes back in a Dolby Home Atmos room to check them, making any necessary adjustments and creating the Atmos deliverables. “Between updates for visual effects and music as well as the Atmos mixes, we spent roughly 80 days on the dub stage for the 10 episodes,” concludes Cook.

Behind the Title: Grey Ghost Music mix engineer Greg Geitzenauer

NAME: Greg Geitzenauer

COMPANY: Minneapolis-based Grey Ghost Music

CAN YOU DESCRIBE YOUR COMPANY?
Side A: Music production, creative direction and licensing for the advertising and marketing industries. Side B: Audio post production for the advertising and marketing industries.

WHAT’S YOUR JOB TITLE?
Senior Mix Engineer

WHAT DOES THAT ENTAIL?
All the hands-on audio post work our clients need — from VO recording, editing, forensic/cleanup work to sound design and final mixing.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The number of times my voice has ended up in a final spot when the script calls for “recording engineer.”

WHAT’S YOUR FAVORITE PART OF THE JOB?
There are some really funny people in this industry. I laugh a lot.

WHAT’S YOUR LEAST FAVORITE?
Working on a particular project so long that I lose perspective on whether the changes being made are helping any more.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I get to work early — the time I get to spend confirming all my shit is together.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cutting together music for my daughter’s dance team.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was 14 when I found out what a recording engineer did, and I just knew. Audio and technology… it just pushes all my buttons.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Essentia Water, Best Buy, Comcast, Invisalign, 3M and Xcel Energy.

Invisalign

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
An anti-smoking radio campaign that won Radio Mercury and One Show awards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools HD, Kensington Expert Mouse trackball and Pentel Quicker-Clicker mechanical pencils.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Reddit and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Go home.


Color and audio post for Hulu’s The Looming Tower

Hulu’s limited series, The Looming Tower, explores the rivalries and missed opportunities that beset US law enforcement and intelligence communities in the lead-up to the 9/11 attacks. Based on the Pulitzer Prize-winning book by Lawrence Wright, who also shares credit as executive producer with Dan Futterman and Alex Gibney, the show’s 10 episodes paint an absorbing, if troubling, portrait of the rise of Osama bin Laden and al-Qaida, and offer fresh insight into the complex people who were at the center of the fight against terrorism.

For The Looming Tower’s sound and picture post team, the show’s sensitive subject matter and blend of dramatizations and archival media posed significant technical and creative challenges. Colorist Jack Lewars and online editor Jeff Cornell of Technicolor PostWorks New York were tasked with integrating grainy, run-and-gun news footage dating back to 1998 with crisply shot, high-resolution original cinematography. Supervising sound designer/effects mixer Ruy García and re-recording mixer Martin Czembor from PostWorks, along with a Foley team from Alchemy Post Sound, were charged with helping to bring disparate environments and action to life, but without sensationalizing or straying from historical accuracy.

L-R: colorist Jack Lewars and editor Jeff Cornell

Lewars and Cornell mastered the series in Dolby Vision HDR, working from the production’s camera original 2K and 3.4K ArriRaw files. Most of the color grading and conforming work was done with a light touch, according to Lewars, as the objective was to adhere to a look that appeared real and unadulterated. The goal was for viewers to feel they are behind the scenes, watching events as they happened.

Where more specific grades were applied, it was done to support the narrative. “We developed different look sets for the FBI and CIA headquarters, so people weren’t confused about where we were,” Lewars explains. “The CIA was working out of the basement floors of a building, so it’s dark and cool — the light is generated by fluorescent fixtures in the room. The FBI is in an older office building — its drop ceiling also has fluorescent lighting, but there is a lot of exterior light, so it’s greener, warmer.”

The show adds to the sense of realism by mixing actual news footage and other archival media with dramatic recreations of those same events. Lewars and Cornell help to cement the effect by manipulating imagery to cut together seamlessly. “In one episode, we matched an interview with Osama bin Laden from the late ‘90s with new material shot with an Arri Alexa,” recalls Lewars. “We used color correction and editorial effects to blend the two worlds.”

Cornell degraded some scenes to make them match older, real-world media. “I took the Alexa material and ‘muddied’ it up by exporting it to compressed SD files and then cutting it back into the master timeline,” he notes. “We also added little digital hits to make it feel like the archival footage.”

While the color grade was subtle and adhered closely to reality, it still packed an emotional punch. That is most apparent in a later episode that includes the attack on the Twin Towers. “The episode starts off in New York early in the morning,” says Lewars. “We have a series of beauty shots of the city and it’s a glorious day. It’s a big contrast to what follows — archival footage after the towers have fallen where everything is a white haze of dust and debris.”

Audio Post
The sound team also strove to remain faithful to real events. García recalls his first conversations about the show’s sound needs during pre-production spotting sessions with executive producer Futterman and editor Daniel A. Valverde. “It was clear that we didn’t want to glamorize anything,” he says. “Still, we wanted to create an impact. We wanted people to feel like they were right in the middle of it, experiencing things as they happened.”

García says that his sound team approached the project as if it were a documentary, protecting the performances and relying on sound effects that were authentic in terms of time and place. “With the news footage, we stuck with archival sounds matching the original production footage and accentuating whatever sounds were in there that would connect emotionally to the characters,” he explains. “When we moved to the narrative side with the actors, we’d take more creative liberties and add detail and texture to draw you into the space and focus on the story.”

He notes that the drive for authenticity extended to crowd scenes, where native speakers were used as voice actors. Crowd sounds for scenes set in the Middle East, for example, were built from original recordings made in those regions to ensure local accents were correct.

Much like Lewars’ approach to color, García and his crew used sound to underscore environmental and psychological differences between CIA and FBI headquarters. “We did subtle things,” he notes. “The CIA has more advanced technology, so everything there sounds sharper and newer versus the FBI where you hear older phones and computers.”

The Foley provided by artists and mixers from Alchemy Post Sound further enhanced differences between the two environments. “It’s all about the story, and sound played a very important role in adding tension between characters,” says Leslie Bloome, Alchemy’s lead Foley artist. “A good example is the scene where CIA station chief Diane Marsh is berating an FBI agent while casually applying her makeup. Her vicious attitude toward the FBI agent combined with the subtle sounds of her makeup created a very interesting juxtaposition that added to the story.”

In addition to footsteps, the Foley team created incidental sounds used to enhance or add dimension to explosions, action and environments. For a scene where FBI agents are inspecting a warehouse filled with debris from the embassy bombings in Africa, artists recorded brick and metal sounds on a Foley stage designed to capture natural ambience. “Normally, a post mixer will apply reverb to place Foley in an environment,” says Foley artist Joanna Fang. “But we recorded the effects in our live room to get the perspective just right as people are walking around the warehouse. You can hear the mayhem as the FBI agents are documenting evidence.”

“Much of the story is about what went wrong, about the miscommunication between the CIA and FBI,” adds Foley mixer Ryan Collison, “and we wanted to help get that point across.”

The soundtrack to the series assumed its final form on a mix stage at PostWorks. Czembor spent weeks mixing dialogue, sound and music elements into what he described as a cinematic soundtrack.

L-R: Martin Czembor and Ruy García

Czembor notes that the sound team provided a wealth of material, but for certain emotionally charged scenes, such as the attack on the USS Cole, the producers felt that less was more. “Danny Futterman’s conceptual approach was to go with almost no sound and let the music and the story speak for themselves,” he says. “That was super challenging, because while you want to build tension, you are stripping it down so there’s less and less and less.”

Czembor adds that music, from composer Will Bates, is used with great effect throughout the series, even though it might go by unnoticed by viewers. “There is actually a lot more music in the series than you might realize,” he says. “That’s because it’s not so ‘musical;’ there aren’t a lot of melodies or harmonies. It’s more textural…soundscapes in a way. It blends in.”

Czembor says that as a longtime New Yorker, working on the show held special resonance for him, and he was impressed with the powerful, yet measured way it brings history back to life. “The performances by the cast are so strong,” he says. “That made it a pleasure to work on. It inspires you to add to the texture and do your job really well.”

Pace Pictures opens large audio post and finishing studio in Hollywood

Pace Pictures has opened a new sound and picture finishing facility in Hollywood. The 20,000-square-foot site offers editorial finishing, color grading, visual effects, titling, sound editorial and sound mixing services. Key resources include a 20-seat 4K color grading theater, two additional HDR color grading suites and 10 editorial finishing suites. It also features a Dolby Atmos mix stage designed by three-time Academy Award-winning re-recording mixer Michael Minkler, who is a partner in the company’s sound division.

The new independently owned facility is located within IgnitedSpaces, a co-working site whose 45,000 square feet span three floors along Hollywood Boulevard. IgnitedSpaces targets media and entertainment professionals and creatives with executive offices, editorial suites, conference rooms and hospitality-driven office services. Pace Pictures has formed a strategic partnership with IgnitedSpaces to provide film and television productions with service packages encompassing the entire production lifecycle.

“We’re offering a turnkey solution where everything is on-demand,” says Pace Pictures founder Heath Ryan. “A producer can start out at IgnitedSpaces with a single desk and add offices as the production grows. When they move into post production, they can use our facilities to manage their media and finish their projects. When the production is over, their footprint shrinks, overnight.”

Pace Pictures is currently providing sound services for the upcoming Universal Pictures release Mamma Mia! Here We Go Again. It is also handling post work for a VR concert film from this year’s Coachella Valley Music and Arts Festival.

Completed projects include the independent features Silver Lake, Flower and The Resurrection of Gavin Stone, the TV series iZombie, VR concerts for the band Coldplay, Austin City Limits and Lollapalooza, and a Mariah Carey music video related to Sony Pictures’ animated feature The Star.

Technical features of the new facility include three DaVinci Resolve Studio color grading suites with professional color consoles, a Barco 4K HDR digital cinema projector in the finishing theater, and dual Avid Pro Tools S6 consoles in the Dolby Atmos mix stage, which also includes four Pro Tools HDX systems. The site features facilities for sound design, ADR and voiceover recording, title design and insert shooting. Onsite media management includes a robust SAN network, as well as LTO7 archiving and dailies services, and cold storage.

Ryan is an editor who has operated Pace Pictures as an editorial service for more than 15 years. His many credits include the films Woody Woodpecker, Veronica Mars, The Little Rascals, Lawless Range and The Lookalike, as well as numerous concert films, music clips, television specials and virtual reality productions. He has also served as a producer on projects for Hallmark, Mariah Carey, Queen Latifah and others. Originally from Australia, he began his career with the Australian Broadcasting Corporation.

Ryan notes that the goal of the new venture is to break from the traditional facility model and provide producers with flexible solutions tailored to their budgets and creative needs. “Clients do not have to use our talent; they can bring in their own colorists, editors and mixers,” he says. “We can be a small part of the production, or we can be the backbone.”

Sound editor/re-recording mixer Will Files joins Sony Pictures Post

Sony Pictures Post Production Services has added supervising sound editor/re-recording mixer Will Files, who comes to the studio after more than a decade at Skywalker Sound. He brings with him credits on more than 80 feature films, including Passengers, Deadpool, Star Wars: The Force Awakens and Fantastic Four.

Files won a 2018 MPSE Golden Reel Award for his work on War for the Planet of the Apes. His current project is the upcoming Columbia Pictures release Venom, out in US theaters this October.

He was also attracted by Sony Pictures’ ability to support his work both as a sound editor/sound designer and as a re-recording mixer. “I tend to wear a lot of hats. I often supervise sound, create sound design and mix my projects,” he says. “Sony Pictures has embraced modern workflows by creating technically advanced rooms that allow sound artists to begin mixing as soon as they begin editing. It makes the process more efficient and improves creative storytelling.”

Files will work in a new pre-dub mixing stage and sound design studio on the Sony Pictures lot in Culver City. The stage has Dolby Atmos mixing capabilities and features two Avid S6 mixing consoles, four Pro Tools systems, a Sony 4K digital cinema projector and a variety of other support gear.

Files describes the stage as a sound designer/mixer’s dream come true. “It’s a medium-size space, big enough to mix a movie, but also intimate. You don’t feel swallowed up when it’s just you and the filmmaker,” he says. “It’s very conducive to the creative process.”

Files began his career with Skywalker Sound in 2002, shortly after graduating from the University of North Carolina School of the Arts. He earned his first credit as supervising sound editor on the 2008 sci-fi hit Cloverfield. His many other credits include Star Trek: Into Darkness, Dawn of the Planet of the Apes and Loving.

Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode numbers. The total number of episodes in a show’s season can be 13 or 10 or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned a 2013 Oscar nom for sound, an MPSE nom and a BAFTA film nom for sound) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, as well as a nomination for editing the dialogue and ADR. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds: (L-R) Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialog and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through a cloud-based film management/collaboration tool called Pix. Pix let the showrunners, VFX supervisor, composer, sound team and picture team share content and share notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialog and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music, playing with repeated rhythms. Breaking the anticipated rhythm helped catch the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.

London’s LipSync upgrades studio, adds Dolby Atmos

LipSync Post, located in London’s Soho, has upgraded its studio with Dolby Atmos and installed a new control system. To accomplish this, LipSync teamed up with HHB Communications’ Scrub division to create a hybrid dual Avid S6 and AMS Neve DFC3D desk while also upgrading the room to create Dolby Atmos mixes with a new mastering unit. Now that the upgrade to Theatre 2 is complete, LipSync plans to upgrade Theatre 1 this summer.

The setup offers the best of both worlds: full access to the classic Neve DFC sound along with more hands-on control of Avid Pro Tools automation via the S6 desks. In order to streamline their workflow as more projects are mixed exclusively “in the box,” LipSync installed the S6s within the same frame as the DFC, with custom furniture created by Frozen Fish Design. This dual-operator configuration frees the mix engineers to work on separate Pro Tools systems simultaneously for fast and efficient turnaround in order to meet crucial project deadlines.

“The move into extended surround formats like Dolby Atmos is very exciting,” explains LipSync senior re-recording mixer Rob Hughes. “We have now completed our first feature mix in the refitted theater (Vita & Virginia directed by Chanya Button). It has a very detailed, involved soundtrack and the new system handled it with ease.”

Behind the Title: Spacewalk Sound’s Matthew Bobb

NAME: Matthew Bobb

COMPANY: Pasadena, California’s SpaceWalk Sound 

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service audio post facility specializing in commercials, trailers and spatial sound for virtual reality (VR). We have a heavy focus on branded content with clients such as Panda Express and Biore and studios like Warner Bros., Universal and Netflix.

WHAT’S YOUR JOB TITLE?
Partner/Sound Supervisor/Composer

WHAT DOES THAT ENTAIL?
I’ve transitioned more into the sound supervisor role. We have a fantastic group of sound designers and mixers who work here, plus a support staff to keep us on track and on budget. Putting my faith in them has allowed me to step away from the small details and look at the bigger picture on every project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
We’re still a small company, so while I mix and compose a little less than before, I find my days being filled with keeping the team moving forward. Most of what falls under my role is approving mixes, prepping for in-house clients the next day, sending out proposals and following up on new leads. A lot of our work is short form, so projects are in and out the door pretty fast — sometimes it’s all in one day. That means I always have to keep one eye on what’s coming around the corner.

The Greatest Showman 360

WHAT’S YOUR FAVORITE PART OF THE JOB?
Lately, it has been showing VR to people who have never tried it or have had a bad first experience, which is very unfortunate since it is a great medium. However, that all changes when you see someone come out of a headset exclaiming, “Wow, that is a game changer!”

We have been very fortunate to work on some well-known and loved properties and to have people get a whole new experience out of something familiar is exciting.

WHAT’S YOUR LEAST FAVORITE?
Dealing with sloppy edits. We have been pushing our clients to bring us into the fold as early as v1 to make suggestions on the flow of each project. I’ll keep my eye tuned to the timing of the dialog in relation to the music and effects, while making sure attention has been paid to the pacing of the edit to the music. I understand that the editor and director will have their attention elsewhere, so I’m trying to bring up potential issues they may miss early enough that they can be addressed.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I would say 3pm is pretty great most days. I should have accomplished something major by this point, and I’m moments away from that afternoon iced coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be crafting the ultimate sandwich, trying different combinations of meats, cheeses, spreads and veggies. I’d have a small shop, preferably somewhere tropical. We’d be open for breakfast and lunch, close around 4pm, and then I’d head to the beach to sip on Russell’s Reserve Small Batch Bourbon as the sun sets. Yes, I’ve given this some thought.

WHY DID YOU CHOOSE THIS PROFESSION?
I came from music but quickly burned out on the road. Studio life suited me much more, except all the music studios I worked at seemed to lack focus, or at least the clientele lacked focus. I fell into a few sound design gigs on the side and really enjoyed the creativity and reward of seeing my work out in the world.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We had a great year working alongside SunnyBoy Entertainment on VR content for the Hollywood studios including IT: Float, The Greatest Showman 360, Annabelle Creation: Bee’s Room and Pacific Rim: Inside the Uprising 360. We also released our first piece of interactive content, IT: Escape from Pennywise, for Gear VR and iOS.

Most recently, I worked on Star Wars: The Last Jedi in Scoring The Last Jedi: A 360 VR Experience. This takes Star Wars fans on a VIP behind-the-scenes intergalactic expedition, giving them a virtual tour of The Last Jedi’s production and soundstages and dropping them face-to-face with Academy Award-winning film composer John Williams and film director Rian Johnson.

Personally, I got to compose two Panda Express commercials, which was a real treat considering I sustained myself through college on a healthy diet of orange chicken.

IT: Float

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
IT: Float was very special. It was exciting to take an existing property that was not only created by Stephen King but was also already loved by millions of people, and expand on it. The experience brought the viewer under the streets and into the sewers with Pennywise the clown. We were able to get very creative with spatial sound, using his voice to guide you through the experience without being able to see him. You never knew where he was lurking. The 360 audio really ramped up the terror! Plus, we had a great live activation at San Diego Comic-Con where thousands of people came through and left pumped after getting a glimpse of the remake.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
It’s hard to imagine my life without these three: Spotify Premium, no ads! Philips Hue lights for those vibes. Lastly, Slack keeps our office running. It’s our not-so-secret weapon.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I treat social media as an escape. I’ll follow The Onion for a good laugh, or Anthony Bourdain to see some far-flung corner of the earth I didn’t know about.

DO YOU LISTEN TO MUSIC WHEN NOT MIXING OR EDITING?
If I’m doing busy work, I prefer something instrumental like Eric Prydz, Tycho, Bonobo — something with a melody and a groove that won’t make me fall asleep, but isn’t too distracting either.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
The best part about Los Angeles is how easy it is to escape Los Angeles. My family will hit the road for long weekends to Palm Springs, Big Bear or San Diego. We find a good mix of active (hiking) and inactive (2pm naps) things to do to recharge.

Pacific Rim: Uprising’s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on this grand a scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu beginning June 5.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo Del Toro (a fan of the Portal game series) had actress Ellen McLain as the voice of Gipsy Avenger since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with a sub-harmonic synthesis plug-in called Waves MaxxBass to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”
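
For readers curious about what that kind of processing does under the hood, here is a minimal sketch in Python/NumPy of one classic flavor of sub-harmonic synthesis — an octave divider that derives a signal at half the input’s fundamental and mixes it back under the dry sound. It is purely illustrative, not Waves’ proprietary MaxxBass algorithm, and the function and parameter names are invented for the example.

```python
# Toy octave divider -- one classic approach to sub-harmonic synthesis.
# Illustrative only; not Waves' proprietary MaxxBass algorithm.
import numpy as np

def add_sub_octave(x, amount=0.5):
    """Build a square wave at half the input's fundamental by flipping
    polarity on every second positive-going zero crossing, shape it with
    a crude envelope follower, then mix it under the dry signal."""
    sub = np.empty_like(x)
    sign, flip, prev = 1.0, False, 0.0
    for i, s in enumerate(x):
        if prev <= 0.0 < s:        # positive-going zero crossing
            if flip:
                sign = -sign       # flip every second crossing -> half frequency
            flip = not flip
        sub[i] = sign
        prev = s
    envelope = np.abs(x)           # crude amplitude tracking
    return x + amount * sub * envelope

# Quick check: a 110 Hz sine in yields added energy around 55 Hz.
fs = 48000
t = np.arange(fs) / fs
fat = add_sub_octave(np.sin(2 * np.pi * 110 * t), amount=0.4)
```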

That kind of selectivity was important because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.
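
To make the idea concrete, here is a minimal generic phaser sketch in Python/NumPy — an LFO sweeping a cascade of first-order all-pass filters, the textbook structure behind phaser-style plug-ins. It is a stand-in illustration, not Soundtoys’ actual PhaseMistress processing, and all names and parameters are invented for the example.

```python
# Generic textbook phaser: LFO-swept first-order all-pass stages mixed
# with the dry signal to create moving notches. Not Soundtoys' algorithm.
import numpy as np

def phaser(x, fs, stages=4, lfo_rate=0.5, f_min=300.0, f_max=3000.0, mix=0.5):
    n = np.arange(len(x))
    # LFO sweeps the all-pass corner frequency between f_min and f_max
    fc = f_min + (f_max - f_min) * 0.5 * (1.0 + np.sin(2 * np.pi * lfo_rate * n / fs))
    g = np.tan(np.pi * fc / fs)
    a = (1.0 - g) / (1.0 + g)          # time-varying all-pass coefficient
    y = x.astype(float)
    for _ in range(stages):
        out = np.empty_like(y)
        x1 = y1 = 0.0                  # one-sample filter state
        for i in range(len(y)):
            # first-order all-pass: y[n] = a*x[n] + x[n-1] - a*y[n-1]
            out[i] = a[i] * y[i] + x1 - a[i] * y1
            x1, y1 = y[i], out[i]
        y = out
    return (1.0 - mix) * x + mix * y   # dry/wet mix produces the notches
```

Run a recorded vocal through a chain like this (or several in series at different rates) and a swirling, servo-like quality starts to appear while the original performance remains audible underneath — the general effect the designers describe.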

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw bird shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”


Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their whole library when looking for Pacific Rim: Uprising sounds. It has its own library of specially designed, signature sounds that are all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor, that was not the case. Montaño, who handled the effects, says, “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective — for instance, when the Kaiju come together to form the Mega-Kaiju. “As the action escalates, it goes off-camera; it was more of a shadow and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second to none, but the people there are even better. “We have been very fortunate to work with great people, from Steven DeKnight, our director, to Dylan Highsmith, our picture editor, to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Review: RTW’s Masterclass Mastering Tools

By David Hurd

RTW, based in Cologne, Germany, has been making broadcast-quality metering tools for audio professionals since 1965. Today, we will be looking at its Masterclass Mastering Tools and Loudness Tools plug-ins, which are awesome to have in your arsenal if you are mastering music or audio for broadcast.

These tools operate both as DAW plug-ins and in standalone mode. I tested them in Magix Sound Forge.

To start, I simply opened Sound Forge and added the RTW plug-in to the Plug-in Chain. RTW’s Masterclass Mastering Tools handle all of the major broadcast loudness standards so that your mix doesn’t get squished, while also giving you a detailed picture of your mix’s dynamics for Web delivery.

The Masterclass Mastering bundle includes a lot of loudness presets that will conform your audio levels to the broadcast standards of different countries. Since the listeners of most of my projects reside in the USA, I used one of the US standard presets.

The CALM Act preset uses a K-weighted metering scale with “True Peak,” “Momentary,” “Short” and “Integrated Total Level” views, as well as a meter that displays your loudness range. I was mostly concerned with the Integrated Level and True Peak displays. The integrated level shows you an average of the perceived loudness over the entire length of the program. Because the measurement is gated, very quiet stretches don’t drag the average down, which effectively leaves room for dynamics in your mix.

This comes in handy on projects like a home improvement show that I work on, where I have mostly dialog except for a loud power tool like an air nailer or chop saw.

As long as the whole program conforms to the average for US standards for Integrated Level, my dialog can be heard while still allowing the power tools to be loud. This allows me to have a robust mix and still keep it legal.
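
If you ever want to sanity-check an integrated measurement outside your metering plug-in, the open source pyloudnorm library implements the same ITU-R BS.1770 K-weighted, gated loudness that US broadcast delivery (ATSC A/85, the basis of the CALM Act, with its -24 LKFS target) is built on. A minimal sketch, assuming pyloudnorm and soundfile are installed and using a hypothetical file name:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")      # hypothetical file path
meter = pyln.Meter(rate)             # ITU-R BS.1770 K-weighted meter

# Gated average loudness over the whole program, in LUFS/LKFS
integrated = meter.integrated_loudness(data)
print(f"Integrated loudness: {integrated:.1f} LUFS")

# Nudge the program toward the US broadcast target of -24 LKFS
normalized = pyln.normalize.loudness(data, integrated, -24.0)
sf.write("mix_norm.wav", normalized, rate)
```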

If you have ever tested the difference between Peak and RMS settings on a loudness plug-in, you know that your settings can make a huge difference in the perceived loudness of your audio signal. Usually, loud is good, but it depends on the hardware path that your program will have to take on its way to the listeners.

If your audio is going to be broadcast, your loud mix may be degraded when it is processed for broadcast by the station. If the broadcast output processing limiters decide that your mix is too loud, they will add compression or limiting of their own. You’ll learn too late that the station’s hardware has squished your wonderfully loud and punchy mix into mush.

If your listeners are on the Web, rather than watching a TV broadcast, you will have less of a problem. Most of the Internet broadcast venues, like YouTube and iTunes, are using an automatic volume control that just adjusts the file volume instead of applying any compression or limiting to your audio. The net result is that your listeners will hear your mix as it was intended to be heard.

Digital clipping is an ugly thing that no one wants any part of. To make sure that my program never clips, I also keep an eye on the True Peak meter, and here’s the cool part: rather than just reading the highest sample, it calculates where the reconstructed waveform would peak between samples and uses that level. This allows me to easily set an overall level for the whole mix that doesn’t include any clipping distortion.
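
The standard way meters estimate those inter-sample (“true”) peaks is to oversample the signal — ITU-R BS.1770 calls for 4x — and read the maximum of the reconstructed waveform. A minimal sketch in Python/SciPy, illustrative of the technique rather than RTW’s exact implementation:

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(x, oversample=4):
    """Estimate the inter-sample peak via 4x oversampling (per ITU-R
    BS.1770) and return it in dBTP (dB true peak, relative to full scale)."""
    upsampled = resample_poly(x, oversample, 1)
    peak = np.max(np.abs(upsampled))
    return 20.0 * np.log10(peak)

# The discrete samples can straddle the crest of a wave and under-read it;
# oversampling reveals the level the reconstructed waveform actually hits.
fs = 48000
t = np.arange(480) / fs
sine = np.sin(2 * np.pi * 997 * t + 0.6)
print(f"Sample peak: {20 * np.log10(np.max(np.abs(sine))):.2f} dBFS")
print(f"True peak:   {true_peak_dbtp(sine):.2f} dBTP")
```
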
As you probably know, the phase relationship between your audio channels is very important, so Masterclass Mastering Tools includes metering for that as well.

You get a Stereo Correlation Meter, a Surround Sound Analyzer and a RealTime Frequency Analyzer. To top it off, you also get a Vectorscope for monitoring the phase relationship between any pair of audio channels.
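
Under the hood, a correlation meter’s reading boils down to the normalized zero-lag cross-correlation of the two channels (and a vectorscope is essentially a plot of left against right, sample by sample). A minimal sketch of the correlation math in Python/NumPy — a generic illustration, not RTW’s code:

```python
import numpy as np

def stereo_correlation(left, right):
    """Correlation meter reading for a block of samples:
    +1 = identical channels (fully mono-compatible), 0 = decorrelated,
    -1 = identical but polarity-inverted (cancels in a mono fold-down)."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    if denom == 0.0:
        return 0.0                     # silence on either channel
    return float(np.sum(left * right) / denom)

# Example: flipping the polarity of one channel drives the meter to -1.
fs = 48000
t = np.arange(fs // 10) / fs
l = np.sin(2 * np.pi * 440 * t)
print(stereo_correlation(l, l))        # ->  1.0
print(stereo_correlation(l, -l))       # -> -1.0
```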

It’s not like you couldn’t add a bunch of metering plug-ins to your present system and get roughly the same results. But why would you want to? The Masterclass Mastering Tools from RTW puts everything that you need together in one easy-to-use package.

Summing Up
If you are on a budget, you may want to look into the Loudness Tools package, which is only $239. It contains everything the Mastering Tools package offers, except for the Surround Sound Analyzer, RealTime Analyzer and the Vectorscope. The full-blown Mastering Tools package is $578.91, which gives you everything you need to comply with loudness standards all over the world.

For conforming world-class professional audio, you need to use professional tools, and Masterclass Mastering Tools will easily enable you to get the job done.


David Hurd owns David Hurd Productions in Tampa, Florida. He has been reviewing products for over 20 years.

Ren Klyce: Mixing the score for Star Wars: The Last Jedi

By Jennifer Walden

There are space battles and epic music, foreign planets with unique and lively biomes, blasters, lightsabers, a universe at war and a force that connects it all. Over the course of eight “Episodes” and through numerous spin-off series and games, fans of Star Wars have become well acquainted with its characteristic sound.

Creating the world, sonically, is certainly a feat, but bringing those sounds together is a challenge of equal measure. Shaping the soundtrack involves sacrifice and egoless judgment calls that include making tough decisions in service of the story.

Ren Klyce

Skywalker Sound’s Ren Klyce was co-supervising sound editor, sound designer and a re-recording mixer on Star Wars: The Last Jedi. He not only helped to create the film’s sounds but he also had a hand in shaping the final soundtrack. As re-recording mixer of the music, Klyce got a new perspective on the film’s story.

He’s earned two Oscar nominations for his work on the Rian Johnson-directed The Last Jedi — one for sound editing and another for sound mixing. We reached out to Klyce to ask about his role as a re-recording mixer, what it was like to work with John Williams’ Oscar-nominated score, and what it took for the team to craft The Last Jedi’s soundtrack.

You had all the Skywalker-created effects, the score and all the dialog coming together for the final mix. How did you bring clarity to what could have been a chaotic soundtrack?
Mostly, it’s by forcing ourselves to potentially get rid of a lot of our hard work for the sake of the story. Getting rid of one’s work can be difficult for anyone, but it’s the necessary step in many instances. When you initially premix sound for a film, there are so many elements, and oftentimes we have everything prepared just in case they’re asked for. In the case of Star Wars, we didn’t know what director Rian Johnson might want and not want. So we had everything at the ready in either case.

On Star Wars, we ended up doing a blaze pass where we played everything from the beginning to the end of a reel all at once. We could clearly see that it was a colossal mess in one scene, but not so bad in another. It was like getting a 20-minute Cliff Notes of where we were going to need to spend some time.

Then it comes down to having really skilled mixers like David Parker (dialog) and Michael Semanick (sound effects), whose skill-sets include understanding storytelling. They understand what their role is about — which is making decisions as to what should stay, what should go, what should be loud or quiet, or what should be turned off completely. With sound effects, Michael is very good at this. He can quickly see the forest for the trees. He’ll say, “Let’s get rid of this. These elements can go, or the background sounds aren’t needed here.” And that’s how we started shaping the mix.

After doing the blaze pass, we will then go through and listen to just the music by itself. John Williams tells his story through music and by underscoring particular scenes. A lot of the process is learning what all the bits and pieces are and then weighing them up against each other. We might decide that the music in a particular scene tells the story best.

That is how we would start and then we worked together as a team to continue shaping the mix into a rough piece. Rian would then come in and give his thoughts to add more sound here or less music there, thus shaping the soundtrack.

After creating all of those effects, did you wish you were the one to mix them? Or, are you happy mixing music?
For me personally, it’s a really great experience to listen to and be responsible for the music because I’ve learned so much about the power of the music and what’s important. If it were the other way around, I might be a little more overly focused on the sound effects. I feel like we have a good dynamic. Michael Semanick has such great instincts. In fact, Rian described Michael as being an incredible storyteller, and he really is.

Mixing the music for me is a wonderful way to get a better scope of the entire soundtrack. By not touching the sound effects on the stage, those faders aren’t so precious. Instead, the movie itself and the soundtrack takes precedence instead of the bits and pieces that make it up.

What was the trickiest scene to mix in terms of music?
I think that would have to be the ski speeder sequence on the salt planet of Crait. That was very difficult because there was a lot of dodging and burning in the mix. In other words, Rian wanted to have loud music and then the music would have to dive down to expose a dialogue line, and then jump right back up again for more excitement and then dive down to make way for another dialogue line. Then boom, some sound effects would come in and the Millennium Falcon would zoom by. Then the Star Wars theme would take over and then it had to come down for the dialogue. So we worked that sequence quite a bit.

Our picture editor Bob Ducsay really guided us through the shape of that sequence. What was so great about having the picture editor present was that he was so intimate with the rhythm of the dialogue and his picture cutting. He knew where all of the story points were supposed to be, what motivated a look to the left and so on. Bob would say something like, “When we see Rose here, we really need to make sure we hear her musical theme, but then when we cut away, we need to hear the action.”

Were you working with John Williams’ music stems? Did you feel bad about pulling things out of his score? How do you dissect the score?
Working with John is obviously an incredible experience, and on this film I was lucky enough to work with Shawn Murphy as well, who is really one of my heroes and I’ve known him for years. He is the one who records the orchestra for John Williams and balances everything. Not only does he record the orchestra, but Shawn is a true collaborator with John as well. It’s incredible the way they communicate.

John is really mixing his own soundtrack when he’s up there on the podium conducting, and he’s making initial choices as to which instruments are louder than others — how loud the woodwinds play, how loud the brass plays, how loud the percussion is and how loud the strings are. He’s really shaping it. Between Williams and Murphy, they work on intonation, tuning and performance. They go through and record and then do pickups for this measure and that measure to make sure that everything is as good as it can be.

I actually got to witness John Williams do this incredible thing — which was during the recording of the score for the Crait scene. There was this one section where the brass was playing and John (who knows every single person’s name in that orchestra) called out to three people by name and said something like, “Mark, on bar 63, from beat two to beat six, can you not play please. I just want a little more clarity with two instruments instead of three. Thank you.” So they backed up and did a pick-up on that bar and that gentleman dropped out for those few beats. It was amazing.

In the end, it really is John who is creating that mix. Then, editorially, there were moments where we had to change things. Ramiro Belgardt, another trusted confidant of John Williams, was our music editor. Once the music was recorded and premixed, it was up to Ramiro to keep it as close as possible to what John intended throughout all of the picture changes.

A scene would be tightened or opened up, and the music isn’t going to be re-performed. That would be impossible, so it has to be edited or stretched or looped or truncated. Ramiro had the difficult job of making the music sound exactly as it did on the day it was performed. But in truth, if you look at his Pro Tools session, you’ll see all of the splices and edits he made to keep everything functioning properly.
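The “stretched” part of that job is worth a quick illustration. Modern tools do it with a phase vocoder, which changes a cue’s duration without changing its pitch. Below is a rough sketch of the general technique using the open-source librosa library; the file name and stretch amount are hypothetical, and this is not a description of Belgardt’s actual Pro Tools workflow.

```python
import librosa
import soundfile as sf

# Hypothetical cue; sr=None preserves the file's original sample rate.
y, sr = librosa.load("cue_4m2.wav", sr=None)

# Say the scene was tightened by 4%: play the cue 4% faster at the same pitch.
stretched = librosa.effects.time_stretch(y, rate=1.04)

sf.write("cue_4m2_conformed.wav", stretched, sr)
```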

Does a particular scene stick out?
There was one scene where Rey ignites the lightsaber for the very first time on Jedi Island, and there we did change the balance within the music. She’s on the cliff by the ocean and Luke is watching her as she’s swinging the lightsaber. Right when she ignites the lightsaber, her theme comes in, which is this beautiful piano melody. The problem was that when they mixed the piano, they didn’t have a really loud lightsaber sound playing against it. We were really struggling because we couldn’t get that piano melody to speak right there. I asked Ramiro if there was any way to get that piano separately, because I would love it if we could hear that theme come in just as strong as that lightsaber. Those are the tiny things that we would do, but they are few and far between. For the most part, the score is how John and Shawn intended the mix to be.

It was also wonderful having Ramiro there as John’s spokesperson. He knew all of the subtle, sacred little moments that Williams had written into the score. He pointed them out, and I was able to push and feature those.

Was Rian observing the sessions?
Rian attended every single scoring session and knew the music intricately. He was really excited for the music and wanted it to breathe. Rian’s knowledge of the music helped guide us.

Where did they perform and record the score?
This was recorded at the Barbra Streisand Scoring Stage on the Sony Pictures Studios lot in Culver City, California.

Are there any Easter eggs in terms of the score?
During the casino sequence there’s a beautiful piece of music that plays throughout, a sort of homage John Williams wrote harking back to the Cantina song he composed for the original Star Wars.

So, the Easter egg comes as the Fathiers are wreaking havoc in the casino and we cut to the inside of a confectionery shop. There’s an abrupt edit where all the music stops and you hear this sort of lounge piano that’s playing, like a piece of source music. That lounge piano is actually John Williams playing “The Long Goodbye,” which is the score that he wrote for the film The Long Goodbye. Rian is a huge fan of that score and he somehow managed to get John Williams to put that into the Star Wars film. It’s a wonderful little Easter egg.

John Williams is, in so many ways, the closest thing we have to Beethoven or Brahms in our time. When you’re in his presence — he’s 85 years old now — it’s humbling. He still writes all of his manuscripts by hand.

On that day that John sat down and played “The Long Goodbye” piano piece, Rian was so excited that he pulled out his iPhone and filmed the whole thing. John said, “Only for you, Rian, do I do this.” It was a very special moment.

The other part of the Easter egg is that John’s brother Donald Williams is a timpanist in the orchestra. So what’s cool is you hear John playing the piano and the very next sound is the timpani, played by his brother. So you have these two brothers and they do a miniature solo next to each other. So those are some of the fun little details.

John Williams earned an Oscar nomination for Best Original Score for Star Wars: The Last Jedi.
It’s an incredible score. One of the fortunate things that occurred on this film was that Rian and producer Ram Bergman wanted to give John Williams as much time as possible so they started him really early. I think he had a year to compose, which was great. He could take his time and really work diligently through each sequence. When you listen to just the score, you can hear all of the little subtle nuances that John composed.

For example, when Rose stuns Finn, she drags him along on this little cart while they’re having this conversation. If you listen to just the music through there, the way that John has scored every single little emotional beat in that sequence is amazing. With all the effects and dialogue, you’re not really noticing the musical details. You hear two people arguing and then agreeing. They hate each other and now they like each other. But when you deconstruct it, you hear the music supporting each one of those moments. Williams does things like that throughout the entire film. Every single moment has all these subtle musical details. All the scenes with Snoke in his lair, for example, have these ominous, dark choir phrases. It’s phenomenal.

The moments where the choice was made to remove the score completely, was that a hard sell for the director? Or, was he game to let go of the score in those effects-driven moments?
No, it wasn’t too difficult. There was one scene that we did revert on, though. It was on Crait, and Rian wanted to get rid of the whole big music sequence when Leia sees that the First Order is approaching and they have to shut the giant door. There was originally a piece of music there, which was where the crystal foxes were introduced, and we got rid of it. Then we watched the film and Rian asked us to put that music back.

A lot of the music edits were crafted in the offline edit, and those were done by music editor Joseph Bonn. Joe would craft those moments ahead of time and test them. So a lot of that was decided before it got to my hands.

But on the stage, we were still experimenting. Ramiro would suggest trying to lose a cue and we’d mute it from the sequence. That was a fun part of collaborating with everyone. It’s a live experiment. I would say that on this film most of the music editorial choices were decided before we got to the final mix. Joe Bonn spent months and months crafting the music guide, which helped immensely.

What is one audio tool that you could not have lived without on the mix? Why?
Without a doubt, it’s our Avid Pro Tools editing software. All the departments — dialogue, Foley, effects and music — were using Pro Tools. That is absolutely, hands-down, the one tool that we are addicted to. At this point, not having Pro Tools is like not having a hammer.

But you used a console for the final mix, yes?
Yes. Star Wars: The Last Jedi was not an in-the-box mix; it was not a live Pro Tools mix. We mixed it in the traditional manner on a Neve DFC Gemini console, which has its own EQ, dynamics processing, panning, reverb sends/returns, AUX sends/returns and LFE sends/returns.

The pre-premixing was done in Pro Tools. Then, looking at the sound effects for example, those were shaped roughly in the offline edit room before going to the mix stage. Michael Semanick would premix the effects through the Neve DFC in a traditional premixing format, which we would record to 9.1 pre-dubs and objects. A similar process was followed for the dialogue. So that was done with the console.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Oscar Watch: The Shape (and sound) of Water

Post production sound mixers Christian Cooke and Brad Zoern, who are nominated (with production mixer Glen Gauthier) for their work on Fox’s The Shape of Water, have sat side-by-side at mixing consoles for nearly a decade. The frequent collaborators, who handle mixing duties at Deluxe Toronto, faced an unusual assignment given that the film’s two lead characters never utter a single word of actual dialogue. In The Shape of Water, which has been nominated for 13 Academy Awards, Elisa (Sally Hawkins) is mute and the creature she falls in love with makes undefined sounds. This creative choice placed more than the usual amount of importance on the rest of the soundscape to support the story.

L-R: Nathan Robitaille, J. Miles Dale, Brad Zoern, director Guillermo del Toro, Christian Cooke, Nelson Ferreira, Filip Hosek, Cam McLauchlin, picture editor Sidney Wolinsky, Rob Hegedus, Doug Wilkinson.

Cooke, who focused on dialogue and music, and Zoern, who worked with effects, backgrounds and Foley, knew from the start that their work would need to fit into the unique and delicate tone that infused the performances and visuals. Their work began, as always, with pre-dubs followed by three temp mixes of five days each, which allowed for discussion and input from director Guillermo del Toro. It was at the premixes that the mixers got a feel for del Toro’s conception for the film’s soundtrack. “We were more literal at first with some of the sounds,” says Zoern. “He had ideas about blending effects and music. By the time we started on the five-week-long mix, we had a very clear idea about what he was looking for.”

The final mix took place in one of Deluxe Toronto’s five stages, which have identical acoustic qualities and the same Avid Pro Tools-based Harrison MP4D/Avid S6 hybrid console, JBL M2 speakers and Crown amps.

The mixers worked to shape sonic moments that do more than represent “reality”; they create mood and tension. This includes key moments such as the sound of a car’s windshield wipers, which builds in volume until it takes over the track as a metronome-like beat underlining the tension of the moment. One pivotal scene finds Richard Strickland (Michael Shannon) paying a visit to Zelda Fuller (Octavia Spencer). As Strickland speaks, Zelda’s husband Brewster (Martin Roach) watches television. “It was an actual mono track from a real show,” Cooke explains. “It starts out sounding roomy and distant, as it really would have sounded. As the scene progresses, it expands, getting more prominent and spreading out around the speakers [for the 5.1 version]. By the end of the scene, the audio from the TV has become something totally different from what it was at the start, and then we melded it seamlessly into Alexandre Desplat’s score.”
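That slow blossoming from a roomy mono source into the full speaker array was performed at the console, but the underlying move can be pictured as a time-varying gain crossfade. Here is a toy sketch; the 5.1 channel ordering and gain targets are assumptions for illustration only.

```python
import numpy as np

def spread_mono_to_51(mono, sr, spread_time_s):
    """Gradually spread a mono signal from the center across a 5.1 bed."""
    n = len(mono)
    # Assumed channel order: L, R, C, LFE, Ls, Rs
    start = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # all in the center
    end   = np.array([0.5, 0.5, 0.4, 0.0, 0.6, 0.6])  # spread into the room
    t = np.clip(np.arange(n) / (spread_time_s * sr), 0.0, 1.0)[:, None]
    gains = (1.0 - t) * start + t * end                # linear crossfade
    return mono[:, None] * gains                       # shape (n, 6)
```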

Beyond the aesthetic work of building a sound mix, particularly one so fluid and expressionistic, post production mixers must also collaborate on a large number of technical decisions during the mix to ensure the elements have the right amount of emotional punch without calling attention to themselves. Individual sounds, even specific frequencies, vie for audience attention and the mixers orchestrate and layer them.

“It’s raining outside when they come into the room,” Zoern notes about the above scene. “We want to initially hear the sound of the rain to have a context for the scene. You never just want dialogue coming out of nowhere; it needs to live in a space. But then we pull that back to focus on the dialogue, and then the [augmented] audio from the TV gains prominence. During the final mix, Chris and I are always working together, side by side, to meld the hundreds of sounds the editors have built in a way that reflects the story and mood of the film.”

“We’re like an old married couple,” Cooke jokes. “We finish each other’s sentences. But it’s very helpful to have that kind of shorthand in this job. We’re blending so many pieces together and if people notice what we’ve done, we haven’t done our jobs.”

The 54th annual CAS Award nominees

The Cinema Audio Society announced the nominees for the 54th Annual CAS Awards for Outstanding Achievement in Sound Mixing. There are seven creative categories for 2017, and the Outstanding Product nominations were revealed as well.

Here are this year’s nominees:

Motion Picture – Live Action

Baby Driver

Production Mixer – Mary H. Ellis, CAS

Re-recording Mixer – Julian Slater, CAS

Re-recording Mixer – Tim Cavagin

Scoring Mixer – Gareth Cousins, CAS

ADR Mixer – Mark Appleby

Foley Mixer – Glen Gathard

Dunkirk

Production Mixer – Mark Weingarten, CAS

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Gary Rizzo, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Thomas J. O’Connell

Foley Mixer – Scott Curtis

Star Wars: The Last Jedi

Production Mixer – Stuart Wilson, CAS

Re-recording Mixer – David Parker

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Ren Klyce

Scoring Mixer – Shawn Murphy

ADR Mixer – Doc Kane, CAS

Foley Mixer – Frank Rinella

The Shape of Water

Production Mixer – Glen Gauthier

Re-recording Mixer – Christian T. Cooke, CAS

Re-recording Mixer – Brad Zoern, CAS

Scoring Mixer – Peter Cobbin

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Peter Persaud, CAS

Wonder Woman

Production Mixer – Chris Munro, CAS

Re-recording Mixer – Chris Burdon

Re-recording Mixer – Gilbert Lake, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Nick Kray

Foley Mixer – Glen Gathard

 

Motion Picture Animated

Cars 3

Original Dialogue Mixer – Doc Kane, CAS

Re-recording Mixer – Tom Meyers

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Nathan Nance

Scoring Mixer – David Boucher

Foley Mixer – Blake Collins

Coco

Original Dialogue Mixer – Vince Caro

Re-recording Mixer – Christopher Boyes

Re-recording Mixer – Michael Semanick

Scoring Mixer – Joel Iwataki

Foley Mixer – Blake Collins

Despicable Me 3

Original Dialogue Mixer – Carlos Sotolongo

Re-recording Mixer – Randy Thom, CAS

Re-recording Mixer – Tim Nielson

Re-recording Mixer – Brandon Proctor

Scoring Mixer – Greg Hayes

Foley Mixer – Scott Curtis

Ferdinand

Original Dialogue Mixer – Bill Higley, CAS

Re-recording Mixer – Randy Thom, CAS

Re-recording Mixer – Lora Hirschberg

Re-recording Mixer – Leff Lefferts

Scoring Mixer – Shawn Murphy

Foley Mixer – Scott Curtis

The Lego Batman Movie

Original Dialogue Mixer – Jason Oliver

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Wayne Pashley

Scoring Mixer – Stephen Lipson

Foley Mixer – Lisa Simpson

 

Motion Picture – Documentary

An Inconvenient Sequel: Truth to Power

Production Mixer – Gabriel Monts

Re-recording Mixer – Kent Sparling

Re-recording Mixer – Gary Rizzo, CAS

Re-recording Mixer – Zach Martin

Scoring Mixer – Jeff Beal

Foley Mixer – Jason Butler

Eric Clapton: Life in 12 Bars

Re-recording Mixer – Tim Cavagin

Re-recording Mixer – William Miller

ADR Mixer – Adam Mendez, CAS

Gaga: Five Feet Two

Re-recording Mixer – Jonathan Wales, CAS

Re-recording Mixer – Jason Dotts

Jane

Production Mixer – Lee Smith

Re-recording Mixer – David E. Fluhr, CAS

Re-recording Mixer – Warren Shaw

Scoring Mixer – Derek Lee

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Ryan Maguire

Long Strange Trip

Production Mixer – David Silberberg

Re-recording Mixer – Bob Chefalas

Re-recording Mixer – Jacob Ribicoff

 

Television Movie Or Mini-Series

Big Little Lies: “You Get What You Need”

Production Mixer – Brendan Beebe, CAS

Re-recording Mixer – Gavin Fernandes, CAS

Re-recording Mixer – Louis Gignac

Black Mirror: “USS Callister”

Production Mixer – John Rodda, CAS

Re-recording Mixer – Tim Cavagin

Re-recording Mixer – Dafydd Archard

Re-recording Mixer – Will Miller

ADR Mixer – Nick Baldock

Foley Mixer – Sophia Hardman

Fargo: “The Narrow Escape Problem”

Production Mixer – Michael Playfair, CAS

Re-recording Mixer – Kirk Lynds, CAS

Re-recording Mixer – Martin Lee

Scoring Mixer – Michael Perfitt

Sherlock: “The Lying Detective”

Production Mixer – John Mooney, CAS

Re-recording Mixer – Howard Bargroff

Scoring Mixer – Nick Wollage

ADR Mixer – Peter Gleaves, CAS

Foley Mixer – Jamie Talbutt

Twin Peaks: “Gotta Light?”

Production Mixer – Douglas Axtell

Re-recording Mixer – Dean Hurley

Re-recording Mixer – Ron Eng

 

Television Series – 1-Hour

Better Call Saul: “Lantern”

Production Mixer – Phillip W. Palmer, CAS

Re-recording Mixer – Larry B. Benjamin, CAS

Re-recording Mixer – Kevin Valentine

ADR Mixer – Matt Hovland

Foley Mixer – David Michael Torres, CAS

Game of Thrones: “Beyond the Wall”

Production Mixer – Ronan Hill, CAS

Production Mixer – Richard Dyer, CAS

Re-recording Mixer – Onnalee Blank, CAS

Re-recording Mixer – Mathew Waters, CAS

Foley Mixer – Brett Voss, CAS

Stranger Things: “The Mind Flayer”

Production Mixer – Michael P. Clark, CAS

Re-recording Mixer – Joe Barnett

Re-recording Mixer – Adam Jenkins

ADR Mixer – Bill Higley, CAS

Foley Mixer – Anthony Zeller, CAS

The Crown: “Misadventure”

Production Mixer – Chris Ashworth

Re-recording Mixer – Lee Walpole

Re-recording Mixer – Stuart Hilliker

Re-recording Mixer – Martin Jensen

ADR Mixer – Rory de Carteret

Foley Mixer – Philip Clements

The Handmaid’s Tale: “Offred”

Production Mixer – John J. Thomson, CAS

Re-recording Mixer – Lou Solakofski

Re-recording Mixer – Joe Morrow

Foley Mixer – Don White

 

Television Series – 1/2 Hour

Ballers: “Yay Area”

Production Mixer – Scott Harber, CAS

Re-recording Mixer – Richard Weingart, CAS

Re-recording Mixer – Michael Colomby, CAS

Re-recording Mixer – Mitch Dorf

Black-ish: “Juneteenth, The Musical”

Production Mixer – Tom N. Stasinis, CAS

Re-recording Mixer – Peter J. Nusbaum, CAS

Re-recording Mixer – Whitney Purple

Modern Family: “Lake Life”

Production Mixer – Stephen A. Tibbo, CAS

Re-recording Mixer – Dean Okrand, CAS

Re-recording Mixer – Brian R. Harman, CAS

Silicon Valley: “Hooli-Con”

Production Mixer – Benjamin A. Patrick, CAS

Re-recording Mixer – Elmo Ponsdomenech

Re-recording Mixer – Todd Beckett

Veep: “Omaha”

Production Mixer – William MacPherson, CAS

Re-recording Mixer – John W. Cook II, CAS

Re-recording Mixer – Bill Freesh, CAS

 

Television Non-Fiction, Variety Or Music Series Or Specials

American Experience: “The Great War – Part 3”

Production Mixer – John Jenkins

Re-Recording Mixer – Ken Hahn

Anthony Bourdain: Parts Unknown: “Oman”

Re-Recording Mixer – Benny Mouthon, CAS

Deadliest Catch: “Last Damn Arctic Storm”

Re-Recording Mixer – John Warrin

Rolling Stone: “Stories from the Edge”

Production Mixer – David Hocs

Production Mixer – Tom Tierney

Re-Recording Mixer – Tom Fleischman, CAS

Who Killed Tupac?: “Murder in Vegas”

Production Mixer – Steve Birchmeier

Re-Recording Mixer – John Reese

 

Nominations For Outstanding Product – Production

DPA – DPA Slim

Lectrosonics – Duet Digital Wireless Monitor System

Sonosax – SX-R4+

Sound Devices – MixPre-10T Recorder

Zaxcom – ZMT3-Phantom

 

Nominations For Outstanding Product – Post Production

Dolby – Dolby Atmos Content Creation Tools

FabFilter – Pro-Q 2 Equalizer

Exponential Audio – R4 Reverb

iZotope – RX 6 Advanced

Todd-AO – Absentia DX

The Awards will be presented at a ceremony on February 24 at the Omni Los Angeles Hotel at California Plaza. This year’s CAS Career Achievement Award will be presented to re-recording mixer Anna Behlmer, the CAS Filmmaker Award will be given to Joe Wright and the Edward J. Greene Award for the Advancement of Sound will be presented to Tomlinson Holman, CAS. The Student Recognition Award winner will also be named and will receive a cash prize.

Main Photo: Wonder Woman

Mixing the sounds of history for Marshall

By Jennifer Walden

Director Reginald Hudlin’s courtroom drama Marshall tells the story of Thurgood Marshall (Chadwick Boseman) during his early career as a lawyer. The film centers on a case Marshall took in Connecticut in the early 1940s. He defended a black chauffeur named Joseph Spell (Sterling K. Brown) who was charged with attempted murder and sexual assault of his rich, white employer Eleanor Strubing (Kate Hudson).

At that time, racial discrimination and segregation were widespread even in the North, and Marshall helped to shed light on racial inequality by taking on Spell’s case and making sure he got a fair trial. It’s a landmark court case that is not only of huge historical consequence but is still relevant today.

Mixers Anna Behlmer and Craig Mann

“Marshall is so significant right now with what’s happening in the world,” says Oscar-nominated re-recording mixer Anna Behlmer, who handled the effects on the film. “It’s not often that you get to work on a biographical film of someone who lived and breathed and did amazing things as far as freedom for minorities. Marshall began the NAACP [Legal Defense Fund] and argued Brown v. Board of Education, which stopped the segregation of the schools. So, in that respect, I felt the weight and the significance of this film.”

Oscar-winning supervising sound editor/re-recording mixer Craig Mann handled the dialogue and music. Behlmer and Mann mixed Marshall in 5.1 surround on a Euphonix System 5 console on Stage 2 at Technicolor at Paramount in Hollywood.

In the film, crowds gather on the steps outside the courthouse — a mixture of supporters and opponents shouting their opinions on the case. When dealing with shouting crowds in a film, Mann likes to record the loop group for those scenes outside. “We recorded in Technicolor’s backlot, which gives a nice slap off all the buildings,” says Mann, who miked the group from two different perspectives to capture the feeling that they’re actually outside. For the close-mic rig, Mann used an L-C-R setup with two Schoeps CMC641s for left and right and a CMIT 5U for center, feeding into a TASCAM HSP-82 8-channel recorder.

“We used the CMIT 5U mic because that was the production boom mic and we knew we’d be intermingling our recordings with the production sound, because they recorded some sound on the courthouse stairs,” says Mann. “We matched that up so that it would anchor everything in the center.”

For the distant rig, Mann went with a Sanken CSS-5 set to record in stereo, feeding a Sound Devices 722. Since they were running two setups simultaneously, Mann says they beeped everyone with a bullhorn to get slate sync for the two rigs. Then to match the timing of the chanting with production sound, they had a playback rig with eight headphone feeds out to chosen leaders from the 20-person loop group. “The people wearing headphones could sync up to the production chanting and those without headphones followed along with the people who had them on.”
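Once both rigs have recorded the same beep, their offset can also be recovered in software by cross-correlating the two recordings around the slate. The sketch below is a generic alignment technique, not a description of the Marshall workflow; the variable names are illustrative.

```python
import numpy as np
from scipy.signal import correlate

def rig_offset_samples(close_rig, distant_rig):
    """Samples by which distant_rig lags close_rig (negative means it leads)."""
    corr = correlate(distant_rig, close_rig, mode="full")
    return int(np.argmax(np.abs(corr)) - (len(close_rig) - 1))

# Correlate just the first few seconds around the slate beep, then shift:
# offset = rig_offset_samples(close[:sr * 10], distant[:sr * 10])
# distant_aligned = np.roll(distant, -offset)  # crude; trim edges in practice
```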

Inside the courtroom, the atmosphere is quiet and tense. Mann recorded the loop group (inside the studio this time) reacting as non-verbally as possible. “We wanted to use the people in the gallery as a tool for tension. We do all of that without being too heavy handed, or too hammy,” he says.

Sound Effects
On the effects side, the Foley — provided by Foley artist John Sievert and his team at JRS Productions in Toronto — was a key element in the courtroom scenes. Each chair creak and paper shuffle plays to help emphasize the drama. Behlmer references a quiet scene in which Thurgood is arguing with his other attorney defending the case, Sam Friedman (Josh Gad). “They weren’t arguing with their voices. Instead, they were shuffling papers and shoving things back and forth. The defendant even asks if everything is ok with them. Those sounds helped to convey what was going on without them speaking,” she says.

You can hear the chair creak as Judge Foster (James Cromwell) leans forward and raises an eyebrow and hear people in the gallery shifting in their seats as they listen to difficult testimony or shocking revelations. “Something as simple as people shifting on the bench to underscore how uncomfortable the moment was, those sounds go a long way when you do a film like this,” says Behlmer.

During the testimony, there are flashback sequences that illustrate each person’s perception of what happened during the events in question. The flashback effect is partially created through the picture (the flashbacks are colored differently) and partially through sound. Mann notes that early on, they made the decision to omit most of the sounds during the flashbacks so that the testimony wouldn’t be overshadowed.

“The spoken word was so important,” adds Behlmer. “It was all about clarity, and it was about silence and tension. There were revelations in the courtroom that made people gasp and then there were uncomfortable pauses. There was a delicacy with which this mix had to be done, especially with regards to Foley. When a film is really quiet and delicate and tense, then every little nuance is important.”

Away from the courthouse, the film has a bit of fun. There’s a jazz club scene in which Thurgood and his friends cut loose for the evening. A band and a singer perform on stage to a packed club. The crowd is lively. Men and women are talking and laughing and there’s the sound of glasses clinking. Behlmer mixed the crowds by following the camera movement to reinforce what’s on-screen.

Music
On the music side, Mann’s challenge was to get the brass — the trumpet and trombone — to sit in a space where it didn’t interfere too much with the dialogue. At the same time, Mann still wanted the music to feel exciting. “We had to get the track all jazz-clubbed up. It was about finding a reverb that was believable for the space. It was about putting the vocals and brass upfront and having the drums and bass be accompaniment.”

Having the stems helped Mann to not only mix the music against the dialogue but to also fit the music to the image on-screen. During the performance, the camera is close-up and sweeping along the band. Mann used the music stems to pan the instruments to match the scene. The shot cuts away from the performance to Thurgood and his friends at a table in the back of the club. Using the stems, Mann could duck out of the singer’s vocals and other louder elements to make way for the dialogue. “The music was very dynamic. We had to be careful that it didn’t interfere too much with the dialogue, but at the same time we wanted it to play.”
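Sweeping a stem across the image like that comes down to a pan law. A common choice is constant-power panning, sketched below for a simple stereo bus; the actual pan law used on the dub stage isn’t something the film’s team specified, so treat this as a generic illustration.

```python
import numpy as np

def constant_power_pan(stem, pan):
    """pan in [-1, 1]: -1 is hard left, 0 is center, 1 is hard right."""
    theta = (pan + 1.0) * np.pi / 4.0   # map pan to [0, pi/2]
    left = np.cos(theta) * stem         # equal power: L^2 + R^2 stays constant
    right = np.sin(theta) * stem
    return np.stack([left, right], axis=-1)

# Sweep a trumpet stem left-to-right across a shot (pan may be an array):
# pans = np.linspace(-1.0, 1.0, len(trumpet))
# lr = constant_power_pan(trumpet, pans)
```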

On the score, Mann used Exponential Audio’s R4 reverb to set the music back into the mix. “I set it back a bit farther than I normally would have, just to give it some space, so that I didn’t have to turn it down for dialogue clarity. It still got to shine, but it was a little distant compared to what was intended.”

Behlmer and Mann feel the mix was pretty straightforward. Their biggest obstacle was the schedule. The film had to be mixed in just ten days. “I didn’t even have pre-dubs. It was just hang and go. I was hearing everything for the first time when I sat down to mix it — final mix it,” explains Behlmer.

With Mann working the music and dialogue faders, co-supervising sound editor Bruce Tanis was supplying Behlmer with elements she needed during the final mix. “I would say Bruce was my most valuable asset. He’s the MVP of Marshall for the effects side of the board,” she says.

On the dialogue side, Mann says his gear MVP was iZotope RX 6. With so many quiet moments, the dialogue was exposed. It played prominently, without music or busy backgrounds to help hide any flaws. And the director wanted to preserve the on-camera performances so ADR was not an option.

“We tried to use alts to work our way out of a few problems, and we were successful. But there were a few shots in the courtroom that began as tight shots on the boom and then cut wide, so the boom had to pull back and we had to jump onto the lavs there,” concludes Mann. “Having iZotope to help tie those together, so that the cut was imperceptible, was key.”
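RX’s repair tools are far more sophisticated than this, but the core idea behind tying a boom and a lav together is a match EQ: measure the long-term average spectrum of each mic and apply the difference to one of them as a correction curve. A bare-bones sketch of that general technique follows, with an assumed correction limit of 12dB either way; it is not iZotope’s algorithm.

```python
import numpy as np
from scipy.signal import stft, istft

def match_lav_to_boom(lav, boom, sr, nfft=2048):
    """Apply a static match-EQ curve so the lav's tone approaches the boom's."""
    _, _, L = stft(lav, sr, nperseg=nfft)
    _, _, B = stft(boom, sr, nperseg=nfft)
    lav_avg = np.abs(L).mean(axis=1)    # long-term average spectrum, lav
    boom_avg = np.abs(B).mean(axis=1)   # long-term average spectrum, boom
    eq = boom_avg / np.maximum(lav_avg, 1e-9)
    eq = np.clip(eq, 10 ** (-12 / 20), 10 ** (12 / 20))  # limit to +/-12 dB
    _, out = istft(L * eq[:, None], sr, nperseg=nfft)
    return out
```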


Jennifer Walden is a NJ-based audio engineer and writer. Follow her on Twitter @audiojeney.

MPSE to present John Paul Fasal with Career Achievement Award

The Motion Picture Sound Editors (MPSE) will present sound designer and sound recordist John Paul Fasal with its 2018 MPSE Career Achievement Award. A 30-year veteran of the sound industry, Fasal has contributed to more than 150 motion pictures and is best known for his work in field recording.

Among his many credits are Top Gun, Master and Commander: The Far Side of the World, Interstellar, The Dark Knight, American Sniper and this year’s Dunkirk. Fasal will receive his award at the MPSE Golden Reel Awards ceremony, February 18, 2018 in Los Angeles.

“John is a master of his craft, an innovator who has pioneered many new recording techniques, and a restless, creative spirit who will stop at nothing to capture the next great sound,” says MPSE president Tom McCarthy.

The MPSE Career Achievement Award recognizes “sound artists who have distinguished themselves by meritorious works as both an individual and fellow contributor to the art of sound for feature film, television and gaming and for setting an example of excellence for others to follow.”

Fasal joins a distinguished list of sound innovators, including 2017 Career Achievement recipient Harry Cohen, Richard King, John Roesch, Skip Lievsay, Randy Thom, Larry Singer, Walter Murch and George Watters II.

“Sound artists typically work behind the scenes, out of the limelight, and so to be recognized in this way by my peers is humbling,” says Fasal. “It is an honor to join the past recipients of this award, many of whom are both colleagues and friends.”

Fasal began his career as a musician and songwriter, but gravitated toward post production sound in the 1980s. Among his first big successes was Top Gun for which he recorded and designed many of the memorable jet aircraft sound effects. He has been a member of the sound teams on several films that have won Academy Awards in sound categories, including Inception, The Dark Knight, Letters From Iwo Jima, Master and Commander: The Far Side of the World, The Hunt for Red October and Pearl Harbor.

Fasal has worked as a sound designer and recordist throughout his career, but in recent years has increasingly focused on field recording. He enjoys especially high regard for his ability to capture the sounds of planes, ships, automobiles and military weaponry. “The equipment has changed dramatically over the course of my career, but the philosophy behind the craft remains the same,” he says. “It still involves the layering of sounds to create a sonic picture and help tell the story.”

 

Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, who co-founded and helmed sound company Blast before teaming up with Sonic Union. He is now partnered with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen, whose staff will work out of both the Union Square and Bryant Park locations.

In other staffing news, mix engineer Owen Shearer now also serves as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.

 

The sound of Netflix’s The Defenders

By Jennifer Walden

Netflix’s The Defenders combines the stories of four different Marvel shows already on the streaming service: Daredevil, Iron Fist, Luke Cage and Jessica Jones. In the new show, the previously independent superheroes find themselves all wanting to battle the same foe — a cultish organization called The Hand, which plans to destroy New York City. Putting their differences aside, the superheroes band together to protect their beloved city.

Supervising sound editor Lauren Stephens, who works at Technicolor at Paramount, has earned two Emmy nominations for her sound editing work on Daredevil. And she supervised the sound for each of the aforementioned Marvel series, with the exception of Jessica Jones. So when it came to designing The Defenders she was very conscious of maintaining the specific sonic characteristics they had already established.

“We were dedicated to preserving the palette of each of the previous Marvel characters’ neighborhoods and sound effects,” she explains. “In The Defenders, we wanted viewers of the individual series to recognize the sound of Luke’s Harlem and Daredevil’s Hell’s Kitchen, for example. In addition, we kept continuity for all of the fight material and design work established in the previous four series. I can’t think of another series besides Better Call Saul that borrows directly from its predecessors’ sound work.”

But it wasn’t all borrowed material. Eventually, Luke Cage (Mike Colter), Daredevil (Charlie Cox), Jessica Jones (Krysten Ritter), Iron Fist (Finn Jones) and Elektra Natchios (Elodie Yung) come together to fight The Hand’s leader Alexandra Reid (Sigourney Weaver). “We experience new locations, and new fighting techniques and styles,” says Stephens. “Not to mention that half the city gets destroyed by The Hand. We haven’t had that happen in the previous series.”

Even though these Netflix/Marvel series are based on superheroes, the sound isn’t overly sci-fi. It’s as though the superheroes have more practical superhuman abilities. Stephens says their fight sounds are all real punches and impacts, with some design elements added only when needed, such as when Iron Fist’s iron fist is activated. “At the heart of our punches, for instance, is the sound of a real fist striking a side of beef,” she says. “It sounds like you’d expect, and then we amp it up when we mix. We record a ton of cloth movement and bodies scraping and sliding and tumbling in Foley. Those elements connect us to the humans on-screen.”

Since most of the violence plays out in hand-to-hand combat, those fight scenes take a lot of editing, with contributions from several sound departments. Stephens has her hard effects team, led by sound designer Jordon Wilby (who has worked on all the Netflix/Marvel series), cut sound effects for every single punch, grab, flip, throw and land. In addition, they cut metal shings and whooshes, impacts and drops for weapons, crashes and bumps into walls and furniture, and all the gunshot material.

Stephens then has the Technicolor Foley team, Foley artists Zane Bruce and Lindsay Pepper and mixer Anthony Zeller, cover all the footsteps, cloth “scuffle,” wall bumps, body falls and grabs. Additionally, she has dialogue editor Christian Buenaventura clean up any dialogue that occurs within or around the fight scenes. With group ADR, they replace every grunt and effort for each individual in the fight so that they have ultimate control over every element during the mix.

Stephens finds Gallery’s SpotStudio to be very helpful for cueing all the group ADR. “I shoot a lot of group ADR for the fights and to help create the right populated feel for NYC. SpotStudio is a slick program that interfaces well with Avid’s Pro Tools. It grabs timecode location of ADR cues and can then output that to many word processing programs. Personally, I use FileMaker Pro. I can make great cuesheets that are easy to format and use for engineers and talent.”
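Under the hood, that cueing workflow is mostly timecode bookkeeping. As a rough illustration only (the cue data and CSV layout are invented, and this is not SpotStudio’s or FileMaker’s actual format), here is how ADR cue start frames might be converted to timecode and dumped to a cuesheet. This simple version assumes non-drop-frame counting.

```python
import csv

FPS = 24  # assumed non-drop-frame project rate; real tools handle drop-frame

def frames_to_tc(total_frames, fps=FPS):
    """Convert a frame count to HH:MM:SS:FF timecode (non-drop-frame)."""
    s, f = divmod(int(total_frames), fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Invented example cues: (cue id, character, start frame, description)
cues = [
    ("G101", "Group", 14520, "Crowd walla - street fight"),
    ("G102", "Group", 18960, "Fight grunts and efforts"),
]

with open("adr_cuesheet.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Cue", "Character", "Timecode", "Description"])
    for cue_id, character, start_frame, desc in cues:
        writer.writerow([cue_id, character, frames_to_tc(start_frame), desc])
```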

All that effort results in fight scenes that feel “relentless and painful,” says Stephens. “I want them to have movement, tons of detail and a wide range of dynamics. I want the fights to sound great wherever our fans are listening.”

The most challenging fight in The Defenders happens in the season finale, when the superheroes fight The Hand in the sublevels of a building. “That underground fight was the toughest simply because it was endless and shot with a 360-degree turn. I focused on what was on-screen and continued those sounds just until the action passed out of frame. This kept our tracks from getting too cluttered but still gave the right sense that 60 people are going at it,” concludes Stephens.

Audio post vet Paul Rodriguez has passed away

It is with a heavy heart that we share the news that post sound vet and all-around nice guy Paul Rodriguez passed away September 26th in Los Angeles of cardiac arrest after a brief hospitalization. He was 65.

Rodriguez was president of South Lake Audio Services and VP of audio services and development at Roundabout Entertainment in Burbank, where he oversaw post production sound for projects including HBO’s Westworld. He was also a long-time board member of the Motion Picture Sound Editors (MPSE) and served as its treasurer for eight years. He produced the organization’s annual MPSE Golden Reel Awards ceremony.

An active member of the professional sound community for more than 30 years, Rodriguez served in executive, sales and creative capacities at Todd-AO/Soundelux, Wilshire Stages, 4MC and EFX Systems. He was also co-owner of the Eagle Eye Film Company, a supplier of picture editing systems. He joined Roundabout Entertainment in 2015. Known for his infectious humor and gregarious personality, Rodriguez was a tireless ambassador for the art of entertainment sound and enjoyed universal respect and affection among his industry colleagues and friends.

“Paul will be remembered for the energy, wisdom and true dedication he gave to the sound industry,” said MPSE president Tom McCarthy. “His passing leaves a great void on our board and in the hearts of our members.”

postPerspective had the opportunity to interview Paul at NAB this past April. He was funny and smart and a pleasure to be around. His positive attitude and humor were contagious.

Rodriguez is survived by his son Hunter, daughter-in-law Abbie and granddaughter Charlie; daughter Rachael and son-in-law Manny Wong; daughter Alexa and her partner James Gill; his former wife, Catheryn Rodriguez; and several sisters.

Donations in Rodriguez’s name may be made to Montrose Church, Best Friends Animal Society or Alzheimer’s Association.

 

 

Eleven’s Ben Freer celebrates 10 years, Jordan Meltzer now mixer

Eleven, a Santa Monica-based audio boutique, has some mixer news. Ben Freer is celebrating his 10th year with the studio, and Jordan Meltzer has been promoted to mixer and sound designer.

A Manchester native with a California upbringing, Freer was inspired by all things sound from a young age and was first introduced to Eleven as an intern in 2007. He was mentored by Eleven founder/mixer Jeff Payne and quickly climbed the ranks to become an official staff member that same year. Freer has mixed for renowned clients in the advertising and multimedia industries, including Toyota, GMC, T-Mobile, Nike, H&R Block, The Weeknd and Lorde.

“When I started at Eleven, I didn’t know much about audio mixing, I just knew that I wanted to immerse myself in it,” says Freer. “Working with the industry’s best and eventually getting my own mix room has been an incredibly humbling experience.”

Los Angeles native Jordan Meltzer got hooked on sound and began gravitating toward the craft after seeing The Who perform at the Hollywood Bowl at age 9. He played in bands while growing up in the San Fernando Valley before earning his BA in audio post production at Emerson College. After joining Eleven as an intern, he, like Freer, climbed the ranks, taking on the role of assistant mixer and building his portfolio on a variety of films and commercials for clients including HP, Dodge, Disney, FitBit and Sam Smith. Those contributions led to his recent promotion to mixer and sound designer.

“Climbing the Eleven ladder has been fulfilling, satisfying and challenging,” says Meltzer. “I remember sitting in the studio as an intern with Ben and Jeff, trying to learn and absorb it all. I always saw myself sitting in the chair, and it’s truly an honor to now be recognized as a mixer at such a warm, supportive and creative company.”

Main Image: L-R: Ben Freer and Jordan Meltzer

Emmy Awards: American Horror Story: Roanoke

A chat with supervising sound editor Gary Megregian

By Jennifer Walden

Moving across the country and buying a new house is an exciting and scary process, but when it starts raining teeth at that new residence, the scary pretty much cancels out the exciting. That’s the situation that Matt and Shelby, a couple from Los Angeles, find themselves in during American Horror Story’s sixth season on FX Networks. After moving into an old mansion in Roanoke, North Carolina, they discover that the dwelling and the local neighbors aren’t so accepting of outsiders.

American Horror Story: Roanoke explores a true-crime-style format that uses re-enactments to play out the drama. The role of Matt is played by Andre Holland in “reality” and by Cuba Gooding, Jr. in the re-enactments. Shelby is played by Lily Rabe and Sarah Paulson, respectively. It’s an interesting approach that added a new dynamic to an already creative series.

Emmy-winning Technicolor at Paramount supervising sound editor Gary Megregian is currently working on his seventh season of American Horror Story, coming to FX in early September. He took some time out to talk about Season 6’s first episode, “Chapter 1,” for which he and his sound editorial team have been nominated for an Emmy for Outstanding Sound Editing for a Limited Series. They won the Emmy in 2013, and this year marks their sixth nomination.

American Horror Story: Roanoke is structured as a true-crime series with re-enactments. What opportunities did this format offer you sound-wise?
This season was a lot of fun in that we had both the realistic world and the creative world to play in. The first half of the series dealt more with re-enactments than the reality-based segments, especially in Chapter 1. Aside from some interview segments, it was all re-enactments. The re-enactments were where we had more creative freedom for design. It gave us a chance to create a voice for the house and the otherworldly elements.

Gary Megregian

Was series creator Ryan Murphy still your point person for sound direction? For Chapter 1, did he have specific ideas for sound?
Ryan Murphy is definitely the single voice in all of his shows, but my point people for sound direction are his executive producer Alexis Martin Woodall and each episode’s picture editor.

After working with them for close to eight years now, there’s a lot of trust. I usually have a talk with them early each season about what direction Ryan wants to go, and then talk to the picture editor and assistant as they’re building the show.

The first night in the house in Roanoke, Matt and Shelby hear this pig-like scream coming from outside. That sound occurs often throughout the episode. How did that sound come to be? What went into it?
The pig sounds are definitely a theme that runs through Season 6, but they started all the way back in Season 1 with the introduction of Piggy Man. Originally, when Shelby and Matt first hear the pig, we tried designing something that fell more into an otherworldly sound, but Ryan definitely wanted it to be real. Other times, when we see Piggy Man, we went back to the design we used in Season 1.

The doors in the house sound really cool, especially that back door. What were the sources for the door sounds? Did you do any processing on the recordings to make them spookier?
Thanks. Some of the doors came from our library at Technicolor and some were from a crowd-sourced project from New Zealand-based sound designer Tim Prebble. I had participated in a project where he asked everyone involved to record a complete set of opens, closes, knocks, squeaks, etc. for 10 doors. When all was said and done, I gained a library of over 100GB of amazing door recordings. That’s my go-to for interesting doors.

As far as processing goes, nothing out of the ordinary was used. It’s all about finding the right sound.

When Shelby and Lee (Adina Porter) are in the basement, they watch this home movie featuring Piggy Man. Can you tell me about the sound work there?
The home movie was a combination of the production dialogue, Foley, a couple of instances of pig squeals and Piggy Man design, along with VHS and CRT noise. For dialogue, we didn’t clean up the production tracks too much, and Foley was used to help ground it. Once we got to the mix stage, re-recording mixers Joe Earle and Doug Andham helped bring it all together in their treatment.

What was your favorite scene to design? Why? What went into the sound?
One of my favorite scenes is the hail/teeth storm when Shelby’s alone in the house. I love the way it starts slow and builds from the inside, hearing the teeth on the skylight and windows. Once we step outside it opens up to surround us. I think our effects editor/designer Tim Cleveland did a great job on this scene. We used a number of hail/rain recordings along with Foley to help with some of the detail work, especially once we step outside.

Were there any audio tools that were helpful when working on Chapter 1? Can you share specific examples of how you used them?
I’m going to sound like many others in this profession, but I’d say iZotope RX. Ryan is not a big fan of ADR, so we have to make the production work. I can count on one hand the number of times we’ve had any actors in for ADR last season. That’s a testament to our production mixer Brendan Beebe and dialogue editor Steve Stuhr. While the production is well covered and recorded well, Steve still has his work cut out for him to present a track that’s clean. The iZotope RX suite helps with that.

Why did you choose Chapter 1 for Emmy consideration for its sound editorial?
One of the things I love about working on American Horror Story is that every season is like starting a new show. It’s fun to establish the sound and the tone of a show, and Chapter 1 is no exception. It’s a great representation of our crew’s talent and I’m really happy for them that they’re being recognized for it. It’s truly an honor.