
Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems that the job title of “editor” changes. An editor is no longer responsible just for shaping the story of the show but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and they really are that easy, but you can also nuance the audio if you like. The Era 4 Pro plugins work not only with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise, my system was able to toggle each plugin off and on without any issue, and playback was seamless with all of them applied. Now, I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone who has some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (but doing this can potentially require rendering). In addition, there is an output gain setting, “Diff,” that plays only the parts De-Esser is affecting. If you want to just try the “one button” approach, the Processing dial is really all you need to touch. In real time, you can hear the sibilance diminish. I personally like a little reality in my work so I might dial the processing to the “perfect” amount then dial it back 5% or 10%.
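At its core, a de-esser is a compressor keyed to sibilant frequencies: when high-frequency energy spikes, the gain is briefly pulled down. As a loose, purely illustrative sketch (this is not Accusonus’ actual algorithm, and the threshold values are invented), a toy version in Python might look like this:

```python
# Toy de-esser sketch -- illustrative only, not how Era 4 works internally.
# A crude first-difference acts as a highpass estimate of sibilant ("s")
# energy; while it exceeds a threshold, the signal is ducked.

THRESHOLD = 0.3   # made-up sibilance threshold
REDUCTION = 0.5   # made-up gain applied during sibilant moments

def deess(samples):
    out = []
    prev = 0.0
    for s in samples:
        hf = s - prev               # first difference ~ high-frequency content
        prev = s
        gain = REDUCTION if abs(hf) > THRESHOLD else 1.0
        out.append(s * gain)
    return out

# Rapidly alternating samples mimic sibilance; a slow ramp passes untouched.
print(deess([0.4, -0.4] * 4))              # ducked to [0.2, -0.2, ...]
print(deess([0.1 * i for i in range(8)]))  # unchanged
```

A real de-esser uses proper band filtering plus attack/release smoothing; the point here is only the detect-then-duck structure behind that one Processing dial.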

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants the one-touch processing but also the ability to dive into the specific audio spectrum being affected and see how the falloff is being performed. In addition, there are presets such as male vocals, female speech, etc. to jump immediately to where you need help. I personally find the De-Esser Pro more useful than the De-Esser. I can really shape the plugin. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with the Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of the Era-D, it’s the only plugin not described by its own title, funnily enough, but it is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — and have a different percentage of the plugin applied to said region — but also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use two mics to fix a suboptimal recording on one of the two mics, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It’s possible to only use one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Bundle Pro — and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Bundle Pro is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, then there are five buttons that let you focus where the processing occurs: all frequencies (flat), high frequencies, low frequencies, high and low frequencies and mid frequencies. I love clicking the power button to hear the differences — with and without the noise removal — but also dialing the knob around to really get the noise removed without going overboard. Whether removing noise in video or audio, there is a fine art in noise reduction, and the Era 4 Noise Remover makes it easy … even for an online editor.

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus has also included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one doesn’t because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, to help normalize your dialogue. Think of the tight mode as being much more distinctive than a normal interview conversation; Accusonus describes tight as a more focused “radio” sound. The Emphasis button helps to address issues when the speaker turns away from a microphone and introduces tonal problems, and there is a simple breath control as well.
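Conceptually, a voice leveler measures short-term loudness and rides the gain toward a target, the way a mixer would ride a fader. Here is a minimal, hypothetical sketch of that idea in Python (the window size, target level and smoothing factor are invented for illustration and have nothing to do with Era 4’s internals):

```python
# Toy voice-leveler sketch -- illustrative only; all values are invented.
# Measure short-term RMS per window, then glide a gain toward the value
# that would bring each window to a target loudness.
import math

TARGET_RMS = 0.2   # assumed target loudness
WINDOW = 4         # samples per analysis window (tiny, for illustration)
SMOOTH = 0.5       # how quickly the gain may move (0 = frozen, 1 = instant)

def level(samples):
    out = []
    gain = 1.0
    for w in range(0, len(samples), WINDOW):
        chunk = samples[w:w + WINDOW]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        desired = TARGET_RMS / rms if rms > 1e-6 else gain
        gain += SMOOTH * (desired - gain)   # glide, don't jump
        out.extend(s * gain for s in chunk)
    return out

# A quiet passage followed by a loud one: the quiet part is boosted,
# the loud part is pulled back down.
print(level([0.05] * 4 + [0.8] * 4))
```

The glide is what keeps the result sounding natural; an instant gain jump at every window boundary would pump audibly.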

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Bundle Pro are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore audio lost to clipping. If you recorded audio at high gain and it came out horribly distorted, then it’s probably been clipped. De-Clipper tries to salvage this clipped audio by recreating the overly saturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That’s when you should try De-Clipper. There are two modes: one for normal/standard use and one for trickier cases, which takes a little more processing power.
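To picture what “recreating overly saturated audio segments” means: clipped samples sit flat at full scale, and a de-clipper replaces those flat-topped runs with a plausible reconstruction. A crude, purely illustrative Python sketch follows; real de-clippers extrapolate the waveform’s curvature rather than drawing straight lines, and the clip threshold here is an assumption:

```python
# Toy de-clipper sketch -- illustrative only. Clipped samples sit flat at
# full scale; this finds those runs and bridges them linearly between the
# surrounding intact samples.

CLIP_LEVEL = 0.99  # assume anything at/above this magnitude was clipped

def declip(samples):
    out = list(samples)
    n = len(out)
    i = 0
    while i < n:
        if abs(out[i]) >= CLIP_LEVEL:
            start = i
            while i < n and abs(out[i]) >= CLIP_LEVEL:
                i += 1                      # walk to the end of the flat run
            left = out[start - 1] if start > 0 else (out[i] if i < n else 0.0)
            right = out[i] if i < n else left
            run = i - start
            for k in range(run):            # linear bridge across the run
                t = (k + 1) / (run + 1)
                out[start + k] = left + (right - left) * t
        else:
            i += 1
    return out

print(declip([0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]))
```

Even this crude version shows why the result can only approximate what the microphone heard: the information above the clip point is gone, and the plugin is making an educated guess.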

The final plugin, Plosive Remover, focuses on artifacting that’s typically caused by “p” and “b” sounds. This can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops will easily be repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode only plays back what is being affected. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to mess with. The Plosive Remover is another amazing plugin that, when you need it, really does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, which is also where you can grab the installers if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is the reason why the sound editing and mixing on Season 3 of HBO’s True Detective has been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice were always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn’t want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world-building for “The Great War and Modern Memory”? You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War and Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling, and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War and Modern Memory”? Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Behind the Title: One Thousand Birds sound designer Torin Geller

Initially interested in working in a music studio, this sound pro got a taste of audio post, and there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects: dialogue edit, sound design and mix, as well as help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how they can interplay in abstract, interesting ways in animation that aren’t necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.


The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb. And, as you can imagine, the sounds of the neighborhoods vary.

L to R: Eric Hirsch and David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances’ friend, Diane (Molly Shannon), has taken up residence in a Manhattan high-rise and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whir of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies, and it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So, I surround myself by smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.


Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get, or maybe carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria, where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment, and the cuts endured there don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season; he was also nominated for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple different dialects; that really helped to sell location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattley world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile when we go to the grittiness of the black-ops site in Yemen with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have off- and on-screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out, added more elements to expand the sound more into the surrounds. The show was mixed in Dolby Home Atmos so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them even farther than we had. Then we stripped out some to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

The guns were those original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

With the bombs in the opening sequence, there was debate on whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, even though in reality you’d hear an explosion that happens miles away much later than you’d see it.

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all; it’s rarely just one person. It does take a village, and we had great support from the producers. They were very intentional about sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed, and that was really appreciated.

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super good sounding episodes; it’s a great sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how the technology has changed so much. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used the Atmos sound field, with extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the States (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
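Auto-Align Post’s actual algorithm is proprietary, but the underlying idea the two describe (time-aligning the lav to the boom so the mics sum without phase cancellation) can be sketched with a basic cross-correlation. This is a whole-clip, integer-sample illustration, not the plugin’s method:

```python
import numpy as np

def align_lav_to_boom(boom, lav):
    """Find the lag that maximizes the cross-correlation between the
    two mic recordings, then shift the lav by that lag so the signals
    line up in phase. A negative lag means the lav arrives late."""
    corr = np.correlate(boom, lav, mode="full")
    lag = int(np.argmax(corr)) - (len(lav) - 1)
    return np.roll(lav, lag), lag
```

A real tool works per clip, resolves fractional-sample delays and drifting offsets, and may also flip polarity; this sketch only finds a single best integer lag.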

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
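Cook’s gain-staging approach (several light compression passes in series instead of one heavy one) can be illustrated with a toy static-curve compressor; real dialogue chains add attack and release smoothing, and the thresholds and ratios here are arbitrary stand-ins:

```python
import numpy as np

def compress(x, threshold_db=-18.0, ratio=2.0, makeup_db=3.0):
    """Toy static-curve compressor: level above the threshold is
    reduced by the ratio, then makeup gain restores loudness."""
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1 - 1 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20)

def gain_staged(x, stages=3):
    """Several gentle passes in series instead of one heavy squash."""
    for _ in range(stages):
        x = compress(x, ratio=1.5, makeup_db=1.5)
    return x
```

Each stage only shaves a few dB, so the cumulative result tames peaks while adding level, without the squashed quality of a single high-ratio pass.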

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We recorded some speeches to play in the background so we would have swells and crowd reactions that gave the crowd movement and kept it from sounding static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.
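Keeping a panned element’s perceived loudness steady as it moves off-center is what a constant-power pan law does; a minimal sketch of the generic law (not Cook’s console settings):

```python
import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal: position -1.0 is hard left, +1.0 hard right.
    Gains follow cos/sin so left^2 + right^2 is always 1."""
    theta = (position + 1.0) * np.pi / 4.0    # map to 0..pi/2
    return mono * np.cos(theta), mono * np.sin(theta)
```

Because the squared gains always sum to one, a background TV panned partway off-center keeps the same total power while leaving the center clear for dialogue.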

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
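The fader riding Cook describes is effectively manual side-chain ducking: track the dialogue level and pull the TV bed down whenever dialogue is present. A simplified sketch with a hard on/off gate and a hypothetical threshold (real ducking ramps smoothly between states):

```python
import numpy as np

def duck_bed(tv, dialog, sr=48000, depth=0.25, win_ms=50, thresh=0.05):
    """When the dialogue's short-term RMS exceeds `thresh`, drop the
    TV bed to `depth` of its level; otherwise leave it at full level."""
    win = int(sr * win_ms / 1000)
    # Sliding RMS envelope of the dialogue (edge-padded to keep length).
    padded = np.pad(dialog ** 2, (win // 2, win - win // 2 - 1), mode="edge")
    env = np.sqrt(np.convolve(padded, np.ones(win) / win, mode="valid"))
    gain = np.where(env > thresh, depth, 1.0)
    return tv * gain
```
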

Going back to what has changed over the last three years, one thing is that we now have more time per episode to mix the show. We got more and more time from the first mix to the last; by the end, we had twice as much time to mix the show as when we started.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.
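The PA futz Cook describes (rolled-off lows and highs, a touch of distortion, a big reverb with pre-delay) maps onto a simple processing chain. This sketch substitutes an exponentially decaying noise burst for a real hall impulse response, and every parameter value is an arbitrary stand-in:

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

def pa_futz(x, sr=48000):
    """Rough PA-speaker treatment: band-limit, saturate, add a
    pre-delayed synthetic 'hall' tail."""
    # Roll off lows and highs like a horn speaker (~300 Hz to 4 kHz).
    b, a = butter(2, [300 / (sr / 2), 4000 / (sr / 2)], btype="band")
    x = lfilter(b, a, x)
    # Gentle saturation, as if the PA amp is being pushed.
    x = np.tanh(3.0 * x)
    # 40 ms pre-delay, then a decaying noise tail standing in for
    # the convention hall's reflections.
    pre = np.zeros(int(0.04 * sr))
    t = np.arange(int(0.8 * sr)) / sr
    tail = np.random.default_rng(1).standard_normal(t.size) * np.exp(-3.0 * t)
    ir = np.concatenate([pre, 0.05 * tail])
    return x + fftconvolve(x, ir)[: x.size]
```
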

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then there was the level of all the loop group specifics and chanting — the ramp up from zero to full volume — which we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.

There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.


Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.


Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots have the guy put in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of clean up in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.
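Binder mentions filtering the through-the-phone shots with EQ. In a DAW that's a band-limiting EQ on the dialogue chain; a toy offline version of the idea — a brick-wall FFT filter with assumed telephone-style cutoffs, not the actual plugin or settings he used — might look like:

```python
import numpy as np

def phone_eq(audio, sr, lo=300.0, hi=3400.0):
    """Crude brick-wall band-limit: zero everything outside a
    telephone-ish band. Cutoffs are illustrative assumptions."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(audio))

# A 50 Hz rumble is wiped out; a 1 kHz tone (speech range) passes through.
sr = 48000
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 50 * t)
tone = np.sin(2 * np.pi * 1000 * t)
```

A real mix EQ would use gentler filter slopes and run in real time, but the effect is the same: strip the lows and highs so the line reads as coming through a phone speaker.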

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hotwires the car and peels out, hitting a motorcycle and a mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From their range of audio restoration tools within RX to their measurement and visualization tools in Ozone to their creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY… they have shown time and time again that they know what audio post pros need.

iZotope breaks their products out into categories that are aimed at different levels of professionalism by providing Essential, Standard and Advanced tiers. This lowers the barrier of entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut down session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses the modules to make your track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking the sound of an instrument to get it to sound like itself. The philosophy behind the feature seems to be that the creative energy you would have spent tweaking can now be reserved for other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it will set the level of each track and then provide you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and you’re left with a much more manageable session. Depending on your workflow, the drudgery of getting your gain staging set up correctly might be an arduous, repetitive task that this tool streamlines and simplifies.
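iZotope hasn't published how Balance computes its trims, but the underlying idea — measure each track's level and suggest a gain that places it at some offset relative to the Focus track — can be sketched. This is a naive RMS-based approximation, not iZotope's machine-learning analysis; the offsets and the 6 dB default are invented for illustration:

```python
import numpy as np

def rms_db(audio):
    """RMS level in dB — a crude stand-in for perceptual loudness."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(audio))))

def balance_trims(tracks, focus, offsets_db, default_offset=6.0):
    """Suggest a trim gain (dB) per track so each one sits at an
    assumed offset below the Focus track. Purely illustrative."""
    target = rms_db(tracks[focus])
    trims = {}
    for name, audio in tracks.items():
        desired = target - offsets_db.get(name, default_offset)
        trims[name] = desired - rms_db(audio)
    trims[focus] = 0.0  # the Focus track is the reference
    return trims
```

Feeding it a loud guitar Focus and a quiet bass stem yields a positive trim for the bass; in Neutron the per-track measurement happens via the Relay plugin instead, and the analysis is far more sophisticated than a single RMS number.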

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since this is a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other post audio scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this process happens at the beginning of the mix, you are left with a session whose gain staging is already prepped, so you can start making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

Digital Arts expands team, adds Nutmeg Creative talent

Digital Arts, an independently owned New York-based post house, has added several former Nutmeg Creative talent and production staff members to its roster — senior producer Lauren Boyle, sound designer/mixers Brian Beatrice and Frank Verderosa, colorist Gary Scarpulla, finishing editor/technical engineer Mark Spano and director of production Brian Donnelly.

“Growth of talent, technology, and services has always been part of the long-term strategy for Digital Arts, and we’re fortunate to welcome some extraordinary new talent to our staff,” says Digital Arts owner Axel Ericson. “Whether it’s long-form content for film and television, or working with today’s leading agencies and brands creating dynamic content, we have the talent and technology to make all of our clients’ work engaging, and our enhanced services bring their creative vision to fruition.”

Brian Donnelly, Lauren Boyle and Mark Spano.

As part of this expansion, Digital Arts will unveil additional infrastructure featuring an ADR stage/mix room. The current facility boasts several state-of-the-art audio suites, a 4K finishing theater/mixing dubstage, four color/finishing suites and expansive editorial and production space, which is spread over four floors.

The former Nutmeg team has hit the ground running, working with their long-time ad agency, network, animation and film studio clients. Gary Scarpulla worked on color for HBO’s Veep and Los Espookys, while Frank Verderosa has been working with agency Ogilvy on several Ikea campaigns. Beatrice mixed spots for Tom Ford’s cosmetics line.

In addition, Digital Arts’ in-house theater/mixing stage has proven to be a valuable resource for some of the most popular TV productions, including recording recent commentary sessions for the legendary HBO series Game of Thrones and the final season of Veep.

Especially noteworthy is colorist Ericson’s and finishing editor Mark Spano’s collaboration with Oscar-winning directors Karim Amer and Jehane Noujaim to bring to fruition the Netflix documentary The Great Hack.

Digital Arts also recently expanded its offerings to include production services. The company has already delivered projects for agencies Area 23, FCB Health and TCA.

“Digital Arts’ existing infrastructure was ideally suited to leverage itself into end-to-end production,” Donnelly says. “Now we can deliver from shoot to post.”

Tools employed across post are Avid Pro Tools, D Control ES, S3 for audio post and Avid Media Composer, Adobe Premiere and Blackmagic Resolve for editing. Color grading is via Resolve.

Main Image: (L-R) Frank Verderosa, Brian Beatrice and Gary Scarpulla


Blackmagic: Resolve 16.1 in public beta, updates Pocket Cinema Camera

Blackmagic Design has announced DaVinci Resolve 16.1, an updated version of its edit, color, visual effects and audio post software that features updates to the new cut page, further speeding up the editing process.

With Resolve 16, introduced at NAB 2019, now in final release, the Resolve 16.1 public beta is now available for download from the Blackmagic Design website. This new public beta will help Blackmagic continue to develop new ideas while collaborating with users to ensure those ideas are refined for real-world workflows.

The Resolve 16.1 public beta features changes to the bin that now make it possible to place media in various folders and isolate clips from being used when viewing them in the source tape, sync bin or sync window. Clips will appear in all folders below the current level, and as users navigate around the levels in the bin, the source tape will reconfigure in real time. There’s even a menu for directly selecting folders in a user’s project.

Also new in this public beta is the smart indicator. The new cut page in DaVinci Resolve 16 introduced multiple smart features, which work by estimating where the editor wants to add an edit or transition and then applying it without the editor having to waste time placing exact in and out points. The software guesses what the editor wants to do and just does it — it adds the insert edit or transition to the edit point closest to where the editor has placed the CTI.

But a problem can arise in complex edits, where it is hard to know what the software would do and which edit it would place the effect or clip into. That’s the reason for the beta version’s new smart indicator. The smart indicator provides a small marker in the timeline so users get constant feedback and always know where DaVinci Resolve 16.1 will place edits and transitions. The new smart indicator constantly live-updates as the editor moves around the timeline.

One of the most common items requested by users was a faster way to cut clips in the timeline, so now DaVinci Resolve 16.1 includes a “cut clip” icon in the user interface. Clicking on it will slice the clips in the timeline at the CTI point.

Multiple changes have also been made to the new DaVinci Resolve Editor Keyboard, including a new adaptive scroll feature on the search dial, which automatically slows down scrolling when editors are hunting for an in point. The live trimming buttons have been renamed to match the functions in the edit page: trim in, trim out, transition duration, slip in and slip out. The function keys along the top of the keyboard are now used for various editing functions.

There are additional edit modes on the function keys, allowing users to access more types of edits directly from dedicated keys on the keyboard. There’s also a new transition window on the F4 key; pressing and rotating the search dial allows instant selection from all the transition types in DaVinci Resolve. Users who need quick picture-in-picture effects can use F5 and apply them instantly.

Sometimes when editing projects with tight deadlines, there is little time to keep replaying the edit to see where it drags. DaVinci Resolve 16.1 features something called a Boring Detector that highlights the timeline where any shot is too long and might be boring for viewers. The Boring Detector can also show jump cuts, where shots are too short. This tool allows editors to reconsider their edits and make changes. The Boring Detector is helpful when using the source tape. In that case, editors can perform many edits without playing the timeline, so the Boring Detector serves as an alternative live source of feedback.
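Blackmagic doesn't document how the Boring Detector makes its decisions, but the behavior described — flag shots that run too long as potentially boring and shots that run too short as jump cuts — reduces to duration thresholds on timeline clips. A minimal sketch of that idea, with invented thresholds:

```python
def flag_clips(durations, too_short=0.5, too_long=8.0):
    """Label each timeline clip by its duration in seconds.
    Thresholds are invented for illustration, not Blackmagic's values."""
    labels = []
    for d in durations:
        if d < too_short:
            labels.append("jump cut?")
        elif d > too_long:
            labels.append("boring?")
        else:
            labels.append("ok")
    return labels

# A 0.2 s cut reads as a jump cut; a 12 s shot gets flagged as boring.
print(flag_clips([0.2, 3.0, 12.0]))
```

Resolve presumably weighs more than raw duration, but even this naive pass shows why a live indicator helps when editing from the source tape without replaying the timeline.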

Another one of the most requested features of DaVinci Resolve 16.1 is the new sync bin. The sync bin is a digital assistant editor that constantly sorts through thousands of clips to find only what the editor needs and then displays them synced to the point in the timeline the editor is on. The sync bin will show the clips from all cameras on a shoot stacked by camera number. Also, the viewer transforms into a multi-viewer so users can see their options for clips that sync to the shot in the timeline. The sync bin uses date and timecode to find and sync clips, and by using metadata and locking cameras to time of day, users can save time in the edit.

According to Blackmagic, the sync bin changes how multi-camera editing can be completed. Editors can scroll off the end of the timeline and keep adding shots. When using the DaVinci Resolve Editor Keyboard, editors can hold the camera number and rotate the search dial to “live overwrite” the clip into the timeline, making editing faster.

The closeup edit feature has been enhanced in DaVinci Resolve 16.1. It now does face detection and analysis and will zoom the shot based on face positioning to ensure the person is nicely framed.

If pros are using shots from cameras without timecode, the new sync window lets them sort and sync clips from multiple cameras. The sync window supports sync by timecode and can also detect audio and sync clips by sound. These clips will display a sync icon in the media pool so editors can tell which clips are synced and ready for use. Manually syncing clips using the new sync window allows workflows such as multiple action cameras to use new features such as source overwrite editing and the new sync bin.

Blackmagic Pocket Cinema Camera
Besides releasing the DaVinci Resolve 16.1 public beta, Blackmagic also updated the Blackmagic Pocket Cinema Camera. Blackmagic not only upgraded the camera from 4K to 6K resolution, but it changed the mount to the much-used Canon EF style. Previous iterations of the Pocket Cinema Camera used a Micro 4/3 mount, but many users chose to purchase a Micro 4/3-to-Canon EF adapter, which easily runs over $500 new. Because of the mount change in the Pocket Cinema Camera 6K, users can avoid buying the adapter and — if they shoot with Canon EF — can use the same lenses.

Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1

S1

Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback, including switchable display from channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listen back and speaker sources/levels for Foley/ADR recording plus Dolby Atmos and other formats of immersive audio monitoring. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, with single-keypress access to editing plugins, writing session automation and other complex tasks. Adding joysticks, PEC/Direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation and project automation, and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and a numeric keypad. The Channel Strip Module (CSM) controls track levels, plugins and other parameters through eight channel faders, 32 top-lit knobs (four per channel) plus other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Depending on the number of Channel Strip Modules and other options, a customized S4 surface can be housed in a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame, scaling up to a 24-fader, five-foot base system with three CSMs, one MTM, one MAM and filler panels/plates.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Skywalker Sound’s audio post mix for Toy Story 4

By Jennifer Walden

Pixar’s first feature-length film, 1995’s Toy Story, was a game-changer for animated movies. There was no going back after that blasted onto screens and into the hearts of millions. Fast-forward 24 years to the franchise’s fourth installment — Toy Story 4 — and it’s plain to see that Pixar’s approach to animated fare hasn’t changed.

Visually, Toy Story 4 brings so much to the screen, with its near-photorealistic imagery, interesting camera angles and variations in depth of field. “It’s a cartoon, but not really. It’s a film,” says Skywalker Sound’s Oscar-winning re-recording mixer Michael Semanick, who handled the effects/music alongside re-recording mixer Nathan Nance on dialogue/Foley.

Nathan Nance

Here, Semanick and Nance talk about their approach to mixing Toy Story 4, how they use reverb and Foley to bring the characters to life, and how they used the Dolby Atmos surround field to make the animated world feel immersive. They also talk about mixing the stunning rain scene, the challenges of mixing the emotional carnival scenes near the end and mixing the Bo Peep and Woody reunion scene.

Is your approach to mixing an animated film different from how you’d approach the mix on a live-action film? Mix-wise, what are some things you do to make an animated world feel like a real place?
Nathan Nance: The approach to the mix isn’t different. No matter if it’s an animated movie or a live-action movie, we are interested in trying to complement the story and direct the viewer’s attention to whatever the director wants their attention to be on.

With animation, you’re starting with just the ADR, and the approach to the whole sound job is different because you have to pick and choose every single sound and really create those environments. Even with the dialogue, we’re creating spaces with reverb (or lack of reverb) and helping the emotions of the story in the mix. You might not have the same options in a live-action movie.

Michael Semanick

Michael Semanick: I don’t approach a film differently. Live action or animated, it comes down to storytelling. In today’s world, some of these live-action movies are like animated films. And the animated films are like live-action. I’m not sure which is which anymore.

Whether it’s live action or animation, the sound team is creating the environments. For live-action, they’re often shooting on a soundstage or they’re shooting on greenscreen, and the sound team creates those environments. For live-action films, they try to get the location to be as quiet as it can be to get the dialogue as clean as possible. So, the sound team is only working with dialogue and ADR.

It’s like an animation in that they need to recreate the entire environment. The production sound mixer is trying to capture the dialogue and not the extraneous sounds. The production sound mixer is there to capture the performance from the actors on that day at that time. Sometimes there are production effects, but the post sound team still preps the scene with sound effects, Foley and loop group. Then on the dub stage, we choose how much of that to put in.

For an animated film, they do the same thing. They prep a whole bunch of sounds and then on the dub stage we decide how busy we want the scene to be.

How do you use reverb to help define the spaces and make the animated world feel believable?
Semanick: Nathan really sets the tone when he’s doing the dialogue, defining how the environments and different spaces are going to sound. That works in combination with the background ambiences. It’s really the voice bouncing off objects that gives you the sense of largeness and depth of field. So reverb is really important in establishing the size of the room and also outdoors — how your voice slaps off a building versus how it slaps off of trees or mountains. Reverb is a really essential tool for creating the environments and spaces that you want to put your actors or characters in.

Nance: You can use reverb to try and make the spaces sound “real” — whatever that means for cinema. Or, you can use it to create something that’s more emotional or has a certain vibe. Reverb is really important for making the dry dialogue sound believable, especially in these Pixar films. They are all in on the environments they’ve created. They want it to sound real and really put the viewer there. But then, there are moments when we use reverb creatively to push the moment further and add to the emotional experience.

What are some other things you do mix-wise to help make this animated world feel believable?
Semanick: The addition of Foley helps ground a lot of the animation. Those natural sounds, like footsteps and movements, we take for granted — just walking down the street or sitting in a restaurant. Those become a huge part of these films. The Foley helps to ground the animation. It gives it life, something to hold onto.

Foley is a big part of making the animated world feel believable. You have Foley artists performing to the actual picture, and the way they put a cup down or how they come to a stop adds character to the sound. It can make it sound more human, more real. Really good Foley artists can become the character. They pick up on the nuances — like how the character drags their feet or puts down a cup. All those little things we take for granted but they are all part of our character. Maybe the way you hold a wine glass and set it down is different from how I would do it. So good Foley artists tune into that right away, and they’ll match it with their performance. They’ll put one edge of the cup down and then the other if that’s how the character does it. So Foley helps to ground a lot of the animation and the VFX to reality. It adds realism. Give it up for the Foley artists!

Nance: So many times the sounds that are in Foley are the ones we recognize and take for granted. You hear those little sounds and think, yeah, that’s exactly what that sounds like. It’s because the Foley artists perform it and these are sounds that you recognize from everyday life. That adds to the realism, like Michael said.

Mix-wise, it must have been pretty difficult to push the subtle sounds through a full mix, like the sounds of the little spork named Forky. What are some techniques and sound tools that help you to get these character sounds to cut through?
Semanick: Director Josh Cooley was very particular about the sounds Forky was going to make. Supervising sound editors Ren Klyce and Coya Elliott and their team went out and got a big palette of sounds for different things.

We weeded through them here with Josh and narrowed it down. Josh then kind of left it up to me. He said he just wanted to hear Forky when he needed to hear him and then not ever have to think about it. The problem with Forky is that if there’s too much sound for him then you’re constantly watching what he’s doing as opposed to listening to what he’s saying. I was very diligent about weeding things out a lot of the time and adding sounds in for the eye movements and other tiny, specific sounds. But there’s not much sound in there for him. It’s just the voice because often his sounds were getting in the way of the dialogue and being distracting. We were very diligent about choosing what to hear and not to hear. Josh was very particular about what those sounds should be. He had been working with Ren on those for a couple months.

In balancing a film (and particularly Toy Story 4 with so many characters and so much going on), you have to really pick and choose sounds. You don’t want to pull the audience away in a direction you don’t want. That was one of the main things for Forky — getting his sounds right.

The opening rain scene was stunning! What was your approach to mixing that scene? How did you use the Dolby Atmos surround field to enhance it?
Semanick: That was a tough scene to mix. There is a lot of rain coming down and the challenge was how to get clarity out of the scene and make sure the audience can follow what was happening. So the scene starts out with rain sounds, but during the action sequence there’s actually no rain in the track.

Amazingly, your human ears and your brain fill in that information. I establish the rain and then when the action starts I literally pull all of the rain out. But your mind puts the rain there still. You think you hear it but it’s actually not there. When the track gets quiet all of a sudden, I bring the rain back up so you never miss the rain. No one has ever said anything about not hearing the rain.

I love the sound of rain; don’t get me wrong. I love the sound of rain on windows, rain on cars, rain on metals… Ren and his team did such an amazing job with that. We had a huge palette of rain. But there’s a certain point in the scene where we need the audience to focus on all of the action that’s happening, what’s really going on.

There’s Woody and Slinky Dog being stretched and RC in the gutter, and all this. So when I put all of the sounds up there you couldn’t make out anything. It was confusing. So I pulled all of the rain out. Then we put in all of the specific sounds. We made sure all of the dialogue, music and sounds worked together so the audience could follow the action. Then I went back through and added the rain back in. When we didn’t need it, I drifted it out. And when we needed it, I brought it back in. It took a lot of time to do that and some careful balancing to make it work.
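In DAW terms, what Semanick describes is volume automation on the rain bed: establish it, pull it to silence under the action, then ease it back in before anyone notices it was gone. A toy envelope sketch of that move (the breakpoint times and the flat placeholder "bed" are invented):

```python
import numpy as np

def apply_automation(bed, sr, breakpoints):
    """Multiply an ambience bed by a linearly interpolated gain envelope.
    breakpoints: (time_seconds, gain) pairs."""
    t = np.arange(len(bed)) / sr
    times = [bp[0] for bp in breakpoints]
    gains = [bp[1] for bp in breakpoints]
    return bed * np.interp(t, times, gains)

# Establish the rain, duck it out for the action, then bring it back.
sr = 1000                  # toy sample rate to keep the arrays small
rain = np.ones(10 * sr)    # stand-in for a 10-second rain bed
env = [(0, 1.0), (2, 1.0), (3, 0.0), (7, 0.0), (9, 1.0)]
ducked = apply_automation(rain, sr, env)
```

The psychoacoustic trick he relies on — listeners "filling in" the rain that isn't there — is exactly why the fade-out can be that drastic without anyone noticing.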

That was a fun thing to do, but it took time. We’re working on a movie that kids and adults are going to see. We didn’t want to make it too loud. We wanted to make it comfortable. But it’s an action scene, so you want it to be exciting. And it had to work with the music. We were very careful about how loud we made things. When things started to hurt, we pulled it all back. We were diligent about keeping control of the volume and getting those balances was very difficult. We don’t want to make it too quiet, but it’s exciting. If we make it too loud then that pushes you away and you don’t pay attention.

That scene was fun in Dolby Atmos. I had the rain all around the theater, in the ceiling. But it does go away and comes back in when needed. It was a fun thing to do.

Did you have a favorite scene for mixing in Atmos?
Semanick: One of my favorite scenes for Atmos was when Bo Peep takes Woody to the top of the carousel and she asks why Woody would ever want to stay with one kid when you can have all of this. I do a subtle thing with the music — there are a few times in the film where I do this — where I pull the music forward as they’re climbing to the top of the carousel. There’s no music in the surrounds or the tops. I pull it so far forward that it’s almost mono.

Then, as they pop up from atop the carousel and the camera sweeps around, I let the music open up. I bloom it into the surrounds and into the overheads. I bloom it really hard with the camera moves. If you’re paying attention, you will feel the music sweep around you. You’re just supposed to feel it, not to really know that it happened. That’s one of the mixing techniques that I learned over the years. The picture editor, Axel Geddes, would ask me to make it “magical” and put more “magic” into it. I started to interpret that as: fill up the surrounds more.

One of the best parts of Atmos is that you have surrounds that are the same as the front speakers so the sound doesn’t fall off. It’s more full-range because it has bass management toward the back. That helps me, mix-wise, to really bring the sound into the room and fill the room out when I need to do that. There are a few scenes like that and Nathan would look at me funny and say, “Wow, I really hear it.”

We’re so concentrated on the sound. I’m just hoping that the audience will feel it wrap around them and give them a good sense of warmth. I’m trying to help push the emotional content. The music was so good. Randy Newman did a great job on a lot of the music. It really helped the story and I wanted to help that be the best it could be emotionally. It was already there, but I just wanted to give that little extra. Pulling the music into the front and then pushing out into the whole theater gave the music an emotional edge.

Nance: There are a couple of fun Atmos moments for effects. When they’re in the dark closet and the sound is happening all around. Also, when Woody wakes up from his voice box removal surgery. Michael was bringing the sewing machine right up into the overheads. We have the pull string floating around the room and into the ceiling. Those two moments were a pretty cool use of the point-source and the enveloping capability of Atmos.

What was the most challenging scene to mix? Why?
Nance: The whole scene with the lost girl and Gabby all the way through the toys’ goodbyes. That was two full sections, but we get so quiet even though there’s a huge carnival happening. It was a huge cheat. It took a lot of work to get into these quiet, delicate moments where we take everything out, all the backgrounds, and it’s very simple. Michael pulled the music forward in some of those spots and the whole mix becomes very simple and quiet. You’re almost holding your breath in these different moments with the goodbyes. Sometimes we think of the really loud, bombastic scenes as being tough. And they were! The escape from the antique store took quite a lot of work to balance and shape. But I think the quiet, delicate scenes take more work because they take more shaping.

Semanick: I agree. Those areas were very difficult. There was a whole carnival going on and I had to strip it all down. I had my moments. When they’re together above the carnival, it looks beautiful up there. The carnival rides behind them are blurry and we didn’t need to hear the sounds. We heard them before. We know what they sound like. Plus, that moment was with the toys. We were just with them. The whole world has dissolved, and the sound of the world too. You see the carnival back there, but you’re not really paying attention to it. You’re paying attention to Woody and Bo Peep or Gabby and the lost girl.

Another interesting scene was when Woody and Forky first walk through the antique store. It was interesting how the tones in each place change and the reverbs on the voices change in every single room. Those scenes were interesting. The challenge was how to establish the antique store. It’s very quiet, so we were very specific on each cut. Where are they? What’s around them? How high is the camera sitting? You start looking closely at the scene. I was able to do things with Atmos, put things in the ceiling.

What scene went through the most evolution mix-wise? What were some of the different ways you tried mixing it? Ultimately, why did you go with the way it’s mixed in the final?
Semanick: There’s a scene when Woody and Bo Peep reunite on the playground. A little girl picks up Woody and she has Bo Peep in her hands. They meet again for the first time. That scene went through changes musically and dialogue-wise. What do we hear? How much of the girl do we hear before we see Bo Peep and Woody looking at each other? We tried several different ways. There were many opinions that came in on that. When does the music bloom? When does it fill the room out? Is the score quite right? They recut the score. They had a different version.

That scene went through quite a bit of ups and downs. We weren’t sure which way to go. Ultimately, Josh was happy with it, and it plays well.

There was another version of Randy’s score that I liked. But, it’s not about what I like. It’s about how the overall room feels — if everybody feels like it’s the best that we can do. If that’s yes, then that’s the way it goes. I’ll always speak up if I have ideas. I’ll say, “Think about this. Think about that.”

That scene went through some changes, and I’m still on the fence. It works great, but I know there’s another version of the music that I preferred. I’ll just have to live with that.

Nance: We just kept trying things out on that scene until we had it feeling good, like it was hitting the right beats. We had to figure out what the timing was, what would have the most emotional impact. That’s why we tried out so many different versions.

Semanick: That’s a big moment in the film. It’s what starts the back half of the film. Woody gets reacquainted with Bo Peep and then we’re off to the races.

What console did you mix Toy Story 4 on and why?
Semanick: We both mixed on the Neve DFC. It’s my console of choice. I love the console; I love the way it sounds. I love that it has separate automation. There’s the editor’s automation that they did. I can change my automation and that doesn’t affect their automation. It’s the best of both worlds. It runs really smoothly. It’s one of the best sounding consoles around.

Nance: I really enjoy working on the Neve DFC. It’s my console of choice when there’s the option.

Semanick: There are a lot of different consoles and control surfaces you can use now, but I’m used to the DFC. I can really play the console as a musical instrument. It’s like a performance. I can perform these balances. I can grab knobs and change EQ or add reverb and pull things back. It’s like a performance and that console seems the most reliable one for me. I know it really well. It helps when you know your instrument.

Any final thoughts you’d like to share on mixing Toy Story 4?
Semanick: With these Pixar films, I get to benefit from the great storytelling and what they’ve done visually. All the aspects of these films Pixar does — the cinematography down to the lighting down to the character development, the costumes and set design — they spent so many hours debating how things are going to look and the design.

So, on the sound side, it’s about matching what they’ve done. How can I help support it? It’s amazing to me how much time they spend on these films. It’s hardcore filmmaking. It’s a cartoon, but not really. It’s a film, and it’s a really good film. You look at all the aspects of it, like how the camera moves. It’s not a real camera but you’re watching through the lens, seeing the camera angles, where and how they place the camera. They have to debate all that.

One of the hardest scenes for them must have been when Bo Peep and Woody are in the antique store and they turn and look at all the chandeliers. It was gorgeous, a beautiful shot. I bloom the music out there, around the theater. That was a delicate scene. When you look at the filmmaking they’re doing there and the reflections of the lights, you know they’re good. They’re really good.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Audio houses Squeak E. Clean and Nylon Studios have merged

Music and sound studios Squeak E. Clean and Nylon Studios have merged to form Squeak E. Clean Studios. This union brings together a diverse roster of artists offering musical talent and exceptional audio production to agencies and brands. The company combines leadership from both former houses, with Nylon’s Hamish Macdonald serving as managing director and Nylon’s Simon Lister and Squeak E. Clean’s Sam Spiegel overseeing the company’s creative vision as co-executive creative directors. Nylon’s founding partner, David Gaddie, will become strategy partner.

The new Squeak E. Clean Studios has absorbed and operates all the existing studios of the former companies in Los Angeles, New York, Chicago, Austin, Sydney and Melbourne. Clients can now access a full range of services in every studio, including original composition, sound design and mix, music licensing, artist partnerships, experiential and spatial sound and sonic branding. Clients will also be able to license tracks from a vast, consolidated music catalog.

New York-based EP Christina Carlo is transferring to the West Coast to lead the Los Angeles studio alongside senior producer Amanda Patterson. Deb Oh is executive producer of the New York studio, with Cindy Chao as head of sales. Squeak E. Clean Studios’ Sydney studio is led by executive creative producer Karla Henwood, Ceri Davies is EP of the Melbourne studio, and Jocelyn Brown is leading the Chicago location. The company is also a strong supporter of the Free the Bid initiative, with three full-time female staff composers already on the roster.

“I always admired the ‘culture changing’ work that Squeak E. Clean Productions crafted, like the Adidas Hello Tomorrow spot with Karen O and Spike Jonze’s Kenzo World with Ape Drums (featuring Assassin),” says Lister. “These are truly the kind of jobs that are not just famous in advertising, but are part of our popular culture.”

“It’s exciting to be able to combine the revolutionary creativity of Squeak E. Clean with the outstanding post, creative music and exceptional client service that Nylon Studios has always offered at the highest level. We love what we do, and this collaboration is going to be an amazing opportunity for all of our artists and clients,” adds Spiegel. “As a combined force, we will make music and sound that people love.”

Main Image: (L-R) Hamish Macdonald, Simon Lister, Sam Spiegel
Image Credit: Shruti Ashok

 

KRK intros audio tools app to help Rokit G4 monitor setup

KRK Systems has introduced the KRK Audio Tools App for iOS and Android. This free suite of professional studio tools includes five professional analysis-based components that work with any monitor setup, and one tool (EQ Recommendation) that helps acclimate the new KRK Rokit G4 monitors to their individual acoustic environment.

In addition to the EQ Recommendation tool, the app also includes a Spectrum Real Time Analyzer (RTA), Level Meter, Delay and Polarity Analyzers, as well as a Monitor Align tool that helps users set their monitor positioning more accurately to their listening area. Within the app is a sound generator giving the user sound analysis options of sine, continuous sine sweep, white noise and pink noise—all of which can help the analysis process in different conditions.
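The four generator signals the app offers are standard acoustic test signals. A minimal sketch of how each can be synthesized (illustrative only, not KRK's implementation):

```python
import numpy as np

def test_signals(duration=1.0, sr=48000, f0=20.0, f1=20000.0):
    """Generate the four analysis signals the app provides:
    sine, continuous sine sweep, white noise and pink noise."""
    n = int(duration * sr)
    t = np.arange(n) / sr

    sine = np.sin(2 * np.pi * 1000.0 * t)            # 1 kHz reference tone

    # Logarithmic sweep from f0 to f1; phase is the integral of frequency.
    k = np.log(f1 / f0)
    sweep = np.sin(2 * np.pi * f0 * duration / k * (np.exp(t / duration * k) - 1))

    white = np.random.randn(n)                       # flat spectrum

    # Pink noise via spectral shaping: scale a white spectrum by 1/sqrt(f),
    # giving the -3 dB/octave rolloff that sounds equally loud per octave.
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    spectrum[1:] /= np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum, n)
    pink /= np.max(np.abs(pink))                     # normalize to full scale

    return sine, sweep, white, pink
```

Pink noise is the usual choice for room analysis because its equal energy per octave matches how an RTA buckets the spectrum.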

“We wanted to build something game-changing for the new Rokit G4 line that enables our users to achieve better final mixes overall,” explains Rich Renken, product manager for the pro audio division of Gibson Brands, which owns KRK. “In terms of critical listening, the G4 monitors are completely different and a major upgrade from the previous G3 line. Our intentions with the EQ Recommendation tool are to suggest a flatter condition and help get the user to a better starting point. Ultimately, it still comes down to preference and using your musical ear, but it’s certainly great to have this feature available along with the others in the app.”

Five of the app tools work with any monitor setup. This includes the Level Meter, which assists with monitor level calibration to ensure all monitors are at the same dB level, and the Delay Analysis feature, which calculates the travel time from each monitor to the user’s ears. The Polarity function verifies that monitors are wired correctly, minimizing the bass loss and smeared stereo imaging that result from monitors being out of phase, while the Spectrum RTA and Sound Generator are made for finding nuances in any environment.
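Delay analysis of this kind reduces to time-of-flight arithmetic: sound travels at roughly 343 m/s at room temperature, so each monitor's distance maps directly to an arrival time. A hypothetical sketch (the app's actual method is not documented here):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def monitor_delay_ms(distance_m):
    """Time of flight from a monitor to the listening position."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def alignment_delay_ms(distances_m):
    """Delay to add to each closer monitor so every arrival
    coincides with the farthest monitor's arrival."""
    worst = max(distances_m)
    return [(worst - d) / SPEED_OF_SOUND * 1000.0 for d in distances_m]
```

A monitor 1.2 m away arrives in about 3.5 ms; delaying the closer monitors by the per-monitor differences time-aligns the whole setup.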

Also included is a Monitor Alignment feature, which is used to determine the best placement of multiple monitors positioned near one another. This is accomplished by placing a smart device on each monitor in turn and rotating it to the correct angle. A sixth tool, exclusive to Rokit G4 users, is the EQ Recommendation tool, which helps acclimate monitors to an environment by analyzing the app-generated pink noise and suggesting the best EQ preset, which is then set manually on the back of the G4 monitors.

Creating and mixing authentic sounds for HBO’s Deadwood movie

By Jennifer Walden

HBO’s award-winning series Deadwood might have aired its final episode 13 years ago, but it’s recently found new life as a movie. Set in 1889 — a decade after the series finale — Deadwood: The Movie picks up the threads of many of the main characters’ stories and weaves them together as the town of Deadwood celebrates the statehood of South Dakota.

Deadwood: The Movie

The Deadwood: The Movie sound team.

The film, which aired on HBO and is available on Amazon, picked up eight 2019 Emmy nominations, including for sound editing, sound mixing and best television movie.

Series creator David Milch has returned as writer on the film. So has director Daniel Minahan, who helmed several episodes of the series. The film’s cast is populated by returning members, as is much of the crew. On the sound side, there are freelance production sound mixer Geoffrey Patterson; 424 Post’s sound designer, Benjamin Cook; NBCUniversal StudioPost’s re-recording mixer, William Freesh; and Mind Meld Arts’ music editor, Micha Liberman. “Series composers Reinhold Heil and Johnny Klimek — who haven’t been a composing team in many years — have reunited just to do this film. A lot of people came back for this opportunity. Who wouldn’t want to go back to Deadwood?” says Liberman.

Freelance supervising sound editor Mandell Winter adds, “The loop group used on the series was also used on the film. It was like a reunion. People came out of retirement to do this. The richness of voices they brought to the stage was amazing. We shot two days of group for the film, covering a lot of material in that limited time to populate Deadwood.”

Deadwood (the film and series) was shot on a dedicated film ranch called Melody Ranch Motion Picture Studio in Newhall, California. The streets, buildings and “districts” are consistently laid out the same way. This allowed the sound team to use a map of the town to orient sounds to match each specific location and direction that the camera is facing.

For example, there’s a scene in which the town bell is ringing. As the picture cuts to different locations, the ringing sound is panned to show where the bell is in relation to that location on screen. “We did that for everything,” says co-supervising sound editor Daniel Colman, who along with Freesh and re-recording mixer John Cook, works at NBCUniversal StudioPost. “You hear the sounds of the blacksmith’s place coming from where it would be.”

“Or, if you’re close to the Chinese section of the town, then you hear that. If you were near the saloons, that’s what you hear. They all had different sounds that were pulled forward from the series into the film,” adds re-recording mixer Freesh.
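The location-based panning Colman and Freesh describe is conventionally implemented with an equal-power pan law, which keeps perceived loudness constant as a source moves across the stereo field. A minimal stereo sketch for illustration, not the team's actual console automation:

```python
import math

def constant_power_pan(sample, pan):
    """pan in [-1.0, 1.0]: -1 = hard left, 0 = center, +1 = hard right.
    Equal-power law: left and right gains are cos/sin of the same angle,
    so left^2 + right^2 == 1 everywhere across the arc."""
    angle = (pan + 1.0) * math.pi / 4.0   # map pan to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)
```

For the town-bell example, the pan value would be derived from where the bell sits on the town map relative to the camera's position and facing in each shot.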

Many of the exterior and interior sounds on set were captured by Benjamin Cook, who was sound effects editor on the original Deadwood series. Since it’s a practical location, they had real horses and carriages that Cook recorded. He captured every door and many of the props. Colman says, “We weren’t guessing at what something sounded like; we were putting in the actual sounds.”

The street sounds were an active part of the ambience in the series, both day and night. There were numerous extras playing vendors plying their wares and practicing their crafts. Inside the saloons and out in front of them, patrons talked and laughed. Their voices — performed by the loop group in post — helped to bring Deadwood alive. “The loop group we had was more than just sound effects. We had to populate the town with people,” says Winter, who scripted lines for the loopers because they were played more prominently in the mix than what you’d typically hear. “Having the group play so far forward in a show is very rare. It had to make sense and feel timely and not modern.”

In the movie, the street ambience isn’t as strong a sonic component. “The town had calmed down a little bit as it’s going about its business. It’s not quite as bustling as it was in the series. So that left room for a different approach,” says Freesh.

The attenuation of street ambience was conducive to the cinematic approach that director Minahan wanted to take on Deadwood: The Movie. He used music to help the film feel bigger and more dramatic than the series, notes Liberman. Re-recording mixer John Cook adds, “We experimented a lot with music cues. We saw scenes take on different qualities, depending on whether the music was in or out. We worked hard with Dan [Minahan] to end up with the appropriate amount of music in the film.”

Minahan even introduced music on set by way of a piano player inside the Gem Saloon. Production sound mixer Patterson says, “Dan was very active on the set in creating a mood with that music for everyone that was there. It was part and parcel of the place at that time.”

Authenticity was a major driving force behind Deadwood’s aesthetics. Each location on set was carefully dressed with era-specific props, and the characters were dressed with equal care, right down to their accessories, tools and weapons. “The sound of Seth Bullock’s gun is an actual 1889 Remington revolver, and Calamity Jane’s gun is an 1860’s Colt Army cavalry gun. We’ve made every detail as real and authentic as possible, including the train whistle that opens the film. I wasn’t going to just put in any train whistle. It’s the 1880s Black Hills steam engine that actually went through Deadwood,” reports Colman.

The set’s wooden structures and elevated boardwalk that runs in front of the establishments in the heart of town lent an authentic character to the production sound. The creaky wooden doors and thumpiness of footsteps across the raised wooden floors are natural sounds the audience would expect to hear from that environment. “The set for Deadwood was practical and beautiful and amazing. You want to make sure that you preserve that realness and let the 1800s noises come through. You don’t want to over sterilize the tracks. You want them to feel organic,” says Patterson.

Freesh adds, “These places were creaky and noisy. Wind whistled through the windows. You just embrace it. You enhance it. That was part of the original series sound, and it followed through in the movie as well.”

The location was challenging due to its proximity to real-world civilization and all of our modern-day sonic intrusions, like traffic, airplanes and landscaping equipment from a nearby neighborhood. Those sounds have no place in the 1880s world of Deadwood, but “if we always waited for the moment to be perfect, we would never make a day’s work,” says Patterson. “My mantra was always to protect every precious word of David Milch’s script and to preserve the performances of that incredible cast.”

In the end, the modern-day noises at the location weren’t enough to require excessive ADR. John Cook says, “Geoffrey [Patterson] did a great job of capturing the dialogue. Then, between the choices the picture editors made for different takes and the work that Mandell [Winter] did, there were only one or two scenes in the whole movie that required extra attention for dialogue.”

Winter adds, “Even denoising the tracks, I didn’t take much out. The tracks sounded really good when they got to us. I just used iZotope RX 7 and did our normal pass with it.”

Any fan of Deadwood knows just how important dialogue clarity is since the show’s writing is like Shakespeare for the American West — with prolific profanity, of course. The word choices and their flow aren’t standard TV script fare. To help each word come through clearly, Winter notes they often cut in both the boom and lav mic tracks. This created nice, rich dialogue for John Cook to mix.

On the stage, John Cook used the FabFilter Pro-Q 2 to work each syllable, making sure the dialogue sounded bright and punchy and not too muddy or tubby. “I wanted the audience to hear every word without losing the dynamics of a given monologue or delivery. I wanted to maintain the dynamics, but make sure that the quieter moments were just as intelligible as the louder moments,” he says.
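Parametric EQ bands like those in Pro-Q 2 are conventionally built from peaking biquad filters. A minimal sketch using the widely used RBJ audio-EQ cookbook formulas (illustrative only, not FabFilter's implementation):

```python
import math

def peaking_biquad(sr, freq, gain_db, q):
    """RBJ-cookbook peaking-EQ coefficients (b0, b1, b2, a1, a2),
    normalized so a0 = 1. Boost gain_db > 0, cut gain_db < 0."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * freq / sr
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def apply_biquad(samples, coeffs):
    """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
                             - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Cutting a few dB with a band centered around 250-400 Hz is the classic move against the "muddy or tubby" quality the mixer mentions.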

In the film, several main characters experience flashback moments in which they remember events from the series. For example, Al Swearengen (Ian McShane) recalls the death of Jen (Jennifer Lutheran) from the Season 3 finale. These flashbacks — or hauntings, as the post team refers to them — went through several iterations before the team decided on the most effective way to play each one. “We experimented with how to treat them. Do we go into the actor’s head and become completely immersed in the past? Or, do we stay in the present — wherever we are — and give it a slight treatment? Or, should there not be any sounds in the haunting? In the end, we decided they weren’t all going to be handled the same,” says Freesh.

Before coming together for the final mix on Mix 6 at NBCUniversal StudioPost on the Universal Studios Lot in Los Angeles, John Cook and Freesh pre-dubbed Deadwood: The Movie in separate rooms as they’d do on a typical film — with Freesh pre-dubbing the backgrounds, effects, and Foley while Cook pre-dubbed the dialogue and music.

The pre-dubbing process gave Freesh and John Cook time to get the tracks into great shape before meeting up for the final mix. Freesh concludes, “We were able to, with all the people involved, listen to the film in real good condition from the first pass down and make intelligent decisions based on what we were hearing. It really made a big difference in making this feel like Deadwood.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Creating Foley for FX’s Fosse/Verdon

Alchemy Post Sound created Foley for Fosse/Verdon, FX’s miniseries about choreographer Bob Fosse (Sam Rockwell) and his collaborator and wife, the singer/dancer Gwen Verdon (Michelle Williams). Working under the direction of supervising sound editors Daniel Timmons and Tony Volante, Foley artist Leslie Bloome and his team performed and recorded hundreds of custom sound effects to support the show’s dance sequences and add realistic ambience to its historic settings.

Spanning five decades, Fosse/Verdon focuses on the romantic and creative partnership between Bob Fosse and Gwen Verdon. The former was a visionary filmmaker and one of the theater’s most influential choreographers and directors, while the latter was one of the greatest Broadway dancers of all time.

Given the subject matter, it’s hardly surprising that post production sound was a crucial element in the series. For its many musical scenes, Timmons and Volante were tasked with conjuring intricate sound beds to match the choreography and meld seamlessly with the score. They also created dense soundscapes to back the very distinctive environments of film sets and Broadway stages, as well as a myriad of other exterior and interior locations.

For Timmons, the project’s mix of music and drama posed significant creative challenges but also a unique opportunity. “I grew up in upstate New York and originally hoped to work in live sound, potentially on Broadway,” he recalls. “With this show, I got to work with artists who perform in that world at the highest level. It was not so much a television show as a blend of Broadway music, Broadway acting and television. It was fun to collaborate with people who were working at the top of their game.”

The crew drew on an incredible mix of sources in assembling the sound. Timmons notes that to recreate Fosse’s hacking cough (a symptom of his overuse of prescription medicine), they pored over audio stems from the classic 1979 film All That Jazz. “Roy Scheider, who played Bob Fosse’s alter ego in the film, was unable to cough like him, so Bob went into a recording studio and did some of the coughing himself,” Timmons says. “We ended up using those old recordings along with ADR of Sam Rockwell. When Bob’s health starts to go south, some of the coughing you hear is actually him. Maybe I’m superstitious, but for me it helped to capture his identity. I felt like the spirit of Bob Fosse was there on the set.”

A large portion of the post sound effects were created by Alchemy Post Sound. Most notably, Foley artists meticulously reproduced the footsteps of dancers. Foley tap dancing can be heard throughout the series, not only in musical sequences, but also in certain transitions. “Bob Fosse got his start as a tap dancer, so we used tap sounds as a motif,” explains Timmons. “You hear them when we go into and out of flashbacks and interior monologues.” Along with Bloome, Alchemy’s team included Foley artist Joanna Fang, Foley mixers Ryan Collison and Nick Seaman, and Foley assistant Laura Heinzinger.

Ironically, Alchemy had to avoid delivering sounds that were “too perfect.” Fang points out that scenes depicting musical performances from films were meant to represent the production of those scenes rather than the final product. “We were careful to include natural background sounds that would have been edited out before the film was delivered to theaters,” she explains, adding that those scenes also required Foley to match the dancers’ body motion and costuming. “We spent a lot of time watching old footage of Bob Fosse talking about his work, and how conscious he was not just of the dancers’ footwork, but their shuffling and body language. That’s part of what made his art unique.”

Foley production was unusually collaborative. Alchemy’s team maintained a regular dialogue with the sound editors and were continually exchanging and refining sound elements. “We knew going into the series that we needed to bring out the magic in the dance sequences,” recalls production Foley editor Jonathan Fuhrer. “I spoke with Alchemy every day. I talked with Ryan and Nick about the tonalities we were aiming for and how they would play in the mix. Leslie and Joanna had so many interesting ideas and approaches; I was ceaselessly amazed by the thought they put into performances, props, shoes and surfaces.”

Alchemy also worked hard to achieve realism in creating sounds for non-musical scenes. That included tracking down props to match the series’ different time periods. For a scene set in a film editing room in the 1950s, the crew located a 70-year-old Steenbeck flatbed editor to capture its unique sounds. As musical sequences involved more than tap dancing, the crew assembled a collection of hundreds of pairs of shoes to match the footwear worn by individual performers in specific scenes.

Some sounds undergo subtle changes over the course of the series relative to the passage of time. “Bob Fosse struggled with addictions and he is often seen taking anti-depression medication,” notes Seaman. “In early scenes, we recorded pills in a glass vial, but for scenes in later decades, we switched to plastic.”

Such subtleties add richness to the soundtrack and help cement the character of the era, says Timmons. “Alchemy fulfilled every request we made, no matter how far-fetched,” he recalls. “The number of shoes that they used was incredible. Broadway performers tend to wear shoes with softer soles during rehearsals and shoes with harder soles when they get close to the show. The harder soles are more strenuous. So the Foley team was always careful to choose the right shoes depending on the point in rehearsal depicted in the scene. That’s accuracy.”

The extra effort also resulted in Foley that blended easily with other sound elements, dialogue and music. “I like Alchemy’s work because it has a real, natural and open sound; nothing sounds augmented,” concludes Timmons. “It sounds like the room. It enhances the story even if the audience doesn’t realize it’s there. That’s good Foley.”

Alchemy used Neumann KMR 81 and U 87 mics, Millennia mic pres, Apogee converters, and C24 mixer into Avid Pro Tools.

Steinberg’s SpectraLayers Pro 6: visual audio editing with ARA support

Steinberg’s SpectraLayers Pro 6 audio editing software is now available. First distributed by Sony Creative Software and then by Magix Software, the developers behind SpectraLayers have joined forces with Steinberg to release its sixth iteration.

Unlike most audio editing tools, SpectraLayers offers a visual approach to audio editing, allowing users to visualize audio in the spectral domain (in 2D and 3D) and to manipulate its spectral data in many different ways. While many dedicated audio pros typically edit with their ears, this offering targets those who are more comfortable with visuals leading their editing decisions.

With its 25 advanced tools, SpectraLayers Pro 6 provides precision-editing within the spectral domain, comparable with the editing capabilities applied in high-performance photo editing software: modification, selection, measurement and drawing. Think Adobe Photoshop for audio editing.
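The "image" a spectral editor paints is a short-time Fourier transform: the signal is chopped into windowed frames and each frame's magnitude spectrum becomes one column of pixels. A minimal sketch of that view (not Steinberg's implementation):

```python
import numpy as np

def spectrogram_db(samples, sr, win=1024, hop=256):
    """Magnitude STFT in dB: rows are frequency bins, columns are
    time frames. Returns (db, freqs) where freqs labels the rows."""
    window = np.hanning(win)   # taper each frame to reduce spectral leakage
    frames = [np.abs(np.fft.rfft(samples[i:i + win] * window))
              for i in range(0, len(samples) - win + 1, hop)]
    mag = np.array(frames).T                   # shape: (win//2 + 1, n_frames)
    freqs = np.fft.rfftfreq(win, 1.0 / sr)
    return 20 * np.log10(mag + 1e-12), freqs   # dB; epsilon avoids log(0)
```

Editing in this domain means modifying selected time-frequency cells and resynthesizing with an inverse STFT, which is what lets a tool erase, say, a whistle without touching the dialogue beneath it.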

The features newly introduced in SpectraLayers Pro 6 include ARA 2 support; in addition to the standalone application, Version 6 offers an ARA plug-in that integrates seamlessly into any ARA 2-compatible DAW, such as Nuendo and Cubase, where it can be used as a native editor. Fades along the selection border are one of SpectraLayers’ innovative features, and Pro 6 now makes fade masks visible and lets users choose from the many available fade types.

SpectraLayers’ advanced selection engine now features nine revamped selection tools — including the new Transient Selector — making selections more flexible. The new Move tool helps users transform audio intuitively: grab layers to activate them, then move or scale them. SpectraLayers Pro 6 also provides external editor integration, allowing users to hand any selection off to other editing software for processing.

“This new version of SpectraLayers offers a refined and more intuitive user interface inspired by picture editors and a new selection system combining multiple fade masks, bringing spectral editing and remixing to a whole new level. We’re also excited by the possibilities unlocked by the new ARA connection between SpectraLayers, Cubase and Nuendo, bringing spectral mixing and editing right within your DAW,” says Robin Lobel, creator of SpectraLayers.

The user interface of SpectraLayers Pro 6 has been completely redesigned, building on the program’s image-editor heritage. The menus have been reworked and the panels are collapsible; the Layers panel is customizable; and users can now refer to comprehensive tool tip documentation and a new user manual.

The full retail version of SpectraLayers Pro 6 is available as a download through the Steinberg Online Shop at the suggested retail price of $399.99, together with various downloadable updates from previous versions.

Behind the Title: Cinematic Media head of sound Martin Hernández

This audio post pro’s favorite part of the job is the start of a project — having a conversation with the producer and the director. “It’s exciting, like any new relationship,” he says.

Name: Martin Hernández

Job Title: Supervising Sound Editor

Company: Mexico City’s Cinematic Media

Can you describe Cinematic Media and your role there?
I lead a new sound post department at Cinematic Media, Mexico’s largest post facility focused on television and cinema. We take production sound through the full post process: effects, backgrounds, music editing… the whole thing. We finish the sound on our mix stages.

What would surprise people most about what you do?
We want the sound to go unnoticed. The viewer shouldn’t be aware that something has been added or is unnatural. If the viewer is distracted from the story by the sound, it’s a lousy job. It’s like an actor whose performance draws attention to himself. That’s bad acting. The same applies to every aspect of filmmaking, including sound. Sound needs to help the narrative in a subjective and quiet way. The sound should be unnoticed… but still eloquent. When done properly, it’s magical.

Hernández has been working on Easy for Netflix.

What’s your favorite part of the job?
Entering the project for the first time and having a conversation with the team: the producer and the director. It’s exciting, like any new relationship. It’s beautiful. Even if you’re working with people you’ve worked with before, the project is newborn.

My second favorite part is the start of sound production, when I have a picture but the sound is a blank page. We must consider what to add. What will work? What won’t? How much is enough or too much? It’s a lot like cooking. The dish might need more of this spice and a little less of that. You work with your ingredients, apply your personal taste and find the right flavor. I enjoy cooking sound.

What’s your least favorite part of the job?
Me.

What do you mean?
I am very hard on myself. I only see my shortcomings, which are, to tell you the truth, many. I see my limitations very clearly. In my perception of things, it is very hard to get where I want to go. Often you fail, but every once in a while, a few things actually work. That’s why I’m so stubborn. I know I am going to have a lot of misses, so I do more than expected. I will shoot three or four times, hoping to hit the mark once or twice. It’s very difficult for me to work with me.

What is your most productive time of the day?
In the morning. I’m a morning person. I work from my own place, very early, like 5:30am. I wake up thinking about things that I left behind in the session. It’s useless to remain in bed, so I go to my studio and start working on these ideas. It’s amazing how much you can accomplish between 6am and 9am. You have no distractions. No one’s calling. No emails. Nothing. I am very happy working in the mornings.

If you didn’t have this job, what would you be doing?
That’s a tough question! I don’t know anything else. Probably, I would cook. I’d go to a restaurant and offer myself as an intern in the kitchen.

For most people I know, their career is not something they’ve chosen; it was embedded in them when they were born. It’s a matter of realizing what’s there inside you and embracing it. I never, in my wildest dreams, expected to be doing this work.

When I was young, I enjoyed watching films, going to the movies, listening to music. My earliest childhood memories are sound memories, but I never thought that would be my work. It happened by accident. Actually, it was one accident after another. I found myself working with sound as a hobby. I really liked it, so I embraced it. My hobby then became my job.

So you knew early on that audio would be your path?
I started working in radio when I was 20. It happened by chance. A neighbor told me about a radio station that was starting up from scratch. I told my friend from school, Alejandro Gonzalez Iñárritu, the director. Suddenly, we’re working at a radio station. We’re writing radio pieces and doing production sound. It was beautiful. We had our own on-air, live shows. I was on in the mornings. He did the noon show. Then he decided to make films and I followed him.

Easy

What are some of your recent projects?
I just finished a series for Joe Swanberg, the third season of Easy. It’s on Netflix. It’s the fourth project I’ve done with Joe. I’ve also done two shows here in Mexico. The first one is my first full-time job as supervisor/designer for Argos, the company led by Epigmenio Ibarra. Yankee is our first series together for Netflix, and we’re cutting another one to be aired later in the year. It’s very exciting for me.

Is there a project that you’re most proud of?
I am very proud of the results that we’ve been getting on the first two series here in Mexico. We built the sound crew from scratch. Some are editors I’ve worked with before, but we’ve also brought in new talent. That’s a very joyful process. Finding talent is not easy, but once you do, it’s very gratifying. I’m also proud of this work because the quality is very good. Our clients are happy, and when they’re happy, I’m happy.

What pieces of technology can you not live without?
Avid Pro Tools. It’s the universal language for sound. It allows me to share sound elements and sessions from all over the world, just like we do locally, between editing and mixing stages. The second is my converter. We are using the Red system from Focusrite. It’s a beautiful machine.

This is a high-stress job with deadlines and client expectations. What do you do to de-stress from it all?
Keep working.

Mixing sounds of fantasy and reality for Rocketman

By Jennifer Walden

Paramount Pictures’ Rocketman is a musical fantasy about the early years of Elton John. The story is told through flashbacks, giving director Dexter Fletcher the freedom to bend reality. He blended memories and music to tell an emotional truth as opposed to delivering hard facts.

Mike Prestwood Smith

The story begins with Elton John (Taron Egerton) attending a group therapy session with other recovering addicts. Even as he’s sharing details of his life, he’s stretching the truth. “His recollection of the past is not reliable. He often fantasizes. He’ll say a truth that isn’t really the case, because when you flash back to his memory, it is not what he’s saying,” says BAFTA-winning re-recording mixer Mike Prestwood Smith, who handled the film’s dialogue and music. “So we’re constantly crossing the line of fantasy even in the reality sections.”

For Smith, finding the balance between fantasy and reality was what made Rocketman unique. There’s a sequence in which pre-teen Elton (Kit Connor) evolves into grown-up Elton to the tune of “Saturday Night’s Alright for Fighting.” It was a continuous shot, so the camera tracks pre-teen Elton playing the piano, who then gets into a bar fight that spills into an alleyway that leads to a fairground where a huge choreographed dance number happens. Egerton (whose actual voice is featured) is singing the whole way, and there’s a full-on band under him, but specific effects from his surrounding environment poke through the mix. “We have to believe in this layer of reality that is gluing the whole thing together, but we never let that reality get in the way of enjoying the music.”

Smith helped the pre-recorded singing feel in situ by adding different reverbs, like Audio Ease’s Altiverb, Exponential Audio’s PhoenixVerb and Avid’s ReVibe. He created custom reverbs from impulse responses taken from the rooms on set to ground the vocal in that space and help sell the reality of it.
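
As a rough illustration of the impulse-response technique Smith describes, here is a minimal convolution-reverb sketch in Python. The impulse response here is synthetic decaying noise standing in for a response captured on set, and none of this reflects the internals of the plugins named above:

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.3):
    """Place a dry signal 'in the room' by convolving it with that room's
    impulse response, then blend wet and dry (a standard convolution-reverb
    sketch, not any specific plugin's algorithm)."""
    wet = np.convolve(dry, impulse_response)[: len(dry)]  # trim the tail
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak * np.max(np.abs(dry))  # match the dry signal's peak
    return (1.0 - wet_mix) * dry + wet_mix * wet

# Toy example: a 1 kHz tone and a fake IR (exponentially decaying noise burst)
sr = 8_000
t = np.arange(sr) / sr
dry = 0.5 * np.sin(2 * np.pi * 1000 * t)
ir = np.random.default_rng(0).standard_normal(sr // 4) * np.exp(-8 * t[: sr // 4])
out = convolution_reverb(dry, ir, wet_mix=0.25)
```

A real workflow would load the measured IR from a file instead of generating noise, but the blend-and-normalize structure is the same.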

For instance, when Elton is in the alleyway, Smith added a slap verb to Egerton’s voice to make it feel like it’s bouncing off the walls. “But once he gets into the main verses, we slowly move away from reality. There’s this flux between making the audience believe that this is happening and then suspending that belief for a bit so they can enjoy the song. It was a fine line and very subjective,” he says.

He and re-recording mixer/supervising sound editor Matthew Collinge spent a lot of time getting it to play just right. “We had to be very selective about the sound of reality,” says Smith. “The balance of that whole sequence was very complex. You can never do those scenes in one take.”

Another way Smith helped the pre-recorded vocals to sound realistic was by creating movement using subtle shifts in EQ. When Elton moves his head, Smith slightly EQ’d Egerton’s vocals to match. These EQ shifts “seem little, but collectively they have a big impact on selling that reality and making it feel like he’s actually performing live,” says Smith. “It’s one of those things that if you don’t know about it, then you just accept it as real. But getting it to sound that real is quite complicated.”
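
The head-movement trick can be sketched crudely in code: blend in a one-pole low-pass so high frequencies fall away as the singer turns off-mic. The 4 kHz corner and the linear blend below are illustrative assumptions, not Smith's actual settings:

```python
import numpy as np

def off_axis_eq(x, sr, amount, cutoff=4000.0):
    """Crude 'off-axis' EQ: as amount rises from 0 to 1, highs roll off the
    way a voice dulls when it turns away from the mic (illustrative only)."""
    a = np.exp(-2.0 * np.pi * cutoff / sr)   # one-pole low-pass coefficient
    lp = np.empty_like(x)
    y = 0.0
    for i, s in enumerate(x):
        y = (1.0 - a) * s + a * y            # smoothing removes highs
        lp[i] = y
    return (1.0 - amount) * x + amount * lp  # amount in [0, 1]

# A bright 10 kHz tone, fully off-axis
sr = 48_000
n = np.arange(2_000)
tone = np.sin(2 * np.pi * 10_000 * n / sr)
dulled = off_axis_eq(tone, sr, amount=1.0)
```

In practice the amount would be automated against picture, rising and falling with each head turn.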

For example, there’s a scene in which Egerton is working out “Your Song,” and the camera cuts from upstairs to downstairs. “We are playing very real perspectives using reverb and EQ,” says Smith. Then, once Elton gets the song, he gives Bernie Taupin (Jamie Bell) a knowing look. The music gets fleshed out with a more complicated score, with strings and guitar. Next, Elton is recording the song in a studio. As he’s singing, he’s looking down and playing piano. Smith EQ’d all of that to add movement, so “it feels like that performance is happening at that time. But not one single sound of it is from that moment on set. There is a laugh from Bernie, a little giggle that he does, and that’s the only thing from the on-set performance. Everything else is manufactured.”

In addition to EQ and reverb, Smith used plugins from Helsinki-based sound company Oeksound to help the studio recordings sound like production recordings. In particular, Oeksound’s Spiff plugin was useful for controlling transients “to get rid of that close-mic’d sound and make it feel more like it was captured on set,” Smith says. “Combining EQ and compression and adding reverb helped the vocals to sound like sync, but at the same time, I was careful not to take away too much from the quality of the recording. It’s always a fine line between those things.”

The most challenging transitions were going from dialogue into singing. Such was the case with quiet moments like “Your Song” and “Goodbye Yellow Brick Road.” In the latter, Elton quietly sings to his reflection in a mirror backstage. The music slowly builds up under his voice as he takes off down the hallway, and by the time he hops into a cab outside, it’s a full-on song. Part of what makes the fantasy feel real is that his singing feels like sync. The vocals had to sound impactful and engage the audience emotionally, but at the same time they had to sound believable — at least initially. “Once you’re into the track, you have the audience there. But getting in and out is hard. The filmmakers want the audience to believe what they’re seeing, that Taron was actually in the situations surrounded by a certain level of reality at any given point, even though it’s a fantasy,” says Smith.

The “Rocketman” song sequence is different though. Reality is secondary and the fantasy takes control, says Smith. “Elton happens to be having a drug overdose at that time, so his reality becomes incredibly subjective, and that gives us license to play it much more through the song and his vocal.”

During “Rocketman,” Elton is sinking to the bottom of a swimming pool, watching a younger version of himself play piano underwater. On the music side, Smith was able to spread the instruments around the Dolby Atmos surround field, placing guitar parts and effect-like orchestrations into speakers discretely and moving those elements into the ceiling and walls. The bubble sound effects and underwater atmosphere also add to the illusion of being submerged. “Atmos works really well when you have quiet, and you can place sounds in the sound field and really hear them. There’s a lot of movement musically in Rocketman and it’s wonderful to have that space to put all of these great elements into,” says Smith.

That sequence ends with Elton coming on stage at Dodger Stadium and hitting a baseball into the massive crowd. The whole audience — 100,000 people — sing the chorus with him. “The moment the crowd comes in is spine-tingling. You’re just so with him at that point, and the sound and the music are doing all of that work,” he explains.

The Music
The music was a key ingredient to the success of Rocketman. According to Smith, they were changing performances from Egerton and also orchestrations right through the post sound mix, making sure that each piece was the best it could be. “Taron [Egerton] was very involved; he was on the dub stage a lot. Once everything was up on the screen, he’d want to do certain lines again to get a better performance. So, he did pre-records, on-set performances and post recording as well,” notes Smith.

Smith needed to keep those tracks live through the mix to accommodate the changes, so he and Collinge chose Avid S6 control surfaces and mixed in-the-box as opposed to printing the tracks for a mix on a traditional large-format console. “To have locked down the music and vocals in any way would have been a disaster. I’ve always been a proponent of mixing inside Pro Tools mainly because workflow-wise, it’s very collaborative. On Rocketman, having the tracks constantly addressable — not just by me but for the music editors Cecile Tournesac and Andy Patterson as well — was vital. We were able to constantly tweak bits and pieces as we went along. I love the collaborative nature of making and mixing sound for film, and this workflow allows for that much more so than any other. I couldn’t imagine doing this any other way,” says Smith.

Smith and Collinge mixed in native Dolby Atmos at Goldcrest London in Theatre 1 and Theatre 2, and also at Warner Bros. De Lane Lea. “It was such a tight schedule that we had all three mixing stages going for the very end of it, because it got a bit crazy as these things do,” says Smith. “All the stages we mixed at had S6s, and I just brought the drives with me. At one point we were print mastering and creating M&Es on one stage and doing some fold-downs on a different stage, all with the same session. That made it so much more straightforward and foolproof.”

As for the fold-down from Atmos to 5.1, Smith says it was nearly seamless. The pre-recorded music tracks were mixed by music producer Giles Martin at Abbey Road. Smith pulled those tracks apart, spread them into the Atmos surround field and then folded them down to 5.1. “Ultimately, the mixing that Giles Martin did at Abbey Road was a great thing because it meant the fold-downs really had the best backbone possible. Also, the way that Dolby has been tweaking their fold-down processing, it’s become something special. The fold-downs were a lot easier than I thought they’d be,” concludes Smith.
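
At its core, a channel fold-down is a matrix multiply: each output channel is a weighted sum of input channels. The sketch below shows a generic 7.1-to-5.1 fold-down with placeholder coefficients; Dolby's actual Atmos rendering and fold-down processing is far more sophisticated than this:

```python
import numpy as np

# Illustrative 7.1 -> 5.1 fold-down matrix (placeholder coefficients, not
# Dolby's). Channel orders assumed here:
#   7.1: L, R, C, LFE, Lss, Rss, Lrs, Rrs
#   5.1: L, R, C, LFE, Ls, Rs
FOLD_71_TO_51 = np.array([
    # L    R    C    LFE  Lss  Rss  Lrs    Rrs
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,   0.0],    # L
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0,   0.0],    # R
    [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0,   0.0],    # C
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0,   0.0],    # LFE
    [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.707, 0.0],    # Ls = side + -3 dB rear
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0,   0.707],  # Rs
])

def fold_down(frames_71):
    """frames_71: (n_samples, 8) array of 7.1 audio -> (n_samples, 6) 5.1."""
    return frames_71 @ FOLD_71_TO_51.T

# Two test frames: one in front L, one in the left rear
x = np.zeros((2, 8))
x[0, 0] = 1.0   # front left passes straight through
x[1, 6] = 1.0   # left rear folds into Ls at -3 dB
y = fold_down(x)
```

The 0.707 (-3 dB) rear weights are a common convention for preserving perceived power when surrounds collapse into fewer speakers.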


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Accusonus intros plugin bundles for sound and video editors

Accusonus is bringing its single-knob audio cleaning and noise reduction technology to its new ERA 4 Bundles for video editors, audio engineers and podcasters.

The ERA 4 Bundles (Enhancement and Repair of Audio) are a collection of single-knob audio cleaning plugins designed to reduce the complexity of the sound design and audio workflow without compromising sound quality or fidelity.

Accusonus says that its patented single-knob design appeals to professional editors, filmmakers and podcasters because it reduces the time-consuming audio repair workflow to a twist of a dial. Additionally, the ERA 4 Standard family of plugins enables aspiring content creators, YouTubers and film and audio students to quickly master audio workflows with minimal effort or expertise.
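
The single-knob idea can be illustrated as a macro mapping: one control drives several hidden parameters at once. The parameter names and curves below are invented for illustration and are not Accusonus' actual internals:

```python
def single_knob_settings(amount):
    """Map one 0-100 'processing' knob to several underlying noise-reduction
    parameters. All names and curves here are hypothetical."""
    x = max(0.0, min(100.0, amount)) / 100.0  # clamp and normalize
    return {
        "reduction_db": 24.0 * x,          # how hard to attenuate noise
        "threshold_db": -60.0 + 20.0 * x,  # detection threshold rises with knob
        "smoothing_ms": 50.0 - 30.0 * x,   # faster response at high settings
    }

# One gesture sets everything at once
settings = single_knob_settings(75)
```

The appeal for editors is exactly this indirection: the plugin designer pre-decides how the detailed parameters should co-vary, so the user only reasons about "more" or "less."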

ERA 4 Bundles are available in two collections: The Standard Bundle and the Pro Bundle.

The ERA 4 Standard Bundle features audio cleaning plugins designed for speed and fidelity with minimal effort, even if users have never edited audio before. The Standard Bundle offers professional sound design and includes: Noise Remover, Reverb Remover, De-esser, Plosive Remover, Voice Leveler and De-clipper.

The ERA 4 Pro Bundle targets professional editors, audio engineers and podcasters in advanced post and music production environments. It includes all of the plugins from the Standard Bundle and adds the sophisticated ERA De-Esser Pro plugin. Beyond the large main knob, ERA De-Esser Pro offers extra controls for greater granularity and fine-tuning when fixing an especially rough recording.

The Accusonus ERA Bundle is fully supported by Avid Pro Tools 12.6 (or higher), Audacity 2.2.2, Apple Logic Pro 10.4.3 (or higher), Ableton Live 9 (or higher), Cockos Reaper v5.9, Image Line FL Studio 12, Presonus Studio One 3 (or higher), Steinberg Cubase 8 (or higher), Adobe Audition CC 2017 (or higher) and Apple GarageBand 10.3.2.

The ERA Bundle supports Adobe Premiere CC 2017 (or higher), Apple Final Cut Pro X 10.4 (or higher), Blackmagic DaVinci Resolve 14 (or higher), Avid Media Composer 2018.12 and Magix Vegas Pro 15 (or higher).

The ERA 4 Standard Bundle is available at a special introductory price of $119 until July 31. After that, the price will be $149. The ERA 4 Pro Bundle is available at a special introductory price of $349 until July 31. After that, the price will be $499.

Picture Shop buys The Farm Group

Burbank’s Picture Shop has acquired UK-based The Farm Group. The Farm Group was founded in 1998 and currently has four locations in London, as well as facilities in Manchester, Bristol and Los Angeles.

The Farm, London

The Farm also operates the in-house post production teams for BBC Sport in Salford, England; UKTV; and Fremantle Media. This deal marks Picture Shop’s second international acquisition, following the deal it made for Vancouver’s Finalé Post earlier this year.

The founders of The Farm, Nicky Sargent and Vikki Dunn, will stay involved in The Farm Group. In a joint statement, Sargent and Dunn said, “We are delighted that after 20 successful years, we have a new partner. Picture Shop is poised to expand in the international post market and provide the combination of technical, creative and professional excellence to the world’s content creators.”

The duo will also re-invest in the expanded Picture Head Group, which includes Picture Head and audio post company Formosa Group, in addition to Picture Shop.

L-R: The Farm Group’s Nicky Sargent and Vikki Dunn.

Bill Romeo, president of Picture Shop, says, “Based on the amount of content being created internationally, we felt it was important to have a presence worldwide and support our clients’ needs. The Farm, based on its reputation and creative talent, will be able to maintain the philosophy of Picture Shop. It is a perfect fit. Our clients will benefit from our collaborative efforts internationally, as well as benefit from our technology and experience. We will continue to partner and support our clients while maintaining our boutique feel.”

Recent work from The Farm Group includes BBC Two’s Summer of Rockets, Sky One’s Jamestown and Britain’s Got Talent.


Andy Greenberg on One Union Recording’s fire and rebuild

San Francisco’s One Union Recording Studios has been serving the sound needs of ad agencies, game companies, TV and film producers, and corporate media departments in the Bay Area and beyond for nearly 25 years.

In the summer of 2017, the facility was hit by a terrible fire that affected all six of its recording studios. The company, led by president John McGleenan, immediately began an ambitious rebuilding effort, which it completed earlier this year. One Union Recording is now back up to full operation and its five recording studios, outfitted with the latest sound technologies including Dolby Atmos capability, are better than ever.

Andy Greenberg is One Union Recording’s facility engineer and senior mix engineer; he works alongside engineers Joaby Deal, Eben Carr, Matt Wood and Isaac Olsen. We recently spoke with Greenberg about the company’s rebuild and plans for the future.

Rebuilding the facility after the fire must have been an enormous task.
You’re not kidding. I’ve worked at One Union for 22 years, and I’ve been through every growth phase and upgrade. I was very proud of the technology we had in place in 2017. We had six rooms, all cutting-edge. The software was fully up to date. We had few if any technical problems and zero downtime. So, when the fire hit, we were devastated. But John took a very business-oriented approach to it, and within a few days he was formulating a plan. He took it as an opportunity to implement new technology, like Dolby Atmos, and to grow. He turned sadness into enthusiasm.

How did the facility change?
Ironically, the timing was good. A lot of new technology had just come out that I was very excited about. We were able to consolidate what were large systems into smaller units while increasing quality 10-fold. We moved leaps and bounds beyond where we had been.

Prior to the fire, we were running Avid Pro Tools 12.1. Now we’re on Pro Tools Ultimate. We had just purchased four Avid/Euphonix System 5 digital audio consoles with extra DSP in March of 2017 but had not had time to install them before the fire due to bookings. These new consoles are super powerful. Our number of inputs and outputs quadrupled. The routing power and the bus power are vastly improved. It’s phenomenal.

We also installed Avid MTRX, an expandable interface designed in Denmark and very popular now, especially for Atmos. The box feels right at home with the Avid S5 because it’s MADI-based and takes the physical outputs of our Pro Tools systems up to 64 or 128 channels.

That’s a substantial increase.
A lot of delivered projects use from two to six channels. Complex projects might go to 20. Being able to go far beyond that increases the power and flexibility of the studio tremendously. And then, of course, our new Atmos room requires that kind of channel count to work in immersive surround sound.

What do you do for data storage?
Even before the fire, we had moved to a shared storage network solution. We had a very strong infrastructure and workflow in terms of data storage, archiving and the ability to recall sessions. Our new infrastructure includes 40TB of active storage of client data. Forty terabytes is not much for video, but for audio, it’s a lot. We also have 90TB of instantly recallable data.

We have client data archived back 25 years, and we can have anything online in any room in just a few minutes. It’s literally drag and drop. We pride ourselves on maintaining triple redundancy in backups. Even during the fire, we didn’t lose any client data because it was all backed up on tape and off site. We take backup and data security very seriously. Backups happen automatically every day… actually, every three hours.

What are some of the other technical features of the rebuilt studios?
There’s actually a lot. For example, our rooms — including the two Dolby-certified Atmos rooms — have new Genelec SAM studio monitors. They are “smart” speakers that are self-tuning. We can run some test tones, and in five minutes the rooms are perfectly tuned. We have custom tunings set up for 5.1 and Atmos. We can adjust the tuning via computer, and the speakers have built-in DSP, so we don’t have to rely on external systems.

Another cool technology that we are using is Dante, which is part of the Avid MTRX interface. Dante is basically audio-over-IP or audio-over-Cat6. It essentially replaced our AES router. We were one of the first facilities in San Francisco to have a full audio AES router, and it was very strong for us at the time. It was a 64×64 stereo-paired AES router. It has been replaced by the MTRX interface box that has, believe it or not, a three-inch by two-inch card that handles 64×64 routing per room. So each room now has the routing capacity that once served the entire facility.
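
Conceptually, a crosspoint router like the one Greenberg describes is just a patch table mapping outputs to inputs. A toy model, purely for illustration:

```python
class CrosspointRouter:
    """Minimal model of a 64x64 audio crosspoint router: any input channel
    can feed any set of output channels (illustrative, not Dante or MTRX)."""

    def __init__(self, size=64):
        self.size = size
        self.patch = {}  # output channel -> input channel

    def connect(self, src, dst):
        if not (0 <= src < self.size and 0 <= dst < self.size):
            raise ValueError("channel out of range")
        self.patch[dst] = src  # one source per output; sources can fan out

    def route(self, input_frame):
        """input_frame: list of per-channel sample values for one frame."""
        return [input_frame[self.patch[o]] if o in self.patch else 0.0
                for o in range(self.size)]

# Fan one source out to two destinations
router = CrosspointRouter()
router.connect(3, 0)
router.connect(3, 1)
frame = [0.0] * 64
frame[3] = 0.7
routed = router.route(frame)
```

A real audio-over-IP router streams packets over the network rather than shuffling samples in memory, but the patching logic is the same idea.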

We use Dante to route secondary audio, like our ISDN and web-based IP communication devices. We can route signals from room to room and over the web securely. It’s seamless, and it comes up literally into your computer. It’s amazing technology. The other day, I did a music session and used a 96K sample rate, which is very high. The quality of the headphone mix was astounding. Everyone was happy and it took just one, quick setting and we were off and running. The sound is fantastic and there is no noise and no latency problems. It’s super-clean, super-fast and easy to use.

What about video monitoring?
We have 4K monitors and 4K projection in all the rooms via Sony XBR 55A1E Bravia OLED monitors, Sony VPL-VW885ES True 4K Laser Projectors and a DLP 4K550 projector. Our clients appreciate the high-quality images and the huge projection screens.

London’s Media Production Show: technology for content creation

By Mel Lambert

The fourth annual Media Production Show, held June 11-12 at Olympia West, London, once again attracted a wide cross section of European production, broadcast, post and media-distribution pros. According to its organizers, the two-day confab drew 5,300 attendees and “showcased the technology and creativity behind content creation,” focusing on state-of-the-art products and services. The full program of standing room-only discussion seminars covered a number of contemporary topics, while 150-plus exhibitors presented wares from the media industry’s leading brands.

The State of the Nation: Post Production panel.

During a session called “The State of the Nation: Post Production,” Rowan Bray, managing director of Clear Cut Pictures, said that “while [wage and infrastructure] costs are rising, our income is not keeping up.” And with salaries, facility rent and equipment amortization representing 85% of fixed costs, “it leaves little over for investment in new technology and services. In other words, increasing costs are preventing us from embracing new technologies.”

Focusing on the long-term economic health of the UK post industry, Bray pointed out that few post facilities in London’s Soho area are changing hands, which she says “indicates that this is not a healthy sector [for investment].”

“Several years ago, a number of US companies [including Technicolor and Deluxe] invested £100 million [$130 million] in Soho; they are now gone,” stated Ian Dodd, head of post at Dock10.

Some 25 years ago, there were at least 20 leading post facilities in London. “Now we have a handful of high-end shops, a few medium-sized ones and a handful of boutiques,” Dodd concluded. Other panelists included Cara Kotschy, managing director of Fifty Fifty Post Production.

The Women in Sound panel

During his keynote presentation called “How we made Bohemian Rhapsody,” leading production designer Aaron Haye explained how the film’s large stadium concert scenes were staged and supplemented with high-resolution CGI; he is currently working on Charlie’s Angels (2019) with director/actress Elizabeth Banks.

The panel discussion “Women in Sound” brought together a trio of re-recording mixers with divergent secondary capabilities and experience. Participants were Emma Butt, a freelance mixer who also handles sound editorial and ADR recordings; Lucy Mitchell, a freelance sound editor and mixer; plus Kate Davis, head of sound at Directors Cut Films. As the audience discovered, their roles in professional sound differ. While exploring these differences, the panel revealed helpful tips and tricks for succeeding in the post world.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

iZotope’s Neutron 3 streamlines mix workflows with machine learning

iZotope, makers of the RX audio tools, has introduced Neutron 3, a plug-in that — thanks to advances in machine learning — listens to the entire session and communicates with every track in the mix. Mixers can use Neutron 3’s new Mix Assistant to create a balanced starting point for an initial-level mix built around their chosen focus, saving time and energy when making creative mix decisions. Once a focal point is defined, Neutron 3 automatically sets levels before the mixer ever has to touch a fader.
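
Conceptually, a balanced starting point can be as simple as measuring each track's loudness and computing gain offsets relative to a chosen focus track. The sketch below uses plain RMS and an arbitrary 6 dB backoff; iZotope's actual Mix Assistant uses machine learning and perceptual measures, so this is only an illustration of the idea:

```python
import numpy as np

def mix_assistant_gains(tracks, focus, backoff_db=6.0):
    """Illustrative level-balancing pass (not iZotope's algorithm): place
    every non-focus track backoff_db below the focus track's RMS level."""
    def rms_db(x):
        return 20 * np.log10(max(np.sqrt(np.mean(np.square(x))), 1e-12))

    target = rms_db(tracks[focus]) - backoff_db
    return {name: (0.0 if name == focus else target - rms_db(audio))
            for name, audio in tracks.items()}  # gain offsets in dB

# Toy session: a loud vocal and a quiet bass
n = np.arange(4_800)
tracks = {
    "vocal": 0.5 * np.sin(2 * np.pi * 440 * n / 48_000),
    "bass": 0.1 * np.sin(2 * np.pi * 110 * n / 48_000),
}
gains = mix_assistant_gains(tracks, focus="vocal")
```

Here the bass, being about 14 dB quieter than the vocal, would get roughly +8 dB to sit 6 dB under it; the point is just that the focus track anchors everything else.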

Neutron 3 also has a new module called Sculptor (available in Neutron 3 Standard and Advanced) for sweetening, fixing and creative applications. Using never-before-seen signal processing, Sculptor works like a per-band army of compressors and EQs to shape any track. It also communicates with Track Assistant to understand each instrument and gives realtime feedback to help mixers shape tracks to a target EQ curve or experiment with new sounds.

In addition, Neutron 3 includes many new improvements and enhancements based on feedback from the community, such as the redesigned Masking Meter that automatically flags masking issues and allows them to be fixed from a convenient one-window display. This improvement prevents tracks from stepping on each other and muddying the mix.

Neutron 3 has also had a major overhaul in performance for faster processing and load times and smooth metering. Sessions with multiple Neutrons open much quicker, and refresh rates for visualizations have doubled.

Other Neutron 3 Features
• Visual Mixer and iZotope Relay: Users can launch Mix Assistant directly from Visual Mixer and move tracks in a virtual space, tapping into iZotope-enabled inter-plug-in communication
• Improved interface: Smooth visualizations and a resizable interface
• Improved Track Assistant listens to audio and creates a custom preset based on what it hears
• Eight plug-ins in one: Users can build a signal chain directly within one highly connected, intelligent interface with Sculptor, EQ with Soft Saturation mode, Transient Shaper, 2 Compressors, Gate, Exciter, and Limiter
• Component plug-ins: Users can control Neutron’s eight modules as a single plug-in or as eight individual plug-ins
• Tonal Balance Control: Updated to support Neutron 3
• 7.1 Surround sound support and zero-latency mode in all eight modules for professional, lightweight processing for audio post or surround music mixes

Visual Mixer and iZotope Relay will be included free with all Neutron 3 Advanced demo downloads. In addition, Music Production Suite 2.1 will now include Neutron 3 Advanced, and iZotope Elements Suite will be updated to include Neutron Elements (v3).

Neutron 3 will be available in three different options — Neutron Elements, Neutron 3 Standard and Neutron 3 Advanced. See the comparison chart for more information on what features are included in each version.

Neutron 3 will be available June 30. Check out the iZotope site for pricing.

Sound Lounge ups Becca Falborn to EP 

New York’s Sound Lounge, an audio post house that provides sound services for advertising, television and feature films, has promoted Becca Falborn to executive producer.

In her new role, Falborn will manage the studio’s advertising division and supervise its team of producers. She will also lead client relations and sales. Additionally, she will manage Sound Lounge Everywhere, the company’s remote sound services offering, which currently operates in Boston and Boulder, Colorado.

“Becca is a smart, savvy and passionate producer, qualities that are critical to success in her new role,” said Sound Lounge COO and partner Marshall Grupp. “She has developed an excellent rapport with our team of mixers and clients and has consistently delivered projects on time and on budget, even under the most challenging circumstances.”

Falborn joined Sound Lounge in 2017 as a producer and was elevated to senior producer last year. She has produced voiceover recordings, sound design, and mixing for many advertising projects, including seven out of the nine spots produced by Sound Lounge that debuted during this year’s Super Bowl telecast.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including past positions with the post house Nice Shoes and the marketing agency Hogarth Worldwide.

Sugar Studios LA gets social for celebrity-owned Ladder supplement

Sugar Studios LA completed a social media campaign for Ladder perfect protein powder and clean energy booster supplements starring celebrity founders Arnold Schwarzenegger, LeBron James, DJ Khaled, Cindy Crawford and Lindsey Vonn. The playful ad campaign focuses on social media, foregoing the usual TV commercial push and pitching the protein powder directly to consumers.

One spot shows Arnold in the gym annoyed by a noisy dude on the phone, prompting him to turn up his workout soundtrack. Then DJ Khaled is scratching encouragement for LeBron’s workout until Arnold drowns them out with his own personal live oompah band.

The ads were produced and directed by longtime Schwarzenegger collaborator Peter Grigsby, while Sugar Studios’ editor Nico Alba (Chevrolet, Ferrari, Morongo Casino, Mattel) cut the project using Adobe Premiere. When asked about using random spot lengths, as opposed to traditional :15s, :30s, and :60s, Alba explains, “Because it’s social media, we’re not always bound to those segments of time anymore. Basically, it’s ‘find the story,’ and because there are no rules, it makes the storytelling more fun. It’s a process of honing everything down without losing the rhythm or the message and maintaining a nice flow.”

Nico Alba and Jijo Reed. Credit: David Goggin

“Peter Grigsby requested a skilled big-brand commercial editor on this campaign,” Reed says. “Nico was the perfect fit to create that rhythm and flow that only a seasoned commercial editor could bring to the table.”

“We needed a heavy-weight gym ambience to set the stage,” says Alba, who worked closely with sound design/mixers Bret Mazur and Troy Ambroff to complement his editing. “It starts out with a barrage of noisy talking and sounds that really irritate Arnold, setting up the dueling music playlists and the sonic payoff.”

The audio team mixed and created sound design in Avid Pro Tools Ultimate. Audio plugins called on include the Waves Mercury bundle, DTS Surround tools and iZotope RX 7 Advanced.

The Sugar team also created a cinematic look to the spots, thanks to colorist Bruce Bolden, who called on Blackmagic DaVinci Resolve and a Sony BVM OLED monitor. “He’s a veteran feature film colorist,” says Reed, “so he often brings that sensibility to advertising spots as well, meaning rich blacks and nice, even color palettes.”

Storage used at the studio is Avid NEXIS and Facilis TerraBlock.

Human’s opens new Chicago studio

Human, an audio and music company with offices in New York, Los Angeles and Paris, has opened a Chicago studio headed up by veteran composer/producer Justin Hori.

As a composer, Hori’s work has appeared in advertising, film and digital projects. “Justin’s artistic output in the commercial space is prolific,” says Human partner Gareth Williams. “There’s equal parts poise and fun behind his vision for Human Chicago. He’s got a strong kinship and connection to the area, and we couldn’t be happier to have him carve out our footprint there.”

From learning to DJ at age 13 to working at Gramaphone Records to studying music theory and composition at Columbia College, Hori’s immersion in the Chicago music scene has always influenced his work. He began his career at com/track and Comma Music, before moving to open Comma’s Los Angeles office. From there, Hori joined Squeak E Clean, where he served as creative director for five years. He returned to Chicago in 2016.

Hori is known for producing unexpected yet perfectly spot-on pieces of music for advertising, including his track “Da Diddy Da,” which was used in the four-spot summer 2018 Apple iPad campaign. His work has won top industry honors including D&AD Pencils, The One Show, Clio and AICP Awards and the Cannes Gold Lion for Best Use of Original Music.

Meanwhile, Post Human, the audio post sister company run by award-winning sound designer and engineer Sloan Alexander, continues to build momentum with the addition of a second 5.1 mixing suite in NYC. Plans for similar build-outs in both LA and Chicago are currently underway.

With services spanning composition, sound design and mixing, Human works in advertising, broadcast, digital and film.

NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.

Creating audio for the cinematic VR series Delusion: Lies Within

By Jennifer Walden

Delusion: Lies Within is a cinematic VR series from writer/director Jon Braver. It is available on the Samsung Gear VR and Oculus Go and Rift platforms. The story follows a reclusive writer named Elena Fitzgerald who penned a series of popular fantasy novels, but before the final book in the series was released, the author disappeared. Rumors circulated about the author’s insanity and supposed murder, so two avid fans decide to break into her mansion to search for answers. What they find are Elena’s nightmares come to life.

Delusion: Lies Within is based on an interactive play written by Braver and Peter Cameron. Interactive theater isn’t your traditional butts-in-the-seat passive viewing-type theater. Instead, the audience is incorporated into the story. They interact with the actors, search for objects, solve mysteries, choose paths and make decisions that move the story forward.

Like a film, the theater production is meticulously planned out, from the creature effects and stunts to the score and sound design. With all these components already in place, Delusion seemed like the ideal candidate to become a cinematic VR series. “In terms of the visuals and sound, the VR experience is very similar to the theatrical experience. With Delusion, we are doing 360° theater, and that’s what VR is too. It’s a 360° format,” explains Braver.

While the intent was to make the VR series match the theatrical experience as much as possible, there are some important differences. First, immersive theater allows the audience to interact with the actors and objects in the environment, but that’s not the case with the VR series. Second, the live theater show has branching story narratives and an audience member can choose which path he/she would like to follow. But in the VR series there’s one set storyline that follows a group who is exploring the author’s house together. The viewer feels immersed in the environment but can’t manipulate it.

L-R: Hamed Hokamzadeh and Thomas Ouziel

According to supervising sound editor Thomas Ouziel from Hollywood’s MelodyGun Group, “Unlike many VR experiences where you’re kind of on rails in the midst of the action, this was much more cinematic and nuanced. You’re just sitting in the space with the characters, so it was crucial to bring the characters to life and to design full sonic spaces that felt alive.”

In terms of workflow, MelodyGun sound supervisor/studio manager Hamed Hokamzadeh chose to use the Oculus Development Kit 2 headset with the Facebook 360 Spatial Workstation in Avid Pro Tools. “Post supervisor Eric Martin and I decided to keep everything within FB360 because the distribution was to be on a mobile VR platform (although it wasn’t yet clear which platform), and FB360 had worked for us marvelously in the past for mobile and Facebook/YouTube,” says Hokamzadeh. “We initially concentrated on delivering B-format (2nd Order AmbiX) playing back on Gear VR with a Samsung S8. We tried both the Audio-Technica ATH-M50 and Shure SRH840 headphones to make sure it translated. Then we created other deliverables: quad-binaurals, .tbe, 8-channel and a stereo static mix. The non-diegetic music and voiceover were head-locked and delivered in stereo.”
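Delivering B-format matters because an ambisonic mix can be rotated to track the listener’s head at playback time. As a hedged illustration only — a first-order sketch, not MelodyGun’s actual second-order pipeline, and sign conventions vary between renderers — a horizontal head turn is just a small matrix applied to the directional channels of an AmbiX frame:

```python
import numpy as np

def rotate_ambix_yaw(frame, yaw_rad):
    """Rotate a first-order AmbiX frame (ACN channel order W, Y, Z, X)
    around the vertical axis by yaw_rad radians.
    Illustrative only; real renderers handle higher orders and
    differing sign conventions."""
    w, y, z, x = frame
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    # W (omni) and Z (height) are unchanged by a horizontal rotation;
    # X (front/back) and Y (left/right) mix according to the yaw angle.
    x_r = c * x - s * y
    y_r = s * x + c * y
    return np.array([w, y_r, z, x_r])

# A source straight ahead lives in W and X only...
frame = np.array([0.707, 0.0, 0.0, 1.0])
# ...after a 90-degree head turn its energy lands on the Y (side) axis.
rotated = rotate_ambix_yaw(frame, np.pi / 2)
```

This also shows why the head-locked stems here — the voiceover and non-diegetic music — are delivered as separate stereo files: they must not rotate with the head.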

From an aesthetic perspective, the MelodyGun team wanted to have a solid understanding of the audience’s live theater experience and the characters themselves “to make the VR series follow suit with the world Jon had already built. It was also exciting to cross our sound over into more of a cinematic ‘film world’ than was possible in the live theatrical experience,” says Hokamzadeh.

Hokamzadeh and Ouziel assigned specific tasks to their sound team — Xiaodan Li was focused on sound editorial for the hard effects and Foley, and Kennedy Phillips was asked to design specific sound elements, including the fire monster and the alchemist freezing.

Ouziel, meanwhile, had his own challenges of both creating the soundscape and integrating the sounds into the mix. He had to figure out how to make the series sound natural yet cinematic, and how to use sound to draw the viewer’s attention while keeping the surrounding world feeling alive. “You have to cover every movement in VR, so when the characters split up, for example, you want to hear all their footsteps, but we also had to get the audience to focus on a specific character to guide them through. That was one of the biggest challenges we had while mixing it,” says Ouziel.

The Puppets
“Chapter Three: Trial By Fire” provides the best example of how Ouziel tackled those challenges. In the episode, Virginia (Britt Adams) finds herself stuck in Marion’s chamber. Marion (Michael J. Sielaff) is a nefarious puppet master who is clandestinely controlling a room full of people on puppet strings; some are seated at a long dining table and others are suspended from the ceiling. They’re all moving their arms as if dancing to the scratchy song that’s coming from the gramophone.

The sound for the puppet people needed to have a wiry, uncomfortable feel and the space itself needed to feel eerily quiet but also alive with movement. “We used a grating metallic-type texture for the strings so they’d be subconsciously unnerving, and mixed that with wooden creaks to make it feel like you’re surrounded by constant danger,” says Ouziel.

The slow wooden creaks in the ambience reinforce the idea that an unseen Marion is controlling everything that’s happening. Braver says, “Those creaks in Marion’s room make it feel like the space is alive. The house itself is a character in the story. The sound team at MelodyGun did an excellent job of capturing that.”

Once the sound elements were created for that scene, Ouziel then had to space each puppet’s sound appropriately around the room. He also had to fill the room with music while making sure it still felt like it was coming from the gramophone. Ouziel says, “One of the main sound tools that really saved us on this one was Audio Ease’s 360pan suite, specifically the 360reverb function. We used it on the gramophone in Marion’s chamber so that it sounded like the music was coming from across the room. We had to make sure that the reflections felt appropriate for the room, so that we felt surrounded by the music but could clearly hear the directionality of its source. The 360pan suite helped us to create all the environmental spaces in the series. We pretty much ran every element through that reverb.”

L-R: Thomas Ouziel and Jon Braver.

Hokamzadeh adds, “The session got big quickly! Imagine over 200 AmbiX tracks, each with its own 360 spatializer and reverb sends, plus all the other plug-ins and automation you’d normally have on a regular mix. Because things never go out of frame, you have to group stuff to simplify the session. It’s typical to make groups for different layers like footsteps, cloth, etc., but we also made groups for all the sounds coming from a specific direction.”

The 360pan suite reverb was also helpful on the fire monster’s sounds. The monster, called Ember, was sound designed by Phillips. His organic approach was akin to the bear monster in Annihilation, in that it felt half human/half creature. Phillips edited together various bellowing fire elements that sounded like breathing and then manipulated those to match Ember’s tormented movements. Her screams also came from a variety of natural screams mixed with different fire elements so that it felt like there was a scared young girl hidden deep in this walking heap of fire. Ouziel explains, “We gave Ember some loud sounds but we were able to play those in the space using the 360pan suite reverb. That made her feel even bigger and more real.”

The Forest
The opening forest scene was another key moment for sound. The series is set in South Carolina in 1947, and the author’s estate needed to feel like it was in a remote area surrounded by lush, dense forest. “With this location comes so many different sonic elements. We had to communicate that right from the beginning and pull the audience in,” says Braver.

Genevieve Jones, former director of operations at Skybound Entertainment and producer on Delusion: Lies Within, says, “I love the bed of sound that MelodyGun created for the intro. It felt rich. Jon really wanted to go to the south and shoot that sequence but we weren’t able to give that to him. Knowing that I could go to MelodyGun and they could bring that richness was awesome.”

Since the viewer can turn his/her head, the sound of the forest needed to change with those movements. A mix of six different winds spaced into different areas created a bed of textures that shifts with the viewer’s changing perspective. It makes the forest feel real and alive. Ouziel says, “The creative and technical aspects of this series went hand in hand. The spacing of the VR environment really affects the way that you approach ambiences and world-building. The house interior, too, was done in a similar approach, with low winds and tones for the corners of the rooms and the different spaces. It gives you a sense of a three-dimensional experience while also feeling natural and in accordance to the world that Jon made.”

Bringing Live Theater to VR
The sound of the VR series isn’t a direct translation of the live theater experience. Instead, it captures the spirit of the live show in a way that feels natural and immersive, but also cinematic. Ouziel points to the sounds that bring puppet master Marion to life. Here, they had the opportunity to go beyond what was possible with the live theater performance. Ouziel says, “I pitched to Jon the idea that Marion should sound like a big, worn wooden ship, so we built various layers from these huge wooden creaks to match all his movements and really give him the size and gravitas that he deserved. His vocalizations were made from a couple elements including a slowed and pitched version of a raccoon chittering that ended up feeling perfectly like a huge creature chuckling from deep within. There was a lot of creative opportunity here and it was a blast to bring to life.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Butter Music and Sound adds new ECDs in NYC and LA

Music shop Butter Music and Sound has expanded its in-house creative offerings with the addition of two new executive creative directors (ECDs): Tim Kvasnosky takes the helm in Los Angeles and Aaron Kotler in New York.

The newly appointed ECDs will maintain creative oversight on all projects going through the Los Angeles and New York offices, managing workflow across staff and freelance talent, composing on a wide range of projects and supporting and mentoring in-house talent and staff.

Kvasnosky and Kotler both have extensive experience as composers and musicians, with backgrounds crafting original music for commercials, film and television. They also maintain active careers in the entertainment and performance spaces. Kvasnosky recently scored the feature film JT LeRoy, starring Kristen Stewart and Laura Dern. Kotler performs and records regularly.

Kvasnosky is a composer and music producer with extensive experience across film, TV, advertising and recording. A Seattle native who studied at NYU, he worked as a jazz pianist and studio musician before composing for television and film. His tracks have been licensed in many TV shows and films. He has scored commercial campaigns for Nike, Google, McDonald’s, Amazon, Target and VW. Along with Detroit-based music producer Waajeed and singer Dede Reynolds, Kvasnosky formed the electronic group Tiny Hearts.

Native New Yorker Kotler holds a Bachelor of Music from Northwestern University School of Music and a Master of Music from Manhattan School of Music, both in jazz piano performance. He began his career as a performer and studio musician, playing in a variety of bands and across genres including neo-soul, avant-garde jazz, funk, rock and more. He also music directed Jihad! The Musical to a month of sold-out performances at the Edinburgh Festival Fringe. Since then, he has composed commercials, themes and sonic branding campaigns for AT&T, Coca-Cola, Nike, Verizon, PlayStation, Samsung and Honda. He has also arranged music for American Idol and The Emmys, scored films that were screened at a variety of film festivals, and co-produced Nadje Noordhuis’ debut record. In 2013, he teamed up with Michael MacAllister to co-design and build Creekside Sound, a recording and production studio in Brooklyn.

Main Image: (L-R) Tim Kvasnosky and Aaron Kotler

Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

A straight, flat line means no peaks or valleys; peaks or valleys would indicate unwanted boosts or cuts at certain frequencies, and there is a reason you don’t want them in your monitoring system. If your speakers have peaks from the hundred-cycle mark on down, you get boominess. At 250 to 350 cycles you get mud. At around a thousand cycles you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.

Before

After

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarwork’s calibration mic and software come in. They give you a way to sonically flatten out your room by getting a speaker measurement. This gives you a response chart based upon the acoustics of your room. You apply this correction using the plugin and your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.
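Conceptually, the correction is the inverse of the measured deviation: wherever the room and speakers boost a band, the software cuts it by the same amount, so the sum comes out flat. Here is a minimal per-band sketch with hypothetical measurement numbers — Sonarworks’ actual processing works at far finer frequency resolution and limits extreme boosts to protect headroom:

```python
# Hypothetical measured response, in dB deviation from flat, for the
# octave bands 63, 125, 250, 500, 1k, 2k, 4k, 8k and 16k Hz.
measured_db = [4.0, 2.5, 3.0, 0.5, -1.0, 0.0, -2.0, 1.5, -0.5]

# The correction curve is the mirror image of the measurement, so
# that room response plus correction equals 0 dB in every band.
correction_db = [-band for band in measured_db]

# Simulated playback through the corrected chain is flat.
corrected = [m + c for m, c in zip(measured_db, correction_db)]
print(corrected)  # every band sits at 0.0 dB
```

This is also why you remove the plugin before sending a mix out: the correction belongs to your room’s measurement, not to the audio itself.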

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since there are over 30,000 locations that use Sonarworks, you can send out your finished mix without your Sonarworks plugin applied; the receiving room has different acoustics and will use its own calibration setting. Now the mastering lab you use will be hearing your mix on its own Sonarworks acoustically flat system… just as you mixed it.

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed by the difference that the Sonarworks software made. It opened up my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on
my Mac Pro 6-core trash can running High Sierra OSX, 64GB RAM, 12GB of VRAM across the D700 video cards; a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and CFast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Komplete Kontrol S25 keyboard; and a Focusrite Clarett 4Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.

Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. There are a few updates worth mentioning, such as a freeform Project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the Adobe After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so those of us who are old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first-prize ribbon goes to Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that will blow you away the first time you use it. If you have ever used Mocha Pro by Boris FX, then you have seen a similar tool, known as the “Object Removal” tool. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button, and your object is removed with a new background in its place. This will save users hours of manual work.

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro— Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas, and assembly edits.
● Rulers and Guides—Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects, and ensure consistency across deliverables.
● Punch and Roll in Audition—The new feature provides efficient production workflows in both Waveform and Multitrack for longform recording, including voiceover and audiobook creators.
● Twitch Live-Streaming Triggers with the Character Animator extension—Surprise viewers as livestream audiences engage with characters in real time through on-the-fly costume changes, impromptu dance moves, and signature gestures and poses: a new way to interact and even monetize using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock now offers 10 million professional-quality, curated, royalty-free HD and 4K video footage and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush, introduced late last year, offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.
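The Auto Ducking feature above is easy to picture as a sidechain gain rule: while dialog is present, pull the ambience down; when it stops, let it back up. Here is a toy, hypothetical version — Adobe Sensei’s actual analysis is far more sophisticated, and real ducking smooths the gain with attack and release times to avoid pumping:

```python
def duck_ambience(ambience, dialog, threshold=0.05, duck_gain=0.25):
    """Lower ambience samples wherever the dialog signal is active.
    A toy illustration of sidechain ducking, not Adobe's algorithm:
    the gain switches instantly instead of ramping smoothly."""
    out = []
    for a, d in zip(ambience, dialog):
        # Duck the ambience whenever dialog exceeds the threshold.
        gain = duck_gain if abs(d) > threshold else 1.0
        out.append(a * gain)
    return out

# Dialog is present on the middle two samples, so ambience ducks there.
ambience = [0.8, 0.8, 0.8, 0.8]
dialog = [0.0, 0.5, 0.5, 0.0]
print(duck_ambience(ambience, dialog))  # [0.8, 0.2, 0.2, 0.8]
```

In Audition the equivalent keyframed gain envelope is generated automatically and can then be fine-tuned by hand, as noted above.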

The new features for Adobe Creative Cloud are now available with the latest version of Creative Cloud.

After fire, SF audio house One Union is completely rebuilt

San Francisco-based audio post house One Union Recording Studios has completed a total rebuild of its facility. It features five all-new, state-of-the-art studios designed for mixing, sound design, ADR, voice recording and other sound work.

Each studio offers Avid/Euphonix digital mixing consoles, Avid MTRX interface systems, the latest Pro Tools Ultimate software, and robust monitoring and signal-processing gear. All studios have dedicated, large voice recording booths. One is certified for Dolby Atmos sound production. The facility’s infrastructure and central machine room are also all new.

One Union began its reconstruction in September 2017 in the aftermath of a fire that affected the entire facility. “Where needed, we took the building back to the studs,” says One Union president/owner John McGleenan. “We pulled out, removed and de-installed absolutely everything and started fresh. We then rebuilt the studios and rewired the whole facility. Each studio now has new consoles, speakers, furniture and wiring, and all are connected to new machine rooms. Every detail has been addressed and everything is in its proper place.”

During the 18 months of reconstruction, One Union carried on operations on a limited basis while maintaining its full staff. That included its team of engineers, Joaby Deal, Eben Carr, Andy Greenberg, Matt Wood and Isaac Olsen, who worked continuously and remain in place.

Reconstruction was managed by LA-based Yanchar Design & Consulting Group. All five studios feature Avid/Euphonix System 5 digital audio consoles, Pro Tools 2018 and Avid MTRX with Dante interface systems. Studio 4 adds Dolby Atmos capability with a full Atmos Production Suite as well as an Atmos RMU. Studio 5, the facility’s largest recording space, has two MTRX systems, with a total of more than 240 analog, MADI and Dante outputs (256 inputs), integrated with a nine-foot Avid/Euphonix console. It also features a 110-inch retractable projection screen in the control room and a 61-inch playback monitor in its dedicated voice booth. Among other things, the central machine room includes a 300TB LTO archiving system.

John McGleenan

The facility was also rebuilt with an eye toward avoiding production delays. “All of the equipment is enterprise-grade and everything is redundant,” McGleenan notes. “The studios are fed by a dual power supply and each is equipped with dual devices. If some piece of gear goes down, we have a redundant system in place to keep going. Additionally, all our critical equipment is hot-swappable. Should any component experience a catastrophic failure, it will be replaced by the manufacturer within 24 hours.”

McGleenan adds that redundancy extends to broadband connectivity. To avoid outages, the facility is served by two 1Gig fiber optic connections provided by different suppliers. WiFi is similarly available through duplicate services.

One Union Recording was founded by McGleenan, a former advertising agency executive, in 1994 and originally had just one sound studio. More studios were soon added as the company became a mainstay sound services provider to the region’s advertising industry.

In recent years, the company has extended its scope to include corporate and branded media, television, film and games, and built a client base that extends across the country and around the world.

Recent work includes commercials for Mountain Dew and carsharing company Turo, the television series Law & Order: SVU and Grand Hotel, and the game The Grand Tour.

Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and nearly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s the sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy-straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
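The custom-reverb step Marquis describes, convolving dry audio with an impulse response captured inside the tube, can be sketched generically. Below is a minimal NumPy illustration with synthetic stand-ins for the signal and IR; it is not the actual production chain, just the underlying math:

```python
import numpy as np

def apply_convolution_reverb(dry, ir, wet_mix=0.5):
    """Convolve dry audio with a captured impulse response, then blend wet/dry."""
    wet = np.convolve(dry, ir)[: len(dry)]   # trim the reverb tail to input length
    peak = np.max(np.abs(wet))
    if peak > 0:                             # match wet level to the dry peak
        wet = wet / peak * np.max(np.abs(dry))
    return (1 - wet_mix) * dry + wet_mix * wet

# Toy stand-ins: a click as the dry signal, decaying noise as the "tube" IR
sr = 8_000
dry = np.zeros(sr)
dry[0] = 1.0
t = np.arange(sr // 4) / sr
ir = np.random.default_rng(0).normal(size=sr // 4) * np.exp(-t * 8)
out = apply_convolution_reverb(dry, ir, wet_mix=0.7)
```

In practice, a recorded IR (a clap or sine sweep captured inside the space) replaces the synthetic noise burst, and the convolution imprints the space’s acoustics onto any dry recording.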

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening, but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary and what turns out to be super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based on this one individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — individually pitching a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
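The trick Marquis describes — many pitch-randomized, time-staggered copies of a single recording — can be sketched generically. This is an illustrative NumPy mock-up (not Sound Particles’ actual engine) that uses naive resampling for the pitch variation:

```python
import numpy as np

def make_crowd(voice, n_copies=20, pitch_spread=0.10, max_offset=0.5, sr=48_000, seed=0):
    """Layer pitch-varied, time-staggered copies of one recording into a crowd.

    Pitch is shifted by naive resampling (speed changes with pitch), which is
    acceptable for this kind of crowd-thickening trick.
    """
    rng = np.random.default_rng(seed)
    # Output long enough for the slowest (lowest-pitched) copy plus max offset
    out_len = int(len(voice) / (1 - pitch_spread)) + int(max_offset * sr) + 2
    crowd = np.zeros(out_len)
    for _ in range(n_copies):
        ratio = 1 + rng.uniform(-pitch_spread, pitch_spread)  # vary pitch by +/-10%
        idx = np.arange(0, len(voice) - 1, ratio)             # resampling positions
        copy = np.interp(idx, np.arange(len(voice)), voice)
        start = int(rng.uniform(0, max_offset) * sr)          # stagger the entrances
        crowd[start:start + len(copy)] += copy
    return crowd / n_copies                                   # rough level control

# Stand-in for the doll's pre-recorded line: a short decaying tone
sr = 48_000
t = np.arange(sr // 4) / sr
voice = np.sin(2 * np.pi * 880 * t) * np.exp(-t * 6)
crowd = make_crowd(voice, n_copies=20, sr=sr)
```

Varying the copy count, pitch spread and offsets changes the apparent crowd size and density, which mirrors the parameter-driven workflow described above.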

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls, one of the big scenes that we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create the sound. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Providing audio post for Three Identical Strangers documentary

By Randi Altman

It is a story that those of us who grew up in the New York area know well. Back in the ‘80s, triplet brothers separated at birth were reunited, after two of them attended the same college within a year of each other — with one being confused for the other. A classmate figured it out and their story was made public. Enter brother number three.

It’s an unbelievable story that at the time was considered to be a heart-warming tale of lost brothers — David Kellman, Bobby Shafran and Eddy Galland — who found each other again at the age of 19. But heart-warming turned heart-breaking when it was discovered that the triplets were part of a calculated, psychological research project. Each brother was intentionally placed in a household of a different economic level, where he was “checked in on” over the years.

L-R: Chad Orororo, Nas Parkash and Kim Tae Hak

Last year, British director Tim Wardle told the story in his BAFTA-nominated documentary, Three Identical Strangers, produced by Raw TV. For audio post production, Wardle called on dialogue editor and re-recording mixer Nas Parkash, sound effects editor Kim Tae Hak and Foley and archive FX editor Chad Orororo, all from London-based post house Molinare. The trio was nominated for an MPSE Award earlier this year for their work on the film.

We recently reached out to the team to ask about workflow on this compelling work.

When you first started on Three Identical Strangers, did you realize then how powerful a film it was going to be?
Nas Parkash: It was after watching the film for the first time that we realized it was going to be a seminal film. It’s an outrageous story — the likes of which we hadn’t come across before. We as a team have been fortunate to work on a broad range of documentary features, but this one has stuck out, probably because of its unpredictability and sheer number of plot twists.

Chad Orororo: I agree. It was quite an exciting moment to watch an offline cut and instantly know that it was going to be a phenomenal project. The great thing about having this reaction was that the pressure was fused with excitement, which is always a win-win. Especially as the storytelling had so much charisma.

Kim Tae Hak: When the doc was first mentioned, I had no idea about their story, but soon after viewing the first cut I realized that this would be a great film. The documentary is based on an unbelievable true story — it evokes a lot of mixed feelings, and I wanted to ensure that every single sound effect element reflected those emotions and actions.

How early did you get involved in the project?
Tae Hak: I got to start working on the SFX as soon as the picture was locked and available.

Parkash: We had a spotting session a week before we started, with director Tim Wardle and editor Michael Harte, where we watched the film in sections and made notes. This helped us determine what the emotion in each scene should be, which is important when you’ve come to a film cold. They had been living with the edit, evolving it over months, so it was important to get up to speed with their vision as quickly as possible.

Courtesy of Newsday

Documentary audio often comes from many different sources and in varying types of quality. Can you talk about that and the challenges related to that?
Parkash: The audio quality was pretty good. The interview recordings were clean and on mic. We had two mics for every interview, but I went with the boom every time, as it sounded nicer, albeit more ambient, but with atmospheres that bedded in nicely.

Even the archive clips, such as those from the Phil Donahue Show, were good. Funnily enough, you tend to get worse-sounding archive material the more recent it is. 1970s material on the whole seems to have been preserved quite well, whereas stuff from the 1990s can be terrible.

Any technical challenges on the project?
Parkash: The biggest challenge for me was mixing in commercial music with vocals underneath interview dialogue. It had to be kept at a loud enough level to retain impact in the cinema, but low enough that it didn’t fight with the interview dialogue. The biggest deliberation was to what degree we should use sound effects in the drama recon — do we fully fill, or just go with dialogue and music? In the end it was judged on a case-by-case basis.

How was Foley used within the doc?
Orororo: The Foley covered everything that you see on screen — all of the footsteps, clothing movement, shaving and breathing. You name it. It’s in there somewhere. My job was to add a level of subtle actuality, especially during the drama reconstruction scenes.

These scenes took quite a bit of work to get right because they had to match the mood of the narration. For example, the coin spillage during the telephone box scene required a specific number of coins on the right surface. It took numerous takes to get right because you can’t exactly control how objects fall, and the texture also changes depending on the height from which you drop an object. So generally, there’s a lot more to consider when recording Foley than people may assume.

Unfortunately there were a few scenes where Foley was completely dropped (mainly on the archive material), but this is something that usually happens. The shape of the overall mix always takes precedence over the individual elements that contribute to it. Teamwork makes the dream work, as they say, and I really think that showed in the final result.

Parkash: We did have sync sound recorded on location, but we decided it would be better to re-record at a higher fidelity. Some of it was noisy or didn’t sound cinematic enough. When it’s cleaner sound, you can make more of it.

What about the sound effects? Did you use a library or your own?
Parkash: Kim has his own extensive sound effects library. We also have our own personal ones, plus Molinare’s. Anything we can’t find, we’ll go out and record. Kim has a Zoom recorder and his breathing has been featured on many films now (laughs).

Tae Hak: I mainly used my own SFX library. I always build up my own FX library, which I can apply instantly to any type of motion picture. I then tweak by applying various software plugins, such as Pitch & Time Pro, Altiverb and many more.

As a brief example of how I approached the sound design for the opening title: the first thing I did was look for realistic heartbeats of six-month-old infants. After collecting some natural heartbeats, I blended them with synthetic elements, varying the pitch slightly between them (for the three babies) and applying effects such as chorus and reverb so that each heartbeat had a slightly different texture. It was a bit tricky to make them distinct but still the same (like identical triplets).

The three heartbeats were panned across the front three speakers to create as much separation and clarity as possible. Once I was happy with the heartbeats as a foundation, I added other elements, such as underwater textures and ambiguous liquids. It was important for this sequence to build in a dramatic way, starting in mono and gradually filling the 5.1 space before a hard cut into the interview room.
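The layering Tae Hak describes — identical sources, slightly detuned and spread across the speaker field — can be illustrated with a generic NumPy mock-up. The heartbeats here are crude synthetic "lub-dub" tones and the panning is simple stereo (the real work targeted a 5.1 front array in a DAW), so treat this as a sketch of the idea only:

```python
import numpy as np

def heartbeat(sr=48_000, bpm=140, beats=8, pitch=1.0):
    """A crude synthetic 'lub-dub': two decaying low-frequency thumps per beat."""
    beat_len = int(sr * 60 / bpm)
    t = np.arange(int(sr * 0.09)) / sr
    thump = np.sin(2 * np.pi * 55 * pitch * t) * np.exp(-t * 40)
    out = np.zeros(beat_len * beats)
    for b in range(beats):
        s = b * beat_len
        out[s:s + len(thump)] += thump            # "lub"
        d = s + int(0.2 * beat_len)
        out[d:d + len(thump)] += 0.7 * thump      # softer "dub"
    return out

# Three slightly detuned heartbeats, panned left / center / right:
# identical but distinct, like the triplets
pitches = [0.97, 1.00, 1.03]
pans = [-1.0, 0.0, 1.0]          # -1 = hard left, +1 = hard right
stereo = np.zeros((2, len(heartbeat())))
for pitch, pan in zip(pitches, pans):
    hb = heartbeat(pitch=pitch)
    stereo[0] += hb * (1 - pan) / 2   # left channel gain
    stereo[1] += hb * (1 + pan) / 2   # right channel gain
```

The small pitch offsets keep the three layers from fusing into one sound, while the panning gives each its own position, the same separation-and-clarity goal described above.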

Can you talk about working with director Tim Wardle?
Tae Hak: Tim was fantastic and very supportive throughout the project. As an FX editor, I had less face-to-face time with him than Nas, but we had a spotting session together before the first day of work, and we also talked about our sound design approach over the phone, especially for the opening title and the aforementioned triplets’ heartbeats.

Orororo: Tim was great to work with! He’s a very open-minded director who also trusts in the talent that he’s working with, which can be hard to come by especially on a project as important as Three Identical Strangers.

Parkash: Tim and editor Michael Harte were wonderful to work with. The best aspect of working in this industry is the people you meet and the friendships you make. They are both cinephiles, who cited numerous other films and directors in order to guide us through the process — “this scene should feel like this scene from such and such movie.” But they were also open to our suggestions and willing to experiment with different approaches. It felt like a collaboration, and I remember having fun in those intense few weeks.

How much stock footage versus new footage was shot?
Parkash: It was all pretty much new — the sit-down interviews, drama recon and the GVs (b-roll). The archive material was obviously cleared from various sources. The home movie footage came mute, so we rebuilt the sound but upon review decided that it was better left mute. It tends to change the audience’s perspective of the material depending on whether you hear the sound or not. Without it, it feels more like you’re looking upon the subjects, as opposed to being with them.

What kind of work went into the new interviews?
Parkash: EQ, volume automation, de-essing, noise reduction, de-reverb, reverb, mouth de-click — iZotope RX 6 software, basically. We’ve become quite reliant on this software for unifying our source material into something consistent and achieving a quality good enough to stand up in the cinema, at theatrical level.

What are you all working on now at Molinare?
Tae Hak: I am working on a project about football (soccer for Americans) as the FX editor. I can’t name it yet, but it’s a six-episode series for Amazon Prime. I’m thoroughly enjoying the project, as I am a football fan myself. It’s filmed across the world, including Russia where the World Cup was held last year. The story really captures the beautiful game, how it’s more than just a game, and its impact on so much of the global culture.

Parkash: We’ve just finished a series for Discovery ID, about spouses who kill each other. I’m also working on the football series that Kim mentioned for Amazon Prime. So, murder and footy! We are lucky to work on such varied, high-quality films, one after another.

Orororo: Surprisingly, I’m also working on this football series (smiles). I work with Nas fairly often and we’ve just finished up on an evocative, feature-length TV documentary that follows personal accounts of people who have survived massacre attacks in the US.

Molinare has revered creatives everywhere you look, and I’m lucky enough to be working with one of the sound greats, Greg Gettens, on a new HBO/Channel 4 documentary. However, it’s quite secret, so I can’t say much more. Keep your eyes peeled.

Main Image: Courtesy of Neon


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Hulu’s PEN15: Helping middle school sound funny

By Jennifer Walden

Being 13 years old once was hard enough, but the creators of the Hulu series PEN15 have relived that uncomfortable age — braces and all — a second time for the sake of comedy.

James Parnell

Maya Erskine and Anna Konkle might be in their 30s, but they convincingly play two 13-year-old BFFs journeying through the perils of 7th grade. And although they’re acting alongside actual teenagers, it’s not Strangers With Candy grown-up-interfacing-with-kids kind of weird — not even during the “first kiss” scene. The awkwardness comes from just being 13 and having those first-time experiences of drinking, boyfriends, awkward school dances and even masturbation (the topic of focus in Episode 3). Erskine, Konkle and co-showrunner Sam Zvibleman hilariously capture all of that cringe-worthy coming-of-age content in their writing on PEN15.

The show is set in the early 2000s, a time when dial-up Internet and the Sony Discman were prevailing technology. The location is a nondescript American suburb that is relatable in many ways to many people, and that is one way the show transports the audience back to their early teenage years.

At Monkeyland Audio in Glendale, California, supervising sound editor/re-recording mixer James Parnell and his team worked hard to capture that almost indescribable nostalgic essence that the showrunners were seeking. Monkeyland was responsible for all post sound editorial, including Foley, ADR, final 5.1 surround mixing and stereo fold-downs for each episode. Let’s find out more from Parnell.

I happened to watch Episode 3, “Ojichan,” with my mom, and it was completely awkward. It epitomized the growing pains of the teenage years, which is what this series captures so well.
Well, that was an awkward one to mix as well. Maya (Erskine) and Anna (Konkle) were in the room with me while I was mixing that scene! Obviously, the show is an adult comedy that targets adults. We all ended up joking about it during the mix — especially about the added Foley sound that was recorded.

The beauty of this show is that it has the power to take something that might otherwise be thought of as, perhaps, inappropriate for some, and humanize it. All of us went through that period in our lives and I would agree that the show captures that awkwardness in a perfect and humorous way.

The writers/showrunners also star. I’m sure they were equally involved with post as well as other aspects of the show. How were they planning to use sound to help tell their story?
Parnell: In terms of the post schedule, I was brought on very early. We were doing spotting sessions to pre-locked picture, for Episode 1 and Episode 3. From the get-go, they were very specific about how they wanted the show to sound. I got the vibe that they were going for that Degrassi/Afterschool Special feeling but kept in the year 2000 — not the original Degrassi of the early ‘90s.

For example, they had a very specific goal for what they wanted the school to sound like. The first episode takes place on the first day of 7th grade and they asked if we could pitch down the school bell so it sounds clunky and have the hallways sound sparse. When class lets out, the hallway should sound almost like a relief.

Their direction was more complex than “see a school hallway, hear a school hallway.” They were really specific about what the school should sound like and specific about what the girls’ neighborhoods should sound like — Anna’s family in the show is a bit better off than Maya’s family so the neighborhood ambiences reflect that.

What were some specific sounds you used to capture the feel of middle school?
The show is set in 2000, and they had some great visual cues as throwbacks. In Episode 4 “Solo,” Maya is getting ready for the school band recital and she and her dad (a musician who’s on tour) are sending faxes back and forth about it. So we have the sound of the fax machine.

We tried to support the amazing recordings captured by the production sound team on-set by adding in sounds that lent a non-specific feeling to the school. This doesn’t feel like a California middle school; it could be anywhere in America. The same goes for the ambiences. We weren’t using California-specific birds. We wanted it to sound like Any Town, USA so the audience could connect with the location and the story. Our backgrounds editor G.W. Pope did a great job of crafting those.

For Episode 7, “AIM,” the whole thing revolves around Maya and Anna’s AOL instant messenger experience. The creatives on the show were dreading that episode because all they were working with was temp sound. They had sourced recordings of the AOL sound pack to drop into the video edit. The concern was how some of the Hulu execs would take it because the episode mostly takes place in front of a computer, while they’re on AOL chatting with boys and with each other. Adding that final layer of sound and then processing on the mix stage helped what might otherwise feel like a slow edit and a lagging episode.

The dial-up sounds, AOL sign-on sounds and instant messenger sounds we pulled from library. This series had a limited budget, so we didn’t do any field recordings. I’ve done custom recordings for higher-budget shows, but on this one we were supplementing the production sound. Our sound designer on PEN15 was Xiang Li, and she did a great job of building these scenes. We had discussions with the showrunners about how exactly the fax and dial-up should sound. This sound design is a mixture of Xiang Li’s sound effects editorial with composer Leo Birenberg’s score. The song is a needle drop called “Computer Dunk.” Pretty cool, eh?

For Episode 4, “Solo,” was the middle school band captured on-set? Or was that recorded in the studio?
There was production sound recorded but, ultimately, the music was recorded by the composer Leo Birenberg. In the production recording, the middle school kids were actually playing their parts, but it was rougher than you’d expect. The song wasn’t rehearsed, so it was like they were playing random notes, and that sounded a bit too bad. We had to hit the right level of “bad” to sell the scene. So Leo played individual instruments to make it sound like a class orchestra.

In terms of sound design, that was one of the more challenging episodes. I got a day to mix the show before the execs came in for playback. When I mixed it initially, I mixed in all of Leo’s stems — the brass, percussion, woodwinds, etc.

Anna pointed out that the band needed to sound worse than how Leo played it, more detuned and discordant. We ended up stripping out instruments and pitching down parts, like the flute part, so that it was in the wrong key. It made the whole scene feel much more like an awkward band recital.
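
Pitching a part down into the wrong key is, at its core, a pitch shift. As a rough sketch of the idea only (not the tool chain actually used on the show), a naive resampling-based shift in Python looks like this; note that professional pitch-shifters preserve duration, which this simple form does not:

```python
# Naive pitch-shift sketch: resampling by a semitone ratio shifts pitch (and,
# in this simple form, duration too). Real tools preserve duration; this only
# illustrates the idea of detuning a part into the "wrong" key.
import numpy as np

def pitch_shift_resample(audio, semitones):
    ratio = 2.0 ** (semitones / 12.0)            # frequency ratio per semitone
    src_idx = np.arange(0.0, len(audio), ratio)  # read samples faster/slower
    return np.interp(src_idx, np.arange(len(audio)), audio)

sr = 48000
t = np.arange(sr) / sr
flute = np.sin(2 * np.pi * 440.0 * t)            # A4 test tone for the "flute"
detuned = pitch_shift_resample(flute, -1.0)      # down one semitone, near 415Hz
```

Shifting a single part down a semitone like this, while the rest of the band stays in key, is what creates that discordant, out-of-tune feel.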

During the performance, Maya improvises a timpani solo. In real life, Maya’s father is a professional percussionist here in LA, and he hooked us up with a timpani player who re-recorded that part note-for-note, matching what she played on-screen. It sounded really good, but even though we went to the extreme of hiring a professional percussionist to re-perform the part, we ultimately stuck with production sound because it was Maya’s unique performance that made that scene work.

What were some of the unique challenges you had in terms of sound on PEN15?
On Episode 3, “Ojichan,” Maya is going through this process of “self-discovery” and she’s disconnecting her friendship from Anna. There’s a scene where they’re watching a video in class and Anna asks Maya why she missed the carpool that morning. That scene was like mixing a movie inside a show. I had to mix the movie, then futz that, and then mix that into the scene. On the close-ups of the 4:3 old-school television, the movie would be less futzed and more like you’re in the movie, and then we’d cut back to the girls and I’d have to futz it. Leo composed 20 different stems of music for that wildlife video. Mixing that scene was challenging.
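
“Futzing” generally means band-limiting and degrading audio so it reads as coming from a TV, phone or radio speaker. A minimal sketch of the idea, assuming a simple band-pass is enough to suggest a small speaker (the show’s actual futz chain is not documented here):

```python
# Rough "futz" sketch: band-limit audio so it reads as a small TV speaker.
# A generic illustration only -- not the actual plugin chain used on the show.
import numpy as np
from scipy.signal import butter, sosfilt

def futz(audio, sample_rate=48000, low_hz=300.0, high_hz=3400.0):
    """Band-pass the signal to mimic a small, cheap speaker."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Example: futz one second of noise standing in for the classroom video's audio.
rng = np.random.default_rng(0)
dry = rng.standard_normal(48000)
wet = futz(dry)  # most energy outside the 300-3400Hz band is removed
```

Cutting between a heavily futzed wide shot and a nearly unfutzed close-up, as described above, is then just a matter of automating how aggressively this kind of filtering is applied per cut.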

Then there was the Wild Things film in Episode 8, “Wild Things.” A group of kids go over to Anna’s boyfriend’s house to watch Wild Things on VHS. That movie was risqué, so if you had an older brother or older cousin, then you might have watched it in middle school. That was a challenging scene because everyone had a different idea of how the den should sound, how futzed the movie dialogue should be, how much of the actual film sound we could use, etc. There was a specific feel to the “movie night” that the producers were looking for. The key was mixing the movie into the background and bringing the awkward flirting/conversation between the kids forward.

Did you have a favorite scene for sound?
The season finale is one of the bigger episodes. There’s a middle school dance and so there’s a huge amount of needle-drop songs. Mixing the music was a lot of fun because it was a throwback to my youth.

Also, the “AIM” episode ended up being fun to work on, even though everyone was initially worried about it. I think the sound really brought that episode to life; sound lent itself to that episode more than any other element did.

The first episode was fun too. It was the first day of school and we see the girls getting ready at their own houses, getting into the carpool and then taking their first step, literally, together toward the school. There we dropped out all the sound and just played the Lit song “My Own Worst Enemy,” which gets cut off abruptly when someone on rollerblades hops in front of the girls. Then they talk about one of their classmates who grew boobs over the summer, and we have a big sound design moment when that girl turns around and then there’s another needle-drop track “Get the Job Done.” It’s all specifically choreographed with sound.

The series music supervisor Tiffany Anders did an amazing job of picking out the big needle-drops. We have a Nelly song for the middle school dance, we have songs from The Cranberries, Lit and a whole bunch more that fit the era and age group. Tiffany did fantastic work and was great to work with.

What were some helpful sound tools that you used on PEN15?
Our dialogue editor’s a huge fan of iZotope’s RX 7, as am I. Here at Monkeyland, we’re on the beta-testing team for iZotope. The products they make are amazing. It’s kind of like voodoo. You can take a noisy recording and with a click of a button pretty much erase the issues and save the dialogue. Within that tool palette, there are a lot of ways to fix a whole host of problems.

I’m a huge fan of Audio Ease’s Altiverb, which came in handy on the season finale. In order to create the feeling of being in a middle school gymnasium, I ran the needle-drop songs through Altiverb. There are some amazing reverb settings that allow you to adjust the levels going to the surround speakers specifically. You can literally EQ the reverb, taking out 200Hz, for instance, if the low end is making the music sound boomier than desired.

The lobby at Monkeyland is a large cinder-block room with super-high ceilings. It has acoustics similar to a middle school gymnasium. So, we captured a few impulse responses (IR), and I used those in Altiverb on a few lines of dialogue during the school dance in the season finale. I used that on a few of the songs as well. Like, when Anna’s boyfriend walks into the gym, there was supposed to be a Limp Bizkit needle-drop but that ended up getting scrapped at the last minute. So, instead there’s a heavy-metal song and the IR of our lobby really lent itself to that song.
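
Convolution reverbs like Altiverb work by convolving the dry signal with a recorded impulse response of a real space, which is why capturing IRs of the Monkeyland lobby could stand in for a gymnasium. A minimal sketch of that principle, using a synthetic decaying-noise IR as a stand-in for a real room capture:

```python
# Minimal convolution-reverb sketch: convolve the dry signal with an impulse
# response (IR), the same principle Altiverb and room-IR captures rely on.
# The IR here is synthetic decaying noise standing in for a real recording.
import numpy as np
from scipy.signal import fftconvolve

def convolve_ir(dry, ir, wet_mix=0.4):
    """Blend the dry signal with its convolution against the IR."""
    wet = fftconvolve(dry, ir)[: len(dry)]           # trim the reverb tail
    wet /= max(float(np.max(np.abs(wet))), 1e-12)    # normalize the wet signal
    return (1.0 - wet_mix) * dry + wet_mix * wet

sr = 48000
t = np.arange(sr) / sr
ir = np.random.default_rng(1).standard_normal(sr) * np.exp(-6.0 * t)  # ~1s decay
dry = np.sin(2 * np.pi * 440.0 * t)                  # test tone in place of a song
out = convolve_ir(dry, ir)
```

In practice the wet signal would also be EQ’d, for instance rolling off low end around 200Hz, and balanced per speaker channel, as described above.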

The show was a simple single-card Pro Tools HD mix — 256 tracks max. I’m a huge fan of Avid and the new Pro Tools 2018. My dialogue chain features Avid’s Channel Strip; McDSP SA-2; Waves De-Esser (typically bypassed unless needed); the McDSP 6030 Leveling Amplifier, which does a great job of handling extremely loud dialogue and preventing it from distorting; and Waves WNS.
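
A dialogue leveling amplifier, in broad strokes, measures short-term loudness and rides gain toward a target so loud lines don’t distort or jump out. This generic RMS-based sketch illustrates the concept only; it is not the McDSP 6030 algorithm:

```python
# Generic RMS-based gain rider illustrating what a dialogue leveling amplifier
# does: measure short-term loudness and nudge gain toward a target so loud
# lines don't jump out. A sketch of the concept, not any specific plugin.
import numpy as np

def level_dialogue(audio, sample_rate=48000, target_rms=0.1,
                   window_s=0.05, max_gain=4.0):
    win = max(1, int(window_s * sample_rate))
    out = np.empty_like(audio)
    for start in range(0, len(audio), win):
        chunk = audio[start:start + win]
        rms = float(np.sqrt(np.mean(chunk ** 2))) + 1e-12
        gain = min(target_rms / rms, max_gain)   # boost quiet, tame loud
        out[start:start + win] = chunk * gain
    return out
```

A real leveler smooths gain changes between windows to avoid zipper artifacts; the hard per-window gain here is only for clarity.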

On staff, we have a fabulous ADR mixer named Jacob Ortiz. The showrunners were really hesitant to record ADR, and whenever we could salvage the production dialogue we did. But when we needed ADR, Jacob did a great job of cueing that, and he uses the Sound In Sync toolkit, including EdiCue, EdiLoad and EdiMarker.

Any final thoughts you’d like to share on PEN15?
Yes! Watch the show. I think it’s awesome, but again, I’m biased. It’s unique and really funny. The showrunners Maya, Anna and Sam Zvibleman — who also directed four episodes — are three incredibly talented people. I was honored to be able to work with them and hope to be a part of anything they work on next.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Spider-Man Into the Spider-Verse: sound editors talk ‘magical realism’

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse isn’t your ordinary Spider-Man movie, from its story to its look to its sound. The filmmakers took a familiar story and turned it on its head a bit, letting audiences know that Spider-Man isn’t just one guy wearing that mask… or even a guy, or even from this dimension.

The film focuses on Miles Morales, a teenager from Brooklyn, struggling with all things teenager while also dealing with the added stress of being Spider-Man.

Geoff Rubay

Audio played a huge role in this story, and we recently reached out to Sony supervising sound editors Geoff Rubay and Curt Schulkey to dig in a bit deeper. The duo recently won an MPSE Award for Outstanding Achievement in Sound Editing — Feature Animation… industry peers recognizing the work that went into creating the sound for this stylized world.

Let’s find out more about the sound process on Spider-Man: Into the Spider-Verse, which won the Academy Award for Best Animated Feature.

What do you think is the most important element of this film’s sound?
Curt Schulkey: It is fun, it is bold, it has style and it has attitude. It has energy. We did everything we could to make the sound as stylistic and surprising as the imagery. We did that while supporting the story and the characters, which are the real stars of the movie. We had the opportunity to work with some incredibly creative filmmakers, and we did our best to surprise and delight them. We hope that audiences like it too.

Geoff Rubay: For me, it’s the fusion of the real and the fantastic. Right from the beginning, the filmmakers made it clear that it should feel believable — grounded — while staying true to the fantastic nature of the visuals. We did not hold back on the fantastic side, but we paid close attention to the story and made sure we were supporting that and not just making things sound awesome.

Curt Schulkey

How early did your team get involved in the film?
Rubay: We started on an SFX pre-design phase in late February for about a month. The goal was to create sounds for the picture editors and animators to work with. We ended up doing what amounted to a temp mix of some key sequences. The “Super Collider” was explored. We only worked on the first sequence for the collider, but the idea was that material could be recycled by the picture department and used in the early temp mixes until the final visuals arrived.

Justin Thompson, the production designer, was very generous with his time and resources early on. He spent several hours showing us work-in-progress visuals and concept art so that we would know where visuals would eventually wind up. This was invaluable. We were able to work on sounds long before we saw them as part of the movie. In the temp mix phase, we had to hold back or de-emphasize some of those elements because they were not relevant yet. In some cases, the sounds would not work at all with the storyboards or un-lit animation that was in the cut. Only when the final lit animation showed up would those sounds make sense.

Schulkey: I came onto the film in May, about 9.5 months before completion. We were neck-deep in following changes throughout our work. We were involved in the creation of sounds from the very first studio screening, through previews and temp mixes, right on to the end of the final mix. This sometimes gave us the opportunity to create sounds in advance of the images, or to influence the development of imagery and timing. Because they were so involved in building the movie, the directors did not always have time to discuss their needs with us, so we would speculate on what kinds of sounds they might need or want for events that they were molding visually. As Geoff said, the time that Justin Thompson spent with us was invaluable. The temp-mix process often gave us the opportunity to audition creations for the directors/producers.

What sort of direction did you receive from the directors?
Schulkey: Luckily, because of our previous experiences with producers Chris Miller and Phil Lord and editor Bob Fisher, we had a pretty good idea of their tastes and sensitivities, so our first attempts were usually pointed in the right direction. The three directors — Bob Persichetti, Peter Ramsey and Rodney Rothman — also provided input, so we were rich with direction.

As with all movies, we had hundreds of side discussions with the directors along the way about details, nuances, timing and so on. I think that the most important overall direction we got from the filmmakers was related to the dynamic arc of the movie. They wanted the soundtrack to be forceful but not so much that it hurt. They wanted it to breathe — quiet in some spots, loud in others, and they wanted it to be fun. So, we had to figure out what “fun” sounds like.

Rubay: This will sound strange, but we never did a spotting session for the movie. We just started our work and got feedback when we showed sequences or did temp mixes. Phil called when we started the pre-design phase and gave us general notes about tone and direction. He made it clear he did not want us to hold back, but he wanted to keep the film grounded. He explained the importance of the various levels of technology of different characters.

Peni Parker is from the 31st century, so her robot sidekick needed to sound futuristic. Scorpion is a pile of rusty metal. Prowler’s tech is appropriated from his surroundings and possibly with some help from Kingpin. We discussed the sound of previous Spider-Man movies and asked how much we needed to stay true to established sounds from those films. The direction was “not at all unless it makes sense.” We endeavored to make Peter Parker’s web-slings sound like the previous films. After that, we just “went for it.”

How was working on a film like this different than working on something live-action? Did it allow you more leeway?
Schulkey: In a live-action film, most or all of the imagery is shot before we begin working. Many aspects of the sound are already stamped in. On this film, we had a lot more creative involvement. At the start, a good percentage of the movie was still in storyboards, so if we expanded or contracted the timing of an event, the animators might adjust their work to fit the sounds. As the visual elements developed, we began creating layers of sound to support them.

For me, one of the best parts of an animated film’s soundtrack is that no sounds are imposed by the real world, as is often the case in live-action productions. In live-action, if a dialogue scene is shot on a city street in Brooklyn, there is a lot of uninteresting traffic noise built into the dialogue recordings.

Very few directors (or actors) want to lose the spontaneity of the original performance by re-recording dialogue in a studio, so we tweak, clean and process the dialogue to lessen unwanted noise, sometimes diminishing the quality of the recording. We sometimes make compromises with sound effects and music to support a not-so-ideal dialogue track. In an animated film, we don’t have that problem. Sound effects and ambiences can shine without getting in the way. This film has very quiet moments, which feel very natural and organic. That’s a pleasure to have in the movie.

Rubay: Everything Curt said! You have quite a bit of freedom because there is no “production track.” On the flip side, every sound that is added is just that — added. You have to be aware of that; more is not always better.

Spider-Man: Into the Spider-Verse is an animated film with a unique visual style. At times, we played the effects straight, as we might in a live-action picture, to ground it. Other times, we stripped away any notion of “reality.” Sometimes we would do both in the same scene as we cut from one angle to the next. Chris and Phil have always welcomed hard right angle turns, snapping sounds off on a cut or mixing and matching styles in close proximity. They like to do whatever supports the story and directs the audience. Often, we use sound to make your eye notice one thing or look away from another. Other times, we expand the frame, adding sounds outside of what you can see to further enhance the image.

There are many characters in the film. Can you talk about helping to create personality for each?
Rubay: There was a lot of effort made to differentiate the various “spider people” from each other. Whether it was through their web-slings or inherent technology, we were directed to give as much individual personality as possible to each character. Since that directive was baked in from the beginning, every department had it in mind. We paid attention to every visual cue. For example, Miles wears a particular pair of shoes — Nike Air Jordan 1s. My son, Alec Rubay, who was the Foley supervisor, is a real sneakerhead. He tracked down those shoes — very rare — and we recorded them, capturing every sound we could. When you hear Miles’s shoes squeak, you are hearing the correct shoes. Those shoes sound very specific. We applied that mentality wherever possible.

Schulkey: We took the opportunity to exploit the fact that some characters are from different universes in making their sound signatures different from one another. Spider-Ham is from a cartoon universe, so many of the sounds he makes are cartoon sounds. Sniffles, punches, swishes and other movements have a cartoon sensibility. Peni Parker, the anime character, is in a different sync than the rest of the cast, and her voice is somewhat more dynamic. We experimented with making Spider-Man Noir sound like he was coming from an old movie soundtrack, but that became obnoxious, so we abandoned the idea. Nicolas Cage was quite capable of conveying that aspect of the character without our help.

Because we wanted to ground characters in the real world, a lot of effort was put into attaching their voices to their images. Sync, of course, is essential, as is breathing. Characters in most animated films don’t do much breathing, but we added a lot of breaths, efforts and little stutters to add realism. That had to be done carefully. We had a very special, stellar cast and we wanted to maintain the integrity of their performances. I think that effort shows up nicely in some of the more intimate, personal scenes.

To create the unique look of this movie, the production sometimes chose to animate sections of the film “on twos.” That means that mouth movements change every other frame rather than every frame, so sync can be harder than usual to pinpoint. I worked closely with director Bob Persichetti to get dialogue to look in its best sync, doing careful reviews and special adjustments, as needed, on all dialogue in the film.

The main character in this Spider-Man thread is Miles Morales, a brilliant African-American/Puerto Rican Brooklyn teenager trying to find his way in his multi-cultural world. We took special care to show his Puerto Rican background with added Spanish-language dialogue from Miles and his friends. That required dialect coaches, special record sessions and thorough review.

The group ADR required a different level of care than most films. We created voices for crowds, onlookers and the normal “general” wash of voices for New York City. Our group voices covered many very specific characters and were cast in detail by our group leader, Caitlin McKenna. We took a very realistic approach to crowd activity. It had to be subtler than most live-action films to capture the dry nonchalance of Miles Morales’s New York.

Would you describe the sounds as realistic? Fantastical? Both?
Schulkey: The sounds are fantastically realistic. For my money, I don’t want the sounds in my movie to seem fantastical. I see our job as creating an illusion for the audience — the illusion that they are hearing what they are seeing, and that what they are seeing is real. This is an animated film, where nothing is actually real, but it has its own reality. The sounds need to live in the world we are watching. When something fantastical happens in the movie’s reality, we had to support that illusion, and we sometimes got to do fun stuff. I don’t mean to say that all sounds had to be realistic.

For example, we surmised that an actual supercollider firing up below the streets of Brooklyn would sound like 10,000 computer fans. Instead, we put together sounds that supported the story we were telling. The ambiences were as authentic as possible, including subway tunnels, Brooklyn streets and school hallways. Foley here was a great tool for giving reality to animated images. When Miles walks into the cemetery at night, you hear his footsteps on snow and sidewalk, gentle cloth movements and other subtle touches. This adds to a sense that he’s a real kid in a real city. Other times, we were in the Spider-Verse and our imagination drove the work.

Rubay: The visuals led the way, and we did whatever they required. There are some crazy things in this movie. The supercollider is based on a real thing so we started there. But supercolliders don’t act as they are depicted in the movie. In reality, they sound like a giant industrial site, fans and motors, but nothing so distinct or dramatic, so we followed the visuals.

Spider-sense is a kind of magical realism that supports, informs, warns, communicates, etc. There is no realistic basis for any of that, so we went with directions about feelings. Some early words of direction were “warm,” “organic,” “internal” and “magical.” Because there are no real sounds for those words, we created sounds that conveyed the emotional feelings of those ideas to the audience.

The portals that allow spider-people to move between dimensions are another example. Again, there was no real-world event to link to. We saw the visuals and assumed it should be a pretty big deal, real “force of nature” stuff. However, it couldn’t simply be big. We took big, energetic sounds and glued them onto what we were seeing. Of course, sometimes people are talking at the same time, so we shifted the frequency center of the moment to clear space for the dialogue. As music is almost always playing, we had to look for opportunities within the spaces it left.


Can you talk about working on the action scenes?
Rubay: For me, when the action starts, the sound had to be really specific. There is dialogue for sure. The music is often active. The guiding philosophy for me at that point is not “Keep adding until there is nothing left to add,” rather, it’s, “We’re done when there is nothing left to strip out.” Busy action scene? Broom the backgrounds away. Usually, we don’t even cut BGs in a busy action scene, but, if we do, we do so with a skeptical eye. How can we make it more specific? Also, I keep a keen eye on “scale.” One wrong, small detail sound, no matter how cool or interesting, will get the broom if it throws off the scale. Sometimes everything might be sounding nice and big; impressive but not loud, just big, and then some small detail creeps in and spoils it. I am constantly looking out for that.

The “Prowler Chase” scene was a fun exploration. There are times where the music takes over and runs; we pull out every sound we can. Other times, the sound effects blow over everything. It is a matter of give and take. There is a truck/car/prowler motorcycle crash that turns into a suspended slo-mo moment. We had to decide which sounds to play where and when. Its stripped-down nature made it among my favorite moments in the picture.

Can you talk about the multiple universes?
Rubay: The multiverse presented many challenges. It usually manifested itself as a portal or something we move between. The portals were energetic and powerful. The multiverse “place” was something that we used as a quiet place. We used it to provide contrast because, usually, there was big action on either side.

A side effect of the multiple universes interacting was a buildup or collision/overlap. When universes collide or overlap, matter from each tries to occupy the same space. Visually, this created some very interesting moments. We referred to the multi-colored prismatic-looking stuff as “Picasso” moments. The supporting sound needed to convey “force of nature” and “hard edges,” but couldn’t be explosive, loud or gritty. Ultimately, it was a very multi-layered sound event: some “real” sounds teamed with extreme synthesis. I think it worked.

Schulkey: Some of the characters in the movie are transported from another dimension into the dimension of the movie, but their bodies rebel, and from time to time their molecules try to jump back to their native dimension, causing “glitching.” We developed, with a combination of plug-ins, blending, editing and panning, a signature sound that served to signal glitching throughout the movie, and was individually applied for each iteration.

What stands out in your mind as the most challenging scenes audio wise?
Rubay: There is a very quiet moment between Miles and his dad when dad is on one side of the door and Miles is on the other. It’s a very quiet, tender one-way conversation. When a movie gets that quiet every sound counts. Every detail has to be perfect.

What about the Dolby Atmos mix? How did that enhance the film? Can you give a scene or two as an example?
Schulkey: This film was a native Atmos mix, meaning that the primary final mix was directly in the Atmos format, as opposed to making a 7.1 mix and then going back to re-mix sections using the Atmos format.

The native Atmos mix allowed us a lot more sonic room in the theater. This is an extremely complex and busy mix, heavily driven by dialogue. By moving the score out into the side and surround speakers — away from the center speaker — we were able to make the dialogue clearer and still have a very rich and exciting score. Sonic movement is much more effective in this format. When we panned sounds around the room, it felt more natural than in other formats.

Rubay: Atmos is fantastic. Being able to move sounds vertically creates so much space, so much interest, that might otherwise not be there. Also, the level and frequency response of the surround channels makes a huge difference.

You guys used Avid Pro Tools for editing, can you mention some other favorite tools you employed on this film?
Schulkey: The Delete key and the Undo key.

Rubay: Pitch ’n’ Time, Envy, reverbs by Exponential Audio, and recording rigs and microphones of all sorts.

What haven’t I asked that’s important?
Our crew! Just in case anyone thinks this can be done by two people, it can’t.
– re-recording mixers Michael Semanick and Tony Lamberti
– sound designer John Pospisil
– dialogue editors James Morioka and Matthew Taylor
– sound effects editors David Werntz, Kip Smedley, Andy Sisul, Chris Aud, Donald Flick, Benjamin Cook, Mike Reagan and Ando Johnson
– Foley mixer Randy Singer
– Foley artists Gary Hecker, Michael Broomberg and Rick Owens

Warner Bros. Studio Facilities ups Kim Waugh, hires Duke Lim

Warner Bros. Studio Facilities in Burbank has promoted long-time post exec Kim Waugh to executive VP, worldwide post production services. They have also hired Duke Lim to serve as VP, post production sound at the studio.

In his new role, Waugh will be reporting to Jon Gilbert, president, worldwide studio facilities, Warner Bros. and will continue to lead the post creative services senior management team, overseeing all marketing, sales, talent management, facilities and technical operations across all locations. Waugh has been instrumental in expanding the business beyond the studio’s Burbank-based headquarters, first to Soho, London in 2012 with the acquisition of Warner Bros. De Lane Lea and then to New York in the 2015 acquisition of WB Sound in Manhattan.

The group supports all creative post production elements, ranging from sound mixing, editing and ADR to color correction and restoration, for Warner Bros.’ clients worldwide. Waugh’s creative services group features a vast array of award-winning artists, including the Oscar-nominated sound mixing team behind Warner Bros. Pictures’ A Star is Born.

Reporting to Waugh, Lim is responsible for overseeing the post sound creative services supporting Warner Bros.’ film and television clients on a day-to-day basis across the studio’s three facilities.

Duke Lim

Says Gilbert, “At all three of our locations, Kim has attracted award-winning creative talent who are sought out for Warner Bros. and third-party projects alike. Bringing in seasoned post executive Duke Lim will create an even stronger senior management team under Kim.”

Waugh most recently served as SVP, worldwide post production services, Warner Bros. Studio Facilities, a post he had held since 2007. In this position, he managed the post services senior management team, overseeing all talent, sales, facilities and operations on a day-to-day basis, with a primary focus on servicing all Warner Bros. Studios’ post sound clients. Prior to joining Warner Bros. as VP, post production services in 2004, Waugh worked at Ascent Media Creative Sound Services, where he served as SVP of sales and marketing, managing sales and marketing for the company’s worldwide divisional facilities. Prior to that, he spent more than 10 years at Soundelux, holding posts as president of Soundelux Vine Street Studios and Signet Soundelux Studios.

Lim has worked in the post production industry for more than 25 years, most recently posted at the Sony Sound Department, which he joined in 2014 to help expand the creative team and total number of mix stages. He began his career at Skywalker Sound South serving in various positions until their acquisition by Todd-AO in 1995, when Lim was given the opportunity to move into operations and began managing the mixing facilities for both its Hollywood location and the Todd-AO West studio in Santa Monica.

CAS and MPSE honor audio post pros and their work

By Mel Lambert

With a BAFTA win and high promise for the upcoming Oscar Awards, the sound team behind Bohemian Rhapsody secured a clean sweep at both the Cinema Audio Society (CAS) and Motion Picture Sound Editors (MPSE) ceremonies here in Los Angeles last weekend.

Paul Massey

The 55th CAS Awards also honored sound mixer Lee Orloff with a Cinema Audio Society Career Achievement Award, while director Steven Spielberg received its Cinema Audio Society Filmmaker Award. And at the MPSE Awards, director Antoine Fuqua accepted the 2019 Filmmaker Award, while supervising sound editor Stephen H. Flick secured the MPSE Career Achievement honor.

Re-recording mixer Paul Massey — accepting the CAS Award for Outstanding Sound Mixing Motion Picture-Live Action on behalf of his fellow dubbing mixers Tim Cavagin and Niv Adiri, together with production mixer John Casali — thanked Bohemian Rhapsody’s co-executive producer and band members Roger Taylor and Brian May for “trusting me to mix the music of Queen.”

The film topped a nominee field that also included A Quiet Place, A Star is Born, Black Panther and First Man; for several years running, the CAS winner in the feature-film category has also gone on to win the Oscar for sound mixing.

Isle of Dogs secured a CAS Award in the animation category, which also included Incredibles 2, Ralph Breaks the Internet, Spider-Man: Into the Spider-Verse and The Grinch. The sound-mixing team included original dialogue mixer Darrin Moore and re-recording mixers Christopher Scarabosio and Wayne Lemmer, together with scoring mixers Xavier Forcioli and Simon Rhodes and Foley mixer Peter Persaud.

Free Solo won a documentary award for production mixer Jim Hurst, re-recording mixers Tom Fleischman and Ric Schnupp, together with scoring mixer Tyson Lozensky, ADR mixer David Boulton and Foley mixer Joana Niza Braga.

Finally, American Crime Story: The Assassination of Gianni Versace (Part 1) The Man Who Would Be Vogue, The Marvelous Mrs. Maisel: Vote For Kennedy, Vote For Kennedy and Anthony Bourdain: Parts Unknown (Bhutan) won CAS Awards within various broadcast sound categories.

Steven Spielberg and Bradley Cooper

The CAS Filmmaker Award was presented to Steven Spielberg by fellow director Bradley Cooper. This followed tributes from regular members of Spielberg’s sound team, including production sound mixer Ron Judkins plus re-recording mixers Andy Nelson and Gary Rydstrom, who quipped: “We spent so much money on Jurassic Park that [Steven] had to shoot Schindler’s List in black & white!”

“Through your talent, [sound editors and mixers] allow the audience to see with their ears,” Spielberg acknowledged, while stressing the full sonic and visual impact of a theatrical experience. “There’s nothing like a big, dark theater,” he stated. He added that he still believes that movie theaters are the best environment in which to fully enjoy his cinematic creations.

Upon receiving his Career Achievement Award from sound mixer Chris Noyes and director Dean Parisot, production sound mixer Lee Orloff acknowledged the close collaboration that needs to exist between members of the filmmaking team. “It is so much more powerful than the strongest wall you could build,” he stated, recalling a 35-year career that spans nearly 80 films.

Lee Orloff

Outgoing CAS president Mark Ulano presented the President’s Award to leading Foley mixer MaryJo Lang, while the CAS Student Award went to Anna Wozniewicz of Chapman University. Finalists included Maria Cecilia Ayalde Angel of Pontificia Universidad Javeriana, Bogota, Allison Ng of USC, Bo Pang of Chapman University and Kaylee Yacono of Savannah College of Art and Design.

Finally, the CAS Outstanding Product Awards went to Dan Dugan Sound Design for its Dugan Automixing in the Sound Devices 633 Compact Mixer, and to iZotope for its RX 7 audio repair software.

The CAS Awards ceremony was hosted by comedian Michael Kosta.

 

Motion Picture Sound Editors Awards

During the 66th Annual Golden Reels, outstanding achievement in sound editing awards were presented in 23 categories, encompassing feature films, long- and short-form television, animation, documentaries, games, special venue and other media.

The Americans, Atlanta, The Marvelous Mrs. Maisel and Westworld figured prominently among the honored TV series.

Following introductions by re-recording mixer Steve Pederson and supervising sound editor Mandell Winter, director/producer Michael Mann presented the 2019 MPSE Filmmaker Award to Antoine Fuqua, while Academy Award-winning supervising sound editor Ben Wilkins presented the MPSE Career Achievement Award to fellow supervising sound editor Stephen H. Flick, who also serves as professor of cinematic arts at the University of Southern California.

Antoine Fuqua

“We celebrate the creation of entertainment content that people will enjoy for generations to come,” MPSE president Tom McCarthy stated in his opening address. “As new formats appear and new ways to distribute content are developed, we need to continue to excel at our craft and provide exceptional soundtracks that heighten the audience experience.”

As Pederson stressed during his introduction to the MPSE Filmmaker Award, Fuqua “counts on sound to complete his vision [as a filmmaker].” “His films are stylish and visceral,” added Winter, who along with Pederson has worked on a dozen films for the director during the past two decades.

“He is a director who trusts his own vision,” Winter confirmed. “Antoine loves a layered soundtrack. And ADR has to be authentic and true to his artistic intentions. He is a bona fide storyteller.”

Four-time Oscar-nominee Mann stated that the honored director “always elevates everything he touches; he uses sound design and music to its fullest extent. [He is] a director who always pushes the limits, while evolving his art.”

Pre-recorded tributes to Fuqua came from actor Chris Pratt, who starred in The Magnificent Seven (2016). “Nobody deserves [this award] more,” he stated. Actor Mark Wahlberg, who starred in Shooter (2007), and producer Jerry Bruckheimer were also featured.

Stephen Hunter Flick

During his 40-year career in the motion picture industry, while working on some 150 films, Stephen H. Flick has garnered two Oscar wins, for Speed (1994) and Robocop (1987), together with nominations for Total Recall (1990), Die Hard (1988) and Poltergeist (1982).

The award for Outstanding Achievement in Sound Editing – Animation Short Form went to Overwatch – Reunion from Blizzard Entertainment, headed by supervising sound editor Paul Menichini. The Non-Theatrical Animation Long Form award went to NextGen from Netflix, headed by supervising sound editors David Acord and Steve Slanec.

The Feature Animation award went to the Oscar-nominated Spider-Man: Into the Spider-Verse from Sony Pictures Entertainment/Marvel, headed by supervising sound editors Geoffrey Rubay and Curt Schulkey. The Non-Theatrical Documentary award went to Searching for Sound — Islandman and Veyasin from Karga Seven Pictures/Red Bull TV, headed by supervising sound editor Suat Ayas. Finally, the Feature Documentary award was a tie between Free Solo from National Geographic Documentary Films, headed by supervising sound editor Deborah Wallach, and They Shall Not Grow Old from Wingnut Films/Fathom Events/Warner Bros., headed by supervising sound editors Martin Kwok, Brent Burge, Melanie Graham and Justin Webster.

The Outstanding Achievement in Sound Editing — Music Score award also went to Spider-Man: Into the Spider-Verse, with music editors Katie Greathouse and Catherine Wilson, while the Musical award went to Bohemian Rhapsody from GK Films/Fox Studios, with supervising music editor John Warhurst and music editor Neil Stemp. The Dialogue/ADR award also went to Bohemian Rhapsody, with supervising ADR/dialogue editors Nina Hartston and Jens Petersen, while the Effects/Foley award went to A Quiet Place from Paramount Pictures, with supervising sound editors Ethan Van der Ryn and Erik Aadahl.

The Student Film/Verna Fields Award went to Facing It from National Film and Television School, with supervising sound designer/editor Adam Woodhams.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television, and branded content for clients such as NBC, Cosmopolitan and Vice, among others.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy

Karol Urban is president of CAS, others named to board

As a result of the Cinema Audio Society Board of Directors election, Karol Urban will replace CAS president Mark Ulano, whose term has come to an end. Steve Venezia will replace treasurer Peter Damski, who opted not to run for re-election.

“I am so incredibly honored to have garnered the confidence of our esteemed members,” says Urban. “After years of serving under different presidents and managing the content for the CAS Quarterly, I have learned so much about the achievements, interests, talents and concerns of our membership. I am excited to be given this new platform to celebrate the achievements and herald new opportunities to serve this incredibly dynamic and talented community.”

For 2019, the Executive Committee will include newly elected Urban and Venezia as well as VP Phillip W. Palmer, CAS, and secretary David J. Bondelevitch, CAS, who were not up for election.

The incumbent CAS board members (Production) who were re-elected are Peter J. Devlin, CAS; Lee Orloff, CAS; and Jeffrey W. Wexler, CAS. They will be joined by newly elected Amanda Beggs, CAS, and Mary H. Ellis, CAS, who are taking the seats of outgoing board members Chris Newman, CAS, and Lisa Pinero, CAS.

Incumbent board members (Post Production) who were re-elected are Bob Bronow, CAS, and Mathew Waters, CAS. They will be joined by newly elected board members Onnalee Blank, CAS, and Mike Minkler, CAS, who will be taking the seats of Urban and Steve Venezia, CAS, who are now officers.

Continuing to serve, as their terms were not up for re-election, are Willie Burton, CAS, and Glen Trew, CAS, for production, and Tom Fleischman, CAS; Doc Kane, CAS; Sherry Klein, CAS; and Marti Humphrey, CAS, for post production.

The new board will be installed at the 55th Annual CAS Awards on Saturday, February 16.

Sundance: Audio post for Honey Boy and The Death of Dick Long

By Jennifer Walden

Brent Kiser, an Emmy Award-winning supervising sound editor/sound designer/re-recording mixer at LA’s Unbridled Sound, is no stranger to the Sundance Film Festival. His resume includes such Sundance premieres as Wild Wild Country, Swiss Army Man and An Evening with Beverly Luff Linn.

He’s the only sound supervisor to work on two films that earned Dolby fellowships: Swiss Army Man back in 2016 and this year’s Honey Boy, which premiered in the US Dramatic Competition. Honey Boy is a biopic of actor Shia LaBeouf’s damaging Hollywood upbringing.

Brent Kiser (in hat) and Will Files mixing Honey Boy.

Also showing this year, in the Next category, was The Death of Dick Long. Kiser and his sound team once again collaborated with director Daniel Scheinert. For this dark comedy, the filmmakers used sound to help build tension as a group of friends tries to hide the truth of how their buddy Dick Long died.

We reached out to Kiser to find out more.

Honey Boy was part of the Sundance Institute’s Feature Film Program, which is supported by several foundations including the Ray and Dagmar Dolby Family Fund. You mentioned that this film earned a grant from Dolby. How did that grant impact your approach to the soundtrack?
For Honey Boy, Dolby gave us the funds to finish in Atmos. It allowed us to bring MPSE award-winning re-recording mixer Will Files on to mix the effects while I mixed the dialogue and music. We mixed at Sony Pictures Post Production on the Kim Novak stage. We got time and money to be on a big stage for 11 days — a five-day pre-dub and six-day final mix.

That was huge because the film opens up with these massive-robot action/sci-fi sound sequences and it throws the audience off the idea of this being a character study. That’s the juxtaposition, especially in the first 15 to 20 minutes. It’s blurring the reality between the film world and real life for Shia because the film is about Shia’s upbringing. Shia LaBeouf wrote the film and plays his father. The story focuses on the relationship of young actor Otis Lort (Lucas Hedges) and his alcoholic father James.

The story goes through Shia’s time on Disney Channel’s Even Stevens series and then on Transformers, and looks at how this lifestyle affected him. His father was an ex-junkie, sex offender and ex-rodeo clown who would just push his son. By age 12, Shia was drinking, smoking weed and smoking cigarettes — all supplied to him by his dad. Shia is isolated and doesn’t have too many friends. He’s not around his mother that much.

This year is the first year that Shia has been sober since age 12. So this film is one big therapeutic movie for him. The director Alma Har’el comes from an alcoholic family, so she’s able to understand where Shia is coming from. Working with Alma is great. She wants to be in every part of the process — pick each sound and go over every bit to make sure it’s exactly what she wants.

Honey Boy director Alma Har’el.

What were director Alma Har’el’s initial ideas for the role of sound in Honey Boy?
They were editing this film for six months or more, and I came on board around mid-edit. I saw three different edits of the film, and they were all very different.

Finally, they settled on a cut that felt really nice. We had spotting sessions before they locked, and we were working on creating the environment of the motel where Otis and James were staying. We were also working on creating the sound of Otis being on-set. It had to feel like we were watching a film, and when someone screams, “Cut!” it had to feel like we go back into reality. Being able to play with those juxtapositions in a sonic way really helped out. We would give it a cinematic sound and then pull back into a cinéma vérité-type sound. That was the big sound motif in the movie.

We worked really close with the composer Alex Somers. He developed this little crank sound that helped to signify Otis’ dreams and the turning of events. It makes it feel like Otis is a puppet with all his acting jobs.

There’s also a harness motif. In the very beginning you see adult Otis (Lucas Hedges) standing in front of a plane that has crashed and then you hear things coming up behind him. They are shooting missiles at him and they blow up and he gets yanked back from the explosions. You hear someone say, “Cut!” and he’s just dangling in a body harness about 20 feet up in the air. They reset, pull him down and walk him back. We go through a montage of his career, the drunkenness and how crazy he was, and then him going to therapy.

In the session, he’s told he has PTSD caused by his upbringing and he says, “No, I don’t.” It kicks to the title and then we see young Otis (Noah Jupe) sitting there waiting, and he gets hit by a pie. He then gets yanked back by that same harness, and he dangles for a little while before they bring him down. That is how the harness motif works.

There’s also a chicken motif. Growing up, Otis has a chicken named Henrietta La Fowl, and during the dream sequences the chicken leads Otis to his father. So we had to make a voice for the chicken. We had to give the chicken a dreamy feel. And we used the old-school Yellow Sky wind to give it a Western feel and add dreaminess to it.

On the dub stage with director Alma Har’el and her team, plus Will Files (front left) and Andrew Twite (front right).

Andrew Twite was my sound designer. He was also with me on Swiss Army Man. He was able to make some rich and lush backgrounds for that. We did a lot of recording in our neighborhood of Highland Park, which is much like Echo Park where Shia grew up and where the film is based. So it’s Latin-heavy communities with taco trucks and that fun stuff. We gave it that gritty sound to show that, even though Otis is making $8,000 a week, they’re still living on the other side of the tracks.

When Otis is in therapy, it feels like Malibu. It’s nicer, quieter, and not as stressful versus the motel when Otis was younger, which is more pumped up.

My dialogue editor was Elliot Thompson, and he always does a great job for me. The production sound mixer Oscar Grau did a phenomenal job of capturing everything at all moments. There was no MOS (picture without sound). He recorded everything and he gave us a lot of great production effects. The production dialogue was tricky because in many of the scenes young Otis isn’t wearing a shirt and there are no lav mics on him. Oscar used plant mics and booms and captured it all.

What was the most challenging scene for sound design on Honey Boy?
The opening, the intro and the montage right up front were the most challenging. We recut the sound for Alma several different ways. She was great and always had moments of inspiration. We’d try different approaches and the sound would always get better, but we were on a time crunch and it was difficult to get all of those elements in place in the way she was looking for.

Honey Boy on the mix stage at Sony’s Kim Novak Theater.

In the opening, you hear the sound of this mega-massive robot (an homage to a certain film franchise that Shia has been part of in the past, wink, wink). You hear those sounds coming up over the production cards on a black screen. Then it cuts to adult Otis standing there as we hear this giant laser gun charging up. Otis goes, “No, no, no, no, no…” in that quintessential Shia LaBeouf way.

Then, there’s a montage over Missy Elliott’s “My Struggles,” and the footage goes through his career. It’s a music video montage with sound effects, and you see Otis on set and off set. He’s getting sick, and then he’s stuck in a harness, getting arrested in the movie and then getting arrested in real life. The whole thing shows how his life is a blur of film and reality.

What was the biggest challenge in regards to the mix?
The most challenging aspect of the mix, on Will [Files]’s side of the board, was getting those monsters in the pocket. Will had just come off of Venom and Halloween so he can mix these big, huge, polished sounds. He can make these big sound effects scenes sound awesome. But for this film, we had to find that balance between making it sound polished and “Hollywood” while also keeping it in the realm of indie film.

There was a lot of back and forth to dial in the effects, to make them sound polished but still with an indie storytelling feel. Reel one took us two days on stage to get through. We even spent some time on it on the last mix day as well. That was the biggest challenge to mix.

The rest of the film is more straightforward. The challenge on dialogue was to keep it sounding dynamic instead of smoothed out. A lot of Shia’s performance plays in the realm of vocal dynamics. We didn’t want to make the dialogue lifeless. We wanted to have the dynamics in there, to keep the performance alive.

We mixed in Atmos and panned sounds into the ceiling. I took a lot of the composer’s stems and remixed those in Atmos, spreading all the cues out in a pleasant way and using reverb to help glue it together in the environment.

 

The Death of Dick Long

Let’s look at another Sundance film you’ve worked on this year. The Death of Dick Long is part of the Next category. What were director Daniel Scheinert’s initial ideas for the role of sound on this film?
Daniel Scheinert always shows up with a lot of sound ideas, and most of those were already in place because of picture editor Paul Rogers from Parallax Post (which is right down the hall from our studio Unbridled Sound). Paul and all the editors at Parallax are sound designers in their own right. They’ll give me an AAF of their Adobe Premiere session and it’ll be 80 tracks deep. They’re constantly running down to our studio like, “Hey, I don’t have this sound. Can you design something for me?” So, we feed them a lot of sounds.

The Death of Dick Long

We played with the bug sounds the most. They shot in Alabama, where both Paul and Daniel are from, so there were a lot of cicadas and bugs. It was important to make the distinction of what the bugs sounded like in the daytime versus what they sounded like in the afternoon and at night. Paul did a lot of work to make sure that the balance was right, so we didn’t want to mess with that too much. We just wanted to support it. The backgrounds in this film are rich and full.

This film is crazy. It opens with a Creed song and ends with a Nickelback song, as a sort of joke. They wanted to show a group of guys who never really made much of themselves. These guys are in a band called Pink Freud, and they have band practice.

The film starts with them doing dumb stuff, like setting off fireworks and catching each other on fire — just messing around. Then it cuts to Dick (Daniel Scheinert) in the back of a vehicle and he’s bleeding out. His friends just dump him at the hospital and leave. The whole mystery of how Dick dies unfolds throughout the course of the film. The two main guys are Earl (Andre Hyland) and Zeke (Michael Abbott, Jr.).

The Foley on this film — provided by Foley artist John Sievert of JRS Productions — plays a big role. Often, Foley is used to help us get in and out of the scene. For instance, the police are constantly showing up to ask more questions and you hear them sneaking in from another room to listen to what’s being said. There’s a conversation between Zeke and his wife Lydia (Virginia Newcomb) and he’s asking her to help him keep information from the police. They’re in another room but you hear their conversation as the police are questioning Dick Long’s wife, Jane (Jess Weixler).

We used sound effects to help increase the tension when needed. For example, there’s a scene where Zeke is doing the laundry and his wife calls saying she’s scared because there are murderers out there, and he has to come and pick her up. He knows it’s him but he’s trying to play it off. As he is talking to her, Earl is in the background telling Zeke what to say to his wife. As they’re having this conversation, the washing machine out in the garage keeps getting louder and it makes that scene feel more intense.

Director Daniel Scheinert (left) and Puddle relaxing during the mix.

“The Dans” — Scheinert and Daniel Kwan — are known for Swiss Army Man. That film used sound in a really funny way, but it was also relevant to the plot. Did Scheinert have the same open mind about sound on The Death of Dick Long? Also, were there any interesting recording sessions you’d like to talk about?
There were no farts this time, and it was a little more straightforward. Manchester Orchestra did the score on this one too, but it’s also more laid back.

For this film, we really wanted to depict a rural Alabama small-town feel. We did have some fun with a few PA announcements, but you don’t hear those clearly. They’re washed out. Earl lives in a trailer park, so there are trailer park fights happening in the background to make it feel more like Jerry Springer. We had a lot of fun doing that stuff. Sound effects editor Danielle Price cut that scene, and she did a really great job.

What was the most challenging aspect of the sound design on The Death of Dick Long?
I’d say the biggest things were the backgrounds, engulfing the audience in this area and making sure the bugs feel right. We wanted to make sure there was off-screen movement in the police station and other locations to give them all a sense of life.

The whole movie was about creating a sense of intensity. I remember showing it to my wife during one of our initial sound passes, and she pulled the blanket over her face while she was watching it. By the end, only her eyes were showing. These guys keep messing up and it’s stressful. You think they’re going to get caught. So the suspense that the director builds in — not being serious but still coming across in a serious manner — is amazing. We were helping them to build that tension through backgrounds, music and dropouts, and pushing certain everyday elements (like the washing machine) to create tension in scenes.

What scene in this film best represents the use of sound?
I’d say the laundry scene. Also, in the opening scene you hear the band playing in the garage and the perspective slowly gets closer and closer.

During the film’s climax, when you find out how Dick dies, we’re pulling down the backgrounds that we created. For instance, when you’re in the bedroom you hear their crappy fan. When you’re in the kitchen, you hear the crappy compressor on the refrigerator. It’s all about playing up these “bad” sounds to communicate the hopelessness of the situation they are living in.

I want to shout out all of my sound editors for their exceptional work on The Death of Dick Long. There was Jacob “Young Thor” Flack and Elliot Thompson, and Danielle Price who did amazing backgrounds. Also, a shout out to Ian Chase for help on the mix. I want to make sure they share the credit.

I think there needs to be more recognition of the contribution of sound and the sound departments on a film. It’s a subject that needs to be discussed, particularly in these somber days following the death of Oscar-winning re-recording mixer Gregg Rudloff. He was the nicest guy ever. I remember being an intern on the sound stage and he always took the time to talk to us and give us advice. He was one of the good ones.

When post sound gets credited after the on-set caterers, it doesn’t do us justice. On Swiss Army Man, initially I had my own title card because The Dans wanted to give me a title card that said, “Supervising Sound Editor Brent Kiser,” but the Directors Guild took it away. They said it wasn’t appropriate. Their reasoning is that if they give it to one person then they’ll have to give it to everybody. I get it — the visual effects department is new on the block. They wrote their contract knowing what was going on, so they get a title card. But try watching a film on mute and then talk to me about the importance of sound. That needs to start changing, for the sheer fact of burnout and legacy.

At the end of the day, you worked so hard to get these projects done. You’re taking care of someone else’s baby and helping it to grow up to be this great thing, but then we’re only seen as the hired help. Or, we never even get a mention. There is so much pressure and stress on the sound department, and I feel we deserve more recognition for what we give to a film.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

AES/SMPTE panel: Spider-Man: Into the Spider-Verse sound

By Mel Lambert

As part of its successful series of sound showcases, a recent joint meeting of the Los Angeles Section of the Audio Engineering Society and SMPTE’s Hollywood Section focused on the soundtrack of the animated feature Spider-Man: Into the Spider-Verse, which has garnered several Oscar, BAFTA, CAS and MPSE award nominations, plus a Golden Globe win.

On January 31 at Sony Pictures Studios’ Kim Novak Theater in Culver City, many gathered to hear a panel discussion between the film’s sound and picture editors and re-recording mixers. Spider-Man: Into the Spider-Verse was co-directed by Peter Ramsey, Robert Persichetti Jr. and Rodney Rothman, and produced by Phil Lord and Chris Miller, the creative minds behind The Lego Movie and 21 Jump Street.

The panel

The Sound Showcase panel included supervising sound editors Geoffrey Rubay and Curt Schulkey, re-recording mixer/sound designer Tony Lamberti, re-recording mixer Michael Semanick and associate picture editor Vivek Sharma. The Hollywood Reporter’s Carolyn Giardina moderated. The event concluded with a screening of Spider-Man: Into the Spider-Verse, which represents a different Spider-Man Universe, since it introduces Brooklyn teen Miles Morales and the expanding possibilities of the Spider-Verse, where more than one entity can wear the arachnid mask.

Following the screening of an opening sequence from the animated feature, Rubay acknowledged that the film’s producers wanted a different take on the Spider-Man character, one rooted in the Marvel comic books but with a reference to previous live-action movies in the franchise. “They wanted us to make more of the period in which the new film is set,” he told the standing-room audience in the same dubbing stage where the soundtrack was re-recorded.

“[EVPs] Phil Lord and Chris Miller have a specific style of soundtrack that they’ve developed,” stated Lamberti, “and so we premixed to get that overall shape.”

“The look is unique,” conceded Semanick, “and our mix needed to match that and make it sound like a comic book. It couldn’t be too dynamic; we didn’t want to assault the audience, but still make it loud here and softer there.”

Full house

“We also kept the track to its basics,” Rubay added, “and didn’t add a sound for every little thing. If the soundtrack had been as complicated as the visuals, the audience’s heads would have exploded.”

“Yes, simpler was often better,” Lamberti confirmed, “to let the soundtrack tell the story of the visuals.”

In terms of balancing sound effects against dialog, “We did a lot of experimentation and went with what seemed the best solution,” Semanick said. “We kept molding the soundtrack until we were satisfied.” As Lamberti confirmed: “It was always a matter of balancing all the sound elements, using trial and error.”

Nominated for a Cinema Audio Society Award in the Motion Picture — Animated category, the film credited Brian Smith, Aaron Hasson and Howard London as original dialogue mixers, with Sam Okell as scoring mixer and Randy K. Singer as Foley mixer. The crew also included sound designer John Pospisil, Foley supervisor Alec G. Rubay and SFX editors Kip Smedley, Andy Sisul, David Werntz, Christopher Aud, Ando Johnson, Benjamin Cook, Mike Reagan and Donald Flick.

During picture editorial, “we lived with many versions until we got to the sound,” explained Sharma. “The premix was fantastic and worked very well. Visuals are important, but sound fulfills a complementary role. Dialogue is always key; the audience needs to hear what the characters say!”

“We present ideas and judge the results until everybody is happy,” said Semanick. “[Writer/producer] Phil Lord was very good at listening to everybody; he made the final decision, but deferred to the directors. ‘Maybe we should drop the music?’ ‘Does the result still pull the audience into the music?’ We worked until the elements worked very well together.”

The lead character’s “Spidey Sense” was also discussed. As co-supervisor Schulkey explained: “Our early direction was that it was an internal feeling … like a warm, fuzzy feeling. But warm and fuzzy didn’t cut through the music. In the end there was not just a single Spidey Sense — it was never the same twice. The web slings were a classic sound that we couldn’t get too far from.”

“And we used [Dolby] Atmos to spin and pan those sounds around the room,” added Lamberti, who told the audience that Spider-Man: Into the Spider-Verse marked Sony Animation’s first native Atmos mix. “We used the format to get the most out of it,” concluded the SFX re-recording mixer, who mixed sound effects “in the box” using an Avid S6 console/controller, while Semanick handled dialogue and music on the Kim Novak Theater’s Harrison MPC4D X-Range digital console.


Mel Lambert has been intimately involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists. 

Audio post pro Julienne Guffain joins Sonic Union

NYC-based audio post studio Sonic Union has added sound designer/mix engineer Julienne Guffain to its creative team. Working across Sonic Union’s Bryant Park and Union Square locations, Guffain brings over a decade of experience in audio post production to her new role. She has worked on television, film and branded projects for clients such as Google, Mountain Dew, American Express and Cadillac among others.

A Virginia native, Guffain came to Manhattan to attend New York University’s Tisch School of the Arts. She found herself drawn to sound in film, and it was at NYU that she cut her teeth as a Foley artist and mixer on student films and independent projects. She landed her first industry gig at Hobo Audio, working with clients such as The History Channel and The Discovery Channel, and mixing the Emmy-winning television documentary series “Rising: Rebuilding Ground Zero.”

Making her way to Crew Cuts, she began lending her talents to a wide range of spot and brand projects, including the documentary feature “Public Figure,” which examines the psychological effects of constant social media use. It is slated for a festival run later this year.

 

Sundance 2019: Creating sound for The Sound of Silence

By Jennifer Walden

Research has proven that music can stimulate emotions. Different notes sounding together as a major chord can promote happiness, for example. Those notes resonate at particular frequencies that are pleasing to the ear. If you consider that every sound resonates at some frequency, then perhaps different sound sources in an environment — like a refrigerator hum and radiator hiss — could be creating chords, and maybe those chords aren’t so pleasing.

If so, the environment could cause the people in it to feel depressed or anxious. Is there a way to alter a discordant toaster buzz so that it resonates in harmony with the other environmental sounds?

Director Michael Tyburski explores this concept in his film The Sound of Silence, which premiered in the US Dramatic Competition at the 2019 Sundance Film Festival. Protagonist Peter Lucian (Peter Sarsgaard) is a “house tuner,” who tweaks the sounds in his clients’ spaces in order to alter their negative moods.

Harbor’s Ian Gaffney-Rosenfeld

Supervising sound editors/sound designers/re-recording mixers Grant Elder and Ian Gaffney-Rosenfeld of Harbor Picture Company in New York City helped director Tyburski translate this story idea into a sound idea by playing Peter’s sonic experience for the audience. “If Peter says the toaster was making a certain note and the refrigerator is making a different note, we actually tuned our sound effects to those notes to keep true to the situation that Peter is describing,” says Gaffney-Rosenfeld.

The film opens with Peter in a client’s apartment. The client is skeptical about Peter’s “house tuning” theory and so the audience only subtly hears the problem that Peter hears — that the radiator is resonating in a B-flat. But as Peter gets closer to the radiator, and the camera gets closer to Peter, the radiator sound becomes more apparent in the mix. The audience hears what Peter hears, an experience Elder and Gaffney-Rosenfeld call “Peter vision.”

Other times, the audience is meant to question Peter’s theory, so the sound is played straight. “We had fun with toeing the line of ‘are we inside of Peter’s head?’ Is this real? Or, is he crazy?” says Elder. “But it’s definitely clear when we’re in ‘Peter vision’ and when we’re not.”

Foley
The Foley team at Alchemy Post Sound supplied Elder and Gaffney-Rosenfeld with close-up recordings of appliances, which they then pitched via Serato’s Pitch n’ Time Pro to match the specified tones. “We cut this movie with tuners in hand, so that we knew we were in key with the chord or achieving the note that the script actually called for,” says Elder.

Harbor’s Grant Elder

The Foley team also created effects for the tools that Peter uses to adjust the sound of different objects. For instance, Peter wraps a foil-like material around the radiator to change its resonance from a B-flat to a C. “As he’s wrapping the foil, we are continuously bending the pitch to get to the note that Peter wanted. For this, I used Pitch n’ Time Pro in a variable setting. The sound effects were rendered out but I had timed it appropriately to make it go up in pitch to match the action and have it end with a specific note.”
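The variable pitch bend Elder describes can be pictured as a time-varying frequency ratio. This is only a sketch of the ratio arithmetic — Pitch n’ Time Pro does far more (time- and formant-preserving shifting) — assuming a linear bend of two semitones (B-flat up to C) over the duration of the foil wrap:

```python
def bend_ratio(t: float, duration: float, semitones: float = 2.0) -> float:
    """Pitch ratio at time t for a linear bend totaling `semitones`."""
    progress = min(max(t / duration, 0.0), 1.0)  # clamp to the wrap's duration
    return 2 ** (semitones * progress / 12)

# At the start the sound is unshifted; by the end it sits two
# semitones higher, landing on the C that Peter wanted:
print(bend_ratio(0.0, 4.0))               # 1.0
print(round(bend_ratio(4.0, 4.0), 4))     # ≈ 1.1225
```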

In one scene, Peter visits a client’s busy office, replete with phones ringing, printers printing, faxes coming in and lots of people talking. For this scene, Gaffney-Rosenfeld and Elder wanted Peter’s experience to escalate from unpleasant to overwhelming. They consulted with Poppy Crum, chief scientist at Dolby Laboratories, to find out what happens when a person experiences tinnitus or a painful overload of sound.

“She informed us that the person will lose all the low-end and what he or she is hearing will turn into this high-pitched, tinny, high-endy sound,” says Gaffney-Rosenfeld. “So as we were creating that scene of Peter getting overwhelmed by the office sounds, we were gradually removing all of the low-end using a filter sweep in Avid’s Channel Strip. At the same time, we were gradually pitching up all of the effects to a higher frequency, to the point where it’s uncomfortable. What you’re left with is a vacuum with just this harsh, tinny sound remaining.”
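The gradual low-end removal Gaffney-Rosenfeld describes amounts to sweeping a high-pass filter’s cutoff upward across the scene. The mix used Avid’s Channel Strip; the toy one-pole filter below is only a sketch of the idea, with assumed start and end cutoff frequencies, showing how a rising cutoff progressively strips away low-frequency content:

```python
import math

def swept_highpass(samples, sr=48000, fc_start=20.0, fc_end=2000.0):
    """One-pole high-pass whose cutoff rises linearly across the clip."""
    out = []
    prev_x = prev_y = 0.0
    n = max(len(samples) - 1, 1)
    for i, x in enumerate(samples):
        fc = fc_start + (fc_end - fc_start) * i / n  # rising cutoff in Hz
        rc = 1.0 / (2 * math.pi * fc)
        a = rc / (rc + 1.0 / sr)
        y = a * (prev_y + x - prev_x)  # one-pole high-pass difference equation
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant signal is the lowest frequency there is (0 Hz), so the
# sweep drives it toward silence — the "vacuum" effect in the scene:
dc = [1.0] * 1000
print(abs(swept_highpass(dc)[-1]) < 0.5)  # True
```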

Peter is eventually pulled out of the experience by the voice of the client who he’s there to visit. Though Peter is able to tune out the office sounds, he’s left with a “ringing, high-pitched sound that drives him a little nuts. You see that throughout the course of the story and how that affects him as a character,” explains Gaffney-Rosenfeld.

Harbor’s Grand Stage

Dolby Fellowship & Dolby Atmos
Thanks to the SFFILM Dolby Fellowship, Tyburski and producer/co-writer Ben Nabors were able to collaborate with Elder and Gaffney-Rosenfeld prior to production. That’s a luxury for independent filmmakers, but for a film that uses sound as a story point specifically, it was a vital opportunity. One of Dolby’s goals for the fellowship is “to incorporate the sound process into a film’s creative development,” says Nabors. “The chance for us to connect and exchange ideas before the film was shot was special for us.”

The Fellowship also allowed The Sound of Silence to be mixed in Dolby Atmos at Harbor Picture Company’s Harbor Grand studio, which houses a Euphonix System 5 console and is equipped to mix multiple formats, including Dolby Atmos, IMAX, 7.1 and 5.1 surround.

“The Sound of Silence is really about our main character’s struggles and his view of the world, and the Atmos format provided us with this unlimited palette of ways to help the audience experience that,” explains Gaffney-Rosenfeld.

The Atmos surround field was especially beneficial in a scene in which Peter visits a construction site as a consultant on the architecture of a building in Manhattan. As he gazes at the cityscape through the window, the audience hears an orchestra of city sounds that Elder and Gaffney-Rosenfeld designed from effects like a horse and buggy going by, car horns honking, people yelling out on the street and birds flapping around. Through a collaboration with the film’s music department, they were able to morph the effects into the sound of orchestral instruments tuning up before a performance.

During the mix, they were able to “freeform with the Dolby Atmos panners and move those sounds in a circular motion all around the room. It showed that we aren’t in reality, but in Peter’s experience,” says Gaffney-Rosenfeld. “That was a moment conceived by the director that perfectly painted the way that Peter hears tangible sounds out in the world and experiences them as musical elements. That was a moment where Atmos gave us the flexibility to take what would have been a two-dimensional sound and gave us unlimited space to add depth and movement.”

Director Michael Tyburski

Another Atmos highlight includes a scene in which Peter goes to Central Park and uses a tuning fork to tune himself to the location, which happens to be a G-major chord. “Using a combination of the tuning fork sounds, our custom sound design and some elements from the composer, we were able to create in Atmos a moving sound that helps the viewer to get sucked into Peter’s head and to experience what he is in that moment,” says Elder.

Using the tuning forks is something that Peter does throughout the film, so having those tuning fork sounds was essential. “Michael Tyburski, the director, helped us get all the props from set that we wanted to record specifically in Foley. He even helped us with the tuning forks, to make the exact chords that are being described in the film and we recorded those in our studio here at Harbor,” says Gaffney-Rosenfeld. “That willing and highly collaborative relationship that we had with Michael really makes this soundtrack that much more authentic and interesting.”

Elder adds, “The whole team on the film really lent themselves to us. The production sound team did an excellent job. Also the picture editor Matthew Hart took a lot of time with Michael to try out some sound design. Ian and I had the chance to jump in early also and create sounds for the picture editor to cut into scenes. It was an amazing collaborative effort.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

Pixelogic London adds audio mix, digital cinema theaters

Pixelogic has added new theaters and production suites to its London facility, which offers creation and mastering of digital cinema packages and theatrical screening of digital cinema content, as well as feature and episodic audio mixing.

Pixelogic’s London location now features six projector-lit screening rooms: three theaters and three production suites. Purpose-built from the ground up, the theaters offer HDR picture and immersive audio technologies, including Dolby Atmos and DTS:X.

The equipment offered in the three theaters includes Avid S6 and S3 consoles and Pro Tools systems that support a wide range of theatrical mixing services, complemented by two new ADR booths.

Quick Chat: Crew Cuts’ Nancy Jacobsen and Stephanie Norris

By Randi Altman

Crew Cuts, a full-service production and post house, has been a New York fixture since 1986. Originally established as an editorial house, the company has added services over the years as the industry evolved, targeting all aspects of the workflow.

This independently owned facility is run by executive producer/partner Nancy Jacobsen, senior editor/partner Sherri Margulies Keenan and senior editor/partner Jake Jacobsen. While commercial spots might be in their wheelhouse, their projects vary and include social media, music videos and indie films.

We decided to reach out to Nancy Jacobsen, as well as EP of finishing Stephanie Norris, to find out about trends, recent work and succeeding in an industry and city that isn’t always so welcoming.

Can you talk about what Crew Cuts provides and how you guys have evolved over the years?
Jacobsen: We pretty much do it all. We have 10 offline editors as well as artists working in VFX, 2D/3D animation, motion graphics/design, audio mix and sound design, VO record, color grading, title treatment, advanced compositing and conform. Two of our editors double as directors.

In the beginning, Crew Cuts offered only editorial. As the years went by and the industry climate changed, we began to cater to the needs of clients and slowly built out our entire finishing department. We started with some minimal graphics work and one staff artist in 2008.

In 2009, we expanded the team to include graphics, conform and audio mix. From there we just continued to grow and expand our department to the full finishing team we have today.

As a woman owner of a post house, what challenges have you had to overcome?
Jacobsen: When I started in this business, the industry was very different. I made less money than my male counterparts and it took me twice as long to be promoted because I am a woman. I have since seen great change where women are leading post houses and production houses and are finally getting the recognition for the hard work they deserve. Unfortunately, I had to “wait it out” and silently work harder than the men around me. This has paid off for me, and now I can help women get the credit they rightly deserve.

Do you see the industry changing and becoming less male-dominated?
Jacobsen: Yes, the industry is definitely becoming less male-dominated. In the current climate, with the birth of the #metoo movement and specifically in our industry with the birth of Diet Madison Avenue (@dietmadisonave), we are seeing a lot more women step up and take on leading roles.

Are you mostly a commercial house? What other segments of the industry do you work in?
Jacobsen: We are primarily a commercial house. However, we are not limited to just broadcast and digital commercial advertising. We have delivered specs for everything from the Godzilla screen in Times Square to :06 spots on Instagram. We have done a handful of music videos and also handle a ton of B2B videos for in-house client meetings, etc., as well as banner ads for conferences and trade shows. We’ve even worked on display ads for airports. Most recently, one of our editors finished a feature film called Public Figure that is being submitted around the film festival circuit.

What types of projects are you working on most often these days?
Jacobsen: The industry is all over the place. The current climate is very messy right now. Our projects are extremely varied. It’s hard to say what we work on most because it seems like there is no more norm. We are working on everything from sizzle pitch videos to spots for the Super Bowl.

What trends have you seen over the last year, and where do you expect to be in a year?
Jacobsen: Over the last year, we have noticed that the work comes from every angle. Our typical client is no longer just the marketing agency. It is also the production company, network, brand, etc. In a year we expect to be doing more production work. Seeing as how budgets are much smaller than they used to be and everyone wants a one-stop shop, we are hoping to stick with our gut and continue expanding our production arm.

Crew Cuts has beefed up its finishing services. Can you talk about that?
Stephanie Norris: We offer a variety of finishing services — from sound design to VO record and mix, compositing to VFX, 2D and 3D motion graphics and color grading. Our fully staffed in-house team loves the visual effects puzzle and enjoys working with clients to help interpret their vision.

Can you name some recent projects and the services you provided?
Norris: We just worked on a new campaign for New Jersey Lottery in collaboration with Yonder Content and PureRed. Brian Neaman directed and edited the spots. In addition to editorial, Crew Cuts also handled all of the finishing, including color, conform, visual effects, graphics, sound design and mix. This was one of those all-hands-on-deck projects. Keeping everything under one roof really helped us to streamline the process.

New Jersey Lottery

Working with Brian to carefully plan the shooting strategy, we filmed a series of plate shots as elements that could later be combined in post to build each scene. We added falling stacks of cash to the reindeer as he walks through the loading dock and incorporated CG inflatable decorations into a warehouse holiday lawn scene. We also dramatically altered the opening and closing exterior warehouse scenes, allowing one shot to work for multiple seasons. Keeping lighting and camera positions consistent was mission-critical, and having our VFX supervisor, Dulany Foster, on set saved us hours of work down the line.

For the New Jersey Lottery Holiday spots, the Crew Cuts CG team, led by our creative director, Ben McNamara, created a 3D inflatable display of lottery tickets. This was something that proved too costly and time-consuming to manufacture and shoot practically. After the initial R&D, our team created a few different CG inflatable simulations prior to the shoot, and Dulany was able to mock them up live while on set. Creating the simulations was crucial for giving the art department reference while building the set, and it also helped when shooting the plates needed to composite the scene together.

Ben and his team focused on the physics of the inflation, while also making sure the fabric simulations, textures and lighting blended seamlessly into the scene — it was important that everything felt realistic. In addition to the inflatables, our VFX team turned the opening and closing sunny, summer shots of the warehouse into a December winter wonderland thanks to heavy compositing, 3D set extension and snow simulations.

New Jersey Lottery

Any other projects you’d like to talk about?
Jacobsen: We are currently working on a project here that we are handling soup to nuts from production through finishing. It was a fun challenge to take on. The spot contains a hand model on a greenscreen showing the audience how to use a new product. The shoot itself took place here at Crew Cuts. We turned our common area into a stage for the day and were able to do so without interrupting any of the other employees and projects going on.

We are now working on editorial and finishing. The edit is coming along nicely. What really drives the piece here is the graphic icons. Our team is having a lot of fun designing these elements and implementing them into the spot. We are so proud because we budgeted wisely to make sure to accommodate all of the needs of the project so that we could handle everything and still turn a profit. It was so much fun to work in a different setting for the day and has been a very successful project so far. Clients are happy and so are we.

Main Image: (L-R) Stephanie Norris and Nancy Jacobsen

Behind the Title: New Math Managing Partner/EP Kala Sherman

Name: Kala Sherman

Company: New Math

Can you describe your company?
We are a bicoastal audio production company, with offices in NYC and LA, specializing in original music, sound design, audio mix and music supervision.

What’s your job title?
Managing Partner/EP

What does that entail?
I do everything from managing our staff to producing projects to sales and development.

What would surprise people the most about what’s underneath that title?
I am an untrained, but really good psychotherapist.

New Math, New York

What have you learned over the years about running a business?
It’s highly competitive and you have to continue to hustle and push the creative product in order to stay relevant. Also, it’s paramount to assemble the best talent and treat them with the utmost respect; without our producers or composers there wouldn’t be a business.

A lot of it must be about trying to keep employees and clients happy. How do you balance that?
We face at least one root challenge: How do you keep both your clients and your creative staff happy? I think how you approach and sell an idea to the composers while still delivering what the client needs is a real art form. It gets tricky with limited music budgets these days, but I’ve found over the years that there are ways to structure the deals where the clients feel like they can get the music and sound design they need while the composers feel well-compensated and creatively fulfilled.

What’s your favorite part of the job?
I love the fact that we are creating music and I get to be part of that process.

What’s your least favorite?
Competitive demoing. Partnering with clients is just way more fun than knowing you are competing with other companies. And not too ironically, it usually results in the best and freshest creative product.

What is your favorite time of the day?
I love the evenings when I get home and hang with my daughter.

If you didn’t have this job, what would you be doing instead?
I always knew I had to work in music, so I would have probably stayed on the label side of the music business.

Can you name some recent clients?
Google, Trojan, Smirnoff, KFC, Chobani, Walmart, Zappos and ESPN.

Name three pieces of technology you can’t live without.
Spotify. Laptop. iPhone.

You recently added final mix capabilities in both of your locations. Can you talk about why now was the time?
We want to be a full-service audio company for our clients. It just makes sense when many of our clients want to work with one company for all audio needs. If we are already providing the music and sound design, why not record the VO and provide the mix as well? Plus, it’s really fun to have clients in the studio.

What tools will be used for the mixing rooms?
Focal 5.1 monitor system in both the NY and LA mix rooms. Pro Tools mix system with the latest plugin suites. High-quality analog outboard gear from Neve, API, DW Fearn, Summit and more.

Any recent jobs in these studios you can talk about?
Yes. We just completed Chobani, Acuvue and Yellawood mixes.

Main Image: (L-R) New Math partners David Wittman, Kala Sherman, Raymond Loewy

Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and artists. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company’s mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to its crew, Shindig has brought on composer Austin Shupe, a former colleague from Hum, who will work on-site. Along with Shindig’s in-house composers, the team uses a large pool of freelance talent, matching the genre and/or style that is best suited for a project.

Shindig’s licensing arm has launched a searchable boutique online music library. Building on its existing catalogue of high-quality compositions, the studio has now tagged all the tracks in a simple, searchable manner on its website, providing new direct access for producers, creatives and editors.

Shindig’s executive team includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O’Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work ranges from recording five mariachi pieces for El Pollo Loco with Vitro, to working with multiple composers to craft five decades of music for Honda’s Evolution commercial via Muse, to orchestrating a virtuoso piano/violin duo cover of Twisted Sister’s “I Wanna Rock” for a Mitsubishi spot out of BSSP.