
Video: The Irishman’s focused and intimate sound mixing

Martin Scorsese’s The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, tells the story of organized crime in post-war America as seen through the eyes of World War II veteran Frank Sheeran (De Niro), a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. In the film, the actors have been famously de-aged, thanks to VFX house ILM, but it wasn’t just their faces that needed to be younger.

In this video interview, Academy Award-winning re-recording sound mixer and decades-long Scorsese collaborator Tom Fleischman — who will receive the Cinema Audio Society’s Career Achievement Award in January — talks about de-aging actors’ voices as well as the challenges of keeping the film’s sound focused and intimate.

“We really had to try and preserve the quality of their voices in spite of the fact we were trying to make them sound younger. And those edits are sometimes difficult to achieve without it being apparent to the audience. We tried to do various types of pitch changing and we used different kinds of plugins. I listened to scenes from Serpico for Al Pacino and The King of Comedy for Bob De Niro and tried to match the voice quality of what we had from The Irishman to those earlier movies.”

Fleischman worked on the film at New York’s Soundtrack.

Enjoy the video:

Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces (or delusion-inducing conditions). It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor’s New York City location for both sound and final color. This was colorist Joe Gawler’s first time working with Eggers, but it couldn’t have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and as it happens Gawler is well versed in the world of black & white. He’s remastered a tremendous number of classic movie titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films like 8½. “To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful,” Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”
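
At its core, dialing back a VFX layer’s opacity is a per-pixel weighted blend between the plate and the element. Here is a minimal Python sketch of the idea; the pixel values and function name are invented for illustration, and Resolve’s actual compositing engine is of course far more sophisticated:

```python
def composite(base, layer, opacity):
    """Blend a VFX layer over the base plate at a given opacity (0..1),
    pixel by pixel -- the same dial Gawler describes turning down."""
    return [b * (1 - opacity) + l * opacity for b, l in zip(base, layer)]

plate = [0.20, 0.20, 0.20]   # toy grayscale smokestack pixels
smoke = [0.80, 0.60, 0.40]   # rendered smoke element
# A quarter-strength smoke pass:
print([round(v, 2) for v in composite(plate, smoke, 0.25)])  # [0.35, 0.3, 0.25]
```

At opacity 0 the plate is untouched; at 1 the smoke fully replaces it, with everything in between available for taste.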

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”

From production to post, Eggers’ goal was to create a film that felt like it could have been pulled from a 1930s film archive. “It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible,” says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime,” delivered in two different accents — Dafoe has an Irish-tinged seasoned sailor accent and Pattinson has a Down East Maine accent. Additionally, the production location made it difficult to record the dialogue, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that when it’s raining, the lighthouse is leaking; the water is visible in the shots because that’s how it was filmed. “So the water sound is married to the dialogue. We wanted to have control over the water so the dialogue had to be looped. Rob wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too: the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut, and seagulls and waves at Mystic Seaport. “We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife,” says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn’t feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a NAGRA to “get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia,” says Volpe. “We did try tape emulator software, but that didn’t yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they’d be expensive and we’d have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution.”

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units’ magnetic heads added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.
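
In software terms, tape-style saturation of this kind is a level-dependent waveshaper: quiet material passes through nearly untouched, while peaks are progressively squeezed rather than hard-clipped. A toy Python sketch of the principle (the tanh curve and drive value here are invented for illustration; the RND 542 is an analog unit, not this formula):

```python
import math

def tape_saturate(sample, drive=1.5):
    """Soft saturation: roughly unity gain for quiet material, with hot
    peaks gently compressed. Illustrative only -- not a model of the 542."""
    return math.tanh(drive * sample) / drive

print(round(tape_saturate(0.1), 3))  # 0.099 -- quiet signal nearly untouched
print(round(tape_saturate(0.9), 3))  # 0.583 -- hot signal squeezed down
```

Because the effect depends on how hard each track hits it, there is no fader move that can undo it mid-scene, which is consistent with Volpe’s point below about finding a setting and letting it rip.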

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’s style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is the reason why the sound editing and mixing on Season 3 of HBO’s True Detective has been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was that Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn’t want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War and Modern Memory”? You’re dealing with three different timelines from three different eras: 1980, 1990 and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From the range of audio restoration tools in RX to the measurement and visualization tools in Ozone to its creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY, the company has shown time and time again that it knows what audio post pros need.

iZotope breaks its products out into categories aimed at different levels of professionalism by providing Essential, Standard and Advanced tiers. This lowers the barrier to entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut down session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses its modules to make your track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking the sound of an instrument to get it to sound like itself. I believe the philosophy behind this feature is that the creative energy you would otherwise spend tweaking can now be reserved for other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it will set the level of each track and then provide you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and you’re left with a much more manageable session. Depending on your workflow, getting your gain staging set up correctly can be an arduous and repetitive task, and this tool streamlines and simplifies that drudgery considerably.
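
Conceptually, a balance pass like this measures each track’s level and applies a static trim relative to the Focus. Here is a drastically simplified Python sketch of that idea; the track names, the fixed 6 dB spacing and the RMS-only measurement are all invented for illustration, and Neutron’s actual machine-learning analysis is far more nuanced:

```python
import math

def rms(samples):
    """Root-mean-square level of a track's samples (linear scale)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def balance(tracks, focus, headroom_db=6.0):
    """Toy Balance-style pass: trim every non-Focus track so it sits a
    fixed number of dB below the Focus track. Purely illustrative."""
    focus_level = rms(tracks[focus])
    trims = {}
    for name, samples in tracks.items():
        level = rms(samples)
        target = focus_level if name == focus else focus_level * 10 ** (-headroom_db / 20)
        trims[name] = 20 * math.log10(target / level)  # trim gain in dB
    return trims

tracks = {
    "lead_vocal": [0.5, -0.5, 0.5, -0.5],
    "guitar":     [0.5, -0.5, 0.5, -0.5],  # recorded as hot as the vocal
    "bass":       [0.1, -0.1, 0.1, -0.1],  # recorded quietly
}
trims = balance(tracks, focus="lead_vocal")
print(round(trims["lead_vocal"], 1))  # 0.0  (the Focus is left alone)
print(round(trims["guitar"], 1))      # -6.0 (pulled down below the Focus)
```

The quiet bass track, conversely, gets a positive trim, which is the whole point: every element lands at a sane starting level before any creative mixing begins.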

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since that’s a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other post audio scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this is a process that happens at the beginning of the mix, you’re left with a session whose gain staging is already prepped, so you can start making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

KRK intros audio tools app to help Rokit G4 monitor setup

KRK Systems has introduced the KRK Audio Tools App for iOS and Android. This free suite of professional studio tools includes five professional analysis-based components that work with any monitor setup, and one tool (EQ Recommendation) that helps acclimate the new KRK Rokit G4 monitors to their individual acoustic environment.

In addition to the EQ Recommendation tool, the app also includes a Spectrum Real Time Analyzer (RTA), Level Meter, Delay and Polarity Analyzers, as well as a Monitor Align tool that helps users set their monitor positioning more accurately to their listening area. Within the app is a sound generator giving the user sound analysis options of sine, continuous sine sweep, white noise and pink noise—all of which can help the analysis process in different conditions.
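
All four of those stimuli are standard measurement signals. For a sense of what a generator like this produces, here is a stdlib-only Python sketch of two of them; the sample rate is an assumption, and KRK does not document its implementation:

```python
import math, random

SAMPLE_RATE = 48000  # Hz (assumed for this sketch)

def sine(freq, seconds):
    """Steady sine test tone at a given frequency."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def white_noise(seconds):
    """White noise: equal average energy per Hz of bandwidth."""
    n = int(SAMPLE_RATE * seconds)
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

tone = sine(1000, 0.01)  # 10 ms of a 1 kHz tone
print(len(tone))         # 480 samples
```

Pink noise, by contrast, rolls off at 3 dB per octave so that each octave band carries equal energy, which is why it is the usual choice for room-EQ measurements like the app’s EQ Recommendation analysis.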

“We wanted to build something game-changing for the new Rokit G4 line that enables our users to achieve better final mixes overall,” explains Rich Renken, product manager for the pro audio division of Gibson Brands, which owns KRK. “In terms of critical listening, the G4 monitors are completely different and a major upgrade from the previous G3 line. Our intentions with the EQ Recommendation tool are to suggest a flatter condition and help get the user to a better starting point. Ultimately, it still comes down to preference and using your musical ear, but it’s certainly great to have this feature available along with the others in the app.”

Five of the app tools work with any monitor setup. This includes the Level Meter, which assists with monitor level calibration to ensure all monitors are at the same dB level, as well as the Delay Analysis feature, which helps calculate the travel time from each monitor to the user’s ears. Additionally, the app’s Polarity function verifies the correct wiring of monitors, minimizing the bass loss and skewed stereo imaging that result when monitors are out of phase, while the Spectrum RTA and Sound Generator are made for finding nuances in any environment.
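The arithmetic behind a delay-analysis tool like this is simple: divide each monitor's distance from the listening position by the speed of sound. The sketch below illustrates the general principle only; the function name and the example distances are our own assumptions, not anything from KRK's implementation.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20°C

def monitor_delay_ms(distance_m):
    """Time in milliseconds for sound to travel from a monitor to the listener."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# Hypothetical setup: one monitor 1.2 m away, the other 1.5 m away.
near = monitor_delay_ms(1.2)  # ~3.5 ms
far = monitor_delay_ms(1.5)   # ~4.4 ms
offset = far - near           # the misalignment to compensate, ~0.9 ms
print(round(near, 2), round(far, 2), round(offset, 2))
```

A 0.9 ms offset may sound small, but at the listening position it smears the stereo image, which is exactly what this kind of tool helps correct.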

Also included is a Monitor Alignment feature, which is used to determine the best placement of multiple monitors within proximity. This is accomplished by placing a smart device on each monitor separately and then rotating to the correct angle degree. A sixth tool, exclusive to Rokit G4 users, is the EQ Recommendation tool that helps acclimate monitors to an environment by analyzing the app-generated pink noise and subsequently suggesting the best EQ preset, which is set manually on the back of the G4 monitors.

Andy Greenberg on One Union Recording’s fire and rebuild

San Francisco’s One Union Recording Studios has been serving the sound needs of ad agencies, game companies, TV and film producers, and corporate media departments in the Bay Area and beyond for nearly 25 years.

In the summer of 2017, the facility was hit by a terrible fire that affected all six of its recording studios. The company, led by president John McGleenan, immediately began an ambitious rebuilding effort, which it completed earlier this year. One Union Recording is now back up to full operation and its five recording studios, outfitted with the latest sound technologies including Dolby Atmos capability, are better than ever.

Andy Greenberg is One Union Recording’s facility engineer and senior mix engineer; he works alongside engineers Joaby Deal, Eben Carr, Matt Wood and Isaac Olsen. We recently spoke with Greenberg about the company’s rebuild and plans for the future.

Rebuilding the facility after the fire must have been an enormous task.
You’re not kidding. I’ve worked at One Union for 22 years, and I’ve been through every growth phase and upgrade. I was very proud of the technology we had in place in 2017. We had six rooms, all cutting-edge. The software was fully up to date. We had few if any technical problems and zero downtime. So, when the fire hit, we were devastated. But John took a very business-oriented approach to it, and within a few days he was formulating a plan. He took it as an opportunity to implement new technology, like Dolby Atmos, and to grow. He turned sadness into enthusiasm.

How did the facility change?
Ironically, the timing was good. A lot of new technology had just come out that I was very excited about. We were able to consolidate what were large systems into smaller units while increasing quality 10-fold. We moved leaps and bounds beyond where we had been.

Prior to the fire, we were running Avid Pro Tools 12.1. Now we’re on Pro Tools Ultimate. We had just purchased four Avid/Euphonix System 5 digital audio consoles with extra DSP in March of 2017 but had not had time to install them before the fire due to bookings. These new consoles are super powerful. Our number of inputs and outputs quadrupled. The routing power and the bus power are vastly improved. It’s phenomenal.

We also installed Avid MTRX, an expandable interface designed in Denmark and very popular now, especially for Atmos. The box feels right at home with the Avid S5 because it’s MADI and takes the physical outputs of our Pro Tools systems up to 64 or 128 channels.

That’s a substantial increase.
A lot of delivered projects use from two to six channels. Complex projects might go to 20. Being able to go far beyond that increases the power and flexibility of the studio tremendously. And then, of course, our new Atmos room requires that kind of channel count to work in immersive surround sound.

What do you do for data storage?
Even before the fire, we had moved to a shared storage network solution. We had a very strong infrastructure and workflow in terms of data storage, archiving and the ability to recall sessions. Our new infrastructure includes 40TB of active storage of client data. Forty terabytes is not much for video, but for audio, it’s a lot. We also have 90TB of instantly recallable data.

We have client data archived back 25 years, and we can have anything online in any room in just a few minutes. It’s literally drag and drop. We pride ourselves on maintaining triple redundancy in backups. Even during the fire, we didn’t lose any client data because it was all backed up on tape and off site. We take backup and data security very seriously. Backups happen automatically every day…  actually every three hours.

What are some of the other technical features of the rebuilt studios?
There’s actually a lot. For example, our rooms — including the two Dolby-certified Atmos rooms — have new Genelec SAM studio monitors. They are “smart” speakers that are self-tuning. We can run some test tones and in five minutes the rooms are perfectly tuned. We have custom tunings set up for 5.1 and Atmos. We can adjust the tuning via computer and the speakers have built-in DSP, so we don’t have to rely on external systems.

Another cool technology that we are using is Dante, which is part of the Avid MTRX interface. Dante is basically audio-over-IP or audio-over-Cat6. It essentially replaced our AES router. We were one of the first facilities in San Francisco to have a full audio AES router, and it was very strong for us at the time. It was a 64×64 stereo-paired AES router. It has been replaced by the MTRX interface box that has, believe it or not, a three-inch by two-inch card that handles 64×64 routing per room. So each room now gets a full 64×64 routing matrix of its own, where the whole facility once shared one.

We use Dante to route secondary audio, like our ISDN and web-based IP communication devices. We can route signals from room to room and over the web securely. It’s seamless, and it comes up literally into your computer. It’s amazing technology. The other day, I did a music session and used a 96K sample rate, which is very high. The quality of the headphone mix was astounding. Everyone was happy, and it took just one quick setting and we were off and running. The sound is fantastic and there are no noise or latency problems. It’s super-clean, super-fast and easy to use.

What about video monitoring?
We have 4K monitors and 4K projection in all the rooms via Sony XBR 55A1E Bravia OLED monitors, Sony VPL-VW885ES True 4K Laser Projectors and a DLP 4K550 projector. Our clients appreciate the high-quality images and the huge projection screens.

iZotope’s Neutron 3 streamlines mix workflows with machine learning

iZotope, makers of the RX audio tools, has introduced Neutron 3, a plug-in that — thanks to advances in machine learning — listens to the entire session and communicates with every track in the mix. Mixers can use Neutron 3’s new Mix Assistant to create a balanced starting point for an initial-level mix built around their chosen focus, saving time and energy when making creative mix decisions. Once a focal point is defined, Neutron 3 automatically sets levels before the mixer ever has to touch a fader.

Neutron 3 also has a new module called Sculptor (available in Neutron 3 Standard and Advanced) for sweetening, fixing and creative applications. Using never-before-seen signal processing, Sculptor works like a per-band army of compressors and EQs to shape any track. It also communicates with Track Assistant to understand each instrument and gives realtime feedback to help mixers shape tracks to a target EQ curve or experiment with new sounds.

In addition, Neutron 3 includes many new improvements and enhancements based on feedback from the community, such as the redesigned Masking Meter that automatically flags masking issues and allows them to be fixed from a convenient one-window display. This improvement prevents tracks from stepping on each other and muddying the mix.

Neutron 3 has also had a major overhaul in performance for faster processing and load times and smooth metering. Sessions with multiple Neutrons open much quicker, and refresh rates for visualizations have doubled.

Other Neutron 3 Features
• Visual Mixer and iZotope Relay: Users can launch Mix Assistant directly from Visual Mixer and move tracks in a virtual space, tapping into iZotope-enabled inter-plug-in communication
• Improved interface: Smooth visualizations and a resizable interface
• Improved Track Assistant listens to audio and creates a custom preset based on what it hears
• Eight plug-ins in one: Users can build a signal chain directly within one highly connected, intelligent interface with Sculptor, EQ with Soft Saturation mode, Transient Shaper, 2 Compressors, Gate, Exciter, and Limiter
• Component plug-ins: Users can control Neutron’s eight modules as a single plug-in or as eight individual plug-ins
• Tonal Balance Control: Updated to support Neutron 3
• 7.1 Surround sound support and zero-latency mode in all eight modules for professional, lightweight processing for audio post or surround music mixes

Visual Mixer and iZotope Relay will be included free with all Neutron 3 Advanced demo downloads. In addition, Music Production Suite 2.1 will now include Neutron 3 Advanced, and iZotope Elements Suite will be updated to include Neutron Elements (v3).

Neutron 3 will be available in three different options — Neutron Elements, Neutron 3 Standard and Neutron 3 Advanced. See the comparison chart for more information on what features are included in each version.

Neutron will be available June 30. Check out the iZotope site for pricing.

Creating audio for the cinematic VR series Delusion: Lies Within

By Jennifer Walden

Delusion: Lies Within is a cinematic VR series from writer/director Jon Braver. It is available on the Samsung Gear VR and Oculus Go and Rift platforms. The story follows a reclusive writer named Elena Fitzgerald who penned a series of popular fantasy novels, but before the final book in the series was released, the author disappeared. Rumors circulated about the author’s insanity and supposed murder, so two avid fans decide to break into her mansion to search for answers. What they find are Elena’s nightmares come to life.

Delusion: Lies Within is based on an interactive play written by Braver and Peter Cameron. Interactive theater isn’t your traditional butts-in-the-seat passive viewing-type theater. Instead, the audience is incorporated into the story. They interact with the actors, search for objects, solve mysteries, choose paths and make decisions that move the story forward.

Like a film, the theater production is meticulously planned out, from the creature effects and stunts to the score and sound design. With all these components already in place, Delusion seemed like the ideal candidate to become a cinematic VR series. “In terms of the visuals and sound, the VR experience is very similar to the theatrical experience. With Delusion, we are doing 360° theater, and that’s what VR is too. It’s a 360° format,” explains Braver.

While the intent was to make the VR series match the theatrical experience as much as possible, there are some important differences. First, immersive theater allows the audience to interact with the actors and objects in the environment, but that’s not the case with the VR series. Second, the live theater show has branching story narratives and an audience member can choose which path he/she would like to follow. But in the VR series there’s one set storyline that follows a group who is exploring the author’s house together. The viewer feels immersed in the environment but can’t manipulate it.

L-R: Hamed Hokamzadeh and Thomas Ouziel

According to supervising sound editor Thomas Ouziel from Hollywood’s MelodyGun Group, “Unlike many VR experiences where you’re kind of on rails in the midst of the action, this was much more cinematic and nuanced. You’re just sitting in the space with the characters, so it was crucial to bring the characters to life and to design full sonic spaces that felt alive.”

In terms of workflow, MelodyGun sound supervisor/studio manager Hamed Hokamzadeh chose to use the Oculus Developers Kit 2 headset with Facebook 360 Spatial Workstation on Avid Pro Tools. “Post supervisor Eric Martin and I decided to keep everything within FB360 because the distribution was to be on a mobile VR platform (although it wasn’t yet clear which platform), and FB360 had worked for us marvelously in the past for mobile and Facebook/YouTube,” says Hokamzadeh. “We initially concentrated on delivering B-format (2nd Order AmbiX) playing back on Gear VR with a Samsung S8. We tried both the Audio-Technica ATH-M50 and Shure SRH840 headphones to make sure it translated. Then we created other deliverables: quad-binaurals, .tbe, 8-channel and a stereo static mix. The non-diegetic music and voiceover was head-locked and delivered in stereo.”
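For readers counting channels in those deliverables: full-sphere ambisonic formats like AmbiX carry (order + 1)² channels, which is why classic first-order B-format is four channels (W, X, Y, Z) and the 2nd-order AmbiX stream mentioned above is nine. A minimal sketch of that relationship (the function name is our own, purely illustrative):

```python
def ambix_channels(order):
    """Channel count for a full-sphere ambisonic mix at a given order: (order + 1)^2."""
    return (order + 1) ** 2

# First order = 4 channels (WXYZ), second order = 9, third order = 16.
for order in (1, 2, 3):
    print(order, ambix_channels(order))
```

Higher orders sharpen spatial resolution at the cost of track count, which is part of why a 200-plus-track AmbiX session, as described below, balloons so quickly.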

From an aesthetic perspective, the MelodyGun team wanted to have a solid understanding of the audience’s live theater experience and the characters themselves “to make the VR series follow suit with the world Jon had already built. It was also exciting to cross our sound over into more of a cinematic ‘film world’ than was possible in the live theatrical experience,” says Hokamzadeh.

Hokamzadeh and Ouziel assigned specific tasks to their sound team — Xiaodan Li was focused on sound editorial for the hard effects and Foley, and Kennedy Phillips was asked to design specific sound elements, including the fire monster and the alchemist freezing.

Ouziel, meanwhile, had his own challenges of both creating the soundscape and integrating the sounds into the mix. He had to figure out how to make the series sound natural yet cinematic, and how to use sound to draw the viewer’s attention while keeping the surrounding world feeling alive. “You have to cover every movement in VR, so when the characters split up, for example, you want to hear all their footsteps, but we also had to get the audience to focus on a specific character to guide them through. That was one of the biggest challenges we had while mixing it,” says Ouziel.

The Puppets
“Chapter Three: Trial By Fire” provides the best example of how Ouziel tackled those challenges. In the episode, Virginia (Britt Adams) finds herself stuck in Marion’s chamber. Marion (Michael J. Sielaff) is a nefarious puppet master who is clandestinely controlling a room full of people on puppet strings; some are seated at a long dining table and others are suspended from the ceiling. They’re all moving their arms as if dancing to the scratchy song that’s coming from the gramophone.

The sound for the puppet people needed to have a wiry, uncomfortable feel and the space itself needed to feel eerily quiet but also alive with movement. “We used a grating metallic-type texture for the strings so they’d be subconsciously unnerving, and mixed that with wooden creaks to make it feel like you’re surrounded by constant danger,” says Ouziel.

The slow wooden creaks in the ambience reinforce the idea that an unseen Marion is controlling everything that’s happening. Braver says, “Those creaks in Marion’s room make it feel like the space is alive. The house itself is a character in the story. The sound team at MelodyGun did an excellent job of capturing that.”

Once the sound elements were created for that scene, Ouziel then had to space each puppet’s sound appropriately around the room. He also had to fill the room with music while making sure it still felt like it was coming from the gramophone. Ouziel says, “One of the main sound tools that really saved us on this one was Audio Ease’s 360pan suite, specifically the 360reverb function. We used it on the gramophone in Marion’s chamber so that it sounded like the music was coming from across the room. We had to make sure that the reflections felt appropriate for the room, so that we felt surrounded by the music but could clearly hear the directionality of its source. The 360pan suite helped us to create all the environmental spaces in the series. We pretty much ran every element through that reverb.”

L-R: Thomas Ouziel and Jon Braver.

Hokamzadeh adds, “The session got big quickly! Imagine over 200 AmbiX tracks, each with its own 360 spatializer and reverb sends, plus all the other plug-ins and automation you’d normally have on a regular mix. Because things never go out of frame, you have to group stuff to simplify the session. It’s typical to make groups for different layers like footsteps, cloth, etc., but we also made groups for all the sounds coming from a specific direction.”

The 360pan suite reverb was also helpful on the fire monster’s sounds. The monster, called Ember, was sound designed by Phillips. His organic approach was akin to the bear monster in Annihilation, in that it felt half human/half creature. Phillips edited together various bellowing fire elements that sounded like breathing and then manipulated those to match Ember’s tormented movements. Her screams also came from a variety of natural screams mixed with different fire elements so that it felt like there was a scared young girl hidden deep in this walking heap of fire. Ouziel explains, “We gave Ember some loud sounds but we were able to play those in the space using the 360pan suite reverb. That made her feel even bigger and more real.”

The Forest
The opening forest scene was another key moment for sound. The series is set in South Carolina in 1947, and the author’s estate needed to feel like it was in a remote area surrounded by lush, dense forest. “With this location comes so many different sonic elements. We had to communicate that right from the beginning and pull the audience in,” says Braver.

Genevieve Jones, former director of operations at Skybound Entertainment and producer on Delusion: Lies Within, says, “I love the bed of sound that MelodyGun created for the intro. It felt rich. Jon really wanted to go to the south and shoot that sequence but we weren’t able to give that to him. Knowing that I could go to MelodyGun and they could bring that richness was awesome.”

Since the viewer can turn his/her head, the sound of the forest needed to change with those movements. A mix of six different winds spaced into different areas created a bed of textures that shifts with the viewer’s changing perspective. It makes the forest feel real and alive. Ouziel says, “The creative and technical aspects of this series went hand in hand. The spacing of the VR environment really affects the way that you approach ambiences and world-building. The house interior, too, was done in a similar approach, with low winds and tones for the corners of the rooms and the different spaces. It gives you a sense of a three-dimensional experience while also feeling natural and in accordance to the world that Jon made.”

Bringing Live Theater to VR
The sound of the VR series isn’t a direct translation of the live theater experience. Instead, it captures the spirit of the live show in a way that feels natural and immersive, but also cinematic. Ouziel points to the sounds that bring puppet master Marion to life. Here, they had the opportunity to go beyond what was possible with the live theater performance. Ouziel says, “I pitched to Jon the idea that Marion should sound like a big, worn wooden ship, so we built various layers from these huge wooden creaks to match all his movements and really give him the size and gravitas that he deserved. His vocalizations were made from a couple elements including a slowed and pitched version of a raccoon chittering that ended up feeling perfectly like a huge creature chuckling from deep within. There was a lot of creative opportunity here and it was a blast to bring to life.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

Peaks or valleys in that line indicate unwanted boosts or cuts at certain frequencies, and there is a reason you don’t want them in your monitoring system. If there are peaks in your speakers from the hundred-cycle mark on down, you get boominess. At 250 to 350 cycles, you get mud. At around a thousand cycles, you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.
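Those rules of thumb can be expressed as a tiny sketch. This is illustrative only; the function name and the exact band edges are our own assumptions, and real analyzers work with far finer resolution and psychoacoustic weighting.

```python
def describe_peak(freq_hz):
    """Name the coloration a response peak at this frequency tends to produce."""
    if freq_hz <= 100:
        return "boominess"       # excess low end
    if 250 <= freq_hz <= 350:
        return "mud"             # low-mid congestion
    if 800 <= freq_hz <= 1200:
        return "honkiness"       # nasal, held-nose quality
    if freq_hz >= 8000:
        return "brittleness"     # harsh, fatiguing highs
    return "coloration"          # anything else: still not flat

print(describe_peak(60))    # boominess
print(describe_peak(300))   # mud
print(describe_peak(1000))  # honkiness
```

The point of calibration is to push every band back toward that flat line so none of these labels apply.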

Before

After

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarworks’ calibration mic and software come in. They give you a way to sonically flatten out your room by taking a speaker measurement. This gives you a response chart based upon the acoustics of your room. You apply this correction using the plugin and your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since there are over 30,000 locations that use Sonarworks, you can send out your finished mix minus the Sonarworks plugin, since the receiving room will have different acoustics and its own calibration setting. Now, the mastering lab you use will be hearing your mix on their Sonarworks acoustically flat system… just as you mixed it.
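Conceptually, this kind of room correction applies the inverse of the measured deviation, so that speaker-plus-room-plus-correction sums to flat. The sketch below uses made-up measurement numbers at four spot frequencies; real correction curves are far finer-grained and also address phase, so treat this as a sketch of the idea, not of Sonarworks' actual processing.

```python
# Hypothetical measured deviation from flat, in dB, at a few spot frequencies (Hz).
measured_db = {100: 4.0, 315: 2.5, 1000: 0.0, 8000: -3.0}

# The correction curve is (roughly) the negation of the measured deviation...
correction_db = {freq: -dev for freq, dev in measured_db.items()}

# ...so room response plus correction lands back on the flat line.
corrected = {freq: measured_db[freq] + correction_db[freq] for freq in measured_db}
print(corrected)  # every band at 0.0 dB deviation
```

This is also why the plugin must be removed before sending the mix out: baked-in correction for your room would mis-correct everyone else's.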

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “Review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed at the difference that the Sonarworks software made. It opened my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on
my Mac Pro 6-core trash can running macOS High Sierra, 64GB RAM, 12GB of RAM on the D700 video cards; a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and CFast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Komplete Kontrol S25 keyboard; and a Focusrite Clarett 4Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.

Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television, and branded content for clients such as NBC, Cosmopolitan and Vice, among others.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy

Karol Urban is president of CAS, others named to board

As a result of the Cinema Audio Society board of directors election, Karol Urban will replace CAS president Mark Ulano, whose term has come to an end. Steve Venezia will replace treasurer Peter Damski, who opted not to run for re-election.

“I am so incredibly honored to have garnered the confidence of our esteemed members,” says Urban. “After years of serving under different presidents and managing the content for the CAS Quarterly, I have learned so much about the achievements, interests, talents and concerns of our membership. I am excited to be given this new platform to celebrate the achievements and herald new opportunities to serve this incredibly dynamic and talented community.”

For 2019, the executive committee will include newly elected Urban and Venezia as well as VP Phillip W. Palmer, CAS, and secretary David J. Bondelevitch, CAS, who were not up for election.

The incumbent CAS board members (production) who were re-elected are Peter J. Devlin, CAS, Lee Orloff, CAS, and Jeffrey W. Wexler, CAS. They will be joined by newly elected Amanda Beggs, CAS, and Mary H. Ellis, CAS, who are taking the seats of outgoing board members Chris Newman, CAS, and Lisa Pinero, CAS.

Incumbent board members (post production) who were re-elected are Bob Bronow, CAS, and Mathew Waters, CAS. They will be joined by newly elected board members Onnalee Blank, CAS, and Mike Minkler, CAS, who will be taking the seats of Urban and Steve Venezia, CAS, who are now officers.

Continuing to serve, as their terms were not up for re-election, are Willie Burton, CAS, and Glen Trew, CAS, for production, and Tom Fleischman, CAS, Doc Kane, CAS, Sherry Klein, CAS, and Marti Humphrey, CAS, for post production.

The new board will be installed at the 55th Annual CAS Awards on Saturday, February 16.

Pixelogic London adds audio mix, digital cinema theaters

Pixelogic has added new theaters and production suites to its London facility, which offers creation and mastering of digital cinema packages and theatrical screening of digital cinema content, as well as feature and episodic audio mixing.

Pixelogic’s London location now features six projector-lit screening rooms: three theaters and three production suites. Purpose-built from the ground up, the theaters offer HDR picture and immersive audio technologies, including Dolby Atmos and DTS:X.

The equipment offered in the three theaters includes Avid S6 and S3 consoles and Pro Tools systems that support a wide range of theatrical mixing services, complemented by two new ADR booths.

Making audio pop for Disney’s Mary Poppins Returns

By Jennifer Walden

As the song says, “It’s a jolly holiday with Mary.” And just in time for the holidays, there’s a new Mary Poppins musical to make the season bright. In theaters now, Disney’s Mary Poppins Returns is directed by Rob Marshall, who with Chicago, Nine and Into the Woods on his resume, has become the master of modern musicals.

Renée Tondelli

In this sequel, Mary Poppins (Emily Blunt) comes back to help the now-grown up Michael (Ben Whishaw) and Jane Banks (Emily Mortimer) by attending to Michael’s three children: Annabel (Pixie Davies), John (Nathanael Saleh) and Georgie (Joel Dawson). It’s a much-needed reunion for the family as Michael is struggling with the loss of his wife.

Mary Poppins Returns is another family reunion of sorts. According to Renée Tondelli, who along with Eugene Gearty, supervised and co-designed the sound, director Marshall likes to use the same crews on all his films. “Rob creates families in each phase of the film, so we all have a shorthand with each other. It’s really the most wonderful experience you can have in a filmmaking process,” says Tondelli, who has worked with Marshall on five films, three of which were his musicals. “In the many years of working in this business, I have never worked with a more collaborative, wonderful, creative team than I have on Mary Poppins Returns. That goes for everyone involved, from the picture editor down to all of our assistants.”

Sound editorial took place in New York at Sixteen 19, the facility where the picture was being edited. Sound mixing was also done in New York, at Warner Bros. Sound.

In his musicals, Marshall weaves songs into scenes in a way that feels organic. The songs are coaxed from the emotional quotient of the story. That’s not only true for how the dialogue transitions into the singing, but also for how the music is derived from what’s happening in the scene. “Everything with Rob is incredibly rhythmic,” she says. “He has an impeccable sense of timing. Every breath, every footstep, every movement has a rhythmic cadence to it that relates to and works within the song. He does this with every artform in the production — with choreography, production design and sound design.”

From a sound perspective, Tondelli and her team worked to integrate the songs by blending the pre-recorded vocals with the production dialogue and the ADR. “We combined all of those in a micro editing process, often syllable by syllable, to create a very seamless approach so that you can’t really tell where they stop talking and start singing,” she says.

The Conversation
For example, near the beginning of the film, Michael is looking through the attic of their home on Cherry Tree Lane as he speaks to the spirit of his deceased wife, telling her how much he misses her in a song called “The Conversation.” Tondelli explains, “It’s a very delicate scene, and it’s a song that Michael was speaking/singing. We constantly cut between his pre-records and his production dialogue. It was an amazing collaboration between me, the supervising music editor Jennifer Dunnington and re-recording mixer Mike Prestwood Smith. We all worked together to create this delicate balance so you really feel that he is singing his song in that scene in that moment.”

Since Michael is moving around the attic as he’s performing the song, the environment affects the quality of the production sound. As he gets closer to the window, the sound bounces off the glass. “Mike [Prestwood Smith] really had his work cut out for him on that song. We were taking impulse responses from the end of the slates and feeding them into Audio Ease’s Altiverb to get the right room reverb on the pre-records. We did a lot of impulse responses and reverbs, and EQs to make that scene all flow, but it was worth it. It was so beautiful.”
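The workflow described here — capture an impulse response of the space, then apply it to the dry pre-records — is the basis of convolution reverb tools like Altiverb. A minimal sketch of the underlying idea (illustrative only, not Altiverb’s implementation; the function name and the wet/dry blend are assumptions):

```python
import numpy as np

def convolution_reverb(dry, ir, wet_mix=0.3):
    """Blend a dry recording with its convolution against a room
    impulse response (IR), mimicking the space the IR was captured in.
    Both arguments are mono float signals at the same sample rate."""
    # convolving with the IR "plays" the dry signal through the room;
    # trim the reverb tail so the output matches the dry length
    wet = np.convolve(dry, ir)[: len(dry)]
    peak = np.max(np.abs(wet))
    if peak > 0:
        # match the wet peak to the dry peak to avoid clipping
        wet = wet * (np.max(np.abs(dry)) / peak)
    return (1.0 - wet_mix) * dry + wet_mix * wet
```

Production convolution reverbs run this as partitioned FFT convolution so that multi-second impulse responses can be applied in real time.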

The Bowl
They also captured impulse responses for another sequence, which takes place inside a ceramic bowl. The sequence begins with the three Banks children arguing over their mother’s bowl. They accidentally drop it and it breaks. Mary and Jack (Lin-Manuel Miranda) notice the bowl’s painted scenery has changed. The horse-drawn carriage now has a broken wheel that must be fixed. Mary spins the bowl and a gust of wind pulls them into the ceramic bowl’s world, which is presented in 2D animation. According to Tondelli, the sequence was hand-drawn, frame by frame, as an homage to the original Mary Poppins. “They actually brought some animators out of retirement to work on this film,” she says.

Tondelli and co-supervising sound editor/co-sound designer Eugene Gearty placed mics inside porcelain bowls, in a porcelain sink, and near marble tiles, which they thumped with rubber mallets, broken pieces of ceramic and other materials. The resulting ring-out was used to create reverbs that were applied to every element in the ceramic bowl sequence, from the dialogue to the Foley. “Everything they said, every step they took had to have this ceramic feel to it, so as they are speaking and walking it sounds like it’s all happening inside a bowl,” Tondelli says.

She first started working on this hand-drawn animation sequence when it showed little more than the actors against a greenscreen with a few pencil drawings. “The fastest and easiest way to make a scene like that come alive is through sound. The horse, which was possibly the first thing that was drawn, is pulling the carriage. It dances in this syncopated rhythm with the music so it provides a rhythmic base. That was the first thing that we tackled.”

After the carriage is fixed, Mary and her troupe walk to the Royal Doulton Music Hall where, ultimately, Jack and Mary are going to perform. Traditionally, a music hall in London is very rowdy and boisterous. The audience is involved in the show and there’s an air of playfulness. “Rob said to me, ‘I want this to be an English music hall, Renée. You really have to make that happen.’ So I researched what music halls were like and how they sounded.”

Since the animation wasn’t complete, Tondelli consulted with the animators to find out who — or rather what — was going to be in the audience. “There were going to be giraffes dressed up in suits with hats and Indian elephants in beautiful saris, penguins on the stage dancing with Jack and Mary, flamingos, giant moose and rabbits, baby hippos and other animals. The only way I thought I could do this was to go to London and hire actors of all ages who could do animal voices.”

But there were some specific parameters that had to be met. Tondelli defines the world of Mary Poppins Returns as being “magical realism,” so the animals couldn’t sound too cartoony. They had to sound believably like animal versions of British citizens. Also, the actors had to be able to sing in their animal voices.

According to Tondelli, they recorded 15 actors at a time for a period of five days. “I would call out, ‘Who can do an orangutan?’ And then the actors would all do voices and we’d choose one. Then they would do the whole song and sing out and call out. We had all different accents — Cockney, Welsh and Scottish,” she says. “All the British Isles came together on this and, of course, they all loved Mary and knew all the songs so they sang along with her.”

On the Dolby Atmos mix, the music hall scene really comes alive. The audience’s voices are coming from the rafters and all around the walls and the music is reverberating into the space — which, by the way, no longer sounds like it’s in a ceramic bowl even though the music hall is in the ceramic bowl world. In addition to the animal voices, there are hooves and paws for the animals’ clapping. “We had to create the clapping in Foley because it wasn’t normal clapping,” explains Tondelli. “The music hall was possibly the most challenging, but also the funnest scene to do. We just loved it. All of us had a great time on it.”

The Foley
The Foley elements in Mary Poppins Returns often had to be performed in perfect sync with the music. On the big dance numbers, like “Trip the Light Fantastic,” the Foley was an essential musical element since the dances were reconstructed sonically in post. “Everything for this scene was wiped away, even the vocals. We ended up using a lot of the records for this one and a lot less production sound,” says Tondelli.

In “Trip the Light Fantastic,” Jack is bringing the kids back home through the park, and they emerge from a tunnel to see nearly 50 lamplighters on lampposts. Marshall and John DeLuca (choreographer/producer/screen story writer) arranged the dance to happen in multiple layers, with each layer doing something different. “The background dancers were doing hand slaps and leg swipes, and another layer was stepping on and off of these slate surfaces. Every time the dancers would jump up on the lampposts, they’d hit it and each would ring out in a different pitch,” explains Tondelli.

All those complex rhythms were performed in Foley in time to the music. It’s a pretty tall order to ask of any Foley artist but Tondelli has the perfect solution for that dilemma. “I hire the co-choreographers (for this film, Joey Pizzi and Tara Hughes) or dancers that actually worked on the film to do the Foley. It’s something that I always do for Rob’s films. There’s such a difference in the performance,” she says.

Tondelli worked with the Foley team of Marko Costanzo and George Lara at c5 Sound in New York, who helped to build custom surfaces — like a slate-on-sand surface for the lamplighter dance — and arrange multi-surface layouts to optimally suit the Foley performer’s needs.

For instance, in the music hall sequence, the dance on stage incorporates books, so they needed three different surfaces: wood, leather and a papery-sounding surface set up in a logical, easily accessible way. “I wanted the dancer performing the Foley to go through the entire number while jumping off and on these different surfaces so you felt like it was a complete dance and not pieced together,” she says.

For the lamplighter dance, they had a big, thick pig iron pipe next to the slate floor so that the dancer performing the Foley could hit it every time the dancers on-screen jumped up on the lampposts. “So the performer would dance on the slate floor, then hit the pipe and then jump over to the wood floor. It was an amazingly syncopated rhythmic soundtrack,” says Tondelli.

“It was an orchestration, a beautiful sound orchestra, a Foley orchestra that we created and it had to be impeccably in sync. If there was a step out of place you’d hear it,” she continues. “It was really a process to keep it in sync through all the edit conforms and the changes in the movie. We had to be very careful doing the conforms and making the adjustments because even one small mistake and you would hear it.”

The Wind
Wind plays a prominent role in the story. Mary Poppins descends into London on a gust of wind. Later, they’re transported into the ceramic bowl world via a whirlwind. “It’s everywhere, from a tiny leaf blowing across the sidewalk to the huge gale in the park,” attests Tondelli. “Each one of those winds has a personality that Eugene [Gearty] spent a lot of time working on. He did amazing work.”

As far as the on-set fans and wind machines wreaking havoc on the production dialogue, Tondelli says there were two huge saving graces. First was production sound mixer Simon Hayes, who did a great job of capturing the dialogue despite the practical effects obstacles. Second was dialogue editor Alexa Zimmerman, who was a master at iZotope RX. All told, about 85% of the production dialogue made it into the film.

“My goal — and my unspoken order from Rob — was to not replace anything that we didn’t have to. He’s so performance-oriented. He arduously goes over every single take to make sure it’s perfect,” says Tondelli, who also points out that Marshall isn’t afraid of using ADR. “He will pick words from a take and he doesn’t care if it’s coming from a pre-record and then back to ADR and then back to production. Whichever has the best performance is what wins. Our job then is to make all of that happen for him.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeny

Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.
Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko

Capturing realistic dialogue for The Front Runner

By Mel Lambert

Early on in his process, The Front Runner director Jason Reitman asked frequent collaborator and production sound mixer Steve Morrow, CAS, to join the production. “It was maybe inevitable that Jason would ask me to join the crew,” says Morrow, who has worked with the director on Labor Day, Up in the Air and Thank You for Smoking. “I have been part of Jason’s extended family for at least 10 years — having worked with his father Ivan Reitman on Draft Day — and know how he likes to work.”

Steve Morrow

This Sony Pictures film was co-written by Reitman, Matt Bai and Jay Carson, and based on Bai’s book, “All the Truth Is Out.” The Front Runner follows the rise and fall of Senator Gary Hart, set during his unsuccessful presidential campaign in 1988 when he was famously caught having an affair with the much younger Donna Rice. Despite capturing the imagination of young voters, and being considered the overwhelming front runner for the Democratic nomination, Hart’s campaign was sidelined by the affair.

It stars Hugh Jackman as Gary Hart, Vera Farmiga as his wife Lee, J.K. Simmons as campaign manager Bill Dixon and Alfred Molina as the Washington Post’s managing editor, Ben Bradlee.

“From the first read-through of the script, I knew that we would be faced with some production challenges,” recalls Morrow, a 20-year industry veteran. “There were a lot of ensemble scenes with the cast talking over one another, and I knew from previous experience that Jason doesn’t like to rely on ADR. Not only is he really concerned about the quality of the sound we secure from the set — and gives the actors space to prepare — but Jason’s scripts are always so well-written that they shouldn’t need replacement lines in post.”

Ear Candy Post’s Perry Robertson and Scott Sanders, MPSE, served as co-supervising sound editors on the project, which was re-recorded on Deluxe Stage 2 — the former Glen Glenn Sound facility — by Chris Jenkins handling dialogue and music and Jeremy Peirson, CAS, overseeing sound effects. Sebastian Sheehan Visconti was sound effects editor.

With as many as two dozen actors in a busy scene, Morrow soon realized that he would have to mic all of the key campaign team members. “I knew that we were shooting a political film like [Alan J. Pakula’s] All the President’s Men or [Michael Ritchie’s] The Candidate, so I referred back to the multichannel techniques pioneered by Jim Webb and his high-quality dialogue recordings. I elected to use up to 18 radio mics for those ensemble scenes,” including Reitman’s long opening sequence in which the audience learns who the key participants are on the campaign trail, “while recording each actor on a separate track, together with a guide mono mix of the key participants for picture editor Stefan Grube.”

Reitman is well known for his films’ elaborate opening title sequences and often highly subjective narration from a main character. His motion pictures typically revolve around characters that are brashly self-confident, but then begin to rethink their lives and responsibilities. He is also reported to be a fan of ‘70s-style cinéma vérité, which uses a meandering camera and overlapping dialogue to draw the audience into an immersive reality. The Front Runner’s soundtrack is layered with dialogue, together with a constant hum of conversation — from the principals to the press and campaign staff. Since Bai and Carson have written political speeches, Reitman had them on set to ensure that conversations sounded authentic.

Even though there might be four or so key participants speaking in a scene, “Jason wants to capture all of the background dialogue between working press and campaign staff, for example,” Morrow continues.

“He briefed all of the other actors on what the scene was about so they could develop appropriate conversations and background dialogue while the camera roamed around the room. In other words, if somebody was on set they got a mic — one track per actor. In addition to capturing everything, Jason wanted me to have fun with the scene; he likes a solid mix for the crew, dailies and picture editorial, so I gave him the best I could get. And we always had the ability to modify it later in post production from the iso mic channels.”

Morrow recorded the pre-fader individual tracks at between 10dB and 15dB lower than the main mix, “which I rode hot, knowing that we could go back and correct it in post. Levels on that main mix were within ±5 dB most of the time,” he says. Assisting Morrow during the 40-day shoot, which took place in and around Atlanta and Savannah, were Collin Heath and Craig Dollinger, who also served as the boom operator on a handful of scenes.
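For reference, the offsets Morrow quotes map to linear amplitude by the standard relation ratio = 10^(dB/20), so iso tracks padded 10 to 15 dB below the mix sit at roughly a third to under a fifth of its amplitude. A quick sketch of that arithmetic (helper names are illustrative):

```python
import math

def db_to_amplitude(db):
    """Convert a level offset in decibels to a linear amplitude ratio."""
    return 10 ** (db / 20)

def amplitude_to_db(ratio):
    """Inverse: a linear amplitude ratio back to decibels."""
    return 20 * math.log10(ratio)
```

For example, a track recorded 10 dB down is at about 0.32x the mix’s amplitude, and 15 dB down is at about 0.18x.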

The mono production mix was also useful for the camera crew, says Morrow. “They sometimes had problems understanding the dramatic focus of a particular scene. In other words, ‘Where does my eye go?’ When I fed my mix to their headphones they came to understand which actors we were spotlighting from the script. This allowed them to follow that overview.”

Production Tools
Morrow used a Behringer Midas Model M32R digital console that features 16 rear-panel inputs and 16 more inputs via a stage box that connects to the M32R via a Cat-5 cable. The console provided pre-fader and mixed outputs to Morrow’s pair of 64-track Sound Devices 970 hard-disk recorders — a main and a parallel backup — via Audinate Dante digital ports. “I also carried my second M32R mixer as a spare,” Morrow says. “I turned over the Compact Flash media at the end of each day’s shooting and retained the contents of the 970’s internal 1TB SSDs and external back-up drives until the end of post, just in case. We created maybe 30GB of data per recorder per day.”

Color coding helps Morrow mix dialogue more accurately.

For easy level checking, the two recorders with front-panel displays were mounted on Morrow’s production sound cart directly above his mixing console. “When I can, I color code the script to highlight the dialogue of key characters in specific scenes,” he says. “It helps me mix more accurately.”

RF transmitters comprised two dozen Lectrosonics SSM Micro belt-pack units — Morrow bought six or seven more for the film — linked to a bank of Lectrosonics Venue2 modular four-channel and three-channel VR receivers. “I used my collection of Sanken COS-11D miniature lavalier microphones for the belt packs. They are my go-to lavs with clean audio output and excellent performance. I also have some DPA lavaliers, if needed.”

With 20+ RF channels simultaneously in use within metropolitan centers, frequency coordination was an essential chore to ensure consistent operation for all radio systems. “The Lectrosonics Venue receivers can auto-assign radio-mic frequencies,” Morrow explains. “The best way to do this is to have everything turned off, and then one by one let the system scan the frequency spectrum. When it finds a good channel, you assign it to the first microphone and then repeat that process for the next radio transmitters. I try to keep up with FCC deliberations [on diminishing RF spectrum space], but realize that companies who manufacture this equipment also need to be more involved. So, together, I feel good that we’ll have the separation we all need for successful shoots.”
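The scan-and-assign process Morrow walks through amounts to a greedy search over the measured spectrum: take the cleanest free frequency, then repeat while keeping each new assignment clear of the ones already made. A toy model of that idea (illustrative only; real coordination, as in the Venue receivers, must also avoid intermodulation products between transmitter pairs, which this skips):

```python
def assign_frequencies(scan, num_channels, min_spacing_khz=400):
    """Greedy frequency coordination sketch.

    scan: dict mapping frequency in kHz -> measured noise floor in dBm
    (lower is cleaner). Returns up to num_channels frequencies, each at
    least min_spacing_khz away from every other assignment."""
    assigned = []
    # consider the quietest frequencies first
    for freq in sorted(scan, key=lambda f: scan[f]):
        if all(abs(freq - used) >= min_spacing_khz for used in assigned):
            assigned.append(freq)
            if len(assigned) == num_channels:
                break
    return sorted(assigned)
```

The hypothetical 400 kHz minimum spacing stands in for whatever channel separation a given transmitter family requires.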

Morrow’s setup.

Morrow also made several location recordings on set. “I mounted a couple of lavaliers on bumpers to secure car-byes and other sounds for supervising sound editor Perry Robertson, as well as backgrounds in the house during a Labor Day gathering. We also recorded Vera Farmiga playing the piano during one scene — she is actually a classically-trained pianist — using a DPA Model 5099 microphone (which I also used while working on A Star is Born). But we didn’t record much room tone, because we didn’t find it necessary.”

During scenes at a campaign rally, Morrow provided a small PA system that comprised a couple of loudspeakers mounted on a balcony and a vocal microphone on the podium. “We ran the system at medium levels, simply to capture the reverb and ambience of the auditorium,” he explains, “but not so much that it caused problems in post production.”

Summarizing his experience on The Front Runner, Morrow offers that Reitman, and his production partner Helen Estabrook, bring a team spirit to their films. “The set is a highly collaborative environment. We all hang out with one another and share birthdays together. In my experience, Jason’s films are always from the heart. We love working with him 120%. The low point of the shoot is going home!”


Mel Lambert has been involved with production and post on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Sound Lounge Film+Television adds Atmos mixing, Evan Benjamin

Sound Lounge’s Film + Television division, which provides sound editorial, ADR and mixing services for episodic television, features and documentaries, is upgrading its main mix stage to support editing and mixing in the Dolby Atmos format.

Sound Lounge Film + Television division EP Rob Browning says that the studio expects to begin mixing in Dolby Atmos by the beginning of next year, which will allow it to target more high-end studio features. Sound Lounge is also installing a Dolby Atmos Mastering Suite, a custom hardware/software solution for preparing Dolby Atmos content for Blu-ray and streaming release.

It has also added veteran supervising sound editor, designer and re-recording mixer Evan Benjamin to its team. Benjamin is best known for his work in documentaries, including the feature doc RBG, about Supreme Court Justice Ruth Bader Ginsburg, as well as documentary series for Netflix, Paramount Network, HBO and PBS.

Benjamin is a 20-year industry veteran with credits on more than 130 film, television and documentary projects, including Paramount Network’s Rest in Power: The Trayvon Martin Story and HBO’s Baltimore Rising. Additionally, his credits include Time: The Kalief Browder Story, Welcome to Leith, Joseph Pulitzer: Man of the People and Moynihan.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking about what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the mill birds and distant highway sound effects from my SoundMiner server. I added some Foley sound effects of the man handling the gas pump too.

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and not matching from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used the ambience match to create a seamless background of the ambience. iZotope RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.
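iZotope RX’s repair algorithms are proprietary and considerably more sophisticated, but the classic textbook approach to the kind of broadband cleanup Recker describes is spectral subtraction: estimate the noise’s average magnitude spectrum from a noise-only stretch, then subtract it from each analysis frame. A deliberately simplified sketch (illustrative; real tools add overlap-add windowing, over-subtraction control and musical-noise suppression):

```python
import numpy as np

def spectral_subtract(signal, noise_profile, frame=1024):
    """Very simplified spectral subtraction (not iZotope RX).
    noise_profile is a stretch of noise-only audio used to estimate
    the average noise magnitude per FFT bin."""
    noise_frames = [noise_profile[i:i + frame]
                    for i in range(0, len(noise_profile) - frame + 1, frame)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[i:i + frame])
        # subtract the noise estimate from the magnitude, floor at zero,
        # and resynthesize with the original phase
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        phase = np.angle(spec)
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
    return out
```

Applied naively like this, the method leaves audible artifacts, which is exactly why dedicated tools exist for dialogue work.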

Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24 X 13 projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music on brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for the Ad Council, as well as Resistance Radio for Amazon Studios’ The Man in the High Castle, which collectively won multiple Cannes Lions, Clio and One Show awards and garnered two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.

Cinema Audio Society sets next awards date and timeline

The Cinema Audio Society (CAS) will be holding its 55th Annual CAS Awards on Saturday, February 16, 2019 at the InterContinental Los Angeles Downtown in the Wilshire Grand Ballroom. The CAS Awards recognize outstanding sound mixing in film and television as well as outstanding products for production and post. Recipients for the CAS Career Achievement Award and CAS Filmmaker Award will be announced later in the year.

The InterContinental Los Angeles Downtown is a new venue for the awards. They were held at the Omni Los Angeles Hotel at California Plaza last year.

The timeline for the awards is as follows:
• Entry submission form will be available online on the CAS website on Thursday, October 11, 2018.
• Entry submissions are due online by 5:00pm PST on Thursday, November 15, 2018.
• Outstanding product entry submissions are due online by 5:00pm PST on Friday, December 7, 2018.
• Nomination ballot voting begins online on Thursday, December 13, 2018.
• Nomination ballot voting ends online at 5:00pm PST on Thursday, January 3, 2019.
• Final nominees in each category will be announced on Tuesday, January 8, 2019.
• Final voting begins online on Thursday, January 24, 2019.
• Final voting ends online at 5:00pm PST on Wednesday, February 6, 2019.

 

Behind the Title: PlushNYC partner/mixer Mike Levesque, Jr.

NAME: Michael Levesque, Jr.

COMPANY: PlushNYC

CAN YOU DESCRIBE YOUR COMPANY?
We provide audio post production.

WHAT’S YOUR JOB TITLE?
Partner/Mixer/Sound Designer

WHAT DOES THAT ENTAIL?
The foundation of it all for me is that I’m a mixer and a sound designer. I became a studio owner/partner organically because I didn’t want to work for someone else. The core of my role is giving my clients what they want from an audio post perspective. The other parts of my job entail managing the staff, working through technical issues, empowering senior employees to excel in their careers and coaching junior staff when given the opportunity.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Every day I find myself being the janitor in many ways! I’m a huge advocate of leading by example and I feel that no task is too mundane for any team member to take on. So I don’t cast shade on picking up a mop or broom, and also handle everything else above that. I’m a part of a team, and everyone on the team participates.

During our latest facility remodel, I took a very hands-on approach. As a bit of a weekend carpenter, I naturally gravitate toward building things, and that was no different in the studio!

WHAT TOOLS DO YOU USE?
Avid Pro Tools. I started out on analog ¼-inch tape and later moved to the digital editing system SSL ScreenSound. I’ve been using Pro Tools since its humble beginnings in 1997, as one of its early adopters, and it remains my tool of choice.

WHAT’S YOUR FAVORITE PART OF THE JOB?
For me, my favorite part about the job is definitely working with the clients. That’s where I feel I am able to put my best self forward. In those shoes, I have the most experience. I enjoy the conversation that happens in the room, the challenges that I get from the variety of projects and working with the creatives to bring their sonic vision to life. Because of the amount of time I spend in the studio with my clients, one of the great results, besides the work itself, is wonderful long-term friendships. You get to meet a lot of different people and experience a lot of different walks of life, and that’s incredibly rewarding for me.

WHAT’S YOUR LEAST FAVORITE?
We’ve been really lucky to have regular growth over the years, but the logistics of that can be challenging at times. Expansion in NYC is a constant uphill battle!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The train ride in. With no distractions, I’m able to get the most work done. It’s quiet and allows me to be able to plan my day out strategically while my clarity is at its peak. That way I can maximize my day and analyze and prioritize what I want to get done before the hustle and bustle of the day begins.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I weren’t a mixer/sound designer, I would likely be a general contractor or in a role where I was dealing with building and remodeling houses.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started when I was 19 and I knew pretty quickly that this was the path for me. When I first got into it, I wanted to be a music producer. Being a novice musician, it was very natural for me.

Borgata

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I recently worked on a large-scale project for Frito-Lay, a project for ProFlowers and Shari’s Berries for Valentine’s Day, a spot for Massage Envy and a campaign for the Broadway show Rocktopia. I’ve also worked on a number of projects for Vevo, including pieces for The World According To… series for artists — that includes a recent one with Jaden Smith. I also recently worked on a spot with SapientRazorfish New York for Borgata Casino that goes on a colorful, dreamlike tour of the casino’s app.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Back in the early 2000s, I mixed a DVD box set called Journey Into the Blues, a PBS film series from Martin Scorsese that won a Grammy for Best Historical Album and Best Album Notes.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– My cell phone to keep me connected to every aspect of life.
– My Garmin GPS Watch to help me analytically look at where I’m performing in fitness.
– Pro Tools to keep the audio work running!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m an avid triathlete, so personal wellness is a very big part of my life. Training daily is a really good stress reliever, and it allows me to focus both at work and at home with the kids. It’s my meditation time.

Michael Semanick: Mixing SFX, Foley for Star Wars: The Last Jedi

By Jennifer Walden

Oscar-winning re-recording mixer Michael Semanick from Skywalker Sound mixed the sound effects, Foley and backgrounds on Star Wars: The Last Jedi, which has earned an Oscar nomination for Sound Mixing.

Technically, this is not Semanick’s first experience with the Star Wars franchise — he’s credited as an additional mixer on Rogue One — but on The Last Jedi he was a key figure in fine-tuning the film’s soundtrack. He worked alongside re-recording mixers Ren Klyce and David Parker, and with director Rian Johnson, to craft a soundtrack that was bold and dynamic. (Look for next week’s Star Wars story, in which re-recording mixer Ren Klyce talks about his approach to mixing John Williams’ score.)

Michael Semanick

Recently, Semanick shared his story of what went into mixing the sound effects on The Last Jedi. He mixed at Skywalker in Nicasio, California, on the Kurosawa Stage.

You had all of these amazing elements — Skywalker’s effects, John Williams’ score and the dialogue. How did you bring clarity to what could potentially be a chaotic soundtrack?
Yes, there are a lot of elements that come in, and you have to balance these things. It’s easy on a film like this to get bombastic and assault the audience, but that’s one of the things that Rian didn’t want to do. He wanted to create dynamics in the track and get really quiet so that when it does get loud it’s not overly loud.

So when creating that I have to look at all of the elements coming in and see what we’re trying to do in each specific scene. I ask myself, “What’s this scene about? What’s this storyline? What’s the music doing here? Is that the thread that takes us to the next scene or to the next place? What are the sound effects? Do we need to hear these background sounds, or do we need just the hard effects?”

Essentially, it’s me trying to figure out how many frequencies are available and how much dialogue has to come through so the audience doesn’t lose the thread of the story. It’s about deciding when it’s right to feature the sound effects or take the score down to feature a big explosion and then bring the score back up.

It’s always a balancing act, and it’s easy to get overwhelmed and throw it all in there. I might need a line of dialogue to come through, so the backgrounds go. I don’t want to distract the audience. There is so much happening visually in the film that you can’t put sound on everything. Otherwise, the audience wouldn’t know what to focus on. At least that’s my approach to it.

How did you work with the director?
As we mixed the film with Rian, we found what types of sounds defined the film and what types of moments defined the film in terms of sound. For example, by the time you reach the scene when Vice Admiral Holdo (Laura Dern) jumps to hyperspace into the First Order’s fleet, everything goes really quiet. The sound there doesn’t go completely out — it feels like it goes out, but there’s sound. As soon as the music peaks, I bring in a low space tone. Well, if there was a tone in space, I imagine that is what it would sound like. So there is sound constantly through that scene, but the quietness goes on for a long time.

One of the great things about that scene was that it was always designed that way. While I noted how great that scene was, I didn’t really get it until I saw it with an audience. They became the soundtrack, reacting with gasps. I was at a screening in Seattle, and when we hit that scene you could hear that the people were just stunned; one guy in the audience went, “Yeah!”

There are other areas in the film where we go extremely quiet or take the sound out completely. For example, when Rey (Daisy Ridley) and Kylo Ren (Adam Driver) first force-connect, the sound goes out completely… you only hear a little bit of their breathing. There’s one time when the force connection catches them off guard — when Kylo had just gotten done working out and Rey was walking somewhere — we took the sound completely out while she was still moving.

Rian loved it because when we were working on that scene we were trying to get something different. We used to have sound there, all the way through the scene. Then Rian said, “What happens if you just start taking some of the sounds out?” So, I started pulling sounds out and sure enough, when I got the sound all the way out — no music, no sounds, no backgrounds, no nothing — Rian was like, “That’s it! That just draws you in.” And it does. It pulls you into their moment. They’re pulled together even though they don’t want to be. Then we slowly brought it back in with their breathing, a little echo and a little footstep here or there. Having those types of dynamics worked into the film helped the scene at the end.

Rian shot and cut the picture so we could have these moments of quiet. It was already set up, visually and story-wise, to allow that to happen. When Rey goes into the mirror cave, it’s so quiet. You hear all the footsteps and the reverbs and reflections in there. The film lent itself to that.

What was the trickiest scene to mix in terms of the effects?
The moment Kylo Ren and Rey touch hands via the force connection. That was a real challenge. They’re together in the force connection, but they weren’t together physically. We were cutting back and forth from her place to Kylo Ren’s place. We were hearing her campfire and her rain. It was a very delicate balance between that and the music. We could have had the rain really loud and the music blasting, but Rian wanted the rain and fire to peel away as their hands were getting closer. It was so quiet and when they did touch there was just a bit of a low-end thump. Having a big sound there just didn’t have the intimacy that the scene demanded. It can be so hard to get the balance right to where the audience is feeling the same thing as the characters. The audience is going, “No, oh no.” You know what’s going to come, but we wanted to add that extra tension to it sonically. For me, that was one of the hardest scenes to get.

What about the action scenes?
They are tough because they take time to mix. You have to decide what you want to play. For example, when the ships are exploding as they’re trying to get away before Holdo rams her ship into the First Order’s, you have all of that stuff falling from the ceiling. We had to pick our moments. There’s all of this fire in the background and TIE fighters flying around, and you can’t hear them all or it will be a jumbled mess. I can mix those scenes pretty well because I just follow the story point. We need to hear this to go with that. We have to have a sound of falling down, so let’s put that in.

Is there a scene you had fun with?
The fight in Snoke’s (Andy Serkis) room, between Rey and Kylo Ren. That was really fun because it was like wham-bam, and you have the lightsaber flying around. In those moments, like when Rey throws the lightsaber, we drop the sound out for a split second so when Kylo turns it on it’s even more powerful.

That scene was the most fun, but the trickiest one was that force-touch scene. We went over it a hundred different ways, to just get it to feel like we were with them. For me, if the sound calls too much attention to itself, it’s pulling you out of the story, and that’s bad mixing. I wanted the audience to lean in and feel those hands about to connect. When you take the sound out and the music out, then it’s just two hands coming together slowly. It was about finding that balance to make the audience feel like they’re in that moment, in that little hut, and they’re about to touch and see into each other’s souls, so to speak. That was a challenge, but it was fun because when you get it, and you see the audience react, everyone feels good about that scene. I feel like I did something right.

What was one audio tool that you couldn’t live without on this mix?
For me, it was the AMS Neve DFC Gemini console. All the sounds came into that. The console was like an instrument that I played. I could bring any sound in from any direction, and I could EQ it and manipulate it. I could put reverb on it. I could give the director what he wanted. My editors were cutting the sound, but I had to have that console to EQ and balance the sounds. Sometimes it was about EQing frequencies out to make a sound fit better with other sounds. You have to find room for the sounds.
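Semanick’s point about EQing frequencies out so sounds can fit together can be illustrated with a standard peaking-EQ biquad, using the well-known RBJ “Audio EQ Cookbook” formulas. This is only a sketch in Python, not the DFC Gemini’s actual processing, and the sample rate, center frequency, gain and Q values here are arbitrary:

```python
import math

def peaking_eq(fs, f0, gain_db, q):
    # RBJ "Audio EQ Cookbook" peaking band; returns normalized (b, a) coefficients.
    a_lin = 10 ** (gain_db / 40)              # sqrt of the linear center gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [v / a[0] for v in b], [v / a[0] for v in a]

def biquad(samples, b, a):
    # Direct Form I, one biquad section.
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

fs = 48000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]  # 1 s of 1 kHz
b, a = peaking_eq(fs, f0=1000, gain_db=-12.0, q=1.0)               # carve 12 dB out at 1 kHz
carved = biquad(tone, b, a)

rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
change_db = 20 * math.log10(rms(carved) / rms(tone))               # close to -12 dB at the notch center
```

A tone at the band’s center frequency comes out attenuated by roughly the band gain, which is the “finding room for the sounds” idea: pull a narrow band out of one element so another element’s energy in that band reads clearly.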

I could move around on it very quickly. I had Rian sitting behind me saying, “What if you roll back and adjust this or try that?” I could ease those faders up and down and hit it just right. I know how to use it so well that I could hear stuff ahead of what I was doing.

The Neve DFC was invaluable. I could take all the different sound formats and sample rates and it all came through the console, and in one place. It could blend all those sources together; it’s a mixing bowl. It brought all the sounds together so they could all talk to each other. Then I manipulated them and sent them out and that was the soundtrack — all driven by the director, of course.

Can you talk about working with the sound editor?
The editors are my right-hand people. They can shift things and move things and give me another sound. Maybe I need one with more mid-range because the one in there isn’t quite reading. We had a lot of that. Trying to get those explosions to work and to come through John Williams’ score, sometimes we needed something with more low-end and more thump or more crack. There was a handoff in some scenes.

On The Last Jedi, I had sound effects editor Jon Borland with me on the stage. Bonnie Wild had started the project and had prepped a lot of the sounds for several reels — she, Jon and Ren Klyce, who oversaw the whole thing. But Jon was my go-to person on the stage. He did a great job. It was a bit of a daunting task, but Jon is young and wants to learn and gave it everything he had. I love that.

What format was the main mix?
Everything was done in Atmos natively, then we downmixed to 7.1 and 5.1 and all the other formats. We were very diligent about having the downmixed versions match the Atmos mix the best that they could.
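Semanick doesn’t detail the fold-down math, and Atmos rendering is object-based and far more involved, but the general idea of a channel downmix can be illustrated with the classic ITU-R BS.775 coefficients for folding 5.1 into stereo. This is a toy sketch with made-up sample data, not the pipeline used on the film:

```python
import math

def downmix_51_to_stereo(l, r, c, lfe, ls, rs,
                         clev=1 / math.sqrt(2), slev=1 / math.sqrt(2)):
    # ITU-R BS.775-style fold-down; the LFE channel is conventionally dropped.
    # Each argument is a list of samples for that channel.
    lo = [L + clev * C + slev * LS for L, C, LS in zip(l, c, ls)]
    ro = [R + clev * C + slev * RS for R, C, RS in zip(r, c, rs)]
    return lo, ro

# A centered voice plus identical surround ambience folds equally into both sides.
n = 4
silence = [0.0] * n
voice = [0.5] * n      # center channel
amb = [0.2] * n        # Ls and Rs
lo, ro = downmix_51_to_stereo(silence, silence, voice, silence, amb, amb)
# each output sample is 0.7071 * 0.5 + 0.7071 * 0.2, about 0.495
```

Keeping the downmixes matched to the Atmos master, as Semanick describes, is exactly about checking that balances like this still feel right once the surround and height content is folded into fewer speakers.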

Any final thoughts you’d like to share?
I’m so glad that Rian chose me to be part of the mix. This film was a lot of fun and a real collaborative effort. Rian is the one who really set that tone. He wanted to hear our ideas and see what we could do. He wasn’t sold on one thing. If something wasn’t working, he would try things out until it did. It was literally sorting out frequencies and getting transitions to work just right. Rian was collaborative, and that creates a room of collaboration. We wanted a great track for the audience to enjoy… a track that went with Rian’s picture.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Oscar Watch: The Shape (and sound) of Water

Post production sound mixers Christian Cooke and Brad Zoern, who are nominated (with production mixer Glen Gauthier) for their work on Fox’s The Shape of Water, have sat side-by-side at mixing consoles for nearly a decade. The frequent collaborators, who handle mixing duties at Deluxe Toronto, faced an unusual assignment given that the film’s two lead characters never utter a single word of actual dialogue. In The Shape of Water, which has been nominated for 13 Academy Awards, Elisa (Sally Hawkins) is mute and the creature she falls in love with makes undefined sounds. This creative choice placed more than the usual amount of importance on the rest of the soundscape to support the story.

L-R: Nathan Robitaille, J. Miles Dale, Brad Zoern, director Guillermo del Toro, Christian Cooke, Nelson Ferreira, Filip Hosek, Cam McLauchlin, video editor Sidney Wolinsky, Rob Hegedus, Doug Wilkinson.

Cooke, who focused on dialogue and music, and Zoern, who worked with effects, backgrounds and Foley, knew from the start that their work would need to fit into the unique and delicate tone that infused the performances and visuals. Their work began, as always, with pre-dubs followed by three temp mixes of five days each, which allowed for discussion and input from director Guillermo del Toro. It was at the premixes that the mixers got a feel for del Toro’s conception for the film’s soundtrack. “We were more literal at first with some of the sounds,” says Zoern. “He had ideas about blending effects and music. By the time we started on the five-week-long mix, we had a very clear idea about what he was looking for.”

The final mix took place in one of Deluxe Toronto’s five stages, which have identical acoustic qualities and the same Avid Pro Tools-based Harrison MP4D/Avid S6 hybrid console, JBL M2 speakers and Crown amps.

The mixers worked to shape sonic moments that do more than represent “reality”; they create mood and tension. This includes key moments such as the sound of a car’s windshield wipers that builds in volume until it takes over the track in the form of a metronome-like beat, underlining the tension of the moment. One pivotal scene finds Richard Strickland (Michael Shannon) paying a visit to Zelda Fuller (Octavia Spencer). As Strickland speaks, Zelda’s husband Brewster (Martin Roach) watches television. “It was an actual mono track from a real show,” Cooke explains. “It starts out sounding roomy and distant, as it would really have sounded. As the scene progresses, it expands, getting more prominent and spreading out around the speakers [for the 5.1 version]. By the end of the scene, the audio from the TV has become something totally different from what it was at the start, and then we melded it seamlessly into Alexandre Desplat’s score.”

Beyond the aesthetic work of building a sound mix, particularly one so fluid and expressionistic, post production mixers must also collaborate on a large number of technical decisions during the mix to ensure the elements have the right amount of emotional punch without calling attention to themselves. Individual sounds, even specific frequencies, vie for audience attention and the mixers orchestrate and layer them.

“It’s raining outside when they come into the room,” Zoern notes about the above scene. “We want to initially hear the sound of the rain to have a context for the scene. You never just want dialogue coming out of nowhere; it needs to live in a space. But then we pull that back to focus on the dialogue, and then the [augmented] audio from the TV gains prominence. During the final mix, Chris and I are always working together, side by side, to meld the hundreds of sounds the editors have built in a way that reflects the story and mood of the film.”

“We’re like an old married couple,” Cooke jokes. “We finish each other’s sentences. But it’s very helpful to have that kind of shorthand in this job. We’re blending so many pieces together and if people notice what we’ve done, we haven’t done our jobs.”

Super Bowl: Heard City’s audio post for Tide, Bud and more

By Jennifer Walden

New York audio post house Heard City put their collaborative workflow design to work on the Super Bowl ad campaign for Tide. Philip Loeb, partner/president of Heard City, reports that their facility is set up so that several sound artists can work on the same project simultaneously.

Loeb also helped to mix and sound design many of the other Super Bowl ads that came to Heard City, including ads for Budweiser, Pizza Hut, Blacture, Tourism Australia and the NFL.

Here, Loeb and mixer/sound designer Michael Vitacco discuss the approach and the tools that their team used on these standout Super Bowl spots.

Philip Loeb

Tide’s It’s a Tide Ad campaign via Saatchi & Saatchi New York
Is every Super Bowl ad really a Tide ad in disguise? A string of commercials touting products from beer to diamonds, and even a local ad for insurance, is interrupted by David Harbour (of Stranger Things fame). He declares that those ads are actually just Tide commercials, as everyone is wearing such clean clothes.

Sonically, what’s unique about this spot?
Loeb: These spots, four in total, involved sound design and mixing, as well as ADR. One of our mixers, Evan Mangiamele, conducted an ADR session with David Harbour, who was in Hawaii, and we integrated that into the commercial. In addition, we recorded a handful of different characters for the lead-ins for each of the different vignettes because we were treating each of those as different commercials. We had to be mindful of a male voiceover starting one and then a female voiceover starting another so that they were staggered.

There was one vignette for Old Spice, and since the ads were for P&G, we did get the Old Spice mnemonic, and we did try something different at the end — with one version featuring the character singing the mnemonic and one of him whistling it. There were many different variations, and we just wanted, in the end, to get part of the mnemonic into the joke at the end.

The challenge with the Tide campaign, in particular, was to make each of these vignettes feel like a different commercial and to treat each one as such. There’s an overall mix level that goes into that, but we wanted certain ones to have a little bit more dynamic range than the others. For example, there is a cola vignette that’s set on a beach with people taking a selfie. David interrupts them by saying, “No, it’s a Tide ad.”

For that spot, we had to record a voiceover that was very loud and energetic to go along with a loud and energetic music track. That vignette cuts into the “personal digital assistant” (think Amazon’s Alexa) spot. We had to be very mindful of these ads flowing into each other while making it clear to the viewer that these were different commercials with different products, not one linear ad. Each commercial required its own voiceover, its own sound design, its own music track, and its own tone.

One vignette was about car insurance featuring a mechanic in a white shirt under a car. That spot isn’t letterboxed like the others; it’s 4:3 because it’s supposed to be a local ad. We made that vignette sound more like a local ad; it’s a little over-compressed, a little over-equalized and a little videotape-sounding. The music is mixed a little low. We wanted it to sound like the dialogue is really up front so as to get the message across, like a local advertisement.

What’s your workflow like?
Loeb: At Heard City, our workflow is unique in that we can have multiple mixers working on the same project simultaneously. This collaborative process makes our work much more efficient, and that was our original intent when we opened the company six years ago. The model came to us by watching the way that the bigger VFX companies work. Each artist takes a different piece of the project and then all of the work is combined at the end.

We did that on the Tide campaign, and there was no other way we could have done it due to the schedule. Also, we believe this workflow provides a much better product. One sound artist can be working specifically on the sound design while another can be mixing. So as I was working on mixing, Evan was flying in his sound design to me. It was a lot of fun working on it like that.

What tools helped you to create the sound?
One plug-in we’re finding to be very helpful is the iZotope Neutron. We put that on the master bus and we have found many settings that work very well on broadcast projects. It’s a very flexible tool.

Vitacco: The Neutron has been incredibly helpful overall in balancing out the mix. There are some very helpful custom settings that have helped to create a dynamic mix for air.

Tourism Australia Dundee via Droga5 New York
Danny McBride and Chris Hemsworth star in this movie-trailer-turned-tourism-ad for Australia. It starts out as a movie trailer for a new addition to the Crocodile Dundee film franchise — well, rather, a spoof of it. There’s epic music featuring a didgeridoo and title cards introducing the actors and setting up the premise for the “film.” Then there’s talk of miles of beaches and fine wine and dining. It all seems a bit fishy, but finally Danny McBride confirms that this is, in fact, actually a tourism ad.

Sonically, what’s unique about this spot?
Vitacco: In this case, we were creating a fake movie trailer that’s a misdirect for the audience, so we aimed to create sound design that was both in the vein of being big and epic and also authentic to the location of the “film.”

One of the things that movie trailers often draw upon is a consistent mnemonic to drive home a message. So I helped to sound design a consistent mnemonic for each of the title cards that come up.

For this I used some Native Instruments toolkits, like “Rise & Hit” and “Gravity,” and Tonsturm’s Whoosh software to supplement some existing sound design to create that consistent and branded mnemonic.

In addition, we wanted to create an authentic sonic palette for the Australian outback where a lot of the footage was shot. I had to be very aware of the species of animals and insects that were around. I drew upon sound effects that were specifically from Australia. All sound effects were authentic to that entire continent.

Another factor that came into play was that anytime you are dealing with a spot that has a lot of soundbites, especially ones recorded outside, there tends to be a lot of noise reduction taking place. I didn’t have to hit it too hard because everything was recorded very well. For cleanup, I used the iZotope RX 6 — both the RX Connect and the RX Denoiser. I relied on that heavily, as well as the Waves WNS plug-in, just to make sure that things were crisp and clear. That allowed me the flexibility to add my own ambient sound and have more control over the mix.

Michael Vitacco

In RX, I really like to use the Denoiser instead of the Dialogue Denoiser tool when possible. I’ll pull out the handles of the production sound and grab a long sample of noise. Then I’ll use the Denoiser because I find that works better than the Dialogue Denoiser.
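RX’s Denoiser works spectrally on that captured noise sample, which can’t be reproduced in a few lines. As a rough stdlib-only illustration of the underlying idea Vitacco describes (learn a profile from a noise-only stretch, then suppress content sitting at that level), here is a crude time-domain gate; all signals here are synthetic and the frame size, margin and attenuation are arbitrary:

```python
import math
import random

def rms(frame):
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def learn_floor(noise_only, frame=256):
    # The "noise profile": average level of a signal-free stretch of the recording.
    frames = [noise_only[i:i + frame]
              for i in range(0, len(noise_only) - frame + 1, frame)]
    return sum(rms(f) for f in frames) / len(frames)

def gate(signal, floor, frame=256, margin=2.0, atten=0.1):
    # Attenuate frames whose level sits near the learned noise floor.
    out = []
    for i in range(0, len(signal), frame):
        f = signal[i:i + frame]
        g = 1.0 if rms(f) > margin * floor else atten
        out.extend(s * g for s in f)
    return out

random.seed(1)
hiss = [random.uniform(-0.01, 0.01) for _ in range(2048)]   # noise-only "handle"
floor = learn_floor(hiss)
tone = [0.3 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(2048)]
cleaned = gate(hiss + tone, floor)   # the hiss is ducked; the louder content passes
```

Grabbing a long noise sample, as Vitacco does, matters for the same reason here: the longer the noise-only stretch, the more reliable the learned floor, and the less likely quiet dialogue gets mistaken for noise.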

Budweiser Stand By You via David Miami
The phone rings in the middle of the night. A man gets out of bed, prepares to leave and kisses his wife good-bye. His car radio announces that a natural disaster is affecting thousands of families who are in desperate need of aid. The man arrives at a Budweiser factory and helps to organize the production of canned water instead of beer.

Sonically, what’s unique about this spot?
Loeb: For this spot, I did a preliminary mix where I handled the effects, the dialogue and the music. We set the preliminary tone for that as to how we were going to play the effects throughout it.

The spot starts with a husband and wife asleep in bed and they’re awakened by a phone call. Our sound focused on the dialogue and effects upfront, and also the song. I worked on this with another fantastic mixer here at Heard City, Elizabeth McClanahan, who comes from a music background. She put her ears to the track and did an amazing job of remixing the stems.

On the master track in the Pro Tools session, she used iZotope’s Neutron, as well as the FabFilter Pro-L limiter, which helps to contain the mix. One of the tricks on a dynamic mix like that — which starts off with that quiet moment in the morning and then builds with the music in the end — is to keep it within the restrictions of the CALM Act and other specifications that stipulate dynamic range and not just average loudness. We had to be mindful of how we were treating those quiet portions and the lower portions so that we still had some dynamic range but we weren’t out of spec.
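The spec Loeb alludes to (the CALM Act points to ATSC A/85, which measures loudness per ITU-R BS.1770) cannot be reproduced in a few lines, but the shape of a compliance check can be sketched: block-average the level, gate out near-silence, compare against a target. The target and tolerance below are illustrative, and no K-weighting is applied, so this is not a real compliance measurement:

```python
import math

def block_db(samples):
    ms = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(ms) if ms > 0 else float("-inf")

def program_loudness(samples, fs=48000, block_s=0.4, gate_db=-70.0):
    # Crude stand-in for an integrated-loudness measurement: average the
    # per-block levels, ignoring near-silent blocks. Real ATSC A/85 / CALM
    # measurement uses ITU-R BS.1770 K-weighting and relative gating.
    step = int(block_s * fs)
    blocks = [block_db(samples[i:i + step])
              for i in range(0, len(samples) - step + 1, step)]
    loud = [b for b in blocks if b > gate_db]
    return sum(loud) / len(loud)

fs = 48000
amp = math.sqrt(2 * 10 ** (-24 / 10))          # sine whose mean square sits at -24 dB
tone = [amp * math.sin(2 * math.pi * 440 * n / fs) for n in range(4 * fs)]
quiet = [0.0] * fs                             # a silent second the gate ignores
level = program_loudness(tone + quiet, fs)     # close to -24.0
ok = abs(level - (-24.0)) <= 2.0               # inside an illustrative +/-2 dB window
```

The gating step is the part that makes quiet passages like the Budweiser spot’s opening possible: silence and low-level content don’t drag the integrated number down, so a dynamic mix can still land on target.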


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @AudioJeney.

Capturing Foley for Epix’s Berlin Station

Now in its second season on Epix, the drama series Berlin Station centers on undercover agents, diplomats and whistleblowers inhabiting a shadow world inside the German capital.

Leslie Bloome

Working under the direction of series supervising sound editor Ruy Garcia, Westchester, New York-based Foley studio Alchemy Post Sound is providing Berlin Station with cinematic sound. Practical effects, like the clatter of weapons and clinking glass, are recorded on the facility’s main Foley stage. Certain environmental effects are captured on location at sites whose ambience matches the show’s settings. Interior footsteps, meanwhile, are recorded in the facility’s new “live” room, a 1,300-square-foot space with natural reverb that’s used to replicate the environment of rooms with concrete, linoleum and tile floors.

“Garcia wants a soundtrack with a lot of detail and depth of field,” explains lead Foley artist and Alchemy Post founder Leslie Bloome. “So, it’s important to perform sounds in the proper perspective. Our entire team of editors, engineers and Foley artists needs to be on point regarding the location and depth of field of the sounds we’re recording. Our aim is to make every setting feel like a real place.”

A frequent task for the Foley team is to come up with sounds for high-tech cameras, surveillance equipment and other spy gadgetry. Foley artist Joanna Fang notes that sophisticated wall safes appear in several episodes, each one featuring differing combinations of electronic, latch and door sounds. She adds that in one episode a character has a microchip concealed in his suit jacket and the Foley team needed to invent the muffled crunch the chip makes when the man is frisked. “It’s one of those little ‘non-sounds’ that Foley specializes in,” she says. “Most people take it for granted, but it helps tell the story.”

The team is also called on to create Foley effects associated with specific exterior and interior locations. This can include everything from seedy safe houses and bars to modern office suites and upscale hotel rooms. When possible, Alchemy prefers to record such effects on location at sites closely resembling those pictured on-screen. Bloome says that recording things like creaky wood floors on location results in effects that sound more real. “The natural ambience allows us to grab the essence of the moment,” he explains, “and keep viewers engaged with the scene.”

Footsteps are another regular Foley task. Fang points out that there is a lot of cat-and-mouse action with one character following another or being pursued, and the patter of footsteps adds to the tension. “The footsteps are kind of tough,” she says. “Many of the characters are either diplomats or spies, and they all wear hard-soled shoes. It’s hard to build contrast, so we end up creating a hierarchy: dark, powerful heels for strong characters, lighter shoes for secondary roles.”

For interior footsteps, large theatrical curtains are used to adjust the ambience in the live stage to fit the scene. “If it’s an office or a small room in a house, we draw the curtains to cut the room in half; if it’s a hotel lobby, we open them up,” Fang explains. “It’s amazing. We’re not only creating depth and contrast by using different types of shoes and walking surfaces, we’re doing it by adjusting the size of the recording space.”
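The curtains trade reverb time for apparent room size. In DSP terms, the same trade-off shows up in the simplest building block of artificial reverb, a feedback comb filter, where delay length stands in for room size and feedback for liveness. A minimal sketch of that idea (Alchemy records the real acoustics; this is not their process):

```python
def comb_reverb(dry, delay, feedback, tail_blocks=20):
    # One feedback comb filter: a longer delay and higher feedback give a
    # longer decay, crudely mimicking a bigger, livelier room.
    n_out = len(dry) + delay * tail_blocks
    out = [0.0] * n_out
    for n in range(n_out):
        x = dry[n] if n < len(dry) else 0.0
        echo = feedback * out[n - delay] if n >= delay else 0.0
        out[n] = x + echo
    return out

impulse = [1.0] + [0.0] * 99                    # a single footstep "click"
small_room = comb_reverb(impulse, delay=50, feedback=0.3)   # curtains drawn
big_lobby = comb_reverb(impulse, delay=400, feedback=0.8)   # curtains open

tail_energy = lambda sig: sum(s * s for s in sig[len(impulse):])
# the "lobby" setting leaves far more energy ringing after the dry sound ends
```

Real reverberators chain many combs and allpass filters, but the single comb already shows why the same footstep reads as a small office or a hotel lobby depending on the space around it.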

Alchemy edits their Foley in-house and delivers pre-mixed and synced Foley that can be dropped right into the final mix seamlessly. “The things we’re doing with location Foley and perspective mixing are really cool,” says Foley editor and mixer Nicholas Seaman. “But it also means the responsibility for getting the sound right falls squarely on our shoulders. There is no ‘fix in the mix.’ From our point of view, the Foley should be able to stand on its own. You should be able to watch a scene and understand what’s going on without hearing a single line of dialogue.”

The studio used Neumann U87 and KMR81 microphones, a Millennia mic-pre and Apogee converter, all recorded into Avid Pro Tools on a C24 console. In addition to recording a lot of guns, Alchemy also borrowed a Doomsday prep kit for some of the sounds.

The challenge to deliver sound effects that can stand up to that level of scrutiny keeps the Foley team on its toes. “It’s a fascinating show,” says Fang. “One moment, we’re inside the station with the usual office sounds and in the next edit, we’re in the field in the middle of a machine gun battle. From one episode to the next, we never know what’s going to be thrown at us.”

Emmy Awards: Anthony Bourdain: Parts Unknown

Re-recording mixer Brian Bracken and supervising sound editor Benny Mouthon

By Jennifer Walden

CNN’s Anthony Bourdain: Parts Unknown is an award-winning travel series about food and politics. Or is it a food series about travel and politics? Perhaps it’s best described as a three-course mind-meal of food, travel and regional political/economic commentary with a dash of history. Whatever it is, it’s addictive, and Bourdain’s candor is refreshing. And even if some of the dishes that Bourdain consumes seem less than appetizing, the show itself is totally binge-worthy. Now on Netflix, all nine seasons are available for mass consumption.

Benny Mouthon

From its inception, String & Can in New York City has handled the post sound on Parts Unknown. Sound designers/re-recording mixers Benny Mouthon and Brian Bracken have amassed a total of nine Emmy nominations for their sound work on the show. This year Mouthon is nominated for Outstanding Sound Editing For A Nonfiction Program on Season 8, Episode 1 “Hanoi,” and Bracken is nominated for Outstanding Sound Mixing For A Nonfiction Program on Season 8’s finale, Episode 9 “Rome.”

Even though their nominations are specifically for sound editing and sound mixing, Mouthon and Bracken handle all the audio post needs for each episode they work on, from dialogue editing and sound design to final mix. It’s a substantial amount of work per episode considering Bourdain generally doesn’t use a production sound mixer. Sound-wise, it’s often just a case of catch what you can on the busy streets and crowded eateries.

Here, Mouthon and Bracken share details about what went into their Emmy-nominated episodes.

You’re nine seasons into Parts Unknown and the show just gets better and better. Sound-wise, how has the show grown? What’s changed over the years?
Benny Mouthon: We usually don’t have a location sound mixer on the show, but I feel that the camera crew has been paying more attention to mic placements. Also, the converters on the cameras have gotten better as the cameras have evolved. They can record much better quality sound than a few years ago, though still not as good as a high-end field sound recorder.

The bulk of the dialogue that we get is from a lavalier on Tony Bourdain and his guest or guests. Then they have Sanken shotgun mics on the cameras. As they move around the subjects in the frame, the shotguns tend to not be very usable, so we rely on the lav mics a lot. Since the producers are the only ones who spend time both in the field and in the mix, we have had many discussions after the screenings over the years as to what works, what doesn’t and how things can be done better next time. Thanks to this dialogue, I can definitely say that the quality of the audio has gotten better with time.

On the post side, we have more powerful tools than we did when the series first started. With the iZotope RX tools we’ve been able to clean up tracks that would have been unusable before. We often have to deal with distortion, or clothing rustle, or wind noise, and now all of those issues are easier to deal with.

Editors will often send us problematic audio scenes during the edit to see if they can be salvaged. In the past we had to turn many of them down, but in the last couple of years our "success rate" has gotten much better.

Brian Bracken

Brian Bracken: The cinematic landscape lends itself to us being able to enhance the show more by using the production audio. For instance, in the “Rome” episode, there’s a highway scene with fast “car-bys.” Those were very well recorded, and the cars sound very powerful when they pass by. That wasn’t really how it was delivered to us back in the earlier seasons. Like Benny said, with the equipment getting better the recordings get better and the attention to detail in terms of sound has really paid off.

Mouthon: As the show evolves, the cameramen get to play with nicer toys, and they're also recording more B-roll. They started using the Canon 5D early on for this, with better lenses — lenses that gave the show a much more filmic look. Tony also likes to pay homage to films quite a lot; Brian's "Rome" episode is just one great example.

As this “cinema-style” became more the norm, I think they realized that the edits can be limited if they don’t have very specific sounds that were recorded while they were in a particular place. So they have grown more aware of what will make for a better edit and therefore a better soundscape afterwards for us.

Talking about soundscapes, let’s look at the “Hanoi” episode. You start with a rural soundscape of wind in grass, chickens and bugs, and then it changes to urban sounds like motorbikes and horns. How much sound was taken from production?
Mouthon: A fair amount was taken from production, and I have to give a big credit to Hunter Gross, the picture editor on that episode. He’s very good at laying out a lot of the B-roll and complementing that with sound effects so that I have a great starting point. The huge advantage I had was that I was in Hanoi about 10 years ago on a personal holiday and I had a little Zoom recorder with me. I was able to use a lot of my own recordings of Hanoi, which included a lot of great stereo street sounds.

The downside to not having a location sound mixer is that the camera crew gets everything they can but it’s in mono. They don’t have the time to go back to a location with a stereo recorder or an X/Y mic configuration on a camera to record that way. There’s just no time. So I was able to use a lot of my own recordings to complement what they had gotten in mono, along with my memory of how absolutely insane traffic is in Hanoi.

It’s busy even on the smaller side streets. I remember just standing on the curb on my first day and not knowing how to cross the street. It is just completely flooded with scooters everywhere. There was an old lady standing next to me who looked at me pitifully and she just walked right out into the street, staring straight ahead to where she was going. I decided to just follow her and miraculously the scooters just avoid you and you just trust that you won’t be hit.

Sound-wise, I remember that everyone honks, and they go pretty fast. So I was given a lot of B-roll from Hunter to complement the scenes. I really like to pan the sound and follow an individual scooter from left to right. I also put in a lot of my stereo recordings to complement their sound a bit better and I was able to add a lot of Italian scooter sounds, as well as some Honda bikes. I try to stay as true as possible to what I am seeing but the idea was to make the sound feel a little bit claustrophobic.
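The kind of left-to-right pan move Mouthon describes for an individual scooter is typically automated with an equal-power pan law, so the overall loudness stays constant as the source travels across the stereo field. As a rough illustration only (a generic sketch in NumPy, not Mouthon's actual tools or session), here is that pan law swept across a mono clip:

```python
import numpy as np

def pan_sweep(mono, start=-1.0, end=1.0):
    """Sweep a mono signal across the stereo field with an equal-power
    (constant-power) pan law; -1 is hard left, +1 is hard right."""
    n = len(mono)
    pos = np.linspace(start, end, n)          # pan position per sample
    theta = (pos + 1.0) * np.pi / 4.0         # map [-1, 1] onto [0, pi/2]
    left = mono * np.cos(theta)               # left gain falls as we sweep right
    right = mono * np.sin(theta)              # right gain rises
    return np.stack([left, right], axis=-1)   # (n, 2) stereo buffer

# A one-second placeholder tone standing in for a scooter recording
sr = 48000
t = np.arange(sr) / sr
scooter = 0.5 * np.sin(2 * np.pi * 220 * t)
stereo = pan_sweep(scooter)
```

At the halfway point the pan angle is 45 degrees, so both channels carry the signal at about -3 dB, which is what keeps the sweep from dipping in level as the sound crosses the center.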

What sounds would you say are characteristic of Hanoi’s soundscape? What sounds make that city sound like that city?
Mouthon: The two-stroke engine. There are a lot of scooters, very whiney and mid-ranged. The sound of motorbikes is relentless, and it’s coming from everywhere — left, right, up, down — you are constantly making sure that you’re not getting in the way of someone who is driving very fast.

The scooter sounds were useful in another way. We could take you out of one scene and bring you into a completely different situation, one that is much more present. It can be loud, fill the space and give your senses a shock.

For the first half of the episode, Bourdain is eating outside on the street and there's traffic and crowds. Tell me about the principal dialogue for those locations. It seems like it would have been quite a challenge to clean and edit the dialogue there.
Mouthon: There was a lot of street noise, but that’s kind of the charm — having Bourdain sitting on a plastic stool on the pavement, eating a bowl of soup. Thankfully, the camera work is such that they do pan over into the street and you see a bunch of scooters going back and forth.

As a viewer, it’s easier to accept the sound of the scooters and the noise when you get to see how dense the traffic is. But it was still tricky. There were a lot of scooter sounds and traffic noises that had to be finessed out of sentences because the noise sounded cut off. Often I had to grab B-roll sound to help match that sentence into the upcoming sentence that they decided to use.

In general, the rain was more problematic than the traffic noise. There is a scene where he is outside late in the evening and it’s pouring rain. That was harder to deal with. I used a little bit of EQ and compression to control it a little but the sound overall is pretty true to what it sounded like there.

President Obama shows up in Hanoi. How cool is that? Sound-wise, was there anything to note about that sequence?
Mouthon: The scene was shot in a restaurant and they asked people not to speak too loudly but, as is often the case in those smaller restaurants, the walls are very straight and parallel, and the floor is made of tile and the sound just echoes. There isn’t much in there to absorb the sound, so it was a little bit echoey, a little live, but not unmanageable.

I did add a little bit of stereo rain as former President Obama was coming out of the limousine because he was holding an umbrella. I also added in a little bit of crowd sounds just to enhance it a bit overall. At the end of act four, we see Tony walking through the rain and I complemented that with stereo ambience of growling thunder.

The “Hanoi” episode wraps up on an emotional note. The music does a lot to carry the emotion. Did you do anything sound-wise to help support that?
Mouthon: Hunter and the producer Tom Vitale often like to end on an emotional note. They like it to be a little poetic, and I agree with that. There were just a couple little hints of B-roll sound there, but I kept it very low because it’s the music that’s supposed to take the show out. Also, ending on music was a great way to tie it back to the beginning.

For “Rome,” the mood is very tongue-in-cheek. The episode opens with a street performer singing a spirited song, and the lyrics are about killing her lover. From a mix standpoint Brian, were you able to enhance the playfulness of this episode?
Bracken: Hunter did this episode as well. He was the one who really sold that tongue-in-cheek aspect of the episode, and I just tried to enhance it. I was there to support that performance in the mix. During that scene you have two performers in the market, and the market sounds were getting in the way of their guitars, and it wasn’t an easy task to make it sound as clean as it did.

In terms of the mix, what were some creative opportunities you had on the “Rome” episode?
Bracken: There were some cool things during the Mussolini section where they showed archival footage. Recreating the sound for that, making it feel as real as possible was fun. I like doing all of that marching stuff, with the very militant crowds. That was fun to do.

I also really liked doing that car scene — where the cars are whizzing by on the highway. It starts out far back and you hear this gentle rumble, then all of a sudden when that first car passes it’s like a punch in the face. The power continues throughout that whole little section until it is over-the-top loud. It’s almost like you’re standing on the side of the road. I was able to take their production audio and enhance that with other car-bys to really give it that sweeping stereo image. When a car goes by it just doesn’t cut away — you hear it decay a lot longer as the next car comes by.

The boxing scene was fun too because there were those hits. When they punch each other, I basically wanted it to sound the way Bourdain describes it: as a slap of leather against wet skin. When you hear him say that you have this picture in your head of what it should sound like and hopefully it matches everybody’s expectations when they hear those punches being thrown and landing.

What was the most challenging scene for you to mix in this episode, and how did you handle it?
Bracken: There’s a scene where Bourdain is talking with a group of people and they are in a café right on the side of the road. There are cars driving by, but I didn’t have that camera pan-over to show that there was traffic. I had to cut out all the stuff in between but not have gaps in the ambience. That’s a challenge you face all the time with any restaurant scene. So I cut out what I didn’t like.

I cut out the sound between words and layered in a nice crowd bed in mono. Then I did a separate bed in stereo. I find that when I only do the bed in stereo it sounds too wide. So I need something to marry the wide aspect of the scene and the narrow aspect of the voices. So between the mono bed, the dialogue, and the stereo bed I can do fader movements to make it sound smooth.
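Bracken's layering can be pictured as three faders: dialogue and a mono crowd bed anchored to the center, plus a wide stereo bed for width. Here is a minimal sketch of that bus structure (illustrative NumPy with made-up gain values, not Bracken's actual session):

```python
import numpy as np

def mix_scene(dialog, mono_bed, stereo_bed,
              dialog_gain=1.0, mono_gain=0.5, stereo_gain=0.35):
    """Layer center-panned dialogue and a mono crowd bed (which anchor the
    image to the center) under a wide stereo bed, with per-element 'fader'
    gains. dialog and mono_bed are (n,) arrays; stereo_bed is (n, 2)."""
    center = dialog * dialog_gain + mono_bed * mono_gain
    # Equal-power center pan: the center bus feeds each channel at -3 dB
    left = center / np.sqrt(2) + stereo_bed[:, 0] * stereo_gain
    right = center / np.sqrt(2) + stereo_bed[:, 1] * stereo_gain
    return np.stack([left, right], axis=-1)
```

With the stereo bed pulled out, the left and right channels are identical, which is the narrow, centered image; riding the three gains against each other is what trades width against focus, as Bracken describes.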

Of all the episodes in Season 8, why did you choose the mix on “Rome” for Emmy consideration?
Bracken: The episode really had great production audio. I had a lot to work with. They did a great job out there in the field.

Also, the episode is very cinematic. I love how Hunter and Tom end on a low note. They do that for this episode as well. I love the echoey footsteps leading you through the Palazzo dei Congressi. To me, the "Rome" episode sounded the best, and it was the most artistic one that I worked on this season.

Benny, of all the episodes in Season 8, why did you choose the sound editing on “Hanoi” for Emmy consideration?
Mouthon: For me, it was a mixture of having enjoyed playing around with all of the sounds of the scooters and knowing that they were almost a secondary character in the episode. But it was also a very nostalgic episode for me since it reminded me of the week I spent there and so maybe it was a bit more present in my head than the other episodes. No offense to the other episodes of course!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Multiple Emmy-winner Edward J. Greene, CAS, has passed away

Ed Greene died peacefully in Los Angeles on August 9, with his family by his side. He was 82 years old and is survived by his wife and children.

Born and raised in New York City, Greene attended Rensselaer Polytechnic Institute. He began his pro audio career with a summer job in 1954 at Allegro Studios in New York, doing voice and piano demos for music publishers. Within two years the studio was doing full recording sessions. Greene joined the Army in 1956 and served as a recording engineer for the US Army Band and Chorus in Washington, DC. Upon discharge from the Army, he co-founded Edgewood Studios in Washington with partners radio and television commentator Charles Osgood and composer George Wilkins. Some of his recordings are legendary, including Charlie Byrd and Stan Getz's "Jazz Samba" and Ramsey Lewis' "The In Crowd."

In 1970, Greene came to California as chief engineer for MGM Records and worked with Sammy Davis Jr., The Osmonds, Lou Rawls and the prominent artists of that time. When many of these artists started doing television programs, he was asked to participate. He was brought into television mixing by Frank Sinatra, at a production meeting for Sinatra’s first broadcast.

Greene mixed many music, variety and awards shows, and earned a well-deserved reputation as the "go-to" guy for live television drama, such as ER, Fail Safe and The West Wing Live. He garnered 22 Emmy wins, his most recent in 2015, and an astonishing 61 Emmy nominations (ranking him third for most nominations and second for most wins by an individual). He was a member of the Television Academy when it formed its current incarnation in 1977.

Greene had a special affinity for mixing live broadcasts. His live productions included decades of The Kennedy Center Honors, The Grammy Awards, The Tony Awards, The Academy Awards and The SAG Awards. He also mixed the Live from Lincoln Center specials, Carnegie Hall, Live at 100, numerous Macy's Thanksgiving Day Parades, the Tournament of Roses Parade, The AFI Life Achievement Awards, The 52nd Presidential Inaugural Gala, the 1996 Summer Olympics, the 2002 Winter Olympics Opening and Closing Ceremonies and years of American Idol. His live production work garnered him a Cinema Audio Society Award and four additional CAS nominations.

Greene served on the Board of Directors of the Cinema Audio Society from 2005 until his death. In 2007, he was presented with the CAS Career Achievement Award, recognizing his career, his willingness to mentor and his contribution to the art of sound.


Creating sounds of science for Bill Nye: Science Guy

By Jennifer Walden

Bill Nye, the science hero of a generation of school children, has expanded his role in the science community over the years. His transformation from TV scientist to CEO of The Planetary Society (the world’s largest non-profit space advocacy group) is the subject of Bill Nye: Science Guy — a documentary directed by David Alvarado and Jason Sussberg.

The doc premiered in the US at the SXSW Film Festival and had its international premiere at the Hot Docs Canadian International Documentary Festival in Toronto.

Peter Albrechtsen – Credit: Povl Thomsen

Supervising sound editor/sound designer Peter Albrechtsen, MPSE, started working with directors Alvarado and Sussberg in 2013 on their first feature-length documentary The Immortalists. When they began shooting the Bill Nye documentary in 2015, Albrechtsen was able to see the rough cuts and started collecting sounds and ambiences for the film. “I love being part of projects very early on. I got to discuss some sonic and musical ideas with David and Jason. On documentaries, the actual sound design schedule isn’t typically very long. It’s great knowing the vibe of the film as early as I can so I can then be more focused during the sound editing process. I know what the movie needs and how I should prioritize my work. That was invaluable on a complicated, complex and multilayered movie like this one.”

Before diving in, Albrechtsen, dialogue editor Jacques Pedersen, sound effects editor Morten Groth Brandt and sound effects recordist/assistant sound designer Mikkel Nielsen met up for a jam session — as Albrechtsen calls it — to share the directors’ notes for sound and discuss their own ideas. “It’s a great way of getting us all on the same page and to really use everyone’s talents,” he says.

Albrechtsen and his Danish sound crew had less than seven weeks for sound editorial at Offscreen in Copenhagen. They divided their time evenly between dialogue editing and sound effects editing. During that time, Foley artist Heikki Kossi spent three days on Foley at H5 Film Sound in Kokkola, Finland.

Foley artist Heikki Kossi. Credit: Clas-Olav Slotte

Bill Nye: Science Guy mixes many different media sources — clips from Bill Nye’s TV shows from the ‘90s, YouTube videos, home videos on 8mm film, TV broadcasts from different eras, as well as the filmmakers’ own footage. It’s a potentially headache-inducing combination. “Some of the archival material was in quite bad shape, but my dialogue editor Jacques Pedersen is a magician with iZotope RX and he did a lot of healthy cleaning up of all the rough pieces and low-res stuff,” says Albrechtsen. “The 8mm videos actually didn’t have any sound, so Heikki Kossi did some Foley that helped it to come alive when we needed it to.”

Sound Design
Albrechtsen’s sound edit was also helped by the directors’ dedication to sound. They were able to acquire the original sound effects library from Bill Nye’s ‘90s TV show, making it easy for the post sound team to build out the show’s soundscape from stereo to surround, and also to make it funnier. “A lot of humor in the old TV show came from the imaginative soundtrack that was often quite cartoonish, exaggerated and hilariously funny,” he explains. “I’ve done sound for quite a few documentaries now and I’ve never tried adding so many cartoonish sound effects to a track. It made me laugh.”

The directors’ dedication goes even deeper, with director Sussberg handling the production sound himself when they’re out shooting. He records dialogue with both a boom mic and radio mics, and also records wild tracks of room tones and ambience. He even captures special sound signatures for specific locations when applicable.

For example, Nye visits the creationist theme park called Noah’s Ark, built by Christian fundamentalist Ken Ham. The indoor park features life-size dioramas and animatronics to explain creationism. There are lots of sound effects and demonstrations playing from multiple speaker setups. Sussberg recorded all of them, providing Albrechtsen with the means of creating an authentic sound collage.

“People might think we added lots of sounds for these sequences, but actually we just orchestrated what was already there,” says Albrechtsen. “At moments, it’s like a cacophony of noises, with corny dinosaur screams, savage human screams and violent war noises. When I heard the sounds from the theme park that David and Jason had recorded, I didn’t believe my own ears. It’s so extreme.”

Albrechtsen approaches his sound design with texture in mind. Not every sound needs to be clean. Adding texture, like crackling or hiss, can change the emotional impact of a sound. For example, while creating the sound design for the archival footage of several rocket launches, Albrechtsen pulled clean effects of rocket launches and explosions from Tonsturm’s “Massive Explosions” sound effects library and transferred those recordings to old NAGRA tape. “The special, warm, analogue distortion that this created fit perfectly with the old, dusty images.”
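Albrechtsen used real NAGRA tape for this; when actual tape isn't available, a similar warmth is often approximated digitally with a soft waveshaper such as tanh, which adds the gentle, symmetric harmonic distortion tape is known for. This is a hedged stand-in for the general technique, not what was done on the film:

```python
import numpy as np

def tape_saturate(x, drive=3.0):
    """Very rough digital stand-in for analogue tape colour: a tanh
    waveshaper soft-clips peaks and adds low-order harmonics.
    `drive` (an illustrative parameter) sets how hard the 'tape' is hit."""
    return np.tanh(drive * x) / np.tanh(drive)  # normalised so +/-1 stays +/-1
```

The soft knee lifts mid-level material while keeping peaks bounded, which is a crude first approximation of tape compression; the real transfer-to-tape pass also contributes wow, flutter and hiss that this one-liner doesn't model.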

In one of Albrechtsen’s favorite sequences in the film, there’s a failure during launch and the rocket explodes. The camera falls over and the video glitches. He used different explosions panned around the room, and he panned several low-pitched booms directly to the subwoofer, using Waves LoAir plug-in for added punch. “When the camera falls over, I panned explosions into the surrounds and as the glitches appear I used different distorted textures to enhance the images,” he says. “Pete Horner did an amazing job on mixing that sequence.”

For the emotional sequences, particularly those exploring Nye’s family history, and the genetic disorder passed down from Nye’s father to his two siblings, Albrechtsen chose to reduce the background sounds and let the Foley pull the audience in closer to Nye. “It’s amazing what just a small cloth rustle can do to get a feeling of being close to a person. Foley artist Heikki Kossi is a master at making these small sounds significant and precise, which is actually much more difficult than one would think.”

For example, during a scene in which Nye and his siblings visit a clinic Albrechtsen deliberately chose harsh, atonal backgrounds that create an uncomfortable atmosphere. Then, as Nye shares his worries about the disease, Albrechtsen slowly takes the backgrounds out so that only the delicate Foley for Nye plays. “I love creating multilayered background ambiences and they really enhanced many moments in the film. When we removed these backgrounds for some of the more personal, subjective moments the effect was almost spellbinding. Sound is amazing, but silence is even better.”

Bill Nye: Science Guy has layers of material taking place in both the past and present, in outer space and in Nye’s private space, Albrechtsen notes. “I was thinking about how to make them merge more. I tried making many elements of the soundtrack fit more with each other.”

For instance, Nye’s brother has a huge model train railway set up. It’s a legacy from their childhood. So when Nye visits his childhood home, Albrechtsen plays the sound of a distant train. In the 8mm home movies, the Nye family is at the beach. Albrechtsen’s sound design includes echoes of seagulls and waves. Later in the film, when Nye visits his sister’s home, he puts in distant seagulls and waves. “The movie is constantly jumping through different locations and time periods. This was a way of making the emotional storyline clearer and strengthening the overall flow. The sound makes the images more connected.”

One significant story point is Nye's growing involvement with The Planetary Society. Before his death, Carl Sagan conceptualized a solar sail — a sail that could harness the pressure of sunlight as a means of propulsion. The Planetary Society worked hard to actualize Sagan's solar sail idea. Albrechtsen needed to give the solar sail a sound in the film. "How does something like that sound? Well, in the production sound you couldn't really hear the solar sail, and when it actually appeared it just sounded like boring, noisy cloth rustle. The light sail really needed an extraordinary, unique sound to make you understand the magnitude of it."

So they recorded different kinds of materials, in particular a Mylar blanket, which has a glittery and reflective surface. Then Albrechtsen tried different pitches and panning of those recordings to create a sense of its extraordinary size.
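Pitching a recording down to suggest enormous size is a classic sound design move: resampling (varispeed) lowers the pitch and slows the material at once, which the ear reads as a larger physical source. A minimal sketch of that resampling approach (the generic technique, not Albrechtsen's exact processing):

```python
import numpy as np

def pitch_down(x, semitones=12.0):
    """Varispeed pitch shift by resampling: stretching the signal by
    2**(semitones/12) lowers its pitch by that many semitones and
    lengthens it by the same factor."""
    ratio = 2.0 ** (semitones / 12.0)             # 12 semitones -> 2x longer
    n_out = int(round(len(x) * ratio))
    src_pos = np.linspace(0, len(x) - 1, n_out)   # fractional read positions
    return np.interp(src_pos, np.arange(len(x)), x)
```

An octave down (12 semitones) doubles the duration as a side effect; shifting pitch while preserving length would need a phase vocoder or similar time-stretching instead.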

While they handled post sound editorial in Denmark, the directors were busy cutting the film stateside with picture editor Annu Lilja. When working over long distances, Albrechtsen likes to send lots of QuickTimes with stereo downmixes so the directors can hear what’s happening. “For this film, I sent a handful of sound sketches to David and Jason while they were busy finishing the picture editing,” he explains. “Since we’ve done several projects together we know each other very well. David and Jason totally trust me and I know that they like their soundtracks to be very detailed, dynamic and playful. They want the sound to be an integral part of the storytelling and are open to any input. For this movie, they even did a few picture recuts because of some sound ideas I had.”

The Mix
For the two-week final mix, Albrechtsen joined re-recording mixer Pete Horner at Skywalker Sound in Marin County, California. Horner started mixing on the John Waters stage — a small mix room featuring a 5.1 setup of Meyer Sound's Acheron speakers and an Avid ICON D-Command control surface — while Albrechtsen finished the sound design and premixed the effects against William Ryan Fritch's score in a separate editing suite. Then Albrechtsen sat with Horner for another week as Horner crafted the final 5.1 mix.

One of Horner’s mix challenges was to keep the dialogue paramount while still pushing the layered soundscapes that help tell the story. Horner says, “Peter [Albrechtsen] provided a wealth of sounds to work with, which in the spirit of the original Bill Nye show were very playful. But this, of course, presented a challenge because there were so many sounds competing for attention. I would say this is a problem that most documentaries would be envious of, and I certainly appreciated it.”

Once they had the effects playing along with the dialogue and music, Horner and Albrechtsen worked together to decide which sounds were contributing the most and which were distracting from the story. “The result is a wonderfully rich, sometimes manic track,” says Horner.

Albrechtsen adds, “On a busy movie like this, it’s really in the mix where everything comes together. Pete [Horner] is a truly brilliant mixer and has the same musical approach to sound as me. He is an amazing listener. The whole soundtrack — both sound and score — should really be like one piece of music, with ebbs and flows, peaks and valleys.”

Horner explains their musical approach to mixing as “the understanding that the entire palette of sound coming through the faders can be shaped in a way that elicits an emotional response in the audience. Music is obviously musical, but sound effects are also very musical since they are made up of pitches and rhythmic sounds as well. I’ve come to feel that dialogue is also musical — the person speaking is embedding their own emotions into the way they speak using both pitch (inflection or emphasis) and rhythm (pace and pauses).”

“I’ll go even further to say that the way the images are cut by the picture editor is inherently musical. The pace of the cuts suggests rhythm and tempo, and a ‘hard cut’ can feel like a strong downbeat, as emotionally rich as any orchestral stab. So I think a musical approach to mixing is simply internalizing the ‘music’ that is already being communicated by the composer, the sound designer, the picture editor and the characters on the screen, and with the guidance of the director shaping the palette of available sounds to communicate the appropriate complexity of emotion,” says Horner.

In the mix, Horner embraces the documentary’s intention of expressing the duality of Nye’s life: his celebrity versus his private life. He gives the example of the film’s opening, which starts with sounds of a crowd gathering to see Nye. Then it cuts to Nye backstage as he’s preparing for his performance by quietly tying his bowtie in a mirror. “Here the exceptional Foley work of Heikki Kossi creates the sense of a private, intimate moment, contrasting with the voice of the announcer, which I treated as if it’s happening through the wall in a distant auditorium.”

Next it cuts to that announcer, and his voice is clearly amplified and echoing around the auditorium of excited fans. There's an interview with a fan and his friends who are waiting to take their seats. The fan describes his experience of watching Nye's TV show in the classroom as a kid and how they'd all chant "Bill, Bill, Bill" as the TV cart rolled in. Underneath, the sound of the auditorium crowd chanting "Bill, Bill, Bill" plays as the picture cuts to Nye waiting in the wings.

Horner says, "Again, the Foley here keeps us close to Bill while the crowd chants are in deep echo. Then the TV show theme kicks on, blasting through the PA. I embraced the distorted nature of the production recording and augmented it with hall echo and a liberal use of the subwoofer. The energy in this moment is at a peak as Bill takes the stage exclaiming, 'I love you guys!' and the title card comes on. This is a great example of how the scene was already cut to communicate the dichotomy within Bill, between his private life and his public persona. By recognizing that intention, the sound team was able to express that paradox more viscerally."


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Lime opens sound design division led by Michael Anastasi, Rohan Young

Santa Monica’s Lime Studios has launched a sound design division. LSD (Lime Sound Design), featuring newly signed sound designer Michael Anastasi and Lime sound designer/mixer Rohan Young has already created sound design for national commercial campaigns.

“Having worked with Michael since his early days at Stimmung and then at Barking Owl, he was always putting out some of the best sound design work, a lot of which we were fortunate to be final mixing here at Lime,” says executive producer Susie Boyajan, who collaborates closely with Lime and LSD owner Bruce Horwitz and the other company partners — mixers Mark Meyuhas and Loren Silber. “Having Michael here provides us with an opportunity to be involved earlier in the creative process, and provides our clients with a more streamlined experience for their audio needs. Rohan and Michael were often competing for some of the same work, and share a huge client base between them, so it made sense for Lime to expand and create a new division centered around them.”

Boyajan points out that “all of the mixers at Lime have enjoyed the sound design aspect of their jobs, and are really talented at it, but having a new division with LSD that operates differently than our current, hourly sound design structure makes sense for the way the industry is continuing to change. We see it as a real advantage that we can offer clients both models.”

“I have always considered myself a sound designer that mixes,” notes Young. “It’s a different experience to be involved early on and try various things that bring the spot to life. I’ve worked closely with Michael for a long time. It became more and more apparent to both of us that we should be working together. Starting LSD became a no-brainer. Our now-shared resources, with the addition of a Foley stage and location audio recordists only make things better for both of us and even more so for our clients.”

Young explains that setting up LSD as its own sound design division, as opposed to bringing in Michael to sound design at Lime, allows clients to separate the mix from the sound design on their production if they choose.

Anastasi joins LSD from Barking Owl, where he spent the last seven years creating sound design for high-profile projects and building long-term creative collaborations with clients. He recalls his experiences recording sounds with John Fasal and his Foley sessions with John Roesch and Alyson Dee Moore as having taught him a great deal of his craft. "Foley is actually what got me to become a sound designer," he explains.

Projects that Anastasi has worked on include Hide and Seek, a PSA on human trafficking that won an AICP Award for Sound Design. He also provided the sound design for the feature film Casa De Mi Padre, starring Will Ferrell, on which he also served as sound supervisor. For Nike's Together project, a two-minute black-and-white piece featuring LeBron James, Anastasi traveled to Cleveland to record 500+ extras.

Lime is currently building new studios for LSD, featuring a team of sound recordists and a stand-alone Foley room. The LSD team is currently in the midst of a series of projects launching this spring, including commercial campaigns for Nike, Samsung, StubHub and Adobe.

Main Image: Michael Anastasi and Rohan Young.

The A-List: The sound of La La Land

By Jennifer Walden

Director/writer Damien Chazelle’s musical La La Land has landed an incredible 14 Oscar nominations — not to mention fresh BAFTA wins for Best Film, Best Cinematography, Original Music and Best Leading Actress, in addition to many, many other accolades.

The story follows aspiring actress Mia (Emma Stone) who meets the talented-but-struggling jazz pianist Sebastian (Ryan Gosling) at a dinner club, where he’s just been fired from his gig of plinking out classic Christmas tunes for indifferent diners. Mia throws out a compliment as Sebastian approaches, but he just breezes right past, ignoring her completely. Their paths cross again at a Los Angeles pool party, and this time Mia makes a lasting impression on Sebastian. They eventually fall in love, but their life together is complicated by the realities of making their own dreams happen.

Sounds of the City
La La Land is a love story but it’s also a love letter to Los Angeles, says supervising sound editor Ai-Ling Lee, who shares an Oscar nomination for Best Sound Editing on the film with co-supervising sound editor Mildred Iatrou Morgan. One of Chazelle’s initial directives was to have the cityscape sound active and full of life. “He gave me film references, like Boogie Nights and Mean Streets, even though the latter was a New York film. He liked the amount of sound coming out from the city, but wanted a more romantic approach to the soundscape on La La Land. He likes the idea of the city always being bustling,” says Lee.

Mildred Iatrou Morgan and Ai-Ling Lee. Photo Credit: Jeffrey Harlacker

In addition to La La Land’s musical numbers, director Chazelle wanted to add musical moments throughout the film, some obvious, like the car radios in the opening traffic jam, and some more subtle. Lee explains, “You always hear music coming from different sources in the city, like music coming out of a car going by or mariachi music coming from down the hallway of Sebastian’s apartment building.” The culturally diverse incidental music, traffic sounds, helicopters, and local LA birds, like mourning doves, populate the city soundscape and create a distinct Los Angeles vibe.

Lee handled sound editorial and sound design from a suite at EPS-Cineworks in Burbank — the same facility where the picture editor and composer were working. "Damien and Tom Cross [film editor] were cutting the picture there, and Justin Hurwitz the composer was right next door to them, and I was right across the hall from them. It was a very collaborative environment so it was easy to bring someone over to review a scene or sounds. I could pop over there to see them if I had any questions," says Lee, who was able to design sound against the final music tracks. That was key to helping those two sound elements gel into one cohesive soundtrack.

Bursting Into Song
Director Chazelle's other initial concern for sound was the music, particularly how the spoken dialogue would transition into the studio-recorded songs. That's where supervising sound editor Morgan got to flex her dialogue editing muscles. "Milly [Morgan] knows this style of ADR, having worked on musicals before," says Lee. "Damien wanted the dialogue to seamlessly transition into a musical moment. He didn't want it to feel like suddenly we're playing a pre-recorded song. He liked to have things sound more natural, with realistic, grounded sounds, to help blend the music into the scene."

To achieve a smooth transition, Morgan recorded ADR for every line that led into a song, ensuring she had studio-recorded dialogue that would blend cleanly into the studio-recorded music. "I cued that way for La La Land, but I ended up not having to use a lot of that. The studio-recorded vocals and the production sound were beautifully recorded using the same mics in both cases. They were matching very well, and in some cases I was able to go with the more emotional, natural-sounding songs that were sung on set," says Morgan, who worked from her suite at 20th Century Fox Studios along with ADR editor Galen Goodpaster.

Mia's audition song, "The Fools Who Dream," was the track Morgan and the director were most concerned about. As Mia gives her impromptu audition, she goes from speaking softly to suddenly singing, then singing louder still. Her performance on set — captured by production mixer Steven Morrow — was so beautiful and emotional that it would have been difficult to recreate in post. The trouble was, there were creaking noises on the track. Morgan explains, "As Mia starts singing, the camera moves in on her. It moves through the office and through the desk. It was a breakaway desk and they broke it apart so that the camera could move through it. That created all the creaking I heard on the track."

Morgan was able to save the live performance by editing in clean ambience between words and finding alternate takes that weren't ruined by the creaking. She used Elastic Audio inside Pro Tools, as well as the Pro Tools TCE (time compression/expansion) tool, to tweak the alt takes into place. "I had to go through all of the outtakes, word by word, syllable by syllable, and find ones that fit in with the singing, didn't have creaks on them… and fit in terms of sync. It was very painstaking; it took me a couple of days, but it was so worth it because that was a really important moment in the movie," says Morgan.
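Elastic Audio and TCE are Pro Tools' own proprietary algorithms, but the basic idea of time compression/expansion can be illustrated with a naive overlap-add stretch. This is a hypothetical numpy sketch, not what Morgan used; the function name, frame size and hop are arbitrary choices for the illustration:

```python
import numpy as np

def ola_time_stretch(x, rate, frame=1024, hop_out=256):
    """Naive overlap-add (OLA) time stretch.

    rate < 1.0 slows the audio down (longer output);
    rate > 1.0 speeds it up (shorter output).
    Pitch is roughly preserved because each frame is replayed
    as-is; only the spacing between frames changes.
    """
    hop_in = max(1, int(round(hop_out * rate)))
    win = np.hanning(frame)
    n_frames = max(1, (len(x) - frame) // hop_in + 1)
    out_len = (n_frames - 1) * hop_out + frame
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    for i in range(n_frames):
        a = i * hop_in           # read position in the input
        b = i * hop_out          # write position in the output
        out[b:b + frame] += x[a:a + frame] * win
        norm[b:b + frame] += win
    norm[norm < 1e-8] = 1.0      # avoid divide-by-zero at the edges
    return out / norm
```

Commercial TCE tools add phase and transient handling on top of this to avoid the warble a plain OLA stretch introduces, which is why a dialogue editor can nudge a sung syllable into sync without the edit being audible.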

Reality Steps In
Not all on-set song performances could be used in the final track, so putting the pre-recorded songs in the space helped to make the transition into musical moments feel more realistic. Precisely crafted backgrounds, made with sounds that fit the tone of the impending song, gradually step aside as the music takes over. But not all of the real-world sounds go away completely. Foley helped to ground a song into the reality on screen by marrying it to the space. For example, Mia’s roommates invite her to a party in a song called “Someone in the Crowd.” Diegetic sounds, such as the hairdryer, the paper fan flicking open, occasional footsteps, and clothing rustles helped the pre-recorded song fit naturally into the scene. Additionally, Morgan notes that production mixer Morrow “did an excellent job of miking the actors with body mics and boom mics, even during the musical numbers that were sung to playback, like ‘Someone in the Crowd,’ just in case there was something to capture that we could use. There were a couple of little vocalizations that we were able to use in the number.”

Foley also played a significant role in the tap dance song “A Lovely Night.” Originally performed as a soft shoe dance number, director Chazelle decided to change it to a tap dance number in post. Lee reveals, “We couldn’t use the production sound since there was music playback in the scene for the actors to perform to. So, we had to fully recreate everything with the sound. Damien had a great idea to try to replace the soft shoe sound with tap shoes. It was an excellent idea because the tap sound plays so much better with the dance music than the soft shoe sound does.”

Lee enlisted Mandy Moore, the film's dance choreographer, and several dancers to re-record the Foley for that scene. Working with Foley artist Dan O'Connell of One Step Up, on The Jane Russell Foley Stage at 20th Century Fox Studios, they tried various weights of tap shoes on different floor surfaces before narrowing it down to the classic "Fred and Ginger" sound Chazelle was looking for. "Even though they are dancing on asphalt, we ended up using a wooden floor surface on the Foley stage. Damien was very precise about playing up a step here and playing up a scuff there, because it plays better against the music. It was really important to have the taps done to the rhythm of the song as opposed to being in sync with the picture. It fools your brain. Once you have everything in rhythm with the music, the rest flows like butter," says Lee. She cut the tap dance Foley to picture according to Chazelle's tastes, and then invited Moore to listen to the mix to make sure the routine was realistic from a dancer's point of view.

Inside the Design
One of Lee’s favorite scenes to design was the opening sequence of the film, which starts with the sound of a traffic jam on a Los Angeles freeway. The sound begins in mono with a long horn honk over a black and white Cinemascope logo. As the picture widens and the logo transitions into color, Lee widens the horn honk into stereo and then into the surrounds. From that, the sound builds to a few horns and cars idling. Morgan recorded a radio announcer to establish the location as Los Angeles. The 1812 Overture plays through a car radio, and the sound becomes futzed as the camera pans to the next car in the traffic jam. With each car the camera passes the radio station changes. “This is Los Angeles and it is a mixed cultural city. Damien wanted to make sure there was a wide variety of music styles, so Justin [Hurwitz] gave me a bunch of different music choices, an eclectic selection to choose from,” says Lee. She added radio tuning sounds, car idling sounds, and Foley of tapping on the steering wheel to ground the scene in reality. “We made sure that the sound builds but doesn’t overpower the first musical number. The first trumpet hit comes through this traffic soundscape, and gradually the real city sounds give way to the first song, ‘Another Day of Sun.’”
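Futzing a radio is usually done with dedicated plug-ins, but the core of the effect is aggressive band-limiting plus a hint of small-speaker distortion. A crude numpy sketch, with made-up cutoff values in the classic telephone/car-radio range (the function name and parameters are illustrative, not any plug-in's API):

```python
import numpy as np

def radio_futz(x, sr, lo=300.0, hi=3400.0, drive=1.5, ceiling=0.8):
    """Brick-wall bandpass plus hard clipping: a crude car-radio futz.

    lo/hi default to the classic narrow telephone band; drive and
    ceiling add a touch of overdriven-speaker grit on loud material.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    X[(freqs < lo) | (freqs > hi)] = 0.0   # discard everything outside the band
    y = np.fft.irfft(X, n=len(x))
    return np.clip(y * drive, -ceiling, ceiling)
```

A real futz chain would use gentler filter slopes, codec-style artifacts and convolution with a small-speaker impulse response, but even this brick-wall version makes full-range music read instantly as "coming out of a car radio."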

One scene that stood out for Morgan was after Mia’s play, when she’s in her dressing room feeling sad that the theater was mostly empty for her performance. Not even Sebastian showed up. As she’s sitting there, we hear two men from the audience disparaging her and her play. Initially, Chazelle and his assistant recorded a scratch track for that off-stage exchange, but he asked Morgan to reshoot it with actors. “He wanted it to sound very naturalistic, so we spent some time finding just the right actors who didn’t sound like actors. They sound like regular people,” says Morgan.

She had the actors improvise their lines on why they hated the play, how superficial and pretentious it was. Following some instruction from Chazelle, they cut the scene together. "We screened it and it was too mean, so we had to tone it back a little," shares Morgan. "That was fun because I don't always get to do that, to create an ADR scene from scratch. Damien is meticulous. He knows what he wants and he knows what he doesn't want. But in this case, he didn't know exactly what they should say. He had an idea. So I'd do my version and he'd give me ideas, and it went back and forth. That was a big challenge for me but a very enjoyable one."

The Mix
In addition to sound editing, Lee also mixed the final soundtrack with re-recording mixer Andy Nelson at Fox Studios in Los Angeles. She and Nelson share an Oscar nomination for Best Sound Mixing on La La Land. Lee says, “Andy and I had made a film together before, called Wild, directed by Jean-Marc Vallée. So it made sense for me to do both the sound design and to mix the effects. Andy mixed the music and dialogue. And Jason Ruder was the music editor.”

From design to mix, Chazelle’s goal was to have La La Land sound natural — as though it was completely natural for these people to burst into song as they went through their lives. “He wanted to make sure it sounded fluid. With all the work we did, we wanted to make the film sound natural. The sound editing isn’t in your face. When you watch the movie as a whole, it should feel seamless. The sound shouldn’t take you out of the experience and the music shouldn’t stand apart from the sound. The music shouldn’t sound like a studio recording,” concludes Lee. “That was what we were trying to achieve, this invisible interaction of music and sound that ultimately serves the experience.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Jon Hamm

Audio post for Jon Hamm’s H&R Block spots goes to Eleven

If you watch broadcast television at all, you’ve likely seen the ubiquitous H&R Block spots featuring actor Jon Hamm of Mad Men fame. The campaign out of Fallon Worldwide features eight spots — all take place either on a film set or a studio backlot, and all feature Hamm in costume for a part. Whether he’s breaking character dressed in traditional Roman garb to talk about how H&R Block can help with your taxes, or chatting up a zombie during a lunch break, he’s handsome, funny and on point: use H&R Block for your tax needs. Simon McQuoid from Imperial Woodpecker directed.


Jeff Payne

The campaign's audio post was completed at Eleven in Santa Monica, with founder Jeff Payne mixing the spots. "As well as mixing, I created sound design for all of the spots. The objective was to make the sound design feel very realistic and to enhance the scenes in a natural way, rather than a sound design way. For example, on the spot titled Donuts, the scene was set on a studio backlot with a lot of extras moving around, so it was important to create that feel without distracting from the dialogue, which was very subtle and quiet. On the spot titled Switch, there was a very energetic music track and fast-cutting scenes, but again it needed support with realistic sounds that gave all the scenes more movement."

Payne says the major challenge across the spots was making the dialogue feel seamless. "There were many shots from different angles, captured with different microphones, that needed to be evened out so that the dialogue sounded smooth."

In terms of tools, all editing and mixing was done with Avid’s Pro Tools HDX system and S6 console. Sound design was done through Soundminer software.

Jordan Meltzer was assistant mixer on the campaign, and Melissa Elston executive produced for Eleven. Arcade provided the edit, Timber the VFX and post and color was via MPC.

Patriots Day

Augmenting Patriots Day‘s sound with archival audio

By Jennifer Walden

Fresh off the theatrical release of his dramatized disaster film Deepwater Horizon, director Peter Berg brings another current event to the big screen with Patriots Day. The film recounts the Boston Marathon bombing by combining Berg’s cinematic footage with FBI-supplied archival material from the actual bombing and investigation.

Once again, Berg chose to partner with Technicolor's supervising sound editor/re-recording mixer Dror Mohar, who contributed to the soundtracks of Berg's Deepwater Horizon (2016) and Lone Survivor (2013). He earned an MPSE Award nomination for sound editing on the latter.

According to Mohar, Berg’s intention for Patriots Day was not to make a film about tragedy and terrorism, but rather to tell the story of a community’s courage in the face of this disaster. “This was personal for Peter [Berg]. His conviction about not exploiting or sensationalizing any of it was in every choice he made,” says Mohar. “He was vigilant about the cinematic attributes never compromising the authenticity and integrity of the story of the events and the people who were there — the law enforcement, victims and civilians. Peter wanted to evolve and explore the sound continuously. My compass throughout was to create a soundtrack that was as immersive as it was genuine.”

From a sound design perspective, Mohar was conscious of keeping the qualities and character of the sounds in check — favoring raw, visceral sounds over treated or polished ones, and avoiding oversized "Hollywood" treatments. The Watertown shootout sequence is one example: the lead-up to the firefight was inspired by audio of the actual shootout captured by a neighbor on a handheld camera.

“Two things grabbed my attention — the density of the firefight, which sounded like Chinese New Year, and the sound of wind chimes from a nearby home,” he explains. “Within what sounded like war and chaos, there was a sweet sound that referenced home, family, porch… This shootout is happening in a residential area, in the middle of everyday life. Throughout the film, I wanted to maintain the balance between emotional and visceral sounds. Working closely with picture editors Colby Parker Jr. and Gabriel Fleming, we experimented with sound design that aligned directly with the dramatic effect of the visuals versus designs that counteracted the drama and created an experience that was less comfortable but ultimately more emotional.”

Tension was another important aspect of the design. The bombing disrupted life, and not just the lives of those immediately or physically affected by the bombing. Mohar wanted the sound to express those wider implications. “When the city is hit, it affects everyone. Something in that time period is just not the same. I used a variety of recordings of calls to prayer and crowds of people from all over the world to create soundscapes that you could expect to hear in a city but not in Boston. I incorporated these in different times throughout the film. They aren’t in your face, but used subtly.”

The Mix
On the mix, he and re-recording mixer Mike Prestwood-Smith chose a realistic approach to their sonic treatments.

Prestwood-Smith notes that for an event as recent and close to the heart as the Boston Marathon bombing, the goal was to have respect for the people who were involved — to make Patriots Day feel real and not sensationalized in any sense. "We wanted it to feel believable, like you are witnessing it, rather than entertaining people. We want to be entertaining, engaging and dramatic, but ultimately we don't want this to feel gratuitous, as though we are using these events to our advantage. That's a tightrope to walk, not just for sound but for everything, like the shooting and the performances. All of it."

Mohar reinforces the idea of enabling the audience to feel the events of the bombing first-hand through sound. “When we experience an event that shocks us, like a car crash, or in this case, an act of terror, the way we experience time is different. You assess what’s right there in front of you and what is truly important. I wanted to leverage this characteristic in the soundtrack to represent what it would be like to be there in real time, objectively, and to create a singular experience.”

Archival Footage
Mohar and Prestwood-Smith had access to enormous amounts of archival material from the FBI, which was strategically used throughout the soundtrack. In the first two reels, up to and including the bombing, Prestwood-Smith explains that picture editors Fleming and Parker Jr. intercut between the dramatized footage and the archived footage “literally within seconds of each other. Whole scenes became a dance between the original footage and the footage that Peter shot. In many cases, you’re not aware of the difference between the two and I think that is a very clever and articulate thing they accomplished. The sound had to adhere to that and it had to make you feel like you were never really shifting from one thing to the other.”

It was not a simple task to transition from the Hollywood-quality sound of the dramatized footage to sound captured on iPhones and low-resolution cameras. Prestwood-Smith notes that he and Mohar were constantly evolving the qualities of the sounds and mix treatments so all elements would integrate seamlessly. “We needed to keep a balance between these very different sound sources and make them feel coherently part of one story rather than shifting too much between them all. That was probably the most complex part of the soundtrack.”

Berg’s approach to perspective — showing the event from a reporter’s point of view as opposed to a spectator’s point of view — helped the sound team interweave the archival material and fictionalized material. For example, Prestwood-Smith reports the crowd sounds were 90 percent archival material, played from the perspective of different communication sources, like TV broadcasts, police radio transmissions and in-ear exchanges from production crews on the scene. “These real source sounds are mixed with the actors’ dialogue to create a thread that always keeps the story together as we alternate through archival and dramatized picture edits.”

While intercutting various source materials for the marathon and bombing sequences, Mohar and Prestwood-Smith worked shot by shot, determining for each whether to highlight an archival sound, carry the sound across from the previous shot or go with another specific sound altogether, regardless of whether it was one they created or one that was from the original captured audio.

“There would be archival footage with screaming on it that would go across to another shot and connect the archive footage to the dramatized, or sometimes not. We literally worked inch-by-inch to make it feel like it all belonged in one place,” explains Prestwood-Smith. “We did it very boldly. We embraced it rather than disguised it. Part of what makes the soundtrack so dynamic is that we allow each shot to speak in its genuine way. In the earlier reels, where there is more of the archival footage, the dynamics of it really shift dramatically.”

Patriots Day is not meant to be a clinical representation of the event. It is not a documentary. By dramatizing the Boston Marathon bombing, Berg delivers a human story on an emotional level. He uses music to help articulate the feeling of a scene and guide the audience through the story emotionally.

“On an emotional level, the music did an enormous amount of heavy lifting because so much of the sound work was really there to give the film a sense of captured reality and truth,” says Prestwood-Smith. “The music is one of the few things that allows the audience to see the film — the event — slightly differently. It adds more emotion where we want it to but without ever tipping the balance too far.”

The Score
Composers Trent Reznor and Atticus Ross gave each cue a definitive role. Their music helps the audience decompress for certain moments before being thrust right back into the action. "Their compositions were so intentional and so full of character and attitude. It's not generic," says Mohar. "Each cue feels like a call to action. The tracks have eyes and mouths and teeth. It's very intentional. The music is not just an emotional element; it's part of the sound design and sound overall. The sound and music work together to contribute equally to this film."

"The way that we go back and forth between the archival footage and the dramatized footage was the same way we went from designed audio to source audio, from music to musical, from sound effects to sound effective," he continues. "On each scene, we decided to either blur the line between music and effects, between archival sound and designed sound, or to have a hard line between each."

To complement the music, Mohar experimented with rhythmic patterns of sounds to reinforce the level of intensity of certain scenes. “I brought in mechanical keyboards of various types, ages and material, and recorded different typing rhythms on them. These sounds were used in many of the Black Falcon terminal scenes. I used softer sounding keyboards with slower tempos when I wanted the level of tension to be lower, and then accelerated them into faster tempos with harsher sounding keyboards as the drama in the terminal increased,” he says. “By using modest, organic sounds I could create a subliminal sense of tension. I treated the recordings with a combination of plug-ins, delays, reverbs and EQs to create sounds that were not assertive.”
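Mohar's accelerating keyboard rhythms were performed and recorded, not synthesized, but the tempo-ramp idea behind them is easy to illustrate. A hypothetical numpy sketch (function name and default tempos are invented for the example) that renders a click track whose tempo rises linearly over the clip, so events crowd together and the felt tension builds toward the end:

```python
import numpy as np

def accelerating_clicks(duration_s, sr=44100, start_bpm=60.0, end_bpm=180.0):
    """Render a click track whose tempo ramps linearly from
    start_bpm to end_bpm over the clip's duration."""
    n = int(duration_s * sr)
    out = np.zeros(n)
    click = np.hanning(64) * 0.8    # a short, soft tick
    t = 0.0
    while t < duration_s:
        # interpolate the current tempo from elapsed time
        bpm = start_bpm + (end_bpm - start_bpm) * (t / duration_s)
        i = int(t * sr)
        end = min(i + len(click), n)
        out[i:end] += click[:end - i]
        t += 60.0 / bpm             # the next beat arrives sooner as bpm rises
    return out
```

Swapping the Hann tick for recorded keystroke samples, as Mohar describes, gives the same subliminal acceleration with an organic timbre instead of a synthetic one.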

Dialogue
In terms of dialogue, the challenge was to get the archive material and the dramatized material to live in the same space emotionally and technically, says Prestwood-Smith. “There were scenes where Mark Wahlberg’s character is asking for ambulances or giving specific orders and playing underneath that dialogue is real, archival footage of people who have just been hurt by these explosions talking on their phones. Getting those two things to feel integrated was a complex thing to do. The objective was to make the sound believable. ‘Is this something I can believe?’ That was the focus.”

Prestwood-Smith used a combination of Avid and FabFilter plug-ins for EQ and dynamics, and created reverbs using Exponential Audio’s PhoenixVerb and Audio Ease’s Altiverb.

Staying in The Box
From sound editorial through to the final mix, Mohar and Prestwood-Smith chose to keep the film in Pro Tools. Staying in the box offered the best workflow solution for Patriots Day. Mohar designed and mixed the first phase of the film at his studio at Technicolor's Tribeca West location in Los Angeles, a satellite of Technicolor's main sound facility at Paramount, while Prestwood-Smith worked out of his own mix room in London. The two collaborated remotely, sharing their work back and forth and continuously developing the mix to match the changing picture edit. "We were on a very accelerated schedule, and they were cutting the film all the way through mastering. Having everything in the box meant that we could constantly evolve the soundtrack," says Prestwood-Smith.

7.1 Surround Mix
Mohar and Prestwood-Smith met up for the final 7.1 surround mix at 424 Post in Hollywood and mixed the immersive versions at Technicolor Hollywood.

While some mix teams prefer to split the soundtrack, with one mixer on music and dialogue and the other handling sound effects and Foley, Mohar and Prestwood-Smith have a much more fluid approach. There is no line drawn across the board; they share the tracks equally.

"Mike has great taste and instincts; he doesn't operate like a mixer. He operates like a filmmaker, and I look to him to make the final decisions and direct the shape of the soundtrack," explains Mohar. "The best thing about working with Mike is that it's truly collaborative; no part of the mix belonged to just one person. Anything was up for grabs and the sound as a whole belonged to the story. It makes the mix more unified, and I wouldn't have it any other way."


Jennifer Walden is a New Jersey-based audio pro and writer. 

Cory Melious

Behind the Title: Heard City senior sound designer/mixer Cory Melious

NAME: Cory Melious

COMPANY: Heard City (@heardcity)

CAN YOU DESCRIBE YOUR COMPANY?
We are an audio post production company.

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
I provide final mastering of the audio soundtrack for commercials, TV shows and movies. I combine the production audio recorded on set (typically dialogue), narration, music (whether an original composition or an artist's track) and sound effects (often created by me) into one 5.1 surround soundtrack that plays on both TV and the internet.

Heard City

WHAT WOULD SURPRISE PEOPLE ABOUT WHAT FALLS UNDER THAT TITLE?
I think most people without a production background think the sound of a spot just “is.” They don’t really think about how or why it happens. Once I start explaining the sonic layers we combine to make up the final mix they are really surprised.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The part that really excites me is the fact that each spot offers its own unique challenge. I take raw audio elements and tweak and mold them into a mix. Working with the agency creatives, we’re able to develop a mix that helps tell the story being presented in the spot. In that respect I feel like my job changes day in and day out and feels fresh every day.

WHAT’S YOUR LEAST FAVORITE?
Working late! There are a lot of late hours in creative jobs.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I really like finishing a job. It’s that feeling of accomplishment when, after a few hours, I’m able to take some pretty rough-sounding dialog and manipulate that into a smooth-sounding final mix. It’s also when the clients we work with are happy during the final stages of their project.

WHAT TOOLS DO YOU USE ON A DAY-TO-DAY BASIS?
Avid Pro Tools, Izotope RX, Waves Mercury, Altiverb and Revibe.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
One of my many hobbies is making furniture. My dad is a carpenter and taught me how to build at a very young age. If I never had the opportunity to come to New York and make a career here, I’d probably be building and making furniture near my hometown of Seneca Castle, New York.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I think this profession chose me. When I was a kid I was really into electronics and sound. I was both the drummer and the front of house sound mixer for my high school band. Mixing from behind the speakers definitely presents some challenges! I went on to college to pursue a career in music recording, but when I got an internship in New York at a premier post studio, I truly fell in love with creating sound for picture.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, I’ve worked on Chobani, Google, Microsoft, and Budweiser. I also did a film called The Discovery for Netflix.

The Discovery for Netflix.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I’d probably have to say Chobani. That was a challenging campaign because the athletes featured in it were very busy. In order to capture the voiceover properly I was sent to Orlando and Los Angeles to supervise the narration recording and make sure it was suitable for broadcast. The spots ran during the Olympics, so they had to be top notch.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iPhone, iPad and depth finder. I love boating and can’t imagine navigating these waters without knowing the depth!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I'm on the basics — Facebook, LinkedIn and Instagram. I dabble with Snapchat occasionally and will even open up Twitter once in a while to see what's trending. I'm a fan of photography and nature, so I follow a bunch of outdoor Instagrammers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I joke with my friends that all of my hobbies are those of retired folks — sailing, golfing, fly fishing, masterful dog training, skiing, biking, etc. I joke that I’m practicing for retirement. I think hobbies that force me to relax and get out of NYC are really good for me.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

"Girls do not do rewrites," says Jim Belushi's character, Wick McFadden, in Amazon Studios' series Good Girls Revolt. It's 1969, and he's the national editor at News of the Week, a fictional news magazine based in New York City. He's confronting new researcher Nora Ephron (Grace Gummer), who claims credit for a story Wick has just praised in front of the entire newsroom staff. The trouble is, women at the magazine aren't writers; they're only "researchers," following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender-bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom set had an open floor plan with a bi-level design: the girls work in "the pit," downstairs from the male writers. The set was also hollow, which caused problems with the actors' footsteps on the production tracks, explains supervising sound editor Peter Austin. "The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes," he says.

The main character, Patti Robinson (Genevieve Angelson), was particularly challenging because of her signature leather riding boots. "We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage," says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work — sound editorial, Foley, ADR, loop group and final mix — was handled at Westwind Media in Burbank under the guidance of post producer Cindy Kerber.

Austin and dialogue editor Sean Massey made every effort to save production dialogue when possible and to keep ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when characters were engaged in confidential whispers. Fortunately, "the set mixer, Joe Foglia, was terrific," says Austin. "He captured some great tracks despite all these issues, and for that we're very thankful!"

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds, like typewriters clacking and dinging, the sound of rotary phones with whirring dials and bell-style ringers, the sound of papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library, from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and helped to blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this and so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years, and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.
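For readers curious about the underlying trick, slowing audio down and pitching it down go hand in hand when you simply resample. The sketch below is a plain NumPy illustration of that crude “tape slowed down” effect, not Revoice Pro’s algorithm; all names and numbers are invented for the example.

```python
import numpy as np

def slow_and_pitch_down(x, factor):
    """Crudely slow down and pitch down a mono signal by resampling.

    Reading the samples back at 1/factor speed stretches the duration
    by `factor` and lowers the pitch by the same ratio. Dedicated tools
    like Revoice Pro decouple time and pitch; this sketch does not.
    """
    n_out = int(len(x) * factor)
    # Fractional read positions into the original signal
    pos = np.arange(n_out) / factor
    return np.interp(pos, np.arange(len(x)), x)

# A 440 Hz tone slowed by 2x comes out twice as long and an octave lower
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
slowed = slow_and_pitch_down(tone, 2.0)
```

In practice a production tool would also preserve formants so voices don’t turn into cartoon monsters, which is exactly why a dedicated plug-in was used rather than a simple resample.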

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed that work off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialogue/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.

Sony Pictures Post adds home theater dub stage

By Mel Lambert

Reacting to the increasing popularity of home theater systems that offer immersive sound playback, Sony Pictures Post Production has added a new mix stage to accommodate next-generation consumer audio formats.

Located in the landmark Thalberg Building on the Sony Pictures lot in Culver City, the new Home Theater Immersive Mix Stage features a flexible array of loudspeakers that can accommodate not only Dolby Atmos and Barco Auro-3D immersive consumer formats, but also other configurations as they become available, including DTS:X, as well as conventional 5.1- and 7.1-channel legacy formats.

The new room has already seen action on an Auro-3D consumer mix for director Paul Feig’s Ghostbusters and director Antoine Fuqua’s Magnificent Seven in both Atmos and Auro-3D. It is scheduled to handle home theater mixes for director Morten Tyldum’s new sci-fi drama Passengers, which will be overseen by Kevin O’Connell and Will Files, the re-recording mixers who worked on the theatrical release.

L-R: Nathan Oishi; Diana Gamboa, director of Sony Pictures Post Sound; Kevin O’Connell, re-recording mixer on ‘Passengers’; and Tom McCarthy.

“This new stage keeps us at the forefront in immersive sound, providing an ideal workflow and mastering environment for home theaters,” says Tom McCarthy, EVP of Sony Pictures Post Production Services. “We are empowering mixers to maximize the creative potential of these new sound formats, and deliver rich, enveloping soundtracks that consumers can enjoy in the home.”

Reportedly, Sony is one of the few major post facilities that currently can handle both Atmos and Auro-3D immersive formats. “We intend to remain ahead of the game,” McCarthy says.

The consumer mastering process involves repurposing original theatrical release soundtrack elements for a smaller domestic environment at reduced playback levels suitable for Blu-ray, 4K Ultra HD disc and digital delivery. The home Atmos format involves a 7.1.4 configuration, with a horizontal array of seven loudspeakers — three up front, two side channels and two rear surrounds — plus a subwoofer/LFE channel and four overhead/height channels. The consumer Auro-3D format, in essence, involves a pair of 5.1-channel loudspeaker arrays — left, center, right plus two rear surround channels — located one above the other, with all speakers approximately six feet from the listening position.

Formerly an executive screening room, the new 600-square-foot stage is designed to replicate the dimensions and acoustics of a typical home-theater environment. According to the facility’s director of engineering, Nathan Oishi, “The room features a 24-fader Avid S6 control surface console with Pan/Post modules. The four in-room Avid Pro Tools HDX 3 systems provide playback and record duties via Apple 12-Core Mac Pro CPUs with MADI interfaces and an 8TB Promise Pegasus hard disk RAID array, plus a wide array of plug-ins. Picture playback is from a Mac Mini and Blackmagic HD Extreme video card with a Brainstorm DCD8 Clock for digital sync.”

An Avid/DAD AX32 Matrix controller handles monitor assignments, which are then routed to a BSS BLU 806 programmable EQ that performs the standard B-chain duties for distribution to the room’s loudspeaker array. The array comprises a total of 13 JBL LSR-708i two-way loudspeakers and two JBL 4642A dual-15 subwoofers powered by Crown DCi Series networked amplifiers. Atmos panning within Pro Tools is accommodated by the familiar Dolby Rendering and Mastering Unit (RMU).

During September’s “Sound for Film and Television Conference,” Dolby’s Gary Epstein demo’d Atmos. ©2016 Mel Lambert.

“A Delicate Audio custom truss system, coupled with Adaptive Technologies speaker mounts, enables the near-field monitor loudspeakers to be re-arranged and customized as necessary,” adds Oishi. “Flexibility is essential, since we designed the room to seamlessly and fully support both Dolby Atmos and Auro formats, while building in sufficient routing, monitoring and speaker flexibility to accommodate future immersive formats. Streaming and VR deliverables are upon us, and we will need to stay nimble enough to quickly adapt to new specifications.”

Regarding the choice of a mixing controller for the new room, McCarthy says that he is committed to integrating more Avid S6 control surfaces into the facility’s workflow, as witnessed by their current use within several theatrical stages on the Sony lot. “Our talent is demanding it,” he states. “Mixing in the box lets our editors and mixers keep their options open until print mastering. It’s a more efficient process, both creatively and technically.”

The new Immersive Mix Stage will also be used as a “Flex Room” for Atmos pre-dubs when other stages on the lot are occupied. “We are also planning to complete a dedicated IMAX re-recording stage early next year,” reports McCarthy.

“As home theaters grow in sophistication, consumers are demanding immersive sound, ultra HD resolution and high-dynamic range,” says Rich Berger, SVP of digital strategy at Sony Pictures Home Entertainment. “This new stage allows our technicians to more closely replicate a home theater set-up.”

“The Sony mix stage adds to the growing footprint of Atmos-enabled post facilities and gives the Hollywood creative community the tools they need to deliver an immersive experience to consumers,” states Curt Behlmer, Dolby’s SVP of content solutions and industry relations.

Adds Auro Technologies CEO Wilfried Van Baelen, “Having major releases from Sony Pictures Home Entertainment incorporate Auro-3D helps bring this immersive experience to consumers, ensuring they are able to enjoy films as the creator intended.”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

TrumpLand

TrumpLand gets quick turnaround via Technicolor Postworks

Michael Moore in TrumpLand is a 73-minute film that documents a one-man show performed by Moore over two nights in October to a mostly Republican crowd at a theater in Ohio. It made its premiere just 11 days after those performances at New York’s IFC Center.

The very short timeframe between live show and theatrical debut included a brisk five days at Technicolor PostWorks New York, where sound and picture were finalized. [Editor’s note: The following isn’t any sort of political statement. It’s just a story about a very quick post turnaround and the workflow involved. Enjoy!]

Michael Kurihara was supervising sound editor and re-recording mixer on the project. He was provided with the live feeds from more than a dozen microphones used to record the event. “Michael had a hand-held mic and a podium mic, and there were boom mics throughout the crowd,” Kurihara recalls. “They set it up like they were recording an orchestra with mics everywhere. I was able to use those boom mics and some on stage to push sound into the surrounds to really give you the feeling that you are sitting in the theater.”

Kurihara’s main objectives, naturally, were to ensure that the dialogue was clear and that the soundtrack, which included elements from both nights, was consistent, but he also worked to capture the flavor of the event. He notes, for example, that Moore wanted to preserve the way that he used his microphone to produce comic effects. “He did a funny bit about the Clinton Foundation, and used the mic the way stand-up comics do, holding it closer or further away to underscore the joke,” Kurihara says. “By holding the mic at different angles, he makes the sound warmer or punchier.”

Kurihara adds that the mix sessions did not follow a conventional, linear path as creative editorial was still ongoing. “That made it a particularly exciting project,” he notes. “We were never just mixing. Editorial changes continued to arrive right up to the point of print.”

Focusing on Picture
Colorist Allie Ames handled the film’s picture finishing. Like Kurihara, she had to cement visual consistency while maintaining the immediacy of the live event. She worked from a conformed version of the film supplied by the editing team.

According to Ames, “It already had a beautiful look from the way it was staged and shot, so my goal was to embrace and enhance the intimacy of the location and create a consistent look that would draw the film audience into the world of the theatrical audience without distracting from Michael’s stage performance.”

Moore and his producers attended most of the sound mixing and picture grading sessions. “It was an unusual and exciting process,” says Ames. “Usually, you have weeks to finish a film, but in this case we had to get it out quickly. It was an honor to contribute to this project.”

Technicolor PostWorks has provided post services for several of Moore’s documentaries, including Where to Invade Next, which debuted earlier this year. For TrumpLand the facility created deliverables for the premiere at IFC, and subsequent theatrical and Netflix releases.

Says Moore, “Simply put, there would have been no TrumpLand movie without Technicolor PostWorks. They have a dedicated team of artists who are passionate about filmmaking, and especially about documentaries. In this instance, they went above and beyond what was asked of them to ensure we were ready in record time for our premiere — and they did so without compromising quality or creativity. I did my previous film with them a year ago and in just 14 months they were already using technology so new it made our 2015 experience feel so… 2015.”

The sound of fighting in Jack Reacher: Never Go Back

By Jennifer Walden

Tom Cruise is one tough dude, and not just on the big screen. Cruise, who seems to be aging very gracefully, famously likes to do his own stunts, much to the dismay of many film studio execs.

Cruise’s most recent tough guy turn is in the sequel to 2014’s Jack Reacher. Jack Reacher: Never Go Back, which is in theaters now, is based on the protagonist in author Lee Child’s series of novels. Reacher, as viewers quickly find out, is a hands-on type of guy — he’s quite fond of hand-to-hand combat where he can throw a well-directed elbow or headbutt a bad guy square in the face.

Supervising sound editor Mark P. Stoeckinger, based at Formosa Group’s Santa Monica location, has worked on numerous Cruise films, including both Jack Reacher movies, Mission: Impossible II and III and The Last Samurai; he also helped out on Edge of Tomorrow. Stoeckinger has a ton of respect for Cruise: “He’s my idol. Being about the same age, I’d love to be as active and in shape as he is. He’s a very amazing guy because he is such a hard worker.”

The audio post crew on ‘Jack Reacher: Never Go Back.’ Mark Stoeckinger is on the right.

Because he does his own stunts, and thanks to the physicality of Jack Reacher’s fighting style, sometimes Cruise gets a bruise or two. “I know he goes through a fair amount of pain, because he’s so extreme,” says Stoeckinger, who strives to make the sound of Reacher’s punches feel as painful as they are intended to be. If Reacher punches through a car window to hit a guy in the face, Stoeckinger wants that sound to have power. “Tom wants to communicate the intensity of the impacts to the audience, so they can appreciate it. That’s why it was performed that way in the first place.”

To give Reacher’s fights that visceral, intense feel, Stoeckinger takes a multi-frequency approach. He layers high-frequency sounds, like swishes and slaps to signify speed, with low-end impacts to add weight. The layers are always an amalgamation of sound effects and Foley.
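As a toy illustration of that layering idea (not Stoeckinger’s actual recipe; every duration, frequency and gain here is invented), a screen punch can be sketched as a high-passed noise “swish” for speed stacked on a decaying low sine for weight:

```python
import numpy as np

sr = 48000  # sample rate, Hz

def swish(dur=0.15):
    """High-frequency noise burst with a fast fade -- the 'speed' layer."""
    n = int(sr * dur)
    noise = np.random.randn(n)
    # Crude high-pass: a first difference emphasizes high frequencies
    hp = np.diff(noise, prepend=0.0)
    return hp * np.linspace(1.0, 0.0, n)

def thump(dur=0.25, freq=60.0):
    """Decaying low sine -- the 'weight' layer."""
    n = int(sr * dur)
    t = np.arange(n) / sr
    return np.sin(2 * np.pi * freq * t) * np.exp(-t * 18)

def punch():
    """Layer the swish slightly ahead of the thump, the way a fist
    travels before it lands."""
    s, th = swish(), thump()
    out = np.zeros(int(sr * 0.3))
    out[: len(s)] += 0.4 * s
    lead = int(sr * 0.05)  # impact lands 50 ms after the swish begins
    out[lead : lead + len(th)] += th
    return out

hit = punch()  # 0.3 s of layered 'punch' at 48 kHz
```

A real effects editor would of course start from recorded swishes, body hits and Foley rather than synthesized layers; the point is only how the frequency bands divide the labor.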

Stoeckinger prefers pulling hit impacts from sound libraries, or creating impacts specifically with “oomph” in mind. Then he uses Foley to flesh out the fight, filling in the details to connect the separate sound effects elements in a way that makes the fights feel organic.

The Sounds of Fighting
Under Stoeckinger’s supervision, a fight scene’s sound design typically begins with sound effects. This allows his sound team to start immediately, working with what they have at hand. On Jack Reacher: Never Go Back this task was handed over to sound effects editor Luke Gibleon at Formosa Group. Once the sound effects were in place, Stoeckinger booked the One Step Up Foley stage with Foley artist Dan O’Connell. “Having the effects in place gives us a very clear idea of what we want to cover with Foley,” he says. “Between Luke and Dan, the fight soundscapes for the film came to life.”

The culminating fight sequence, where Reacher inevitably prevails over the bad guy, was Stoeckinger’s favorite to design. “The arc of the film built up to this fight scene, so we got to use some bigger sounds. Although, it still needed to seem as real as a Hollywood fight scene can be.”

The sound there features low-frequency embellishments that help the audience to feel the fight and not just hear it. The fight happens during a rowdy street festival in New Orleans in honor of the Day of the Dead. Crowds cavort with noisemakers, bead necklaces rain down, music plays and fireworks explode. “Story wise, the fireworks were meant to mask any gunshots that happened in the scene,” he says. “So it was about melding those two worlds — the fight and the atmosphere of the crowds — to help mask what we were doing. That was fun and challenging.”

The sounds of the street festival scene were all created in post since there was music playing during filming that wasn’t meant to stay on the track. The location sound did provide a sonic map of the actual environment, which Stoeckinger considered when rebuilding the scene. He also relied on field recordings captured by Larry Blake, who lives in New Orleans. “Then we searched for other sounds that were similar because we wanted it to sound fun and festive but not draw the ear too much since it’s really just the background.”

Stoeckinger sweetened the crowd sounds with recordings they captured of various noisemakers, tambourines, bead necklaces and group ADR to add mid-field and near-field detail when desired. “We tried to recreate the scene, but also gave it a Hollywood touch by adding more specifics and details to bring it more to life in various shots, and bring the audience closer to it or further away from it.”

Stoeckinger also handled design on the film’s other backgrounds. His objective was to keep the locations feeling very real, so he used a combination of practical effects they recorded and field recordings captured by effects editor Luke Gibleon, in addition to library effects. “Luke [Gibleon] has a friend with access to an airport, so Luke did some field recordings of the baggage area and various escalators with people moving around. He also captured recordings of downtown LA at night. All of those field recordings were important in giving the film a natural sound.”

There were numerous locations in this film. In one, Reacher meets up with a teenage girl he’s protecting from the bad guys. She lives in a sketchy part of town, so to reinforce the sketchiness of the neighborhood, Stoeckinger added nearby train tracks to the ambience and created street walla that had an edgy tone. “It’s nothing that you see outside of course, but sound-wise, in the ambient tracks, we can paint that picture,” he explains.
In another location, Stoeckinger wanted to sell the idea that they were on a dock, so he added in a boat horn. “They liked the boat horn sound so much that they even put a ship in the background,” he says. “So we had little sounds like that to help ground you in the location.”

Tools and the Mix
At Formosa, Stoeckinger had his team work together in one big Avid Pro Tools 12 session that included all of their sounds: the Foley, the backgrounds, sound effects, loop group and design elements. “We shared it,” he says. “We had a ‘check out’ system, like, ‘I’m going to check out reel three and work on this sequence.’ I did some pre-mixing, where I went through a scene or reel and decided what’s working or what sections needed a bit more. I made a mark on a timeline and then handed that off to the appropriate person. Then they opened it up and did some work. This master session circulated between two or three of us that way.” Stoeckinger, Gibleon and sound designer Alan Rankin, who handled guns and miscellaneous fight sounds, worked on this section of the film.

All the sound effects, backgrounds, and Foley were mixed on a Pro Tools ICON, and kept virtual from editorial to the final mix. “That was helpful because all the little pieces that make up a sound moment, we were able to adjust them as necessary on the stage,” explains Stoeckinger.

Premixing and the final mixes were handled at Twentieth Century Fox Studios on the Howard Hawks Stage by re-recording mixers James Bolt (effects) and Andy Nelson (dialogue/music). Their console arrangement was a hybrid: the effects were mixed on an Avid ICON, while the dialogue and music were mixed on an AMS Neve DFC console.

Stoeckinger feels that Nelson did an excellent job of managing the dialogue, particularly for moments where noisy locations may have intruded upon subtle line deliveries. “In emotional scenes, if you have a bunch of noise that happens to be part of the dialogue track, that detracts from the scene. You have to get all of the noise under control from a technical standpoint.” On the creative side, Stoeckinger appreciated Nelson’s handling of Henry Jackman’s score.

On effects, Stoeckinger feels Bolt did an amazing job of working the backgrounds into the Dolby Atmos surround field, placing PA announcements in the overheads and pulling birds, cars or airplanes into the surrounds. While Stoeckinger notes this is not an overtly Atmos film, “it helped to make the film more spatial, helped with the ambiences and they did a little bit of work with the music too. But, they didn’t go crazy in Atmos.”

iZotope intros mixing plug-in Neutron at AES show

iZotope was at last week’s AES show in LA with Neutron, their newest plug-in, which is geared toward simplifying and enhancing the mixing process. Neutron’s Track Assistant saves you time by listening to your audio and recommending custom starting points for tracks. According to iZotope, analysis intelligence within Neutron allows Track Assistant to automatically detect instruments, recommend the placement of EQ nodes and set optimal settings for other modules. Users still maintain full control over all their mix decisions, but Track Assistant gives them more time to focus on their creative take on the mix.

Neutron’s Masking Meter allows you to visually identify and fix perceptual frequency collisions between instruments, which can result in guitars masking lead vocals, bass covering up drums and other issues that can cause a “muddy” or overly crowded mix. Easily tweak each track to carve away muddiness and reveal new sonic possibilities.
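A rough idea of how such masking detection can work, sketched below in NumPy, is to compare per-band spectral energies of two tracks and flag the bands where both are loud at once. This is a simplified illustration, not iZotope’s actual algorithm; the band edges and the threshold are arbitrary choices for the example.

```python
import numpy as np

def band_energies(x, sr, edges):
    """Mean spectral energy of signal x in each frequency band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def masking_bands(a, b, sr, edges, thresh_db=-20.0):
    """Flag bands where both tracks carry significant energy --
    a crude stand-in for a perceptual masking analysis."""
    ea, eb = band_energies(a, sr, edges), band_energies(b, sr, edges)
    da = 10 * np.log10(ea / ea.max() + 1e-12)
    db = 10 * np.log10(eb / eb.max() + 1e-12)
    return (da > thresh_db) & (db > thresh_db)

# A 'vocal' near 300 Hz and a 'guitar' near 320 Hz collide in the
# 250-500 Hz band; the guitar's 2 kHz content is left alone
sr = 8000
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 300 * t)
guitar = np.sin(2 * np.pi * 320 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
edges = [0, 250, 500, 1000, 4000]
clash = masking_bands(vocal, guitar, sr, edges)
```

Once a colliding band is identified, the usual fix is the one the article describes: carve a little EQ space out of one instrument so the other can sit in it.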

“[Neutron] has a deep understanding of the tracks and where they compete with one another, and it offers subtle enhancements to the sound based on that understanding,” explains iZotope CEO/co-founder Mark Ethier.

Neutron can be used on every track, offering zero-latency, CPU-efficient performance. It offers static/dynamic EQ, two multiband compressors, a multiband Transient Shaper, a multiband Exciter and a True Peak Limiter.

What the plug-in offers:
• Automatic detection of different instruments — such as vocals, dialogue, guitar, bass and drums — with Neutrino spectral shaping applied to bring subtle clarity and balance to each track.
• Recommended starting points via Track Assistant, including EQ nodes, compressor thresholds, saturation types and multiband crossover points.
• A Masking Meter that helps carve out sonic space so each instrument sits better in the mix.
• The ability to create a mix with five mixing processors integrated into one CPU-efficient channel strip, offering both clean digital and warm vintage-flavored processing.
• Surround support [Advanced only] for audio post pros who need to enhance the audio-for-picture experience.
• Individual plug-ins [Advanced only] for the Equalizer, Compressor, Transient Shaper and Exciter.

Neutron and Neutron Advanced are available now. Neutron Advanced will also be available as part of iZotope’s new Music Production Bundle 2, which combines iZotope’s latest products with its other tools, including Ozone 7 Advanced, Nectar 2 Production Suite, VocalSynth, Trash 2 Expanded, RX Plug-in Pack and Insight.

Neutron, Neutron Advanced and the Music Production Bundle 2 will be discounted through October 31, 2016: Neutron for $199 (regularly $249); Neutron Advanced for $299 (regularly $349); and the Music Production Bundle 2 for $499 (regularly $699).

Deepwater Horizon’s immersive mix via Twenty Four Seven Sound

By Jennifer Walden

The Peter Berg-directed film Deepwater Horizon, in theaters now, opens on a black screen with recorded testimony from real-life Deepwater Horizon crew member Mike Williams recounting his experience of the disastrous oil spill that began April 20, 2010 in the Gulf of Mexico.

“This documentary-style realism moves into a wide, underwater immersive soundscape. The transition sets the music and sound design tone for the entire film,” explains Eric Hoehn, re-recording mixer at Twenty Four Seven Sound in Topanga Canyon, California. “We intentionally developed the immersive mixes to drop the viewer into this world physically, mentally and sonically. That became our mission statement for the Dolby Atmos design on Deepwater Horizon. Dolby empowered us with the tools and technology to take the audience on this tightrope journey between anxiety and real danger. The key is not to push the audience into complete sensory overload.”

L-R: Eric Hoehn and Wylie Stateman. Photo Credit: Joe Hutshing

The 7.1 mix on Deepwater Horizon was crafted first with sound designer Wylie Stateman and re-recording mixers Mike Prestwood Smith (dialogue/music) and Dror Mohar (sound effects) at Warner Bros in New York City. Then Hoehn mixed the immersive versions, but it wasn’t just a technical upmix. “We spent four weeks mixing the Dolby Atmos version, teasing out sonic story-point details such as the advancing gas pressure, fire and explosions,” Hoehn explains. “We wanted to create a ‘wearable’ experience, where your senses actually become physically involved with the tension and drama of the picture. At times, this movie is very much all over you.”

The setting for Deepwater Horizon is interesting in that the vertical landscape of the 25-story oil rig is more engrossing than the horizontal landscape of the calm sea. This dynamic afforded Hoehn the opportunity to really work with the overhead Atmos environment, making the audience feel as though they’re experiencing the story and not just witnessing it. “The story takes place 40 miles out at sea on a floating oil drilling platform. The challenge was to make this remote setting experiential for the audience,” Hoehn explains. “For visual artists, the frame is the boundary. For us, working in Atmos, the format extends the boundaries into the auditorium. We wanted the audience to feel as if they too were trapped with our characters aboard the Deepwater Horizon. The movement of sound into the theater adds to the sense of disorientation and confusion that they’re viewing on screen, making the story more immediate and disturbing.”

In their artistic approach to the Atmos mix, Stateman and sound effects designers Harry Cohen and Sylvain Lasseur created an additional sound design layer — specific Atmos objects that help to reinforce the visuals by adding depth and weight via sound. For example, during a sequence after a big explosion and blow out, Mike Williams (Mark Wahlberg) wakes up with a pile of rubble and a broken door on top of him. Twisted metal, confusing announcements and alarms were designed from scratch to become objects that added detail to the space above the audience. “I think it’s one of the most effective Atmos moments in the film. You are waking up with Williams in the aftermath of this intense, destructive sequence. The entire rig is overwhelmed by off-stage explosions, twisting metal, emergency announcements and hissing steam. Things are falling apart above you and around you,” details Hoehn.

Hoehn shares another example: during a scene on the drill deck they created sound design objects to describe the height and scale of the 25-story oil derrick. “We put those sounds into the environment by adding delays and echoes that make it feel like those sounds are pinging around high above you. We wanted the audience to sense the vertical layers of the Deepwater Horizon oil rig,” says Hoehn, who created the delays and echoes using a multichannel delay plug-in called Slapper by The Cargo Cult. “I had separate mix control over the objects and the acoustic echoes applied. I could put the discrete echoes in distinct places in the Atmos environment. It was an agitative design element. It was designed to make the audience feel oriented and at the same time disoriented.”
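A multitap delay of the kind Hoehn describes can be sketched in a few lines: delayed, attenuated copies of a mono source scattered across output channels. This is illustrative only, not how Slapper actually works, and the channel numbers simply stand in for speaker positions in an immersive layout.

```python
import numpy as np

def multitap_delay(x, sr, taps, n_channels):
    """Scatter delayed copies of a mono source across output channels.

    `taps` is a list of (delay_seconds, gain, channel) tuples -- a toy
    version of placing discrete echoes at distinct spots in an
    immersive speaker array.
    """
    max_delay = max(d for d, _, _ in taps)
    out = np.zeros((n_channels, len(x) + int(sr * max_delay)))
    for delay, gain, ch in taps:
        start = int(sr * delay)
        out[ch, start:start + len(x)] += gain * x
    return out

# One metallic 'clang' pinging upward through three hypothetical
# height channels, each echo later and quieter than the last
sr = 48000
clang = np.random.randn(sr // 10)
echoes = multitap_delay(clang, sr,
                        taps=[(0.0, 1.0, 0),
                              (0.12, 0.5, 1),
                              (0.27, 0.25, 2)],
                        n_channels=3)
```

Because each tap lands on its own channel, the echoes arrive from different directions as well as different times, which is what makes a sound seem to ping around high above the audience.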

The additional sounds they created were not an attempt to reimagine the soundtrack, but rather a means of enhancing what was there. “We were deliberate about what we added,” Hoehn explains. “As a team we strived to maximize the advantages of an Atmos theater, which allows us to keep a film mentally, physically and sonically intense. That was the filmmaker’s primary goal.”

The landscape in Deepwater Horizon doesn’t just tower over the audience; it extends under them as well. The underwater scenes were an opportunity to feature the music since these “sequences don’t contain metal banging and explosions. These moments allow the music to give an emotional release,” says Hoehn.

Hoehn explains that the way music exists in Atmos is sort of like a big womb of sound; it surrounds the audience. The underwater visuals depict the catastrophic failure of the blowout preventer — a valve that can close off the well and prevent an uncontrolled flow of oil — and the music punctuates this emotional and pivotal point in the film. It gives a sense of calm that contrasts with what’s happening on screen. Sonically, it’s also a contrast to the stressful soundscape happening on board the rig. Hoehn says, “It’s good for such an intense film and story to have moments where you can find comfort, and I think that is where the music provides such emotional depth. It provides that element of comfort between the moments where your senses are being flooded. We played with dynamic range, going to silence and using the quiet to heighten the anticipation of a big release.”

Hoehn mixed the Atmos version in Twenty Four Seven Sound’s Dolby Atmos lab, which uses an Avid S6 console running Pro Tools 12 and features Meyer Acheron mains and 26 JBL AC28 monitors for the surrounds and overheads. It is an environment designed to provide sonic precision so that when the mixer turns a knob or pushes a fader, the change can instantly be heard. “You can feel your cause-and-effect happen immediately. Sometimes when you’re in a bigger room, you are battling the acoustics of the space. It’s helpful to work under a magnifying glass, particularly on a soundtrack that is as detailed as Deepwater Horizon’s,” says Hoehn.

Hoehn spent a month on the Atmos mix, which served as the basis for the other immersive formats, such as the IMAX 5 and IMAX 12 mixes. “The IMAX versions maintain the integrity of our Atmos design,” says Hoehn. “A lot of care had to be taken in each of the immersive versions to make sure the sound worked in service of the storytelling process.”

Bring On VR
In addition to the theatrical release, Hoehn discussed the prospect of a Deepwater Horizon VR experience. “Working with our friends at Dolby, we’re looking at virtual reality and experimenting with sequences from Deepwater Horizon. We are working to convert the Atmos mix to a headset, virtual sound environment,” says Hoehn. He explains that binaural sound or surround sound in headphones presents its own design challenges; it’s not just a direct lift of the 7.1 or Atmos mix.

“Atmos mixing for a theatrical sound pressure environment is different than the sound pressure environment in headphones,” explains Hoehn. “It’s a different sound pressure that you have to design for, and the movement of sounds needs to be that much more precise. Your brain needs to track movement and so maybe you have less objects moving around. Or, you have one sound object hand off to another object and it’s more of a parade of sound. When you’re in a theater, you can have audio coming from different locations and your brain can track it a lot easier because of the fixed acoustical environment of a movie theater. So that’s a really interesting challenge that we are excited to sink our teeth into.”
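
Hoehn’s point about headphone rendering can be sketched in code. The toy below places a mono object in a stereo headphone field using a crude interaural time and level difference model (the textbook Woodworth approximation). It is only an illustration of the idea, not Dolby’s actual binaural renderer, which uses measured HRTFs; every number in it is a generic assumption rather than anything from this mix.

```python
import numpy as np

def binaural_pan(mono, azimuth_deg, sr=48000, head_radius=0.0875, c=343.0):
    """Place a mono signal in a stereo headphone field using a crude
    ITD (interaural time difference) + ILD (level difference) model.
    Illustrative toy only; real binaural renderers use measured HRTFs."""
    az = np.radians(azimuth_deg)
    # Woodworth ITD approximation: arrival delay at the far ear, in seconds.
    itd = head_radius / c * (az + np.sin(az))
    delay = int(round(abs(itd) * sr))
    # Simple level difference: the far ear is up to ~6 dB quieter.
    near, far = 1.0, 10 ** (-6.0 * abs(np.sin(az)) / 20)
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    if azimuth_deg >= 0:   # source on the right: left ear is far and late
        left, right = far * delayed, near * mono
    else:
        left, right = near * mono, far * delayed
    return np.stack([left, right])
```

A source panned hard right arrives at the right channel immediately and at the left channel a fraction of a millisecond later and slightly quieter, which is the movement cue a headphone listener’s brain tracks.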


Jennifer Walden is a New Jersey-based audio engineer and writer.

Call of the Wild — Tarzan’s iconic yell

By Jennifer Walden

For many sound enthusiasts, Tarzan’s iconic yell is the true legend of that story. Was it actually actor Johnny Weissmuller performing the yell? Or was it a product of post sound magic involving an opera singer, a dog, a violin and a hyena played backwards as MGM Studios claims? Whatever the origin, it doesn’t impact how recognizable that yell is, and this fact wasn’t lost on the filmmakers behind the new Warner Bros. movie The Legend of Tarzan.

The updated version is not a far cry from the original, but it is more guttural and throaty, and less like a yodel. It has an unmistakable animalistic quality. While we may never know the true story behind the original Tarzan yell, postPerspective went behind the scenes to learn how the new one was created.

Supervising sound editor/sound designer Glenn Freemantle and sound designer/re-recording mixer Niv Adiri at Sound24, a multi-award-winning audio post company located on the lot of Pinewood Film Studios in Buckinghamshire, UK, reveal that they went through numerous iterations of the new Tarzan yell. “We had quite a few tries on that but in the end it’s quite a simple sound. It’s actor Alexander Skarsgård’s voice and there are some human and animal elements, like gorillas, all blended together in it,” explains Freemantle.

Since the new yell always plays in the distance, it needed to feel powerful and raw, as though Tarzan is waking up the jungle. To emphasize this, Freemantle says, “We have animal sounds rushing around the jungle after the Tarzan yell, as if he is taking control of it.”

The jungle itself is a marvel of sight and sound. Freemantle notes that everything in the film, apart from the actors on screen, was generated afterward — the Congo, the animals, even the villages and people, a harbor with ships and an action sequence involving a train. Everything.

The film was shot on a back lot of Warner Bros. Studios in Leavesden, UK, so making the CGI-created Congo feel like the real deal was essential. They wanted the Congo to feel alive, and have the sound change as the characters moved through the space. Another challenge was grounding all the CG animals — the apes, wildebeests, ostriches, elephants, lions, tigers, and other animals — in that world.

When Sound24 first started on the film, a year and a half before its theatrical release, Freemantle says there was very little to work with visually. “Basically it was right from the nuts and bolts up. There was nothing there, nothing to see in the beginning apart from still pictures and previz. Then all the apes, animals and jungles were put in and gradually the visuals were built up. We were building temp mixes for the editors to use in their cut, so it was like a progression of sound over time,” he says.

Sound24’s sound design got increasingly detailed as the visuals presented more details. They went from building ambient background for different parts of Africa — from the deep jungle to the open plains — at different times of the day and night to covering footsteps for the CG gorillas. The sound design team included Ben Barker, Tom Sayers, and Eilam Hoffman, with sound effects editing by Dan Freemantle and Robert Malone. Editing dialogue and ADR was Gillian Dodders. Foley was recorded at Shepperton Studios by Foley mixer Glen Gathard.

Capturing Sounds
Since capturing their own field recordings in the Congo would have proved too challenging, Sound24 opted to source sound recordings authentic to that area. They also researched and collected the best animal sounds they could find, which were particularly useful for the gorilla design.

Sound24’s sound design team designed the gorillas to have a range of reactions, from massive roars and growls to smaller grunts and snorts. They cut and layered different animal sounds, including processed human vocalizations, to create a wide range of gorilla sounds.

There were three main gorillas, and each sounds a bit different, but the most domineering of all was Akut. During a fight between Akut and Tarzan, Adiri notes that in the mix, they wanted to communicate Akut’s presence and power through sound. “We tried to create dynamics within Akut’s voice so that you feel that he is putting in a lot of effort into the fight. You see him breathing hard and moving, so his voice had to have his movement in it. We had to make it dynamic and make sure that there was space for the hits, and the falls, and whatever is happening visually. We had to make sure that all of the sounds are really tied to the animal and you feel that he’s not some super ape, but he’s real,” Adiri says. They also designed sounds for the gang of gorillas that came to egg on Akut in his fight.

The Mix
All the effects, Foley and backgrounds were edited and premixed in Avid Pro Tools 11. Since Sound24 had been working on The Legend of Tarzan for over a year, keeping everything in the box allowed them to update their session over time and still have access to previous elements and temp mixes. “The mix was evolving throughout the sound editorial process. Once we had that first temp mix we just kept working with that, remixing sounds and reworking scenes but it was all done in the box up until the final mix. We never started the mix from scratch on the dub stage,” says Adiri.

For the final Dolby Atmos mix at Warner Bros. De Lane Lea Studios in London, Adiri and Freemantle brought their Avid S6 console into the studio. “That surface was brilliant for us,” says Adiri, who mixed the effects/Foley/backgrounds. He shared the board with re-recording mixer Ian Tapp, on dialogue/music.

Adiri feels the Atmos surround field worked best for quiet moments, like during a wide aerial shot of the jungle where the camera moves down through the canopy to the jungle floor. There he was able to move through layers of sounds, from the top speakers down, and have the ambience change as the camera’s position changed. Throughout the jungle scenes, he used the Atmos surrounds to place birds and distant animal cries, slowly panning them around the theater to make the audience feel as though they are surrounded by a living jungle.

He also likes to use the overhead speakers for rain ambience. “It’s nice to use them in quieter scenes when you can really feel the space, moving sounds around in a more subliminal way, rather than using them to be in-your-face. Rain is always good because it’s a bright sound. You know that it is coming from above you. It’s good for that very directional sort of sound.”

Ambience wasn’t the only sound that Adiri worked with in Atmos. He also used it to pan the sounds of monkeys swinging through the trees and soaring overhead, and for Tarzan’s swinging. “We used it for these dynamic moments in the storytelling rather than filling up those speakers all the time. For the moments when we do use the Atmos field, it’s striking and that becomes a moment to remember, rather than just sound all the time,” concludes Freemantle.

Jennifer Walden is a New Jersey-based writer and audio engineer. 

Larson Studios pulls off an audio post slam dunk for FX’s ‘Baskets’

By Jennifer Walden

Turnarounds for TV series are notoriously fast, but imagine a three-day sound post schedule for a single-camera half-hour episodic series? Does your head hurt yet? Thankfully, Larson Studios in Los Angeles has its workflow on FX’s Baskets down to a science. In the show, Zach Galifianakis stars as Chip Baskets, who works as a California rodeo clown after failing out of a prestigious French clown school.

So how do you crunch a week and a half’s worth of work into three days without sacrificing quality or creativity? Larson’s VP, Rich Ellis, admits they had to create a very aggressive workflow, which was made easier thanks to their experience working with Baskets post supervisor Kaitlin Menear on a few other shows.

Ellis says having a supervising sound editor — Cary Stacy — was key in setting up the workflow. “There are others competing for space in this market of single-camera half-hours, and they treat post sound differently — they don’t necessarily bring a sound supervisor to it. The mixer might be cutting and mixing and wrangling all of the other elements, but we felt that it was important to continue to maintain that traditional sound supervisor role because it actually helps the process to be more efficient when it comes to the stage.”

John Chamberlin and Cary Stacy


This allows re-recording mixer John Chamberlin to stay focused on the mix while sound supervisor Stacy handles any requests that pop up on stage, such as alternate lines or options for door creaks. “I think director Jonathan Krisel gave Cary at least seven honorary Emmy awards for door creaks over the course of our mix time,” jokes Menear. “Cary can pull up a sound effect so quickly, and it is always exactly perfect.”

Every second counts when there are only seven hours to mix an episode from top to bottom before post producer Menear, director Krisel and the episode’s picture editor join the stage for the two-hour final fixes and mix session. Having complete confidence in Stacy’s alternate selections, Chamberlin says he puts them into the session, grabs the fader and just lets it roll. “I know that Cary is going to nail it and I go with it.”

Even before the episode gets to the stage, Chamberlin knows that Stacy won’t overload the session with unnecessary elements, which are time-consuming. Even still, Chamberlin says the mix is challenging in that it’s a lot for one person to do. “Although there is care taken to not overload what is put on my plate when I sit down to mix, there are still 8 to 10 tracks of Foley, 24 or more tracks of backgrounds and, depending on the show, the mono and stereo sound effects can be 20 tracks. Dialogue is around 10 and music can be another 10 or 12, plus futz stuff, so it’s a lot. You have to have a workflow that’s efficient and you have to feel confident about what you’re doing. It’s about making decisions quickly.”

Chamberlin mixed Baskets in 5.1 — using a Pro Tools 11 system with an Avid ICON D-Command — on Stage 4 at Larson Studios, where he’s mixed many other shows, such as Portlandia, Documentary Now, Man Seeking Woman, Dice, the upcoming Netflix series Easy, Comedy Bang Bang, Meltdown With Jonah and Kumail and Kroll Show. “I’m so used to how Stage 4 sounds that I know when the mix is in a good place.”

Another factor in the three-day turnaround is the choice to forgo loop group and to record ADR only when it’s absolutely necessary. The post sound team relied on location sound mixer Russell White to capture all the lines as clearly as possible on set, which was a bit of a challenge with the non-principal characters.


Tricky On-Set Audio
According to Menear, director Krisel loves to cast non-actors in the majority of the parts. “In Baskets, outside of our three main roles, the other people are kind of random folk that Jonathan has collected throughout his different directing experiences,” she says. While that adds a nice flavor creatively, the inexperienced cast members tend to step on each other’s lines, or not project properly — problems you typically won’t have with experienced actors.

For example, Louie Anderson plays Chip’s mom Christine. “Louie has an amazing voice and it’s really full and resonant,” explains Chamberlin. “There was never a problem with Louie or the pro actors on the show. The principals were very well represented sonically, but the show has a lot of local extras, and that poses a challenge in the recording of them. Whether they were not talking loud enough or there was too much talking.”

A good example is the Easter brunch scene in Episode 104. Chip, his mother and grandmother encounter Martha (Chip’s insurance agent/pseudo-friend played by Martha Kelly) and her parents having brunch in the casino. They decide to join their tables together. “There were so many characters talking at the same time, and a lot of the side characters were just having their own conversations while we were trying to pay attention to the main characters,” says Stacy. “I had to duck those side conversations as much as possible when necessary. There was a lot of that finagling going on.”
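
The “ducking” Stacy describes is, in DAW terms, a fader ride or a sidechain compressor: the side conversations drop a few dB whenever the principal dialogue is active. A minimal sketch of that idea on bare NumPy arrays; the threshold, depth and window values are illustrative, not anything from the actual session, and a real ducker would add attack and release smoothing.

```python
import numpy as np

def duck(side, main, sr=48000, depth_db=-12.0, thresh=0.05, win_ms=50):
    """Attenuate `side` by `depth_db` wherever `main` is active.
    Toy stand-in for sidechain/fader ducking; values are illustrative."""
    win = max(1, int(sr * win_ms / 1000))
    # Envelope of the main dialogue: moving RMS over a short window.
    env = np.sqrt(np.convolve(main ** 2, np.ones(win) / win, mode="same"))
    gain = np.where(env > thresh, 10 ** (depth_db / 20), 1.0)
    return side * gain
```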

Stacy used iZotope RX 5 features like Decrackle and Denoise to clean up the tracks, as well as the Spectral Repair feature for fixing small noises.
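
RX’s algorithms are proprietary, but the classic technique behind broadband denoising, spectral subtraction, gives a feel for what such tools do: learn an average noise spectrum from a noise-only stretch, then subtract it from each frame’s magnitude while keeping the original phase. A toy version, with illustrative frame size and spectral floor:

```python
import numpy as np

def spectral_subtract(x, noise, frame=512, floor=0.05):
    """Classic spectral subtraction: estimate a noise magnitude spectrum
    from a noise-only clip and subtract it frame by frame. Illustrates
    the idea only; RX's actual processing is far more sophisticated."""
    # Average magnitude spectrum of the noise profile.
    n_frames = [np.abs(np.fft.rfft(noise[i:i + frame]))
                for i in range(0, len(noise) - frame + 1, frame)]
    noise_mag = np.mean(n_frames, axis=0)
    out = np.zeros(len(x))
    for i in range(0, len(x) - frame + 1, frame):
        spec = np.fft.rfft(x[i:i + frame])
        # Subtract the noise estimate, keeping a small spectral floor.
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out
```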

Multiple Locations
Another challenge for sound mixer White was that he had to record quickly in numerous locations for any given episode. That Easter brunch episode alone had at least eight different locations, including the casino floor, the casino’s buffet, inside and outside of a church, inside the car, and inside and outside of Christine’s house. “Russell mentioned how he used two rigs for recording because he would always have to just get up and go. He would have someone else collect all of the gear from one location while he went off to a new location,” explains Chamberlin. “They didn’t skimp on locations. When they wanted to go to a place they would go. They went to Paris. They went to a rodeo. So that has challenges for the whole team — you have to get out there and record it and capture it. Russell did a pretty fantastic job considering where he was pushed and pulled at any moment of the day or night.”

Sound Effects
White’s tracks also provided a wealth of production effects, which were a main staple of the sound design. The whole basis for the show, for picture and sound, was to have really funny, slapstick things happen, but have them play really straight. “We were cutting the show to feel as real and as normal as possible, regardless of what was actually happening,” says Menear. “Like when Chip was walking across a room full of clown toys and there were all of these strange noises, or he was falling down, or doing amazing gags. We played it as if that could happen in the real world.”

Stacy worked with sound effects editor TC Spriggs to cut in effects that supported the production effects, never sounding too slapstick or over the top, even if the action was. “There is an episode where Chip knocks over a table full of champagne glasses and trips and falls. He gets back up only to start dancing, breaking even more glasses,” describes Chamberlin.

That scene was a combination of effects and Foley provided by Larson’s Foley team of Adam De Coster (artist) and Tom Kilzer (recordist). “Foley sync had to be perfect or it fell apart. Foley and production effects had to be joined seamlessly,” notes Chamberlin. “The Foley is impeccably performed and is really used to bring the show to life.”

Spriggs also designed the numerous backgrounds. Whether it was the streets of Paris, the rodeo arena or the doldrums of Bakersfield, all the locations needed to sound realistic and simple yet distinct. On the mix side, Chamberlin used processing on the dialogue to help sell the different environments – basic interiors and exteriors, the rodeo arena and backstage dressing room, Paris nightclubs, Bakersfield dive bars, an outdoor rave concert, a volleyball tournament, hospital rooms and dream-like sequences and a flashback.

“I spent more time on the dialogue than any other element. Each place had to have its own appropriate sounding environments, typically built with reverbs and delays. This was no simple show,” says Chamberlin. For reverbs, Chamberlin used Avid’s ReVibe and Reverb One, and for futzing, he likes McDSP’s FutzBox and Audio Ease’s Speakerphone plug-ins.
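
The futz plug-ins Chamberlin names model whole playback chains (speaker character, distortion, bandwidth), but the core of any futz is band-limiting dialogue to a small-speaker range, roughly 300 Hz to 3.4 kHz for a telephone. A crude brick-wall sketch of just that band-limiting step, for illustration only:

```python
import numpy as np

def futz(x, sr=48000, lo=300.0, hi=3400.0):
    """Crude 'futz': band-limit dialogue to a telephone-like band via a
    brick-wall FFT filter. Real futz plug-ins also model speaker
    resonance and distortion; this is only the band-limiting part."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    spec[(freqs < lo) | (freqs > hi)] = 0  # zero everything out of band
    return np.fft.irfft(spec, len(x))
```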

One of Chamberlin’s favorite scenes to mix was Chip’s performance at the rodeo, where he does his last act as his French clown alter ego Renoir. Chip walks into the announcer booth with a gramophone and asks for a special song to be played. Chamberlin processed the music to account for the variable pitch of the gramophone, and also processed the track to sound like it was coming over the PA system. In the center of the ring you can hear the crowds and the announcer, and off-screen a bull snorts and grinds its hooves into the dirt before rushing at Chip.
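
A gramophone’s variable pitch (“wow”) can be faked digitally by reading the audio along a gently wobbling position curve. The sketch below is in that spirit only; the rate and depth values are invented for illustration, not the processing actually used on the mix.

```python
import numpy as np

def wow(x, sr=48000, rate=0.7, depth=0.02):
    """Simulate gramophone 'wow': a slow periodic pitch drift created by
    resampling along a wobbling read position. Toy illustration only."""
    n = np.arange(len(x))
    # Read position advances at a rate of 1 +/- depth, wobbling at `rate` Hz.
    pos = n + depth * sr / (2 * np.pi * rate) * np.sin(2 * np.pi * rate * n / sr)
    pos = np.clip(pos, 0, len(x) - 1)
    return np.interp(pos, n, x)
```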

Another great sequence happens in the Easter brunch episode where we see Chip walking around the casino listening to a “Learn French” lesson through ear buds while smoking a broken cigarette and dreaming of being Renoir the clown on the streets of Paris. This scene summarizes Chip’s sad clown situation in life. It’s thoughtful, charming and lonely.

“We experimented with elaborate sound design for the voice of the narrator, however, we landed on keeping things relatively simple with just an iPhone futz,” says Stacy. “I feel this worked out for the best, as nothing in this show was overdone. We brought in some very light backgrounds for Paris and tried to keep the transitions as smooth as possible. We actually had a very large build for the casino effects, but played them very subtly.”

Adds Chamberlin, “We really wanted to enhance the inner workings of Chip and to focus in on him there. It takes a while in the show to get to the point where you understand Chip, but I think that is great. A lot of that has to do with the great writing and acting, but our support on the sound side, in particular on that Easter episode, was not to reinvent the wheel. Picture editors Micah Gardner and Michael Giambra often developed ideas for sound, and those had a great influence on the final track. We took what they did in picture editorial and just made it more polished.”

The post sound process on Baskets may be down and dirty, but the final product is amazing, says Menear. “I think our Larson Studios team on the show is awesome!”

Ergonomics from a post perspective (see what I did there?)

By Cory Choy

Austin’s SXSW is quite a conference, with pretty much something for everyone. I attended this year for three reasons: I’m co-producer and re-recording mixer on director Musa Syeed’s narrative feature film in competition, A Stray; I’m a member of the New York Post Alliance and was helping out at our trade show booth; and I’m a blogger and correspondent for this here online publication.

Given that my studio, Silver Sound in New York, has been doing a lot of sound for virtual reality recently, and with the mad scramble that every production company, agency and corporation has been in to make virtual reality content, I was pretty darn sure that my first post was going to be about VR (and don’t fear, I will be following up with one soon). But while I was checking out the new 360-degree video camera and rig offerings from Theta360 and 360Heros, and taking a good look at the new Micro Cinema Camera from Blackmagic, I noticed a pretty enthused and sizable crowd at one of the booths. The free Stella Artois beer samples were behind me, so I was pretty excited to go check out what I was sure must be the hip new virtual reality demonstration, The Martian VR Experience.

To my surprise, the hot demo wasn’t for a new camera rig or stitching software. It was for a chair… sort of. Folks were gathered around a tall table playing with Legos while resting on the Mogo, the latest “leaning seat” offering from inventor Martin Keen’s company, Focal Upright. It’s kind of a mix between a monopod, a swivel stool and an exercise ball chair, and it comes in a neat little portable bag — have chair, will travel! Leaning chairs allow people to comfortably maintain good posture while at their workstations. They also encourage you to work in a position that, unlike most traditional chairs, allows for good blood flow through the legs.

They were raffling off one of those suckers, hence all the people around. I didn’t win, but I did have the opportunity to talk to Keen about his products — a full line of leaning chairs, standing desks and workstations. Keen’s a really nice fellow, and I’m going to follow up with a more in-depth interview in the future. For now, though, the basics are that Keen’s company, Focal Upright, is one of several companies that have emerged to help folks who spend the majority of their days sitting (i.e. all of us post professionals) figure out a way to bring better posture and health back into their daily routines.

As a sound engineer, and therefore as someone who spends a whole lot of time every day at a console or mixing board, ergonomics is something I’ve had to pay a lot of attention to. So I thought I might share some of my, and my colleagues’, ergonomics experiences, thoughts and solutions.

Standing, Sitting and Posture
We’ve all been hearing about it for a while — sitting for extended periods of time can be bad for you. Sitting with bad posture can be even worse. My buddy and co-worker Luke Allen has been doing design and editing at a standing desk for the last couple of years, and he swears that it’s one of the best work decisions he’s ever made. After the first couple of months though, I noticed that he was complaining that his feet were getting tired and his knees hurt. In the same pickle? Luke solved his problem with a chef’s mat, like this one. Want to move around a little more at the standing desk? Check out the Level from FluidStance, another exhibitor at this year’s SXSW show. Not ready for a standing desk? Maybe try exploring a ball chair or fluid disc from physical therapy equipment manufacturer Isokinetics Inc.

Feel a little silly with that stuff? Instead, try getting up and walking around, or stretching every 20 minutes or so — 30 seconds to a minute should do. When I was getting started in this business, I was lucky enough to have the opportunity to apprentice under sound master craftsman Bernie Hajdenberg. I first got to observe him working in the mix, and then after some time, I had the privilege of operating sessions with him. One of the things that struck me was that Bernie usually stood up for the majority of the mixing sessions, and he would pace while discussing changes. When I was operating for him, he had me sit in a seat with no arms that could be raised pretty high. He told me this was very important, and it’s something that I’ve continued throughout my career. And lo and behold, I now realize that part of what Bernie had me do was to make sure that I wasn’t cutting off the circulation in my legs by keeping them extended and a little in front of me. And the chair with no arms helped keep my back straight.

Repetitive Stress
People who use their fingers a lot, whether typing or using a mouse, run the risk of developing a repetitive stress injury. Personally, I had a lot of wrist pain after my first year or so. What to do? First, make sure that your set-up isn’t forcing you to put your hands or wrists in an uncomfortable position. One of the things I did was elevate my mouse pad and keyboard. My buddy Tarcisio, and many others, use a trackball mouse. Try to break up your typing or mouse movements every couple of minutes with frequent, short bursts of finger stretches. After a few weeks of introducing stretching into my routine, my wrist and finger pain was alleviated greatly.

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City. He was recently nominated for an Emmy for “Outstanding Sound Mixing for Live Action” for Born To Explore.

Skywalker’s Randy Thom helps keep it authentic for ‘Peanuts’

By Jennifer Walden

Snoopy, Woodstock, Charlie Brown, Lucy… all the classic Peanuts characters hit the big screen earlier this month thanks to the Blue Sky Studios production The Peanuts Movie (20th Century Fox).

For those of you who might have worried that the Peanuts gang would “go Hollywood,” there is no need for concern. These beloved characters look and sound like they did in the Charles M. Schulz TV specials — which started airing in the 1960s — but they have been updated to fit the theatrical expectations of 2015.

While the latest technology has given depth and texture to these 2D characters, director Steve Martino and the Schulz family made sure the film didn’t stray far from Charles Schulz’s original creations.

Randy Thom


According to Skywalker Sound supervising sound editor/sound designer/re-recording mixer Randy Thom, “Steve Martino (from Blue Sky) spent most of the year hanging out in Santa Rosa, California, which is where the Schulz family still lives. He worked with them very closely to make sure that this film had the same feel and look as not only the cartoon strip, but also the TV specials. They did a wonderful job of staying true to all those visual and sonic tropes that we so much associate with Peanuts.”

Thom and the Skywalker sound team, based at the Skywalker Ranch in Marin County, California, studied the style of sound effects used in the original Peanuts TV specials and aimed to evoke those sounds as closely as they could for The Peanuts Movie, while also adding a modern vibe. “Often, on animated films, the first thing the director tells us is that it shouldn’t sound like a cartoon — they don’t want it to be cartoony with sound effects,” explains Thom, who holds an Oscar for his sound design on the animated feature The Incredibles, and has two Oscar nominations for his sound editing on The Polar Express and Ratatouille. “In The Peanuts Movie, we were liberated to play around with boings and other classic cartoon type sounds. We even tried to invent some of our own.”


The Red Baron and Subtle Sounds
The sound design is a mix of Foley effects, performed at Skywalker by Foley artists Sean England and Ronni Pittman, and cartoon classics like zips, boinks and zings. One challenge was creating a kid-friendly machine gun sound for Snoopy’s Red Baron air battles. “It couldn’t be scary, but it had to suggest the kinds of guns that were used on those planes in that era,” says Thom. The solution? Thom vocalized “ett-ett-ett-ett-ett” sounds, which they processed and combined with a “rat-tat-tat-tat-tat” rhythm that they banged out on pots and pans. The result is a faux machine gun that’s easy on little ears.

Another key element in the Red Baron sequences was the sound of the planes. Charles Schulz’s son, Craig, who was very involved with the film, owns a vintage WWI plane that, amazingly, still flies. “Craig [Schulz] flew the plane and a couple of people on our sound team rode in it. They were very brave and kept the recorder running the whole time,” says Thom, who completed the sound edit and premix in Avid Pro Tools 12.


They captured recordings on the plane, as well as from the ground as the plane performed a few acrobatic aerial maneuvers. During the final 7.1 mix in Mix G at Skywalker Sound, via the Neve DFC console, Thom says the challenge was to make the film sound exciting without being too dynamic. The final plane sounds were very mellow without any harsh upper frequencies or growly tones. “We had to be careful of the nature of the sounds,” he says. “If you make the airplanes too scary or intimidating, or sound too animalistic, little kids are going to be scared and cover their ears. We wanted to make sure it was fun without being scary.”
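
Taking harsh upper frequencies out of a sound is, at its simplest, low-pass filtering. A one-pole filter is the textbook minimal version; the actual mix would of course use far more refined EQ, and the cutoff below is an arbitrary example value.

```python
import math

def one_pole_lowpass(x, cutoff_hz, sr=48000):
    """One-pole low-pass filter: a minimal way to 'mellow' a sound by
    rolling off upper frequencies. Textbook sketch, not mix-grade EQ."""
    a = math.exp(-2 * math.pi * cutoff_hz / sr)  # feedback coefficient
    y, out = 0.0, []
    for s in x:
        y = (1 - a) * s + a * y  # smooth toward each input sample
        out.append(y)
    return out
```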

Many of the scenes in The Peanuts Movie have subtle sound design, with Foley being a big part of the track. There are a few places where sound gets to deliver the joke. One of Thom’s favorite scenes was when Charlie Brown visits the library to find the book “Leo’s Toy Store.”

“The library is supposed to be quiet and we had to be very playful with the sound of Charlie’s feet squeaking on the floor and making too much noise,” says Thom. “After he leaves the library, he slides down the hillside in the snow and ice and ends up running right through a house. That was a fun sequence also.”


One surprising piece of the soundtrack was the music. The name Vince Guaraldi is practically synonymous with Peanuts. His jazzy compositions are part of the Peanuts cultural lexicon. If someone says Peanuts, it instantly recalls to mind the melody of Guaraldi’s “Linus and Lucy” tune. And while “Linus and Lucy” is part of the film’s soundtrack, the majority of the score is orchestral compositions by Christophe Beck. “The music is mostly orchestral but even that has a Peanuts feel somehow,” concludes Thom.

Creating the sonic world of ‘Macbeth’

By Jennifer Walden

On December 4, we will all have the opportunity to hail Michael Fassbender as he plays Macbeth in director Justin Kurzel’s film adaptation of the classic Shakespeare play. And while Macbeth is considered to be the Bard’s darkest tragedy, audiences at the Cannes Film Festival premiere felt there was nothing tragic about Kurzel’s fresh take on it.

As evidenced in his debut film, The Snowtown Murders, Kurzel’s passion for dark imagery fits The Weinstein Co’s Macbeth like a custom-fitted suit of armor. “The Snowtown Murders was brutal, beautiful, uncompromising and original, and I felt sure Justin would approach Macbeth with the same vision,” says freelance supervising sound editor Steve Single. “He’s a great motivator and demanded more of the team than almost any director I’ve worked with, but we always felt that we were an important part of the process. We all put more of ourselves into this film, not only for professional pride, but to make sure we were true to Justin’s expectations and vision.”

Single, who was also the re-recording mixer on the dialogue/music, worked with London-based sound designers Markus Stemler and Alastair Sirkett to translate Kurzel’s abstract and esoteric ideas — like imagining the sound of mist — and place them in the reality of Macbeth’s world. Whether it was the sound of sword clashes or chimes for the witches, Kurzel looked beyond traditional sound devices. “He wanted the design team to continually look at what elements they were adding from a very different perspective,” explains Single.

L-R: Gilbert Lake, Steve Single and Alastair Sirkett.


Sirkett notes that Kurzel’s bold cinematic style — immediately apparent by the slow-motion-laced battle sequence in the opening — led him and Stemler to make equally bold choices in sound. Adds Stemler, “I love it when films have a strong aesthetic, and it was the same with the sound design. Justin certainly pushed all of us to go for the rather unconventional route here and there. In terms of the creative process, I think that’s a truly wonderful situation.”

Gathering, Creating Sounds
Stemler and Sirkett split up the sound design work by different worlds, as Kurzel referred to them, to ensure that each world sounded distinctly different, with its own unique sonic fingerprint. Stemler focused on the world of the battles, the witches and the village of Inverness. “The theme of the world of the witches was certainly a challenge. Chimes had always been a key element in Justin’s vision,” says Stemler, whose approach to sound design often begins with a Schoeps mic and a Sound Devices recorder.

As he started to collect and record a variety of chimes, rainmakers and tiny bells, Stemler realized that just shaking them wasn’t going to give him the atmospheric layer he was looking for. “It needed to be way softer and smoother. In the process I found some nacre chimes (think mother-of-pearl shells) that had a really nice resonance, but the ‘clonk’ sound just didn’t fit. So I spent ages trying to kind of pet the chimes so I would only get their special resonance. That was quite a patience game.”

By having distinct sonic themes for each “world,” re-recording mixers Single and Gilbert Lake (who handled the effects/Foley/backgrounds) were able to transition back and forth between those sonic themes, diving into the next ‘world’ without fully leaving the previous one.

There’s the “gritty reality of the situation Macbeth appears to be forging, the supernatural world of the witches whose prophecy has set out his path for him, the deterioration of Macbeth’s mental state, and how Macbeth’s actions resonate with the landscape,” says Lake, explaining the contrast between the different worlds. “It was a case of us finding those worlds together and then being conscious about how they relate to one another, sometimes contrasting and sometimes blending.”

Sirkett notes that the sonic themes were particularly important when crafting Macbeth’s craziness. “Justin wanted to use sound to help with Macbeth’s deterioration into paranoia and madness, whether it be using the sound of the witches, harking back to the prophecy or the initial battle and the violence that had occurred there. Weaving that into the scenes as we moved forward was always going to be a tricky balancing act, but I think with the sounds that we created, the fantastic music from composer Jed Kurzel, and with Steve [Single] and Gilly [Lake] mixing, we’ve achieved something quite amazing.”

Sirkett details a moment of Macbeth’s madness in which he recalls the memory of war. “I spent a lot of time finding elements from the opening battle — whether it be swords, clashes or screams — that worked well once they were processed to feel as though they were drifting in and out of his mind without the audience being able to quite grasp what they were hearing, but hopefully sensing what they were and the implication of the violence that had occurred.”

Sirkett used Audio Ease’s Altiverb 7 XL in conjunction with a surround panning tool called Spanner by The Cargo Cult “to get some great sounds and move them accurately around the theatre to help give a sense of unease for those moments that Justin wanted to heighten Macbeth’s state of mind.”
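Panning tools like Spanner automate precise speaker-to-speaker moves; under the hood, panners of this kind typically rely on a constant-power gain law so a sound keeps the same perceived loudness as it travels. Here is a minimal two-speaker sketch in Python with NumPy — an illustration of the general principle only, not Spanner’s actual implementation:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal between two speakers using the constant-power
    (sine/cosine) law. pan: -1.0 = full left, 0.0 = center, +1.0 = full right.
    The combined power of the two channels stays constant at any position."""
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left_gain = np.cos(theta)
    right_gain = np.sin(theta)
    return mono * left_gain, mono * right_gain

# A short 440 Hz test tone panned to center: each channel gets ~0.707 of
# the signal, so the summed power (L^2 + R^2) equals the original power.
tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
left, right = constant_power_pan(tone, 0.0)
```

The same law generalizes to 5.1 or 7.1 by panning pairwise between whichever two speakers currently bracket the sound’s position.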

The Foley, Score, Mix
The Foley team on Macbeth included Foley mixer Adam Mendez and Foley artist Ricky Butt from London’s Twickenham Studios. Additional Foley for the armies and special sounds for the witches was provided by Foley artist Carsten Richter and Foley mixer Marcus Sujata at Tonstudio Hanse Warns in Berlin, Germany. Sirkett points out that the sonic details related to the costumes that Macbeth and Banquo (Paddy Considine) wore for the opening battle. “Their costumes look huge, heavy and bloodied by the end of the opening battle. When they were moving about or removing items, you felt the weight, blood and sweat that was in them and how it was almost sticking to their bodies,” he says.

Composer Jed Kurzel’s score often interweaves with the sound design, at times melting into the soundscape and at other times taking the lead. Stemler notes the quiet church scene in which Lady Macbeth sits in the chapel of an abandoned village. Dust particles gently descend to the sound of delicate bells twinkling in the background. “They prepare for the moment where the score is sneaking in almost like an element of the wind.  It took us some time in the mix to find that perfect balance between the score and our sound elements. We had great fun with that kind of dance between the elements.”

During the funeral of Macbeth’s child in the opening of the film, the score by Jed Kurzel (the director’s brother) emotes a gentle mournfulness as it blends with the lashing wind and rain sound effects. Single feels the score is almost like another character. “Bold and unexpected, it was an absolute pleasure to bring each cue into the mix. From the rolling reverse percussion of the opening credits to the sublime theme for Lady Macbeth’s decline into madness, he crafted a score that is really very special.”

Single and Lake mixed Macbeth in 5.1 at Warner Bros.’ De Lane Lea studio in London, using an AMS Neve DFC console. On Lake’s side of the board, he loved mixing the final showdown between Macbeth and Macduff — a beautifully edited sequence where the rhythm of the fighting perfectly plays against Jed Kurzel’s score.

“We wanted the action to feel like Macbeth and Macduff were wrenching their weapons from the earth and bringing the full weight of their ambitions down on one another,” says Lake. “Markus [Stemler] steered clear of traditional sword hits and shings and I tried to be as dynamic as possible and to accentuate the weight and movement of their actions.”

To create original sword sounds, Stemler took the biggest screw wrench he could find and recorded himself banging on every big piece of metal available in their studio’s warehouse. “I hit old heaters, metal staircases, stands and pipes. I definitely left a lot of damage,” he jokes. After a bit of processing, those sounds became major elements in the sword sounds.

Director Kurzel wanted the battle sequences to immerse the audience in the reality of war, and to show how deeply it affects Macbeth to be in the middle of all that violence. “I think the balance between “real” action and the slo-mo gives you a chance to take in the horror unfolding,” says Lake. “Jed’s music is very textural and it was about finding the right sounds to work with it and knowing when to back off with the effects and let it become more about the score. It was one of those rare and fortunate events where everyone is pulling in the same direction without stepping on each other’s toes!”

L-R: Alastair Sirkett, Steve Single and Gilbert Lake.

To paraphrase the famous quote, “Copy is King” holds true for any project, but in a Shakespeare adaptation the copy is as untouchable as Vito Corleone in The Godfather. “You have in Macbeth some of the most beautiful and insightful language ever written and you have to respect that,” says Single. His challenge was to make every piece of poetic verse intelligible while still keeping the intimacy that director Kurzel and the actors had worked for on-set, which Single notes, was not an easy task. “The film was shot entirely on location, during the worst storms in the UK for the past 100 years. Add to this an abundance of smoke machines and heavy Scottish accents and it soon became apparent that no matter how good production sound mixer Stuart Wilson’s recordings were — he did a great job under very tough conditions — there was going to be a lot of cleaning to do and some difficult decisions about ADR.”

Even though there was a good bit of ADR recorded, in the end Single found he was able to restore and polish much of the original recordings, always making sure that in the process of achieving clarity the actors’ performances were maintained. In the mix, Single says it was about placing the verse in each scene first and then building up the soundtrack around that. “This was made especially easy by having such a good effects mixer in Gilly Lake,” he concludes.

Jennifer Walden is a New Jersey-based writer and audio engineer.

Mark Mangini keynotes The Art of Sound 
Design at Sony Studios

Panels focus on specifics of music, effects and dialog sound design, and immersive soundtracks

By Mel Lambert

Defining a sound designer as somebody “who uses sound to tell stories,” Mark Mangini, MPSE, was adamant that “sound editors and re-recording mixers should be authors of a film’s content, and take creative risks. Art doesn’t get made without risk.”

A sound designer/re-recording mixer at Hollywood’s Formosa Group Features, Mangini outlined his sound design philosophy during a keynote speech at the recent The Art of Sound Design: Music, Effects and Dialog in an Immersive World conference, which took place at Sony Pictures Studios in Culver City.

Mangini is the recipient of three Academy Award nominations, for The Fifth Element (1997), Aladdin (1992) and Star Trek IV: The Voyage Home (1986).

Acknowledging that an immersive soundtrack should fully engage the audience, Mangini outlined two ways to achieve that goal. “Physically, we can place sound around an audience, but we also need to engage them emotionally with the narrative, using sound to tell the story,” he explained to the 500-member audience. “We all need to better understand the role that sound plays in the filmmaking process. For me, sound design is storytelling — that may sound obvious, but it’s worth reminding ourselves on a regular basis.”

While an understanding of the tools available to a sound designer is important, Mangini readily concedes, “Too much emphasis on technology keeps us out of the conversation; we are just seen as technicians. Sadly, we are all too often referred to as ‘The Sound Guy.’ How much better would it be for us if the director asked to speak with the ‘Audiographer,’ for example. Or the ‘Director of Sound’ or the ‘Sound Artist?’ — terms that better describe what we actually do? After all, we don’t refer to a cinematographer as ‘The Image Guy.’”

Mangini explained that he always tries to emphasize the why and not the how, and is not tempted to imitate somebody else’s work. “After all, when you imitate you ensure that you will only be ‘almost’ as good as the person or thing you imitate. To understand the ‘why,’ I break down the script into story arcs and develop a sound script so I can reference the dramatic beats rather than the visual cues, and articulate the language of storytelling using sound.”

Past Work
Offering up examples of his favorite work as a soundtrack designer, Mangini provided two clips during his keynote. “While working on Star Trek [in 2009] with supervising sound editor Mark Stoeckinger, director J. J. Abrams gave me two days to prepare — with co-designer Mark Binder — a new soundtrack for the two-minute mind meld sequence. J. J. wanted something totally different from what he already had. We scrapped the design work we did on the first day, because it was only different, not better. On day two we rethought how sound could tell the story that J. J. wanted to tell. Having worked on three previous Star Trek projects [different directors], I was familiar with the narrative. We used a complex combination of orchestral music and sound effects that turned the sequence on its head; I’m glad to say that J. J. liked what we did for his film.”

The two collaborators received the following credit: “Mind Meld Soundscape by Mark Mangini and Mark Binder.”

Turning to his second soundtrack example, Mangini recalled receiving a call from Australia about the in-progress soundtrack for George Miller’s Mad Max: Fury Road, the director’s fourth outing with the franchise. “The mix they had prepared in Sydney just wasn’t working for George. I was asked to come down and help re-invigorate the track. One of the obstacles to getting this mix off the ground was the sheer abundance of material to choose from. When you have so many choices on a soundtrack, the mix can be an agonizing process of ‘Sound Design by Elimination.’ We needed to tell him, ‘Abandon what you have and start over.’ It was up to me, as an artist, to tell George that his V8 needed an overhaul and not just a tune-up!”

“We had 12 weeks, working at Formosa with co-supervising sound editor Scott Hecker — and at Warner Bros Studios with re-recording mixers Chris Jenkins and Gregg Rudloff — to come up with what George Miller was looking for. We gave each vehicle [during the extended car-chase sequence that opens the film] a unique character with sound, and carefully defined [the lead protagonist Max Rockatansky’s] changing mental state during the film. The desert chase became ‘Moby Dick,’ with the war rig as the white whale. We focused on narrative decisions as we reconstructed the soundtrack, always referencing ‘the why’ for our design choices in order to provide a meaningful sonic immersion. Miller has been quoted as saying, ‘Mad Max is a film where we see with our ears.’ This from a director who has been making films for 40 years!”

His advice to fledgling sound designers? Mangini kept it succinct: “Ask yourself why, not how. Be the author of content, take risks, tell stories.”

Creating a Sonic Immersive Experience
Subsequent panels during the all-day conference addressed how to design immersive music, sound effects and dialog elements used on film and TV soundtracks. For many audiences, a 5.1-channel format is sufficient for carrying music, effects and dialog in an immersive, surround experience, but 7.1-channel — with added side speakers, in addition to the new Dolby Atmos, Barco/Auro 3D and DTS:X/MDA formats — can extend that immersive experience.

“During editorial for Guardians of the Galaxy we had so many picture changes that the re-recording mixers needed all of the music stems and breakouts we could give them,” said music editor Will Kaplan, MPSE, from Warner Bros. Studio Facilities, during the “Music: Composing, Editing and Mixing Beyond 5.1” panel. It was presented by Formosa Group and moderated by scoring mixer Dennis Sands, CAS. “In a quieter movie we can deliver an entire orchestral track that carries the emotion of a scene.”

Music: Composing, Editing and Mixing Beyond 5.1 panel (L-R): Andy Koyama, Bill Abbott, Joseph Magee, moderator Dennis Sands, Steven Saltzman and Will Kaplan.

Describing his collaboration with Tim Burton, music editor Bill Abbott, MPSE from Formosa reported that the director “liked to hear an entire orchestral track for its energy, and then we recorded it section by section with the players remaining on the stage, which can get expensive!”

Joseph Magee, CAS (supervising music mixer on such films as Pitch Perfect 2, The Wedding Ringer, Saving Mr. Banks and The Muppets), likes to collaborate closely with the effects editor to decide who handles which elements from each song. “Who gets the snaps and dance shoes? How do we divide up the synchronous ambience and the design ambience? The synchronous ambience from the set might carry tails from the sing-offs, and needs careful matching. What if they pitch shift the recorded music in post? We then need to change the pitch of the music captured in the audience mics using DAW plug-ins.”
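The pitch matching Magee describes rests on a simple relationship: a shift of n semitones is a frequency ratio of 2^(n/12). A naive Python sketch of pitch shifting by resampling follows — purely illustrative, since the DAW plug-ins he refers to pair this with time-stretching (e.g. a phase vocoder) so the clip’s duration doesn’t change:

```python
import numpy as np

def resample_pitch_shift(signal, semitones):
    """Naive pitch shift: reading the signal faster raises the pitch but
    also shortens the clip by the same ratio. Real DAW plug-ins combine
    this with time-stretching to preserve the original duration."""
    ratio = 2.0 ** (semitones / 12.0)           # frequency ratio per semitone
    old_idx = np.arange(len(signal))
    new_idx = np.arange(0, len(signal), ratio)  # step through samples faster
    return np.interp(new_idx, old_idx, signal)  # linear interpolation

sr = 48000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # one second of A4
up = resample_pitch_shift(tone, 2.0)                 # up a whole tone
# The shifted copy is shorter by the same 2^(2/12) ratio it was raised.
```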

“I like to invite the sound designer to the music spotting session,” advised Abbott, “and discuss who handles what — is it a music cue or a sound effect?”

“We need to immerse audiences with sound and use the surrounds for musical elements,” explained Formosa’s re-recording mixer, Andy Koyama, CAS. “That way we have more real estate in the front channels for sound effects.”

“We should get the sound right on the set because it can save a lot of processing time on the dub stage,” advised production mixer Lee Orloff, CAS, during the “A Dialog on Dialog: From Set to Screen” panel moderated by Jeff Wexler, CAS.

A Dialog on Dialog: From Set to Screen panel (L-R): Lee Orloff, Teri Dorman, CAS president Mark Ulano, moderator Jeff Wexler, Gary Bourgeois, Marla McGuire and Steve Tibbo.

“I recall working on The Patriot, where the director [Roland Emmerich] chose to create ground mist using smoke machines known as Smoker Boats,” recalled Orloff, who received Oscar and BAFTA Awards for Terminator 2: Judgment Day (1991). “The trouble was that they contained noisy lawnmower engines, whose sound could be heard under all of the dialog tracks. We couldn’t do anything about it! But, as it turned out, that low-level noise added to the sense of being there.”

“I do all of my best work in pre-production,” added Wexler, “by working out the noise problems we will face on location. It is more than just the words that we capture; a properly recorded performance tells you so much about the character.”

“I love it when the production track is full of dynamics,” added dialog/music re-recording mixer Gary Bourgeois, CAS. “The voice is an instrument; if I mask out everything that is not needed I lose the ‘essence’ of the character’s performance. The clarity of dialog is crucial.”

“We have tools that can clean up dialog,” conceded supervising sound editor Marla McGuire, MPSE, “but if we apply them too often and too deeply it takes the life out of the track.”

“Sound design can make an important scene more impactful, but you need to remember that you’re working in the service of the film,” advised sound designer/supervising sound editor Richard King, MPSE, during the “Sound Effects: How Far Can You Go?” panel moderated by David Bondelevitch, MPSE, CAS.

Sound Effects: How Far Can You Go? panel (L-R): Mandell Winter, Scott Gershin, moderator David Bondelevitch, Greg Hedgpath, Richard King and Will Files.

In terms of music co-existing with sound effects, Formosa’s Scott Gershin, MPSE, advised, “During a plane crash sequence, I pitch shifted the sound effect to match the music.”

“I like to go to the music spotting session and ask if the director wants the music to serve as a rhythmic or thematic/tonal part of the soundtrack,” added sound effects re-recording mixer Will Files from Fox Post Production Services. “I just take the other one. Or if it’s all rhythm — a train ride, for example — we’ll agree to split [the elements].”

“On the stage, I’m constantly shifting sync and pitch shifting the sound effects to match the music track,” stated Gershin. “For Pacific Rim we had many visual effects arriving late with picture changes. Director Guillermo del Toro received so many new eight-frame VFX cues he wanted to use that the music track ended up looking like bar code” in the final Pro Tools sessions.

In terms of working with new directors, “I like to let them see some good movies with good sound design to start the conversation,” offered Files. “I front load the process by giving the director and picture editors a great-sounding temp track using dialog predubs that they can load into the Avid Media Composer to get them used to our sound ideas. It also helps the producers dazzle the studio!”

“Successful soundtrack design is a collaborative effort from production sound onwards,” advised re-recording mixer Mike Minkler, CAS, during “The Mix: Immersive Sound, Film and Television” panel, presented by DTS and moderated by Mix editor Tom Kenny. “It’s about storytelling. Somebody has to be the story’s guardian during the mix,” stated Minkler, who received Academy Awards for Dreamgirls (2006), Chicago (2002) and Black Hawk Down (2001). “Filmmaking is the ultimate collaboration. We need to be aware of what the director wants and what the picture needs. To establish your authority you need to gain their confidence.”

“For immersive mixes, you should start in Dolby Atmos as your head mix,” advised Jeremy Pearson, CAS, who is currently re-recording The Hunger Games: Mockingjay – Part 2 at Warner Bros. Studio. He also worked in that format on Mockingjay – Part 1 and Catching Fire. “Atmos is definitely the way to go; it’s what everyone can sign off on. In terms of creative decisions during an Atmos mix, I always ask myself, ‘Am I helping the story by moving a sound, or distracting the audience?’ After all, the story is up on the screen. We can enhance sound depth to put people into the scene, or during calmer, gentler scenes you can pinpoint sounds that engage the audience with the narrative.”
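The “head mix” workflow Pearson describes means mixing once in the largest format and folding that master down to the smaller deliverables. As a toy illustration, here is a sketch of folding a 7.1 bed to 5.1 in Python — the -3 dB coefficient and channel names are illustrative assumptions, not any studio’s actual downmix spec:

```python
import numpy as np

def fold_71_to_51(ch):
    """Fold a 7.1 bed (L, R, C, LFE, Ls, Rs, Lss, Rss) down to 5.1 by
    summing the side-surrounds into the rear surrounds at -3 dB.
    Coefficients are illustrative; real deliverables follow the
    distributor's downmix specification."""
    g = 10 ** (-3.0 / 20.0)  # -3 dB as a linear gain (~0.708)
    return {
        'L': ch['L'], 'R': ch['R'], 'C': ch['C'], 'LFE': ch['LFE'],
        'Ls': ch['Ls'] + g * ch['Lss'],
        'Rs': ch['Rs'] + g * ch['Rss'],
    }

# A bed with content only in the left side-surround channel.
n = 1024
bed = {name: np.zeros(n) for name in ('L', 'R', 'C', 'LFE', 'Ls', 'Rs', 'Lss', 'Rss')}
bed['Lss'][:] = 1.0
mix = fold_71_to_51(bed)  # that content lands in 'Ls', attenuated by -3 dB
```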

Kim Novak Theater at Sony Pictures Studios.

Minkler reported that he is currently working on director Quentin Tarantino’s The Hateful Eight, “which will be released initially for two weeks in a three-hour version on 70mm film to 100 screens, with an immersive 5.1-channel soundtrack mastered to 35 mm analog mag.”

Subsequently, the film will be released next year in a slightly different version via a conventional digital DCP.

“Our biggest challenge,” reported Matt Waters, CAS, sound effects re-recording mixer for HBO’s award-winning Game of Thrones, “is getting everything completed in time. Changes are critical and we might spend half a day on a sequence and then have only 10 minutes to update the mix when we receive picture changes.”

“When we receive new visuals,” added Onnalee Blank, CAS, who handles music and dialog re-recording on the show, “[the showrunners] tell us, ‘it will not change the sound.’ But if the boats become dragons…”

Photos by Mel Lambert.

Mel Lambert is principal of Content Creators, an LA-based editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Quick Chat: Walter Biscardi on his new Creative Hub co-op

Post production veteran and studio owner Walter Biscardi has opened The Creative Hub within his Atlanta-area facility Biscardi Creative Media (BCM). This co-op workspace for creatives is designed to offer indie filmmakers and home-based video producers a place to work, screen their work, meet with clients and collaborate.

Biscardi has had this idea in the back of his head for the past few years, but it was how he started his post company that inspired The Creative Hub. After spending years at CNN and in the corporate world, Biscardi launched his post business in 2001, working out of a spare bedroom in his house. In 2003 he added 1,200 square feet to the back of his house, where he ran the company until 2010. In January 2011 he moved into his current facility. So he knows a thing or two about starting small and growing a business naturally.

Color grading

Let’s find out more.

Why was this the right time to launch this co-op?
The tools keep getting smaller and more powerful, so it’s easier than ever to work at home. But from time to time there is still a need for “bigger iron” to help get the job done. There’s also a need for peripherals that you might want to use, such as the Tangent Element panels and FSI monitors for color grading, but making that investment for just one project isn’t feasible. Or maybe you’re planning a large project and would like to lay out your storyboards and plans where everyone can see them. Our conference room has 30 feet of corkboard and a 10-foot dry-erase wall that is killer for production planning.

How will it work?
We have a beautiful space here and oftentimes we have rooms available for use. In the “traditional post production world” you would charge $50-$175/hour just for the suite, but many indie filmmakers — and even many long-form projects like reality shows and episodics — just don’t have that kind of budget. So I looked at co-op office spaces for inspiration on how to set up a pricing structure that would allow the maximum benefit for indie creatives and yet allow us to pay the bills. We came up with a basic hourly/daily/weekly/monthly pricing structure that’s easy to follow, with no commitments.

I think the time has been right for the co-op creative space for at least two years now; it just took this much time for me to finally get my act together and get everything down on paper.

What’s also great about the co-op space is that we hope it’ll foster collaboration by getting folks out of their houses for the day and into a common space where you can bounce ideas off each other and create those “Hey, can you come look at this” moments. You see a lot of that online, but being able to actually talk to the person in the same room always leads to much better collaboration than a thread of responses to your online video.

One of the edit rooms

Can you talk more about the pricing and room availability?
Depending on the room, we have availability by the hour, day, week and month. Prices are very straightforward such as $100/day for a fully furnished edit suite. (See pricing here.) That includes the workstation, dual monitors, Flanders Scientific reference monitor and two KRK Rokit 5 audio monitors. Those rates are definitely below “market value” but we have the space, the gear and we’re happy to open our doors and let filmmakers and creatives come on in and have some fun in our sandbox.

The caveat to all the low pricing is that it is restricted to standard business hours only. Right now that’s 8am-6pm. This is in line with most of the co-ops I researched, and if folks wanted 24-hour or longer access to the space, that would be priced according to their needs — the rates would revert to more standard market rates, with overnight being more. We’ll see how this goes, and if it takes off, we could always run a second shift at night to help maintain a lower rate in those hours.

What about gear?
For editorial, graphics, animation, sound and design, we have the full Adobe Creative Cloud in every suite. Four of the suites run Mac and one room runs Windows. Every suite has a Flanders Scientific reference monitor connected via AJA or BMD hardware.

Color grading is offered via Blackmagic’s DaVinci Resolve and Adobe’s SpeedGrade on a Mac Pro with a Tangent Elements control surface and an FSI OLED Reference Monitor.

The sound mixing theater features a Pro Tools|HD 5.1 mixing system with Genelec audio monitoring. The main system is a Mac Pro. That theater has an eight-foot projection screen and can serve as a screening room for up to 12 people or a classroom for workshops with seating for up to 18 people. It’s a great workshop space.

None of our pricing includes high-speed storage as we assume people will bring their own. We do have 96TB of high-speed networked storage on site, which is available for $15/TB per day should it be needed.
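The storage rate is simple multiplication; a quick budgeting sketch using the figures quoted above (rates may of course change):

```python
def storage_cost(terabytes, days, rate_per_tb_day=15):
    """Cost of BCM's shared high-speed storage at the quoted $15/TB per day."""
    return terabytes * days * rate_per_tb_day

# Parking 4 TB of media on the shared storage for a five-day edit week:
print(storage_cost(4, 5))  # 300
```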

So you are mostly Adobe CC based?
Adobe is provided because that’s what we use here so it’s already on all of the systems. By not having to invest in additional software, we can keep the rates low. We do have Avid Media Composer and Final Cut Pro on site, but they are older versions. If we get enough requests for Avid and FCP, we can update our software packages at a later date.

———
Walter Biscardi is a staple on social media. Follow him at @walterbiscardi.

The audio team at Sony Pictures Post takes on Netflix’s ‘Bloodline’

By Jennifer Walden

For many in the country, this past winter was a rough one. In fact, at times it felt never-ending. While spring now limps toward us just a little too slowly, you might want to imagine yourself in a tropical climate. May we recommend settling in with a refreshing drink and binge watching Netflix’s new drama series Bloodline. Shot on-location in Islamorada, a six-island section of the Florida Keys, Bloodline unravels the story of the locally renowned Rayburn family, whose children engage in a bit of Floridian fratricide.

Re-recording mixer Joe Barnett at Sony Pictures Post notes, unsurprisingly, that bug sounds Continue reading

Cleaning, creating and mixing sounds for ‘The Americans’

Sync Sound digs into its third season of audio post for this FX series

By Jennifer Walden

The concept of FX’s The Americans, now in its third season, is incredibly compelling — two Cold War-era Soviet spies, who look and sound as American as the proverbial apple pie. They have two kids, a house in the D.C. suburbs and a very dangerous double life dedicated to gathering intel for the Motherland. The couple, Elizabeth (Keri Russell) and Phillip Jennings (Matthew Rhys), struggles to balance family values with espionage.

To learn the secrets of The Americans sound, I infiltrated the inner circle at New York-based audio post house Sync Sound, which has handled the audio post on all three seasons. Continue reading

Yulik Yagudin’s Top Ten: Why audio engineers should be respected

Yulik Yagudin has spent the last three and a half years working as senior audio engineer at one of the largest private audio post houses in Moscow, CineLab SoundMix. They offer all aspects of audio post production.

Yagudin has music in his blood. He was born in Moscow, Russia, to a family of musicians. After graduating from Moscow Conservatory College of Music as a percussion player, he stayed on for two more years… that’s when his love of audio engineering drove him into the recording studio and the television world.

What followed always involved many various aspects of sound work, including writing/arranging music for TV and commercials, editing and mixing sound for TV, Continue reading

The sound of fire, tires and more for Disney’s ‘Planes: Fire and Rescue’

By Jennifer Walden

Disney’s follow-up to Planes, Planes: Fire and Rescue, like many Disney offerings, has a message — this one involves overcoming a handicap and finding another path in life. But if you think it’s a film for very young children, you’d be mistaken. It’s a family action-adventure film, complete with intense visuals, situations and sound to match.

There was no holding back on the native Dolby Atmos mix. With full-range surround speakers in a 62.2 configuration, the mix team of David E. Fluhr (dialogue and music) and Dean Zupancic (sound effects) had all the sonic real estate they needed to immerse the audience in the action. According to Formosa Group supervising sound editor/sound designer Todd Toon, “Director Roberts Gannaway not only gave us permission, but demanded that we deliver a big Continue reading