Category Archives: Sound Design

Sundance: Audio post for Honey Boy and The Death of Dick Long

By Jennifer Walden

Brent Kiser, an Emmy Award-winning supervising sound editor/sound designer/re-recording mixer at LA’s Unbridled Sound, is no stranger to the Sundance Film Festival. His resume includes such Sundance premieres as Wild Wild Country, Swiss Army Man and An Evening with Beverly Luff Linn.

He’s the only sound supervisor to work on two films that earned Dolby fellowships: Swiss Army Man back in 2016 and this year’s Honey Boy, which premiered in the US Dramatic Competition. Honey Boy is a biopic about actor Shia LaBeouf’s damaging Hollywood upbringing.

Brent Kiser (in hat) and Will Files mixing Honey Boy.

Also showing this year, in the Next category, was The Death of Dick Long. Kiser and his sound team once again collaborated with director Daniel Scheinert. For this dark comedy, the filmmakers used sound to help build tension as a group of friends tries to hide the truth of how their buddy Dick Long died.

We reached out to Kiser to find out more.

Honey Boy was part of the Sundance Institute’s Feature Film Program, which is supported by several foundations including the Ray and Dagmar Dolby Family Fund. You mentioned that this film earned a grant from Dolby. How did that grant impact your approach to the soundtrack?
For Honey Boy, Dolby gave us the funds to finish in Atmos. It allowed us to bring MPSE award-winning re-recording mixer Will Files on to mix the effects while I mixed the dialogue and music. We mixed at Sony Pictures Post Production on the Kim Novak stage. We got time and money to be on a big stage for 11 days — a five-day pre-dub and six-day final mix.

That was huge because the film opens up with these massive-robot action/sci-fi sound sequences and it throws the audience off the idea of this being a character study. That’s the juxtaposition, especially in the first 15 to 20 minutes. It’s blurring the reality between the film world and real life for Shia because the film is about Shia’s upbringing. Shia LaBeouf wrote the film and plays his father. The story focuses on the relationship of young actor Otis Lort (Lucas Hedges) and his alcoholic father James.

The story goes through Shia’s time on Disney Channel’s Even Stevens series and then on Transformers, and looks at how this lifestyle had an effect on him. His father was an ex-junkie, sex offender and ex-rodeo clown who would just push his son. By age 12, Shia was drinking, smoking weed and smoking cigarettes — all supplied to him by his dad. Shia is isolated and doesn’t have too many friends. He’s not around his mother that much.

This year is the first year that Shia has been sober since age 12. So this film is one big therapeutic movie for him. The director Alma Har’el comes from an alcoholic family, so she’s able to understand where Shia is coming from. Working with Alma is great. She wants to be in every part of the process — pick each sound and go over every bit to make sure it’s exactly what she wants.

Honey Boy director Alma Har’el.

What were director Alma Har’el’s initial ideas for the role of sound in Honey Boy?
They were editing this film for six months or more, and I came on board around mid-edit. I saw three different edits of the film, and they were all very different.

Finally, they settled on a cut that felt really nice. We had spotting sessions before they locked and we were working on creating the environment of the motel where Otis and James were staying. We were also working on creating the sound of Otis being on-set. It had to feel like we were watching a film and when someone screams, “Cut!” it had to feel like we go back into reality. Being able to play with those juxtapositions in a sonic way really helped out. We would give it a cinematic sound and then pull it back into a cinéma vérité-type sound. That was the big sound motif in the movie.

We worked really closely with the composer Alex Somers. He developed this little crank sound that helped to signify Otis’ dreams and the turning of events. It makes it feel like Otis is a puppet with all his acting jobs.

There’s also a harness motif. In the very beginning you see adult Otis (Lucas Hedges) standing in front of a plane that has crashed and then you hear things coming up behind him. They are shooting missiles at him and they blow up and he gets yanked back from the explosions. You hear someone say, “Cut!” and he’s just dangling in a body harness about 20 feet up in the air. They reset, pull him down and walk him back. We go through a montage of his career, the drunkenness and how crazy he was, and then him going to therapy.

In the session, he’s told he has PTSD caused by his upbringing and he says, “No, I don’t.” It kicks to the title and then we see young Otis (Noah Jupe) sitting there waiting, and he gets hit by a pie. He then gets yanked back by that same harness, and he dangles for a little while before they bring him down. That is how the harness motif works.

There’s also a chicken motif. Growing up, Otis has a chicken named Henrietta La Fowl, and during the dream sequences the chicken leads Otis to his father. So we had to make a voice for the chicken. We had to give the chicken a dreamy feel. And we used the old-school Yellow Sky wind to give it a Western feel and add a dreaminess to it.

On the dub stage with director Alma Har’el and her team, plus Will Files (front left) and Andrew Twite (front right).

Andrew Twite was my sound designer. He was also with me on Swiss Army Man. He was able to make some rich and lush backgrounds for that. We did a lot of recording in our neighborhood of Highland Park, which is much like Echo Park where Shia grew up and where the film is based. So it’s Latin-heavy communities with taco trucks and that fun stuff. We gave it that gritty sound to show that, even though Otis is making $8,000 a week, they’re still living on the other side of the tracks.

When Otis is in therapy, it feels like Malibu. It’s nicer, quieter, and not as stressful versus the motel when Otis was younger, which is more pumped up.

My dialogue editor was Elliot Thompson, and he always does a great job for me. The production sound mixer Oscar Grau did a phenomenal job of capturing everything at all moments. There was no MOS (picture without sound). He recorded everything and he gave us a lot of great production effects. The production dialogue was tricky because in many of the scenes young Otis isn’t wearing a shirt and there are no lav mics on him. Oscar used plant mics and booms and captured it all.

What was the most challenging scene for sound design on Honey Boy?
The opening, the intro and the montage right up front were the most challenging. We recut the sound for Alma several different ways. She was great and always had moments of inspiration. We’d try different approaches and the sound would always get better, but we were on a time crunch and it was difficult to get all of those elements in place in the way she was looking for.

Honey Boy on the mix stage at Sony’s Kim Novak Theater.

In the opening, you hear the sound of this mega-massive robot (an homage to a certain film franchise that Shia has been part of in the past, wink, wink). You hear those sounds coming up over the production cards on a black screen. Then it cuts to adult Otis standing there as we hear this giant laser gun charging up. Otis goes, “No, no, no, no, no…” in that quintessential Shia LaBeouf way.

Then, there’s a montage over Missy Elliott’s “My Struggles,” and the footage goes through his career. It’s a music video montage with sound effects, and you see Otis on set and off set. He’s getting sick, and then he’s stuck in a harness, getting arrested in the movie and then getting arrested in real life. The whole thing shows how his life is a blur of film and reality.

What was the biggest challenge in regards to the mix?
The most challenging aspect of the mix, on Will [Files]’s side of the board, was getting those monsters in the pocket. Will had just come off of Venom and Halloween so he can mix these big, huge, polished sounds. He can make these big sound effects scenes sound awesome. But for this film, we had to find that balance between making it sound polished and “Hollywood” while also keeping it in the realm of indie film.

There was a lot of back and forth to dial in the effects, to make it sound polished but still with an indie storytelling feel. Reel one took us two days on stage to get through. We even spent some time on it on the last mix day as well. That was the biggest challenge to mix.

The rest of the film is more straightforward. The challenge on dialogue was to keep it sounding dynamic instead of smoothed out. A lot of Shia’s performance plays in the realm of vocal dynamics. We didn’t want to make the dialogue lifeless. We wanted to have the dynamics in there, to keep the performance alive.

We mixed in Atmos and panned sounds into the ceiling. I took a lot of the composer’s stems and remixed those in Atmos, spreading all the cues out in a pleasant way and using reverb to help glue it together in the environment.

 

The Death of Dick Long

Let’s look at another Sundance film you’ve worked on this year. The Death of Dick Long is part of the Next category. What were director Daniel Scheinert’s initial ideas for the role of sound on this film?
Daniel Scheinert always shows up with a lot of sound ideas, and most of those were already in place because of picture editor Paul Rogers from Parallax Post (which is right down the hall from our studio Unbridled Sound). Paul and all the editors at Parallax are sound designers in their own right. They’ll give me an AAF of their Adobe Premiere session and it’ll be 80 tracks deep. They’re constantly running down to our studio like, “Hey, I don’t have this sound. Can you design something for me?” So, we feed them a lot of sounds.

The Death of Dick Long

We played with the bug sounds the most. They shot in Alabama, where both Paul and Daniel are from, so there were a lot of cicadas and bugs. It was important to make the distinction of what the bugs sounded like in the daytime versus what they sounded like in the afternoon and at night. Paul did a lot of work to make sure that the balance was right, so we didn’t want to mess with that too much. We just wanted to support it. The backgrounds in this film are rich and full.

This film is crazy. It opens up with a Creed song and ends with a Nickelback song, as sort of a joke. They wanted to show a group of guys that never really made much of themselves. These guys are in a band called Pink Freud, and they have band practice.

The film starts with them doing dumb stuff, like setting off fireworks and catching each other on fire — just messing around. Then it cuts to Dick (Daniel Scheinert) in the back of a vehicle and he’s bleeding out. His friends just dump him at the hospital and leave. The whole mystery of how Dick dies unfolds throughout the course of the film. The two main guys are Earl (Andre Hyland) and Zeke (Michael Abbott, Jr.).

The Foley on this film — provided by Foley artist John Sievert of JRS Productions — plays a big role. Often, Foley is used to help us get in and out of the scene. For instance, the police are constantly showing up to ask more questions and you hear them sneaking in from another room to listen to what’s being said. There’s a conversation between Zeke and his wife Lydia (Virginia Newcomb) and he’s asking her to help him keep information from the police. They’re in another room but you hear their conversation as the police are questioning Dick Long’s wife, Jane (Jess Weixler).

We used sound effects to help increase the tension when needed. For example, there’s a scene where Zeke is doing the laundry and his wife calls saying she’s scared because there are murderers out there, and he has to come and pick her up. He knows it’s him but he’s trying to play it off. As he is talking to her, Earl is in the background telling Zeke what to say to his wife. As they’re having this conversation, the washing machine out in the garage keeps getting louder and it makes that scene feel more intense.

Director Daniel Scheinert (left) and Puddle relaxing during the mix.

“The Dans” — Scheinert and Daniel Kwan — are known for Swiss Army Man. That film used sound in a really funny way, but it was also relevant to the plot. Did Scheinert have the same open mind about sound on The Death of Dick Long? Also, were there any interesting recording sessions you’d like to talk about?
There were no farts this time, and it was a little more straightforward. Manchester Orchestra did the score on this one too, but it’s also more laid back.

For this film, we really wanted to depict a rural Alabama small-town feel. We did have some fun with a few PA announcements, but you don’t hear those clearly. They’re washed out. Earl lives in a trailer park, so there are trailer park fights happening in the background to make it feel more like Jerry Springer. We had a lot of fun doing that stuff. Sound effects editor Danielle Price cut that scene, and she did a really great job.

What was the most challenging aspect of the sound design on The Death of Dick Long?
I’d say the biggest things were the backgrounds, engulfing the audience in this area and making sure the bugs feel right. We wanted to make sure there was off-screen movement in the police station and other locations to give them all a sense of life.

The whole movie was about creating a sense of intensity. I remember showing it to my wife during one of our initial sound passes, and she pulled the blanket over her face while she was watching it. By the end, only her eyes were showing. These guys keep messing up and it’s stressful. You think they’re going to get caught. So the suspense that the director builds in — not being serious but still coming across in a serious manner — is amazing. We were helping them to build that tension through backgrounds, music and dropouts, and pushing certain everyday elements (like the washing machine) to create tension in scenes.

What scene in this film best represents the use of sound?
I’d say the laundry scene. Also, in the opening scene you hear the band playing in the garage and the perspective slowly gets closer and closer.

During the film’s climax, when you find out how Dick dies, we’re pulling down the backgrounds that we created. For instance, when you’re in the bedroom you hear their crappy fan. When you’re in the kitchen, you hear the crappy compressor on the refrigerator. It’s all about playing up these “bad” sounds to communicate the hopelessness of the situation they are living in.

I want to shout out all of my sound editors for their exceptional work on The Death of Dick Long. There were Jacob “Young Thor” Flack and Elliot Thompson, and Danielle Price, who did amazing backgrounds. Also, a shout out to Ian Chase for help on the mix. I want to make sure they share the credit.

I think there needs to be more recognition of the contribution of sound and the sound departments on a film. It’s a subject that needs to be discussed, particularly in these somber days following the death of Oscar-winning re-recording mixer Gregg Rudloff. He was the nicest guy ever. I remember being an intern on the sound stage and he always took the time to talk to us and give us advice. He was one of the good ones.

When post sound gets a credit after the on-set caterers, it doesn’t do us justice. On Swiss Army Man, initially I had my own title card because The Dans wanted to give me a title card that said, “Supervising Sound Editor Brent Kiser,” but the Directors Guild took it away. They said it wasn’t appropriate. Their reasoning is that if they give it to one person then they’ll have to give it to everybody. I get it — the visual effects department is new on the block. They wrote their contract knowing what was going on, so they get a title card. But try watching a film on mute and then talk to me about the importance of sound. That needs to start changing, for the sheer fact of burnout and legacy.

At the end of the day, you worked so hard to get these projects done. You’re taking care of someone else’s baby and helping it to grow up to be this great thing, but then we’re only seen as the hired help. Or, we never even get a mention. There is so much pressure and stress on the sound department, and I feel we deserve more recognition for what we give to a film.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Audio post pro Julienne Guffain joins Sonic Union

NYC-based audio post studio Sonic Union has added sound designer/mix engineer Julienne Guffain to its creative team. Working across Sonic Union’s Bryant Park and Union Square locations, Guffain brings over a decade of experience in audio post production to her new role. She has worked on television, film and branded projects for clients such as Google, Mountain Dew, American Express and Cadillac among others.

A Virginia native, Guffain came to Manhattan to attend New York University’s Tisch School of the Arts. She found herself drawn to sound in film, and it was at NYU where she cut her teeth as a Foley artist and mixer on student films and independent projects. She landed her first industry gig at Hobo Audio, working with clients such as The History Channel, The Discovery Channel and mixing the Emmy-winning television documentary series “Rising: Rebuilding Ground Zero.”

Making her way to Crew Cuts, she began lending her talents to a wide range of spot and brand projects, including the documentary feature “Public Figure,” which examines the psychological effects of constant social media use. It is slated for a festival run later this year.

 


Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and artists. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company’s mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to their crew, Shindig has brought on composer Austin Shupe, a former colleague from Hum, to work on-site. Along with Shindig’s in-house composers, the team uses a large pool of freelance talent, matching the genre and/or style that is best suited to a project.

Shindig’s licensing arm has launched a searchable boutique online music library. Upgrading their existing catalogue of compositions, the studio has now tagged all the tracks in a simple, searchable manner on their website, providing direct access for producers, creatives and editors.

Shindig’s executive team, which includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O’Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work spans from recording five mariachi pieces for El Pollo Loco with Vitro, to working with multiple composers to craft five decades of music for Honda’s Evolution commercial via Muse, to orchestrating a virtuoso piano/violin duo cover of Twisted Sister’s “I Wanna Rock” for a Mitsubishi spot out of BSSP.


Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking about what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my Soundminer server. I added some Foley sound effects of the man handling the gas pump too.
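
FutzBox is a dedicated plug-in, but the general “futz” idea it implements — hmm, better said plainly: band-limiting a voice to a small-speaker range and adding a little grit — can be sketched in a few lines. The Python below is a hypothetical illustration only, not Recker’s settings or McDSP’s processing, and the file names are placeholders.

```python
# A rough "futz" chain: band-limit a voice to a small-speaker range, then add
# mild saturation. Generic sketch only; file names and settings are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def futz(voice, sr, low_hz=300.0, high_hz=3400.0, drive=4.0):
    # Telephone/small-speaker style bandpass
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    band_limited = sosfilt(sos, voice)
    # Gentle soft clipping to mimic an overdriven little speaker
    return (np.tanh(drive * band_limited) / np.tanh(drive)).astype(np.float32)

voice, sr = sf.read("pump_vo.wav")  # hypothetical mono VO take
sf.write("pump_vo_futzed.wav", futz(voice, sr), sr)
```

The band edges and the drive amount are exactly the sort of “filtering profiles” that get auditioned before one is picked for a spot.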

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and not matching from shot to shot. I used iZotope 6 to clean up the dialogue and also used the ambience match to create a seamless background of the ambience. iZotope 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.


Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the backstory of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which aired for the first time on September 30 on Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hoverboard. It shoots lasers and rockets too. The hoverboard makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice that’s sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
Among the favorites on the series so far are Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime-style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime-style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake, voiced by Chloe Bennet, the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Dee Bradley Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ’80s Transformers franchise and Nibbler on Futurama).”

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.


Behind the Title: Heard City mixer Elizabeth McClanahan

A musician from an early age, this mixer/sound designer knew her path needed to involve music and sound.

Name: Elizabeth McClanahan

Company: New York City’s Heard City (@heardcity)

Can you describe your company?
We are an audio post production company.

What’s your job title?
Mixer and sound designer.

What does that entail?
I mix and master audio for advertising, television and film. Working with creatives, I combine production audio, sound effects, sound design, score or music tracks and voiceover into a mix that sounds smooth and helps highlight the narrative of each particular project.

What would surprise people the most about what falls under that title?
I think most people are surprised by the detailed nature of sound design and by the fact that we often supplement straightforward diegetic sounds with additional layers of more conceptual design elements.

What’s your favorite part of the job?
I enjoy the collaborative work environment, which enables me to take on different creative challenges.

What’s your least favorite?
The ever-changing landscape of delivery requirements.

What is your favorite time of the day?
Lunch!

If you didn’t have this job, what would you be doing instead?
I think I would be interested in pursuing a career as an archivist or law librarian.

Why did you choose this profession?
Each project allows me to combine multiple tools and skill sets: music mixing, dialogue cleanup, sound design, etc. I also enjoy the problem solving inherent in audio post.

How early on did you know this would be your path?
I began playing violin at age four, picking up other instruments along the way. As a teenager, I often recorded friends’ punk bands, and I also started working in live sound. Later, I began my professional career as a recording engineer and focused primarily on jazz. It wasn’t until VO and ADR sessions began coming into the music studio in which I was working that I became aware of the potential paths in audio post. I immediately enjoyed the range and challenges of projects that post had to offer.

Can you name some recent projects you have worked on?
Lately, I’ve worked on projects for Google, Budweiser, Got Milk?, Clash of Clans, and NASDAQ.

I recently completed work on a feature film called Nancy. This was my first feature in the role of supervising sound editor and re-recording mixer, and I appreciated the new experience on both a technical and creative level. Nancy was unique in that all department heads (in both production and post) were women. It was an incredible opportunity to work with so many talented people.

Name three pieces of technology you can’t live without.
The Teenage Engineering OP-1, my phone and the UAD plugins that allow me to play bass at home without bothering my neighbors.

What social media channels do you follow?
Although I am not a heavy social media user, I follow a few pragmatic-yet-fun YouTube channels: Scott’s Bass Lessons, Hicut Cake and the gear review channel Knobs. I love that Knobs demonstrates equipment in detail without any talking.

What do you do to de-stress from it all?
In addition to practicing yoga, I love to read and visit museums, as well as play bass and work with modular synths.


Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios to be in and out of sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that, and how it allows me to interact with folks at the studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how there is always so much to learn about our industry through how folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS, but I’d like to have the ability to say if the world lost power tomorrow, I’d be okay in the woods. I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.


Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode numbers. The total number of episodes in a show’s season can be 13 or 10 or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned a 2013 Oscar nom for sound, an MPSE nom and a BAFTA film nom for sound) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, as well as a nomination for editing the dialogue and ADR. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds: (L-R) Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialogue and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through a cloud-based film management/collaboration tool called Pix. Pix let the showrunners, VFX supervisor, composer, sound team and picture team share content and share notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialogue and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music by playing with repeated rhythms. Breaking the anticipated rhythm helped catch the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.


Pacific Rim: Uprising’s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on this grand of a scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu on June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo Del Toro (a fan of the Portal game series) had actress Ellen McLain as the voice of Gipsy Avenger since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with a sub-harmonic synthesis plug-in called Waves MaxxBass, to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”
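
The broader idea the passage describes, reinforcing a sound with synthesized content below its fundamental, can be sketched simply: derive an octave-down copy of a source, keep only its low end, and tuck it under the original. The Python below is a generic approximation of that layering, not the E² chain or the Waves algorithm, and the file names are placeholders.

```python
# Sub-harmonic layering sketch: octave-down copy, lowpassed, blended under the source.
# Generic illustration only; not the Waves MaxxBass algorithm. File names are placeholders.
import librosa
import soundfile as sf
from scipy.signal import butter, sosfilt

y, sr = librosa.load("synth_hit.wav", sr=None, mono=True)    # hypothetical source sound

sub = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)     # one octave down
sos = butter(4, 120.0, btype="lowpass", fs=sr, output="sos") # keep only the rumble
sub_low = sosfilt(sos, sub)

mix = y + 0.6 * sub_low                 # blend the synthetic low end under the original
mix /= max(abs(mix).max(), 1e-9)        # peak-normalize to avoid clipping
sf.write("synth_hit_sub.wav", mix, sr)
```

Dedicated processors such as MaxxBass approach this psychoacoustically rather than by literal layering, so the sketch is only an approximation of the “deep, fat” reinforcement being described.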

That selectivity was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.
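
PhaseMistress is Soundtoys’ own phaser, so the following is only a generic sketch of phaser-style processing on a vocal: a cascade of first-order all-pass filters whose corner frequency is swept by an LFO, blended back against the dry signal. The file names are placeholders, and the settings are not meant to recreate the Obsidian Fury servos.

```python
# Bare-bones phaser: cascaded first-order all-pass filters swept by an LFO.
# Generic illustration of the technique, not the PhaseMistress plug-in.
import numpy as np
import soundfile as sf

def phaser(x, sr, stages=4, rate_hz=0.3, f_lo=200.0, f_hi=2000.0, mix=0.7):
    n = len(x)
    # LFO sweeping the all-pass corner frequency between f_lo and f_hi
    lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * np.arange(n) / sr))
    fc = f_lo + (f_hi - f_lo) * lfo
    # Per-sample all-pass coefficient: a = (tan(pi*fc/fs) - 1) / (tan(pi*fc/fs) + 1)
    t = np.tan(np.pi * fc / sr)
    a = (t - 1.0) / (t + 1.0)

    wet = np.copy(x)
    for _ in range(stages):
        y = np.zeros(n)
        x_prev = y_prev = 0.0
        for i in range(n):  # y[n] = a*x[n] + x[n-1] - a*y[n-1]
            y[i] = a[i] * wet[i] + x_prev - a[i] * y_prev
            x_prev, y_prev = wet[i], y[i]
        wet = y
    return (1 - mix) * x + mix * wet

vocal, sr = sf.read("servo_vocal.wav")  # hypothetical mono vocal performance
sf.write("servo_vocal_phased.wav", phaser(vocal, sr), sr)
```

Sweeping the corner frequency slowly, with a rate well under 1Hz, is what gives the result a mechanical, servo-like sense of movement; faster rates push the effect toward a warble.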

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja, and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic-machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw bird shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”

Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their whole library when looking for Pacific Rim: Uprising sounds. It has its own library of specially designed, signature sounds that are all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.
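
As a toy illustration of that per-project, metadata-tagged workflow (not Soundminer's actual database or API), a show-specific library lookup might look something like this, with made-up file names and tags:

```python
# Toy per-show sound library with tag metadata (illustration only).
from dataclasses import dataclass

@dataclass
class SoundAsset:
    path: str
    show: str
    tags: list

library = [
    SoundAsset("fx/gipsy_chain_sword_01.wav", "Pacific Rim: Uprising",
               ["jaeger", "gipsy avenger", "chain sword"]),
    SoundAsset("fx/obsidian_plasma_sword_03.wav", "Pacific Rim: Uprising",
               ["jaeger", "obsidian fury", "plasma sword"]),
]

def find(show, *wanted):
    """Return assets for one show whose tags contain every wanted term."""
    return [a for a in library
            if a.show == show and all(w in a.tags for w in wanted)]

for asset in find("Pacific Rim: Uprising", "jaeger", "plasma sword"):
    print(asset.path)
```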

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says, “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.
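
To picture the idea of separating sounds by placement rather than by level, here is a toy sketch that writes a static 7.1.2-style bed with dialogue in the center channel and an A.I. voice in the two overheads. Real Atmos mixing routes objects through the Dolby renderer with panning metadata; the channel order, gains and file names here are assumptions made purely for illustration.

```python
# Toy 7.1.2-style bed: dialogue in the center, A.I. voice in the overheads.
# (Channel order is an assumption; real Atmos uses object metadata.)
import numpy as np
import soundfile as sf

def load_mono(path):
    """Read a file and fold it to mono if needed."""
    data, rate = sf.read(path)
    return (data.mean(axis=1) if data.ndim > 1 else data), rate

channels = ["L", "R", "C", "LFE", "Ls", "Rs", "Lrs", "Rrs", "Ltm", "Rtm"]

dialogue, sr = load_mono("pilot_dialogue.wav")    # hypothetical stems
ai_voice, _ = load_mono("jaeger_ai_voice.wav")

n = max(len(dialogue), len(ai_voice))
bed = np.zeros((n, len(channels)))

bed[:len(dialogue), channels.index("C")] = dialogue        # dialogue stays center
for top in ("Ltm", "Rtm"):                                 # A.I. voice up top
    bed[:len(ai_voice), channels.index(top)] = 0.7 * ai_voice

sf.write("bed_7_1_2.wav", bed, sr)
```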

Taylor handled the music and dialogue in the mix. During the reel-six battle, his goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, such as when the three Kaiju come together to form the Mega-Kaiju. “As the action escalates and goes off-camera, it was more of a shadow, and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second-to-none but the people there are even better than that. “We have been very fortunate to work with great people, from Steven DeKnight our director to Dylan Highsmith our picture editor to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on an exploration to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

 Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London, says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, mostly everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from hydrophone recordings sourced from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.
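
For anyone curious how far a simple varispeed-style treatment can go, the "slow it down, pitch it down" move can be sketched in a few lines of Python. This is not Walpole's exact chain, and the file names are hypothetical.

```python
# Varispeed-style slow-down: stretch the waveform 4x and keep the original
# sample rate, so playback is four times slower and two octaves lower.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

ice, sr = sf.read("frozen_lake_hydrophone.wav")  # hypothetical recording
if ice.ndim > 1:
    ice = ice.mean(axis=1)

slowed = resample_poly(ice, up=4, down=1)
sf.write("ice_sheets_slowed.wav", slowed / np.max(np.abs(slowed)), sr)
```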

Effects editor Saoirse Christopherson capturing sounds on board a kayak in the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show’s sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R-26 recorder with several contact mics strapped to the kayak’s hull, they spent the day grinding through, over and against the ice. “The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak,” says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
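
A simple way to picture that low-end emphasis is a low-passed copy blended back over a quieter full-range version, as sketched below. Walpole's actual chain used his own EQ, filter and TC Electronic reverb settings, so this shows only the general shape of the move; file names are hypothetical.

```python
# Emphasize the low-mid and low end: keep a low-passed band at full level
# and mix the full-range signal back in quietly underneath it.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

grind, sr = sf.read("kayak_ice_grind_slowed.wav")  # hypothetical file
if grind.ndim > 1:
    grind = grind.mean(axis=1)

sos = butter(4, 400, btype="low", fs=sr, output="sos")  # keep below ~400 Hz
lows = sosfiltfilt(sos, grind)

out = lows + 0.25 * grind  # low band forward, full-range copy tucked behind
sf.write("ice_vs_hull_lowend.wav", out / np.max(np.abs(out)), sr)
```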

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole uses sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn’t just battling the elements, scurvy, starvation and mutiny. They’re also being killed off by a polar bear-like creature called the Tuunbaq. It’s part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season so Walpole created sonic hints as to the creature’s make-up.

Walpole worked with showrunner David Kajganich to find the creature’s voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. “Some of the recordings we used raw as well,” says Walpole. “This guy could make these crazy sounds. His voice could go so deep.”
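
As a crude stand-in for that kind of treatment, the sketch below simply layers a raw vocal with pitch-dropped copies of itself. Dehumaniser itself does far more than this, and the file name is hypothetical.

```python
# Layer a vocal performance with pitch-dropped copies of itself
# (a crude stand-in for creature-voice processing, not Dehumaniser).
import librosa
import numpy as np
import soundfile as sf

voice, sr = librosa.load("tuunbaq_vocal_take.wav", sr=None, mono=True)

down_5 = librosa.effects.pitch_shift(voice, sr=sr, n_steps=-5)    # down a fourth
down_12 = librosa.effects.pitch_shift(voice, sr=sr, n_steps=-12)  # down an octave

creature = voice + 0.8 * down_5 + 0.6 * down_12
sf.write("tuunbaq_layered.wav", creature / np.max(np.abs(creature)), sr)
```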

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series is shot on a soundstage, there’s no usable bed of production sound to act as a jumping-off point for the post sound team. But instead of that being a challenge, Walpole finds it liberating. “In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it’s the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It’s not often in your career that you get such a blank canvas of creation.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.