

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

“The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in the High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokémon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games – Unreal Engine 4

Pixelworks – TrueCut Motion

Portrait Displays and LG Electronics – CalMan LUT-based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass, Grass Valley for Creative Grading, and Netflix for Photon.

Butter Music and Sound adds new ECDs in NYC and LA

Music shop Butter Music and Sound has expanded its in-house creative offerings with the addition of two new executive creative directors (ECDs): Tim Kvasnosky takes the helm in Los Angeles and Aaron Kotler in New York.

The newly appointed ECDs will maintain creative oversight on all projects going through the Los Angeles and New York offices, managing workflow across staff and freelance talent, composing on a wide range of projects and supporting and mentoring in-house talent and staff.

Kvasnosky and Kotler both have extensive experience as composers and musicians, with backgrounds crafting original music for commercials, film and television. They also maintain active careers in the entertainment and performance spaces. Kvasnosky recently scored the feature film JT LeRoy, starring Kristen Stewart and Laura Dern. Kotler performs and records regularly.

Kvasnosky is a composer and music producer with extensive experience across film, TV, advertising and recording. A Seattle native who studied at NYU, he worked as a jazz pianist and studio musician before composing for television and film. His tracks have been licensed in many TV shows and films. He has scored commercial campaigns for Nike, Google, McDonald’s, Amazon, Target and VW. Along with Detroit-based music producer Waajeed and singer Dede Reynolds, Kvasnosky formed the electronic group Tiny Hearts.

Native New Yorker Kotler holds a Bachelor of Music from Northwestern University School of Music and a Master of Music from Manhattan School of Music, both in jazz piano performance. He began his career as a performer and studio musician, playing in a variety of bands across genres including neo-soul, avant-garde jazz, funk and rock. He also music directed Jihad! The Musical to a month of sold-out performances at the Edinburgh Festival Fringe. Since then, he has composed commercials, themes and sonic branding campaigns for AT&T, Coca-Cola, Nike, Verizon, PlayStation, Samsung and Honda. He has also arranged music for American Idol and The Emmys, scored films screened at a variety of film festivals, and co-produced Nadje Noordhuis’ debut record. In 2013, he teamed up with Michael MacAllister to co-design and build Creekside Sound, a recording and production studio in Brooklyn.

Main Image: (L-R) Tim Kvasnosky and Aaron Kotler


Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in minor injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — ages 10 and five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them, so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
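Capturing impulse responses and then convolving dry audio with them is the standard recipe behind a custom convolution reverb like the one Marquis describes. A minimal sketch of the principle in Python — the synthetic impulse response, lengths and wet/dry mix here are illustrative stand-ins, not the team’s actual tooling:

```python
import numpy as np

def convolve_reverb(dry, ir, wet_mix=0.4):
    """Convolve a dry signal with a measured impulse response (IR),
    then blend the reverberant result back with the dry signal."""
    wet = np.convolve(dry, ir)[:len(dry)]        # full convolution, trimmed to dry length
    peak = max(float(np.max(np.abs(wet))), 1e-12)
    wet = wet / peak * float(np.max(np.abs(dry)))  # match wet level to the dry peak
    return (1 - wet_mix) * dry + wet_mix * wet

# Toy example: a single click through a decaying noise "tube" IR.
sr = 8000
dry = np.zeros(sr)
dry[0] = 1.0                                     # unit impulse standing in for a click
t = np.arange(sr // 2) / sr
rng = np.random.default_rng(0)
ir = np.exp(-6 * t) * rng.standard_normal(len(t)) * 0.1  # fake decaying reflections
ir[0] = 1.0                                      # direct sound arrives first
out = convolve_reverb(dry, ir)
```

In practice the IR would come from recording a sweep or clap inside the actual space (here, the gutter tube) rather than from synthetic noise.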

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment, there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich that it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. They were all supposed to have the same voice — the pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening, but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary and what turns out to be super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
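The trick Marquis describes — many slightly detuned, time-staggered copies of one recording — can be approximated by hand. A minimal sketch, assuming a mono NumPy array as the source (Sound Particles itself does far more, including spatial placement of each copy):

```python
import numpy as np

def make_crowd(voice, sr, copies=20, pitch_spread=0.10, seed=1):
    """Layer pitch-varied, time-staggered copies of one recording to
    fake a crowd from a single source file."""
    rng = np.random.default_rng(seed)
    out_len = int(len(voice) * 1.5) + sr          # room for slowed copies and offsets
    crowd = np.zeros(out_len)
    for _ in range(copies):
        factor = 1.0 + rng.uniform(-pitch_spread, pitch_spread)  # e.g. +/-10% pitch
        # Naive resampling: reading the file faster/slower shifts
        # both pitch and duration, like varispeed on tape.
        idx = np.arange(0, len(voice) - 1, factor)
        shifted = np.interp(idx, np.arange(len(voice)), voice)
        start = int(rng.integers(0, sr))          # stagger entrances up to 1 second
        crowd[start:start + len(shifted)] += shifted
    return crowd / copies                          # rough level compensation

sr = 8000
t = np.arange(sr) / sr
voice = 0.5 * np.sin(2 * np.pi * 440 * t)          # stand-in for the doll phrase
crowd = make_crowd(voice, sr)
```

Varying `copies`, `pitch_spread` and the offset range corresponds to the “recipe” changes described above: smaller or larger groups, tighter or looser tuning.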

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls, one of the big scenes that we had waited a long time for. We weren’t really sure how it was going to look — whether the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create it. It was a piece of paper bag that was super-crumpled, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Hulu’s PEN15: Helping middle school sound funny

By Jennifer Walden

Being 13 years old once was hard enough, but the creators of the Hulu series PEN15 have relived that uncomfortable age — braces and all — a second time for the sake of comedy.

James Parnell

Maya Erskine and Anna Konkle might be in their 30s, but they convincingly play two 13-year-old BFFs journeying through the perils of 7th grade. And although they’re acting alongside actual teenagers, it’s not Strangers With Candy grown-up-interfacing-with-kids kind of weird — not even during the “first kiss” scene. The awkwardness comes from just being 13 and having those first-time experiences of drinking, boyfriends, awkward school dances and even masturbation (the topic of focus in Episode 3). Erskine, Konkle and co-showrunner Sam Zvibleman hilariously capture all of that cringe-worthy coming-of-age content in their writing on PEN15.

The show is set in the early 2000s, a time when dial-up Internet and the Sony Discman were prevailing technology. The location is a nondescript American suburb that is relatable in many ways to many people, and that is one way the show transports the audience back to their early teenage years.

At Monkeyland Audio in Glendale, California, supervising sound editor/re-recording mixer James Parnell and his team worked hard to capture that almost indescribable nostalgic essence that the showrunners were seeking. Monkeyland was responsible for all post sound editorial, including Foley, ADR, final 5.1 surround mixing and stereo fold-downs for each episode. Let’s find out more from Parnell.

I happened to watch Episode 3, “Ojichan,” with my mom, and it was completely awkward. It epitomized the growing pains of the teenage years, which is what this series captures so well.
Well, that was an awkward one to mix as well. Maya (Erskine) and Anna (Konkle) were in the room with me while I was mixing that scene! Obviously, the show is an adult comedy that targets adults. We all ended up joking about it during the mix — especially about the added Foley sound that was recorded.

The beauty of this show is that it has the power to take something that might otherwise be thought of as, perhaps, inappropriate for some, and humanize it. All of us went through that period in our lives and I would agree that the show captures that awkwardness in a perfect and humorous way.

The writers/showrunners also star. I’m sure they were equally involved with post as well as other aspects of the show. How were they planning to use sound to help tell their story?
In terms of the post schedule, I was brought on very early. We were doing spotting sessions to pre-locked picture for Episodes 1 and 3. From the get-go, they were very specific about how they wanted the show to sound. I got the vibe that they were going for that Degrassi/Afterschool Special feeling but kept in the year 2000 — not the original Degrassi of the early ’90s.

For example, they had a very specific goal for what they wanted the school to sound like. The first episode takes place on the first day of 7th grade and they asked if we could pitch down the school bell so it sounds clunky and have the hallways sound sparse. When class lets out, the hallway should sound almost like a relief.

Their direction was more complex than “see a school hallway, hear a school hallway.” They were really specific about what the school should sound like and specific about what the girls’ neighborhoods should sound like — Anna’s family in the show is a bit better off than Maya’s family so the neighborhood ambiences reflect that.

What were some specific sounds you used to capture the feel of middle school?
The show is set in 2000, and they had some great visual cues as throwbacks. In Episode 4 “Solo,” Maya is getting ready for the school band recital and she and her dad (a musician who’s on tour) are sending faxes back and forth about it. So we have the sound of the fax machine.

We tried to support the amazing recordings captured by the production sound team on-set by adding in sounds that lent a non-specific feeling to the school. This doesn’t feel like a California middle school; it could be anywhere in America. The same goes for the ambiences. We weren’t using California-specific birds. We wanted it to sound like Any Town, USA so the audience could connect with the location and the story. Our backgrounds editor G.W. Pope did a great job of crafting those.

For Episode 7, “AIM,” the whole thing revolves around Maya and Anna’s AOL instant messenger experience. The creatives on the show were dreading that episode because all they were working with was temp sound. They had sourced recordings of the AOL sound pack to drop into the video edit. The concern was how some of the Hulu execs would take it because the episode mostly takes place in front of a computer, while they’re on AOL chatting with boys and with each other. Adding that final layer of sound and then processing on the mix stage helped what might otherwise feel like a slow edit and a lagging episode.

The dial-up sounds, AOL sign-on sounds and instant messenger sounds we pulled from a library. This series had a limited budget, so we didn’t do any field recordings. I’ve done custom recordings for higher-budget shows, but on this one we were supplementing the production sound. Our sound designer on PEN15 was Xiang Li, and she did a great job of building these scenes. We had discussions with the showrunners about how exactly the fax and dial-up should sound. The sound design is a mixture of Xiang Li’s sound effects editorial and composer Leo Birenberg’s score. The song is a needle drop called “Computer Dunk.” Pretty cool, eh?

For Episode 4, “Solo,” was the middle school band captured on-set? Or was that recorded in the studio?
There was production sound recorded but, ultimately, the music was recorded by composer Leo Birenberg. In the production recording, the middle school kids were actually playing their parts, but it was rougher than you’d expect. The song wasn’t rehearsed, so it was like they were playing random notes. That sounded a bit too bad. We had to hit the right level of “bad” to sell the scene, so Leo played the individual instruments to make it sound like a class orchestra.

In terms of sound design, that was one of the more challenging episodes. I got a day to mix the show before the execs came in for playback. When I mixed it initially, I mixed in all of Leo’s stems — the brass, percussion, woodwinds, etc.

Anna pointed out that the band needed to sound worse than how Leo played it, more detuned and discordant. We ended up stripping out instruments and pitching down parts, like the flute part, so that it was in the wrong key. It made the whole scene feel much more like an awkward band recital.

During the performance, Maya improvises a timpani solo. In real life, Maya’s father is a professional percussionist here in LA, and he hooked us up with a timpani player who re-recorded that part, matching note-for-note what she played on-screen. It sounded really good, but even though we went to the extreme of hiring a professional percussionist to re-perform the part, we ultimately stuck with the production sound because it was Maya’s unique performance that made that scene work.

What were some of the unique challenges you had in terms of sound on PEN15?
On Episode 3, “Ojichan,” Maya is going through this process of “self-discovery” and pulling away from her friendship with Anna. There’s a scene where they’re watching a video in class and Anna asks Maya why she missed the carpool that morning. That scene was like mixing a movie inside a show. I had to mix the movie, then futz that, and then mix that into the scene. On the close-ups of the 4:3 old-school television, the movie would be less futzed, more like you’re in the movie, and then we’d cut back to the girls and I’d have to futz it. Leo composed 20 different stems of music for that wildlife video. Mixing that scene was challenging.

Then there was the Wild Things film in Episode 8, “Wild Things.” A group of kids go over to Anna’s boyfriend’s house to watch Wild Things on VHS. That movie was risqué, so if you had an older brother or older cousin, then you might have watched it in middle school. That was a challenging scene because everyone had a different idea of how the den should sound, how futzed the movie dialogue should be, how much of the actual film sound we could use, etc. There was a specific feel to the “movie night” that the producers were looking for. The key was mixing the movie into the background and bringing the awkward flirting/conversation between the kids forward.

Did you have a favorite scene for sound?
The season finale is one of the bigger episodes. There’s a middle school dance and so there’s a huge amount of needle-drop songs. Mixing the music was a lot of fun because it was a throwback to my youth.

Also, the “AIM” episode ended up being fun to work on, even though everyone was initially worried about it. I think the sound really brought that episode to life; more than any other element, sound is what carried it.

The first episode was fun too. It was the first day of school and we see the girls getting ready at their own houses, getting into the carpool and then taking their first step, literally, together toward the school. There we dropped out all the sound and just played the Lit song “My Own Worst Enemy,” which gets cut off abruptly when someone on rollerblades hops in front of the girls. Then they talk about one of their classmates who grew boobs over the summer, and we have a big sound design moment when that girl turns around and then there’s another needle-drop track “Get the Job Done.” It’s all specifically choreographed with sound.

The series music supervisor Tiffany Anders did an amazing job of picking out the big needle-drops. We have a Nelly song for the middle school dance, we have songs from The Cranberries, and Lit and a whole bunch more that fit the era and age group. Tiffany did fantastic work and was great to work with.

What were some helpful sound tools that you used on PEN15?
Our dialogue editor’s a huge fan of iZotope’s RX 7, as am I. Here at Monkeyland, we’re on the beta-testing team for iZotope. The products they make are amazing; it’s kind of like voodoo. You can take a noisy recording and, with a click of a button, pretty much erase the issues and save the dialogue. Within that tool palette, there are a lot of ways to fix a whole host of problems.
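iZotope’s actual algorithms are proprietary, but the family of tools RX belongs to descends from a classic technique, spectral subtraction: estimate an average noise spectrum from a noise-only stretch of the recording, then subtract it from every frame of the noisy signal. A minimal numpy sketch of that idea (the frame size, hop and spectral floor are illustrative values, not anything iZotope uses):

```python
import numpy as np

def spectral_subtract(signal, noise_clip, frame=512, hop=256, floor=0.05):
    """Crude spectral subtraction: subtract an average noise magnitude
    spectrum from each frame, keeping a small spectral floor.
    noise_clip must be longer than one frame."""
    win = np.hanning(frame + 1)[:-1]   # periodic Hann: overlapping frames sum to 1

    # Average noise magnitude spectrum from the noise-only clip
    n_frames = [noise_clip[i:i + frame] * win
                for i in range(0, len(noise_clip) - frame, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in n_frames], axis=0)

    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[i:i + frame] * win)
        mag, phase = np.abs(spec), np.angle(spec)
        clean = np.maximum(mag - noise_mag, floor * mag)   # subtract, keep floor
        out[i:i + frame] += np.fft.irfft(clean * np.exp(1j * phase))
    return out
```

Real denoisers add psychoacoustic smoothing to avoid the “musical noise” artifacts this naive version produces, but the subtract-a-noise-profile core is the same.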

I’m a huge fan of Audio Ease’s Altiverb, which came in handy on the season finale. In order to create the feeling of being in a middle school gymnasium, I ran the needle-drop songs through Altiverb. There are some amazing reverb settings that let you adjust the levels going specifically to the surround speakers. You can literally EQ the reverb and, for instance, take out 200Hz if it’s making the music sound boomier than desired.

The lobby at Monkeyland is a large cinder-block room with super-high ceilings. It has acoustics similar to a middle school gymnasium. So, we captured a few impulse responses (IR), and I used those in Altiverb on a few lines of dialogue during the school dance in the season finale. I used that on a few of the songs as well. Like, when Anna’s boyfriend walks into the gym, there was supposed to be a Limp Bizkit needle-drop but that ended up getting scrapped at the last minute. So, instead there’s a heavy-metal song and the IR of our lobby really lent itself to that song.
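Under the hood, what a convolution reverb like Altiverb does with a captured IR is exactly that: convolve the dry signal with the room’s impulse response, so the signal sounds as if the room played it back. A toy numpy sketch, using synthetic made-up stand-ins for both the IR and the dry signal (the decay rates and wet/dry mix are arbitrary illustration values):

```python
import numpy as np

def convolve_ir(dry, ir, wet_mix=0.4):
    """Apply a room impulse response to a dry signal via convolution,
    then blend wet and dry (wet_mix of 0 = dry only, 1 = wet only)."""
    wet = np.convolve(dry, ir)[:len(dry)]    # trim the reverb tail to the dry length
    wet /= max(np.max(np.abs(wet)), 1e-12)   # normalize to avoid clipping
    return (1 - wet_mix) * dry + wet_mix * wet

sr = 16000
rng = np.random.default_rng(1)

# Synthetic stand-in for a captured IR: exponentially decaying noise,
# a rough approximation of a big cinder-block room's response
ir = rng.standard_normal(sr // 2) * np.exp(-6 * np.arange(sr // 2) / sr)

t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)   # plucked-string-ish tone
wet = convolve_ir(dry, ir)
```

In practice you would load a measured IR from a WAV file and use FFT-based (partitioned) convolution for speed, but the math is the same one-line operation.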

The show was a simple single-card Pro Tools HD mix — 256 tracks max. I’m a huge fan of Avid and the new Pro Tools 2018. My dialogue chain features Avid’s Channel Strip; the McDSP SA-2; a Waves De-Esser (typically bypassed unless needed); the McDSP 6030 Leveling Amplifier, which does a great job of catching extremely loud dialogue and keeping it from distorting; and Waves WNS.
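The job described for the leveling amplifier, catching extremely loud dialogue before it distorts, boils down to tracking a peak envelope and turning the gain down just enough. A bare-bones feed-forward limiter sketch (the threshold and release constants are illustrative, not McDSP’s design):

```python
import numpy as np

def simple_limiter(x, threshold=0.5, release=0.999):
    """Feed-forward peak limiter: track a decaying peak envelope and
    apply just enough gain reduction to keep output under the threshold."""
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        env = max(abs(s), env * release)   # instant attack, slow release
        gain = min(1.0, threshold / env) if env > 0 else 1.0
        out[n] = s * gain
    return out
```

Quiet passages pass through untouched, while anything over the threshold is pulled down smoothly rather than hard-clipped, which is why dialogue keeps its dynamics without distorting.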

On staff, we have a fabulous ADR mixer named Jacob Ortiz. The showrunners were really hesitant to record ADR, and whenever we could salvage the production dialogue we did. But when we needed ADR, Jacob did a great job of cueing that, and he uses the Sound In Sync toolkit, including EdiCue, EdiLoad and EdiMarker.

Any final thoughts you’d like to share on PEN15?
Yes! Watch the show. I think it’s awesome, but again, I’m biased. It’s unique and really funny. The showrunners Maya, Anna and Sam Zvibleman — who also directed four episodes — are three incredibly talented people. I was honored to be able to work with them and hope to be a part of anything they work on next.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney


Sundance: Audio post for Honey Boy and The Death of Dick Long

By Jennifer Walden

Brent Kiser, an Emmy award-winning supervising sound editor/sound designer/re-recording mixer at LA’s Unbridled Sound, is no stranger to the Sundance Film Festival. His resume includes such Sundance premieres as Wild Wild Country, Swiss Army Man and An Evening with Beverly Luff Linn.

He’s the only sound supervisor to work on two films that earned Dolby fellowships: Swiss Army Man back in 2016 and this year’s Honey Boy, which premiered in the US Dramatic Competition. Honey Boy is a biopic of actor Shia LaBeouf’s damaging Hollywood upbringing.

Brent Kiser (in hat) and Will Files mixing Honey Boy.

Also showing this year, in the Next category, was The Death of Dick Long. Kiser and his sound team once again collaborated with director Daniel Scheinert. For this dark comedy, the filmmakers used sound to help build tension as a group of friends tries to hide the truth of how their buddy Dick Long died.

We reached out to Kiser to find out more.

Honey Boy was part of the Sundance Institute’s Feature Film Program, which is supported by several foundations including the Ray and Dagmar Dolby Family Fund. You mentioned that this film earned a grant from Dolby. How did that grant impact your approach to the soundtrack?
For Honey Boy, Dolby gave us the funds to finish in Atmos. It allowed us to bring MPSE award-winning re-recording mixer Will Files on to mix the effects while I mixed the dialogue and music. We mixed at Sony Pictures Post Production on the Kim Novak stage. We got time and money to be on a big stage for 11 days — a five-day pre-dub and six-day final mix.

That was huge because the film opens with these massive robot action/sci-fi sound sequences that throw the audience off the idea of this being a character study. That’s the juxtaposition, especially in the first 15 to 20 minutes. It blurs the reality between the film world and real life for Shia, because the film is about Shia’s upbringing. Shia LaBeouf wrote the film and plays his father. The story focuses on the relationship between young actor Otis Lort (Lucas Hedges) and his alcoholic father, James.

The story goes through Shia’s time on Disney Channel’s Even Stevens series and then on Transformers, and looks at how this lifestyle affected him. His father was an ex-junkie, a sex offender and an ex-rodeo clown who would just push his son. By age 12, Shia was drinking, smoking weed and smoking cigarettes — all supplied to him by his dad. Shia was isolated, didn’t have many friends and wasn’t around his mother much.

This year is the first year that Shia has been sober since age 12. So this film is one big therapeutic movie for him. The director Alma Har’el comes from an alcoholic family, so she’s able to understand where Shia is coming from. Working with Alma is great. She wants to be in every part of the process — pick each sound and go over every bit to make sure it’s exactly what she wants.

Honey Boy director Alma Har’el.

What were director Alma Har’el’s initial ideas for the role of sound in Honey Boy?
They were editing this film for six months or more, and I came on board around mid-edit. I saw three different edits of the film, and they were all very different.

Finally, they settled on a cut that felt really nice. We had spotting sessions before they locked, and we were working on creating the environment of the motel where Otis and James were staying. We were also working on creating the sound of Otis being on-set. It had to feel like we were watching a film, and when someone screams, “Cut!” it had to feel like we go back into reality. Being able to play with those juxtapositions in a sonic way really helped. We would give it a cinematic sound and then pull it back into a cinéma vérité-type sound. That was the big sound motif in the movie.

We worked really closely with the composer, Alex Somers. He developed this little crank sound that helped signify Otis’ dreams and the turning of events. It makes it feel like Otis is a puppet in all his acting jobs.

There’s also a harness motif. In the very beginning you see adult Otis (Lucas Hedges) standing in front of a plane that has crashed and then you hear things coming up behind him. They are shooting missiles at him and they blow up and he gets yanked back from the explosions. You hear someone say, “Cut!” and he’s just dangling in a body harness about 20 feet up in the air. They reset, pull him down and walk him back. We go through a montage of his career, the drunkenness and how crazy he was, and then him going to therapy.

In the session, he’s told he has PTSD caused by his upbringing and he says, “No, I don’t.” It kicks to the title and then we see young Otis (Noah Jupe) sitting there waiting, and he gets hit by a pie. He then gets yanked back by that same harness, and he dangles for a little while before they bring him down. That is how the harness motif works.

There’s also a chicken motif. Growing up, Otis has a chicken named Henrietta La Fowl, and during the dream sequences the chicken leads Otis to his father. So we had to make a voice for the chicken. We had to give the chicken a dreamy feel. And we used the old-school Yellow Sky wind to give it a Western-feel and add a dreaminess to it.

On the dub stage with director Alma Har’el and her team, plus Will Files (front left) and Andrew Twite (front right).

Andrew Twite was my sound designer. He was also with me on Swiss Army Man, and he was able to make some rich and lush backgrounds for this. We did a lot of recording in our neighborhood of Highland Park, which is much like Echo Park, where Shia grew up and where the film is based. It’s a Latin-heavy community with taco trucks and all that fun stuff. We gave it that gritty sound to show that, even though Otis is making $8,000 a week, they’re still living on the other side of the tracks.

When Otis is in therapy, it feels like Malibu: nicer, quieter and not as stressful, versus the motel from when Otis was younger, which is more pumped up.

My dialogue editor was Elliot Thompson, and he always does a great job for me. The production sound mixer Oscar Grau did a phenomenal job of capturing everything at all moments. There was no MOS (picture without sound). He recorded everything and he gave us a lot of great production effects. The production dialogue was tricky because in many of the scenes young Otis isn’t wearing a shirt and there are no lav mics on him. Oscar used plant mics and booms and captured it all.

What was the most challenging scene for sound design on Honey Boy?
The opening, the intro and the montage right up front were the most challenging. We recut the sound for Alma several different ways. She was great and always had moments of inspiration. We’d try different approaches and the sound would always get better, but we were on a time crunch and it was difficult to get all of those elements in place in the way she was looking for.

Honey Boy on the mix stage at Sony’s Kim Novak Theater.

In the opening, you hear the sound of this mega-massive robot (an homage to a certain film franchise that Shia has been part of in the past, wink, wink). You hear those sounds coming up over the production cards on a black screen. Then it cuts to adult Otis standing there as we hear this giant laser gun charging up. Otis goes, “No, no, no, no, no…” in that quintessential Shia LaBeouf way.

Then, there’s a montage over Missy Elliott’s “My Struggles,” and the footage goes through his career. It’s a music video montage with sound effects, and you see Otis on set and off set. He’s getting sick, and then he’s stuck in a harness, getting arrested in the movie and then getting arrested in real life. The whole thing shows how his life is a blur of film and reality.

What was the biggest challenge in regards to the mix?
The most challenging aspect of the mix, on Will [Files]’s side of the board, was getting those monsters in the pocket. Will had just come off of Venom and Halloween so he can mix these big, huge, polished sounds. He can make these big sound effects scenes sound awesome. But for this film, we had to find that balance between making it sound polished and “Hollywood” while also keeping it in the realm of indie film.

There was a lot of back and forth to dial in the effects, to make them sound polished but still with an indie storytelling feel. Reel one took us two days on stage to get through, and we even spent some time on it on the last mix day. That was the biggest challenge to mix.

The rest of the film is more straightforward. The challenge on dialogue was to keep it sounding dynamic instead of smoothed out. A lot of Shia’s performance plays in the realm of vocal dynamics. We didn’t want to make the dialogue lifeless. We wanted to have the dynamics in there, to keep the performance alive.

We mixed in Atmos and panned sounds into the ceiling. I took a lot of the composer’s stems and remixed those in Atmos, spreading all the cues out in a pleasant way and using reverb to help glue it together in the environment.

 

The Death of Dick Long

Let’s look at another Sundance film you’ve worked on this year. The Death of Dick Long is part of the Next category. What were director Daniel Scheinert’s initial ideas for the role of sound on this film?
Daniel Scheinert always shows up with a lot of sound ideas, and most of those were already in place because of picture editor Paul Rogers from Parallax Post (which is right down the hall from our studio Unbridled Sound). Paul and all the editors at Parallax are sound designers in their own right. They’ll give me an AAF of their Adobe Premiere session and it’ll be 80 tracks deep. They’re constantly running down to our studio like, “Hey, I don’t have this sound. Can you design something for me?” So, we feed them a lot of sounds.

The Death of Dick Long

We played with the bug sounds the most. They shot in Alabama, where both Paul and Daniel are from, so there were a lot of cicadas and bugs. It was important to make the distinction of what the bugs sounded like in the daytime versus what they sounded like in the afternoon and at night. Paul did a lot of work to make sure that the balance was right, so we didn’t want to mess with that too much. We just wanted to support it. The backgrounds in this film are rich and full.

This film is crazy. It opens with a Creed song and ends with a Nickelback song, as a sort of joke. They wanted to show a group of guys who never really made much of themselves. These guys are in a band called Pink Freud, and they have band practice.

The film starts with them doing dumb stuff, like setting off fireworks and catching each other on fire — just messing around. Then it cuts to Dick (Daniel Scheinert) in the back of a vehicle and he’s bleeding out. His friends just dump him at the hospital and leave. The whole mystery of how Dick dies unfolds throughout the course of the film. The two main guys are Earl (Andre Hyland) and Zeke (Michael Abbott, Jr.).

The Foley on this film — provided by Foley artist John Sievert of JRS Productions — plays a big role. Often, Foley is used to help us get in and out of the scene. For instance, the police are constantly showing up to ask more questions and you hear them sneaking in from another room to listen to what’s being said. There’s a conversation between Zeke and his wife Lydia (Virginia Newcomb) and he’s asking her to help him keep information from the police. They’re in another room but you hear their conversation as the police are questioning Dick Long’s wife, Jane (Jess Weixler).

We used sound effects to help increase the tension when needed. For example, there’s a scene where Zeke is doing the laundry and his wife calls saying she’s scared because there are murderers out there, and he has to come pick her up. He knows the “murderer” is him, but he’s trying to play it off. As he’s talking to her, Earl is in the background telling Zeke what to say to his wife. Throughout the conversation, the washing machine out in the garage keeps getting louder, and it makes the scene feel more intense.

Director Daniel Scheinert (left) and Puddle relaxing during the mix.

“The Dans” — Scheinert and Daniel Kwan — are known for Swiss Army Man. That film used sound in a really funny way, but it was also relevant to the plot. Did Scheinert have the same open mind about sound on The Death of Dick Long? Also, were there any interesting recording sessions you’d like to talk about?
There were no farts this time, and it was a little more straightforward. Manchester Orchestra did the score on this one too, but it’s also more laid back.

For this film, we really wanted to depict a rural Alabama small-town feel. We did have some fun with a few PA announcements, but you don’t hear those clearly. They’re washed out. Earl lives in a trailer park, so there are trailer park fights happening in the background to make it feel more like Jerry Springer. We had a lot of fun doing that stuff. Sound effects editor Danielle Price cut that scene, and she did a really great job.

What was the most challenging aspect of the sound design on The Death of Dick Long?
I’d say the biggest things were the backgrounds, engulfing the audience in this area and making sure the bugs feel right. We wanted to make sure there was off-screen movement in the police station and other locations to give them all a sense of life.

The whole movie was about creating a sense of intensity. I remember showing it to my wife during one of our initial sound passes, and she pulled the blanket over her face while she was watching it. By the end, only her eyes were showing. These guys keep messing up and it’s stressful. You think they’re going to get caught. So the suspense that the director builds in — not being serious but still coming across in a serious manner — is amazing. We were helping them to build that tension through backgrounds, music and dropouts, and pushing certain everyday elements (like the washing machine) to create tension in scenes.

What scene in this film best represents the use of sound?
I’d say the laundry scene. Also, in the opening scene you hear the band playing in the garage and the perspective slowly gets closer and closer.

During the film’s climax, when you find out how Dick dies, we’re pulling down the backgrounds that we created. For instance, when you’re in the bedroom you hear their crappy fan. When you’re in the kitchen, you hear the crappy compressor on the refrigerator. It’s all about playing up these “bad” sounds to communicate the hopelessness of the situation they are living in.

I want to shout out all of my sound editors for their exceptional work on The Death of Dick Long. There was Jacob “Young Thor” Flack and Elliot Thompson, and Danielle Price who did amazing backgrounds. Also, a shout out to Ian Chase for help on the mix. I want to make sure they share the credit.

I think there needs to be more recognition of the contribution of sound and the sound departments on a film. It’s a subject that needs to be discussed, particularly in these somber days following the death of Oscar-winning re-recording mixer Gregg Rudloff. He was the nicest guy ever. I remember being an intern on the sound stage and he always took the time to talk to us and give us advice. He was one of the good ones.

When post sound gets a credit after the on-set caterers, it doesn’t do us justice. On Swiss Army Man, I initially had my own title card (The Dans wanted to give me a card that said, “Supervising Sound Editor Brent Kiser”), but the Directors Guild took it away. They said it wasn’t appropriate; their reasoning is that if they give it to one person, then they’ll have to give it to everybody. I get it — the visual effects department is new on the block. They wrote their contract knowing what was going on, so they get a title card. But try watching a film on mute and then talk to me about the importance of sound. That needs to start changing, for the sheer fact of burnout and legacy.

At the end of the day, you worked so hard to get these projects done. You’re taking care of someone else’s baby and helping it to grow up to be this great thing, but then we’re only seen as the hired help. Or, we never even get a mention. There is so much pressure and stress on the sound department, and I feel we deserve more recognition for what we give to a film.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney


Audio post pro Julienne Guffain joins Sonic Union

NYC-based audio post studio Sonic Union has added sound designer/mix engineer Julienne Guffain to its creative team. Working across Sonic Union’s Bryant Park and Union Square locations, Guffain brings over a decade of audio post production experience to her new role. She has worked on television, film and branded projects for clients such as Google, Mountain Dew, American Express and Cadillac.

A Virginia native, Guffain came to Manhattan to attend New York University’s Tisch School of the Arts. She found herself drawn to sound in film, and it was at NYU that she cut her teeth as a Foley artist and mixer on student films and independent projects. She landed her first industry gig at Hobo Audio, working with clients such as The History Channel and The Discovery Channel, and mixing the Emmy-winning television documentary series “Rising: Rebuilding Ground Zero.”

Making her way to Crew Cuts, she began lending her talents to a wide range of spot and brand projects, including the documentary feature “Public Figure,” which examines the psychological effects of constant social media use. It is slated for a festival run later this year.

 


Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and artists. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company’s mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to their crew, Shindig has brought on on-site composer Austin Shupe, a former colleague from Hum. Along with Shindig’s in-house composers, the team uses a large pool of freelance talent, matching the genre and/or style that is best suited for a project.

Shindig’s licensing arm has launched a searchable boutique online music library. Upgrading their existing catalogue, the studio has now tagged all the tracks in a simple, searchable manner on their website, providing direct access for producers, creatives and editors.

Shindig’s executive team, which includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O’Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work spans recording five mariachi pieces for El Pollo Loco with Vitro, working with multiple composers to craft five decades of music for Honda’s Evolution commercial via Muse, and orchestrating a virtuoso piano/violin duo cover of Twisted Sister’s “I Wanna Rock” for a Mitsubishi spot out of BSSP.


Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking about what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was settling on the sound texture of the woman’s voice coming out of the gas pump’s speaker. We tried about five different filtering profiles before landing on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my Soundminer server. I added some Foley sound effects of the man handling the gas pump too.
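FutzBox’s filtering profiles are proprietary, but the essence of any “futz” is band-limiting the voice to a small speaker’s range and adding a touch of distortion. A crude FFT-mask version in numpy (the 300–3,000Hz band and the drive amount are guesses for illustration, not FutzBox settings):

```python
import numpy as np

def futz(signal, sr, low=300.0, high=3000.0, drive=4.0):
    """Small-speaker 'futz': band-limit via an FFT mask, then soft-clip."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spec[(freqs < low) | (freqs > high)] = 0.0     # throw away lows and highs
    band = np.fft.irfft(spec, n=len(signal))
    return np.tanh(drive * band) / np.tanh(drive)  # gentle speaker-ish distortion
```

A production plugin would use proper filters and speaker impulse responses rather than a brick-wall FFT mask, but this captures why a futzed voice sounds thin and slightly crunchy.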

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and didn’t match from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used its ambience match to create a seamless background ambience. RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.


Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back stories of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which premiered September 30 on Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hovercraft makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s glider. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall, she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-pitched voice that’s sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake, voiced by Chloe Bennet, who also plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ and Pro Tools Legacy Pitch, and occasionally Waves UltraPitch when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.
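
The pitch correction Rodman describes can be pictured with a naive sketch. The snippet below (numpy assumed) shifts a test tone by resampling; note that, unlike the time-preserving pitch tools named above, resampling also changes duration, so it is an illustration of the semitone math rather than of the actual plug-ins.

```python
import numpy as np

def pitch_shift_naive(x, semitones):
    # Frequency ratio for the requested shift (12-tone equal temperament).
    ratio = 2 ** (semitones / 12)
    # Read the source at a faster (or slower) rate via linear interpolation.
    # Duration changes too, unlike studio pitch tools that preserve timing.
    positions = np.arange(0, len(x) - 1, ratio)
    return np.interp(positions, np.arange(len(x)), x)

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)   # 1-second A3 test tone
up = pitch_shift_naive(tone, 2)      # two semitones sharp (~246.9 Hz)
```

A shift of +2 semitones multiplies the frequency by 2^(2/12) ≈ 1.122, which is why small corrective moves of a semitone or less are barely audible as artifacts.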

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ‘80s Transformers franchise and Nibbler on Futurama),” says Rodman.

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

Behind the Title: Heard City mixer Elizabeth McClanahan

A musician from an early age, this mixer/sound designer knew her path needed to involve music and sound.

Name: Elizabeth McClanahan

Company: New York City’s Heard City (@heardcity)

Can you describe your company?
We are an audio post production company.

What’s your job title?
Mixer and sound designer.

What does that entail?
I mix and master audio for advertising, television and film. Working with creatives, I combine production audio, sound effects, sound design, score or music tracks and voiceover into a mix that sounds smooth and helps highlight the narrative of each particular project.

What would surprise people the most about what falls under that title?
I think most people are surprised by the detailed nature of sound design and by the fact that we often supplement straightforward diegetic sounds with additional layers of more conceptual design elements.

What’s your favorite part of the job?
I enjoy the collaborative work environment, which enables me to take on different creative challenges.

What’s your least favorite?
The ever-changing landscape of delivery requirements.

What is your favorite time of the day?
Lunch!

If you didn’t have this job, what would you be doing instead?
I think I would be interested in pursuing a career as an archivist or law librarian.

Why did you choose this profession?
Each project allows me to combine multiple tools and skill sets: music mixing, dialogue cleanup, sound design, etc. I also enjoy the problem solving inherent in audio post.

How early on did you know this would be your path?
I began playing violin at age four, picking up other instruments along the way. As a teenager, I often recorded friends’ punk bands, and I also started working in live sound. Later, I began my professional career as a recording engineer and focused primarily on jazz. It wasn’t until VO and ADR sessions began coming into the music studio in which I was working that I became aware of the potential paths in audio post. I immediately enjoyed the range and challenges of projects that post had to offer.

Can you name some recent projects you have worked on?
Lately, I’ve worked on projects for Google, Budweiser, Got Milk?, Clash of Clans, and NASDAQ.

I recently completed work on a feature film, called Nancy. This was my first feature in the role of supervising sound editor and re-recording mixer, and I appreciated the new experience on both a technical and creative level. Nancy was unique in that all department heads (in both production and post) were women. It was an incredible opportunity to work with so many talented people.

Name three pieces of technology you can’t live without.
The Teenage Engineering OP-1, my phone and the UAD plugins that allow me to play bass at home without bothering my neighbors.

What social media channels do you follow?
Although I am not a heavy social media user, I follow a few pragmatic-yet-fun YouTube channels: Scott’s Bass Lessons, Hicut Cake and the gear review channel Knobs. I love that Knobs demonstrates equipment in detail without any talking.

What do you do to de-stress from it all?
In addition to practicing yoga, I love to read and visit museums, as well as play bass and work with modular synths.

Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios to be in and out of sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that, and how it allows me to interact with folks at the studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how there is always so much to learn about our industry through how folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS, but I’d like to have the ability to say if the world lost power tomorrow, I’d be okay in the woods. I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.

Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode numbers. The total number of episodes in a show’s season can be 13 or 10 or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned a 2013 Oscar nom for sound, an MPSE nom and a BAFTA film nom for sound) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, as well as a nomination for editing the dialogue and ADR. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds: (L-R) Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialog and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through a cloud-based film management/collaboration tool called Pix. Pix let the showrunners, VFX supervisor, composer, sound team and picture team share content and share notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialog and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music by playing with repeated rhythms. Breaking the anticipated rhythm helped catch the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.

Pacific Rim: Uprising‘s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on this grand a scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu on June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo Del Toro (a fan of the Portal game series) had actress Ellen McLain as the voice of Gipsy Avenger since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with a sub-harmonic synthesis plug-in called Waves MaxxBass to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”
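
As a rough illustration of the sub-harmonic idea (not of MaxxBass itself, which is a psychoacoustic bass processor), here is a minimal numpy sketch that fattens a test tone by mixing in a component one octave below a known fundamental. The helper is hypothetical: real sub-harmonic processors track the fundamental from the audio itself rather than taking it as a parameter.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr

def add_suboctave(x, f0, sub_gain=0.5, sr=48000):
    # Hypothetical helper: mix in a tone one octave below the supplied
    # fundamental f0. Actual plug-ins derive f0 from the input signal.
    n = np.arange(len(x)) / sr
    sub = np.sin(2 * np.pi * (f0 / 2) * n)
    return x + sub_gain * sub

note = np.sin(2 * np.pi * 110 * t)  # stand-in for a low piano note
fat = add_suboctave(note, 110)      # adds energy an octave down, at 55 Hz
```

Halving the frequency drops the pitch exactly one octave, which is why the added component reinforces rather than clashes with the original note.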

That was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic-machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”

 

Aadahl says the updates were relatively easy to do because the team has remote access to all of its effects via Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and drop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching their entire collection when looking for Pacific Rim: Uprising sounds; the film has its own library of specially designed, signature sounds, all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit it back at their studio and then share that with the dub stage.

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, such as when the Kaiju come together to form the Mega-Kaiju. “As the action escalates and goes off-camera (it was more of a shadow), we swung the sound into the overheads, which makes it feel really big and high-up. It was a singular, multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second-to-none but the people there are even better than that. “We have been very fortunate to work with great people, from Steven DeKnight our director to Dylan Highsmith our picture editor to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on an exploration to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

 Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London, says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, mostly everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from sourced hydrophone recordings from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.

Effects editor Saoirse Christopherson capturing sounds on board a kayak in the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show’s sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R-26 recorder with several contact mics strapped to the kayak’s hull, they spent the day grinding through, over and against the ice. “The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak,” says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
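The treatment described here (a heavy slowdown, which also drops the pitch, followed by filtering that favors the low-mid and low end) can be sketched in a few lines. This is a generic illustration of the technique, not Walpole’s actual chain; the speed and cutoff values are placeholders, and the input is a synthetic stand-in for a real recording.

```python
import numpy as np
from scipy.signal import butter, resample, sosfilt

def slow_and_darken(audio, sr, speed=0.25, cutoff_hz=400.0):
    # Slowing playback by resampling to more samples also drops the
    # pitch by the same factor -- the classic "slowed and pitched down"
    # treatment. Parameter values here are illustrative only.
    slowed = resample(audio, int(len(audio) / speed))
    # Low-pass EQ to bring out the low-mid to low-end frequencies.
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    return sosfilt(sos, slowed)

sr = 48000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 2000 * t)   # stand-in for an ice-impact recording
processed = slow_and_darken(clip, sr) # 4x longer, two octaves lower, darker
```

At a speed factor of 0.25, a one-second clip becomes four seconds long and everything in it lands two octaves lower, which is why small-scale ice pings can read as massive sheets grinding together.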

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole uses sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn’t just battling the elements, scurvy, starvation and mutiny. They’re also being killed off by a polar bear-like creature called the Tuunbaq. It’s part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season so Walpole created sonic hints as to the creature’s make-up.

Walpole worked with showrunner David Kajganich to find the creature’s voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. “Some of the recordings we used raw as well,” says Walpole. “This guy could make these crazy sounds. His voice could go so deep.”

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series was shot on a soundstage, there’s no usable bed of production sound to act as a jumping-off point for the post sound team. But instead of that being a challenge, Walpole finds it liberating. “In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it’s the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It’s not often in your career that you get such a blank canvas of creation.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Review: Krotos Reformer Pro for customizing sounds

By Robin Shore

Krotos has got to be one of the most innovative developers of sound design tools in the industry right now. That is a strong statement, but I stand by it. This Scottish company has become well known over the past few years for its Dehumaniser line of products, which bring a fresh approach to the creation of creature vocals and monster sounds. Recently, it released a new DAW plug-in, Reformer Pro, which aims to give sound editors creative new ways of accessing and manipulating their sound effects.

Reformer Pro brings a procedural approach to working with sound effects libraries. According to their manual, “Reformer Pro uses an input to control and select segments of prerecorded audio automatically, and recompiles them in realtime, based on the characteristics of the incoming signal.” In layman’s terms this means you can “perform” sound effects from a library in realtime, using only a microphone and your voice.
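In spirit, that description matches what is often called concatenative or mosaicing synthesis. As a rough illustration only (Krotos does not publish its algorithm, and the real analysis is far more sophisticated than this), here is a toy version that matches each frame of a driving input to the prerecorded grain with the closest loudness, then rescales it to follow the input’s dynamics:

```python
import numpy as np

def reform(input_sig, library, frame=1024):
    # Toy "reformer": for every frame of the driving input, pick the
    # library grain whose loudness is closest, then rescale it so the
    # output tracks the input's envelope. NOT Krotos' actual algorithm,
    # just the simplest possible version of the idea.
    rms = lambda x: float(np.sqrt(np.mean(np.square(x)))) + 1e-12
    grains = [library[i:i + frame]
              for i in range(0, len(library) - frame + 1, frame)]
    grain_rms = np.array([rms(g) for g in grains])
    out = []
    for i in range(0, len(input_sig) - frame + 1, frame):
        seg = input_sig[i:i + frame]
        g = grains[int(np.argmin(np.abs(grain_rms - rms(seg))))]
        out.append(g * (rms(seg) / rms(g)))
    return np.concatenate(out) if out else np.zeros(0)

# A quiet-loud-quiet input imposes its dynamics on noise "library" audio.
rng = np.random.default_rng(0)
library = rng.standard_normal(48000) * np.linspace(0.05, 1.0, 48000)
input_sig = np.concatenate(
    [rng.standard_normal(4096) * a for a in (0.1, 0.9, 0.1)])
out = reform(input_sig, library)
```

Even this crude loudness-matching is enough to make the output breathe with the performer: quiet input frames pull quiet grains, loud ones pull loud grains.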

It’s dead simple to use. A menu inside the plugin lets you choose from a list of libraries that have been pre-analyzed for use with Reformer Pro. Once you’ve loaded up the library you want, all that’s left to do is provide some sort of sonic input and let the magic happen. Whatever sound you put in will be instantly “reformed” into a new sound effect of your choosing. A number of libraries come bundled in when you buy Reformer Pro and additional libraries can be purchased from the Krotos website. The choice to include the Black Leopard library as a default when you first open the plugin was a very good one. There is just something so gratifying about breathing and grunting into a microphone and hearing a deep menacing growl come out the speakers instead of your own voice. It made me an immediate fan.

There are a few knobs and switches that let you tweak the response characteristics of Reformer Pro’s output, but for the most part you’ll be using sound to control things, and the amount of control you can get over the dynamics and rhythm of Reformer Pro’s output is impressive. While my immediate instinct was to drive Reformer Pro by vocalizing through a mic, any sound source can work well as an input. I also got great results by rubbing and tapping my fingers directly against the grill of a microphone and by dragging the mic across the surface of my desk.

Things get even more interesting if you start feeding prerecorded audio into Reformer Pro. Using a Foley footstep track as the input for a library of cloth and leather sounds creates a realistic and perfectly synced rustle track. A howling wind used as the input for a library of creaks and rattles can add a nice layer of texture to a scene’s ambience tracks. Pumping music through Reformer Pro can generate some really wacky sounds and is a great way to find inspiration and test out abstract sound design ideas.

If the only libraries you could use with Reformer Pro were the 100 or so available on the Krotos website, it would still be a fun and innovative tool, but its utility would be pretty limited. What makes Reformer Pro truly powerful is its analysis tool, which lets you create custom libraries out of sounds from your own collection. The possibilities here are virtually endless: as long as a sound exists, it can be turned into a unique new library. To be sure, some sounds are better suited for this than others, but it doesn’t take long at all to figure out which kinds of sounds work best, and I was pleasantly surprised by how well most of the custom libraries I created turned out. This is a great way to breathe new life into an old sound effects collection.

Summing Up
Reformer Pro adds a sense of liveliness, creativity and, most importantly, fun to the often tedious task of syncing sound effects to picture. Anyone who spends their days working with sound effects would be doing themselves a disservice by not taking it for a test drive. I imagine most will be both impressed and excited by its novel approach to sound effects editing and design.


Robin Shore is an audio engineer at NYC’s Silver Sound Studios

Behind the Title: PlushNYC partner/mixer Mike Levesque, Jr.

NAME: Michael Levesque, Jr.

COMPANY: PlushNYC

CAN YOU DESCRIBE YOUR COMPANY?
We provide audio post production services.

WHAT’S YOUR JOB TITLE?
Partner/Mixer/Sound Designer

WHAT DOES THAT ENTAIL?
The foundation of it all for me is that I’m a mixer and a sound designer. I became a studio owner/partner organically because I didn’t want to work for someone else. The core of my role is giving my clients what they want from an audio post perspective. The rest of my job entails managing the staff, working through technical issues, empowering senior employees to excel in their careers and coaching junior staff when given the opportunity.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Every day I find myself being the janitor in many ways! I’m a huge advocate of leading by example, and I feel that no task is too mundane for any team member to take on. So I don’t cast shade on picking up a mop or broom, and I also handle everything else above that. I’m part of a team, and everyone on the team participates.

During our latest facility remodel, I took a very hands-on approach. As a bit of a weekend carpenter, I naturally gravitate toward building things, and that was no different in the studio!

WHAT TOOLS DO YOU USE?
Avid Pro Tools. I’ve been operating on Pro Tools since 1997 and was one of the early adopters. Initially, I started out on analog ¼-inch tape and later moved to the digital editing system SSL ScreenSound. I’ve been using Pro Tools since its humble beginnings, and that is my tool of choice.

WHAT’S YOUR FAVORITE PART OF THE JOB?
For me, my favorite part about the job is definitely working with the clients. That’s where I feel I am able to put my best self forward. In those shoes, I have the most experience. I enjoy the conversation that happens in the room, the challenges that I get from the variety of projects and working with the creatives to bring their sonic vision to life. Because of the amount of time I spend in the studio with my clients, one of the great results, besides the work itself, is wonderful, long-term friendships. You get to meet a lot of different people and experience a lot of different walks of life, and that’s incredibly rewarding for me.

WHAT’S YOUR LEAST FAVORITE?
We’ve been really lucky to have regular growth over the years, but the logistics of that can be challenging at times. Expansion in NYC is a constant uphill battle!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The train ride in. With no distractions, I’m able to get the most work done. It’s quiet and allows me to be able to plan my day out strategically while my clarity is at its peak. That way I can maximize my day and analyze and prioritize what I want to get done before the hustle and bustle of the day begins.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I weren’t a mixer/sound designer, I would likely be a general contractor or in a role where I was dealing with building and remodeling houses.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started when I was 19 and I knew pretty quickly that this was the path for me. When I first got into it, I wanted to be a music producer. Being a novice musician, it was very natural for me.

Borgata

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I recently worked on a large-scale project for Frito-Lay, a project for ProFlowers and Shari’s Berries for Valentine’s Day, a spot for Massage Envy and a campaign for the Broadway show Rocktopia. I’ve also worked on a number of projects for Vevo, including pieces for The World According To… series for artists — that includes a recent one with Jaden Smith. I also recently worked on a spot with SapientRazorfish New York for Borgata Casino that goes on a colorful, dreamlike tour of the casino’s app.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Back in the early 2000s, I mixed a DVD box set called Journey Into the Blues, a PBS film series from Martin Scorsese that won a Grammy for Best Historical Album and Best Album Notes.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– My cell phone to keep me connected to every aspect of life.
– My Garmin GPS Watch to help me analytically look at where I’m performing in fitness.
– Pro Tools to keep the audio work running!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m an avid triathlete, so personal wellness is a very big part of my life. Training daily is a really good stress reliever, and it allows me to focus both at work and at home with the kids. It’s my meditation time.

Ren Klyce: Mixing the score for Star Wars: The Last Jedi

By Jennifer Walden

There are space battles and epic music, foreign planets with unique and lively biomes, blasters, lightsabers, a universe at war and a force that connects it all. Over the course of eight “Episodes” and through numerous spin-off series and games, fans of Star Wars have become well acquainted with its characteristic sound.

Creating the world, sonically, is certainly a feat, but bringing those sounds together is a challenge of equal measure. Shaping the soundtrack involves sacrifice and egoless judgment calls that include making tough decisions in service of the story.

Ren Klyce

Skywalker Sound’s Ren Klyce was co-supervising sound editor, sound designer and a re-recording mixer on Star Wars: The Last Jedi. He not only helped to create the film’s sounds but he also had a hand in shaping the final soundtrack. As re-recording mixer of the music, Klyce got a new perspective on the film’s story.

He’s earned two Oscar nominations for his work on the Rian Johnson-directed The Last Jedi — one for sound editing and another for sound mixing. We reached out to Klyce to ask about his role as a re-recording mixer, what it was like to work with John Williams’ Oscar-nominated score, and what it took for the team to craft The Last Jedi’s soundtrack.

You had all the Skywalker-created effects, the score and all the dialog coming together for the final mix. How did you bring clarity to what could have been a chaotic soundtrack?
Mostly, it’s by forcing ourselves to potentially get rid of a lot of our hard work for the sake of the story. Getting rid of one’s work can be difficult for anyone, but it’s the necessary step in many instances. When you initially premix sound for a film, there are so many elements, and oftentimes we have everything prepared just in case they’re asked for. In the case of Star Wars, we didn’t know what director Rian Johnson might want and not want, so we had everything at the ready in either case.

On Star Wars, we ended up doing a blaze pass where we played everything from the beginning to the end of a reel all at once. We could clearly see that it was a colossal mess in one scene, but not so bad in another. It was like getting a 20-minute CliffsNotes version of where we were going to need to spend some time.

Then it comes down to having really skilled mixers like David Parker (dialog) and Michael Semanick (sound effects), whose skill-sets include understanding storytelling. They understand what their role is about — which is making decisions as to what should stay, what should go, what should be loud or quiet, or what should be turned off completely. With sound effects, Michael is very good at this. He can quickly see the forest for the trees. He’ll say, “Let’s get rid of this. These elements can go, or the background sounds aren’t needed here.” And that’s how we started shaping the mix.

After doing the blaze pass, we will then go through and listen to just the music by itself. John Williams tells his story through music and by underscoring particular scenes. A lot of the process is learning what all the bits and pieces are and then weighing them up against each other. We might decide that the music in a particular scene tells the story best.

That is how we would start and then we worked together as a team to continue shaping the mix into a rough piece. Rian would then come in and give his thoughts to add more sound here or less music there, thus shaping the soundtrack.

After creating all of those effects, did you wish you were the one to mix them? Or, are you happy mixing music?
For me personally, it’s a really great experience to listen to and be responsible for the music because I’ve learned so much about the power of the music and what’s important. If it were the other way around, I might be a little more overly focused on the sound effects. I feel like we have a good dynamic. Michael Semanick has such great instincts. In fact, Rian described Michael as being an incredible storyteller, and he really is.

Mixing the music for me is a wonderful way to get a better scope of the entire soundtrack. By not touching the sound effects on the stage, those faders aren’t so precious. Instead, the movie itself and the soundtrack take precedence over the bits and pieces that make them up.

What was the trickiest scene to mix in terms of music?
I think that would have to be the ski speeder sequence on the salt planet of Crait. That was very difficult because there was a lot of dodging and burning in the mix. In other words, Rian wanted to have loud music and then the music would have to dive down to expose a dialogue line, and then jump right back up again for more excitement and then dive down to make way for another dialogue line. Then boom, some sound effects would come in and the Millennium Falcon would zoom by. Then the Star Wars theme would take over and then it had to come down for the dialogue. So we worked that sequence quite a bit.

Our picture editor Bob Ducsay really guided us through the shape of that sequence. What was so great about having the picture editor present was that he was so intimate with the rhythm of the dialogue and his picture cutting. He knew where all of the story points were supposed to be, what motivated a look to the left and so on. Bob would say something like, “When we see Rose here, we really need to make sure we hear her musical theme, but then when we cut away, we need to hear the action.”

Were you working with John Williams’ music stems? Did you feel bad about pulling things out of his score? How do you dissect the score?
Working with John is obviously an incredible experience, and on this film I was lucky enough to work with Shawn Murphy as well, who is really one of my heroes and I’ve known him for years. He is the one who records the orchestra for John Williams and balances everything. Not only does he record the orchestra, but Shawn is a true collaborator with John as well. It’s incredible the way they communicate.

John is really mixing his own soundtrack when he’s up there on the podium conducting, and he’s making initial choices as to which instruments are louder than others — how loud the woodwinds play, how loud the brass plays, how loud the percussion is and how loud the strings are. He’s really shaping it. Between Williams and Murphy, they work on intonation, tuning and performance. They go through and record and then do pickups for this measure and that measure to make sure that everything is as good as it can be.

I actually got to witness John Williams do this incredible thing — which was during the recording of the score for the Crait scene. There was this one section where the brass was playing and John (who knows every single person’s name in that orchestra) called out to three people by name and said something like, “Mark, on bar 63, from beat two to beat six, can you not play please. I just want a little more clarity with two instruments instead of three. Thank you.” So they backed up and did a pick-up on that bar and that gentleman dropped out for those few beats. It was amazing.

In the end, it really is John who is creating that mix. Then, editorially, there would be moments where we had to change things. Ramiro Belgardt, another trusted confidant of John Williams, was our music editor. Once the music is recorded and premixed, it was up to Ramiro to keep it as close to what John intended throughout all of the picture changes.

A scene would be tightened or opened up, and the music isn’t going to be re-performed. That would be impossible to do, so it has to be edited or stretched or looped or truncated. Ramiro had the difficult job of making the music seem exactly how it was on the day it was performed. But in truth, if you look at his Pro Tools session, you’ll see all of these splices and edits that he did to make everything function properly.

Does a particular scene stick out?
There was one scene where Rey ignites the lightsaber for the very first time on Jedi Island, and there we did change the balance within the music. She’s on the cliff by the ocean and Luke is watching her as she’s swinging the lightsaber. Right when she ignites the lightsaber, her theme comes in, which is this beautiful piano melody. The problem was when they mixed the piano they didn’t have a really loud lightsaber sound going with it. We were really struggling because we couldn’t get that piano melody to speak right there. I asked Ramiro if there was any way to get that piano separately because I would love it if we could hear that theme come in just as strong as that lightsaber. Those are the types of little tiny things that we would do, but those are few and far between. For the most part, the score is how John and Shawn intended the mix to be.

It was also wonderful having Ramiro there as John’s spokesperson. He knew all of the subtle little sacred moments that Williams had written in the score. He pointed them out and I was able to push those and feature those.

Was Rian observing the sessions?
Rian attended every single scoring session and knew the music intricately. He was really excited for the music and wanted it to breathe. Rian’s knowledge of the music helped guide us.

Where did they perform and record the score?
This was recorded at the Barbra Streisand Scoring Stage on the Sony Pictures Studios lot in Culver City, California.

Are there any Easter eggs in terms of the score?
During the casino sequence there’s a beautiful piece of music that plays throughout, an homage John Williams wrote to the Cantina song he composed for the original Star Wars.

So, the Easter egg comes as the Fathiers are wreaking havoc in the casino and we cut to the inside of a confectionery shop. There’s an abrupt edit where all the music stops and you hear this sort of lounge piano that’s playing, like a piece of source music. That lounge piano is actually John Williams playing “The Long Goodbye,” which is the score that he wrote for the film The Long Goodbye. Rian is a huge fan of that score and he somehow managed to get John Williams to put that into the Star Wars film. It’s a wonderful little Easter egg.

John Williams is, in so many ways, the closest thing we have to Beethoven or Brahms in our time. When you’re in his presence — he’s 85 years old now — it’s humbling. He still writes all of his manuscripts by hand.

On that day that John sat down and played “The Long Goodbye” piano piece, Rian was so excited that he pulled out his iPhone and filmed the whole thing. John said, “Only for you, Rian, do I do this.” It was a very special moment.

The other part of the Easter egg is that John’s brother Donald Williams is a timpanist in the orchestra. So what’s cool is you hear John playing the piano and the very next sound is the timpani, played by his brother. So you have these two brothers and they do a miniature solo next to each other. So those are some of the fun little details.

John Williams earned an Oscar nomination for Best Original Score for Star Wars: The Last Jedi.
It’s an incredible score. One of the fortunate things that occurred on this film was that Rian and producer Ram Bergman wanted to give John Williams as much time as possible so they started him really early. I think he had a year to compose, which was great. He could take his time and really work diligently through each sequence. When you listen to just the score, you can hear all of the little subtle nuances that John composed.

For example, Rose stuns Finn and she’s dragging him on this little cart and they’re having this conversation. If you listen to just the music through there, the way that John has scored every single little emotional beat in that sequence is amazing. With all the effects and dialogue, you’re not really noticing the musical details. You hear two people arguing and then agreeing. They hate each other and now they like each other. But when you deconstruct it, you hear the music supporting each one of those moments. Williams does things like that throughout the entire film. Every single moment has all these subtle musical details. All the scenes with Snoke in his lair have these ominous, dark musical choir phrases for example. It’s phenomenal.

The moments where the choice was made to remove the score completely, was that a hard sell for the director? Or, was he game to let go of the score in those effects-driven moments?
No, it wasn’t too difficult. There was one scene that we did revert on though. It was on Crait, and Rian wanted to get rid of the whole big music sequence when Leia sees that the First Order is approaching and they have to shut the giant door. There was originally a piece of music, and that was when the crystal foxes were introduced. So we got rid of the music there. Then we watched the film and Rian asked us to put that music back.

A lot of the music edits were crafted in the offline edit, and those were done by music editor Joseph Bonn. Joe would craft those moments ahead of time and test them. So a lot of that was decided before it got to my hands.

But on the stage, we were still experimenting. Ramiro would suggest trying to lose a cue and we’d mute it from the sequence. That was a fun part of collaborating with everyone. It’s a live experiment. I would say that on this film most of the music editorial choices were decided before we got to the final mix. Joe Bonn spent months and months crafting the music guide, which helped immensely.

What is one audio tool that you could not have lived without on the mix? Why?
Without a doubt, it’s our Avid Pro Tools editing software. All the departments — dialogue, Foley, effects and music — were using Pro Tools. That is absolutely hands-down the one tool that we are addicted to. At this point, not having Pro Tools is like not having a hammer.

But you used a console for the final mix, yes?
Yes. Star Wars: The Last Jedi was not an in-the-box mix. We mixed it on a Neve DFC Gemini console in the traditional manner. It was not a live Pro Tools mix. We mixed it through the DFC console, which had its own EQ, dynamics processing, panning, reverb sends/returns, AUX sends/returns and LFE sends/returns.

The pre-pre-mixing was done in Pro Tools. Then, looking at the sound effects for example, that was shaped roughly in the offline edit room, and then that would go to the mix stage. Michael Semanick would pre-mix the effects through the Neve DFC in a traditional premixing format that we would record to 9.1 pre-dubs and objects. A similar process was done with the dialogue. So that was done with the console.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Super Bowl: Heard City’s audio post for Tide, Bud and more

By Jennifer Walden

New York audio post house Heard City put their collaborative workflow design to work on the Super Bowl ad campaign for Tide. Philip Loeb, partner/president of Heard City, reports that their facility is set up so that several sound artists can work on the same project simultaneously.

Loeb also helped to mix and sound design many of the other Super Bowl ads that came to Heard City, including ads for Budweiser, Pizza Hut, Blacture, Tourism Australia and the NFL.

Here, Loeb and mixer/sound designer Michael Vitacco discuss the approach and the tools that their team used on these standout Super Bowl spots.

Philip Loeb

Tide’s It’s a Tide Ad campaign via Saatchi & Saatchi New York
Is every Super Bowl ad really a Tide ad in disguise? A string of commercials touting products from beer to diamonds, and even a local ad for insurance, is interrupted by David Harbour (of Stranger Things fame). He declares that those ads are actually just Tide commercials, as everyone is wearing such clean clothes.

Sonically, what’s unique about this spot?
Loeb: These spots, four in total, involved sound design and mixing, as well as ADR. One of our mixers, Evan Mangiamele, conducted an ADR session with David Harbour, who was in Hawaii, and we integrated that into the commercial. In addition, we recorded a handful of different characters for the lead-ins for each of the different vignettes because we were treating each of those as different commercials. We had to be mindful of a male voiceover starting one and then a female voiceover starting another so that they were staggered.

There was one vignette for Old Spice, and since the ads were for P&G, we did get the Old Spice mnemonic. We tried something different at the end, with one version featuring the character singing the mnemonic and one of him whistling it. There were many different variations; in the end, we just wanted to get part of the mnemonic into the closing joke.

The challenge with the Tide campaign, in particular, was to make each of these vignettes feel like it was a different commercial and to treat each one as such. There’s an overall mix level that goes into that but we wanted certain ones to have a little bit more dynamic range than the others. For example, there is a cola vignette that’s set on a beach with people taking a selfie. David interrupts them by saying, “No, it’s a Tide ad.”

For that spot, we had to record a voiceover that was very loud and energetic to go along with a loud and energetic music track. That vignette cuts into the “personal digital assistant” (think Amazon’s Alexa) spot. We had to be very mindful of these ads flowing into each other while making it clear to the viewer that these were different commercials with different products, not one linear ad. Each commercial required its own voiceover, its own sound design, its own music track, and its own tone.

One vignette was about car insurance, featuring a mechanic in a white shirt under a car. That spot isn’t letterboxed like the others; it’s 4:3 because it’s supposed to be a local ad. We made that vignette sound more like a local ad; it’s a little over-compressed, a little over-equalized and a little videotape-sounding. The music is mixed a little low. We wanted it to sound like the dialogue is really up front so as to get the message across, like a local advertisement.

What’s your workflow like?
Loeb: At Heard City, our workflow is unique in that we can have multiple mixers working on the same project simultaneously. This collaborative process makes our work much more efficient, and that was our original intent when we opened the company six years ago. The model came to us by watching the way that the bigger VFX companies work. Each artist takes a different piece of the project and then all of the work is combined at the end.

We did that on the Tide campaign, and there was no other way we could have done it due to the schedule. Also, we believe this workflow provides a much better product. One sound artist can be working specifically on the sound design while another can be mixing. So as I was working on mixing, Evan was flying in his sound design to me. It was a lot of fun working on it like that.

What tools helped you to create the sound?
One plug-in we’re finding to be very helpful is the iZotope Neutron. We put that on the master bus and we have found many settings that work very well on broadcast projects. It’s a very flexible tool.

Vitacco: The Neutron has been incredibly helpful overall in balancing out the mix. There are some very helpful custom settings that have helped to create a dynamic mix for air.

Tourism Australia Dundee via Droga5 New York
Danny McBride and Chris Hemsworth star in this movie-trailer-turned-tourism-ad for Australia. It starts out as a movie trailer for a new addition to the Crocodile Dundee film franchise — well, rather, a spoof of it. There’s epic music featuring a didgeridoo and title cards introducing the actors and setting up the premise for the “film.” Then there’s talk of miles of beaches and fine wine and dining. It all seems a bit fishy, but finally Danny McBride confirms that this is, in fact, actually a tourism ad.

Sonically, what’s unique about this spot?
Vitacco: In this case, we were creating a fake movie trailer that’s a misdirect for the audience, so we aimed to create sound design that was both in the vein of being big and epic and also authentic to the location of the “film.”

One of the things that movie trailers often draw upon is a consistent mnemonic to drive home a message. So I helped to sound design a consistent mnemonic for each of the title cards that come up.

For this I used some Native Instruments toolkits, like “Rise & Hit” and “Gravity,” and Tonsturm’s Whoosh software to supplement some existing sound design to create that consistent and branded mnemonic.

In addition, we wanted to create an authentic sonic palette for the Australian outback where a lot of the footage was shot. I had to be very aware of the species of animals and insects that were around. I drew upon sound effects that were specifically from Australia. All sound effects were authentic to that entire continent.

Another factor that came into play was that anytime you are dealing with a spot that has a lot of soundbites, especially ones recorded outside, there tends to be a lot of noise reduction taking place. I didn’t have to hit it too hard because everything was recorded very well. For cleanup, I used the iZotope RX 6 — both the RX Connect and the RX Denoiser. I relied on that heavily, as well as the Waves WNS plug-in, just to make sure that things were crisp and clear. That allowed me the flexibility to add my own ambient sound and have more control over the mix.

Michael Vitacco

In RX, I really like to use the Denoiser instead of the Dialogue Denoiser tool when possible. I’ll pull out the handles of the production sound and grab a long sample of noise. Then I’ll use the Denoiser because I find that works better than the Dialogue Denoiser.
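The workflow Vitacco describes, sampling a noise-only stretch of the production track and then denoising against that profile, is classic spectral subtraction. Here is a rough sketch of the underlying idea only; this is not iZotope's algorithm, and the function name and parameters are invented for illustration:

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=512, floor=0.05):
    """Very simplified spectral subtraction: estimate an average noise
    magnitude spectrum from a noise-only sample, then subtract it from
    each frame of the signal, keeping a small spectral floor."""
    # Average magnitude spectrum of the learned noise profile
    n_frames = len(noise_sample) // frame
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_sample[i * frame:(i + 1) * frame]))
         for i in range(n_frames)], axis=0)

    out = np.zeros(len(signal), dtype=float)
    for i in range(len(signal) // frame):
        chunk = signal[i * frame:(i + 1) * frame]
        spec = np.fft.rfft(chunk)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise estimate, never dipping below the floor
        clean = np.maximum(mag - noise_mag, floor * mag)
        out[i * frame:(i + 1) * frame] = np.fft.irfft(clean * np.exp(1j * phase), frame)
    return out
```

Real tools add overlap-add windowing and psychoacoustic smoothing to avoid the "musical noise" artifacts a naive subtraction like this produces.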

Budweiser Stand By You via David Miami
The phone rings in the middle of the night. A man gets out of bed, prepares to leave and kisses his wife good-bye. His car radio announces that a natural disaster is affecting thousands of families who are in desperate need of aid. The man arrives at a Budweiser factory and helps to organize the production of canned water instead of beer.

Sonically, what’s unique about this spot?
Loeb: For this spot, I did a preliminary mix where I handled the effects, the dialogue and the music. We set the preliminary tone for that as to how we were going to play the effects throughout it.

The spot starts with a husband and wife asleep in bed and they’re awakened by a phone call. Our sound focused on the dialogue and effects upfront, and also the song. I worked on this with another fantastic mixer here at Heard City, Elizabeth McClanahan, who comes from a music background. She put her ears to the track and did an amazing job of remixing the stems.

On the master track in the Pro Tools session, she used iZotope’s Neutron, as well as the FabFilter Pro-L limiter, which helps to contain the mix. One of the tricks on a dynamic mix like that — which starts off with that quiet moment in the morning and then builds with the music in the end — is to keep it within the restrictions of the CALM Act and other specifications that stipulate dynamic range and not just average loudness. We had to be mindful of how we were treating those quiet portions and the lower portions so that we still had some dynamic range but we weren’t out of spec.
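For the curious, CALM Act compliance in the US is measured against ATSC A/85, which uses the K-weighted LKFS loudness of ITU-R BS.1770 rather than peak level. The toy sketch below illustrates only the distinction McClanahan's mix had to respect, average loudness near a target versus frame-to-frame dynamic range; it uses plain RMS instead of true K-weighted loudness, and the function name and target values are illustrative:

```python
import numpy as np

def rms_dbfs(x):
    """RMS level in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def check_mix(audio, frame=4800, target=-24.0, tolerance=2.0):
    """Toy loudness check: is the average level near the target, and
    how wide is the spread between the loudest and quietest frames?
    Real compliance uses ITU-R BS.1770 K-weighted LKFS with gating,
    not plain RMS."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame + 1, frame)]
    levels = np.array([rms_dbfs(f) for f in frames])
    average = rms_dbfs(audio)
    return {
        "average_dbfs": average,
        "within_spec": abs(average - target) <= tolerance,
        "dynamic_range_db": levels.max() - levels.min(),
    }
```

A quiet opening followed by a loud music build can sit exactly on the average target while still pushing the dynamic-range figure wide, which is why the quiet portions needed as much attention as the loud ones.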


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @AudioJeney.

Behind the Titles: Something’s Awry Productions

NAME: Amy Theorin

NAME: Kris Theorin

NAME: Kurtis Theorin

COMPANY: Something’s Awry Productions

CAN YOU DESCRIBE YOUR COMPANY?
We are a family-owned production company that writes, creates and produces funny, shareable web content and commercials, mostly for the toy industry. We are known for our slightly offbeat but intelligent humor and stop-motion animation. We also create short films of our own, both animated and live action.

WHAT’S YOUR JOB TITLE?
Amy: Producer, Marketing Manager, Business Development
Kris: Director, Animator, Editor, VFX, Sound Design
Kurtis: Creative Director, Writer

WHAT DOES THAT ENTAIL?
Amy: A lot! I am the point of contact for all the companies and agencies we work with. I oversee production schedules, all social media and marketing for the company. Because we operate out of a small town in Pennsylvania, we rely on Internet service companies such as Tongal, Backstage.com, Voices.com, Design Crowd and Skype to keep us connected with the national brands and talent we work with, who are mostly based in LA and New York. I don’t think we could have done what we are doing 10 years ago without living in a hub like LA or NYC.

Kris: I handle most of production, post production and some pre-production. Specifically, storyboarding, shooting, animating, editing, sound design, VFX and so on.

Kurtis: A lot of writing. I basically write everything that our company does, including commercials, pitches and shorts. I help out on our live-action shoots and occasionally direct. I make props and sets for our animation. I am also the resident voice actor at Something’s Awry.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Amy: Probably that playing with toys is something we get paid to do! Building Lego sets and setting up Hot Wheels jumps is all part of the job, and we still get excited when we get a new toy delivery — who wouldn’t? We also get to explore our inner child on a daily basis.

Hot Wheels

Kurtis: A lot of the arts and crafts knowledge I gathered from my childhood has become very useful in my job. We have to make a lot of weird things and knowing how to use clay and construction paper really helps.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Amy: See above. Seriously, we get to play with toys for a living! Being on set and working with actors and crew in cool locations is also great. I also like it when our videos exceed our clients’ expectations.

Kris: The best part of my job is being able to work with all kinds of different toys and just getting the chance to make these weird and entertaining movies out of them.

Kurtis: Having written something and seeing others react positively to it.

WHAT’S YOUR LEAST FAVORITE?
Amy/Kris: Working through the approval process with rounds of changes and approvals from multiple departments throughout a large company. Sometimes it goes smoothly and sometimes it doesn’t.

Kurtis: Sitting down to write.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Amy: Since most of the companies we work with are on the West Coast, my day kicks into high gear around 4:00pm East Coast time.

Kris: I work best in the morning.

Kurtis: My day often consists of hours of struggling to sit down and write followed by about three to four hours where I am very focused and get everything done. Most often those hours occur from 4pm to 7pm, but it varies a lot.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Amy: Probably helping to organize events somewhere. I am not happy unless I am planning or organizing a project or event of some sort.

Kris: Without this job, I’d likely go into some kind of design career or something involving illustration. For me, drawing is one of my secondary interests after filming.

Kurtis: I’d be telling stories in another medium. Whether I’d be making a living at it is another question.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Amy: I have always loved advertising and creative projects. When I was younger I was the advertising manager for PNC Bank, but left the corporate world when I had kids and started my own photography business, which I operated for 10 years. Once my kids became interested in film I wanted to foster that interest and here we are!

Kris: Filmmaking is something I’ve always had an interest in. I started when I was just eight years old and from there it’s always something I loved to do. The moment when I first realized this would be something I’d follow for an actual career was really around 10th grade, when I started doing it more on a professional level by creating little videos here and there for company YouTube channels. That’s when it all started to sink in that this could actually be a career for me.

Kurtis: I knew I wanted to tell stories very early on. Around 10 years old or so I started doing some home movies. I could get people to laugh and react to the films I made. It turned out to be the medium I could most easily tell stories in so I have stuck with it ever since.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Amy: We are currently in the midst of two major projects — one is a six-video series for Hot Wheels that involves creating six original song music videos parodying different music genres. The other is a 12-episode Scooby-Doo series for Warner Bros. that features live action and stop-motion animation. Each episode is a mini-mystery that Scooby and the gang solve. The series focuses on the imaginations of different children and the stories they tell.

We also have two short animations currently on the festival circuit. One is a hybrid of Lovecraft and a Scooby-Doo chase scene called Mary and Marsha in the Manor of Madness. The other is a dark fairytale called The Gift of the Woods.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Amy: Although I am proud of a lot of our projects I am most proud of the fact that even though we are such a small company, and live in the middle of nowhere, we have been able to work with companies around the world like Lego, Warner Bros. and Mattel. Things we create are seen all over the world, which is pretty cool for us.

Lego

Kris: The Lego Yellow Submarine Beatles film we created is what I’m most proud of. It just turned out to be this nice blend of wacky visuals, crazy action and short, concise storytelling that I try to do with most of my films.

Kurtis: I really like the way Mary and Marsha in the Manor of Madness turned out. So far it is the closest we have come to creating something with a unique feel and a sense of energetic momentum, two long-term goals I have for our work. We also recently wrapped filming on a 12-episode branded-content web series. It is our biggest project yet, and I am proud that we were able to handle its production really well.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Amy: Skype, my iPad and the rise of online technology companies such as Tongal, Voices.com, Backstage.com and DesignCrowd that help us get our job done.

Kris: Laptop computers, Wacom drawing tablets and iPhones.

Kurtis: My laptop (and its software, Adobe Premiere and Final Draft), my iPhone and my Kindle.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Amy: Being in this position I like to know what is going on in the industry, so I follow Ad Age, Ad Week, AdFreak, Mashable, Toy Industry News, io9, GeekTyrant and, of course, all the social media channels of our clients like Lego, Warner Bros., Hot Wheels and StikBots. We are also on Twitter (@AmyTheorin), Instagram (@Somethingsawryproductions) and Facebook (Somethingsawry).

Kris: Mostly YouTube and Facebook.

Kurtis: I follow the essays of Film Crit Hulk. His work on screenwriting and storytelling is incredibly well done and eye-opening. Other than that, I try to keep up with news and I follow a handful of serialized webcomics. I try to read, watch and play a lot of different things to get new ideas. You never know when the spaghetti westerns of Sergio Leone might give you the idea for your next toy commercial.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Amy: I don’t usually, but I do like to listen to podcasts. Some of my favorites are How I Built This; Yeah, That’s Probably an Ad; and Fresh Air.

Kris: I listen to whatever pop songs are most popular at the time. Currently, that would be Taylor Swift’s “Look What You Made Me Do.”

Kurtis: I listen to an eclectic mix of soundtracks, classic rock songs I’ve heard in movies, alternative songs I heard in movies, anime theme songs… basically songs I heard with a movie or game and can’t get out of my head. As for particular artists, I am partial to They Might Be Giants, Gorillaz, Queen, and the scores of Ennio Morricone, Darren Korb, Jeff Williams, Shoji Meguro and Yoko Kanno.

IS WORKING WITH FAMILY EASIER OR MORE DIFFICULT THAN WORKING/MANAGING IN A REGULAR AGENCY?
Amy: Both! I actually love working with my sons, and our skill sets are very complementary. I love to organize and my kids don’t. Being family, we can be very upfront with each other in terms of sharing our opinions without having to worry about hurting each other’s feelings.

We know at the end of the day we will always be there for each other no matter what. It sounds cliché but it’s true I think. We have a network of people we also work with on a regular basis who we have great relationships with as well. Sometimes it is hard to turn work off and just be a family though, and I find myself talking with them about projects more often than what is going on with them personally. That’s something I need to work on I guess!

Kris: It’s great because you can more easily communicate and share ideas with each other. It’s generally a lot more open. After a while, it really is just like working within an agency. Everything is fine-tuned and you have worked out a pipeline for creating and producing your videos.

Kurtis: I find it much easier. We all know how we do our best work and what our strengths are. It certainly helps that my family is very good at what they do. Not to mention working from home means I get to set my own hours and don’t have a commute. Sometimes it’s difficult to stay motivated when you’re not in a professional office setting but overall the pros far outweigh the cons.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Amy: I try to take time out to walk our dog, but mostly I love it so much I don’t mind working on projects all the time. If I don’t have something to work on I am not a happy camper. Sometimes I have to remember that not everyone is working on the weekends, so I can’t bother them with work questions!

Kris: It really helps that I don’t often get stressed. At least, not after doing this job for as long as I have. You really learn how to cope with it all. Oftentimes, it’s more just getting exhausted from working long hours. I’ll often just watch some YouTube videos at the end of a day or maybe a movie if there’s something I really want to see.

Kurtis: I like to read and watch interesting stories. I play a lot of games: board games, video games, tabletop roleplaying. I also find bike riding improves my mood a lot.

Creating sounds for Battle of the Sexes

By Jennifer Walden

Fox Searchlight’s biographical sports drama Battle of the Sexes delves into the personal lives of tennis players Bobby Riggs (Steve Carell) and Billie Jean King (Emma Stone) during the time surrounding their famous televised tennis match in 1973, known as the Battle of the Sexes. Directors Jonathan Dayton and Valerie Faris faithfully recreated the sports event using real-life tennis players Vince Spadea and Kaitlyn Christian as body doubles for Carell and Stone, and they used the original event commentary by announcer Howard Cosell to add an air of authenticity.

Oscar-nominated supervising sound editors Ai-Ling Lee (also sound designer/re-recording mixer) and Mildred Iatrou, from Fox Studios Post Production in LA, began their work during the director’s cut. Lee was on-site at Hula Post providing early sound support to film editor Pamela Martin, feeding her era-appropriate effects, like telephones, cars and cameras, and working on scenes that the directors wanted to tackle right away.

For director Dayton, the first priority scene was Billie Jean’s trip to a hair salon where she meets Marilyn Barnett (Andrea Riseborough). It’s the beginning of a romantic relationship, and Dayton wanted to explore the idea of ASMR (autonomous sensory meridian response, mainly an aural experience that causes the skin on the scalp and neck to tingle in a pleasing way) to make the haircut feel close and sensual. Lee explains that ASMR videos are popular on YouTube, and topping the list of experience triggers are hair dryers blowing, cutting hair and running fingers through hair. After studying numerous examples, Lee discovered “the main trick to ASMR is to have the sound source be very close to the mic and to use slow movements,” she says. “If it’s cutting hair, the scissors move very slow and deliberate, and they’re really close to the mic and you have close-up breathing.”

Lee applied those techniques to the recordings she made for the hair salon scene. Using a Sennheiser MKH 8040 and MKH 30 in an MS setup, Lee recorded the up-close sound of slowly cutting a wig’s hair. She also recorded several hair dryers slowly panning back and forth to find the right sound and speed that would trigger an ASMR feeling. “For the hairdryers, you don’t want an intense sound or something that’s too loud. The right sound is one that’s soothing. A lot of it comes down to just having quiet, close-up, sensual movement,” she says.

Ai-Ling Lee capturing the sound of hair being cut.

Recording the sounds was the easy part. Getting that experience to translate in a theater environment was the challenge because most ASMR videos are heard through headphones as a binaural, close experience. “In the end, I just took the mid-side recording and mixed it by slowly panning the sound across the front speakers and a little bit into the surrounds,” explains Lee. “Another trick to making that scene work was to slowly melt away the background sounds of the busy salon, so that it felt like it was just the two of them there.”
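A mid-side pair like Lee's MKH 8040/MKH 30 rig decodes to left/right stereo by simple sum and difference, which is what makes the image width adjustable after the fact. A minimal sketch of the standard decode (the `width` control is a common convention, not something from Lee's session):

```python
import numpy as np

def ms_to_stereo(mid, side, width=1.0):
    """Decode a mid-side recording to left/right:
    L = M + width * S, R = M - width * S.
    width scales the side (figure-8) signal to widen or
    narrow the stereo image; width=0 collapses to mono."""
    left = mid + width * side
    right = mid - width * side
    return left, right
```

Because the mid (cardioid) and side (figure-8) channels stay separate until this step, the decoded width can be narrowed for a close, intimate image or opened up toward the surrounds, as in the salon scene.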

Updating the Commentary
As Lee was working on the ASMR sound experience, Iatrou was back at Fox Studios working on another important sequence — the final match. The directors wanted to have Howard Cosell’s original commentary play in the film but the only recording available was a mixed mono track of the broadcast, complete with cheering crowds and a marching band playing underneath.

“At first, the directors sent us the pieces that they wanted to use and we brightened it a little because it was very dull sounding. They also asked us if we could get rid of the music, which we were not able to do,” says Iatrou.

As a work-around, the directors asked Iatrou to record Cosell’s lines using a soundalike. “We did a huge search. Our ADR/group leader Johnny Gidcomb at Loop De Loop held auditions of people who could do Howard Cosell. We did around 50 auditions and sent those to the directors. Finally, we got one guy they really liked.”

L-R: Mildred Iatrou and Ai-Ling Lee.

They spent a day recording the Cosell soundalike, using the same make and model mic that was used by Cosell and nearly all newscasters of that period — the Electro-Voice 635A Apple. Even with the “new” Cosell and the proper mic, the directors felt it still wasn’t right. “They really wanted to use Howard Cosell,” says Iatrou. “We ended up using all Howard Cosell in the film except for a word or a few syllables here and there, which we cut in from the Cosell soundalike. During the mix, re-recording mixer Ron Bartlett (dialogue/music) had to do very severe noise reduction in the segments with the music underneath. Then we put other music on top to help mask the degree of noise reduction that we did.”

Another challenge to the Howard Cosell commentary was that he wasn’t alone. Rosie Casals was also a commentator at the event. In the film, Rosie is played by actress Natalie Morales. Iatrou recorded Morales performing Casals’ commentary using the Electro-Voice 635A Apple mic. She then used iZotope RX 6’s EQ Match feature to help Morales’ lines sound similar to Cosell’s. “For the final mix, Ron Bartlett put more time and energy into getting the EQ to match. It’s interesting because we didn’t want Rosie’s lines to be as distressed as Cosell’s. We had to find this balance between making it work with Howard Cosell’s material but also make it a tiny bit better.”

After cutting Rosie’s new lines with Cosell’s original commentary, Iatrou turned her attention to the ambience. She played through the original match’s 90-minute mixed mono track to find clear sections of crowds, murmuring and cheering to cut under Rosie’s lines, so they would have a natural transition into Cosell’s lines. “For example, if there was a swell of the cheer on Howard Cosell’s line then I’d have to find a similar cheer to extend the sound under the actress’s line to fill it in.”

Crowd Sounds
To build up authentic crowd sounds for the recreated Battle of the Sexes match, Iatrou had the loop group perform call-outs that she and Lee heard in the original broadcast, like a woman yelling, “Come on Billie!” and a man shouting, “Come on Bobby baby!”

“The crowd is another big character in the match,” says Lee. “As the game went on, it felt like more of the women were cheering for Billie Jean and more of the men were cheering for Bobby Riggs. In the real broadcast, you hear one guy cheer for Bobby Riggs and then a woman would immediately cheer on Billie Jean. The guy would try to out-cheer her and she would cheer back. It’s this whole secondary situation going on and we have that in the film because we wanted to make sure we were as authentic as possible.”

Lee also wanted the tennis rackets to sound authentic. She tracked down a wooden racket and an aluminum racket and had them restrung with gut material at a local tennis store. She also had them strung with less tension than a modern racket. Then Lee and an assistant headed to an outdoor tennis court and recorded serves, bounces, net impacts, ball-bys and shoe squeaks using two mic setups — both with a Schoeps MK 41 and an MK 8 in an MS setup, paired with Sound Devices 702 and 722 recorders. “We miked it close and far so that it has some natural outdoor sound.”

Lee edited her recordings of tennis sounds and sporting event crowds with the production effects captured by sound mixer Lisa Pinero. “Lisa did a really good job of miking everything, and we were able to use some of the production crowd sounds, especially for the Margaret Court vs. Bobby Riggs match that happens before the final Battle of the Sexes match. In the final match, some of the tennis ball hits were layers of what I recorded and the production hits.”

Foley
Another key sonic element in the recreated Battle of the Sexes match was the Foley work by Dan O’Connell and John Cucci of One Step Up, located on the Fox Studios lot. During the match, Billie Jean’s strategy was to wear out the older and out-of-shape Bobby Riggs by making him run all over the court. “As the game went on, I wanted Bobby’s footsteps to feel heavier, with more thumps, as though he’s running out of steam trying to get the ball,” explains Lee. “Dan O’Connell did a good job of creating that heavy stomping foot, but with a slight wood resonance too. We topped that with shoe squeaks — some that Dan did and some that I recorded.”

The final Battle of the Sexes match was by far the most challenging scene to mix, says Lee. Re-recording mixers Bartlett and Doug Hemphill, as well as Lee, mixed the film in 7.1 surround at Formosa Group’s Hollywood location on Stage A using Avid S6 consoles. In the final match, they had Cosell’s original commentary blended with actress Morales’ commentary as Rosie Casals. There was music and layered crowds with call-outs. Production sound, field recordings and Foley meshed to create the diegetic effects. “There were so many layers involved. Deciding how the sounds build and choosing what to play when — the crowds being tied to Howard Cosell — made it challenging to balance that sequence,” concludes Lee.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Emmy Awards: American Horror Story: Roanoke

A chat with supervising sound editor Gary Megregian

By Jennifer Walden

Moving across the country and buying a new house is an exciting and scary process, but when it starts raining teeth at that new residence, the scary factor pretty much cancels out the excitement. That’s the situation that Matt and Shelby, a couple from Los Angeles, find themselves in for American Horror Story’s sixth season on FX Networks. After moving into an old mansion in Roanoke, North Carolina, they discover that the dwelling and the local neighbors aren’t so accepting of outsiders.

American Horror Story: Roanoke explores a true-crime-style format that uses re-enactments to play out the drama. The role of Matt is played by Andre Holland in “reality” and by Cuba Gooding, Jr. in the re-enactments. Shelby is played by Lily Rabe and Sarah Paulson, respectively. It’s an interesting approach that added a new dynamic to an already creative series.

Emmy-winning supervising sound editor Gary Megregian, of Technicolor at Paramount, is currently working on his seventh season of American Horror Story, coming to FX in early September. He took some time out to talk about Season 6, Episode 1, Chapter 1, for which he and his sound editorial team have been nominated for an Emmy for Outstanding Sound Editing for a Limited Series. They won the Emmy in 2013, and this year marks their sixth nomination.

American Horror Story: Roanoke is structured as a true-crime series with re-enactments. What opportunities did this format offer you sound-wise?
This season was a lot of fun in that we had both the realistic world and the creative world to play in. The first half of the series dealt more with re-enactments than the reality-based segments, especially in Chapter 1. Aside from some interview segments, it was all re-enactments. The re-enactments were where we had more creative freedom for design. It gave us a chance to create a voice for the house and the otherworldly elements.

Gary Megregian

Was series creator Ryan Murphy still your point person for sound direction? For Chapter 1, did he have specific ideas for sound?
Ryan Murphy is definitely the single voice in all of his shows but my point person for sound direction is his executive producer Alexis Martin Woodall, as well as each episode’s picture editor.

Having worked with them for close to eight years now, there’s a lot of trust. I usually have a talk with them early each season about what direction Ryan wants to go and then talk to the picture editor and assistant as they’re building the show.

The first night in the house in Roanoke, Matt and Shelby hear this pig-like scream coming from outside. That sound occurs often throughout the episode. How did that sound come to be? What went into it?
The pig sounds are definitely a theme that goes through Season 6, but they started all the way back in Season 1 with the introduction of Piggy Man. Originally, when Shelby and Matt first hear the pig we had tried designing something that fell more into an otherworldly sound, but Ryan definitely wanted it to be real. Other times, when we see Piggy Man we went back to the design we used in Season 1.

The doors in the house sound really cool, especially that back door. What were the sources for the door sounds? Did you do any processing on the recordings to make them spookier?
Thanks. Some of the doors came from our library at Technicolor and some were from a crowd-sourced project from New Zealand-based sound designer Tim Prebble. I had participated in a project where he asked everyone involved to record a complete set of opens, closes, knocks, squeaks, etc. for 10 doors. When all was said and done, I gained a library of over 100GB of amazing door recordings. That’s my go-to for interesting doors.

As far as processing goes, nothing out of the ordinary was used. It’s all about finding the right sound.

When Shelby and Lee (Adina Porter) are in the basement, they watch this home movie featuring Piggy Man. Can you tell me about the sound work there?
The home movie was a combination of the production dialogue, Foley, a couple of instances of pig squeals and Piggy Man design, along with VHS and CRT noise. For dialogue, we didn’t clean up the production tracks too much, and Foley was used to help ground it. Once we got to the mix stage, re-recording mixers Joe Earle and Doug Andham helped bring it all together in their treatment.

What was your favorite scene to design? Why? What went into the sound?
One of my favorite scenes is the hail/teeth storm when Shelby’s alone in the house. I love the way it starts slow and builds from the inside, hearing the teeth on the skylight and windows. Once we step outside it opens up to surround us. I think our effects editor/designer Tim Cleveland did a great job on this scene. We used a number of hail/rain recordings along with Foley to help with some of the detail work, especially once we step outside.

Were there any audio tools that were helpful when working on Chapter 1? Can you share specific examples of how you used them?
I’m going to sound like many others in this profession, but I’d say iZotope RX. Ryan is not a big fan of ADR, so we have to make the production work. I can count on one hand the number of times we’ve had any actors in for ADR last season. That’s a testament to our production mixer Brendan Beebe and dialogue editor Steve Stuhr. While the production is well covered and well recorded, Steve still has his work cut out for him to present a track that’s clean. The iZotope RX suite helps with that.

Why did you choose Chapter 1 for Emmy consideration for its sound editorial?
One of the things I love about working on American Horror Story is that every season is like starting a new show. It’s fun to establish the sound and the tone of a show, and Chapter 1 is no exception. It’s a great representation of our crew’s talent and I’m really happy for them that they’re being recognized for it. It’s truly an honor.

Behind the Title: 3008 Editorial’s Matt Cimino and Greg Carlson

NAMES: Matt Cimino and Greg Carlson

COMPANY: 3008 Editorial in Dallas

WHAT’S YOUR JOB TITLE?
Cimino: We are sound designers/mixers.

WHAT DOES THAT ENTAIL?
Cimino: Audio is a storytelling tool. Our job is to enhance the story directly or indirectly and create the illusion of depth, space and a sense of motion with creative sound design and then mix that live in the environment of the visuals.

Carlson: And whenever someone asks, I always tend to prioritize sound design before mixing. Although I love every aspect of what we do, when a spot hits my room as a blank slate, it’s really the sound design that can take it down a hundred different paths. And for me, it doesn’t get better than that.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Carlson: I’m not sure a brief job title can encompass what anyone really does. I am a composer as well as a sound designer/mixer, so I bring that aspect into my work. I love musical elements that help stitch a unified sound into a project.

Cimino: That there really isn’t “a button” for that!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Carlson: The freedom. Having the opportunity to take a project where I think it should go and along the way, pushing it to the edge and back. Experimenting and adapting makes every spot a completely new trip.

Matt Cimino

Cimino: I agree. It’s the challenge of creating an expressive and aesthetically pleasing experience by taking the soundtrack to a whole new level.

WHAT’S YOUR LEAST FAVORITE?
Cimino: Not much. However, being an imperfect perfectionist, I get pretty bummed when I do not have enough time to perfect the job.

Carlson: People always say, “It’s so peaceful and quiet in the studio, as if the world is tuned out.” The downside of that is producer-induced near heart attacks. See, when you’re rocking out at max volume and facing away from the door, well, people tend to come in and accidentally scare you to death.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Cimino: I’m a morning person!

Carlson: Time is an abstract notion in a dark room with no windows, so no time in particular. However, the funniest time of day is when you notice you’re listening about 15 dB louder than the start of the day. Loud is better.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cimino: Carny. Or Evel Knievel.

Carlson: Construction/carpentry. Before audio, I had lots of gritty “hands-on” jobs. My dad taught me about work ethic, to get my hands dirty and to take pride in everything. I take that same approach with every spot I touch. Now I just sit in a nice chair while doing it.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Cimino: I’ve had a love for music since high school. I used to read all the liner notes on my vinyl. One day I remember going through my father’s records and thinking at that moment, I want to be that “sound engineer” listed in the notes. This led me to study audio at Columbia College in Chicago. I quickly gravitated towards post production audio classes and training. When I wasn’t recording and mixing music, I was doing creative sound design.

Carlson: I was always good with numbers and went to Michigan State to be an accountant. But two years in, I was unhappy. All I wanted was to work on music and compose, so I switched to audio engineering and never looked back. I knew the second I walked into my first studio, I had found my calling. People always say there isn’t a dream job; I disagree.

CAN YOU DESCRIBE YOUR COMPANY?
Cimino: A fun, stress-free environment full of artistry and technology.

Carlson: It is a place I look forward to every day. It’s like a family, solely focused on great creative.

CAN YOU NAME SOME RECENT SPOTS YOU HAVE WORKED ON?
Cimino: Snapple, RAM, Jeep, Universal Orlando, Cricket Wireless, Maserati.

Carlson: AT&T, Lay’s, McDonald’s, Bridgestone Golf.

Greg Carlson

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Carlson: It’s nearly impossible to pick one, but there is a project I see as pivotal in my time here in Dallas. It was shortly after I arrived six years ago. I think it was a boost to my confidence and, in turn, enhanced my style. The client was The Home Depot and the campaign was “Let’s Do This.” A creative I admire greatly here in town gave me the chance to spearhead the sonic approach for the work. There are many moments, milestones and memories, but this was a special project to me.

Cimino: There are so many. One of the most fun campaigns I worked on was for Snapple, where each spot opened with the “pop!” of the Snapple cap. I recorded several pops (close-miked) and selected one that I manipulated to sound larger than life but also retain the sound of the brand’s signature cap pop being opened. After the cap pops, the spot transforms into an exploding fruit infusion. The sound was created by smashing Snapple bottles for the glass break, crushing, smashing and squishing fruit with my hands, and using a hydrophone to record splashing and underwater sounds to create the slow-motion effect of the fruit morphing. So much fun.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cimino: During a mix, my go-tos are iZotope, Sound Toys and Slate Digital. Outside the studio I can’t live without my Apple!

Carlson: ProTools, all things iZotope, Native Instruments.

THIS IS A HIGH-STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cimino: Family and friends. I love watching my kiddos play select soccer. Relaxing pool or beachside with a craft cider. Or on a single path/trail with my mountain bike.

Carlson: I work on my home, build things, like to be outside. When I need to detach for a bit, I prefer dangerous power tools or being on a body of water.

Richard King talks sound design for Dunkirk

Using historical sounds as a reference

By Mel Lambert

Writer/director Christopher Nolan’s latest film follows the fate of nearly 400,000 allied soldiers who were marooned on the beaches of Dunkirk, and the extraordinary plans to rescue them using small ships from nearby English seaports. Although, sadly, more than 68,000 soldiers were captured or killed during the Battle of Dunkirk and the subsequent retreat, more than 300,000 were rescued over a nine-day period in May 1940.

Uniquely, Dunkirk’s primary story arcs — the Mole, or harbor from which the larger ships can take off troops; the Sea, focusing on the English flotilla of small boats; and the Air, spotlighting the activities of Spitfire pilots who protect the beaches and ships from German air-force attacks — follow different timelines, with the Mole sequences being spread over a week, the Sea over a day and the Air over an hour. A Warner Bros. release, Dunkirk stars Fionn Whitehead, Mark Rylance, Cillian Murphy, Tom Hardy and Kenneth Branagh. (An uncredited Michael Caine is the voice heard during various radio communications.)

Richard King

Marking his sixth collaboration with Nolan, supervising sound editor Richard King worked previously on Interstellar (2014), The Dark Knight Rises, Inception, The Dark Knight and The Prestige. He brings his unique sound perspective to these complex narratives, often with innovative sound design. Born in Tampa, King attended the University of South Florida, graduating with a BFA in painting and film, and entered the film industry in 1985. He is the recipient of three Academy Awards for Best Achievement in Sound Editing for Inception, The Dark Knight and Master and Commander: The Far Side of the World (2003), plus two BAFTA Awards and four MPSE Golden Reel Awards for Best Sound Editing.

King, along with Alex Gibson, recently won the Academy Award for Achievement in Sound Editing for Dunkirk.

The Sound of History
“When we first met to discuss the film,” King recalls, “Chris [Nolan] told me that he wanted Dunkirk to be historically accurate but not slavishly so — he didn’t plan to make a documentary. For example, several [Junkers Ju 87] Stuka dive bombers appear in the film, but there are no high-quality recordings of these aircraft, which had sirens built into the wheel struts for intimidation purposes. There are no Stukas still flying, nor could I find any design drawings so we could build our own. Instead, we decided to re-imagine the sound with a variety of unrelated sound effects and ambiences, using the period recordings as inspiration. We went out into a nearby desert with some real air raid sirens, which we over-cranked to make them more and more piercing — and to add some analog distortion. To this more ‘pure’ version of the sound we added an interesting assortment of other disparate sounds. I find the result scary as hell and probably very close to what the real thing sounded like.”

For other period Axis and Allied aircraft, King was able to locate several British Supermarine Spitfire fighters and a Bristol Blenheim bomber, together with a German Messerschmitt Bf 109 fighter. “There are about 200 Spitfires in the world that still fly; three were used during filming of Dunkirk,” King continues. “We received those recordings, and in post recorded three additional Spitfires.”

King was able to place up to 24 microphones in various locations around the airframe near the engine — a supercharged, liquid-cooled 27-liter Rolls-Royce Merlin V-12 (later Spitfires used 37-liter Griffon motors) — as well as close to the exhaust and within the cockpit, as the pilots performed a number of aerial maneuvers. “We used both mono and stereo mics to provide a wide selection for sound design,” he says.

King was looking for the sound of an “air ballet” with the aircraft moving quickly across the sky. “There are moments when the plane sounds are minimized to place the audience more in the pilot’s head, and there are sequences where the plane engines are more prominent,” he says. “We also wanted to recreate the vibrations of this vintage aircraft, which became an important sound design element and was inspired by the shuddering images. I remember that Chris went up in a trainer aircraft to experience the sensation for himself. He reported that it was extremely loud with lots of vibration.”

To match up with the edited visuals secured from 65/70mm IMAX and Super Panavision 65mm film cameras, King needed to produce a variety of aircraft sounds. “We had an ex-RAF pilot who had flown in modern dogfights recreate some of those wartime flying gymnastics. The planes don’t actually produce dramatic changes in sound when throttling and maneuvering, so I came up with a simple and effective way to accentuate this somewhat. I wanted the planes to respond to the pilot’s stick and throttle movements immediately.”

For armaments, King’s sound effects recordists John Fasal and Eric Potter oversaw the recording of a vintage Bofors 40mm anti-aircraft cannon, as seen aboard the allied destroyers and support ships. “We found one in Napa Valley [north of San Francisco],” says King. “The owner had to make up live rounds, which we fired into a nearby hill. We also recorded a number of WWII British Lee-Enfield bolt-action rifles and German machine guns on a nearby range. We had to recreate the sound of the Spitfire’s guns, because the actual guns fitted to the Spitfires overheat when fired at sea level and cannot maintain the 1,000 rounds/minute rate we were looking for, except at altitude.”

King readily acknowledges the work at Warner Bros. Sound Services of sound effects editor Michael Mitchell, who worked on several scenes, including the ship sinkings, and sound effects editor Randy Torres, who worked with King on the plane sequences.

Group ADR was done primarily in the UK, “where we recorded at De Lane Lea and onboard a decommissioned WWII warship owned by the Imperial War Museum,” King recalls. “The HMS Belfast, which is moored on the River Thames in central London, was perfect for the reverberant interiors we needed for the various ships that sink in the film. We also secured some realistic Foley of people walking up and down ladders and on the superstructure.” Hugo Weng served as dialog editor and David Bach as supervising ADR editor.

Sounds for Moonstone, the key small boat whose fortunes the film follows across the English Channel, were recorded out of Marina del Rey in Southern California, including its motor and water slaps against the hull. “We also secured some nice Foley on deck, as well as opening and closing of doors,” King says.

Conventional Foley was recorded at Skywalker Sound in Northern California by Shelley Roden, Scott Curtis and John Roesch. “Good Foley was very important for Dunkirk,” explains King. “It all needed to sound absolutely realistic and not like a Hollywood war movie, with a collection of WWII clichés. We wanted it to sound as it would for the film’s characters. John and his team had access to some great surfaces and textures, and a wonderful selection of props.” Michael Dressel served as supervising Foley editor.

In terms of sound design, King offers that he used historical sounds as a reference, to conjure up the terror of the Battle for Dunkirk. “I wanted it to feel like a well-recorded version of the original event. The book ‘Voices of Dunkirk,’ written by Joshua Levine and based on a compilation of first-hand accounts of the evacuation, inspired me and helped me shape the explosions on the beach, with the muffled ‘boom’ as the shells and bombs bury themselves in the sand and then explode. The under-water explosions needed to sound more like a body slam than an audible noise. I added other sounds that amped it a couple more degrees.”

The soundtrack was re-recorded in 5.1-channel format at Warner Bros. Sound Services Stage 9 in Burbank during a six-week mix, with Gary Rizzo handling dialog and Gregg Landaker overseeing sound effects and music; it was Landaker’s last film before retiring. “There was almost no looping on the film aside from maybe a couple of lines,” King recalls. “Hugo Weng mined the recordings for every gem, and Gary [Rizzo] was brilliant at cleaning up the voices and pushing them through the barrage of sound provided by sound effects and music, somehow without making them sound pushed. Production recordist Mark Weingarten faced enormous challenges, contending with strong wind and salt spray, but he managed to record tracks Gary could work with.”

The sound designer reports that he provided some 20 to 30 tracks of dialog and ADR “with options for noisy environments,” plus 40 to 50 tracks of Foley, depending on the action. This included shoes and hob-nailed army boots, and groups of 20, especially in the ship scenes. “The score by composer Hans Zimmer kept evolving as we moved through the mixing process,” says King. “Music editor Ryan Rubin and supervising music editor Alex Gibson were active participants in this evolution.”

“We did not want to repeat ourselves or repeat others’ work,” King concludes. “All sounds in this movie mean something. Every scene had to be designed with a hard-hitting sound. You need to constantly question yourself: ‘Is there a better sound we could use?’ Maybe something different that is appropriate to the sequence that recreates the event in a new and fresh light? I am super-proud of this film and the track.”

Nolan — who was born in London to an American mother and an English father and whose family subsequently split their time between London and Illinois — has this quote on his IMDb page: “This is an essential moment in the history of the Second World War. If this evacuation had not been a success, Great Britain would have been obliged to capitulate. And the whole world would have been lost, or would have known a different fate: the Germans would undoubtedly have conquered Europe, the US would not have returned to war. Militarily it is a defeat; on the human plane it is a colossal victory.”

Certainly, the loss of life and supplies was profound — wartime Prime Minister Winston Churchill described Operation Dynamo as “the greatest military disaster in our long history.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

The sounds of Spider-Man: Homecoming

By Jennifer Walden

Columbia Pictures and Marvel Studios’ Spider-Man: Homecoming, directed by Jon Watts, casts Tom Holland as Spider-Man, a role he first played in 2016 for Marvel Studios’ Captain America: Civil War (directed by Joe and Anthony Russo).

Homecoming reprises a few key character roles, like Tony Stark/Iron Man (Robert Downey Jr.) and Aunt May Parker (Marisa Tomei), and it picks up a thread of Civil War’s storyline. In Civil War, Peter Parker/Spider-Man helped Tony Stark’s Avengers in their fight against Captain America’s Avengers. Homecoming picks up after that battle, as Parker settles back into his high school life while still fighting crime on the side to hone his superhero skills. He seeks to prove himself to Stark but ends up becoming entangled with the supervillain Vulture (Michael Keaton).

Steven Ticknor

Spider-Man: Homecoming supervising sound editors/sound designers Steven Ticknor and Eric A. Norris — working at Culver City’s Sony Pictures Post Production Services — both brought Spidey experience to the film. Ticknor was a sound designer on director Sam Raimi’s Spider-Man (2002) and Norris was supervising sound editor/sound designer on director Marc Webb’s The Amazing Spider-Man 2 (2014). Having worked on two different versions of Spider-Man, Ticknor and Norris together brought a well-rounded knowledge of the superhero’s sound history to Homecoming. They knew what had worked in the past and what it would take to make this Spider-Man sound fresh. “This film took a ground-up approach but we also took into consideration the magnitude of the movie,” says Ticknor. “We had to keep in mind that Spider-Man is one of Marvel’s key characters and he has a huge fan base.”

Web Slinging
Since Homecoming is a sequel, Ticknor and Norris honored the web-slinging sound established in Captain America: Civil War, but they also enhanced it to create a subtle difference between Spider-Man’s two suits in Homecoming. There’s the teched-out Tony Stark-built suit that uses the Civil War web-slinging sound, and then there’s Spider-Man’s homemade suit. “I recorded a couple of 5,000-foot magnetic tape cores unraveling very fast, and to that I added whooshes and other elements that gave a sense of speed. Underneath, I had some of the web sounds from the Tony Stark suit. That way the sound for the homemade suit had the same feel as the Stark suit but with an old-school flair,” explains Ticknor.

One new feature of Spider-Man’s Stark suit is that it has expressive eye movements. His eyes can narrow or grow wide with surprise, and those movements are articulated with sound. Norris says, “We initially went with a thin servo-type sound, but the filmmakers were looking for something less electrical. We had the idea to use the lens of a DSLR camera to manually zoom it in and out, so there’s no motor sound. We recorded it up close in the quiet environment of an unused ADR stage. That’s the primary sound for his eye movement.”

Droney
Another new feature is the addition of Droney, a small reconnaissance drone that pops off of Spider-Man’s suit and flies around. The sound of Droney was one of director Watts’ initial focus points. He wanted it to sound fun and have a bit of personality. He wanted Droney “to be able to vocalize in a way, sort of like Wall-E,” explains Norris.

Ticknor had the idea of creating Droney’s sound using a turbo toy — a small toy that has a mouthpiece and a spinning fan. Blowing into the mouthpiece makes the fan spin, which generates a whirring sound. The faster the fan spins, the higher the pitch of the generated sound. By modulating the pitch, they created a voice-like quality for Droney. Norris and sound effects editor Andy Sisul performed and recorded an array of turbo toy sounds to use during editorial. Ticknor also added in the sound of a reel-to-reel machine rewinding, which he sped up and manipulated “so that it sounded like Droney was fluttering as it was flying,” Ticknor says.

The Vulture
Supervillain the Vulture offers a unique opportunity for sound design. His alien-tech enhanced suit incorporates two large fans that give him the ability to fly. Norris, who was involved in the initial sound design of Vulture’s suit, created whooshes using Whoosh by Melted Sounds — a whoosh generator that runs in Native Instruments Reaktor. “You put individual samples in there and it creates a whoosh by doing a Doppler shift and granular synthesis as a way of elongating short sounds. I fed different metal ratcheting sounds into it because Vulture’s suit almost has these metallic feathers. We wanted to articulate the sound of all of these different metallic pieces moving together. I also fed sword shings into it and came up with these whooshes that helped define the movement as the Vulture was flying around,” he says. Sound designer/re-recording mixer Tony Lamberti was also instrumental in creating Vulture’s sound.
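The elongation technique Norris describes — granular synthesis to stretch a short sample, plus a Doppler-style sweep for movement — can be sketched in a few lines of NumPy. This is not the Melted Sounds Whoosh ensemble itself, just a toy illustration of the idea; the grain size, sweep depth and the white-noise stand-in for a metal-ratchet sample are all assumptions.

```python
import numpy as np

SR = 48000  # sample rate

def granular_stretch(x, factor, grain=2048, overlap=0.5):
    """Elongate a short sample by overlapping small grains of it.

    Grains are read from the source more slowly than they are written,
    stretching the sound without the pitch drop of a simple slowdown.
    """
    hop_out = int(grain * (1 - overlap))
    hop_in = max(1, int(hop_out / factor))      # read slower than we write
    n_grains = (len(x) - grain) // hop_in
    out = np.zeros(n_grains * hop_out + grain)
    win = np.hanning(grain)
    for i in range(n_grains):
        g = x[i * hop_in : i * hop_in + grain] * win
        out[i * hop_out : i * hop_out + grain] += g
    return out

def doppler_sweep(x):
    """Apply a rise-and-fall pitch and amplitude curve for a fly-by feel."""
    t = np.linspace(0, 1, len(x))
    rate = 1.0 + 0.3 * np.sin(np.pi * t)        # pitch up, then back down
    pos = np.cumsum(rate)
    pos = pos / pos[-1] * (len(x) - 1)          # map to valid sample positions
    swept = np.interp(pos, np.arange(len(x)), x)
    return swept * (0.4 + 0.6 * np.sin(np.pi * t))  # loudest at the midpoint

ratchet = np.random.randn(SR // 4)              # stand-in for a ratchet recording
whoosh = doppler_sweep(granular_stretch(ratchet, factor=4.0))
```

The same pipeline works on any short source — sword shings, metal scrapes — which is why feeding different samples into a generator like this yields whooshes with distinct characters.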

Alien technology is prevalent in the film. For instance, it’s a key ingredient to Vulture’s suit. The film’s sound needed to reflect the alien influence but also had to feel realistic to a degree. “We started with synthesized sounds, but we then had to find something that grounded it in reality,” reports Ticknor. “That’s always the balance of creating sound design. You can make it sound really cool, but it doesn’t always connect to the screen. Adding organic elements — like wind gusts and debris — makes it suddenly feel real. We used a lot of synthesized sounds to create Vulture, but we also used a lot of real sounds.”

The Washington Monument
One of the big scenes that Ticknor handled was the Washington Monument elevator sequence. Spider-Man stands on the top of the Washington Monument and prepares to jump over a helicopter that looms ever closer. He clears the helicopter’s blades and shoots a web onto the helicopter’s skid, using that to sling himself through a window just in time to shoot another web that grabs onto the compromised elevator car that contains his friends. “When Spider-Man jumps over the helicopter, I couldn’t wait to make that work perfectly,” says Ticknor. “When he is flying over the helicopter blades it sounds different. It sounds more threatening. Sound creates an emotion but people don’t realize how sound is creating the emotion because it is happening so quickly sometimes.”

To achieve a more threatening blade sound, Ticknor added in scissor slicing sounds, which he treated using a variety of tools like zPlane’s Elastique Pitch 2 and plug-ins from FabFilter and Soundtoys, all within the Avid Pro Tools 12 environment. “This made the slicing sound like it was about to cut his head off. I took the helicopter blades and slowed them down and added low-end sweeteners to give a sense of heaviness. I put all of that through the plug-ins and basically experimented. The hardest part of sound design is experimenting and finding things that work. There’s also music playing in that scene as well. You have to make the music play with the sound design.”
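The slow-down-and-sweeten move Ticknor describes is an old tape trick: playing a recording back slower also drops its pitch, and a synthesized sub tone adds weight underneath. A rough NumPy sketch, with noise standing in for the helicopter recording and an arbitrary 35Hz sweetener:

```python
import numpy as np

SR = 48000  # sample rate

def varispeed(x, speed):
    """Tape-style speed change: slower playback also lowers the pitch."""
    idx = np.arange(0, len(x) - 1, speed)       # fractional read positions
    return np.interp(idx, np.arange(len(x)), x)

def low_end_sweetener(n, freq=35.0, sr=SR):
    """A quiet sub-frequency rumble to layer under the slowed blades."""
    t = np.arange(n) / sr
    return 0.3 * np.sin(2 * np.pi * freq * t) * np.hanning(n)

blades = np.random.randn(SR)                    # stand-in for a blade recording
slowed = varispeed(blades, speed=0.5)           # half speed -> one octave down
heavy = slowed + low_end_sweetener(len(slowed))
```

Dedicated pitch tools like Elastique can decouple pitch from duration; the varispeed version here deliberately keeps them coupled, which is exactly what lends the slowed blades their sense of mass.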

When designing sounds, Ticknor likes to generate a ton of potential material. “I make a library of sound effects — it’s like a mad science experiment. You do something and then wonder, ‘How did I just do that? What did I just do?’ When you are in a rhythm, you do it all because you know there is no going back. If you just do what you need, it’s never enough. You always need more than you think. The picture is going to change and the VFX are going to change and timings are going to change. Everything is going to change, and you need to be prepared for that.”

Syncing to Picture
To help keep the complex soundtrack in sync with the evolving picture, Norris used Conformalizer by Cargo Cult. Using the EDL of picture changes, Conformalizer makes the necessary adjustments in Pro Tools to resync the sound to the new picture.
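Conceptually, a conform pass like Conformalizer’s boils down to mapping each clip’s position in the old cut to a position in the new cut, using the change list derived from the EDL. The sketch below is a drastically simplified model — hypothetical times, in seconds rather than timecode, and nothing like the tool’s actual data format:

```python
# A change list maps regions of the old cut to positions in the new cut.
# Each entry: (old_start, new_start, duration), all in seconds (hypothetical).
CHANGES = [
    (0.0,   0.0,   120.0),  # opening reel unchanged
    (120.0, 126.5, 300.0),  # 6.5s of new VFX shots inserted at 2:00
    (420.0, 418.0, 600.0),  # a 2s trim later in the reel
]

def conform(clip_start, changes):
    """Return the clip's new start time, or None if its region was cut."""
    for old, new, dur in changes:
        if old <= clip_start < old + dur:
            return new + (clip_start - old)
    return None

# Every clip in a session is moved by the same rules, which is why a single
# change list can be run against multiple Pro Tools sessions.
moved = conform(200.0, CHANGES)   # a clip after the insert shifts by +6.5s
```

The real tool also has to handle clips that straddle a change boundary and regions that were deleted outright, which is where the detail-oriented checking Norris mentions comes in.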

Norris explains some key benefits of Conformalizer. “First, when you’re working in Pro Tools you can only see one picture at a time, so you have to go back and forth between the two different pictures to compare. With Conformalizer, you can see the two different pictures simultaneously. It also does a mathematical computation on the two pictures in a separate window, a difference window, which shows the differences in white. It highlights all the subtle visual effects changes that you may not have noticed.

Eric Norris

“For example, in the beginning of the film, Peter leaves school and heads out to do some crime fighting. In an alleyway, he changes from his school clothes into his Spider-Man suit. As he’s changing, he knocks into a trash can and a couple of rats fall out and scurry away. Those rats were CG and they didn’t appear until the end of the process. So the rats in the difference window were bright white while everything else was a dark color.”

Another benefit is that the Conformalizer change list can be used on multiple Pro Tools sessions. Most feature films have the sound effects, including Foley and backgrounds, in one session. For Spider-Man: Homecoming, it was split into multiple sessions, with Foley and backgrounds in one session and the sound effects in another.

“Once you get that change list you can run it on all the Pro Tools sessions,” explains Norris. “It saves time and it helps with accuracy. There are so many sounds and details that match the visuals and we need to make sure that we are conforming accurately. When things get hectic, especially near the end of the schedule, and we’re finalizing the track and still getting new visual effects, it becomes a very detail-oriented process and any tools that can help with that are greatly appreciated.”

Creating the soundtrack for Spider-Man: Homecoming required collaboration on a massive scale. “When you’re doing a film like this, it just has to run well. Unless you’re really organized, you’ll never be able to keep up. That’s the beautiful thing, when you’re organized you can be creative. Everything was so well organized that we got an opportunity to be super creative and for that, we were really lucky. As a crew, we were so lucky to work on this film,” concludes Ticknor.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: Nylon Studios creative director Simon Lister

NAME: Simon Lister

COMPANY: Nylon Studios

CAN YOU DESCRIBE YOUR COMPANY?
Nylon Studios is a New York- and Sydney-based music and sound house offering original composition and sound design for films and commercials. I am based in the Sydney location.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
I help manage and steer the company, while also serving as a sound designer, client liaison, soundtrack creative and thinker.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People are constantly surprised by the amount of work that goes into making a soundtrack.

WHAT TOOLS DO YOU USE?
I use Avid Pro Tools and some really cool plug-ins.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is being able to bring a film to life through sound.

WHAT’S YOUR LEAST FAVORITE?
At times, clients can be so stressed and make things difficult. However, sometimes we just need to sit back and look at how lucky we are to be in such a fun industry. So in that case, we try our best to make the client’s experience with us as relaxing and seamless as possible.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Lunchtime.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Anything that involves me having a camera in my hand and taking pictures.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was pretty young. I got a great break when I was 19 years old in one of the best music studios in New Zealand and haven’t stopped since. Now, I’ve been doing this for 31 years (cough).

Honda Civic spot

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
In the last couple of months I think I’ve counted several different car brand spots we’ve worked on, including Honda, Hyundai, Subaru, Audi and Toyota. All great spots to sink our teeth and ears into.

We have also been working on the great wildlife series Tales by Light, which airs on National Geographic and Netflix.

For Every Child

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It would be having the opportunity to film and direct my own commercial, For Every Child, for UNICEF’s global rebranding TVC. We had the amazing voiceover of Liam Neeson and the incredible singing voice of Lisa Gerrard (Gladiator, Heat, Black Hawk Down).

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My camera, my computer and my motorbike.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I ride motorbikes throughout Morocco, Baja, the Himalayas, Mongolia, Vietnam, Thailand, New Zealand and in the traffic of India.

Audio post vet Rex Recker joins Digital Arts in NYC

Rex Recker has joined the team at New York City’s Digital Arts as a full-time audio post mixer and sound designer. Recker, who co-founded NYC’s AudioEngine after working as VP and audio post mixer at Photomag recording studios, is an award-winning mixer with a long list of credits. Over the span of his career he has worked on countless commercials with clients including McCann Erickson, JWT, Ogilvy & Mather, BBDO, DDB, HBO and Warner Books.

Over the years, Recker has developed a following of clients who seek him out for his expertise in surround sound mixing for commercials airing via broadcast, the Web and in cinemas. In addition to spots, Recker also mixes long-form projects, including broadcast specials and documentaries.

Since joining the Digital Arts team, Recker has already worked on several commercial campaigns, promos and trailers for such clients as Samsung, SlingTV, Ford, Culturelle, Orvitz, NYC Department of Health, and HBO Documentary Films.

Digital Arts, owned by Axel Ericson, is an end-to-end production, finishing and audio facility.

Sound — Wonder Woman’s superpower

By Jennifer Walden

When director Patty Jenkins first met with supervising sound editor James Mather to discuss Warner Bros. Wonder Woman, they had a conversation about the physical effects of low-frequency sound energy on the human body, and how it could be used to manipulate an audience.

“The military spent a long time investigating sound cannons that could fire frequencies at groups of people and debilitate them,” explains Mather. “They found that the lower frequencies were far more effective than the very high frequencies. With the high frequencies, you can simply plug your ears and block the sound. The low-end frequencies, however, impact the fluid content of the human body. Frequencies around 5Hz-9Hz can’t be heard, but can have physiological, almost emotional effects on the human body. Patty was fascinated by all of that. So, we had a very good sound-nerd talk at our first meeting — before we even talked about the story of the film.”

Jenkins was fascinated by the idea of sound playing a physical role as well as a narrative one, and that direction informed all of Mather’s sound editorial choices for Wonder Woman. “I was amazed by Patty’s intent, from the very beginning, to veer away from very high-end sounds. She did not want to have those featured heavily in the film. She didn’t want too much top-end sonically,” says Mather, who handled sound editorial at his Soundbyte Studios in West London.

James Mather (far right) and crew take to the streets.

Soundbyte Studios offers creative supervision, sound design, Foley and dialog editing. The facility is equipped with Pro Tools 12 systems and Avid S6 and S3 consoles. Their client list includes top studios like Warner Bros., Disney, Fox, Paramount, DreamWorks, Aardman and Pathe. Mather’s team includes dialog supervisor Simon Chase, and sound effects editors Jed Loughran and Samir Fočo. When Mather begins a project, he likes to introduce his team to the director as soon as possible “so that they are recognized as contributors to the soundtrack,” he says. “It gives the team a better understanding of who they are working with and the kind of collaboration that is expected. I always find that if you can get everyone to work as a collaborative team and everyone has an emotional investment or personal investment in the project, then you get better work.”

Following Jenkins’s direction, Mather and his team designed a tranquil sound for the Amazonian paradise of Themyscira. They started with ambience tracks that the film’s sound recordist Chris Munro captured while they were on-location in Italy. Then Mather added Mediterranean ambiences that he and his team had personally collected over the years. Mather embellished the ambience with songbirds from Asia, Australasia and the Amazon. Since there are white peacocks roaming the island, he added in modified peacock sounds. Howler monkeys and domestic livestock, like sheep and goats, round out the track. Regarding the sheep and goats, Mather says, “We pitched them and manipulated them slightly so that they didn’t sound quite so ordinary, like a natural history film. It was very much a case of keeping the soundtrack relatively sparse. We did not use crickets or cicadas — although there were lots there while they were filming — because we wanted to stay away from the high-frequency sounds.”

Waterfalls are another prominent feature of Themyscira, according to Mather, but thankfully they weren’t really there on location, so the production recordings were relatively clean. The post sound team had complete control over the volume, distance and frequency range of the waterfall sounds. “We very much wanted the low-end roar and rumble of the waterfalls rather than high-end hiss and white noise.”

The sound of paradise is serene in contrast to London and the front lines of World War I. Mather wanted to exaggerate that difference by overplaying the sound of boats, cars and crowds as Steve [Chris Pine] and Diana [Gal Gadot] arrived in London. “This was London at its busiest and most industrial time. There were structures being built on a major scale so the environment was incredibly active. There were buses still being drawn by horses, but there were also cars. So, you have this whole mishmash of old and new. We wanted to see Diana’s reaction to being somewhere that she has never experienced before, with sounds that she has never heard and things she has never seen. The world is a complete barrage of sensory information.”

They recorded every vehicle they could in the film, from planes and boats to the motorcycle that Steve uses to chase after Diana later on in the film. “This motorcycle was like nothing we had ever seen before,” explains Mather. “We knew that we would have to go and record it because we didn’t have anything in our sound libraries for it.”

The studio spent days preparing the century-old motorcycle for the recording session. “We got about four minutes of recording with it before it fell apart,” admits Mather. “The chain fell off, the sprockets broke and then it went up in smoke. It was an antique and probably shouldn’t have been used! The funny thing is that it sounded like a lawnmower. We could have just recorded a lawnmower and it would’ve sounded the same!”

(Mather notes that the motorcycle Steve rides on-screen was a modern version of the century-old one they got to record.)

Goosing Sounds
Mather and his sound team have had numerous opportunities to record authentic weapons, cars, tanks, planes and other specific war-era machines and gear for projects they’ve worked on. While they always start with those recordings as their sound design base, Mather says the audience’s expectation of a sound is typically different from the real thing. “The real sound is very often disappointing. We start with the real gun or real car that we recorded, but then we start to work on them, changing the texture to give them a little bit more punch or bite. We might find that we need to add some gun mechanisms to make a gun sound a bit snappier or a bit brighter and not so dull. It’s the same with the cars. You want the car to have character, but you also want it to be slightly faster or more detailed than it actually sounds. By the nature of filmmaking, you will always end up slightly embellishing the real sound.”

Take the gun battles in Wonder Woman, for instance. They follow an obvious sequence: the gun fires, the bullet travels toward its target and then there is a noticeable impact. “This film has a lot of slow-motion bullets firing, so we had to amp up the sense of what was propelling that very slow-motion bullet. Recording the sound of a moving bullet is very hard. All of that had to be designed for the film,” says Mather.

In addition to the real era-appropriate vehicles, Wonder Woman has imaginary, souped-up creations too, like a massive bomber. For the bomber’s sound, Mather sought out artist Joe Rush who builds custom Mad Max-style vehicles. They recorded all of Rush’s vehicles, which had a variety of different V8, V12 and V6 engines. “They all sound very different because the engines are on solid metal with no suspension,” explains Mather. “The sound was really big and beefy, loud and clunky and it gave you a sense of a giant war monster. They had this growl and weight and threat that worked well for the German machines, which were supposed to feel threatening. In London, you had these quaint buses being drawn by horses, and the counterpoint to that were these military machines that the Germans had, which had to be daunting and a bit terrifying.

“One of the limitations of the WWI-era soundscapes is the lack of some very useful atmospheric sounds. We used tannoy (loudspeaker) effects on the German bomb factory to hint at the background activity, but had to be very sparing as these were only just invented in that era. (Same thing with the machine guns — a far more mechanical version than the ‘retatatat’ of the familiar WWII versions).”

One of Mather’s favorite scenes to design starts on the frontlines as Diana makes her big reveal as Wonder Woman. She crosses No Man’s Land and deflects the enemies’ fire with her bulletproof bracelets and shield. “We played with that in so many different ways because the music was such an important part of Patty’s vision for the film. She very much wanted the music to carry the narrative. Sound effects were there to be literal in many ways. We were not trying to overemphasize the machismo of it. The story is about the people and not necessarily the action they were in. So that became a very musical-based moment, which was not the way I would have normally done it. I learned a lot from Patty about the different ways of telling the story.”

The Powers
Following that scene, Wonder Woman recaptures the Belgian village they were fighting for by running ahead and storming the German barracks. Mather describes it as a Guy Ritchie-style fight, with Wonder Woman taking on 25 German soldiers. “This is the first time that we really get to see her use all of her powers: the lasso, her bracelets, her shield, and even her shin guards. As she dances her way around the room, it goes from realtime into slow motion and back into realtime. She is repelling bullets, smashing guns with her back, using her shield as a sliding mat and doing slow-motion kicks. It is a wonderfully choreographed scene and it is her first real action scene.”

The scene required a fluid combination of realistic sounds and subdued, slow-motion sounds. “It was like pushing and pulling the soundtrack as things slowed down and then sped back up. That was a lot of fun.”

The Lasso
Where would Wonder Woman be without her signature lasso of truth? In the film, she often uses the lasso as a physical weapon, but there was an important scene where the lasso was called upon for its truth-finding power. Early in the film, Steve’s plane crashes and he’s washed onto Themyscira’s shore. The Amazonians bind Steve with the lasso and interrogate him. Eventually the lasso of truth overpowers him and he divulges his secrets. “There is quite a lot of acting on Chris Pine’s part to signify that he’s uncomfortable and is struggling,” says Mather. “We initially went by his performance, which gave the impression that he was being burned. He says, ‘This is really hot,’ so we started with sizzling and hissing sounds as if the rope was burning him. Again, Patty felt strongly about not going into the high-frequency realm because it distracts from the dialogue, so we wanted to keep the sound in a lower, more menacing register.”

Mather and his team experimented with adding a multitude of different elements, including low whispering voices, to see if they added a sense of personality to the lasso. “We kept the sizzling, but we pitched it down to make it more watery and less high-end. Then we tried a dozen or so variations of themes. Eventually we stayed with this blood-flow sound, which is like an arterial blood flow. It has a slight rhythm to it and if you roll off the top end and keep it fairly muted then it’s quite an intriguing sound. It feels very visceral.”

The last elements Mather added to the lasso were recordings he captured of two stone slabs grinding against each other in a circular motion, like a mill. “It created this rotating, undulating sound that almost has a voice. So that created this identity, this personality. It was very challenging. We also struggled with this when we did the Harry Potter films, to make an inert object have a character without making it sound a bit goofy and a bit sci-fi. All of those last elements we put together, we kept that very low. We literally raised the volume as you see Steve’s discomfort and then let it peel away every time he revealed the truth. As he was fighting it, the sound would rise and build up. It became a very subtle, but very meaningful, vehicle to show that the rope was actually doing something. It wasn’t burning him but it was doing something that was making him uncomfortable.”

The Mix
Wonder Woman was mixed at De Lane Lea (Warner Bros. London) by re-recording mixers Chris Burdon and Gilbert Lake. Mather reveals that the mixing process was exhausting, but not because of the people involved. “Patty is a joy to work with,” he explains. “What I mean is that working with frequencies that are so low and so loud is exhausting. It wasn’t even the volume; it was being exposed to those low frequencies all day, every day for nine weeks or so. It was exhausting, and it really took its toll on everybody.”

In the mix, Jenkins chose to have Rupert Gregson-Williams’s score lead nearly all of the action sequences. “Patty’s sensitivity and vision for the soundtrack was very much about the music and the emotion of the characters,” says Mather. “She was very aware of the emotional narrative that the music would bring. She did not want to lean too heavily on the sound effects. She knew there would be scenes where there would be action and there would be opportunities to have sound design, but I found that we were not pushing those moments as hard as you would expect. The sound design highs weren’t so high that you felt bereft of momentum and pace when those sound design heavy scenes were finished. We ended up maintaining a far more interesting soundtrack that way.”

With superhero films like Batman v Superman: Dawn of Justice and Spider-Man, the audience expects a sound design-heavy track, but Jenkins’s music-led approach to Wonder Woman provides a refreshing spin on superhero film soundtracks. “The soundtrack is less supernatural and more down to earth,” says Mather. “I don’t think it could’ve been any other way. It’s not a predictable soundtrack and I really enjoyed that.”

Mather really enjoys collaborating with people who have different ideas and different approaches. “What was exciting about doing this film was that I was able to work with someone who had an incredibly strong idea about the soundtrack and yet was very happy to let us try different routes and options. Patty was very open to listening to different ideas, and willing to take the best from those ideas while still retaining a very strong vision of how the soundtrack was going to play for the audience. This is Patty’s DC story, her opportunity to open up the DC universe and give the audience a new look at a character. She was an extraordinary person to work with and for me that was the best part of the process. In the time of remakes, it’s nice to have a film that is fresh and takes a different approach.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @AudioJeney

FX’s Fargo features sounds as distinctive as its characters

By Jennifer Walden

In Fargo, North Dakota, in the dead of winter, there’s been a murder. You might think you’ve heard this story before, but Noah Hawley keeps coming up with a fresh, new version of it for each season of his Fargo series on FX. Sure, his inspiration was the Coen brothers’ Oscar-winning Fargo film, but with Season 3 now underway it’s obvious that Hawley’s series isn’t simply a spin-off.

Martin Lee and Kirk Lynds.

Every season of the Emmy-winning Fargo series follows a different story, with its own distinct cast of characters, set in its own specified point in time. Even the location isn’t always the same — Season 3 takes place in Minnesota. What does link the seasons together is Hawley’s distinct black humor, which oozes from these disparate small-town homicides. He’s a writer and director on the series, in addition to being the showrunner and an executive producer. “Noah is very hands-on,” confirms re-recording mixer Martin Lee of Tattersall Sound & Picture in Toronto, part of the SIM Group family of companies. Lee has been mixing the show with re-recording mixer Kirk Lynds since Season 2.

“Fargo has a very distinct look, feel and sound that you have to maintain,” explains Lee. “The editors, producers and Noah put a lot of work into the sound design and sound ideas while they are cutting the picture. The music is very heavily worked while they are editing the show. By the time the soundtrack gets to us there is a pretty clear path as to what they are looking for. It’s up to us to take that and flesh it out, to make it fill the 5.1 environment. That’s one of the most unique parts of the process for us.”

Season 3 follows rival brothers, Emmit and Ray Stussy (both played by Ewan McGregor). Their feud over a rare postage stamp leads to a botched robbery attempt that ultimately ends in murder (don’t worry — neither of McGregor’s characters meets his demise… yet).

One of the most challenging episodes to mix this season, so far, was Episode 3, “The Law of Non-Contradiction.” The story plays out across four different settings, each with unique soundscapes: Minnesota, Los Angeles in 2010, Los Angeles in 1975 and an animated sci-fi realm. As police officer Gloria Burgle (Carrie Coon) unravels the homicide in Eden Valley, Minnesota, her journey leads her to Los Angeles. There the story dives into the past, to 1975, to reveal the life story of science fiction writer Thaddeus Mobley (Thomas Mann). The episode side-trips into animation land when Gloria reads Mobley’s book titled The Planet Wyh.

One sonic distinction between Los Angeles in 2010 and Los Angeles of 1975 was the density of traffic. Lee, who mixed the dialogue and music, says, “All of the scenes that were taking place in 2010 were very thick with traffic and cars. That was a technical challenge, because the recordings were very heavy with traffic.”

Another distinction is the pervasiveness of technology in social situations, like the bar scene where Gloria meets up with a local Los Angeles cop to talk about her stolen luggage. The patrons are all glued to their cell phones. As the camera pans down the bar, you hear different sounds of texting playing over a contemporary, techno dance track. “They wanted to have those sounds playing, but not become intrusive. They wanted to establish with sound that people are always tapping away on their phones. It was important to get those sounds to play through subtly,” explains Lynds.

In the animated sequences, Gloria’s voice narrates the story of a small android named MNSKY whose spaceman companion dies just before they reach Earth. The robot carries on the mission and records an eon’s worth of data on Earth. The robot is eventually reunited with members of The Federation of United Planets, who cull the android’s data and then order it to shut down. “Because it was this animated sci-fi story, we wanted to really fill the room with the environment much more so than we can when we are dealing with production sound,” says Lee. “As this little robotic character is moving through time on Earth, you see something like the history of man. There’s voiceover, sound effects and music through all of it. It required a lot of finesse to maintain all of those elements with the right kind of energy.”

The animation begins with a spaceship crashing into the moon. MNSKY wakes and approaches the injured spaceman who tells the android he’s going to die. Lee needed to create a vocal process for the spaceman, to make it sound as though his voice is coming through his helmet. With Audio Ease’s Altiverb, Lee tweaked the settings on a “long plastic tube” convolution reverb. Then he layered that processed vocal with the clean vocal. “It was just enough to create that sense of a helmet,” he says.
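Convolution reverb like Altiverb works by convolving the dry signal with a recorded impulse response; Lee’s helmet effect then layers that processed vocal under the clean one. A minimal NumPy sketch of the idea, with a synthetic tone and a toy exponential decay standing in for the real dialogue track and “long plastic tube” IR:

```python
import numpy as np

SR = 48000  # sample rate

def convolve_ir(dry, ir):
    """Convolution reverb: filter the dry signal through an impulse response."""
    return np.convolve(dry, ir)

def helmet_voice(dry, ir, wet_mix=0.4):
    """Layer the processed vocal under the clean one, as Lee describes."""
    wet = convolve_ir(dry, ir)
    out = wet * wet_mix
    out[: len(dry)] += dry * (1 - wet_mix)      # clean vocal on top
    return out

t = np.arange(SR // 4) / SR
dry = np.sin(2 * np.pi * 200 * t)               # stand-in for the dialogue track
ir = np.exp(-np.linspace(0, 8, 1200))           # toy decaying impulse response
voice = helmet_voice(dry, ir)
```

Zero-padding the front of the IR would add the kind of pre-delay described for the Federation spaceship scenes; a real session would of course use a measured IR, not a synthetic decay.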

At the end, when MNSKY rejoins the members of the Federation on their spaceship it’s a very different environment from Earth. The large, ethereal space is awash in long, warm reverbs which Lynds applied using plug-ins like PhoenixVerb 5.1 and Altiverb. Lee also applied a long reverb treatment to the dialogue. “The reverbs have quite a significant pre-delay, so you almost have that sense of a repeat of the voice afterwards. This gives it a very distinctive, environmental feel.”

Lynds and Lee spend two days premixing their material on separate dub stages. For the premix, Lynds typically has all the necessary tracks from supervising sound editor Nick Forshager while Lee’s dialogue and music tracks come in more piecemeal. “I get about half the production dialogue on day one and then I get the other half on day two,” says Lee. “ADR dribbles in the whole time, including well into the mixing process. ADR comes in even after we have had several playbacks already.”

Fortunately, the show doesn’t rely heavily on ADR. Lee notes that they put a lot of effort into preserving the production. “We use a combination of techniques. The editors find the cleanest lines and takes (while still keeping the performance), then I spend a lot of time cleaning that up,” he says.

This season Lee relies more on Cedar’s DNS One plug-in for noise reduction and less on the iZotope RX5 (Connect version). “I’m finding with Fargo that the showrunners are uniquely sensitive to the effects of the iZotope processing. This year it took more work to find the right sound. It ends up being a combination of both the Cedar and the RX5,” reports Lee.

After premixing, Lee and Lynds bring their tracks together on Tattersall’s Stage 1. They have three days for the 5.1 final mix. They spend one (very) long day building the episode in 5.1 and then send their mix to Los Angeles for Forshager and co-producer Gregg Tilson to review. Then Lee and Lynds address the first round of notes the next morning and send the mix back to Los Angeles for another playback. Each consecutive playback is played for more people. The last playback is for Hawley on the third day.

“One of the big challenges with the workflow is mixing an episode in one day. It’s a long mix day. At least the different time zones help. We send them a mix to listen to typically around 6-7pm PST, so it’s not super late for them. We start at 8am EST the next morning, which is three hours ahead of their time. By the time they’re in the studio and ready to listen, it is 10am their time and we’ve already spent three or four hours handling the revisions. That really works to our advantage,” says Lee.

Sound in the Fargo series is not an afterthought. It’s used to build tension, like a desk bell that rings for an uncomfortably long time, or to set the mood of a space, like an overly noisy fish tank in a cheap apartment. By the time the tracks have made it to the mixers, there’s been “a lot of time and effort spent thinking about what the show was going to sound like,” says Lynds. “In that sense, the entire mix for us is a creative opportunity. It’s our chance to re-create that in a 5.1 environment and to make it bigger and better.”

You can catch new episodes of Fargo on FX Networks, Wednesdays at 10pm EST.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have offered audio-for-VR services alone, but instead you aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but also determining which objects, through their sound cues, will lead the viewer into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.
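The interview doesn’t detail Hobo’s actual tooling, but the basic mechanics of steering an audio object around a listener can be sketched with equal-power panning, reduced here to a two-speaker case for illustration (the function and values below are hypothetical, not Hobo’s workflow):

```python
import math

def equal_power_pan(sample: float, azimuth: float) -> tuple[float, float]:
    """Pan a mono sample between two speakers with equal-power gains.

    azimuth: 0.0 = hard left, 1.0 = hard right. The cosine/sine gain
    pair keeps total acoustic power constant as the object moves, so
    a sound cue can travel without an apparent dip in loudness.
    """
    theta = azimuth * math.pi / 2          # map [0, 1] to [0, pi/2]
    left = sample * math.cos(theta)
    right = sample * math.sin(theta)
    return left, right

# An "object" sweeping left to right across five frames.
positions = [0.0, 0.25, 0.5, 0.75, 1.0]
frames = [equal_power_pan(1.0, p) for p in positions]
```

At center (azimuth 0.5) each channel carries a gain of cos(π/4) ≈ 0.707, and at every position the summed power left² + right² stays at 1, which is the property that lets a cue lead the viewer smoothly through the scene.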

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to take into account the extra time and instruction for your clients to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid Pro Tools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey

Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for sound editing director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B-24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at Technicolor on the Paramount lot. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”
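Porter’s three perspectives (upfront outside, filtered inside the house, muffled and delayed in the basement) amount to progressively heavier processing of the same source. A minimal sketch of that idea, using a one-pole lowpass and a sample delay as stand-ins for the EQ, reverb and delay processing he describes; the coefficients are invented for illustration, not taken from the mix:

```python
def one_pole_lowpass(signal, coeff):
    """Simple one-pole lowpass: smaller coeff = more muffled.

    y[n] = y[n-1] + coeff * (x[n] - y[n-1])
    """
    out, y = [], 0.0
    for x in signal:
        y += coeff * (x - y)
        out.append(y)
    return out

def with_perspective(signal, coeff, delay_samples):
    """Muffle the source, then delay it to suggest distance."""
    filtered = one_pole_lowpass(signal, coeff)
    return [0.0] * delay_samples + filtered

# The same shouted line heard from three places.
source = [1.0, -1.0] * 8   # harsh, full-bandwidth burst
outside = with_perspective(source, coeff=1.0, delay_samples=0)
in_house = with_perspective(source, coeff=0.4, delay_samples=20)
basement = with_perspective(source, coeff=0.1, delay_samples=60)
```

The alternating ±1 test signal carries the highest frequency a sampled signal can hold; the smaller the coefficient, the more of that energy is removed, so the basement version arrives both duller and later, exactly the perceptual cue the mix relies on to tell the audience where a voice is coming from.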

One challenge of the mix and design was to make sure the audience knew the location of a sound by its texture. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins, to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we were in the basement, and the music and dialogue were happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound was coming from. Everything we did helped the picture establish location.”

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what’s happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. “We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy-boot-on-wood-floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he’s angry. You can feel it in every footstep he takes. He’s throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene,” says Sullivan.

While the film often feels realistic, there were stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper’s house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. “The editing sounds silky smooth. When we transition out of the chanting on-camera, then that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing,” says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Music house Wolf at the Door opens in Venice

Wolf at the Door has opened in Venice, California, providing original music, music supervision and sound design for the ad industry and, occasionally, films. Founders Alex Kemp and Jimmy Haun have been making music for some time: Kemp was a composer at Chicago-based Catfish Music and Spank, and a former creative director of Hum in Santa Monica. Haun spent over 10 years as the senior composer at Elias, in addition to being a session musician.

Between the two of them they’ve been signed to four major labels, written music for 11 Super Bowl spots, and have composed music for top agencies, including W+K, Goodby, Chiat Day, Team One and Arnold, working with directors like David Fincher, Lance Acord, Stacy Wall and Gore Verbinski.

In addition to making music, Kemp linked up with his longtime friend Scott Brown, a former creative director at agencies including Chiat Day, 72andSunny and Deutsch, to start a surf shop and brand featuring hand-crafted surfboards — Lone Wolfs Objets d’Surf.

With the Wolf at the Door recording studio and production office existing directly behind the Lone Wolfs retail store, Kemp and his partners bounce between different creative projects daily: writing music for spots, designing handmade Lone Wolfs surfboards, recording bands in the studio, laying out their own magazine, or producing their own original branded content.

Episodes of their original surf talk show/Web series Everything’s Not Working have featured guest pro surfers, including Dion Agius, Nabil Samadani and Eden Saul.

Wolf at the Door recently worked on an Experian commercial directed by the Malloy Brothers for the Martin Agency, as well as a CenturyLink spot directed by Malcolm Venville for Arnold Worldwide. Kemp worked closely with Venville on the casting and arrangement for the spot, and traveled to Denver to record the duet of singer Kelvin Jones’ “Call You Home” with Karissa Lee, a young singer Kemp found specifically for the project.

“Our approach to music is always driven by who the brand is and what ideas the music needs to support,” says Kemp. “The music provides the emotional context.” Paying attention to messaging is something that goes hand in hand with carving out their own brand and making their own content. “The whole model seemed ready for a reset. And personally speaking, I like to live and work at a place where being inspired dictates the actions we take, rather than the other way around.”

Main Image L-R: Jimmy Haun and Alex Kemp.

Lime opens sound design division led by Michael Anastasi, Rohan Young

Santa Monica’s Lime Studios has launched a sound design division. LSD (Lime Sound Design), featuring newly signed sound designer Michael Anastasi and Lime sound designer/mixer Rohan Young, has already created sound design for national commercial campaigns.

“Having worked with Michael since his early days at Stimmung and then at Barking Owl, he was always putting out some of the best sound design work, a lot of which we were fortunate to be final mixing here at Lime,” says executive producer Susie Boyajan, who collaborates closely with Lime and LSD owner Bruce Horwitz and the other company partners — mixers Mark Meyuhas and Loren Silber. “Having Michael here provides us with an opportunity to be involved earlier in the creative process, and provides our clients with a more streamlined experience for their audio needs. Rohan and Michael were often competing for some of the same work, and share a huge client base between them, so it made sense for Lime to expand and create a new division centered around them.”

Boyajan points out that “all of the mixers at Lime have enjoyed the sound design aspect of their jobs, and are really talented at it, but having a new division with LSD that operates differently than our current, hourly sound design structure makes sense for the way the industry is continuing to change. We see it as a real advantage that we can offer clients both models.”

“I have always considered myself a sound designer that mixes,” notes Young. “It’s a different experience to be involved early on and try various things that bring the spot to life. I’ve worked closely with Michael for a long time. It became more and more apparent to both of us that we should be working together. Starting LSD became a no-brainer. Our now-shared resources, with the addition of a Foley stage and location audio recordists only make things better for both of us and even more so for our clients.”

Young explains that setting up LSD as its own sound design division, as opposed to bringing in Michael to sound design at Lime, allows clients to separate the mix from the sound design on their production if they choose.

Anastasi joins LSD from Barking Owl, where he spent the last seven years creating sound design for high-profile projects and building long-term creative collaborations with clients. Michael credits his experiences recording sounds with John Fasal, and Foley sessions with John Roesch and Alyson Dee Moore, with teaching him a great deal of his craft. “Foley is actually what got me to become a sound designer,” he explains.

Projects that Anastasi has worked on include Hide and Seek, a PSA on human trafficking that won an AICP Award for Sound Design. He also provided sound design for the feature film Casa De Mi Padre, starring Will Ferrell, on which he also served as sound supervisor. For Nike’s Together project featuring LeBron James, a two-minute black-and-white piece, Anastasi traveled to James’ hometown of Cleveland to record more than 500 extras.

Lime is currently building new studios for LSD, featuring a team of sound recordists and a stand-alone Foley room. The LSD team is currently in the midst of a series of projects launching this spring, including commercial campaigns for Nike, Samsung, StubHub and Adobe.

Main Image: Michael Anastasi and Rohan Young.

The sound of John Wick: Chapter 2 — bigger and bolder

The director and audio team share their process.

By Jennifer Walden

To achieve the machine-like precision of assassin John Wick for director Chad Stahelski’s signature gun-fu-style action films, Keanu Reeves (Wick) goes through months of extensive martial arts and weapons training. The result is worth the effort. Wick is fast, efficient and thorough. You cannot fake his moves.

In John Wick: Chapter 2, Wick is still trying to retire from his career as a hitman, but he’s asked for one last kill. Bound by a blood oath, it’s a job Wick can’t refuse. Reluctantly, he goes to work, but by doing so, he’s dragged further into the assassin lifestyle he’s desperate to leave behind.

Chad Stahelski

Stahelski builds a visually and sonically engaging world on-screen, and then fills it full of meticulously placed bullet holes. His inspiration for John Wick comes from his experience as a stuntman and martial arts stunt coordinator for Lilly and Lana Wachowski on The Matrix films. “The Wachowskis are some of the best world creators in the film industry. Much of what I know about sound and lighting has to do with their perspective that every little bit helps define the world. You just can’t do it visually. It’s the sound and the look and the vibe — the combination is what grabs people.”

Before the script on John Wick: Chapter 2 was even locked, Stahelski brainstormed with supervising sound editor Mark Stoeckinger and composer Tyler Bates — alumni of the first Wick film — and cinematographer Dan Laustsen on how they could go deeper into Wick’s world this time around. “It was so collaborative and inspirational. Mark and his team talked about how to make it sound bigger and more unique; how to make this movie sound as big as we wanted it to look. This sound team was one of my favorite departments to work with. I’ve learned more from those guys about sound in these last two films than I thought I had learned in the last 15 years,” says Stahelski.

Supervising sound editor Stoeckinger, at the Formosa Group in West Hollywood, knows action films. Mission: Impossible II and III, both Jack Reacher films, Iron Man 3 and the upcoming (April) The Fate of the Furious are just a part of his film sound experience. Gun fights, car chases, punches and impacts — Stoeckinger knows that all those big sound effects in an action film can compete with the music and dialogue for space in a scene. “The more sound elements you have, the more delicate the balancing act is,” he explains. “The director wants his sounds to be big and bold. To achieve that, you want to have a low-frequency punch to the effects. Sometimes, the frequencies in the music can steal all that space.”

The Sound of Music
Composer Bates’s score was big and bold, with lots of percussion, bass and strong guitar chords that existed in the same frequency range as the gunshots, car engines and explosions. “Our composer is very good at creating a score that is individual to John Wick,” says Stahelski. “I listened to just the music, and it was great. I listened to just the sound design, and that was great. When we put them together we couldn’t understand what was going on. They overlapped that much.”

During the final mix at Formosa’s Stage B on The Lot, re-recording mixers Andy Koyama and Martyn Zub — who both mixed the first John Wick — along with Gabe Serrano, approached the fight sequences with effects leading the mix, since those needed to match the visuals. Then Koyama made adjustments to the music stems to give the sound effects more room.

“Andy made some great suggestions, like if we lowered the bass here then we can hear the effects punch more,” says Stahelski. “That gave us the idea to go back to our composers, to the music department and the music editor. We took it to the next level conceptually. We had Tyler [Bates] strip out a lot of the percussion and bass sounds. Mark realized we have so many gunshots, so why not use those as the percussion? The music was influenced by the amount of gunfire, sound design and the reverb that we put into the gunshots.”

Mark Stoeckinger

The music and sound departments collaborated through the last few weeks of the final mix. “It was a really neat, synergistic effect of the sound and music complementing each other. I was super happy with the final product,” says Stahelski.

Putting the Gun in Gun-Fu
As its name suggests, gun-fu involves a range of guns — handguns, shotguns and assault rifles. It was up to sound designer Alan Rankin to create a variety of distinct gun effects that not only sounded different from weapon to weapon but also differentiated between John Wick’s guns and the bad guys’ guns. To make Wick’s guns sound more powerful and complex than his foes’, Rankin added different layers of air, boom and mechanical effects. To distinguish one weapon from another, he layered the sounds of several different guns together into a unique sound.

The result is the type of gun sound that Stoeckinger likes to use on the John Wick films. “Even before this film officially started, Alan would present gun ideas. He’d say, ‘What do you think about this sound for the shotgun? Or, ‘How about this gun sound?’ We went back and forth many times, and once we started the film, he took it well beyond that.”

Rankin developed the sounds further by processing his effects with EQ and limiting to help the gunshots punch through the mix. “We knew we would inevitably have to turn the gunshots down in the mix due to conflicts with music or dialogue, or just because of the sheer quantity of shots needed for some of the scenes,” Rankin says.

Each gun battle was designed entirely in post, since the guns on-screen weren’t shooting live rounds. Rankin spent months designing and evolving the weapons and bullet effects in the fight sequences. He says, “Occasionally there would be a production sound we could use to help sell the space, but for the most part it’s all a construct.”

There were unique hurdles for each fight scene, but Rankin feels the catacombs were the most challenging from a design standpoint, and Zub agrees in terms of mix. “In the catacombs there’s a rapid-fire sequence with lots of shots and ricochets, with body hits and head explosions. It’s all going on at the same time. You have to be delicate with each gunshot so that they don’t all sound the same. It can’t sound repetitive and boring. So that was pretty tricky.”

To keep the gunfire exciting, Zub played with the perspective, the dynamics and the sound layers to make each shot unique. “For example, a shotgun sound might be made up of eight different elements. So in any given 40-second sequence, you might have 40 gunshots. To keep them all from sounding the same, you go through each element of the shotgun sound and either turn some layers off, tune some of them differently or put different reverb on them. This gives each gunshot its own unique character. Doing that keeps the soundtrack more interesting and that helps to tell the story better,” says Zub. For reverb, he used the PhoenixVerb Surround Reverb plug-in to create reverbs in 7.1.

Another challenge was the fight sequence at the museum. To score the first part of Wick’s fight, director Stahelski chose a classical selection from Vivaldi… but with a twist. Instead of relying solely on traditional percussion, “Mark’s team intermixed gunshots with the music,” notes Stahelski. “That is one of my favorite overall sound sequences.”

At the museum, there’s a multi-level mirrored room exhibit with moving walls. In there, Wick faces several opponents. “The mirror room battle was challenging because we had to represent the highly reflective space in which the gunshots were occurring,” explains Rankin. “Martyn [Zub] was really diligent about keeping the sounds tight and contained so the audience doesn’t get worn out from the massive volume of gunshots involved.”

Their goal was to make as much distinction as possible between the gunshot and the bullet impact sounds since visually there were only a few frames between the two. “There was lots of tweaking the sync of those sounds in order to make sure we got the necessary visceral result that the director was looking for,” says Rankin.

Stahelski adds, “The mirror room has great design work. The moment a gun fires, it just echoes through the whole space. As you change the guns, you change the reverb and change the echo in there. I really dug that.”

On the dialogue side, the mirror room offered Koyama an opportunity to play with the placement of the voices. “You might be looking at somebody, but because it’s just a reflection, Andy has their voice coming from a different place in the theater,” Stoeckinger explains. “It’s disorienting, which is what it is supposed to be. The visuals inspired what the sound does. The location design — how they shot it and cut it — that let us play with sound.”

The Manhattan Bridge
Koyama’s biggest challenge on dialogue was during a scene where Laurence Fishburne’s character The Bowery King is talking to Wick while they’re standing on a rooftop near the busy Manhattan Bridge. Koyama used iZotope RX 5 to help clean up the traffic noise. “The dialogue was very difficult to understand and Laurence was not available for ADR, so we had to save it. With some magic we managed to save it, and it actually sounds really great in the film.”

Once Koyama cleaned the production dialogue, Stoeckinger was able to create an unsettling atmosphere there by weaving tonal sound elements with a “traffic on a bridge” roar. “For me personally, building weird spaces is fun because it’s less literal,” says Stoeckinger.

Stahelski strives for a detailed and deep world in his John Wick films. He chooses Stoeckinger to lead his sound team because Stoeckinger’s “work is incredibly immersive, incredibly detailed,” says the director. “The depths that he goes, even if it is just a single sound or tone or atmosphere, Mark has a way to penetrate the visuals. I think his work stands out so far above most other sound design teams. I love my sound department and I couldn’t be happier with them.”


Jennifer Walden is a New Jersey-based writer and audio engineer.