
Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots put the guy in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of cleanup in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!
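Binder doesn’t detail his RX settings, but the principle behind this kind of location-noise cleanup, learning a noise profile from a dialogue-free stretch and attenuating spectral bins that don’t rise above it, can be sketched as a toy spectral gate in NumPy. The frame size, threshold and reduction amount below are illustrative, not iZotope’s:

```python
import numpy as np

def spectral_gate(noisy, noise_sample, frame=512, hop=256, reduction=0.1):
    """Toy denoiser: keep bins well above the noise profile, duck the rest."""
    window = np.hanning(frame)
    # Noise profile: average magnitude per bin from a noise-only clip.
    noise_mag = np.mean([np.abs(np.fft.rfft(noise_sample[i:i + frame] * window))
                         for i in range(0, len(noise_sample) - frame, hop)], axis=0)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(noisy[i:i + frame] * window)
        # Bins that clear 2x the noise floor pass; everything else is reduced.
        mask = np.where(np.abs(spec) > 2.0 * noise_mag, 1.0, reduction)
        out[i:i + frame] += np.fft.irfft(spec * mask) * window
        norm[i:i + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)  # compensate overlap-add windowing
```

A real restoration tool adds temporal smoothing and artifact suppression on top of this; the sketch only shows the profile-and-mask idea.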


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hotwires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
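Capturing impulse responses and convolving them with dry audio is the standard way to build a custom reverb like the one Marquis describes. Here is a minimal FFT-convolution sketch; the peak normalization and wet/dry blend are generic choices, not details from the interview:

```python
import numpy as np

def convolve_reverb(dry, ir, wet=0.5):
    """Convolve dry audio with a recorded impulse response, then blend."""
    n = len(dry) + len(ir) - 1                  # full convolution length
    nfft = 1 << (n - 1).bit_length()            # next power of two for the FFT
    wet_sig = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft))[:n]
    wet_sig /= np.max(np.abs(wet_sig)) + 1e-12  # normalize the reverb tail
    out = np.zeros(n)
    out[:len(dry)] += (1.0 - wet) * dry         # dry signal
    out += wet * wet_sig                        # reverberant signal
    return out
```

Dedicated convolution reverbs like Altiverb do this in real time with partitioned convolution, but the underlying operation is the same.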

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really paid off because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary but it turns out these things are super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based on this one individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
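Sound Particles is a GUI application, but the recipe Marquis describes, many copies of one recording with randomized pitch and entrance times, can be approximated in a few lines. The 20 voices and ±10% pitch spread follow his description; the rest of the numbers are illustrative:

```python
import numpy as np

def make_crowd(voice, sr, n_voices=20, pitch_spread=0.10, max_delay=0.5, seed=7):
    """Layer pitch-shifted, time-staggered copies of one voice recording."""
    rng = np.random.default_rng(seed)
    out_len = int(len(voice) / (1.0 - pitch_spread) + max_delay * sr) + 2
    mix = np.zeros(out_len)
    for _ in range(n_voices):
        ratio = 1.0 + rng.uniform(-pitch_spread, pitch_spread)  # +/-10% pitch
        idx = np.arange(0, len(voice) - 1, ratio)  # resampling shifts the pitch
        shifted = np.interp(idx, np.arange(len(voice)), voice)
        start = int(rng.uniform(0, max_delay) * sr)  # stagger each entrance
        mix[start:start + len(shifted)] += shifted / n_voices
    return mix
```

Simple resampling changes duration along with pitch, which actually helps here: the staggered, slightly out-of-sync copies are what make the horde read as many individual dolls.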

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls; one of the big scenes that we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is, though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
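Waves Doppler is a commercial plug-in, but the underlying trick, a time-varying propagation delay that bends the pitch plus a distance-based level envelope, can be sketched for a simple pass-by. The speed and distance values here are arbitrary:

```python
import numpy as np

def doppler_passby(src, sr, speed=20.0, closest=5.0, c=343.0):
    """Simulate a source moving past the listener in a straight line."""
    t = np.arange(len(src)) / sr
    x = speed * (t - t[-1] / 2)            # position along the path (m)
    dist = np.sqrt(x ** 2 + closest ** 2)  # distance to the listener (m)
    read = (t - dist / c) * sr             # propagation delay bends the pitch
    read = np.clip(read, 0, len(src) - 1)
    out = np.interp(read, np.arange(len(src)), src)
    return out * (closest / dist)          # simple 1/distance level envelope
```

As the source approaches, the delay shrinks and the pitch rises; as it recedes, the delay grows and the pitch falls, with the level peaking at the closest point.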

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create the sound. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!



Hulu’s PEN15: Helping middle school sound funny

By Jennifer Walden

Being 13 years old once was hard enough, but the creators of the Hulu series PEN15 have relived that uncomfortable age — braces and all — a second time for the sake of comedy.

James Parnell

Maya Erskine and Anna Konkle might be in their 30s, but they convincingly play two 13-year-old BFFs journeying through the perils of 7th grade. And although they’re acting alongside actual teenagers, it’s not Strangers With Candy grown-up-interfacing-with-kids kind of weird — not even during the “first kiss” scene. The awkwardness comes from just being 13 and having those first-time experiences of drinking, boyfriends, awkward school dances and even masturbation (the topic of focus in Episode 3). Erskine, Konkle and co-showrunner Sam Zvibleman hilariously capture all of that cringe-worthy coming-of-age content in their writing on PEN15.

The show is set in the early 2000s, a time when dial-up Internet and the Sony Discman were prevailing technology. The location is a nondescript American suburb, relatable to many people, and that is one way the show transports the audience back to their early teenage years.

At Monkeyland Audio in Glendale, California, supervising sound editor/re-recording mixer James Parnell and his team worked hard to capture that almost indescribable nostalgic essence that the showrunners were seeking. Monkeyland was responsible for all post sound editorial, including Foley, ADR, final 5.1 surround mixing and stereo fold-downs for each episode. Let’s find out more from Parnell.

I happened to watch Episode 3, “Ojichan,” with my mom, and it was completely awkward. It epitomized the growing pains of the teenage years, which is what this series captures so well.
Well, that was an awkward one to mix as well. Maya (Erskine) and Anna (Konkle) were in the room with me while I was mixing that scene! Obviously, the show is an adult comedy that targets adults. We all ended up joking about it during the mix — especially about the added Foley sound that was recorded.

The beauty of this show is that it has the power to take something that might otherwise be thought of as, perhaps, inappropriate for some, and humanize it. All of us went through that period in our lives and I would agree that the show captures that awkwardness in a perfect and humorous way.

The writers/showrunners also star. I’m sure they were equally involved with post as well as other aspects of the show. How were they planning to use sound to help tell their story?
In terms of the post schedule, I was brought on very early. We were doing spotting sessions to pre-locked picture for Episode 1 and Episode 3. From the get-go, they were very specific about how they wanted the show to sound. I got the vibe that they were going for that Degrassi/Afterschool Special feeling but set in the year 2000 — not the original Degrassi of the early ‘90s.

For example, they had a very specific goal for what they wanted the school to sound like. The first episode takes place on the first day of 7th grade and they asked if we could pitch down the school bell so it sounds clunky and have the hallways sound sparse. When class lets out, the hallway should sound almost like a relief.

Their direction was more complex than “see a school hallway, hear a school hallway.” They were really specific about what the school should sound like and specific about what the girls’ neighborhoods should sound like — Anna’s family in the show is a bit better off than Maya’s family so the neighborhood ambiences reflect that.

What were some specific sounds you used to capture the feel of middle school?
The show is set in 2000, and they had some great visual cues as throwbacks. In Episode 4 “Solo,” Maya is getting ready for the school band recital and she and her dad (a musician who’s on tour) are sending faxes back and forth about it. So we have the sound of the fax machine.

We tried to support the amazing recordings captured by the production sound team on-set by adding in sounds that lent a non-specific feeling to the school. This doesn’t feel like a California middle school; it could be anywhere in America. The same goes for the ambiences. We weren’t using California-specific birds. We wanted it to sound like Any Town, USA so the audience could connect with the location and the story. Our backgrounds editor G.W. Pope did a great job of crafting those.

For Episode 7, “AIM,” the whole thing revolves around Maya and Anna’s AOL instant messenger experience. The creatives on the show were dreading that episode because all they were working with was temp sound. They had sourced recordings of the AOL sound pack to drop into the video edit. The concern was how some of the Hulu execs would take it because the episode mostly takes place in front of a computer, while they’re on AOL chatting with boys and with each other. Adding that final layer of sound and then processing on the mix stage helped what might otherwise feel like a slow edit and a lagging episode.

The dial-up sounds, AOL sign-on sounds and instant messenger sounds we pulled from a library. This series had a limited budget, so we didn’t do any field recordings. I’ve done custom recordings for higher-budget shows, but on this one we were supplementing the production sound. Our sound designer on PEN15 was Xiang Li, and she did a great job of building these scenes. We had discussions with the showrunners about how exactly the fax and dial-up should sound. The sound design is a mixture of Xiang Li’s sound effects editorial and composer Leo Birenberg’s score. The song is a needle drop called “Computer Dunk.” Pretty cool, eh?

For Episode 4, “Solo,” was the middle school band captured on-set? Or was that recorded in the studio?
There was production sound recorded but, ultimately, the music was recorded by composer Leo Birenberg. In the production recording, the middle school kids were actually playing their parts, but it was rougher than you’d expect. The song wasn’t rehearsed, so it was like they were playing random notes. That sounded a bit too bad. We had to hit the right level of “bad” to sell the scene. So Leo played individual instruments to make it sound like a class orchestra.

In terms of sound design, that was one of the more challenging episodes. I got a day to mix the show before the execs came in for playback. When I mixed it initially, I mixed in all of Leo’s stems — the brass, percussion, woodwinds, etc.

Anna pointed out that the band needed to sound worse than how Leo played it, more detuned and discordant. We ended up stripping out instruments and pitching down parts, like the flute part, so that it was in the wrong key. It made the whole scene feel much more like an awkward band recital.

During the performance, Maya improvises a timpani solo. In real life, Maya’s father is a professional percussionist here in LA, and he hooked us up with a timpani player who re-recorded that part note-for-note to match what she played on-screen. It sounded really good, but it was Maya’s unique performance that made the scene work. So even though we went to the extreme of hiring a professional percussionist to re-perform the part, we ultimately decided to stick with the production sound.

What were some of the unique challenges you had in terms of sound on PEN15?
On Episode 3, “Ojichan,” Maya is going through this process of “self-discovery,” and she’s pulling away from her friendship with Anna. There’s a scene where they’re watching a video in class and Anna asks Maya why she missed the carpool that morning. That scene was like mixing a movie inside a show. I had to mix the movie, then futz that, and then mix that into the scene. On the close-ups of the 4:3 old-school television, the movie would be less futzed, more like you’re in the movie, and then we’d cut back to the girls and I’d have to futz it. Leo composed 20 different stems of music for that wildlife video. Mixing that scene was challenging.
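The “futz” Parnell describes is essentially band-limiting the movie audio so it reads as a small TV speaker in the room. A crude FFT-domain version, using the classic telephone-style 300-3400 Hz band as an assumed range:

```python
import numpy as np

def futz(audio, sr, low=300.0, high=3400.0):
    """Crude small-speaker futz: zero everything outside a narrow band."""
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1.0 / sr)
    spec[(freqs < low) | (freqs > high)] = 0.0  # brick-wall band-pass
    return np.fft.irfft(spec, len(audio))
```

A mix-stage futz would use sloped EQ, a little distortion and room reverb rather than a brick-wall filter, but the band-limiting is the recognizable part of the effect.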

Then there was the Wild Things film in Episode 8, “Wild Things.” A group of kids go over to Anna’s boyfriend’s house to watch Wild Things on VHS. That movie was risqué, so if you had an older brother or older cousin, then you might have watched it in middle school. That was a challenging scene because everyone had a different idea of how the den should sound, how futzed the movie dialogue should be, how much of the actual film sound we could use, etc. There was a specific feel to the “movie night” that the producers were looking for. The key was mixing the movie into the background and bringing the awkward flirting/conversation between the kids forward.

Did you have a favorite scene for sound?
The season finale is one of the bigger episodes. There’s a middle school dance and so there’s a huge amount of needle-drop songs. Mixing the music was a lot of fun because it was a throwback to my youth.

Also, the “AIM” episode ended up being fun to work on — even though everyone was initially worried about it. I think the sound really brought that episode to life. Generally speaking, sound did more for that episode than any other element did.

The first episode was fun too. It was the first day of school and we see the girls getting ready at their own houses, getting into the carpool and then taking their first step, literally, together toward the school. There we dropped out all the sound and just played the Lit song “My Own Worst Enemy,” which gets cut off abruptly when someone on rollerblades hops in front of the girls. Then they talk about one of their classmates who grew boobs over the summer, and we have a big sound design moment when that girl turns around and then there’s another needle-drop track “Get the Job Done.” It’s all specifically choreographed with sound.

The series music supervisor Tiffany Anders did an amazing job of picking out the big needle-drops. We have a Nelly song for the middle school dance, we have songs from The Cranberries, and Lit and a whole bunch more that fit the era and age group. Tiffany did fantastic work and was great to work with.

What were some helpful sound tools that you used on PEN15?
Our dialogue editor’s a huge fan of iZotope’s RX 7, as am I. Here at Monkeyland, we’re on the beta-testing team for iZotope. The products they make are amazing. It’s kind of like voodoo. You can take a noisy recording and, with a click of a button, pretty much erase the issues and save the dialogue. Within that tool palette, there are a lot of ways to fix a whole host of problems.

I’m a huge fan of Audio Ease’s Altiverb, which came in handy on the season finale. In order to create the feeling of being in a middle school gymnasium, I ran the needle-drop songs through Altiverb. There are some amazing reverb settings that let you adjust the levels going specifically to the surround speakers. You can also EQ the reverb itself, taking out 200Hz, for example, if it’s making the music sound boomier than desired.

The lobby at Monkeyland is a large cinder-block room with super-high ceilings. It has acoustics similar to a middle school gymnasium. So, we captured a few impulse responses (IR), and I used those in Altiverb on a few lines of dialogue during the school dance in the season finale. I used that on a few of the songs as well. Like, when Anna’s boyfriend walks into the gym, there was supposed to be a Limp Bizkit needle-drop but that ended up getting scrapped at the last minute. So, instead there’s a heavy-metal song and the IR of our lobby really lent itself to that song.
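(Altiverb is a convolution reverb: capturing an impulse response and applying it is, at its core, a single convolution plus a wet/dry blend. A minimal sketch, assuming mono float arrays at a common sample rate; the plugin’s internals are far more elaborate.)

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_ir(dry, ir, wet=0.4):
    """Convolve a dry signal with a captured impulse response and
    blend the result back with the dry signal (a wet/dry mix)."""
    wet_sig = fftconvolve(dry, ir)[:len(dry)]              # truncate the tail
    wet_sig = wet_sig / (np.max(np.abs(wet_sig)) + 1e-12)  # normalize the wet path
    return (1.0 - wet) * dry + wet * wet_sig
```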

The show was a simple single-card Pro Tools HD mix — 256 tracks max. I’m a huge fan of Avid and the new Pro Tools 2018. My dialogue chain features Avid’s Channel Strip; McDSP SA-2; a Waves De-Esser (typically bypassed unless needed); the McDSP 6030 Leveling Amplifier, which does a great job of handling extremely loud dialogue and preventing it from distorting; and Waves WNS.
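(The leveling-amplifier idea, automatically riding gain down when dialogue gets too hot and letting it recover afterward, can be sketched as a simple envelope follower. This is an illustrative toy with made-up parameter values, not the McDSP 6030’s actual design.)

```python
import numpy as np

def level_dialogue(x, target=0.25, attack=0.01, release=0.001):
    """Ride gain down when the signal's envelope exceeds a target
    level; recover slowly when it falls back below."""
    env, gain = 0.0, 1.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), env * (1 - attack))          # peak envelope follower
        want = target / env if env > target else 1.0   # desired gain
        gain += (want - gain) * (attack if want < gain else release)
        out[i] = s * gain
    return out
```

Quiet passages pass through untouched; sustained loud passages are pulled down toward the target level instead of clipping.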

On staff, we have a fabulous ADR mixer named Jacob Ortiz. The showrunners were really hesitant to record ADR, and whenever we could salvage the production dialogue we did. But when we needed ADR, Jacob did a great job of cueing that, and he uses the Sound In Sync toolkit, including EdiCue, EdiLoad and EdiMarker.

Any final thoughts you’d like to share on PEN15?
Yes! Watch the show. I think it’s awesome, but again, I’m biased. It’s unique and really funny. The showrunners Maya, Anna and Sam Zvibleman — who also directed four episodes — are three incredibly talented people. I was honored to be able to work with them and hope to be a part of anything they work on next.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Netflix’s Lost in Space: New sounds for a classic series

By Jennifer Walden

Netflix’s Lost in Space series, a remake of the 1965 television show, is a playground for sound. In the first two episodes alone, the series introduces at least five unique environments, including an alien planet, a whole world of new tech — from wristband communication systems to medical analysis devices — new modes of transportation, an organic-based robot lifeform and its correlating technologies, a massive explosion in space and so much more.

It was a mission not easily undertaken, but if anyone could manage it, it was four-time Emmy Award-winning supervising sound editor Benjamin Cook of 424 Post in Culver City. He’s led the sound teams on series like Starz’s Black Sails, Counterpart and Magic City, as well as HBO’s The Pacific, Rome and Deadwood, to name a few.

Benjamin Cook

Lost in Space was a reunion of sorts for members of the Black Sails post sound team. Making the jump from pirate ships to spaceships were sound effects editors Jeffrey Pitts, Shaughnessy Hare, Charles Maynes, Hector Gika and Trevor Metz; Foley artists Jeffrey Wilhoit and Dylan Tuomy-Wilhoit; Foley mixer Brett Voss; and re-recording mixers Onnalee Blank and Mathew Waters.

“I really enjoyed the crew on Lost in Space. I had great editors and mixers — really super-creative, top-notch people,” says Cook, who also had help from co-supervising sound editor Branden Spencer. “Sound effects-wise there was an enormous amount of elements to create and record. Everyone involved contributed. You’re establishing a lot of sounds in those first two episodes that are carried on throughout the rest of the season.”

Soundscapes
So where does one begin on such a sound-intensive show? The initial focus was on the soundscapes, such as the sound of the alien planet’s different biomes, and the sound of different areas on the ships. “Before I saw any visuals, the showrunners wanted me to send them some ‘alien planet sounds,’ but there is a huge difference between Mars and Dagobah,” explains Cook. “After talking with them for a bit, we narrowed down some areas to focus on, like the glacier, the badlands and the forest area.”

For the forest area, Cook began by finding interesting snippets of animal, bird and insect recordings, like a single chirp or little song phrase that he could treat with pitching or other processing to create something new. Then he took those new sounds and positioned them in the sound field to build up beds of creatures to populate the alien forest. In that initial creation phase, Cook designed several tracks, which he could use for the rest of the season. “The show itself was shot in Canada, so that was one of the things they were fighting against — the showrunners were pretty conscious of not making the crash planet sound too Earthly. They really wanted it to sound alien.”

Another huge aspect of the series’ sound is the communication systems. The characters talk to each other through the headsets in their spacesuit helmets, and through wristband communications. Each family has their own personal ship, called a Jupiter, which can contact other Jupiter ships through shortwave radios. They use the same radios to communicate with their all-terrain vehicles called rovers. Cook notes these ham radios had an intentional retro feel. The Jupiters can send/receive long-distance transmissions from the planet’s surface to the main ship, called Resolute, in space. The families can also communicate with their Jupiter’s onboard systems.

Each mode of communication sounds different and was handled differently in post. Some processing was handled by the re-recording mixers, and some was created by the sound editorial team. For example, in Episode 1 Judy Robinson (Taylor Russell) is frozen underwater in a glacial lake. Whenever the shot cuts to Judy’s face inside her helmet, the sound is very close and claustrophobic.

Judy’s voice bounces off the helmet’s face-shield. She hears her sister through the headset and it’s a small, slightly futzed speaker sound. The processing on both Judy’s voice and her sister’s voice sounds very distinct, yet natural. “That was all Onnalee Blank and Mathew Waters,” says Cook. “They mixed this show, and they both bring so much to the table creatively. They’ll do additional futzing and treatments, like on the helmets. That was something that Onna wanted to do, to make it really sound like an ‘inside a helmet’ sound. It has that special quality to it.”

On the flipside, the ship’s voice was a process that Cook created. Co-supervisor Spencer recorded the voice actor’s lines in ADR and then Cook added vocoding, EQ futz and reverb to sell the idea that the voice was coming through the ship’s speakers. “Sometimes we worldized the lines by playing them through a speaker and recording them. I really tried to avoid too much reverb or heavy futzing knowing that on the stage the mixers may do additional processing,” he says.
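(A basic “futz” of the kind described, band-limiting a voice so it reads as a small speaker, is essentially a bandpass filter plus a little saturation. A rough sketch; the filter order, band edges and drive amount are illustrative, not taken from the actual chain.)

```python
import numpy as np
from scipy.signal import butter, lfilter

def futz(voice, sr, lo=300.0, hi=3400.0):
    """Band-limit a voice to a telephone-like range so it sounds
    like a small speaker, then add gentle saturation for grit."""
    b, a = butter(4, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    narrow = lfilter(b, a, voice)
    return np.tanh(3.0 * narrow) / 3.0   # soft clipping, scaled back down
```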

In Episode 1, Will Robinson (Maxwell Jenkins) finds himself alone in the forest. He tries to call his father, John Robinson (Toby Stephens, a Black Sails alumnus as well), via his wristband comm system, but the transmission is interrupted by a strange, undulating, vocal-like sound. It’s interference from an alien ship that had crashed nearby. Cook notes that the interference sound required thorough experimentation. “That was a difficult one. The showrunners wanted something organic and very eerie, but it also needed to be jarring. We did quite a few versions of that.”

For the main element in that sound, Cook chose whale sounds for their innate pitchy quality. He manipulated and processed the whale recordings using Symbolic Sound’s Kyma sound design workstation.

The Robot
Another challenging set of sounds were those created for Will Robinson’s Robot (Brian Steele). The Robot makes dying sounds, movement sounds and face-light sounds when it’s processing information. It can transform its body to look more human. It can use its hands to fire energy blasts or as a tool to create heat. It says, “Danger, Will Robinson,” and “Danger, Dr. Smith.” The Robot is sometimes a good guy and sometimes a bad guy, and the sound needed to cover all of that. “The Robot was a job in itself,” says Cook. “One thing we had to do was to sell emotion, especially for his dying sounds and his interactions with Will and the family.”

One of Cook’s trickiest feats was to create the proper sense of weight and movement for the Robot, and to portray the idea that the Robot was alive and organic but still metallic. “It couldn’t be earthly technology. Traditionally for robot movement you will hear people use servo sounds, but I didn’t want to use any kind of servos. So, we had to create a sound with a similar aesthetic to a servo,” says Cook. He turned to the Robot’s Foley sounds, and devised a processing chain to heavily treat those movement tracks. “That generated the basic body movement for the Robot and then we sweetened its feet with heavier sound effects, like heavy metal clanking and deeper impact booms. We had a lot of textures for the different surfaces like rock and foliage that we used for its feet.”

The Robot’s face lights change color to let everyone know if it’s in good-mode or bad-mode. But there isn’t any overt sound to emphasize the lights as they move and change. If the camera is extremely close-up on the lights, then there’s a faint chiming or tinkling sound that accentuates their movement. Overall though, there is a “presence” sound for the Robot, an undulating tone that’s reminiscent of purring when it’s in good-mode. “The showrunners wanted a kind of purring sound, so I used my cat purring as one of the building block elements for that,” says Cook. When the Robot is in bad-mode, the sound is anxious, like a pulsing heartbeat, to set the audience on edge.

It wouldn’t be Lost in Space without the Robot’s iconic line, “Danger, Will Robinson.” Initially, the showrunners wanted that line to sound as close to the original 1960’s delivery as possible. “But then they wanted it to sound unique too,” says Cook. “One comment was that they wanted it to sound like the Robot had metallic vocal cords. So we had to figure out ways to incorporate that into the treatment.” The vocal processing chain used several tools, from EQ, pitching and filtering to modulation plug-ins like Waves Morphoder and Dehumaniser by Krotos. “It was an extensive chain. It wasn’t just one particular tool; there were several of them,” he notes.
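(One classic ingredient for a “metallic vocal cords” quality is ring modulation: multiplying the voice by a sine carrier produces sum and difference frequencies with a clangorous edge. The chain Cook describes combined several commercial tools, so this sketch shows only that one illustrative idea, with made-up carrier and mix values.)

```python
import numpy as np

def metallic_voice(voice, sr, carrier_hz=80.0, mix=0.6):
    """Ring-modulate a voice against a sine carrier and blend the
    result with the dry voice for a metallic, robotic timbre."""
    t = np.arange(len(voice)) / sr
    ringmod = voice * np.sin(2 * np.pi * carrier_hz * t)  # sum/difference tones
    return (1.0 - mix) * voice + mix * ringmod
```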

There are other sound elements that tie into the original 1960’s series. For example, when Maureen Robinson (Molly Parker) and husband John are exploring the wreckage of the alien ship they discover a virtual map room that lets them see into the solar system where they’ve crashed and into the galaxy beyond. The sound design during that sequence features sound material from the original show. “We treated and processed those original elements until they’re virtually unrecognizable, but they’re in there. We tried to pay tribute to the original when we could, when it was possible,” says Cook.

Other sound highlights include the Resolute exploding in space, which caused massive sections of the ship to break apart and collide. For that, Cook says contact microphones were used to capture the sound of tin cans being ripped apart. “There were so many fun things in the show for sound. From the first episode with the ship crash and it sinking into the glacier to the black hole sequence and the Robot fight in the season finale. The show had a lot of different challenges and a lot of opportunities for sound.”

Lost in Space was mixed in the Anthony Quinn Theater at Sony Pictures in 7.1 surround. Interestingly, the show was delivered in Dolby’s Home Atmos format. Cook explains, “When they booked the stage, the producers weren’t sure if we were going to do the show in Atmos or not. That was something they decided to do later, so we had to figure out a way to do it.”

They mixed the show in Atmos while referencing the 7.1 mix and then played those mixes back in a Dolby Home Atmos room to check them, making any necessary adjustments and creating the Atmos deliverables. “Between updates for visual effects and music as well as the Atmos mixes, we spent roughly 80 days on the dub stage for the 10 episodes,” concludes Cook.

Michael Semanick: Mixing SFX, Foley for Star Wars: The Last Jedi

By Jennifer Walden

Oscar-winning re-recording mixer Michael Semanick from Skywalker Sound mixed the sound effects, Foley and backgrounds on Star Wars: The Last Jedi, which has earned an Oscar nomination for Sound Mixing.

Technically, this is not Semanick’s first experience with the Star Wars franchise — he’s credited as an additional mixer on Rogue One — but on The Last Jedi he was a key figure in fine-tuning the film’s soundtrack. He worked alongside re-recording mixers Ren Klyce and David Parker, and with director Rian Johnson, to craft a soundtrack that was bold and dynamic. (Look for next week’s Star Wars story, in which re-recording mixer Ren Klyce talks about his approach to mixing John Williams’ score.)

Michael Semanick

Recently, Semanick shared his story of what went into mixing the sound effects on The Last Jedi. He mixed at Skywalker in Nicasio, California, on the Kurosawa Stage.

You had all of these amazing elements — Skywalker’s effects, John Williams’ score and the dialogue. How did you bring clarity to what could potentially be a chaotic soundtrack?
Yes, there are a lot of elements that come in, and you have to balance these things. It’s easy on a film like this to get bombastic and assault the audience, but that’s one of the things that Rian didn’t want to do. He wanted to create dynamics in the track and get really quiet so that when it does get loud it’s not overly loud.

So when creating that I have to look at all of the elements coming in and see what we’re trying to do in each specific scene. I ask myself, “What’s this scene about? What’s this storyline? What’s the music doing here? Is that the thread that takes us to the next scene or to the next place? What are the sound effects? Do we need to hear these background sounds, or do we need just the hard effects?”

Essentially, it’s me trying to figure out how many frequencies are available and how much dialogue has to come through so the audience doesn’t lose the thread of the story. It’s about deciding when it’s right to feature the sound effects or take the score down to feature a big explosion and then bring the score back up.

It’s always a balancing act, and it’s easy to get overwhelmed and throw it all in there. I might need a line of dialogue to come through, so the backgrounds go. I don’t want to distract the audience. There is so much happening visually in the film that you can’t put sound on everything. Otherwise, the audience wouldn’t know what to focus on. At least that’s my approach to it.

How did you work with the director?
As we mixed the film with Rian, we found what types of sounds defined the film and what types of moments defined the film in terms of sound. For example, by the time you reach the scene when Vice Admiral Holdo (Laura Dern) jumps to hyperspace into the First Order’s fleet, everything goes really quiet. The sound there doesn’t go completely out — it feels like it goes out, but there’s sound. As soon as the music peaks, I bring in a low space tone. Well, if there was a tone in space, I imagine that is what it would sound like. So there is sound constantly through that scene, but the quietness goes on for a long time.

One of the great things about that scene was that it was always designed that way. While I knew how great that scene was, I didn’t really get it until I saw it with an audience. They became the soundtrack, reacting with gasps. I was at a screening in Seattle, and when we hit that scene you could hear that people were just stunned; one guy in the audience went, “Yeah!”

There are other areas in the film where we go extremely quiet or take the sound out completely. For example, when Rey (Daisy Ridley) and Kylo Ren (Adam Driver) first force-connect, the sound goes out completely… you only hear a little bit of their breathing. There’s one time when the force connection catches them off guard — when Kylo had just gotten done working out and Rey was walking somewhere — we took the sound completely out while she was still moving.

Rian loved it because when we were working on that scene we were trying to get something different. We used to have sound there, all the way through the scene. Then Rian said, “What happens if you just start taking some of the sounds out?” So, I started pulling sounds out and sure enough, when I got the sound all the way out — no music, no sounds, no backgrounds, no nothing — Rian was like, “That’s it! That just draws you in.” And it does. It pulls you into their moment. They’re pulled together even though they don’t want to be. Then we slowly brought it back in with their breathing, a little echo and a little footstep here or there. Having those types of dynamics worked into the film helped the scene at the end.

Rian shot and cut the picture so we could have these moments of quiet. It was already set up, visually and story-wise, to allow that to happen. When Rey goes into the mirror cave, it’s so quiet. You hear all the footsteps and the reverbs and reflections in there. The film lent itself to that.

What was the trickiest scene to mix in terms of the effects?
The moment Kylo Ren and Rey touch hands via the force connection. That was a real challenge. They’re together in the force connection, but they weren’t together physically. We were cutting back and forth from her place to Kylo Ren’s place. We were hearing her campfire and her rain. It was a very delicate balance between that and the music. We could have had the rain really loud and the music blasting, but Rian wanted the rain and fire to peel away as their hands were getting closer. It was so quiet and when they did touch there was just a bit of a low-end thump. Having a big sound there just didn’t have the intimacy that the scene demanded. It can be so hard to get the balance right to where the audience is feeling the same thing as the characters. The audience is going, “No, oh no.” You know what’s going to come, but we wanted to add that extra tension to it sonically. For me, that was one of the hardest scenes to get.

What about the action scenes?
They are tough because they take time to mix. You have to decide what you want to play. For example, when the ships are exploding as they’re trying to get away before Holdo rams her ship into the First Order’s, you have all of that stuff falling from the ceiling. We had to pick our moments. There’s all of this fire in the background and TIE fighters flying around, and you can’t hear them all or it will be a jumbled mess. I can mix those scenes pretty well because I just follow the story point. We need to hear this to go with that. We have to have a sound of falling down, so let’s put that in.

Is there a scene you had fun with?
The fight in Snoke’s (Andy Serkis) room, between Rey and Kylo Ren. That was really fun because it was like wham-bam, and you have the lightsaber flying around. In those moments, like when Rey throws the lightsaber, we drop the sound out for a split second so when Kylo turns it on it’s even more powerful.

That scene was the most fun, but the trickiest one was that force-touch scene. We went over it a hundred different ways, to just get it to feel like we were with them. For me, if the sound calls too much attention to itself, it’s pulling you out of the story, and that’s bad mixing. I wanted the audience to lean in and feel those hands about to connect. When you take the sound out and the music out, then it’s just two hands coming together slowly. It was about finding that balance to make the audience feel like they’re in that moment, in that little hut, and they’re about to touch and see into each other’s souls, so to speak. That was a challenge, but it was fun because when you get it, and you see the audience react, everyone feels good about that scene. I feel like I did something right.

What was one audio tool that you couldn’t live without on this mix?
For me, it was the AMS Neve DFC Gemini console. All the sounds came into that. The console was like an instrument that I played. I could bring any sound in from any direction, and I could EQ it and manipulate it. I could put reverb on it. I could give the director what he wanted. My editors were cutting the sound, but I had to have that console to EQ and balance the sounds. Sometimes it was about EQing frequencies out to make a sound fit better with other sounds. You have to find room for the sounds.

I could move around on it very quickly. I had Rian sitting behind me saying, “What if you roll back and adjust this or try that.” I could ease those faders up and down and hit it just right. I know how to use it so well that I could hear stuff ahead of what I was doing.

The Neve DFC was invaluable. I could take all the different sound formats and sample rates and it all came through the console, and in one place. It could blend all those sources together; it’s a mixing bowl. It brought all the sounds together so they could all talk to each other. Then I manipulated them and sent them out and that was the soundtrack — all driven by the director, of course.

Can you talk about working with the sound editor?
The editors are my right-hand people. They can shift things and move things and give me another sound. Maybe I need one with more mid-range because the one in there isn’t quite reading. We had a lot of that. Trying to get those explosions to work and to come through John Williams’ score, sometimes we needed something with more low-end and more thump or more crack. There was a handoff in some scenes.

On The Last Jedi, I had sound effects editor Jon Borland with me on the stage. Bonnie Wild had started the project and had prepped a lot of the sounds for several reels — she, Jon and Ren Klyce, who oversaw the whole thing. But Jon was my go-to person on the stage. He did a great job. It was a bit of a daunting task, but Jon is young and wants to learn and gave it everything he had. I love that.

What format was the main mix?
Everything was done in Atmos natively, then we downmixed to 7.1 and 5.1 and all the other formats. We were very diligent about having the downmixed versions match the Atmos mix the best that they could.
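(Conceptually, a downmix is just a matrix of gain coefficients applied per channel. As a hedged illustration, here is a 5.1-to-stereo fold-down using common ITU-style -3 dB coefficients; actual film deliverables are checked and adjusted by ear, as described above, not generated blindly like this.)

```python
import numpy as np

def downmix_51_to_stereo(ch):
    """Fold a 5.1 stem, ordered (L, R, C, LFE, Ls, Rs), to stereo.
    Center and surrounds are mixed in at -3 dB; LFE is dropped."""
    L, R, C, LFE, Ls, Rs = ch
    g = 10 ** (-3 / 20)                    # -3 dB, about 0.707
    lo = L + g * C + g * Ls
    ro = R + g * C + g * Rs
    peak = max(np.max(np.abs(lo)), np.max(np.abs(ro)), 1.0)
    return np.stack([lo, ro]) / peak       # simple safeguard against clipping
```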

Any final thoughts you’d like to share?
I’m so glad that Rian chose me to be part of the mix. This film was a lot of fun and a real collaborative effort. Rian is the one who really set that tone. He wanted to hear our ideas and see what we could do. He wasn’t sold on one thing. If something wasn’t working, he would try things out until it did. It was literally sorting out frequencies and getting transitions to work just right. Rian was collaborative, and that creates a room of collaboration. We wanted a great track for the audience to enjoy… a track that went with Rian’s picture.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for sound editing director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B-24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that the Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at Technicolor on the Paramount lot. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”

One challenge of the mix and design was to make sure the audience knew the location of a sound by the texture of it. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins, to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we are in the basement, and the music and dialogue are happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound was coming from. Everything we did helped the picture establish location.”

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what’s happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. “We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy boot on wood floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he’s angry. You can feel it in every footstep he takes. He’s throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene,” says Sullivan.

While the film often feels realistic, there were stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper’s house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. “The editing sounds silky smooth. When we transition out of the chanting on-camera, that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing,” says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

The new Tom and Jerry Show score combines vintage and modern sounds

By Jennifer Walden

Tom and Jerry have been locked in conflict since the 1940s when animators William Hanna and Joseph Barbera pitted cat against mouse in a theatrical animated series for MGM’s cartoon studio. Their Academy Award-winning Tom and Jerry short films spurred numerous iterations over the years by different directors and animation studios.

The latest reboot, The Tom and Jerry Show, produced by Warner Bros. Animation and Renegade Animation, and directed by Darrell Van Citters, started airing on Cartoon Network in 2014. It didn’t really come into its own until Season 2, which began airing in 2016.

Vivek Maddala

Vivek Maddala is co-composer on the series. “The storytelling is getting better and better. Ostensibly, it’s a children’s show but what I’m finding is the writers seem to be having a lot of fun with allegorical references. It features layered storytelling that children probably wouldn’t be able to appreciate. For example, Tom’s love interest, a cat named Toodles, is an aspiring dancer by night but her day job is being a spot welder for heavy construction. Obviously, this is a Flashdance reference, so I was able to thread oblique references to Flashdance in the score.”

New episodes of The Tom and Jerry Show are currently airing on Cartoon Network, and Maddala will be composing 39 of the episodes in Season 3.

As with Hanna-Barbera’s animated theatrical shorts, the characters of Tom and Jerry rarely talk, although other recurring characters are voiced. Music plays an essential role in describing the characters’ actions and reactions. Maddala’s compositions are reminiscent of composer Scott Bradley’s approach to the original Tom and Jerry animations. Comfortable cartoon tropes like trumpet blasts and trombone slides, pizzicato plucks and timpani bounces punctuate a string-and woodwind-driven score. “Scott Bradley’s scoring technique is the gold standard. It is beautiful writing,” he says.

In their initial conversations, director Van Citters regularly referenced Bradley’s scoring technique. Maddala studied those scores carefully and frequently revisits them while writing his own scores for the show. Maddala also listens to “music that is completely unrelated, like Led Zeppelin or Marvin Gaye, to help jog my imagination. The music I’m writing for the show very much sounds like me. I’m taking some of the approaches that Scott Bradley used but, ultimately, I am using my own musical vocabulary. I have a certain way of hearing drama and hearing action, and that’s what the score sounds like.”

Maddala’s vintage-meets-modern compositions incorporate contemporary instrumentation and genres like blues guitar for when the cool stray cat comes onto the scene, and an electro-organ of the muziak persuasion for a snack food TV commercial. His musical references to Flashdance can heard in the “Cat Dance Fever” episode, and he gives a nod to Elmer Bernstein’s score for The Magnificent Seven in the episode “Uncle Pecos Rides Again.”

Each new musical direction or change of instrument doesn’t feel abrupt. It all melts into the quintessential Tom and Jerry small orchestra sound. “Darrell Van Citters and Warner Bros. are giving me quite a bit of autonomy in coming up with my own musical solutions to the action on-screen and the situations that the characters are experiencing. I’m able to draw from a lot of different things that inspire me,” explains Maddala.

Instruments & Tools
His score combines live recordings with virtual instruments. His multi-room studio in Los Angeles houses a live room, his main composing room and a separate piano room. Maddala keeps a Yamaha C3 grand piano and a drum kit always mic’d up so he can perform those parts whenever he needs. He also records small chamber groups there, like double-string quartets and woodwind quartets. The string ensembles sometimes consist of seven violins (four first and three second), three violas and three cellos, captured using a Blumlein pair recording configuration (a stereo recording technique that produces a realistic stereo image) with ribbon mics to evoke a vintage sound. He chooses AEA N8 ribbon mics matched with AEA’s RPQ 500 mic pre-amps.

Maddala also uses several large diaphragm tube condenser mics he designed for Avid years ago, such as the Sputnik. “The Sputnik is a cross between a classic Neumann U47 capsule with the original M7 design, and an AKG C 12 mic with the original CK12 capsule. The capsule is sort of like a cross between those two mics. The head amp is based on the Telefunken ELA M 251.”

Maddala’s composing room.

Maddala uses three different DAWs. He composes in Cakewalk’s Sonar on a PC and runs video through Steinberg’s Cubase on a Mac. The two systems are locked together via SMPTE timecode. On the Mac, he also runs Avid Pro Tools 12 for delivering stems to the dub stage. “The dub is done in Pro Tools so they usually ask to have a Pro Tools session delivered to them. Once the score is approved, I copy the stems into a Pro Tools session so it’s self-contained, save that and post it to the FTP server.”

Maddala got his start in composing for film by scoring classic silent films from the 1920s, which Warner Bros. and TCM restored in order to release them to today's audiences. He worked with recording/mix engineer Dan Blessinger on those silent films, and Blessinger, the sound designer on The Tom and Jerry Show, recommended Maddala for the gig. "A lot of the classic silent films from the 1920s never had a score associated with them because the technology didn't exist to marry sound and picture. About 10 or 15 years ago, when TCM was releasing these films to modern audiences, they needed new scores. So I started doing that, which built up my chops for scoring something like a Tom & Jerry cartoon where there is wall-to-wall music," concludes Maddala.


Jennifer Walden is a New Jersey-based writer and audio engineer.

Bates Motel’s Emmy-nominated composer Chris Bacon

By Jennifer Walden

The creators of A&E’s Bates Motel series have proven that it is possible to successfully rework a classic film for the small screen. The series, returning for Season 5 in 2017, is a contemporary prequel to Alfred Hitchcock’s Psycho. It tells the story of how a young Norman Bates becomes the Norman Bates of film legend.

Understandably, when the words “contemporary” and “prequel” are combined, it may induce a cringe or two, as LA-based composer Chris Bacon admits. “When I first heard about the series, I thought, ‘That sounds like a terrible idea.’ Usually when you mess with an iconic film, the project can go south pretty quick, but then I heard who was involved — writers/producers Carlton Cuse and Kerry Ehrin. I’m a huge fan of their work on Lost and Friday Night Lights, so the idea sounded much more appealing. I went from feeling like ‘this is a terrible idea’ to ‘how do I get involved in this!’”

Chris Bacon

Bacon, who has been the Bates Motel composer since Season 1, says their goal from the start was to make a series that wasn’t a Psycho knock-off. “It was not our goal to tip our hats in obvious ways to Psycho. We weren’t trying to make it an homage. We weren’t trying to inhabit the universe that was so masterfully created by Alfred Hitchcock and composer Bernard Herrmann,” he explains.

Borrowing Some Strings
Having a long-established love of Herrmann’s music, it was hard for Bacon not to follow the composer’s lead, particularly when it came to instrumentation. Bates Motel’s score strongly features — you guessed it — strings. “One reason Herrmann stuck solely to strings was because the film was black and white,” explains Bacon. “He chose a monochromatic palette, as far as sound goes, without having woodwind and percussion. On the series, I take it farther. I use percussion and synth effects, but it is mostly string driven.”

Since the strings are the core of the score, Bacon felt the expressive qualities that live musicians add to the music would be more emotionally impactful than what he could get from virtual instruments. “I did the first three episodes using all virtual instruments. They sounded good and they did their job dramatically, but in talking further to the people involved who handle the purse strings, I was able to convince them to try an episode with a real string section.”

Once Bacon was able to A/B the virtual strings against the real strings, there was no denying the benefit of a live string section. "They could hear what real musicians can bring to the music — the kind of homogenous imperfection that comes when you have that many people who are all outstanding but with each of them treating the music just a little bit differently. It brings new life to it. There's a lot of depth to it. I feel very fortunate and appreciative that the account team on the show has been supportive of this."

Tools & Workflow
For the score each week, Bacon composes in Steinberg Cubase and runs Ableton Live via ReWire by Propellerhead. His samples are hosted in Vienna Ensemble Pro on a separate computer while all audio is monitored and processed through Avid Pro Tools on a separate rig. Since his compositions start with virtual instruments, Bacon has an extensive collection of sample libraries. He uses string libraries from Cinesamples, 8Dio and Spitfire Audio. "I have lots of custom stuff," he says. "I love the Vintage Steinway D Piano from Galaxy."

Once he’s completed the cues, he hands his MIDI tracks over to orchestrator Robert Litton, who determines the note assignments for each member of the 18-piece string section. The group is recorded at The Bridge Recording studio in Glendale, California, owned by Greg Curtis. There, Bacon joins recording engineer James Hill in the control room. “I enjoy conducting if I can, but on this series it makes more sense for me to be in the control room because we often have only three hours to record roughly 35 minutes of music. Also, the music always sounds different in the control room. Ultimately, when you are doing this kind of work, what matters is what is coming out of the speakers because that’s what you’re actually going to hear in the soundtrack.”

After the live strings are recorded, engineer Hill mixes those against the virtual instrument stems of the woodwinds, percussion and synth elements. He creates a stem of the live strings to replace the virtual strings stem.

“We don’t do an in-depth mix like you would typically do for film,” says Bacon. “At this point, I’m able to leave Jim [Hill] the demo song as a reference and let him do a quick mix. Then, he creates a live string stem and all of those stems are sent over to the dub stage.”

In Season 4, Ep. 9 "Forever," the moment arrived when young Norman (Freddie Highmore) did what he was destined to do — kill his mother Norma (Vera Farmiga). That's not much of a spoiler if you're familiar with the film Psycho, but what wasn't known was just how Norman would do it. "This death scene was something I had been thinking about for four years. The way they did it — and I think they got it right — was that they made the death a very personal, emotional, thoughtful and, in a very twisted way, probably the most considerate way you can kill your mother," laughs Bacon. "But really it was the only way that he and his mother, these two damaged broken people, could find peace together… and that was in death."

The Norma/Norman Theme
In the four-and-a-half-minute scene that reveals Norma and Norman's lifeless bodies, Bacon portrays tension, fear and sadness by weaving a theme that he wrote for Norma with a theme he wrote for Norman and Norma. Norma's theme in the death scene plays on a large section of violins as her new husband Alex (Nestor Carbonell) tries to resuscitate her.

“The foundation for her song has been laid over the course of two seasons, starting in Season 2, where we look at her family background, like her parents and her brother, and discover how she became her,” explains Bacon. “I didn’t really know which of her themes I was going to use for her death scene, but it seemed to feel, as we’re looking at Norma, that this is a moment about her, so I went back to that theme.”

As Norman wakes up, Bacon's theme for Norman and Norma plays on sparse piano. In comparison to Norma's theme on soaring strings, this theme feels small and lost, highlighting Norman's shock and sadness that he's survived but now she's gone. "The score had to convey a lot of emotions," describes Bacon. "I tried to keep it relatively simple but as we went bigger it seemed to fit the enormity of the moment. This is a big moment that we all knew was coming and kind of dreaded. The piece feels different than the rest of the show, and rightfully so, while still being a part of the same sonic landscape."

His original dramatic score on the “Forever” episode has been nominated for a 2016 Emmy award. Bacon says he chose this episode, above all the others in Season 4, because of the emotionally visceral death scene. It’s a big moment for the story and the score.

“During the whole death sequence there are barely any words. Alex is saying, ‘Stay with me,’ to Norma. Then you have Norman wake up and say, ‘Mother?’ at the very end. But that’s it. That section is like a silent movie in a lot of ways. It’s very reliant on the score.”

VR Audio: Crytek goes to new heights for VR game ‘The Climb’

By Jennifer Walden

Dealing with locomotion, such as walking and especially running, is a challenge for VR content developers — but what hasn't been a challenge in creating VR content? Climbing, on the other hand, proved to be a simple yet interesting form of locomotion that independent game developer Crytek found sustainable for the duration of a full-length game.

Crytek, known for the Crysis game series, recently released their first VR game title, The Climb, a rock climbing adventure exclusively for the Oculus Rift. Players climb, swing and jump their way up increasingly difficult rock faces modeled after popular climbing destinations in places like Indonesia, the Grand Canyon and The Alps.

Crytek’s director of audio, Simon Pressey, says their game engine, CryEngine, is capable of UltraHD resolutions higher than 8K. They could have taken GPS data of anywhere in the world and turned that into a level on The Climb. “But to make the climbing interesting and compelling, we found that real geography wasn’t the way to go. Still, we liked the idea of representing different areas of the world,” he says. While the locations Crytek designed aren’t perfect geographical imitations, geologically they’re pretty accurate. “The details of how the rocks look up close — the color, the graininess and texture — they are as close to photorealistic as we can get in the Oculus Rift. We are running at a resolution that the Rift can handle. So how detailed it looks depends on the Rift’s capabilities.”

Keep in mind that this is first-generation VR technology. “It’s going to get better,” promises Pressey. “By the third-generation of this, I’m sure we’ll have visuals you can’t tell apart from reality.”

Simon Pressey

The Sound Experience
Since the visuals aren’t perfect imitations of reality, the audio is vital for maintaining immersion and supporting the game play. Details in the audio actually help the brain process the visuals faster. Even still, flaws and all, first-gen VR headsets give the player a stronger connection to his/her actions in-game than was previously possible with traditional 2D (flat screen) games. “You can look away from the screen in a traditional game, but you can’t in VR. When you turn around in The Climb, you can see a thousand feet below you. You can see that it’s a long way down, and it feels like a long way down.”

One key feature of the Oculus Rift is the integrated audio — it comes equipped with headphones. For Pressey, that meant knowing the exact sound playback system of the end user, a real advantage from a design and mix standpoint. “We were designing for a known playback variable. We knew that it would be a binaural experience. Early on we started working with the Oculus-provided 3D encoder plug-in for Audiokinetic’s Wwise, which Oculus includes with their audio SDK. That plug-in provides HRTF binaural encoding, adding the z-axis that you don’t normally experience even with surround sound,” says Pressey.

He explains that the sounds start as mono source points positioned in 3D space using middleware like Wwise. Then, using the Oculus audio SDK via the middleware, those signals are downmixed to binaural stereo with HRTF (head-related transfer function) processing, which adds a spatialized effect. So even though the player is listening through two speakers, he/she perceives sounds as coming from the left, the right, in front, behind, above and below.
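The level and timing cues underlying that binaural effect can be sketched in a few lines. This is a hypothetical illustration, not the Oculus SDK or the Wwise plug-in: it applies only an interaural level difference (constant-power panning) and an interaural time difference, whereas a true HRTF also filters each ear's signal by direction.

```python
import math

def spatialize_mono(samples, azimuth_deg, sr=48000, head_width_m=0.18, c=343.0):
    """Crude binaural positioning of a mono source.

    Applies an interaural level difference via constant-power panning,
    plus an interaural time difference: the far ear hears the sound a
    fraction of a millisecond later because of the extra path length
    around the head. Parameter names and defaults are illustrative.
    """
    az = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    # Constant-power pan keeps perceived loudness steady as the source sweeps.
    left_gain = math.cos((az + math.pi / 2) / 2)
    right_gain = math.sin((az + math.pi / 2) / 2)
    # Extra travel distance to the far ear, converted to whole samples of delay.
    itd = int(round(abs(head_width_m * math.sin(az)) / c * sr))
    left = [left_gain * s for s in samples]
    right = [right_gain * s for s in samples]
    pad = [0.0] * itd
    if azimuth_deg >= 0:          # source to the right: left ear is the far ear
        left, right = pad + left, right + pad
    else:                         # source to the left: right ear is delayed
        left, right = left + pad, pad + right
    return left, right
```

At 48kHz and a typical head width, a hard-panned source arrives at the far ear roughly 25 samples (about half a millisecond) late, which is exactly the cue the brain uses to localize it.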

Since most VR is experienced with headphones, Pressey feels there is an opportunity to improve the binaural presentation of the audio [i.e., better headphones or in-ear monitors], and to improve 3D positional audio with personalized HRTFs and Ambisonics. “While the visuals are still very apparently a representation of reality, the audio is perceived as realistic, even if it is a totally manufactured reality. The headphone environment is very intimate and allows greater use of dynamic range, so subtle mixes and more realistic recordings and rendering are sort of mandatory.”

Realistic Sound
Pressey leads the Crytek audio team, and together they collaborated on The Climb’s audio design, which includes many different close-up hand movements and grabs that signify the quality of the player’s grip. There are sweaty, wet sounding hand grabs. There are drier, firmer hand grabs for when a player’s hands are freshly chalked. There are rock crumbles for when holds crumble away.

At times a player needs to wipe dirt away from a hold, or brush aside vegetation. These are very subtle details that in most games wouldn’t be sounded, says Pressey. “But in VR, we are going into very subtle detail. Like, when you rub your hands over plants searching for grips, we are following your movement speed to control how much sound it makes as you ruffle the leaves.” It’s that level of detail that makes the immersion work. Even though in real life a sound so small would probably be masked by other environmental sounds, in the intimacy of VR, those sounds engage the player in the action of climbing.


Breathing and heartbeat elements also pull a player into the game experience. After moving through several holds, a player's hands get sweaty, and the breathing sound becomes more labored. If the hold crumbles or if a player is losing his/her grip, the audio design employs a heartbeat sound. "It is not like your usual game situation where you hear a heartbeat if you have low health. In The Climb you actually think, 'I've got to jump!' Your heart is racing, and after you make the jump and chalk your hands, then your heartbeat and your breathing slow down, and you physically relax," he says.

Crytek’s aim was to make The Climb believable, to have realistic qualities, dynamic environments and a focused sound to mimic the intensity of focus felt when concentrating on important life or death decisions. They wanted the environment sounds to change, such as the wind changing as a player moves around a corner. But, they didn’t want to intentionally draw the player’s attention away from climbing.

For example, there’s a waterfall near one of the climbs, and the sound for it plays subtly in the background. If the player turns to look at it, then the waterfall sound fades up. They are able to focus the player’s attention by attenuating non-immediate sounds. “You don’t want to hear that waterfall as the focus of your attention and so we steer the sound. But, if that is what you’re focusing on, then we want to be more obvious,” explains Pressey.

The Crytek audio team

The Crytek audio team records, designs and edits sounds in Steinberg’s Nuendo 7, which works directly with Audiokinetic’s Wwise middleware that connects directly to the CryEngine. The audio team, which has been working this way for the past two years, feels the workflow is very iterative, with the audio flowing easily in that pipeline from Nuendo 7 to Wwise to CryEngine and back again. They are often able to verify the audio in-game without needing to request code support. If a sound isn’t working in-game, it can be tweaked in Wwise or completely reworked in Nuendo. All aspects of the pipeline are version controlled and built for sharing work across the audio team.

“It’s a really tight workflow and we can do things quickly. In the game world, speed is everything,” says Pressey. “The faster you get your game to market the sooner you recoup on your very heavy R&D.”

Two factors that propelled this workflow are the collaboration between Crytek, Audiokinetic and Steinberg in designing software tailored to the specific needs of game audio pros, and Crytek’s overhaul of CryEngine where they removed the integrated FMOD-based audio engine in favor of using an external audio engine. Running the audio engine separate from the game engine not only improves the game engine efficiency, it also allows updates to the audio engine as needed without fear of breaking the game engine.

Within hours of Wwise releasing an update, for example, Pressey says their system can be up to date. “Previously, it could’ve been a long and complicated process to incorporate the latest updates. There was always the risk of crashing the whole system by making a change because the code was so mixed up with the rest of the system. By separating them we can always be running the latest versions of things without risking anything.”

Having that adaptability is essential for VR content creation since the industry is changing all the time. For example, Sony’s PS4 VR headset release is slated for this fall, so they’re releasing a new SDK about every week or so, according to Pressey.

CryEngine is freely available for anyone to use. VR games developed with CryEngine will work for any VR platform. CryEngine is also audio middleware agnostic, meaning it can talk to any audio middleware, be it Wwise, FMOD or proprietary middleware. Users can choose a workflow that best suits the needs of their game.

Pressey finds creating for VR to be an intensely experimental process, for every discipline involved in game development. While most members on the Crytek team have solved problems relating to a new IP or a new console, Pressey says, “We were not prepared for this amount of new. We were all used to knowing what we were doing, and now we are experimenting with no net to fall back on. The experience is surprisingly different; the interaction using your eye and head tracking is much more physical. It is more intimate. There is an undeniable and inescapable immersion, in that you can’t look away as the game world is all around you. You can’t switch off your ears.” The first time Pressey put on a VR headset, he knew there was no going back. “Before that, I had no real idea. It is the difference between reading about a country and visiting it.”

Upcoming Release
Crytek will be presenting a new VR release titled Robinson — The Journey at E3 this month, and Pressey gives us a few hints as to what the game experience might be like. He says that VR offers new ways of storytelling, such as nonlinear storytelling. “Crytek and the CryEngine team have developed a radically new Dynamic Response System to allow the game to be intelligent in what dialog gets presented to the player at what time. Aspects of a story can be sewn together and presented based on the player’s approach to the game. This technology takes the idea of RPG-like branching storylines to a new level, and allows narrative progression in what I hope will be new and exciting territory for VR.”

The Climb uses this Dynamic Response System in a limited capacity during the tutorial where the instructor is responsive to the player’s actions. “Previously, to be that responsive, a narrative designer or level designer would have to write pages of logic to do what our new system does very simply,” concludes Pressey.
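Rule-matched dialog of the kind Pressey describes can be sketched as picking the most specific line whose conditions all hold in the current game state. The data layout and field names below are invented for illustration and are not Crytek's Dynamic Response System API:

```python
def choose_line(lines, state):
    """Pick the best-matching dialog line for the current game state.

    Each line carries a 'when' dict of conditions; a line is eligible if
    every condition matches the state, and the most specific eligible
    line (the one with the most conditions) wins. A toy sketch of the
    rule-matching idea, with hypothetical structure.
    """
    eligible = [line for line in lines
                if all(state.get(key) == val for key, val in line["when"].items())]
    # An unconditional line (empty 'when') acts as the fallback.
    return max(eligible, key=lambda line: len(line["when"]), default=None)

# Hypothetical tutorial lines: a generic prompt plus a reactive one.
TUTORIAL_LINES = [
    {"when": {}, "text": "Keep climbing!"},
    {"when": {"fell": True}, "text": "Shake it off and try again."},
]
```

The appeal is that adding a new reactive line means adding one data entry, rather than the "pages of logic" a designer would otherwise script by hand.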

Jennifer Walden is an audio engineer and writer based in New Jersey.

The sound of Netflix’s ‘Wet Hot American Summer’ prequel

Supervising sound editor J.M. Davey weighs in about First Day of Camp.

By Jennifer Walden

David Wain’s homage to summer camp in the 1980s, Wet Hot American Summer (2001), managed to get away with casting actors in their late 20s and early 30s as teenage camp counselors — talk about suspending your disbelief. Well he took that premise a bit further recently.

Wain’s latest offering, the Netflix series Wet Hot American Summer: First Day of Camp, uses the same actors from the first film — Paul Rudd, Janeane Garofalo, Michael Showalter, Amy Poehler, Bradley Cooper, Zak Orth and Michael Ian Black, all now in their 40s — to recount what happened in those first days that summer at Camp Firewood. It’s a prequel to the Wet Hot American Summer film, but the kicker is that all the actors are now 14 years older in real life — the fact that the characters are noticeably older only serves to accentuate the joke.

J.M. Davey

Supervising sound editor J.M. Davey, who is LA-based, may not have worked on the Wet Hot American Summer film, but he's worked with director/writer Wain on the Emmy award-winning comedy series Childrens Hospital, where he learned firsthand about Wain's comedic use of film references. For example, Wain did an Ocean's Eleven-style episode that involved robbing a sperm bank. WHAS: First Day of Camp also references films, but in a more general way. "It's more like we're doing send-ups of genre conventions instead of referencing specific films," says Davey.

Knives & Other Kitchen Stuff
That isn’t to say that specific films didn’t influence the sound for certain scenes in First Day of Camp. (Spoiler Alert!) In the action-packed final episode, camp cook Gene takes on government-hired assassin The Falcon (Jon Hamm). The sound for the Kung-Fu knife fight was inspired by the sound work of Tobias Poppe and John Marquis on G.I. Joe: Retaliation. “I really like the way they handled the blades in G.I. Joe: Retaliation. It had this foundation of being a great-sounding action moment with a layer of playfulness on top of it,” explains Davey. A perfect fit for the comedy series, where lightning-quick knife flicks are effortlessly caught between fingers and teeth.

Gene and The Falcon also employ pots, pans and utensils in their fight. The sounds for those were all added in post since the pots on-set were made of rubber for safety reasons. “They look really good, very convincing, especially that big pot that Gene kicks and The Falcon catches,” says Davey, who used a Doppler effect on the pot kick sound to give it a sense of motion. “It sounds like Gene kicked it with such force that it’s vibrating as it flies through the air. So instead of just whooshing, it has this metallic vibration sound.”
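The pitch bend behind a Doppler treatment like that comes from the standard moving-source formula: a source approaching at speed v sounds sharp by a factor of c/(c - v), then flat as it recedes. The sketch below is plain physics for illustration, not the plug-in Davey used:

```python
def doppler_factor(source_speed_mps, approaching=True, c=343.0):
    """Pitch ratio a stationary listener hears from a moving sound source.

    f_heard = f_emitted * c / (c - v), where v is the source's speed
    toward the listener (positive when approaching, negative when
    receding) and c is the speed of sound in air.
    """
    v = source_speed_mps if approaching else -source_speed_mps
    return c / (c - v)
```

Sweeping the ratio from `doppler_factor(v)` down through `doppler_factor(v, approaching=False)` across the length of a clip, and resampling accordingly, produces the characteristic fly-by bend: pitch up on approach, down as the object whips past.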

To add weight to the metal sounds, particularly on the sound of Gene rolling across a large metal table, Davey used reFuse’s Lowender to add sub-harmonics, as well as Altiverb by Audio Ease for a metallic ring-out. “I really wanted it to feel like it was a grown man rolling across the metal table. I only had two different sounds for that and I had to get creative with pitching and adding detail with reverb to make them work for all the times he rolled across it.”

For punches, Davey used the Boom Library Close Combat Construction Kit in combination with punch movements and swooshes he recorded for the TV miniseries Halo 4: Forward Unto Dawn. “I like Boom’s construction kits because the sounds are really well recorded and they aren’t over-baked. I can do my own limiting, distortion, and/or compression to shape them how I want to use them.”

In the Camp Firewood face-off with Camp Tiger Claw — the rich kid camp on the other side of the lake — Davey was balancing the sound effects of what’s happening on screen versus what was happening off screen, as well as mixing in a fair amount of dialogue and composer Craig Wedren’s full score.

“I had to braid the dialogue, the music and the effects throughout that scene,” says Davey. “It was very important to watch those scenes over and over again. That’s advice Andy Nelson (Oscar-winning re-recording mixer and sound engineer) once gave me: ‘Watch your work in the biggest chunk you can as many times as you can. Watch as if you were a viewer and not the sound designer.’”

The Music
Music plays a key role in the storyline of First Day of Camp. In addition to the campers staging the musical “Electro City” — complete with auditions, rehearsals and the final performance — there is also a reclusive rock star, Eric (Chris Pine), residing at Camp Firewood. He’s a camp legend. After being signed to a major label by an A&R man attending a camp concert, Eric lost his muse in the recording studio and retreated back to camp to hide.


Composer Wedren and the musicians at Pink Ape handled all the music, from the musical theater numbers to the rock hits. They replaced the guide tracks used during production with recordings of the actors in the studio. Davey explains that singing from the production tracks, for both the musical theater performances and Eric's roof-top concert during the final episode, made its way into the final mix. "We wanted to be very careful about the balance between the music sounding realistic in the space and going into full-on musical mode where everything sounds great," he explains.

Davey asked composer Wedren to deliver two sets of stems: one set completely dry and another with Wedren’s studio polish on it. This allowed Davey the freedom to transition from realistic to enhanced musical numbers as it fit the scene. For example, all the auditions for the musical theater were production tracks with the exception of Katie’s performance where they decided to dub her performance to make her (Marguerite Moreau) stand out.

For Eric’s roof-top performance, Davey started out using the production track vocals with realistic reverb and slap and then moved into more musical reverb and slap. Throughout the song, Eric’s vocal track cuts back and forth between the production track and the vocals recorded in the studio. “There was even the loop group singing along for the ‘higher’ lines,” he says. “We were firing on all cylinders, using all the tools in the tool chest to create that moment.”


The Shofar!
Music editor Emily Kwong was the key to clear communication between Davey and the music team, wrangling the numerous tracks and stems sent by composer Wedren. Kwong also crafted the long shofar solo for Cooperberg’s performance in the final episode. The shofar, a musical instrument made from a ram’s horn and used for Jewish religious purposes, is very difficult to play.

“The shofar is not an instrument that can be played like a jazz trumpet,” says Davey. “We tried to create that solo from samples of shofars, and even other instruments we pitch- and tone-manipulated to sound like a shofar, but that didn’t work.”

Davey ended up recording himself making random noises for several minutes on the shofar. “Emily took that track and put it together in a way that it sounded like somebody struggling through a performance.”

At one point in the series, the entire camp is walking around, blowing shofars. To create that sound, Davey handed out shofars to the loop group and captured the session in L-C-R. “It was great because the loop group didn’t know how to play the shofar and neither did the kids in the camp,” says Davey. “By recording in L-C-R, I had the option of a nice wide stereo track and I could just place the shofars all around and they would all sound different and terrible.”
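Placing the shofars “all around” the image is a console move, but the core idea can be sketched simply. This constant-power pan (an illustrative assumption, not AnEFX’s actual toolchain) positions a mono recording anywhere between left and right while keeping its perceived loudness constant:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono source in a stereo field with a constant-power pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    theta = (pan + 1.0) * np.pi / 4.0   # map pan to [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=0)
```

Spreading many sources across the field is then just a matter of giving each its own pan value; because left² + right² equals the source power at every position, no shofar gets louder or quieter as it moves.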

Production Sound & ADR
Production sound mixer Lee Ascher had his work cut out for him on-set. Davey notes the series is like a Robert Altman film, with many characters interacting and speaking all the time. The series was shot very quickly and in a challenging environment, with traffic and generator noise on some of the tracks. The combination of replacing noisy lines and recording the actors singing resulted in over 475 tracks of ADR. “I made a super-session of the ADR because there was so much of it,” says Davey. “I didn’t even have all the singing because some we recorded at Craig’s studio.”

One challenge for ADR was replacing the voice of George Dalton — the on-camera actor who played Arty, the radio station kid — with Samm Levine’s voice. “George Dalton did an excellent job and really brought that character to life, but it was decided to use Samm’s voice instead. Fortunately for me, Samm is very talented at ADR and voiceover,” reports Davey.

Using Pro Tools’ Elastic Time and Serato’s Pitch ’N Time Pro, Davey tweaked the voice into place. However, something about Levine’s voice wasn’t quite marrying to the visuals. “I experimented and eventually landed on using Pitch ’N Time Pro to pitch up Samm’s voice just slightly, without changing the speed of his dialog. The latest version of Pitch ’N Time Pro has a ‘Voice’ algorithm that works amazingly well for human voice manipulation. The plug-in is a major tool in my work, from dialog to sound effects design. It’s not an inexpensive plug-in, but it’s worth every penny.”
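Serato’s duration-preserving algorithm is proprietary, but the pitch-ratio math behind “pitching up slightly” can be seen in plain resampling. The toy function below (its name and approach are assumptions for illustration, not Serato’s method) raises pitch by a factor of 2^(semitones/12); unlike Pitch ’N Time Pro, it also shortens the audio by that same ratio, and keeping the duration intact would additionally require a time-stretch stage such as a phase vocoder.

```python
import numpy as np

def pitch_shift_resample(y, semitones):
    """Pitch-shift by resampling. Simple, but it also shortens or
    lengthens the audio by the same ratio it shifts the pitch."""
    r = 2.0 ** (semitones / 12.0)   # frequency ratio, e.g. +12 st -> 2x
    n_out = int(round(len(y) / r))
    idx = np.arange(n_out) * r      # read the input faster (r > 1) = higher pitch
    return np.interp(idx, np.arange(len(y)), y)
```

Shifting a 440 Hz tone up seven semitones, for instance, moves its spectral peak to roughly 440 × 2^(7/12) ≈ 659 Hz, which is the same ratio a duration-preserving tool applies before its time-stretch stage undoes the length change.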

The camp environment is always busy with kids’ activities. Even in the quiet, semi-private moments, like the conversations between Cooperberg and Donna in the cabin, there is still the distant sound of kids. Davey captured unique ambiences of kids playing at parks and in a nature study on a hiking trail to use for the backgrounds. The only time we don’t hear kids, or birds for that matter, is at the toxic sludge site. “I wanted the audience to get a sense that it’s a dangerous place because wildlife stays away from that area. Sound effects editor Charles Maynes added in some really great, creepy, weird, unfriendly insects in that location,” he says.

Davey also got to work a few vintage sound effects into the series, including a great horned owl hoot from the 1940s to highlight Eric’s creepy cabin, the classic red-tailed hawk screech as the sound of The Falcon, and computer sounds from the ‘80s that no one gets to use anymore. “David and I are kindred spirits in that we both are computer nerds. He was definitely using computers in the ‘80s, so he was really excited about the sounds we used for that old computer.”

The series is available now on Netflix.

Jennifer Walden is a New Jersey-based audio engineer and writer.

Going back in time sonically for ‘Outlander’ series

By Jennifer Walden

On the surface, it might seem surprising that writer Ron Moore, with his extensive Star Trek credits, created the popular Starz Originals period drama Outlander, but dig a bit deeper and it all starts to make sense. Outlander is more than just a period piece; it’s about time travel. And who doesn’t love a little time travel?

Outlander, based on the book series by Diana Gabaldon, follows Claire Randall, a British combat nurse on vacation in Scotland with her husband. After touching one large stone in an ancient stone circle, she is transported back in time, from 1945 to 1743. While time travel is sci-fi, that element of the story is but a minuscule moment, with the majority of the storyline happening in 1743. But her being from a different time and place is always front and center in the story, and that is the world that Moore knows well.


His sci-fi-heavy resume includes starting as a writer on Star Trek: The Next Generation (1987) before becoming a producer on the show. That was just the beginning of his path to “where no man has gone before.” Work on Star Trek: Generations, Star Trek: First Contact, Star Trek: Deep Space Nine and, finally, Star Trek: Voyager followed. He also had a hand in the Battlestar Galactica franchise, and most recently worked on the sci-fi series Caprica and Helix.

Speaking of Battlestar Galactica, when it came time to get the team together for Outlander’s audio post, Moore called on a familiar face: supervising sound editor/dialogue editor Vince Balunas of audio post facility AnEFX in Burbank. Balunas previously worked with both Moore and Outlander’s picture editor, Michael O’Halloran, on Battlestar Galactica.

The Sound of 1743
Balunas says all that prior sci-fi experience may not be directly applicable to Outlander, but knowing what Moore and O’Halloran are looking for helped more than anything else when developing the show’s overall sound. “There’s a certain grit to the show. Yes, it was shot in HD, and next season will possibly be shot in 4K, but there is still a visual grit to it, much like there was on Battlestar,” says Balunas. Sonically, Outlander is like Battlestar Galactica in that both focus on sounds that make the world on screen feel tangible.

Vince Balunas

“In Battlestar, the ship would be constantly groaning and you’d hear all of this metal creaking,” he says. “There is this tactile feel of the CIC (Combat Information Center of the ship’s bridge). We grounded Outlander the same way; it’s like actually being there in 1743.”

Balunas notes the scope of Outlander, visually and sonically, is huge. Without any big music moments to hide behind, Balunas needed sound for every movement that happened on screen because without it, he says, the scene felt naked. He worked with lead sound designer/effects editor Jeff Brunello at AnEFX. “We understood that we were going to be building this show a whole lot bigger than other shows,” says Balunas. They filled out the soundtrack with elaborate backgrounds made from wind, rain and rivers — everything you’d find in the Scottish Highlands. “It’s a very wide build compared to other shows we do for network television.”

Small sound details help ground the show in reality, and pull the audience in close to the action. When the characters are on horses walking through the rolling fields of Scotland, Moore and the Starz team wanted to hear every step of that horse. “They wanted to hear a little bit of rattle, leather creaks and other small details to bring the scene to life,” says Balunas. “My Foley track count doubled in size for this show.”

AnEFX handles all of Outlander’s Foley in-house, with a team led by supervising Foley editor Sam Lewis and Foley artist Brian Straub. “Our two main Foley guys both recorded Foley and edited the Foley,” reports Balunas. “More than anything, the Foley on the show is very detailed and very specific.”


As expected with scenes set in 1743, it’s absolutely unacceptable to hear modern sounds, like airplanes or traffic. Luckily, Balunas didn’t have trouble in that department; the production tracks from sound mixer Brian Milliken were tremendously clean. “There was no evidence of any kind of modern sounds throughout the entire production of the first season. Brian [Milliken] did a really good job of giving us good clean audio to work with. There weren’t any challenges with the production dialogue.”

In contrast, for scenes that take place in 1945, Balunas and his team added sounds to intentionally emphasize technology. “When we’re in the police station, we really want to hear the phones ringing and cars go by,” says Balunas. “We want to make sure that people know that scene is in 1945 in Scotland.”

Balunas feels that starting with good production sound was really a key to the show sounding great. Without having to sync up tons of ADR, or heavily process the dialogue to improve clarity, he was able to focus on his sound team. “The biggest thing about Outlander is its size. It’s a very large show with a lot of elements to manage.”

Sound editorial, Foley, most ADR, and premixes were completed at AnEFX. Balunas and his team typically spent 8-10 days per episode on sound editorial. “The schedule was spread apart and we worked on the series in waves. We would do three episodes one month and then take a month and a half off before doing another two episodes.”

Their sound editorial schedule was dependent on how long it took for picture to lock. “We had a very liquid schedule that wasn’t your standard TV schedule of five days to get an episode done, and then next week it’s another five days for the next episode,” he says. “It wasn’t remotely close to that.”

The final mix happened with re-recording mixers Nello Torri and Alan Decker at BluWave Audio at NBC Universal in Studio B. Working with four days per episode, Torri and Decker mixed the show in 7.1 with delivery to Starz for air in 5.1. “For episodes like the witch trial episode, we needed every second of that four-day mix,” says Balunas. “We had everyone and their mother talking on-camera. That was a really big show for us.”
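Mixing in 7.1 but delivering 5.1 for air implies a fold-down somewhere in the chain. As a rough sketch only (the channel order and the -3 dB pair gain below are assumptions; the article doesn’t state the actual Starz delivery spec), one common approach sums each side/rear surround pair into a single surround channel:

```python
import numpy as np

# Assumed channel order (an illustration, not a delivery spec):
# 7.1: L, R, C, LFE, Lss, Rss, Lsr, Rsr
# 5.1: L, R, C, LFE, Ls, Rs
def fold_down_71_to_51(ch71, surround_gain=10 ** (-3 / 20)):
    """Fold a 7.1 mix to 5.1 by summing each side/rear surround pair.
    The -3 dB gain on each contribution keeps the summed surround
    power roughly constant; actual delivery specs may differ."""
    L, R, C, LFE, Lss, Rss, Lsr, Rsr = ch71
    Ls = surround_gain * (Lss + Lsr)
    Rs = surround_gain * (Rss + Rsr)
    return np.stack([L, R, C, LFE, Ls, Rs])
```

The screen channels and LFE pass through untouched; only the four surrounds collapse to two, which is why a show mixed with discrete rear activity still translates to a 5.1 broadcast feed.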

The Season 1 finale of Outlander aired May 30, but feel free to binge-watch on Starz.

Jennifer Walden is a New Jersey-based writer and audio engineer.

‘Modern Marvels’ takes on the Panama Canal expansion project

By Jennifer Walden

Have you noticed just how casually people throw around the word “epic” these days? For example, “That burrito I just ate was epic!” Or, “That concert I went to last night was epic.” For the record, those things are not epic. What truly is epic? The Panama Canal expansion project, which has been documented in a Modern Marvels special on the History Channel.

The episode, which premieres on April 11, focuses on a 50-mile-long construction site, populated with thousands of workers and segmented into roughly 150 micro job sites. They even had to build an on-site concrete plant to meet the concrete demand for 10-story-high, …

Creating sounds, mix, more for ‘The Hunger Games: Mockingjay, Part 1’

By Jennifer Walden

It may be called The Hunger Games, but in Mockingjay, Part 1, the games are over. Life for the people of Panem, outside The Capitol, is about rebellion, war and survival. Supervising sound editor/sound designer/re-recording mixer Jeremy Peirson, at Warner Bros. Sound in Burbank, has worked with director Francis Lawrence on both Catching Fire and Mockingjay, Part 1.

Without the arena and its sinister array of “horrors” (for those who don’t remember Catching Fire, those horrors, such as blood rain, acid fog, carnivorous monkeys and lightning storms, were released every hour in the arena), Mockingjay, Part 1 is not nearly as diverse, according to Peirson. “Catching Fire was such a huge story between The Capitol and all the various Districts. …

Outpost Studios on affordable Foley for indies

By Jennifer Walden

Foreign distribution companies insist on a fully formed M&E mix, complete with Foley, but for low-budget films there often isn’t room for it. It’s a catch-22. Not footing the bill for Foley work can keep indie filmmakers from making money in the global market.

Dave Nelson, owner/supervising sound editor/re-recording mixer at Outpost Studios in San Francisco, has an economical solution: using Foley Collection and Kontakt 5 to track Foley when filmmakers can’t afford live Foley sessions. “Films that aren’t financially successful here in the United States often do really well in foreign countries because people are fascinated with American lifestyle.”

Outpost Studios offers 7.1/5.1 Dolby mixing, dialogue editing, sound design, music composition, Foley, ADR and voice recording for the film, digital media and audio book …

‘Transformers 4: Age of Extinction’ offers heavy metal sound

By Jennifer Walden

Audiences can’t seem to get enough of the good versus evil story involving feuding alien races — Autobots and Decepticons — who hide among us here on Earth as cars and trucks. How do I know? Well, the fourth Transformers movie, Age of Extinction, pulled in an astonishing $301.3 million worldwide on its opening day. While critics and audiences are strongly divided on their opinion of the movie, Greg Russell, re-recording mixer on the film, sums it up well: “If you’re looking for Shakespeare in Love, this isn’t it.”

Russell, who works out of Technicolor Sound on the Paramount Pictures studio lot in Hollywood, refers to the Michael Bay-directed offering as “a lot of movie.” And it is, in every sense of the word — this latest Transformers iteration is nearly three hours long. It’s a big story …

‘Chicago Fire’: the sound of drama

By Jennifer Walden

Whether it’s a raging high-rise fire, a horrific car accident or another life-threatening event, the firefighters, rescue squad and paramedics of Chicago Firehouse 51 on NBC’s Chicago Fire are always in the center of some sort of dramatic adventure. And sometimes that drama spills into the personal lives of these imperfect heroes.

To help enhance the feeling of action and drama throughout each episode, the audio post team at BluWave Audio, a division of Universal Studios Sound that handles TV mixing, audio restoration/preservation and digital mastering, fills in the backgrounds with off-screen sounds such as fire radios, equipment being moved around, phones ringing, voices and other more typical sounds.

“The firehouse they’re stationed at is a hub for many different activities for the fire …

audioEngine helps create a ‘Hero’s Welcome’ for Bud

By Jennifer Walden

New York — Just because Bob Giammarco owns New York- and Phoenix-based post house audioEngine doesn’t mean he no longer occupies the mix chair.

In fact, he recently worked with ad agency Anomaly in New York on the Budweiser spot A Hero’s Welcome, which aired during Super Bowl XLVIII. In case you missed it, the 60-second spot followed the homecoming of Army Lt. Chuck Nadd — from his trip through the airport back to his hometown in Florida, where a parade was held in honor of his return.


Session Confessions: from the edit suite

Language differences, questionable cell phone connections and beach days are just some things that can get in the way of a smooth edit.

By Jennifer Walden

Even when you’re speaking the same language, client emails can be cryptic. To puzzle out their meaning, video editor Matt, in New York, calls the client to get some clarification. But due to a language barrier, a convertible car, and the conference call feature on an iPhone, there is a lot that can get lost in translation.

“We’ve been doing a lot of commercials for hair products recently, for a huge company that sells their products worldwide,” explains Matt. “So suddenly we’re talking to a creative team here in New York, a creative team in England, a creative team in Singapore, a creative team in Australia, and all over. So, 24/7, around the world, somebody has to have a say about the spot we’re working on.”


Session Confessions

A new regular column about the interesting things that happen behind closed doors.

By Jennifer Walden

Post production is part creativity, part technical ability and part customer service. It may even involve a bit of telepathy, especially when working with clients who have trouble articulating their creative vision. Audio engineer Ron, in New York, shares his session story of a stock music search that required psychic powers.
