
Quick Chat: AI-based audio mastering

Antoine Rotondo is an audio engineer by trade who has been in the business for the past 17 years. Throughout his career he’s worked in audio across music, film and broadcast, focusing on sound reproduction. After completing college studies in sound design, undergraduate studies in music and music technology, as well as graduate studies in sound recording at McGill University in Montreal, Rotondo went on to work in recording, mixing, producing and mastering.

He is currently an audio engineer at Landr.com, whose Landr Audio Mastering for Video gives professional video editors AI-based audio mastering capabilities in Adobe Premiere Pro CC.

As an audio engineer, how do you feel about AI tools that shortcut the mastering process?
Well first, there’s a myth about how AI and machines can’t possibly make valid decisions in the creative process in a consistent way. There’s actually a huge intersection between artistic intentions and technical solutions where we find many patterns, where people tend to agree and go about things very similarly, often unknowingly. We’ve been building technology around that.

Truth be told there are many tasks in audio mastering that are repetitive and that people don’t necessarily like spending a lot of time on, tasks such as leveling dialogue, music and background elements across multiple segments, or dealing with noise. Everyone’s job gets easier when those tasks become automated.

I see innovation in AI-driven audio mastering as a way to make creators more productive and efficient — not to replace them. It’s now more accessible than ever for amateur and aspiring producers and musicians to learn about mastering and have the resources to professionally polish their work. I think the same will apply to videographers.

What’s the key to making video content sound great?
Great sound quality is effortless and sounds as natural as possible. It’s about creating an experience that keeps the viewer engaged and entertained. It’s also about great communication — delivering a message to your audience and even conveying your artistic vision — all this to impact your audience in the way you intended.

More specifically, audio shouldn’t unintentionally sound muffled, distorted, noisy or erratic. Dialogue and music should shine through. Viewers should never need to change the volume or rewind the content to play something back during the program.

When are the times you’d want to hire an audio mastering engineer and when are the times that projects could solely use an AI-engine for audio mastering?
Mastering engineers are especially important for extremely intricate artistic projects that require direct communication with a producer or artist, including long-form narrative, feature films, television series and TV commercials. Any project with conceptual sound design will almost always require an engineer to perfect the final master.

Users can truly benefit from AI-driven mastering in short-form, non-fiction projects that require clean dialogue, reduced background noise and overall leveling. Quick-turnaround projects can also use AI mastering to elevate the audio to a more professional level, even when deadlines are tight. AI mastering can now insert itself into the offline creation process, where multiple revisions of a project are sent back and forth, making great sound accessible throughout the entire production cycle.

The other thing to consider is that AI mastering is a great option for video editors who don’t have technical audio expertise themselves, and where lower budgets translate into them having to work on their own. These editors could purchase purpose-built mastering plugins, but they don’t necessarily have the time to learn how to really take advantage of these tools. And even if they did have the time, some would prefer to focus more on all the other aspects of the work that they have to juggle.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking about what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my SoundMiner server. I added some Foley sound effects of the man handling the gas pump too.

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and not matching from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used its ambience match feature to create a seamless background ambience. iZotope RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.


RTW at AES NY with 19-inch rackmount TouchMonitor

RTW, which makes visual audio meters and monitoring devices for broadcast, production, post production and quality control, will be in the Avid Pavilion at AES NY this year with the 19-inch 4U rack-mount (MA4U) version of its TouchMonitor TM9.

This reconfigured unit packs all the audio monitoring features of the standalone RTW TM9 into a new design that is more easily accessible to users in studio settings.

The TM9 panel-mount version measures 235 x 135 x 45mm (9.25 x 5.35 x 1.8 inches) without the power supply and is ideal for mounting into front panels. The unit comes standard with a USB extension for the front panel, and the mounting kit is compatible with DIN 41494/IEC 60297 19-inch 4U racks (483 x 177 x 91mm).

“With the continued evolution of studio spaces and workflows, we have seen an increased interest in rack-mountable formats of our loudness solutions,” says Andreas Tweitmann, CEO, RTW.

Equipped with RTW’s high-grade nine-inch touch screens and an easy-to-use GUI, the TouchMonitor TM9 is the latest in the company’s rack-mount solutions, which include the TM3, TM7 and RTW legacy products. The TM9 has a graphical user interface whose instruments can be scaled, freely positioned and combined in almost any way for optimized use of available screen space. Multiple instruments of the same type, assigned to different input channels and configurations, can be displayed simultaneously. Plus, a context-sensitive, on-screen help feature supports the user, allowing for easy setup changes.

The latest firmware version of the TM9, which is also used with the TM7, expands the basic software to a four-channel display with 4x mono or 2x stereo/2x mono. Additionally, 1x stereo can be measured without the need for an activated multichannel license. Output routing can be individually adjusted for each preset, and all presets can be exported or imported at the same time.


Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back story of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which premiered September 30 on the Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hovercraft makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice that’s sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake, voiced by Chloe Bennet, the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ‘80s Transformers franchise and Nibbler on Futurama).”

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.


Behind the Title: Heard City mixer Elizabeth McClanahan

A musician from an early age, this mixer/sound designer knew her path needed to involve music and sound.

Name: Elizabeth McClanahan

Company: New York City’s Heard City (@heardcity)

Can you describe your company?
We are an audio post production company.

What’s your job title?
Mixer and sound designer.

What does that entail?
I mix and master audio for advertising, television and film. Working with creatives, I combine production audio, sound effects, sound design, score or music tracks and voiceover into a mix that sounds smooth and helps highlight the narrative of each particular project.

What would surprise people the most about what falls under that title?
I think most people are surprised by the detailed nature of sound design and by the fact that we often supplement straightforward diegetic sounds with additional layers of more conceptual design elements.

What’s your favorite part of the job?
I enjoy the collaborative work environment, which enables me to take on different creative challenges.

What’s your least favorite?
The ever-changing landscape of delivery requirements.

What is your favorite time of the day?
Lunch!

If you didn’t have this job, what would you be doing instead?
I think I would be interested in pursuing a career as an archivist or law librarian.

Why did you choose this profession?
Each project allows me to combine multiple tools and skill sets: music mixing, dialogue cleanup, sound design, etc. I also enjoy the problem solving inherent in audio post.

How early on did you know this would be your path?
I began playing violin at age four, picking up other instruments along the way. As a teenager, I often recorded friends’ punk bands, and I also started working in live sound. Later, I began my professional career as a recording engineer and focused primarily on jazz. It wasn’t until VO and ADR sessions began coming into the music studio in which I was working that I became aware of the potential paths in audio post. I immediately enjoyed the range and challenges of projects that post had to offer.

Can you name some recent projects you have worked on?
Lately, I’ve worked on projects for Google, Budweiser, Got Milk?, Clash of Clans and NASDAQ.

I recently completed work on a feature film called Nancy. This was my first feature in the role of supervising sound editor and re-recording mixer, and I appreciated the new experience on both a technical and creative level. Nancy was unique in that all department heads (in both production and post) were women. It was an incredible opportunity to work with so many talented people.

Name three pieces of technology you can’t live without.
The Teenage Engineering OP-1, my phone and the UAD plugins that allow me to play bass at home without bothering my neighbors.

What social media channels do you follow?
Although I am not a heavy social media user, I follow a few pragmatic-yet-fun YouTube channels: Scott’s Bass Lessons, Hicut Cake and the gear review channel Knobs. I love that Knobs demonstrates equipment in detail without any talking.

What do you do to de-stress from it all?
In addition to practicing yoga, I love to read and visit museums, as well as play bass and work with modular synths.


Enhancing BlacKkKlansman’s tension with Foley

By Jennifer Walden

Director Spike Lee’s latest film, BlacKkKlansman, has gotten rave reviews from both critics and audiences. The biographical dramedy is based on Ron Stallworth’s true story of infiltrating the Colorado Springs chapter of the Ku Klux Klan back in the 1970s.

Stallworth (John David Washington) was a detective for the Colorado Springs police department who saw a recruitment advertisement for the KKK and decided to call the head of the local Klan chapter. He claimed he was a racist white man wanting to join the Klan. Stallworth asks his co-worker Flip Zimmerman (Adam Driver) to act as Stallworth when dealing with the Klan face-to-face. Together, they try to thwart a KKK attack on an upcoming civil rights rally.

Marko Costanzo

The Emmy Award-winning team (The Night Of and Boardwalk Empire) of Foley artist Marko Costanzo and Foley engineer George Lara at c5 Sound in New York City were tasked with recreating the sound of the ‘70s — from electric typewriters and rotary phones at police headquarters to the creak of leather jackets that were so popular in that era. “There are cardboard files and evidence boxes being moved around, phones dialing, newspapers shuffling and applause. We even had a car explosion, which meant a lot of car parts landing on the ground,” explains Costanzo. “If you could listen to the film before our Foley, you would notice just how many of the extraneous noises had been removed, so we replaced all of that. Pretty much everything you hear in that film was replaced or at least sweetened.”

One important role of Foley is using it to define a character through sound. For example, Stallworth typically wears a leather jacket, and his jacket has a signature sound. But many of the police officers, and some Klan members, wear leather jackets, too, and they couldn’t all sound the same. The challenge was to create a unique sound that would represent each character.

According to Costanzo, the trickiest ones to define were the police officers, since they all have similar gear but still needed to sound different. “For the racist police officer Andy Landers (Frederick Weller), we wanted to make him noisy so he sounds a little more overzealous or full of himself. He’s got more of a presence.” The kit they created for Landers has more equipment for his belt, like bullets and handcuffs that rattle as he walks, a radio and a nightstick clattering, and they used extra leather creaking as well. “We did the nightstick for him because he’s always ready and quick to pull out his nightstick to harass someone. He was a pretty nasty character, so we made him sound nasty with all our Foley trimmings.”

The police officer Foley really shines during the scene in which Stallworth apprehends Connie (Ashlie Atkinson), who just planted a bomb outside the residence of Patrice (Laura Harrier), president of the black student union at Colorado College. Stallworth is undercover, and he’s being arrested by local uniformed police officers instead of Connie the criminal. “The trick there was to make the police officer sound intimidating, and we did that through the sound of their belts,” says Costanzo. “They’re frisking the undercover cop and putting the handcuffs on and we covered all of those actions with sound.”

That scene is followed by a huge car explosion, which the Foley team also covered. While they didn’t do the actual explosion sound, they did perform the sounds of the glass shattering and many different debris impacts. “Our work helps to identify the perspective of the camera, and adds detail like parts hitting the bushes or parts hitting other cars. We go and pick out all the little things that you see and add those to the track,” he says.

Sometimes the Foley adds to the storytelling in less overt ways. Take, for instance, the scene when Stallworth calls up the head of the local KKK. As he’s on the phone listing all the types of people he hates, the other police officers in the station stop what they’re doing. Zimmerman swivels his chair around slowly and you hear it squeaking the whole time. It’s this uncomfortable sound, like the sonic equivalent of an eyebrow raise. Costanzo says, “Uncomfortable sounds are what we specialize in. Those are moments we embellish wherever possible so that it does tell part of the story. We wanted that moment to feel uncomfortable. Once those sounds are heard, it becomes part of the story, but it also just falls into the soundtrack.”

Foley can be helpful in communicating what’s happening off-screen as well. The police station is filled with officers. In Foley, they covered telephone hang-ups and grabs, the sound of the cords clattering and the chairs creaking, filing cabinets being opened and closed. “We try to create the feeling that you are located in that room and so we embellish off-camera sounds as well as the sounds for things on camera,” says Lara. Sometimes those off-camera sounds are atmospheric, like the police station, and other times they’re very specific. The director or supervising sound editor may ask to hear the characters walk away and out onto the street, or they need to hear a big crowd on the other side of a wall.

Part of the art of Foley is getting it to sound like it’s coming from the scene, like it’s production sound even though it isn’t. When a character waves an arm, you hear a cloth rustle. If people are walking down a long hallway, you hear their footsteps, and the sound diminishes as they get farther away from the camera. “We embellish all those movements, and that makes what we’re seeing feel more real,” explains Costanzo. To get those sounds to sit right, to feel like they’re coming from the scene, the Foley team strives to match the quality of the room for each scene, for each camera angle. “We try to do our best to match what we hear in production so the Foley will match that and sound like it was recorded there, live, on-set that day.”

Tools & Collaboration
Lara uses a four-mic approach to capturing the Foley. For the main mic (closest to Costanzo), he uses a Neumann KMR 81 D shotgun mic, which is a common boom mic used on-set. He has three other KMR 81 Ds placed at different distances and angles to the sound source. Those are all fed into an eight-channel Millennia mic preamp. By changing the balance of the mics in the mix, Lara can change the perspective of the sound; how well the Foley fits into the track isn’t just about volume, it’s about perspective and tonal quality. “Although we can EQ the sound, we try not to because we want to give the supervising sound editor the best sound, the fullest and richest sounding Foley possible,” he says.

Lara and Costanzo have been creating Foley together for 26 years. Both got their start at Sound One’s Foley stage in New York. “We have a really good idea of what’s good Foley and what’s bad Foley. Because George and I both learned the same way, I often refer to George as having the same ear as myself — meaning we both know when something works and when something doesn’t work,” shares Costanzo.

This dynamic allows the team to record anywhere from 300 to 400 sounds per day. For BlacKkKlansman, they were able to turn the film around in eight days. “The way that we work together, and why we work so well together, is because we both know what we are looking for and we have recorded many, many hours and years of Foley together,” says Lara.

Costanzo concludes, “Foley is a collaborative art but since we’ve been working together for many years, there are a lot of things that go unsaid. We don’t need to explain to each other everything that goes on. We both have imaginations that flourish when it comes to sound and we know how to take ideas and transfer them into working sounds. That’s something you learn over time.”


Jennifer Walden is a New Jersey-based audio engineer and writer. 


The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

How did you build the sequence where the shark attacks the window of the underwater research station? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings that we had done a while ago of glass breaking. My parents were replacing this 8’ x 12’ glass window in their house and before they demolished the old one, I told them to not throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees, and it has a large diaphragm so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.

The Meg is an action film, so there are shootings, explosions, ships getting smashed up, and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon popping session on Stage 10 at Warner Bros. where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that to do something else in the scene, and when the zorb popped it made me jump back because I forgot how powerful a moment that is. It was a really fun, and funny moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge, metal cargo crate container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpy. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discreet about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.


The Emmy-nominated sound editing team’s process on HBO’s Vice Principals

By Jennifer Walden

HBO’s comedy series Vice Principals — starring Danny McBride and Walton Goggins as two rival vice principals of North Jackson High School — really went wild for the Season 2 finale. Since the school’s mascot is a tiger, they hired an actual tiger for graduation day, which wreaked havoc inside the school. (The tiger was part real and part VFX, but you’d never know thanks to the convincing visuals and sound.)

The tiger wasn’t the only source of mayhem. There was gunfire and hostages, a car crash and someone locked in a cage — all in the name of comedy.

George Haddad

Through all the bedlam, it was vital to have clean and clear dialogue. The show’s comedy comes from the jokes that are often ad-libbed and subtle.

Here, Warner Bros. Sound supervising sound editor George Haddad, MPSE, and dialogue/ADR editor Karyn Foster talk about what went into the Emmy-nominated sound editing on the Vice Principals Season 2 finale, “The Union Of The Wizard & The Warrior.”

Of all the episodes in Season 2, why did you choose “The Union of the Wizard & The Warrior” for award consideration?
George Haddad: Personally, this was the funniest episode — whether that’s good for sound or not. They just let loose on this one. For a comedy, it had so many great opportunities for sound effects, walla, loop group, etc. It was the perfect match for award consideration. Even the picture editor said beforehand that this could be the one. Of course, we don’t pay too much attention to its award-potential; we focus on the sound first. But, sure enough, as we went through it, we all agreed that this could be it.

Karyn Foster: This episode was pretty dang large, with the tiger and the chaos that the tiger causes.

In terms of sound, what was your favorite moment in this episode? Why?
Haddad: It was during the middle of the show when the tiger got loose from the cage and created havoc. It’s always great for sound when an animal gets loose. And it was particularly fun because of the great actors involved. This had comedy written all over it. You know no one is going to die, just because of the nature of the show. (Actually, the tiger did eat the animal handler, but he kind of deserved it.)

Karyn Foster

I had a lot of fun with the tiger and we definitely cheated reality there. That was a good sound design sequence. We added a lot of kids screaming and adults screaming. The teachers reacted even more fearfully than the students, which made it funny. It was a perfect storm for sound effects and dialogue.

Foster: My favorite scene was when Lee [Goggins] is on the ground after the tiger mauls his hand and he’s trying to get Neal [McBride] to say, “I love you.” That scene was hysterical.

What was your approach to the tiger sounds?
Haddad: We didn’t have production sound for the tiger, as the handler on-set kept a close watch on the real animal. Then in the VFX, we have the tiger jumping, scratching with its paws, roaring…

I looked into realistic tiger sounds, and they’re not the type of animal you’d think would roar or snarl — sounds we are used to having for a lion. We took some creative license and blended sounds together to make the tiger a little more ferocious, but not too scary. Because, again, it’s a comedy so we needed to find the right balance.

What was the most challenging scene for sound?
Haddad: The entire cast was in this episode, during the graduation ceremony. So you had 500 students and a dozen of the lead cast members. That was pretty full, in terms of sound. We had to make it feel like everyone is panicking at the same time while focusing on the tiger. We had to keep the tension going, but it couldn’t be scary. We had to keep the tone of the comedy going. That’s where the balance was tricky and the mixers did a great job with all the material we gave them. I think they found the right tone for the episode.

Foster: For dialogue, the most challenging scene was when they are in the cafeteria with the tiger. That was a little tough because there are a lot of people talking and there were overlapping lines. Also, it was shot in a practical location, so there was room reflection on the production dialogue.

A comedy series is all about getting a laugh. How do you use sound to enhance the comedy in this series?
Haddad: We take the lead off of Danny McBride. Whatever his character is doing, we’re not going to try to go over the top just because he and his co-stars are brilliant at it. But, we want to add to the comedy. We don’t go cartoonish. We try to keep the sounds in reality but add a little bit of a twist on top of what the characters are already doing so brilliantly on the screen.

Quite frankly, they do most of the work for us and we just sweeten what is going on in the scene. We stay away from any of the classic Hanna-Barbera cartoon sound effects. It’s not that kind of comedy, but at the same time we will throw a little bit of slapstick in there — whether it’s a character falling or slipping or it’s a gun going off. For the gunshots, I’ll have the bullet ricochet and hit a tree just to add to the comedy that’s already there.

A comedy series is all about the dialogue and the jokes. What are some things you do to help the dialogue come through?
Haddad: The production dialogue was clean overall, and the producers don’t want to change any of the performances, even if a line is a bit noisy. The mixers did a great job in making sure that clarity was king for dialogue. Every single word and every single joke was heard perfectly. Comedy is all about timing.

We were fortunate because we get clean dialogue and we found the right balance of all the students screaming and the sounds of panicking when the tiger created havoc. We wanted to make sure that Danny and his co-stars were heard loud and clear because the comedy starts with them. Vice Principals is a great and natural sounding show for dialogue.

Foster: Vice Principals was a pleasure to work on because the dialogue was in good shape. The editing on this episode wasn’t difficult. The lines went together pretty evenly.

We basically work with what we’ve been given. It’s all been chosen for us and our job is to make it sound smooth. There’s very minimal ADR on the show.

In terms of clarification, we make sure that any lines that really need to be heard are completely separate, so when it gets to the mix stage the mixer can push that line through without having to push everything else.

As far as timing, we don’t make any changes. That’s a big fat no-no for us. The picture editor and showrunners have already decided what they want and where, and we don’t mess with that.

There were a large number of actors present for the graduation ceremony. Was the production sound mixer able to record those people in that environment? Or, was that sound covered in loop?
Haddad: There are so many people in the scene, and that can be challenging to do solely in loop group. We did multiple passes with the actors we had in loop. We also had the excellent sound library here at Warner Bros. Sound. I also captured recordings at my kids’ high school. So we had a lot of resource material to pull from and we were able to build out that scene nicely. What we see on-camera, with the number of students and adults, we were able to represent that through sound.

As for recording at my kids’ high school, I got permission from the principal but, of course, my kids were embarrassed to have their dad at school with his sound equipment. So I tried to stay covert. The microphones were placed up high, in inconspicuous places. I didn’t ask any students to do anything. We were like chameleons — we came and set up our equipment and hit record. I had Røde microphones because they were easy to mount on the wall and easy to hide. One was a Røde VideoMic and the other was their NTG1 microphone. I used a Roland R-26 recorder because it’s portable and I love the quality. It’s great for exterior sounds too because you don’t get a lot of hiss.

We spent a couple hours recording and we were lucky enough to get material to use in the show. I just wanted to catch the natural sound of the school. There are 2,700 students, so it’s an unusually high student population and we were able to capture that. We got lucky when kids walked by laughing or screaming or running to the next class. That was really useful material.

Foster: There was production crowd recorded. For most of the episodes when they had pep rallies and events, they took the time to record some specific takes. When you’re shooting group on the stage, you’re limited to the number of people you have. You have to do multiple takes to try and mimic that many people.

Can you talk about the tools you couldn’t have done without?
Haddad: This show has a natural sound, so we didn’t use pitch shifting or reverb or other processing like we’d use on a show like Gotham, where we do character vocal treatments.

Foster: I would have to say iZotope RX 6. That tool for a dialogue editor is one that you can’t live without. There were some challenging scenes on Vice Principals, and the production sound mixer Christof Gebert did a really good job of getting the mics in there. The iso-mics were really clean, and that’s unusual these days. The dialogue on the show was pleasant to work on because of that.

What makes this show challenging in terms of dialogue is that it’s a comedy, so there’s a lot of ad-libbing. With ad-libbing, there are no other takes to choose from. So if there’s a big clunk on a line, you have to make that work. With RX 6, you can minimize the clunk on a line or get rid of it. If those lines are ad-libs, they don’t want to have to loop those. The ad-libbing makes the show great but it also makes the dialogue editing a bit more complicated.

Any final thoughts you’d like to share on Vice Principals?
Haddad: We had a big crew because the show was so busy. I was lucky to get some of the best here at Warner Bros. Sound. They helped to make the show sound great, and we’re all very proud of it. We appreciate our peers selecting Vice Principals for Emmy nomination. That to us was a great feeling, to have all of our hard work pay off with an Emmy nomination.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.


Crafting sound for Emmy-winning Atlanta

By Jennifer Walden

FX Network’s dramedy series Atlanta, which recently won an Emmy for Outstanding Sound Editing For A Comedy or Drama Series (Half-Hour), tells the story of three friends from, well, Atlanta — a local rapper named Paper Boi whose star is on the rise (although the universe seems to be holding him down), his cousin/manager Earn and their head-in-the-clouds friend Darius.

Trevor Gates

Told through vignettes, each episode shows their lives from different perspectives instead of through a running narrative. This provides endless possibilities for creativity. One episode flows through different rooms at a swanky New Year’s party at Drake’s house; another ventures deep into the creepy woods where real animals (not party animals) make things tense.

It’s a playground for sound each week, and MPSE-award-winning supervising sound editor Trevor Gates of Formosa Group and his sound editorial team on Season 2 (aka, Robbin’ Season) got their 2018 Emmy based on the work they did on Episode 6 “Teddy Perkins,” in which Darius goes to pick up a piano from the home of an eccentric recluse but finds there’s more to the transaction than he bargained for.

Here, Gates discusses the episode’s precise use of sound and how the quiet environment was meticulously crafted to reinforce the tension in the story and to add to the awkwardness of the interactions between Darius and Teddy.

There’s very little music in “Teddy Perkins.” The soundtrack is mainly different ambiences and practical effects and Foley. Since the backgrounds play such an important role, can you tell me about the creation of these different ambiences?
Overall, Atlanta doesn’t really have a score. Music is pretty minimal and the only music that you hear is mainly source music — music coming from radios, cell phones or laptops. I think it’s an interesting creative choice by producers Hiro Murai and Donald Glover. In cases like the “Teddy Perkins” episode, we have to be careful with the sounds we choose because we don’t have a big score to hide behind. We have to be articulate with those ambient sounds and with the production dialogue.

Going into “Teddy Perkins,” Hiro (who directed the episode) and I talked about his goals for the sound. We wanted a quiet soundscape and for the house to feel cold and open. So, when we were crafting the sounds that most audience members will perceive as silence or quietness, we had very specific choices to make. We had to craft this moody air inside the house. We had to craft a few sounds for the outside world too because the house is located in a rural area.

There are a few birds but nothing overt, so that it’s not intrusive to the relationship between Darius (Lakeith Stanfield) and Teddy (Donald Glover). We had to be very careful in articulating our sound choices, to hold that quietness that was void of any music while also supporting the creepy, weird, tense dialogue between the two.

Inside the Perkins residence, the first ambience felt cold and almost oppressive. How did you create that tone?
That rumbly, oppressive air was the cold tone we were going for. It wasn’t a layer of tones; it was actually just one sound that I manipulated to be the exact frequency that I wanted for that space. There was a vastness and a claustrophobia to that space, although that sounds contradictory. That cold tone was kind of the hero sound of this episode. It was just one sound, articulately crafted, and supported by sounds from the environment.

There’s a tonal shift from the entryway into the parlor, where Darius and Teddy sit down to discuss the piano (and Teddy is eating that huge, weird egg). In there we have the sound of a clock ticking. I really enjoy using clocks. I like the meter that clocks add to a room.

In Ouija: Origin of Evil, we used the sound of a clock to hold the pace of some scenes. I slowed the clock down to just a tad over a second, and it really makes you lean in to the scene and hold what you perceive as silence. I took a page from that book for Atlanta. As you leave the cold air of the entryway, you enter into this room with a clock ticking and Teddy and Darius are sitting there looking at each other awkwardly over this weird/gross ostrich egg. The sound isn’t distracting or obtrusive; it just makes you lean into the awkwardness.

It was important for us to get the mix for the episode right, to get the right level for the ambiences and tones, so that they are present but not distracting. It had to feel natural. It’s our responsibility to craft things that show the audience what we want them to see, and at the same time we have to suspend their disbelief. That’s what we do as filmmakers; we present the sonic spaces and visual images that traverse that fine line between creativity and realism.

That cold tone plays a more prominent role near the end of the episode, during the murder-suicide scene. It builds the tension until right before Benny pulls the trigger. But there’s another element there too, a musical stinger. Why did you choose to use music at that moment?
What’s important about this season of Atlanta is that Hiro and Donald have a real talent for surrounding themselves with exceptional people — from the picture department to the sound department to the music department and everyone on-set. Through the season it was apparent that this team of exceptional people functioned with extreme togetherness. We had a homogeneity about us. It was a bunch of really creative and smart people getting together in a room, creating something amazing.

We had a music department, and although there isn’t much music and score, every once in a while we would break a rule that we set for ourselves on Season 2. The picture editor would be in the room with the music department and Hiro, and we’d all make decisions together. That musical stinger wasn’t my idea exactly; it was a collective decision to use a stinger to drive the moment, to have it build and release at a specific time. I can’t attribute that sound to me only, but to this exceptional team on the show. We would bounce creative ideas off of each other and make decisions as a collective.

The effects in the murder-suicide scene do a great job of building tension. For example, when Teddy leans in on Darius, there’s that great, long floor creak.
Yeah, that was a good creak. It was important for us, throughout this episode, to make specific sound choices in many different areas. There are other episodes in the season that have a lot more sound than this episode, like “Woods,” where Paper Boi (Brian Tyree Henry) is getting chased through the woods after he was robbed. Or “Alligator Man,” with the shootout in the cold open. But that wasn’t the case with “Teddy Perkins.”

On this one, we had to make specific choices, like when Teddy leans over and there’s that long, slow creak. We tried to encompass the pace of the scene in one very specific sound, like the sound of the shackles being tightened onto Darius or the movement of the shotgun.

There’s another scene when Darius goes down into the basement, and he’s traveling through this area that he hasn’t been in before. We decided to create a world where he would hear sounds traveling through the space. He walks past a fan and then a water heater kicks on and there is some water gurgling through pipes and the clinking sound of the water heater cooling down. Then we hear Benny’s wheelchair squeak. For me, it’s about finding that one perfect sound that makes that moment. That’s hard to do because it’s not a composition of many sounds. You have one choice to make, and that’s what is going to make that moment special. It’s exciting to find that one sound. Sometimes you go through many choices until you find the right one.

There were great diegetic effects, like Darius spinning the globe, and the sound of the piano going onto the elevator, and the floor needle and the buttons and dings. Did those come from Foley? Custom recordings? Library sounds?
I had a great Foley team on this entire season, led by Foley supervisor Geordy Sincavage. The sounds like the globe spinning came from the Foley team, so that was all custom recorded. The elevator needle moving down was a custom recording from Foley. All of the shackles and handcuffs and gun movements were from Foley.

The piano moving onto the elevator was something that we created from a combination of library effects and Foley sounds. I had sound effects editor David Barbee helping me out on this episode. He gave me some library sounds for the piano and I went in and gave it a little extra love. I accentuated the movement of the piano strings. It was like piano string vocalizations as Darius is moving the piano into the elevator and it goes over the little bumps. I wanted to play up the movements that would add some realism to that moment.

Creating a precise soundtrack is harder than creating a big action soundtrack. Well, there are different sets of challenges for both, but it’s all about being able to tell a story by subtraction. When there’s too much going on, people can only feel the details once you start taking things away. “Teddy Perkins” is a case of having an extremely precise soundtrack, and that was successful thanks to the work of the Foley team, my effects editor, and the dialogue editor.

The dialogue editor Jason Dotts is the unsung hero in this because we had to be so careful with the production dialogue track. When you have a big set — this old, creaky house and lots of equipment and crew noise — you have to remove all the extraneous noise that can take you out of the tension between Darius and Teddy. Jason had to go in with a fine-tooth comb and do surgery on the production dialogue just to remove every single small sound in order to get the track super quiet. That production track had to be razor-sharp and presented with extreme care. Then, with extreme care, we had to build the ambiences around it and add great Foley sounds for all the little nuances. Then we had to bake the cake together and have a great mix, a very articulate balance of sounds.

When we were all done, I remember Hiro saying to us that we realized his dream 100%. He alluded to the fact that this was an important episode going into it. I feel like I am a man of my craft and my fingerprint is very important to me, so I am always mindful of how I show my craft to the world. I will always take extreme care and go the extra mile no matter what, but it felt good to have something that was important to Hiro have such a great outcome for our team. The world responded. There were lots of Emmy nominations this year for Atlanta and that was an incredible thing.

Did you have a favorite scene for sound? Why?
It was cool to have something that we needed to craft and present in its entirety. We had to build a motif and there had to be consistency within that motif. It was awesome to build the episode as a whole. Some scenes were a bit different, like down in the basement. That had a different vibe. Then there were fun scenes like moving the piano onto the elevator. Some scenes had production challenges, like the scene with the film projector. Hiro had to shoot that scene with the projector running and that created a lot of extra noise on the production dialogue. So that was challenging from a dialogue editing standpoint and a mix standpoint.

Another challenging scene was when Darius and Teddy are in the “Father Room” of the museum. That was shot early on in the process and Donald wasn’t quite happy with his voice performance in that scene. Overall, Atlanta uses very minimal ADR because we feel that re-recorded performances can really take the magic out of a scene, but Donald wanted to redo that whole scene, and it came out great. It felt natural and I don’t think people realize that Donald’s voice was re-recorded in its entirety for that scene. That was a fun ADR session.

Donald came into the studio and once he got into the recording booth and got into the Teddy Perkins voice he didn’t get out of it until we were completely finished. So as Hiro and Donald are interacting about ideas on the performance, Donald stayed in the Teddy voice completely. He didn’t get out of it for three hours. That was an interesting experience to see Donald’s face as himself and hear Teddy’s voice.

Were there any audio tools that you couldn’t have lived without on this episode?
Not necessarily. This was an organic build and the tools that we used in this were really basic. We used some library sounds and recorded some custom sounds. We just wanted to make sure that we could make this as real and organic as possible. Our tool was to pick the best organic sounds that we could, whether we used source recordings or new recordings.

Of all the episodes in Season 2 of Atlanta, why did you choose “Teddy Perkins” for Emmy consideration?
Each episode had its different challenges. There were lots of different ways to tell the stories since each episode is different. I think that is something that is magical about Atlanta. Some of the episodes that stood out from a sound standpoint were Episode 1 “Alligator Man” with the shootout, and Episode 8 “Woods.” I had considered submitting “Woods” because it’s so surreal once Paper Boi gets into the woods. We created this submergence of sound, like the woods were alive. We took it to another level with the wildlife and used specific wildlife sounds to draw some feelings of anxiety and claustrophobia.

Even an episode like “Champagne Papi,” which seems like one of the most basic from a sound editorial perspective, was actually quite varied. They’re going between different rooms at a party and we had to build crowds that felt different but the same in each room. It had to feel like a real space with lots of people, and the different spaces had to feel like they belonged at the same party.

But when it came down to it, I feel like “Teddy Perkins” was special because there wasn’t music to hide behind. We had to do specific and articulate work, and make sharp choices. So it’s not the episode with the most sound but it’s the episode that has the most articulate sound. And we are very proud of how it turned out.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.