
Quick Chat: AI-based audio mastering

Antoine Rotondo is an audio engineer by trade who has been in the business for the past 17 years. Throughout his career he’s worked in audio across music, film and broadcast, focusing on sound reproduction. After completing college studies in sound design, undergraduate studies in music and music technology, as well as graduate studies in sound recording at McGill University in Montreal, Rotondo went on to work in recording, mixing, producing and mastering.

He is currently an audio engineer at Landr.com, whose Landr Audio Mastering for Video brings AI-based audio mastering to professional video editors inside Adobe Premiere Pro CC.

As an audio engineer, how do you feel about AI tools that shortcut the mastering process?
Well first, there’s a myth about how AI and machines can’t possibly make valid decisions in the creative process in a consistent way. There’s actually a huge intersection between artistic intentions and technical solutions where we find many patterns, where people tend to agree and go about things very similarly, often unknowingly. We’ve been building technology around that.

Truth be told there are many tasks in audio mastering that are repetitive and that people don’t necessarily like spending a lot of time on, tasks such as leveling dialogue, music and background elements across multiple segments, or dealing with noise. Everyone’s job gets easier when those tasks become automated.
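Leveling segments against each other is, at its simplest, gain-matching to a target loudness. The sketch below is purely illustrative (it is not LANDR's algorithm, which is far more sophisticated): a minimal RMS-based leveler that brings two clips recorded at very different levels to the same target.

```python
import math

def rms_dbfs(samples):
    """RMS level of a float signal in dBFS (full scale = 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def level_to_target(samples, target_dbfs=-20.0):
    """Apply one static gain so the segment's RMS hits the target level."""
    gain = 10 ** ((target_dbfs - rms_dbfs(samples)) / 20)
    return [s * gain for s in samples]

# Two segments at very different levels: 1 second of 440 Hz at 48 kHz.
quiet = [0.05 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
loud = [0.50 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]

# After leveling, both sit at -20 dBFS RMS.
print(round(rms_dbfs(level_to_target(quiet)), 1))  # -20.0
print(round(rms_dbfs(level_to_target(loud)), 1))   # -20.0
```

Real mastering tools work with perceptual loudness measures (such as LUFS, per ITU-R BS.1770) rather than plain RMS, and apply time-varying rather than static gain, but the underlying idea of matching segments to a common target is the same.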

I see innovation in AI-driven audio mastering as a way to make creators more productive and efficient — not to replace them. It’s now more accessible than ever for amateur and aspiring producers and musicians to learn about mastering and have the resources to professionally polish their work. I think the same will apply to videographers.

What’s the key to making video content sound great?
Great sound quality is effortless and sounds as natural as possible. It’s about creating an experience that keeps the viewer engaged and entertained. It’s also about great communication — delivering a message to your audience and even conveying your artistic vision — all this to impact your audience in the way you intended.

More specifically, audio shouldn’t unintentionally sound muffled, distorted, noisy or erratic. Dialogue and music should shine through. Viewers should never need to change the volume or rewind the content to play something back during the program.

When are the times you’d want to hire an audio mastering engineer and when are the times that projects could solely use an AI-engine for audio mastering?
Mastering engineers are especially important for extremely intricate artistic projects that require direct communication with a producer or artist, including long-form narrative, feature films, television series and TV commercials. Any project with conceptual sound design will almost always require an engineer to perfect the final master.

Users can truly benefit from AI-driven mastering in short-form, non-fiction projects that require clean dialogue, reduced background noise and overall leveling. Quick-turnaround projects can also use AI mastering to elevate the audio to a more professional level, even when deadlines are tight. AI mastering can now insert itself into the offline creation process, where multiple revisions of a project are sent back and forth, making great sound accessible throughout the entire production cycle.

The other thing to consider is that AI mastering is a great option for video editors who don’t have technical audio expertise themselves, and where lower budgets translate into them having to work on their own. These editors could purchase purpose-built mastering plugins, but they don’t necessarily have the time to learn how to really take advantage of these tools. And even if they did have the time, some would prefer to focus more on all the other aspects of the work that they have to juggle.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial, a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking about what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically about the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my SoundMiner server. I added some Foley sound effects of the man handling the gas pump too.

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and didn’t match from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used the ambience match to create a seamless background ambience. iZotope RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.


Behind the Title: Heard City mixer Elizabeth McClanahan

A musician from an early age, this mixer/sound designer knew her path needed to involve music and sound.

Name: Elizabeth McClanahan

Company: New York City’s Heard City (@heardcity)

Can you describe your company?
We are an audio post production company.

What’s your job title?
Mixer and sound designer.

What does that entail?
I mix and master audio for advertising, television and film. Working with creatives, I combine production audio, sound effects, sound design, score or music tracks and voiceover into a mix that sounds smooth and helps highlight the narrative of each particular project.

What would surprise people the most about what falls under that title?
I think most people are surprised by the detailed nature of sound design and by the fact that we often supplement straightforward diegetic sounds with additional layers of more conceptual design elements.

What’s your favorite part of the job?
I enjoy the collaborative work environment, which enables me to take on different creative challenges.

What’s your least favorite?
The ever-changing landscape of delivery requirements.

What is your favorite time of the day?
Lunch!

If you didn’t have this job, what would you be doing instead?
I think I would be interested in pursuing a career as an archivist or law librarian.

Why did you choose this profession?
Each project allows me to combine multiple tools and skill sets: music mixing, dialogue cleanup, sound design, etc. I also enjoy the problem solving inherent in audio post.

How early on did you know this would be your path?
I began playing violin at age four, picking up other instruments along the way. As a teenager, I often recorded friends’ punk bands, and I also started working in live sound. Later, I began my professional career as a recording engineer and focused primarily on jazz. It wasn’t until VO and ADR sessions began coming into the music studio in which I was working that I became aware of the potential paths in audio post. I immediately enjoyed the range and challenges of projects that post had to offer.

Can you name some recent projects you have worked on?
Lately, I’ve worked on projects for Google, Budweiser, Got Milk?, Clash of Clans, and NASDAQ.

I recently completed work on a feature film called Nancy. This was my first feature in the role of supervising sound editor and re-recording mixer, and I appreciated the new experience on both a technical and creative level. Nancy was unique in that all department heads (in both production and post) were women. It was an incredible opportunity to work with so many talented people.

Name three pieces of technology you can’t live without.
The Teenage Engineering OP-1, my phone and the UAD plugins that allow me to play bass at home without bothering my neighbors.

What social media channels do you follow?
Although I am not a heavy social media user, I follow a few pragmatic-yet-fun YouTube channels: Scott’s Bass Lessons, Hicut Cake and the gear review channel Knobs. I love that Knobs demonstrates equipment in detail without any talking.

What do you do to de-stress from it all?
In addition to practicing yoga, I love to read and visit museums, as well as play bass and work with modular synths.


Enhancing BlacKkKlansman’s tension with Foley

By Jennifer Walden

Director Spike Lee’s latest film, BlacKkKlansman, has gotten rave reviews from both critics and audiences. The biographical dramedy is based on Ron Stallworth’s true story of infiltrating the Colorado Springs chapter of the Ku Klux Klan back in the 1970s.

Stallworth (John David Washington) was a detective for the Colorado Springs police department who saw a recruitment advertisement for the KKK and decided to call the head of the local Klan chapter. He claimed he was a racist white man wanting to join the Klan. Stallworth asked his co-worker Flip Zimmerman (Adam Driver) to act as Stallworth when dealing with the Klan face-to-face. Together, they tried to thwart a KKK attack on an upcoming civil rights rally.

Marko Costanzo

The Emmy Award-winning team (The Night Of and Boardwalk Empire) of Foley artist Marko Costanzo and Foley engineer George Lara at c5 Sound in New York City were tasked with recreating the sound of the ‘70s — from electric typewriters and rotary phones at police headquarters to the creak of leather jackets that were so popular in that era. “There are cardboard files and evidence boxes being moved around, phones dialing, newspapers shuffling and applause. We even had a car explosion which meant a lot of car parts landing on the ground,” explains Costanzo. “If you could listen to the film before our Foley, you would notice just how many of the extraneous noises had been removed, so we replaced all of that. Pretty much everything you hear in that film was replaced or at least sweetened.”

One important role of Foley is using it to define a character through sound. For example, Stallworth typically wears a leather jacket, and his jacket has a signature sound. But many of the police officers, and some Klan members, wear leather jackets, too, and they couldn’t all sound the same. The challenge was to create a unique sound that would represent each character.

According to Costanzo, the trickiest ones to define were the police officers, since they all have similar gear but still needed to sound different. “For the racist police officer Andy Landers (Frederick Weller), we wanted to make him noisy so he sounds a little more overzealous or full of himself. He’s got more of a presence.” The kit they created for Landers has more equipment for his belt, like bullets and handcuffs that rattle as he walks, a radio and a nightstick clattering, and they used extra leather creaking as well. “We did the nightstick for him because he’s always ready and quick to pull out his nightstick to harass someone. He was a pretty nasty character, so we made him sound nasty with all our Foley trimmings.”

The police officer Foley really shines during the scene in which Stallworth apprehends Connie (Ashlie Atkinson), who just planted a bomb outside the residence of Patrice (Laura Harrier), president of the black student union at Colorado College. Stallworth is undercover, and he’s being arrested by local uniformed police officers instead of Connie the criminal. “The trick there was to make the police officer sound intimidating, and we did that through the sound of their belts,” says Costanzo. “They’re frisking the undercover cop and putting the handcuffs on and we covered all of those actions with sound.”

That scene is followed by a huge car explosion, which the Foley team also covered. While they didn’t do the actual explosion sound, they did perform the sounds of the glass shattering and many different debris impacts. “Our work helps to identify the perspective of the camera, and adds detail like parts hitting the bushes or parts hitting other cars. We go and pick out all the little things that you see and add those to the track,” he says.

Sometimes the Foley adds to the storytelling in less overt ways. Take, for instance, the scene when Stallworth calls up the head of the local KKK. As he’s on the phone listing all the types of people he hates, the other police officers in the station stop what they’re doing. Zimmerman swivels his chair around slowly and you hear it squeaking the whole time. It’s this uncomfortable sound, like the sonic equivalent of an eyebrow raise. Costanzo says, “Uncomfortable sounds are what we specialize in. Those are moments we embellish wherever possible so that it does tell part of the story. We wanted that moment to feel uncomfortable. Once those sounds are heard, they become part of the story, but they also just fall into the soundtrack.”

Foley can be helpful in communicating what’s happening off-screen as well. The police station is filled with officers. In Foley, they covered telephone hang-ups and grabs, the sound of the cords clattering and the chairs creaking, filing cabinets being opened and closed. “We try to create the feeling that you are located in that room and so we embellish off-camera sounds as well as the sounds for things on camera,” says Lara. Sometimes those off-camera sounds are atmospheric, like the police station, and other times they’re very specific. The director or supervising sound editor may ask to hear the characters walk away and out onto the street, or they need to hear a big crowd on the other side of a wall.

Part of the art of Foley is getting it to sound like it’s coming from the scene, like it’s production sound even though it isn’t. When a character waves an arm, you hear a cloth rustle. If people are walking down a long hallway, you hear their footsteps, and the sound diminishes as they get farther away from the camera. “We embellish all those movements, and that makes what we’re seeing feel more real,” explains Costanzo. To get those sounds to sit right, to feel like they’re coming from the scene, the Foley team strives to match the quality of the room for each scene, for each camera angle. “We try to do our best to match what we hear in production so the Foley will match that and sound like it was recorded there, live, on-set that day.”

Tools & Collaboration
Lara uses a four-mic approach to capturing the Foley. For the main mic (closest to Costanzo), he uses a Neumann KMR 81 D shotgun mic, which is a common boom mic used on-set. He has three other KMR 81 Ds placed at different distances and angles to the sound source. Those are all fed into an 8-channel Millennia mic pre-amp. By changing the balance of the mics in the mix, Lara can change the perspective of the sound. How well the Foley fits into the track isn’t just about volume; it’s about perspective and tonal quality. “Although we can EQ the sound, we try not to because we want to give the supervising sound editor the best sound, the fullest and richest sounding Foley possible,” he says.
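Changing perspective by re-balancing mics can be pictured as a simple weighted sum of the four channels. The toy sketch below is purely illustrative (the channel data and weights are invented, not Lara's actual settings): favoring the near shotgun reads as a close perspective, favoring the distant mics as a far one.

```python
def mix(channels, weights):
    """Sum equal-length mic channels into one track with per-mic gains."""
    return [sum(w * ch[i] for w, ch in zip(weights, channels))
            for i in range(len(channels[0]))]

# Four captures of the same event: the near mic is hottest, the far mics
# progressively quieter (toy numbers standing in for recorded audio).
near, mid1, mid2, far = [1.0, -1.0], [0.5, -0.5], [0.25, -0.25], [0.1, -0.1]
mics = [near, mid1, mid2, far]

close_mix = mix(mics, [1.0, 0.3, 0.1, 0.05])   # favor the near shotgun
distant_mix = mix(mics, [0.1, 0.3, 0.6, 1.0])  # favor the far mics
print(close_mix[0] > distant_mix[0])  # True: the close balance is hotter
```

In practice the distant mics also carry more room tone and high-frequency rolloff, which is why balance shifts read as distance rather than just as a volume change.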

Lara and Costanzo have been creating Foley together for 26 years. Both got their start at Sound One’s Foley stage in New York. “We have a really good idea of what’s good Foley and what’s bad Foley. Because George and I both learned the same way, I often refer to George as having the same ear as myself — meaning we both know when something works and when something doesn’t work,” shares Costanzo.

This dynamic allows the team to record anywhere from 300 to 400 sounds per day. For BlacKkKlansman, they were able to turn the film around in eight days. “The way that we work together, and why we work so well together, is because we both know what we are looking for and we have recorded many, many hours and years of Foley together,” says Lara.

Costanzo concludes, “Foley is a collaborative art but since we’ve been working together for many years, there are a lot of things that go unsaid. We don’t need to explain to each other everything that goes on. We both have imaginations that flourish when it comes to sound and we know how to take ideas and transfer them into working sounds. That’s something you learn over time.”


Jennifer Walden is a New Jersey-based audio engineer and writer. 


The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

The attack on the window in the underwater research station, how did you build that sequence? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings that we had done a while ago of glass breaking. My parents were replacing this 8’ x 12’ glass window in their house and before they demolished the old one, I told them to not throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees, and it has a large diaphragm so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.

The Meg is an action film, so there are shootings, explosions, ships getting smashed up, and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon popping session on Stage 10 at Warner Bros. where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that to do something else in the scene, and when the zorb popped it made me jump back because I forgot how powerful a moment that is. It was a really fun, and funny moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge, metal cargo crate container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpy. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.


Crafting sound for Emmy-winning Atlanta

By Jennifer Walden

FX Network’s dramedy series Atlanta, which recently won an Emmy for Outstanding Sound Editing For A Comedy or Drama Series (Half-Hour), tells the story of three friends from, well, Atlanta — a local rapper named Paper Boi whose star is on the rise (although the universe seems to be holding him down), his cousin/manager Earn and their head-in-the-clouds friend Darius.

Trevor Gates

Told through vignettes, each episode shows their lives from different perspectives instead of through a running narrative. This provides endless possibilities for creativity. One episode flows through different rooms at a swanky New Year’s party at Drake’s house; another ventures deep into the creepy woods where real animals (not party animals) make things tense.

It’s a playground for sound each week, and MPSE-award-winning supervising sound editor Trevor Gates of Formosa Group and his sound editorial team on Season 2 (aka Robbin’ Season) got their 2018 Emmy based on the work they did on Episode 6, “Teddy Perkins,” in which Darius goes to pick up a piano from the home of an eccentric recluse but finds there’s more to the transaction than he bargained for.

Here, Gates discusses the episode’s precise use of sound and how the quiet environment was meticulously crafted to reinforce the tension in the story and to add to the awkwardness of the interactions between Darius and Teddy.

There’s very little music in “Teddy Perkins.” The soundtrack is mainly different ambiences and practical effects and Foley. Since the backgrounds play such an important role, can you tell me about the creation of these different ambiences?
Overall, Atlanta doesn’t really have a score. Music is pretty minimal and the only music that you hear is mainly source music — music coming from radios, cell phones or laptops. I think it’s an interesting creative choice by producers Hiro Murai and Donald Glover. In cases like the “Teddy Perkins” episode, we have to be careful with the sounds we choose because we don’t have a big score to hide behind. We have to be articulate with those ambient sounds and with the production dialogue.

Going into “Teddy Perkins,” Hiro (who directed the episode) and I talked about his goals for the sound. We wanted a quiet soundscape and for the house to feel cold and open. So, when we were crafting the sounds that most audience members will perceive as silence or quietness, we had very specific choices to make. We had to craft this moody air inside the house. We had to craft a few sounds for the outside world too because the house is located in a rural area.

There are a few birds but nothing overt, so that it’s not intrusive to the relationship between Darius (Lakeith Stanfield) and Teddy (Donald Glover). We had to be very careful in articulating our sound choices, to hold that quietness that was void of any music while also supporting the creepy, weird, tense dialogue between the two.

Inside the Perkins residence, the first ambience felt cold and almost oppressive. How did you create that tone?
That rumbly, oppressive air was the cold tone we were going for. It wasn’t a layer of tones; it was actually just one sound that I manipulated to be the exact frequency that I wanted for that space. There was a vastness and a claustrophobia to that space, although that sounds contradictory. That cold tone was kind of the hero sound of this episode. It was just one sound, articulately crafted, and supported by sounds from the environment.

There’s a tonal shift from the entryway into the parlor, where Darius and Teddy sit down to discuss the piano (and Teddy is eating that huge, weird egg). In there we have the sound of a clock ticking. I really enjoy using clocks. I like the meter that clocks add to a room.

In Ouija: Origin of Evil, we used the sound of a clock to hold the pace of some scenes. I slowed the clock down to just a tad over a second, and it really makes you lean in to the scene and hold what you perceive as silence. I took a page from that book for Atlanta. As you leave the cold air of the entryway, you enter into this room with a clock ticking and Teddy and Darius are sitting there looking at each other awkwardly over this weird/gross ostrich egg. The sound isn’t distracting or obtrusive; it just makes you lean into the awkwardness.

It was important for us to get the mix for the episode right, to get the right level for the ambiences and tones, so that they are present but not distracting. It had to feel natural. It’s our responsibility to craft things that show the audience what we want them to see, and at the same time we have to suspend their disbelief. That’s what we do as filmmakers; we present the sonic spaces and visual images that traverse that fine line between creativity and realism.

That cold tone plays a more prominent role near the end of the episode, during the murder-suicide scene. It builds the tension until right before Benny pulls the trigger. But there’s another element too there, a musical stinger. Why did you choose to use music at that moment?
What’s important about this season of Atlanta is that Hiro and Donald have a real talent for surrounding themselves with exceptional people — from the picture department to the sound department to the music department and everyone on-set. Through the season it was apparent that this team of exceptional people functioned with extreme togetherness. We had a homogeny about us. It was a bunch of really creative and smart people getting together in a room, creating something amazing.

We had a music department and although there isn’t much music and score, every once in a while we would break a rule that we set for ourselves on Season 2. The picture editor will be in the room with the music department and Hiro, and we’ll all make decisions together. That musical stinger wasn’t my idea exactly; it was a collective decision to use a stinger to drive the moment, to have it build and release at a specific time. I can’t attribute that sound to me only, but to this exceptional team on the show. We would bounce creative ideas off of each other and make decisions as a collective.

The effects in the murder-suicide scene do a great job of tension building. For example, when Teddy leans in on Darius, there’s that great, long floor creak.
Yeah, that was a good creak. It was important for us, throughout this episode, to make specific sound choices in many different areas. There are other episodes in the season that have a lot more sound than this episode, like “Woods,” where Paper Boi (Brian Tyree Henry) is getting chased through the woods after he was robbed. Or “Alligator Man,” with the shootout in the cold open. But that wasn’t the case with “Teddy Perkins.”

On this one, we had to make specific choices, like when Teddy leans over and there’s that long, slow creak. We tried to encompass the pace of the scene in one very specific sound, like the sound of the shackles being tightened onto Darius or the movement of the shotgun.

There’s another scene when Darius goes down into the basement, and he’s traveling through this area that he hasn’t been in before. We decided to create a world where he would hear sounds traveling through the space. He walks past a fan and then a water heater kicks on and there is some water gurgling through pipes and the clinking sound of the water heater cooling down. Then we hear Benny’s wheelchair squeak. For me, it’s about finding that one perfect sound that makes that moment. That’s hard to do because it’s not a composition of many sounds. You have one choice to make, and that’s what is going to make that moment special. It’s exciting to find that one sound. Sometimes you go through many choices until you find the right one.

There were great diegetic effects, like Darius spinning the globe, and the sound of the piano going onto the elevator, and the floor needle and the buttons and dings. Did those come from Foley? Custom recordings? Library sounds?
I had a great Foley team on this entire season, led by Foley supervisor Geordy Sincavage. The sounds like the globe spinning came from the Foley team, so that was all custom recorded. The elevator needle moving down was a custom recording from Foley. All of the shackles and handcuffs and gun movements were from Foley.

The piano moving onto the elevator was something that we created from a combination of library effects and Foley sounds. I had sound effects editor David Barbee helping me out on this episode. He gave me some library sounds for the piano and I went in and gave it a little extra love. I accentuated the movement of the piano strings. It was like piano string vocalizations as Darius is moving the piano into the elevator and it goes over the little bumps. I wanted to play up the movements that would add some realism to that moment.

Creating a precise soundtrack is harder than creating a big action soundtrack. Well, there are different sets of challenges for both, but it’s all about being able to tell a story by subtraction. When there’s too much going on, people can feel the details if you start taking things away. “Teddy Perkins” is the case of having an extremely precise soundtrack, and that was successful thanks to the work of the Foley team, my effects editor, and the dialogue editor.

The dialogue editor Jason Dotts is the unsung hero in this because we had to be so careful with the production dialogue track. When you have a big set — this old, creaky house and lots of equipment and crew noise — you have to remove all the extraneous noise that can take you out of the tension between Darius and Teddy. Jason had to go in with a fine-tooth comb and do surgery on the production dialogue just to remove every single small sound in order to get the track super quiet. That production track had to be razor-sharp and presented with extreme care. Then, with extreme care, we had to build the ambiences around it and add great Foley sounds for all the little nuances. Then we had to bake the cake together and have a great mix, a very articulate balance of sounds.

When we were all done, I remember Hiro saying to us that we realized his dream 100%. He alluded to the fact that this was an important episode going into it. I feel like I am a man of my craft and my fingerprint is very important to me, so I am always mindful of how I show my craft to the world. I will always take extreme care and go the extra mile no matter what, but it felt good to have something that was important to Hiro have such a great outcome for our team. The world responded. There were lots of Emmy nominations this year for Atlanta and that was an incredible thing.

Did you have a favorite scene for sound? Why?
It was cool to have something that we needed to craft and present in its entirety. We had to build a motif and there had to be consistency within that motif. It was awesome to build the episode as a whole. Some scenes were a bit different, like down in the basement. That had a different vibe. Then there were fun scenes like moving the piano onto the elevator. Some scenes had production challenges, like the scene with the film projector. Hiro had to shoot that scene with the projector running and that created a lot of extra noise on the production dialogue. So that was challenging from a dialogue editing standpoint and a mix standpoint.

Another challenging scene was when Darius and Teddy are in the “Father Room” of the museum. That was shot early on in the process and Donald wasn’t quite happy with his voice performance in that scene. Overall, Atlanta uses very minimal ADR because we feel that re-recorded performances can really take the magic out of a scene, but Donald wanted to redo that whole scene, and it came out great. It felt natural and I don’t think people realize that Donald’s voice was re-recorded in its entirety for that scene. That was a fun ADR session.

Donald came into the studio and once he got into the recording booth and got into the Teddy Perkins voice he didn’t get out of it until we were completely finished. So as Hiro and Donald are interacting about ideas on the performance, Donald stayed in the Teddy voice completely. He didn’t get out of it for three hours. That was an interesting experience to see Donald’s face as himself and hear Teddy’s voice.

Were there any audio tools that you couldn’t have lived without on this episode?
Not necessarily. This was an organic build and the tools that we used in this were really basic. We used some library sounds and recorded some custom sounds. We just wanted to make sure that we could make this as real and organic as possible. Our tool was to pick the best organic sounds that we could, whether we used source recordings or new recordings.

Of all the episodes in Season 2 of Atlanta, why did you choose “Teddy Perkins” for Emmy consideration?
Each episode had its different challenges. There were lots of different ways to tell the stories since each episode is different. I think that is something that is magical about Atlanta. Some of the episodes that stood out from a sound standpoint were Episode 1 “Alligator Man” with the shootout, and Episode 8 “Woods.” I had considered submitting “Woods” because it’s so surreal once Paper Boi gets into the woods. We created this submergence of sound, like the woods were alive. We took it to another level with the wildlife and used specific wildlife sounds to draw some feelings of anxiety and claustrophobia.

Even an episode like “Champagne Papi,” which seems like one of the most basic from a sound editorial perspective, was actually quite varied. They’re going between different rooms at a party and we had to build spaces of people that felt different but the same in each room. It had to feel like a real space with lots of people, and the different spaces had to feel like they belonged at the same party.

But when it came down to it, I feel like “Teddy Perkins” was special because there wasn’t music to hide behind. We had to do specific and articulate work, and make sharp choices. So it’s not the episode with the most sound but it’s the episode that has the most articulate sound. And we are very proud of how it turned out.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.


Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.


Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24 x 13 projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.


Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios to be in and out of sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that, and how it allows me to interact with folks at the studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how there is always so much to learn about our industry through how folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS, but I’d like to have the ability to say if the world lost power tomorrow, I’d be okay in the woods. I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.

Sound Lounge, Mad Hat team on Sound Lounge Everywhere Atlanta

Sound Lounge has partnered with Atlanta’s Mad Hat Creative to bring its Sound Lounge Everywhere remote collaboration service to the Southeast. Sound Lounge Everywhere will allow advertising, broadcast and corporate clients in Atlanta and neighboring states to work with Sound Lounge sound editors, designers and mixers in New York in realtime and share high-quality audio and video.

This will allow clients access to top sound talent, while saving time, travel and production costs. Sound Lounge already has launched Sound Lounge Everywhere at sites in Boston and Boulder, Colorado.

At Mad Hat’s Atlanta offices, a suite dedicated to sound work is equipped with Bowers & Wilkins speakers and other leading-edge gear to ensure accurate playback of music and sound. Proprietary Sound Lounge Everywhere hardware and software facilitates realtime streaming of high-quality video and uncompressed, multichannel audio between the Mad Hat and Sound Lounge locations with virtually no latency. Web cameras and talkback modules support two-way communication.

For Mad Hat Creative, Sound Lounge Everywhere helps the company round out an offering that includes video production, editorial, visual effects, motion graphics, color correction and post services.

To help manage the new service, Sound Lounge has promoted Becca Falborn to senior producer. Falborn, who joined the studio as a producer last year, will coordinate sound sessions between the two sites, assist Sound Lounge head of production Liana Rosenberg in overseeing local sound production and serve as the studio’s social coordinator.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including posts with the post house Nice Shoes and the marketing agency Hogarth Worldwide.