
Behind the Title: New Math Managing Partner/EP Kala Sherman

Name: Kala Sherman

Company: New Math

Can you describe your company?
We are a bicoastal audio production company, with offices in NYC and LA, specializing in original music, sound design, audio mix and music supervision.

What’s your job title?
Managing Partner/EP

What does that entail?
I do everything from managing our staff to producing projects to sales and development.

What would surprise people the most about what’s underneath that title?
I am an untrained but really good psychotherapist.

New Math, New York

What have you learned over the years about running a business?
It’s highly competitive and you have to continue to hustle and push the creative product in order to stay relevant. Also, it’s paramount to assemble the best talent and treat them with the utmost respect; without our producers or composers there wouldn’t be a business.

A lot of it must be about trying to keep employees and clients happy. How do you balance that?
We face at least one root challenge: How do you keep both your clients and your creative staff happy? I think how you approach and sell an idea to the composers while still delivering what the client needs is a real art form. It gets tricky with limited music budgets these days, but I’ve found over the years that there are ways to structure the deals where the clients feel like they can get the music and sound design they need while the composers feel well-compensated and creatively fulfilled.

What’s your favorite part of the job?
I love the fact that we are creating music and I get to be part of that process.

What’s your least favorite?
Competitive demoing. Partnering with clients is just way more fun than knowing you are competing with other companies. And, not coincidentally, partnering usually results in the best and freshest creative product.

What is your favorite time of the day?
I love the evenings when I get home and hang with my daughter.

If you didn’t have this job, what would you be doing instead?
I always knew I had to work in music, so I would have probably stayed on the label side of the music business.

Can you name some recent clients?
Google, Trojan, Smirnoff, KFC, Chobani, Walmart, Zappos and ESPN.

Name three pieces of technology you can’t live without.
Spotify. Laptop. iPhone.

You recently added final mix capabilities in both of your locations. Can you talk about why now was the time?
We want to be a full-service audio company for our clients. It just makes sense when many of our clients want to work with one company for all their audio needs. If we are already providing the music and sound design, why not record the VO and provide the mix as well? Plus, it’s really fun to have clients in the studio.

What tools will be used for the mixing rooms?
Focal 5.1 monitor system in both the NY and LA mix rooms. Pro Tools mix system with the latest plugin suites. High-quality analog outboard gear from Neve, API, DW Fearn, Summit and more.

Any recent jobs in these studios you can talk about?
Yes. We just completed Chobani, Acuvue and Yellawood mixes.

Main Image: (L-R) New Math partners David Wittman, Kala Sherman, Raymond Loewy

Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and artists. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company’s mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to its crew, Shindig has brought on on-site composer Austin Shupe, a former colleague from Hum. Along with Shindig’s in-house composers, the team draws on a large pool of freelance talent, matching the genre and/or style best suited to each project.

Shindig’s licensing arm has launched a searchable boutique online music library. Building on its existing catalogue of high-quality compositions, the studio has tagged all of its tracks in a simple, searchable manner on its website, giving producers, creatives and editors direct access.

Shindig’s executive team includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O’Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work ranges from recording five mariachi pieces for El Pollo Loco with Vitro, to working with multiple composers to craft five decades of music for Honda’s Evolution commercial via Muse, to orchestrating a virtuoso piano/violin duo cover of Twisted Sister’s “I Wanna Rock” for a Mitsubishi spot out of BSSP.


Quick Chat: Digital Arts’ Josh Heilbronner on Audi, Chase spots

New York City’s Digital Arts provided audio post on a couple of 30-second commercial spots that presented sound designer/mixer Josh Heilbronner with some unique audio challenges. They are Audi’s Night Watchman via agency Venables Bell & Partners in New York and Chase’s Mama Said Knock You Out, featuring Serena Williams from agency Droga5 in New York.

Josh Heilbronner

Heilbronner, who has been sound designing and mixing for broadcast and film for almost 10 years, has worked with clients ranging from fashion brands like Nike and J.Crew to Fortune 500 companies like General Electric, Bank of America and Estée Lauder. He has also mixed promos and primetime broadcast specials for USA Network, CBS and ABC Television. In addition to commercial VO recording, editing and mixing, Heilbronner has a growing credit list of long-form documentaries and feature films, including The Broken Ones, Romance (In the Digital Age), Generation Iron 2, The Hurt Business and Giving Birth in America (a CNN special series).

We recently reached out to Heilbronner to find out more about these two very different commercial projects and how he tackled each.

Both Audi and Chase are very different assignments from an audio perspective. How did these projects come your way?
On Audi, we were asked to be part of their new 2019 A7 campaign, which follows a security guard patrolling the Audi factory in the middle of the night. It’s sort of James Bond meets Night at the Museum. The factory is full of otherworldly rooms built to put the cars through their paces (extreme cold, isolation, etc.). Q Department did a great job crafting the sounds of those worlds and really bringing the viewer into the factory. Agency Venables & Bell were looking to really pull everything together tightly and have the dialogue land up-front, while still maintaining the wonderfully lush and dynamic music and sound design that had been laid down already.

The Chase Serena campaign is an impact-driven series of spots. Droga5 has a great reputation for putting together cinematic spots and this is no exception. Drazen Bosnjak from Q Department originally reached out to see if I would be interested in mixing this one because one of the final deliverables was the Jumbotron at the US Open in Arthur Ashe Stadium.

Digital Arts has a wonderful Dolby-approved 7.1 4K theater, so we were able to really get a sense of what the finals would sound and look like up on the big screen.

Did you have any concerns going into the project about what would be required creatively or technically?
For Audi our biggest challenge was the tight deadline. We mixed in New York but we had three different time zones in play, so getting approvals could sometimes be difficult. With Chase, the amount of content for this campaign was large. We needed to deliver finals for broadcast, social media (Snapchat, Instagram, Facebook, Twitter), Jumbotron and cinema. Making sure they played back as loud and crisp as they could on all those platforms was a major focus.

What was the most challenging aspect for you on the project?
As with a lot of production audio, the noise on set was pretty extreme. For Audi they had to film the night watchman walking in different spaces, delivering the copy at a variety of volumes. It all needed to gel together as if he was in one smaller room talking directly to the camera, as if he were a narrator. We didn’t have access to re-record him, so we had to use a few different denoise tools, such as iZotope RX6, Brusfri and Waves WNS to clear out the clashing room tones.

The biggest challenge on Chase was the dynamic range and power of these spots. Serena’s beautifully hushed whisper narration is surrounded by impactful bass drops, cinematic hits and lush ambiences. Reining all that in, building to a climax and still having her narration be the focus was a game of cat and mouse. Also, broadcast standards are a bit restrictive when it comes to large impacts, so finding the right balance was key.

Any interesting technology or techniques that you used on the project?
I mainly use Avid Pro Tools Ultimate 2018. They have made some incredible advancements — you can now do everything on one machine, all in the box. I can have 180 tracks running in a surround session and still print every deliverable (5.1, stereo, stems etc.) without a hiccup.

I’ve been using Penteo 7 Pro for stereo-to-5.1 upmixing. It does a fantastic job filling in the surrounds, but also folds down to stereo nicely (and passes QC). Spanner is another useful tool when working with all sorts of channel counts. It allows me to down-mix, rearrange channels and route audio to the correct buses easily.
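For readers unfamiliar with fold-downs, the short Python sketch below shows a standard ITU-R BS.775-style 5.1-to-stereo fold-down. It only illustrates the kind of down-mix a routing tool like Spanner performs; the function name and the -3dB coefficients are illustrative assumptions, not Spanner’s actual implementation.

import numpy as np

def fold_down_51(l, r, c, lfe, ls, rs, center_db=-3.0, surround_db=-3.0):
    # Fold six mono channel arrays down to stereo.
    # The LFE channel is dropped, a common (but not universal) practice.
    cg = 10.0 ** (center_db / 20.0)    # about 0.707 at -3dB
    sg = 10.0 ** (surround_db / 20.0)
    lo = l + cg * c + sg * ls
    ro = r + cg * c + sg * rs
    return np.stack([lo, ro], axis=-1)  # shape: (samples, 2)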


Behind the Title: Mr. Bronx sound designer/mixer Dave Wolfe

NAME: Dave Wolfe

COMPANY: NYC’s Mr. Bronx Audio Post

CAN YOU DESCRIBE YOUR COMPANY?
Mr. Bronx is an audio post and sound design studio that works on everything from TV and film to commercials and installations.

WHAT’S YOUR JOB TITLE AND WHAT DOES THAT ENTAIL?
I am a partner and mixer. I do mostly sound design, dialogue editing and re-recording mixing. But I also have to manage the Bronx team, help create bids and get involved on the financial side.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER YOUR TITLE?
They would be surprised how often I change out old toilet paper rolls.

WHAT TOOLS DO YOU USE IN YOUR WORK?
Avid Pro Tools, and a ton of our sound design is created with Native Instruments Komplete, specifically, Reaktor and Kontakt.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love helping to push the story further. Also, I like how fast the turnover is on sound jobs. We’re always getting to tackle new challenges — we come in toward the end of a project, do our job and move on.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Lunchtime. We’re blessed with a full-time in-house chef named Gen Sato. He’s been here maybe six or seven years. He makes great cold soba noodles in the summer and David Chang’s Bo Ssam in the winter. David Chang has a well-known NYC restaurant called Momofuku Ssäm Bar. Bo Ssam is a slow-roasted pork shoulder with a sugary crust, placed in a lettuce wrap with rice and a ginger scallion sauce.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I was going to be a lawyer before I had this job. Now it’s hard to imagine what I would do without this gig, but if I had to choose, I would open a Jewish deli in Rhinebeck, New York. I could sell pastrami and lox.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I came to it late. I didn’t get my first apprenticeship until I was 25. A lot of kids tend to go to school for audio now.

I have a business degree, and I wanted to work for a record label. The first opening I found was in business affairs, so I started moving down that path. After the first two to three years there, however, I realized I was unhappy because I was creatively unfulfilled.

One day I went to MetLife Stadium for a football game and a girl asked what I would rather be doing instead. I said, “I’d rather be a mixer.” She said, “I know someone who is hiring.” Two weeks later, I left my job and took an apprenticeship at a mix house.

Random Acts of Flyness

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We finished a TV show for HBO this summer that aired at the end of August called Random Acts of Flyness. It was a super creative challenge. It’s a variety show with live-action shorts, some sketch work, animated pieces and stop-motion animation. We would turn around an episode a week: sound design, dialogue edit, ADR, music edit, taking each episode from soup to nuts, from an audio perspective.

The creator, Terence Nance, had a very specific vision for the project. HBO called it “a fluid, stream-of-consciousness response to the contemporary American mediascape.” Originally, I didn’t know what that meant, but after a couple minutes of watching, it made perfect sense.

We’ve also completed the first season of the HBO comedy show 2 Dope Queens, with the second season coming up, as well as an as-yet-untitled project for Hulu, and there are many more exciting works to come.

2 Dope Queens

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
This would also be Random Acts of Flyness. We were so proud to help bring this to life by supplying some heavy sound design. We love to lend a hand in order to tell really necessary stories.

It was also big for our company. We hired a new mixer, Geoff Strasser, who led the charge for us on this project. We knew that he was going to be a great fit, personality and skill set-wise.

One of our other mixers, Eric Hoffman, mixed and sound designed Lemonade almost single-handedly. Speaking as someone who helped start the company, I couldn’t be prouder of the people I get to work with.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Like every other person who works in audio post, there’s something I heavily use called the iZotope RX Post Production Suite. It’s a set of audio restoration plugins, and you can’t live without it if you do our type of work.

When someone is making a movie, TV show or commercial, they tend to leave audio to the end. They don’t usually spend a lot of time on it in production — as the saying goes, “we’ll fix it in post,” and these tools are how we fix it.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently bought a 1966 Ford pickup truck, so right now I’m meditatively polishing the hubcaps. That and playing my PS4.


Review: Audionamix IDC for cleaning dialogue

By Claudio Santos

Sound editing has many different faces. It is part of big-budget blockbuster movies and also an integral part of small hobby podcasting projects. Every project has its own sound needs. Some edit thousands upon thousands of sound effects. Others have to edit hundreds of hours of interviews. What most projects have in common, though, is that they circle around dialogue, whether in the form of character lines, interviews, narrators or any other format by which the spoken word guides the experience.

Now let’s be honest, dialogue is not always well recorded. Archival footage needs to be understood, even if the original recording was made with a microphone that was 20 feet away from the speaker in a basement full of machines. Interviews are often quickly recorded in the five minutes an artist has between two events while driving from point A to point B. And until electric cars are the norm, the engine sound will always be married to that recording.

The fact is, recordings are sometimes a little bit noisier than ideal, and it falls upon the sound editor to make it a little bit clearer.

To help with that endeavor, Audionamix has come out with the newest version of its IDC (Instant Dialogue Cleaner). I have been testing it on different kinds of material and must say that overall I’m very impressed with it.

Let’s first get the awkward parts of this conversation out of the way and see what the IDC is not:

– It is not a full-featured restoration workstation, such as iZotope RX.
– It does not depend on the cloud like other Audionamix plugins.
– It is not magic.

Honestly, all that is fine because what it does do, it does very well and in a very straightforward manner.

IDC aims to keep it simple. You get three controls plus output level and bypass. This makes trying out the plugin on different samples of audio a very quick task, which means you don’t waste time on clips that are beyond salvation.
The three controls you get are:
– Strength: The aggressiveness of the algorithm
– Background: Level of the separated background noise
– Speech: Level of the separated speech

Like all digital processing tools, things sound a bit techno-glitchy toward the extremes of the scales, but within reasonable parameters the plugin does a very good job of reducing background levels without garbling the speech too noticeably. I personally had fairly good results with strengths around 40% to 60% and background reductions of up to -24 dB. Anything more radical than that sounded heavily processed.
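To make the three controls concrete, here is a conceptual Python sketch of how a separated signal might be recombined. The separation itself is the proprietary part of IDC, so the speech and background stems are simply assumed as inputs; none of this is Audionamix’s actual code.

import numpy as np

def db_to_gain(db):
    # Convert decibels to a linear amplitude multiplier.
    return 10.0 ** (db / 20.0)

def idc_style_mix(speech, background, speech_db=0.0, background_db=-24.0, output_db=0.0):
    # Recombine separated stems with independent level controls,
    # mirroring the Speech, Background and output faders.
    # (The Strength control governs the separation stage itself,
    # which is not modeled here.)
    mix = speech * db_to_gain(speech_db) + background * db_to_gain(background_db)
    return mix * db_to_gain(output_db)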

Now, it’s important to note that not all noise is the same. In fact, there are entirely different kinds of audio muck that obscure dialogue, and the IDC is more effective against some than others.

Noise reduction comparison between the original clip (1), Cedar DNS Two VST (2), Audionamix IDC (3) and iZotope RX 7 Voice Denoise (4). The clip presents loud air conditioner noise in the background of close-mic’d dialogue. All plugins had their levels boosted by 4dB after processing.

– Constant broadband background noise (air conditioners, waterfalls, freezers): Here the IDC does fairly well. I couldn’t notice a lot of pumping at the beginning and end of phrases, and the background didn’t sound crippled either.

– Varying broadband background noise (distant cars passing, engines from inside cars): Here again, the IDC does a good job of increasing the dialogue/background ratio. It does introduce artifacts when the background noise spikes or varies very abruptly, but if the goal is to increase intelligibility then it is definitely a success in that area.

– Wind: On this kind of noise the IDC needs a little helping hand from other processes. I tried to clean up some heavily wind-damaged dialogue, and even though the wind was indeed lowered significantly, so was the speech under it, resulting in a pumping clip that went up and down following the shadow of the removed wind. I believe that with some pre-processing using high-pass filters and a little bit of limiting the results could have been better, so if you are buying this in an emergency to clean up bad wind audio, I’d definitely keep that in mind. It does work well on light wind reduction, but in those cases, too, it seems to benefit from some pre-processing.

Summing Up
I am happily impressed by the plugin. It does not work miracles, but no one should really expect any tool to do so. It is great at improving the signal-to-noise ratio of your sound and does so with a very easy-to-use interface, which allows you to quickly decide whether you like the results or not. That alone is a plus worth taking into consideration.


Claudio Santos is a sound mixer and tech aficionado who works at Silver Sound in NYC. He has worked on a wide range of sound projects, from traditional shows like I Was Prey for Animal Planet to VR experiences like The Mile-Long Opera.


Cutters New York adds spot editor Alison Grasso

Cutters Studios in New York has added commercial editor Alison Grasso to its staff. Previously a staff editor for Crew Cuts in New York, Grasso started her commercial career with that company immediately upon graduating from NYU (BFA, Film and Television Production).

She has experience in documentary-style visual storytelling, beauty and fashion and has collaborated with brands such as Garnier, Gatorade, L’Oreal, Pantene, Target and Verizon.

She cuts with Adobe Premiere on a Mac and uses After Effects when extra work is needed. Grasso also edits audio, such as the entire second season of the podcast Limetown and promotional material for the audio documentary The Wilderness, hosted by Pod Save America’s Jon Favreau.

When asked about editing audio, in particular Limetown, she says, “Premiere is obviously my ‘first language,’ so that made it much easier and faster to work with, versus something like Audition or Pro Tools, and I actually did use the video track to create visual slates and markers to help me through the edits. Since the episodes were often 30 to 60 minutes, it was incredibly helpful in jumping to certain scenes or sections, determining where mid-roll should be, how long certain scenes were playing out to be, etc. And when sharing with other people in the workflow (producers, directors, sound designers, etc.), I would export a QuickTime with a video track that made working remotely on comments and changes much quicker and easier, versus just referencing timecode and listening for contextual cues to get to a certain point in the edit.”

Her talents don’t only include editing. Grasso is also a director, shooter, writer and on-camera talent. Many New York stories — and in particular, those involving craft beer — have taken the spotlight in her latest projects.

“I aspire to do work that isn’t confined by boundaries,” says Grasso. “After seeing the breadth of work from Cutters Studios that supports global clients with projects that reach beyond the traditional, I’m confident the relationship will be a great fit. I’m really looking forward to contributing my sensibilities to the Cutters Studios culture, and being a positive, collaborative voice amongst my new peers, clients and colleagues.”


Making audio pop for Disney’s Mary Poppins Returns

By Jennifer Walden

As the song says, “It’s a jolly holiday with Mary.” And just in time for the holidays, there’s a new Mary Poppins musical to make the season bright. In theaters now, Disney’s Mary Poppins Returns is directed by Rob Marshall, who, with Chicago, Nine and Into the Woods on his resume, has become the master of modern musicals.

Renée Tondelli

In this sequel, Mary Poppins (Emily Blunt) comes back to help the now grown-up Michael (Ben Whishaw) and Jane Banks (Emily Mortimer) by attending to Michael’s three children: Annabel (Pixie Davies), John (Nathanael Saleh) and Georgie (Joel Dawson). It’s a much-needed reunion for the family, as Michael is struggling with the loss of his wife.

Mary Poppins Returns is another family reunion of sorts. According to Renée Tondelli, who, along with Eugene Gearty, supervised and co-designed the sound, director Marshall likes to use the same crews on all his films. “Rob creates families in each phase of the film, so we all have a shorthand with each other. It’s really the most wonderful experience you can have in a filmmaking process,” says Tondelli, who has worked with Marshall on five films, three of which were his musicals. “In the many years of working in this business, I have never worked with a more collaborative, wonderful, creative team than I have on Mary Poppins Returns. That goes for everyone involved, from the picture editor down to all of our assistants.”

Sound editorial took place in New York at Sixteen 19, the facility where the picture was being edited. Sound mixing was also done in New York, at Warner Bros. Sound.

In his musicals, Marshall weaves songs into scenes in a way that feels organic. The songs are coaxed from the emotional quotient of the story. That’s not only true for how the dialogue transitions into the singing, but also for how the music is derived from what’s happening in the scene. “Everything with Rob is incredibly rhythmic,” she says. “He has an impeccable sense of timing. Every breath, every footstep, every movement has a rhythmic cadence to it that relates to and works within the song. He does this with every art form in the production — with choreography, production design and sound design.”

From a sound perspective, Tondelli and her team worked to integrate the songs by blending the pre-recorded vocals with the production dialogue and the ADR. “We combined all of those in a micro editing process, often syllable by syllable, to create a very seamless approach so that you can’t really tell where they stop talking and start singing,” she says.

The Conversation
For example, near the beginning of the film, Michael is looking through the attic of their home on Cherry Tree Lane as he speaks to the spirit of his deceased wife, telling her how much he misses her in a song called “The Conversation.” Tondelli explains, “It’s a very delicate scene, and it’s a song that Michael was speaking/singing. We constantly cut between his pre-records and his production dialogue. It was an amazing collaboration between me, the supervising music editor Jennifer Dunnington and re-recording mixer Mike Prestwood Smith. We all worked together to create this delicate balance so you really feel that he is singing his song in that scene in that moment.”

Since Michael is moving around the attic as he’s performing the song, the environment affects the quality of the production sound. As he gets closer to the window, the sound bounces off the glass. “Mike [Prestwood Smith] really had his work cut out for him on that song. We were taking impulse responses from the end of the slates and feeding them into Audio Ease’s Altiverb to get the right room reverb on the pre-records. We did a lot of impulse responses, reverbs and EQs to make that scene all flow, but it was worth it. It was so beautiful.”
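Altiverb is a convolution reverb, so the underlying operation is straightforward to sketch in Python. Assuming mono WAV files (the file names here are placeholders, not production assets), the idea is to convolve a dry recording with a captured impulse response:

import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_prerecord.wav")    # pre-record to be placed in the room
ir, sr_ir = sf.read("room_impulse.wav")   # impulse response captured on set
assert sr == sr_ir, "resample one file so the sample rates match"

wet = fftconvolve(dry, ir)                # the reverberant version of the signal
wet /= max(abs(wet).max(), 1e-9)          # normalize to avoid clipping
sf.write("prerecord_with_room.wav", wet, sr)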

The Bowl
They also captured impulse responses for another sequence, which takes place inside a ceramic bowl. The sequence begins with the three Banks children arguing over their mother’s bowl. They accidentally drop it and it breaks. Mary and Jack (Lin-Manuel Miranda) notice the bowl’s painted scenery has changed. The horse-drawn carriage now has a broken wheel that must be fixed. Mary spins the bowl and a gust of wind pulls them into the ceramic bowl’s world, which is presented in 2D animation. According to Tondelli, the sequence was hand-drawn, frame by frame, as an homage to the original Mary Poppins. “They actually brought some animators out of retirement to work on this film,” she says.

Tondelli and co-supervising sound editor/co-sound designer Eugene Gearty placed mics inside porcelain bowls, in a porcelain sink, and near marble tiles, which they thumped with rubber mallets, broken pieces of ceramic and other materials. The resulting ring-out was used to create reverbs that were applied to every element in the ceramic bowl sequence, from the dialogue to the Foley. “Everything they said, every step they took had to have this ceramic feel to it, so as they are speaking and walking it sounds like it’s all happening inside a bowl,” Tondelli says.

She first started working on this hand-drawn animation sequence when it showed little more than the actors against a greenscreen with a few pencil drawings. “The fastest and easiest way to make a scene like that come alive is through sound. The horse, which was possibly the first thing that was drawn, is pulling the carriage. It dances in this syncopated rhythm with the music so it provides a rhythmic base. That was the first thing that we tackled.”

After the carriage is fixed, Mary and her troupe walk to the Royal Doulton Music Hall where, ultimately, Jack and Mary are going to perform. Traditionally, a music hall in London is very rowdy and boisterous. The audience is involved in the show and there’s an air of playfulness. “Rob said to me, ‘I want this to be an English music hall, Renée. You really have to make that happen.’ So I researched what music halls were like and how they sounded.”

Since the animation wasn’t complete, Tondelli consulted with the animators to find out who — or rather what — was going to be in the audience. “There were going to be giraffes dressed up in suits with hats and Indian elephants in beautiful saris, penguins on the stage dancing with Jack and Mary, flamingos, giant moose and rabbits, baby hippos and other animals. The only way I thought I could do this was to go to London and hire actors of all ages who could do animal voices.”

But there were some specific parameters that had to be met. Tondelli defines the world of Mary Poppins Returns as being “magical realism,” so the animals couldn’t sound too cartoony. They had to sound believably like animal versions of British citizens. Also, the actors had to be able to sing in their animal voices.

According to Tondelli, they recorded 15 actors at a time for a period of five days. “I would call out, ‘Who can do an orangutan?’ And then the actors would all do voices and we’d choose one. Then they would do the whole song and sing out and call out. We had all different accents — Cockney, Welsh and Scottish,” she says. “All the British Isles came together on this and, of course, they all loved Mary and knew all the songs so they sang along with her.”

On the Dolby Atmos mix, the music hall scene really comes alive. The audience’s voices are coming from the rafters and all around the walls and the music is reverberating into the space — which, by the way, no longer sounds like it’s in a ceramic bowl even though the music hall is in the ceramic bowl world. In addition to the animal voices, there are hooves and paws for the animals’ clapping. “We had to create the clapping in Foley because it wasn’t normal clapping,” explains Tondelli. “The music hall was possibly the most challenging, but also the funnest scene to do. We just loved it. All of us had a great time on it.”

The Foley
The Foley elements in Mary Poppins Returns often had to be performed in perfect sync with the music. On the big dance numbers, like “Trip the Light Fantastic,” the Foley was an essential musical element since the dances were reconstructed sonically in post. “Everything for this scene was wiped away, even the vocals. We ended up using a lot of the pre-records for this one and a lot less production sound,” says Tondelli.

In “Trip the Light Fantastic,” Jack is bringing the kids back home through the park, and they emerge from a tunnel to see nearly 50 lamplighters on lampposts. Marshall and John DeLuca (choreographer/producer/screen story writer) arranged the dance to happen in multiple layers, with each layer doing something different. “The background dancers were doing hand slaps and leg swipes, and another layer was stepping on and off of these slate surfaces. Every time the dancers would jump up on the lampposts, they’d hit it and each would ring out in a different pitch,” explains Tondelli.

All those complex rhythms were performed in Foley in time to the music. It’s a pretty tall order to ask of any Foley artist, but Tondelli has the perfect solution for that dilemma. “I hire the co-choreographers (for this film, Joey Pizzi and Tara Hughes) or dancers that actually worked on the film to do the Foley. It’s something that I always do for Rob’s films. There’s such a difference in the performance,” she says.

Tondelli worked with the Foley team of Marko Costanzo and George Lara at c5 Sound in New York, who helped to build custom surfaces — like a slate-on-sand surface for the lamplighter dance — and arrange multi-surface layouts to optimally suit the Foley performer’s needs.

For instance, in the music hall sequence, the dance on stage incorporates books, so they needed three different surfaces: wood, leather and a papery-sounding surface set up in a logical, easily accessible way. “I wanted the dancer performing the Foley to go through the entire number while jumping off and on these different surfaces so you felt like it was a complete dance and not pieced together,” she says.

For the lamplighter dance, they had a big, thick pig iron pipe next to the slate floor so that the dancer performing the Foley could hit it every time the dancers on-screen jumped up on the lampposts. “So the performer would dance on the slate floor, then hit the pipe and then jump over to the wood floor. It was an amazingly syncopated rhythmic soundtrack,” says Tondelli.

“It was an orchestration, a beautiful sound orchestra, a Foley orchestra that we created and it had to be impeccably in sync. If there was a step out of place you’d hear it,” she continues. “It was really a process to keep it in sync through all the edit conforms and the changes in the movie. We had to be very careful doing the conforms and making the adjustments because even one small mistake and you would hear it.”

The Wind
Wind plays a prominent role in the story. Mary Poppins descends into London on a gust of wind. Later, they’re transported into the ceramic bowl world via a whirlwind. “It’s everywhere, from a tiny leaf blowing across the sidewalk to the huge gale in the park,” attests Tondelli. “Each one of those winds has a personality that Eugene [Gearty] spent a lot of time working on. He did amazing work.”

As far as the on-set fans and wind machines wreaking havoc on the production dialogue, Tondelli says there were two huge saving graces. First was production sound mixer Simon Hayes, who did a great job of capturing the dialogue despite the practical effects obstacles. Second was dialogue editor Alexa Zimmerman, who was a master at iZotope RX. All told, about 85% of the production dialogue made it into the film.

“My goal — and my unspoken order from Rob — was to not replace anything that we didn’t have to. He’s so performance-oriented. He arduously goes over every single take to make sure it’s perfect,” says Tondelli, who also points out that Marshall isn’t afraid of using ADR. “He will pick words from a take and he doesn’t care if it’s coming from a pre-record and then back to ADR and then back to production. Whichever has the best performance is what wins. Our job then is to make all of that happen for him.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.


Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.

Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko


First Man: Historical fiction meets authentic sound

By Jennifer Walden

Historical fiction is not a rigidly factual account, but rather an interpretation. Fact and fiction mix to tell a story in a way that helps people connect with the past. In director Damien Chazelle’s film First Man, audiences experience his vision of how the early days of space exploration may have been for astronaut Neil Armstrong.

Frank A. Montaño

The uncertainty of reaching the outer limits of Earth’s atmosphere, the near disasters and mistakes that led to the loss of several lives, and the ultimate success of landing on the moon are all presented so viscerally that the audience feels as though it is riding along with Armstrong.

While First Man is not a documentary, there are factual elements in the film, particularly in the sound. “The concept was to try to be true to the astronauts’ sonic experience. What would they hear?” says effects re-recording mixer Frank A. Montaño, who mixed the film alongside re-recording mixer Jon Taylor (on dialogue/music) in the Alfred Hitchcock Theater at Universal Studios in Los Angeles.

Supervising sound editors Ai-Ling Lee (who also did re-recording mixing on the film) and Milly Iatrou were in charge of designing a soundtrack that was both authentic and visceral — a mix of reality and emotionality. When Armstrong (Ryan Gosling) and Dave Scott (Christopher Abbott) are being shot into space on a Gemini mission, everything the audience hears may not be completely accurate, but it’s meant to produce the accurate emotional response — i.e., fear, uncertainty, excitement, anxiety. The sound helps the audience to connect with the astronauts strapped into that handcrafted space capsule as it rattles and clatters its way into space.

As for the authentic sounds related to the astronauts’ experience — from the switches and toggles to the air inside the spacesuits — those were collected by several members of the post sound team, including Montaño, who by coincidence is an avid fan of the US space program and full of interesting facts on the subject. Their mission was to find and record era-appropriate NASA equipment and gear.

Recording
Starting at ILC Dover in Frederica, Delaware — the original manufacturer of spacesuits for the Apollo missions — Montaño and sound effects recordist Alex Knickerbocker recorded a real A7L-B, which, says Montaño, is the second revision of the Apollo suit. It was actually worn by astronaut Paul Weitz, although it wasn’t the one he wore in space. “ILC Dover completely opened up to us, and were excited for this to happen,” says Montaño.

They spent eight hours recording every detail of the suit, like the umbilicals snapping in and out of place, and gloves and helmet (actually John Young’s from Apollo 10) locking into the rings. “In the film, when you see them plug in the umbilical for water or air, that’s the real sound. When they are locking the bubble helmet on to Neil’s suit in the clean room, that’s the real sound,” explains Montaño.

They also captured the internal environment of the spacesuit, which had never been officially documented before. “We could get hours of communications — that was easy — but there was no record of what those astronauts [felt like in those] spacesuits for that many hours, and how those things kept them alive,” says Montaño.

Back at Universal on the Hitchcock stage, Taylor and mix tech Bill Meadows were receiving all the recorded sounds from Montaño and Knickerbocker, who were still at ILC Dover. “We weren’t exactly in the right environment to get these recordings, so JT [Jon Taylor] and Bill let us know if it was a little too live or a little too sharp, and we’d move the microphones or try different microphones or try to get into a quieter area,” says Montaño.

Next, Montaño and Knickerbocker traveled to the US Space and Rocket Center in Huntsville, Alabama, where the Saturn V rocket was developed. “This is where Wernher von Braun (chief architect of the Saturn V rocket) was based out of, so they have a huge Apollo footprint,” says Montaño. There they got to work inside a Lunar Excursion Module (LEM) simulator, which according to Montaño was one of only two that were made for training. “All Apollo astronauts trained in these simulators including Neil and Buzz, so it was under plexiglass as it was only for observation. But, they opened it up to us. We got to go inside the LEM and flip all the switches, dials, and knobs and record them. It was historic. This has never been done before and we were so excited to be there,” says Montaño.

Additionally, they recorded a DSKY (display and keyboard) flight guidance computer interface used by the crew to communicate with the LEM computer. This can be seen during the sequence of Buzz (Corey Stoll) and Neil landing on the moon. “It has this big numeric keypad, and when Buzz is hitting those switches it’s the real sound. When they flip all those switch banks, all those sounds are the real deal,” reports Montaño.

Other interesting recording adventures include the Cosmosphere in Hutchinson, Kansas, where they recorded all the switches and buttons of the original control flight consoles from Mission Control at the Johnson Space Center (JSC). At Edwards Airforce Base in Southern California, they recorded Joe Walker’s X-15 suit, capturing the movement and helmet sounds.

The team also recorded Beta cloth at the Space Station Museum in Novato, California, which is the white-colored, fireproof silica fiber cloth used for the Apollo spacesuits. Gene Cernan’s (Apollo 17) connector cover was used, which reportedly sounds like a plastic bag or hula skirt.

Researching
They also recreated sounds based on research. For example, they wanted to capture lunar boots on the moon’s surface, but not from the exterior perspective of the boots: What would boots on the lunar surface sound like from inside the spacesuit? First, they did the research to find the right silicone used during that era. Then Frank Cuomo, who is a post supervisor at Universal, created a unique pair of lunar boots based on Montaño’s idea of having ports above the soles, into which they could insert lav mics. “Frank happens to do this as a hobby, so I bounced this idea for the boots off of him and he actually made them for us,” says Montaño.

Next, they researched what the lunar surface was made of. Their path led to NASA’s Ames Research Center where they have an eight-ton sandbox filled with JSC-1A lunar regolith simulant. “It’s the closest thing to the lunar surface that we have on earth,” he explains.

He strapped on the custom-made boots and walked on this “lunar surface” while Knickerbocker and sound effects recordist Peter Brown captured it with numerous different mics, including a hydrophone placed on the surface, “which gave us a thuddy, non-pitched/non-fidelity-altered sound that was the real deal,” says Montaño. “But what worked best, to get that interior sound, were the lav mics inside those ports on the soles.”

While the boots on the lunar surface sound ultimately didn’t make it into the film, the boots did come in handy for creating a “boots on LEM floor” sound. “We did a facsimile session. JT (Taylor) brought in some aluminum and we rigged it up and got the silicone soles on the aluminum surface for the interior of the LEM,” says Montaño.

Jon Taylor

Another interesting sound they recreated was the low-fuel alarm sound inside the LEM. According to Montaño, their research uncovered a document that shows the alarm’s specific frequencies, that it was a square wave, and that it was 750 cycles to 2,000 cycles. “The sound got a bit tweaked out just for excitement purposes. You hear it on their powered descent, when they’re coming in for a landing on the moon, and they’re low on fuel and 20 seconds from a mandatory abort.”
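Those documented specs are enough to approximate the base alarm. The Python sketch below synthesizes a square wave alternating between 750Hz and 2,000Hz; the 250ms cadence is an assumption, since the document cited only the frequencies and the waveform, and the film’s version was further tweaked for effect.

import numpy as np
import soundfile as sf

sr = 48000
t = np.arange(int(sr * 0.25)) / sr                     # 250ms per tone (assumed)
square = lambda f: np.sign(np.sin(2 * np.pi * f * t))  # ideal square wave
alarm = np.tile(np.concatenate([square(750.0), square(2000.0)]), 8)
sf.write("lem_low_fuel_alarm.wav", 0.5 * alarm, sr)    # -6dB to leave headroom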

Altogether, the recording process was spread over nearly a year, with about 98% of their recorded sounds making it into the final soundtrack. Taylor says, “The locking of the gloves, and the locking and handling of the helmet that belonged to John Young will live forever. It was an honor to work with that material.”

Montaño adds, “It was good to get every angle that we could, for all the sounds. We spent hours and hours trying to come up with these intangible pieces that only a handful of people have ever heard, and they’re in the movie.”

Helmet Comms
To recreate the comms sound of the transmissions back and forth between NASA and the astronauts, Montaño and Taylor took a practical approach. Instead of relying on plug-ins for futz and reverb, they built a 4-foot-by-3-foot isolated enclosure on wheels, deadened with acoustical foam and featuring custom fit brackets inside to hold either a high-altitude helmet (to replicate dialogue for the X-15 and the Gemini missions) or a bubble helmet (for the Apollo missions).

Each helmet was recorded independently using its own two-way coaxial car speaker and a set of microphones strapped to mini tripods that were set inside each helmet in the enclosure. The dialogue was played through the speaker in the helmet and sent back to the console through the mics. Taylor says, “It would come back really close to being perfectly in sync. So I could do whatever balance was necessary and it wouldn’t flange or sound strange.”

By adjusting the amount of helmet feed in relation to the dry dialogue, Taylor was able to change the amount of “futz.” If a scene was sonically dense, or dialogue clarity wasn’t an issue (such as the tech talk exchanges between Houston and the astronauts), then Taylor could push the futz further. “We were constantly changing the balance depending on what the effects and music were doing. Sometimes we could really feel the helmet and other times we’d have to back off for clarity’s sake. But it was always used, just sometimes more than others.”
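In signal terms, the balance Taylor describes is a simple wet/dry blend. Here is a hypothetical Python sketch of that idea, assuming the helmet re-record is already time-aligned with the dry dialogue, as the article says it effectively was:

import numpy as np

def futz_blend(dry, helmet, futz):
    # futz = 0.0 -> clean dialogue only; futz = 1.0 -> helmet feed only.
    # In practice the ratio was ridden scene by scene against music and effects.
    return (1.0 - futz) * np.asarray(dry) + futz * np.asarray(helmet)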

Density and Dynamics
The challenge of the mix on First Man was to keep the track dynamic and not let the sound get too loud until it absolutely needed to. This made the launches feel powerful and intense. “If everything were loud up to that point, it just wouldn’t have the same pop,” says Taylor. “The director wanted to make sure that when we hit those rockets they felt huge.”

One way to support the dynamics was choosing how to make the track appropriately less dense. For example, during the Gemini launch there are the sounds of the rocket’s different stages as it blasts off and breaks through the atmosphere, and there’s the sound of the space capsule rattling and metal groaning. On top of that, there’s Neil’s voice reading off various specs.

“When it comes to that kind of density sound-wise, you have to decide should we hear the actors? Are we with them? Do we have to understand what they are saying? In some cases, we just blew through that dialogue because ‘RCS Breakers’ doesn’t mean anything to anybody, but the intensity of the rocket does. We wanted to keep that energy alive, so we drove through the dialogue,” says Montaño. “You can feel that Neil’s calm, but you don’t need to understand what he’s saying. So that was a trick in the balance; deciding what should be heard and what we can gloss over.”

Another helpful factor was that the film’s score, by composer Justin Hurwitz, wasn’t bombastic. During the rocket launches, it wasn’t fighting for space in the mix. “The direction of the music is super supportive and it never had to play loud. It just sits in the pocket,” says Taylor. “The Gemini launch didn’t have music, which really allowed us to take advantage of the sonic structure that was built into the layers of sound effects and design for the take off.”

Without competition from the music and dialogue, the effects could really take the lead and tell the story of the Gemini launch. The camera stays close-up on Neil in the cockpit and doesn’t show an exterior perspective (as it does during the Apollo launch sequence). The audiences’ understanding of what’s happening comes from the sound. You hear the “bbbbbwhoop” of the Titan II missile during ignition, and hear the liftoff of the rocket. You hear the point at which they go through maximum dynamic pressure, characterized by the metal rattling and groaning inside the capsule as it’s subjected to extreme buffeting and stress.

Next you hear the first stage cut-off and the initial boosters break away followed by the ignition of the second stage engine as it takes over. Then, finally, it’s just the calmness of space with a few small metal pings and groans as the capsule settles into orbit.

Even though it’s an intense sequence, all the details come through in the mix. “Once we got the final effects tracks, as usual, we started to add more layers and more detail work. That kind of shaping is normal. The Gemini launch builds to that moment when it comes to an abrupt stop sonically. We built it up layer-wise with more groan, more thrust, more explosive/low-end material to give it some rhythm and beats,” says Montaño.

Although the rocket sounds like it’s going to pieces, Neil doesn’t sound like he’s going to pieces. He remains buttoned-up and composed. “The great thing about that scene was hearing the contrast between this intense rocket and the calmness of Neil’s voice. The most important part of the dialogue there was that Neil sounded calm,” says Taylor.

Apollo
Visually, the Apollo launch was handled differently in the film. There are exterior perspectives, but even though the camera shows the launch from various distances, the sound maintains its perspective — close as hell. “We really filled the room up with it the whole time, so it always sounds large, even when we are seeing it from a distance. You really feel the weight and size of it,” says Montaño.

The rocket that launched the Apollo missions was the most powerful ever created: the Saturn V. Recreating that sound was a big job and came with a bit of added pressure from director Chazelle. “Damien [Chazelle] had spoken with one of the Armstrong sons, Mark, who said he’s never really felt or heard a Saturn V liftoff correctly in a film. So Damien threw it our way. He threw down the gauntlet and challenged us to make the Armstrong family happy,” says Montaño.

Field recordists John Fasal and Skip Longfellow were sent to record the launch of the world’s second largest rocket — SpaceX’s Falcon Heavy. They got as close as they could to the rocket, which generated 5.5 million pounds of thrust. They also recorded it at various distances farther away. This was the biggest component of their Apollo launch sound for the film. It’s also bolstered by recordings that Lee captured of various rocket liftoffs at Vandenberg Air Force Base in California.

But recreating the world’s most powerful rocket required some mega recordings that regular mics just couldn’t produce. So they headed over to the Acoustic Test Chamber at JPL in Pasadena, which is where NASA sonically bombards and acoustically excites hardware before it’s sent into space. “They simulate the conditions of liftoff to see if the hardware fails under that kind of sound pressure,” says Montaño. They do this by “forcing nitrogen gas through this six-inch hose that goes into a diaphragm that turns that gas into some sort of soundwave, like pink noise. There are four loudspeakers bolted to the walls of this hard-shelled room, and the speakers are probably about 4 feet by 4 feet. It goes up to 153dB in there; that’s max.” (Fun fact: The sound team wasn’t able to physically be in the room to hear the sound since the gas would have killed them. They could only hear the sound via their recordings.)

The low-end energy of that sound was a key element in their Apollo launch. So how do you capture the most low-end possible from a high-SPL source? Taylor had an interesting solution: using a 10-inch bass speaker as a microphone. “Years ago, while reading a music magazine, I discovered this method of recording low-end using a subwoofer or any bass speaker. If you have a 10-inch speaker as a mic, you’re going to be able to capture much more low-end. You may even be able to get as low as 7Hz,” Taylor says.

Montaño adds, “We were able to capture another octave lower than we’d normally get. The sounds we captured really shook the room, really got your chest cavity going.”

For the rocket sequences — the X-15 flight, the Gemini mission and the Apollo mission — their goal was to craft an experience the audience could feel. It was about energy and intensity, but also clarity.

Taylor concludes, “Damien’s big thing — which I love — is that he is not greedy when it comes to sound. Sometimes you get a movie where everything has to be big. Often, Damien’s notes were for things to be lower, to lower sounds that weren’t rocket affiliated. He was constantly making sure that we did what we could to get those rocket scenes to punch, so that you really felt it.”


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

Capturing realistic dialogue for The Front Runner

By Mel Lambert

Early on in his process, The Front Runner director Jason Reitman asked frequent collaborator and production sound mixer Steve Morrow, CAS, to join the production. “It was maybe inevitable that Jason would ask me to join the crew,” says Morrow, who has worked with the director on Labor Day, Up in the Air and Thank You for Smoking. “I have been part of Jason’s extended family for at least 10 years — having worked with his father Ivan Reitman on Draft Day — and know how he likes to work.”

Steve Morrow

This Sony Pictures film was co-written by Reitman, Matt Bai and Jay Carson, and based on Bai’s book, “All the Truth Is Out.” The Front Runner follows the rise and fall of Senator Gary Hart, set during his unsuccessful presidential campaign in 1988 when he was famously caught having an affair with the much younger Donna Rice. Despite capturing the imagination of young voters, and being considered the overwhelming front runner for the Democratic nomination, Hart’s campaign was sidelined by the affair.

It stars Hugh Jackman as Gary Hart, Vera Farmiga as his wife Lee, J.K. Simmons as campaign manager Bill Dixon and Alfred Molina as the Washington Post’s managing editor, Ben Bradlee.

“From the first read-through of the script, I knew that we would be faced with some production challenges,” recalls Morrow, a 20-year industry veteran. “There were a lot of ensemble scenes with the cast talking over one another, and I knew from previous experience that Jason doesn’t like to rely on ADR. Not only is he really concerned about the quality of the sound we secure from the set — and gives the actors space to prepare — but Jason’s scripts are always so well-written that they shouldn’t need replacement lines in post.”

Ear Candy Post’s Perry Robertson and Scott Sanders, MPSE, served as co-supervising sound editors on the project, which was re-recorded on Deluxe Stage 2 — the former Glen Glenn Sound facility — by Chris Jenkins handling dialogue and music and Jeremy Peirson, CAS, overseeing sound effects. Sebastian Sheehan Visconti was sound effects editor.

With as many as two dozen actors in a busy scene, Morrow soon realized that he would have to mic all of the key campaign team members. “I knew that we were shooting a political film like [Alan J. Pakula’s] All the President’s Men or [Michael Ritchie’s] The Candidate, so I referred back to the multichannel techniques pioneered by Jim Webb and his high-quality dialogue recordings. I elected to use up to 18 radio mics for those ensemble scenes,” including Reitman’s long opening sequence, in which the audience learns who the key participants are on the campaign trail. Morrow did this “while recording each actor on a separate track, together with a guide mono mix of the key participants for the picture editor Stefan Grube.”

Reitman is well known for his films’ elaborate opening title sequences and often highly subjective narration from a main character. His motion pictures typically revolve around characters that are brashly self-confident, but then begin to rethink their lives and responsibilities. He is also reported to be a fan of ’70s-style cinéma vérité, which uses a meandering camera and overlapping dialogue to draw the audience into an immersive reality. The Front Runner’s soundtrack is layered with dialogue, together with a constant hum of conversation — from the principals to the press and campaign staff. Since Bai and Carson have written political speeches, Reitman had them on set to ensure that conversations sounded authentic.

Even though there might be four or so key participants speaking in a scene, “Jason wants to capture all of the background dialogue between working press and campaign staff, for example,” Morrow continues.

“He briefed all of the other actors on what the scene was about so they could develop appropriate conversations and background dialogue while the camera roamed around the room. In other words, if somebody was on set they got a mic — one track per actor. In addition to capturing everything, Jason wanted me to have fun with the scene; he likes a solid mix for the crew, dailies and picture editorial, so I gave him the best I could get. And we always had the ability to modify it later in post production from the iso mic channels.”

Morrow recorded the pre-fader individual tracks 10dB to 15dB lower than the main mix, “which I rode hot, knowing that we could go back and correct it in post. Levels on that main mix were within ±5 dB most of the time,” he says. Assisting Morrow during the 40-day shoot, which took place in and around Atlanta and Savannah, were Collin Heath and Craig Dollinger, who also served as the boom operator on a handful of scenes.
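The arithmetic behind that gain structure is worth seeing in linear terms. A minimal sketch (the dB figures come from Morrow's description above; the 20·log10 amplitude convention is standard audio practice, not anything specific to his rig):

```python
# Convert the dB offsets Morrow describes into linear amplitude ratios.
# Iso tracks parked 10-15 dB under the main mix keep headroom for hot,
# overlapping dialogue; post can always bring them back up later.

def db_to_gain(db: float) -> float:
    """Linear amplitude ratio for a dB change (20 * log10 convention)."""
    return 10 ** (db / 20.0)

for offset_db in (-10, -15, +5, -5):
    print(f"{offset_db:+d} dB -> x{db_to_gain(offset_db):.3f} amplitude")

# -10 dB -> x0.316 and -15 dB -> x0.178: the isos sit at roughly a third
# to a sixth of the mix amplitude, which is why a mix ridden "hot" within
# +/-5 dB can run near clipping while the safety tracks survive untouched.
```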

The mono production mix was also useful for the camera crew, says Morrow. “They sometimes had problems understanding the dramatic focus of a particular scene. In other words, ‘Where does my eye go?’ When I fed my mix to their headphones they came to understand which actors we were spotlighting from the script. This allowed them to follow that overview.”

Production Tools
Morrow used a Midas M32R digital console that features 16 rear-panel inputs and 16 more inputs via a stage box that connects to the M32R over a Cat-5 cable. The console provided pre-fader and mixed outputs to Morrow’s pair of 64-track Sound Devices 970 hard-disk recorders — a main and a parallel backup — via Audinate Dante digital ports. “I also carried my second M32R mixer as a spare,” Morrow says. “I turned over the CompactFlash media at the end of each day’s shooting and retained the contents of the 970’s internal 1TB SSDs and external back-up drives until the end of post, just in case. We created maybe 30GB of data per recorder per day.”
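That 30GB figure hangs together if you assume standard broadcast WAV capture. A rough sanity check (the 24-bit/48kHz format is an assumption on my part; the article only gives the daily total):

```python
# Back-of-envelope check on "~30 GB per recorder per day,"
# assuming 24-bit/48 kHz mono PCM tracks (format not stated above).

SAMPLE_RATE = 48_000      # samples per second
BYTES_PER_SAMPLE = 3      # 24-bit PCM

bytes_per_track_hour = SAMPLE_RATE * BYTES_PER_SAMPLE * 3600
gb_per_track_hour = bytes_per_track_hour / 1e9    # ~0.52 GB

print(f"{gb_per_track_hour:.2f} GB per track-hour")
print(f"30 GB/day ~= {30 / gb_per_track_hour:.0f} track-hours")

# ~58 track-hours per day: roughly 20 armed iso tracks rolling for
# about three hours of takes, which squares with the ensemble scenes
# described above.
```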

Color coding helps Morrow mix dialogue more accurately.

For easy level checking, the two recorders with front-panel displays were mounted on Morrow’s production sound cart directly above his mixing console. “When I can, I color code the script to highlight the dialogue of key characters in specific scenes,” he says. “It helps me mix more accurately.”

RF transmitters comprised two dozen Lectrosonics SSM Micro belt-pack units — Morrow bought six or seven more for the film — linked to a bank of Lectrosonics Venue2 modular four-channel and three-channel VR receivers. “I used my collection of Sanken COS-11D miniature lavalier microphones for the belt packs. They are my go-to lavs with clean audio output and excellent performance. I also have some DPA lavaliers, if needed.”

With 20+ RF channels simultaneously in use within metropolitan centers, frequency coordination was an essential chore to ensure consistent operation for all radio systems. “The Lectrosonics Venue receivers can auto-assign radio-mic frequencies,” Morrow explains. “The best way to do this is to have everything turned off, and then one by one let the system scan the frequency spectrum. When it finds a good channel, you assign it to the first microphone and then repeat that process for the next radio transmitters. I try to keep up with FCC deliberations [on diminishing RF spectrum space], but realize that companies who manufacture this equipment also need to be more involved. So, together, I feel good that we’ll have the separation we all need for successful shoots.”
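The scan-and-assign routine Morrow describes is essentially a greedy search over the band. Here is a minimal sketch of that idea (the guard-band figure and scan data are invented for illustration; this is not Lectrosonics' actual coordination algorithm):

```python
# Greedy frequency coordination in the spirit Morrow describes: with all
# transmitters off, rank the scanned band by measured noise floor, then
# assign the quietest frequencies one at a time, keeping a guard band
# around every assignment. All numbers here are hypothetical.

GUARD_KHZ = 400  # assumed minimum spacing between assigned carriers

def coordinate(scan: dict[int, float], n_mics: int) -> list[int]:
    """scan maps frequency (kHz) to measured noise floor (dBm)."""
    assigned: list[int] = []
    for freq in sorted(scan, key=scan.get):      # quietest first
        if all(abs(freq - used) >= GUARD_KHZ for used in assigned):
            assigned.append(freq)
            if len(assigned) == n_mics:
                break
    return assigned

# Toy scan of a slice of the UHF band: kHz -> noise floor in dBm.
scan = {470_000 + 100 * i: -90.0 + (i % 7) * 3 for i in range(200)}
print(coordinate(scan, n_mics=18))   # 18 belt packs, as on this shoot
```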

Morrow’s setup.

Morrow also made several location recordings on set. “I mounted a couple of lavaliers on bumpers to secure car-bys and other sounds for supervising sound editor Perry Robertson, as well as backgrounds in the house during a Labor Day gathering. We also recorded Vera Farmiga playing the piano during one scene — she is actually a classically-trained pianist — using a DPA 4099 microphone (which I also used while working on A Star is Born). But we didn’t record much room tone, because we didn’t find it necessary.”

During scenes at a campaign rally, Morrow provided a small PA system that comprised a couple of loudspeakers mounted on a balcony and a vocal microphone on the podium. “We ran the system at medium levels, simply to capture the reverb and ambiance of the auditorium,” he explains, “but not so much that it caused problems in post production.”

Summarizing his experience on The Front Runner, Morrow offers that Reitman and his production partner Helen Estabrook bring a team spirit to their films. “The set is a highly collaborative environment. We all hang out with one another and share birthdays together. In my experience, Jason’s films are always from the heart. We love working with him 120%. The low point of the shoot is going home!”


Mel Lambert has been involved with production and post on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Sound Lounge Film+Television adds Atmos mixing, Evan Benjamin

Sound Lounge’s Film + Television division, which provides sound editorial, ADR and mixing services for episodic television, features and documentaries, is upgrading its main mix stage to support editing and mixing in the Dolby Atmos format.

Sound Lounge Film + Television division EP Rob Browning says the studio expects to begin mixing in Dolby Atmos by the beginning of next year, which will allow it to target more high-end studio features. Sound Lounge is also installing a Dolby Atmos Mastering Suite, a custom hardware/software solution for preparing Dolby Atmos content for Blu-ray and streaming release.

It has also added veteran supervising sound editor, designer and re-recording mixer Evan Benjamin to its team. Benjamin is best known for his work in documentaries, including the feature doc RBG, about Supreme Court Justice Ruth Bader Ginsburg, as well as documentary series for Netflix, Paramount Network, HBO and PBS.

Benjamin is a 20-year industry veteran with credits on more than 130 film, television and documentary projects, including Paramount Network’s Rest in Power: The Trayvon Martin Story and HBO’s Baltimore Rising. Additionally, his credits include Time: The Kalief Browder Story, Welcome to Leith, Joseph Pulitzer: Man of the People and Moynihan.

The Girl in the Spider’s Web: immersive audio and picture editing

By Mel Lambert

Key members of the post crew responsible for the fast-paced look and feel of director Fede Alvarez’s new film, The Girl in the Spider’s Web, came to the project via a series of right time/right place situations. First, co-supervising sound editor Julian Slater (who played a big role in Baby Driver’s audio post) met picture editor Tatiana Riegel at last year’s ACE Awards.

During early 2018, Slater was approached to work on the latest adaptation of the crime novels by the Swedish author Stieg Larsson. Alvarez was impressed with Slater’s contribution to both Baby Driver and the Oscar-winning Mad Max: Fury Road (2015). “Fede told me that he uses the soundtrack to Mad Max to show off his home Atmos playback system,” says Slater, who served as sound designer on that film. “I was happy to learn that Tatiana had also been tagged to work on The Girl in the Spider’s Web.”

Back row (L-R): Micah Loken, Sang Kim, Mandell Winter, Dan Boccoli, Tatiana Riegel, Kevin O’Connell, Fede Alvarez, Julian Slater, Hamilton Sterling, Kyle Arzt, Del Spiva and Maarten Hofmeijer. Front row (L-R): Pablo Prietto, Lola Gutierrez, Mathew McGivney and Ben Sherman.

Slater, who would also be working on the crime drama Bad Times at the El Royale for director Drew Goddard, wanted Mandell Winter as his co-supervising sound editor. “I very much liked his work on The Equalizer 2, Death Wish and The Magnificent Seven, and I knew that we could co-supervise well together. I came on full time after completing El Royale.”

Editor Riegel (Gringo, I, Tonya, Million Dollar Arm, Bad Words) was a fan of the original Stieg Larsson Millennium Series films — The Girl With the Dragon Tattoo, The Girl Who Kicked the Hornet’s Nest and The Girl Who Played with Fire — as well as David Fincher’s 2011 remake of The Girl With the Dragon Tattoo. She was already a fan of Alvarez, admiring his previous suspense film, Don’t Breathe, and told him she enjoyed working on different types of films to avoid being typecast. “We hit it off immediately,” says Riegel, who then got together with Julian Slater and Mandell Winter to discuss specifics.

The latest outing in the Stieg Larsson franchise, The Girl in the Spider’s Web: A New Dragon Tattoo Story, stars English actress Claire Foy (The Crown) in the eponymous role of young computer hacker Lisbeth Salander who, along with journalist Mikael Blomkvist, gets caught up in a web of spies, cybercriminals and corrupt government officials. The screenplay was co-written by Jay Basu and Alvarez from the novel by David Lagercrantz. The cast also includes Sylvia Hoeks, Stephen Merchant and Lakeith Stanfield.

Having worked previously with Niels Arden Oplev, the Swedish director of 2009’s The Girl with the Dragon Tattoo, Winter knew the franchise and was interested in working on the newest offering. He was also excited about working with director Fede Alvarez. “I loved the use of color and lighting choices that Fede selected for Don’t Breathe, so when Julian Slater called I jumped at the opportunity. None of us had worked together before, and it was Fede’s first large-budget film — he had previously specialized in independent offerings. I was eager to help shepherd the film’s immersive soundtrack through the intricate process from location to the dub stage.”

From the very outset, Slater argued for a native Dolby Atmos soundtrack, with a 7.1-channel Avid Pro Tools bed that evolved through editorial, with appropriate objects being assigned during re-recording to surround and overhead locations. “We knew that the film would be very atmospheric,” Slater recalls, “so we decided to use spaces and ambiences to develop a moody, noir thriller.”

The film was dubbed on the William Holden Stage at Sony Pictures Studios, with Kevin O’Connell handling dialog and music, and Slater overseeing sound effects elements.

Cutting Picture on Location
Editor Riegel and two assistants joined the project at its Berlin location last January. “It was a 10-month journey until final print mastering in mid-October,” she says. “We knew CGI elements would be added later. Fede didn’t do any previz, instead focusing on VFX during post production. We set up Avid Media Composers and assemble-edited the dailies as we went,” working against early storyboards. “Fede wanted to play up the film’s rogue theme; he had a very, very clear focus of the film as spectacle. He wanted us to stay true to the Lisbeth Salander character from the original films, yet retain that dark, Scandinavian feel from the previous outings. The film is a fun ride!”

The team returned to Los Angeles in April and turned the VFX over to Pixomondo, which was brought on to handle the greenscreen CGI sequences. “We adjourned to Pivotal Post in Burbank for the Director’s Cut and then to the Sony lot in Culver City for the first temp mix,” explains Riegel. “My editing decisions were based on the innate DNA of the shot material, and honoring the script. I asked Fede a lot of questions to ensure that the story and the pacing were crystal clear. Our first assembly was around two hours and 15 minutes, which we trimmed to just under two hours during a series of refinements. We then removed 15 minutes to reach our final 1:45 running time, which worked for all of us. The cut was better without the dropped section.”

Daniel Boccoli served as first assistant picture editor, Patrick Clancey was post finishing editor, Matthew McGivney was VFX editor and Andrew McGivney was VFX assistant editor.

Because Riegel likes to cut against an evolving soundtrack, she developed a temporary dialog track in her Avid workstation, adding sound effects taken from commercial libraries. “But there is a complex fight and chase sequence in the middle of the film that I turned over to Mandell and Julian early on so I could secure realistic effects elements to help inform the cut,” she explains. “Those early tracks were wonderful and gave me a better idea of what the final film would sound like. That way I can get to know the film better — I can also open up the cut to make space for a sound if it works within the film’s creative arcs.”

“Our overall direction from Fede Alvarez was to make the soundtrack feel cold when we were outside and to grab the audience with the action… while focusing on the story,” Winter explains. “We were also working against a very tight schedule and had little time for distractions. After the first temp, Julian and I got notes from Fede and Tatiana and set off using that feedback, which continued through three more temp mixes.”

Having completed his supervising duties on The Equalizer 2, Mandell came aboard full time in mid-June, with temp mixes running through the beginning of September. “We were finaling by the last week of September, ahead of the film’s World Premiere on October 19 at the International Rome Film Festival.”

Since there was no spotting session, the team was on a tight post schedule from day one, according to Slater. “There were a number of high-action scenes that needed intricate sound design, including the eight-minute sequence that begins with explosions in Lisbeth Salander’s apartment and the subsequent high-speed motorbike chase.”

Sound designer Hamilton Sterling crafted major sections of the film’s key fight and chase sequences.

Intricate Sound Design
“We liked Hamilton’s outstanding work on Independence Day: Resurgence and Logan and relied upon him to develop truly unique sounds for the industrial heating towers, motorbikes and fights,” says Winter. “Sound effects editor Ryan Collins cut the gas mask fight sequence, as well as a couple of reels, while Karen Vassar Triest handled another couple of reels, and David Esparza worked on several of the early sequences.”

Other sound effects editors included Ando Johnson and Robert Stambler, together with dialog editor Micah Loken and supervising Foley editor Sang Jun Kim.

Sterling is particularly proud of several sequences he designed for the film. “During a scene in which the lead character Lisbeth Salander is drugged, I used the Whoosh plug-in [from the German company, Tonsturm] inside Native Instruments’ Reaktor [modular music software] to create a variable, live-performable heartbeat. I used muffled explosion samples that were Doppler-shifted at different speeds against the picture to mimic the pulse-changing effects of various drugs. I also used Whoosh to create different turbo sounds for the Ducati motorcycle driven by Lisbeth, together with air-release sounds. They were subtle effects, because we didn’t want the result to sound like a ‘sci-fi bike’ — just a souped-up twin-cylinder Ducati.”
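The Doppler trick Sterling describes amounts to varispeed playback: resampling a recording shifts its pitch and duration together, so the same muffled thump can read as a quickening or slowing pulse. A minimal sketch of the underlying move (illustrative only; the film used Tonsturm's Whoosh inside Reaktor, not this code):

```python
# Varispeed resampling: rate > 1.0 plays the sample faster and higher,
# rate < 1.0 slower and lower. Stringing beats at rising rates gives a
# performable, accelerating "heartbeat" from one source sample.

import numpy as np

def varispeed(sample: np.ndarray, rate: float) -> np.ndarray:
    """Resample by linear interpolation; crude but audible."""
    n_out = int(len(sample) / rate)
    src_pos = np.linspace(0, len(sample) - 1, n_out)
    return np.interp(src_pos, np.arange(len(sample)), sample)

sr = 48_000
t = np.linspace(0, 0.25, sr // 4)                  # a quarter-second thud
thump = np.exp(-8 * t / 0.25) * np.sin(2 * np.pi * 55 * t)

# An accelerating pulse: each beat plays back a little faster.
heartbeat = np.concatenate(
    [varispeed(thump, r) for r in (0.9, 1.0, 1.15, 1.3, 1.5)])
```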

For the car chases, Sterling used whale-spout blasts to mimic the sound of a car driving through deep puddles with water striking the inside of the wheel wells. For frightening laughs in another sequence, the sound designer turned to Tonsturm’s Doppler program, which he used in an unorthodox way. “The program can be set to break up a sound sample using, for example, a 5.1-channel star pattern with small Doppler shifts to produce very disturbing laughter,” he says. “For the heating towers I used several sound components, including slowed-down toaster noises to add depth and resonance — a hum from the heating elements, plus ticks and clangs as they warmed up. Julian suggested that we use ‘chittery’ effects for the computer user interfaces, so I used The Cargo Cult’s Envy plug-in to create unusual sounds, and to avoid the conventional ‘bips’ and ‘boops’ noises. Envy is a spectral-shift, pitch- and amplitude-change application that allows extensive pitch manipulation. I also turned to the Sound Particles app to generate complex wind sounds that I delivered as immersive 7.1.2 Pro Tools tracks.”

“We also had a lot of Foley, which was recorded on Stage B at Sony Studios by Nerses Gezalyan with Foley artists Sara Monat and Robin Harlan,” Winter adds. “Unfortunately, the production dialog had a number of compromised tracks from the Berlin locations. As a result, we had a lot of ADR to shoot. Scheduling the ADR was complicated by the time difference, as most of our actors were in London, Berlin, Oslo or Stockholm. We used Foley to support the cleaned-up dialog tracks and backfilled tracks. Our dialog editor, Micah Loken, was very knowledgeable with iZotope RX 7 Advanced software; he really understood how to use it, and how not to use it. He can dig deep into a track without affecting the quality of the voice, and without overdoing the processing.”

The music from composer Roque Baños — who also worked with Alvarez on Don’t Breathe and Evil Dead — arrived very late in the project, “and remained something of a mystery,” Riegel recalls. “Being a musician himself, Fede knew what he wanted and how to achieve that result. He would disappear into an edit suite close to the stage with the music editors Maarten Hofmeijer and Del Spiva, where they cut together the score against the locked picture — or as locked as it ever was! After that we could balance the music against the dialog and sound effects.”

Regarding sound effects elements, Winter acknowledges that his small editorial team needed to work against a tight schedule. “We had a 7.1.2 template that allowed Tony [Lamberti] and later Julian to use the automated panning data. For the final mix in Atmos, we used objects minimally for the music and dialog. However, we used overhead objects strategically for effects and design. In an early sequence we put the sound of the rope — used to suspend an abusive husband — above the audience.” Re-recording mixer Tony Lamberti handled some of the early temp mixes in Slater’s absence.

Collaborative Re-Recording Process
When the project reached the William Holden Stage, “we could see the overall shape of the film with the VFX elements and decide what sounds would now be needed to match the visuals, since we had a lot of new technology to cover, including computer screens,” Riegel says.

Mandell agrees: “Yes, we could now see where Fede Alvarez wanted to take the film and make suggestions about new material. We started asking: ‘What do you think about this and that option?’ Or, ‘What’s missing?’ It was an ongoing series of conversations through the temp mixes, re-mixes and then the final.”

Having handled the first temp mix at Sony Studios, Slater returned full-time for the final Atmos mixes. “After so many temp mixes using the same templates, I knew that we would not be re-inventing the wheel on the William Holden Stage. We simply focused on changing the spatiality of what we had. Having worked with Kevin O’Connell on both Jumanji: Welcome to the Jungle and The Public, I knew that I had to do my homework and deliver what he needed from my side of the console. Kevin is very involved. He’ll make suggestions, but always based on what is best for the film. I learned a lot by seeing how he works; he is very experienced. It’s easy to find what works with Kevin, since he has experience with a wide range of technologies and keeps up with new advances.”

Describing the re-recording process as being highly collaborative, Mandell remained objective about creative options. “You can get too close to the soundtrack. With a number of German and English actors, we constantly had to ask ourselves: ‘Do we have clarity?’ If not, can we fix it in the track or turn to ADR? We maintained a continuing conversation with Tatiana and Fede, with ideas that we would circulate backwards and forwards. Since we had a lot of new people working on the crew, trust became a major factor. Everybody was incredibly professional.”

“It was a very rewarding experience working with so many talented new people,” Slater concludes. “I quickly tuned into Fede Alvarez’s specific needs and sensibilities. It was a successful liaison.”

Riegel says that her biggest challenge was “trying to figure out what the film is supposed to be — from the script and pre-production through the shoot and first assembly. It’s a gradual process and one that involves regular conversations with my assistant editors and the director as we develop characters and clarify the information being shown. But I didn’t want to hit the audience over the head with too much information. We needed to decide: ‘What is important?’ and retain as much realism as possible. It’s a complex, creative process … and one that I totally love being a part of!”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

A Star is Born: Live vocals, real crowds and venues

By Jennifer Walden

Warner Bros. Pictures’ remake of A Star is Born stars Bradley Cooper as Jackson Maine, a famous musician with a serious drinking problem, who stumbles onto singer/songwriter Ally (Lady Gaga) at a drag bar where she’s giving a performance. Jackson is taken by her raw talent, and their chance meeting turns into something more. With Jackson’s help, Ally becomes a star, but her fame is ultimately bittersweet.

Jason Ruder

Aside from Lady Gaga and Bradley Cooper (who also directed and co-wrote the screenplay), the other big star of this film is the music. Songwriting started over two years ago. Cooper and Gaga collaborated with several other songwriters along the way, like Lukas Nelson (son of Willie Nelson), Mark Ronson, Hillary Lindsey and DJ White Shadow.

According to supervising music editor/re-recording mixer Jason Ruder from 2 Pop Music — who was involved with the film from pre-production through post — the lyrics, tempo and key signatures were even changing right up to the day of the shoot. “The songwriting went to the 11th hour. Gaga sort of works in that fashion,” says Ruder, who witnessed her process first-hand during a sound check at Coachella. (2 Pop Music is located on the Warner Bros. lot in Burbank.)

Before each shoot, Ruder would split out the pre-recorded instrumental tracks, reference vocals and have them ready for playback, but there were days when he would get a call from Gaga’s manager as he was driving to the set. “I was told that she had gone into the studio in the middle of the night and made changes, so there were all new pre-records for the day. I guess she could be called a bit of a perfectionist, always trying to make it better.

“On the final number, for instance, it was only a couple hours before the shoot and I got a message from her saying that the song wasn’t final yet and that she wanted to try it in three different keys and three different tempos just to make sure,” continues Ruder. “So there were a lot of moving parts going into each day. Everyone that she works with has to be able to adapt very quickly.”

Since the music is so important to the story, here’s what Cooper and Gaga didn’t want — they start singing and the music suddenly switches over to a slick, studio-produced track. That concern was the driving force behind the production and post teams’ approach to the on-camera performances.

Recording Live Vocals
All the vocals in A Star is Born were recorded live on-set, and those live vocals are the ones used in the film’s final mix. To find out whether this was possible, Ruder and the production sound team did a stage test at Warner Bros. They had a pre-recorded track of the band, which they played back on the stage. First, Cooper and Gaga did live vocals. Then they tried the song again, with Cooper and Gaga miming along to pre-recorded vocals. Ruder took the material back to his cutting room and built a quick version of both. The comparison solidified their decision. “Once we got through that test, everyone was more confident about doing the live vocals. We felt good about it,” he says.

Their first shoot for the film was at Coachella, on a weekday since there were no performances. They were shooting a big, important concert scene for the film and only had one day to get it done. “We knew that it all had to go right,” says Ruder. It was their first shot at live vocals on-set.

Neither the music nor the vocals were amplified through the stage’s speaker system since song security was a concern — they didn’t want the songs leaked before the film’s release. So everything was done through headphone mixes. This way, even those in the crowd closest to the stage couldn’t hear the melodies or lyrics. Gaga is a seasoned concert performer, comfortable with performing at concert volume. She wasn’t used to having the band muted and the vocals live (though not amplified), so some adjustments needed to be made. “We ended up bringing her in-ear monitor mixer in to help consult,” explains Ruder. “We had to bring some of her touring people into our world to help get her perfectly comfortable so she could focus on acting and singing. It worked really well, especially later for Arizona Sky, where she had to play the piano and sing. Getting the right balance in her ear was important.”

As for Jackson Maine’s band on-screen, those were all real musicians and not actors — it was Lukas Nelson’s band. “They’re used to touring together. They’re very tight and they’re seasoned musicians,” says Ruder. “Everyone was playing and we were recording their direct feeds. So we had all the material that the musicians were playing. For the drums, those had to be muted because we didn’t want them bleeding into the live vocals. We were on-set making sure we were getting clean vocals on every take.”

Real Venues, Real Reverbs
Since the goal from the beginning was to create realistic-sounding concerts, Ruder decided to capture impulse responses at every performance location — from big stages like Coachella to much smaller venues — and use those to create reverbs in Audio Ease’s Altiverb.

The challenge wasn’t capturing the IRs, but rather trying to convince the assistant director on-set that they needed to be captured. “We needed to quiet the whole set for five or 10 minutes so we could put up some mics and shoot these tones through the spaces. This all had to be done on the production clock, and they’re just not used to that. They didn’t understand what it was for and why it was important — it’s not cheap to do that during production,” explains Ruder.

Those IRs were like gold during post. They allowed the team to recreate spaces like the main stage at Coachella, the Greek Theatre and the Shrine Auditorium. “We were able to manufacture our own reverbs that were pretty much exactly what you would hear if you were standing there. For Coachella, because it’s so massive, we weren’t sure if they were going to come out, but it worked. All the reverbs you hear in the film are completely authentic to the space.”
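What makes a captured IR "gold" is that convolving any dry signal with it imprints the venue onto that signal. A minimal sketch of the operation (file names are placeholders, not production assets; the film itself used Audio Ease's Altiverb, and this assumes mono WAV files at a shared sample rate):

```python
# Convolution reverb from a captured impulse response: the dry vocal
# convolved with the room's IR sounds like the vocal sung in that room.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr_ir, ir = wavfile.read("venue_impulse_response.wav")   # hypothetical file
sr_vox, vox = wavfile.read("dry_vocal_take.wav")         # hypothetical file
assert sr_ir == sr_vox, "IR and vocal must share a sample rate"

# Normalize both mono signals to floating point full scale.
ir = ir.astype(np.float64) / np.max(np.abs(ir))
vox = vox.astype(np.float64) / np.max(np.abs(vox))

wet = fftconvolve(vox, ir)                 # vocal as heard in the venue
wet /= np.max(np.abs(wet))                 # normalize to avoid clipping

# Blend some dry signal back in for presence, then write the result.
dry = np.pad(vox, (0, len(wet) - len(vox)))
mix = 0.7 * dry + 0.3 * wet
wavfile.write("vocal_in_venue.wav", sr_vox, (mix * 32767).astype(np.int16))
```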

Live Crowds
Oscar-winning supervising sound editor Alan Murray at Warner Bros. Sound was also capturing sound at the concert performances, but his attention was turned away from the stage and toward the crowd. “We had about 300 to 500 people at the concerts, and I was able to get clean reactions from them since I wasn’t picking up any music. So that approach of not amplifying the music worked for the crowd sounds too,” he says.

Production sound mixer Steven Morrow had set up mics in and around the crowd and recorded those to a multitrack recorder while Murray had his own mic and recorder that he could walk around with, even capturing the crowds from backstage. They did multiple recordings for the crowds and then layered those in Avid Pro Tools in post.

Alan Murray

“For Coachella and Glastonbury, we ended up enhancing those with stadium crowds just to get the appropriate size and excitement we needed,” explains Murray. They also got crowd recordings from one of Gaga’s concerts. “There was a point in the Arizona Sky scene where we needed the crowd to yell, ‘Ally!’ Gaga was performing at Fenway Park in Boston and so Bradley’s assistant called there and asked Gaga’s people to have the crowd do an ‘Ally’ chant for us.”

Ruder adds, “That’s not something you can get on an ADR stage. It needed to have that stadium feel to it. So we were lucky to get that from Boston that night and we were able to incorporate it into the mix.”

Building Blocks
According to Ruder, they wanted to make sure the right building blocks were in place when they went into post. Those blocks — the custom recorded impulse responses, the custom crowds, the live vocals, the band’s on-set performances, and the band’s unprocessed studio tracks that were recorded at The Village — gave Ruder and the re-recording mixers ultimate flexibility during the edit and mix to craft on-scene performances that felt like big, live concerts or intimate songwriting sessions.

Even with all those bases covered, Ruder was still worried about it working. “I’ve seen it go wrong before. You get tracks that just aren’t usable, vocals that are distorted or noisy. Or you get shots that don’t work with the music. There were those guitar playing shots…”

A few weeks after filming, while Ruder was piecing all the music together in post, he realized that they got it all. “Fortunately, it all worked. We had a great DP on the film and it was clear that he was capturing the right shots. Once we got to that point in post, once we knew we had the right pieces, it was a huge relief.”

Relief gave way to excitement when Ruder reached the dub stage — Warner Bros. Stage 10. “It was amazing to walk into the final mix knowing that we had the material and the flexibility to pull this off,” he says.

In addition to using Altiverb for the reverbs, Ruder used Waves plug-ins, such as the Waves API Collection, to give the vocals and instrumental tracks a live concert sound. “I tend to use plug-ins that emulate more of a tube sound to get punchier drums and that sort of thing. We used different 5.1 spreaders to put the music in a 5.1 environment. We changed the sound to match the picture, so we dried up the vocals on close-ups so they felt more intimate. We had tons and tons of flexibility because we had clean vocals and raw guitars and drum tracks.”
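That "dry up the close-ups" move is, at bottom, a dry/wet balance keyed to the picture. A minimal sketch of the idea (the shot categories and ratios are invented for illustration; the film's mixers worked by ear, not by lookup table):

```python
# Perspective mixing: one dry vocal, one convolved (wet) vocal, and a
# per-shot balance so the voice feels intimate on close-ups and roomy
# on wides. Ratios below are hypothetical.

import numpy as np

SHOT_WET_RATIO = {
    "close_up": 0.10,   # nearly dry: right in the listener's ear
    "medium":   0.30,
    "wide":     0.55,   # mostly room: the venue takes over
}

def perspective_mix(dry: np.ndarray, wet: np.ndarray, shot: str) -> np.ndarray:
    """Blend equal-length dry and wet tracks for a given shot type."""
    w = SHOT_WET_RATIO[shot]
    return (1.0 - w) * dry + w * wet

# e.g. cutting from a wide of the crowd to Ally's face:
# vocal = perspective_mix(dry_vox, wet_vox, "close_up")
```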

All the hard work paid off. In the film, Ally joins Jackson Maine on stage to sing a song she wrote called “Shallow.” For Murray and Ruder, this scene portrays everything they wanted to achieve for the performances in A Star is Born. The scene begins outside the concert, as Ally and her friend get out of the car and head toward the stage. The distant crowd and music reverberate through the stairwell as they’re led up to the backstage area. As they get closer, the sound subtly changes to match their proximity to the band. On stage, the music and crowd are deafening. Jackson begins to play guitar and sing solo before Ally finds the courage to join in. They sing “Shallow” together and the crowd goes crazy.

“The whole sequence was timed out perfectly, and the emotion we got out of them was great. The mix there was great. You felt like you were there with them. From a mix perspective, that was probably the most successful moment in the film,” concludes Ruder.


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

Quick Chat: Westwind Media president Doug Kent

By Dayna McCallum

Doug Kent has joined Westwind Media as president. The move is a homecoming of sorts for the audio post vet, who worked as a sound editor and supervisor at the facility when it opened its doors in 1997 (with Miles O’ Fun). He comes to Westwind after a long-tenured position at Technicolor.

While primarily known as an audio post facility, Burbank-based Westwind has grown into a three-acre campus comprising 10 buildings, which also house outposts for NBCUniversal and Technicolor, as well as media-focused companies Keywords Headquarters and Film Solutions.

We reached out to Kent to find out a little bit more about what is happening over at Westwind, why he made the move and the changes he has seen in the industry.

Why was now the right time to make this change, especially after being at one place for so long?
Well, 17 years is a really long time to stay at one place in this day and age! I worked with an amazing team, but Westwind presented a very unique opportunity for me. John Bidasio (managing partner) and Sunder Ramani (president of Westwind Properties) approached me with the role of heading up Westwind and teaming with them in shaping the growth of their media campus. It was literally an offer I couldn’t refuse. Because of the campus size and versatility of the buildings, I have always considered Westwind to have amazing potential to be one of the premier post production boutique destinations in the LA area. I’m very excited to be part of that growth.

You’ve worked at studios and facilities of all sizes in your career. What do you see as the benefit of a boutique facility like Westwind?
After 30 years in the post audio business — which seems crazy to say out loud — moving to a boutique facility allows me more flexibility. It also lets me be personally involved with the delivery of all work to our customers. Because of our relationships with other facilities, we are able to offer services to our customers all over the Los Angeles area. It’s all about drive time on Waze!

What does your new position at Westwind involve?
The size of our business allows me to actively participate in every service we offer, from business development to capital expenditures, while also working with our management team’s growth strategy for the campus. Our value proposition, as a nimble post audio provider, focuses on our high-quality brick-and-mortar facility, while we continue to expand our editorial and mix talent, working with many of the best mix facilities and sound designers in the LA area. Luckily, I now get to have a hand in all of it.

Westwind recently renovated two stages. Did Dolby Atmos certification drive that decision?
Netflix, Apple and Amazon all use Atmos materials for their original programming. It was time to move forward. These immersive technologies have changed the way filmmakers shape the overall experience for the consumer. These new object-based technologies enhance our ability to embellish and manipulate the soundscape of each production, creating a visceral experience for the audience that is more exciting and dynamic.

How to Get Away With Murder

Can you talk specifically about the gear you are using on the stages?
Currently, Westwind runs entirely on a Dante network design. We have four dub stages, including both of the Atmos stages, outfitted with Dante interfaces. The signal path from our Avid Pro Tools source machines — all the way to the speakers — stays entirely within the Dante and BSS BLU-link networks. The monitor switching and stage controls are handled through custom-made panels designed in Harman’s Audio Architect. The Dante network allows us to route signals with complete flexibility across our network.

What about some of the projects you are currently working on?
We provide post sound services to the team at ShondaLand for all their productions, including Grey’s Anatomy, which is now in its 15th year, Station 19, How to Get Away With Murder and For the People. We are also involved in the streaming content market, working on titles for Amazon, YouTube Red and Netflix.

Looking forward, what changes in technology and the industry do you see having the most impact on audio post?
The role of post production sound has greatly increased as technology has advanced. We have become an active part of the filmmaking process and have developed closer partnerships with the executive producers, showrunners and creative executives. Delivering great soundscapes to these filmmakers has become more critical as technology advances and audiences become more sophisticated.

The Atmos system creates an immersive audio experience for the listener and has become a foundation for future technology. The Atmos master contains all of the uncompressed audio and panning metadata, and can be updated by re-encoding whenever a new process is released. With streaming speeds becoming faster and storage becoming more easily available, home viewers will most likely soon be experiencing Atmos technology in their living rooms.
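The key property of an object-based master is that the audio and its panning metadata stay separate, with speaker gains computed at playback for whatever layout is present. A toy two-speaker illustration of that separation (a constant-power panner over invented metadata; Dolby's actual renderer and bitstream are far more sophisticated):

```python
# An "object" = untouched audio + time-stamped position metadata.
# The renderer, not the master, decides per-speaker gains, which is
# why the same master can be re-rendered for new speaker layouts.

import math

def render_gains(x: float) -> tuple[float, float]:
    """Constant-power pan for a stereo pair; x = 0.0 (L) .. 1.0 (R)."""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)

audio_object = {
    "audio": "helicopter.wav",                       # hypothetical asset
    "pan": [(0.0, 0.0), (2.0, 0.5), (4.0, 1.0)],     # (time s, position)
}

for t, x in audio_object["pan"]:
    left, right = render_gains(x)
    print(f"t={t:.1f}s  L={left:.2f}  R={right:.2f}")
```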

What haven’t I asked that is important?
Relationships are the most important part of any business and my favorite part of being in post production sound. I truly value my connections and deep friendships with film executives and studio owners all over the Los Angeles area, not to mention the incredible artists I’ve had the great pleasure of working with and claiming as friends. The technology is amazing, but the people are what make being in this business fulfilling and engaging.

We are in a remarkable time in film, but really an amazing time in what we still call “television.” There is growth and expansion and foundational change in every aspect of this industry. Being at Westwind gives me the flexibility and opportunity to be part of that change and to keep growing.

Report: Sound for Film & TV conference focuses on collaboration

By Mel Lambert

The 5th annual Sound for Film & TV conference was once again held at Sony Pictures Studios in Culver City, in cooperation with Motion Picture Sound Editors, Cinema Audio Society and Mix magazine. The one-day event featured a keynote address from veteran sound designer Scott Gershin, together with a broad cross section of panel discussions on virtually all aspects of contemporary sound and post production. Co-sponsors included Audionamix, Sound Particles, Tonsturm, Avid, Yamaha-Steinberg, iZotope, Meyer Sound, Dolby Labs, RSPE, Formosa Group and Westlake Audio, and the event attracted some 650 attendees.

With film credits that include Pacific Rim and The Book of Life, keynote speaker Gershin focused on advances in immersive sound and virtual reality experiences. Having recently joined Sound Lab at Keywords Studios, the sound designer and supervisor emphasized that “a single sound can set a scene,” ranging from a subtle footstep to an echo-laden yell of terror. “I like to use audio to create a foreign landscape, and produce immersive experiences,” he said, stressing that “dialog forms the center of attention, with music that shapes a scene emotionally and sound effects that glue the viewer into the scene.” In summary, he concluded, “It is our role to develop a credible world with sound.”

The Sound of Streaming Content — The Cloverfield Paradox
Avid-sponsored panels within the Cary Grant Theater included an overview of OTT techniques titled “The Sound of Streaming Content,” which was moderated by Ozzie Sutherland, a production sound technology specialist with Netflix. Focusing on sound design and re-recording of the recent Netflix/Paramount Pictures sci-fi mystery The Cloverfield Paradox from director Julius Onah, the panel included supervising sound editor/re-recording mixer Will Files, co-supervising sound editor/sound designer Robert Stambler and supervising dialog editor/re-recording mixer Lindsey Alvarez. Files and Stambler have collaborated on several projects with director J. J. Abrams through Abrams’ Bad Robot production company, including Star Trek Into Darkness (2013), Star Wars: The Force Awakens (2015) and 10 Cloverfield Lane (2016), as well as Venom (2018).

The Sound of Streaming Content panel: (L-R) Ozzie Sutherland, Will Files, Robert Stambler and Lindsey Alvarez

“Our biggest challenge,” Files readily acknowledges, “was the small crew we had on the project; initially, it was just Robby [Stambler] and me for six months. Then Star Wars: The Force Awakens came along, and we got busy!” “Yes,” confirmed Stambler, “we spent between 16 and 18 months on post production for The Cloverfield Paradox, which gave us plenty of time to think about sound; it was an enlightening experience, since everything happens off-screen.” The film, starring Gugu Mbatha-Raw, David Oyelowo and Daniel Brühl, follows a team of scientists orbiting a planet on the brink of war, trying to solve an energy crisis that culminates in a dark alternate reality.

Having screened a pivotal scene from the film in which the spaceship’s crew discovers the effects of interdimensional travel while hearing strange sounds in a corridor, Alvarez explained how the complex dialog elements came into play: “That ‘Woman in The Wall’ scene involved a lot of Mandarin-language lines, 50% of which were re-written to modify the story lines and then added in ADR.” “We also used deep, layered sounds,” Stambler said, “to emphasize the screams,” produced by an astronaut from another dimension who had become fused with the ship’s hull. Continued Stambler, “We wanted to emphasize the mystery as the crew removes a cover panel: What is behind the wall? Is there really a woman behind the wall?” “We also designed happy parts of the ship and angry parts,” Files added. “Dependent on where we were on the ship, we emphasized that dominant flavor.”

Files explained that the theatrical mix for The Cloverfield Paradox in Dolby Atmos immersive surround took place at producer Abrams’ Bad Robot screening theater, with a temporary Avid S6 M40 console. Files also mixed the first Atmos film, Brave, back in 2012. “J. J. [Abrams] was busy at the time,” Files said, “but wanted to be around and involved,” as the soundtrack took shape. “We also had a sound-editorial suite close by,” Stambler noted. “We used several Futz elements from the Mission Control scenes as Atmos Objects,” added Alvarez.

“But then we received a request from Netflix for a near-field Atmos mix” that could be used for over-the-top streaming, recalled Files. “So we lowered the overall speaker levels, and monitored on smaller speakers to ensure that we could hear the dialog elements clearly. Our Atmos balance also translated seamlessly to 5.1- and 7.1-channel delivery formats.”

“I like mixing in Native Atmos because you can make final decisions with creative talent in the room,” Files concluded. “You then know that everything will work in 5.1 and 7.1. If you upmix to Atmos from 7.1, for example, the creatives have often left by the time you get to the Atmos mix.”

The Sound and Music of Director Damien Chazelle’s First Man
The series of “Composers Lounge” presentations held in the Anthony Quinn Theater, sponsored by SoundWorks Collection and moderated by Glenn Kiser from The Dolby Institute, included “The Sound and Music of First Man” with sound designer/supervising sound editor/SFX re-recording mixer Ai-Ling Lee, supervising sound editor Mildred Iatrou Morgan, SFX re-recording mixer Frank Montaño, dialog/music re-recording mixer Jon Taylor, composer Justin Hurwitz and picture editor Tom Cross. First Man takes a close look at the life of astronaut Neil Armstrong and the space mission that led him to become the first man to walk on the Moon in July 1969. It stars Ryan Gosling, Claire Foy and Jason Clarke.

Having worked with the film’s director, Damien Chazelle, on two previous outings — La La Land (2016) and Whiplash (2014) — Cross advised that he likes to have sound available on his Avid workstation as soon as possible. “I had some rough music for the big action scenes,” he said, “together with effects recordings from Ai-Ling [Lee].” The latter included some of the SpaceX rockets, plus recordings of space suits and other NASA artifacts. “This gave me a sound bed for my first cut,” the picture editor continued. “I sent that temp track to Ai-Ling for her sound design and SFX, and to Milly [Iatrou Morgan] for dialog editorial.”

A key theme for the film was its documentary style, Taylor recalled: “That guided the shape of the soundtrack and the dialog pre-dubs. They had a cutting room next to the Hitchcock Theater [at Universal Studios, used for pre-dub mixes and finals] so that we could monitor progress.” There were no temp mixes on this project.

“We had a lot of close-up scenes to support Damien’s emotional feel, and used sound to build out the film,” Cross noted. “Damien watched a lot of NASA footage shot on 16 mm film, and wanted to make our film [immersive] and personal, using Neil Armstrong as a popular icon. In essence, we were telling the story as if we had taken a 16 mm camera into a capsule and shot the astronauts into space. And with an Atmos soundtrack!”

“We pre-scored the soundtrack against animatics in March 2017,” commented Hurwitz. “Damien [Chazelle] wanted to storyboard to music and use that as a basis for the first cut. I developed some themes on a piano and then full orchestral mock-ups for picture editorial. We then re-scored the film after we had a locked picture.” “We developed a grounded, gritty feel to support the documentary style that was not too polished,” Lee continued. “For the scenes on Earth we went for real-sounding backgrounds, Foley and effects. We also narrowed the mix field to complement the narrow image but, in contrast, opened it up for the set pieces to surround the audience.”

“The dialog had to sound how the film looked,” Morgan stressed. “To create that real-world environment I often used the mix channel for dialog in busy scenes like mission control, instead of the [individual] lavalier mics with their cleaner output. We also miked everybody in Mission Control — maybe 24 tracks in all.” “And we secured as many authentic sound recordings as we could,” Lee added. “In order to emphasize the emotional feel of being inside Neil Armstrong’s head space, we added surreal and surprising sounds like an elephant roar, lion growl or animal stampede to these cockpit sequences. We also used distortion and over-modulation to add ‘grit’ and realism.”

“It was a Native Atmos mix,” advised Montaño. “We used Atmos to reflect what the picture showed us, but not in a gimmicky way.” “During the rocket launch scenes,” Lee offered, “we also used the Atmos full-range surround channels to place many of the full-bodied, bombastic rocket roars and explosions around the audience.” “But we wanted to honor the documentary style,” Taylor added, “by keeping the music within the front LCR loudspeakers, and not coming too far out into the surrounds.”

“A Star Is Born” panel: (L-R) Steve Morrow, Dean Zupancic and Nick Baxter

The Sound of Director Bradley Cooper’s A Star Is Born
A subsequent panel discussion in the “Composers Lounge” series, again moderated by Kiser, focused on “The Sound of A Star Is Born,” with production sound mixer Steve Morrow, music production mixer Nick Baxter and re-recording mixer Dean Zupancic. The film is a retelling of the classic tale of a musician — Jackson Maine, played by Cooper — who helps a struggling singer find fame, even as age and alcoholism send his own career into a downward spiral. Morrow recounted that the director’s costar, Lady Gaga, insisted that all vocals be recorded live.

“We arranged to record scenes during concerts at the Stagecoach 2017 Festival,” the production mixer explained. “But because these were new songs that would not be heard in the film until 18 months later, [to prevent unauthorized bootlegs] we had to keep the sound out of the PA system, and feed a pre-recorded band mix to on-stage wedges or in-ear monitors.” “We had just a handful of minutes before Willie Nelson was scheduled to take the stage,” Baxter added, “and so we had to work quickly” in front of an audience of 45,000 fans. “We rolled on the equipment, hooked up the microphones, connected the monitors and went for it!”

To recreate the sound of real-world concerts, Baxter made impulse-response recordings of each venue — in stereo as well as 5.1- and 7.1-channel formats. “To make the soundtrack sound totally live,” Morrow continued, “at Coachella Festival we also captured the IR sound echoing off nearby mountains.” Other scenes were shot during Lady Gaga’s “Joanne” Tour in August 2017 while on a stop in Los Angeles, and others in the Palm Springs Convention Center, where Cooper’s character is seen performing at a pharmaceutical convention.

“For scenes filmed at the Glastonbury Festival in the UK in front of 110,000 people,” Morrow recalled, “we had been allocated just 10 minutes to record parts for two original songs — ‘Maybe It’s Time’ and ‘Black Eyes’ — ahead of Kris Kristofferson’s set. But then we were told that, because the concert was running late, we only had three minutes. So we focused on securing 30 seconds of guitar and vocals for each song.”

During a scene shot in a parking lot outside a food market where Lady Gaga’s character sings acapella, Morrow advised that he had four microphones on the actors: “Two booms, top and bottom, for Bradley Cooper’s voice, and lavalier mikes; we used the boom track when Lady Gaga (as Ally) belted out. I always had my hand on the gain knob! That was a key scene because it established for the audience that Ally can sing.”

Zupancic noted that first-time director Cooper was intimately involved in all aspects of post production, just as he was in production. “Bradley Cooper is a student of film,” he said. “He worked closely with supervising sound editor Alan Robert Murray on the music and SFX collaboration.” The high-energy Atmos soundtrack was realized at Warner Bros Studio Facilities’ post production facility in Burbank; additional re-recording mixers included Michael Minkler, Matthew Iadarola and Jason King, who also handled SFX editing.

An Avid session called “Monitoring and Control Solutions for Post Production with Immersive Audio” featured the company’s senior product specialist, Jeff Komar, explaining how Pro Tools with an S6 Controller and an MTRX interface can manage complex immersive audio projects, while a MIX Panel entitled “Mixing Dialog: The Audio Pipeline,” moderated by Karol Urban from Cinema Audio Society, brought together re-recording mixers Gary Bourgeois and Mathew Waters with production mixer Phil Palmer and sound supervisor Andrew DeCristofaro. “The Business of Immersive,” moderated by Gadget Hopkins, EVP with Westlake Pro, addressed immersive audio technologies, including Dolby Atmos, DTS and Auro 3D; other key topics included outfitting a post facility, new distribution paradigms and ROI while future-proofing a stage.

A companion “Parade of Carts & Bags,” presented by Cinema Audio Society in the Barbra Streisand Scoring Stage, enabled production sound mixers to show off their highly customized methods of managing the tools of their trade, from large soundstage productions to reality TV and documentaries.

Finally, within the Atmos-equipped William Holden Theater, the regular “Sound Reel Showcase,” sponsored by Formosa Group, presented eight-minute reels from films likely to be in consideration for a Best Sound Oscar, MPSE Golden Reel and CAS Awards, including A Quiet Place (Paramount) introduced by Erik Aadahl, Black Panther introduced by Steve Boeddeker, Deadpool 2 introduced by Martyn Zub, Mile 22 introduced by Dror Mohar, Venom introduced by Will Files, Goosebumps 2 introduced by Sean McCormack, Operation Finale introduced by Scott Hecker, and Jane introduced by Josh Johnson.

Main image: The Sound of First Man panel — Ai-Ling Lee (left), Mildred Iatrou Morgan & Tom Cross.

All photos copyright of Mel Lambert


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.


Behind the Title: Sim re-recording mixer Sue Pelino

This audio post vet, who specializes in performance-based projects, is also an accomplished musician.

Name: Sue Pelino

Company: Sim Post New York

Can you describe your company?
I work within Sim Post New York, a division of Sim located in North Tribeca. This is a post production studio that specializes in offline editing, sound and picture finishing, color timing and VFX/Flame. We offer our clients end-to-end solutions for content creation and certified project delivery. Sim has additional locations in Los Angeles, Atlanta, Toronto and Vancouver, and three other divisions: Studio, Camera, and Lighting and Grip.

What’s your job title?
Senior Re-Recording Mixer

What does that entail?
At Sim Post New York, the job of re-recording mixer entails all aspects of sound for picture. We are not only responsible for the final 5.1 and stereo mix, but also act as supervising sound editors and sound designers. Our team of re-recording mixers mainly concentrates on long-form television, including documentaries, scripted series, reality programs and game shows. Of course, we mix commercials and promos as well. I specialize in music performance and comedy specials.

What would surprise people the most about what falls under that title?
The amount of mouth clicks and spit that we need to remove from dialogue!

What’s your favorite part of the job?
The lasting friendships that I have made with my clients and colleagues. Also, I’ve had the great opportunity to work with so many interesting artists and actors, which makes the job exciting.

What’s your least favorite?
The unpredictable hours.

What is your most productive time of the day?
I am a night owl, so I’m most creative between 7pm and 1am… actually, make that 2am.

If you didn’t have this job, what would you be doing instead?
I would most likely be a full-time musician/songwriter. I would absolutely love to design guitars.

How early on did you know this would be your path?
The first time I was in a recording studio was when I was 10 years old. My dad was friends with a great jingle writer who offered to have me come in for a recording session. He thought that I was going to play “Mary Had a Little Lamb” on ukulele, but instead I showed up with a seven-piece band and played two originals and a Carpenters tune. I think I played a Gibson Dove that day at Electric Lady Studios. I was mesmerized, and it was at that moment when I got the bug!

Rock & Roll Hall of Fame Induction Ceremony: Bon Jovi.

Can you name some recent projects you have worked on?
Roy Wood Jr. — No One Loves You (Comedy Central stand-up special), MTV Video Music Awards, Kings of Leon – Landmarks Live in Concert, Special Olympics: 50 Years of Changing the Game (ESPN on ABC documentary), Rock & Roll Hall of Fame Induction Ceremony (HBO Special).

What is the project that you are most proud of?
Tony Bennett: An American Classic (NBC special, directed by Rob Marshall).

Name three pieces of technology that you can’t live without.
iZotope RX Post Production Suite, Penteo 7 Pro and Pro Tools.

What social media channels do you follow?
For work, I follow LinkedIn. It’s a great research and marketing tool and was extremely helpful in getting the word out when the team and I joined Sim. I heard from connections and clients that I hadn’t heard from in a while. They were excited to come by for a tour.

Some groups/companies that I follow on LinkedIn include: Audio Engineering Society, Sim, Panavision, Waves Audio, Apogee Electronics, Avid, Technicolor, Viacom, Sony Music Entertainment, HBO, Hulu, Netflix, Media and Entertainment Professionals, New York Film Academy, New York Women in Film and Television, Producers Guild of America.

For pleasure (and a little business) I love Instagram. I have always been into photography and love to get my message across in a photo. I definitely do follow quite a few production companies and many of my clients who are also close friends.

Care to share some music to listen to?
In my car, I mainly listen to Jam On. At home I’ve been constantly playing the LP of my new favorite band, Roadcase Royale (on my turntable)!

What do you do to de-stress from it all?
I play guitar and sing almost every night. Many times I even hold my guitar while chilling out watching TV, and I’ve found myself playing along with the Game of Thrones theme!

Quick Chat: AI-based audio mastering

Antoine Rotondo is an audio engineer by trade who has been in the business for the past 17 years. Throughout his career he’s worked in audio across music, film and broadcast, focusing on sound reproduction. After completing college studies in sound design, undergraduate studies in music and music technology, as well as graduate studies in sound recording at McGill University in Montreal, Rotondo went on to work in recording, mixing, producing and mastering.

He is currently an audio engineer at Landr.com, whose Landr Audio Mastering for Video gives professional video editors AI-based audio mastering capabilities inside Adobe Premiere Pro CC.

As an audio engineer how do you feel about AI tools to shortcut the mastering process?
Well, first, there's a myth that AI and machines can't possibly make valid decisions in the creative process in a consistent way. In reality, there's a huge intersection between artistic intentions and technical solutions where we find many patterns: places where people tend to agree and go about things very similarly, often unknowingly. We've been building technology around that.

Truth be told, there are many tasks in audio mastering that are repetitive and that people don't necessarily like spending a lot of time on: tasks such as leveling dialogue, music and background elements across multiple segments, or dealing with noise. Everyone's job gets easier when those tasks become automated.
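As a concrete illustration of the kind of repetitive leveling task Rotondo describes, here is a minimal sketch, assuming mono float32 audio in the range [-1, 1]. Real mastering tools measure loudness per ITU-R BS.1770 (LUFS); this stand-in uses plain RMS, and the -23 dB target and the example segments are assumptions for illustration, not Landr's algorithm.

```python
# Minimal sketch of leveling segments to a common loudness target.
# Plain RMS stands in for a proper BS.1770 loudness measurement.
import numpy as np

def rms_db(x):
    """RMS level of a mono signal, in dBFS."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(max(rms, 1e-9))

def level_segments(segments, target_db=-23.0):
    """Apply a static gain to each segment so its RMS lands on the target."""
    leveled = []
    for seg in segments:
        gain_db = target_db - rms_db(seg)
        leveled.append(seg * 10.0 ** (gain_db / 20.0))
    return leveled

# Two "segments" at very different levels end up matched.
quiet = 0.05 * np.random.randn(48000).astype(np.float32)
loud = 0.50 * np.random.randn(48000).astype(np.float32)
print([round(rms_db(s), 1) for s in level_segments([quiet, loud])])  # both near -23.0
```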

I see innovation in AI-driven audio mastering as a way to make creators more productive and efficient — not to replace them. It’s now more accessible than ever for amateur and aspiring producers and musicians to learn about mastering and have the resources to professionally polish their work. I think the same will apply to videographers.

What’s the key to making video content sound great?
Great sound quality is effortless and sounds as natural as possible. It’s about creating an experience that keeps the viewer engaged and entertained. It’s also about great communication — delivering a message to your audience and even conveying your artistic vision — all this to impact your audience in the way you intended.

More specifically, audio shouldn’t unintentionally sound muffled, distorted, noisy or erratic. Dialogue and music should shine through. Viewers should never need to change the volume or rewind the content to play something back during the program.

When are the times you’d want to hire an audio mastering engineer and when are the times that projects could solely use an AI-engine for audio mastering?
Mastering engineers are especially important for extremely intricate artistic projects that require direct communication with a producer or artist, including long-form narrative, feature films, television series and TV commercials. Any project with conceptual sound design will almost always require an engineer to perfect the final master.

Users can truly benefit from AI-driven mastering in short-form, non-fiction projects that require clean dialogue, reduced background noise and overall leveling. Quick-turnaround projects can also use AI mastering to elevate the audio to a more professional level, even when deadlines are tight. AI mastering can now insert itself into the offline creation process, where multiple revisions of a project are sent back and forth, making great sound accessible throughout the entire production cycle.

The other thing to consider is that AI mastering is a great option for video editors who don't have technical audio expertise themselves, and whose lower budgets mean they have to work on their own. These editors could purchase purpose-built mastering plugins, but they don't necessarily have the time to learn how to really take advantage of those tools. And even if they did have the time, some would prefer to focus on all the other aspects of the work that they have to juggle.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial, a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking what app he's looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my SoundMiner server. I added some Foley sound effects of the man handling the gas pump too.
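For readers curious what a "futz" treatment involves at its simplest, here is a rough sketch: band-limit the voice to a small-speaker range, then add mild saturation. The 300-3400 Hz band and the drive amount are illustrative assumptions; this is not how McDSP FutzBox is implemented, just the basic idea behind that family of effects.

```python
# A bare-bones "futz": band-limit a voice to a cheap-speaker range,
# then saturate it gently. Band edges and drive are illustrative guesses.
import numpy as np
from scipy.signal import butter, sosfilt

def futz(voice, sr=48000):
    sos = butter(4, [300.0, 3400.0], btype="bandpass", fs=sr, output="sos")
    narrow = sosfilt(sos, voice)            # small-speaker bandwidth
    return np.tanh(3.0 * narrow) / 3.0      # mild clipping, roughly unity gain

sr = 48000
t = np.arange(sr) / sr
vo = 0.3 * np.sin(2 * np.pi * 220.0 * t)    # stand-in for a recorded voice line
pump_vo = futz(vo, sr)
```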

Any challenges on this spot?
Besides designing the sound processing on the music and the woman's voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and not matching from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used its ambience match to create a seamless background ambience. iZotope RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.

RTW at AES NY with 19-inch rackmount TouchMonitor

RTW, which makes visual audio meters and monitoring devices for broadcast, production, post production and quality control, will be in the Avid Pavilion at AES NY this year with the 19-inch 4U rack-mount (MA4U) version of its TouchMonitor TM9.

This reconfigured unit brings all the audio monitoring features of the standalone RTW TM9 to a new design that is more easily accessible to users in studio settings.

The TM9 panel-mount version measures 235 x 135 x 45mm (9.25 x 5.35 x 1.8 inches) without the power supply and is ideal for mounting into front panels. The unit comes standard with a USB extension for the front panel, and the mounting kit is compatible with DIN 41494/IEC 60297 19-inch 4U racks (483 x 177 x 91mm).

“With the continued evolution of studio spaces and workflows, we have seen an increased interest in rack-mountable formats of our loudness solutions,” says Andreas Tweitmann, CEO, RTW.

Equipped with RTW's high-grade nine-inch touchscreen and an easy-to-use GUI, the TouchMonitor TM9 is the latest in the company's rack-mount solutions, which include the TM3, TM7 and RTW legacy products. The TM9 has a graphical user interface whose instruments can be scaled, freely positioned and combined in almost any way for optimized use of the available screen space. Multiple instruments of the same type, assigned to different input channels and configurations, can be displayed simultaneously. Plus, a context-sensitive, on-screen help feature supports the user, allowing for easy setup changes.

The latest firmware version of the TM9, which is also used with the TM7, expands the basic software to a four-channel display with 4x mono or 2x stereo/2x mono. Additionally, 1x stereo can be measured without the need for an activated multichannel license. Output routing can be individually adjusted for each preset, and all presets can be exported or imported at the same time.

Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back story of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which aired for the first time on September 30 on the Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She's confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hovercraft makes a subtle whooshy, humming sound that's high-tech in a way that's akin to the Goblin's hovercraft. "It had to sound like Captain America too. We had to make it match with that," notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen's leg, there's a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice that's sweetened with big delays and reverbs. "When she's large, she almost has a totally different voice. She sounds like a large, forceful woman," says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. "We recorded horses and dogs mainly because we couldn't find any squirrels in Burbank; none that would cooperate, anyway," jokes Rodman. "We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them."

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake, voiced by Chloe Bennet, the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, "Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts."

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe's final track was Dee Bradley Baker's natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. "He's almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the '80s Transformers franchise and Nibbler on Futurama)."

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

Behind the Title: Heard City mixer Elizabeth McClanahan

A musician from an early age, this mixer/sound designer knew her path needed to involve music and sound.

Name: Elizabeth McClanahan

Company: New York City’s Heard City (@heardcity)

Can you describe your company?
We are an audio post production company.

What’s your job title?
Mixer and sound designer.

What does that entail?
I mix and master audio for advertising, television and film. Working with creatives, I combine production audio, sound effects, sound design, score or music tracks and voiceover into a mix that sounds smooth and helps highlight the narrative of each particular project.

What would surprise people the most about what falls under that title?
I think most people are surprised by the detailed nature of sound design and by the fact that we often supplement straightforward diegetic sounds with additional layers of more conceptual design elements.

What’s your favorite part of the job?
I enjoy the collaborative work environment, which enables me to take on different creative challenges.

What’s your least favorite?
The ever-changing landscape of delivery requirements.

What is your favorite time of the day?
Lunch!

If you didn’t have this job, what would you be doing instead?
I think I would be interested in pursuing a career as an archivist or law librarian.

Why did you choose this profession?
Each project allows me to combine multiple tools and skill sets: music mixing, dialogue cleanup, sound design, etc. I also enjoy the problem solving inherent in audio post.

How early on did you know this would be your path?
I began playing violin at age four, picking up other instruments along the way. As a teenager, I often recorded friends’ punk bands, and I also started working in live sound. Later, I began my professional career as a recording engineer and focused primarily on jazz. It wasn’t until VO and ADR sessions began coming into the music studio in which I was working that I became aware of the potential paths in audio post. I immediately enjoyed the range and challenges of projects that post had to offer.

Can you name some recent projects you have worked on?
Lately, I’ve worked on projects for Google, Budweiser, Got Milk?, Clash of Clans, and NASDAQ.

I recently completed work on a feature film called Nancy. This was my first feature in the role of supervising sound editor and re-recording mixer, and I appreciated the new experience on both a technical and creative level. Nancy was unique in that all department heads (in both production and post) were women. It was an incredible opportunity to work with so many talented people.

Name three pieces of technology you can’t live without.
The Teenage Engineering OP-1, my phone and the UAD plugins that allow me to play bass at home without bothering my neighbors.

What social media channels do you follow?
Although I am not a heavy social media user, I follow a few pragmatic-yet-fun YouTube channels: Scott’s Bass Lessons, Hicut Cake and the gear review channel Knobs. I love that Knobs demonstrates equipment in detail without any talking.

What do you do to de-stress from it all?
In addition to practicing yoga, I love to read and visit museums, as well as play bass and work with modular synths.

Enhancing BlacKkKlansman’s tension with Foley

By Jennifer Walden

Director Spike Lee’s latest film, BlacKkKlansman, has gotten rave reviews from both critics and audiences. The biographical dramedy is based on Ron Stallworth’s true story of infiltrating the Colorado Springs chapter of the Ku Klux Klan back in the 1970s.

Stallworth (John David Washington) was a detective for the Colorado Springs police department who saw a recruitment advertisement for the KKK and decided to call the head of the local Klan chapter. He claimed he was a racist white man wanting to join the Klan. Stallworth asks his co-worker Flip Zimmerman (Adam Driver) to act as Stallworth when dealing with the Klan face-to-face. Together, they try to thwart a KKK attack on an upcoming civil rights rally.

Marko Costanzo

The Emmy Award-winning team (The Night Of and Boardwalk Empire) of Foley artist Marko Costanzo and Foley engineer George Lara at c5 Sound in New York City was tasked with recreating the sound of the '70s — from electric typewriters and rotary phones at police headquarters to the creak of leather jackets that were so popular in that era. "There are cardboard files and evidence boxes being moved around, phones dialing, newspapers shuffling and applause. We even had a car explosion, which meant a lot of car parts landing on the ground," explains Costanzo. "If you could listen to the film before our Foley, you would notice just how many of the extraneous noises had been removed, so we replaced all of that. Pretty much everything you hear in that film was replaced or at least sweetened."

One important role of Foley is using it to define a character through sound. For example, Stallworth typically wears a leather jacket, and his jacket has a signature sound. But many of the police officers, and some Klan members, wear leather jackets, too, and they couldn’t all sound the same. The challenge was to create a unique sound that would represent each character.

According to Costanzo, the trickiest ones to define were the police officers, since they all have similar gear but still needed to sound different. "For the racist police officer Andy Landers (Frederick Weller), we wanted to make him noisy so he sounds a little more overzealous or full of himself. He's got more of a presence." The kit they created for Landers has more equipment for his belt, like bullets and handcuffs that rattle as he walks, a radio and a nightstick clattering, and they used extra leather creaking as well. "We did the nightstick for him because he's always ready and quick to pull out his nightstick to harass someone. He was a pretty nasty character, so we made him sound nasty with all our Foley trimmings."

The police officer Foley really shines during the scene in which Stallworth apprehends Connie (Ashlie Atkinson), who just planted a bomb outside the residence of Patrice (Laura Harrier), president of the black student union at Colorado College. Stallworth is undercover, and he’s being arrested by local uniformed police officers instead of Connie the criminal. “The trick there was to make the police officer sound intimidating, and we did that through the sound of their belts,” says Costanzo. “They’re frisking the undercover cop and putting the handcuffs on and we covered all of those actions with sound.”

That scene is followed by a huge car explosion, which the Foley team also covered. While they didn’t do the actual explosion sound, they did perform the sounds of the glass shattering and many different debris impacts. “Our work helps to identify the perspective of the camera, and adds detail like parts hitting the bushes or parts hitting other cars. We go and pick out all the little things that you see and add those to the track,” he says.

Sometimes the Foley adds to the storytelling in less overt ways. Take, for instance, the scene when Stallworth calls up the head of the local KKK. As he's on the phone listing all the types of people he hates, the other police officers in the station stop what they're doing. Zimmerman swivels his chair around slowly and you hear it squeaking the whole time. It's this uncomfortable sound, like the sonic equivalent of an eyebrow raise. Costanzo says, "Uncomfortable sounds are what we specialize in. Those are moments we embellish wherever possible so that it does tell part of the story. We wanted that moment to feel uncomfortable. Once those sounds are heard, it becomes part of the story, but it also just falls into the soundtrack."

Foley can be helpful in communicating what’s happening off-screen as well. The police station is filled with officers. In Foley, they covered telephone hang-ups and grabs, the sound of the cords clattering and the chairs creaking, filing cabinets being opened and closed. “We try to create the feeling that you are located in that room and so we embellish off-camera sounds as well as the sounds for things on camera,” says Lara. Sometimes those off-camera sounds are atmospheric, like the police station, and other times they’re very specific. The director or supervising sound editor may ask to hear the characters walk away and out onto the street, or they need to hear a big crowd on the other side of a wall.

Part of the art of Foley is getting it to sound like it’s coming from the scene, like it’s production sound even though it isn’t. When a character waves an arm, you hear a cloth rustle. If people are walking down a long hallway, you hear their footsteps, and the sound diminishes as they get farther away from the camera. “We embellish all those movements, and that makes what we’re seeing feel more real,” explains Costanzo. To get those sounds to sit right, to feel like they’re coming from the scene, the Foley team strives to match the quality of the room for each scene, for each camera angle. “We try to do our best to match what we hear in production so the Foley will match that and sound like it was recorded there, live, on-set that day.”

Tools & Collaboration
Lara uses a four-mic approach to capturing the Foley. For the main mic (closest to Costanzo), he uses a Neumann KMR 81 D shotgun mic, which is a common boom mic used on-set. He has three other KMR 81 Ds placed at different distances and angles to the sound source. Those are all fed into an 8-channel Millennia mic pre-amp. By changing the balance of the mics in the mix, Lara can change the perspective of the sound. How well the Foley fits into the track isn't just about volume; it's about perspective and tonal quality. "Although we can EQ the sound, we try not to because we want to give the supervising sound editor the best sound, the fullest and richest sounding Foley possible," he says.
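As a toy illustration of the perspective idea Lara describes, the sketch below blends four time-aligned mic signals with different weights; favoring the distant mics makes the same performance read as farther away. The weights and the random stand-in signals are hypothetical, not c5 Sound's actual setup.

```python
# Toy version of multi-mic perspective: one performance, four mics at
# increasing distances, and the balance between them sets the perceived depth.
import numpy as np

def mix_perspective(close, mid1, mid2, far, weights):
    """Weighted sum of four time-aligned mono mic signals."""
    stack = np.stack([close, mid1, mid2, far])        # shape: (4, n_samples)
    w = np.asarray(weights, dtype=np.float32)
    return np.tensordot(w, stack, axes=1)             # shape: (n_samples,)

n = 48000
mics = [np.random.randn(n).astype(np.float32) for _ in range(4)]  # stand-ins

close_up = mix_perspective(*mics, weights=[1.0, 0.3, 0.1, 0.0])   # tight, dry
far_away = mix_perspective(*mics, weights=[0.2, 0.4, 0.6, 1.0])   # roomy, distant
```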

Lara and Costanzo have been creating Foley together for 26 years. Both got their start at Sound One’s Foley stage in New York. “We have a really good idea of what’s good Foley and what’s bad Foley. Because George and I both learned the same way, I often refer to George as having the same ear as myself — meaning we both know when something works and when something doesn’t work,” shares Costanzo.

This dynamic allows the team to record anywhere from 300 to 400 sounds per day. For BlacKkKlansman, they were able to turn the film around in eight days. “The way that we work together, and why we work so well together, is because we both know what we are looking for and we have recorded many, many hours and years of Foley together,” says Lara.

Costanzo concludes, “Foley is a collaborative art but since we’ve been working together for many years, there are a lot of things that go unsaid. We don’t need to explain to each other everything that goes on. We both have imaginations that flourish when it comes to sound and we know how to take ideas and transfer them into working sounds. That’s something you learn over time.”


Jennifer Walden is a New Jersey-based audio engineer and writer. 

The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

The attack on the window in the underwater research station, how did you build that sequence? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings of glass breaking that we had done a while ago. My parents were replacing this 8' x 12' glass window in their house, and before they demolished the old one, I told them not to throw it out because I wanted to record it first.

So I mic'd it up with my "hammer mic," which I'm very willing to beat up. It's an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees and a large diaphragm, so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.
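As a minimal sketch of that "pressure" idea, assuming nothing about the film's actual stems: a near-infrasonic layer faded in slowly, so it is felt more than heard. The 25 Hz frequency, fade time and level are illustrative guesses, not the mix's actual design.

```python
# A felt-not-heard pressure layer: a 25 Hz sine with a slow fade-in,
# kept well below full scale to leave headroom for the rest of the mix.
import numpy as np

SR = 48000
dur = 10.0                                   # seconds
t = np.arange(int(SR * dur)) / SR

rumble = np.sin(2.0 * np.pi * 25.0 * t)      # near the threshold of hearing
fade = np.clip(t / 4.0, 0.0, 1.0)            # 4-second fade-in
pressure = (0.4 * fade * rumble).astype(np.float32)
```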

The Meg is an action film, so there are shootings, explosions, ships getting smashed up and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon-popping session on Stage 10 at Warner Bros., where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the "zorb" ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That's a great moment. I revisited that scene to do something else, and when the zorb popped it made me jump because I had forgotten how powerful a moment it is. It was a really fun, and funny, moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge, metal cargo crate container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB and put microphones inside the car — some stereo mics and some contact mics attached to the windshield. Then we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpy. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn't break any windows, although I wasn't sure that we wouldn't.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer, and Doug Hemphill, the effects re-recording mixer. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.

The Emmy-nominated sound editing team’s process on HBO’s Vice Principals

By Jennifer Walden

HBO’s comedy series Vice Principals — starring Danny McBride and Walton Goggins as two rival vice principals of North Jackson High School — really went wild for the Season 2 finale. Since the school’s mascot is a tiger, they hired an actual tiger for graduation day, which wreaked havoc inside the school. (The tiger was part real and part VFX, but you’d never know thanks to the convincing visuals and sound.)

The tiger wasn’t the only source of mayhem. There was gunfire and hostages, a car crash and someone locked in a cage — all in the name of comedy.

George Haddad

Through all the bedlam, it was vital to have clean and clear dialogue. The show’s comedy comes from the jokes that are often ad-libbed and subtle.

Here, Warner Bros. Sound supervising sound editor George Haddad, MPSE, and dialogue/ADR editor Karyn Foster talk about what went into the Emmy-nominated sound editing on the Vice Principals Season 2 finale, “The Union Of The Wizard & The Warrior.”

Of all the episodes in Season 2, why did you choose “The Union of the Wizard & The Warrior” for award consideration?
George Haddad: Personally, this was the funniest episode — whether that’s good for sound or not. They just let loose on this one. For a comedy, it had so many great opportunities for sound effects, walla, loop group, etc. It was the perfect match for award consideration. Even the picture editor said beforehand that this could be the one. Of course, we don’t pay too much attention to its award-potential; we focus on the sound first. But, sure enough, as we went through it, we all agreed that this could be it.

Karyn Foster: This episode was pretty dang large, with the tiger and the chaos that the tiger causes.

In terms of sound, what was your favorite moment in this episode? Why?
Haddad: It was during the middle of the show, when the tiger got loose from the cage and created havoc. It's always great for sound when an animal gets loose. And it was particularly fun because of the great actors involved. This had comedy written all over it. You know no one is going to die, just because of the nature of the show. (Actually, the tiger did eat the animal handler, but he kind of deserved it.)

Karyn Foster

I had a lot of fun with the tiger, and we definitely cheated reality there. That was a good sound design sequence. We added a lot of kids screaming and adults screaming. The teachers reacted with even more fear than the students, which made it funny. It was a perfect storm for sound effects and dialogue.

Foster: My favorite scene was when Lee [Goggins] is on the ground after the tiger mauls his hand and he’s trying to get Neal [McBride] to say, “I love you.” That scene was hysterical.

What was your approach to the tiger sounds?
Haddad: We didn’t have production sound for the tiger, as the handler on-set kept a close watch on the real animal. Then in the VFX, we have the tiger jumping, scratching with its paws, roaring…

I looked into realistic tiger sounds, and they’re not the type of animal you’d think would roar or snarl — sounds we are used to having for a lion. We took some creative license and blended sounds together to make the tiger a little more ferocious, but not too scary. Because, again, it’s a comedy so we needed to find the right balance.

What was the most challenging scene for sound?
Haddad: The entire cast was in this episode, during the graduation ceremony. So you had 500 students and a dozen of the lead cast members. That was pretty full, in terms of sound. We had to make it feel like everyone is panicking at the same time while focusing on the tiger. We had to keep the tension going, but it couldn’t be scary. We had to keep the tone of the comedy going. That’s where the balance was tricky and the mixers did a great job with all the material we gave them. I think they found the right tone for the episode.

Foster: For dialogue, the most challenging scene was when they are in the cafeteria with the tiger. That was a little tough because there are a lot of people talking and there were overlapping lines. Also, it was shot in a practical location, so there was room reflection on the production dialogue.

A comedy series is all about getting a laugh. How do you use sound to enhance the comedy in this series?
Haddad: We take our lead from Danny McBride. Whatever his character is doing, we're not going to try to go over the top, just because he and his co-stars are brilliant at it. But we want to add to the comedy. We don't go cartoonish. We try to keep the sounds in reality but add a little bit of a twist on top of what the characters are already doing so brilliantly on the screen.

Quite frankly, they do most of the work for us and we just sweeten what is going on in the scene. We stay away from any of the classic Hanna-Barbera cartoon sound effects. It’s not that kind of comedy, but at the same time we will throw a little bit of slapstick in there — whether it’s a character falling or slipping or it’s a gun going off. For the gunshots, I’ll have the bullet ricochet and hit a tree just to add to the comedy that’s already there.

A comedy series is all about the dialogue and the jokes. What are some things you do to help the dialogue come through?
Haddad: The production dialogue was clean overall, and the producers don’t want to change any of the performances, even if a line is a bit noisy. The mixers did a great job in making sure that clarity was king for dialogue. Every single word and every single joke was heard perfectly. Comedy is all about timing.

We were fortunate because we get clean dialogue and we found the right balance of all the students screaming and the sounds of panicking when the tiger created havoc. We wanted to make sure that Danny and his co-stars were heard loud and clear because the comedy starts with them. Vice Principals is a great and natural sounding show for dialogue.

Foster: Vice Principals was a pleasure to work on because the dialogue was in good shape. The editing on this episode wasn’t difficult. The lines went together pretty evenly.

We basically work with what we’ve been given. It’s all been chosen for us and our job is to make it sound smooth. There’s very minimal ADR on the show.

In terms of clarification, we make sure that any lines that really need to be heard are completely separate, so when it gets to the mix stage the mixer can push that line through without having to push everything else.

As far as timing, we don’t make any changes. That’s a big fat no-no for us. The picture editor and showrunners have already decided what they want and where, and we don’t mess with that.

There were a large number of actors present for the graduation ceremony. Was the production sound mixer able to record those people in that environment? Or, was that sound covered in loop?
Haddad: There are so many people in the scene, and that can be challenging to do solely in loop group. We did multiple passes with the actors we had in loop. We also had the excellent sound library here at Warner Bros. Sound. I also captured recordings at my kids’ high school. So we had a lot of resource material to pull from, and we were able to build out that scene nicely. We were able to represent through sound the number of students and adults we see on-camera.

As for recording at my kids’ high school, I got permission from the principal but, of course, my kids were embarrassed to have their dad at school with his sound equipment. So I tried to stay covert. The microphones were placed up high, in inconspicuous places. I didn’t ask any students to do anything. We were like chameleons — we came and set up our equipment and hit record. I had Røde microphones because they were easy to mount on the wall and easy to hide. One was a Røde VideoMic and the other was their NTG1 microphone. I used a Roland R-26 recorder because it’s portable and I love the quality. It’s great for exterior sounds too because you don’t get a lot of hiss.

We spent a couple hours recording and we were lucky enough to get material to use in the show. I just wanted to catch the natural sound of the school. There are 2,700 students, so it’s an unusually high student population and we were able to capture that. We got lucky when kids walked by laughing or screaming or running to the next class. That was really useful material.

Foster: There was production crowd recorded. For most of the episodes, when they had pep rallies and events, they took the time to record some specific crowd takes. When you’re recording loop group on a stage, you’re limited by the number of people you have. You have to do multiple takes to try and mimic that many people.

Can you talk about the tools you couldn’t have done without?
Haddad: This show has a natural sound, so we didn’t use pitch shifting or reverb or other processing like we’d use on a show like Gotham, where we do character vocal treatments.

Foster: I would have to say iZotope RX 6. That tool for a dialogue editor is one that you can’t live without. There were some challenging scenes on Vice Principals, and the production sound mixer Christof Gebert did a really good job of getting the mics in there. The iso-mics were really clean, and that’s unusual these days. The dialogue on the show was pleasant to work on because of that.

What makes this show challenging in terms of dialogue is that it’s a comedy, so there’s a lot of ad-libbing. With ad-libbing, there are no other takes to choose from. So if there’s a big clunk on a line, you have to make that work. With RX 6, you can minimize the clunk on a line or get rid of it entirely. If those lines are ad-libs, the producers don’t want to have to loop them. The ad-libbing makes the show great, but it also makes the dialogue editing a bit more complicated.
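
To make the idea concrete: iZotope’s actual algorithms are proprietary, but the basic notion behind spectral repair can be sketched in a few lines of Python. In the hypothetical snippet below (the file name and the location of the clunk are invented, and scipy stands in for RX’s far more sophisticated processing), a short time-frequency region is attenuated so the rest of the line is left untouched.

```python
# Minimal sketch of the idea behind spectral repair: attenuate a short
# time-frequency region (the "clunk") instead of cutting the whole line.
# RX's actual algorithms are far more sophisticated; the file name and
# the repair region below are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, audio = wavfile.read("adlib_line.wav")      # mono 16-bit WAV assumed
audio = audio.astype(np.float32) / 32768.0

f, t, spec = stft(audio, fs=rate, nperseg=2048)

# Suppose the clunk sits around 0.80-0.95 s, mostly below 500 Hz.
t_mask = (t >= 0.80) & (t <= 0.95)
f_mask = f <= 500.0
spec[np.ix_(f_mask, t_mask)] *= 0.05              # roughly -26 dB in that region

_, repaired = istft(spec, fs=rate, nperseg=2048)
wavfile.write("adlib_line_repaired.wav", rate,
              (np.clip(repaired, -1, 1) * 32767).astype(np.int16))
```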

Any final thoughts you’d like to share on Vice Principals?
Haddad: We had a big crew because the show was so busy. I was lucky to get some of the best people here at Warner Bros. Sound. They helped make the show sound great, and we’re all very proud of it. We appreciate our peers selecting Vice Principals for an Emmy nomination. Having all of our hard work pay off with a nomination was a great feeling.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Crafting sound for Emmy-winning Atlanta

By Jennifer Walden

FX Network’s dramedy series Atlanta, which recently won an Emmy for Outstanding Sound Editing for a Comedy or Drama Series (Half-Hour), tells the story of three friends from, well, Atlanta — a local rapper named Paper Boi whose star is on the rise (although the universe seems to be holding him down), his cousin/manager Earn and their head-in-the-clouds friend Darius.

Trevor Gates

Told through vignettes, each episode shows their lives from different perspectives instead of through a running narrative. This provides endless possibilities for creativity. One episode flows through different rooms at a swanky New Year’s party at Drake’s house; another ventures deep into the creepy woods where real animals (not party animals) make things tense.

It’s a playground for sound each week, and MPSE Award-winning supervising sound editor Trevor Gates of Formosa Group and his sound editorial team on Season 2 (aka Robbin’ Season) earned their 2018 Emmy for their work on Episode 6, “Teddy Perkins,” in which Darius goes to pick up a piano from the home of an eccentric recluse but finds there’s more to the transaction than he bargained for.

Here, Gates discusses the episode’s precise use of sound and how the quiet environment was meticulously crafted to reinforce the tension in the story and to add to the awkwardness of the interactions between Darius and Teddy.

There’s very little music in “Teddy Perkins.” The soundtrack is mainly different ambiences and practical effects and Foley. Since the backgrounds play such an important role, can you tell me about the creation of these different ambiences?
Overall, Atlanta doesn’t really have a score. Music is pretty minimal and the only music that you hear is mainly source music — music coming from radios, cell phones or laptops. I think it’s an interesting creative choice by producers Hiro Murai and Donald Glover. In cases like the “Teddy Perkins” episode, we have to be careful with the sounds we choose because we don’t have a big score to hide behind. We have to be articulate with those ambient sounds and with the production dialogue.

Going into “Teddy Perkins,” Hiro (who directed the episode) and I talked about his goals for the sound. We wanted a quiet soundscape and for the house to feel cold and open. So, when we were crafting the sounds that most audience members will perceive as silence or quietness, we had very specific choices to make. We had to craft this moody air inside the house. We had to craft a few sounds for the outside world too because the house is located in a rural area.

There are a few birds but nothing overt, so that it’s not intrusive to the relationship between Darius (Lakeith Stanfield) and Teddy (Donald Glover). We had to be very careful in articulating our sound choices, to hold that quietness that was void of any music while also supporting the creepy, weird, tense dialogue between the two.

Inside the Perkins residence, the first ambience felt cold and almost oppressive. How did you create that tone?
That rumbly, oppressive air was the cold tone we were going for. It wasn’t a layer of tones; it was actually just one sound that I manipulated to be the exact frequency that I wanted for that space. There was a vastness and a claustrophobia to that space, although that sounds contradictory. That cold tone was kind of the hero sound of this episode. It was just one sound, articulately crafted, and supported by sounds from the environment.

There’s a tonal shift from the entryway into the parlor, where Darius and Teddy sit down to discuss the piano (and Teddy is eating that huge, weird egg). In there we have the sound of a clock ticking. I really enjoy using clocks. I like the meter that clocks add to a room.

In Ouija: Origin of Evil, we used the sound of a clock to hold the pace of some scenes. I slowed the clock down to just a tad over a second, and it really makes you lean in to the scene and hold what you perceive as silence. I took a page from that book for Atlanta. As you leave the cold air of the entryway, you enter into this room with a clock ticking and Teddy and Darius are sitting there looking at each other awkwardly over this weird/gross ostrich egg. The sound isn’t distracting or obtrusive; it just makes you lean into the awkwardness.
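
As a toy illustration of that retiming trick, here is one way to re-space a single recorded tick at just over a second, sketched in Python. The tick sample and timings are hypothetical; this shows the general idea, not the actual session work.

```python
# Toy illustration of retiming a clock: place one recorded tick every
# 1.05 s instead of every 1.0 s, slowing the meter of the room slightly.
# The tick sample and timings are hypothetical.
import numpy as np
from scipy.io import wavfile

rate, tick = wavfile.read("clock_tick.wav")       # short mono transient assumed
tick = tick.astype(np.float32) / 32768.0

interval = 1.05                                   # seconds between ticks
duration = 60.0                                   # one minute of room tone
out = np.zeros(int(duration * rate), dtype=np.float32)

for start in np.arange(0.0, duration - interval, interval):
    i = int(start * rate)
    out[i:i + len(tick)] += tick[:len(out) - i]   # drop the tick onto the timeline

wavfile.write("clock_slowed.wav", rate,
              (np.clip(out, -1, 1) * 32767).astype(np.int16))
```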

It was important for us to get the mix for the episode right, to get the right level for the ambiences and tones, so that they are present but not distracting. It had to feel natural. It’s our responsibility to craft things that show the audience what we want them to see, and at the same time we have to suspend their disbelief. That’s what we do as filmmakers; we present the sonic spaces and visual images that traverse that fine line between creativity and realism.

That cold tone plays a more prominent role near the end of the episode, during the murder-suicide scene. It builds the tension until right before Benny pulls the trigger. But there’s another element there too, a musical stinger. Why did you choose to use music at that moment?
What’s important about this season of Atlanta is that Hiro and Donald have a real talent for surrounding themselves with exceptional people — from the picture department to the sound department to the music department and everyone on-set. Through the season it was apparent that this team of exceptional people functioned with extreme togetherness. We had a homogeny about us. It was a bunch of really creative and smart people getting together in a room, creating something amazing.

We had a music department, and although there isn’t much music or score, every once in a while we would break a rule that we had set for ourselves on Season 2. The picture editor would be in the room with the music department and Hiro, and we’d all make decisions together. That musical stinger wasn’t my idea exactly; it was a collective decision to use a stinger to drive the moment, to have it build and release at a specific time. I can’t attribute that sound to me only, but to this exceptional team on the show. We would bounce creative ideas off of each other and make decisions as a collective.

The effects in the murder-suicide scene do a great job of tension building. For example, when Teddy leans in on Darius, there’s that great, long floor creak.
Yeah, that was a good creak. It was important for us, throughout this episode, to make specific sound choices in many different areas. There are other episodes in the season that have a lot more sound than this episode, like “Woods,” where Paper Boi (Brian Tyree Henry) is getting chased through the woods after he was robbed. Or “Alligator Man,” with the shootout in the cold open. But that wasn’t the case with “Teddy Perkins.”

On this one, we had to make specific choices, like when Teddy leans over and there’s that long, slow creak. We tried to encompass the pace of the scene in one very specific sound, like the sound of the shackles being tightened onto Darius or the movement of the shotgun.

There’s another scene when Darius goes down into the basement, and he’s traveling through this area that he hasn’t been in before. We decided to create a world where he would hear sounds traveling through the space. He walks past a fan and then a water heater kicks on and there is some water gurgling through pipes and the clinking sound of the water heater cooling down. Then we hear Benny’s wheelchair squeak. For me, it’s about finding that one perfect sound that makes that moment. That’s hard to do because it’s not a composition of many sounds. You have one choice to make, and that’s what is going to make that moment special. It’s exciting to find that one sound. Sometimes you go through many choices until you find the right one.

There were great diegetic effects, like Darius spinning the globe, the sound of the piano going onto the elevator, and the elevator’s floor needle and the buttons and dings. Did those come from Foley? Custom recordings? Library sounds?
I had a great Foley team on this entire season, led by Foley supervisor Geordy Sincavage. The sounds like the globe spinning came from the Foley team, so that was all custom recorded. The elevator needle moving down was a custom recording from Foley. All of the shackles and handcuffs and gun movements were from Foley.

The piano moving onto the elevator was something that we created from a combination of library effects and Foley sounds. I had sound effects editor David Barbee helping me out on this episode. He gave me some library sounds for the piano and I went in and gave it a little extra love. I accentuated the movement of the piano strings. It was like piano string vocalizations as Darius is moving the piano into the elevator and it goes over the little bumps. I wanted to play up the movements that would add some realism to that moment.

Creating a precise soundtrack is harder than creating a big action soundtrack. Well, there are different sets of challenges for both, but it’s all about being able to tell a story by subtraction. When there’s too much going on, people can only feel the details once you start taking things away. “Teddy Perkins” is a case of having an extremely precise soundtrack, and that was successful thanks to the work of the Foley team, my effects editor and the dialogue editor.

The dialogue editor Jason Dotts is the unsung hero in this because we had to be so careful with the production dialogue track. When you have a big set — this old, creaky house and lots of equipment and crew noise — you have to remove all the extraneous noise that can take you out of the tension between Darius and Teddy. Jason had to go in with a fine-tooth comb and do surgery on the production dialogue just to remove every single small sound in order to get the track super quiet. That production track had to be razor-sharp and presented with extreme care. Then, with extreme care, we had to build the ambiences around it and add great Foley sounds for all the little nuances. Then we had to bake the cake together and have a great mix, a very articulate balance of sounds.

When we were all done, I remember Hiro saying to us that we realized his dream 100%. He alluded to the fact that this was an important episode going into it. I feel like I am a man of my craft and my fingerprint is very important to me, so I am always mindful of how I show my craft to the world. I will always take extreme care and go the extra mile no matter what, but it felt good to have something that was important to Hiro have such a great outcome for our team. The world responded. There were lots of Emmy nominations this year for Atlanta and that was an incredible thing.

Did you have a favorite scene for sound? Why?
It was cool to have something that we needed to craft and present in its entirety. We had to build a motif and there had to be consistency within that motif. It was awesome to build the episode as a whole. Some scenes were a bit different, like down in the basement. That had a different vibe. Then there were fun scenes like moving the piano onto the elevator. Some scenes had production challenges, like the scene with the film projector. Hiro had to shoot that scene with the projector running and that created a lot of extra noise on the production dialogue. So that was challenging from a dialogue editing standpoint and a mix standpoint.

Another challenging scene was when Darius and Teddy are in the “Father Room” of the museum. That was shot early on in the process and Donald wasn’t quite happy with his voice performance in that scene. Overall, Atlanta uses very minimal ADR because we feel that re-recorded performances can really take the magic out of a scene, but Donald wanted to redo that whole scene, and it came out great. It felt natural and I don’t think people realize that Donald’s voice was re-recorded in its entirety for that scene. That was a fun ADR session.

Donald came into the studio, and once he got into the recording booth and got into the Teddy Perkins voice, he didn’t get out of it until we were completely finished. So as Hiro and Donald discussed ideas about the performance, Donald stayed in the Teddy voice completely. He didn’t get out of it for three hours. It was an interesting experience to see Donald’s face as himself and hear Teddy’s voice.

Were there any audio tools that you couldn’t have lived without on this episode?
Not necessarily. This was an organic build, and the tools that we used were really basic. We used some library sounds and recorded some custom sounds. We just wanted to make this as real and organic as possible. Our approach was to pick the best organic sounds that we could, whether they came from library sources or new recordings.

Of all the episodes in Season 2 of Atlanta, why did you choose “Teddy Perkins” for Emmy consideration?
Each episode had its different challenges. There were lots of different ways to tell the stories since each episode is different. I think that is something that is magical about Atlanta. Some of the episodes that stood out from a sound standpoint were Episode 1 “Alligator Man” with the shootout, and Episode 8 “Woods.” I had considered submitting “Woods” because it’s so surreal once Paper Boi gets into the woods. We created this submergence of sound, like the woods were alive. We took it to another level with the wildlife and used specific wildlife sounds to draw some feelings of anxiety and claustrophobia.

Even an episode like “Champagne Papi,” which seems like one of the most basic from a sound editorial perspective, was actually quite varied. They’re going between different rooms at a party, and we had to build crowds that felt different but related in each room. It had to feel like a real space with lots of people, and the different spaces had to feel like they belonged at the same party.

But when it came down to it, I feel like “Teddy Perkins” was special because there wasn’t music to hide behind. We had to do specific and articulate work, and make sharp choices. So it’s not the episode with the most sound but it’s the episode that has the most articulate sound. And we are very proud of how it turned out.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.

Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24x13 projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.

Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios, sitting in on sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that and how it allows me to interact with folks at both studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how much there is to learn from the way folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS. I’d like to be able to say that if the world lost power tomorrow I’d be okay in the woods, but I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.

Sound Lounge, Mad Hat team on Sound Lounge Everywhere Atlanta

Sound Lounge has partnered with Atlanta’s Mad Hat Creative to bring its Sound Lounge Everywhere remote collaboration service to the Southeast. Sound Lounge Everywhere will allow advertising, broadcast and corporate clients in Atlanta and neighboring states to work with Sound Lounge sound editors, designers and mixers in New York in realtime and share high-quality audio and video.

This will allow clients access to top sound talent, while saving time, travel and production costs. Sound Lounge already has launched Sound Lounge Everywhere at sites in Boston and Boulder, Colorado.

At Mad Hat’s Atlanta offices, a suite dedicated to sound work is equipped with Bowers & Wilkins speakers and other leading-edge gear to ensure accurate playback of music and sound. Proprietary Sound Lounge Everywhere hardware and software facilitates realtime streaming of high-quality video and uncompressed, multichannel audio between the Mad Hat and Sound Lounge locations with virtually no latency. Web cameras and talkback modules support two-way communication.

For Mad Hat Creative, Sound Lounge Everywhere helps the company round out an offering that includes video production, editorial, visual effects, motion graphics, color correction and post services.

To help manage the new service, Sound Lounge has promoted Becca Falborn to senior producer. Falborn, who joined the studio as a producer last year, will coordinate sound sessions between the two sites, assist Sound Lounge head of production Liana Rosenberg in overseeing local sound production and serve as the studio’s social coordinator.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including posts with the post house Nice Shoes and the marketing agency Hogarth Worldwide.

Review: Blackmagic’s Resolve 15

By David Cox

DaVinci Resolve 15 from Blackmagic Design has now been released. The big news is that Blackmagic’s compositing software Fusion has been incorporated into Resolve, joining the editing and audio mixing capabilities added to color grading in recent years. However, to focus just on this would hide a wide array of updates to Resolve, large and small, across the entire platform. I’ve picked out some of my favorite updates in each area.

For Colorists
Each time Blackmagic adds a new discipline to Resolve, colorists fear that the color features take a back seat. After all, Resolve was a color grading system long before anything else. But I’m happy to say there’s nothing to fear in Version 15, as there are several very nice color tweaks and new features to keep everyone happy.

I particularly like the new “stills store” functionality, which allows the colorist to find and apply a grade from any shot in any timeline in any project. Rather than just having access to manually saved grades in the gallery area, thumbnails of any graded shot can be viewed and copied, no matter which timeline or project they are in, even those not explicitly saved as stills. This is great for multi-version work, which is every project these days.

Grades saved as stills (and LUTs) can also be previewed on the current shot using the “Live Preview” feature. Hovering the mouse cursor over a still and scrubbing left and right will show the current shot with the selected grade temporarily applied. It makes quick work of finding the most appropriate look from an existing library.

Another new feature I like is called “Shared Nodes.” A color grading node can be set as “shared,” which creates a common grading node that can be inserted into multiple shots. Changing one instance changes all instances of that shared node. This approach is more flexible and visible than using Groups, as the node can be seen in each node layout and can sit at any point in the process flow.

As well as the addition of multiple playheads, a popular feature in other grading systems, there is a plethora of minor improvements. For example, you can now drag the qualifier graphics to adjust settings, as opposed to just the numeric values below them. There are new features to finesse the mattes generated from the keying functions, as well as improvements to the denoise and face refinement features. Nodes can be selected with a single click instead of a double click. In fact, there are 34 color improvements or new features listed in the release notes.

For Editors
As with color, there are a wide range of minor tweaks all aimed at improving feel and ergonomics, particularly around dynamic trim modes, numeric timecode entry and the like. I really like one of the major new features, which is the ability to open multiple timelines on the screen at the same time. This is perfect for grabbing shots, sequences and settings from other timelines.

As someone who works a lot with VFX projects, I also like the new “Replace Edit” function, which is aimed at those of us that start our timelines with early drafts of VFX and then update them as improved versions come along. The new function allows updated shots to be dragged over their predecessors, replacing them but inheriting all modifications made, such as the color grade.

An additional feature to the existing markers and notes functions is called “Drawn Annotations.” An editor can point out issues in a shot with lines and arrows, then detail them with notes and highlight them with timeline markers. This is great as a “note to self” to fix later, or in collaborative workflows where notes can be left for other editors, colorists or compositors.

Previous versions of Resolve had very basic text titling. Thanks to the incorporation of Fusion, the edit page of Resolve now has a feature called Text+, a significant upgrade on the incumbent offering. It allows more detailed text control, animation, gradient fills, dotted outlines, circular typing and so on. Within Fusion there is a modifier called “Follower,” which enables letter-by-letter animation, allowing Text+ to compete with After Effects for type animation. On my beta test version of Resolve 15, this wasn’t available in the Edit page, which could be down to the beta status or an intent to keep the Text+ controls in the Edit page more streamlined.

For Audio
I’m not an audio guy, so my usefulness in reviewing these parts is distinctly limited. There are 25 listed improvements or new features, according to the release notes. One is the incorporation of Fairlight’s Automated Dialog Replacement processes, which creates a workflow for replacing unsalvageable original production dialog.

There are also 13 new built-in audio effects plugins, such as Chorus, Echo and Flanger, as well as de-esser and de-hummer clean-up tools.

Another useful addition for both audio mixers and editors is the ability to import entire audio effects libraries, which can then be searched and star-rated from within the Edit and Fairlight pages.

Now With Added Fusion
So to the headline act — the incorporation of Fusion into Resolve. Fusion is a highly regarded node-based 2D and 3D compositing software package. I reviewed Version 9 in postPerspective last year [https://postperspective.com/review-blackmagics-fusion-9/]. Bringing it into Resolve links it directly to editing, color grading and audio mixing to create arguably the most agile post production suite available.

Combining Resolve and Fusion will create some interesting challenges for Blackmagic, who say that the integration of the two will be ongoing for some time. Their challenge isn’t just linking two software packages, each with their own long heritage, but in making a coherent system that makes sense to all users.

The issue is this: editors and colorists need to work at a fast pace, and want the minimum number of controls clearly presented. A compositor needs infinite flexibility and wants a button and value for every function, with a graph and ideally the ability to drive it with a mathematical expression or script. Creating an interface that suits both is near impossible. Dumbing down a compositing environment limits its ability, whereas complicating an editing or color environment destroys its flow.

Fusion occupies its own “page” within Resolve, alongside pages for “Color,” “Fairlight” (audio) and “Edit.” This is a good solution insofar as each interface can be tuned for its dedicated purpose. Moving into and out of Fusion also works very well. A user can seamlessly move from Edit to Fusion to Color and back again, without delays, rendering or importing. If a user is familiar with Resolve and Fusion, it works very well indeed. If the user is not accustomed to high-end node-based compositing, then the Fusion page can be daunting.

I think the challenge going forward will be how to make the creative possibilities of Fusion more accessible to colorists and editors without compromising the flexibility a compositor needs. Certainly, there are areas in Fusion that can be made more obvious. As with many mature software packages, Fusion has the occasional hidden right click or alt-click function that is hard for new users to discover. But beyond that, the answer is probably to let a subset of Fusion’s ability creep into the Edit and Color pages, where more common tasks can be accommodated with simplified control sets and interfaces. This is actually already the case with Text+; a Fusion “effect” that is directly accessible within the Edit section.

Another possible area to help is Fusion Macros. This is an inbuilt feature within Fusion that allows a designer to create an effect and then condense it down to a single node, including just the specific controls needed for that combined effect. Currently, Macros that integrate the Text+ effect can be loaded directly in the Edit page’s “Title Templates” section.

I would encourage Blackmagic to open this up further to allow any sort of Macro to be added for video transitions, graphics generators and the like. This could encourage a vibrant exchange of user-created effects, which would arm editors and colorists with a vast array of immediate and community sourced creative options.

Overall, the incorporation of Fusion is a definite success in my view, whether used to empower multi-skilled post creatives or to provide a common environment for specialized creatives to collaborate. The volume of updates, and the speed at which the Resolve developers address issues exposed during the public beta, remain nothing short of impressive.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music for brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for AdCouncil and on Resistance Radio for Amazon Studios’ The Man in the High Castle, work that collectively won multiple Cannes Lion, Clio and One Show awards and garnered two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.

Emmy Season: Audio post for Netflix docu-series Wild Wild Country

By Jennifer Walden

A community based on peace and love, acceptance and non-judgment, where everyone has a job and a purpose. Who wouldn’t want to be a part of that, right? Or, is there a part of you that thinks this all sounds a bit utopian and is dubious?

Wild Wild Country, the six-part docu-series created by brothers Chapman and Maclain Way — its executive producers include two more sets of brothers: Mark and Jay Duplass and Josh and Dan Braun — tells the true story of what happened to a small town in Oregon after a religious cult set up their “utopian” city on a nearby ranch. This seven-hour documentary premiered in its entirety at the 2018 Sundance Film Festival and is currently available to stream on Netflix. It was also nominated for five Emmy Awards, winning in the category of Outstanding Documentary or Non-Fiction Series.

The Unbridled sound team at Sundance.

Wild Wild Country is a mix of archival news footage from the ‘80s — when the Rajneesh cult’s influence was on the rise in Oregon — and footage shot by the Rajneeshees, particularly in their own camp. It also draws from other documentaries and news specials on the Rajneesh movement that were created over the years. The Way brothers conduct extensive interviews with former Rajneeshees — including Ma Anand Sheela, who was personal secretary to cult leader Bhagwan Shree Rajneesh. They also interview a list of other interesting characters, from FBI agents who helped bring down the cult to Oregonians (including the former mayor of Antelope) who lived near the cult’s camp.

The result is a story that’s almost too twisted to be true. “This could’ve been a narrative feature that someone scripted and produced… a film that’s well thought out and well played instead of a story that was stumbled upon,” says Emmy award-winning supervising sound editor Brent Kiser of LA’s Unbridled Sound. He and his sound editing team are recipients of one of the show’s five Emmy noms for their work on Wild Wild Country.

“Creatively, we didn’t see Wild Wild Country as a documentary per se,” explains Kiser. “We wanted it to be cinematic so that, in a way, you couldn’t believe this was real life because it was too crazy. The sound needed to reflect that.”

The Dialog
One way they achieved a feature film feel was by processing the interview dialog so that it didn’t sound like a stereotypical talking-head documentary. “We didn’t want the dialog to have that very dry, close sound you get with lavalier microphones,” says Kiser.

Years ago, while working on a documentary called Tiny: A Story About Living Small (2013), dialog editor Elliot Thompson discovered that stripping all the noise from the production dialog also stripped out all the character and nuances of a location. It made the dialog feel impersonal, as though it was talking at the audience instead of to them.

“That worked well on Tiny because you’re in small, close spaces, but on Wild Wild Country we wanted to do the opposite,” says Kiser. “We wanted to give the interview dialog a little bit of life, so we added in reverb using Audio Ease’s Altiverb. This gave the dialog a smoother, softer feel that helps the audience feel the room, feel the environment and feel like they’re there. Subsequently, this polish gave the dialog a cinematic feel. It felt more like a story being told and less like news.”
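
For the technically curious: Altiverb is a convolution reverb, and the core technique it implements is convolving the dry dialog with a room impulse response, then blending a little of the wet signal back under the dry. A minimal sketch in Python follows; the file names and the 15 percent wet mix are hypothetical, not the settings used on the show.

```python
# Minimal sketch of convolution reverb, the technique behind tools like
# Altiverb: convolve the dry dialog with a room impulse response (IR),
# then blend a little of the wet signal back under the dry one.
# File names and the 15 percent wet mix are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("interview_dialog.wav")  # mono WAV assumed
_, ir = wavfile.read("room_ir.wav")               # same sample rate assumed
dry = dry.astype(np.float32) / 32768.0
ir = ir.astype(np.float32) / 32768.0

wet = fftconvolve(dry, ir)[:len(dry)]             # trim the reverb tail
wet /= max(np.max(np.abs(wet)), 1e-9)             # normalize the wet level

mix = 0.85 * dry + 0.15 * wet                     # mostly dry, a touch of room
wavfile.write("dialog_with_room.wav", rate,
              (np.clip(mix, -1, 1) * 32767).astype(np.int16))
```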

For the news footage from the ‘80s, which includes segments by former NBC news anchor Tom Brokaw, Kiser went for an unpolished approach. “The material hadn’t been maintained, and there were these weird VHS bleeds; the audio had a huge hum. Initially, we tried to clean it up a bit, but in the end we decided to just let it roll because that’s how it is,” he says.

Replacing Some Sound
The sound of the news footage set the tone for the rest of the archival material. Kiser and his team replaced all the sound for the B-roll shots that didn’t have someone talking on-camera. They did the same for footage from the Rajneeshees, who shot tons of footage for their promotional videos. “Every footstep, every gunshot, we covered all that. We basically replaced it all.”

For example, there’s footage of the Rajneeshees all dressed in red, walking through the town of Antelope, Oregon. Kiser and his team replaced all the sound there, adding in wind, footsteps and other elements you’d expect to hear. “We wanted to keep those moments feeling very real and very voyeuristic,” says Kiser. “By ‘real,’ I mean our idea of what archival material should sound like.”

In order for the sound to feel “real” it had to sound dirty, just like the archival news footage. Sound effects editors Jacob Flack and Danielle Price mined the libraries at Unbridled Sound in search of effects that were old, noisy and poorly recorded — effects that wouldn’t normally be useful today. Kiser says, “The old Hollywood Edge and BBC libraries were perfect! The wind sounds that are rumbly and distorted — those were just perfect.”

They also recorded new sounds when needed, but those fresh, clean recordings had to match the gritty archival material. Kiser tried adding futz processing via Audio Ease’s Speakerphone, but ultimately it wasn’t giving him the desired result. “So we tried cranking the Pro Tools SansAmp PSA-1 plug-in on it, and we also used the Waves Cobalt Saphira harmonic shaper plug-in. This helped the new recordings to feel warm and analog in the right way. We would bus all the ‘archival’ sound through an AUX channel with those two plug-ins for overall processing.”
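
As a rough illustration of what “futzing” a clean recording involves in general (this is not a recreation of the SansAmp/Cobalt Saphira chain), the usual moves are narrowing the bandwidth, adding harmonic saturation and layering in hum and hiss. A minimal Python sketch, with hypothetical file names:

```python
# Rough sketch of "futzing" a clean recording so it sits with archival
# material: band-limit it, push it into soft saturation, then add hum
# and hiss. This shows the general idea only -- it is not a recreation
# of the SansAmp/Cobalt Saphira chain. File names are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, clean = wavfile.read("new_wind_recording.wav")
x = clean.astype(np.float32) / 32768.0

sos = butter(4, [120, 5000], btype="bandpass", fs=rate, output="sos")
x = sosfilt(sos, x)                               # narrow the bandwidth

x = np.tanh(3.0 * x) / np.tanh(3.0)               # gentle harmonic saturation

t = np.arange(len(x)) / rate
x += 0.01 * np.sin(2 * np.pi * 60.0 * t)          # 60 Hz mains hum
x += 0.005 * np.random.randn(len(x))              # tape-style hiss

wavfile.write("wind_futzed.wav", rate,
              (np.clip(x, -1, 1) * 32767).astype(np.int16))
```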

Some sounds couldn’t be replaced, specifically the Rajneeshee chants and singing. Those were pulled from already-published sources, like other documentaries, due to rights issues. Kiser explains, “That was important because the Rajneeshees, a.k.a. sannyasins, are still around. You can still go to India and find them. And Osho (Bhagwan Shree Rajneesh) is the yoga guy. If you dive into any hardcore yoga philosophy or theology, he’s written all about it and he’s quoted all the time.”

Knowing Wild Wild Country was going to play theatrically at Sundance, Kiser and his team were able to work with the 5.1 surround field — a rare opportunity in the documentary world. They chose to keep the sound on the front wall to maintain that archival feel, but when they wanted to kick up the excitement — for example, during the helicopter flyovers of Rajneeshpuram — they pulled the sound into the surrounds. “We used whooshes and sound design elements to make that feel bigger, more cinematic than the other archival material.”

The Music
Another prominent feature in the soundtrack was the music, composed by musician Brocker Way (brother to the filmmakers). “It’s basically wall-to-wall, and it’s amazing. You can watch all seven hours and not be annoyed by the music,” says Kiser. Interestingly, the music wasn’t composed to picture. Brocker Way wrote four- to five-minute cues that were later edited to picture. “We’d get the edited music tracks and make some adjustments, too. The result was a soundtrack that was perfect for this project.”

The biggest thing Kiser was worried about (knowing the film festival audience was going to watch a seven-hour documentary in its entirety) was boredom. That turned out to be a non-issue. The story itself is exciting. “And as far as the sound goes, the dialog feels warm and accessible through the whole film, so it feels like a story. A lot of times you’ll hear the sound design and music ramping up towards the end of each part, so that it would tease and build into the next one. It worked. At Sundance, they kept the theater at 40 to 50 people for all seven hours,” reports Kiser.

What’s most amazing about the post sound process on Wild Wild Country is that Unbridled Sound had just three weeks to get it all done, from edit to final mix. “We’re only a five-person crew here,” says Kiser. “Not only were we working on Wild Wild Country, but we had another Sundance film too, called An Evening with Beverly Luff Linn. And we were working on a series for Adult Swim called Dream Corp, LLC. So, it was intense.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Eleven mixes two Jurassic-themed Target spots

Jeff Payne, founder/mixer at Santa Monica’s audio post studio Eleven, helped bring dinosaurs to life — well, kind of — for two new Target spots, Dino Clash and Giant Steps. The spots coincided with the recent release of Jurassic World: Fallen Kingdom.

Dino Clash begins with the shadow of a dinosaur roaming a toy city, where its shadow meets another dino shadow. We then see a boy and a girl, each holding a toy dino, giggling and roaring. The girl starts singing the Jurassic Park theme song, which is then taken over by the original music. It plays while the Jurassic World logo appears, followed by the Target logo and the tagline, “Jurassic World gear is here.”

Giant Steps also begins with the shadow of a dinosaur, this time roaming a toy suburban street. The camera lands on a toy car that is suddenly crushed by a boy’s tiny foot. He is wearing a dino mask and roaring; then his roar morphs into that of a “real” dinosaur.

Eleven was brought in after the final picture edit had been completed. While the sound design and music for the spots were done by Antfood, Payne says he did “a bit of additional sound design to ‘plus’ the spots. I added more aggressive booms for footsteps, remixed the city sounds, added some ambience under the kids scene and did some editing of the sound design from what was provided.

“The sound design splits were a challenge because the backgrounds were already pre-mixed (i.e., the city ambience was married with a siren, etc.), so I had to do some cutting around to have better control over the individual sounds.”

Payne says the main challenge of the job was editing the backend music to match the different art cards that were changing while they were mixing. “The goal is always to make the ‘hits’ hit correctly on the visual as well as making it in time musically.”

Payne used an Avid Pro Tools | HDX system with an Avid S6 console to complete the job.

Sony creates sounds for Director X’s Superfly remake

Columbia Pictures’ Superfly is a reimagining of Gordon Parks Jr.’s classic 1972 blaxploitation film of the same name. Helmed by Director X and written by Alex Tse, this new version transports the story of Priest from Harlem to modern-day Atlanta.

Steven Ticknor

Superfly’s sound team from Sony Pictures Post Production Services — led by supervising sound editor Steven Ticknor, supervising sound editor and re-recording mixer Kevin O’Connell, re-recording mixer Greg Orloff and sound designer Tony Lamberti — was tasked with bringing the sonic elements of Priest’s world to life. That included everything from building soundscapes for Atlanta’s neighborhoods and nightclubs to supplying the sounds of fireworks, gun battles and car chases.

“Director X and Joel Silver — who produced the movie alongside hip-hop superstar Future, who also curated and produced the film’s soundtrack — wanted the film to have a big sound, as big and theatrical as possible,” says Ticknor. “The film is filled with fights and car chases, and we invested a lot of detail and creativity into each one to bring out their energy and emotion.”

One element that received special attention from the sound team was the Lexus LC500 that Priest (Trevor Jackson) drives in the film. As the sports car was brand new, no pre-recorded sounds were available, so Ticknor and Lamberti dispatched a recording crew and professional driver to the California desert to capture every aspect of its unique engine sounds, tire squeals, body mechanics and electronics. “Our job is to be authentic, so we couldn’t use a different Lexus,” Ticknor explains. “It had to be that car.”

In one of the film’s most thrilling scenes, Priest and the Lexus LC500 are involved in a high-speed chase with a Lamborghini and a Cadillac Escalade. Sound artists added to the excitement by preparing sounds for every screech, whine and gear shift made by the cars, as well as explosions and other events happening alongside them and movements made by the actors behind the wheels.

It’s all much larger than life, says Ticknor, but grounded in reality. “The richness of the sound is a result of all the elements that go into it, the way they are recorded, edited and mixed,” he explains. “We wanted to give each car its own identity, so when you cut from one car revving to another car revving, it sounds like they’re talking to each other. The audience may not be able to articulate it, but they feel the emotion.”

Fights received similarly detailed treatment. Lamberti points to an action sequence in a barber shop as one of several scenes rendered partially in extreme slow motion. “It starts off in realtime before gradually shifting to slo-mo through the finish,” he says. “We had fun slowing down sounds, and processing them in strange and interesting ways. In some instances, we used sounds that had no literal relation to what was happening on the screen but, when slowed down, added texture. Our aim was to support the visuals with the coolest possible sound.”

Re-recording mixing was accomplished in the 125-seat Anthony Quinn Theater on an Avid S6 console with O’Connell handling dialogue and music and Orloff tackling sound effects and Foley. Like its 1972 predecessor, which featured an iconic soundtrack from Curtis Mayfield, the new film employs music brilliantly. Atlanta-based rapper Future, who shares producer credit, assembled a soundtrack that features Young Thug, Lil Wayne, Miguel, H.E.R. and 21 Savage.

“We were fortunate to have, in Kevin and Greg, a pair of Academy Award-winning mixers who did a brilliant job of blending music, dialogue and sound effects,” says Ticknor. “The mix sessions were very collaborative, with a lot of experimentation to build intensity and make the movie feel bigger than life. Everyone was contributing ideas and challenging each other to make it better, and it all came together in the end.”

Sim Post NY expands audio offerings, adds five new staffers

Sim Post in New York is in growth mode. They recently expanded their audio for TV and film services and boosted their post team with five new hires. Following the recent addition of a DI theater to its New York location, Sim is building three audio suites, a voiceover room and support space for the expanded audio capabilities.

Primetime Emmy Award-winner Sue Pelino joins Sim as a senior re-recording mixer. Over her career, Pelino has been nominated for 10 Primetime Emmy Awards, most recently winning her third Emmy in 2017 for Outstanding Sound Mixing for her work on the 2017 Rock & Roll Hall of Fame Induction Ceremony (HBO). Project highlights include performance series such as VH1 Sessions at West 54th, Tony Bennett: An American Classic, Alicia Keys — Unplugged, Tupac: Resurrection and Elton John: The Red Piano.

Dan Ricci also joins the Sim audio department as a re-recording mixer. A graduate of the Berklee College of Music, he previously spent time at Sony Music, and his credits include Comedians in Cars Getting Coffee and the Grammy-nominated Netflix special Jerry Before Seinfeld. Ricci has worked extensively with Dolby Atmos and the immersive technologies involved in VR content creation.

Ryan Schumer rounds out Sim New York’s audio department as an assistant audio engineer. Schumer has a bachelor’s degree in Jazz Commercial Music, with a concentration in audio recording technology, from Five Towns College on Long Island.

Stephanie Pacchiano joins Sim as a finishing producer, following a 10-year stint at Broadway Video where she provided finishing and delivery services for a robust roster of clients. Highlights include Jerry Seinfeld’s Comedians in Cars Getting Coffee, Atlanta, Portlandia, Documentary Now! and delivering Saturday Night Live to over 25 domestic and international platforms.

Kassie Caffiero joins Sim as VP, business development, east coast sales. She brings with her over 25 years of post experience. A graduate of Queens College with a degree in communication arts, Caffiero began her post career in the mid-1980s working on CBS TV series. Her experience managing the scheduling, operations and sales departments at major post facilities led her to the role of VP of post production at Sony Music Studios in New York City for 10 years. This was followed by a stint at Creative Group in New York for five years and, most recently, Broadway Video, also in New York, for six years.

Sim Post, a division of Sim, provides end-to-end solutions for TV and feature film production and post production in LA, Vancouver, Toronto, New York and Atlanta.

The score for YouTube Red’s Cobra Kai pays tribute to original Karate Kid

By Jennifer Walden

In the YouTube Red comedy series Cobra Kai, Daniel LaRusso (Ralph Macchio), the young hero of the Karate Kid movies, has grown up to be a prosperous car salesman, while his nemesis Johnny Lawrence (William Zabka) just can’t seem to shake the loser label he earned long ago. Johnny can’t hold down his handyman job. He lives alone in a dingy apartment, and his personality hasn’t benefited from maturity at all. He lives a very sad reality until one day he finds himself sticking up for a kid being bullied, and that redeeming bit of character makes you root for him. It’s an interesting dynamic that the series’ writers/showrunners have crafted, and it works.

L-R: Composers Leo Birenberg and Zach Robinson

Fans of the 1980’s film franchise will appreciate the soundtrack of the new Cobra Kai series. Los Angeles-based composers Leo Birenberg and Zach Robinson were tasked with capturing the essence of both composer Bill Conti’s original film scores and the popular music tracks that also defined the sound of the films.

To find that Karate Kid essence, Birenberg and Robinson listened to the original films and identified what audiences were likely latching onto sonically. “We concluded that it was mostly a color palette connection that people have. They hear a certain type of orchestral music with a Japanese flute sound, and they hear ‘80s rock,” says Birenberg. “It’s that palette of sounds that people connect with more so than any particular melody or theme from the original movies.”

Even though Conti’s themes and melodies for Karate Kid don’t provide the strongest sonic link to the films, Birenberg and Robinson did incorporate a few of them into their tracks at appropriate moments to create a feeling of continuity between the films and the series. “For example, there were a couple of specific Japanese flute phrases that we redid. And we found a recurring motif of a simple pizzicato string melody,” explains Birenberg. “It’s so simple that it was easy to find moments to insert it into our cues. We thought that was a really cool way to tie everything together and make it feel like it is all part of the same universe.”

Birenberg and Robinson needed to write a wide range of music for the show, which can be heard en masse on the Cobra Kai OST. There are the ’80s rock tracks that take over for licensed songs by bands like Poison and The Alan Parsons Project. This direction, as heard on the tracks “Strike First” and “Quiver,” covered the score for Johnny’s character.

The composers also needed to write orchestral tracks that incorporated Eastern influences, like the Japanese flutes, to cover Daniel as a karate teacher and to comment on his memories of Miyagi. A great example of this style is called, fittingly, “Miyagi Memories.”

There’s a third direction that Birenberg and Robinson covered for the new Cobra Kai students. “Their sound is a mixture of modern EDM and dance music with the heavier ‘80s rock and metal aesthetics that we used for Johnny,” explains Robinson. “So it’s like Johnny is imbuing the new students with his musical values. This style is best represented in the track ‘Slither.’”

Birenberg and Robinson typically work as separate composers, but they’ve collaborated on several projects before Cobra Kai. What makes their collaborations so successful is that their workflows and musical aesthetics are intrinsically similar. Both use Steinberg’s Cubase as their main DAW, while running Ableton Live in ReWire mode. Both like to work with MIDI notes while composing, as opposed to recording and cutting audio tracks.

Says Birenberg, “We don’t like working with audio from the get-go because TV and film are such a notes-driven process. You’re not writing music as much as you are rewriting it to specification and creative input. You want to be able to easily change every aspect of a track without having to dial in the same guitar sound or re-record the toms that you recorded yesterday.”

Virtual Instruments
For Cobra Kai, they first created demo songs using MIDI and virtual instruments. Drums and percussion sounds came from XLN Audio’s Addictive Drums. Spectrasonics Trilian was used for bass lines, and Keyscape and Omnisphere 2 provided many soft-synth and keyboard sounds. Virtual guitar sounds came from MusicLab’s RealStrat and RealLPC, as well as Orange Tree and Ilya Efimov virtual instrument libraries. The orchestral sections were created using Native Instruments Kontakt, with samples coming from companies such as Spitfire, Cinesamples, Cinematic Strings and Orchestral Tools.

“Both Zach and I put a high premium on virtual instruments that are very playable,” reports Birenberg. “When you’re in this line of work, you have to work superfast and you don’t want a virtual instrument that you have to spend forever tweaking. You want to be able to just play it in so that you can write quickly.”

For the final tracks, they recorded live guitar, bass and drums on every episode, as well as Japanese flute and small percussion parts. For the season finale, they recorded a live orchestra. “But,” says Birenberg, “all the orchestra and some Japanese percussion you hear earlier in the series, for the most part, are virtual instruments.”

Live Musicians
For the live orchestra, Robinson says they wrote 35 minutes of music in six days and immediately sent that to get orchestrated and recorded across the world with the Prague Radio Symphony Orchestra. The composing team didn’t even have to leave Los Angeles. “They sent us a link to a private live stream so we could listen to the session as it was going on, and we typed notes to them as we were listening. It sounds crazy but it’s pretty common. We’ve done that on numerous projects and it always turns out great.”

When it comes to dividing up the episodes — deciding who should score what scenes — the composing team likes to “go with gut and enthusiasm,” explains Birenberg. “We would leave the spotting session with the showrunners, and usually each of us would have a few ideas for particular spots.”

Since they don’t work in the same studio, the composers would split up and start work on the sections they chose. Once they had an idea down, they’d record a quick video of the track playing back to picture and share that with the other composer. Then they would trade tracks so they each got an opportunity to add in parts. Birenberg says, “We did a lot of sending iPhone videos back and forth. If it sounds good over an iPhone video, then it probably sounds pretty good!”

Both composers have different and diverse musical backgrounds, so they both feel comfortable diving right in and scoring orchestral parts or writing bass lines, for instance. “For the scope of this show, we felt at home in every aspect of the score,” says Birenberg. “That’s how we knew this show was for both of us. This score covers a lot of ground musically, and that ground happened to fit things that we understand and are excited about.” Luckily, they’re both excited about ‘80s rock (particularly Robinson) because writing music in that style effectively isn’t easy. “You can’t fake it,” he says.

Recreating ‘80s Rock
A big part of capturing the magic of ‘80s rock happened in the mix. On the track “King Cobra,” mix engineer Sean O’Brien harnessed the ‘80s hair metal style by crafting a drum sound that evoked Mötley Crüe and Bon Jovi. “I wanted to make the drums as bombastic and ‘80s as possible, with a really snappy kick drum and big reverbs on the kick and snare,” says O’Brien.

Using Massey DRT, a drum sample replacement plug-in for Avid Pro Tools, he swapped out the live drum parts with drum samples. Then on the snare, he added a gated reverb using Valhalla VintageVerb. He also used Valhalla Room to add a short plate sound to thicken up the kick and snare drums.
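For readers curious about what gating a reverb actually does, here is a minimal Python sketch of the general idea — a big decaying tail that is slammed shut shortly after each hit. It is an illustration of the technique, not O’Brien’s Valhalla chain, and all the parameter values are invented for the demo.

```python
import numpy as np

def gated_reverb(dry, sr, decay_s=1.2, hold_s=0.25, mix=0.5, seed=0):
    """Illustrative '80s gated reverb: huge tail, cut off abruptly after each hit.

    `dry` is a mono float array in [-1, 1]; `sr` is the sample rate.
    """
    rng = np.random.default_rng(seed)
    n = int(sr * decay_s)
    # Crude reverb impulse response: exponentially decaying noise.
    ir = rng.standard_normal(n) * np.exp(-6.0 * np.arange(n) / n)
    wet = np.convolve(dry, ir)[:len(dry)]
    wet /= np.max(np.abs(wet)) + 1e-12
    # Open the gate wherever the dry signal is loud, hold briefly, then cut.
    hot = (np.abs(dry) > 0.1 * np.max(np.abs(dry))).astype(float)
    gate = np.convolve(hot, np.ones(int(sr * hold_s)))[:len(dry)] > 0
    return (1.0 - mix) * dry + mix * wet * gate
```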

To get the toms to match the cavernous punchiness of the kick and snare, O’Brien augmented the live toms with compression and EQ. “I chopped up the toms so there wasn’t any noise in between each hit and then I sent those to the nonlinear short reverbs in Valhalla Room,” he says. “Next, I did parallel compression using the Waves SSL E-Channel plug-in to really squash the tom hits so they’re big and in your face. With EQ, I added more top end than I normally would to help the toms compete with the other elements in the mix. You can make the close mics sound really crispy with those SSL EQs.”
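Parallel compression, the technique O’Brien describes, blends the untouched signal with a heavily crushed copy of itself, so transients survive while the body of the sound gets denser. A bare-bones sketch of the idea follows — a simple peak-follower compressor with made-up settings, not the Waves SSL E-Channel:

```python
import numpy as np

def parallel_compress(x, sr, threshold=0.1, ratio=8.0, blend=0.5):
    """Blend the dry signal with a hard-squashed copy of itself.

    `x` is a mono float array in [-1, 1]; attack is instant, release ~10 ms.
    """
    env = np.zeros_like(x)
    release = np.exp(-1.0 / (0.010 * sr))
    level = 0.0
    for i, s in enumerate(np.abs(x)):          # peak envelope follower
        level = max(s, release * level)
        env[i] = level
    gain = np.ones_like(x)
    over = env > threshold
    gain[over] = (threshold + (env[over] - threshold) / ratio) / env[over]
    return x + blend * (x * gain)              # dry plus the crushed copy
```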

Next, he bussed all the drum tracks to a group aux track, which had a Neve 33609 plug-in by UAD and a Waves C4 multi-band compressor “to control the whole drum kit after the reverbs were laid in to make sure those tracks fit in with the other instruments.”

Sean O’Brien

On “Slither,” O’Brien also focused on the drums, but since this track is more ‘80s dance than ‘80s rock, O’Brien says he was careful to emphasize the composers’ ‘80s drum machine sounds (rather than the live drum kit), because that is where the character of the track was coming from. “My job on this track was to enhance the electronic drum sounds, to give the drum machine focus. I used UAD’s Neve 1081 plug-in on the electronic drum elements to brighten them up.”

“Slither” also features taiko drums, which make the track feel cinematic and big. O’Brien used Soundtoys Devil-Loc to make the taiko drums feel more aggressive, and added distortion using Decapitator from Soundtoys to help them cut through the other drums in the track. “I think the drums were the big thing that Zach [Robinson] and Leo [Birenberg] were looking to me for because the guitars and synths were already recorded the way the composers wanted them to sound.”

The Mix
Mix engineer Phil McGowan, who was responsible for mixing “Strike First,” agrees. He says, “The ‘80s sound for me was really based on drum sounds, effects and tape saturation. Most of the synth and guitar sounds that came from Zach and Leo were already very stylized so there wasn’t a whole lot to do there. Although I did use a Helios 69 EQ and Fairchild compressor on the bass along with a little Neve 1081 and Kramer PIE compression on the guitars, which are all models of gear that would have been used back then. I used some Lexicon 224 and EMT 250 on the synths, but otherwise there really wasn’t a whole lot of processing from me on those elements.”

Phil McGowan’s ‘Strike First’ Pro Tools session.

To get an ‘80s gated reverb sound for the snare and toms on “Strike First,” McGowan used an AMS RMX16 nonlinear reverb plug-in in Pro Tools. For bus processing, he mainly relied on a Pultec EQ, adding a bit of punch with the classic “Pultec Low End Trick” (boosting and attenuating at the same frequency), plus a little bump at 8k for some extra snap. Next in line, he used an SSL G-Master buss compressor before going into UAD’s Studer A800 tape plug-in set to 456 tape at 30 ips and calibrated to +3 dB.

“I did end up doing some parallel compression with a Distressor plug-in by Empirical Labs, which was not around back then, but it’s my go-to parallel compressor and it sounded fine, so I left it in my template. I also used a little channel EQ from FabFilter Pro-Q2 and the Neve 88RS Channel Strip,” concludes McGowan.
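The “Pultec Low End Trick” McGowan mentions works because the hardware’s boost and attenuation curves are deliberately mismatched: boosting and cutting the “same” frequency leaves a net bump below it and a slight dip just above it — punch without mud. Here is a rough numerical illustration, modeling the two controls as first-order shelves with different corner frequencies; the corner and gain values are invented for the demo, not McGowan’s settings.

```python
import numpy as np

def shelf_db(f, f_corner, gain_db):
    """Magnitude in dB of a first-order shelf, H(s) = (s + g*wc) / (s + wc)."""
    g = 10 ** (gain_db / 20)
    r = (f / f_corner) ** 2
    return 10 * np.log10((r + g * g) / (r + 1.0))

f = np.array([30.0, 60.0, 120.0, 240.0, 500.0, 1000.0])
boost = shelf_db(f, 60.0, +5.0)   # boost shelf at the selected frequency
cut = shelf_db(f, 250.0, -3.0)    # attenuation curve reaches higher up
for fi, b, c in zip(f, boost, cut):
    print(f"{fi:6.0f} Hz  boost {b:+5.2f} dB  cut {c:+5.2f} dB  net {b + c:+5.2f} dB")
```

With these settings the printout shows the net curve sitting about a decibel and a half up at 30 Hz while dipping slightly below flat through the low mids before returning to zero.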


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Cinema Audio Society sets next awards date and timeline

The Cinema Audio Society (CAS) will be holding its 55th Annual CAS Awards on Saturday, February 16, 2019 at the InterContinental Los Angeles Downtown in the Wilshire Grand Ballroom. The CAS Awards recognize outstanding sound mixing in film and television as well as outstanding products for production and post. Recipients for the CAS Career Achievement Award and CAS Filmmaker Award will be announced later in the year.

The InterContinental Los Angeles Downtown is a new venue for the awards. They were held at the Omni Los Angeles Hotel at California Plaza last year.

The timeline for the awards is as follows:
• Entry submission form will be available online on the CAS website on Thursday, October 11, 2018.
• Entry submissions are due online by 5:00pm PST on Thursday, November 15, 2018.
• Outstanding product entry submissions are due online by 5:00pm PST on Friday, December 7, 2018.
• Nomination ballot voting begins online on Thursday, December 13, 2018.
• Nomination ballot voting ends online at 5:00pm PST on Thursday, January 3, 2019.
• Final nominees in each category will be announced on Tuesday, January 8, 2019.
• Final voting begins online on Thursday, January 24, 2019.
• Final voting ends online at 5:00pm PST on Wednesday, February 6, 2019.

Hobo’s Chris Stangroom on providing Quest doc’s sonic treatment

Following a successful film fest run that included winning a 2018 Independent Spirit Award and being named a 2017 official selection at Sundance, the documentary Quest is having its broadcast premiere on PBS this month as part of their POV series.

Chris Stangroom

Filmed with vérité intimacy for nearly a decade, Quest follows the Rainey family, who live in North Philadelphia. The story begins at the start of the Obama presidency, with Christopher “Quest” Rainey and his wife, Christine (“Ma Quest”), raising a family while also nurturing a community of hip-hop artists in their home music studio. It’s a safe space where all are welcome, but as the doc shows, this creative sanctuary can’t always shield them from the strife that grips their neighborhood.

New York-based audio post house Hobo, which is no stranger to indie documentary work (Weiner, Amanda Knox, Voyeur), lent its sonic skills to the film, including the entire sound edit (dialogue, effects and music), sound design, 5.1 theatrical and broadcast mixes.

We spoke with Hobo’s Chris Stangroom, supervising sound editor/re-recording mixer on the project, about the challenges he and the Hobo team faced in their quest on this film.

Broadly speaking, what did you and Hobo do on this project? How did you get involved?
We handled every aspect of the audio post on Quest for its Sundance Premiere, theatrical run and broadcast release of the film on POV.

This was my first time working with director Jonathan Olshefski, and I loved every minute of it. The entire team on Quest was focused on making this film better with every decision, and Jon was the final voice on everything. We were connected through my friend producer Sabrina Gordon, who I had previously worked with on the film Undocumented. It was a pretty quick turn of events, as I think I got the first call about the film Thanksgiving weekend of 2016. We started working on the film the day after Christmas that year and finished the entire sound edit and mix two weeks later, in time for the 2017 Sundance Film Festival.

How important is the audio mix/sound design in the overall cinematic experience of Quest? What was most important to Olshefski?
The sound of a film is half of the experience. I know it sounds cliché, but after years of working with clients on improving their films, the importance of a good sound mix and edit can’t be overstated. I have seen films come to life by simply adding Foley to a few intimate moments in a scene. It seems like such a small detail in the grand scheme of a film’s soundtrack, but feeling that intimacy with a character connects us to them in a visceral way.

Since Quest was a film not only about the Rainey family but also their neighborhood of North Philly, I spent a lot of time researching the sounds of Philadelphia. I gathered a lot of great references and insight from friends who had grown up in Philly, like the sounds of “ghetto birds” (helicopters), the motorbikes that are driven around constantly and the SEPTA buses. As Jon and I spoke about the film’s soundtrack, those kinds of sounds and ideas were exactly what he was looking for when we were out on the streets of North Philly. It created an energy to the film that made it vivid and alive.

The film was shot over a 10-year period. How did that prolonged production affect the audio post? Were there format issues or other technical issues you needed to overcome?
It presented some challenges, but luckily Jon always recorded with a lav or a boom on his camera for the interviews, so matching their sound qualities was easier than if he had just been using a camera mic. There are probably half a dozen “narrated” scenes in Quest that are built from interview sound bites, so bouncing around from interviews 10 years apart was tricky and required a lot of attention to detail.

In addition, Quest‘s phenomenal editor Lindsay Utz was cutting scenes up until the last day of our sound mix. So even once we got an entire scene sounding clean and balanced, it would then change and we’d have to add a new line from some other interview during that decade-long period. She definitely kept me on my toes, but it was all to make the film better.

Music is a big part of the family’s lives. Did the fact that they run a recording studio out of their home affect your work?
Yes. The first thing I did once we started on the film was to go down to Quest’s studio in Philly and record “impulse responses” (IRs) of the space, essentially recording the “sound” of a room or space. I wanted to bring that feeling of the natural reverbs in his studio and home to the film. I captured the live room where the artists would be recording, his control room in the studio and even the hallway leading to the studio with doors opened and closed, because sound changes and becomes more muffled as more doors are shut between the microphone and the sound source. The IRs helped me add incredible depth and the feeling that you were there with them when I was mixing the freestyle rap sessions and any scenes that took place in the home and studio.

Jon and I also grabbed dozens of tracks that Quest had produced over the years, so that we could add them into the film in subtle ways, like when a car drives by or from someone’s headphones. It’s those kinds of little details that I love adding, like Easter eggs that only a handful of us know about. They make me smile whenever I watch a film.
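The impulse-response technique Stangroom describes is, at heart, convolution: the dry recording is convolved with the captured “sound” of the room, which is conceptually what a convolution reverb like Altiverb does with custom IRs. A bare-bones sketch of that idea in Python follows — the file names are hypothetical placeholders, and mono 16-bit WAVs are assumed:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; any two mono WAVs at the same rate will do.
sr_ir, ir = wavfile.read("studio_live_room_ir.wav")
sr_dry, dry = wavfile.read("adr_line_dry.wav")
assert sr_ir == sr_dry, "resample first if the sample rates differ"

ir = ir.astype(np.float64) / np.max(np.abs(ir))
dry = dry.astype(np.float64) / np.max(np.abs(dry))

wet = fftconvolve(dry, ir)            # the dry take "played" in the room
wet /= np.max(np.abs(wet))
mix = 0.35                            # wet/dry balance, to taste
out = (1 - mix) * np.pad(dry, (0, len(ir) - 1)) + mix * wet
wavfile.write("adr_line_in_room.wav", sr_dry, (out * 32767).astype(np.int16))
```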

Any particular scene or section or aspect of Quest that you found most challenging or interesting to work on?
The scenes involving Quest’s daughter PJ’s injury, her stay in the hospital and her return home came with a lot of challenges. We used sound design and the score from the amazing composer T. Griffin to create the emotional arc that something dangerous and life-changing was about to happen.

Once we were in the hospital, we wanted the sound of everything to be very, very quiet. There is a scene in which Quest is whispering to PJ while she is in pain and trying to recover. The actual audio from that moment had a few nurses and women in the background having a loud conversation and occasionally laughing. It took the viewer immediately away from the emotions that we were trying to connect with, so we ended up scrapping that entire audio track and recreated the scene from scratch. Jon actually ended up getting in the sound booth and did some very low and quiet whispering of the kinds of phrases Quest said to his daughter. It took a couple hours to finesse that scene.

Lastly, there’s the scene when PJ gets out of the hospital and returns to a world that didn’t stop while she was recovering. We spent a lot of time shifting back and forth between the reality of what happened and the emotional journey PJ was going through trying to regain normalcy in her life. There was a lot of attention to detail in the mix on that scene because it had to be delivered correctly in order to not break the momentum that had been created.

What was the key technology you used on the project?
Avid Pro Tools, iZotope RX 5 Advanced, Audio Ease Altiverb, Zoom H4n and a matched stereo pair of sE Electronics sE1a condenser mics.

Who else at Hobo was involved in Quest?
The entire Hobo team really stepped up on this project — namely our sound effects editors Stephen Davies, Diego Jimenez and Julian Angel; Foley artist Oscar Convers; and dialogue editor Jesse Peterson.

Netflix’s Lost in Space: New sounds for a classic series

By Jennifer Walden

Netflix’s Lost in Space series, a remake of the 1965 television show, is a playground for sound. In the first two episodes alone, the series introduces at least five unique environments, including an alien planet, a whole world of new tech — from wristband communication systems to medical analysis devices — new modes of transportation, an organic-based robot lifeform and its correlating technologies, a massive explosion in space and so much more.

It was a mission not easily undertaken, but if anyone could manage it, it was four-time Emmy Award-winning supervising sound editor Benjamin Cook of 424 Post in Culver City. He’s led the sound teams on series like Starz’s Black Sails, Counterpart and Magic City, as well as HBO’s The Pacific, Rome and Deadwood, to name a few.

Benjamin Cook

Lost in Space was a reunion of sorts for members of the Black Sails post sound team. Making the jump from pirate ships to spaceships were sound effects editors Jeffrey Pitts, Shaughnessy Hare, Charles Maynes, Hector Gika and Trevor Metz; Foley artists Jeffrey Wilhoit and Dylan Tuomy-Wilhoit; Foley mixer Brett Voss; and re-recording mixers Onnalee Blank and Mathew Waters.

“I really enjoyed the crew on Lost in Space. I had great editors and mixers — really super-creative, top-notch people,” says Cook, who also had help from co-supervising sound editor Branden Spencer. “Sound effects-wise there was an enormous amount of elements to create and record. Everyone involved contributed. You’re establishing a lot of sounds in those first two episodes that are carried on throughout the rest of the season.”

Soundscapes
So where does one begin on such a sound-intensive show? The initial focus was on the soundscapes, such as the sound of the alien planet’s different biomes, and the sound of different areas on the ships. “Before I saw any visuals, the showrunners wanted me to send them some ‘alien planet sounds,’ but there is a huge difference between Mars and Dagobah,” explains Cook. “After talking with them for a bit, we narrowed down some areas to focus on, like the glacier, the badlands and the forest area.”

For the forest area, Cook began by finding interesting snippets of animal, bird and insect recordings, like a single chirp or little song phrase that he could treat with pitching or other processing to create something new. Then he took those new sounds and positioned them in the sound field to build up beds of creatures to populate the alien forest. In that initial creation phase, Cook designed several tracks, which he could use for the rest of the season. “The show itself was shot in Canada, so that was one of the things they were fighting against — the showrunners were pretty conscious of not making the crash planet sound too Earthly. They really wanted it to sound alien.”

Another huge aspect of the series’ sound is the communication systems. The characters talk to each other through the headsets in their spacesuit helmets, and through wristband communications. Each family has their own personal ship, called a Jupiter, which can contact other Jupiter ships through shortwave radios. They use the same radios to communicate with their all-terrain vehicles, called rovers. Cook notes these ham radios had an intentional retro feel. The Jupiters can send/receive long-distance transmissions from the planet’s surface to the main ship, called Resolute, in space. The families can also communicate with their own Jupiter’s systems.

Each mode of communication sounds different and was handled differently in post. Some processing was handled by the re-recording mixers, and some was created by the sound editorial team. For example, in Episode 1 Judy Robinson (Taylor Russell) is frozen underwater in a glacial lake. Whenever the shot cuts to Judy’s face inside her helmet, the sound is very close and claustrophobic.

Judy’s voice bounces off the helmet’s face-shield. She hears her sister through the headset and it’s a small, slightly futzed speaker sound. The processing on both Judy’s voice and her sister’s voice sounds very distinct, yet natural. “That was all Onnalee Blank and Mathew Waters,” says Cook. “They mixed this show, and they both bring so much to the table creatively. They’ll do additional futzing and treatments, like on the helmets. That was something that Onna wanted to do, to make it really sound like an ‘inside a helmet’ sound. It has that special quality to it.”

On the flipside, the ship’s voice was a process that Cook created. Co-supervisor Spencer recorded the voice actor’s lines in ADR and then Cook added vocoding, EQ futz and reverb to sell the idea that the voice was coming through the ship’s speakers. “Sometimes we worldized the lines by playing them through a speaker and recording them. I really tried to avoid too much reverb or heavy futzing knowing that on the stage the mixers may do additional processing,” he says.
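Worldizing aside, the basic “through a small speaker” futz Cook alludes to usually comes down to band-limiting plus a little grit. Here is a minimal sketch of that general idea — a generic illustration with invented parameters, not 424 Post’s actual chain:

```python
import numpy as np
from scipy.signal import butter, lfilter

def speaker_futz(voice, sr, lo=300.0, hi=3400.0, drive=4.0):
    """Band-limit a voice to a small-speaker range, then soft-clip it.

    `voice` is a mono float array in [-1, 1]; `sr` is the sample rate.
    """
    b, a = butter(4, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    narrow = lfilter(b, a, voice)             # telephone-like bandwidth
    return np.tanh(drive * narrow) / np.tanh(drive)  # gentle saturation
```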

In Episode 1, Will Robinson (Maxwell Jenkins) finds himself alone in the forest. He tries to call his father, John Robinson (Toby Stephens, a Black Sails alumnus as well), via his wristband comm system, but the transmission is interrupted by a strange, undulating, vocal-like sound. It’s interference from an alien ship that had crashed nearby. Cook notes that the interference sound required thorough experimentation. “That was a difficult one. The showrunners wanted something organic and very eerie, but it also needed to be jarring. We did quite a few versions of that.”

For the main element in that sound, Cook chose whale sounds for their innate pitchy quality. He manipulated and processed the whale recordings using Symbolic Sound’s Kyma sound design workstation.
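One of the simplest manipulations for material like this is varispeed pitching: slow a recording down and it plays back lower and longer. The sketch below shows only that general idea — Kyma’s resynthesis tools go far beyond it — and the semitone value is arbitrary:

```python
from scipy.signal import resample_poly

def pitch_down(x, semitones=7):
    """Tape-style varispeed: stretch the waveform so playback is slower and
    lower. Pitch and duration shift together, which often suits sound design."""
    ratio = 2 ** (semitones / 12.0)       # 7 semitones down -> ~1.5x longer
    return resample_poly(x, int(round(1000 * ratio)), 1000)
```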

The Robot
Another challenge was the set of sounds created for Will Robinson’s Robot (Brian Steele). The Robot makes dying sounds, movement sounds and face-light sounds when it’s processing information. It can transform its body to look more human. It can use its hands to fire energy blasts or as a tool to create heat. It says, “Danger, Will Robinson,” and “Danger, Dr. Smith.” The Robot is sometimes a good guy and sometimes a bad guy, and the sound needed to cover all of that. “The Robot was a job in itself,” says Cook. “One thing we had to do was to sell emotion, especially for his dying sounds and his interactions with Will and the family.”

One of Cook’s trickiest feats was to create the proper sense of weight and movement for the Robot, and to portray the idea that the Robot was alive and organic but still metallic. “It couldn’t be earthly technology. Traditionally for robot movement you will hear people use servo sounds, but I didn’t want to use any kind of servos. So, we had to create a sound with a similar aesthetic to a servo,” says Cook. He turned to the Robot’s Foley sounds, and devised a processing chain to heavily treat those movement tracks. “That generated the basic body movement for the Robot and then we sweetened its feet with heavier sound effects, like heavy metal clanking and deeper impact booms. We had a lot of textures for the different surfaces like rock and foliage that we used for its feet.”

The Robot’s face lights change color to let everyone know if it’s in good-mode or bad-mode. But there isn’t any overt sound to emphasize the lights as they move and change. If the camera is extremely close-up on the lights, then there’s a faint chiming or tinkling sound that accentuates their movement. Overall though, there is a “presence” sound for the Robot, an undulating tone that’s reminiscent of purring when it’s in good-mode. “The showrunners wanted a kind of purring sound, so I used my cat purring as one of the building block elements for that,” says Cook. When the Robot is in bad-mode, the sound is anxious, like a pulsing heartbeat, to set the audience on edge.

It wouldn’t be Lost in Space without the Robot’s iconic line, “Danger, Will Robinson.” Initially, the showrunners wanted that line to sound as close to the original 1960s delivery as possible. “But then they wanted it to sound unique too,” says Cook. “One comment was that they wanted it to sound like the Robot had metallic vocal cords. So we had to figure out ways to incorporate that into the treatment.” The vocal processing chain used several tools, from EQ, pitching and filtering to modulation plug-ins like Waves Morphoder and Dehumaniser by Krotos. “It was an extensive chain. It wasn’t just one particular tool; there were several of them,” he notes.
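Ring modulation is one classic way to put “metallic vocal cords” into a voice: multiplying the signal by a sine carrier produces inharmonic sum and difference tones that read as metallic. The toy version below is offered only as an illustration of that one trick — as Cook says, the show’s actual chain was far more elaborate, and the carrier frequency here is arbitrary:

```python
import numpy as np

def metallic_voice(voice, sr, carrier_hz=110.0, mix=0.6):
    """Ring-modulate a voice against a sine carrier for a metallic timbre."""
    t = np.arange(len(voice)) / sr
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return (1 - mix) * voice + mix * voice * carrier
```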

There are other sound elements that tie into the original 1960s series. For example, when Maureen Robinson (Molly Parker) and husband John are exploring the wreckage of the alien ship they discover a virtual map room that lets them see into the solar system where they’ve crashed and into the galaxy beyond. The sound design during that sequence features sound material from the original show. “We treated and processed those original elements until they’re virtually unrecognizable, but they’re in there. We tried to pay tribute to the original when we could, when it was possible,” says Cook.

Other sound highlights include the Resolute exploding in space, which caused massive sections of the ship to break apart and collide. For that, Cook says contact microphones were used to capture the sound of tin cans being ripped apart. “There were so many fun things in the show for sound. From the first episode with the ship crash and it sinking into the glacier to the black hole sequence and the Robot fight in the season finale. The show had a lot of different challenges and a lot of opportunities for sound.”

Lost in Space was mixed in the Anthony Quinn Theater at Sony Pictures in 7.1 surround. Interestingly, the show was delivered in Dolby’s Home Atmos format. Cook explains, “When they booked the stage, the producers weren’t sure if we were going to do the show in Atmos or not. That was something they decided to do later so we had to figure out a way to do it.”

They mixed the show in Atmos while referencing the 7.1 mix and then played those mixes back in a Dolby Home Atmos room to check them, making any necessary adjustments and creating the Atmos deliverables. “Between updates for visual effects and music as well as the Atmos mixes, we spent roughly 80 days on the dub stage for the 10 episodes,” concludes Cook.

Behind the Title: Grey Ghost Music mix engineer Greg Geitzenauer

NAME: Greg Geitzenauer

COMPANY: Minneapolis-based Grey Ghost Music

CAN YOU DESCRIBE YOUR COMPANY?
Side A: Music production, creative direction and licensing for the advertising and marketing industries. Side B: Audio post production for the advertising and marketing industries.

WHAT’S YOUR JOB TITLE?
Senior Mix Engineer

WHAT DOES THAT ENTAIL?
All the hands-on audio post work our clients need — from VO recording, editing, forensic/cleanup work to sound design and final mixing.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The number of times my voice has ended up in a final spot when the script calls for “recording engineer.”

WHAT’S YOUR FAVORITE PART OF THE JOB?
There are some really funny people in this industry. I laugh a lot.

WHAT’S YOUR LEAST FAVORITE?
Working on a particular project so long that I lose perspective on whether the changes being made are helping any more.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I get to work early — the time I get to spend confirming all my shit is together.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cutting together music for my daughter’s dance team.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was 14 when I found out what a recording engineer did, and I just knew. Audio and technology… it just pushes all my buttons.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Essentia Water, Best Buy, Comcast, Invisalign, 3M and Xcel Energy.

Invisalign

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
An anti-smoking radio campaign that won Radio Mercury and One Show awards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools HD, Kensington Expert Mouse trackball and Pentel Quicker-Clicker mechanical pencils.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Reddit and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Go home.

JoJo Whilden/Hulu

Color and audio post for Hulu’s The Looming Tower

Hulu’s limited series, The Looming Tower, explores the rivalries and missed opportunities that beset US law enforcement and intelligence communities in the lead-up to the 9/11 attacks. Based on the Pulitzer Prize-winning book by Lawrence Wright, who also shares credit as executive producer with Dan Futterman and Alex Gibney, the show’s 10 episodes paint an absorbing, if troubling, portrait of the rise of Osama bin Laden and al-Qaida, and offer fresh insight into the complex people who were at the center of the fight against terrorism.

For The Looming Tower’s sound and picture post team, the show’s sensitive subject matter and blend of dramatizations and archival media posed significant technical and creative challenges. Colorist Jack Lewars and online editor Jeff Cornell of Technicolor PostWorks New York were tasked with integrating grainy, run-and-gun news footage dating back to 1998 with crisply shot, high-resolution original cinematography. Supervising sound designer/effects mixer Ruy García and re-recording mixer Martin Czembor from PostWorks, along with a Foley team from Alchemy Post Sound, were charged with helping to bring disparate environments and action to life, but without sensationalizing or straying from historical accuracy.

L-R: colorist Jack Lewars and editor Jeff Cornell

Lewars and Cornell mastered the series in Dolby Vision HDR, working from the production’s camera original 2K and 3.4K ArriRaw files. Most of the color grading and conforming work was done with a light touch, according to Lewars, as the objective was to adhere to a look that appeared real and unadulterated. The goal was for viewers to feel they are behind the scenes, watching events as they happened.

Where more specific grades were applied, it was done to support the narrative. “We developed different look sets for the FBI and CIA headquarters, so people weren’t confused about where we were,” Lewars explains. “The CIA was working out of the basement floors of a building, so it’s dark and cool — the light is generated by fluorescent fixtures in the room. The FBI is in an older office building — its drop ceiling also has fluorescent lighting, but there is a lot of exterior light, so it’s greener, warmer.”

The show adds to the sense of realism by mixing actual news footage and other archival media with dramatic recreations of those same events. Lewars and Cornell help to cement the effect by manipulating imagery to cut together seamlessly. “In one episode, we matched an interview with Osama bin Laden from the late ‘90s with new material shot with an Arri Alexa,” recalls Lewars. “We used color correction and editorial effects to blend the two worlds.”

Cornell degraded some scenes to make them match older, real-world media. “I took the Alexa material and ‘muddied’ it up by exporting it to compressed SD files and then cutting it back into the master timeline,” he notes. “We also added little digital hits to make it feel like the archival footage.”

While the color grade was subtle and adhered closely to reality, it still packed an emotional punch. That is most apparent in a later episode that includes the attack on the Twin Towers. “The episode starts off in New York early in the morning,” says Lewars. “We have a series of beauty shots of the city and it’s a glorious day. It’s a big contrast to what follows — archival footage after the towers have fallen where everything is a white haze of dust and debris.”

Audio Post
The sound team also strove to remain faithful to real events. García recalls his first conversations about the show’s sound needs during pre-production spotting sessions with executive producer Futterman and editor Daniel A. Valverde. “It was clear that we didn’t want to glamorize anything,” he says. “Still, we wanted to create an impact. We wanted people to feel like they were right in the middle of it, experiencing things as they happened.”

García says that his sound team approached the project as if it were a documentary, protecting the performances and relying on sound effects that were authentic in terms of time and place. “With the news footage, we stuck with archival sounds matching the original production footage and accentuating whatever sounds were in there that would connect emotionally to the characters,” he explains. “When we moved to the narrative side with the actors, we’d take more creative liberties and add detail and texture to draw you into the space and focus on the story.”

He notes that the drive for authenticity extended to crowd scenes, where native speakers were used as voice actors. Crowd sounds set in the Middle East, for example, were from original recordings from those regions to ensure local accents were correct.

Much like Lewars’ approach to color, García and his crew used sound to underscore environmental and psychological differences between CIA and FBI headquarters. “We did subtle things,” he notes. “The CIA has more advanced technology, so everything there sounds sharper and newer versus the FBI where you hear older phones and computers.”

The Foley provided by artists and mixers from Alchemy Post Sound further enhanced differences between the two environments. “It’s all about the story, and sound played a very important role in adding tension between characters,” says Leslie Bloome, Alchemy’s lead Foley artist. “A good example is the scene where CIA station chief Diane Marsh is berating an FBI agent while casually applying her makeup. Her vicious attitude toward the FBI agent combined with the subtle sounds of her makeup created a very interesting juxtaposition that added to the story.”

In addition to footsteps, the Foley team created incidental sounds used to enhance or add dimension to explosions, action and environments. For a scene where FBI agents are inspecting a warehouse filled with debris from the embassy bombings in Africa, artists recorded brick and metal sounds on a Foley stage designed to capture natural ambience. “Normally, a post mixer will apply reverb to place Foley in an environment,” says Foley artist Joanna Fang. “But we recorded the effects in our live room to get the perspective just right as people are walking around the warehouse. You can hear the mayhem as the FBI agents are documenting evidence.”

“Much of the story is about what went wrong, about the miscommunication between the CIA and FBI,” adds Foley mixer Ryan Collison, “and we wanted to help get that point across.”

The soundtrack to the series assumed its final form on a mix stage at PostWorks. Czembor spent weeks mixing dialogue, sound and music elements into what he described as a cinematic soundtrack.

L-R: Martin Czembor and Ruy García

Czembor notes that the sound team provided a wealth of material, but for certain emotionally charged scenes, such as the attack on the USS Cole, the producers felt that less was more. “Danny Futterman’s conceptual approach was to go with almost no sound and let the music and the story speak for themselves,” he says. “That was super challenging, because while you want to build tension, you are stripping it down so there’s less and less and less.”

Czembor adds that music, from composer Will Bates, is used with great effect throughout the series, even though it might go by unnoticed by viewers. “There is actually a lot more music in the series than you might realize,” he says. “That’s because it’s not so ‘musical;’ there aren’t a lot of melodies or harmonies. It’s more textural…soundscapes in a way. It blends in.”

Czembor says that as a longtime New Yorker, working on the show held special resonance for him, and he was impressed with the powerful, yet measured way it brings history back to life. “The performances by the cast are so strong,” he says. “That made it a pleasure to work on. It inspires you to add to the texture and do your job really well.”

Pace Pictures opens large audio post and finishing studio in Hollywood

Pace Pictures has opened a new sound and picture finishing facility in Hollywood. The 20,000-square-foot site offers editorial finishing, color grading, visual effects, titling, sound editorial and sound mixing services. Key resources include a 20-seat 4K color grading theater, two additional HDR color grading suites and 10 editorial finishing suites. It also features a Dolby Atmos mix stage designed by three-time Academy Award-winning re-recording mixer Michael Minkler, who is a partner in the company’s sound division.

The new independently-owned facility is located within IgnitedSpaces, a co-working site whose 45,000 square feet span three floors along Hollywood Boulevard. IgnitedSpaces targets media and entertainment professionals and creatives with executive offices, editorial suites, conference rooms and hospitality-driven office services. Pace Pictures has formed a strategic partnership with IgnitedSpaces to provide film and television productions service packages encompassing the entire production lifecycle.

“We’re offering a turnkey solution where everything is on-demand,” says Pace Pictures founder Heath Ryan. “A producer can start out at IgnitedSpaces with a single desk and add offices as the production grows. When they move into post production, they can use our facilities to manage their media and finish their projects. When the production is over, their footprint shrinks, overnight.”

Pace Pictures is currently providing sound services for the upcoming Universal Pictures release Mamma Mia! Here We Go Again. It is also handling post work for a VR concert film from this year’s Coachella Valley Music and Arts Festival.

Completed projects include the independent features Silver Lake, Flower and The Resurrection of Gavin Stone; the TV series iZombie; VR concerts for the band Coldplay, Austin City Limits and Lollapalooza; and a Mariah Carey music video related to Sony Pictures’ animated feature The Star.

Technical features of the new facility include three DaVinci Resolve Studio color grading suites with professional color consoles, a Barco 4K HDR digital cinema projector in the finishing theater, and dual Avid Pro Tools S6 consoles in the Dolby Atmos mix stage, which also includes four Pro Tools HDX systems. The site features facilities for sound design, ADR and voiceover recording, title design and insert shooting. Onsite media management includes a robust SAN network, as well as LTO7 archiving and dailies services, and cold storage.

Ryan is an editor who has operated Pace Pictures as an editorial service for more than 15 years. His many credits include the films Woody Woodpecker, Veronica Mars, The Little Rascals, Lawless Range and The Lookalike, as well as numerous concert films, music clips, television specials and virtual reality productions. He has also served as a producer on projects for Hallmark, Mariah Carey, Queen Latifah and others. Originally from Australia, he began his career with the Australian Broadcasting Corporation.

Ryan notes that the goal of the new venture is to break from the traditional facility model and provide producers with flexible solutions tailored to their budgets and creative needs. “Clients do not have to use our talent; they can bring in their own colorists, editors and mixers,” he says. “We can be a small part of the production, or we can be the backbone.”

Sound editor/re-recording mixer Will Files joins Sony Pictures Post

Sony Pictures Post Production Services has added supervising sound editor/re-recording mixer Will Files, who joins from Skywalker Sound, where he spent more than a decade. He brings with him credits on more than 80 feature films, including Passengers, Deadpool, Star Wars: The Force Awakens and Fantastic Four.

Files won a 2018 MPSE Golden Reel Award for his work on War for the Planet of the Apes. His current project is the upcoming Columbia Pictures release Venom, out in US theaters this October.

He adds that he was also attracted by Sony Pictures’ ability to support his work both as a sound editor/sound designer and as a re-recording mixer. “I tend to wear a lot of hats. I often supervise sound, create sound design and mix my projects,” he says. “Sony Pictures has embraced modern workflows by creating technically-advanced rooms that allow sound artists to begin mixing as soon as they begin editing. It makes the process more efficient and improves creative storytelling.”

Files will work in a new pre-dub mixing stage and sound design studio on the Sony Pictures lot in Culver City. The stage has Dolby Atmos mixing capabilities and features two Avid S6 mixing consoles, four Pro Tools systems, a Sony 4K digital cinema projector and a variety of other support gear.

Files describes the stage as a sound designer/mixer’s dream come true. “It’s a medium-size space, big enough to mix a movie, but also intimate. You don’t feel swallowed up when it’s just you and the filmmaker,” he says. “It’s very conducive to the creative process.”

Files began his career with Skywalker Sound in 2002, shortly after graduating from the University of North Carolina School of the Arts. He earned his first credit as supervising sound editor on the 2008 sci-fi hit Cloverfield. His many other credits include Star Trek: Into Darkness, Dawn of the Planet of the Apes and Loving.

AlphaDogs’ Terence Curren is on a quest: to prove why pros matter

By Randi Altman

Many of you might already know Terence Curren, owner of Burbank’s AlphaDogs, from his hosting of the monthly Editors’ Lounge, or his podcast The Terence and Philip Show, which he co-hosts with Philip Hodgetts. He’s also taken to producing fun, educational videos that break down the importance of color or ADR, for example.

He has a knack for offering simple explanations for necessary parts of the post workflow while hammering home what post pros bring to the table.

I reached out to Terry to find out more.

How do you pick the topics you are going to tackle? Is it based on questions you get from clients? Those just starting in the industry?
Good question. It isn’t about clients as they already know most of this stuff. It’s actually a much deeper project surrounding a much deeper subject. As you well know, the media creation tools that used to be so expensive, and acted as a barrier to entry, are now ubiquitous and inexpensive. So the question becomes, “When everyone has editing software, why should someone pay a lot for an editor, colorist, audio mixer, etc.?”

ADR engineer Juan-Lucas Benavidez

Most folks realize there is a value to knowledge accrued from experience. How do you get the viewers to recognize and appreciate the difference in craftsmanship between a polished show or movie and a typical YouTube video? What I realized is there are very few people on the planet who can’t afford a pencil and some paper, and yet how many great writers are there? How many folks make a decent living writing, and why are readers willing to pay for good writing?

The answer I came up with is that almost anyone can recognize the difference between a paper written by a 5th grader and one written by a college graduate. Why? Well, from the time we are very little, adults start reading to us. Then we spend every school day learning more about writing. When you realize the hard work that goes into developing as a good writer, you are more inclined to pay a master at that craft. So how do we get folks to realize the value we bring to our craft?

Our biggest problem comes from the “magician” aspect of what we do. For most of the history of Hollywood, the tricks of the trade were kept hidden to help sell the illusion. Why should we get paid when the average viewer has a 4K camera phone with editing software on it?

That is what has spurred my mission: educating the average viewer about the value we bring to the table, and making them aware of bad sound, poor lighting, a lack of color correction, etc. If they are aware of poorer quality, maybe they will begin to reject it, and we can continue to be gainfully employed exercising our hard-earned skills.

Boom operator Sam Vargas.

How often is your studio brought in to fix a project done by someone with access to the tools, but not the experience?
This actually happens a lot, and it is usually harder to fix something that has been done incorrectly than it is to just do it right from the beginning. However, at least they tried, and that is the point of my quest: to get folks to recognize and want a better product. I would rather see that they tried to make it better and failed than just accepted poor quality as “good enough.”

Your most recent video tackles ADR. So let’s talk about that for a bit. How complicated a task is ADR, specifically matching of new audio to the existing video?
We do a fair amount of ADR recording, which isn’t that hard for the experienced audio mixer. That said, I found out how hard it is being the talent doing ADR. It sounds a lot easier than it actually is when you are trying to match your delivery from the original recording.

What do you use for ADR?
We use Avid Pro Tools as our primary audio tool, but there are some additional tools in Fairlight (now included free in Blackmagic’s Resolve) that make ADR even easier for the mixer and the talent. Our mic is a Sennheiser long shotgun, but we try to match the original field mic when possible for ADR.

I suppose Resolve proves your point — professional tools accessible for free to the masses?
Yeah. I can afford to buy a paint brush and some paint. It would take me a lot of years of practice to be a Michelangelo. Maybe Malcolm Gladwell, who posits that it takes 10,000 hours of practice to master something, is not too far off target.

What about for those clients who don’t think you need ADR and instead can use a noise reduction tool to remove the offensive noise?
We showed some noise reduction tools in another video in the series, but they are better at removing consistent sounds like air conditioner hum. We chose the freeway location as the background noise would be much harder to remove. In this case, ADR was the best choice.

It’s also good for replacing fumbled dialogue or something that was rewritten after production was completed. Often you can get away with cheating a new line of dialogue over a cutaway of another actor. To make the new line match perfectly, you would re-record all the dialogue.
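The distinction Curren draws — steady hum versus broadband freeway noise — is easy to see in practice: tonal noise sits at a fixed fundamental and its harmonics, so a stack of narrow notch filters can pull it out almost invisibly, while traffic noise has no such structure to lock onto. Here is a quick sketch of the hum case, with illustrative settings:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_hum(x, sr, fundamental=60.0, harmonics=5, q=35.0):
    """Notch out AC hum at its fundamental and harmonics (60, 120, 180 Hz...)."""
    y = np.asarray(x, dtype=np.float64)
    for k in range(1, harmonics + 1):
        f0 = fundamental * k
        if f0 >= sr / 2:          # stay below Nyquist
            break
        b, a = iirnotch(f0, q, fs=sr)
        y = filtfilt(b, a, y)    # zero-phase, so transients aren't smeared
    return y
```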

What did you shoot the video with? What about editing and color?
We shot with a Blackmagic Cinema Camera in RAW so we could fix more in post. Editing was done in Avid Media Composer with final color in Blackmagic’s Resolve. All the audio was handled in Avid’s Pro Tools.

What other topics have you covered in this series?
So far we’ve covered some audio issues and the need for color correction. We are in the planning stages for more videos, but we’re always looking for suggestions. Hint, hint.

Ok, letting you go, but is there anything I haven’t asked that’s important?
I am hoping that others who are more talented than I am pick up the mantle and continue the quest to educate the viewers. The goal is to prevent us all from becoming “starving artists” in a world of mediocre media content.

Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode counts. The total number of episodes in a show’s season can be 13 or 10 or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned a 2013 Oscar nom for sound, an MPSE nom and a BAFTA film nom for sound) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, and earned a nomination for dialogue and ADR editing. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds: (L-R) Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show was picture-edited by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialogue and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through Pix, a cloud-based film management/collaboration tool that let the showrunners, VFX supervisor, composer, sound team and picture team share content and notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialogue and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music, playing with repeated rhythms. Breaking the anticipated rhythm helped catch the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.

Making the indie short The Sound of Your Voice

Hunt Beaty is a director, producer and Emmy Award-winning production sound recordist based in Brooklyn. Born and raised in Nashville, this NYU Tisch film school grad spent years studying how films got made — and now he’s made his own.

The short film The Sound of Your Voice was directed by Beaty and written and produced by Beaty, José Andrés Cardona and Wesley Wingo. This thriller focuses on a voiceover artist who is haunted by a past relationship as she sinks deep into the isolation of a recording booth.

Hunt Beaty

The Sound of Your Voice was shot on location at Silver Sound, a working audio post house, in New York City.

What inspired the film?
This short was largely reverse-engineered. I work with Silver Sound, a production and post sound studio in New York City, so we knew we had a potential location. Given access to such a venue, Andrés lit the creative fuse with an initial concept and we all started writing from there.

I’ve long admired the voiceover craft, as my father made his career in radio and VO work. It’s a unique job, and it felt like a world not often portrayed in film/TV up to this point. That, combined with my experience working alongside VO artists over the years, made this feel like fertile ground to create a short film.

The film is part of a series of shorts my producers and I have been making over the past few months. We’re all good friends who met at NYU film undergrad. While narrative filmmaking was always our shared interest and catalyst for making content, the realities of staying afloat in NYC after graduation prompted a focus on freelance commercial work in our chosen crafts in order to make a living. It’s been a great ride, but our own narrative work, the original passion, was often moved to the backburner.

After discussing the idea for years, we drank too many beers one night and decided to start getting back into narrative work by making shorts within a particular set of constrained parameters: one weekend to shoot, no stunts/weapons or other typical production complicators, stay close to home geographically, keep costs low, finish the film fast and don’t stop. We’re getting too old to remain stubbornly precious.

Inspired by a class we all took at NYU called “Sight and Sound: Film,” we built our little collective on the idea of rotating the director role while maintaining full support from the other two on whatever short is currently in production.

Andrés owns a camera and can shoot, Wesley writes and directs and also does a little bit of everything, and I produce and bring all of the connections and expertise I’ve built up from being in the production and post sound world for so long.

We shot a film that Wesley directed at the end of November and released it in January. We shot my film in January and are releasing it here and now. Andrés just directed a film that we’re in post-production on right now.

What were you personally looking to achieve with the film?
My first goal was to check my natural inclination to overly complicate a short story, either by including too many characters or bouncing from one location to another.
I wanted to stay in one close-fitting place and largely focus on one character. The hope was I’d have more time to focus on performance nuance and have multiple takes for each setup. Realistically, with indie filmmaking, you never have the time you want, but being able to work closely with the actors on variations of their performances was super important. I also wanted to be able to focus on the work of directing as opposed to getting lost in the ambition of the production itself.

How was the film made?
The production was noticeably scrappy, as all of these films inevitably become. The crew was just the three of us, in addition to a rotating set of production sound recordists and an HMU artist (Allison Brooke), who all agreed to help us out.

We rented from Hand Held Films, which is a block away from Silver Sound, so we knew we could just wheel over all of the lights and grip equipment without renting a vehicle. Wesley would primarily focus on camera and lighting support for Andrés, but we were all functioning within an “all hands on deck” framework. It was never pretty, but we made it all happen.

Our cast was incredibly chill, and we had worked with Harry, the engineer, on our first short, Into Quiet. We shot the whole thing over a weekend (again, one of our parameters) so we could do our best to get back to our day-to-day.

Also, a significant amount of re-writing was done to the off-screen voices in post based on the performance of our actress, which gave us some interesting room to play around: writing to the edit, tweaking the edit itself to fit the new script and recording our voice actors to the cut. Meta? Probably.

We’ve been wildly fortunate to have the support of our post-sound team at Silver Sound. Theodore Robinson and Tarcisio Longobardi, in particular, gave so much of themselves to the sound design process in order to make this come to life. Given my background as a production recordist, and simply due to the storyline of this short, sound design was vital.

In tandem with that hard work, we had Alan Gordon provide the color grading and Brent Ferguson the VFX.

What are you working on now?
Mostly fretting about our cryptocurrency investments. But once that all crashes and burns, we’re going to try and keep the movie momentum going. We’re all pretty hungry to make stuff. Doing feels better than sitting idly and talking about it.

L-R: Re-recording mixer Cory Choy, Hunt Beaty and supervising sound editor Tarcisio Longobardi.

We’re currently in post for Andrés’ movie, which should be coming out in a month or so. Wesley also has a new script and we’re entering into pre-production for that one as well so that we can hopefully start the cycle all over again. We’re also looking for new scripts and potential collaborators to roll into our rotation while our team continues to build momentum towards potentially larger projects.

On top of that, I’m hanging up the headphones more often to transition out of production sound work and shift to fully producing and directing commercial projects.

What camera and why?
The Red Weapon Helium, because the DP owns one already (laughs). But in all seriousness, it is an incredible camera. We also shot on elite anamorphic glass. We only had two focal lengths on set, a 50mm and a 100mm, plus a diopter set.

How involved were you in the edit?
DP Andrés Cardona single-handedly did the first pass at a rough cut. After that, my co-producer Wes Wingo and I gave elaborate notes on each cut. We also ended up re-writing some of the movie itself after reconsidering the overall structure of the film due to our lead actress’ strong performance in certain shots.

For example, I really loved the long close-up of Stacey’s eyes that’s basically the focal point of the movie’s ending. So I had to reconfigure some of the story points in order to give that shot its proper place in the edit to allow it to be the key moment the short is building up to.

What kind of look were you going for with the grade?
The color grade was done by Alan Gordon at Post Pro Gumbo using DaVinci Resolve. It was simply all about fixing inconsistencies and finessing what we shot in camera.

What about the sound design and mix?
The sound design was completed by Ted Robinson and Tarcisio Longobardi. The final mix was handled by Cory Choy at Silver Sound in New York. All the audio work was done in Reaper.

London’s LipSync upgrades studio, adds Dolby Atmos

LipSync Post, located in London’s Soho, has upgraded its studio with Dolby Atmos and installed a new control system. To accomplish this, LipSync teamed up with HHB Communications’ Scrub division to create a hybrid dual Avid S6 and AMS Neve DFC3D desk, while also outfitting the room with a new mastering unit so it can create Dolby Atmos mixes. Now that the upgrade to Theatre 2 is complete, LipSync plans to upgrade Theatre 1 this summer.

The setup offers the best of both worlds, with full access to the classic Neve DFC sound plus more hands-on control of Avid Pro Tools automation via the S6 desks. In order to streamline the workflow as more projects are mixed exclusively “in the box,” LipSync installed the S6s within the same frame as the DFC, with custom furniture created by Frozen Fish Design. This dual-operator configuration frees the mix engineers to work on separate Pro Tools systems simultaneously for fast and efficient turnaround in order to meet crucial project deadlines.

“The move into extended surround formats like Dolby Atmos is very exciting,” explains LipSync senior re-recording mixer Rob Hughes. “We have now completed our first feature mix in the refitted theater (Vita & Virginia directed by Chanya Button). It has a very detailed, involved soundtrack and the new system handled it with ease.”

Behind the Title: Spacewalk Sound’s Matthew Bobb

NAME: Matthew Bobb

COMPANY: Pasadena, California’s Spacewalk Sound

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service audio post facility specializing in commercials, trailers and spatial sound for virtual reality (VR). We have a heavy focus on branded content with clients such as Panda Express and Biore and studios like Warner Bros., Universal and Netflix.

WHAT’S YOUR JOB TITLE?
Partner/Sound Supervisor/Composer

WHAT DOES THAT ENTAIL?
I’ve transitioned more into the sound supervisor role. We have a fantastic group of sound designers and mixers that work here, plus a support staff to keep us on track and on budget. Putting my faith in them has allowed me to step away from the small details and look at the bigger picture on every project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
We’re still a small company, so while I mix and compose a little less than before, I find my days being filled with keeping the team moving forward. Most of what falls under my role is approving mixes, prepping for in-house clients the next day, sending out proposals and following up on new leads. A lot of our work is short form, so projects are in and out the door pretty fast — sometimes it’s all in one day. That means I always have to keep one eye on what’s coming around the corner.

The Greatest Showman 360

WHAT’S YOUR FAVORITE PART OF THE JOB?
Lately, it has been showing VR to people who have never tried it or have had a bad first experience, which is very unfortunate since it is a great medium. However, that all changes when you see someone come out of a headset exclaiming, “Wow, that is a game changer!”

We have been very fortunate to work on some well-known and loved properties, and it’s exciting to have people get a whole new experience out of something familiar.

WHAT’S YOUR LEAST FAVORITE?
Dealing with sloppy edits. We have been pushing our clients to bring us into the fold as early as v1 to make suggestions on the flow of each project. I’ll keep my eye tuned to the timing of the dialog in relation to the music and effects, while making sure attention has been paid to the pacing of the edit to the music. I understand that the editor and director will have their attention elsewhere, so I’m trying to bring up potential issues they may miss early enough that they can be addressed.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I would say 3pm is pretty great most days. I should have accomplished something major by this point, and I’m moments away from that afternoon iced coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be crafting the ultimate sandwich, trying different combinations of meats, cheeses, spreads and veggies. I’d have a small shop, preferably somewhere tropical. We’d be open for breakfast and lunch, close around 4pm, and then I’d head to the beach to sip on Russell’s Reserve Small Batch Bourbon as the sun sets. Yes, I’ve given this some thought.

WHY DID YOU CHOOSE THIS PROFESSION?
I came from music but quickly burned out on the road. Studio life suited me much more, except all the music studios I worked at seemed to lack focus, or at least the clientele lacked focus. I fell into a few sound design gigs on the side and really enjoyed the creativity and reward of seeing my work out in the world.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We had a great year working alongside SunnyBoy Entertainment on VR content for the Hollywood studios including IT: Float, The Greatest Showman 360, Annabelle Creation: Bee’s Room and Pacific Rim: Inside the Uprising 360. We also released our first piece of interactive content, IT: Escape from Pennywise, for Gear VR and iOS.

Most recently, I worked on Scoring The Last Jedi: A 360 VR Experience, for Star Wars: The Last Jedi. It takes Star Wars fans on a VIP behind-the-scenes intergalactic expedition, giving them a virtual tour of The Last Jedi’s production and soundstages and dropping them face-to-face with Academy Award-winning film composer John Williams and director Rian Johnson.

Personally, I got to compose two Panda Express commercials, which was a real treat considering I sustained myself through college on a healthy diet of orange chicken.

IT: Float

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
IT: Float was very special. It was exciting to take an existing property that was not only created by Stephen King but was also already loved by millions of people, and expand on it. The experience brought the viewer under the streets and into the sewers with Pennywise the clown. We were able to get very creative with spatial sound, using his voice to guide you through the experience without being able to see him. You never knew where he was lurking. The 360 audio really ramped up the terror! Plus, we had a great live activation at San Diego Comic-Con, where thousands of people came through and left pumped after getting a glimpse of the remake.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
It’s hard to imagine my life without these three: Spotify Premium, no ads! Philips Hue lights for those vibes. Lastly, Slack keeps our office running. It’s our not-so-secret weapon.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I treat social media as an escape. I’ll follow The Onion for a good laugh, or Anthony Bourdain to see some far-flung corner of the earth I didn’t know about.

DO YOU LISTEN TO MUSIC WHEN NOT MIXING OR EDITING?
If I’m doing busy work, I prefer something instrumental like Eric Prydz, Tycho, Bonobo — something with a melody and a groove that won’t make me fall asleep, but isn’t too distracting either.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
The best part about Los Angeles is how easy it is to escape Los Angeles. My family will hit the road for long weekends to Palm Springs, Big Bear or San Diego. We find a good mix of active (hiking) and inactive (2pm naps) things to do to recharge.

Pacific Rim: Uprising‘s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on this grand a scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu starting June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo Del Toro (a fan of the Portal game series) had actress Ellen McLain as the voice of Gipsy Avenger since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with a sub-harmonic synthesis plug-in, Waves MaxxBass, to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”
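
MaxxBass itself is proprietary, but the general idea of sub-harmonic synthesis can be sketched in a few lines of Python. The snippet below is a minimal illustration, not Waves’ algorithm: it uses the classic flip-flop octave divider, inverting the signal’s polarity once per cycle of its filtered fundamental to shift energy down an octave, then low-passing and blending that sub under the original.

    # Minimal sub-octave sketch (illustrative only, not MaxxBass).
    import numpy as np
    from scipy.signal import butter, lfilter

    def sub_octave(x, sr, mix=0.5, cutoff=120.0):
        # Low-pass first so the divider tracks the fundamental, not harmonics.
        b, a = butter(2, cutoff / (sr / 2), btype="low")
        fund = lfilter(b, a, x)

        # A +/-1 "flip-flop" that toggles on positive-going zero crossings
        # of the fundamental, i.e. once per cycle -- a divide-by-two.
        rising = np.concatenate(([False], (fund[:-1] < 0) & (fund[1:] >= 0)))
        flip = np.where(np.cumsum(rising) % 2 == 0, 1.0, -1.0)

        # Multiplying by the flip-flop moves energy to half the original
        # frequency; low-pass again to smooth it into a usable sub tone.
        sub = lfilter(b, a, fund * flip)
        sub /= np.max(np.abs(sub)) + 1e-12
        return x + mix * sub * np.max(np.abs(x))

A production plug-in adds far more psychoacoustic shaping, but even this toy version shows how a mid-register source like a piano can be given the kind of deep, fat low end described above.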

That was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.
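
Soundtoys doesn’t publish PhaseMistress’ internals, but the phaser family it belongs to is textbook DSP: a cascade of first-order all-pass filters whose corner frequency is swept by an LFO, mixed against the dry signal so the moving notches impose a synthetic, machine-like motion on a source such as a vocal. A bare-bones sketch, purely illustrative and not the plug-in’s actual processing chain:

    # Minimal phaser sketch (illustrative only, not PhaseMistress).
    import numpy as np

    def phaser(x, sr, stages=4, lfo_hz=0.5, f_lo=300.0, f_hi=3000.0, mix=0.7):
        n = len(x)
        # LFO sweeps the all-pass corner frequency between f_lo and f_hi.
        sweep = 0.5 * (1 + np.sin(2 * np.pi * lfo_hz * np.arange(n) / sr))
        fc = f_lo + (f_hi - f_lo) * sweep
        # Per-sample coefficient of a first-order all-pass at corner fc.
        t = np.tan(np.pi * fc / sr)
        coef = (t - 1) / (t + 1)

        y = x.astype(float)
        for _ in range(stages):
            out = np.empty(n)
            z = 0.0  # one-sample filter state for this stage
            for i in range(n):
                # First-order all-pass: out = c*in + z; z = in - c*out
                out[i] = coef[i] * y[i] + z
                z = y[i] - coef[i] * out[i]
            y = out
        # Mixing the swept all-pass chain against the dry signal creates
        # the moving notches that give a phaser its characteristic motion.
        return (1 - mix) * x + mix * y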

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja, and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic-machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw shrieks, and the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”

Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their entire collection when looking for Pacific Rim: Uprising sounds; the show has its own library of specially designed signature sounds, all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.
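
Soundminer’s database is its own product, but the organizing idea — in miniature, a per-project library of sound files tagged with searchable metadata — is easy to picture. The sketch below is generic and hypothetical (the class and field names are invented), not Soundminer’s API:

    # A toy per-project sound library with metadata search.
    from dataclasses import dataclass, field

    @dataclass
    class SoundAsset:
        path: str            # where the audio file lives
        description: str     # e.g. "rogue Jaeger servo, slow grind"
        tags: set = field(default_factory=set)

    @dataclass
    class ProjectLibrary:
        name: str
        assets: list = field(default_factory=list)

        def add(self, path, description, *tags):
            self.assets.append(SoundAsset(path, description, set(tags)))

        def search(self, *terms):
            # A hit must match every term in its tags or description.
            terms = [t.lower() for t in terms]
            return [a for a in self.assets
                    if all(t in a.description.lower()
                           or t in {tag.lower() for tag in a.tags}
                           for t in terms)]

    # A project-scoped library keeps searches small and on-topic.
    lib = ProjectLibrary("Pacific Rim: Uprising")
    lib.add("fx/jaeger_servo_03.wav", "Gipsy Avenger servo, slow", "jaeger", "servo")
    lib.add("fx/kaiju_roar_raijin.wav", "Raijin roar, alpha", "kaiju", "roar")
    print([a.path for a in lib.search("servo")])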

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says, “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.
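
The principle Taylor describes, separating elements by placement rather than by level, can be shown with a toy example. The channel names and layout below are simplified stand-ins for an Atmos bed, not any console’s actual routing:

    # Toy illustration of separation-by-placement, assuming a
    # simplified 7.1.2-style channel order (names are hypothetical).
    import numpy as np

    CHANNELS = ["L", "R", "C", "LFE", "Lss", "Rss", "Lsr", "Rsr", "Ltop", "Rtop"]

    def place(bed, stem, targets, gain=1.0):
        # Add a mono stem into the named bed channels at the given gain.
        for name in targets:
            bed[CHANNELS.index(name)] += gain * stem
        return bed

    sr, secs = 48000, 1
    bed = np.zeros((len(CHANNELS), sr * secs))
    dialogue = np.random.randn(sr * secs) * 0.1  # stand-in for pilot dialogue
    ai_voice = np.random.randn(sr * secs) * 0.1  # stand-in for the Jaeger A.I.

    # Pilot dialogue stays anchored in the center channel...
    place(bed, dialogue, ["C"])
    # ...while the A.I. voice sits in the overheads, clear without being loud.
    place(bed, ai_voice, ["Ltop", "Rtop"], gain=0.8)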

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, such as when the Kaiju come together to form the Mega-Kaiju. “As the action escalates and goes off-camera, it was more of a shadow, and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else, both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use plug-in processing, such as FabFilter’s tools for EQ and reverbs, via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second-to-none, but the people there are even better than that. “We have been very fortunate to work with great people, from Steven DeKnight, our director, to Dylan Highsmith, our picture editor, to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Behind the Title: Sim’s supervising sound editor David McCallum

Name: David McCallum

Company: Sim International — Sim Post (Sound) in Toronto

Can you describe your company?
Sim provides equipment and creative services for projects in film and television. We have offices in Toronto, Los Angeles, New York City, Atlanta and Vancouver. I work as part of our Sim Post team in Toronto’s King St. East post facility where our emphasis is post sound and picture. We’re a small division, but we’ve been together as a team for nearly 15 years, the last three of which have been as part of Sim.

What’s your job title?
Supervising Sound Editor

What does that entail? 
My work is 90% project and client focused. I work directly on the sound design and sound edit for television and film projects, collaborating with directors and producers to shape the sound for their show. I also manage a team of people at Sim Post (Sound) Toronto that make up our sound crew(s). Part of my job also involves studio time, working closely with actors and directors to help shape the final performances that end up on the screen.

What would surprise people the most about what falls under that title?
I don’t work extreme hours. The screen industry, and post production in particular, has a well-deserved reputation for working its people hard, with long hours and tight demands as the norm rather than the exception. I don’t believe in overworking either my crew or myself. I strongly believe that people work best under predictable conditions.

Individuals need to be placed in positions to succeed, not merely survive. So, I put a lot of effort into managing my workload, getting on top of things well in advance of deadlines. I try to keep my days and weeks structured and organized so that I’m at my best as much as possible.

Sim’s ADR room.

What’s your favorite part of the job?
Finding a unique way to solve a sound problem. I love discovering a new trick, like using parts of two different words to make a character say a new word. You never know when or where you can find these kinds of solutions — hearing the possibilities requires patience and a keen ear. Sometimes the things I put together sound ridiculous, but because I mostly work alone nobody gets to hear my mistakes. Every now and then something unexpected works, and it’s golden.

What’s your least favorite?
There can be a lot of politics that permeate the film and television world. I prefer direct communication and collaboration, even if what you hear from someone isn’t what you want to hear.

What is your favorite time of the day?
The start. I like getting in a bit early, relaxing with a good coffee while I map out my goals for the day. Every day something good needs to be accomplished, and if the day gets off to a positive start then there is a better chance that all my objectives for that day will be met.

If you didn’t have this job, what would you be doing instead?
I would probably still be working in audio, but perhaps on the consumer side, selling high-end tube audio electronics and turntables. Either that, or I would be a tennis instructor.

Why did you choose this profession? 
That is actually a long story. I didn’t find this profession or career path on my own. I was put on it by a very thoughtful university professor named Clarke Mackay at Queen’s University in Kingston, Ontario, who saw a skill set in me that I did not recognize in myself. The path started with Clarke, went through the Academy of Canadian Cinema and Television and on to Jane Tattersall, who is senior VP of Sim Toronto.

Jane’s been the strongest influence in my career by far, teaching and steering me along the way. Not all lessons were intended, and sometimes we found ourselves on the same path. Sim Post (Sound) went through so many changes, and we managed a lot of them together. I don’t know if I would have found or stayed in this profession without Clarke or Jane, so in a way they have helped choose it for me.

Can you name some recent projects you have worked on?
The Handmaid’s Tale, Vikings, Alias Grace, Cardinal, Molly’s Game, Kin and The Man Who Invented Christmas.

What is the project that you are most proud of?
The one I’m working on now! More seriously, that does feel like an impossible question to answer, as I’ve felt pride at numerous times in my career. But most recently I would say our work on The Handmaid’s Tale has been tremendously rewarding.

I’d also mention a small Canadian documentary I was a part of in 2016 called Unarmed Verses. It’s a National Film Board of Canada documentary by director Charles Officer and producer Lea Marin. It touched my heart.

I’m also very proud of some of the colleagues I’ve been overseeing for a few years now, in particular Claire Dobson and Krystin Hunter, two young editors who are both doing extremely impressive work.

Name three pieces of technology that you can’t live without.
Avid Pro Tools, Izotope RX and NOS Amperex 6922 vacuum tubes.

What social media channels do you follow?
I’ve only ever participated in Facebook, but the global political climate has me off of social media right now. I do my best to stay away from the “comments section of life.”

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
I try to reduce stress within the workplace. I have a few rituals that help… and good coffee. Nothing beats stress in the morning like a delicious coffee. But more practically, I try my best to stay on top of my work and make sure I thoroughly understand my clients’ expectations. I then actively manage my work so I’m not pushed up against deadlines.

But really the best tool is my team. I have an amazing team of people around me and I would be nothing without them.