Category Archives: Audio Mixing

Culture Clash: The sound design of Mrs. America

By Patrick Birk

I think it’s fair to say that America is divided… and changing. But with the perfect storm that has been 2020 thus far, polarization has hit a fever pitch many have not seen in their lifetime. It may be apt, then, that FX and Hulu would release Mrs. America, a limited series depicting the fierce struggle that erupted in the US surrounding the movement to ratify the Equal Rights Amendment.

Scott Gershin

Set in the 1970s, the show explores one of the most contentious elements of the culture war and tells the stories of Phyllis Schlafly (Cate Blanchett) — a conservative activist who led the charge against the women’s liberation movement — and feminists such as Gloria Steinem (Rose Byrne), Shirley Chisholm (Uzo Aduba), Jill Ruckelshaus (Elizabeth Banks) and Betty Friedan (Tracey Ullman).

Scott Gershin was the supervising sound editor and designer for the series. His long list of credits includes Nightcrawler, American Beauty, Pacific Rim, Team America, Hellboy II, JFK, The Doors, Shrek and The Book of Life. The methods Gershin and his team put together to complete the show during quarantine give me hope for those of us in the arts during these clearly changing times.

Gershin and his editorial team, part of Sound Lab (at Keywords Studio), partnered up with walla group The Loop Squad to record Episodes 1 through 8 at the Todd-AO ADR stage in Los Angeles. The show was mixed at Burbank’s Westwind Sound with a team that included mixers Christian Minkler (dialogue and music) and Andrew King (sound effects).

Mrs. America takes place throughout the ‘70s. Do you enjoy working on period pieces?
I love it. You have to research and learn about the events of that period. You need to be able to smell it and hear it. I believe that we captured that time, its tone and its vernacular. A lot of it is very subtle, but if we did it wrong, you would notice it.

The subtlety in the sound design served the show well. You never get the impression that it was there for its own sake.
I have worked on a range of projects. On the quiet side is American Beauty. On the loud side is Pacific Rim. In both cases, nobody should know I exist. If the illusion is correct, you enjoy the story, you buy the illusion. Interestingly enough, there was so much design in American Beauty that nobody knows about. An example is the use of silence; it was done strategically to create an aural contrast to support the pace and the actors’ performances. We recreated subtle sounds, such as when they were eating at the table. It was all manufactured to match the dialogue’s ambience in that scene. As the audience watches a show, they should think that everything they’re hearing was recorded at that time, whether it’s fanciful and sci-fi, or it’s realistic.

What’s an example of what you thought this show needed?
I come from movies, so a major goal was to make sure this show could have the same level of detail that I would put into a film, despite budgetary limitations. The first thing I did was to go into my library, which is pretty big. I realized I had no women-only crowd recordings, so I called some fellow sound pros. They had men and women, and the occasional solo woman laughing or crying, but not crowds of women. That’s when I realized I had to create it myself. While I do this often on my films, I had to find a way to accomplish this within the budget I had and across nine episodes.

That was the fun — trying to capture that variety of accents, the vernaculars, in which different cultures and areas within the United States communicated during that time. Then there was capturing the acoustical spaces needed for the show, thinking about the right microphones to use, where they should be placed and how I could combine them with certain sound effects to help the illusion of very large venues, such as rallies or political conventions.

Scott Gershin (center) and the Mrs. America walla group.

Like during the Reagan era toward the end?
Yes. In a couple of episodes, there were chants and singalongs. I combined the walla group recording (which was somewhere between six and 15 actresses, depending on the episode) with concert crowds, which I had to manipulate to sound like women. I’d envelope (a form of precision blending) those crowds against the recorded walla group to give the illusion that a convention hall of women was chanting and singing, even though they weren’t.
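Gershin doesn’t name the tools behind this enveloping, so the sketch below is only one plausible reading of the idea in Python: follow the amplitude of the small walla group with a smoothed envelope and use it to drive the level of the larger concert crowd so the two swell and fall together. The file names, window size and mix ratios are hypothetical.

```python
# Sketch only: one plausible reading of "enveloping" a big crowd against a
# small walla group. File names, window size and mix ratios are hypothetical.
import numpy as np
import soundfile as sf
from scipy.ndimage import uniform_filter1d

walla, sr = sf.read("walla_group.wav")      # small group of women, mono
crowd, sr2 = sf.read("concert_crowd.wav")   # large concert crowd, mono
assert sr == sr2, "resample first if the sample rates differ"

n = min(len(walla), len(crowd))
walla, crowd = walla[:n], crowd[:n]

# Follow the walla group's loudness with a smoothed RMS envelope (~50 ms window).
win = int(0.05 * sr)
env = np.sqrt(uniform_filter1d(walla ** 2, size=win))
env /= env.max() + 1e-9                     # normalize to 0..1

# Let the big crowd swell and fall with the walla group, then blend the two.
blend = 0.6 * walla + 0.4 * crowd * env
sf.write("enveloped_blend.wav", blend / np.max(np.abs(blend)), sr)
```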

We created a tickle of a certain sound to give it that reverb-y, mass-y kind of thing. It’s a lot of experimenting and a lot of “No, that didn’t work. Ooh, that worked. That’s kind of cool.” Then occasionally we’d be lucky that music was in the right place to mask it a little bit. So it’s a bit of a sonic puzzle, an audio version of smoke and mirrors.

It’s like being a painter. I love minimalism for the right shows and rocking the room for others. This show wasn’t about either. In discussions with Dahvi Waller (writer and showrunner), Anna Boden and Ryan Fleck (directors and executive producers), Stacey Sher (executive producer), Ebony Jones (post producer) and the picture editors for each episode (Todd Downing, Emily E. Greene and Robert Komatsu), we agreed on dense textures and details. We didn’t want to go the route of dialogue, music and six sound effects; we wanted to create a rich tapestry of details within the environments, using Foley to enhance (while not interfering with) the actors’ performances while hearing the voice and the sound of the times. (Check out our interview with Mrs. America’s editors here.)

When you did need more specific varieties and dialects to come through in crowds and walla, how did you go about it?
I get very detail-oriented. For instance, when we talk about capturing the language of the time, a lot of this was embellished with The Loop Squad, our walla group. I wanted to make sure we were accurate. We didn’t want typical accents that are sometimes associated with conservatives or liberals; we wanted to capture the different tone and dialects of the region each group was from. The principal actresses did an amazing job portraying the different characters, so I wanted to follow suit and continue that approach.

For example, the scenes with Shirley Chisholm and the members of the Black feminist movement at the party. All the times you saw people’s mouths moving, there was no sound (the whole show was shot this way). It was all reproduced, so I wanted to make sure that we had the right vernacular, the right sonic style, the right representation – capturing the voice and sound of the times, the region, the culture.

So an emphasis on respectability politics?
Absolutely. At the party, there was a combination of different issues within the black community. In addition to women’s rights, it was about black rights and lesbian rights, and there were conflicts within that group of women.

Patty Connolly and Mark Sussman of The Loop Squad and I had to do a lot of research. It was important to find the right (loop) actresses who could portray that era, that time and culture, and come up with what the issues were that were being discussed within the different timelines that were covered in the show.

We had the opportunity to record a political rally held in LA. For the scene where Phyllis shows up in DC, and there’s a large group of women activists in front of the government building, the Bernie Sanders rally provided the exterior spatial perspective I needed. Adding in the walla group made it feel like it was all women discussing the issues of that time period.

What recording methods did you use?
Because I didn’t have a massive budget to record enormous amounts of people, I had to create hundreds of people with a small group of actors and actresses. For Mrs. America, I grabbed the big ADR room at the old Todd-AO building. Working with our ADR mixer, Jeffrey Roy, I brought in a bunch of my own mics and placed them in different places within the room. Traditionally, ADR stages use shotgun microphones to get rid of any ambience or size of the room. I didn’t do that at all. I wanted to use the acoustics of the room as an important component of the performance.

In using the room, I had to position the actors in strategic places within the room to accomplish a given scene. To get another perspective, I had them stand facing the wall one or two feet away, or in the middle of the room facing each other, or back to back in a line.

In Episode 3, when all the men were running to take back their seats in the convention center, I had them (the loop actors) running really fast in two opposing circles to try to create the feeling of motion and energy. By combining these perspectives and placing them in different speakers during the mix, it gave the scene a certain “spatial-ness” and energy. I loved using the acoustics of the room as a color and a major part of the illusion.

What mics were you using, and did you use any shotgun mics despite not relying on them?
The stage had a Sennheiser MKH 416 shotgun for specific lines, but I prefer using a Sennheiser MKH 800 more often than not. I like the midrange clarity better. For spatial effect, I used a pair of MKH 8040s in ORTF pattern in front (with the MKH 800 in the middle), while in the back I used the Sanken CSS-5 or the DPA 5100, which I moved around a bunch. This gave me the option to have a 5.0 perspective or to use the rear mics for an offstage or defocused perspective.

Each mic and their placement served as a kind of paint brush. When I sent my tracks to effects mixer Andy King at Westwind, I didn’t want to just bathe it in reverb because that would smear the spatial image. I wanted to preserve a 5-channel spatial spread or ambience of the room, so the left was different from the right and the front was different than the back, giving a kind of a movement within the room.

Working from home during the COVID-19 shutdown.

Did quarantine affect the post process?
Halfway through the mix, the virus hit. So little by little, we didn’t feel comfortable being in the same room together for safety reasons. We looked at different streaming technologies, which we had to figure out quickly, and decided to go with Streambox for broadcasting our mix in real time.

We ended up broadcasting privately to the showrunner, the producers and the picture editors. Our music editor Andrew Silver and I were online most of the time. At the end, the only people on the stage at Westwind were our two mixers, with our mix tech Jesse Ehredt in a room next to the dubbing stage and our first assistant Chris Richardson in his edit room down the hall. Everybody else was remote.

Doug Kent introduced us to Flemming Laursen and Dave Weathers of Center Point Post who supplied us with Streambox. We came up with something that worked within the bandwidth of everyone’s download speeds at their houses, since the whole country was working and going to school online. This challenged everyone’s capabilities. When picture and audio started to degrade, Flemming and I decided to increase the buffer size and decrease the picture quality a little bit, which seemed to solve a lot of our issues during peak usage times.

We used Zoom to communicate, allowing us to give each other notes in real time to the stage. I’ve got a similar setup at my home studio to what I have in Burbank, so I was able to listen in a quality environment. At the end of the day, we sent out QuickTimes in both 5.1 and stereo for everyone to listen to, which supported their schedules. Also, if a streaming glitch happened while we were Zooming or streaming, we could verify that it wasn’t in the mix.

It added more time to the process, but we still got it done while maintaining the quality we strived for. Being online made the process efficient. Using Zoom, I would contact dialogue editor Mike Hertlein, who was working from home, for an alternate line or a fix during the mix (with clients on Streambox). Fifteen minutes later we had it in the session and were mixing it.

Did you record walla groups remotely?
Yes, for some of Episode 8 and all of Episode 9. I’d normally record 10 to 15 actors at a time, recording five to eight takes of those 10 to 15 actors, each with a different acoustical perspective. Since Todd-AO was closed due to the pandemic, I had to come up with a different solution. I decided to have all the actors record in their closets or booths if they had them. They recorded into their own recording systems, with each actor having his or her own unique setup. The first thing I had to do was teach a number of actors how to record (basic audio and delivery).

I used Zoom to communicate and direct them through the different scenes. I could hear well enough through group chat on Zoom, and I was able to direct them and provide them with picture by sharing my second screen, like we do on an ADR stage. They would all record at once. From that point, I could direct an actor, saying, “You’re doing too much of this” or “You’re too loud.” I needed to maintain what we had done in previous episodes and keep that blended feel.

Can you talk about benefits and negatives to working this way?
A benefit was that every actor was on a separate track. When I record everybody in a group at Todd-AO, if one person’s off, the whole recording has to be scrapped. Separation let me choose whether I would use someone’s take or not. They didn’t pollute each other’s performances.

When it came to editing, instead of being five or six tracks (each containing eight to 15 actors), now it was 100 tracks. I had five to eight takes of each actor, so when combined, it made for a lot of tracks. Editing those took quite a bit more time. I had to EQ and clean up each actor’s setups, using different types of reverbs to fit the room (which Andy King and Christian Minkler did as well). We had created such cool sounds from previous episodes; the goal was to see if we could match them. It was a bit of a white-knuckle ride. We honestly weren’t sure we could pull it off. But when we were finished, Dahvi let me know she really couldn’t hear a difference between Episode 9 and the previous episodes.

How did you approach the scene in Episode 8, where Alice mixes cocktails with a “Christian pill” and ends up sharing a meal with a group of lesbian feminists? Did you consciously lean toward the surreal given how much time it took to make the home recordings blend naturally?
We had lots of discussions. At first, we wanted to try doing something a little out there. Basically, “How does Alice hear this?” We wanted to be consistent, but we wanted to be able to tell the story. Sarah Paulson did such a great job of portraying being drugged that we thought maybe we should take a step back and let her run with it a little bit, rather than trying to make something that we don’t see. Picture editor Todd Downing did a fantastic job of editing, which enhanced Sarah’s performance — giving it a psychedelic feel without going way over the top.

We wanted to stay organic. We manipulated the mother’s voice on the phone a little when Alice’s pill started to take effect. For that scene, we recorded Alice’s mother’s lines on a phone during quarantine, and it worked out because the futz coming from recording on a phone translated quite well. To keep it organic, I did some subtle things: slowed down the crowds without affecting pitch and inserted backward and forward voices and blended them together so they would sound a little odd.
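Gershin doesn’t say which plugins handled these treatments, but a minimal Python sketch of the same general moves (slowing a recording without shifting its pitch, then blending forward and reversed copies of voices) might look like the following; the file names and stretch amount are hypothetical.

```python
# Sketch of the kind of processing described above, not the actual session:
# slow a crowd recording without changing its pitch, then blend a voice track
# with a reversed copy of itself. File names and amounts are hypothetical.
import numpy as np
import librosa
import soundfile as sf

crowd, sr = librosa.load("crowd.wav", sr=None, mono=True)
voices, _ = librosa.load("voices.wav", sr=sr, mono=True)

# Phase-vocoder time stretch: rate < 1.0 slows the audio while preserving pitch.
slowed = librosa.effects.time_stretch(crowd, rate=0.85)

# Mix forward and backward copies of the voices so they read as slightly "off".
reversed_voices = voices[::-1]
n = min(len(slowed), len(voices))
odd_bed = slowed[:n] + 0.5 * voices[:n] + 0.5 * reversed_voices[:n]

sf.write("odd_crowd_bed.wav", odd_bed / np.max(np.abs(odd_bed)), sr)
```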

During the scene with the nun, at a certain point we replaced the nun’s voice with Cate’s voice, so she heard Cate’s voice talking through the nun’s performance. We did a number of other things and supported the hard cuts and time travel feel.

Overall it seemed like half my job was coming up with ways to keep working, creating new workflows, dealing with constant change. You’d have an hour’s notice to come up with plan B, plan C, plan D, and “How do we do this?” We’d all talk about it and say, “Let’s try this.” If that worked, cool. On to the next challenge!


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

Squeak E. Clean Studios adds Max Taylor, ups Amanda Patterson

Squeak E. Clean Studios, which offers music compositions, audio post, music supervision and licensing, has added senior producer Max Taylor to its Los Angeles team and promoted Amanda Patterson to executive producer.

Taylor joins the studio after spending four years at 20th Century Fox coordinating music for the drama Empire. A Chicago native, Taylor grew up amid the music industry, spending his formative years around concerts and musicians with his father, a sound engineer. He studied at Butler University before kicking off his career in music ticketing and promotions. He spent five years working across multiple functions of live events between Chicago, Los Angeles and New York City, liaising between artists, management teams and venues to optimize the guest experience. In 2016, he joined the production team on Empire as a music production liaison, overseeing the end-to-end process for the music-driven series.

Patterson has over a decade of experience in commercial music production, ranging from original composition to sync and sound design for film and advertising. In her two years with Squeak E. Clean Studios, she has produced work for such clients as Facebook, Pepsi and The North Face, as well as the award-winning spot Sport Changes Everything for Nike that features 14-year-old fighter Chantel Navarro.

Her passion for music made her a serious fan at an early age, and she surrounded herself with artists and music fans as impassioned as she was. After college, her musician friends began to swap touring gigs for longer-term opportunities, exploring the early days of the commercial music shop. This landed her a job at Black Iris Music, producing out of its Richmond, Virginia, office and later moving to Portland, Oregon, to launch the shop’s West Coast presence. After that, she joined Marmoset Music before making the move to Los Angeles to work with Squeak E. Clean Studios.

“Through the murky waters that the world has been wading, it’s a welcome exception to speak of bright and positive news, and these two are those exceptions. Amanda has been a leader without the title ever since our merger, and her seasoned producing skills and ability to lead by example will make her a superb EP. On top of this, to bring on board a senior producer the calibre of Max, with a wealth of experience in composing culture-defining music for Empire, will not only add another layer of skills from another medium, but further expand our offering with another serious talent to help drive us into the future,” says Hamish Macdonald, managing director, Squeak E. Clean Studios.

Creating the soundscape for Hulu’s Normal People

By Patrick Birk

Normal People, a new Hulu series based on Sally Rooney’s 2018 novel of the same name, details the intense yet strained romance between Marianne Sheridan (Daisy Edgar-Jones) and Connell Waldron (Paul Mescal). The athletic and popular Connell and the witty and socially outcast Marianne attend the same high school in County Sligo, Ireland. When the wealthy Marianne reveals her feelings for Connell — whose mother works as housekeeper for Marianne’s family — he begins a relationship with her on the condition of it being a secret. After a turbulent final year in their hometown, the two reconnect at Trinity College Dublin, where the tables have turned socially.

Steve Fanagan

The series was written by Rooney, Alice Birch and Mark O’Rowe and directed by Lenny Abrahamson and Hettie Macdonald. (You can see our interview with director/EP Abrahamson about the series here.)

I had the pleasure of chatting with Steve Fanagan (Game of Thrones, Room), who was the supervising sound editor, sound designer and re-recording mixer on the series. Fanagan also contributed to the source music on Normal People, which seamlessly interacts with both the design and a phenomenal licensed soundtrack. From Ireland but now based in London, Fanagan had a lot of knowledge to share on building the soundscape of this world.

Fanagan began his process working on the sound design and editorial at his studio in London before heading to Dublin to mix at picture and sound house Outer Limits, which is owned by Abrahamson’s longtime colorist Gary Curran. Fanagan finds that coordinating with the picture editors prior to the shoot is often helpful. In the case of Normal People, second director Macdonald worked with her editor, Stephen O’Connell, in London. Abrahamson worked with his editor, Nathan Nugent, in Dublin at Outer Limits. O’Connell assembled at Outer Limits then came over to London for the fine cutting.

Let’s find out more from Fanagan, how he works with the picture editors and his workflow on the series.

Let’s talk about working with picture editors. In Episode 5, there’s a shot where the music stops with a sudden cut to Jamie cracking a pool ball with his cue, right on the transient. I’ve met a few sound designers who use transients on cuts as a technique.
It’s a funny thing there. I have to put my hands up and say all credit goes to Nathan Nugent, who cut that episode. That was very much his design. In editorial and then in the mix, we worked on enhancing and expanding on that idea. One of the lovely things about working with a film editor like Nathan is that he is really sophisticated with sound and music.

The way I tend to work is to get my hands on the script at the beginning of the process, which always happens on Lenny’s projects. I then build a library of stuff I think will be useful. I might start mocking up some tonal, more abstract sound design, but I’m also thinking about all the fundamentals: room tone, wind or whatever environmental material they might need. I always make sure to give that to the editor in advance. Then, as the cutting begins, there is a library to pull from rather than the editor having to go search for things. Hopefully, in doing that, we’ve begun a bit of a conversation, and, hopefully, it means the editor is using stuff that I think is useful.

There’s something about a guide track that can become very loved because it’s working as they assemble a cut. It’s also a good way around copyright issues with temp effects while supplying the cutting room with high-quality material. I also always try to go and record material specifically for the show. For this series, I spent four days at the locations and got access to all the different houses, to the school, to parts of Trinity College.

A lot of the extras are actual Trinity students?
Yes, absolutely. They had about 130 extras, and from what I know, a bunch of those were actual Trinity students. That meant that I got some really good crowd material with that specific crowd, but I also got to just wander around the campus freely with my recording equipment, which you wouldn’t ordinarily be allowed to do.

On Connell’s first day in Trinity, he comes off Dame Street, which is a busy front road. He walks through the front arch into the front square, and there is something quite magical about leaving this busy city street. As you go through the front arch, it’s an echo-y space, and there’s quite a lovely acoustic to that. There’s always life in it. And when you come into the front square, a lot of the city disappears. Those three locations have such different acoustic properties to them. To be able to record a whole lot of options for those and build a piece that hopefully does that experience justice felt like a real gift.

I noticed a lot of character in the reverbs on each of the voices. Did you take impulse responses of the spaces?
I did. We started to do that with Lenny on his last film, The Little Stranger, and it worked really well. For Normal People, I captured an impulse response from every location I went to. Sometimes they work brilliantly, and sometimes they give you a really good idea of the kind of reverb you’re looking for. So reverb on this series is very much a mixture of Altiverb and those impulse responses, plus Exponential Audio PhoenixVerb for interiors. I also used Slapper from The Cargo Cult for exteriors and Avid’s ReVibe as another option on the buss. I try not to be purist about anything.
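The actual reverbs here came from Altiverb and the other plugins Fanagan names, but as a rough illustration of what a convolution reverb does with a captured impulse response, a minimal Python sketch might look like this; the file names and wet/dry ratio are hypothetical.

```python
# Rough illustration of convolution reverb with a captured impulse response.
# This is a sketch, not the plugins named above; file names are hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dialogue_dry.wav")        # mono dialogue line
ir, sr_ir = sf.read("front_arch_ir.wav")     # impulse response of the space
assert sr == sr_ir, "resample the IR to match the dialogue first"

# Convolving the dry signal with the IR "places" the line in the recorded space.
wet = fftconvolve(dry, ir, mode="full")[:len(dry)]
wet /= np.max(np.abs(wet)) + 1e-9

mix = 0.7 * dry + 0.3 * wet                  # keep some dry signal for clarity
sf.write("dialogue_in_space.wav", mix / np.max(np.abs(mix)), sr)
```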

When you get to hang out in the places where they’re shooting, you have a bit of a feel for how they sound. And you remember that if you were speaking at that level in that space, there would be a certain size of reverb on it. If I’m quieter or louder, that changes.

How else do you prepare for a project, apart from building that ambience library?
I love building a session template with plugins that I think will be appropriate for the show. With this, it was like, what do I think will be useful to us across all 12 episodes? For the noise reduction, dialogue/ADR supervisor Niall Brady is an iZotope RX wiz, and he used a lot of that on the dialogue track. I tend to use a mixture of Cedar and Waves WNS. I really love FabFilter Pro-Q 3 as an EQ. I love the versatility of it. If I want to put an extra notch or something in there, I can just keep adding to it. I also love their de-esser.

I always have some sort of compression available, but I don’t have it turned on as a default. In this case, I was using Avid Pro Compressor and more often than not, that’s turned off. I love the idea of trying to figure out the simplest approach to the cleanup and to the EQ end of things, and then trying to figure out what I can do with volume automation. After that, it’s just about figuring out if there’s a little bit of extra polish that’s needed through compression.

I always have multi-band compression available to me. On my dialogue auxes, I’ll have some extra compression or de-essing and limiting available if I need it. The one thing that I might leave on the buss is a limiter, but it’s doing almost nothing except managing the peaks. I keep all of my plugins and inserts bypassed and only enable them as I feel I need them.

How did you handle metering?
What’s interesting with the BBC spec is that they don’t just want, for example in our case, a -23 LUFS with a -3 dB true peak. They also want to make sure that the internal dynamic of that spec isn’t too broad for broadcast television — to make sure that at no point are you really hammering music at a very high level or allowing the quiet scenes to be so quiet that people volume surf. We worked hard to keep a good dynamic within that spec. I use VisLM to do those measurements because I quite like the Nugen interfaces. I also use their LMCorrect.
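Fanagan used Nugen’s VisLM and LMCorrect for this, but as a rough sketch of checking a mix against that kind of spec (-23 LUFS integrated, -3 dB true peak), the snippet below measures integrated loudness with pyloudnorm and uses sample peak as a crude stand-in for true peak, which strictly requires oversampling. The file name is hypothetical.

```python
# Sketch of a loudness sanity check, not the Nugen tools used on the show.
# Sample peak is a rough stand-in for true peak, which requires oversampling.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("episode_mix.wav")     # hypothetical final mix file

meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
integrated = meter.integrated_loudness(data)
peak_db = 20 * np.log10(np.max(np.abs(data)) + 1e-12)

print(f"Integrated loudness: {integrated:.1f} LUFS (target -23.0)")
print(f"Sample peak: {peak_db:.1f} dBFS (true-peak ceiling -3.0)")

if abs(integrated + 23.0) > 0.5 or peak_db > -3.0:
    print("Outside spec: adjust gain or limiting before delivery.")
```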

Dynamic range was used to great effect in Normal People. In a show like this, where so much of the drama is unspoken, when explosions happen — like when Marianne’s brother becomes physically abusive — they rocked me.
I think it’s that beautiful idea in sound — quiet and loud are always relative. If something needs to feel loud, then if you can have near-silence before it, you’ll get more of that jump in the moment when the loud bit happens.

It’s also true with the quiet stuff. An example of this in the series is their first kiss in Episode 1. It begins as a normal scene, wherein we’re hearing the ambience outside and inside Marianne’s house. The room tones and that environment are all very live and present, but as the actors lean into each other, it feels natural to start to pull that material away to create some space. This allows us to focus on their breathing and tiny movements because, if you were in that situation, you’re not going to be thinking about the birds outside. I can’t really overstate how much of a joy it was to work on this because all of that material is there. You’re working with this beautiful source material and the book — these beautifully realized scripts — and with directors who’ve really thought that space out. And they’re working with these actors, Paul and Daisy, who just are those characters.

There’s a beautiful moment, the morning after Marianne meets Connell at Trinity. She’s in her boyfriend’s flat and he gets up and asks her if there’s coffee. The look she gives him, you know he’s a dead man walking. It’s just that idea of being allowed to sit in people’s space, being trusted in a lot of ways as an audience member to observe and to infer rather than sort of being hammered over the head with exposition.

The screeners I received for this interview were not finalized in terms of picture or sound. As a sound designer I was grateful, because I could see behind the curtain and get insight into your process. It was like hearing a song you can already tell is good before the final mix. Apart from building ambience banks and templates at first, how do you whittle down a project to its final, most polished form?
What you’re always trying to do is to be open to the project that’s in front of you. Obviously, the sound work is always a team effort, so Niall Brady, our dialogue and ADR supervisor, is very involved in this as well.

I really love sound but also cinema and storytelling. The work that we get to do as sound designers is an amazing alchemy of all of those things. As you approach the work, you’re just trying to find the way into a scene or a character. If you can find small sounds that help you begin that process, some simple building blocks, then hopefully you can go on a journey with the sound work that will help your director realize the vision that he or she has for the work.

A lot of the time, that can be about really subtle stuff. At times it’s adding things like breath and very close-up breath and nonverbal utterances. The impetus for this in Normal People is intimacy — the idea that these characters are so close together and so inhabiting each other’s space that you’d hear those kinds of noises. A really lovely thing about sound is that it’s a very subconscious experience in a funny way.

Often, the moment we become aware of sound in a film is when it’s not working. So you’re trying to find the things that feel natural, honest and true to what you’re watching. Here, that began with trying to figure out what the environments might sound like. You’ve got this lovely contrast that is a real feature of the book and the series, which is that these two people have quite different backgrounds and quite different home lives.

The Foley crew that worked on this was Caoimhe Doyle and Jonathan Reynolds, and their work is incredibly specific in that way as well. From trying to pick the right shoes for a character to the right surface to miking techniques, all so that the right acoustic is on that sound.

This exploration is also facilitated by the collaboration that you have with the entire production. In this case, the collaboration is very much led and directed by Lenny, who has an amazing insight into everything that we’re working on, and his editor Nathan Nugent, who always has a really clear sound and music pass done on an episode. We always have a very interesting place to start. A lot of the time, rather than doing formal spotting sessions, we’ll have conversations. Lenny likes to talk to us in preproduction. I was in touch with the location sound mixer Niall O’Sullivan, who also worked on Lenny’s film Frank, to get ahead of any challenging shoot locations.

Then, what begins to happen is that Lenny and Nathan will share some of the picture with us, whether it’s some scenes that they’ve assembled or full episodes that are work in progress, and we tend to just start working on them. We’ll send some dialogue, music and effects bounces to them, so we’re starting to build the track a little bit. I’m always mixing as I cut because I feel like it’s the best way for me to present the work and figure out what it is. So we’re developing the mix from the beginning of editorial through to the end of the final mix. Sometimes you’re having conversations with them about what they liked or didn’t like, and sometimes you’re getting the next version of the cut back, and you can see from their AAF what they’ve used or haven’t used.

Also, as you’re watching the cuts, you’re looking for those notes from them that may appear on a card or a subtitle on the screen. So it’s a really helpful way to work.


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.


Review: Sound Devices MixPre-3 II portable 5-track audio recorder

By Brady Betzel

Even though things are opening up slowly, many of us are still spending the majority of our time at home. Some of us are lucky enough to be working, some still furloughed and some unemployed. Many are using the time to try new things.

Here is one idea: While podcasts might not be a moneymaker out of the gate, they are a great way to share your knowledge with the community. Whether you’re making video or audio, there is one constant: You need high-quality audio recording equipment. In this review, I am going to be covering Sound Devices’ MixPre-3 II three-preamp, five-track, 32-bit float audio recorder.

While at Sundance this past January, I saw someone using this portable recorder. It seemed easy to use and very durable. I was intrigued enough to reach out to Sound Devices about a review; they sent me the MixPre-3 II, which is their smallest and most portable recorder. The box can run off a power cable, USB-C or four AA batteries. The MixPre-3 II has several new advancements over the original MixPre-3, including 32-bit float recording, USB audio streaming, recording up to 192kHz, faster hardware, internal LTC timecode generation and output, adjustable limiters, auto-copy to a USB drive and a pre-roll buffer increased to 10 seconds. But really the MixPre-3 II is a rugged field audio recorder, voiceover recorder, podcast recorder and more. It currently retails for around $680 from retailers like Sweetwater and B&H.

One of my goals for this review was to see how easy this recorder was to set up and use with relatively little technical know-how. It was really simple. In my mind, I wanted to plug in the MixPre-3 II and begin recording — and to my surprise, I was up and running within 10 minutes.

Up and Running
To test it, I grabbed an old AKG microphone (which I got when I purchased an entire Avid Nitris offline edit bay after Matchframe went out of business), an XLR cable, my Android phone and a spare TRRS cable to plug my phone into the MixPre-3 II for audio. I accessed the menus using the touch screen and the gain knobs. I was able to adjust the XLR mic on Input 1 and the phone on Input 2, which I set by pushing the gain knob to assign the input to the aux/mic input, and I plugged my headphones into the headphone jack to monitor the audio.

The levels on the on-screen display used in conjunction with my headphones let me dial in my gain without raising the noise floor too much. I was actually impressed at how quiet the noise was. I think I can attribute the clean audio to my AKG mic and Kashmir microphone preamps. The audio was surprisingly clean, even when recording in a noisy garage. I used Spotify on my Android phone to mix in songs while I was talking on the AKG (like a podcast), and within 10 minutes, I was ready to record.

Digging Deeper
Once I was up and running, I dove a little deeper and discovered that the MixPre-3 II can connect to my phone using Sound Devices’ Wingman app. The Wingman app can trigger recording as well as monitor your inputs. I then remembered I had a spare Timecode Systems Ultra Sync One timecode generator from a previous review. One essential tool when working with backup audio or field recording during a video shoot is sync.

Without too much work, I plugged in the Ultra Sync One using a Mini BNC-to-3.5mm cable connector to send mic level LTC timecode to the MixPre-3 II via the aux/mic input. I then enabled external timecode through the menus and had timecode running to the MixPre-3 II. The only caveat when using the 3.5mm plug for timecode from the Ultra Sync One is that you lose the ability to feed something like a 3.5mm mic or phone into the MixPre-3 II. But still, it was easy to get external timecode into the recorder.

It is really amazing that the MixPre-3 II gives users the ability to be up and running in minutes, not hours. Beyond the simplicity of use, you can dive deeper into the Advanced Menu to assign different inputs to different gain knobs, control the MixPre-3 II over USB, use timecode or HDMI signals to trigger recording and much more.

Summing Up
Sound Devices produces some great products. The MixPre-3 II costs under $700; while that might not be cheap, it’s definitely worth it. The high-quality casing and ease of use make it a must-buy if you are looking for a podcast recorder, field audio recorder or mixer.

In addition to its product line, Sound Devices is also one of those companies making a difference during the pandemic.

The past couple of months have been very eye-opening for our industry and the world. We are seeing the best from people and businesses. My wife began sewing masks using her own fabric for hospital workers (for free), people are donating their time and money to bring meals to children and the elderly, and we’ve seen so many more amazing acts of kindness.
Sound Devices recently began producing face shields. See our coverage here. After we get through these hard times, I know that I and many others will remember the companies and people who tried to do their best for the community at large. Sound Devices is one of those companies.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

 


Creating a sonic identity for Fiat’s new electric car

More and more, people are looking for an alternative to gas-powered vehicles, and Fiat’s new electric car is one of those alternatives. The Fiat 500e was going to be unveiled at the Milan Auto Show, but the COVID-19 crisis hit, and new plans were made. Instead, the car was introduced via a video shot on nearly empty Italian streets by Fiat’s Olivier François.

Electric cars are quiet, so a lot goes into creating their sonic identity. For the 500e, Fiat called on music agency Syn, which worked remotely with Red Rose Productions and Helsinki-based voice artist Rudi Rok. They were all tasked to create something that sounded like a human voice combined with the melody of “Amarcord” by Nino Rota — marrying the innovative and the organic.

Nick Wood

Syn creative director Nick Wood turned to Rok, who has given voice to a wide range of projects — from the engine sound of electric cars to games, movies, exhibitions and virtual reality experiences. Under EU legislation, automakers must equip any electric vehicle with an Acoustic Vehicle Alert System (AVAS). This system emits warning noises when the car is moving at a slow speed. Rok’s voice became the interpretation of an engine sound where none exists, which was layered with the orchestral track for a sense of soaring beginnings.

While we typically cover audio for picture, we thought that others could learn from Syn’s experience in sonic branding and from this project in particular.

Syn created a making-of video that walks us through the process, and we reached out to Wood to find out more.

How early did you get involved?
We received a phone call at the end of November 2019 from Olivier François, the president/CMO of Fiat, and he explained they were looking for something unique for the engine start-up and hadn’t yet found it. I thought we’d have two to three months to really nail this, but he told me he needed our first presentation by December 18, so we jumped in.

What was the creative brief you were given by Fiat?
There is a legal requirement for AVAS systems to generate a sound when electric road vehicles are reversing or running below 20km/h (about 12mph). The reason for limiting the speed is that at higher speeds, the tire noise usually drowns out the engine. The brief touched on both the practical and legal aspects, with strict technical requirements that would be tested and the creative opportunity to create a unique sonic identity for this car.

Olivier made it very clear he wanted the AVAS sound to have character and personality to enhance this iconic car and its established heritage and history. What they didn’t want was a generic-sounding turbine noise, the spaceship, Jetsons and other cliché engine sounds. The brief was more about what they didn’t want, with the expectation that our creative abilities would bring something unique, quirky and fun.

Were there any specific technical constraints in terms of approach or delivery for use in the car?
Yes, there were many. First of all, the quality of the speaker didn’t allow for any low-end bass, so forget anything inspired by THX or IMAX. The specifications for the playback system for the sound design were quite limited. The audio file had to be a mono, 16-bit WAV with a sample rate of only 32kHz, which is far less than our usual production output.

In addition, the sound frequency band had to fall between 300Hz and 8kHz, again quite a limited range to work with when creating the sonic identity that would become the sound of the 500e engine. The additional challenge was to make sure that the composition could be looped seamlessly without any glitches.
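A hypothetical pre-delivery check against those constraints might look something like the Python below. This is only a sketch of the idea, not Fiat’s or Syn’s actual test procedure, and the file name is made up.

```python
# Hypothetical check of a candidate AVAS file against the spec above: mono,
# 16-bit WAV at 32kHz, with most of its energy between 300Hz and 8kHz.
import numpy as np
import soundfile as sf

info = sf.info("avas_loop.wav")             # hypothetical candidate file
assert info.channels == 1, "must be mono"
assert info.samplerate == 32000, "must be 32kHz"
assert info.subtype == "PCM_16", "must be 16-bit PCM"

data, sr = sf.read("avas_loop.wav")

# How much spectral energy falls inside the 300Hz-8kHz band?
spectrum = np.abs(np.fft.rfft(data)) ** 2
freqs = np.fft.rfftfreq(len(data), d=1.0 / sr)
in_band = spectrum[(freqs >= 300) & (freqs <= 8000)].sum() / (spectrum.sum() + 1e-12)
print(f"{in_band:.1%} of the energy sits inside 300Hz-8kHz")

# Rough loop-seam check: start and end samples should match to avoid clicks.
print("loop seam difference:", abs(data[0] - data[-1]))
```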

How would you describe the sonic branding? And what aspects of the audio go into sonic branding?
Sonic branding is all about registering an idea or emotion, or both, that become interconnected with a product or company. Here, it’s human and machine that produce movement.

I was looking for elements that could be mixed together to give the engine sound soul and personality and create a recognizable sonic identity. In this instance, the sounds for the Fiat 500e feature the human voice instead of synthesized sounds, and as the car reaches 20km/h, we seamlessly marry Nino Rota’s iconic melody from “Amarcord” with the engine sound, giving the car a very special sonic identity that we hoped would become synonymous with the Fiat 500e.

Rudi Rok

How did you come up with the creative approach?
We tried many approaches working with international teams of sound designers in Tokyo and LA, and they were all good, but they ultimately did not embody the specific personality and essence I was looking for. I traveled to Kyoto and was sitting in a Buddhist temple, and the idea of the human voice came to me when I heard the monks chanting. It’s at that moment I thought of Rudi Rok in Helsinki as the right person to collaborate with. He’s one of the world’s leading vocal artists.

Did you direct Rudi’s performance? How did that collaboration take place, virtually speaking?
I had worked with Rudi on a recent Disney project, and I knew he had the range and creative imagination to use his voice beyond typical sounds. We had several FaceTime sessions to discuss and brainstorm how to approach the legal and technical requirements whilst making this aesthetically inspiring. There were many, many revisions to get this right.

We went through so many variations; we tried adding in different layers and elements and had to mock up the pitch to bend as it would when the car’s speed increases or decreases. We were giving Fiat ideas right up to the deadline of the road test, when the sound had to pass the legal requirements.

What tools were used to create the sonic branding?
To record his voice, Rudi used a custom-made LDC (large diaphragm condenser) microphone straight into a Metric Halo ULN-8 interface. We wanted to capture the authentic voice/timbre without adding in any unnecessary coloration to the source.

Sound design and further manipulation of the source material were done in Ableton Live, also using plugins like the ones from FabFilter and Oeksound. For the mixing and mastering stage, we used Avid Pro Tools as the DAW, importing the files. These were then processed using iZotope’s Ozone suite, which provides a very useful set of clearly laid-out mastering tools, including filters, EQ and dynamics, which are ideal for this type of application. Of course, tools are only a means to an end, and the real magic is in the long-term creative collaboration.

How did the process of creating a sonic brand for a car differ from doing so for a brand campaign that would air on TV, for example?
One big difference is that this was a legally required sound; it has to be tested on a road track with microphones to ensure it passed sonic guidelines. We had limited sonic quality in the speakers that Fiat used, so we could not think big and audiophile like THX or IMAX, no bass.

The frequencies we used had to be checked and pass their tests, and it had to sound pretty interesting. I would say it was a drastically different approach with far more restrictions, and the objective of ensuring that pedestrians know there is a car coming was a considerable responsibility. Functionality and aesthetics combined.

You can see the making-of video here:


AATranslator tool updated for additional compatibility

By Cory Choy

The new beta of AATranslator v6.3.287 has been released and is designed to help pros transfer audio/video projects across multiple platforms. Product makers like Avid and Adobe are constantly updating their tools — Pro Tools, Media Composer, Premiere and Audition — and that can cause compatibility issues. That’s where AATranslator comes in.

AATranslator allows for export of stereo AAFs and additional PTX support, among other things, but more on that in a bit. This is sure to please post sound pros like me, who like to be able to use their tool of choice and still remain compatible with industry standards.

AATranslator, developed by Michael Rooney of Suite Spot, was originally created to allow him to send Adobe Audition (which sprung from the wildly popular Cool Edit Pro) sessions to a mixer using Pro Tools. It has since developed into an extensive session conversion program. That makes it an integral tool for anyone who needs to move a session from one DAW to another, and for anyone who wants to use their DAW of choice for sound design and mix for video edited on an NLE.

AATranslator is utilitarian. Rather than waste time on a fancy-looking GUI, AATranslator focuses on functionality and ease of use. How do you use it? Select the file format you are importing from the top. Select the file format you’d like to export to at the bottom. Click “Generate Output.” Voila! You’re done.

The enhanced version, which supports OMF, AAF and PTX and is therefore necessary for most pro users, costs $199.  While AATranslator is written for PC, Mac users can run it using Winebottler, and the developers will send detailed instructions about how to do it upon request.

AATranslator currently converts sessions to/from Ableton Live, Ardour, Audiofile, Audition, Auria, Cool Edit Pro, Capture, Cubase, DAR, Digital Performer, Fairlight, Final Cut Pro 7 and X, Harrison Mixbus, Hindenburg, iMovie, Lightworks, Logic and Logic Pro (via OMF and AAF), Media Composer/Adrenaline, N-Track, Nuendo, Paris, Premiere, Pro Tools, Pyramix, Reaper, Sadie, Samplitude/Sequoia, SAWStudio, Sonar (via OMF), Soundscape, Soundtrack Pro, Studio One, Studio Live, Symphony, Tascam MX2424, Tascam X48, Tracktion, TripleDat, Vegas, Waveframe and Wavelab.

Cory Choy using AATranslator while working from home.

The newest update sports these shiny new features:
- AAF: Apart from a huge number of fixes and DAW-specific changes, AATranslator can now export stereo AAFs.
- AES31/ADL: The new software fixes a number of Sadie-specific issues and can now read Dalet ADLs and better deal with poly channel media.
- Ableton Live: A lot of improvements in this area, including the ability to read Live 10 sessions.
- Ardour/Mixbus: Many improvements, including support for poly channel media when converting to Mixbus.
- Pro Tools: Many additional details extracted from Pro Tools sessions and Meter, Tempo and Marker descriptions written to PTX files.
- Can now read Audacity, Premiere Pro, SAW 64-bit, StarTrack, Tascam MMR-16, Tascam X48 and TripleDat sessions.
- Many more improvements and fixes related to Hindenburg, Final Cut Pro, OMF, OpenTL, Pyramix, Reaper, Audition, Vegas, Samplitude, Cubase, Nuendo, Studio One and Tracktion.
- Improved support for poly channel media.
- Reduced conversion time for some formats.
- Added a sample rate converter.


Cory Choy is a New York City-based sound mixer with over 17 years of experience. He won an Emmy Award for his work on ABC’s Born To Explore and is a partner at Silver Sound.


The sonic world of Quibi’s Survive

By Patrick Birk

In response to shortened attention spans and an increase in people watching content on smaller devices, Jeffrey Katzenberg and Meg Whitman started Quibi. This new streaming service aims to deliver Hollywood-quality productions, with a twist — the platform is solely focused on mobile viewership, with episodes of each big-budget series divided into “quick bites” that are generally 10 minutes long.

Peter G. Adams

I recently spoke with Peter G. Adams, who composed the score for one of Quibi’s initial offerings, Survive. Directed by Mark Pellington, the show focuses on Jane (Sophie Turner), a suicidal young woman who finds herself in dire straits when her flight crashes in the wilderness. She and Paul (Corey Hawkins), the only other survivor, must try to escape a frozen mountaintop, as Jane continues to wrestle with her suicidal tendencies.

An ASCAP award-winning composer, Adams has credits that include Den of Thieves, Game Night and Amazon’s Too Old to Die Young. He was kind enough to give us some insight into his process.

Was it difficult to pack the emotional depth of a drama into 10-minute episodes? How did you maintain subtlety within the cues given the time constraints of the episodes?
I don’t think so. I mostly just looked at it like I was scoring a movie. Even though the episodes are short, I feel like once you watch it all together, the narrative is more along the lines of what most audiences will be used to in a film. I thought there was enough time within scenes. I didn’t feel like they compressed scenes in order to accommodate the format.

Did you write any of the pop-style songs featured in the show, or did you score around them?
I didn’t write the stuff with the voices, but a lot of the things around it are things that I created. Because we did use some licensed music in this show — some really beautiful pop tunes — I felt like I should probably not use that in the score. I wanted to have the two ideas stand apart from each other a little bit.

What instruments did you record?
We recorded a lot of strings. I recorded myself playing some strings here in my studio, along with things like guitars and bass. We also did some sessions with a small ensemble… just two players. Then we did a chamber ensemble of 25 string players. So mostly I recorded a lot of strings for the emotional punch.

Do you find that recording the real deal gets you further than using a library?
Libraries are great and everybody uses them, but absolutely. I mean there’s just no comparison to having life breathed into the music. We had some nice players on this, and it always makes a difference. I can’t tell you how many times I’ve needed to have players or there weren’t budgets to have players, and it’s never the same.

Do you work out of your home?
I have a little home studio that’s big enough to record in. That’s mostly the way I’ve done it in the last five years or so. I used to not work at home, but now I do, and there’s a lot of benefits right now with COVID, of course.

Were there any skills that you pulled out from the reserve for this project?
One of the great things about working on Survive was getting to write some really fun, emotional themes. I love writing music that can touch people emotionally. That’s something that I probably like doing more than anything else.

Of course, that’s a very subjective thing. Sometimes you think you got it and you don’t, so you have to really collaborate and make sure that people are feeling what you are. You need to make sure that you’re speaking a musical language that is universal enough that people can understand it. I love doing that, and I got to do it on Survive.

I’ve seen “composer cheat sheets” online with chord changes that are supposed to correspond to a certain emotional response. Do you have a framework like that or is it a clean slate every time?
It’s never really a clean slate — I bring my own bag of tricks to the job and have my own path that I tread down; things that I’m used to, musically speaking. But every job is different, so I tailor my language to each one. Sometimes one thing will work and sometimes another thing will work. Cheat sheets can be a good thing; anything that can prompt you to find a creative path is great.

There are some universals — I tend to write shorter themes, because if I can express a theme in eight bars, I like it better that way. If I try to write longer themes upfront, I’m always trying to pare them down throughout the show. If I write a shorter theme that’s succinct, I find that it fits in more places and more subtly.

Sometimes there’ll be references that filmmakers want me to listen to, or ideas they want to share. I take all the feedback and input and incorporate that into what I do. I did that with Survive. Mark Pellington was very involved in the scoring process; I would call it a collaboration. The producers were also involved creatively, especially Cary Granat.

When you’re using synth, do you design all your own patches? Do you find presets useful as a starting point or have you ever just used a preset in your project?
All the above. A lot of times, I’ll start with presets. If I don’t have the time, that’s where I start and then I always try to make it my own if I can. That’s not to say that some things haven’t just gotten in there that are very “preset-y,” but I do my best to customize things.

I really like that process of sound design and creating synth patches, or processing sounds or recording sounds and then mixing those sounds together. I do that almost as a pre-production routine a lot of times. I start to create a sound world, and I really like doing that, so I try to get the time to do it on every project if I can, and always make it custom.

What kind of sound palette did you begin assembling when you started working on Survive?
I started writing demos a couple of weeks before I had picture. I read the script, and the creative process arose from conversations with Mark Pellington, and we would exchange music we liked.

Mark had some of my previous scores and would point out particular pieces he thought could really work. From those early conversations, I started to write demos and create a sonic world for Survive. I talked about instruments that he liked, and I would couple those instruments with an ambient sound palette with me overdubbing myself on guitar or on strings, or creating feedback, or me taking a synth, stretching it out and ramping it.

I tried to do whatever I could to create this ambient world. The world of Survive is less synth-oriented, so we don’t hear a lot of overt synth sounds in the show. We mostly hear real instruments, whether they be bowed percussion instruments or strings or whatever. Survive has more of a handmade feel. I like doing that a lot; I just try to record myself and overdub myself and create an atmosphere that way.

I noticed some changes in instrumentation throughout. For example, it sounded like a ghostly flute comes into play around the time when Jane and Paul are leaving the plane together.
That’s actually a violin, but it’s played in a way where it sounds flute-like; I like doing this. We were talking about samples before, and there are some things that you just can’t get from samples. One of them is something like trying to make a string instrument sound like a flute or something like that.

For the vast emptiness of Survive — the mountain — I played strings to create something with a handmade feel. I wanted it to feel weird and lonely and personal and also just a little off. Sort of both natural and unfamiliar. This allows the audience to feel unsettled and uncomfortable, just as the characters are in that setting.

Did you coordinate with the post sound department? There’s a moment in Episode One where Jane’s pocket watch is ticking and it intersects rhythmically with the score.
The sound designers did their own pocket watch and I did a watch as well. That’s definitely one of Mark Pellington’s hallmarks: having sound design that might be score and score that might be sound design. I sampled in some stopwatches and timed them up to be in rhythm with the score and then brought them in. I also did some processing on them. So it’s not just one stopwatch, it’s a bunch of different ones that have different processing on them, and then I faded them in and out and brought them in when they would request them.

There are some huge sound design moments in the show, like the plane crash and the avalanche. How do you judge where to focus the score and where sound design will take prominence?
It’s different for every project. For the plane crash, I did try to go pretty big, but you also know that you’re going to be sort of fighting with sound design. So I tried to do things that would not ever sound like what you’re actually seeing on screen. At the same time, I put in some things like rises in the plane crash section, where there’s also the sound of the jet engines happening. Those are also speeding up. So once again, it’s like, “Is this score or is this sound design?”

In the case of the avalanche, there’s a lot of score in there, but I don’t think it’s too big in the mix. Mostly because there’s all this production sound and sound design going on in those sections.

Let’s talk about your path. How did you get into scoring?
I’ve been involved with music since I was seven. I played a bunch of different instruments and in a lot of different kinds of ensembles. I started doing songwriting and playing in bands in my teens, and then that led to me getting a composition degree.

When I started a degree in music, I didn’t think about doing film. I had always liked film music, and it was always on my radar, but I’m not from California, so it seemed like something that would be hard to do and hard to get into. But I got a chance to intern with a composer in LA, and once I did that I realized that what I was doing back at home was so similar to what people were doing out here in LA. And it would be an easy jump for me.

What were the similarities that made getting into scoring such a seamless transition for you?
I was doing songwriting and playing in bands and I had a very avant garde classical music education. So all those things immediately came into play.

There were always sound design elements that I had to work with in scoring and that’s something that I had been doing in my records. There was the need to write nice melodies with nice chords that people might relate to. That’s something that I had been working with in pop music, so it translated pretty well.


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

 


Barking Owl adds first NYC hire, sound designer/mixer Dan Flosdorf

Music and sound company Barking Owl has made its first New York hire with sound designer/mixer Dan Flosdorf. His addition comes as the LA-based company ramps up to open a studio in New York City this summer.

Flosdorf’s resume includes extensive work in engineering, mixing and designing sound and music for feature films, commercials and other projects. Prior to his role at Barking Owl, Flosdorf was mixer and sound designer at NYC-based audio post studio Heard City for seven years.

In addition to his role at Heard City, Flosdorf has had a long collaboration with director Derek Cianfrance creating sound design for his films The Place Beyond the Pines and Blue Valentine. He also worked with Ron Howard on his feature, Made in America. Flosdorf’s commercial work includes working with brands such as Google, Volvo, Facebook and Sprite and multiple HBO campaigns for Watchmen, Westworld and Game of Thrones.

Leading up to the opening of Barking Owl’s NYC studio, Flosdorf is four-walling in NYC and working on projects for both East and West Coast clients.

Even with the COVID crisis, Barking Owl’s New York studio plans continue. “COVID-19 has been brutal for our industry and many others, but you have to find the opportunity in everything,” says Kelly Bayett, founder/creative director at Barking Owl. “Since we were shut down two weeks into New York construction, we were able to change our air systems in the space to ones that bring in fresh air three times an hour, install UV systems and design the seating to accommodate the new way of living. We have been working consistently on a remote basis with clients and so in that way, we haven’t missed a beat. It might take us a few months longer to open there, but it affords us the opportunity to make relevant choices and not rush to open.”


Posting Michael Jordan’s The Last Dance — before and during lockdown

By Craig Ellenport

One thing viewers learned from watching The Last Dance — ESPN’s 10-part documentary series about Michael Jordan and the Chicago Bulls — is that Jordan might be the most competitive person on the planet. Even the slightest challenge led him to raise his game to new heights.

Photo by Andrew D. Bernstein/NBAE via Getty Images

Jordan’s competitive nature may have rubbed off on Sim NY, the post facility that worked on the docuseries. Since they were only able to post the first three of the 10 episodes at Sim before the COVID-19 shutdown, the post house had to manage a work-from-home plan in addition to dealing with an accelerated timeline that pushed up the deadline a full two months.

The Last Dance, which chronicles Jordan’s rise to superstardom and the Bulls’ six NBA title runs in the 1990s, was originally set to air on ESPN after this year’s NBA Finals ended in June. With the sports world starved for content during the pandemic, ESPN made the decision to begin the show on April 19 — airing two episodes a night on five consecutive Sunday nights.

Sim’s New York facility offers edit rooms, edit systems and finishing services. Projects that rent these rooms will then rely on Sim’s artists for color correction and sound editing, ADR and mixing. Sim was involved with The Last Dance for two years, with ESPN’s editors working on Avid Media Composer systems at Sim.

When it became known that the 1997-98 season was going to be Jordan’s last, the NBA gave a film crew unprecedented access to the team. They compiled 500 hours of 16mm film from the ‘97-’98 season, which was scanned at 2K for mastering. The Last Dance used a combination of the rescanned 16mm footage, other archival footage and interviews shot with Red and Sony cameras.

Photo by Andrew D. Bernstein/NBAE via Getty Images

“The primary challenge posed in working with different video formats is conforming the older standard definition picture to the high definition 16:9 frame,” says editor Chad Beck. “The mixing of formats required us to resize and reposition the older footage so that it fit the frame in the ideal composition.”

One of the issues with positioning the archival game footage was making sure that viewers could focus when shifting their attention between the ball and the score graphics.

“While cutting the scenes, we would carefully play through each piece of standard definition game action to find the ideal frame composition. We would find the best position to crop broadcast game graphics, recreate our own game graphics in creative ways, and occasionally create motion effects within the frame to make sure the audience was catching all the details and flow of the play,” says Beck. “We discovered that tracking the position of the backboard and keeping it as consistent as possible became important to ensuring the audience was able to quickly orient themselves with all the fast-moving game footage.”
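For readers curious about the mechanics Beck describes, here is a minimal Python sketch of the reposition step: given a point tracked in the SD source (such as the backboard) and a chosen upscale factor, it computes where to place the scaled image inside a 1920x1080 frame so that point stays at a consistent spot. The scale factor, tracked coordinates and target position below are illustrative placeholders, not values from the production.

```python
def reposition_sd_in_hd(track_xy, scale, target_xy=(1344, 300)):
    """Return the top-left offset (in HD pixels) for the scaled SD image
    so that a tracked source point lands at a fixed position in the
    1920x1080 frame."""
    tx, ty = track_xy      # tracked point in the SD source, in pixels
    gx, gy = target_xy     # where we want that point in the HD frame
    off_x = gx - tx * scale
    off_y = gy - ty * scale
    return off_x, off_y

# Example: backboard tracked at (430, 120) in a 720x486 frame,
# upscaled 2.3x; keep it near the upper-right of the HD frame.
print(reposition_sd_in_hd((430, 120), 2.3))
```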

From a color standpoint, the trick was taking all that footage, which was shot over a span of decades, and creating a cohesive look.

Rob Sciarratta

“One of our main goals was to create a filmic, dramatic natural look that would blend well with all the various sources,” says Sim colorist Rob Sciarratta, who worked with Blackmagic DaVinci Resolve 15. “We went with a rich, slightly warm feeling. One of the more challenging events in color correction was blending the archival work into the interview and film scans. The older video footage tended to have various quality resolutions and would often have very little black detail existing from all the transcoding throughout the years. We would add a filmic texture and soften the blacks so it would blend into the 16mm film scans and interviews seamlessly. … We wanted everything to feel cohesive and flow so the viewer could immerse themselves in the story and characters.”
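Sciarratta’s actual grade was built in Resolve, but the two ideas he mentions — softening the blacks and adding a filmic texture — can be sketched in a few lines of Python. The lift amount and grain level here are arbitrary assumptions for illustration only.

```python
import numpy as np

def soften_blacks_and_add_grain(frame, black_lift=0.03, grain_sigma=0.01, seed=None):
    """Blend-friendly treatment for an archival video frame (RGB, values 0..1).

    - Lifts the black floor slightly so crushed video blacks sit closer
      to the softer shadows of film scans.
    - Adds fine Gaussian grain to mimic film texture.
    """
    rng = np.random.default_rng(seed)
    lifted = black_lift + frame * (1.0 - black_lift)   # remap [0,1] -> [black_lift,1]
    grain = rng.normal(0.0, grain_sigma, size=frame.shape)
    return np.clip(lifted + grain, 0.0, 1.0)

# Example: treat a synthetic 1080p RGB frame.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
treated = soften_blacks_and_add_grain(frame)
```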

On the sound side, senior re-recording mixer/supervising sound editor Keith Hodne used Avid Pro Tools. “The challenge was to create a seamless woven sonic landscape from 100-plus interviews and locations, 500 hours of unseen raw behind-the-scenes footage, classic hip-hop tracks, beautifully scored instrumentation and crowd effects, along with the prerecorded live broadcasts,” he says. “Director Jason Hehir and I wanted to create a cinematic blanket of a basketball game wrapped around those broadcasts. What it sounds like to be at the basketball game, feel the game, feel the crowd — the suspense. To feel the weight of the action — not just what it sounds like to watch the game on TV. We tried to capture nostalgia.”

When ESPN made the call to air the first two episodes on April 19, Sim’s crew still had the final seven episodes to finish while dealing with a work-from-home environment. Expectations were only heightened after the first two episodes of The Last Dance averaged more than 6 million viewers. Sim was now charged with finishing what would become the most watched sports documentary in ESPN’s history — and they had to do this during a pandemic.

Stacy Chaet

When the shutdown began in mid-March, Sim’s staff needed to figure out the best way to finish the project remotely.

“I feel like we started the discussions of possible work from home before we knew it was pushed up,” says Stacy Chaet, Sim’s supervising workflow producer. “That’s when our engineering team and I started testing different hardware and software and figuring out what we thought would be the best for the colorist, what’s the best for the online team, what’s the best for the audio team.”

Sim ended up using Teradici to get Sciarratta connected to a machine at the facility. “Teradici has become a widely used solution for remote at home work,” says Chaet. “We were easily able to acquire and install it.”

A Sony X300 monitor was hand-delivered to Sciarratta’s apartment in lower Manhattan, which was also connected to Sciarratta’s machine at Sim through an Evercast stream. Sim shipped him other computer monitors, a Mac mini and Resolve panels. Sciarratta’s living room became a makeshift color bay.

“It was during work on the promos that Jason and Rob started working together, and they locked in pretty quickly,” says David Feldman, Sim’s senior VP, film and television, East Coast. “Jason knows what he wants, and Rob was able to quickly show him a few color looks to give him options.

David Feldman

“So when Sim transitioned to a remote workflow, Sciarratta was already in sync with what the director, Jason Hehir, was looking for. Rob graded each of the remaining seven episodes from his apartment on his X300 unsupervised. Sim then created watermarked QTs with final color and audio. Rob reviewed each QT to make sure his grade translated perfectly when reviewed on Jason’s retina display MacBook. At that point, Sim provided the director and editorial team access for final review.”

The biggest remote challenge, according to producer Matt Maxson, was that the rest of the team couldn’t see Sciarratta’s work on the X300 monitor.

“You moved from a facility with incredible 4K grading monitors and scopes to the more casual consumer-style monitors we all worked with at home,” says Maxson. “In a way, it provided a benefit because you were watching it the way millions of people were going to experience it. The challenge was matching everyone’s experience — Jason’s, Rob’s and our editors’ — to make sure they were all seeing the same thing.”

Keith Hodne

For his part, Hodne had enough gear in his house in Bay Ridge, Brooklyn. At Sim he works in Pro Tools on Mac Pro computers; in his home studio he had to make do with a pared-down version of that setup. It was a challenge, but he got the job done.

Hodne says he actually had more back-and-forth with Hehir on the final episode than any of the previous nine. They wanted to capture Jordan’s moments of reflection.

“This episode contains wildly loud, intense crowd and music moments, but we counterbalance those with haunting quiet,” says Hodne. “We were trying to achieve what it feels like to be a global superstar with all eyes on Jordan, all expectations on Jordan. Just moments on the clock to write history. The buildup of that final play. What does that feel and sound like? Throughout the episode, we stress that one of his main strengths is the ability to be present. Jason and I made a conscious decision to strip all sound out to create the feeling of being present and in the moment. As someone whose main job it is to add sound, sometimes there is more power in having the restraint to pull back on sound.”

ESPN Films/Netflix/Mandalay Sports Media/NBA Entertainment

Even when they were working remotely, the creatives were able to communicate in real time via phone, text or Zoom sessions. Still, as Chaet points out, “you’re not getting the body language from that newly official feedback.”

From a remote post production technology standpoint, Chaet and Feldman both say one of the biggest challenges the industry faces is sufficient and consistent Internet bandwidth. Residential ISPs often do not guarantee speeds needed for flawless functionality. “We were able to get ahead of the situation and put systems in place that made things just as smooth as they could be,” says Chaet. “Some things may have taken a bit longer due to the remote situation, but it all got done.”

One thing they didn’t have to worry about was their team’s dedication to the project.

“Whatever challenges we faced after the shutdown, we benefitted from having lived together at the facility for so long,” says Feldman. “There was this trust that, somehow, we were going to figure out a way to get it done.”


Craig Ellenport is a veteran sports writer who also covers the world of post production. 

Soundwhale app intros new editing features for remote audio collaboration

Soundwhale, which makes a Mac and iOS-based remote audio collaboration app, has introduced a new suite of editing capabilities targeting teams working apart but together during this COVID crisis. It’s a virtual studio that lets engineers match sound to picture and lets actors, with no audio experience, record their lines. The company says this is done with minimal latency and no new hardware or additional specialized software required. The app also allows pro-quality mixing, recording and other post tasks, and can work alongside a user’s DAW of choice.
“Production teams are scattered and in self-isolation all around the world,” says Soundwhale founder Ameen Abdulla, who is an audio engineer. “They can’t get expensive hardware to everyone. They have to get people without any access to, or knowledge of, a digital audio workspace like Pro Tools to collaborate. That’s why we felt some urgency to launch more stand-alone editing options within Soundwhale, specifically designed for tasks like ADR.”



Soundwhale allows users to:
– Record against picture
– Control another user’s timeline and playback
– Manage recorded takes
– Cope with slow connections thanks to improved compression
– Optimize stream settings
– Share takes in other users’ timelines
– Customize I/O for different setups
– Do basic copying, pasting and moving of audio files
– Share any file by drag and drop
– Share screens and video chat

Soundwhale stems from Abdulla’s own challenges trying to perfect the post process from his recording studio, Mothlab, in Minneapolis. His clients were often on the West Coast and he needed to work with them remotely. Nothing available at the time worked very well, and drawing on his technical background, he set out to fix the issues, which included frustrating lags.

“Asynchronous edits and feedback are hell,” Abdulla notes. “As the show goes on, audio professionals need ways to edit and work with talent in real time over the Internet. Everybody’s experiencing this same thing. Everyone needs the same thing at the same time.”

Behind the Title: Squeak E. Clean executive producer Chris Clark

This executive producer combines his background as a musician with his 11 years working at advertising agencies to ensure clients get their audio post needs met.

Name: Chris Clark

Company: Squeak E. Clean Studios

Can you describe your company?
We are an international audio post and music company with a fun-loving, multi-talented crew of composers, producers, sound designers and engineers across six studios.

What’s your job title?
Executive Producer

What does that entail?
I work closely with our creative and production teams to ensure the highest quality is upheld for audio post production, original music and music supervision. I also take the lead role and responsibility for ensuring our agency and brand clients are satisfied with our work, and that the entire Chicago operation is seamlessly integrated with the other five studios on a daily basis.

Chicago Ideas Week “Put the Guns Down” music video

What would surprise people the most about what falls under that title?
I also take out the trash. Sometimes.

You have an agency background. How will that help you at Squeak E. Clean Studios?
I’ve had the privilege of working closely with creative teams and clients on a wild and wide array of inspired music treatments over the past 11 years at Leo Burnett and across various Publicis Groupe agencies.

I know what it’s like to be in those meetings when things go off the rails and, fortunately, I take pleasure in creating calm and restoring inspiration by laying out all the musical options available. I know this intimate knowledge of agency and brand challenges will help us at Squeak E. Clean Studios provide really smart, focused music and post audio solutions without any filler.

What’s your favorite part of the job?
The individual challenge of each project and the individual person at the other end of that request. It’s a small and very personal industry… and being able to help out creative friends with great music solutions just makes us all feel good.

What’s your least favorite?
When I kick myself after remembering I don’t have to do everything; we have many capable collaborative people across the company.

What is your most productive time of the day?
Morning. Coffee and excitement for the day’s challenges bring out the best in me, typically.

If you didn’t have this job, what would you be doing instead?
Something entrepreneurial in music marketing. Or working as a high school basketball coach.

Why did you choose this profession?
Music had always been my therapy, but it wasn’t until I moved to NYC and started making my own bedroom-produced music that I realized it had fully taken over as my passion. It suddenly surpassed creative writing, sports, comedy, etc. I was working in media communications and bored with my day-to-day challenges when it struck me that there must be some type of work in music and advertising/marketing. Then this whole world opened up just one Craigslist job search later.

You are an industry veteran. How have you seen the industry change over the years?
I worked in the media world when digital broke through to challenge broadcast for supremacy, worked in DJ music marketing when the DJ/producers came to the forefront of pop music, and I’ve been fortunate enough to benefit from the rise of music experts in large agency settings.

Somewhere in all of that you see the industry embracing more content, the individuality of the rightful creator and the importance of music in every aspect of development and production. I’m pretty happy with the changes.

Can you name some recent projects you have worked on?
I recently finalized some new Coors Light “Chill” campaign spots with Leo Burnett. I am also producing original music for three Beats by Dre spots for their creative team in Japan with the help of our awesome composer roster.

What is the project that you are most proud of?
Uniting Chicago rappers like Common, G Herbo, Saba, Noname and King Louie for the Chicago Ideas Week Put the Guns Down music video was really special and unprecedented.

Samsung

I also pitched and licensed a cover of “Across the Universe” for a Samsung global spot that featured a father and his newborn son as a main vignette; it came out shortly after the birth of my first son, Charlie, so that will always be a memorable one.

Name three pieces of technology you can’t live without.
Phone, TV, turntables!

What social media channels do you follow?
Instagram mainly, but Twitter and Facebook in moderation.

What do you do to de-stress from it all?
Playing in bands and writing music with no intention of ever tying it to anything professional is always a great release and escape from the day job. I’ve found it also helps me relate to artists and up-and-coming composers/producers who are trying to get their footing in the music industry.

Adding precise and realistic Foley to The Invisible Man

Foley artists normally produce sound effects by mimicking the action of characters on a screen, but for Universal Pictures’ new horror-thriller, The Invisible Man, the Foley team from New York’s Alchemy Post Sound faced the novel assignment of creating the patter of footsteps and swish of clothing for a character who cannot be seen.

Directed by Leigh Whannell, The Invisible Man centers on Cecilia Kass (Elisabeth Moss), a Bay Area architect who is terrorized by her former boyfriend, Adrian Griffin (Oliver Jackson-Cohen), a wealthy entrepreneur who develops a digital technology that makes him invisible. Adrian causes Cecilia to appear to be going insane by drugging her, tampering with her work and committing similarly fiendish acts while remaining hidden from sight.

The film’s sound team was led by the LA-based duo of sound designer/supervising sound editor P.K. Hooker and re-recording mixer Will Files. Files recalls that he and Hooker had extensive conversations with Whannell during pre-production about the unique role sound would play in telling the film’s story. “Leigh encouraged us to think at right angles to the way we normally think,” he recalls. “He told us to use all the tools at our disposal to keep the audience on the edge of their seats. He wanted us to be bold and create something very special.”

Hooker and Files asked Alchemy Post Sound to create a huge assortment of sound effects for the film. The Foley team produced footsteps, floor creaks and fist fights, but its most innovative work involved sounds that convey Adrian’s onscreen presence when he is wearing his high-tech invisibility suit. “Sound effects let the audience know Adrian is around when they can’t see him,” explains lead Foley artist Leslie Bloome. “The Invisible Man is a very quiet film and so the sounds we added for Adrian needed to be very precise and real. The details and textures had to be spot on.”

Alchemy’s Andrea Bloome, Ryan Collison and Leslie Bloome

Foley mixer Ryan Collison adds that getting the Foley sound just right was exceedingly tough because it needed to communicate Adrian’s presence, but in a hesitant, ephemeral manner. “He’s trying to be as quiet as possible because he doesn’t want to be heard,” Collison explains. “You want the audience to hear him, but they should strain just a bit to do so.”

Many of Adrian’s invisible scenes were shot with a stand-in wearing a green suit who interacted with other actors and was later digitally removed. Alchemy’s Foley team had access to the original footage and used it in recording matching footsteps and body motions. “We were lucky to be able to perform Foley to what was originally shot on the set, but unlike normal Foley work, we were given artistic license to enhance the performance,” notes Foley artist Joanna Fang. “We could make him walk faster or slower, seem creepier or step with more creakiness than what was originally there.”

Foley sound was also used to suggest the presence of Adrian’s suit, which is made from neoprene and covered in tiny optical devices. “Every time Adrian moves his hand or throws a punch, we created the sound of his suit rustling,” Fang explains. “We used glass beads from an old chandelier and light bulb filaments for the tinkle of the optics and a yoga mat for the material of the suit itself. The result sounds super high-tech and has a menacing quality.”

Special attention was applied to Adrian’s footsteps. “The Invisible Man’s feet needed a very signature sound so that when you hear it, you know it’s him,” says Files. “We asked the Foley team for different options.”

Ultimately, Alchemy’s solution involved something other than shoes. “Like his suit, Adrian’s shoes are made of neoprene,” explains Bloome, whose team used Neumann KMR 81 mics, an Avid C24 Pro Tools mixing console, a Millennia HV-3D eight-channel preamp, an Apogee Maestro control interface and Adam A77X speakers. “So they make a soft sound, but we didn’t want it to sound like he’s wearing sneakers, so I pulled large rubber gloves over my feet and did the footsteps that way.”

Invisible Adrian makes his first appearance in the film’s opening scene when he invades Cecilia’s home while she is asleep in bed. For that scene, the Foley team created sounds for both the unseen Adrian and for Cecilia as she moves about her house looking for the intruder. “P.K. Hooker told us to imagine that we were a kid who’s come home late and is trying to sneak about the house without waking his parents,” recalls Foley editor Nick Seaman. “When Cecilia is tiptoeing through the kitchen, she stumbles into a dog food can. We made that sound larger than life, so that it resonates through the whole place. It’s designed to make the audience jump.”

Will Files

“P.K. wanted the scene to have more detail than usual to create a feeling of heightened reality,” adds Foley editor Laura Heinzinger. “As Cecilia moves through her house, sound reverberates all around her, as if she were in a museum.”

The creepiness was enhanced by the way the effects were mixed. “We trick the audience into feeling safe by turning down the sound,” explains Files. “We dial it down in pieces. First, we removed the music, and then the waves, so you just hear her bare feet and breath. Then, out of nowhere, comes this really loud sound, the bowl banging and dog food scattering across the floor. The Foley team provided multiple layers that we panned throughout the theater. It feels like this huge disaster because of how shocking it is.”

At another point in the film, Cecilia meets Adrian as she is about to get into her car. It’s raining and the droplets of water reveal the contours of his otherwise invisible frame. To add to the eeriness of the moment, Alchemy’s Foley team recorded the patter of raindrops. “We recorded drops differently depending on whether they were landing on the hood of the car or its trunk,” says Fang. “The drops that land on Adrian make a tinkling sound. We created that by letting water roll off my finger. I also stood on a ladder and dropped water onto a chamois for the sound of droplets striking Adrian’s suit.”

 

The film climaxes with a scene in a psychiatric hospital where Cecilia and several guards engage in a desperate struggle with the invisible Adrian. “It’s a chaotic moment but the footsteps help the audience track Adrian as the fight unfolds,” says Foley mixer Connor Nagy. “The audience knows where Adrian is, but the guards don’t. They hear him as he comes around corners and moves in and out of the room. The guards, meanwhile, are shaking in disbelief.”

“The Foley had a lot of detail and texture,” adds Files. “It was also done with finesse. And we needed that, because Foley was featured in a way it normally isn’t in the mix.”

Alchemy often uses Foley sound to suggest the presence of characters who are off screen, but this was the first instance when they were asked to create sound for a character whose presence onscreen derives from sound alone. “It was a total group effort,” says Bloome. “It took a combination of Foley performance, editing and mixing to convince the audience that there is someone on the screen in front of them who they can’t see. It’s freaky.”

Mixing and sound design for NatGeo’s Cosmos: Possible Worlds

By Patrick Birk

National Geographic’s Cosmos returned for 2020 with Possible Worlds, writer/director/producer Ann Druyan’s reimagining of the house that Carl Sagan built. Through cutting-edge visuals combined with the earnest, insightful narration of astrophysicist Neil deGrasse Tyson, the series aims to show audiences how brilliant the future could be… if we learn to better understand the natural phenomena of which we are a part.

I recently spoke with supervising sound editor/founder Greg King and sound designer Jon Greasley of LA’s King Soundworks about how they tackled the challenges of bringing the worlds of forests and bees to life in Episode 6, “The Search for Intelligent Life on Earth.”

L-R: Greg King and Jon Greasley

In this episode, Neil deGrasse Tyson talks about ripples in space time. It sounds like drops of water, but it also sounds a little synthesized to me. Was that part of the process?
Jon Greasley: Sometimes we do use synthesized sound, but it depends on the project. For example, we use the synthesizer a great deal when we’re doing science-fiction work, like The Orville, to create user interface beeps, spaceship noises and things. But for this show, we stayed away from that because it’s about the organic universe around us, and how we fit into it.

We tried to stick with recordings of real things for this show, and then we did a lot of processing and manipulation, but we tried to do it in a way where everything still sounded grounded and organic and natural. So if there was an instance where we might perhaps want to use some sort of synth bass thing, we would instead, for example, use a real bass guitar or stringed instrument — things that provided the show with an organic feel.

Did you guys provide the score as well?
Greasley: No, Alan Silvestri did the score, but there’s just so much we can do. Everybody that works at King Soundworks, almost without exception, is a musician. We’ve got drummers, guitarists, bass players and keyboard players. Having a sense of musicality really helps with the work that we do, so those are just honestly tools in our tool kit that we can go to very leisurely because it’s second nature to us. There’s a bunch of guitars on the wall at our main office, and everybody’s pulling guitars and basses out and playing throughout the day.

Greg King: We even use a didgeridoo as one of the elements for the Imagination — the ship that Neil deGrasse Tyson flies around in — because we like the low throbbing oscillating tone and the pitch ranges we can get from it.

Sometimes I wasn’t sure where the sound design and score intersected. How do you balance those two, and what was the creative process like between yourselves and Silvestri?
King: Alan is one of the top composers in Hollywood. Probably the biggest recent thing he did was the Avengers movies. He’s a super-pro, so he knows the score, he understands what territory the sound design is going to take and when each element is going to take center stage. More often than not, when we’re working with composers, that tends to be when things bump or don’t bump, but when you’re dealing with a pro like Alan, it’s innate with him — when score or design take over.

Due to the show’s production schedule, we were often getting VFX while we were mixing it, which required some improvisation. We’d get input from executive producers Brannon Braga and Ann Druyan, and once we had the VFX, if we needed to move Neil’s VO by a second, we could do that. We could start the music five seconds later, or maybe sound design would need to take over, and we get out of the music for 30 seconds. And conversely, if we just had 30 seconds of this intense sound design moment, we could get rid of our sound effects and sound design and let music carry this scene.

You pre-plan as much as you can, but because of the nature of this show, there was a lot of improvisation happening on the stage as we were mixing. We would very often just try things, and we were given the latitude by Ann and Brannon to try that stuff and experiment. The only rule was to tell the story better.

I heard that sense of musicality you’d mentioned, even in things like the backgrounds of the show. For example, Neil deGrasse Tyson’s walking through the forest, and you have it punctuated with woodpeckers.
Greasley: That was a good layer. There’s a sense of rhythm in nature anyway. We talk about this a lot… not necessarily being able to identify a constant or consistent beat or rhythm, but just the fact that the natural world has all of these ebbs and flows and rhythms and beats.

In music theory classes, they’ll talk about how there’s a reason 4/4 is the most common time signature, and it’s because so many things we do in life are in fours: walking or your heartbeat, anything like that. That’s the theory, anyway.

King: Exactly, because one of the overarching messages of this series is that we’re all connected, everything’s connected. We don’t live in isolation. So from the cellular level in our planet to within our bodies to this big macro level through the universe, things have a natural rhythm in a sense and communicate consciously or unconsciously. So we try to tie things together by building rhythmic beats and hits so they feel connected in some way.

Did you use all of the elements of sound design for that? Backgrounds? Effects?
King: Absolutely. Yeah, we’ll do that in the backgrounds, like when Neil deGrasse is walking across the calendar, we’ll be infusing that kind of thing. So as you go from scene to scene and episode to episode, there’s a natural feel to things. It doesn’t feel like individual events happening, but they’re somehow, even subconsciously, tied together.

It definitely contributed to an emotional experience by the end of the episode. For the mycelium network, what sound effects or recordings did you start off with? Sounds from nature, and then you process them?
King: Yes. And sparking. We had recordings, a whole bunch of different levels of sparking, and we took these electrical arcs and manipulated and processed them to give it that lighter, more organic feeling. Because when we saw the mycelium, we were thinking of connecting the communication of the bees, brain waves and mycelium, sending information among the different plants. That’s an example of things we’re all trying to tie together on that organic level.

It’s all natural, so we wanted to keep it feeling that way so that the mycelium sound would then tie into brain wave sounds or bees communicating.

Greasley: Some of the specific elements used in the mycelium include layers that are made from chimes, like metallic or wooden chimes, that are processed and manipulated. Then we used the sounds of gas — pressure release-type sound of air escaping. That gives you that delicate almost white noise, but in a really specific way. We use a lot of layers of those sorts of things to create the idea of those particles moving around and communicating with each other as well.

You stress the organic nature of the sound design, but at times elements sounded bitcrushed or digitized a bit, and that made sense to me. The way I understand things like neural networks is almost already in a digital context. Did you distort the sounds, mix them together to glue them?
Greasley: There’s definitely a ton of layers, and sometimes yeah, it can help to run everything through one process to help the elements stick. I don’t specifically bitcrush, although we did a lot of stuff with some time stretching. So sometimes you do end up with artifacting, and sometimes it’s desirable and sometimes it isn’t. There’s lots of reverb because reverb is one of the more naturalistic sounding processes you can do.

With reverbs in mind, how much of the reverbs on Tyson’s voice were recorded on set, and how much was added in post?
King: That’s a great question because the show’s production period was long. In one shot, Mr. Tyson may be standing on cliffs next to the ocean, and then the next time you see him he’s in this very lush forest. Not only are those filmed at different times, but because they’re traveling around so much, they often hire a local sound recordist. So not only is his voice recorded at different times and in different locations, but by different sets of equipment.

There’s also a bunch of in-studio narration, and that came in multiple parts as well. As they were editing, they discovered we needed to flesh out this line more, or now that we’ve cut it this way, we have to add this information, or change this cadence.

So now you had old studio recordings, new studio recordings and all the various different location recordings. And we’re trying to make it sound like it’s one continuous piece, so you don’t hear all those differences. We used a combination of reverbs so that when you went from one location, you didn’t have a jarring reverb change.

A lot of it was our ADR mixer Laird Fryer, who really took it upon himself to research those original production recordings so when Neil came into the studio here, he could match the microphones as much as possible. Then our ADR supervisor Elliot Thompson would go through and find the best takes that matched. It was actually one of the bigger tasks of the show.

Do you use automated EQ match tools as a starting point?
King: Absolutely. I use iZotope EQ matching all the time. That’s the starting point. And sometimes you get lucky and it matches great right away, and you go, “Wow, that was awesome. Fantastic.” But usually, it’s a starting point, and then it’ll be a combination of additional EQ by ear, and you’ll do reverb matching and EQing the reverb. Then I’ll use a multi-band compressor. I like the FabFilter multiband compressor, and I’ll use that to even further roll the EQ of the dialogue in a gentler way.

I’ve used all those different tools to try to get it as close as I could. And sometimes there will be a shift in the quality of his dialogue, but we decided that was a better way to go because maybe there was just a performance element of the way he delivered a line. So essentially the trade-off was to go with a minor mismatch to keep the performance.
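As a rough illustration of what an EQ-match starting point does (this is not iZotope’s algorithm, just the general idea), the following Python sketch measures the long-term spectra of a target and a reference recording, limits the correction, and applies it as a linear-phase filter. The tap count and maximum gain are assumed values.

```python
import numpy as np
from scipy.signal import welch, firwin2, fftconvolve

def match_eq(target, reference, sr, numtaps=2047, max_gain_db=12.0):
    """Nudge the long-term spectrum of `target` toward `reference`
    (both mono float arrays at sample rate sr). A rough starting point only."""
    f, p_tgt = welch(target, fs=sr, nperseg=4096)
    _, p_ref = welch(reference, fs=sr, nperseg=4096)
    # Amplitude correction in dB: 10*log10 of the power ratio.
    gain_db = 10.0 * np.log10((p_ref + 1e-12) / (p_tgt + 1e-12))
    gain_db = np.clip(gain_db, -max_gain_db, max_gain_db)  # don't over-EQ
    gain = 10.0 ** (gain_db / 20.0)
    # Linear-phase FIR whose magnitude follows the correction curve.
    fir = firwin2(numtaps, f, gain, fs=sr)
    return fftconvolve(target, fir, mode="same")
```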

What would your desert island EQ be?
King: We have different opinions. We both do sound effects, and we both do dialogue, so it’s a personal taste. On dialogue, right now I’m a real fan of the FabFilter suite of EQs. For music, I tend to use the McDSP EQs.

Greasley: We’re on the same page for the music EQs. When I’m mixing music, I love McDSP Channel G. Not only the EQ; the compressor is also fantastic on that particular plugin. I use that on all of my sound effects and sound design tracks too. Obviously, before you get to the mix, there’s a whole bunch of other stuff you could use from the design stage, but once I’m actually mixing it, the Channel G is my go-to.

VFX play a heavy role in both the mycelium network and the bee dances. Can you talk about how that affected your workflow/process?
Greasley: When we started prepping the design, some of the visuals were actually not very far along. It’s fun to watch the early cuts of the episodes because what’s ultimately going to end up being Neil standing there with a DNA strand floating above the palm of his hand begins with him standing in front of a greenscreen, and there’s the light bulb in a C stand in his hand.

Sometimes, we had to start working our sound concepts based almost purely on the description of what we were eventually going to be seeing. Based off that, and the conversations that we had with Ann Druyan and Brannon Braga in the spotting sessions, the sound concepts would have to develop in tandem with the visual concepts — both are based off of the intellectual concepts. Then on the mix stage, we would get some of these visual elements in, and we would have to tweak what we had done and what the rest of that team had done right up until the 11th hour.

Were your early sound sketches shown to the VFX department so they could take inspiration from that?
Greasley: That’s a good question. We did provide some stuff, not necessarily to the VFX department, but to the picture editing team. They would ask us to send things not to help so much with conceptualization of things, but with timings. So one of the things they asked us for early on was sounds for the Ship of the Imagination. They would lay those sounds in, and that helped them to get the rhythm of the show and to get a feel for where certain sounds are going to align.

I’m surprised to hear how early in the production process you began working on your sound design, based on how well the bee dance sounds match the light tracer along the back of the bee.
King: That was a lot of improvisation Jon and I were doing on the mix stage. We’re both sound designers, sound editors and mixers, so while we were mixing, we would be getting updates because part of the bee dance sequence is animated — pure hand-drawn animated stuff in the bee sequence — and some of it is actually beehive material, where they show you in a graphical way how the bees communicate with their wiggles and their waggles.

We then figured out a way to grab a bee sound and make it sound like it’s doing those movements, rhythms and wiggles. There’s a big satellite dish in the show, and at the end, you hear these communications coming through the computer panel that are suggested as alien transmissions. We actually took the communication methods we had developed for the bee wiggles and waggles and turned that into alien communication.

What did you process it with to achieve that?
King: Initially, we recorded actual bee sounds. We’re lucky that I live about an hour outside of LA in Santa Paula, which has beehives everywhere. We took constant bee sounds, edited them and used LFO filters to get the rhythms, and then we’d do sound editing for the starts and stops.

For the extraterrestrial communication at the end, we took the bee sounds and really thinned them out and processed them to make them sound a little more radio frequency/telecommunication-like. Then we also took shortwave radio sounds and ran that through the exact process of the LFO filters and the editing so we had the same rhythm. So while the sound is different, it sounds like a similar form of communication.
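The exact plugin chain isn’t specified, but the two moves King describes — imposing a waggle-like rhythm with an LFO and thinning the result into something radio-like — can be approximated in a short Python sketch. Note that tremolo-style amplitude modulation stands in here for an LFO-driven filter, and the LFO rate and band limits are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def waggle(buzz, sr, lfo_hz=6.0, depth=0.9):
    """Impose a waggle-like rhythm on a steady buzz with a simple LFO
    (tremolo-style amplitude modulation)."""
    t = np.arange(len(buzz)) / sr
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * lfo_hz * t))
    return buzz * lfo

def thin_to_radio(x, sr, low=900.0, high=3500.0):
    """Band-limit the signal so it reads as a narrow, radio-like transmission."""
    sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, x)

# Example with a synthetic stand-in for a recorded bee.
sr = 48000
t = np.arange(sr * 2) / sr
buzz = 0.3 * np.sin(2 * np.pi * 220 * t) * (1 + 0.2 * np.sin(2 * np.pi * 7 * t))
alien = thin_to_radio(waggle(buzz, sr), sr)
```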

What I really learned from the series is that there’s all this communication going on that we aren’t aware of, and the mycelium’s a great example of that. I didn’t know different trees and plants communicated with each other — about the condition of the soil, root supply and pest invasions. It makes you see a forest in a different way.

It’s the same with the bees. I knew bees were intelligent insects, but I had no idea that a bee could pinpoint an exact location two or three miles away by a sophisticated form of communication. So that gave us the inspiration that runs through the whole series. We’re all dependent on each other; we’re all communicating with each other. In our sound design process, we wanted there to be a thread between all those forms of communication, whatever they are — that they’re basically all coming from the same place.

There’s a scene where a bee goes out to scout a new hive and ends up in a hollowed-out tree. It’s a single bee floating and moving up, down, left, right, front, back. I imagine you’d achieve that movement through panning and the depth would be through volume. Is there any mixing trick that you’re using to do the up and down?
Greasley: That’s such a level of detail. That’s cool that you even asked the question. Yes, left and right obviously; we’re in 5.1, so panning left and right, up and down. As with most things, it’s the simplest things that get you the best results. So EQ and reverb, essentially. You can create the illusion of height with the EQ. Say you do a notch at a certain frequency, and then as the bee flies up, you just roll the center of that frequency up higher. So you track the up and down movement of the bee with a little notch in EQ, and it gives you this extra sense of movement. Since the frequency is moving up and down, you can trick the ear and the brain into perceiving it as height because that’s what you’re looking at. It’s that great symbiosis of audio and video working together.

Then you can use a little bit of a woody-sounding reverb, like a convolution reverb that was recorded in a tight wood room, and then take that as the inside of this hollowed-out tree.
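Here is a hedged Python sketch of the two tricks Greasley outlines: a narrow emphasis band whose center frequency follows the bee’s on-screen height, and a convolution reverb using a small wood-room impulse response. The frequency range, boost amount, block size and wet/dry balance are placeholders, not values from the mix.

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def height_tracking_boost(x, sr, height, f_lo=1200.0, f_hi=4800.0,
                          boost=0.5, block=2048):
    """Sweep a narrow emphasis band up and down with the subject's height.

    `height` is an array in 0..1 (same length as x): 0 = low in frame,
    1 = high. A bandpassed copy centered at a height-mapped frequency is
    overlap-added onto the dry signal, nudging the spectral balance up
    as the bee rises.
    """
    hop = block // 2
    win = np.hanning(block)
    out = x.copy()
    for start in range(0, len(x) - block, hop):
        seg = x[start:start + block] * win
        h = float(np.mean(height[start:start + block]))
        fc = f_lo * (f_hi / f_lo) ** h  # log sweep from low to high
        sos = butter(2, [fc / 1.3, fc * 1.3], btype="bandpass",
                     fs=sr, output="sos")
        out[start:start + block] += boost * sosfilt(sos, seg)
    return out

def woody_reverb(x, ir, wet=0.2):
    """Convolve with a small-wood-room impulse response (mono array) and mix it in."""
    tail = fftconvolve(x, ir, mode="full")[: len(x)]
    return (1.0 - wet) * x + wet * tail
```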

King: A lot of pitch work was done with the bees too. Because when you record a bee, they’re so quiet; it basically goes “bzz” and it’s gone. So you actually end up using a lot of, let’s call them static bees, where the bee is buzzing. Now you’re having to pitch that and build fake Dopplers to give the sense of movement. You’re going to have it pitched down as it gets further away and add more reverb, and then do an EQ layer on that, and the same when one’s approaching or flying by. So you’re actually spending a lot of time just creating what feels like very natural sounds but that aren’t really possible to record.

A plugin like Serato Pitch ‘n Time is great for variable pitch too, because if you want something to sound like it’s moving away from you, you have a drop in pitch during the course of it, and the reverse for something approaching you.
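Pitch ’n Time itself is a commercial plugin, but the underlying fake-Doppler idea — variable-rate playback with a level drop as the source recedes — can be sketched simply in Python. The pitch ratios and gain curve below are arbitrary assumptions.

```python
import numpy as np

def fake_doppler(x, sr, start_ratio=1.06, end_ratio=0.94, end_gain=0.3):
    """Fake a fly-by from a static recording: pitch glides down and the
    level falls as the (virtual) source moves away."""
    n = len(x)
    # Playback-rate curve: >1 (slightly sharp) approaching, <1 receding.
    rate = np.linspace(start_ratio, end_ratio, n)
    # Positions to read from the source: cumulative sum of the rate curve.
    pos = np.cumsum(rate)
    pos = pos[pos < n - 1]
    resampled = np.interp(pos, np.arange(n), x)
    gain = np.linspace(1.0, end_gain, len(resampled))
    return resampled * gain
```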

Greg King playing guitar

How do you get a single bee sound?
King: You gather a few bees and do a few different things. The easiest way is to get a few bees, bring them into your house, and release them at a brightly lit window. Then the bees are buzzing away like crazy to try to get out the window. You can just track it with the microphone. You’ll then have to go through and edit out any of the louder window knocks.

I’ve tried all different things through the years, like having them in jars and all that kind of stuff, but there’s too much acoustic to that. I’ve discovered that with flies, grasshoppers, or any of the larger winged insects that actually make a noise, doing it in the daytime against the window is the best way because they’ll go for a long time.

What was your biggest takeaway, as an artist, as a sound designer, from working on this project?
Greasley: It was so mind-blowing how much we learned from the people on Cosmos. The people that put the show together can accurately be described as geniuses, particularly Ann. She’s just so unbelievably smart.

Each episode had its individual challenges and taught us things in terms of the craft, but I think for me, the biggest takeaway on a personal and intellectual level is the interconnectedness of everything in the observable world. And the further we get with science, the more we’re able to observe, whether it’s at the subatomic quantum level or billions of light-years away.

Just the level to which all life and matter is interconnected and interdependent.

I also think we’re seeing practical examples of that right now with the coronavirus, in terms of unexpected consequences. It’s like a microcosm for what could happen in the future.

King: On a whole philosophical level, we’re at this particular point in time globally, where we seem to be going down a path of ignoring science, or denying science is there. And when you get to watch a series like Cosmos, you can see science is how we’re going to survive. If we learn to interact with nature, and use nature as a technology, as opposed to using nature as a resource, what we could eventually do is mind-blowing. So I think the timing of this is ideal.


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

Apple and Avid offer free temp licenses during COVID-19 crisis

Apple is offering free 90-day trials of Final Cut Pro X and Logic Pro X apps for all in order to help those working from home and looking for something new to master, as well as for students who are already using the tools in school but don’t have the apps on their home computers.

Apple Final Cut Pro X

Apple is extending what is normally a 30-day trial for Final Cut Pro X, while a free trial is new to Logic Pro X. The extension to 90 days is for a limited time and will revert to 30 days across both apps in the future.

Trials for both Final Cut Pro X and Logic Pro X are now available. Customers can download the free trials on the web pages for Final Cut Pro X and Logic Pro X. The 90-day extension is also available to customers who have already downloaded the free 30-day trial of Final Cut Pro X.

For its part, Avid is offering free temp licenses for remote users of the company’s creative tools. Commercial customers can get a free 90-day license for each registered user of Media Composer | Ultimate, Pro Tools, Pro Tools | Ultimate and Sibelius | Ultimate. For students whose school campuses are closed, any student of an Avid-based learning institution that uses Media Composer, Pro Tools or Sibelius can receive a free 90-day license for the same products.

The offer is open through April 17.

Main Image: Courtesy of Avid

Sebastian Robertson, Mark Johnson on making Playing For Change’s The Weight

By Randi Altman

If you have any sort of social media presence, it’s likely that you have seen Playing For Change’s The Weight video featuring The Band’s Robbie Robertson, Ringo Starr, Lukas Nelson and musicians from all over the world. It’s amazing, and if you haven’t seen it, please click here now. Right now. Then come back and read how it was made.

L-R: Mark Johnson, Robbie Robertson, Sebastian Robertson, Raan Williams and Robin Moxey

The Weight was produced by Mark Johnson and Sebastian Robertson, Robbie’s son. It was a celebration of the 50th anniversary of The Band’s first studio album, Music From Big Pink, where the song “The Weight” first appeared. Raan Williams and Robin Moxey were also producers on the project.

Playing For Change (PFC) was co-founded by Johnson and Whitney Kroenke in 2002 with the goal to share the music of street musicians worldwide. And it seems the seed of the idea involved the younger Robertson and Johnson. “Mark Johnson is an old friend of mine,” explains Robertson. “I was sitting around in his apartment when he initially conceived the idea of Playing For Change. At first, it was a vehicle that brought street musicians into the spotlight, then it became world musicians, and then it evolved into a big musical celebration.”

Johnson explains further: “Playing For Change was born out of the idea that no matter how many things in life divide us, they will never be as strong as the power of music to bring us all together. We record and film songs around the world to reconnect all of us to our shared humanity and to show the world through the lens of music and art.” Pretty profound words considering current events.

Mermans Mosengo – Kinshasa Congo

Each went on with their busy lives, Robertson as a musician and composer, and Johnson traveling the world capturing all types of music. They reconnected a couple of years ago, and the timing was ideal. “I wanted to do something to commemorate the 50th anniversary of The Band’s Music From Big Pink — this beautiful album and this beautiful song that my dad wrote — so I brought it to Mark. I wanted to team up with some friends and we all came together to do something really special for him. That was the driving force behind the production of this video.”

To date, Playing For Change has created over 50 “Songs Around the World” videos — including The Grateful Dead’s Ripple and Jimi Hendrix’s All Along the Watchtower — and recorded and filmed over 1,000 musicians across more than 60 countries.

The Weight is beautifully shot and edited, featuring amazingly talented musicians, interesting locales and one of my favorite songs to sing along to. I reached out to Robertson and Johnson to talk through the production, post and audio post.

This was a big undertaking. All those musicians and locales… how did you choose the musicians that were going to take part in it?
Robertson: First, some friends and I went into the studio to record the very basic tracks of the song — the bass, drums, guitar, a piano and a scratch vocal. The first instrument that was added was my dad on rhythm and lead guitar. He heard this very kind of rough demo version of what we had done and played along with it. Then, slowly along the way, we started to replace all those rough instruments with other musicians around them. That’s basically how the process worked.

Larkin Poe – Venice, California

Was there an audition process, or people you knew, like Lukas Nelson and Marcus King? Or did Playing For Change suggest them?
Robertson: Playing For Change was responsible for the world musicians, and I brought in artists like Lukas, my dad, Ringo and Larkin Poe. They have this incredible syndicate of world musicians, so there is no auditioning. So we knew they were going to be amazing. We brought what we had, they added this flavor, and then the song started to take on a new identity because of all these incredible cultures that are added to it. And it just so happened that Lukas was in Los Angeles because he had been recording up at Shangri-La in Malibu. My friend Eric (Lynn) runs that studio, so we got in touch. Then we filmed Lukas.

Is Shangri-La where you initially went to record the very basic parts of the song?
Robertson: It is. The funny and kind of amazing coincidence is that Shangri-La was The Band’s clubhouse in the ’70s. Since then, producer Rick Rubin has taken over. That’s where the band recorded the studio songs of The Last Waltz (film). That’s where they recorded their album, Northern Lights – Southern Cross. Now, here we are 50 years later, recording The Weight.

Mark, how did you choose the locations for the musicians? They were all so colorful and visually stunning.
Johnson: We generally try to work with each musician to find an outdoor location that inspires them and a place that can give the audience a window into their world. Not every location is always so planned out, so we do a lot of improvising to find a suitable location to record and film music live outside.

Shooting Marcus King in Greenville, South Carolina

What did you shoot on? Did you have one DP/crew or use some from all over the world? Were you on set?
Johnson: Most of the PFC videos are recorded and filmed by one crew (Guigo Foggiatto and Joe Miller), including myself, an additional audio person and two camera operators. We work with a local guide to help us find both musicians and locations. We filmed The Weight around the world in 4K with Sony A7 cameras — one side angle, one zoom and a Ronin for more motion.

How did you capture the performances from an audio aspect, and who did the audio post?
Johnson: We record all the musicians around the world live and outside using the same mobile recording studio we’ve used since the beginning of our “Song Around the World” videos over 10 years ago. The only thing that has changed is the way we power everything. In the beginning it was golf cart batteries and then car batteries with big heavy equipment, but fortunately it evolved into lightweight battery packs.

We primarily use Grace mic preamps and Schoeps microphones, and our recording mantra comes from a good friend and musician named Keb’ Mo’. He once told us, “Sound is a feeling first, so if it feels good it will always sound good…” This inspires us to help the musicians to feel comfortable and aware that they are performing along with other musicians from around the world to create something bigger than themselves.

One interesting thing that often comes from this project that differs from life in the studio is that the musicians playing on our songs around the world tend to listen more and play less. They know they are only a part of the performance and so they try to find the best way to fit in and support the song without any ego. This reality makes the editing and mixing process much easier to handle in post.

Lukas Nelson – Austin, Texas

The Weight was recorded by the Playing For Change crew and mixed by Greg Morgenstein, Robin Moxey, Sebastian and me.

What about the editing? All that footage and lining up the song must have been very challenging. I’m assuming cutting your previous videos has given you a lot of experience with this.
Johnson: That is a great question, and one of the most challenging and rewarding parts of the process. It can sometimes get really complicated to edit because we have three cameras per shoot/musician and sometimes many takes of each performance. And sometimes we comp the audio. For example, the first section came from Take 1, the second from Take 6, etc. — and we need to match the video to correspond to each different audio take/performance. We always rough-mix the music first in Avid Pro Tools and then find the corresponding video takes in Adobe Premiere. Whenever we return from a trip, we add the new layer to the Pro Tools session, then to the video edit, and build the song as we go.

The Weight was a really big audio session in Pro Tools with so many tracks and options to choose from as to who would play what fill or riff and who would sing each verse, and the video session was also huge, with about 20 performances around the world combined with all the takes that go along with them. One of the best parts of the process for me is soloing all the various instruments from around the world and seeing how amazingly they all fit together.

You edited this yourself? And who did the color grade?
Johnson: The video was colored by Jon Walls and Yasuhiro Takeuchi on Blackmagic DaVinci Resolve and edited by me, along with everyone’s help, using Premiere. The entire song and video took over a year to make, so we had time throughout the process to work together on the rough mixes and rough edits from each location and build it brick by brick as we went along the journey.

Sherieta Lewis and Roselyn Williams – Trenchtown, Jamaica

When your dad is on the bench playing and wearing headphones — and the other artists as well — what are they listening to? Are they listening to the initial sort of music that you recorded in studio, or was it as it evolved, adding the different instruments and stuff? Is that what he was listening to and playing along to?
Robertson: Yeah. My dad would listen to what we recorded, except in his case we muted the guitar, so he was now playing the guitar part. Then, as elements from my dad and Ringo are added, those [scratch] elements were removed from what we would call the demo. So then as it’s traveling around the world, people are hearing more and more of what the actual production is going to be. It was not long before all those scratch tracks were gone and people were listening to Ringo and my dad. Then we just started filling in with the singers and so on and so forth.

I’m assuming that each artist played the song from start to finish in the video, or at least for the video, and then the editor went in and cut different lines together?
Robertson: Yes and no. For example, we asked Lukas to do a very specific part as far as singing. He would sing his verse, and then he would sing a couple choruses and play guitar over his section. It varied like that. Sometimes when necessary, if somebody is playing percussion throughout the whole song, then they would listen to it from start to finish. But if somebody was just being asked to sing a specific section, they would just sing that section.

Rajeev Shrestha – Nepal

How was your dad’s reaction to all of it? From recording his own bit to watching it and listening to the final?
Robertson: He obviously came on board very early because we needed to get his guitar, and we wanted to get him filmed at the beginning of the process. He was kind of like, “I don’t know what the hell you guys are doing, but it seems cool.” And then by the time the end result came, he was like, “Oh my God.” Also, the response that his friends and colleagues had to it… I think they had a similar response to what you had, which is, A, how the hell did you do this? And, B, this is one of the most beautiful things I’ve ever seen.

It really is amazing. One of my favorite parts of the video is the very end, when your dad’s done playing, looks up and has that huge smile on his face.
Robertson: Yeah. It’s a pulling-at-the-heart-strings moment for me, because that was really a perfect picture of the feeling that I had when it all came together.

You’re a musician as well. What are you up to these days?
Robertson: I have a label under the Universal Production Music umbrella, called Sonic Beat Records. The focus of the label is on contemporary, up-to-the-minute super-slick productions. My collaboration with Universal has been a great one so far; we just started in the fall of 2019, so it’s really new. But I’m finding my way in that family, and they’ve welcomed me with open arms.

Another really fun collaboration was working with my dad on the score for Martin Scorsese’s The Irishman. That was a wonderful experience for me. I’m happy with how the music that we did turned out. Over the course of my life, my dad and I haven’t collaborated that much. We’ve just been father and son, and good friends, but as of late, we’ve started to put our forces together, and that has been a lot of fun.

L-R: Mark Johnson and Ahmed Al Harmi – Bahrain

Any other scores on the horizon?
Robertson: Yeah. I just did another score for a documentary film called Let There Be Drums!, which is a look into the mindset of rock and roll drummers. My friend, Justin Kreutzmann, directed it. He’s the son of Bill Kreutzmann, the drummer of the Grateful Dead. He gave me some original drum tracks of his dad’s and Mickey Hart’s, so I would have all these rhythmic elements to play with, and I got to compose a score on top of Mickey Hart and Bill Kreutzmann’s percussive and drumming works. That was the thrill of a lifetime.

Any final thoughts? And what’s next for you, Mark?
Johnson: One of the many amazing things that came out of making this video was our partnership with Sheik Abdulla bin Hamad bin Isa Al Khalifa from Bahrain, who works with us to help end the stereotype of terrorism through music by including musicians from the Middle East in our videos. In The Weight, watch an oud master in Bahrain cut to a sitar master in Nepal, followed by Robbie Robertson and Ringo Starr — they all work so well together.

One of the best things about Playing For Change is that it never ends. There are always more songs to make, more musicians to record and more people to inspire through the power of music. One heart and one song at a time…


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.

Netflix’s Mindhunter: Skywalker’s audio adds to David Fincher’s vision

By Patrick Birk

Scott Lewis

I was late in discovering David Fincher’s gripping series on serial killers, Mindhunter. But last summer, I noticed the Netflix original lurking in my suggested titles and decided to give it a whirl. I burned through both seasons within a week. The show is both thrilling and chilling, but the majority of these moments are not achieved through blazing guns, jump scares and pyrotechnics. It instead focuses on the inner lives of multiple murderers and the FBI agents whose job it is to understand them through subtle but detail-rich conversation.

Sound plays a crucial role in setting the tone of the series and heightening tension through each narrative arc. I recently spoke to rerecording mixers Scott Lewis and Stephen Urata as well as supervising sound editor Jeremy Molod — all from Skywalker Sound — about their process creating a haunting and detail-laden soundtrack. Let’s start with Lewis and Urata and then work our way to Molod.

How is working with David Fincher? Does he have any directorial preferences when it comes to sound? I know he’s been big on loud backgrounds in crowded spaces since The Social Network.
Scott Lewis: David is extremely detail-oriented and knowledgeable about sound. So he would give us very in-depth notes about the mix… down to the decibel.

Stephen Urata: That level of attention to detail is one of the more challenging parts of working on a show like Mindhunter.

Working with a director who is so involved in the audio, does that limit your freedom at all?
Lewis: No. It doesn’t curtail your freedom, because when a director has a really clear vision, it’s more about crafting the track to be what he’s looking for. Ultimately, it’s the director’s show, and he has a way of bringing the best work out of people. I’m sure you heard about how he does hundreds of takes with actors to get many options. He takes a similar approach with sound in that we might give him multiple options for a certain scene or give him many different flavors of something to choose from. And he’ll push us to deliver the goods. For example, you might deliver a technically perfect mix but he’ll dig in until it’s exactly what he wants it to be.

Stephen Urata

Urata: Exactly. It’s not that he’s curtailing or handcuffing us from doing something creative. This project has been one of my favorites because it was just the editorial team and sound design, and then it would come to the mix stage. That’s where it would be just Scott and me in a mix room, and we’d get a shot at our own aesthetic and our own choices. It was really a lot of fun trying to nail down what our favorite version of the mix would be, and David really gave us that opportunity. If he wanted something else, he would have just said, “I want it like this and only do it like this.”

But at the same time, we would do something maybe completely different than he was expecting, and if he liked it, he would say, “I wasn’t thinking that, but if you’re going to go that direction, try this also.” So he wasn’t handcuffing us, he was pushing us.

Do you have an example of something that you guys brought to the table that Fincher wasn’t expecting and asked you to go with it?
Urata: The first thing we did was the train scene — the scene in the empty parking garage where you hear an incoming train from two miles away. It was the middle of Episode 2 or something, and that’s where we started.

Where they’re talking to the BTK survivor, Kevin?
Lewis: Exactly.

Urata: He’s fidgeting and really uncomfortable telling his story, and David wanted to see if that scene would work at all, because it relied heavily on sound. So we got our shot at it. He said, “This is the kind of direction I want you guys to go in.” Scott and I played off of each other for a good amount of time that first day, trying to figure out what the best version would be, and we presented it to him. I don’t remember him having that many notes on that first one, which is rare.

It really paid off. Among the mixes you showed Fincher, did you notice a trend in terms of his preferences?
Lewis: When I say we gave him options, it might come down to something like the Son of Sam scene. Throughout that scene we used slight pitching to slowly lower his voice, so that by the time he reveals that he actually isn’t crazy and is playing everybody, his voice has dropped a register. So when we present him options, it’s things like how much we’re pitching him down over time. It’s a constant review process.
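Lewis doesn’t name the tool behind that gradual drop, but conceptually it’s just a pitch shift that ramps down over the length of the scene. As a rough illustration — not the actual Mindhunter workflow — here is a minimal offline sketch in Python using librosa; the file name, two-second block size and one-semitone target are assumptions. On a dub stage this would more likely be real-time pitch plug-in automation.

```python
# Rough sketch only: ramp a dialogue clip down by up to one semitone
# over its length (block-wise, offline). Not the actual Mindhunter chain.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("scene_dialogue.wav", sr=None, mono=True)  # hypothetical file

block = sr * 2                                   # 2-second blocks (arbitrary)
n_blocks = int(np.ceil(len(y) / block))
out = []

for i in range(n_blocks):
    chunk = y[i * block:(i + 1) * block]
    # Interpolate from 0 semitones at the start to -1 semitone at the end.
    n_steps = -1.0 * (i / max(n_blocks - 1, 1))
    out.append(librosa.effects.pitch_shift(chunk, sr=sr, n_steps=n_steps))

# Note: block boundaries can click; a real version would crossfade blocks
# or automate a pitch plug-in instead.
sf.write("scene_dialogue_pitched.wav", np.concatenate(out), sr)
```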

The show takes place in the mid ‘70s and early ’80s. Were there any period-specific sounds or mixing tricks you used when it came to diegetic music and things like that?
Lewis: Oh yeah. Ren Klyce is the supervising sound designer on the show, and he’s fantastic. He’s the sound designer on all of David’s films. He is really good about making sure that we stay to the period. So with regard to mixing, panning is something that he’s really focused on because it’s the ‘70s. He’d tell us not to go nuts on the panning, the surrounds, that kind of thing; just keep it kind of down the middle. Also, futzes are a big thing in that show; music futzes, phone futzes … we did a ton of work on making sure that everything was period-specific and sounded right.

Are you using things like impulse responses and Altiverb or worldizing?
Lewis: I used a lot of Speakerphone by Audio Ease as well as EQ and reverb.
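For anyone curious what IR-based tools such as Altiverb or Speakerphone are doing under the hood, the core operation is convolving the dry signal with a recorded impulse response of a space or a speaker. The sketch below is a bare-bones illustration of that operation, not Audio Ease’s implementation; the file names and the mono fold-down are assumptions.

```python
# Bare-bones impulse-response convolution: the operation behind IR-based
# reverb/futz tools. Illustrative only; not Audio Ease's implementation.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dialogue_dry.wav")        # hypothetical dry dialogue
ir, sr_ir = sf.read("small_speaker_ir.wav")  # hypothetical impulse response
assert sr == sr_ir, "resample the IR to the dialogue's sample rate first"

# Fold both to mono to keep the example short.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)                   # convolve dry signal with the IR
wet /= np.max(np.abs(wet)) + 1e-12           # normalize to avoid clipping

sf.write("dialogue_futzed.wav", wet, sr)
```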

What mixing choices did you make to immerse the viewer in Holden’s reality, i.e. the PTSD he experiences?
Lewis: When he’s experiencing anxiety, it’s really important to make sure that we’re telling the story that we’re setting out to tell. Through mixing, you can focus the viewers’ attention on what you want them to track. That could be dialogue in the background of a scene, like the end of Episode 1, when he’s having a panic attack and, in the distance, his boss and Tench are talking. It was very important that you can make out the dialogue there, even though you’re focusing on Holden having a panic attack. So it’s moments like that — making sure the viewer is feeling that claustrophobia but also picking up on the story point that we want you to follow.

Lewis: Also, Stephen did something really great there — there are sprinklers in the background and you don’t even notice, but the tension is building through them.

There’s a very intense moment when Holden’s trying to figure out who let their boss know about a missing segment of tape in an interview, and he accuses Greg, who leans back in his chair, and there’s a squeal in there that kind of ramps up the tension.
Urata: David’s really, really honed in on Foley in general — chair squeaks, the type of shoes somebody’s wearing, the squeak of the old wooden floor under their feet. All those things have to play with David. Like when Wendy’s creeping over to the stairwell to listen to her girlfriend and her ex-husband talking. David said, “I want to hear the wooden floor squeaking while she’s sneaking over.”

It’s not just the music crescendoing and making you feel really nervous or scared. It’s also the Foley work happening in the scene — more of this, less of that — or more backgrounds to add to the sound pressure and build to the climax of the scene. David uses all those tools to accomplish the storytelling in the scene with sound.

How much ambience do you have built into the raw Foley tracks that you get, and how much is reverb added after the fact? Things like car door slams have so much body to them.
Urata: Some of those, like door slams, were recorded by Ren Klyce. Instead of just recording a door slam with a mic right next to the door and then adding reverb later on, he actually goes into a huge mansion and slams a huge door from 40 feet away and records that to make it sound really realistic. Sometimes we add it ourselves. I think the most challenging part about all of that is marrying and making all the sounds work together for the specific aesthetic of the soundtrack.

Do you have a go-to digital solution for that? Is it always something different or do you find yourself going to the same place?
Urata: It definitely varies. There’s a classic reverb that we use a digital version of: the Lexicon 480. We use that a good amount. It has a really great, natural film sound that people are familiar with. There are others, but it’s really just another tool — if it doesn’t work, we just use something else.

Were there any super memorable ADR moments?
Lewis: I can just tell you that there’s a lot of ADR. Some whole scenes are ADR. After any Fincher show where I’ve mixed the dialogue and the ADR, I’m 10 times better than I was before I started. Because David’s so focused on storytelling, if there’s a subtle inflection that he’s looking for that he didn’t get on set, he will loop the line to make sure that he gets that nuance.

Did you coordinate with the composer? How do you like to mix the score so that it has a really complementary relationship to the rest of the elements?
Lewis: As re-recording mixers, they don’t involve us in the composition part of it; it just comes to us after they’ve spotted the score.

Jason Hill was the composer, and his score is great — so spooky and eerie. It complements the sound design and sound effects layers really well, so a lot of it will kind of sit in there. It’s not a traditional score. He’s not working with big strings and horns all over the place; he’s got a lot of synths and guitars, and he would use a lot of analog gear as well. So when it comes to the mix, sometimes you get anomalies that you don’t commonly get — hiss or whatever — elements he’s adding to give it an analog sound.

Lewis: And a lot of times we would keep that in because it’s part of his score.

Now let’s jump in with sound editor Jeremy Molod

As a sound editor, what was it like working with David Fincher?
Jeremy Molod: David and I have done about seven or eight films together, so by the time we started on Season Two of Mindhunter, we pretty much knew each other’s styles. I’m a huge fan of David’s movies. It’s a privilege to work with him because he’s such a good director, and the stuff he creates is so entertaining and beautifully done. I really admire his organization and how detailed he is. He really gets in there and gives us detail that no other director has ever given us.

Jeremy Molod

You worked with him on The Social Network. In college, my sound professors would always cite the famous bar scene, where Mark Zuckerberg and his girlfriend had to shout at each other over the backgrounds.
Molod: I remember that moment well. When we were mixing that scene, because the music was so loud and so pulsating, David said, “I don’t want this to sound like we’re watching a movie about a club; I want this to be like we’re in the club watching this.” To make it realistic, when you’re in the club, you’re straining to hear sounds and people’s voices. He said that’s what it should be like. Our mixer, David Parker, kept pushing the music up louder and louder, so you can barely make out those words.

I feel like I’m seeing iterations of that in Mindhunter as well.
Molod: Absolutely. That makes it more stressful and like you said, gives it a lot more tension.

Scott said that David’s down to the decibel in terms of how he likes his sound mixed. I’m assuming he’s that specific when it comes to the editorial as well?
Molod: That is correct. It’s actually even more than that — down to the quarter decibel. He literally does that all the time. He gets really, really in there.

He does the same thing with editorial, and what I love about his process is he doesn’t just say, “I want this character to sound old and scared” — he gives real detail. He’ll say, “This guy’s very scared and he’s dirty and his shoelaces are untied and he’s got a snot rag hanging out of his pocket. And you can hear the lint and the Swiss army knife with the toothpick part missing.” He gets into painting a picture; he wants us to literally translate that picture into sound.

So he wanted to make Kevin sound really nervous in the truck scene. Kevin’s in the back, and you don’t really see him too much; he’s blurred out. David really wanted to sell his fear by using sound, so we had him tapping his leg nervously, scratching the side of the car, kind of slapping his leg and obviously breathing really heavy and sniffing a lot, and it was those sounds that really helped sell that scene.

So while he does have the acumen and vocabulary within sound to talk to you on a technical level, he’ll give you direction in a similar way to how he would an actor.
Molod: Absolutely, and that’s always how I’ve looked at it. When he’s giving us direction, it’s actually the same way as he’s giving an actor direction to be a character. He’s giving the sound team direction to help those characters and help paint those characters and the scenes.

With that in mind, what was the dialogue editing process like? I’ve heard that his attention to detail really comes into play with inflection of lines. Were you organizing and pre-syncing the alternate takes as closely as you could with the picture selection?
Molod: We did that all the time. The inflection, the intonation and the cadence of the characters’ voices are really important to him, and he’s really good about figuring out which words of which takes he can stitch together to do it. So there might be two sentences that one actor says at one time, and those sentences are actually made up of five different takes. And he does so many takes that we have a wealth of material to choose from.

We’d probably send about five or six versions to David to listen to, and then he would make his notes. That would happen almost every day, and we would start honing in on the performances he liked. Eventually he might say, “I don’t like any of them. You’ve got to loop this guy on the ADR stage.” He likes us to stitch the best little parts together like a puzzle.

What is the ADR stage like at Skywalker?
Molod: We actually did all of our ADR at Disney Studios in LA because David was down there, as were the actors. We did a fair amount of ADR in Mindhunter; there’s lots of it in there.

We usually have three or four microphones running during an ADR session, one of which will be a radio mic. The other three would be booms set in different locations, the same microphones that they use in production. We also throw in an extra [Sennheiser MKH 50] just to have it with the track of sound that we could choose from.

The process went great. We’d go through it, come back and give him about five or six choices, and then he would start making notes, and we had to pin it down to the way he liked it. So by the time we got to the mix stage, the decision was done.

There was a scene where people are walking around talking after a murder had been committed, and what David really wanted was for them to be talking softly about this murder. So we had to go in and loop that whole scene again with them performing it at a quieter, more sustained volume. We couldn’t just turn it down. They had to perform it as if they were not quite whispering but trying to speak a little lower so no one could hear.

To what extent did loop groups play a part in the soundtrack? With the prominence of backgrounds in the show it seems like customization would be helpful, to have time-specific little bits of dialogue that might pop out.
Molod: We’ve used a group called the Loop Squad for all the features, the House of Cards shows and both seasons of Mindhunter. We would send a list of all of our cues, get on the phone and explain what the reasoning was and what the storylines were. All their actors would, on their own, go and research everything that was happening at the time, so if they were just standing by a movie theater, they had something to talk about that was relevant at the time.

When it came to production sound on the show, which track did you normally find yourself working from?
Molod: In most scenes, they would have a couple of radio mics attached to the actors, and they’d have several booms. Normally, there were maybe eight different microphones set up. You would have one general boom over the whole thing, and you’d have a boom that was close to each character.

We almost always went with one of the booms, unless we were having trouble making out what they were saying. And then it depended on which actor was standing closest to the boom. One of the tricks our editors used to make it sound better was to phase the two together. So if the boom wasn’t quite working on its own, and the radio mic wasn’t either, one of our tricks would be to make those two play together in a way that accomplished what we wanted — where you could hear the line clearly but still keep the space of the room.
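Molod doesn’t detail how the editors “phase” the two mics, but a common way to get a boom and a radio mic to play together is to time-align one to the other before summing, so the blend adds body instead of comb-filtering. Here is a hypothetical sketch of that idea using cross-correlation; the file names, blend ratio and the use of np.roll are simplifying assumptions, not the Mindhunter editors’ actual tool chain.

```python
# Hypothetical sketch: time-align a radio (lav) mic to the boom with
# cross-correlation, then blend the two so they support each other
# instead of comb-filtering.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

boom, sr = sf.read("boom.wav")   # hypothetical mono files at the same rate
lav, _ = sf.read("lav.wav")

n = min(len(boom), len(lav))
boom, lav = boom[:n], lav[:n]

# Lag (in samples) at which the lav best lines up with the boom.
corr = correlate(boom, lav, mode="full")
lag = int(np.argmax(corr)) - (n - 1)

# np.roll wraps samples around the ends; padding would be cleaner,
# but this keeps the sketch short.
lav_aligned = np.roll(lav, lag)

# Mostly boom for the space of the room, some lav for intelligibility
# (the 70/30 ratio is arbitrary).
mix = 0.7 * boom + 0.3 * lav_aligned
sf.write("dialogue_blend.wav", mix / (np.max(np.abs(mix)) + 1e-12), sr)
```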

Were there any moments that you remember from the production tracks for effects?
Molod: Whenever we could use production effects, we always tried to get those in, because they always sound the most realistic and most pertinent to that scene and that location. If we can maintain any footsteps in the production, we always do because those always sound great.

Any kind of subtle things like creaks, bed creaks, the floor creaking, we always try to salvage those and those help a lot too. Fincher is very, very, very into Foley. We have Foley covering the whole thing, end to end. He gives us notes on everybody’s footsteps and we do tests of each character with different types of shoes on and different strides of walking, and we send it to him.

So much of the show’s drama plays out in characters’ internal worlds. In a lot of the prison interview scenes, I notice door slams here and there that I think serve to heighten the tension. Did you develop a kind of a logical language when it came to that, or did you find it was more intuitive?
Molod: No, we did have our own language for it, and that was based on Fincher’s direction. When it was really crazy, he wanted to hear the door slams and buzzers and keys jingling and tons of prisoners yelling offscreen. We spent days recording loop-group prisoners, and they would be sprinkled throughout the scene. And when the conversation turned to an upsetting subject, we might ramp up the voices in the back.


Pat Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

A Closer Look: Delta Soundworks’ Ana Monte and Daniel Deboy

Germany’s Delta Soundworks was co-founded by Ana Monte and Daniel Deboy back in 2016 in Heidelberg. The 3D/immersive audio post studio’s projects span installations, virtual reality, 360-degree films and gaming, as well as feature films, documentaries, TV shows and commercials. Its staff includes production sound mixers, recording engineers, sound designers, Foley artists, composers and music producers.

Below the partners answer some questions about their company and how they work.

How did Delta come about?
Ana Monte: Delta Soundworks grew from the combination of my creative background in film sound design and Daniel’s high-level understanding of the science of sound. I studied music industry and technology at California State University, Chico and I earned my master’s degree in film sound and sound design at the Film Academy Baden-Württemberg, here in Germany.

Daniel is a graduate of the Graz University of Technology, where he focused his studies on 3D audio and music production. He was honored with a Student Award from the German Acoustical Society (DEGA) for his research in the field of 3D sound reproduction. He has also received gold, silver and bronze awards from the Audio Engineering Society (AES) for his music recordings.

Can you talk about some recent projects?
Deboy: I think our biggest current project is working for The Science Dome at the Experimenta, a massive science center in Heilbronn, Germany. It’s a 360-degree theater with a 360-degree projection system and a 29-channel audio system, which is not standard. We create the entire sound production for all the theater’s in-house shows. For one of the productions, our composer Jasmin Reuter wrote a beautiful score, which we recorded with a chamber orchestra. It included a lot of sound design elements, like rally cars. We put all these pieces together and finally mixed them in a 3D format. It was a great ride for us.

Monte: The Science Dome has a very unique format. It’s not a standard planetarium, where everyone is looking up and to the middle, but rather a mixture of theater plus planetarium, wherein people look in front, above and behind. For example, there’s a children’s show with pirates who travel to the moon. They begin in the ocean with space projected above them, and the whole video rotates 180 degrees around the audience. It’s a very cool format and something that is pretty unique, not only in Europe, but globally. The partnership with the Experimenta is very important for us because they do their own productions and, eventually, they might license it to other planetariums.

With such a wide array of projects and requirements, tell us about your workflow.
Deboy: Delta is able to quickly and easily adjust to different workflows because we like to be at the edge of what’s possible. We are always happy to take on new and interesting projects, try out new workflows and designs, and look at up-and-coming techniques. I think that’s kind of a unique selling point for us. We are way more flexible than a typical post production house would be, and that includes our work for cinema sound production.

What are some tools you guys use in your work?
Deboy: Avid Pro Tools Ultimate, Reaper, Exponential Audio, iZotope RX 6 and Metric Halo 2882 3D. We also have had a license for Nugen Halo Upmix for a while, and we’ve been using it quite a bit for 5.1 production. We rely on it significantly for the Experimenta Science Dome projects because we also work with a lot of external source material from composers who deliver it in stereo format. Also, the Dome is not a 5.1/7.1 theater; it’s 29 channels. So, Upmix really helped us go from a stereo format to something that we could distribute in the room. I was able to adjust all my sources through the plugin and, ultimately, create a 3D mix. Using Nugen, you can really have fun with your audio.

Monte: I use Nugen Halo Upmix for sound design, especially to create atmosphere sounds, like a forest. I plug in my source and Upmix just works. It’s really great; I don’t have to spend hours tweaking the sound just to have it only serve as a bed to add extra elements on top. For example, maybe I want an extra bird chirping over there and then, okay, we’re in the forest now. It works really well for tasks like that.
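Nugen hasn’t published how Halo Upmix derives its channels, so the sketch below is only a deliberately naive mid/side illustration of what an upmixer does conceptually — sending correlated content to the center and decorrelated “width” to the surrounds. The file names, gains and five-channel layout are assumptions.

```python
# Deliberately naive mid/side upmix of a stereo bed to five channels.
# This is NOT Nugen Halo Upmix's algorithm — just the basic idea of
# sending correlated content to the center and "width" to the surrounds.
import numpy as np
import soundfile as sf

stereo, sr = sf.read("forest_bed_stereo.wav")  # hypothetical stereo atmosphere
left, right = stereo[:, 0], stereo[:, 1]

mid = 0.5 * (left + right)    # correlated ("center") content
side = 0.5 * (left - right)   # decorrelated ("width") content

front_l, front_r = left, right
center = 0.7 * mid
surr_l = 0.5 * side
surr_r = -0.5 * side          # opposite polarity keeps the surrounds wide

out = np.stack([front_l, front_r, center, surr_l, surr_r], axis=1)
sf.write("forest_bed_5ch.wav", out, sr)   # L, R, C, Ls, Rs (layout assumed)
```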

Blackmagic releases Resolve 16.2, beefs up audio post tools

Blackmagic has updated its color, edit, VFX and audio post tool to Resolve 16.2. This new version features major Fairlight updates for audio post as well as many improvements for color correction, editing and more.

This new version has major new updates for editing in the Fairlight audio timeline when using a mouse and keyboard. This is because the new edit selection mode unlocks functionality previously only available via the audio editor on the full Fairlight console, so editing is much faster than before. In addition, the edit selection mode makes adding fades and cuts and even moving clips only a mouse click away. New scalable waveforms let users zoom in without adjusting the volume. Bouncing lets customers render a clip with custom sound effects directly from the Fairlight timeline.

Adding multiple clips is also easier, as users can now add them to the timeline vertically, not just horizontally, making it simpler to add multiple tracks of audio at once. Multichannel tracks can now be converted into linked groups directly in the timeline so users no longer have to change clips manually and reimport. There’s added support for frame boundary editing, which improves file export compatibility for film and broadcast deliveries. Frame boundary editing now adds precision so users can easily trim to frame boundaries without having to zoom all the way in on the timeline. The new version supports modifier keys so that clips can be duplicated directly in the timeline using the keyboard and mouse. Users can also copy clips across multiple timelines with ease.

Resolve 16.2 also includes support for the Blackmagic Fairlight Sound Library with new support for metadata based searches, so customers don’t need to know the filename to find a sound effect. Search results also display both the file name and description, so finding the perfect sound effect is faster and easier than before.

MPEG-H 3D immersive surround sound audio bussing and monitoring workflows are now supported. Additionally, improved pan and balance behavior includes the ability to constrain panning.

Fairlight audio editing also has index improvements. The edit index is now available in the Fairlight page and works as it does in the other pages, displaying a list of all media used; users simply click on a clip to navigate directly to its location in the timeline. The track index now supports drag selections for mute, solo, record enable and lock as well as visibility controls so editors can quickly swipe through a stack of tracks without having to click on each one individually. Audio tracks can also be rearranged by clicking and dragging a single track or a group of tracks in the track index.

This new release also includes improvements in AAF import and export. AAF support has been refined so that AAF sequences can be imported directly to the timeline in use. Additionally, if the project features a different time scale, the AAF data can also be imported with an offset value to match. AAF files that contain multiple channels will also be recognized as linked groups automatically. The AAF export has been updated and now supports industry-standard broadcast wave files. Audio cross-fades and fade handles are now added to the AAF files exported from Fairlight and will be recognized in other applications.

For traditional Fairlight users, this new update makes major improvements in importing old legacy Fairlight projects — including improved speed when opening projects with over 1,000 media files.

Audio mixing is also improved. A new EQ curve preset for clip EQ in the inspector allows removal of troublesome frequencies. New FairlightFX filters include a new meter plug-in that adds a floating meter for any track or bus, so users can keep an eye on levels even if the monitoring panel or mixer are closed. There’s also a new LFE filter designed to smoothly roll off the higher frequencies when mixing low-frequency effects in surround.

Working with immersive sound workflows using the Fairlight audio editor has been updated and now includes dedicated controls for panning up and down. Additionally, clip EQ can now be altered in the inspector on the editor panel. Copy and paste functions have been updated, and now all attributes — including EQ, automation and clip gain — are copied. Sound engineers can set up their preferred workflow, including creating and applying their own presets for clip EQ. Plug-in parameters can also be customized or added so that users have fast access to their preferred tool set.

Clip levels can now be changed relatively, allowing users to adjust the overall gain while respecting existing adjustments. Clip levels can also be reset to unity, easily removing any level adjustments that might have previously been made. Fades can also be deleted directly from the Fairlight Editor, making it faster to do than before. Sound engineers can also now save their preferred track view so that they get the view they want without having to create it each time. More functions previously only available via the keyboard are now accessible using the panel, including layered editing. This also means that automation curves can now be selected via the keyboard or audio panel.

Continuing with the extensive improvements to Fairlight audio, there have also been major updates to the audio editor transport control. Track navigation is now improved and even works when nothing is selected. Users can navigate directly to the timecode entry window above the timeline from the audio editor panel, and there is added support for high-frame-rate timecodes. Timecode entry now supports values relative to the current CTI location, so the playhead can move along the timeline relative to its position rather than to a set timecode.

Support has also been added so the colon key can be used in place of the user typing 00. Master spill on console faders now lets users spill out all the tracks to a bus fader for quick adjustments in the mix. There’s also more precision with rotary controls on the panel and when using a mouse with a modifier key. Users can also change the layout and select either icon or text-only labels on the Fairlight editor. Legacy Fairlight users can now use the traditional — and perhaps more familiar — Fairlight layout. Moving around the timeline is even quicker with added support for “media left” and “media right” selection keys to jump the playhead forward and back.

This update also improves editing in Resolve. Loading and switching timelines on the edit page is now faster, with improved performance when working with a large number of audio tracks. Compound clips can now be made from in and out points so that editors can be more selective about which media they want to see directly in the edit page. There is also support for previewing timeline audio when performing live overwrites of video-only edits. Now when trimming, the duration will reflect the clip duration as users actively trim, so they can set a specific clip length. There is also support for a change-transition-duration dialog.

The media pool now includes metadata support for audio files with up to 24 embedded channels. Users can also duplicate clips and timelines into the same bin using copy and paste commands. There is also support for running the primary DaVinci Resolve screen as a window when dual-screen mode is enabled. Smart filters now let users sort media based on metadata fields, including keywords and people tags, so users can find the clips they need faster.

Amazon’s The Expanse Season 4 gets HDR finish

The fourth season of the sci-fi series The Expanse, streaming via Amazon Prime Video, was finished in HDR for the first time. Deluxe Toronto handled end-to-end post services, including online editorial, sound remixing and color grading. The series was shot on ARRI Alexa Minis.

In preparation for production, cinematographer Jeremy Benning, CSC, shot anamorphic test footage at a quarry that would serve as the filming stand-in for the season’s new alien planet, Ilus. Deluxe Toronto senior colorist Joanne Rourke then worked with Benning, VFX supervisor Bret Culp, showrunner Naren Shankar and series regular Breck Eisner to develop looks that would convey the location’s uninviting and forlorn nature, keeping the overall look desaturated and removing color from the vegetation. Further distinguishing Ilus from other environments, production chose to display scenes on or above Ilus in a 2.39 aspect ratio, while those featuring Earth and Mars remained in a 16:9 format.

“Moving into HDR for Season 4 of our show was something Naren and I have wanted to do for a couple of years,” says Benning. “We did test HDR grading a couple seasons ago with Joanne at Deluxe, but it was not mandated by the broadcaster at the time, so we didn’t move forward. But Naren and I were very excited by those tests and hoped that one day we would go HDR. With Amazon as our new home [after airing on Syfy], HDR was part of their delivery spec, so those tests we had done previously had prepared us for how to think in HDR.

“Watching Season 4 come to life with such new depth, range and the dimension that HDR provides was like seeing our world with new eyes,” continues Benning. “It became even more immersive. I am very much looking forward to doing Season 5, which we are shooting now, in HDR with Joanne.”

Rourke, who has worked on every season of The Expanse, explains, “Jeremy likes to set scene looks on set so everyone becomes married to the look throughout editorial. He is fastidious about sending stills each week, and the intended directive of each scene is clear long before it reaches my suite. This was our first foray into HDR with this show, which was exciting, as it is well suited for the format. Getting that extra bit of detail in the highlights made such a huge visual impact overall. It allowed us to see the comm units, monitors, and plumes on spaceships as intended by the VFX department and accentuate the hologram games.”

After making adjustments and ensuring initial footage was even, Rourke then refined the image by lifting faces and story points and incorporating VFX. This was done with input provided by producer Lewin Webb; Benning; cinematographer Ray Dumas, CSC; Culp or VFX supervisor Robert Crowther.

To manage the show’s high volume of VFX shots, Rourke relied on Deluxe Toronto senior online editor Motassem Younes and assistant editor James Yazbeck to keep everything in meticulous order. (For that they used the Grass Valley Rio online editing and finishing system.) The pair’s work was also essential to Deluxe Toronto re-recording mixers Steve Foster and Kirk Lynds, who have both worked on The Expanse since Season 2. Once ready, scenes were sent in HDR via Streambox to Shankar for review at Alcon Entertainment in Los Angeles.

“Much of the science behind The Expanse is quite accurate thanks to Naren, and that attention to detail makes the show a lot of fun to work on and more engaging for fans,” notes Foster. “Ilus is a bit like the wild west, so the technology of its settlers is partially reflected in communication transmissions. Their comms have a dirty quality, whereas the ship comms are cleaner-sounding and more closely emulate NASA transmissions.”

Adds Lynds, “One of my big challenges for this season was figuring out how to make Ilus seem habitable and sonically interesting without familiar sounds like rustling trees or bird and insect noises. There are also a lot of amazing VFX moments, and we wanted to make sure the sound, visuals and score always came together in a way that was balanced and hit the right emotions story-wise.”

Foster and Lynds worked side by side on the season’s 5.1 surround mix, with Foster focusing on dialogue and music and Lynds on sound effects and design elements. When each had completed his respective passes using Avid Pro Tools workstations, they came together for the final mix, spending time on fine strokes, ensuring the dialogue was clear and making adjustments as VFX shots were dropped in. Final mix playbacks were streamed to Deluxe’s Hollywood facility, where Shankar could hear adjustments completed in real time.

In addition to color finishing Season 4 in HDR, Rourke also remastered the three previous seasons of The Expanse in HDR, using her work on Season 4 as a guide and finishing with Blackmagic DaVinci Resolve 15. Throughout the process, she was mindful to pull out additional detail in highlights without altering the original grade.

“I felt a great responsibility to be faithful to the show for the creators and its fans,” concludes Rourke. “I was excited to revisit the episodes and could appreciate the wonderful performances and visuals all over again.”

London’s Molinare launches new ADR suite

Molinare has officially opened a new ADR suite in its Soho studio in anticipation of increased ADR output and to complement last month’s CAS award-winning ADR work on Fleabag. Other recent ADR credits for the company include Good Omens, The Capture and Strike Back. Molinare sister company Hackenbacker also picked up some award love with a BAFTA TV Craft award and an AMPS award for Killing Eve.

Molinare and Hackenbacker’s audio setup includes nine mixing theaters, three of which have Dolby 5.1/7.1 Theatrical or Commercials & Trailers Certification, and one has full Dolby Atmos home entertainment mix capability.

Molinare works on high-end TV dramas, feature films, feature documentaries and TV reality programming. Recent audio credits include BBC One’s Dracula, The War of the Worlds from Mammoth Screen and Worzel Gummidge. Hackenbacker has recently worked on HBO’s Avenue 5 for returning director Armando Iannucci and Carnival Film’s Downton Abbey and has contributed to the latest season of Peaky Blinders.

Behind the Title: Harbor sound editor/mixer Tony Volante

“As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.”

Name: Tony Volante

Company: Harbor

Can you describe what Harbor does?
Harbor was founded in 2012 to serve the feature film, episodic and advertising industries. Harbor brings together production and post production under one roof — what we like to call “a unified process allowing for total creative control.”

Since then, Harbor has grown into a global company with locations in New York, Los Angeles and London. Harbor hones every detail throughout the moving-image-making process: live-action, dailies, creative and offline editorial, design, animation, visual effects, CG, sound and picture finishing.

What’s your job title?
Supervising Sound Editor/Re-Recording Mixer

What does that entail?
I supervise the sound editorial crew for motion pictures and TV series along with being the re-recording mixer on many of my projects. I put together the appropriate crew and schedule along with helping to finalize a budget through the bidding process. As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.

What would surprise people the most about what falls under that title?
How almost all the sound that someone hears in a movie has been replaced by a sound editor.

What’s your favorite part of the job?
Creatively collaborating with co-workers and hearing it all come together in the final mix.

What is your most productive time of day?
Whenever I can turn off my email and concentrate on mixing.

If you didn’t have this job, what would you be doing instead?
Fishing!

When did you know this would be your path?
I played drums in a rock band and got interested in sound at around 18 years old. I was always interested in the “sound” of an album along with the musicality. I found myself buying records based on who had produced and engineered them.

Can you name some recent projects?
Fosse/Verdon (FX) and Boys State, which just won the Grand Jury Prize at Sundance.

How has the industry changed since you began working?
Technology has improved workflows immensely and has helped us with the creative process. It has also opened up the door to accelerating schedules to the point of sacrificing artistic expression and detail.

Name three pieces of technology you can’t live without.
Avid Pro Tools, my iPhone and my car’s navigation system.

How do you de-stress from it all?
I stand in the middle of a flowing stream fishing with my fly rod. If I catch something, that’s a bonus!

Talking with 1917’s Oscar-nominated sound editing team

By Patrick Birk

Sam Mendes’ 1917 tells the harrowing story of Lance Corporals Will Schofield and Tom Blake, following the two young British soldiers on their perilous trek across no man’s land to deliver lifesaving orders to the Second Battalion of the Devonshire Regiment.

Oliver Tarney

The story is based on accounts of World War I by the director’s grandfather, Alfred Mendes. The production went to great lengths to create an immersive experience, placing the viewer alongside the protagonists in a painstakingly recreated world, woven together seamlessly, with no obvious cuts. The film’s sound department had to rise to the challenge of bringing this rarely portrayed sonic world to life.

We checked in with supervising sound editor Oliver Tarney and ADR/dialogue supervisor Rachael Tate, who worked out of London’s Twickenham Studios. Both Tarney and Tate are Oscar-nominated in the Sound Editing category. Their work was instrumental in transporting audiences to a largely forgotten time, helping to further humanize the monochrome faces of the trenches. I know that I will keep their techniques — from worldizing to recording more ambient Foley — in mind on the next project I work on.

Rachael Tate

A lot of the film is made up of quiet, intimate moments punctuated by extremely traumatic events. How did you decide on the most key sounds for those quiet moments?
Oliver Tarney: When Sam described how it was going to be filmed, it was expected that people would comment on how it was made from a technical perspective. But for Sam, it’s a story about the friendship between these two men and the courage and sacrifice that they show. Because of this, it was important to have those quieter moments when you aren’t just engaged in full-tilt action the whole time.

The other factor is that the film had no edits — or certainly no obvious edits (which actually meant many edits) — and was incredibly well-rehearsed. It would have been a dangerous thing to have had everything playing aggressively the whole way through. I think it would have been very fatiguing for the audience to watch something like that.

Rachael Tate: Also, you can’t rely on a cut in the normal way to inform pace and energy, so you are using things like music and sound to sort of ebb and flow the energy levels. So after the plane crash, for example, you’ll notice it goes very quiet, and also with the mine collapse, there’s a huge section of very little sound, and that’s on purpose so your ears can reacclimatize.

Absolutely, and I feel like that’s a good way to go — not to oversaturate the audience with the extreme end of the sound design. In other interviews, you said that you didn’t want it to seem overly processed.
Tarney: Well, we didn’t want the weapons to sound heroic in any way. We didn’t want it to seem like they were enjoying what they were doing. It’s very realistic; it’s brutal and harsh. Certainly, Schofield does shoot at people, but it’s out of necessity rather than enjoying his role there. In terms of dynamics, we broke the film up into a series of arcs, and we worked out that some would be five minutes, some would be nine minutes and so on.

In terms of the guns, we went more naturalistic in our recordings. We wanted the audience to feel everything from their perspective — that’s what Sam wanted with the entire film. Rather than having very direct recordings, we split our energies between that and very ambient recordings in natural spaces to make it feel more realistic. The distance that enemy fire was coming from is much more realistic than you would normally play in a film, and the same goes for the biplane recordings. We had microphones all across airfields to get that lovely phase-y kind of sound. For the dogfight with the planes, we sold the fact that you’re watching Blake and Schofield watching the dogfight rather than being drawn directly to the dogfight. I guess it was trying to mirror the visual, which would stick with the two leads.

Tate: We did the same with the crowd. We tried to keep it more realistic by using half actual territorial army guys, along with voice actors, rather than just being a crowdy-sounding crowd. When we put that into the mix, we also chose which bits to focus on — Sam described it as wanting it to be like a vignette, like an old photo. You have the brown edging that fades away in the corners. He wanted you to zoom in on them so much that the stuff around them is there, but at the level they would hear it. So, if there’s a crowd on the screen further back from them, in reality you wouldn’t really hear it. In most films you put something in everyone’s mouth, but we kept it pared right back so that you’re just listening to their voices and their breaths. This is similar to how it was done with the guns and effects.

You said you weren’t going for any Hollywood-type effects, but I did notice that there are some psychoacoustic cues, like when a bomb goes off in the bunker, and I think a tinnitus-type effect.
Tarney: There are a few areas where you have to go with a more conventional film language. When the plane’s very close — on the bridge perhaps — once he’s being fired upon, we start going into something that’s a little more conventional, and then we settle back into his perspective. It was that thing that Sam mentioned — subjectivity, objectivity; you can flip between them a little bit, otherwise it becomes too linear.

Tate: It needed to pack a punch.

Foley plays a massive part in this production. Assuming you used period weaponry and vehicles?
Tarney: Sam was so passionate about this project. When you visited the sets, the detail was just beautiful. They set the bar in terms of what we had to achieve realism-wise. We had real World War I rifles and machine guns, both British and German, and biplanes. We also did wild track Foley at the first trench and the last trench: the muddy trench and then the chalk one at the end.

Tate: We even put Blakeys on the boots.

Tarney: Yes, we bought various boots with different hobnails and metal tips.

That’s what a Blakey is?
Tate: The metal things that they put in the bottom of their shoes so that they didn’t slip around.

Tarney: And we went over the various surfaces and found which worked the best. Some were real hobnail boots, and some had metal stuck into them. We still wanted each character to have a certain personality; you don’t want everything sounding the same. We also recorded them without the nails, so when we were in a quieter part of the film, it was more like a normal boot. If you’d had that clang, clang, clang all the way through the film…

Tate: It would throw your attention away from what they were saying.

Tarney: With everything we did on the Foley, it was important to keep focus on them the whole time. We would work in layers, and as we would build up to one of the bigger events, we’d start introducing the heavier, more detailed Foley and take away the more diffuse, mellow Foley.

You only hear webbing and that kind of stuff at certain times because it would be too annoying. We would start introducing that as they went into more dangerous areas. You want them to feel conspicuous, too — when they’re in no man’s land, you want the audience to think, “Wow, there are two guys, alone, with absolutely no idea what’s out there. Is there a sniper? What’s the danger?” So once you start building up that tension, you make them a little bit louder again, so you’re aware they are a target.

How much ADR did the film require? I’m sure there was a lot of crew noise in the background.
Tate: Yes, there was a lot of crew noise — there were only two lines of “technical” ADR, which is when a line needs to be redone because the original could not be used/cleaned sufficiently. My priority was to try and keep as much production as possible. Because we started a couple of weeks after shooting started, and as they were piecing it together, it was as if it was locked. It’s not the normal way.

With this, I had the time to go deep and spectrally remove all the crew feet from the mics because they had low-end thuds on their clip mics, which couldn’t be avoided. The recordist, Stuart Wilson, did a great job, giving me a few options with the clip mics, and he was always trying to get a boom in wherever he could.

He had multiple lavaliers on the actors?
Tate: Yes, he had up to three on both those guys most of the time, and we went with the one on their helmets. It was like a mini boom. But, occasionally, they would get wind on them and stuff like that. That’s when I used iZotope RX 7. It was great having the time to do it. Ordinarily people might say, “Oh no, let’s ADR all the breaths there,” but I could get the breaths out. When you hear them breathing, that’s what they were doing at the time. There’s so much performance in them, I would hate to get them standing in a studio in London, you know, in jeans, trying to recreate that feeling.

So even if there’s slight artifacting, the littlest bit, you’d still go with that over ADR?
Tate: Absolutely. I would hope there’s not too much there though.

Tarney: Film editor Lee Smith and Sam have such a great working relationship; they really were on the same page putting this thing together. We had a big decision to make early on: Do we risk being really progressive and organize Foley recording sessions whilst they were still filming? Because, if everything was going according to plan, they were going to be really hungry for sound since there was no cutting once they had chosen the takes. If it didn’t go to plan, then we’d be forever swapping out seven-minute takes, which would be a nightmare to redo. We took a gamble and budgeted to spend the resources front heavy, and it worked out.

Tate: Lee Smith used to be a sound guy, which didn’t hurt.

I saw how detailed they were with the planning. The model of the town for figuring out the trajectory of the flare for lighting, for example.
Tate: They also mapped out the trenches so they were long enough to cover the amount of dialogue the actors were going to say — so the trenches went on for 500 yards. Before that, they were on theater stages with cardboard boxes to represent trenches, walking through them again and again. Everything was very well-planned.

Apart from dialogue and breaths, were there any pleasant surprises from the production audio that you were able to use in the final cut?
Tate: In the woods, toward the end of the film, Schofield stumbles out of the river and hears singing, and the singing that you hear is the guy doing it live. That’s the take. We didn’t get him in to sing and then put it on; that’s just his clip mic, heavily affected. We actually took his recording out into the New Forest, which is south of London.

A worldizing-type technique?
Tate: Yes, we found a remote part, and we played it and recorded it from different distances, and we had that woven against the original with a few plugins on it for the reverbs.
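As a rough in-the-box analogue of that blend, and assuming you only had the dry clip-mic recording rather than the re-recorded forest passes, you might fake the “farther away” layers with attenuation and gentle low-pass filtering before mixing them against the original. A minimal sketch; the filenames, gain and filter values are illustrative assumptions, not what the 1917 team did.

```python
# Rough sketch of weaving a dry vocal against "more distant" versions of itself.
# In the film this was done by re-recording playback in a real forest (worldizing);
# here distance is only approximated with attenuation and a gentle low-pass filter.
# Filenames, gain and cutoff values are illustrative assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def distant_layer(signal, sr, gain_db=-12.0, lowpass_hz=3000.0):
    """Attenuate and darken a mono signal to suggest distance."""
    sos = butter(2, lowpass_hz, btype="lowpass", fs=sr, output="sos")
    return (10.0 ** (gain_db / 20.0)) * sosfilt(sos, signal, axis=0)

dry, sr = sf.read("singing_clip_mic.wav")            # hypothetical mono source
blend = 0.4 * dry + 0.6 * distant_layer(dry, sr)     # favor the far layer for an ethereal feel
blend /= max(np.max(np.abs(blend)), 1e-9)            # simple peak normalization
sf.write("singing_distance_blend.wav", blend, sr)
```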

Tarney: We don’t know if Schofield is concussed and if he’s hallucinating. So we really wanted it to feel sort of ethereal, sort of wafting in and out on the wind — is he actually hearing this or not?

Tate: Yeah, we played the first few lines out of sequence, so you can’t really catch if there’s a melody. Just little bits on the breeze so that you’re not even quite sure what you’re hearing at that point, and it gradually comes to a more normal-sounding tune.

Tarney: Basically, that’s the thing with the whole film; things are revealed to the audience as they’re revealed to the lead characters.

Tate: There are no establishing shots.

Were there any elements of the sound design you wouldn’t expect to be in there that worked for one reason or another?
Tarney: No, there’s nothing… we were pretty accurate. Even the first thing you hear in the film — the backgrounds that were recorded in April.

Tate: In the field.

Tarney: Rachael and I went to Ypres in Belgium to visit the World War I museum and immerse ourselves in that world a little bit.

Tate: We didn’t really know that much about World War I. It wasn’t taught in my school, so I really didn’t know anything before I started this; we needed to educate ourselves.

Can you talk about the loop groups and dialing down to the finest details in terms of the vocabulary used?
Tate: Oh, God, I’ve got so many books, and we got military guys for that sort of flat way they operate. You can’t really explain that to a voice actor from scratch and get them to do it properly. But the voice actors helped those guys perform and get out of their shells, and the military guys helped the voice actors by showing them how it’s done.

I gave them all many sheets of key words they could use, or conversation starters, so that they could improvise but stay on the right track in terms of content. Things like slang, poems from a cheap newspaper that was handed out to the soldiers. There was an officer’s manual, so I could tell them the right equipment and stuff. We didn’t want to get anything wrong.

That reminds me of this series of color photographs taken in the early 1900s in Russia. Automatically, it brings you so much closer to life at that point in time. Do you feel like you were able to achieve that via the sound design of this film?
Tarney: I think the whole project did that. When you’ve watched a film every day for six months, day in and day out, you can’t help but think about that era more, and it’s slightly embarrassing that it’s one generation past your grandparents.

How much more worldizing did you do, apart from the nice moment with the song?
Tarney: The Foley that you hear in the trench at the beginning and in the trench at the end is a combination between worldizing and sound designer Mike Fentum’s work. We both went down about three weeks before we started because Stuart Wilson gave us a heads up that they were wrapping at that location, so we spoke to the producer, and he gave us access.

So, in terms of worldizing, it’s not quite worldizing in the conventional sense of taking a recording and then playing it in a space. We actually went to the space and recorded the feet in that space, and the Foley supervisor Hugo Adams went to Salisbury Plain (the chalk trench at the end), and those were the first recordings that we edited and gave to Lee Smith. And then, we would get the two Foley artists that we had — Andrea King and Sue Harding — to top that with a performed pass against a screen. The whole film is layered between real recordings and studio Foley, and it’s the blend of natural presence and the performed studio Foley, with all the nuance and detail that you get from that.

Tate: Similarly, the crowd that we recorded out on a field in the back lot of Shepperton, with a 50 array; we did as much as we could without a screen with them just acting and going through the motions. We had an authentic World War I stretcher, which we used with hilarious consequences. We got them to run up and down carrying their friends on stretchers and things like that and passing enormous tables to each other and stuff so that we had the energy of it. There is something about recording outside and that sort of natural slap that you get off the buildings. It was embedded with production quite seamlessly really, and you can’t really get the same from a studio. We had to do the odd individual line in there, but most of it was done out in a field.

When need be, were you using things like convolution reverbs, such as Audio Ease Altiverb, in the mix?
Tarney: Absolutely. As good as the recordings were, it’s only when you put them against picture that you really understand what you need to achieve. So we would definitely augment with a lot — Altiverb is a favorite. Re-recording mixer Mark Taylor and I would use that a lot to augment and just change perspective a little bit more.
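The studio-side counterpart to that augmentation can be sketched as a simple convolution reverb: convolve a dry element with a room impulse response and balance wet against dry to push perspective. This is not Audio Ease’s implementation, just the underlying idea; the filenames and the 30 percent wet ratio are assumptions.

```python
# Minimal convolution-reverb sketch: convolve a dry mono element with a room impulse
# response and balance wet/dry to change apparent perspective. Not Altiverb itself,
# only the underlying idea; filenames and the 30% wet ratio are assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("foley_dry.wav")       # hypothetical dry mono Foley pass
ir, ir_sr = sf.read("trench_ir.wav")     # hypothetical mono impulse response
assert sr == ir_sr, "resample the IR to the session rate first"

wet = fftconvolve(dry, ir)[: len(dry)]   # trim the reverb tail back to the dry length
peak = np.max(np.abs(wet))
if peak > 0:
    wet /= peak                          # normalize so the blend ratio is predictable
mix = 0.7 * dry + 0.3 * wet              # raise the wet share to sit the element farther back
sf.write("foley_with_room.wav", mix / max(np.max(np.abs(mix)), 1e-9), sr)
```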

Can you talk about the Atmos mix and what it brought to the film?
Tarney: I’ve worked on many films with Atmos, and it’s a great tool for us. Sam’s very performance-orientated and would like things to be more screen-focused. The minute you have to turn around, you’ve lost that connection with the lead characters. So, in general, we kept things a little more front-loaded than we might have done with another director, but I really liked the results. It’s actually all the more shocking when you hear the biplane going overhead when they’re in no man’s land.

Sam wanted to know all the way through, “Can I hear it in 5.1, 7.1 and Atmos?” We’d make sure that in the three mixes — other than the obvious — we had another plane coming over from behind. There’s not a wild difference in Atmos. The low end is nicer, and the discrete surrounds play really well, but it’s not a showy kind of mix in that sense. That would not have been true to everything we were trying to achieve, which was something real.

So Sam Mendes knows sound?
Tarney: He’s incredibly hungry to understand everything, in the best way possible. He’s very good at articulating what he wants and makes it his business to understand everything. He was fantastic. We would play him a section in 5.1, 7.1 and Atmos, and he would describe what he liked and disliked about each format, and we would then try to make each format have the same value as the other ones.


Patrick Birk is a musician and sound engineer at Silver Sound, a boutique sound house based in New York City.

CAS Awards recognize GOT, Fleabag, Ford v Ferrari, more

The CAS Awards were held this past weekend, with the sound mixing team from Ford v Ferrari — Steven A. Morrow CAS, Paul Massey CAS, David Giammarco CAS, Tyson Lozensky, David Betancourt and Richard Duarte — taking home the Cinema Audio Society Award for Outstanding Sound Mixing Motion Picture – Live Action.

Game of Thrones – The Bells

Top honors for Motion Picture – Animated went to Toy Story 4 and the sound mixing team of Doc Kane CAS, Vince Caro CAS, Michael Semanick CAS, Nathan Nance, David Boucher and Scott Curtis. The CAS Award for Outstanding Sound Mixing Motion Picture – Documentary went to Making Waves: The Art of Cinematic Sound and the team of David J. Turner, Tom Myers, Dan Blanck and Frank Rinella.

Held in the Wilshire Grand Ballroom of the InterContinental Los Angeles Downtown, the awards were presented in seven categories for Outstanding Sound Mixing Motion Picture and Television and two Outstanding Product Awards. The evening saw CAS president Karol Urban pay tribute to recently retired CAS executive board member Peter R. Damski for his years of service to the organization. The contributions of re-recording mixer Tom Fleischman, CAS, were recognized as he received the CAS Career Achievement Award. Presenter Gary Bourgeois spoke to Fleischman’s commitment to excellence demonstrated in a career that spans over 40 years,  nearly 200 films and collaborations with dozens of notable directors.  

James Mangold

James Mangold received the CAS Filmmaker Award in a presentation that included remarks by re-recording mixer Paul Massey, CAS, who was joined by Harrison Ford. Mangold had even more to celebrate as he watched his sound team take top honors for Outstanding Achievement in Sound Mixing Motion Picture – Live Action.

Here is the complete list of winners:

MOTION PICTURE – LIVE ACTION

Ford v Ferrari

Ford v Ferrari team

Production Mixer – Steven A. Morrow CAS 

Re-recording Mixer – Paul Massey CAS 

Re-recording Mixer – David Giammarco CAS 

Scoring Mixer – Tyson Lozensky

ADR Mixer – David Betancourt 

Foley Mixer – Richard Duarte

MOTION PICTURE – ANIMATED 

Toy Story 4

Original Dialogue Mixer – Doc Kane CAS

Original Dialogue Mixer – Vince Caro CAS

Re-recording Mixer – Michael Semanick CAS 

Re-recording Mixer – Nathan Nance

Scoring Mixer – David Boucher

Foley Mixer – Scott Curtis

 

MOTION PICTURE – DOCUMENTARY

Making Waves: The Art of Cinematic Sound

Production Mixer – David J. Turner 

Re-recording Mixer – Tom Myers 

Scoring Mixer – Dan Blanck

ADR Mixer – Frank Rinella

 

TELEVISION SERIES – 1 HOUR

Game of Thrones: The Bells

Production Mixer – Ronan Hill CAS 

Production Mixer – Simon Kerr 

Production Mixer – Daniel Crowley 

Re-recording Mixer – Onnalee Blank CAS 

Re-recording Mixer – Mathew Waters CAS 

Foley Mixer – Brett Voss CAS

TELEVISION SERIES – 1/2 HOUR 

TIE

Barry: ronny/lily

Production Mixer – Benjamin A. Patrick CAS 

Re-recording Mixer – Elmo Ponsdomenech CAS 

Re-recording Mixer – Jason “Frenchie” Gaya 

ADR Mixer – Aaron Hasson

Foley Mixer – John Sanacore CAS

 

Fleabag: Episode #2.6

Production Mixer – Christian Bourne 

Re-recording Mixer – David Drake 

ADR Mixer – James Gregory

 

TELEVISION MOVIE or LIMITED SERIES

Chernobyl: 1:23:45

Production Mixer – Vincent Piponnier 

Re-recording Mixer – Stuart Hilliker 

ADR Mixer – Gibran Farrah

Foley Mixer – Philip Clements

 

TELEVISION NON-FICTION, VARIETY or MUSIC SERIES or SPECIALS

David Bowie: Finding Fame

Production Mixer – Sean O’Neil 

Re-recording Mixer – Greg Gettens

 

OUTSTANDING PRODUCT – PRODUCTION

Sound Devices, LLC

Scorpio

 

OUTSTANDING PRODUCT – POST PRODUCTION 

iZotope

Dialogue Match

 

STUDENT RECOGNITION AWARD

Bo Pang

Chapman University

 

Main Image: Presenters Whit Norris, Elisha Cuthbert, Award winners Onnalee Blank, Ronan Hill and Brett Voss at the CAS Awards. (Tyler Curtis/ABImages) 

 

 

Wylie Stateman on Once Upon a Time… in Hollywood‘s Oscar nod for sound

By Beth Marchant

To director Quentin Tarantino, sound and music are primal forces in the creation of his idiosyncratic films. Often using his personal music collection to jumpstart his initial writing process and later to set a film’s tone in the opening credits, Tarantino always gives his images a deep, multi-sensory well to swim in. According to his music supervisor Mary Ramos, his bold use of music is as much a character as each film’s set of quirky protagonists.

Wylie Stateman – Credit: Andrea Resnick

Less showy than those memorable and often nostalgic set-piece songs, the sound design that holds them together is just as critically important to Tarantino’s aesthetic. In Once Upon a Time… in Hollywood it even replaces the traditional composed score. That’s one of many reasons why the film’s supervising sound editor Wylie Stateman, a long-time Tarantino collaborator, relished his latest Oscar-nominated project with the director (he previously received nominations for Django Unchained and Inglourious Basterds and has a lifetime total of nine Oscar nominations).

Before joining team Tarantino, Stateman sound designed some of the most iconic films of the ‘80s and ‘90s, including Tron, Footloose, Ferris Bueller’s Day Off (among 15 films he made with John Hughes), Born on the Fourth of July and Jerry Maguire. He also worked for many years with Oliver Stone, winning a BAFTA for his sound work on JFK. He went on to cofound the Topanga, California-based sound studio Twentyfourseven.

We talked to Stateman about how he interpreted Tarantino’s sound vision for his latest film — about a star having trouble evolving to new roles in Hollywood and his stuntman — revealing just how closely the soundtrack is connected to every camera move and cut.

How does Tarantino’s style as a director influence the way you approach the sound design?
I believe that sound is a very important department within the process of making any film. And so, when I met Quentin many years ago, I was meeting him under the guise that he wanted help and he wanted somebody who could focus their time, experience and attention on this very specific department called sound.

I’ve been very fortunate, especially on Quentin’s films, to also have a great production sound mixer and great rerecording mixers. We have both sides of the process in really tremendously skilled hands and tremendously experienced hands. Mark Ulano, our production sound mixer, won an Oscar for Titanic. He knows how to deal with dialogue. He knows how to deal with a complex set, a set where there are a lot of moving parts.

On the other side of that, we have Mike Minkler doing the final re-recording mixing. Mike, who I worked with on JFK, is tremendously skilled with multiple Oscars to his credit. He’s just an amazing creative in terms of re-recording mixing.

The role that I like to play as supervising sound editor and designer is to speak to the filmmaker in terms of sound. For this film, we realized we could drive the soundtrack without a composer by using the chosen songs and KHJ radio, selecting bits and pieces from the shows of infamous DJ “Humble Harve,” or from the clips of all the other DJs on KHJ radio who really defined 1969 in Los Angeles.

And as the film shows, most people heard them over the car radio in car-centric LA.
The DJs were powerful messengers of popular culture. They were powerful messengers of what was happening in the minds, in the streets and in the popular culture of that time. That was Quentin’s idea. When he wrote the script, he wrote all of the KHJ radio segments into it. He listens a lot, and he’s a real student of the filmmaking process and a real master.

On the student side, he’s constantly learning and he’s constantly looking and he’s constantly listening. On the master side, he then applies that to the characters that he wants to develop and those situations that he’s looking to be at the base and basis of his story. So, basically, Quentin comes to me for a better understanding of his intention in terms of sound, and he has a tremendous understanding to begin with. That’s what makes it so exciting.

When talking to Quentin and his editor Fred Raskin, who are both really deeply knowledgeable filmmakers, it can be quite challenging to stay in front of them and/or to chase behind them. It’s usually a combination of the two. But Quentin is a very generous collaborator, meaning he knows what he wants, but then he’s able to stop, listen and evaluate other ideas.

How did you find all of the clips we hear on the various radios?
Quentin went through hundreds of hours of archival material. And he has a tremendous working knowledge of music to begin with, and he’s also a real student of that period.

Can you talk about how you approached the other elements of specific, Tarantino-esque sound, like Cliff crunching on a celery stick in that bar scene?
Quentin’s movies are bold in the sense of some of the subject matter that he tackles, but they’re highly detailed and also very much inside his actors’ heads. So when you talk about crunching on a piece of celery, I interpret everything that Quentin imparts on his characters as having some kind of potential vocabulary in terms of sound. And that vocabulary… it applies to the camera. If the camera hides behind something and then comes out and reveals something or if the camera’s looking at a big, long shot — like Cliff Booth’s walk to George Spahn’s house down that open area in the Spahn Ranch — every one of those moves has a potential sound component and every editorial cut could have a vocabulary of sound to accompany it.

We also use those [combinations] to alter time, whether it’s to jump forward or jump back or just crash in. He does a lot of very explosive editing moves and all of that has an audio vocabulary. It’s been quite interesting to work with a filmmaker that sees picture and sound as sort of a romance and a dance. And the sound could lead the picture, or it could lag the picture. The sound can establish a mood, or it can justify a mood or an action. So it’s this constant push-pull.

Robert Bresson, a major influence on the French New Wave, basically said, “When the ear leads the eye, the eye becomes impatient. When the eye leads the ear, the ear becomes impatient. Use those impatiences.” So what I’m saying is that sound and pictures are this wonderful choreographed dance. Stimulate peoples’ ears and their eye is looking for something; stimulate their eyes and their ears are looking for something, and using those together is a really intimate and very powerful tool that Quentin, I think, is a master at.

How does the sound design help define the characters of Rick Dalton (Leonardo DiCaprio) and Cliff Booth (Brad Pitt)?
This is essentially a buddy movie. Rick Dalton is the insecure actor who’s watching a certain period — when they had great success and comfort — transition into a new period. You’re going from the John Wayne/True Grit way of making movies to Butch Cassidy and the Sundance Kid or Easy Rider, and Rick is not really that comfortable making this transition. His character is full of that kind of anxiety.

The Cliff Booth character is a very internally disturbed character. He’s an unsuccessful crafts/below-the-line person who’s got personal issues and is kind of typical of a character that’s pretty well-known in the filmmaking process. Rick Dalton’s anxious world is about heightened senses. But when he forgets his line during the bar scene in the Lancer set, the world doesn’t become noisy. The world becomes quiet. We go to silence because that’s what’s inside his head. He can’t remember the line and it’s completely silent. But you could play that same scene 180 degrees in the opposite direction and make him confused in a world of noise.

The year 1969 was very important in the history of filmmaking, and that’s another key to Rick’s and Cliff’s characters. If you look at 1969, it was the turning point in Hollywood when indie filmmaking was introduced. It was also the end of a great era of traditional studio fare and traditional acting; the new era was defined more by the looser, run-and-gun style of Easy Rider. In a way, the Peter Fonda/Dennis Hopper dynamic of Hopper’s film is somewhat similar to that of Rick Dalton and Cliff Booth.

I saw Easy Rider again recently and the ending hit me like a ton of bricks. The cultural panic, and the violence it invokes, is so palpable because you realize that clash of cultures never really went away; it’s still with us all these years later. Tarantino definitely taps into that tension in this film.
It’s funny that you say that because my wife and I went to the Cannes Film Festival with the team, and they were playing Easy Rider on the beach on a giant screen with a thousand seats in the sand. We walked up on it and we stood there for literally an hour and a half transfixed, just watching it. I hadn’t seen it in years.

What a great use of music and location photography! And then, of course, the story and the ending; it’s like, wow. It’s such a huge departure from True Grit and the generation that made that film. That’s what I love about Quentin, because he plays off the tension between those generations in so many ways in the film. We start out with Al Pacino, and they’re drinking whiskey sours, and then we go all the way through the gamut of what 1969 really felt like to the counterculture.

Was there anything unusual that you did in the edit to manipulate sound to make a scene work?
Sound design is a real design-level responsibility. We invent sound. We go to the libraries and we go to great lengths to record things in nature or wherever we can find it. In this case, we recorded all the cars. We apply a very methodical approach to sound.

Sound design, for me, is the art of shaping noise to suit the picture and to enhance the story. Great sound lives somewhere between the science of audio and the subjectivity of storytelling. The science part is really well-known, and it’s been perfected over many, many years with lots of talented artists and artisans. But the story part is what excites me, and it’s what excites Quentin. So it becomes what we don’t do that’s so interesting, like using silence instead of noise or creating a soundtrack without a composer. I don’t think you miss having score music. When we couldn’t figure out a song, we made sound design elements. So, yeah, we would make tension sounds.

Shaping noise is not something I could explain to you with an “an eye of newt plus a tail of yak” secret recipe. It’s a feeling. It’s just working with audio, shaping sound effects and noise to become imperceptibly conjoined with music. You can’t tell where the sound design is beginning and ending and where it transfers into more traditional song or music. That is the beauty of Quentin’s films. In terms of sound, the audio has shapes that are very musical.

His deep-cut versions of songs are so interesting, too. Using “California Dreamin’” by the Mamas and the Papas would have been way too obvious, so he uses a José Feliciano cover of it and puts the actual Mamas and Papas into the film as walk-on characters.
Yeah. I love his choice of music. From Sharon and Roman listening to “Hush” by Deep Purple in the convertible, their hair flying, to going straight into “Son of a Lovin’ Man” after they arrive at the Playboy Mansion. Talk about 1969 and setting it off! It’s not from the San Francisco catalog; it’s just this lovely way that Quentin imagines time and can relate to it as sound and music. The world as it relates to sound is very different than the world of imagery. And the type of director that Quentin is, he’s a writer, he’s a director, and he’s a producer, so he really understands the coalescing of these disciplines.

You haven’t done a lot of interviews in the past. Why not?
I don’t do what I do to call attention to either myself or my work. Over the first 35 years of my career, there’s very little record of any conversation that I had outside of my team and directly with my filmmakers. But at this point in life, when we’re at the cusp of this huge streaming technology shift and everything is becoming more politically sensitive, with deep fakes in both image and audio, I think it’s time sound should have somebody step up and point out, “Hey, we are invisible. We are transitory.” Meaning, when you stop the electricity going to the speakers, the sound disappears, which is kind of an amazing thing. You can pause the picture and you can study it. Sound only exists in real time. It’s just the vibration in the air.

And to be clear, I don’t see motion picture sound as an art form. I see it, rather, as a form of art and it takes a long time to become a sculptor in sound who can work in a very simple style. After all, it’s the simplest lines that just blow your mind!

What blew your mind about this film, either while you worked on it or when you saw the finished product?
I really love the whole look of the film. I love the costumes, and I have great respect for the team that Quentin consistently pulls together. When I work on Quentin’s films, I never turn around and find somebody that doesn’t have a great idea or deep experience in their craft. Everywhere you turn, you bump into extraordinary talent.

Dakota Fanning’s scene at the Spahn Ranch… I mean, wow! Knocks my socks off. That’s really great stuff. It’s a remarkable thing to work with a director who has that kind of love for filmmaking and that allows for really talented people to also get in the sandbox and play.


Beth Marchant is a veteran journalist focused on the production and post community and contributes to “The Envelope” section of the Los Angeles Times. Follow her on Twitter @bethmarchant.

Behind the Title: Sound Lounge ADR mixer Pat Christensen

This ADR mixer was a musician as a kid and took engineering classes in college, making him perfect for this job.

Name: Pat Christensen

Company: Sound Lounge (@soundloungeny)

What’s your job title?
ADR mixer

What does Sound Lounge do?
Sound Lounge is a New York City-based audio post facility. We provide sound services for TV, commercials, feature films, television series, digital campaigns, games, podcasts and other media. Our services include sound design, editing and mixing; ADR recording and voice casting.

What does your job entail?
As an ADR mixer, I re-record dialogue for film and television. ADR is needed when dialogue cannot be recorded properly on set, when lines change for creative reasons or when additional dialogue is needed. My stage is set up differently from a standard mix stage as it includes a voiceover booth for actors.

We also have an ADR stage with a larger recording environment to support groups of talent. The stage also allows us to enhance sound quality and record performances with greater dynamics, high and low. The recording environment is designed to be “dead,” that is without ambient sound. That results in a clean recording so when it gets to the next stage, the mixer can add reverb or other processing to make it fit the environment of the finished soundtrack.

What would people find most surprising about your job?
People who aren’t familiar with ADR are often surprised that it’s possible to make an actor’s voice sync perfectly with the image on screen and sound indistinguishable from dialogue recorded on the day.
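One small, automatable piece of that sync puzzle is simply finding the time offset between a studio take and the production guide track. The sketch below estimates it with cross-correlation; it is a hypothetical illustration of that one step, not Sound Lounge’s workflow, and the filenames are assumptions (mono files at the same sample rate).

```python
# Hedged sketch: estimate the offset between an ADR take and the production guide line
# via cross-correlation, then shift the ADR into rough sync. One small step of the
# process, not a full ADR workflow; filenames are assumptions, mono files expected.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

guide, sr = sf.read("production_guide.wav")   # hypothetical on-set line (mono)
adr, sr_adr = sf.read("adr_take.wav")         # hypothetical studio re-record (mono)
assert sr == sr_adr, "both files must share a sample rate"

# Positive lag means the ADR arrives later than the guide.
lag = int(np.argmax(correlate(adr, guide, mode="full"))) - (len(guide) - 1)
direction = "trails" if lag > 0 else "leads"
print(f"ADR {direction} the guide by {abs(lag) / sr * 1000:.1f} ms")

aligned = np.roll(adr, -lag)                  # crude shift; real tools edit rather than roll
sf.write("adr_roughly_aligned.wav", aligned, sr)
```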

What’s your favorite part of the job?
Interacting with people — the sound team, the director or the showrunner, and the actors. I enjoy helping directors in guiding the actors and being part of the creative process. I act as a liaison between the technical and creative sides. It’s fun and it’s different every day. There’s never a boring session.

What’s your least favorite?
I don’t know if there is one. I have a great studio and all the tools that I need. I work with good people. I love coming to work every day.

What’s your most productive time of the day?
Whenever I’m booked. It could be 9am. It could be 7am. I do night sessions. When the client needs the service, I am ready to go.

If you didn’t have this job, what would you be doing instead?
In high school, I played bass in a punk rock band. I learned the ins and outs of being a musician while taking classes in engineering. I also took classes in automotive technology. If I’d gone that route, I wouldn’t be working in a muffler shop; I’d be fine-tuning Formula 1 engines.

How early on did you know that sound would be your path?
My mom bought me a four-string Washburn bass for Christmas when I was in the eighth grade, but even then I was drawn to the technical side. I was super interested in learning about audio consoles and other gear and how they were used to record music. Luckily, my high school offered a radio and television class, which I took during my senior year. I fell in love with it from day one.

Silicon Valley

What are some of your recent projects?
I worked on the last season of HBO’s Silicon Valley and the second season of CBS’ God Friended Me. We also did Starz’s Power and the new Adam Sandler movie Palm Springs. There are many more credits on my IMDB page. I try to keep it up-to-date.

Is there a project that you’re most proud of?
Power. We’ve done all seven seasons. It’s been exciting to watch how successful that show has become. It’s also been fun working with the actors and getting to know many of them on a personal level. I enjoy seeing them whenever they come in. They trust me to bridge the gap between the booth and the original performance and deliver something that will be seen, and heard, by millions of people. It’s very fulfilling.

Name three pieces of technology you cannot live without.
A good microphone, a good preamp and good speakers. The speakers in my studio are ADAM A7Xs.

What social media channels do you follow?
Instagram and Facebook.

What do you do to relax?
I play hockey. On weekends, I enjoy getting on the ice, expending energy and playing hard. It’s a lot of fun. I also love spending time with my family.

67th MPSE Golden Reel Winners

By Dayna McCallum

The Motion Picture Sound Editors (MPSE) Golden Reel Awards shared the love among a host of films when handing out awards this past weekend at their 67th annual ceremony.

The feature film winners included Ford v Ferrari for effects/Foley, 1917 for dialogue/ADR, Rocketman for the musical category, Jojo Rabbit for musical underscore, Parasite for foreign-language feature, Toy Story 4 for animated feature, and Echo in the Canyon for feature documentary.

The Golden Reel Awards, recognizing outstanding achievement in sound editing, were presented in 23 categories, including feature films, long-form and short-form television, animation, documentaries, games, special venue and other media.

Academy Award-nominated producer Amy Pascal (Little Women) surprised Marvel’s Victoria Alonso when she presented her with the 2020 MPSE Filmmaker Award (re-recording mixer Kevin O’Connell and supervising sound editor Steven Ticknor were honorary presenters).

The 2020 MPSE Career Achievement Award was presented to Academy Award-winning supervising sound editor Cecelia “Cece” Hall by two-time Academy Award-winning supervising sound editor Stephen H. Flick.

“Business models, formats and distribution are all changing,” said MPSE president-elect Mark Lanza during the ceremony. “Original scripted TV shows have set a record in 2019. There were 532 original shows this year. This number is expected to surge in 2020. Our editors and supervisors are paving the way and making our product and the user experience better every year.”

Here is the complete list of winners:

Outstanding Achievement in Sound Editing – Animation Short Form

3 Below “Tales of Arcadia”

Netflix

Supervising Sound Editor: Otis Van Osten
Sound Designer: James Miller
Dialogue Editors: Jason Oliver, Carlos Sanches
Foley Artists: Aran Tanchum, Vincent Guisetti
Foley Editor: Tommy Sarioglou 

Outstanding Achievement in Sound Editing – Non-Theatrical Animation Long Form

Lego DC Batman: Family Matters

Warner Bros. Home Entertainment

Supervising Sound Editors: Rob McIntyre, D.J. Lynch
Sound Designer: Lawrence Reyes
Sound Effects Editor: Ezra Walker
ADR Editor: George Peters
Foley Editors: Aran Tanchum, Derek Swanson
Foley Artist: Vincent Guisetti

Outstanding Achievement in Sound Editing – Feature Animation

Toy Story 4

Walt Disney Studios Motion Pictures

Supervising Sound Editor: Coya Elliott
Sound Designer: Ren Klyce
Supervising Dialogue Editor: Cheryl Nardi
Sound Effects Editors: Kimberly Patrick, Qianbaihui Yang, Jonathon Stevens
Foley Editors: Thom Brennan, James Spencer
Foley Artists:  John Roesch, MPSE, Shelley Roden, MPSE

Outstanding Achievement in Sound Editing – Non-Theatrical Documentary

Serengeti

Discovery Channel

Supervising Sound Editor: Paul Cowgill
Foley Editor: Peter Davies 
Music Editor: Alessandro Baldessari
Foley Artist: Paul Ackerman

Outstanding Achievement in Sound Editing – Feature Documentary

Echo in the Canyon

Greenwich Entertainment

Sound Designer: Robby Stambler, MPSE
Dialogue Editor:  Sal Ojeda, MPSE

Outstanding Achievement in Sound Editing – Computer Cinematic

Call of Duty: Modern Warfare (2019)

Activision Blizzard
Audio Director: Stephen Miller
Supervising Sound Editor: Dave Rowe
Supervising Sound Designers: Charles Deenen, MPSE, Csaba Wagner
Supervising Music Editor:  Peter Scaturro

Lead Music Editor: Ted Kocher
Principal Sound Designer: Stuart Provine
Sound Designers: Bryan Watkins, Mark Ganus, Eddie Pacheco, Darren Blondin
Dialogue Lead: Dave Natale
Dialogue Editors: Chrissy Arya, Michael Krystek
Sound Editors: Braden Parkes, Nick Martin, Tim Walston, MPSE, Brent Burge, Alex Ephraim, MPSE, Samuel Justice, MPSE
Music Editors: Anthony Caruso, Scott Bergstrom, Adam Kallibjian, Ernest Johnson, Tao-Ping Chen, James Zolyak, Sonia Coronado, Nick Mastroianni, Chris Rossetti
Foley Artists: Gary Hecker, MPSE, Rick Owens, MPSE

Outstanding Achievement in Sound Editing – Computer Interactive Game Play
Call of Duty: Modern Warfare (2019)
Infinity Ward
Audio Director: Stephen Miller
Senior Lead Sound Designer: Dave Rowe
Senior Lead Technical Sound Designer: Tim Stasica
Supervising Music Editor: Peter Scaturro
Lead Music Editor: Ted Kocher
Principal Sound Designer: Stuart Provine
Senior Sound Designers: Chris Egert, Doug Prior
Supervising Sound Designers: Charles Deenen, MPSE, Csaba Wagner
Sound Designers: Chris Staples, Eddie Pacheco, MPSE, Darren Blondin, Andy Bayless, Ian Mika, Corina Bello, John Drelick, Mark Ganus
Dialogue Leads: Dave Natale, Bryan Watkins, Adam Boyd, MPSE, Mark Loperfido
Sound Editors: Braden Parkes, Nick Martin, Brent Burge, Tim Walston, Alex Ephraim, Samuel Justice
Dialogue Editors: Michael Krystek, Chrissy Arya, Cesar Marenco
Music Editors: Anthony Caruso, Scott Bergstrom, Adam Kallibjian, Ernest Johnson, Tao-Ping Chen, James Zolyak, Sonia Coronado, Nick Mastroianni, Chris Rossetti

Foley Artists: Gary Hecker, MPSE, Rick Owens, MPSE

Outstanding Achievement in Sound Editing – Non-Theatrical Feature

Togo

Disney+

Supervising Sound Editors: Odin Benitez, MPSE, Todd Toon, MPSE
Sound Designer: Martyn Zub, MPSE
Dialogue Editor: John C. Stuver, MPSE
Sound Effects Editors: Jason King, Adam Kopald, MPSE, Luke Gibleon, Christopher Bonis
ADR Editor: Dave McMoyler
Supervising Music Editor: Peter “Oso” Snell, MPSE
Foley Artists: Mike Horton, Tim McKeown
Supervising Foley Editor: Walter Spencer

Outstanding Achievement in Sound Editing – Special Venue

Vader Immortal: A Star Wars VR Series “Episode 1”

Oculus

Supervising Sound Editors: Kevin Bolen, Paul Stoughton
Sound Designer: Andy Martin
Supervising ADR Editors: Gary Rydstrom, Steve Slanec
Dialogue Editors: Anthony DeFrancesco, Christopher Barnett, MPSE, Benjamin A. Burtt, MPSE
Foley Artists: Shelley Roden, MPSE, Jana Vance

Outstanding Achievement in Sound Editing – Foreign Language Feature

Parasite

Neon

Supervising Sound Editor: Choi Tae Young
Sound Designer: Kang Hye Young
Supervising ADR Editor: Kim Byung In
Sound Effects Editor: Kang Hye Young
Foley Artists: Park Sung Gyun, Lee Chung Gyu
Foley Editor: Shin I Na
 

Outstanding Achievement in Sound Editing – Live Action Under 35:00

Barry “ronny/lily”

HBO

Supervising Sound Editors:  Sean Heissinger, Matthew E. Taylor
Sound Designer:  Rickley W. Dumm, MPSE
Sound Effects Editor: Mark Allen
Dialogue Editors:  John Creed, Harrison Meyle
Music Editor:  Michael Brake
Foley Artists:  Alyson Dee Moore, Chris Moriana 
Foley Editors:  John Sanacore, Clayton Weber

Outstanding Achievement in Sound Editing – Episodic Short Form – Music

Wu Tang: An American Saga “All In Together Now”

Hulu 

Music Editor: Shie Rozow

Outstanding Achievement in Sound Editing – Episodic Short Form – Dialogue/ADR

Modern Love “Take Me as I Am”

Prime Video
Supervising Sound Editor: Lewis Goldstein
Supervising ADR Editor: Gina Alfano, MPSE
Dialogue Editor:  Alfred DeGrand

Outstanding Achievement in Sound Editing – Episodic Short Form – Effects / Foley

The Mandalorian “Chapter One”

Disney+

Supervising Sound Editors: David Acord, Matthew Wood
Sound Effects Editors: Bonnie Wild, Jon Borland, Chris Frazier, Pascal Garneau, Steve Slanec
Foley Editor: Richard Gould
Foley Artists: Ronni Brown, Jana Vance

Outstanding Achievement in Sound Editing – Student Film (Verna Fields Award)

Heatwave

National Film and Television School

Supervising Sound Editor: Kevin Langhamer

Outstanding Achievement in Sound Editing – Single Presentation

El Camino: A Breaking Bad Movie

Netflix

Supervising Sound Editors: Nick Forshager, Todd Toon, MPSE
Supervising ADR Editor: Kathryn Madsen
Sound Effects Editor: Luke Gibleon
Dialogue Editor: Jane Boegel
Foley Editor: Jeff Cranford
Supervising Music Editor: Blake Bunzel
Music Editor: Jason Tregoe Newman
Foley Artists: Gregg Barbanell, MPSE, Alex Ullrich 

Outstanding Achievement in Sound Editing – Episodic Long Form – Music

Game of Thrones “The Long Night”

HBO 

Music Editor: David Klotz

Outstanding Achievement in Sound Editing – Episodic Long Form – Dialogue/ADR

Chernobyl “Please Remain Calm”

HBO

Supervising Sound Editor: Stefan Henrix
Supervising ADR Editor:  Harry Barnes
Dialogue Editor: Michael Maroussas

Outstanding Achievement in Sound Editing – Episodic Long Form – Effects / Foley

Chernobyl “1:23:45”

HBO

Supervising Sound Editor: Stefan Henrix
Sound Designer: Joe Beal
Foley Editors: Philip Clements, Tom Stewart
Foley Artist:  Anna Wright

Outstanding Achievement in Sound Editing – Feature Motion Picture – Music Underscore

Jojo Rabbit

Fox Searchlight Pictures

Music Editor: Paul Apelgren

Outstanding Achievement in Sound Editing – Feature Motion Picture – Musical

Rocketman

Paramount Pictures

Music Editors: Andy Patterson, Cecile Tournesac

Outstanding Achievement in Sound Editing – Feature Motion Picture – Dialogue/ADR

1917

Universal Pictures

Supervising Sound Editor: Oliver Tarney, MPSE
Dialogue Editor: Rachael Tate, MPSE

Outstanding Achievement in Sound Editing – Effects / Foley

Ford v Ferrari

Twentieth Century Fox 

Supervising Sound Editor: Donald Sylvester

Sound Designers: Jay Wilkenson, David Giammarco

Sound Effects Editor: Eric Norris, MPSE

Foley Editor: Anna MacKenzie

 Foley Artists: Dan O’Connell, John Cucci, MPSE, Andy Malcolm, Goro Koyama


Main Image Caption: Amy Pascal and Victoria Alonso

 

Skywalker Sound and Cinnafilm create next-gen audio toolset

Iconic audio post studio Skywalker Sound and the makers of PixelStrings media conversion technology Cinnafilm are working together on a new audio tool expected to hit in the first quarter of 2020.

As the paradigms of theatrical, broadcast and online content begin to converge, the need to properly conform finished programs to specifications suitable for a variety of distribution channels has become more important than ever. To ensure high fidelity is maintained throughout the conversion process, it is important to implement high-quality tools to aid in time-domain, level, spatial and file-format processing for all transformed content intended for various audiences and playout systems.

“PixelStrings represents our body of work in image processing and media conversions. It is simple, scalable and built for the future. But it is not just about image processing, it’s an ecosystem. We recognize success only happens by working with other like-minded technology companies. When Skywalker approached us with their ideas, it was immediate validation of this vision. We plan to put as much enthusiasm and passion into this new sound endeavor as we have in the past with picture — the customers will benefit as they see, and hear, the difference these tools make on the viewer experience,” says Cinnafilm CEO/founder Lance Maurer.

To address this need, Skywalker Sound has created an audio tool set based on proprietary signal processing and orchestration technology. Skywalker Audio Tools will offer an intelligent, automated audio pipeline with features including sample-accurate retiming, loudness and standards analysis and correction, downmixing, channel mapping and segment creation/manipulation — all faster than realtime. These tools will be available exclusively within Cinnafilm’s PixelStrings media conversion platform.
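Skywalker’s toolset itself is proprietary, but one of the tasks listed above, loudness analysis and correction, can be illustrated with the open-source pyloudnorm library. The -24 LUFS target (a common broadcast spec) and the filenames are assumptions.

```python
# Illustration only: loudness analysis and correction sketched with the open-source
# pyloudnorm library (ITU-R BS.1770 metering). This is not Skywalker Audio Tools;
# the -24 LUFS target and filenames are assumptions.
import soundfile as sf
import pyloudnorm as pyln

audio, rate = sf.read("program_mix.wav")              # hypothetical finished program
meter = pyln.Meter(rate)                              # BS.1770 K-weighted meter
measured = meter.integrated_loudness(audio)
corrected = pyln.normalize.loudness(audio, measured, -24.0)
sf.write("program_mix_minus24LUFS.wav", corrected, rate)
print(f"measured {measured:.1f} LUFS, corrected to -24.0 LUFS")
```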

Talking work and trends with Wave Studios New York

By Jennifer Walden

The ad industry is highly competitive by nature. Advertisers compete for consumers, ad agencies compete for clients and post houses compete for ad agencies. Now put all that in the dog-eat-dog milieu of New York City, and the market becomes more intimidating.

When you factor in the saturation level of the audio post industry in New York City — where audio facilities are literally stacked on top of each other (occupying different floors of the same building or located just down the hall from each other) — then the odds of a new post sound house succeeding seem dismal. But there’s always a place for those willing to work for it, as Wave Studios’ New York location is proving.

Wave Studios — a multi-national sound company with facilities in London and Amsterdam — opened its doors in NYC a little over a year ago. Co-founder/sound designer/mixer Aaron Reynolds worked on The New York Times “The Truth Is Worth It” ad campaign for Droga5 that earned two Grand Prix awards at the 2019 Cannes Lions International Festival of Creativity, and Reynolds’ sound design on the campaign won three Gold Lions. In addition, Wave Studios was recently named Sound Company of the Year 2019 at Germany’s Ciclope International Festival of Craft.

Here, Reynolds and Wave Studios New York executive producer Vicky Ferraro (who has two decades of experience in advertising and post) talk about what it takes to make it and what agency clients are looking for. They also share details on their creative approach to two standout spots they’ve done this year for Droga5.

How was your first year-plus in NYC? What were some challenges of being the new kid in town?
Vicky Ferraro: I joined Wave to help open the New York City office in May 2018. I had worked at Sound Lounge for 12 years, and I’ve worked on the ad agency side as well, so I’m familiar with the landscape.

One of the big challenges is that New York is quite a saturated market when it comes to audio. There are a lot of great audio places in the city. People have their favorite spots. So our challenges are to forge new relationships and differentiate ourselves from the competition, and figure out how to do that.

Also, the business model has changed quite a bit; a lot of agencies have in-house facilities. I used to work at Hogarth, so I’m quite familiar with how that side of the business works as well. You have a lot of brands that are working in-house with agencies.

So, opening a new location was a little daunting despite all the success that Wave Studios has had in London and Amsterdam.
Aaron Reynolds: I worked in London, and we always had work from New York clients. We knew friends and people over here. Opening a facility in New York was something we always wanted to do, since 2007. The challenge was to get out there and tell people that we’re here. We were finally coming over from London and forging those relationships with clients we had worked with remotely.

New York works a little differently in that clients tend to do the sound design with us and then do the mix elsewhere. One challenge was to get across to our clients that we offer both, from start to finish.

Sound design and mixing are one and the same thing. When I’m doing my sound design, I’m thinking about how I want it to sound in the mix. It’s odd to do the sound design in one place and then do the mix somewhere else.

What are some trends you’re seeing in the New York City audio post scene? What are your advertising clients looking for?
Reynolds: On the work side, they come here for a creative sound design approach. They don’t want just a bit of sound here and a bit of sound there. They want something to be brought to the job through sound. That’s something that Wave has always done, and that’s been a bastion of our company. We have an idea, and we want to create the best sound design for the spot. It’s not just a case of, “bring me the sounds and we’ll do it for you.” We want to add a creative aspect to the work as well.

And what about format? Are clients asking for 5.1 mixes? Or stereo mixes still?
Reynolds: 99% of our work is done in stereo. Then, we’ll get the odd job mixed in 5.1 if it’s going to broadcast in 5.1 or play back in the cinema. But the majority of our mixes are still done in stereo.

Ferraro: That’s something that people might not be aware of, that most of our mixes are stereo. We deliver stereo and 5.1, but unless you’re watching in a 5.1 environment (and most people’s homes are not a 5.1 environment), you want to listen to a stereo mix. We’ve been talking about that with a lot of clients, and they’ve been appreciative of that as well.

Reynolds: If you tend to mix in 5.1 and then fold down to a stereo mix, you’re not getting a true stereo mix. It’s an artificial one. We’re saying, “Let’s do a stereo mix. And then let’s do a separate 5.1 mix. Then you’re getting the best of both.”

Most of what you’re listening to is stereo, so you want to have the best possible stereo mix you can have. You don’t want a second-rate mix when 99% of the media will be played in stereo.
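Reynolds’ point about fold-downs can be made concrete with the conventional ITU-style (BS.775) coefficients: the stereo result is derived arithmetically from the 5.1 stems rather than being balanced by ear as a stereo mix from the start. A minimal sketch; the channel order and array shape are assumptions.

```python
# Minimal sketch of a conventional ITU-style (BS.775) 5.1-to-stereo fold-down.
# The -3 dB (0.707) center/surround coefficients follow the common convention and the
# LFE channel is dropped, as is typical; channel order and array shape are assumptions.
import numpy as np

def folddown_51_to_stereo(surround):
    """surround: float array shaped (num_samples, 6), ordered L, R, C, LFE, Ls, Rs."""
    L, R, C, _lfe, Ls, Rs = surround.T
    lo = L + 0.707 * C + 0.707 * Ls
    ro = R + 0.707 * C + 0.707 * Rs
    return np.stack([lo, ro], axis=-1)   # a native stereo mix would be balanced by ear instead
```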

What are some of the benefits and challenges of having studios in three countries? Do you collaborate on projects?
Ferraro: We definitely collaborate! It’s been a great selling point, and a fantastic time-saver in a lot of cases. Sometimes we’ll get a project from London or Amsterdam, or vice versa. We have two sound studios in New York, and sometimes a job will come in and if we can’t accommodate it, we can send it over to London. (This is especially true for unsupervised work.) Then they’ll do the work, and our client has it the next morning. Based on the time zone difference, it’s been a real asset, especially when we’re under the gun.

Aaron has a great list of clients that he works with in London and Amsterdam who continue to work with him here in New York. It’s been very seamless. It’s very easy to send a project from one studio to another.

Reynolds: We all work on the same system — Steinberg Nuendo — so if I send a job to London, I can have it back the next morning, open it up, and have the clients review it with me. I can carry on working in the same session. It’s almost as if we can work on a 24-hour cycle.

All the Wave Studios use Steinberg Nuendo as their DAW?
Reynolds: It’s audio post software designed with sound designers in mind. Pro Tools is more of a mixing application, good for recording music and live bands. It’s good for mixing, but it’s not particularly great for doing sound design. Nuendo, on the other hand, has been built for sound design from the ground up. It has a lot of great built-in plugins. With Pro Tools you need to get a lot of third-party plugins. Having all these built-in plugins makes the software really solid and reliable.

When it comes to third-party plugins, we really don’t need that many because Nuendo has so many built in. But some of the most-used third-party plugins are reverbs, like Audio Ease’s Altiverb and Speakerphone.

I think we’re one of the only studios that uses Nuendo as our main DAW. But Wave has always been a bit rogue. When we first set up years ago, we were using Fairlight, which no one else was using at the time. We’ve always had the desire to use the best tool that we can for the job, which is not necessarily the “industry standard.” When it came to upgrading all of our systems, we were looking into Pro Tools and Nuendo, but one of the partners at Wave, Johnnie Burn, uses Nuendo for the film side. He found it to be really powerful, so we made the decision to put it in all the facilities.

Why should agencies choose an independent audio facility instead of keeping their work in-house? What’s the benefit for them?
Ferraro: I can tell you from firsthand knowledge that there are several benefits to going out-of-house. The main thing that draws clients to Wave Studios — and away from in-house — is the high level of creativity and experience that comes with our engineers. We bring a different perspective than what you get from an in-house team. While there is a lot of talent in-house, those models often rely on freelancers who aren’t as invested in the company, which poses challenges in building the brand. It’s a different approach to working and finishing a piece.

Those two aspects play into it — the creativity and having engineers dedicated to our studio. We’re not bringing in freelancers or working with an unknown pool of people. That’s important.

From my own experience, sometimes the approach can feel more formulaic. As an independent audio facility, our approach is very collaborative. There’s a partnership that we create with all of our clients as soon as they’re on board. Sometimes we get involved even before we have a job assigned, just to help them explore how to expand their ideas through sound, how they should be capturing the sound on-set, and how they should be thinking about audio post. It’s a very involved process.

Reynolds: What we bring is a creative approach. Elsewhere, that can be more formulaic, as Vicky said. Here, we want to be as creative as possible and treat jobs with attention and care.

Wave Studios is an international audio company. Is that a draw for clients?
Ferraro: One hundred percent. You’ve got to admit, it’s got a bit of cachet to it for sure. It’s rare to be a commercial studio with outposts in other countries. I think clients really like that, and it does help us bring a different perspective. Aaron’s perspective coming from London is very different from somebody in New York. It’s also cool because our other engineer is based in the New York market, and so his perspective is different from Aaron’s. In this way, we have a blend of both.

Some big commercial audio post houses have gone under, like Howard Schwartz and Nutmeg. What does it take for an audio post house in NYC to be successful in the long run?
Reynolds: The thing to do to maintain a good studio — whether in New York City or anywhere — is not to get complacent. Don’t ever rest on your laurels. Take every job you do as if it’s your first — have that much enthusiasm about it. Keep forging for the best, and that will always shine through. Keep doing the most creative work you can do, and that will make people want to come back. Don’t get tired. Don’t get lazy. Don’t get complacent. That’s the key.

Ferraro: I also think that you need to be able to evolve with the changing environment. You need to be aware of how advertising is changing, stay on top of the trends and move with it rather than resisting it.

What are some spots that you’ve done recently at Wave Studios NYC? How do they stand out, soundwise?
Reynolds: There’s a New York Times campaign that I have been working on for Droga5. A spot in there is called Fearlessness, which was all about a journalist investigating ISIS. The visuals tell a strong story, and so I wanted to do that in an acoustic sort of way. I wanted people to be able to close their eyes and hear all of the details of the journey the writer was taking and the struggles she came across. Bombs had blown up a derelict building, and they are walking through the rubble. I wanted the viewer to feel the grit of that environment.

There’s a distorted subway train sound that I added to the track that sets the tone and mood. We explored a lot of sounds for the piece. The soundscapes were created from different layers using sounds like twisting metals and people shouting in both English and Arabic, which we sourced from libraries like Bluezone and BBC, in particular. We wanted to create a tone that was uneasy and builds to a crescendo.

We’ve got a massive amount of sound libraries — about 500,000 sound effects — that are managed via Nuendo. We don’t need any independent search engine. It’s all built within the Nuendo system. Our sound effects libraries are shared across all of our facilities in all three countries, and it’s all accessed through Nuendo via a local server for each facility.

We did another interesting spot for Droga5 called Night Trails for Harley-Davidson’s electric motorcycle. In the spot, the guy is riding through the city at night, and all of the lights get drawn into his bike. Ringan Ledwidge, one of the industry’s top directors, directed the spot. Soundwise, we were working with the actual sound of the bike itself, and I elaborated on it to make it a little more futuristic. In certain places, I used the sound of hard drives spinning and accelerating to create an electric bike-by. I had to be quite careful with it because they do have an actual sound for the bike. I didn’t want to change it too much.

For the sound of the lights, I used whispers of people talking, which I stretched out. So as the bike goes past a streetlight, for example, you hear a vocal “whoosh” element as the light travels down into the bike. I wanted the sound of the lights not to be too electric, but more light and airy. That’s why I used whispers instead of buzzing electrical sounds. In one scene, the light bends around a telephone pole, and I needed the sound to be dynamic and match that movement. So I performed that with my voice, changing the pitch of my voice to give the sound a natural arc and bend.

Main Image: (L-R) Aaron Reynolds and Vicky Ferraro


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb fiber.

For our sound department, we are using an Avid Nexis System via 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first). All CPUs in the facility are tied via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff for the future. We are already using some cloud-based solutions for our building’s security system and CCTV systems, as well as for the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte sized active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.
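To make the idea of intelligent data placement concrete, here is a minimal sketch of that kind of routing rule. It is illustrative only and not PanFS’s actual logic; the 64KB small-file cutoff and the tier labels are assumptions made for the example.

```python
# Toy sketch of multi-tier data placement; not PanFS's actual policy.
# The 64KB small-file cutoff and tier labels are assumptions for illustration.

SMALL_FILE_LIMIT = 64 * 1024  # bytes

def place(kind: str, size_bytes: int) -> str:
    """Pick a storage tier for a new object based on what it is and how big it is."""
    if kind == "metadata":
        return "NVMe SSD (low latency)"
    if size_bytes <= SMALL_FILE_LIMIT:
        return "SATA SSD (high IOPS)"
    return "HDD (high bandwidth)"

if __name__ == "__main__":
    print(place("metadata", 512))        # small, latency-sensitive -> NVMe SSD
    print(place("file", 16 * 1024))      # small file -> high-IOPS SSD
    print(place("file", 4 * 1024 ** 3))  # 4GB media file -> high-bandwidth HDD
```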

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server has multiple types of storage media internal to one server. Using intelligent data placement, we put data on the most appropriate tier automatically. Using this approach, we can often replace a performance tier and a tier two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
The post stage places tremendous demands on storage performance. We are using multiple SAN systems across our office locations; they provide centralized storage and easy access to disk arrays, servers and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because it offers affordable, high-capacity media well suited to the low-write/high-read patterns of cold storage workflows. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing, and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done, especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different, so almost no two systems are alike. The real artistry is to tailor storage systems to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful in building effective systems, but they might introduce additional difficulties to the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or recurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfill all necessary media production requirements or simplify workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning and SSD disks. Our volumes range from 700TB to 1,000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files when several users need access to the same file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility. Each year, for the last five, we’ve added or replaced storage annually. We now have approximately 8+ PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently have using on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access are critical to fulfilling our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking, and PCIe expansion solution for digital imaging, VFX, video production, and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSD and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance, and substantially lower in power draw. Yet even with all the improving performance, costs are lower today versus 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t say whether one will overtake the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it just doesn’t come close to meeting the data-rate demands of editing and similar work. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID and SSD, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and using NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage that are in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
AI and ML offer a clear advantage for storage when it comes to things like algorithms designed to automatically move content between storage tiers to optimize costs. This has been commonplace on the distribution side of the ecosystem for a long time with CDNs. ML and AI also have a great ability to impact the Opex side of asset management and metadata by helping to automate very manual, repetitive data-entry tasks through audio and image recognition, as an example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.
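As a rough illustration of what “scanning and indexing every piece of content on a storage pool” can mean at its simplest, here is a bare-bones sketch that walks a mount point and tallies files by extension. The mount path is hypothetical, and a real asset-management scanner would extract far richer metadata (codecs, timecode, recognition results) than this.

```python
import os
from collections import defaultdict

def index_pool(root: str) -> dict:
    """Walk a storage pool and build a minimal index of files grouped by extension."""
    index = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower() or "<none>"
            index[ext].append((path, os.path.getsize(path)))
    return index

if __name__ == "__main__":
    catalog = index_pool("/mnt/storage_pool")  # hypothetical mount point
    for ext, entries in sorted(catalog.items()):
        total_gb = sum(size for _, size in entries) / 1e9
        print(f"{ext}: {len(entries)} files, {total_gb:.1f} GB")
```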

How has NVMe advanced over the past year?
Like any other storage medium, when NVMe was first introduced there were limited use cases that made sense financially, and only a certain few could afford to deploy it. As the technology scales and changes in form factor, and pricing becomes more competitive and in line with other storage options, it can become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios, CTO, Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time > 1 day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time < .5 day. Tape-based and in the robot. This does not require intervention.
Cold – Spinning disk. Access time — slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time — medium (online). Data that needs to still be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time — fast (online). SSD generic active production storage.
Blazing – Chip-based. Access time — uber fast (online). NVMe dedicated storage for 4K and 8K playback, databases and specific simulation workflows.
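One way a tier ladder like this feeds automated workflows is as a lookup table that tooling can route assets against. The sketch below is hypothetical; it only encodes the access characteristics quoted above, and the routing rule is an example rather than Zoic’s actual logic.

```python
# Hypothetical encoding of the tier ladder above; values follow the descriptions
# quoted in the answer, and the routing rule is illustrative only.

TIERS = {
    "cryogen":  {"media": "offline tape/disk/chip", "access": "> 1 day",   "online": False},
    "freezing": {"media": "tape robot library",     "access": "< 0.5 day", "online": False},
    "cold":     {"media": "spinning disk",          "access": "slow",      "online": True},
    "warm":     {"media": "spinning disk",          "access": "medium",    "online": True},
    "hot":      {"media": "SSD",                    "access": "fast",      "online": True},
    "blazing":  {"media": "NVMe",                   "access": "uber fast", "online": True},
}

def route(need: str) -> str:
    """Map a workload need to a tier name (toy rule, not Zoic's actual logic)."""
    rules = {
        "4k_playback": "blazing",
        "active_production": "hot",
        "asset_depot": "warm",
        "disaster_recovery": "cold",
        "long_term_archive": "freezing",
        "deep_archive": "cryogen",
    }
    return rules.get(need, "warm")

if __name__ == "__main__":
    for need in ("4k_playback", "asset_depot", "deep_archive"):
        tier = route(need)
        print(f"{need} -> {tier} ({TIERS[tier]['media']}, access {TIERS[tier]['access']})")
```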

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend to not look at it as pro vs. con, but where you are as a company. Many factors are involved and there is no one size that fits all, as many are led to believe, and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited for the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamic cache simulations are a good example; they can quickly generate tens or sometimes hundreds of TBs. If the workflow requires you to transfer this data on premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source-file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. Depending on the size of your company and the level of technical people on hand, the cloud can be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can be detrimental not only to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as being slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage a combination of any or all kinds of storage. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see as the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, along with advances in storage density in both spinning disk and solid state. All of these are interrelated and have the general effect of lowering costs for the end user. For those who have specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage within less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, Backblaze offers cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off the “shared storage expansion treadmill” and buying just enough on-site shared storage to fit your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage, but makes all of your content highly protected and highly available at the same time. Since most of your team probably use MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see as the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned. Especially when that entire experience can now be delivered from the cloud and on-demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — and it’s 20 times faster than an external spinning hard drive while drawing only about 3.5W, that’s a game changer for data wrangling and camera cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster too. PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the available bandwidth that can feed that NVMe storage, the future is very, very bright for NVMe.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Creative Outpost buys Dolby-certified studios, takes on long-form

After acquiring the studio assets from now-closed Angell Sound, commercial audio house Creative Outpost is now expanding its VFX and audio offerings by entering the world of long-form audio. Already in picture post on its first Netflix series, the company is now open for long-form ADR, mix and review bookings.

“Space is at a premium in central Soho, so we’re extremely privileged to have been able to acquire four studios with large booths that can accommodate crowd sessions,” say Creative Outpost co-founders Quentin Olszewski and Danny Etherington. “Our new friends in the ADR world have been super helpful in getting the word out into the wider community, having seen the size, build quality and location of our Wardour Street studios and how they’ll meet the demands of the growing long-form SVOD market.”

With the Angell Sound assets in place, the team at Creative Outpost has completed a number of joint picture and sound projects for online and TV. Focusing two of its four studios primarily on advertising work, Creative Outpost has provided sound design and mix on campaigns including Barclays’ “Team Talk,” Virgin Mobile’s “Sounds Good,” Icee’s “Swizzle, Fizzle, Freshy, Freeze,” Green Flag’s “Who The Fudge Are Green Flag,” Santander’s “Antandec” and Coca-Cola’s “Coaches.” Now the team aims to apply its experience from the commercial world to long-form broadcast and feature work. Its Dolby-approved studios were built by studio architect Roger D’Arcy.

The studios are running Avid Pro Tools Ultimate, Avid hardware controllers and Neumann U87 microphones. They are also set up for long-form/ADR work with EdiCue and EdiPrompt, Source-Connect Pro and ISDN capabilities, Sennheiser MKH 416 and DPA D:screet microphones.

“It’s an exciting opportunity to join Creative Outpost with the aim of helping them grow the audio side of the company,” says Dave Robinson, head of sound at Creative Outpost. “Along with Tom Lane — an extremely talented fellow ex-Angell engineer — we have spent the last few months putting together a decent body of work to build upon, and things are really starting to take off. As well as continuing to build our core short-form audio work, we are developing our long-form ADR and mix capabilities and have a few other exciting projects in the pipeline. It’s great to be working with a friendly, talented bunch of people, and I look forward to what lies ahead.”

 

Video: The Irishman’s focused and intimate sound mixing

Martin Scorsese’s The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, tells the story of organized crime in post-war America as seen through the eyes of World War II veteran Frank Sheeran (De Niro), a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. In the film, the actors have been famously de-aged, thanks to VFX house ILM, but it wasn’t just their faces that needed to be younger.

In this video interview, Academy Award-winning re-recording sound mixer and decades-long Scorsese collaborator Tom Fleischman — who will receive the Cinema Audio Society’s Career Achievement Award in January — talks about de-aging actors’ voices as well as the challenges of keeping the film’s sound focused and intimate.

“We really had to try and preserve the quality of their voices in spite of the fact we were trying to make them sound younger. And those edits are sometimes difficult to achieve without it being apparent to the audience. We tried to do various types of pitch changing, and we used different kinds of plugins. I listened to scenes from Serpico for Al Pacino and The King of Comedy for Bob De Niro and tried to match the voice quality of what we had from The Irishman to those earlier movies.”

Fleischman worked on the film at New York’s Soundtrack.

Enjoy the video:

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokemon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley, for creative grading; and Netflix for Photon.

Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation), regulating the audio levels of TV commercials. At that time, many “laypeople” complained to me about how commercials were often so much louder than the TV programs. Over the past 10 years, I have seen the rise of audio meter plugins built to meet the requirements of the CALM Act, which has reduced those complaints dramatically.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the scale name at the time, but we will get into this more later. Today, we have countless viewing options, such as cable networks, a large variety of streaming services, the internet and movie theaters utilizing 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM to address today’s complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST and VST3. It can also be installed as a standalone application for Windows and macOS. With its many presets, its Loudness History Mode and countless parameters to view and customize, the VisLM2 can help an audio mixer see when a program drifts in and out of audio level spec.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R B.S. 1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets of ITU-R B.S. 1770 — revision 01, 02, 03 and the current revision 04. Accessing the presets is easy, once you realize that they are not in the preset section of the plugin as one might think. Presets are located in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays that show Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary is used for short bursts, such as an impact or gun shot, while Short-Term is used for the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio spec and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

VisLM2

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom for staying below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult you might find it to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — you always want to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program material is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always the average reading over time. That means a 10-second loud segment in a two-hour program could be much louder than a 10-second loud segment in a 15-second commercial. That same loud 10 seconds can practically be averaged out of existence during a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.

True Peak is forever. Once the highest True Peak is detected, it will remain as the final True Peak reading for the entire length of the program material. That means the loud segment in the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only. In the final minute of the show, a single loud gunshot is heard. That one-second gunshot will determine the True Peak audio level for the other one hour, 59 minutes and 59 seconds of the program. Flawed logic? I can see how it could be. Spotify’s recommended levels, by comparison, are -14LUFS and -2dBTP, which gives you a much smaller range for dynamics than something like network TV.
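The arithmetic behind these two behaviors is easy to sketch: Integrated loudness is a duration-weighted energy average, while True Peak is simply the maximum ever observed. The numbers and helper below are illustrative only; they ignore the K-weighting and gating that the ITU-R B.S. 1770 measurement actually applies.

```python
import math

def integrated_lufs(segments):
    """Duration-weighted energy average of per-segment loudness.

    segments: list of (duration_seconds, lufs) pairs.
    Simplified illustration; real B.S. 1770 integration adds K-weighting and gating.
    """
    total = sum(d for d, _ in segments)
    mean_energy = sum(d * 10 ** (lufs / 10) for d, lufs in segments) / total
    return 10 * math.log10(mean_energy)

# A two-hour program sitting at -26 LUFS with one 10-second burst at -10 LUFS:
show = [(7190, -26.0), (10, -10.0)]
print(round(integrated_lufs(show), 1))   # about -25.8: the burst barely moves the average

# The same 10-second burst inside a 15-second spot:
spot = [(5, -26.0), (10, -10.0)]
print(round(integrated_lufs(spot), 1))   # about -11.7: the burst dominates the reading

# True Peak, by contrast, is just the maximum ever seen, so one loud moment
# anywhere in the mix fixes the True Peak value for the entire program.
```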

VisLM2

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear stand out for me is the detailed and large history graph display known as Loudness History Mode. It is a realtime updating and moving display of the mix levels. What it shows is up to you. There are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs will result in showing, or not showing, the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets are selected, there are a whole host of parameters that come along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the ITU-R BS.1770-4 preset is -24LUFS Integrated and -2dBTP. By default, both will show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed to be the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated or average loudness, as I had originally thought. These values are for Short-Term loudness. The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. Now, this is not a hard-and-fast rule. Simply put, if your Short-Term value stays somewhere between -20 and -28LUFS throughout most of a project, then you have a good chance of meeting your target of -24LUFS for the overall Integrated measurement. That is why the value range is often set up as a “green” zone on the loudness display.
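
As a toy version of that green zone, the check below flags Short-Term readings that stray more than 4dB from a -24LUFS target, the same -20 to -28LUFS default range described above, restated in code rather than pulled from the plugin.

```python
def short_term_alert(short_term_lufs, target=-24.0, tolerance=4.0):
    """Flag Short-Term readings outside the 'green zone' described above.
    The -20/-28 LUFS bounds are the review's default example; this is
    illustrative, not the plugin's actual logic."""
    if short_term_lufs > target + tolerance:
        return "over"    # louder than -20 LUFS
    if short_term_lufs < target - tolerance:
        return "under"   # quieter than -28 LUFS
    return "ok"

for reading in (-18.5, -23.0, -31.0):
    print(reading, "LUFS ->", short_term_alert(reading))
```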

VisLM2

The folks at Nugen point out that it isn’t practically possible to set up an alert or “red zone” for integrated loudness because this value is measured over the entire program. For that, you have to simply view the main reading of your Integrated loudness. Even so, I will know if I am getting there or not by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2 max and min alerts help keep you working within audio spec right from the start.

Another nice feature about the large history graph window is the Macro tab. Selecting the Macro feature will give you the ability to move back and forth anywhere along the duration of your mix displayed in the Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, then cue up your DAW to that same eight-minute mark to address changes in your mix.

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets for selecting what loudness spec you are working with. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries. I would venture a guess that very few audio mixers do either. So I was not surprised to see many presets that I was not familiar with. Common presets, in addition to ITU-R BS.1770, are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short. The descriptions lack any kind of real-world language; it’s all techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better source of what’s what regarding each preset will become available sometime soon.

MasterCheck

But why no preset for an Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU be even more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is why there is no Internet preset included with the VisLM2, as I had hoped there would be. Even so, Nugen offers its MasterCheck plugin for platforms such as Spotify and YouTube. MasterCheck is something I have been hoping for, and it would be the perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. I would call them the Wild West compared to broadcast TV. With so many streaming services popping up, particularly “off-brand” services, I would ask if we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus between them like there is with network TV.

That aside, one of the most popular streaming services is Netflix. So let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/- 2dB Integrated Dialogue. That means the dialogue level is averaged out over time, rather than using all program material such as music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This can lead to more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.
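
Here is a rough sketch of what dialogue-gated means in practice: only the blocks flagged as dialogue feed the integrated average, so a loud effects-only moment like the gunshot above never drags the number around. The block values and dialogue flags are hypothetical, and real dialogue gating is considerably more sophisticated than a simple true/false mask.

```python
import numpy as np

def dialogue_gated_integrated(block_mean_squares, is_dialogue):
    """Average only the measurement blocks flagged as dialogue.
    A simplified stand-in for dialogue gating; real dialogue detection
    is far more involved than a true/false mask."""
    dialogue_blocks = block_mean_squares[is_dialogue]
    return -0.691 + 10 * np.log10(dialogue_blocks.mean())

# Hypothetical per-block values: dialogue hovering around -27 LUFS plus
# one loud effects-only block that the gate simply ignores.
blocks   = np.array([0.0023, 0.0024, 0.0022, 0.29, 0.0023])
dialogue = np.array([True, True, True, False, True])

measured = dialogue_gated_integrated(blocks, dialogue)
print(f"Dialogue-gated integrated: {measured:.1f} LUFS")
print("Within -27 +/- 2dB:", -29.0 <= measured <= -25.0)
```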

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix deliverables will be requiring Atmos for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format, which included height channels long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono channel audio mix in 10.2 by simply leaving all other channels empty. I can only hope the same will be true for Netflix’s Atmos deliverables, so you can simply add or subtract the number of channels needed when outputting your final mix. Regardless, we can surely look to Nugen Audio to keep us updated with its Netflix preset in the VisLM2 should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original version of the VisLM, there are three updates that are worth looking at. First is the ability to resize and select what shows in the display. That helps with keeping the window active on your screen as you are working. It can be a small window so it doesn’t interfere with your other operations. Or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting any trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.
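
For anyone curious what a multi-channel measurement adds up to, here is a simplified illustration of BS.1770-style channel weighting for 5.1: the surrounds get a small boost and the LFE is left out of the loudness sum entirely. It is a conceptual sketch with made-up numbers, not the VisLM2’s implementation, and it again omits K-weighting.

```python
import math

# Simplified BS.1770-style 5.1 weighting: L, R and C at 1.0, the surrounds
# at roughly +1.5dB (x1.41), and the LFE excluded from the sum entirely.
CHANNEL_WEIGHTS = {"L": 1.0, "R": 1.0, "C": 1.0, "Ls": 1.41, "Rs": 1.41}

def multichannel_loudness(channel_mean_squares):
    """Combine per-channel mean-square energy into one loudness value.
    K-weighting is omitted; values here are purely illustrative."""
    total = sum(CHANNEL_WEIGHTS[ch] * ms
                for ch, ms in channel_mean_squares.items()
                if ch in CHANNEL_WEIGHTS)  # anything else (e.g. LFE) is skipped
    return -0.691 + 10 * math.log10(total)

# Hypothetical per-channel energy for one measurement block
block = {"L": 0.0010, "R": 0.0010, "C": 0.0020,
         "Ls": 0.0003, "Rs": 0.0003, "LFE": 0.01}
print(f"{multichannel_loudness(block):.1f} LUFS")
```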

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened the floodgates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

To help manage today’s complicated and varied deliverables, along with the audio specs that go with them, Nugen Audio’s VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”

Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces/delusion-inducing conditions. It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor’s New York City location for both sound and final color. This was colorist Joe Gawler’s first time working with Eggers, but it couldn’t have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and as it happens, Gawler is well versed in the world of black & white. He has remastered a tremendous number of classic movie titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films, including 8 ½. “To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful,” Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”

From production to post, Eggers’ goal was to create a film that felt like it could have been pulled from a 1930s film archive. “It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible,” says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime,” delivered in two different accents — Dafoe has an Irish-tinged seasoned-sailor accent and Pattinson has an up-east Maine accent. Additionally, the production location made it difficult to record the dialogue, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that when it’s raining, the lighthouse is leaking; you see the water in the shots because that’s how they shot it. “So the water sound is married to the dialogue. We wanted to have control over the water, so the dialogue had to be looped. Rob wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too, like the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut and more at Mystic Seaport… seagulls and waves. “We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife,” says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn’t feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a NAGRA to “get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia,” says Volpe. “We did try tape emulator software, but that didn’t yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they’d be expensive and we’d have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution.”

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units’ magnetic heads added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’s style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

The gritty and realistic sounds of Joker

By Jennifer Walden

The grit of Gotham City in Warner Bros.’ Joker is painted on in layers, but not in broad strokes of sound. Distinct details are meticulously placed around the Dolby Atmos surround field, creating a soundtrack that is full but not crowded and muddy — it’s alive and clear. “It’s critical to try to create a real feeling world so Arthur (Joaquin Phoenix) is that much more real, and it puts the audience in a place with him,” says re-recording mixer Tom Ozanich, who mixed alongside Dean Zupancic at Warner Bros. Sound in Burbank on Dub Stage 9.

L-R: Tom Ozanich, Unsun Song and Dean Zupancic on Dub Stage 9. Photo: Michael Dressel.

One main focus was to make a city that was very present and oppressive. Supervising sound editor Alan Robert Murray created specific elements to enhance this feeling, while dialogue supervisor Kira Roessler created loop group crowds and callouts that Ozanich could sprinkle throughout the film. Murray received an Oscar nomination in the category of Sound Editing for his work on Joker, while Ozanich, Zupancic and Tod Maitland were nominated for their Sound Mixing work.

During the street scene near the beginning of the film, Arthur is dressed as a clown and dancing on the sidewalk, spinning a “Going Out of Business” sign. Traffic passes to the left and pedestrians walk around Arthur, who is on the right side of the screen. The Atmos mix reflects that spatiality.

“There are multiple layers of sounds, like callouts of group ADR, specific traffic sounds and various textures of air and wind,” says Zupancic. “We had so many layers that afforded us the ability to play sounds discretely, to lean the traffic a little heavier into the surrounds on the left and use layers of voices and footsteps to lean discretely to the right. We could play very specific dimensions. We just didn’t blanket a bunch of sounds in the surrounds and blanket a bunch of sounds on the front screen. It was extremely important to make Gotham seem gritty and dirty with all those layers.”

The sound effects and callouts didn’t always happen conveniently between lines of principal dialogue. Director Todd Phillips wanted the city to be conspicuous… to feel disruptive. Ozanich says, “We were deliberate with Todd about the placement of literally every sound in the movie. There are a few spots where the callouts were imposing (but not quite distracting), and they certainly weren’t pretty. They didn’t occur in places where it doesn’t matter if someone is yelling in the background. That’s not how it works in real life; we tried to make it more like real life and let these voices crowd in on our main characters.”

Every space feels unique with Gotham City filtering in to varying degrees. For example, in Arthur’s apartment, the city sounds distant and benign. It’s not as intrusive as it is in the social worker’s (Sharon Washington) office, where car horns punctuate the strained conversation. Zupancic says, “Todd was very in tune with how different things would sound in different areas of the city because he grew up in a big city.”

Arthur’s apartment was further defined by director Phillips, who shared specifics like: The bedroom window faces an alley so there are no cars, only voices, and the bathroom window looks out over a courtyard. The sound editorial team created the appropriate tracks, and then the mixers — working in Pro Tools via Avid S6 consoles — applied EQ and reverb to make the sounds feel like they were coming from those windows three stories above the street.

In the Atmos mix, the clarity of the film’s apposite reverbs and related processing simultaneously helped to define the space on-screen and pull the sound into the theater to immerse the audience in the environment. Zupancic agrees. “Tom [Ozanich] did a fabulous job with all of the reverbs and all of the room sound in this movie,” he says. “His reverbs on the dialogue in this movie are just spectacular and spot on.”

For instance, Arthur is waiting in the green room before going on the Murray Franklin Show. Voices from the corridor filter through the door, and when Murray (Robert De Niro) and his stage manager open it to ask Arthur what’s with the clown makeup, the filtering changes on the voices. “I think a lot about the geography of what is happening, and then the physics of what is happening, and I factor all of those things together to decide how something should sound if I were standing right there,” explains Ozanich.

Zupancic says that Ozanich’s reverbs are actually multistep processes. “Tom’s not just slapping on a reverb preset. He’s dialing in and using multiple delays and filters. That’s the key. Sounds of things change in reality — reverbs, pitches, delays, EQ — and that is what you’re hearing in Tom’s reverbs.”

“I don’t think of reverb generically,” elaborates Ozanich, “I think of the components of it, like early reflections, as a separate thought related to the reverb. They are interrelated for sure, but that separation may be a factor of making it real.”

One reason the reverbs were so clear is because Ozanich mixed Joker’s score — composed by Hildur Guðnadóttir — wider than usual. “The score is not a part of the actual world, and my approach was to separate the abstract from the real,” explains Ozanich. “In Arthur’s world, there’s just a slight difference between the actual world, where the physical action is taking place, and Arthur’s headspace where the score plays. So that’s intended to have an ever-so-slight detachment from the real world, so that we experience that emotionally and leave the real space feeling that much more real.”

Atmos allows for discrete spatial placement, so Ozanich was able to pull the score apart, pull it into the theater (so it’s not coming from just the front wall), and then EQ each stem to enhance its defining characteristic — what Ozanich calls “tickling the ear.”

“When you have more directionality to the placement of sound, it pulls things wider because rather than it being an ambiguous surround space, you’re now feeling the specificity of something being 33% or 58% back off the screen,” he says.

Pulling the score away from the front and defining where it lived in the theater space gave more sonic real estate for the sounds coming from the L-C-Rs, like the distinct slap of a voice bouncing off a concrete wall or Foley sounds like the delicate rustling scratches of Arthur’s fingertips passing over a child’s paintings.

One of the most challenging scenes to mix in terms of effects was the bus ride, in which Arthur makes funny faces at a little boy, trying to make him laugh, only to be admonished by the boy’s mother. Director Phillips and picture editor Jeff Groth had very specific ideas about how that ‘70s-era bus should sound, and Zupancic wanted those sounds to play in the proper place in the space to achieve the director’s vision. “Buses of that era had an overhead rack where people could put packages and bags; we spent a lot of time getting those specific rattles where they should be placed, and where the motor should be and how it would sound from Arthur’s seat. It wasn’t a hard scene to mix; it was just complex. It took a lot of time to get all of that right. Now, the scene just goes by and you don’t pay attention to the little details; it just works,” says Zupancic.

Ozanich notes the opening was a challenging scene as well. The film begins in the clowns’ locker room. There’s a radio broadcast playing, clowns playing cards, and Arthur is sitting in front of a mirror applying his makeup. “Again, it’s not a terribly complex scene on the surface, but it’s actually one of the trickiest in the movie because there wasn’t a super clear lead instrument. There wasn’t something clearly telling you what you should be paying attention to,” says Ozanich.

The scene went through numerous iterations. One version had source music playing the whole time. Another had bits of score instead. There are multiple competing elements, like the radio broadcast and the clowns playing cards and sharing anecdotes. All those voices compete for the audience’s ear. “If it wasn’t tilted just the right way, you were paying attention to the wrong thing or you weren’t sure what you should be paying attention to, which became confusing,” says Ozanich.

In the end, the choice was made to pull out all the music and then shift the balance from the radio to the clowns as the camera passes by them. It then goes back to the radio briefly as the camera pushes in closer and closer on Arthur. “At this point, we should be focusing on Arthur because we’re so close to him. The radio is less important, but because you hear this voice it grabs your attention,” says Ozanich.

The problem was there were no production sounds for Arthur there, nothing to grab the audience’s ear. “I said, ‘He needs to make sound. It has to be subtle, but we need him to make some sound so that we connect to him and feel like he is right there.’ So Kira found some sounds of Joaquin from somewhere else in the film, and Todd did some stuff on a mic. We put the Foley in there and we cobbled together all of these things,” says Ozanich. “Now, it unquestionably sounds like there was a microphone open in front of him and we recorded that. But in reality, we had to piece it all together.”

“It’s a funny little dichotomy of what we are trying to do. There are certain things we are trying to make stick on the screen, to make you buy that the sound is happening right there with the thing that you’re looking at, and then at the same time, we want to pull sounds off of the screen to envelop the audience and put them into the space and not be separated by that plane of the screen,” observes Ozanich.

The Atmos mix on Joker is a prime example of how effective that dichotomy can be. The sounds of the environments, like standing on the streets of Gotham or riding on the subway car, are distinct, dynamic and ever-changing, and the sounds emanating from the characters are realistic and convincing. All of this serves to pull the audience into the story and get them emotionally invested in the tale of this sad, psychotic clown.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems that the job title of “editor” changes. The editor is no longer just responsible for shaping the story of the show but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and they are that easy, but you can also nuance the audio if you like. The Era 4 Pro plugins not only work with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise my system was able to toggle each plug-in off and on without any issue. Playback was seamless when all plugins were applied. Now I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone who has some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (though doing this can potentially require rendering). In addition, there is an output gain setting and a “Diff” option that plays only the parts De-Esser is affecting. If you just want to try the “one button” approach, the Processing dial is really all you need to touch. In realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount and then back it off 5% or 10%.

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants the one-touch processing but also the ability to dive into the specific audio spectrum being affected and see how the falloff is being performed. In addition, there are presets such as male vocals, female speech, etc., to jump immediately to where you need help. I personally find the De-Esser Pro more useful than the De-Esser because I can really shape the plugin. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with the Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of the Era-D, it’s the only plugin not described by its own title, funnily enough — it is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — with a different percentage of processing applied to each region — but can also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use two mics to fix a suboptimal recording on one of the two mics, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It’s possible to only use one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Bundle Pro — and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Bundle Pro is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, then there are five buttons that let you focus where the processing occurs: all-frequencies (flat), high frequencies, low frequencies, high and low frequencies and mid frequencies. I love clicking the power button to hear the differences — with and without the noise removal — but also dialing the knob around to really get the noise removed without going overboard. Whether removing noise in video or audio, there is a fine art in noise reduction, and the Era 4 Noise Removal makes it easy … even for an online editor.

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus has also included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one is barely audible because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, to help normalize your dialogue. Think of the tight mode as being much more distinctive than a normal interview conversation; Accusonus describes it as a more focused “radio” sound. The Emphasis button helps address issues when the speaker turns away from the microphone and introduces tonal problems, and a simple Breath Control option keeps breaths from being pushed up along with the dialogue.

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Bundle Pro are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore lost audio due to clipping. If you recorded audio at high gain and it came out horribly, then it’s probably been clipped. De-Clipper tries to salvage this clipped audio by recreating overly saturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That’s when you should try De-Clipper. There are two modes: normal/standard use and one for trickier cases that take a little more processing power.

The final plugin, Plosive Remover, focuses on artifacts typically caused by “p” and “b” sounds. These can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops will easily be repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only what is being affected. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to touch. The Plosive Remover is another impressive plugin that, when you need it, does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, which is also where you can download the installers if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producer’s Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is the reason why the sound editing and mixing on Season 3 of HBO’s True Detective has been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on-set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn’t want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War And Modern Memory?” You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: One Thousand Birds sound designer Torin Geller

Initially interested in working in a music studio, once this sound pro got a taste of audio post, there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects: dialogue edit, sound design and mix, as well as help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how the two can interplay in abstract, interesting ways of animation that aren’t necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.

The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb. And, as you can imagine, the sounds of those neighborhoods vary.

                           
Eric Hirsch and David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances’ friend Diane (Molly Shannon) has taken up residence in a Manhattan high-rise and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whir of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls’ basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies; it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So, I surround myself with smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.

Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get… maybe, carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment and the cuts endured don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season; he is also nominated for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.
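
For the technically curious, here is a rough Python sketch of that kind of “muted, then opens up to full fidelity” move: a low-pass filter whose cutoff sweeps upward over the clip. It assumes a mono float signal at sample rate sr, and the filter type and cutoff values are illustrative placeholders, not the actual processing used on the show.

import numpy as np

def opening_lowpass(x, sr, start_hz=400.0, end_hz=18000.0):
    """One-pole low-pass whose cutoff rises from start_hz to end_hz over the clip."""
    cutoffs = np.geomspace(start_hz, end_hz, num=len(x))      # per-sample cutoff sweep
    y = np.zeros(len(x))
    state = 0.0
    for n, (sample, fc) in enumerate(zip(x, cutoffs)):
        alpha = 1.0 - np.exp(-2.0 * np.pi * fc / sr)          # one-pole smoothing coefficient
        state += alpha * (sample - state)
        y[n] = state
    return y

# Hypothetical usage: opened = opening_lowpass(water_and_rowing, sr=48000)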

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple different dialects; that really helped to sell location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattley world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile when we go to the grittiness of the black-ops site in Yemen with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have off and on screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out, added more elements to expand the sound more into the surrounds. The show was mixed in Dolby Atmos for the home, so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them even farther than we had. Then we stripped out some to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

Were the guns original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

For the opening sequence with the bombs, there was debate on whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, but in reality you’d hear the sound of an explosion that happened miles away much later than you’d see it.

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all… or rarely it’s one person. It does take a village and we had great support from the producers. They were very intentional on sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed and that was really appreciated.
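
As a side note on the sync debate Cook mentions above, the physics is easy to work out: sound travels at roughly 343 meters per second in air, so a blast a few kilometers away is heard seconds after it is seen. A quick back-of-the-envelope calculation in Python:

SPEED_OF_SOUND_M_S = 343.0        # approximate speed of sound in air at ~20°C

def visual_to_audio_delay(distance_m):
    """Seconds between seeing a distant explosion and hearing it."""
    return distance_m / SPEED_OF_SOUND_M_S

for km in (0.5, 1, 3, 5):
    print(f"{km:>4} km -> {visual_to_audio_delay(km * 1000):.1f} s delay")
# prints roughly 1.5 s, 2.9 s, 8.7 s and 14.6 s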

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super good sounding episodes; it’s a great sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how the technology has changed so much. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used Atmos mixing; they really used the sound field and used extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
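
For readers curious about what phase alignment is doing under the hood, here is a greatly simplified Python sketch of the general idea: estimate the time offset between a boom track and a lav ISO with cross-correlation, shift one to match, then sum them without comb filtering. This is not Sound Radix’s actual algorithm; the function name, search window and mix weights are illustrative assumptions only.

import numpy as np

def align_and_sum(boom, lav, max_shift=2400):
    """Time-align the lav ISO to the boom (within ±max_shift samples, ~50 ms at 48 kHz) and mix."""
    lags = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(boom, np.roll(lav, lag)) for lag in lags]   # brute-force cross-correlation
    best_lag = lags[int(np.argmax(scores))]
    aligned_lav = np.roll(lav, best_lag)                         # crude shift; real tools interpolate
    return 0.5 * (boom + aligned_lav), best_lag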

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.
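
A simplified illustration of the gain-staging idea Cook describes: several gentle compression stages in series, each shaving a little and adding a touch of make-up gain, rather than one compressor slamming the signal. The sketch below uses a static, per-sample compressor for clarity; the thresholds, ratios and make-up values are placeholders, not the settings used on Veep.

import numpy as np

def soft_compress(x, threshold_db=-18.0, ratio=1.5):
    """Very basic static compressor on a mono float signal (no attack/release envelope)."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)              # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)                        # reduce only the overshoot
    return x * (10.0 ** (gain_db / 20.0))

def gain_staged_chain(x):
    """Five light stages, each followed by a little make-up gain (~2 dB)."""
    for _ in range(5):
        x = soft_compress(x) * 1.25
    return x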

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.

Going back to what has changed over the last three years, one thing is that we have more time per episode to mix the show. We got more and more time from the first mix to the last; we now have twice as much time to mix the show.
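
The fader riding Cook describes on background TVs is essentially manual ducking. Here is a minimal Python sketch of the concept, assuming equal-length mono float arrays; the threshold, duck depth and smoothing window are illustrative, not values from the actual mix.

import numpy as np

def duck(background, dialogue, sr, duck_db=-9.0, window_s=0.05, threshold=0.02):
    """Pull `background` down wherever `dialogue` is active, let it back up in the gaps."""
    win = max(1, int(sr * window_s))
    kernel = np.ones(win) / win
    env = np.convolve(np.abs(dialogue), kernel, mode="same")     # moving-average envelope
    active = env > threshold                                     # crude speech-activity test
    gain = np.where(active, 10.0 ** (duck_db / 20.0), 1.0)
    gain = np.convolve(gain, kernel, mode="same")                # smooth, like a fader ride
    return background * gain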

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.
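
For anyone who wants to try a PA-style futz along the lines Cook describes, here is a rough Python/SciPy sketch: band-limit the feed like a small speaker, add a touch of saturation standing in for PA distortion, then blend in a large reverb with pre-delay. Every number here is a placeholder, and the decaying-noise tail is only a stand-in for a proper hall reverb.

import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def pa_futz(x, sr, predelay_ms=60.0, tail_s=2.5, wet=0.35):
    """Band-limit, lightly distort and reverberate a feed so it reads as a PA in a big hall."""
    # 1) Roll off lows and highs, roughly like a PA horn or a small TV speaker.
    sos = butter(4, [300.0, 4000.0], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, x)
    # 2) Gentle saturation standing in for PA distortion.
    driven = np.tanh(3.0 * band)
    # 3) Big synthetic reverb (decaying noise) with pre-delay, then blend.
    n_tail = int(sr * tail_s)
    tail = np.random.randn(n_tail) * np.exp(-np.linspace(0.0, 6.0, n_tail))
    ir = np.concatenate([np.zeros(int(sr * predelay_ms / 1000.0)), tail]) * 0.02
    verb = fftconvolve(driven, ir)[: len(driven)]
    return (1.0 - wet) * driven + wet * verb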

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then, the level of all the loop group specifics and chanting — from the ramp up of the chanting from zero to full volume — we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.

There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots have the guy put in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of clean up in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot wires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.