
Netflix’s Mindhunter: Skywalker’s audio adds to David Fincher’s vision

By Patrick Birk

Scott Lewis

I was late in discovering David Fincher’s gripping series on serial killers, Mindhunter. But last summer, I noticed the Netflix original lurking in my suggested titles and decided to give it a whirl. I burned through both seasons within a week. The show is both thrilling and chilling, but the majority of these moments are not achieved through blazing guns, jump scares and pyrotechnics. It instead focuses on the inner lives of multiple murderers and the FBI agents whose job it is to understand them through subtle but detail-rich conversation.

Sound plays a crucial role in setting the tone of the series and heightening tension through each narrative arc. I recently spoke to rerecording mixers Scott Lewis and Stephen Urata as well as supervising sound editor Jeremy Molod — all from Skywalker Sound — about their process creating a haunting and detail-laden soundtrack. Let’s start with Lewis and Urata and then work our way to Molod.

How is working with David Fincher? Does he have any directorial preferences when it comes to sound? I know he’s been big on loud backgrounds in crowded spaces since The Social Network.
Scott Lewis: David is extremely detail-oriented and knowledgeable about sound. So he would give us very in-depth notes about the mix… down to the decibel.

Stephen Urata: That level of attention to detail is one of the more challenging parts of working on a show like Mindhunter.

Working with a director who is so involved in the audio, does that limit your freedom at all?
Lewis: No. It doesn’t curtail your freedom, because when a director has a really clear vision, it’s more about crafting the track to be what he’s looking for. Ultimately, it’s the director’s show, and he has a way of bringing the best work out of people. I’m sure you heard about how he does hundreds of takes with actors to get many options. He takes a similar approach with sound in that we might give him multiple options for a certain scene or give him many different flavors of something to choose from. And he’ll push us to deliver the goods. For example, you might deliver a technically perfect mix but he’ll dig in until it’s exactly what he wants it to be.

Stephen Urata

Urata: Exactly. It’s not that he’s curtailing or handcuffing us from doing something creative. This project has been one of my favorites because it was just the editorial team and sound design, and then it would come to the mix stage. That’s where it would be just Scott and me in a mix room, just the two of us, and we’d get a shot at our own aesthetic and our own choices. It was really a lot of fun trying to nail down what our favorite version of the mix would be, and David really gave us that opportunity. If he wanted something else, he would have just said, “I want it like this and only do it like this.”

But at the same time, we would do something maybe completely different than he was expecting, and if he liked it, he would say, “I wasn’t thinking that, but if you’re going to go that direction, try this also.” So he wasn’t handcuffing us, he was pushing us.

Do you have an example of something that you guys brought to the table that Fincher wasn’t expecting but asked you to go with?
Urata: The first thing we did was the train scene: the scene in an empty parking garage where there’s the sound of an incoming train from two miles away. It was the middle of Episode 2 or something, and that’s where we started.

Where they’re talking to the BTK survivor, Kevin?
Lewis: Exactly.

Urata: He’s fidgeting and really uncomfortable telling his story, and David wanted to see if that scene would work at all, because it really relied heavily on sound. So we got our shot at it. He said, “This is the kind of the direction I want you guys to go in.” Scott and I played off of each other for a good amount of time that first day, trying to figure out what the best version would be and we presented it to him. I don’t remember him having that many notes on that first one, which is rare.

It really paid off. Among the mixes you showed Fincher, did you notice a trend in terms of his preferences?
Lewis: When I say we gave him options, it might be something like what we did with Son of Sam. Throughout that scene we used slight pitch-shifting to slowly lower his voice over the length of the scene, so that by the time he reveals that he actually isn’t crazy and he’s playing everybody, his voice drops a register. So when we present him options, it’s things like how much we’re pitching him down over time. It’s a constant review process.
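For readers curious what that kind of ramped pitch move looks like outside a mix console, here is a minimal offline sketch in Python. To be clear, this is not Skywalker’s process: the filename, block size and two-semitone total drop are all assumptions, and a real tool would crossfade between blocks rather than butt-splice them. It only illustrates the idea of lowering a voice gradually across a scene.

```python
# Rough sketch of a gradual pitch drop, NOT the Mindhunter team's actual
# workflow. Assumes librosa and soundfile are installed; the filename and
# the 2-semitone total drop are made up for illustration.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("scene_dialog.wav", sr=None, mono=True)  # hypothetical file

block = sr * 2          # process in 2-second blocks
total_drop = -2.0       # end the scene roughly two semitones lower
out = []

n_blocks = int(np.ceil(len(y) / block))
for i in range(n_blocks):
    chunk = y[i * block : (i + 1) * block]
    # linear ramp: no shift at the start, the full drop by the final block
    steps = total_drop * (i / max(n_blocks - 1, 1))
    out.append(librosa.effects.pitch_shift(chunk, sr=sr, n_steps=steps))

# real tools would crossfade the blocks; butt-splicing is fine for a sketch
sf.write("scene_dialog_pitched.wav", np.concatenate(out), sr)
```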

The show takes place in the mid-’70s and early ’80s. Were there any period-specific sounds or mixing tricks you used when it came to diegetic music and things like that?
Lewis: Oh yeah. Ren Klyce is the supervising sound designer on the show, and he’s fantastic. He’s the sound designer on all of David’s films. He is really good about making sure that we stay true to the period. So with regard to mixing, panning is something that he’s really focused on because it’s the ’70s. He’d tell us not to go nuts on the panning, the surrounds, that kind of thing; just keep it kind of down the middle. Also, futzes are a big thing in that show: music futzes, phone futzes… we did a ton of work on making sure that everything was period-specific and sounded right.

Are you using things like impulse responses and Altiverb or worldizing?
Lewis: I used a lot of Speakerphone by Audio Ease as well as EQ and reverb.
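For context, the core of any futz is band-limiting plus a little distortion; a tool like Speakerphone then layers speaker impulse responses, codecs and noise on top. Here is a minimal sketch of that core idea in Python. The telephone band edges, filter order and filename are assumptions, not anything the Mindhunter team specified.

```python
# Bare-bones phone futz sketch: band-limit to the telephone band and add
# mild saturation. This is the generic idea only; Speakerphone does far more.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

y, sr = sf.read("clean_dialog.wav")    # hypothetical filename
if y.ndim > 1:
    y = y.mean(axis=1)                 # fold to mono, like a phone mic

# 4th-order band-pass roughly matching the telephone band (300 Hz to 3.4 kHz)
sos = butter(4, [300, 3400], btype="bandpass", fs=sr, output="sos")
futzed = sosfilt(sos, y)

# gentle saturation to suggest an overdriven small speaker
futzed = np.tanh(3.0 * futzed) / np.tanh(3.0)

sf.write("futzed_dialog.wav", futzed.astype(np.float32), sr)
```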

What mixing choices did you make to immerse the viewer in Holden’s reality, i.e. the PTSD he experiences?
Lewis: When he’s experiencing anxiety, it’s really important to make sure that we’re telling the story that we’re setting out to tell. Through mixing, you can focus the viewers’ attention on what you want them to track. That could be dialogue in the background of a scene, like the end of Episode 1, when he’s having a panic attack and, in the distance, his boss and Tench are talking. It was very important that you make out the dialogue there, even though you’re focusing on Holden having a panic attack. So in moments like that, it’s about making sure that the viewer is feeling that claustrophobia but also picking up on the story point that we want you to follow.

Lewis: Also, Stephen did something really great there — there are sprinklers in the background and you don’t even notice, but the tension is building through them.

There’s a very intense moment when Holden’s trying to figure out who let their boss know about a missing segment of tape in an interview, and he accuses Greg, who leans back in his chair, and there’s a squeal in there that kind of ramps up the tension.
Urata: David’s really, really honed in on Foley in general — chair squeaks, the type of shoes somebody’s wearing, the squeak of the old wooden floor under their feet. All those things have to play with David. Like when Wendy’s creeping over to the stairwell to listen to her girlfriend and her ex-husband talking. David said, “I want to hear the wooden floor squeaking while she’s sneaking over.”

It’s not just the music crescendoing and making you feel really nervous or scared. It’s also the Foley work happening in the scene: “I want to hear more of that, or less of that, or more backgrounds to add to the sound pressure and build to the climax of the scene.” David uses all those tools to accomplish the storytelling in the scene with sound.

How much ambience do you have built into the raw Foley tracks that you get, and how much is reverb added after the fact? Things like car door slams have so much body to them.
Urata: Some of those, like door slams, were recorded by Ren Klyce. Instead of just recording a door slam with a mic right next to the door and then adding reverb later on, he actually goes into a huge mansion and slams a huge door from 40 feet away and records that to make it sound really realistic. Sometimes we add it ourselves. I think the most challenging part about all of that is marrying and making all the sounds work together for the specific aesthetic of the soundtrack.

Do you have a go-to digital solution for that? Is it always something different or do you find yourself going to the same place?
Urata: It definitely varies. There’s a classic reverb that we use a digital version of: the Lexicon 480. We use it a good amount. It has a really great, natural film sound that people are familiar with. There are others, but it’s really just another tool. If it doesn’t work, we just use something else.
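The Lexicon 480 emulation Urata mentions is an algorithmic reverb, but the placement job he’s describing, making a close recording sit in a bigger space, can be sketched with a simple convolution against a room impulse response. Everything below (filenames, the 35 percent wet mix) is invented for illustration.

```python
# Convolution-reverb sketch: convolve a dry sound with a room impulse
# response and blend wet against dry. Filenames and mix amount are made up.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("door_slam_close.wav")
ir, sr_ir = sf.read("hall_ir.wav")
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)
assert sr == sr_ir, "resample the IR to the dry file's rate first"

wet = fftconvolve(dry, ir)[: len(dry)]   # trim the reverb tail to the dry length
wet /= np.max(np.abs(wet)) + 1e-12       # normalize the wet level

mix = 0.35                               # a higher mix reads as "farther away"
out = (1.0 - mix) * dry + mix * wet
sf.write("door_slam_distant.wav", out.astype(np.float32), sr)
```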

Were there any super memorable ADR moments?
Lewis: I can just tell you that there’s a lot of ADR. Some whole scenes are ADR. On any Fincher show where I’ve mixed dialogue and also mixed the ADR, I come out 10 times better than I was before I started. Because David’s so focused on storytelling, if there’s a subtle inflection that he’s looking for that he didn’t get on set, he will loop the line to make sure that he gets that nuance.

Did you coordinate with the composer? How do you like to mix the score so that it has a really complementary relationship to the rest of the elements?
Lewis: As re-recording mixers, they don’t involve us in the composition part of it; it just comes to us after they’ve spotted the score.

Jason Hill was the composer, and his score is great: so spooky and eerie. It complements the sound design and sound effects layers really well, so a lot of it will kind of sit in there. It’s not a traditional score, either. He’s not working with big strings and horns all over the place; he’s got a lot of synths and guitars and stuff, and he would use a lot of analog gear as well. So when it comes to the mix, you sometimes get anomalies that you don’t commonly get, whether it’s hiss or whatever: elements he’s adding to give it kind of an analog sound.

Lewis: And a lot of times we would keep that in because it’s part of his score.

Now let’s jump in with supervising sound editor Jeremy Molod.

As a sound editor, what was it like working with David Fincher?
Jeremy Molod: David and I have done about seven or eight films together, so by the time we started on Season Two of Mindhunter, we pretty much knew each other’s styles. I’m a huge fan of David’s movies. It’s a privilege to work with him because he’s such a good director, and the stuff he creates is so entertaining and beautifully done. I really admire his organization and how detailed he is. He really gets in there and gives us detail that no other director has ever given us.

Jeremy Molod

You worked with him on The Social Network. In college, my sound professors would always cite the famous bar scene, where Mark Zuckerberg and his girlfriend had to shout at each other over the backgrounds.
Molod: I remember that moment well. When we were mixing that scene, because the music was so loud and so pulsating, David said, “I don’t want this to sound like we’re watching a movie about a club; I want this to be like we’re in the club watching this.” To make it realistic, when you’re in the club, you’re straining to hear sounds and people’s voices. He said that’s what it should be like. Our mixer, David Parker, kept pushing the music up louder and louder, so you can barely make out those words.

I feel like I’m seeing iterations of that in Mindhunter as well.
Molod: Absolutely. That makes it more stressful and like you said, gives it a lot more tension.

Scott said that David’s down to the decibel in terms of how he likes his sound mixed. I’m assuming he’s that specific when it comes to the editorial as well?
Molod: That is correct. It’s actually even finer than that: down to the quarter decibel. He literally does that all the time. He gets really, really in there.

He does the same thing with editorial, and what I love about his process is that he doesn’t just say, “I want this character to sound old and scared.” He gives real detail. He’ll say, “This guy’s very scared and he’s dirty and his shoelaces are untied and he’s got a rag and a piece of snot rag hanging out of his pocket. And you can hear the lint and the Swiss army knife with the toothpick part missing.” He gets into painting a picture, and he wants us to literally translate that picture into sound.

So he wanted to make Kevin sound really nervous in the truck scene. Kevin’s in the back and you don’t really see him too much. He’s blurred out. David really wanted to sell his fear by using sound, so we had him tapping his leg nervously, scratching the side of the car, kind of slapping his leg and obviously breathing really heavy and sniffing a lot, and it was those sounds that really helped sell that scene.

So while he does have the acumen and vocabulary within sound to talk to you on a technical level, he’ll give you direction in a similar way to how he would an actor.
Molod: Absolutely, and that’s always how I’ve looked at it. When he’s giving us direction, it’s actually the same way as he’s giving an actor direction to be a character. He’s giving the sound team direction to help those characters and help paint those characters and the scenes.

With that in mind, what was the dialogue editing process like? I’ve heard that his attention to detail really comes into play with inflection of lines. Were you organizing and pre-syncing the alternate takes as closely as you could with the picture selection?
Molod: We did that all the time. The inflection and the intonation and the cadence of the characters’ voices are really important to him, and he’s really good about figuring out which words of which takes he can stitch together to do it. So there might be two sentences that one actor says at one time, and those sentences are actually made up of five different takes. And he does so many takes that we have a wealth of material to choose from.

We’d probably send about five or six versions to David to listen to, and then he would make his notes. That would happen almost every day, and we would start honing in on the performances he liked. Eventually he might say, “I don’t like any of them. You’ve got to loop this guy on the ADR stage.” He likes us to stitch the best little parts together like a puzzle.

What is the ADR stage like at Skywalker?
Molod: We actually did all of our ADR at Disney Studios in LA because David was down there, as were the actors. We did a fair amount of ADR on Mindhunter; there’s lots of it in there.

We usually have three or four microphones running during an ADR session, one of which will be a radio mic. The other three would be booms set in different locations, the same microphones that they use in production. We also throw in an extra [Sennheiser MKH 50] just to have another track of sound that we could choose from.

The process went great. We’d come back and give him about five or six choices, and then he would start making notes, and we would pin it down to the way he liked it. So by the time we got to the mix stage, the decisions were done.

There was a scene where people are walking around talking after a murder had been committed, and what David really wanted was for them to be talking softly about this murder. So we had to go in and loop that whole scene again with them performing it at a quieter, sustained volume. We couldn’t just turn it down. They had to perform it as if they were not quite whispering but trying to speak a little lower so no one could hear.

To what extent did loop groups play a part in the soundtrack? With the prominence of backgrounds in the show it seems like customization would be helpful, to have time-specific little bits of dialogue that might pop out.
Molod: We’ve used a group called the Loop Squad for all the features, the House of Cards episodes and Mindhunter. We would send a list of all of our cues, get on the phone and explain what the reasoning was and what the storylines were. All their actors would then go and research, on their own, everything that was happening at the time, so if they were just standing by a movie theater, they had something relevant to the period to talk about.

When it came to production sound on the show, which track did you normally find yourself working from?
Molod: In most scenes, they would have a couple of radio mics attached to the actors, and they’d have several booms. Normally, there were maybe eight different microphones set up. You would have one general boom over the whole thing, and you’d have a boom close to each character.

We almost always went with one of the booms, unless we were having trouble making out what they were saying. And then it depended on which actor was standing closest to the boom. One of the tricks our editors used to make it sound better was to phase-align the two mics. So if the boom wasn’t quite working on its own, and neither was the radio mic, we would make those two play together in a way that accomplished what we wanted: you could hear the dialogue, but you also got the space of the room.
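Molod doesn’t spell out how his editors align the two mics, but the standard version of the trick is to measure the time offset between the boom and the lav by cross-correlation, slide one into alignment, and sum them. A minimal sketch, assuming two already-conformed mono files (the filenames and the 60/40 blend are made up):

```python
# Sketch of boom/lav alignment via cross-correlation. Assumes both files are
# mono, share a sample rate, and are roughly in sync already.
import numpy as np
import soundfile as sf
from scipy.signal import correlate, correlation_lags

boom, sr = sf.read("boom.wav")   # hypothetical filenames
lav, _ = sf.read("lav.wav")
n = min(len(boom), len(lav))
boom, lav = boom[:n], lav[:n]

# find the sample offset at which the two mics line up best
lag = correlation_lags(n, n)[np.argmax(correlate(boom, lav))]
aligned = np.roll(lav, lag)   # crude shift (wraps at the ends); an editor would slip the clip

# the boom carries the room; the aligned lav adds intelligibility
blend = 0.6 * boom + 0.4 * aligned
sf.write("dialog_blend.wav", blend.astype(np.float32), sr)
```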

Were there any moments that you remember from the production tracks for effects?
Molod: Whenever we could use production effects, we always tried to get those in, because they always sound the most realistic and most pertinent to that scene and that location. If we can maintain any footsteps in the production, we always do because those always sound great.

Any kind of subtle things like creaks, bed creaks, the floor creaking, we always try to salvage those and those help a lot too. Fincher is very, very, very into Foley. We have Foley covering the whole thing, end to end. He gives us notes on everybody’s footsteps and we do tests of each character with different types of shoes on and different strides of walking, and we send it to him.

So much of the show’s drama plays out in characters’ internal worlds. In a lot of the prison interview scenes, I notice door slams here and there that I think serve to heighten the tension. Did you develop a kind of a logical language when it came to that, or did you find it was more intuitive?
Molod: We did have a language for it, and it was based on Fincher’s direction. When things were really crazy, he wanted to hear the door slams and buzzers and keys jingling and tons of prisoners yelling offscreen. We spent days recording loop-group prisoners, and they would be sprinkled throughout the scene. And when the conversation turned to an upsetting subject, we might ramp up the voices in the back.


Pat Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

A Closer Look: Delta Soundworks’ Ana Monte and Daniel Deboy

Delta Soundworks was co-founded by Ana Monte and Daniel Deboy in 2016 in Heidelberg, Germany. This 3D/immersive audio post studio’s projects span installations, virtual reality, 360-degree films and gaming, as well as feature films, documentaries, TV shows and commercials. Its staff includes production sound mixers, recording engineers, sound designers, Foley artists, composers and music producers.

Below the partners answer some questions about their company and how they work.

How did Delta come about?
Ana Monte: Delta Soundworks grew from the combination of my creative background in film sound design and Daniel’s high-level understanding of the science of sound. I studied music industry and technology at California State University, Chico and I earned my master’s degree in film sound and sound design at the Film Academy Baden-Württemberg, here in Germany.

Daniel is a graduate of the Graz University of Technology, where he focused his studies on 3D audio and music production. He was honored with a Student Award from the German Acoustical Society (DEGA) for his research in the field of 3D sound reproduction. He has also received gold, silver and bronze awards from the Audio Engineering Society (AES) for his music recordings.

Can you talk about some recent projects?
Deboy: I think our biggest current project is working for The Science Dome at the Experimenta, a massive science center in Heilbronn, Germany. It’s a 360-degree theater with a 360-degree projection system and a 29-channel audio system, which is not standard. We create the entire sound production for all the theater’s in-house shows. For one of the productions, our composer Jasmin Reuter wrote a beautiful score, which we recorded with a chamber orchestra. It included a lot of sound design elements, like rally cars. We put all these pieces together and finally mixed them in a 3D format. It was a great ride for us.

Monte: The Science Dome has a very unique format. It’s not a standard planetarium, where everyone is looking up and to the middle, but rather a mixture of theater plus planetarium, wherein people look in front, above and behind. For example, there’s a children’s show with pirates who travel to the moon. They begin in the ocean with space projected above them, and the whole video rotates 180 degrees around the audience. It’s a very cool format and something that is pretty unique, not only in Europe, but globally. The partnership with the Experimenta is very important for us because they do their own productions and, eventually, they might license it to other planetariums.

With such a wide array of projects and requirements, tell us about your workflow.
Deboy: Delta is able to quickly and easily adjust to different workflows because we are, or at least love to be, at the edge of what’s possible. We are always happy to take on new and interesting projects, try out new workflows and designs, and look at up-and-coming techniques. I think that’s kind of a unique selling point for us. We are way more flexible than a typical post production house would be, and that includes our work for cinema sound production.

What are some tools you guys use in your work?
Deboy: Avid Pro Tools Ultimate, Reaper, Exponential Audio, iZotope RX 6 and Metric Halo 2882 3D. We’ve also had a license for Nugen Halo Upmix for a while, and we’ve been using it quite a bit for 5.1 production. We rely on it significantly for the Experimenta Science Dome projects because we also work with a lot of external source material from composers who deliver it in stereo format. Also, the Dome is not a 5.1/7.1 theater; it’s 29 channels. So, Upmix really helped us go from a stereo format to something that we could distribute in the room. I was able to adjust all my sources through the plugin and, ultimately, create a 3D mix. Using Nugen, you can really have fun with your audio.

Monte: I use Nugen Halo Upmix for sound design, especially to create atmosphere sounds, like a forest. I plug in my source and Upmix just works. It’s really great; I don’t have to spend hours tweaking the sound just to have it serve as a bed to add extra elements on top. For example, maybe I want an extra bird chirping over there and then, okay, we’re in the forest now. It works really well for tasks like that.
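Halo Upmix’s internals aren’t public, but the textbook starting point for any stereo-to-surround upmix is a mid/side split: keep the correlated (mid) energy up front and feed the decorrelated (side) energy, delayed and attenuated, to the surrounds. The sketch below shows only that naive idea; the filenames, gains and delay are assumptions, and a commercial upmixer does far more.

```python
# Naive stereo-to-5.0 upmix sketch via mid/side, purely illustrative.
import numpy as np
import soundfile as sf

stereo, sr = sf.read("forest_bed_stereo.wav")   # hypothetical stereo file
L, R = stereo[:, 0], stereo[:, 1]

mid, side = (L + R) / 2, (L - R) / 2
delay = int(0.015 * sr)                  # ~15 ms delay decorrelates the surrounds
delayed = np.pad(side, (delay, 0))[: len(side)]
Ls, Rs = 0.5 * delayed, -0.5 * delayed   # opposite polarity widens the rear field

# channel order L, R, C, Ls, Rs (no LFE in this sketch)
bed = np.stack([L, R, 0.7 * mid, Ls, Rs], axis=1)
sf.write("forest_bed_5ch.wav", bed.astype(np.float32), sr)
```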


Blackmagic releases Resolve 16.2, beefs up audio post tools

Blackmagic has updated its color, edit, VFX and audio post tool to Resolve 16.2. This new version features major Fairlight updates for audio post as well as many improvements for color correction, editing and more.

This version has major updates for editing in the Fairlight audio timeline with a mouse and keyboard: the new edit selection mode unlocks functionality previously available only via the audio editor on the full Fairlight console, so editing is much faster than before. The edit selection mode also makes adding fades and cuts and even moving clips only a mouse click away. New scalable waveforms let users zoom in without adjusting the volume. Bouncing lets customers render a clip with custom sound effects directly from the Fairlight timeline.

Adding multiple clips is also easier, as users can now add them to the timeline vertically, not just horizontally, making it simpler to add multiple tracks of audio at once. Multichannel tracks can now be converted into linked groups directly in the timeline so users no longer have to change clips manually and reimport. There’s added support for frame boundary editing, which improves file export compatibility for film and broadcast deliveries. Frame boundary editing now adds precision so users can easily trim to frame boundaries without having to zoom all the way in the timeline. The new version supports modifier keys so that clips can be duplicated directly in the timeline using the keyboard and mouse. Users can also copy clips across multiple timelines with ease.
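To make “trim to frame boundaries” concrete: audio is addressed in samples, but picture deliveries care about whole video frames, so an edit point gets snapped to the nearest frame. The helper below is just the arithmetic, not Resolve’s code; the 48kHz/24fps defaults are assumptions.

```python
# Illustration of frame-boundary snapping: round a sample position to the
# nearest video frame. Not Resolve's implementation, just the arithmetic.
def snap_to_frame(sample: int, sample_rate: int = 48000, fps: float = 24.0) -> int:
    """Return the sample index of the nearest video frame boundary."""
    samples_per_frame = sample_rate / fps      # 2000 samples at 48 kHz / 24 fps
    frame = round(sample / samples_per_frame)
    return round(frame * samples_per_frame)

print(snap_to_frame(95_030))   # -> 96000 (frame 48 at 24 fps)
```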

Resolve 16.2 also includes support for the Blackmagic Fairlight Sound Library with new support for metadata based searches, so customers don’t need to know the filename to find a sound effect. Search results also display both the file name and description, so finding the perfect sound effect is faster and easier than before.

MPEG-H 3D immersive surround sound audio bussing and monitoring workflows are now supported. Additionally, improved pan and balance behavior includes the ability to constrain panning.
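Constrained panning can be pictured as an ordinary constant-power pan law whose position is clamped before the channel gains are computed. The sketch below is a generic illustration of that idea, not Fairlight’s implementation.

```python
# Generic constant-power pan with a constraint on how far off-center the
# source may travel. Illustrative only.
import numpy as np

def constant_power_pan(x, pan, limit=1.0):
    """pan in [-1, 1], left to right; limit < 1 constrains the usable range."""
    pan = np.clip(pan, -limit, limit)
    theta = (pan + 1) * np.pi / 4          # map [-1, 1] onto [0, pi/2]
    return np.cos(theta) * x, np.sin(theta) * x

# dead center puts both channels at about -3 dB (0.707), so loudness holds steady
left, right = constant_power_pan(np.ones(4), pan=0.0)
```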

Fairlight audio editing also has index improvements. The edit index is now available in the Fairlight page and works as it does in the other pages, displaying a list of all media used; users simply click on a clip to navigate directly to its location in the timeline. The track index now supports drag selections for mute, solo, record enable and lock as well as visibility controls so editors can quickly swipe through a stack of tracks without having to click on each one individually. Audio tracks can also be rearranged by clicking and dragging a single track or a group of tracks in the track index.

This new release also includes improvements in AAF import and export. AAF support has been refined so that AAF sequences can be imported directly to the timeline in use. Additionally, if the project features a different time scale, the AAF data can also be imported with an offset value to match. AAF files that contain multiple channels will also be recognized as linked groups automatically. The AAF export has been updated and now supports industry-standard broadcast wave files. Audio cross-fades and fade handles are now added to the AAF files exported from Fairlight and will be recognized in other applications.

For traditional Fairlight users, this update makes major improvements to importing legacy Fairlight projects, including improved speed when opening projects with over 1,000 media files.

Audio mixing is also improved. A new EQ curve preset for clip EQ in the inspector allows removal of troublesome frequencies. New FairlightFX filters include a new meter plug-in that adds a floating meter for any track or bus, so users can keep an eye on levels even if the monitoring panel or mixer are closed. There’s also a new LFE filter designed to smoothly roll off the higher frequencies when mixing low-frequency effects in surround.
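The LFE filter’s exact cutoff and slope aren’t stated, but the standard move it describes is a low-pass somewhere around 120 Hz before audio reaches the LFE channel. A generic sketch, with an assumed cutoff and filename:

```python
# Generic LFE low-pass sketch: keep only the low frequencies before routing
# a signal to the LFE channel. Cutoff and filename are assumptions.
import soundfile as sf
from scipy.signal import butter, sosfilt

y, sr = sf.read("explosion_fx.wav")
if y.ndim > 1:
    y = y.mean(axis=1)

sos = butter(4, 120, btype="lowpass", fs=sr, output="sos")
sf.write("explosion_lfe.wav", sosfilt(sos, y), sr)
```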

Working with immersive sound workflows using the Fairlight audio editor has been updated and now includes dedicated controls for panning up and down. Additionally, clip EQ can now be altered in the inspector on the editor panel. Copy and paste functions have been updated, and now all attributes — including EQ, automation and clip gain — are copied. Sound engineers can set up their preferred workflow, including creating and applying their own presets for clip EQ. Plug-in parameters can also be customized or added so that users have fast access to their preferred tool set.

Clip levels can now be changed relatively, allowing users to adjust the overall gain while respecting existing adjustments. Clip levels can also be reset to unity, easily removing any level adjustments that might have previously been made. Fades can also be deleted directly from the Fairlight Editor, making the process faster than before. Sound engineers can also now save their preferred track view so that they get the view they want without having to create it each time. More functions previously only available via the keyboard are now accessible using the panel, including layered editing. This also means that automation curves can now be selected via the keyboard or audio panel.
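Why does a relative change “respect existing adjustments”? Because it adds a dB offset on top of each clip’s current gain rather than overwriting it, so the balance between clips survives. A toy illustration:

```python
# Relative level change: add an offset in dB instead of setting an absolute
# value, preserving the balance between clips. Purely illustrative.
def trim_clips(clip_gains_db, offset_db):
    return [g + offset_db for g in clip_gains_db]

print(trim_clips([-3.0, 0.0, -6.5], +2.0))   # -> [-1.0, 2.0, -4.5]
```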

Continuing with the extensive improvements to Fairlight audio, there have also been major updates to the audio editor transport control. Track navigation is now improved and even works when nothing is selected. Users can navigate directly to the timecode entry window above the timeline from the audio editor panel, and there is added support for high-frame-rate timecodes. Timecode entry now supports values relative to the current CTI location, so the playhead can move along the timeline relative to its position rather than to a set timecode.

Support has also been added so the colon key can be used in place of the user typing 00. Master spill on console faders now lets users spill out all the tracks to a bus fader for quick adjustments in the mix. There’s also more precision with rotary controls on the panel and when using a mouse with a modifier key. Users can also change the layout and select either icon or text-only labels on the Fairlight editor. Legacy Fairlight users can now use the traditional — and perhaps more familiar — Fairlight layout. Moving around the timeline is even quicker with added support for “media left” and “media right” selection keys to jump the playhead forward and back.

This update also improves editing in Resolve. Loading and switching timelines on the edit page is now faster, with improved performance when working with a large number of audio tracks. Compound clips can now be made from in and out points so that editors can be more selective about which media they want to see directly in the edit page. There is also support for previewing timeline audio when performing live overwrites of video-only edits. Now when trimming, the duration will reflect the clip duration as users actively trim, so they can set a specific clip length. And there is a new change-transition-duration dialog.

The media pool now includes metadata support for audio files with up to 24 embedded channels. Users can also duplicate clips and timelines into the same bin using copy and paste commands, and the primary DaVinci Resolve screen can now run as a window when dual-screen mode is enabled. Smart filters now let users sort media based on metadata fields, including keywords and people tags, so users can find the clips they need faster.


Amazon’s The Expanse Season 4 gets HDR finish

The fourth season of the sci-fi series The Expanse, streaming via Amazon Prime Video, was finished in HDR for the first time. Deluxe Toronto handled end-to-end post services, including online editorial, sound remixing and color grading. The series was shot on ARRI Alexa Minis.

In preparation for production, cinematographer Jeremy Benning, CSC, shot anamorphic test footage at a quarry that would serve as the filming stand-in for the season’s new alien planet, Ilus. Deluxe Toronto senior colorist Joanne Rourke then worked with Benning, VFX supervisor Bret Culp, showrunner Naren Shankar and series regular Breck Eisner to develop looks that would convey the location’s uninviting and forlorn nature, keeping the overall look desaturated and removing color from the vegetation. Further distinguishing Ilus from other environments, production chose to display scenes on or above Ilus in a 2.39 aspect ratio, while those featuring Earth and Mars remained in a 16:9 format.

“Moving into HDR for Season 4 of our show was something Naren and I have wanted to do for a couple of years,” says Benning. “We did test HDR grading a couple seasons ago with Joanne at Deluxe, but it was not mandated by the broadcaster at the time, so we didn’t move forward. But Naren and I were very excited by those tests and hoped that one day we would go HDR. With Amazon as our new home [after airing on Syfy], HDR was part of their delivery spec, so those tests we had done previously had prepared us for how to think in HDR.

“Watching Season 4 come to life with such new depth, range and the dimension that HDR provides was like seeing our world with new eyes,” continues Benning. “It became even more immersive. I am very much looking forward to doing Season 5, which we are shooting now, in HDR with Joanne.”

Rourke, who has worked on every season of The Expanse, explains, “Jeremy likes to set scene looks on set so everyone becomes married to the look throughout editorial. He is fastidious about sending stills each week, and the intended directive of each scene is clear long before it reaches my suite. This was our first foray into HDR with this show, which was exciting, as it is well suited for the format. Getting that extra bit of detail in the highlights made such a huge visual impact overall. It allowed us to see the comm units, monitors, and plumes on spaceships as intended by the VFX department and accentuate the hologram games.”

After making adjustments and ensuring initial footage was even, Rourke then refined the image by lifting faces and story points and incorporating VFX. This was done with input provided by producer Lewin Webb; Benning; cinematographer Ray Dumas, CSC; Culp; or VFX supervisor Robert Crowther.

To manage the show’s high volume of VFX shots, Rourke relied on Deluxe Toronto senior online editor Motassem Younes and assistant editor James Yazbeck to keep everything in meticulous order. (For that they used the Grass Valley Rio online editing and finishing system.) The pair’s work was also essential to Deluxe Toronto re-recording mixers Steve Foster and Kirk Lynds, who have both worked on The Expanse since Season 2. Once ready, scenes were sent in HDR via Streambox to Shankar for review at Alcon Entertainment in Los Angeles.

“Much of the science behind The Expanse is quite accurate thanks to Naren, and that attention to detail makes the show a lot of fun to work on and more engaging for fans,” notes Foster. “Ilus is a bit like the wild west, so the technology of its settlers is partially reflected in communication transmissions. Their comms have a dirty quality, whereas the ship comms are cleaner-sounding and more closely emulate NASA transmissions.”

Adds Lynds, “One of my big challenges for this season was figuring out how to make Ilus seem habitable and sonically interesting without familiar sounds like rustling trees or bird and insect noises. There are also a lot of amazing VFX moments, and we wanted to make sure the sound, visuals and score always came together in a way that was balanced and hit the right emotions story-wise.”

Foster and Lynds worked side by side on the season’s 5.1 surround mix, with Foster focusing on dialogue and music and Lynds on sound effects and design elements. When each had completed his respective passes using Avid Pro Tools workstations, they came together for the final mix, spending time on fine strokes, ensuring the dialogue was clear, and making adjustments as VFX shots were dropped in. Final mix playbacks were streamed to Deluxe’s Hollywood facility, where Shankar could hear adjustments completed in real time.

In addition to color finishing Season 4 in HDR, Rourke also remastered the three previous seasons of The Expanse in HDR, using her work on Season 4 as a guide and finishing with Blackmagic DaVinci Resolve 15. Throughout the process, she was mindful to pull out additional detail in highlights without altering the original grade.

“I felt a great responsibility to be faithful to the show for the creators and its fans,” concludes Rourke. “I was excited to revisit the episodes and could appreciate the wonderful performances and visuals all over again.”


London’s Molinare launches new ADR suite

Molinare has officially opened a new ADR suite in its Soho studio in anticipation of increased ADR output and to complement last month’s CAS award-winning ADR work on Fleabag. Other recent ADR credits for the company include Good Omens, The Capture and Strike Back. Molinare sister company Hackenbacker also picked up some award love with a BAFTA TV Craft and an AMPS award for Killing Eve.

Molinare and Hackenbacker’s audio setup includes nine mixing theaters, three of which have Dolby 5.1/7.1 Theatrical or Commercials & Trailers Certification, and one has full Dolby Atmos home entertainment mix capability.

Molinare works on high-end TV dramas, feature films, feature documentaries and TV reality programming. Recent audio credits include BBC One’s Dracula, The War of the Worlds from Mammoth Screen and Worzel Gummidge. Hackenbacker has recently worked on HBO’s Avenue 5 for returning director Armando Iannucci and Carnival Film’s Downton Abbey and has contributed to the latest season of Peaky Blinders.


Behind the Title: Harbor sound editor/mixer Tony Volante

“As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.”

Name: Tony Volante

Company: Harbor

Can you describe what Harbor does?
Harbor was founded in 2012 to serve the feature film, episodic and advertising industries. Harbor brings together production and post production under one roof — what we like to call “a unified process allowing for total creative control.”

Since then, Harbor has grown into a global company with locations in New York, Los Angeles and London. Harbor hones every detail throughout the moving-image-making process: live-action, dailies, creative and offline editorial, design, animation, visual effects, CG, sound and picture finishing.

What’s your job title?
Supervising Sound Editor/Re-Recording Mixer

What does that entail?
I supervise the sound editorial crew for motion pictures and TV series along with being the re-recording mixer on many of my projects. I put together the appropriate crew and schedule along with helping to finalize a budget through the bidding process. As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.

What would surprise people the most about what falls under that title?
How almost all the sound that someone hears in a movie has been replaced by a sound editor.

What’s your favorite part of the job?
Creatively collaborating with co-workers and hearing it all come together in the final mix.

What is your most productive time of day?
Whenever I can turn off my email and concentrate on mixing.

If you didn’t have this job, what would you be doing instead?
Fishing!

When did you know this would be your path?
I played drums in a rock band and got interested in sound at around 18 years old. I was always interested in the “sound” of an album along with the musicality. I found myself buying records based on who had produced and engineered them.

Can you name some recent projects?
Fosse/Verdon (FX) and Boys State, which just won the Grand Jury Prize at Sundance.

How has the industry changed since you began working?
Technology has improved workflows immensely and has helped us with the creative process. It has also opened up the door to accelerating schedules to the point of sacrificing artistic expression and detail.

Name three pieces of technology you can’t live without
Avid Pro Tools, my iPhone and my car’s navigation system.

How do you de-stress from it all?
I stand in the middle of a flowing stream fishing with my fly rod. If I catch something that’s a bonus!


Talking with 1917’s Oscar-nominated sound editing team

By Patrick Birk

Sam Mendes’ 1917 tells the harrowing story of Lance Corporals Will Schofield and Tom Blake, following the two young British soldiers on their perilous trek across no man’s land to deliver lifesaving orders to the Second Battalion of the Devonshire Regiment.

Oliver Tarney

The story is based on accounts of World War I by the director’s grandfather, Alfred Mendes. The production went to great lengths to create an immersive experience, placing the viewer alongside the protagonists in a painstakingly recreated world, woven together seamlessly, with no obvious cuts. The film’s sound department had to rise to the challenge of bringing this rarely portrayed sonic world to life.

We checked in with supervising sound editor Oliver Tarney and ADR/dialogue supervisor Rachael Tate, who worked out of London’s Twickenham Studios. Both Tarney and Tate are Oscar-nominated in the Sound Editing category. Their work was instrumental in transporting audiences to a largely forgotten time, helping to further humanize the monochrome faces of the trenches. I know that I will keep their techniques — from worldizing to recording more ambient Foley — in mind on the next project I work on.

Rachael Tate

A lot of the film is made up of quiet, intimate moments punctuated by extremely traumatic events. How did you decide on the most key sounds for those quiet moments?
Oliver Tarney: When Sam described how it was going to be filmed, it was expected that people would comment on how it was made from a technical perspective. But for Sam, it’s a story about the friendship between these two men and the courage and sacrifice that they show. Because of this, it was important to have those quieter moments when you aren’t just engaged in full-tilt action the whole time.

The other factor is that the film had no edits — or certainly no obvious edits (which actually meant many edits) — and was incredibly well-rehearsed. It would have been a dangerous thing to have had everything playing aggressively the whole way through. I think it would have been very fatiguing for the audience to watch something like that.

Rachael Tate: Also, you can’t rely on a cut in the normal way to inform pace and energy, so you are using things like music and sound to sort of ebb and flow the energy levels. So after the plane crash, for example, you’ll notice it goes very quiet, and also with the mine collapse, there’s a huge section of very little sound, and that’s on purpose so your ears can reacclimatize.

Absolutely, and I feel like that’s a good way to go — not to oversaturate the audience with the extreme end of the sound design. In other interviews, you said that you didn’t want it to seem overly processed.
Tarney: Well, we didn’t want the weapons to sound heroic in any way. We didn’t want it to seem like they were enjoying what they were doing. It’s very realistic; it’s brutal and harsh. Certainly, Schofield does shoot at people, but it’s out of necessity rather than enjoying his role there. In terms of dynamics, we broke the film up into a series of arcs, and we worked out that some would be five minutes, some would be nine minutes and so on.

In terms of the guns, we went more naturalistic in our recordings. We wanted the audience to feel everything from their perspective — that’s what Sam wanted with the entire film. Rather than having very direct recordings, we split our energies between that and very ambient recordings in natural spaces to make it feel more realistic. The distance that enemy fire was coming from is much more realistic than you would normally play in a film, and the same goes for the biplane recordings. We had microphones all across airfields to get that lovely phase-y kind of sound. For the dogfight with the planes, we sold the fact that you’re watching Blake and Schofield watching the dogfight rather than being drawn directly to the dogfight. I guess it was trying to mirror the visual, which would stick with the two leads.

Tate: We did the same with the crowd. We tried to keep it more realistic by using half actual Territorial Army guys, along with voice actors, rather than just having a crowdy-sounding crowd. When we put that into the mix, we also chose which bits to focus on — Sam described it as wanting it to be like a vignette, like an old photo. You have the brown edging that fades away in the corners. He wanted you to zoom in on them so much that the stuff around them is there, but at the level they would hear it. So, if there’s a crowd on the screen further back from them, in reality you wouldn’t really hear it. In most films you put something in everyone’s mouth, but we kept it pared right back so that you’re just listening to their voices and their breaths. This is similar to how it was done with the guns and effects.

You said you weren’t going for any Hollywood-type effects, but I did notice that there are some psychoacoustic cues, like when a bomb goes off in the bunker, and I think a tinnitus-type effect.
Tarney: There are a few areas where you have to go with a more conventional film language. When the plane’s very close — on the bridge perhaps — once he’s being fired upon, we start going into something that’s a little more conventional, and then we settle back into him. It was that thing that Sam mentioned: subjectivity, objectivity; you can flip between them a little bit, otherwise it becomes too linear.

Tate: It needed to pack a punch.

Foley plays a massive part in this production. Assuming you used period weaponry and vehicles?
Tarney: Sam was so passionate about this project. When you visited the sets, the detail was just beautiful. They set the bar in terms of what we had to achieve realism-wise. We had real World War I rifles and machine guns, both British and German, and biplanes. We also did wild track Foley at the first trench and the last trench: the muddy trench and then the chalk one at the end.

Tate: We even put Blakeys on the boots.

Tarney: Yes, we bought various boots with different hobnails and metal tips.

That’s what a Blakey is?
Tate: The metal things that they put in the bottom of their shoes so that they didn’t slip around.

Tarney: And we went over the various surfaces and found which worked the best. Some were real hobnail boots, and some had metal stuck into them. We still wanted each character to have a certain personality; you don’t want everything sounding the same. We also recorded them without the nails, so when we were in a quieter part of the film, it was more like a normal boot. If you’d had that clang, clang, clang all the way through the film…

Tate: It would throw your attention away from what they were saying.

Tarney: With everything we did on the Foley, it was important to keep focus on them the whole time. We would work in layers, and as we would build up to one of the bigger events, we’d start introducing the heavier, more detailed Foley and take away the more diffuse, mellow Foley.

You only hear webbing and that kind of stuff at certain times because it would be too annoying. We would start introducing that as they went into more dangerous areas. You want them to feel conspicuous, too — when they’re in no man’s land, you want the audience to think, “Wow, there are two guys, alone, with absolutely no idea what’s out there. Is there a sniper? What’s the danger?” So once you start building up that tension, you make them a little bit louder again, so you’re aware they are a target.

How much ADR did the film require? I’m sure there was a lot of crew noise in the background.
Tate: Yes, there was a lot of crew noise — there were only two lines of “technical” ADR, which is when a line needs to be redone because the original could not be used or cleaned sufficiently. My priority was to try and keep as much production as possible. Because we started a couple of weeks after shooting started, as they were piecing it together, it was as if it was already locked. It’s not the normal way.

With this, I had the time to go deep and spectrally remove all the crew feet from the mics because they had low-end thuds on their clip mics, which couldn’t be avoided. The recordist, Stuart Wilson, did a great job, giving me a few options with the clip mics, and he was always trying to get a boom in wherever he could.

He had multiple lavaliers on the actors?
Tate: Yes, he had up to three on both those guys most of the time, and we went with the one on their helmets. It was like a mini boom. But, occasionally, they would get wind on them and stuff like that. That’s when I used iZotope RX 7. It was great having the time to do it. Ordinarily people might say, “Oh no, let’s ADR all the breaths there,” but I could get the breaths out. When you hear them breathing, that’s what they were doing at the time. There’s so much performance in them, I would hate to get them standing in a studio in London, you know, in jeans, trying to recreate that feeling.

So even if there’s slight artifacting, the littlest bit, you’d still go with that over ADR?
Tate: Absolutely. I would hope there’s not too much there though.

Tarney: Film editor Lee Smith and Sam have such a great working relationship; they really were on the same page putting this thing together. We had a big decision to make early on: Do we risk being really progressive and organize Foley recording sessions whilst they were still filming? Because, if everything was going according to plan, they were going to be really hungry for sound, since there was no cutting once they had chosen the takes. If it didn’t go to plan, then we’d be forever swapping out seven-minute takes, which would be a nightmare to redo. We took a gamble and budgeted to spend the resources front-heavy, and it worked out.

Tate: Lee Smith used to be a sound guy, which didn’t hurt.

I saw how detailed they were with the planning. The model of the town for figuring out the trajectory of the flare for lighting, for example.
Tate: They also mapped out the trenches so they were long enough to cover the amount of dialogue the actors were going to say — so the trenches went on for 500 yards. Before that, they were on theater stages with cardboard boxes to represent trenches, walking through them again and again. Everything was very well-planned.

Apart from dialogue and breaths, were there any pleasant surprises from the production audio that you were able to use in the final cut?
Tate: In the woods, toward the end of the film, Schofield stumbles out of the river and hears singing, and the singing that you hear is the guy doing it live. That’s the take. We didn’t get him in to sing and then put it on; that’s just his clip mic, heavily affected. We actually took his recording out into the New Forest, which is south of London.

A worldizing-type technique?
Tate: Yes, we found a remote part, and we played it and recorded it from different distances, and we had that woven against the original with a few plugins on it for the reverbs.

Tarney: We don’t know if Schofield is concussed and if he’s hallucinating. So we really wanted it to feel sort of ethereal, sort of wafting in and out on the wind — is he actually hearing this or not?

Tate: Yeah, we played the first few lines out of sequence, so you can’t really catch if there’s a melody. Just little bits on the breeze so that you’re not even quite sure what you’re hearing at that point, and it gradually comes to a more normal-sounding tune.

Tarney: Basically, that’s the thing with the whole film; things are revealed to the audience as they’re revealed to the lead characters.

Tate: There are no establishing shots.

Were there any elements of the sound design you wouldn’t expect to be in there that worked for one reason or another?
Tarney: No, there’s nothing… we were pretty accurate. Even the first thing you hear in the film — the backgrounds that were recorded in April.

Tate: In the field.

Tarney: Rachael and I went to Ypres in Belgium to visit the World War I museum and immerse ourselves in that world a little bit.

Tate: We didn’t really know that much about World War I. It wasn’t taught in my school, so I really didn’t know anything before I started this; we needed to educate ourselves.

Can you talk about the loop groups and dialing down to the finest details in terms of the vocabulary used?
Tate: Oh, God, I’ve got so many books, and we got military guys for that sort of flat way they operate. You can’t really explain that fresh to a voice actor and get them to do it properly. But the voice actors helped those guys perform and get out of their shells, and the military guys helped the voice actors in showing them how it’s done.

I gave them all many sheets of key words they could use, or conversation starters, so that they could improvise but stay on the right track in terms of content. Things like slang, poems from a cheap newspaper that was handed out to the soldiers. There was an officer’s manual, so I could tell them the right equipment and stuff. We didn’t want to get anything wrong.

That reminds me of this series of color photographs taken in the early 1900s in Russia. Automatically, it brings you so much closer to life at that point in time. Do you feel like you were able to achieve that via the sound design of this film?
Tarney: I think the whole project did that. When you’ve watched a film every day for six months, day in and day out, you can’t help but think about that era more, and it’s slightly embarrassing that it’s one generation past your grandparents.

How much more worldizing did you do, apart from the nice moment with the song?
Tarney: The Foley that you hear in the trench at the beginning and in the trench at the end is a combination between worldizing and sound designer Mike Fentum’s work. We both went down about three weeks before we started because Stuart Wilson gave us a heads up that they were wrapping at that location, so we spoke to the producer, and he gave us access.

So, in terms of worldizing, it’s not quite worldizing in the conventional sense of taking a recording and then playing it in a space. We actually went to the space and recorded the feet in that space, and the Foley supervisor Hugo Adams went to Salisbury Plain (the chalk trench at the end), and those were the first recordings that we edited and gave to Lee Smith. And then, we would get the two Foley artists that we had — Andrea King and Sue Harding — to top that with a performed pass against a screen. The whole film is layered between real recordings and studio Foley, and it’s the blend of natural presence and the performed studio Foley, with all the nuance and detail that you get from that.

Tate: Similarly, there’s the crowd that we recorded out on a field in the back lot of Shepperton with a 5.0 array; we did as much as we could without a screen, with them just acting and going through the motions. We had an authentic World War I stretcher, which we used with hilarious consequences. We got them to run up and down carrying their friends on stretchers and things like that, and passing enormous tables to each other and stuff, so that we had the energy of it. There is something about recording outside and that sort of natural slap that you get off the buildings. It was embedded with production quite seamlessly, really, and you can’t really get the same from a studio. We had to do the odd individual line in there, but most of it was done out in a field.

When need be, were you using things like convolution reverbs, such as Audio Ease Altiverb, in the mix?
Tarney: Absolutely. As good as the recordings were, it’s only when you put it against picture that you really understand what it is you need to achieve. So we would definitely augment with a lot — Altiverb is a favorite. Re-recording mixer Mark Taylor and I, we would use that a lot to augment and just change perspective a little bit more.

Can you talk about the Atmos mix and what it brought to the film?
Tarney: I’ve worked on many films with Atmos, and it’s a great tool for us. Sam’s very performance-orientated and would like things to be more screen-focused. The minute you have to turn around, you’ve lost that connection with the lead characters. So, in general, we kept things a little more front-loaded than we might have done with another director, but I really liked the results. It’s actually all the more shocking when you hear the biplane going overhead when they’re in no man’s land.

Sam wanted to know all the way through, “Can I hear it in 5.1, 7.1 and Atmos?” We’d make sure that in the three mixes — other than the obvious — we had another plane coming over from behind. There’s not a wild difference in Atmos. The low end is nicer, and the discrete surrounds play really well, but it’s not a showy kind of mix in that sense. That would not have been true to everything we were trying to achieve, which was something real.
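Checking one mix across 5.1, 7.1 and Atmos implies fold-downs between formats. A common 7.1-to-5.1 fold-down, for instance, sums the side and rear surround pairs into the 5.1 surrounds; the gains below are a typical textbook choice, not necessarily what the 1917 team used.

```python
# Sketch of a common 7.1-to-5.1 fold-down. Exact gains vary by spec and
# facility; -3 dB per contribution is a typical textbook choice.
import numpy as np

G = 10 ** (-3 / 20)   # -3 dB as a linear gain

def downmix_71_to_51(ch):
    """ch: dict of equal-length arrays L, R, C, LFE, Lss, Rss, Lsr, Rsr."""
    return {
        "L": ch["L"], "R": ch["R"], "C": ch["C"], "LFE": ch["LFE"],
        "Ls": G * (ch["Lss"] + ch["Lsr"]),   # side + rear left into left surround
        "Rs": G * (ch["Rss"] + ch["Rsr"]),
    }
```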

So Sam Mendes knows sound?
Tarney: He’s incredibly hungry to understand everything, in the best way possible. He’s very good at articulating what he wants and makes it his business to understand everything. He was fantastic. We would play him a section in 5.1, 7.1 and Atmos, and he would describe what he liked and disliked about each format, and we would then try to make each format have the same value as the other ones.


Patrick Birk is a musician and sound engineer at Silver Sound, a boutique sound house based in New York City.


CAS Awards recognize GOT, Fleabag, Ford v Ferrari, more

The CAS Awards were held this past weekend, with the sound mixing team from Ford v Ferrari — Steven A. Morrow CAS, Paul Massey CAS, David Giammarco CAS, Tyson Lozensky, David Betancourt and Richard Duarte — taking home the Cinema Audio Society Award for Outstanding Sound Mixing Motion Picture – Live Action.

Game of Thrones – The Bells

Top honors for Motion Picture – Animated went to Toy Story 4 and the sound mixing team of Doc Kane CAS, Vince Caro CAS, Michael Semanick CAS, Nathan Nance, David Boucher and Scott Curtis. The CAS Award for Outstanding Sound Mixing Motion Picture – Documentary went to Making Waves: The Art of Cinematic Sound and the team of David J. Turner, Tom Myers, Dan Blanck and Frank Rinella.

Held in the Wilshire Grand Ballroom of the InterContinental Los Angeles Downtown, the awards were presented in seven categories for Outstanding Sound Mixing Motion Picture and Television and two Outstanding Product Awards. The evening saw CAS president Karol Urban pay tribute to recently retired CAS executive board member Peter R. Damski for his years of service to the organization. The contributions of re-recording mixer Tom Fleischman, CAS, were recognized as he received the CAS Career Achievement Award. Presenter Gary Bourgeois spoke to Fleischman’s commitment to excellence, demonstrated in a career that spans over 40 years, nearly 200 films and collaborations with dozens of notable directors.

James Mangold

James Mangold received the CAS Filmmaker Award in a presentation that included remarks by re-recording mixer Paul Massey, CAS, who was joined by Harrison Ford. Mangold had even more to celebrate as he watched his sound team take top honors for Outstanding Achievement in Sound Mixing Motion Picture – Live Action.

Here is the complete list of winners:

MOTION PICTURE – LIVE ACTION

Ford v Ferrari

Ford v Ferrari team

Production Mixer – Steven A. Morrow CAS 

Re-recording Mixer – Paul Massey CAS 

Re-recording Mixer – David Giammarco CAS 

Scoring Mixer – Tyson Lozensky

ADR Mixer – David Betancourt 

Foley Mixer – Richard Duarte

MOTION PICTURE – ANIMATED 

Toy Story 4

Original Dialogue Mixer – Doc Kane CAS

Original Dialogue Mixer – Vince Caro CAS

Re-recording Mixer – Michael Semanick CAS 

Re-recording Mixer – Nathan Nance

Scoring Mixer – David Boucher

Foley Mixer – Scott Curtis

 

MOTION PICTURE – DOCUMENTARY

Making Waves: The Art of Cinematic Sound

Production Mixer – David J. Turner 

Re-recording Mixer – Tom Myers 

Scoring Mixer – Dan Blanck

ADR Mixer – Frank Rinella

 

TELEVISION SERIES – 1 HOUR

Game of Thrones: The Bells

Production Mixer – Ronan Hill CAS 

Production Mixer – Simon Kerr 

Production Mixer – Daniel Crowley 

Re-recording Mixer – Onnalee Blank CAS 

Re-recording Mixer – Mathew Waters CAS 

Foley Mixer – Brett Voss CAS

TELEVISION SERIES – 1/2 HOUR 

TIE

Barry: ronny/lily

Production Mixer – Benjamin A. Patrick CAS 

Re-recording Mixer – Elmo Ponsdomenech CAS 

Re-recording Mixer – Jason “Frenchie” Gaya 

ADR Mixer – Aaron Hasson

Foley Mixer – John Sanacore CAS

 

Fleabag: Episode #2.6

Production Mixer – Christian Bourne 

Re-recording Mixer – David Drake 

ADR Mixer – James Gregory

 

TELEVISION MOVIE or LIMITED SERIES

Chernobyl: 1:23:45

Production Mixer – Vincent Piponnier 

Re-recording Mixer – Stuart Hilliker 

ADR Mixer – Gibran Farrah

Foley Mixer – Philip Clements

 

TELEVISION NON-FICTION, VARIETY or MUSIC SERIES or SPECIALS

David Bowie: Finding Fame

Production Mixer – Sean O’Neil 

Re-recording Mixer – Greg Gettens

 

OUTSTANDING PRODUCT – PRODUCTION

Sound Devices, LLC

Scorpio

 

OUTSTANDING PRODUCT – POST PRODUCTION 

iZotope

Dialogue Match

 

STUDENT RECOGNITION AWARD

Bo Pang

Chapman University

 

Main Image: Presenters Whit Norris and Elisha Cuthbert with award winners Onnalee Blank, Ronan Hill and Brett Voss at the CAS Awards. (Tyler Curtis/ABImages)

 

 


Wylie Stateman on Once Upon a Time… in Hollywood’s Oscar nod for sound

By Beth Marchant

To director Quentin Tarantino, sound and music are primal forces in the creation of his idiosyncratic films. Often using his personal music collection to jumpstart his initial writing process and later to set a film’s tone in the opening credits, Tarantino always gives his images a deep, multi-sensory well to swim in. According to his music supervisor Mary Ramos, his bold use of music is as much a character as each film’s set of quirky protagonists.

Wylie Stateman – Credit: Andrea Resnick

Less showy than those memorable and often nostalgic set-piece songs, the sound design that holds them together is just as critically important to Tarantino’s aesthetic. In Once Upon a Time… in Hollywood it even replaces the traditional composed score. That’s one of many reasons why the film’s supervising sound editor Wylie Stateman, a long-time Tarantino collaborator, relished his latest Oscar-nominated project with the director (he previously received nominations for Django Unchained and Inglourious Basterds and has a lifetime total of nine Oscar nominations).

Before joining team Tarantino, Stateman sound designed some of the most iconic films of the ‘80s and ‘90s, including Tron, Footloose, Ferris Bueller’s Day Off (among 15 films he made with John Hughes), Born on the Fourth of July and Jerry Maguire. He also worked for many years with Oliver Stone, winning a BAFTA for his sound work on JFK. He went on to cofound the Topanga, California-based sound studio Twentyfourseven.

We talked to Stateman about how he interpreted Tarantino’s sound vision for his latest film — about a star having trouble evolving into new roles in Hollywood, and his stuntman — revealing just how closely the soundtrack is connected to every camera move and cut.

How does Tarantino’s style as a director influence the way you approach the sound design?
I believe that sound is a very important department within the process of making any film. And so, when I met Quentin many years ago, I was meeting him under the guise that he wanted help and he wanted somebody who could focus their time, experience and attention on this very specific department called sound.

I’ve been very fortunate, especially on Quentin’s films, to also have a great production sound mixer and great rerecording mixers. We have both sides of the process in really tremendously skilled hands and tremendously experienced hands. Mark Ulano, our production sound mixer, won an Oscar for Titanic. He knows how to deal with dialogue. He knows how to deal with a complex set, a set where there are a lot of moving parts.

On the other side of that, we have Mike Minkler doing the final re-recording mixing. Mike, who I worked with on JFK, is tremendously skilled with multiple Oscars to his credit. He’s just an amazing creative in terms of re-recording mixing.

The role that I like to play as supervising sound editor and designer is figuring out how to speak to the filmmaker in terms of sound. For this film, we realized we could drive the soundtrack without a composer by using the chosen songs and KHJ radio, selecting bits and pieces from the shows of infamous DJ “Humble Harve” or from clips of all the other DJs on KHJ radio who really defined 1969 in Los Angeles.

And as the film shows, most people heard them over the car radio in car-centric LA.
The DJs were powerful messengers of popular culture. They were powerful messengers of what was happening in the minds and in the streets and in popular culture of that time. That was Quentin’s idea. When he wrote the script, he had written into it all of the KHJ radio segments, and he listens a lot, and he’s a real student of the filmmaking process and a real master.

On the student side, he’s constantly learning and he’s constantly looking and he’s constantly listening. On the master side, he then applies that to the characters that he wants to develop and those situations that he’s looking to be at the base and basis of his story. So, basically, Quentin comes to me for a better understanding of his intention in terms of sound, and he has a tremendous understanding to begin with. That’s what makes it so exciting.

When talking to Quentin and his editor Fred Raskin, who are both really deeply knowledgeable filmmakers, it can be quite challenging to stay in front of them and/or to chase behind them. It’s usually a combination of the two. But Quentin is a very generous collaborator, meaning he knows what he wants, but then he’s able to stop, listen and evaluate other ideas.

How did you find all of the clips we hear on the various radios?
Quentin went through hundreds of hours of archival material. And he has a tremendous working knowledge of music to begin with, and he’s also a real student of that period.

Can you talk about how you approached the other elements of specific, Tarantino-esque sound, like Cliff crunching on a celery stick in that bar scene?
Quentin’s movies are bold in the sense of some of the subject matter that he tackles, but they’re highly detailed and also very much inside his actors’ heads. So when you talk about crunching on a piece of celery, I interpret everything that Quentin imparts on his characters as having some kind of potential vocabulary in terms of sound. And that vocabulary… it applies to the camera. If the camera hides behind something and then comes out and reveals something or if the camera’s looking at a big, long shot — like Cliff Booth’s walk to George Spahn’s house down that open area in the Spahn Ranch — every one of those moves has a potential sound component and every editorial cut could have a vocabulary of sound to accompany it.

We also use those [combinations] to alter time, whether it’s to jump forward or jump back or just crash in. He does a lot of very explosive editing moves and all of that has an audio vocabulary. It’s been quite interesting to work with a filmmaker that sees picture and sound as sort of a romance and a dance. And the sound could lead the picture, or it could lag the picture. The sound can establish a mood, or it can justify a mood or an action. So it’s this constant push-pull.

Robert Bresson, the father of the French New Wave, basically said, “When the ear leads the eye, the eye becomes impatient. When the eye leads the ear, the ear becomes impatient. Use those impatiences.” So what I’m saying is that sound and pictures are this wonderful choreographed dance. Stimulate people’s ears and their eyes are looking for something; stimulate their eyes and their ears are looking for something, and using those together is a really intimate and very powerful tool that Quentin, I think, is a master at.

How does the sound design help define the characters of Rick Dalton (Leonardo DiCaprio) and Cliff Booth (Brad Pitt)?
This is essentially a buddy movie. Rick Dalton is the insecure actor who’s watching a certain period — one in which he had great success and comfort — transition into a new period. You’re going from the John Wayne/True Grit way of making movies to Butch Cassidy and the Sundance Kid or Easy Rider, and Rick is not really that comfortable making this transition. His character is full of that kind of anxiety.

The Cliff Booth character is very internally disturbed. He’s an unsuccessful crafts/below-the-line person who’s got personal issues and is kind of typical of a character that’s pretty well-known in the filmmaking process. Rick Dalton’s anxious world is about heightened senses. But when he forgets his line during the bar scene on the Lancer set, the world doesn’t become noisy. The world becomes quiet. We go to silence because that’s what’s inside his head. He can’t remember the line and it’s completely silent. But you could play that same scene 180 degrees in the opposite direction and make him confused in a world of noise.

The year 1969 was very important in the history of filmmaking, and that’s another key to Rick’s and Cliff’s characters. If you look at 1969, it was the turning point in Hollywood when indie filmmaking was introduced. It was also the end of a great era of traditional studio fare and traditional acting, and was more defined by the looser, run-and-gun style of Easy Rider. In a way, the Peter Fonda/Dennis Hopper dynamic of Hopper’s film is somewhat similar to that of Rick Dalton and Cliff Booth.

I saw Easy Rider again recently and the ending hit me like a ton of bricks. The cultural panic, and the violence it invokes, is so palpable because you realize that clash of cultures never really went away; it’s still with us all these years later. Tarantino definitely taps into that tension in this film.
It’s funny that you say that because my wife and I went to the Cannes Film Festival with the team, and they were playing Easy Rider on the beach on a giant screen with a thousand seats in the sand. We walked up on it and we stood there for literally an hour and a half transfixed, just watching it. I hadn’t seen it in years.

What a great use of music and location photography! And then, of course, the story and the ending; it’s like, wow. It’s such a huge departure from True Grit and the generation that made that film. That’s what I love about Quentin, because he plays off the tension between those generations in so many ways in the film. We start out with Al Pacino, and they’re drinking whiskey sours, and then we go all the way through the gamut of what 1969 really felt like to the counterculture.

Was there anything unusual that you did in the edit to manipulate sound to make a scene work?
Sound design is a real design-level responsibility. We invent sound. We go to the libraries and we go to great lengths to record things in nature or wherever we can find it. In this case, we recorded all the cars. We apply a very methodical approach to sound.

Sound design, for me, is the art of shaping noise to suit the picture and enhance the story; great sound lives somewhere between the science of audio and the subjectivity of storytelling. The science part is really well-known, and it’s been perfected over many, many years by lots of talented artists and artisans. But the story part is what excites me, and it’s what excites Quentin. So it becomes what we don’t do that’s so interesting, like using silence instead of noise or creating a soundtrack without a composer. I don’t think you miss having score music. When we couldn’t figure out a song, we made sound design elements. So, yeah, we would make tension sounds.

Shaping noise is not something I could explain to you with an “an eye of newt plus a tail of yak” secret recipe. It’s a feeling. It’s just working with audio, shaping sound effects and noise to become imperceptibly conjoined with music. You can’t tell where the sound design is beginning and ending and where it transfers into more traditional song or music. That is the beauty of Quentin’s films. In terms of sound, the audio has shapes that are very musical.

His deep-cut versions of songs are so interesting, too. Using “California Dreamin’” by The Mamas & the Papas would have been way too obvious, so he uses a José Feliciano cover of it and puts the actual Mamas and Papas into the film as walk-on characters.
Yeah. I love his choice of music. From Sharon and Roman listening to “Hush” by Deep Purple in the convertible, their hair flying, to going straight into “Son of a Lovin’ Man” after they arrive at the Playboy Mansion. Talk about 1969 and setting it off! It’s not from the San Francisco catalog; it’s just this lovely way that Quentin imagines time and can relate to it as sound and music. The world as it relates to sound is very different than the world of imagery. And the type of director that Quentin is, he’s a writer, he’s a director, and he’s a producer, so he really understands the coalescing of these disciplines.

You haven’t done a lot of interviews in the past. Why not?
I don’t do what I do to call attention to either myself or my work. Over the first 35 years of my career, there’s very little record of any conversation that I had outside of my team and directly with my filmmakers. But at this point in life, when we’re on the cusp of this huge streaming technology shift and everything is becoming more politically sensitive, with deep fakes in both image and audio, I think it’s time sound had somebody step up and point out, “Hey, we are invisible. We are transitory.” Meaning, when you stop the electricity going to the speakers, the sound disappears, which is kind of an amazing thing. You can pause the picture and you can study it. Sound only exists in real time. It’s just the vibration in the air.

And to be clear, I don’t see motion picture sound as an art form. I see it, rather, as a form of art and it takes a long time to become a sculptor in sound who can work in a very simple style. After all, it’s the simplest lines that just blow your mind!

What blew your mind about this film, either while you worked on it or when you saw the finished product?
I really love the whole look of the film. I love the costumes, and I have great respect for the team that Quentin consistently pulls together. When I work on Quentin’s films, I never turn around and find somebody that doesn’t have a great idea or deep experience in their craft. Everywhere you turn, you bump into extraordinary talent.

Dakota Fanning’s scene at the Spahn Ranch… I mean, wow! Knocks my socks off. That’s really great stuff. It’s a remarkable thing to work with a director who has that kind of love for filmmaking and that allows for really talented people to also get in the sandbox and play.


Beth Marchant is a veteran journalist focused on the production and post community and contributes to “The Envelope” section of the Los Angeles Times. Follow her on Twitter @bethmarchant.

Behind the Title: Sound Lounge ADR mixer Pat Christensen

This ADR mixer was a musician as a kid and took engineering classes in college, making him perfect for this job.

Name: Pat Christensen

Company: Sound Lounge (@soundloungeny)

What’s your job title?
ADR mixer

What does Sound Lounge do?
Sound Lounge is a New York City-based audio post facility. We provide sound services for TV, commercials, feature films, television series, digital campaigns, games, podcasts and other media. Our services include sound design, editing and mixing; ADR recording and voice casting.

What does your job entail?
As an ADR mixer, I re-record dialogue for film and television. ADR is necessary when dialogue cannot be recorded properly on set, when additional dialogue is needed or for creative reasons. My stage is set up differently from a standard mix stage, as it includes a voiceover booth for actors.

We also have an ADR stage with a larger recording environment to support groups of talent. The stage also allows us to enhance sound quality and record performances with greater dynamics, high and low. The recording environment is designed to be “dead,” that is, without ambient sound. That results in a clean recording, so when it gets to the next stage, the mixer can add reverb or other processing to make it fit the environment of the finished soundtrack.
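
To picture that downstream step: one common way a mixer fits a dry ADR line into a scene is convolution reverb, convolving the clean recording with an impulse response of (or resembling) the target space. Here is a minimal Python sketch of the idea, assuming mono WAV files at the same sample rate; the file names are hypothetical, and this is an illustration rather than Sound Lounge’s actual toolchain:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    # Load the dry ADR take and a room impulse response (hypothetical files).
    rate, dry = wavfile.read("adr_line_dry.wav")
    ir_rate, ir = wavfile.read("room_ir.wav")
    assert rate == ir_rate, "resample one file so the sample rates match"

    # Convolving the dry voice with the impulse response places it in the room.
    dry = dry.astype(np.float64)
    ir = ir.astype(np.float64)
    wet = fftconvolve(dry, ir)

    # Blend dry and wet, then normalize to avoid clipping on write-out.
    padded_dry = np.pad(dry, (0, len(wet) - len(dry)))
    mix = 0.7 * padded_dry + 0.3 * wet
    mix /= np.max(np.abs(mix))
    wavfile.write("adr_line_placed.wav", rate, (mix * 32767).astype(np.int16))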

What would people find most surprising about your job?
People who aren’t familiar with ADR are often surprised by how it’s possible to make an actor’s voice lipsync perfectly with the image on screen and be indistinguishable from dialogue recorded on the day.

What’s your favorite part of the job?
Interacting with people — the sound team, the director or the showrunner, and the actors. I enjoy helping directors in guiding the actors and being part of the creative process. I act as a liaison between the technical and creative sides. It’s fun and it’s different every day. There’s never a boring session.

What’s your least favorite?
I don’t know if there is one. I have a great studio and all the tools that I need. I work with good people. I love coming to work every day.

What’s your most productive time of the day?
Whenever I’m booked. It could be 9am. It could be 7am. I do night sessions. When the client needs the service, I am ready to go.

If you didn’t have this job, what would you be doing instead?
In high school, I played bass in a punk rock band. I learned the ins and outs of being a musician while taking classes in engineering. I also took classes in automotive technology. If I’d gone that route, I wouldn’t be working in a muffler shop; I’d be fine-tuning Formula 1 engines.

How early on did you know that sound would be your path?
My mom bought me a four-string Washburn bass for Christmas when I was in the eighth grade, but even then I was drawn to the technical side. I was super interested in learning about audio consoles and other gear and how they were used to record music. Luckily, my high school offered a radio and television class, which I took during my senior year. I fell in love with it from day one.

Silicon Valley

What are some of your recent projects?
I worked on the last season of HBO’s Silicon Valley and the second season of CBS’ God Friended Me. We also did Starz’s Power and the new Adam Sandler movie Palm Springs. There are many more credits on my IMDb page. I try to keep it up to date.

Is there a project that you’re most proud of?
Power. We’ve done all seven seasons. It’s been exciting to watch how successful that show has become. It’s also been fun working with the actors and getting to know many of them on a personal level. I enjoy seeing them whenever they come in. They trust me to bridge the gap between the booth and the original performance and deliver something that will be seen, and heard, by millions of people. It’s very fulfilling.

Name three pieces of technology you cannot live without.
A good microphone, a good preamp and good speakers. The speakers in my studio are ADAM A7Xs.

What social media channels do you follow?
Instagram and Facebook.

What do you do to relax?
I play hockey. On weekends, I enjoy getting on the ice, expending energy and playing hard. It’s a lot of fun. I also love spending time with my family.

67th MPSE Golden Reel Winners

By Dayna McCallum

The Motion Picture Sound Editors (MPSE) Golden Reel Awards shared the love among a host of films when handing out awards this past weekend at their 67th annual ceremony.

The feature film winners included Ford v Ferrari for effects/Foley, 1917 for dialogue/ADR, Rocketman for the musical category, Jojo Rabbit for musical underscore, Parasite for foreign-language feature, Toy Story 4 for animated feature, and Echo in the Canyon for feature documentary.

The Golden Reel Awards, recognizing outstanding achievement in sound editing, were presented in 23 categories, including feature films, long-form and short-form television, animation, documentaries, games, special venue and other media.

Academy Award-nominated producer Amy Pascal (Little Women) surprised Marvel’s Victoria Alonso when she presented her with the 2020 MPSE Filmmaker Award (re-recording mixer Kevin O’Connell and supervising sound editor Steven Ticknor were honorary presenters).

The 2020 MPSE Career Achievement Award was presented to Academy Award-winning supervising sound editor Cecelia “Cece” Hall by two-time Academy Award-winning supervising sound editor Stephen H. Flick.

“Business models, formats and distribution are all changing,” said MPSE president-elect Mark Lanza during the ceremony. “Original scripted TV shows have set a record in 2019. There were 532 original shows this year. This number is expected to surge in 2020. Our editors and supervisors are paving the way and making our product and the user experience better every year.”

Here is the complete list of winners:

Outstanding Achievement in Sound Editing – Animation Short Form

3 Below “Tales of Arcadia”

Netflix

Supervising Sound Editor: Otis Van Osten
Sound Designer: James Miller
Dialogue Editors: Jason Oliver, Carlos Sanches
Foley Artists: Aran Tanchum, Vincent Guisetti
Foley Editor: Tommy Sarioglou 

Outstanding Achievement in Sound Editing – Non-Theatrical Animation Long Form

Lego DC Batman: Family Matters

Warner Bros. Home Entertainment

Supervising Sound Editors: Rob McIntyre, D.J. Lynch
Sound Designer: Lawrence Reyes
Sound Effects Editor: Ezra Walker
ADR Editor: George Peters
Foley Editors: Aran Tanchum, Derek Swanson
Foley Artist: Vincent Guisetti 

Outstanding Achievement in Sound Editing – Feature Animation

Toy Story 4

Walt Disney Studios Motion Pictures

Supervising Sound Editor: Coya Elliott
Sound Designer: Ren Klyce
Supervising Dialogue Editor: Cheryl Nardi
Sound Effects Editors: Kimberly Patrick, Qianbaihui Yang, Jonathon Stevens
Foley Editors: Thom Brennan, James Spencer
Foley Artists:  John Roesch, MPSE, Shelley Roden, MPSE

Outstanding Achievement in Sound Editing – Non-Theatrical Documentary

Serengeti

Discovery Channel

Supervising Sound Editor: Paul Cowgill
Foley Editor: Peter Davies 
Music Editor: Alessandro Baldessari
Foley Artist: Paul Ackerman 

Outstanding Achievement in Sound Editing – Feature Documentary

Echo in the Canyon

Greenwich Entertainment

Sound Designer: Robby Stambler, MPSE
Dialogue Editor:  Sal Ojeda, MPSE

Outstanding Achievement in Sound Editing – Computer Cinematic

Call of Duty: Modern Warfare (2019)

Activision Blizzard
Audio Director: Stephen Miller
Supervising Sound Editor: Dave Rowe
Supervising Sound Designers: Charles Deenen, MPSE, Csaba Wagner
Supervising Music Editor:  Peter Scaturro

Lead Music Editor: Ted Kocher
Principal Sound Designer: Stuart Provine
Sound Designers: Bryan Watkins, Mark Ganus, Eddie Pacheco, Darren Blondin
Dialogue Lead: Dave Natale
Dialogue Editors: Chrissy Arya, Michael Krystek
Sound Editors: Braden Parkes, Nick Martin, Tim Walston, MPSE, Brent Burge, Alex Ephraim, MPSE, Samuel Justice, MPSE
Music Editors: Anthony Caruso, Scott Bergstrom, Adam Kallibjian, Ernest Johnson, Tao-Ping Chen, James Zolyak, Sonia Coronado, Nick Mastroianni, Chris Rossetti
Foley Artists: Gary Hecker, MPSE, Rick Owens, MPSE

Outstanding Achievement in Sound Editing – Computer Interactive Game Play
Call of Duty: Modern Warfare (2019)
Infinity Ward
Audio Director: Stephen Miller
Senior Lead Sound Designer: Dave Rowe
Senior Lead Technical Sound Designer: Tim Stasica
Supervising Music Editor: Peter Scaturro
Lead Music Editor: Ted Kocher
Principal Sound Designer: Stuart Provine
Senior Sound Designers: Chris Egert, Doug Prior
Supervising Sound Designers: Charles Deenen, MPSE, Csaba Wagner
Sound Designers: Chris Staples, Eddie Pacheco, MPSE, Darren Blondin, Andy Bayless, Ian Mika, Corina Bello, John Drelick, Mark Ganus
Dialogue Leads: Dave Natale, Bryan Watkins, Adam Boyd, MPSE, Mark Loperfido
Sound Editors: Braden Parkes, Nick Martin, Brent Burge, Tim Walston, Alex Ephraim, Samuel Justice
Dialogue Editors: Michael Krystek, Chrissy Arya, Cesar Marenco
Music Editors: Anthony Caruso, Scott Bergstrom, Adam Kallibjian, Ernest Johnson, Tao-Ping Chen, James Zolyak, Sonia Coronado, Nick Mastroianni, Chris Rossetti

Foley Artists: Gary Hecker, MPSE, Rick Owens, MPSE

Outstanding Achievement in Sound Editing – Non-Theatrical Feature

Togo

Disney+

Supervising Sound Editors: Odin Benitez, MPSE, Todd Toon, MPSE
Sound Designer: Martyn Zub, MPSE
Dialogue Editor: John C. Stuver, MPSE
Sound Effects Editors: Jason King, Adam Kopald, MPSE, Luke Gibleon, Christopher Bonis
ADR Editor: Dave McMoyler
Supervising Music Editor: Peter “Oso” Snell, MPSE
Foley Artists: Mike Horton, Tim McKeown
Supervising Foley Editor: Walter Spencer

Outstanding Achievement in Sound Editing – Special Venue

Vader Immortal: A Star Wars VR Series “Episode 1”

Oculus

Supervising Sound Editors: Kevin Bolen, Paul Stoughton
Sound Designer: Andy Martin
Supervising ADR Editors: Gary Rydstrom, Steve Slanec
Dialogue Editors: Anthony DeFrancesco, Christopher Barnett, MPSE, Benjamin A. Burtt, MPSE
Foley Artists: Shelley Roden, MPSE, Jana Vance

Outstanding Achievement in Sound Editing – Foreign Language Feature

Parasite

Neon

Supervising Sound Editor: Choi Tae Young
Sound Designer: Kang Hye Young
Supervising ADR Editor: Kim Byung In
Sound Effects Editor: Kang Hye Young
Foley Artists: Park Sung Gyun, Lee Chung Gyu
Foley Editor: Shin I Na
 

Outstanding Achievement in Sound Editing – Live Action Under 35:00

Barry “ronny/lily”

HBO

Supervising Sound Editors:  Sean Heissinger, Matthew E. Taylor
Sound Designer:  Rickley W. Dumm, MPSE
Sound Effects Editor: Mark Allen
Dialogue Editors:  John Creed, Harrison Meyle
Music Editor:  Michael Brake
Foley Artists:  Alyson Dee Moore, Chris Moriana 
Foley Editors:  John Sanacore, Clayton Weber

Outstanding Achievement in Sound Editing – Episodic Short Form – Music

Wu Tang: An American Saga “All In Together Now”

Hulu 

Music Editor: Shie Rozow

Outstanding Achievement in Sound Editing – Episodic Short Form – Dialogue/ADR

Modern Love “Take Me as I Am”

Prime Video
Supervising Sound Editor: Lewis Goldstein
Supervising ADR Editor: Gina Alfano, MPSE
Dialogue Editor:  Alfred DeGrand

Outstanding Achievement in Sound Editing – Episodic Short Form – Effects / Foley

The Mandalorian “Chapter One”

Disney+

Supervising Sound Editors: David Acord, Matthew Wood
Sound Effects Editors: Bonnie Wild, Jon Borland, Chris Frazier, Pascal Garneau, Steve Slanec
Foley Editor: Richard Gould
Foley Artists: Ronni Brown, Jana Vance

Outstanding Achievement in Sound Editing – Student Film (Verna Fields Award)

Heatwave

National Film and Television School

Supervising Sound Editor: Kevin Langhamer

Outstanding Achievement in Sound Editing – Single Presentation

El Camino: A Breaking Bad Movie

Netflix

Supervising Sound Editors: Nick Forshager, Todd Toon, MPSE
Supervising ADR Editor: Kathryn Madsen
Sound Effects Editor: Luke Gibleon
Dialogue Editor: Jane Boegel
Foley Editor: Jeff Cranford
Supervising Music Editor: Blake Bunzel
Music Editor: Jason Tregoe Newman
Foley Artists: Gregg Barbanell, MPSE, Alex Ullrich 

Outstanding Achievement in Sound Editing – Episodic Long Form – Music

Game of Thrones “The Long Night”

HBO 

Music Editor: David Klotz

Outstanding Achievement in Sound Editing – Episodic Long Form – Dialogue/ADR

Chernobyl “Please Remain Calm”

HBO

Supervising Sound Editor: Stefan Henrix
Supervising ADR Editor:  Harry Barnes
Dialogue Editor: Michael Maroussas

Outstanding Achievement in Sound Editing – Episodic Long Form – Effects / Foley

Chernobyl “1:23:45”

HBO

Supervising Sound Editor: Stefan Henrix
Sound Designer: Joe Beal
Foley Editors: Philip Clements, Tom Stewart
Foley Artist:  Anna Wright

Outstanding Achievement in Sound Editing – Feature Motion Picture – Music Underscore

Jojo Rabbit

Fox Searchlight Pictures

Music Editor: Paul Apelgren

Outstanding Achievement in Sound Editing – Feature Motion Picture – Musical

Rocketman

Paramount Pictures

Music Editors: Andy Patterson, Cecile Tournesac

Outstanding Achievement in Sound Editing – Feature Motion Picture – Dialogue/ADR

1917

Universal Pictures

Supervising Sound Editor: Oliver Tarney, MPSE
Dialogue Editor: Rachael Tate, MPSE

Outstanding Achievement in Sound Editing – Effects / Foley

Ford v Ferrari

Twentieth Century Fox 

Supervising Sound Editor: Donald Sylvester

Sound Designers: Jay Wilkenson, David Giammarco

Sound Effects Editor: Eric Norris, MPSE

Foley Editor: Anna MacKenzie

Foley Artists: Dan O’Connell, John Cucci, MPSE, Andy Malcolm, Goro Koyama


Main Image Caption: Amy Pascal and Victoria Alonso

 

Skywalker Sound and Cinnafilm create next-gen audio toolset

Iconic audio post studio Skywalker Sound and Cinnafilm, maker of the PixelStrings media conversion technology, are working together on a new audio toolset expected to arrive in the first quarter of 2020.

As the paradigms of theatrical, broadcast and online content begin to converge, the need to properly conform finished programs to specifications suitable for a variety of distribution channels has become more important than ever. To ensure high fidelity is maintained throughout the conversion process, it is important to implement high-quality tools to aid in time-domain, level, spatial and file-format processing for all transformed content intended for various audiences and playout systems.

“PixelStrings represents our body of work in image processing and media conversions. It is simple, scalable and built for the future. But it is not just about image processing; it’s an ecosystem. We recognize success only happens by working with other like-minded technology companies. When Skywalker approached us with their ideas, it was immediate validation of this vision. We plan to put as much enthusiasm and passion into this new sound endeavor as we have in the past with picture — the customers will benefit as they see, and hear, the difference these tools make on the viewer experience,” says Cinnafilm CEO/founder Lance Maurer.

To address this need, Skywalker Sound has created an audio toolset based on proprietary signal processing and orchestration technology. Skywalker Audio Tools will offer an intelligent, automated audio pipeline with features including sample-accurate retiming, loudness and standards analysis and correction, downmixing, channel mapping and segment creation/manipulation — all faster than realtime. These tools will be available exclusively within Cinnafilm’s PixelStrings media conversion platform.
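
For a sense of what one of those stages involves, here is a minimal Python sketch of loudness analysis and correction using the open-source pyloudnorm library, which measures integrated loudness per ITU-R BS.1770 and gains a file toward a delivery target. The file names and the -24 LUFS target are assumptions for illustration; this is not Skywalker’s proprietary processing:

    import soundfile as sf
    import pyloudnorm as pyln

    # Read the program audio (hypothetical file name).
    data, rate = sf.read("program_mix.wav")

    # Measure integrated loudness per ITU-R BS.1770.
    meter = pyln.Meter(rate)
    measured = meter.integrated_loudness(data)
    print(f"measured loudness: {measured:.1f} LUFS")

    # Apply a static gain toward a -24 LUFS delivery spec, then write out.
    normalized = pyln.normalize.loudness(data, measured, -24.0)
    sf.write("program_mix_norm.wav", normalized, rate)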

Talking work and trends with Wave Studios New York

By Jennifer Walden

The ad industry is highly competitive by nature. Advertisers compete for consumers, ad agencies compete for clients and post houses compete for ad agencies. Now put all that in the dog-eat-dog milieu of New York City, and the market becomes more intimidating.

When you factor in the saturation level of the audio post industry in New York City — where audio facilities are literally stacked on top of each other (occupying different floors of the same building or located just down the hall from each other) — then the odds of a new post sound house succeeding seem dismal. But there’s always a place for those willing to work for it, as Wave Studios’ New York location is proving.

Wave Studios — a multi-national sound company with facilities in London and Amsterdam — opened its doors in NYC a little over a year ago. Co-founder/sound designer/mixer Aaron Reynolds worked on The New York Times “The Truth Is Worth It” ad campaign for Droga5 that earned two Grand Prix awards at the 2019 Cannes Lions International Festival of Creativity, and Reynolds’ sound design on the campaign won three Gold Lions. In addition, Wave Studios was recently named Sound Company of the Year 2019 at Germany’s Ciclope International Festival of Craft.

Here, Reynolds and Wave Studios New York executive producer Vicky Ferraro (who has two decades of experience in advertising and post) talk about what it takes to make it and what agency clients are looking for. They also share details on their creative approach to two standout spots they’ve done this year for Droga5.

How was your first year-plus in NYC? What were some challenges of being the new kid in town?
Vicky Ferraro: I joined Wave to help open the New York City office in May 2018. I had worked at Sound Lounge for 12 years, and I’ve worked on the ad agency side as well, so I’m familiar with the landscape.

One of the big challenges is that New York is quite a saturated market when it comes to audio. There are a lot of great audio places in the city. People have their favorite spots. So our challenges are to forge new relationships and differentiate ourselves from the competition, and figure out how to do that.

Also, the business model has changed quite a bit; a lot of agencies have in-house facilities. I used to work at Hogarth, so I’m quite familiar with how that side of the business works as well. You have a lot of brands that are working in-house with agencies.

So, opening a new spot was a little daunting despite all the success that Wave Studios in London and Amsterdam have had.
Aaron Reynolds: I worked in London, and we always had work from New York clients. We knew friends and people over here. Opening a facility in New York was something we had wanted to do since 2007. The challenge was to get out there and tell people that we’re here. We were finally coming over from London and forging those relationships with clients we had worked with remotely.

New York has a slightly different way of working in that clients tend to do the sound design with us and then do the mix elsewhere. One challenge was to get across to our clients that we offer both, from start to finish.

Sound design and mixing are one and the same thing. When I’m doing my sound design, I’m thinking about how I want it to sound in the mix. It’s unusual to do the sound design in one place and then do the mix somewhere else.

What are some trends you’re seeing in the New York City audio post scene? What are your advertising clients looking for?
Reynolds: On the work side, they come here for a creative sound design approach. They don’t want just a bit of sound here and a bit of sound there. They want something to be brought to the job through sound. That’s something that Wave has always done, and that’s been a bastion of our company. We have an idea, and we want to create the best sound design for the spot. It’s not just a case of, “bring me the sounds and we’ll do it for you.” We want to add a creative aspect to the work as well.

And what about format? Are clients asking for 5.1 mixes? Or stereo mixes still?
Reynolds: 99% of our work is done in stereo. Then, we’ll get the odd job mixed in 5.1 if it’s going to broadcast in 5.1 or play back in the cinema. But the majority of our mixes are still done in stereo.

Ferraro: That’s something that people might not be aware of, that most of our mixes are stereo. We deliver stereo and 5.1, but unless you’re watching in a 5.1 environment (and most people’s homes are not a 5.1 environment), you want to listen to a stereo mix. We’ve been talking about that with a lot of clients, and they’ve been appreciative of that as well.

Reynolds: If you tend to mix in 5.1 and then fold down to a stereo mix, you’re not getting a true stereo mix. It’s an artificial one. We’re saying, “Let’s do a stereo mix. And then let’s do a separate 5.1 mix. Then you’re getting the best of both.”

Most of what you’re listening to is stereo, so you want to have the best possible stereo mix you can have. You don’t want a second-rate mix when 99% of the media will be played in stereo.
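
The arithmetic behind an automatic fold-down helps explain the point: a standard ITU-R BS.775-style downmix collapses 5.1 into stereo with fixed coefficients, so every element inherits the same formula instead of being rebalanced by a mixer. A minimal sketch, assuming six equal-length mono NumPy arrays:

    import numpy as np

    def fold_down_51(ch):
        """ITU-R BS.775-style 5.1-to-stereo fold-down.
        ch: dict of equal-length mono arrays keyed L, R, C, LFE, Ls, Rs."""
        g = 10 ** (-3 / 20)  # -3 dB, roughly 0.707
        lo = ch["L"] + g * ch["C"] + g * ch["Ls"]
        ro = ch["R"] + g * ch["C"] + g * ch["Rs"]
        # The LFE channel is commonly discarded in this style of fold-down.
        return np.stack([lo, ro], axis=-1)

A dedicated stereo mix, by contrast, lets the mixer rebalance every element individually, which is the distinction Reynolds draws.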

What are some of the benefits and challenges of having studios in three countries? Do you collaborate on projects?
Ferraro: We definitely collaborate! It’s been a great selling point, and a fantastic time-saver in a lot of cases. Sometimes we’ll get a project from London or Amsterdam, or vice versa. We have two sound studios in New York, and sometimes a job will come in and if we can’t accommodate it, we can send it over to London. (This is especially true for unsupervised work.) Then they’ll do the work, and our client has it the next morning. Based on the time zone difference, it’s been a real asset, especially when we’re under the gun.

Aaron has a great list of clients that he works with in London and Amsterdam who continue to work with him here in New York. It’s been very seamless. It’s very easy to send a project from one studio to another.

Reynolds: We all work on the same system — Steinberg Nuendo — so if I send a job to London, I can have it back the next morning, open it up, and have the clients review it with me. I can carry on working in the same session. It’s almost as if we can work on a 24-hour cycle.

All the Wave Studios use Steinberg Nuendo as their DAW?
Reynolds: It’s audio post software designed with sound designers in mind. Pro Tools is more of a mixing software, good for recording music and live bands. It’s good for mixing, but it’s not particularly great for doing sound design. Nuendo, on the other hand, has been built for sound design from the roots up. It has a lot of great built-in plugins. With Pro Tools you need to get a lot of third-party plugins. Having all these built-in plugins makes the software really solid and reliable.

When it comes to third-party plugins, we really don’t need that many because Nuendo has so many built in. But some of the most-used third-party plugins are reverbs, like Audio Ease’s Altiverb and Speakerphone.

I think we’re one of the only studios that uses Nuendo as our main DAW. But Wave has always been a bit rogue. When we first set up years ago, we were using Fairlight, which no one else was using at the time. We’ve always had the desire to use the best tool that we can for the job, which is not necessarily the “industry standard.” When it came to upgrading all of our systems, we were looking into Pro Tools and Nuendo, but one of the partners at Wave, Johnnie Burn, uses Nuendo for the film side. He found it to be really powerful, so we made the decision to put it in all the facilities.

Why should agencies choose an independent audio facility instead of keeping their work in-house? What’s the benefit for them?
Ferraro: I can tell you from firsthand knowledge that there are several benefits to going out-of-house. The main thing that draws clients to Wave Studios — and away from in-house — is the high level of creativity and experience that comes with our engineers. We bring a different perspective than what you get from an in-house team. While there is a lot of talent in-house, those models often deal with freelancers who aren’t as invested in the company, and that poses challenges in building the brand. It’s a different approach to working and finishing a piece.

Those two aspects play into it — the creativity and having engineers dedicated to our studio. We’re not bringing in freelancers or working with an unknown pool of people. That’s important.

From my own experience, sometimes the approach can feel more formulaic. As an independent audio facility, our approach is very collaborative. There’s a partnership that we create with all of our clients as soon as they’re on board. Sometimes we get involved even before we have a job assigned, just to help them explore how to expand their ideas through sound, how they should be capturing the sound on-set, and how they should be thinking about audio post. It’s a very involved process.

Reynolds: What we bring is a creative approach. Elsewhere, that can be more formulaic, as Vicky said. Here, we want to be as creative as possible and treat jobs with attention and care.

Wave Studios is an international audio company. Is that a draw for clients?
Ferraro: One hundred percent. You’ve got to admit, it’s got a bit of cachet to it for sure. It’s rare to be a commercial studio with outposts in other countries. I think clients really like that, and it does help us bring a different perspective. Aaron’s perspective coming from London is very different from somebody in New York. It’s also cool because our other engineer is based in the New York market, and so his perspective is different from Aaron’s. In this way, we have a blend of both.

Some big commercial audio post houses have gone under, like Howard Schwartz and Nutmeg. What does it take for an audio post house in NYC to be successful in the long run?
Reynolds: The thing to do to maintain a good studio — whether in New York City or anywhere — is not to get complacent. Don’t ever rest on your laurels. Take every job you do as if it’s your first — have that much enthusiasm about it. Keep forging for the best, and that will always shine through. Keep doing the most creative work you can do, and that will make people want to come back. Don’t get tired. Don’t get lazy. Don’t get complacent. That’s the key.

Ferraro: I also think that you need to be able to evolve with the changing environment. You need to be aware of how advertising is changing, stay on top of the trends and move with it rather than resisting it.

What are some spots that you’ve done recently at Wave Studios NYC? How do they stand out, soundwise?
Reynolds: There’s a New York Times campaign that I have been working on for Droga5. One of the spots, Fearlessness, is all about a journalist investigating ISIS. The visuals tell a strong story, and I wanted to do the same in an acoustic sort of way. I wanted people to be able to close their eyes and hear all of the details of the journey the writer was taking and the struggles she came across. Bombs had blown up a derelict building, and they are walking through the rubble. I wanted the viewer to feel the grit of that environment.

There’s a distorted subway train sound that I added to the track that sets the tone and mood. We explored a lot of sounds for the piece. The soundscapes were created from different layers using sounds like twisting metals and people shouting in both English and Arabic, which we sourced from libraries like Bluezone and BBC, in particular. We wanted to create a tone that was uneasy and builds to a crescendo.

We’ve got a massive collection of sound libraries — about 500,000 sound effects — managed via Nuendo. We don’t need any independent search engine. It’s all built within the Nuendo system. Our sound effects libraries are shared across all of our facilities in all three countries, and it’s all accessed through Nuendo via a local server for each facility.

We did another interesting spot for Droga5 called Night Trails for Harley-Davidson’s electric motorcycle. In the spot, the guy is riding through the city at night, and all of the lights get drawn into his bike. Ringan Ledwidge, one of the industry’s top directors, directed the spot. Soundwise, we were working with the actual sound of the bike itself, and I elaborated on it to make it a little more futuristic. In certain places, I used the sound of hard drives spinning and accelerating to create an electric bike-by. I had to be quite careful with it because they do have an actual sound for the bike. I didn’t want to change it too much.

For the sound of the lights, I used whispers of people talking, which I stretched out. So as the bike goes past a streetlight, for example, you hear a vocal “whoosh” element as the light travels down into the bike. I wanted the sound of the lights not to be too electric, but more light and airy. That’s why I used whispers instead of buzzing electrical sounds. In one scene, the light bends around a telephone pole, and I needed the sound to be dynamic and match that movement. So I performed that with my voice, changing the pitch of my voice to give the sound a natural arc and bend.

Main Image: (L-R) Aaron Reynolds and Vicky Ferraro


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb Fibre Channel.

For our sound department, we are using an Avid Nexis system over Cat 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first) and are tied together via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff. We are already using some cloud-based solutions for our building’s security and CCTV systems, as well as for the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte sized active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.
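
As a toy illustration of the placement idea Hawkins describes (Panasas’ actual heuristics are proprietary, and the 1 MiB cutoff below is an invented threshold), the routing rule might look like this:

    SMALL_FILE_LIMIT = 1 << 20  # 1 MiB threshold (assumed, not Panasas' real cutoff)

    def choose_tier(is_metadata: bool, size_bytes: int) -> str:
        """Route a write to the most appropriate media tier."""
        if is_metadata:
            return "nvme_ssd"   # low latency for directory/attribute operations
        if size_bytes < SMALL_FILE_LIMIT:
            return "sata_ssd"   # high IOPS for lots of small files
        return "hdd"            # high bandwidth for large sequential media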

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server contains multiple types of storage media. Using intelligent data placement, we put data on the most appropriate tier automatically. With this approach, we can often replace a performance tier and a tier-two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems in office locations that provide centralized storage and easy access to disk arrays, servers, and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because of its affordable, high-capacity solutions with low write/high read workloads that are optimal for cold storage workflows. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is that each option has its positives and negatives; understanding them and determining how cloud and on-premises software fit into your organization are vital. So it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing, and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.
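To make that lifecycle concrete, here is a minimal sketch of how such rules are often encoded against a cloud object store, using the AWS boto3 Python library as one example. The bucket name, prefix and day thresholds are hypothetical, and other providers offer equivalent mechanisms:

import boto3

# Illustrative only: move an aging asset through progressively colder
# storage classes as access frequency drops off.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-archive",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-finished-masters",
                "Filter": {"Prefix": "masters/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},    # access drops off
                    {"Days": 180, "StorageClass": "GLACIER"},       # rarely touched
                    {"Days": 730, "StorageClass": "DEEP_ARCHIVE"},  # long-term cold
                ],
            }
        ]
    },
)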

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different; consequently, almost no two systems are alike. The real artistry is to tailor storage systems according to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful in building effective systems, but they might introduce additional difficulties to the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or recurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfill all necessary media production requirements or simplify workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning-disk and SSD storage. Our volumes range from 700TB to 1000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files where several users are working from the same file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility and have added or replaced storage every year for the last five. We now have approximately 8+ PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage,” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently run on on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access are critical to fulfilling our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking, and PCIe expansion solution for digital imaging, VFX, video production, and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSD and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance, and substantially lower in power draw. Yet even with all the improving performance, costs are lower today versus 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to if one will take over the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it can’t come anywhere near the data-rate demands of editing and similar work. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID and SSD, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and using NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage that are in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI and ML offer a clear benefit for storage when it comes to things like algorithms designed to automatically move content between storage tiers to optimize costs; this has been commonplace on the distribution side of the ecosystem for a long time with CDNs. ML and AI can also greatly impact the opex side of asset management and metadata by helping to automate very manual, repetitive data-entry tasks through audio and image recognition, for example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.

How has NVMe advanced over the past year?
Like any other storage medium, when it’s first introduced there are limited use cases that make sense financially, and only a certain few can afford to deploy it. As the technology scales and changes in form factor, and pricing becomes more competitive and in line with other storage options, it can become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios, CTO, Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time: > 1 day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time: < 0.5 day. Tape-based and in the robot; does not require human intervention.
Cold – Spinning disk. Access time: slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time: medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time: fast (online). SSD generic active production storage.
Blazing – Chip-based. Access time: uber-fast (online). NVMe dedicated storage for 4K and 8K playback, databases and specific simulation workflows.
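As an illustrative aside (this is not Zoic’s actual tooling), a tier taxonomy like the one above can be expressed as a simple lookup that picks the cheapest tier meeting a required access time; the access times and costs below are invented for the example:

# Hypothetical tier table: (name, seconds to first byte, relative cost per TB).
TIERS = [
    ("cryogen", 86_400, 1),   # offline tape/drives, human intervention
    ("freezing", 43_200, 2),  # tape robot library
    ("cold", 3_600, 4),       # slow spinning disk: DR, archive
    ("warm", 60, 8),          # spinning disk: asset depot
    ("hot", 1, 16),           # SSD: active production
    ("blazing", 0.1, 32),     # NVMe: 4K/8K playback, simulation
]

def cheapest_tier(max_wait_seconds):
    """Pick the lowest-cost tier whose access time fits the deadline."""
    fits = [t for t in TIERS if t[1] <= max_wait_seconds]
    return min(fits, key=lambda t: t[2])[0]

print(cheapest_tier(120))  # -> "warm"
print(cheapest_tier(0.5))  # -> "blazing"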

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro vs. con but as a question of where you are as a company. Many factors are involved; there is no one-size-fits-all solution, despite what many are led to believe, and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited to the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of TBs, and if the workflow requires you to transfer this data on premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source-file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. Depending on the size of your company and the level of technical people on hand, the cloud can be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as being slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity-code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage a combination of any or all of the storage types in use across the industry, from on-prem disk and tape to cloud services. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see are the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, along with advances in storage density in both spinning disk and solid state. These trends are interrelated and have the general effect of lowering costs for the end user. For those with specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, the company provides cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off of the “shared storage expansion treadmill” and buying just enough on-site shared storage that fits your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage but makes all of your content highly protected and highly available at the same time. Since most of your team probably uses a MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see are the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned, especially when that entire experience can now be delivered from the cloud, on demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — and it’s 20 times faster than an external spinning hard drive while drawing only about 3.5W, that’s a game changer for data wrangling and camera cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster, too: PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. With every generation doubling the available bandwidth that can feed NVMe storage, the future is very, very bright for NVMe.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Creative Outpost buys Dolby-certified studios, takes on long-form

After acquiring the studio assets from now-closed Angell Sound, commercial audio house Creative Outpost is now expanding its VFX and audio offerings by entering the world of long-form audio. Already in picture post on its first Netflix series, the company is now open for long-form ADR, mix and review bookings.

“Space is at a premium in central Soho, so we’re extremely privileged to have been able to acquire four studios with large booths that can accommodate crowd sessions,” say Creative Outpost co-founders Quentin Olszewski and Danny Etherington. “Our new friends in the ADR world have been super helpful in getting the word out into the wider community, having seen the size, build quality and location of our Wardour Street studios and how they’ll meet the demands of the growing long-form SVOD market.”

With the Angell Sound assets in place, the team at Creative Outpost has completed a number of joint picture and sound projects for online and TV. Focusing two of its four studios primarily on advertising work, Creative Outpost has provided sound design and mix on campaigns including Barclays’ “Team Talk,” Virgin Mobile’s “Sounds Good,” Icee’s “Swizzle, Fizzle, Freshy, Freeze,” Green Flag’s “Who The Fudge Are Green Flag,” Santander’s “Antandec” and Coca-Cola’s “Coaches.” Now the team’s ambition is to apply its experience from the commercial world to long-form broadcast and feature work. Its Dolby-approved studios were built by studio architect Roger D’Arcy.

The studios are running Avid Pro Tools Ultimate, Avid hardware controllers and Neumann U87 microphones. They are also set up for long-form/ADR work with EdiCue and EdiPrompt, Source-Connect Pro and ISDN capabilities, Sennheiser MKH 416 and DPA D:screet microphones.

“It’s an exciting opportunity to join Creative Outpost with the aim of helping them grow the audio side of the company,” says Dave Robinson, head of sound at Creative Outpost. “Along with Tom Lane — an extremely talented fellow ex-Angell engineer — we have spent the last few months putting together a decent body of work to build upon, and things are really starting to take off. As well as continuing to build our core short-form audio work, we are developing our long-form ADR and mix capabilities and have a few other exciting projects in the pipeline. It’s great to be working with a friendly, talented bunch of people, and I look forward to what lies ahead.”

 

Video: The Irishman’s focused and intimate sound mixing

Martin Scorsese’s The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, tells the story of organized crime in post-war America as seen through the eyes of World War II veteran Frank Sheeran (De Niro), a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. In the film, the actors have been famously de-aged, thanks to VFX house ILM, but it wasn’t just their faces that needed to be younger.

In this video interview, Academy Award-winning re-recording sound mixer and decades-long Scorsese collaborator Tom Fleischman — who will receive the Cinema Audio Society’s Career Achievement Award in January — talks about de-aging actors’ voices as well as the challenges of keeping the film’s sound focused and intimate.

“We really had to try and preserve the quality of their voices in spite of the fact we were trying to make them sound younger. And those edits are sometimes difficult to achieve without it being apparent to the audience. We tried to do various types of pitch changing, and we used different kinds of plugins. I listened to scenes from Serpico for Al Pacino and The King of Comedy for Bob De Niro and tried to match the voice quality of what we had from The Irishman to those earlier movies.”

Fleischman worked on the film at New York’s Soundtrack.

Enjoy the video:

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokémon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley, for creative grading; and Netflix for Photon.

Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation), regulating the audio levels of TV commercials. At that time, many “laypeople” complained to me about how commercials were often so much louder than the TV programs around them. Over the past 10 years, I have seen the rise of audio metering plugins built to meet the requirements of the CALM Act, and they have dramatically reduced this complaint.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the name of the scale at the time, but we will get into that more later. Today we have countless viewing options, such as cable networks, a large variety of streaming services, the internet and movie theaters utilizing 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM to address today’s complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST or VST3, and it can also be installed as a standalone application for Windows and macOS. With its many presets, its Loudness History Mode and countless parameters to view and customize, the VisLM2 can help an audio mixer see when a program drifts in and out of audio level spec.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R BS.1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets of ITU-R BS.1770 — revisions 01, 02, 03 and the current revision 04. Accessing the presets is easy once you realize that they are not in the preset section of the plugin, as one might think. Presets are located in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays that show Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary responds to short bursts, such as an impact or gunshot, while Short-Term covers the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.
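As a side note, for anyone who wants to sanity-check an Integrated reading outside of a plugin, the open-source pyloudnorm Python package implements the same BS.1770-4 measurement. Here is a minimal sketch; the file name is hypothetical, and this is unrelated to Nugen’s own code:

import soundfile as sf
import pyloudnorm as pyln

# Read a mix and measure its gated Integrated loudness per ITU-R BS.1770-4.
data, rate = sf.read("spot_30s.wav")   # hypothetical 30-second spot
meter = pyln.Meter(rate)               # K-weighted meter, 400 ms blocks by default
integrated = meter.integrated_loudness(data)
print(f"Integrated: {integrated:.1f} LUFS")  # compare against -24 +/-2dB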

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio spec and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

VisLM2

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom for staying below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult you might find it to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — you always want to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program material is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always the average reading over time. That means a 10-second loud segment in a two-hour program could be much louder than a 10-second loud segment in a 15-second commercial. That same loud 10 seconds can practically be averaged out of existence during a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.
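To put numbers on that, here is a short worked example in Python of the energy averaging behind an Integrated reading (BS.1770 gating is ignored for clarity). The same 10-second segment at -10LUFS swamps a 15-second commercial but all but disappears inside a two-hour program:

import numpy as np

# Integrated loudness averages energy, not dB values, so the duration
# of the surrounding material matters enormously.
def integrated(durations_s, lufs):
    d = np.asarray(durations_s, dtype=float)
    e = 10 ** (np.asarray(lufs, dtype=float) / 10)  # LUFS -> linear energy
    return 10 * np.log10(np.sum(d * e) / np.sum(d))

# Ten loud seconds at -10LUFS inside otherwise -24LUFS material:
print(integrated([5, 10], [-24, -10]))     # 15 s commercial -> about -11.7 LUFS
print(integrated([7190, 10], [-24, -10]))  # 2 h program     -> about -23.9 LUFS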

True Peak is forever. Once the highest True Peak is detected, it will remain as the final True Peak reading for the entire length of the program material. That means the loud segment at the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only. In the final minute of the show, a single loud gunshot is heard. That one-second gunshot will determine the other one hour, 59 minutes, and 59 seconds of the program’s True Peak audio level. Flawed logic? I can see it could be. Spotify’s recommended levels are -14LUFS and -2dBTP. That gives you a much smaller range for dynamics compared to others such as network TV.
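For the curious, the idea behind a True Peak measurement is also easy to sketch: oversample the signal (BS.1770-4 calls for 4x at 48kHz) so that inter-sample peaks hiding between the original samples become visible, then take the maximum. The standard specifies a particular interpolation filter; in this rough Python sketch, scipy’s polyphase resampler stands in for it:

import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(samples):
    oversampled = resample_poly(samples, 4, 1)  # 4x oversampling
    return 20 * np.log10(np.max(np.abs(oversampled)))

# A sine at fs/4 whose true peaks all fall between samples:
t = np.arange(48_000)
x = 0.5 * np.sin(2 * np.pi * 12_000 * t / 48_000 + np.pi / 4)
print(20 * np.log10(np.max(np.abs(x))))  # sample peak: about -9.0 dBFS
print(true_peak_dbtp(x))                 # true peak: about -6.0 dBTP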

VisLM2

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear standout for me is the detailed, large history graph display known as Loudness History Mode. It is a realtime, continuously updating display of the mix levels. What it shows is up to you: there are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs will show, or hide, the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets are selected, there are a whole host of parameters that come along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the ITU-R BS.1770-4 preset is -24LUFS Integrated and -2dBTP. By default, both will show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed to be the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated, or average, loudness as I had originally thought. These values are for Short-Term loudness. The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. Now this is not a hard and fast rule. Simply put, if your short-term value stays somewhere between -20 and -28LUFS throughout most of an entire project, then you have a good chance of meeting your target of -24LUFS for the overall integrated measurement. That is why the value range is often set up as a “green” zone on the loudness display.
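A toy Python version of that green-zone logic, run over a hypothetical set of per-second Short-Term readings, might look like this (the band and the readings are invented for illustration):

import numpy as np

# Flag anything outside the -28..-20 LUFS band the preset draws
# around a -24LUFS target, given one short-term reading per second.
def out_of_zone(short_term_lufs, lo=-28.0, hi=-20.0):
    st = np.asarray(short_term_lufs)
    return np.flatnonzero((st < lo) | (st > hi))  # indices (seconds) to inspect

readings = [-24.1, -23.5, -19.2, -24.8, -30.4]  # hypothetical meter output
print(out_of_zone(readings))  # -> [2 4]: one hot spot, one lull worth a look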

VisLM2

The folks at Nugen point out that it isn’t practically possible to set up an alert or “red zone” for integrated loudness because this value is measured over the entire program. For that, you have to simply view the main reading of your Integrated loudness. Even so, I will know if I am getting there or not by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2 max and min alerts help keep you working within audio spec right from the start.

Another nice feature about the large history graph window is the Macro tab. Selecting the Macro feature will give you the ability to move back and forth anywhere along the duration of your mix displayed in the Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, then cue up your DAW to that same eight-minute mark to address changes in your mix.

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets covering the loudness specs you may be working with. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries; I would venture a guess that very few audio mixers do either. So I was not surprised to see many presets I was not familiar with. Common presets in addition to ITU-R BS.1770 are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short. The descriptions lack any kind of real-world language, only techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better source on what’s what regarding each preset will become available soon.

MasterCheck

But why is there no preset for an Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU be even more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is why there is no Internet preset included with the VisLM2, as I had hoped there would be. Even so, Nugen offers its MasterCheck plugin for platforms such as Spotify and YouTube. MasterCheck is something I have been hoping for, and it would be the perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. I would call it the Wild West by comparison to broadcast TV. With so many streaming services popping up, particularly “off-brand” services, I would ask if we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus between them like with network TV.

That aside, one of the most popular streaming services is Netflix, so let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/-2dB Integrated Dialogue. That means the dialogue level is averaged out over time, rather than using all program material, like music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This can lead to more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix will require Atmos deliverables for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format, which included height channels long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono-channel audio mix in 10.2 by simply leaving all other channels empty. I can only hope the same holds for Netflix’s Atmos deliverables, so you can simply add or subtract the number of channels needed when outputting your final mix. Regardless, we can surely look to Nugen Audio to keep its Netflix preset in the VisLM2 updated should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original version of the VisLM, there are three updates that are worth looking at. First is the ability to resize and select what shows in the display. That helps with keeping the window active on your screen as you are working. It can be a small window so it doesn’t interfere with your other operations. Or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting any trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.
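True Peak matters because digital-to-analog conversion can produce inter-sample peaks that exceed the highest sample value, so BS.1770 measures the peak on an oversampled copy of the signal. Here is a minimal Python sketch of the idea (4x oversampling, the factor the spec suggests for 48kHz material); the file name is a placeholder.

```python
# Minimal true-peak estimate in the spirit of BS.1770: oversample 4x and
# take the highest absolute value, catching inter-sample peaks that a
# plain sample-peak meter would miss.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("final_mix.wav")        # hypothetical file
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))
print(f"True peak: {true_peak_db:.2f} dBTP (e.g., Netflix ceiling is -2 dBTP)")
```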

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened up the flood gates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

To help manage today's complicated and varied deliverables, each with an audio spec to go with it, Nugen Audio's VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”

Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces/delusion-inducing conditions. It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor's New York City location for both sound and final color. This was colorist Joe Gawler's first time working with Eggers, but it couldn't have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and as it happens, Gawler is well versed in the world of black & white. He has remastered a tremendous number of classic titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films, including 8 ½. “To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful,” Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”

From production to post, Eggers' goal was to create a film that felt like it could have been pulled from a 1930s film archive. “It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible,” says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime,” delivered in two different accents: Dafoe has an Irish-tinged seasoned-sailor accent and Pattinson has an up-east Maine accent. Additionally, the production location made it difficult to record the dialogue, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that when it's raining on screen, the lighthouse is leaking; you see the water in the shots because they shot it that way. “So the water sound is married to the dialogue. We wanted to have control over the water, so the dialogue had to be looped. Rob [Eggers] wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too, like the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut, and seagulls and waves at Mystic Seaport. “We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife,” says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn't feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a Nagra to “get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia,” says Volpe. “We did try tape emulator software, but that didn't yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they'd be expensive and we'd have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution.”

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units' tape-emulation circuitry added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’s style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

The gritty and realistic sounds of Joker

By Jennifer Walden

The grit of Gotham City in Warner Bros.’ Joker is painted on in layers, but not in broad strokes of sound. Distinct details are meticulously placed around the Dolby Atmos surround field, creating a soundtrack that is full but not crowded and muddy — it’s alive and clear. “It’s critical to try to create a real feeling world so Arthur (Joaquin Phoenix) is that much more real, and it puts the audience in a place with him,” says re-recording mixer Tom Ozanich, who mixed alongside Dean Zupancic at Warner Bros. Sound in Burbank on Dub Stage 9.

L-R: Tom Ozanich, Unsun Song and Dean Zupancic on Dub Stage 9. Photo: Michael Dressel.

One main focus was to make a city that was very present and oppressive. Supervising sound editor Alan Robert Murray created specific elements to enhance this feeling, while dialogue supervisor Kira Roessler created loop group crowds and callouts that Ozanich could sprinkle throughout the film. Murray received an Oscar nomination in the category of Sound Editing for his work on Joker, while Ozanich, Zupancic and Tod Maitland were nominated for their Sound Mixing work.

During the street scene near the beginning of the film, Arthur is dressed as a clown and dancing on the sidewalk, spinning a “Going Out of Business” sign. Traffic passes to the left and pedestrians walk around Arthur, who is on the right side of the screen. The Atmos mix reflects that spatiality.

“There are multiple layers of sounds, like callouts of group ADR, specific traffic sounds and various textures of air and wind,” says Zupancic. “We had so many layers that afforded us the ability to play sounds discretely, to lean the traffic a little heavier into the surrounds on the left and use layers of voices and footsteps to lean discretely to the right. We could play very specific dimensions. We just didn’t blanket a bunch of sounds in the surrounds and blanket a bunch of sounds on the front screen. It was extremely important to make Gotham seem gritty and dirty with all those layers.”

The sound effects and callouts didn’t always happen conveniently between lines of principal dialogue. Director Todd Phillips wanted the city to be conspicuous… to feel disruptive. Ozanich says, “We were deliberate with Todd about the placement of literally every sound in the movie. There are a few spots where the callouts were imposing (but not quite distracting), and they certainly weren’t pretty. They didn’t occur in places where it doesn’t matter if someone is yelling in the background. That’s not how it works in real life; we tried to make it more like real life and let these voices crowd in on our main characters.”

Every space feels unique with Gotham City filtering in to varying degrees. For example, in Arthur’s apartment, the city sounds distant and benign. It’s not as intrusive as it is in the social worker’s (Sharon Washington) office, where car horns punctuate the strained conversation. Zupancic says, “Todd was very in tune with how different things would sound in different areas of the city because he grew up in a big city.”

Arthur’s apartment was further defined by director Phillips, who shared specifics like: The bedroom window faces an alley so there are no cars, only voices, and the bathroom window looks out over a courtyard. The sound editorial team created the appropriate tracks, and then the mixers — working in Pro Tools via Avid S6 consoles — applied EQ and reverb to make the sounds feel like they were coming from those windows three stories above the street.

In the Atmos mix, the clarity of the film's carefully chosen reverbs and related processing simultaneously helped define the space on-screen and pull the sound into the theater, immersing the audience in the environment. “Tom [Ozanich] did a fabulous job with all of the reverbs and all of the room sound in this movie,” Zupancic says. “His reverbs on the dialogue in this movie are just spectacular and spot on.”

For instance, Arthur is waiting in the green room before going on the Murray Franklin Show. Voices from the corridor filter through the door, and when Murray (Robert De Niro) and his stage manager open it to ask Arthur what’s with the clown makeup, the filtering changes on the voices. “I think a lot about the geography of what is happening, and then the physics of what is happening, and I factor all of those things together to decide how something should sound if I were standing right there,” explains Ozanich.

Zupancic says that Ozanich’s reverbs are actually multistep processes. “Tom’s not just slapping on a reverb preset. He’s dialing in and using multiple delays and filters. That’s the key. Sounds of things change in reality — reverbs, pitches, delays, EQ — and that is what you’re hearing in Tom’s reverbs.”

“I don’t think of reverb generically,” elaborates Ozanich, “I think of the components of it, like early reflections, as a separate thought related to the reverb. They are interrelated for sure, but that separation may be a factor of making it real.”
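Ozanich doesn't spell out his exact signal chain, but the separation he describes can be sketched. In the toy Python example below, a handful of discrete delayed copies stand in for the early reflections that define a room's geometry, while a decaying tail is generated and mixed in as a separate component; every tap time, gain and decay value is a made-up placeholder.

```python
# Toy illustration of treating early reflections and the late reverb tail
# as separate components, per Ozanich's description. Not his actual chain;
# all tap times, gains and decay values are invented.
import numpy as np
from scipy.signal import fftconvolve

def reverberate(dry, rate, er_taps, tail_seconds=1.2, tail_level=0.3):
    out = dry.copy()
    # Early reflections: a few discrete delayed copies (delay seconds, gain).
    for delay, gain in er_taps:
        d = int(delay * rate)
        out[d:] += gain * dry[:-d]
    # Late tail: exponentially decaying noise, convolved in at a low level.
    n = int(tail_seconds * rate)
    tail_ir = np.random.randn(n) * np.exp(-5.0 * np.arange(n) / n)
    out += tail_level * fftconvolve(dry, tail_ir)[:len(out)]
    return out

# Tight, dense taps suggest a small concrete room; sparse, longer taps a hall:
# wet = reverberate(dry, 48000, er_taps=[(0.011, 0.5), (0.023, 0.35), (0.041, 0.2)])
```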

One reason the reverbs were so clear is because Ozanich mixed Joker’s score — composed by Hildur Guðnadóttir — wider than usual. “The score is not a part of the actual world, and my approach was to separate the abstract from the real,” explains Ozanich. “In Arthur’s world, there’s just a slight difference between the actual world, where the physical action is taking place, and Arthur’s headspace where the score plays. So that’s intended to have an ever-so-slight detachment from the real world, so that we experience that emotionally and leave the real space feeling that much more real.”

Atmos allows for discrete spatial placement, so Ozanich was able to pull the score apart, pull it into the theater (so it’s not coming from just the front wall), and then EQ each stem to enhance its defining characteristic — what Ozanich calls “tickling the ear.”

“When you have more directionality to the placement of sound, it pulls things wider because rather than it being an ambiguous surround space, you’re now feeling the specificity of something being 33% or 58% back off the screen,” he says.

Pulling the score away from the front and defining where it lived in the theater space gave more sonic real estate for the sounds coming from the L-C-Rs, like the distinct slap of a voice bouncing off a concrete wall or Foley sounds like the delicate rustling scratches of Arthur’s fingertips passing over a child’s paintings.

One of the most challenging scenes to mix in terms of effects was the bus ride, in which Arthur makes funny faces at a little boy, trying to make him laugh, only to be admonished by the boy’s mother. Director Phillips and picture editor Jeff Groth had very specific ideas about how that ‘70s-era bus should sound, and Zupancic wanted those sounds to play in the proper place in the space to achieve the director’s vision. “Buses of that era had an overhead rack where people could put packages and bags; we spent a lot of time getting those specific rattles where they should be placed, and where the motor should be and how it would sound from Arthur’s seat. It wasn’t a hard scene to mix; it was just complex. It took a lot of time to get all of that right. Now, the scene just goes by and you don’t pay attention to the little details; it just works,” says Zupancic.

Ozanich notes the opening was a challenging scene as well. The film begins in the clowns’ locker room. There’s a radio broadcast playing, clowns playing cards, and Arthur is sitting in front of a mirror applying his makeup. “Again, it’s not a terribly complex scene on the surface, but it’s actually one of the trickiest in the movie because there wasn’t a super clear lead instrument. There wasn’t something clearly telling you what you should be paying attention to,” says Ozanich.

The scene went through numerous iterations. One version had source music playing the whole time. Another had bits of score instead. There are multiple competing elements, like the radio broadcast and the clowns playing cards and sharing anecdotes. All those voices compete for the audience’s ear. “If it wasn’t tilted just the right way, you were paying attention to the wrong thing or you weren’t sure what you should be paying attention to, which became confusing,” says Ozanich.

In the end, the choice was made to pull out all the music and then shift the balance from the radio to the clowns as the camera passes by them. It then goes back to the radio briefly as the camera pushes in closer and closer on Arthur. “At this point, we should be focusing on Arthur because we’re so close to him. The radio is less important, but because you hear this voice it grabs your attention,” says Ozanich.

The problem was there were no production sounds for Arthur there, nothing to grab the audience’s ear. “I said, ‘He needs to make sound. It has to be subtle, but we need him to make some sound so that we connect to him and feel like he is right there.’ So Kira found some sounds of Joaquin from somewhere else in the film, and Todd did some stuff on a mic. We put the Foley in there and we cobbled together all of these things,” says Ozanich. “Now, it unquestionably sounds like there was a microphone open in front of him and we recorded that. But in reality, we had to piece it all together.”

“It’s a funny little dichotomy of what we are trying to do. There are certain things we are trying to make stick on the screen, to make you buy that the sound is happening right there with the thing that you’re looking at, and then at the same time, we want to pull sounds off of the screen to envelop the audience and put them into the space and not be separated by that plane of the screen,” observes Ozanich.

The Atmos mix on Joker is a prime example of how effective that dichotomy can be. The sounds of the environments, like standing on the streets of Gotham or riding in the subway car, are distinct, dynamic and ever-changing, and the sounds emanating from the characters are realistic and convincing. All of this serves to pull the audience into the story and get them emotionally invested in the tale of this sad, psychotic clown.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems the job title of “editor” changes. An editor is no longer just the person responsible for shaping the story of the show, but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and they are as easy as one button, but you can also nuance the audio if you like. The Era 4 Pro plugins work not only with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise my system was able to toggle each plug-in off and on without any issue. Playback was seamless when all plugins were applied. Now I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone who has some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (though doing this can potentially require rendering). In addition, there is an output gain setting, plus a “Diff” mode that plays only the parts De-Esser is affecting. If you just want the “one button” approach, the Processing dial is really all you need to touch. In realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount and then back it off 5% or 10%.
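Under the hood, a de-esser is essentially band-limited compression: isolate the sibilance band, then duck it when its level crosses a threshold. Here is a stripped-down Python sketch of that generic technique, not Accusonus' algorithm; the band edges, threshold and reduction amount are arbitrary starting points.

```python
# Stripped-down de-esser: compress only the sibilance band. This is the
# generic technique, not Accusonus' algorithm; all values are placeholders.
import numpy as np
from scipy.signal import butter, sosfilt

def deess(x, rate, lo=5000.0, hi=9000.0, threshold=0.05, reduction=0.4):
    sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    sib = sosfilt(sos, x)                  # isolate the sibilance band
    # Crude envelope follower: 5 ms moving average of the rectified band.
    win = int(0.005 * rate)
    env = np.convolve(np.abs(sib), np.ones(win) / win, mode="same")
    # Where the band is hot, subtract a portion of it from the full signal.
    duck = np.where(env > threshold, reduction, 0.0)
    return x - duck * sib

# y = deess(x, 48000)   # x: mono float array in [-1, 1]
```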

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants the one-touch processing but also the ability to dive into the specific part of the audio spectrum being affected and see how the falloff is being performed. In addition, there are presets, such as male vocals and female speech, to jump immediately to where you need help. I personally find De-Esser Pro more useful than the De-Esser because I can really shape the processing. However, if you don't want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I'm not sure, but combining it with the Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of the Era-D, it's the only plugin not described by its own title, funnily enough: it is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions, not timecode) that can be split at certain frequencies, each with a different percentage of processing applied and its own frequency cutoff levels.

Something I had never heard of before is Era-D's ability to use two mics to fix a suboptimal recording on one of them. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It's possible to use only one or the other, and you can even run the plugin in parallel or cascade. If that isn't enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Pro bundle, and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Pro bundle is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, there are five buttons that let you focus where the processing occurs: all frequencies (flat), high frequencies, low frequencies, high and low frequencies, and mid frequencies. I love clicking the power button to hear the difference with and without the noise removal, but also dialing the knob around to really get the noise removed without going overboard. Whether in video or audio, there is a fine art to noise reduction, and the Era 4 Noise Remover makes it easy … even for an online editor.
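A common approach behind this kind of one-knob denoiser is spectral gating: learn a noise profile, then attenuate any time-frequency bin that doesn't rise well above it. The rough Python sketch below assumes the first half second of the clip is noise only, a stand-in for a learned profile; it illustrates the general technique, not what Accusonus actually does.

```python
# Rough spectral-gate denoiser: pull down STFT bins near the noise floor.
# Assumes the first 0.5 s is noise only, a stand-in for a learned profile;
# not Accusonus' method.
import numpy as np
from scipy.signal import stft, istft

def denoise(x, rate, amount=0.8):
    f, t, Z = stft(x, fs=rate, nperseg=1024)       # hop is 512 by default
    noise_frames = max(1, int(0.5 * rate / 512))
    noise_floor = np.abs(Z[:, :noise_frames]).mean(axis=1, keepdims=True)
    mag = np.abs(Z)
    # Bins less than ~2x above the floor are attenuated by `amount`.
    mask = np.where(mag > 2.0 * noise_floor, 1.0, 1.0 - amount)
    _, y = istft(Z * mask, fs=rate, nperseg=1024)
    return y[:len(x)]

# y = denoise(x, 48000, amount=0.6)   # back `amount` off to avoid artifacts
```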

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus has also included the Voice Leveler, which does just what it says: It levels your audio so you won't get one line blasting in your ears while the next one disappears because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, to help normalize your dialogue. Think of the tight mode as much more focused than a normal interview conversation; Accusonus describes it as a “radio” sound. The Emphasis button helps address issues when the speaker turns away from the microphone and introduces tonal problems, and a simple breath control option keeps breaths from being pumped up along with the dialogue.
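Conceptually, a voice leveler is slow automatic gain control: measure the short-term level and ease it toward a target so quiet lines come up and hot lines come down. Here is a bare-bones Python sketch of that idea; the target level, block size and smoothing factor are arbitrary, and this is a concept demo, not Accusonus' processing.

```python
# Bare-bones voice leveler: slow automatic gain riding toward a target RMS.
# Concept demo only, not Accusonus' processing; all constants are arbitrary.
import numpy as np

def level(x, rate, target_rms=0.1, block_s=0.4, max_gain=4.0):
    block = int(block_s * rate)
    y = np.zeros_like(x)
    gain = 1.0
    for i in range(0, len(x), block):
        seg = x[i:i + block]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-9   # avoid divide-by-zero
        wanted = min(target_rms / rms, max_gain)  # cap boost on near-silence
        gain = 0.7 * gain + 0.3 * wanted          # smooth gain changes
        y[i:i + block] = gain * seg
    return y
```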

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Pro bundle are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore audio lost to clipping. If you recorded audio at too high a gain and it came out horribly distorted, it has probably been clipped. De-Clipper tries to salvage this audio by reconstructing the oversaturated segments. While it's always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That's when you should try De-Clipper. There are two modes: one for normal/standard use and one for trickier cases that takes a little more processing power.
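A common way to approach de-clipping is to treat the flat-topped samples as missing data and redraw the waveform through the samples that survived. The short Python sketch below does this with a cubic spline; it shows the generic idea, not Accusonus' reconstruction, and the clipping ceiling is a guess.

```python
# Simple de-clip sketch: treat flat-topped samples as missing and redraw
# them with a cubic spline through the surviving samples. Generic idea
# only, not Accusonus' reconstruction.
import numpy as np
from scipy.interpolate import CubicSpline

def declip(x, ceiling=0.99):
    idx = np.arange(len(x))
    good = np.abs(x) < ceiling              # samples that were not clipped
    spline = CubicSpline(idx[good], x[good])
    y = x.copy()
    y[~good] = spline(idx[~good])           # re-estimate the clipped runs
    return y
```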

The final plugin, Plosive Remover, focuses on the artifacts typically caused by “p” and “b” sounds. These can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops are easily repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like the De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only what is being affected. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to touch. The Plosive Remover is another amazing plugin that, when you need it, does a great job quickly and easily.
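Since plosive energy sits almost entirely below roughly 150Hz, one simple repair strategy is to high-pass only the moments where low-frequency energy spikes. Here is a minimal Python sketch of that generic approach, with an arbitrary threshold and corner frequency; it is not Accusonus' processing.

```python
# Minimal plosive repair: swap in a high-passed copy only where the
# low-frequency envelope spikes. Generic approach with arbitrary values,
# not Accusonus' processing.
import numpy as np
from scipy.signal import butter, sosfilt

def deplosive(x, rate, corner=150.0, threshold=0.05):
    low = sosfilt(butter(2, corner, btype="low", fs=rate, output="sos"), x)
    win = int(0.01 * rate)                   # 10 ms envelope window
    env = np.convolve(np.abs(low), np.ones(win) / win, mode="same")
    hp = sosfilt(butter(2, corner, btype="high", fs=rate, output="sos"), x)
    return np.where(env > threshold, hp, x)  # high-passed audio on pops only
```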

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, which is also where you can grab the installers if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producer’s Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is the reason why the sound editing and mixing on Season 3 of HBO’s True Detective has been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5.)

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on-set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn't want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn't want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War And Modern Memory?” You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I'm sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: One Thousand Birds sound designer Torin Geller

Initially interested in working in a music studio, once this sound pro got a taste of audio post, there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects: dialogue edit, sound design and mix, as well as help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how they can interplay in the abstract, interesting ways of animation that aren't necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.

The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb. And as you can imagine the sounds of the neighborhoods vary.

L-R: Eric Hirsch and David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances' friend Diane (Molly Shannon) has taken up residence in a Manhattan high-rise, and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whir of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies; it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So, I surround myself with smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.

Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get… maybe, carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment and the cuts endured don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season, being nominated also for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple different dialects; that really helped to sell location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattley world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile when we go to the grittiness of the black-ops site in Yemen, with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have off- and on-screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out, added more elements to expand the sound more into the surrounds. The show was mixed in Dolby Atmos for the home, so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible, but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them even farther away than we had them. Then we stripped some out to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

Were the guns original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

For the opening sequence, with the bombs, there was debate about whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, even though in reality you’d hear the sound of an explosion that happens miles away much later than you’d see it.

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all… it’s rarely just one person. It does take a village, and we had great support from the producers. They were very intentional about sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed, and that was really appreciated.

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super good sounding episodes; it’s a great sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how the technology has changed so much. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used Atmos mixing; they really used the sound field and used extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
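
Sound Radix doesn’t publish the internals of Auto-Align Post, but the core idea (estimate the delay between the two mics, then time-shift one to match) can be sketched with a simple cross-correlation. A minimal Python illustration, assuming a single static offset rather than the continuous tracking a real plugin does; the function name is ours:

import numpy as np

def align_iso_to_boom(boom, iso, sr, max_ms=20.0):
    """Estimate the ISO lav's time offset against the boom and shift it into phase.
    Assumes one static offset; a real tool tracks drift over time."""
    n = min(len(boom), len(iso))
    max_lag = int(sr * max_ms / 1000)                # search +/- 20 ms around zero
    corr = np.correlate(boom[:n], iso[:n], mode="full")
    center = n - 1                                   # index of zero lag in 'full' output
    window = corr[center - max_lag:center + max_lag + 1]
    lag = int(np.argmax(window)) - max_lag           # positive lag: ISO arrives early
    return np.roll(iso, lag)                         # wraps at the edges; fine for a sketch

# aligned = align_iso_to_boom(boom_track, lav_track, sr=48000)
# mix = 0.6 * boom_track[:len(aligned)] + 0.4 * aligned   # sums without comb filtering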

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
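
As a back-of-the-envelope illustration of why staged compression reads as less squashed than one heavy pass, here is a toy static gain computation. The ratios and levels are invented for the arithmetic, not Cook’s actual settings:

def db_over_after(over_db, ratio):
    """Static compressor math: dB remaining above threshold after compression."""
    return over_db / ratio

peak = 12.0                                   # a line peaking 12 dB over threshold

one_stage = db_over_after(peak, 4.0)          # single 4:1 stage: 9 dB of gain reduction at once
staged = peak
for _ in range(5):                            # five gentle 1.32:1 stages in series
    staged = db_over_after(staged, 1.32)

print(round(one_stage, 1), round(staged, 1))  # both land near 3 dB over threshold,
                                              # but no serial stage ever reduces more than ~3 dB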

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
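
A fader ride and a side-chained compressor ultimately compute the same thing: a gain envelope on the background keyed from the dialogue level. A bare-bones Python sketch of the automatic version, with placeholder window, threshold and depth values:

import numpy as np

def duck_background(bg, dialogue, sr, depth_db=-9.0, thresh_db=-45.0):
    """Sidechain duck: pull the background down by depth_db wherever dialogue is active.
    Assumes bg and dialogue are equal-length mono float arrays."""
    hop = int(0.05 * sr)                      # 50 ms analysis blocks
    gain = np.ones(len(bg))
    ducked = 10 ** (depth_db / 20.0)
    for i in range(0, len(bg), hop):
        block = dialogue[i:i + hop]
        rms_db = 20 * np.log10(np.sqrt(np.mean(block ** 2)) + 1e-9)
        if rms_db > thresh_db:                # dialogue present in this block
            gain[i:i + hop] = ducked
    kernel = np.ones(hop) / hop               # crude smoothing: a ride, not a gate
    gain = np.convolve(gain, kernel, mode="same")
    return bg * gain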

Going back to what has changed over the last three years: we now have more time per episode to mix the show. We got more and more time from the first mix to the last, and by the end we had twice as much time to mix the show.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.
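
That chain (band-limit, a touch of drive, then a pre-delayed reflection) is easy to prototype. A rough Python sketch of the order of operations described above; the corner frequencies, drive and mix levels are guesses, not Cook’s settings:

import numpy as np
from scipy.signal import butter, lfilter

def pa_futz(x, sr, predelay_ms=60.0):
    """Crude PA treatment: small-speaker bandpass, gentle distortion,
    then a single pre-delayed echo standing in for the hall reverb return."""
    b, a = butter(2, [300 / (sr / 2), 4000 / (sr / 2)], btype="bandpass")
    band = lfilter(b, a, x)                   # roll off the lows and highs
    driven = np.tanh(3.0 * band)              # a little PA-horn grit
    pre = int(sr * predelay_ms / 1000)        # pre-delay before the room answers back
    wet = np.zeros_like(driven)
    wet[pre:] = 0.35 * driven[:-pre]          # one reflection; a real chain uses a full reverb
    return driven + wet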

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then there was the level of all the loop group specifics and chanting, including the ramp up of the chanting from zero to full volume, which we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.

There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots have the guy put in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of clean up in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot-wires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From their range of audio restoration tools within RX to their measurement and visualization tools in Ozone to their creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY… they have shown time and time again that they know what audio post pros need.

iZotope breaks their products out into categories that are aimed at different levels of professionalism by providing Essential, Standard and Advanced tiers. This lowers the barrier of entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut down session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules, there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses the modules to make your track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking the sound of an instrument just to get it to sound like itself. I believe the philosophy behind providing this feature is that the creative energy you would otherwise spend tweaking can now be reserved for other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it sets the level of each track and then provides you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and you’re left with a much more manageable session. Depending on your workflow, getting your gain staging set up correctly can be an arduous and repetitive task, and this tool streamlines and simplifies it.
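
iZotope doesn’t disclose how the Balance model derives its targets, but the deliverable, a static trim per track relative to the Focus, can be approximated with plain level measurement. Here’s a naive Python sketch using RMS as a stand-in for true loudness; the offsets are hand-picked, not Neutron’s:

import numpy as np

def naive_balance(tracks, focus, offsets_db=None):
    """Return a trim in dB per track so each sits a chosen offset below the Focus.
    tracks: {name: mono float array}; offsets_db: {name: dB relative to Focus}."""
    offsets_db = offsets_db or {}
    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-9)
    anchor = rms_db(tracks[focus])
    trims = {}
    for name, audio in tracks.items():
        if name == focus:
            trims[name] = 0.0
            continue
        target = anchor + offsets_db.get(name, -12.0)  # default: sit 12 dB under the Focus
        trims[name] = target - rms_db(audio)
    return trims

# e.g. naive_balance({"vox": vox, "gtr": gtr, "kick": kick}, focus="vox",
#                    offsets_db={"gtr": -6.0, "kick": -8.0})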

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since that’s a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other post audio scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this process happens at the beginning of the mix, you’re provided with a session that is already prepped in the gain-staging department, so you can start making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

Digital Arts expands team, adds Nutmeg Creative talent

Digital Arts, an independently owned New York-based post house, has added several former Nutmeg Creative talent and production staff members to its roster — senior producer Lauren Boyle, sound designer/mixers Brian Beatrice and Frank Verderosa, colorist Gary Scarpulla, finishing editor/technical engineer Mark Spano and director of production Brian Donnelly.

“Growth of talent, technology, and services has always been part of the long-term strategy for Digital Arts, and we’re fortunate to welcome some extraordinary new talent to our staff,” says Digital Arts owner Axel Ericson. “Whether it’s long-form content for film and television, or working with today’s leading agencies and brands creating dynamic content, we have the talent and technology to make all of our clients’ work engaging, and our enhanced services bring their creative vision to fruition.”

Brian Donnelly, Lauren Boyle and Mark Spano.

As part of this expansion, Digital Arts will unveil additional infrastructure featuring an ADR stage/mix room. The current facility boasts several state-of-the-art audio suites, a 4K finishing theater/mixing dubstage, four color/finishing suites and expansive editorial and production space, which is spread over four floors.

The former Nutmeg team has hit the ground running, working with their long-time ad agency, network, animation and film studio clients. Gary Scarpulla worked on color for HBO’s Veep and Los Espookys, while Frank Verderosa has been working with agency Ogilvy on several Ikea campaigns. Beatrice mixed spots for Tom Ford’s cosmetics line.

In addition, Digital Arts’ in-house theater/mixing stage has proven to be a valuable resource for some of the most popular TV productions, including recording recent commentary sessions for the legendary HBO series Game of Thrones and for the final season of Veep.

Especially noteworthy is colorist Ericson’s and finishing editor Mark Spano’s collaboration with Oscar-winning directors Karim Amer and Jehane Noujaim to bring to fruition the Netflix documentary The Great Hack.

Digital Arts also recently expanded its offerings to include production services. The company has already delivered projects for agencies Area 23, FCB Health and TCA.

“Digital Arts’ existing infrastructure was ideally suited to leverage itself into end-to-end production,” Donnelly says. “Now we can deliver from shoot to post.”

Tools employed across post include Avid Pro Tools, D-Control ES and S3 for audio post, and Avid Media Composer, Adobe Premiere and Blackmagic Resolve for editing. Color grading is via Resolve.

Main Image: (L-R) Frank Verderosa, Brian Beatrice and Gary Scarpulla


Blackmagic: Resolve 16.1 in public beta, updates Pocket Cinema Camera

Blackmagic Design has announced DaVinci Resolve 16.1, an updated version of its edit, color, visual effects and audio post software that features updates to the new cut page, further speeding up the editing process.

With Resolve 16, introduced at NAB 2019, now in final release, the Resolve 16.1 public beta is available for download from the Blackmagic Design website. This new public beta will help Blackmagic continue to develop new ideas while collaborating with users to ensure those ideas are refined for real-world workflows.

The Resolve 16.1 public beta features changes to the bin that now make it possible to place media in various folders and isolate clips from being used when viewing them in the source tape, sync bin or sync window. Clips will appear in all folders below the current level, and as users navigate around the levels in the bin, the source tape will reconfigure in real time. There’s even a menu for directly selecting folders in a user’s project.

Also new in this public beta is the smart indicator. The new cut page in DaVinci Resolve 16 introduced multiple new smart features, which work by estimating where the editor wants to add an edit or transition and then applying it without the editor having to waste time placing exact in and out points. The software guesses what the editor wants to do and just does it — it adds the insert edit or transition to the edit closest to where the editor has placed the CTI.

But a problem can arise in complex edits, where it is hard to know what the software would do and which edit it would place the effect or clip into. That’s the reason for the beta version’s new smart indicator. The smart indicator provides a small marker in the timeline so users get constant feedback and always know where DaVinci Resolve 16.1 will place edits and transitions. The new smart indicator constantly live-updates as the editor moves around the timeline.

One of the most common items requested by users was a faster way to cut clips in the timeline, so now DaVinci Resolve 16.1 includes a “cut clip” icon in the user interface. Clicking on it will slice the clips in the timeline at the CTI point.

Multiple changes have also been made to the new DaVinci Resolve Editor Keyboard, including a new adaptive scroll feature on the search dial, which will automatically slow down the jog when editors are hunting for an in point. The live trimming buttons have been renamed to the same labels as the functions in the edit page, and they have been changed to trim in, trim out, transition duration, slip in and slip out. The function keys along the top of the keyboard are now being used for various editing functions.

There are additional edit modes on the function keys, allowing users to access more types of editing directly from dedicated keys on the keyboard. There’s also a new transition window that uses the F4 key, and pressing and rotating the search dial allows instant selection from all the transition types in DaVinci Resolve. Users who need quick picture-in-picture effects can use F5 to apply them instantly.

Sometimes when editing projects with tight deadlines, there is little time to keep replaying the edit to see where it drags. DaVinci Resolve 16.1 features something called a Boring Detector that highlights the timeline where any shot is too long and might be boring for viewers. The Boring Detector can also show jump cuts, where shots are too short. This tool allows editors to reconsider their edits and make changes. The Boring Detector is helpful when using the source tape. In that case, editors can perform many edits without playing the timeline, so the Boring Detector serves as an alternative live source of feedback.
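
Blackmagic hasn’t published the thresholds, but the underlying check amounts to walking the timeline’s cuts and flagging shots outside a duration band. A simple Python sketch with arbitrary placeholder limits:

def boring_detector(cut_points_s, boring_s=20.0, jump_s=0.5):
    """Flag shots that run too long ("boring") or too short ("jump cut").
    cut_points_s: sorted edit points in seconds, including timeline start and end."""
    flags = []
    for start, end in zip(cut_points_s, cut_points_s[1:]):
        length = end - start
        if length > boring_s:
            flags.append((start, end, "boring"))
        elif length < jump_s:
            flags.append((start, end, "jump cut"))
    return flags

# boring_detector([0.0, 3.2, 3.5, 31.0, 35.0])
# -> [(3.2, 3.5, "jump cut"), (3.5, 31.0, "boring")]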

Another one of the most requested features of DaVinci Resolve 16.1 is the new sync bin. The sync bin is a digital assistant editor that constantly sorts through thousands of clips to find only what the editor needs and then displays them synced to the point in the timeline the editor is on. The sync bin will show the clips from all cameras on a shoot stacked by camera number. Also, the viewer transforms into a multi-viewer so users can see their options for clips that sync to the shot in the timeline. The sync bin uses date and timecode to find and sync clips, and by using metadata and locking cameras to time of day, users can save time in the edit.

According to Blackmagic, the sync bin changes how multi-camera editing can be completed. Editors can scroll off the end of the timeline and keep adding shots. When using the DaVinci Resolve Editor Keyboard, editors can hold the camera number and rotate the search dial to “live overwrite” the clip into the timeline, making editing faster.
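
Conceptually, the matching the sync bin performs is an interval lookup: given the playhead’s time-of-day timecode, return whichever clip on each camera spans that moment. A simplified Python sketch; the clip fields are invented for illustration:

from dataclasses import dataclass

@dataclass
class Clip:
    camera: int
    start_tc: float       # time-of-day start, in seconds
    duration: float       # in seconds

def sync_bin(clips, playhead_tc):
    """Return, per camera number, the clip that spans the playhead's timecode,
    mirroring how the sync bin stacks synced clips by camera."""
    matches = {}
    for clip in sorted(clips, key=lambda c: c.camera):
        if clip.start_tc <= playhead_tc < clip.start_tc + clip.duration:
            matches[clip.camera] = clip
    return matches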

The closeup edit feature has been enhanced in DaVinci Resolve 16.1. It now does face detection and analysis and will zoom the shot based on face positioning to ensure the person is nicely framed.

If pros are using shots from cameras without timecode, the new sync window lets them sort and sync clips from multiple cameras. The sync window supports sync by timecode and can also detect audio and sync clips by sound. These clips will display a sync icon in the media pool so editors can tell which clips are synced and ready for use. Manually syncing clips using the new sync window allows workflows such as multiple action cameras to use new features such as source overwrite editing and the new sync bin.

Blackmagic Pocket Cinema Camera
Besides releasing the DaVinci Resolve 16.1 public beta, Blackmagic also updated the Blackmagic Pocket Cinema Camera. Blackmagic not only upgraded the camera from 4K to 6K resolution, but it changed the mount to the much-used Canon EF style. Previous iterations of the Pocket Cinema Camera used a Micro 4/3s mount, but many users chose to purchase a Micro 4/3s-to-Canon EF adapter, which easily runs over $500 new. Because of the mount change in the Pocket Cinema Camera 6K, users can avoid buying the adapter and — if they shoot with Canon EF — can use the same lenses.

Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1

Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback, including switchable display from channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listen back and speaker sources/levels for Foley/ADR recording plus Dolby Atmos and other formats of immersive audio monitoring. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, with single-keypress access to editing plugins, writing session automation and other complex tasks. Adding joysticks, PEC/Direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation plus project automation and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and a numeric keypad. The Channel Strip Module (CSM) controls track levels, plugins and other parameters through eight channel faders, 32 top-lit knobs (four per channel) plus other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Dependent upon the number of Channel Strip Modules and other options, a customized S4 surface can be housed in either a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame, reaching up to an S4-24-fader, five-foot base system that includes three CSMs, one MTM, one MAM and filler panels/plates in a five-foot frame.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Skywalker Sound’s audio post mix for Toy Story 4

By Jennifer Walden

Pixar’s first feature-length film, 1995’s Toy Story, was a game-changer for animated movies. There was no going back after that blasted onto screens and into the hearts of millions. Fast-forward 24 years to the franchise’s fourth installment — Toy Story 4 — and it’s plain to see that Pixar’s approach to animated fare hasn’t changed.

Visually, Toy Story 4 brings so much to the screen, with its near-photorealistic imagery, interesting camera angles and variations in depth of field. “It’s a cartoon, but not really. It’s a film,” says Skywalker Sound’s Oscar-winning re-recording mixer Michael Semanick, who handled the effects/music alongside re-recording mixer Nathan Nance on dialogue/Foley.

Nathan Nance

Here, Semanick and Nance talk about their approach to mixing Toy Story 4, how they use reverb and Foley to bring the characters to life, and how they used the Dolby Atmos surround field to make the animated world feel immersive. They also talk about mixing the stunning rain scene, the challenges of mixing the emotional carnival scenes near the end and mixing the Bo Peep and Woody reunion scene.

Is your approach to mixing an animated film different from how you’d approach the mix on a live-action film? Mix-wise, what are some things you do to make an animated world feel like a real place?
Nathan Nance: The approach to the mix isn’t different. No matter if it’s an animated movie or a live-action movie, we are interested in trying to complement the story and direct the viewer’s attention to whatever the director wants their attention to be on.

With animation, you’re starting with just the ADR, and the approach to the whole sound job is different because you have to pick and choose every single sound and really create those environments. Even with the dialogue, we’re creating spaces with reverb (or lack of reverb) and helping the emotions of the story in the mix. You might not have the same options in a live-action movie.

Michael Semanick

Michael Semanick: I don’t approach a film differently. Live action or animated, it comes down to storytelling. In today’s world, some of these live-action movies are like animated films. And the animated films are like live-action. I’m not sure which is which anymore.

Whether it’s live action or animation, the sound team is creating the environments. Live-action films are often shot on a soundstage or against greenscreen, and the sound team creates those environments. On location, they try to get things as quiet as possible so the dialogue is as clean as possible. So, the sound team is essentially working with just dialogue and ADR.

It’s like an animation in that they need to recreate the entire environment. The production sound mixer is trying to capture the dialogue and not the extraneous sounds. The production sound mixer is there to capture the performance from the actors on that day at that time. Sometimes there are production effects, but the post sound team still preps the scene with sound effects, Foley and loop group. Then on the dub stage, we choose how much of that to put in.

For an animated film, they do the same thing. They prep a whole bunch of sounds and then on the dub stage we decide how busy we want the scene to be.

How do you use reverb to help define the spaces and make the animated world feel believable?
Semanick: Nathan really sets the tone when he’s doing the dialogue, defining how the environments and different spaces are going to sound. That works in combination with the background ambiences. It’s really the voice bouncing off objects that gives you the sense of largeness and depth of field. So reverb is really important in establishing the size of the room and also outdoors — how your voice slaps off a building versus how it slaps off of trees or mountains. Reverb is a really essential tool for creating the environments and spaces that you want to put your actors or characters in.

Nance: You can use reverb to try and make the spaces sound “real” — whatever that means for cinema. Or, you can use it to create something that’s more emotional or has a certain vibe. Reverb is really important for making the dry dialogue sound believable, especially in these Pixar films. They are all in on the environments they’ve created. They want it to sound real and really put the viewer there. But then, there are moments when we use reverb creatively to push the moment further and add to the emotional experience.

What are some other things you do mix-wise to help make this animated world feel believable?
Semanick: The addition of Foley helps ground a lot of the animation. Those natural sounds, like footsteps and movements, we take for granted — just walking down the street or sitting in a restaurant. Those become a huge part of these films. The Foley helps to ground the animation. It gives it life, something to hold onto.

Foley is a big part of making the animated world feel believable. You have Foley artists performing to the actual picture, and the way they put a cup down or how they come to a stop adds character to the sound. It can make it sound more human, more real. Really good Foley artists can become the character. They pick up on the nuances — like how the character drags their feet or puts down a cup. All those little things we take for granted but they are all part of our character. Maybe the way you hold a wine glass and set it down is different from how I would do it. So good Foley artists tune into that right away, and they’ll match it with their performance. They’ll put one edge of the cup down and then the other if that’s how the character does it. So Foley helps to ground a lot of the animation and the VFX to reality. It adds realism. Give it up for the Foley artists!

Nance: So many times the sounds that are in Foley are the ones we recognize and take for granted. You hear those little sounds and think, yeah, that’s exactly what that sounds like. It’s because the Foley artists perform it and these are sounds that you recognize from everyday life. That adds to the realism, like Michael said.

Mix-wise, it must have been pretty difficult to push the subtle sounds through a full mix, like the sounds of the little spork named Forky. What are some techniques and sound tools that help you to get these character sounds to cut through?
Semanick: Director Josh Cooley was very particular about the sounds Forky was going to make. Supervising sound editors Ren Klyce and Coya Elliott and their team went out and got a big palette of sounds for different things.

We weeded through them here with Josh and narrowed it down. Josh then kind of left it up to me. He said he just wanted to hear Forky when he needed to hear him and then not ever have to think about it. The problem with Forky is that if there’s too much sound for him then you’re constantly watching what he’s doing as opposed to listening to what he’s saying. I was very diligent about weeding things out a lot of the time and adding sounds in for the eye movements and other tiny, specific sounds. But there’s not much sound in there for him. It’s just the voice because often his sounds were getting in the way of the dialogue and being distracting. We were very diligent about choosing what to hear and not to hear. Josh was very particular about what those sounds should be. He had been working with Ren on those for a couple months.

In balancing a film (and particularly Toy Story 4 with so many characters and so much going on), you have to really pick and choose sounds. You don’t want to pull the audience away in a direction you don’t want. That was one of the main things for Forky — getting his sounds right.

The opening rain scene was stunning! What was your approach to mixing that scene? How did you use the Dolby Atmos surround field to enhance it?
Semanick: That was a tough scene to mix. There is a lot of rain coming down and the challenge was how to get clarity out of the scene and make sure the audience can follow what was happening. So the scene starts out with rain sounds, but during the action sequence there’s actually no rain in the track.

Amazingly, your human ears and your brain fill in that information. I establish the rain and then when the action starts I literally pull all of the rain out. But your mind puts the rain there still. You think you hear it but it’s actually not there. When the track gets quiet all of a sudden, I bring the rain back up so you never miss the rain. No one has ever said anything about not hearing the rain.

I love the sound of rain; don’t get me wrong. I love the sound of rain on windows, rain on cars, rain on metals… Ren and his team did such an amazing job with that. We had a huge palette of rain. But there’s a certain point in the scene where we need the audience to focus on all of the action that’s happening, what’s really going on.

There’s Woody and Slinky Dog being stretched and RC in the gutter, and all this. So when I put all of the sounds up there you couldn’t make out anything. It was confusing. So I pulled all of the rain out. Then we put in all of the specific sounds. We made sure all of the dialogue, music and sounds worked together so the audience could follow the action. Then I went back through and added the rain back in. When we didn’t need it, I drifted it out. And when we needed it, I brought it back in. It took a lot of time to do that and some careful balancing to make it work.

That was a fun thing to do, but it took time. We’re working on a movie that kids and adults are going to see. We didn’t want to make it too loud. We wanted to make it comfortable. But it’s an action scene, so you want it to be exciting. And it had to work with the music. We were very careful about how loud we made things. When things started to hurt, we pulled it all back. We were diligent about keeping control of the volume and getting those balances was very difficult. We don’t want to make it too quiet, but it’s exciting. If we make it too loud then that pushes you away and you don’t pay attention.

That scene was fun in Dolby Atmos. I had the rain all around the theater, in the ceiling. But it does go away and comes back in when needed. It was a fun thing to do.

Did you have a favorite scene for mixing in Atmos?
Semanick: One of my favorite scenes for Atmos was when Bo Peep takes Woody to the top of the carousel and she asks why Woody would ever want to stay with one kid when you can have all of this. I do a subtle thing with the music — there are a few times in the film where I do this — where I pull the music forward as they’re climbing to the top of the carousel. There’s no music in the surrounds or the tops. I pull it so far forward that it’s almost mono.

Then, as they pop up from atop the carousel and the camera sweeps around, I let the music open up. I bloom it into the surrounds and into the overheads. I bloom it really hard with the camera moves. If you’re paying attention, you will feel the music sweep around you. You’re just supposed to feel it, not to really know that it happened. That’s one of the mixing techniques that I learned over the years. The picture editor, Axel Geddes, would ask me to make it “magical” and put more “magic” into it. I started to interpret that as: fill up the surrounds more.

One of the best parts of Atmos is that you have surrounds that are the same as the front speakers so the sound doesn’t fall off. It’s more full-range because it has bass management toward the back. That helps me, mix-wise, to really bring the sound into the room and fill the room out when I need to do that. There are a few scenes like that and Nathan would look at me funny and say, “Wow, I really hear it.”

We’re so concentrated on the sound. I’m just hoping that the audience will feel it wrap around them and give them a good sense of warmth. I’m trying to help push the emotional content. The music was so good. Randy Newman did a great job on a lot of the music. It really helped the story and I wanted to help that be the best it could be emotionally. It was already there, but I just wanted to give that little extra. Pulling the music into the front and then pushing out into the whole theater gave the music an emotional edge.

Nance: There are a couple of fun Atmos moments for effects. When they’re in the dark closet and the sound is happening all around. Also, when Woody wakes up from his voice box removal surgery. Michael was bringing the sewing machine right up into the overheads. We have the pull string floating around the room and into the ceiling. Those two moments were a pretty cool use of the point-source and the enveloping capability of Atmos.

What was the most challenging scene to mix? Why?
Nance: The whole scene with the lost girl and Gabby all the way through the toys’ goodbyes. That was two full sections, but we get so quiet even though there’s a huge carnival happening. It was a huge cheat. It took a lot of work to get into these quiet, delicate moments where we take everything out, all the backgrounds, and it’s very simple. Michael pulled the music forward in some of those spots and the whole mix becomes very simple and quiet. You’re almost holding your breath in these different moments with the goodbyes. Sometimes we think of the really loud, bombastic scenes as being tough. And they were! The escape from the antique store took quite a lot of work to balance and shape. But I think the quiet, delicate scenes take more work because they take more shaping.

Semanick: I agree. Those areas were very difficult. There was a whole carnival going on and I had to strip it all down. I had my moments. When they’re together above the carnival, it looks beautiful up there. The carnival rides behind them are blurry and we didn’t need to hear the sounds. We heard them before. We know what they sound like. Plus, that moment was with the toys. We were just with them. The whole world has dissolved, and the sound of the world too. You see the carnival back there, but you’re not really paying attention to it. You’re paying attention to Woody and Bo Peep or Gabby and the lost girl.

Another interesting scene was when Woody and Forky first walk through the antique store. It was interesting how the tones in each place change and the reverbs on the voices change in every single room. Those scenes were interesting. The challenge was how to establish the antique store. It’s very quiet, so we were very specific on each cut. Where are they? What’s around them? How high is the camera sitting? You start looking closely at the scene. I was able to do things with Atmos, put things in the ceiling.

What scene went through the most evolution mix-wise? What were some of the different ways you tried mixing it? Ultimately, why did you go with the way it’s mixed in the final?
Semanick: There’s a scene when Woody and Bo Peep reunite on the playground. A little girl picks up Woody and she has Bo Peep in her hands. They meet again for the first time. That scene went through changes musically and dialogue-wise. What do we hear? How much of the girl do we hear before we see Bo Peep and Woody looking at each other? We tried several different ways. There were many opinions that came in on that. When does the music bloom? When does it fill the room out? Is the score quite right? They recut the score. They had a different version.

That scene went through quite a bit of ups and downs. We weren’t sure which way to go. Ultimately, Josh was happy with it, and it plays well.

There was another version of Randy’s score that I liked. But, it’s not about what I like. It’s about how the overall room feels — if everybody feels like it’s the best that we can do. If that’s yes, then that’s the way it goes. I’ll always speak up if I have ideas. I’ll say, “Think about this. Think about that.”

That scene went through some changes, and I’m still on the fence. It works great, but I know there’s another version of the music that I preferred. I’ll just have to live with that.

Nance: We just kept trying things out on that scene until we had it feeling good, like it was hitting the right beats. We had to figure out what the timing was, what would have the most emotional impact. That’s why we tried out so many different versions.

Semanick: That’s a big moment in the film. It’s what starts the back half of the film. Woody gets reacquainted with Bo Peep and then we’re off to the races.

What console did you mix Toy Story 4 on and why?
Semanick: We both mixed on the Neve DFC. It’s my console of choice. I love the console; I love the way it sounds. I love that it has separate automation. There’s the editor’s automation that they did. I can change my automation and that doesn’t affect their automation. It’s the best of both worlds. It runs really smoothly. It’s one of the best sounding consoles around.

Nance: I really enjoy working on the Neve DFC. It’s my console of choice when there’s the option.

Semanick: There are a lot of different consoles and control surfaces you can use now, but I’m used to the DFC. I can really play the console as a musical instrument. It’s like a performance. I can perform these balances. I can grab knobs and change EQ or add reverb and pull things back. It’s like a performance and that console seems the most reliable one for me. I know it really well. It helps when you know your instrument.

Any final thoughts you’d like to share on mixing Toy Story 4?
Semanick: With these Pixar films, I get to benefit from the great storytelling and what they’ve done visually. All the aspects of these films Pixar does — the cinematography down to the lighting down to the character development, the costumes and set design — they spent so many hours debating how things are going to look and the design.

So, on the sound side, it’s about matching what they’ve done. How can I help support it? It’s amazing to me how much time they spend on these films. It’s hardcore filmmaking. It’s a cartoon, but not really. It’s a film, and it’s a really good film. You look at all the aspects of it, like how the camera moves. It’s not a real camera but you’re watching through the lens, seeing the camera angles, where and how they place the camera. They have to debate all that.

One of the hardest scenes for them must have been when Bo Peep and Woody are in the antique store and they turn and look at all the chandeliers. It was gorgeous, a beautiful shot. I bloom the music out there, around the theater. That was a delicate scene. When you look at the filmmaking they’re doing there and the reflections of the lights, you know they’re good. They’re really good.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Audio houses Squeak E. Clean and Nylon Studios have merged

Music and sound studios Squeak E. Clean and Nylon Studios have merged to form Squeak E. Clean Studios. This union brings together a diverse roster of artists offering musical talent and exceptional audio production to agencies and brands. The company combines leadership from both former houses, with Nylon’s Hamish Macdonald serving as managing director and Nylon’s Simon Lister and Squeak E. Clean’s Sam Spiegel overseeing the company’s creative vision as co-executive creative directors. Nylon’s founding partner, David Gaddie, will become strategy partner.

The new Squeak E. Clean Studios has absorbed and operates all the existing studios of the former companies in Los Angeles, New York, Chicago, Austin, Sydney and Melbourne. Clients can now access a full range of services in every studio, including original composition, sound design and mix, music licensing, artist partnerships, experiential and spatial sound and sonic branding. Clients will also be able to license tracks from a vast, consolidated music catalog.

New York-based EP Christina Carlo is transferring to the West Coast to lead the Los Angeles studio alongside Amanda Patterson as senior producer. Deb Oh is executive producer of the New York studio, with Cindy Chao as head of sales. Squeak E. Clean Studios’ Sydney studio is led by executive creative producer Karla Henwood, Ceri Davies is EP of the Melbourne studio, and Jocelyn Brown is leading the Chicago location. The company is deeply committed to supporting the Free the Bid initiative, with three full-time female staff composers already on the roster.

“I always admired the ‘culture changing’ work that Squeak E. Clean Productions crafted, like the Adidas Hello Tomorrow spot with Karen O and Spike Jonze’s Kenzo World with Ape Drums (featuring Assassin),” says Lister. “These are truly the kind of jobs that are not just famous in advertising, but are part of our popular culture.”

“It’s exciting to be able to combine the revolutionary creativity of Squeak E. Clean with the outstanding post, creative music and exceptional client service that Nylon Studios has always offered at the highest level. We love what we do, and this collaboration is going to be an amazing opportunity for all of our artists and clients,” adds Spiegel. “As a combined force, we will make music and sound that people love.”

Main Image: (L-R) Hamish Macdonald, Simon Lister, Sam Spiegel
Image Credit: Shruti Ashok


KRK intros audio tools app to help Rokit G4 monitor setup

KRK Systems has introduced the KRK Audio Tools App for iOS and Android. This free suite of professional studio tools includes five professional analysis-based components that work with any monitor setup, and one tool (EQ Recommendation) that helps acclimate the new KRK Rokit G4 monitors to their individual acoustic environment.

In addition to the EQ Recommendation tool, the app also includes a Spectrum Real Time Analyzer (RTA), Level Meter, Delay and Polarity Analyzers, as well as a Monitor Align tool that helps users set their monitor positioning more accurately to their listening area. Within the app is a sound generator giving the user sound analysis options of sine, continuous sine sweep, white noise and pink noise—all of which can help the analysis process in different conditions.

“We wanted to build something game-changing for the new Rokit G4 line that enables our users to achieve better final mixes overall,” explains Rich Renken, product manager for the pro audio division of Gibson Brands, which owns KRK. “In terms of critical listening, the G4 monitors are completely different and a major upgrade from the previous G3 line. Our intentions with the EQ Recommendation tool are to suggest a flatter condition and help get the user to a better starting point. Ultimately, it still comes down to preference and using your musical ear, but it’s certainly great to have this feature available along with the others in the app.”

Five of the app tools work with any monitor setup. This includes the Level Meter, which assists with monitor level calibration to ensure all monitors are at the same dB level, as well as the Delay Analysis feature that helps calculate the travel time from each monitor to the user’s ears. Additionally, the app’s Polarity function verifies the correct wiring of monitors, minimizing the bass loss and skewed stereo imaging that result from monitors being out of phase, while the Spectrum RTA and Sound Generator are made for finding nuances in any environment.
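For readers curious about the math behind features like Delay Analysis and monitor alignment, the core of any such tool is simple time-of-flight arithmetic: sound covers roughly 343 meters per second, so each monitor’s distance fixes its arrival time at the listening position. The sketch below is purely illustrative; it is not KRK’s implementation, and the function names are invented.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at ~20 degrees C

def arrival_delay_ms(distance_m):
    """Milliseconds for sound to reach the listener from a monitor."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

def alignment_delays_ms(distances_m):
    """Delay to add per monitor so every arrival matches the farthest one."""
    latest = max(arrival_delay_ms(d) for d in distances_m)
    return [round(latest - arrival_delay_ms(d), 2) for d in distances_m]

# Left monitor 1.80 m from the ears, right monitor 2.10 m away:
print(alignment_delays_ms([1.80, 2.10]))  # [0.87, 0.0] -- delay the closer one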

Also included is a Monitor Alignment feature, which is used to determine the best placement of multiple monitors within proximity. This is accomplished by placing a smart device on each monitor separately and then rotating it to the correct angle. A sixth tool, exclusive to Rokit G4 users, is the EQ Recommendation tool that helps acclimate monitors to an environment by analyzing the app-generated pink noise and subsequently suggesting the best EQ preset, which is set manually on the back of the G4 monitors.

Creating and mixing authentic sounds for HBO’s Deadwood movie

By Jennifer Walden

HBO’s award-winning series Deadwood might have aired its final episode 13 years ago, but it’s recently found new life as a movie. Set in 1889 — a decade after the series finale — Deadwood: The Movie picks up the threads of many of the main characters’ stories and weaves them together as the town of Deadwood celebrates the statehood of South Dakota.

Deadwood: The Movie

The Deadwood: The Movie sound team.

The film, which aired on HBO and is available on Amazon, picked up eight 2019 Emmy nominations, including in the categories of sound editing, sound mixing and best television movie.

Series creator David Milch has returned as writer on the film. So has director Daniel Minahan, who helmed several episodes of the series. The film’s cast is populated by returning members, as is much of the crew. On the sound side, there are freelance production sound mixer Geoffrey Patterson; 424 Post’s sound designer, Benjamin Cook; NBCUniversal StudioPost’s re-recording mixer, William Freesh; and Mind Meld Arts’ music editor, Micha Liberman. “Series composers Reinhold Heil and Johnny Klimek — who haven’t been a composing team in many years — have reunited just to do this film. A lot of people came back for this opportunity. Who wouldn’t want to go back to Deadwood?” says Liberman.

Freelance supervising sound editor Mandell Winter adds, “The loop group used on the series was also used on the film. It was like a reunion. People came out of retirement to do this. The richness of voices they brought to the stage was amazing. We shot two days of group for the film, covering a lot of material in that limited time to populate Deadwood.”

Deadwood (the film and series) was shot on a dedicated film ranch called Melody Ranch Motion Picture Studio in Newhall, California. The streets, buildings and “districts” are always laid out the same way, which allowed the sound team to use a map of the town to orient sounds to each specific location and to the direction the camera is facing.

For example, there’s a scene in which the town bell is ringing. As the picture cuts to different locations, the ringing sound is panned to show where the bell is in relation to that location on screen. “We did that for everything,” says co-supervising sound editor Daniel Colman, who, along with Freesh and re-recording mixer John Cook, works at NBCUniversal StudioPost. “You hear the sounds of the blacksmith’s place coming from where it would be.”

“Or, if you’re close to the Chinese section of the town, then you hear that. If you were near the saloons, that’s what you hear. They all had different sounds that were pulled forward from the series into the film,” adds re-recording mixer Freesh.
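The bookkeeping behind that kind of map-driven panning is straightforward trigonometry: the pan angle for a fixed landmark is the bearing from the camera to the source minus the camera’s heading. Here is a hypothetical sketch of the idea, not the team’s actual tooling:

import math

def pan_azimuth_deg(cam_xy, cam_heading_deg, src_xy):
    """Angle of a sound source relative to the camera's facing direction.
    0 = dead ahead, positive = to the right, negative = to the left.
    Map convention: +y is north, headings run clockwise from north."""
    dx = src_xy[0] - cam_xy[0]
    dy = src_xy[1] - cam_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # world bearing of the source
    return (bearing - cam_heading_deg + 180.0) % 360.0 - 180.0

# Camera outside the saloon facing north; the blacksmith sits 30 m east
# and 10 m north, so its sounds pan about 72 degrees to the right:
print(round(pan_azimuth_deg((0.0, 0.0), 0.0, (30.0, 10.0)), 1))  # 71.6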

Many of the exterior and interior sounds on set were captured by Benjamin Cook, who was sound effects editor on the original Deadwood series. Since it’s a practical location, they had real horses and carriages that Cook recorded. He captured every door and many of the props. Colman says, “We weren’t guessing at what something sounded like; we were putting in the actual sounds.”

The street sounds were an active part of the ambience in the series, both day and night. There were numerous extras playing vendors plying their wares and practicing their crafts. Inside the saloons and out in front of them, patrons talked and laughed. Their voices — performed by the loop group in post — helped to bring Deadwood alive. “The loop group we had was more than just sound effects. We had to populate the town with people,” says Winter, who scripted lines for the loopers because they were played more prominently in the mix than what you’d typically hear. “Having the group play so far forward in a show is very rare. It had to make sense and feel timely and not modern.”

In the movie, the street ambience isn’t as strong a sonic component. “The town had calmed down a little bit as it’s going about its business. It’s not quite as bustling as it was in the series. So that left room for a different approach,” says Freesh.

The attenuation of street ambience was conducive to the cinematic approach that director Minahan wanted to take on Deadwood: The Movie. He used music to help the film feel bigger and more dramatic than the series, notes Liberman. Re-recording mixer John Cook adds, “We experimented a lot with music cues. We saw scenes take on different qualities, depending on whether the music was in or out. We worked hard with Dan [Minahan] to end up with the appropriate amount of music in the film.”

Minahan even introduced music on set by way of a piano player inside the Gem Saloon. Production sound mixer Patterson says, “Dan was very active on the set in creating a mood with that music for everyone that was there. It was part and parcel of the place at that time.”

Authenticity was a major driving force behind Deadwood’s aesthetics. Each location on set was carefully dressed with era-specific props, and the characters were dressed with equal care, right down to their accessories, tools and weapons. “The sound of Seth Bullock’s gun is an actual 1889 Remington revolver, and Calamity Jane’s gun is an 1860s Colt Army cavalry gun. We’ve made every detail as real and authentic as possible, including the train whistle that opens the film. I wasn’t going to just put in any train whistle. It’s the 1880s Black Hills steam engine that actually went through Deadwood,” reports Colman.

The set’s wooden structures and elevated boardwalk that runs in front of the establishments in the heart of town lent an authentic character to the production sound. The creaky wooden doors and thumpiness of footsteps across the raised wooden floors are natural sounds the audience would expect to hear from that environment. “The set for Deadwood was practical and beautiful and amazing. You want to make sure that you preserve that realness and let the 1800s noises come through. You don’t want to over-sterilize the tracks. You want them to feel organic,” says Patterson.

Freesh adds, “These places were creaky and noisy. Wind whistled through the windows. You just embrace it. You enhance it. That was part of the original series sound, and it followed through in the movie as well.”

The location was challenging due to its proximity to real-world civilization and all of our modern-day sonic intrusions, like traffic, airplanes and landscaping equipment from a nearby neighborhood. Those sounds have no place in the 1880s world of Deadwood, but “if we always waited for the moment to be perfect, we would never make a day’s work,” says Patterson. “My mantra was always to protect every precious word of David Milch’s script and to preserve the performances of that incredible cast.”

In the end, the modern-day noises at the location weren’t enough to require excessive ADR. John Cook says, “Geoffrey [Patterson] did a great job of capturing the dialogue. Then, between the choices the picture editors made for different takes and the work that Mandell [Winter] did, there were only one or two scenes in the whole movie that required extra attention for dialogue.”

Winter adds, “Even denoising the tracks, I didn’t take much out. The tracks sounded really good when they got to us. I just used iZotope RX 7 and did our normal pass with it.”

Any fan of Deadwood knows just how important dialogue clarity is since the show’s writing is like Shakespeare for the American West — with prolific profanity, of course. The word choices and their flow aren’t standard TV script fare. To help each word come through clearly, Winter notes they often cut in both the boom and lav mic tracks. This created nice, rich dialogue for John Cook to mix.

On the stage, John Cook used the FabFilter Pro-Q 2 to work each syllable, making sure the dialogue sounded bright and punchy and not too muddy or tubby. “I wanted the audience to hear every word without losing the dynamics of a given monologue or delivery. I wanted to maintain the dynamics, but make sure that the quieter moments were just as intelligible as the louder moments,” he says.

In the film, several main characters experience flashback moments in which they remember events from the series. For example, Al Swearengen (Ian McShane) recalls the death of Jen (Jennifer Lutheran) from the Season 3 finale. These flashbacks — or hauntings, as the post team refers to them — went through several iterations before the team decided on the most effective way to play each one. “We experimented with how to treat them. Do we go into the actor’s head and become completely immersed in the past? Or, do we stay in the present — wherever we are — and give it a slight treatment? Or, should there not be any sounds in the haunting? In the end, we decided they weren’t all going to be handled the same,” says Freesh.

Before coming together for the final mix on Mix 6 at NBCUniversal StudioPost on the Universal Studios Lot in Los Angeles, John Cook and Freesh pre-dubbed Deadwood: The Movie in separate rooms as they’d do on a typical film — with Freesh pre-dubbing the backgrounds, effects, and Foley while Cook pre-dubbed the dialogue and music.

The pre-dubbing process gave Freesh and John Cook time to get the tracks into great shape before meeting up for the final mix. Freesh concludes, “We were able to, with all the people involved, listen to the film in real good condition from the first pass down and make intelligent decisions based on what we were hearing. It really made a big difference in making this feel like Deadwood.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Creating Foley for FX’s Fosse/Verdon

Alchemy Post Sound created Foley for Fosse/Verdon, FX’s miniseries about choreographer Bob Fosse (Sam Rockwell) and his collaborator and wife, the singer/dancer Gwen Verdon (Michelle Williams). Working under the direction of supervising sound editors Daniel Timmons and Tony Volante, Foley artist Leslie Bloome and his team performed and recorded hundreds of custom sound effects to support the show’s dance sequences and add realistic ambience to its historic settings.

Spanning five decades, Fosse/Verdon focuses on the romantic and creative partnership between Bob Fosse and Gwen Verdon. The former was a visionary filmmaker and one of the theater’s most influential choreographers and directors, while the latter was one of the greatest Broadway dancers of all time.

Given the subject matter, it’s hardly surprising that post production sound was a crucial element in the series. For its many musical scenes, Timmons and Volante were tasked with conjuring intricate sound beds to match the choreography and meld seamlessly with the score. They also created dense soundscapes to back the very distinctive environments of film sets and Broadway stages, as well as a myriad of other exterior and interior locations.

For Timmons, the project’s mix of music and drama posed significant creative challenges but also a unique opportunity. “I grew up in upstate New York and originally hoped to work in live sound, potentially on Broadway,” he recalls. “With this show, I got to work with artists who perform in that world at the highest level. It was not so much a television show as a blend of Broadway music, Broadway acting and television. It was fun to collaborate with people who were working at the top of their game.”

The crew drew on an incredible mix of sources in assembling the sound. Timmons notes that to recreate Fosse’s hacking cough (a symptom of his overuse of prescription medicine), they pored through audio stems from the classic 1979 film All That Jazz. “Roy Scheider, who played Bob Fosse’s alter ego in the film, was unable to cough like him, so Bob went into a recording studio and did some of the coughing himself,” Timmons says. “We ended up using those old recordings along with ADR of Sam Rockwell. When Bob’s health starts to go south, some of the coughing you hear is actually him. Maybe I’m superstitious, but for me it helped to capture his identity. I felt like the spirit of Bob Fosse was there on the set.”

A large portion of the post sound effects were created by Alchemy Post Sound. Most notably, Foley artists meticulously reproduced the footsteps of dancers. Foley tap dancing can be heard throughout the series, not only in musical sequences, but also in certain transitions. “Bob Fosse got his start as a tap dancer, so we used tap sounds as a motif,” explains Timmons. “You hear them when we go into and out of flashbacks and interior monologues.” Along with Bloome, Alchemy’s team included Foley artist Joanna Fang, Foley mixers Ryan Collison and Nick Seaman, and Foley assistant Laura Heinzinger.

Ironically, Alchemy had to avoid delivering sounds that were “too perfect.” Fang points out that scenes depicting musical performances from films were meant to represent the production of those scenes rather than the final product. “We were careful to include natural background sounds that would have been edited out before the film was delivered to theaters,” she explains, adding that those scenes also required Foley to match the dancers’ body motion and costuming. “We spent a lot of time watching old footage of Bob Fosse talking about his work, and how conscious he was not just of the dancers’ footwork, but their shuffling and body language. That’s part of what made his art unique.”

Foley production was unusually collaborative. Alchemy’s team maintained a regular dialogue with the sound editors and were continually exchanging and refining sound elements. “We knew going into the series that we needed to bring out the magic in the dance sequences,” recalls production Foley editor Jonathan Fuhrer. “I spoke with Alchemy every day. I talked with Ryan and Nick about the tonalities we were aiming for and how they would play in the mix. Leslie and Joanna had so many interesting ideas and approaches; I was ceaselessly amazed by the thought they put into performances, props, shoes and surfaces.”

Alchemy also worked hard to achieve realism in creating sounds for non-musical scenes. That included tracking down props to match the series’ different time periods. For a scene set in a film editing room in the 1950s, the crew located a 70-year-old Steenbeck flatbed editor to capture its unique sounds. As musical sequences involved more than tap dancing, the crew assembled a collection of hundreds of pairs of shoes to match the footwear worn by individual performers in specific scenes.

Some sounds undergo subtle changes over the course of the series relative to the passage of time. “Bob Fosse struggled with addictions and he is often seen taking anti-depression medication,” notes Seaman. “In early scenes, we recorded pills in a glass vial, but for scenes in later decades, we switched to plastic.”

Such subtleties add richness to the soundtrack and help cement the character of the era, says Timmons. “Alchemy fulfilled every request we made, no matter how far-fetched,” he recalls. “The number of shoes that they used was incredible. Broadway performers tend to wear shoes with softer soles during rehearsals and shoes with harder soles when they get close to the show. The harder soles are more strenuous. So the Foley team was always careful to choose the right shoes depending on the point in rehearsal depicted in the scene. That’s accuracy.”

The extra effort also resulted in Foley that blended easily with other sound elements, dialogue and music. “I like Alchemy’s work because it has a real, natural and open sound; nothing sounds augmented,” concludes Timmons. “It sounds like the room. It enhances the story even if the audience doesn’t realize it’s there. That’s good Foley.”

Alchemy used Neumann KMR 81 and U 87 mics, Millennia mic pres and Apogee converters, with a C24 mixer into Avid Pro Tools.

Steinberg’s SpectraLayers Pro 6: visual audio editing with ARA support

Steinberg’s SpectraLayers Pro 6 audio editing software is now available. First distributed by Sony Creative Software and then by Magix Software, the developers behind SpectraLayers have joined forces with Steinberg to release its sixth iteration.

Unlike most audio editing tools, SpectraLayers offers a visual approach to audio editing, allowing users to visualize audio in the spectral domain (in 2D and 3D) and to manipulate its spectral data in many different ways. While many dedicated audio pros typically edit with their ears, this offering targets those who are more comfortable with visuals leading their editing decisions.

With its 25 advanced tools, SpectraLayers Pro 6 provides precision editing within the spectral domain, comparable to the editing capabilities of high-performance photo editing software: modification, selection, measurement and drawing. Think Adobe Photoshop for audio editing.
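The “spectral domain” here is the short-time Fourier transform (STFT) view: audio becomes a time/frequency image that can be edited like a picture and then resynthesized. Below is a minimal sketch of that round trip, illustrative only (the file name is hypothetical, and real spectral editors are far more sophisticated):

import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, x = wavfile.read("clip.wav")          # assume 16-bit mono input
x = x.astype(np.float64) / 32768.0

f, t, Z = stft(x, fs=rate, nperseg=2048)    # rows = frequencies, cols = time
Z[f > 8000.0, :] = 0.0                      # "erase" everything above 8 kHz,
                                            # as if painting over the image
_, y = istft(Z, fs=rate, nperseg=2048)      # resynthesize back to audio
wavfile.write("clip_edited.wav", rate, (y * 32767).astype(np.int16))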

The features newly introduced in SpectraLayers Pro 6 include ARA 2 support; in addition to the standalone application, Version 6 offers an ARA plug-in that seamlessly integrates into any ARA 2-compatible DAW, such as Nuendo and Cubase, where it can be used as a native editor. Fades along the selection border are one of the innovative features in SpectraLayers, and Pro 6 now includes visible fade masks and allows users to select from the many available fade types.

SpectraLayers’ advanced selection engine now features nine revamped selection tools — including the new Transient Selector — making selections more flexible. The new Move tool helps users transform audio intuitively: grab layers to activate them, then move or scale them. SpectraLayers Pro 6 also provides external editor integration, allowing users to hook in other editing software so that any selection can be processed there as well.

“This new version of SpectraLayers offers a refined and more intuitive user interface inspired by picture editors and a new selection system combining multiple fade masks, bringing spectral editing and remixing to a whole new level. We’re also excited by the possibilities unlocked by the new ARA connection between SpectraLayers, Cubase and Nuendo, bringing spectral mixing and editing right within your DAW,” says Robin Lobel, creator of SpectraLayers.

The user interface of SpectraLayers Pro 6 has been completely redesigned, building on the conventions of image editing software. The menus have been reorganized and the panels are collapsible; the Layers panel is customizable; and users can now refer to comprehensive tool tip documentation and a new user manual.

The full retail version of SpectraLayers Pro 6 is available as a download through the Steinberg Online Shop at the suggested retail price of $399.99, together with various downloadable updates from previous versions.

Behind the Title: Cinematic Media head of sound Martin Hernández

This audio post pro’s favorite part of the job is the start of a project — having a conversation with the producer and the director. “It’s exciting, like any new relationship,” he says.

Name: Martin Hernández

Job Title: Supervising Sound Editor

Company: Mexico City’s Cinematic Media

Can you describe Cinematic Media and your role there?
I lead a new sound post department at Cinematic Media, Mexico’s largest post facility focused on television and cinema. We take production sound through the full post process: effects, backgrounds, music editing… the whole thing. We finish the sound on our mix stages.

What would surprise people most about what you do?
We want the sound to go unnoticed. The viewer shouldn’t be aware that something has been added or is unnatural. If the viewer is distracted from the story by the sound, it’s a lousy job. It’s like an actor whose performance draws attention to himself. That’s bad acting. The same applies to every aspect of filmmaking, including sound. Sound needs to help the narrative in a subjective and quiet way. The sound should be unnoticed… but still eloquent. When done properly, it’s magical.

Hernández has been working on Easy for Netflix.

What’s your favorite part of the job?
Entering the project for the first time and having a conversation with the team: the producer and the director. It’s exciting, like any new relationship. It’s beautiful. Even if you’re working with people you’ve worked with before, the project is newborn.

My second favorite part is the start of sound production, when I have a picture but the sound is a blank page. We must consider what to add. What will work? What won’t? How much is enough or too much? It’s a lot like cooking. The dish might need more of this spice and a little less of that. You work with your ingredients, apply your personal taste and find the right flavor. I enjoy cooking sound.

What’s your least favorite part of the job?
Me.

What do you mean?
I am very hard on myself. I only see my shortcomings, which are, to tell you the truth, many. I see my limitations very clearly. In my perception of things, it is very hard to get where I want to go. Often you fail, but every once in a while, a few things actually work. That’s why I’m so stubborn. I know I am going to have a lot of misses, so I do more than expected. I will shoot three or four times, hoping to hit the mark once or twice. It’s very difficult for me to work with me.

What is your most productive time of the day?
In the morning. I’m a morning person. I work from my own place, very early, like 5:30am. I wake up thinking about things that I left behind in the session. It’s useless to remain in bed, so I go to my studio and start working on these ideas. It’s amazing how much you can accomplish between 6am and 9am. You have no distractions. No one’s calling. No emails. Nothing. I am very happy working in the mornings.

If you didn’t have this job, what would you be doing?
That’s a tough question! I don’t know anything else. Probably, I would cook. I’d go to a restaurant and offer myself as an intern in the kitchen.

For most people I know, their career is not something they’ve chosen; it was embedded in them when they were born. It’s a matter of realizing what’s there inside you and embracing it. I never, in my wildest dreams, expected to be doing this work.

When I was young, I enjoyed watching films, going to the movies, listening to music. My earliest childhood memories are sound memories, but I never thought that would be my work. It happened by accident. Actually, it was one accident after another. I found myself working with sound as a hobby. I really liked it, so I embraced it. My hobby then became my job.

So you knew early on that audio would be your path?
I started working in radio when I was 20. It happened by chance. A neighbor told me about a radio station that was starting up from scratch. I told my friend from school, Alejandro Gonzalez Iñárritu, the director. Suddenly, we’re working at a radio station. We’re writing radio pieces and doing production sound. It was beautiful. We had our own on-air, live shows. I was on in the mornings. He did the noon show. Then he decided to make films and I followed him.

Easy

What are some of your recent projects?
I just finished a series for Joe Swanberg, the third season of Easy. It’s on Netflix. It’s the fourth project I’ve done with Joe. I’ve also done two shows here in Mexico. The first one is my first full-time job as supervisor/designer for Argos, the company led by Epigmenio Ibarra. Yankee is our first series together for Netflix, and we’re cutting another one to be aired later in the year. It’s very exciting for me.

Is there a project that you’re most proud of?
I am very proud of the results that we’ve been getting on the first two series here in Mexico. We built the sound crew from scratch. Some are editors I’ve worked with before, but we’ve also brought in new talent. That’s a very joyful process. Finding talent is not easy, but once you do, it’s very gratifying. I’m also proud of this work because the quality is very good. Our clients are happy, and when they’re happy, I’m happy.

What pieces of technology can you not live without?
Avid Pro Tools. It’s the universal language for sound. It allows me to share sound elements and sessions from all over the world, just like we do locally, between editing and mixing stages. The second is my converter. We are using the Red system from Focusrite. It’s a beautiful machine.

This is a high-stress job with deadlines and client expectations. What do you do to de-stress from it all?
Keep working.

Mixing sounds of fantasy and reality for Rocketman

By Jennifer Walden

Paramount Pictures’ Rocketman is a musical fantasy about the early years of Elton John. The story is told through flashbacks, giving director Dexter Fletcher the freedom to bend reality. He blended memories and music to tell an emotional truth as opposed to delivering hard facts.

Mike Prestwood Smith

The story begins with Elton John (Taron Egerton) attending a group therapy session with other recovering addicts. Even as he’s sharing details of his life, he’s stretching the truth. “His recollection of the past is not reliable. He often fantasizes. He’ll say a truth that isn’t really the case, because when you flash back to his memory, it is not what he’s saying,” says BAFTA-winning re-recording mixer Mike Prestwood Smith, who handled the film’s dialogue and music. “So we’re constantly crossing the line of fantasy even in the reality sections.”

For Smith, finding the balance between fantasy and reality was what made Rocketman unique. There’s a sequence in which pre-teen Elton (Kit Connor) evolves into grown-up Elton to the tune of “Saturday Night’s Alright for Fighting.” It was a continuous shot: the camera tracks pre-teen Elton playing the piano, then he gets into a bar fight that spills into an alleyway that leads to a fairground where a huge choreographed dance number happens. Egerton (whose actual voice is featured) is singing the whole way, and there’s a full-on band under him, but specific effects from his surrounding environment poke through the mix. “We have to believe in this layer of reality that is gluing the whole thing together, but we never let that reality get in the way of enjoying the music.”

Smith helped the pre-recorded singing to feel in situ by adding different reverbs — like Audio Ease’s AltiVerb, Exponential Audio’s PhoenixVerb and Avid’s ReVibe. He created custom reverbs from impulse responses taken from the rooms on set to ground the vocal in that space and help sell the reality of it.
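Underneath those tools, convolution reverb is conceptually simple: convolve the dry vocal with an impulse response (IR) recorded in the target room. The sketch below shows only that core principle, with hypothetical file names; the plugins named above do far more.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Assumes 16-bit mono WAVs at the same sample rate with non-silent content.
rate, dry = wavfile.read("vocal_dry.wav")
_, ir = wavfile.read("room_ir.wav")
dry = dry.astype(np.float64) / 32768.0
ir = ir.astype(np.float64) / 32768.0

wet = fftconvolve(dry, ir)                        # the vocal "replayed" in the room
wet *= np.max(np.abs(dry)) / np.max(np.abs(wet))  # match peak level to the dry track

mix = 0.25                                        # wet/dry blend, to taste
out = (1.0 - mix) * np.pad(dry, (0, len(wet) - len(dry))) + mix * wet
wavfile.write("vocal_in_room.wav", rate, (out * 32767).astype(np.int16))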

For instance, when Elton is in the alleyway, Smith added a slap verb to Egerton’s voice to make it feel like it’s bouncing off the walls. “But once he gets into the main verses, we slowly move away from reality. There’s this flux between making the audience believe that this is happening and then suspending that belief for a bit so they can enjoy the song. It was a fine line and very subjective,” he says.

He and re-recording mixer/supervising sound editor Matthew Collinge spent a lot of time getting it to play just right. “We had to be very selective about the sound of reality,” says Smith. “The balance of that whole sequence was very complex. You can never do those scenes in one take.”

Another way Smith helped the pre-recorded vocals to sound realistic was by creating movement using subtle shifts in EQ. When Elton moves his head, Smith slightly EQ’d Egerton’s vocals to match. These EQ shifts “seem little, but collectively they have a big impact on selling that reality and making it feel like he’s actually performing live,” says Smith. “It’s one of those things that if you don’t know about it, then you just accept it as real. But getting it to sound that real is quite complicated.”

For example, there’s a scene in which Egerton is working out “Your Song,” and the camera cuts from upstairs to downstairs. “We are playing very real perspectives using reverb and EQ,” says Smith. Then, once Elton gets the song, he gives Bernie Taupin (Jamie Bell) a knowing look. The music gets fleshed out with a more complicated score, with strings and guitar. Next, Elton is recording the song in a studio. As he’s singing, he’s looking down and playing piano. Smith EQ’d all of that to add movement, so “it feels like that performance is happening at that time. But not one single sound of it is from that moment on set. There is a laugh from Bernie, a little giggle that he does, and that’s the only thing from the on-set performance. Everything else is manufactured.”
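One crude way to picture those movement-tracking EQ shifts: a voice loses high end as it turns off-axis, so a head turn can be faked by crossfading toward a low-passed copy of the vocal. The function below is an invented illustration of that idea, not Smith’s actual chain, which rides a parametric EQ.

import numpy as np
from scipy.signal import butter, lfilter

def head_turn_eq(vocal, rate, off_axis):
    """off_axis: automation curve in [0, 1], one value per sample
    (0 = facing the mic, 1 = fully turned away)."""
    b, a = butter(2, 4000.0 / (rate / 2.0))  # gentle high-cut around 4 kHz
    dull = lfilter(b, a, vocal)
    return (1.0 - off_axis) * vocal + off_axis * dull

# e.g., a one-second head turn: off_axis = np.linspace(0.0, 1.0, rate)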

In addition to EQ and reverb, Smith used plugins from Helsinki-based sound company Oeksound to help the studio recordings to sound like production recordings. In particular, Oeksound’s Spiff plugin was useful for controlling transients “to get rid of that close-mic’d sound and make it feel more like it was captured on set,” Smith says. “Combining EQ and compression and adding reverb helped the vocals to sound like sync, but at the same time, I was careful not to take away too much from the quality of the recording. It’s always a fine line between those things.”
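Spiff’s algorithm is proprietary, but the general shape of a transient controller can be sketched with two envelope followers: a fast one that jumps on attacks and a slow one that tracks the steady level, with gain ducked wherever the fast envelope overshoots the slow one. A naive, invented version:

import numpy as np

def soften_transients(x, rate, fast_ms=1.0, slow_ms=50.0, amount=0.5):
    """Attenuate attacks; amount in [0, 1] sets how hard they are ducked."""
    x = np.asarray(x, dtype=np.float64)

    def follower(ms):
        a = np.exp(-1.0 / (rate * ms / 1000.0))  # one-pole smoothing coefficient
        env, e = np.empty(len(x)), 0.0
        for i, s in enumerate(np.abs(x)):
            e = a * e + (1.0 - a) * s
            env[i] = e
        return env

    fast, slow = follower(fast_ms), follower(slow_ms)
    excess = np.clip(fast - slow, 0.0, None)       # attack energy above the bed
    gain = 1.0 - amount * excess / (fast + 1e-12)  # duck proportionally
    return x * gain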

The most challenging transitions were going from dialogue into singing. Such was the case with quiet moments like “Your Song” and “Goodbye Yellow Brick Road.” In the latter, Elton quietly sings to his reflection in a mirror backstage. The music slowly builds up under his voice as he takes off down the hallway and by the time he hops into a cab outside it’s a full-on song. Part of what makes the fantasy feel real is that his singing feels like sync. The vocals had to sound impactful and engage the audience emotionally, but at the same time they had to sound believable — at least initially. “Once you’re into the track, you have the audience there. But getting in and out is hard. The filmmakers want the audience to believe what they’re seeing, that Taron was actually in the situations surrounded by a certain level of reality at any given point, even though it’s a fantasy,” says Smith.

The “Rocketman” song sequence is different though. Reality is secondary and the fantasy takes control, says Smith. “Elton happens to be having a drug overdose at that time, so his reality becomes incredibly subjective, and that gives us license to play it much more through the song and his vocal.”

During “Rocketman,” Elton is sinking to the bottom of a swimming pool, watching a younger version of himself play piano underwater. On the music side, Smith was able to spread the instruments around the Dolby Atmos surround field, placing guitar parts and effect-like orchestrations into speakers discretely and moving those elements into the ceiling and walls. The bubble sound effects and underwater atmosphere also add to the illusion of being submerged. “Atmos works really well when you have quiet, and you can place sounds in the sound field and really hear them. There’s a lot of movement musically in Rocketman and it’s wonderful to have that space to put all of these great elements into,” says Smith.

That sequence ends with Elton coming on stage at Dodger Stadium and hitting a baseball into the massive crowd. The whole audience — 100,000 people — sing the chorus with him. “The moment the crowd comes in is spine-tingling. You’re just so with him at that point, and the sound and the music are doing all of that work,” he explains.

The Music
The music was a key ingredient to the success of Rocketman. According to Smith, they were changing performances from Egerton and also orchestrations right through the post sound mix, making sure that each piece was the best it could be. “Taron [Egerton] was very involved; he was on the dub stage a lot. Once everything was up on the screen, he’d want to do certain lines again to get a better performance. So, he did pre-records, on-set performances and post recording as well,” notes Smith.

Smith needed to keep those tracks live through the mix to accommodate the changes, so he and Collinge chose Avid S6 control surfaces and mixed in-the-box as opposed to printing the tracks for a mix on a traditional large-format console. “To have locked down the music and vocals in any way would have been a disaster. I’ve always been a proponent of mixing inside Pro Tools mainly because workflow-wise, it’s very collaborative. On Rocketman, having the tracks constantly addressable — not just by me but for the music editors Cecile Tournesac and Andy Patterson as well — was vital. We were able to constantly tweak bits and pieces as we went along. I love the collaborative nature of making and mixing sound for film, and this workflow allows for that much more so than any other. I couldn’t imagine doing this any other way,” says Smith.

Smith and Collinge mixed in native Dolby Atmos at Goldcrest London in Theatre 1 and Theatre 2, and also at Warner Bros. De Lane Lea. “It was such a tight schedule that we had all three mixing stages going for the very end of it, because it got a bit crazy as these things do,” says Smith. “All the stages we mixed at had S6s, and I just brought the drives with me. At one point we were print mastering and creating M&Es on one stage and doing some fold-downs on a different stage, all with the same session. That made it so much more straightforward and foolproof.”

As for the fold-down from Atmos to 5.1, Smith says it was nearly seamless. The pre-recorded music tracks were mixed by music producer Giles Martin at Abbey Road. Smith pulled those tracks apart, spread them into the Atmos surround field and then folded them down to 5.1. “Ultimately, the mixing that Giles Martin did at Abbey Road was a great thing because it meant the fold-downs really had the best backbone possible. Also, the way that Dolby has been tweaking their fold-down processing, it’s become something special. The fold-downs were a lot easier than I thought they’d be,” concludes Smith.
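Dolby’s Atmos-to-5.1 rendering is object-based and proprietary, but the flavor of a fold-down can be shown with a static bed example: channels that disappear are blended into their nearest surviving neighbors, conventionally at -3 dB so acoustic power is roughly preserved. A simplified 7.1.2-bed sketch, assumed purely for illustration:

import numpy as np

G = 1.0 / np.sqrt(2.0)  # -3 dB

def fold_712_to_51(bed):
    """bed: dict of channel name -> numpy array, all the same length.
    7.1.2 names assumed here: L R C LFE Lss Rss Lrs Rrs Ltm Rtm."""
    return {
        "L": bed["L"], "R": bed["R"], "C": bed["C"], "LFE": bed["LFE"],
        # side and rear surrounds collapse into one surround pair,
        # with the top (height) channels folded into the same pair
        "Ls": G * (bed["Lss"] + bed["Lrs"] + bed["Ltm"]),
        "Rs": G * (bed["Rss"] + bed["Rrs"] + bed["Rtm"]),
    }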


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Accusonus intros plugin bundles for sound and video editors

Accusonus is bringing its single-knob audio cleaning and noise reduction technology to its new ERA 4 Bundles for video editors, audio engineers and podcasters.

The ERA 4 Bundles (Enhancement and Repair of Audio) are a collection of single-knob audio cleaning plugins designed to reduce the complexity of the sound design and audio workflow without compromising sound quality or fidelity.

Accusonus says that its patented single-knob design appeals to professional editors, filmmakers and podcasters because it reduces the time-consuming audio repair workflow to a twist of a dial. Additionally, the ERA 4 Standard family of plugins enables aspiring content creators, YouTubers and film and audio students to quickly master audio workflows with minimal effort or expertise.

ERA 4 Bundles are available in two collections: The Standard Bundle and the Pro Bundle.

The ERA 4 Standard Bundle features audio cleaning plugins designed for speed and fidelity with minimal effort, even if users have never edited audio before. The Standard Bundle offers professional sound design and includes: Noise Remover, Reverb Remover, De-esser, Plosive Remover, Voice Leveler and De-clipper.

The ERA 4 Pro Bundle targets professional editors, audio engineers and podcasters in advanced post and music production environments. It includes all of the plugins from the Standard Bundle and adds the sophisticated ERA De-Esser Pro plugin. In addition to the large main knob, ERA De-Esser Pro offers extra controls for greater granularity and fine-tuning when fixing an especially rough recording.

The Accusonus ERA Bundle is fully supported by Avid Pro Tools 12.6 (or higher), Audacity 2.2.2, Apple Logic Pro 10.4.3 (or higher), Ableton Live 9 (or higher), Cockos Reaper v5.9, Image Line FL Studio 12, PreSonus Studio One 3 (or higher), Steinberg Cubase 8 (or higher), Adobe Audition CC 2017 (or higher) and Apple GarageBand 10.3.2.

The ERA Bundle supports Adobe Premiere CC 2017 (or higher), Apple Final Cut Pro X 10.4 (or higher), Blackmagic DaVinci Resolve 14 (or higher), Avid Media Composer 2018.12 and Magix Vegas Pro 15 (or higher).

The ERA 4 Standard Bundle is available at a special introductory price of $119 until July 31. After that, the price will be $149. The ERA 4 Pro Bundle is available at a special introductory price of $349 until July 31. After that, the price will be $499.

Picture Shop buys The Farm Group

Burbank’s Picture Shop has acquired UK-based The Farm Group. The Farm Group was founded in 1998 and currently has four locations in London, as well as facilities in Manchester, Bristol and Los Angeles.

The Farm, London

The Farm also operates the in-house post production teams for BBC Sport in Salford, England; UKTV; and Fremantle Media. This deal marks Picture Shop’s second international acquisition, following the deal it made for Vancouver’s Finalé Post earlier this year.

The founders of The Farm, Nicky Sargent and Vikki Dunn, will stay involved in The Farm Group. In a joint statement, Sargent and Dunn said, “We are delighted that after 20 successful years, we have a new partner. Picture Shop is poised to expand in the international post market and provide the combination of technical, creative and professional excellence to the world’s content creators.”

The duo will also re-invest in the expanded Picture Head Group, which includes Picture Head and audio post company Formosa Group, in addition to Picture Shop.

L-R: The Farm Group’s Nicky Sargent and Vikki Dunn.

Bill Romeo, president of Picture Shop, says, “Based on the amount of content being created internationally, we felt it was important to have a presence worldwide and support our clients’ needs. The Farm, based on its reputation and creative talent, will be able to maintain the philosophy of Picture Shop. It is a perfect fit. Our clients will benefit from our collaborative efforts internationally, as well as benefit from our technology and experience. We will continue to partner and support our clients while maintaining our boutique feel.”

Recent work from The Farm Group includes BBC Two’s Summer of Rockets, Sky One’s Jamestown and Britain’s Got Talent.

 

Andy Greenberg on One Union Recording’s fire and rebuild

San Francisco’s One Union Recording Studios has been serving the sound needs of ad agencies, game companies, TV and film producers, and corporate media departments in the Bay Area and beyond for nearly 25 years.

In the summer of 2017, the facility was hit by a terrible fire that affected all six of its recording studios. The company, led by president John McGleenan, immediately began an ambitious rebuilding effort, which it completed earlier this year. One Union Recording is now back up to full operation and its five recording studios, outfitted with the latest sound technologies including Dolby Atmos capability, are better than ever.

Andy Greenberg is One Union Recording’s facility engineer and senior mix engineer; he works alongside engineers Joaby Deal, Eben Carr, Matt Wood and Isaac Olsen. We recently spoke with Greenberg about the company’s rebuild and plans for the future.

Rebuilding the facility after the fire must have been an enormous task.
You’re not kidding. I’ve worked at One Union for 22 years, and I’ve been through every growth phase and upgrade. I was very proud of the technology we had in place in 2017. We had six rooms, all cutting-edge. The software was fully up to date. We had few if any technical problems and zero downtime. So, when the fire hit, we were devastated. But John took a very business-oriented approach to it, and within a few days he was formulating a plan. He took it as an opportunity to implement new technology, like Dolby Atmos, and to grow. He turned sadness into enthusiasm.

How did the facility change?
Ironically, the timing was good. A lot of new technology had just come out that I was very excited about. We were able to consolidate what were large systems into smaller units while increasing quality 10-fold. We moved leaps and bounds beyond where we had been.

Prior to the fire, we were running Avid Pro Tools 12.1. Now we’re on Pro Tools Ultimate. We had just purchased four Avid/Euphonix System 5 digital audio consoles with extra DSP in March of 2017 but had not had time to install them before the fire due to bookings. These new consoles are super powerful. Our number of inputs and outputs quadrupled. The routing power and the bus power are vastly improved. It’s phenomenal.

We also installed Avid MTRX, an expandable interface designed in Denmark and very popular now, especially for Atmos. The box feels right at home with the Avid S5 because it’s MADI-based and takes the physical outputs of our Pro Tools systems up to 64 or 128 channels.

That’s a substantial increase.
A lot of delivered projects use from two to six channels. Complex projects might go to 20. Being able to go far beyond that increases the power and flexibility of the studio tremendously. And then, of course, our new Atmos room requires that kind of channel count to work in immersive surround sound.

What do you do for data storage?
Even before the fire, we had moved to a shared storage network solution. We had a very strong infrastructure and workflow in terms of data storage, archiving and the ability to recall sessions. Our new infrastructure includes 40TB of active storage of client data. Forty terabytes is not much for video, but for audio, it’s a lot. We also have 90TB of instantly recallable data.

We have client data archived back 25 years, and we can have anything online in any room in just a few minutes. It’s literally drag and drop. We pride ourselves on maintaining triple redundancy in backups. Even during the fire, we didn’t lose any client data because it was all backed up on tape and off site. We take backup and data security very seriously. Backups happen automatically every day… actually, every three hours.

What are some of the other technical features of the rebuilt studios?
There’s actually a lot. For example, our rooms — including the two Dolby-certified Atmos rooms — have new Genelec SAM studio monitors. They are “smart” speakers that are self-tuning. We can run some test tones and in five minutes the rooms are perfectly tuned. We have custom tunings set up for 5.1 and Atmos. We can adjust the tuning via computer and the speakers have built-in DSP, so we don’t have to rely on external systems.

Another cool technology that we are using is Dante, which is part of the Avid MTRX interface. Dante is basically audio-over-IP, or audio-over-Cat6. It essentially replaced our AES router. We were one of the first facilities in San Francisco to have a full audio AES router, and it was very strong for us at the time. It was a 64×64 stereo-paired AES router. It has been replaced by the MTRX interface box that has, believe it or not, a three-inch by two-inch card that handles 64×64 routing per room. So, each room now has its own 64×64 matrix, routing capability that used to be shared by the whole facility.
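To picture what a 64×64 crosspoint matrix per room means in practice, here is a toy software model of such a router. The class, its names and its one-source-per-output rule are illustrative assumptions only; real Dante routing is configured through Dante Controller, not code like this:

```python
class CrosspointRouter:
    """Minimal model of a 64x64 crosspoint matrix like the one the
    MTRX card provides per room. Purely illustrative."""

    def __init__(self, n_inputs=64, n_outputs=64):
        self.n_inputs, self.n_outputs = n_inputs, n_outputs
        self.patch = {}  # output index -> input index

    def connect(self, src, dst):
        """Patch input `src` to output `dst` (one source per output)."""
        if not (0 <= src < self.n_inputs and 0 <= dst < self.n_outputs):
            raise ValueError("crosspoint out of range")
        self.patch[dst] = src

    def source_for(self, dst):
        return self.patch.get(dst)  # None if the output is unpatched

room_a = CrosspointRouter()
room_a.connect(src=12, dst=0)  # e.g. an ISDN return to a monitor feed
assert room_a.source_for(0) == 12
```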

We use Dante to route secondary audio, like our ISDN and web-based IP communication devices. We can route signals from room to room and over the web securely. It’s seamless, and it comes up right on your computer. It’s amazing technology. The other day, I did a music session and used a 96K sample rate, which is very high. The quality of the headphone mix was astounding. Everyone was happy, and it took just one quick setting and we were off and running. The sound is fantastic, and there are no noise or latency problems. It’s super-clean, super-fast and easy to use.

What about video monitoring?
We have 4K monitors and 4K projection in all the rooms via Sony XBR 55A1E Bravia OLED monitors, Sony VPL-VW885ES True 4K Laser Projectors and a DLP 4K550 projector. Our clients appreciate the high-quality images and the huge projection screens.

London’s Media Production Show: technology for content creation

By Mel Lambert

The fourth annual Media Production Show, held June 11-12 at Olympia West, London, once again attracted a wide cross section of European production, broadcast, post and media-distribution pros. According to its organizers, the two-day confab drew 5,300 attendees and “showcased the technology and creativity behind content creation,” focusing on state-of-the-art products and services. The full program of standing-room-only discussion seminars covered a number of contemporary topics, while 150-plus exhibitors presented wares from the media industry’s leading brands.

The State of the Nation: Post Production panel.

During a session called “The State of the Nation: Post Production,” Rowan Bray, managing director of Clear Cut Pictures, said that “while [wage and infrastructure] costs are rising, our income is not keeping up.” And with salaries, facility rent and equipment amortization representing 85% of fixed costs, “it leaves little over for investment in new technology and services. In other words, increasing costs are preventing us from embracing new technologies.”

Focusing on the long-term economic health of the UK post industry, Bray pointed out that few post facilities in London’s Soho area are changing hands, which she says “indicates that this is not a healthy sector [for investment].”

“Several years ago, a number of US companies [including Technicolor and Deluxe] invested £100 million [$130 million] in Soho; they are now gone,” stated Ian Dodd, head of post at Dock10.

Some 25 years ago, there were at least 20 leading post facilities in London. “Now we have a handful of high-end shops, a few medium-sized ones and a handful of boutiques,” Dodd concluded. Other panelists included Cara Kotschy, managing director of Fifty Fifty Post Production.

The Women in Sound panel

During his keynote presentation called “How we made Bohemian Rhapsody,” leading production designer Aaron Haye explained how the film’s large stadium concert scenes were staged and supplemented with high-resolution CGI; he is currently working on Charlie’s Angels (2019) with director/actress Elizabeth Banks.

The panel discussion “Women in Sound” brought together a trio of re-recording mixers with divergent secondary capabilities and experience. Participants were Emma Butt, a freelance mixer who also handles sound editorial and ADR recordings; Lucy Mitchell, a freelance sound editor and mixer; plus Kate Davis, head of sound at Directors Cut Films. As the audience discovered, their roles in professional sound differ. While exploring these differences, the panel revealed helpful tips and tricks for succeeding in the post world.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

iZotope’s Neutron 3 streamlines mix workflows with machine learning

iZotope, makers of the RX audio tools, has introduced Neutron 3, a plug-in that — thanks to advances in machine learning — listens to the entire session and communicates with every track in the mix. Mixers can use Neutron 3’s new Mix Assistant to create a balanced starting point for an initial-level mix built around their chosen focus, saving time and energy when making creative mix decisions. Once a focal point is defined, Neutron 3 automatically sets levels before the mixer ever has to touch a fader.

Neutron 3 also has a new module called Sculptor (available in Neutron 3 Standard and Advanced) for sweetening, fixing and creative applications. Using never-before-seen signal processing, Sculptor works like a per-band army of compressors and EQs to shape any track. It also communicates with Track Assistant to understand each instrument and gives realtime feedback to help mixers shape tracks to a target EQ curve or experiment with new sounds.
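iZotope doesn’t publish Sculptor’s signal path, but “a per-band army of compressors and EQs” is recognizably a multiband dynamics structure. As a rough, hypothetical sketch of that general technique in Python (NumPy/SciPy assumed; nothing here is iZotope’s actual code):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_compress(x, sr, edges=(120, 500, 2000, 6000),
                       threshold_db=-24.0, ratio=3.0):
    """Very rough per-band compression: split a float signal into
    bands, follow each band's envelope, and turn down the loud parts.
    A sketch of the general technique only, not Sculptor itself."""
    bands, lo = [], 20.0
    for hi in list(edges) + [0.45 * sr]:
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
        lo = hi

    out = np.zeros_like(bands[0])
    for band in bands:
        # Crude envelope follower: rectify, then a decaying peak hold
        # (a plain Python loop: slow but easy to read).
        env = np.abs(band)
        alpha = np.exp(-1.0 / (0.030 * sr))  # roughly 30 ms release
        for i in range(1, len(env)):
            env[i] = max(env[i], alpha * env[i - 1])
        level_db = 20 * np.log10(np.maximum(env, 1e-9))
        over = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)  # downward compression
        out += band * 10 ** (gain_db / 20)
    return out
```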

In addition, Neutron 3 includes many new improvements and enhancements based on feedback from the community, such as the redesigned Masking Meter that automatically flags masking issues and allows them to be fixed from a convenient one-window display. This improvement prevents tracks from stepping on each other and muddying the mix.

Neutron 3 has also had a major performance overhaul, with faster processing, faster load times and smoother metering. Sessions with multiple instances of Neutron open much more quickly, and refresh rates for visualizations have doubled.

Other Neutron 3 Features
• Visual Mixer and iZotope Relay: Users can launch Mix Assistant directly from Visual Mixer and move tracks in a virtual space, tapping into iZotope-enabled inter-plug-in communication
• Improved interface: Smooth visualizations and a resizable interface
• Improved Track Assistant listens to audio and creates a custom preset based on what it hears
• Eight plug-ins in one: Users can build a signal chain directly within one highly connected, intelligent interface with Sculptor, EQ with Soft Saturation mode, Transient Shaper, 2 Compressors, Gate, Exciter, and Limiter
• Component plug-ins: Users can control Neutron’s eight modules as a single plug-in or as eight individual plug-ins
• Tonal Balance Control: Updated to support Neutron 3
• 7.1 Surround sound support and zero-latency mode in all eight modules for professional, lightweight processing for audio post or surround music mixes

Visual Mixer and iZotope Relay will be included free with all Neutron 3 Advanced demo downloads. In addition, Music Production Suite 2.1 will now include Neutron 3 Advanced, and iZotope Elements Suite will be updated to include Neutron Elements (v3).

Neutron 3 will be available in three different options — Neutron Elements, Neutron 3 Standard and Neutron 3 Advanced. See the comparison chart for more information on what features are included in each version.

Neutron will be available June 30. Check out the iZotope site for pricing.

Sound Lounge ups Becca Falborn to EP 

New York’s Sound Lounge, an audio post house that provides sound services for advertising, television and feature films, has promoted Becca Falborn to executive producer.

In her new role, Falborn will manage the studio’s advertising division and supervise its team of producers. She will also lead client relations and sales. Additionally, she will manage Sound Lounge Everywhere, the company’s remote sound services offering, which currently operates in Boston and Boulder, Colorado.

“Becca is a smart, savvy and passionate producer, qualities that are critical to success in her new role,” said Sound Lounge COO and partner Marshall Grupp. “She has developed an excellent rapport with our team of mixers and clients and has consistently delivered projects on time and on budget, even under the most challenging circumstances.”

Falborn joined Sound Lounge in 2017 as a producer and was elevated to senior producer last year. She has produced voiceover recordings, sound design, and mixing for many advertising projects, including seven out of the nine spots produced by Sound Lounge that debuted during this year’s Super Bowl telecast.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including past positions with the post house Nice Shoes and the marketing agency Hogarth Worldwide.

Sugar Studios LA gets social for celebrity-owned Ladder supplement

Sugar Studios LA completed a social media campaign for Ladder perfect protein powder and clean energy booster supplements starring celebrity founders Arnold Schwarzenegger, LeBron James, DJ Khaled, Cindy Crawford and Lindsey Vonn. The playful ad campaign focuses on social media, foregoing the usual TV commercial push and pitching the protein powder directly to consumers.

One spot shows Arnold in the gym annoyed by a noisy dude on the phone, prompting him to turn up his workout soundtrack. Then DJ Khaled is scratching encouragement for LeBron’s workout until Arnold drowns them out with his own personal live oompah band.

The ads were produced and directed by longtime Schwarzenegger collaborator Peter Grigsby, while Sugar Studios’ editor Nico Alba (Chevrolet, Ferrari, Morongo Casino, Mattel) cut the project using Adobe Premiere. When asked about using random spot lengths, as opposed to traditional :15s, :30s, and :60s, Alba explains, “Because it’s social media, we’re not always bound to those segments of time anymore. Basically, it’s ‘find the story,’ and because there are no rules, it makes the storytelling more fun. It’s a process of honing everything down without losing the rhythm or the message and maintaining a nice flow.”

Nico Alba and Jijo Reed. Credit: David Goggin

“Peter Grigsby requested a skilled big-brand commercial editor on this campaign,” says Sugar Studios’ Jijo Reed. “Nico was the perfect fit to create that rhythm and flow that only a seasoned commercial editor could bring to the table.”

“We needed a heavyweight gym ambience to set the stage,” says Alba, who worked closely with sound design/mixers Bret Mazur and Troy Ambroff to complement his editing. “It starts out with a barrage of noisy talking and sounds that really irritate Arnold, setting up the dueling music playlists and the sonic payoff.”

The audio team mixed and created sound design with Avid Pro Tools Ultimate. Audio plugins called on include the Waves Mercury bundle, DTS Surround tools and iZotope RX 7 Advanced.

The Sugar team also created a cinematic look for the spots, thanks to colorist Bruce Bolden, who called on Blackmagic DaVinci Resolve and a Sony BVM OLED monitor. “He’s a veteran feature film colorist,” says Reed, “so he often brings that sensibility to advertising spots as well, meaning rich blacks and nice, even color palettes.”

Storage used at the studio is Avid Nexis and Facilis Terrablock.

Human opens new Chicago studio

Human, an audio and music company with offices in New York, Los Angeles and Paris, has opened a Chicago studio headed up by veteran composer/producer Justin Hori.

As a composer, Hori’s work has appeared in advertising, film and digital projects. “Justin’s artistic output in the commercial space is prolific,” says Human partner Gareth Williams. “There’s equal parts poise and fun behind his vision for Human Chicago. He’s got a strong kinship and connection to the area, and we couldn’t be happier to have him carve out our footprint there.”

From learning to DJ at age 13 to working at Gramaphone Records to studying music theory and composition at Columbia College, Hori’s immersion in the Chicago music scene has always influenced his work. He began his career at com/track and Comma Music, before moving to open Comma’s Los Angeles office. From there, Hori joined Squeak E Clean, where he served as creative director for the past five years. He returned to Chicago in 2016.

Hori is known for producing unexpected yet perfectly spot-on pieces of music for advertising, including his track “Da Diddy Da,” which was used in the four-spot summer 2018 Apple iPad campaign. His work has won top industry honors including D&AD Pencils, The One Show, Clio and AICP Awards and the Cannes Gold Lion for Best Use of Original Music.

Meanwhile, Post Human, the audio post sister company run by award-winning sound designer and engineer Sloan Alexander, continues to build momentum with the addition of a second 5.1 mixing suite in NYC. Plans for similar build-outs in both LA and Chicago are currently underway.

With services spanning composition, sound design and mixing, Human works in advertising, broadcast, digital and film.

NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.

Creating audio for the cinematic VR series Delusion: Lies Within

By Jennifer Walden

Delusion: Lies Within is a cinematic VR series from writer/director Jon Braver. It is available on the Samsung Gear VR, Oculus Go and Oculus Rift platforms. The story follows a reclusive writer named Elena Fitzgerald who penned a series of popular fantasy novels, but before the final book in the series was released, the author disappeared. Rumors circulated about the author’s insanity and supposed murder, so two avid fans decide to break into her mansion to search for answers. What they find are Elena’s nightmares come to life.

Delusion: Lies Within is based on an interactive play written by Braver and Peter Cameron. Interactive theater isn’t your traditional butts-in-the-seat passive viewing-type theater. Instead, the audience is incorporated into the story. They interact with the actors, search for objects, solve mysteries, choose paths and make decisions that move the story forward.

Like a film, the theater production is meticulously planned out, from the creature effects and stunts to the score and sound design. With all these components already in place, Delusion seemed like the ideal candidate to become a cinematic VR series. “In terms of the visuals and sound, the VR experience is very similar to the theatrical experience. With Delusion, we are doing 360° theater, and that’s what VR is too. It’s a 360° format,” explains Braver.

While the intent was to make the VR series match the theatrical experience as much as possible, there are some important differences. First, immersive theater allows the audience to interact with the actors and objects in the environment, but that’s not the case with the VR series. Second, the live theater show has branching story narratives and an audience member can choose which path he/she would like to follow. But in the VR series there’s one set storyline that follows a group exploring the author’s house together. The viewer feels immersed in the environment but can’t manipulate it.

L-R: Hamed Hokamzadeh and Thomas Ouziel

According to supervising sound editor Thomas Ouziel from Hollywood’s MelodyGun Group, “Unlike many VR experiences where you’re kind of on rails in the midst of the action, this was much more cinematic and nuanced. You’re just sitting in the space with the characters, so it was crucial to bring the characters to life and to design full sonic spaces that felt alive.”

In terms of workflow, MelodyGun sound supervisor/studio manager Hamed Hokamzadeh chose to use the Oculus Development Kit 2 headset with Facebook 360 Spatial Workstation on Avid Pro Tools. “Post supervisor Eric Martin and I decided to keep everything within FB360 because the distribution was to be on a mobile VR platform (although it wasn’t yet clear which platform), and FB360 had worked for us marvelously in the past for mobile and Facebook/YouTube,” says Hokamzadeh. “We initially concentrated on delivering B-format (2nd Order AmbiX) playing back on Gear VR with a Samsung S8. We tried both the Audio-Technica ATH-M50 and Shure SRH840 headphones to make sure it translated. Then we created other deliverables: quad-binaurals, .tbe, 8-channel and a stereo static mix. The non-diegetic music and voiceover were head-locked and delivered in stereo.”
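For readers unfamiliar with the format: AmbiX is ambisonics with ACN channel ordering and SN3D normalization, and encoding a mono source into it is just a set of gain equations. Here is a minimal first-order sketch in Python (the show delivered 2nd-order, which adds five more channels built on the same idea; the function name is ours, not FB360’s):

```python
import numpy as np

def encode_ambix_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order AmbiX: ACN channel
    order (W, Y, Z, X) with SN3D normalization."""
    az = np.radians(azimuth_deg)   # 0 = front, positive = to the left
    el = np.radians(elevation_deg)
    w = mono * 1.0
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    x = mono * np.cos(az) * np.cos(el)
    return np.stack([w, y, z, x])

# A 1 kHz tone placed 90 degrees to the left, at ear height:
sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
bformat = encode_ambix_foa(np.sin(2 * np.pi * 1000 * t), 90, 0)
```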

From an aesthetic perspective, the MelodyGun team wanted to have a solid understanding of the audience’s live theater experience and the characters themselves “to make the VR series follow suit with the world Jon had already built. It was also exciting to cross our sound over into more of a cinematic ‘film world’ than was possible in the live theatrical experience,” says Hokamzadeh.

Hokamzadeh and Ouziel assigned specific tasks to their sound team — Xiaodan Li was focused on sound editorial for the hard effects and Foley, and Kennedy Phillips was asked to design specific sound elements, including the fire monster and the alchemist freezing.

Ouziel, meanwhile, had his own challenges of both creating the soundscape and integrating the sounds into the mix. He had to figure out how to make the series sound natural yet cinematic, and how to use sound to draw the viewer’s attention while keeping the surrounding world feeling alive. “You have to cover every movement in VR, so when the characters split up, for example, you want to hear all their footsteps, but we also had to get the audience to focus on a specific character to guide them through. That was one of the biggest challenges we had while mixing it,” says Ouziel.

The Puppets
“Chapter Three: Trial By Fire” provides the best example of how Ouziel tackled those challenges. In the episode, Virginia (Britt Adams) finds herself stuck in Marion’s chamber. Marion (Michael J. Sielaff) is a nefarious puppet master who is clandestinely controlling a room full of people on puppet strings; some are seated at a long dining table and others are suspended from the ceiling. They’re all moving their arms as if dancing to the scratchy song that’s coming from the gramophone.

The sound for the puppet people needed to have a wiry, uncomfortable feel and the space itself needed to feel eerily quiet but also alive with movement. “We used a grating metallic-type texture for the strings so they’d be subconsciously unnerving, and mixed that with wooden creaks to make it feel like you’re surrounded by constant danger,” says Ouziel.

The slow wooden creaks in the ambience reinforce the idea that an unseen Marion is controlling everything that’s happening. Braver says, “Those creaks in Marion’s room make it feel like the space is alive. The house itself is a character in the story. The sound team at MelodyGun did an excellent job of capturing that.”

Once the sound elements were created for that scene, Ouziel then had to space each puppet’s sound appropriately around the room. He also had to fill the room with music while making sure it still felt like it was coming from the gramophone. Ouziel says, “One of the main sound tools that really saved us on this one was Audio Ease’s 360pan suite, specifically the 360reverb function. We used it on the gramophone in Marion’s chamber so that it sounded like the music was coming from across the room. We had to make sure that the reflections felt appropriate for the room, so that we felt surrounded by the music but could clearly hear the directionality of its source. The 360pan suite helped us to create all the environmental spaces in the series. We pretty much ran every element through that reverb.”
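Audio Ease doesn’t document 360reverb’s internals, but the cue Ouziel describes, clear directionality with room-appropriate reflections, rests on a simple relationship: the direct sound weakens and arrives later with distance, while the diffuse reverb stays comparatively steady. A hypothetical Python sketch of just that balance (names and numbers are ours):

```python
import numpy as np

def place_at_distance(dry, wet, distance_m, ref_m=1.0, sr=48000):
    """Crude distance cue: scale the direct sound by roughly
    1/distance and delay it by the propagation time, while leaving
    the reverb-only signal (`wet`) at a steady level. `dry` and
    `wet` are equal-length mono float arrays."""
    direct_gain = min(ref_m / distance_m, 1.0)
    delay = int(sr * distance_m / 343.0)  # speed of sound in air
    direct = np.zeros_like(dry)
    direct[delay:] = dry[: len(dry) - delay] * direct_gain
    # Farther away: quieter, later direct sound over unchanged
    # reverb, which reads as "across the room."
    return direct + wet
```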

L-R: Thomas Ouziel and Jon Braver.

Hokamzadeh adds, “The session got big quickly! Imagine over 200 AmbiX tracks, each with its own 360 spatializer and reverb sends, plus all the other plug-ins and automation you’d normally have on a regular mix. Because things never go out of frame, you have to group stuff to simplify the session. It’s typical to make groups for different layers like footsteps, cloth, etc., but we also made groups for all the sounds coming from a specific direction.”

The 360pan suite reverb was also helpful on the fire monster’s sounds. The monster, called Ember, was sound designed by Phillips. He took an organic approach akin to the one used for the bear monster in Annihilation, in that Ember felt half-human, half-creature. Phillips edited together various bellowing fire elements that sounded like breathing and then manipulated those to match Ember’s tormented movements. Her screams also came from a variety of natural screams mixed with different fire elements so that it felt like there was a scared young girl hidden deep in this walking heap of fire. Ouziel explains, “We gave Ember some loud sounds but we were able to play those in the space using the 360pan suite reverb. That made her feel even bigger and more real.”

The Forest
The opening forest scene was another key moment for sound. The series is set in South Carolina in 1947, and the author’s estate needed to feel like it was in a remote area surrounded by lush, dense forest. “With this location comes so many different sonic elements. We had to communicate that right from the beginning and pull the audience in,” says Braver.

Genevieve Jones, former director of operations at Skybound Entertainment and producer on Delusion: Lies Within, says, “I love the bed of sound that MelodyGun created for the intro. It felt rich. Jon really wanted to go to the south and shoot that sequence but we weren’t able to give that to him. Knowing that I could go to MelodyGun and they could bring that richness was awesome.”

Since the viewer can turn his/her head, the sound of the forest needed to change with those movements. A mix of six different winds spaced into different areas created a bed of textures that shifts with the viewer’s changing perspective. It makes the forest feel real and alive. Ouziel says, “The creative and technical aspects of this series went hand in hand. The spacing of the VR environment really affects the way that you approach ambiences and world-building. The house interior, too, was done in a similar approach, with low winds and tones for the corners of the rooms and the different spaces. It gives you a sense of a three-dimensional experience while also feeling natural and in accordance with the world that Jon made.”
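The head-tracking half of this is worth spelling out: with the winds encoded ambisonically, keeping them anchored to the world as the viewer turns amounts to counter-rotating the soundfield against the head yaw, which the playback renderer does continuously. A first-order illustration in Python (the function and its conventions are ours):

```python
import numpy as np

def rotate_foa_yaw(wyzx, head_yaw_deg):
    """Counter-rotate a first-order AmbiX soundfield (rows W, Y, Z, X)
    against the listener's head yaw, so encoded sources stay fixed in
    the world as the head turns. Positive yaw = head turned left;
    W and Z (height) are unaffected by a yaw rotation."""
    th = np.radians(head_yaw_deg)
    w, y, z, x = wyzx
    x_rot = np.cos(th) * x + np.sin(th) * y
    y_rot = -np.sin(th) * x + np.cos(th) * y
    return np.stack([w, y_rot, z, x_rot])
```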

Bringing Live Theater to VR
The sound of the VR series isn’t a direct translation of the live theater experience. Instead, it captures the spirit of the live show in a way that feels natural and immersive, but also cinematic. Ouziel points to the sounds that bring puppet master Marion to life. Here, they had the opportunity to go beyond what was possible with the live theater performance. Ouziel says, “I pitched to Jon the idea that Marion should sound like a big, worn wooden ship, so we built various layers from these huge wooden creaks to match all his movements and really give him the size and gravitas that he deserved. His vocalizations were made from a couple of elements, including a slowed and pitched version of a raccoon chittering that ended up feeling perfectly like a huge creature chuckling from deep within. There was a lot of creative opportunity here and it was a blast to bring to life.”
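MelodyGun hasn’t detailed the processing chain, but “slowed and pitched” classically means a varispeed-style resample, where pitch and duration change together, just like slowing a tape. A minimal sketch of that one trick (an assumption of ours, not the team’s actual chain):

```python
import numpy as np

def tape_style_pitch_drop(x, semitones=-12.0):
    """Varispeed pitch shift: resample a mono float array and play it
    back at the original rate, so it comes out slower and deeper.
    semitones=-12 drops the pitch an octave and doubles the length."""
    factor = 2 ** (semitones / 12.0)        # < 1 lowers the pitch
    n_out = int(len(x) / factor)
    old_idx = np.arange(len(x))
    new_idx = np.linspace(0, len(x) - 1, n_out)
    return np.interp(new_idx, old_idx, x)   # linear-interp resample
```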


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.