

Barking Owl adds first NYC hire, sound designer/mixer Dan Flosdorf

Music and sound company Barking Owl has made its first New York hire with sound designer/mixer Dan Flosdorf. His addition comes as the LA-based company ramps up to open a studio in New York City this summer.

Flosdorf’s resume includes extensive work in engineering, mixing and designing sound and music for feature films, commercials and other projects. Prior to his role at Barking Owl, Flosdorf was mixer and sound designer at NYC-based audio post studio Heard City for seven years.

In addition to his role at Heard City, Flosdorf has had a long collaboration with director Derek Cianfrance, creating sound design for his films The Place Beyond the Pines and Blue Valentine. He also worked with Ron Howard on his feature Made in America. Flosdorf’s commercial work includes projects for brands such as Google, Volvo, Facebook and Sprite, as well as multiple HBO campaigns for Watchmen, Westworld and Game of Thrones.

Leading up to the opening of Barking Owl’s NYC studio, Flosdorf is four-walling in NYC and working on projects for both East and West Coast clients.

Even with the COVID crisis, Barking Owl’s New York studio plans continue. “COVID-19 has been brutal for our industry and many others, but you have to find the opportunity in everything,” says Kelly Bayett, founder/creative director at Barking Owl. “Since we were shut down two weeks into New York construction, we were able to change our air systems in the space to ones that bring in fresh air three times an hour, install UV systems and design the seating to accommodate the new way of living. We have been working consistently on a remote basis with clients and so in that way, we haven’t missed a beat. It might take us a few months longer to open there, but it affords us the opportunity to make relevant choices and not rush to open.”

Posting Michael Jordan’s The Last Dance — before and during lockdown

By Craig Ellenport

One thing viewers learned from watching The Last Dance — ESPN’s 10-part documentary series about Michael Jordan and the Chicago Bulls — is that Jordan might be the most competitive person on the planet. Even the slightest challenge led him to raise his game to new heights.

Photo by Andrew D. Bernstein/NBAE via Getty Images

Jordan’s competitive nature may have rubbed off on Sim NY, the post facility that worked on the docuseries. Since they were only able to post the first three of the 10 episodes at Sim before the COVID-19 shutdown, the post house had to manage a work-from-home plan in addition to dealing with an accelerated timeline that pushed up the deadline a full two months.

The Last Dance, which chronicles Jordan’s rise to superstardom and the Bulls’ six NBA title runs in the 1990s, was originally set to air on ESPN after this year’s NBA Finals ended in June. With the sports world starved for content during the pandemic, ESPN made the decision to begin the show on April 19 — airing two episodes a night on five consecutive Sunday nights.

Sim’s New York facility offers edit rooms, edit systems and finishing services. Projects that rent these rooms will then rely on Sim’s artists for color correction and sound editing, ADR and mixing. Sim was involved with The Last Dance for two years, with ESPN’s editors working on Avid Media Composer systems at Sim.

When it became known that the 1997-98 season was going to be Jordan’s last, the NBA gave a film crew unprecedented access to the team. They compiled 500 hours of 16mm film from the ‘97-’98 season, which was scanned at 2K for mastering. The Last Dance used a combination of the rescanned 16mm footage, other archival footage and interviews shot with Red and Sony cameras.

Photo by Andrew D. Bernstein/NBAE via Getty Images

“The primary challenge posed in working with different video formats is conforming the older standard definition picture to the high definition 16:9 frame,” says editor Chad Beck. “The mixing of formats required us to resize and reposition the older footage so that it fit the frame in the ideal composition.”

One of the issues with positioning the archival game footage was making sure that viewers could focus when shifting their attention between the ball and the score graphics.

“While cutting the scenes, we would carefully play through each piece of standard definition game action to find the ideal frame composition. We would find the best position to crop broadcast game graphics, recreate our own game graphics in creative ways, and occasionally create motion effects within the frame to make sure the audience was catching all the details and flow of the play,” says Beck. “We discovered that tracking the position of the backboard and keeping it as consistent as possible became important to ensuring the audience was able to quickly orient themselves with all the fast-moving game footage.”
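
As a minimal sketch of the arithmetic behind that reframing (assuming a 720x486 SD source, a 1920x1080 HD raster and invented zoom/pan values rather than Sim's actual settings), the conform amounts to a scale factor plus an offset:

# Minimal sketch: fitting a 4:3 SD frame into a 16:9 HD raster.
# Resolutions, zoom and pan values are illustrative assumptions;
# non-square SD pixels are ignored for simplicity.
SD_W, SD_H = 720, 486      # assumed NTSC SD raster
HD_W, HD_H = 1920, 1080    # HD delivery raster

def conform(zoom=1.0, pan_x=0, pan_y=0):
    """Scale the SD frame to fill the HD width, then apply an editorial
    zoom/pan so the action and score graphics stay readable."""
    scale = (HD_W / SD_W) * zoom
    out_w, out_h = SD_W * scale, SD_H * scale
    x = (HD_W - out_w) / 2 + pan_x          # center, then reposition
    y = (HD_H - out_h) / 2 + pan_y
    return round(out_w), round(out_h), round(x), round(y)

# Example: slight punch-in, nudged upward so the backboard stays in frame.
print(conform(zoom=1.05, pan_y=60))

Filling the 16:9 width from a 4:3 source pushes part of the image above and below the HD frame, which is why choosing what to crop, and keeping the backboard in a consistent spot, mattered so much.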

From a color standpoint, the trick was taking all that footage, which was shot over a span of decades, and creating a cohesive look.

Rob Sciarratta

“One of our main goals was to create a filmic, dramatic natural look that would blend well with all the various sources,” says Sim colorist Rob Sciarratta, who worked with Blackmagic DaVinci Resolve 15. “We went with a rich, slightly warm feeling. One of the more challenging events in color correction was blending the archival work into the interview and film scans. The older video footage tended to have various quality resolutions and would often have very little black detail existing from all the transcoding throughout the years. We would add a filmic texture and soften the blacks so it would blend into the 16mm film scans and interviews seamlessly. … We wanted everything to feel cohesive and flow so the viewer could immerse themselves in the story and characters.”
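
The "soften the blacks, add texture" move Sciarratta describes can be approximated outside Resolve as well. Here is a hedged NumPy sketch, with the lift and grain amounts invented for illustration rather than taken from the actual grade:

import numpy as np

def blend_archival(frame, lift=0.03, grain=0.01, seed=0):
    """Raise crushed blacks and add light texture so video-sourced
    archival sits closer to 16mm scans. Values are illustrative only.

    frame: float RGB array in [0, 1], shape (height, width, 3).
    """
    rng = np.random.default_rng(seed)
    lifted = lift + frame * (1.0 - lift)     # soften the black point
    textured = lifted + rng.normal(0.0, grain, frame.shape)
    return np.clip(textured, 0.0, 1.0)

# Example: a pure-black test frame comes out near the lift value.
print(blend_archival(np.zeros((4, 4, 3))).mean())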

On the sound side, senior re-recording mixer/supervising sound editor Keith Hodne used Avid Pro Tools. “The challenge was to create a seamless woven sonic landscape from 100-plus interviews and locations, 500 hours of unseen raw behind-the-scenes footage, classic hip-hop tracks, beautifully scored instrumentation and crowd effects, along with the prerecorded live broadcasts,” he says. “Director Jason Hehir and I wanted to create a cinematic blanket of a basketball game wrapped around those broadcasts. What it sounds like to be at the basketball game, feel the game, feel the crowd — the suspense. To feel the weight of the action — not just what it sounds like to watch the game on TV. We tried to capture nostalgia.”

When ESPN made the call to air the first two episodes on April 19, Sim’s crew still had the final seven episodes to finish while dealing with a work-from-home environment. Expectations were only heightened after the first two episodes of The Last Dance averaged more than 6 million viewers. Sim was now charged with finishing what would become the most watched sports documentary in ESPN’s history — and they had to do this during a pandemic.

Stacy Chaet

When the shutdown began in mid-March, Sim’s staff needed to figure out the best way to finish the project remotely.

“I feel like we started the discussions of possible work from home before we knew it was pushed up,” says Stacy Chaet, Sim’s supervising workflow producer. “That’s when our engineering team and I started testing different hardware and software and figuring out what we thought would be the best for the colorist, what’s the best for the online team, what’s the best for the audio team.”

Sim ended up using Teradici to get Sciarratta connected to a machine at the facility. “Teradici has become a widely used solution for remote at home work,” says Chaet. “We were easily able to acquire and install it.”

A Sony X300 monitor was hand-delivered to Sciarratta’s apartment in lower Manhattan; it, too, was connected to Sciarratta’s machine at Sim, via an Evercast stream. Sim shipped him other computer monitors, a Mac mini and Resolve panels. Sciarratta’s living room became a makeshift color bay.

“It was during work on the promos that Jason and Rob started working together, and they locked in pretty quickly,” says David Feldman, Sim’s senior VP, film and television, East Coast. “Jason knows what he wants, and Rob was able to quickly show him a few color looks to give him options.

David Feldman

“So when Sim transitioned to a remote workflow, Sciarratta was already in sync with what the director, Jason Hehir, was looking for. Rob graded each of the remaining seven episodes from his apartment on his X300 unsupervised. Sim then created watermarked QTs with final color and audio. Rob reviewed each QT to make sure his grade translated perfectly when reviewed on Jason’s retina display MacBook. At that point, Sim provided the director and editorial team access for final review.”

The biggest remote challenge, according to producer Matt Maxson, was that the rest of the team couldn’t see Sciarratta’s work on the X300 monitor.

“You moved from a facility with incredible 4K grading monitors and scopes to the more casual consumer-style monitors we all worked with at home,” says Maxson. “In a way, it provided a benefit because you were watching it the way millions of people were going to experience it. The challenge was matching everyone’s experience — Jason’s, Rob’s and our editors’ — to make sure they were all seeing the same thing.”

Keith Hodne

For his part, Hodne had enough gear in his house in Bay Ridge, Brooklyn. At Sim he works in Pro Tools on Mac Pro computers; at home he had to make do with a pared-down version of that setup. It was a challenge, but he got the job done.

Hodne says he actually had more back-and-forth with Hehir on the final episode than any of the previous nine. They wanted to capture Jordan’s moments of reflection.

“This episode contains wildly loud, intense crowd and music moments, but we counterbalance those with haunting quiet,” says Hodne. “We were trying to achieve what it feels like to be a global superstar with all eyes on Jordan, all expectations on Jordan. Just moments on the clock to write history. The buildup of that final play. What does that feel and sound like? Throughout the episode, we stress that one of his main strengths is the ability to be present. Jason and I made a conscious decision to strip all sound out to create the feeling of being present and in the moment. As someone whose main job it is to add sound, sometimes there is more power in having the restraint to pull back on sound.”

ESPN Films/Netflix/Mandalay Sports Media/NBA Entertainment

Even when they were working remotely, the creatives were able to communicate in real time via phone, text or Zoom sessions. Still, as Chaet points out, “you’re not getting the body language from that newly official feedback.”

From a remote post production technology standpoint, Chaet and Feldman both say one of the biggest challenges the industry faces is sufficient and consistent Internet bandwidth. Residential ISPs often do not guarantee speeds needed for flawless functionality. “We were able to get ahead of the situation and put systems in place that made things just as smooth as they could be,” says Chaet. “Some things may have taken a bit longer due to the remote situation, but it all got done.”

One thing they didn’t have to worry about was their team’s dedication to the project.

“Whatever challenges we faced after the shutdown, we benefitted from having lived together at the facility for so long,” says Feldman. “There was this trust that, somehow, we were going to figure out a way to get it done.”


Craig Ellenport is a veteran sports writer who also covers the world of post production. 


Posting John Krasinski’s Some Good News

By Randi Altman

Need an escape from a world filled with coronavirus and murder hornets? You should try John Krasinski’s weekly YouTube show, Some Good News. It focuses on the good things that are happening during the COVID-19 crisis, giving people a reason to smile with things such as a virtual prom, a chat with astronauts on the ISS and a Zoom singalong with the original Broadway cast of Hamilton.

L-R: Remy, Olivier, Josh and Lila Senior

Josh Senior, owner of Leroi and Senior Post in Dumbo, New York, is providing editing and post to SGN. His involvement began when he got a call from a mutual friend of Krasinski’s, asking if he could help put something together. They sent him clips via Dropbox, and a workflow was born.

While the show is shot at Krasinski’s house in New York at different times during the week, Senior’s Fridays, Saturdays and Sundays are spent editing and posting SGN.

In addition to his post duties, Senior is an EP on the show, along with his producing partner Evan Wolf Buxbaum at their production company, Leroi. The two work in concert with Allyson Seeger and Alexa Ginsburg, who executive produce for Krasinski’s company, Sunday Night Productions. Production meetings are held on Tuesdays, and then shooting begins. After footage is captured, it’s still shared via Dropbox or good old iMessage.

Let’s find out more…

What does John use for the shoot?
John films on two iPhones. A good portion of the show is screen-recorded on Zoom, and then there’s the found footage user-generated content component.

What’s your process once you get the footage? And, I’m assuming, it’s probably a little challenging getting footage from different kinds of cameras?
Yes. In the alternate reality where there’s no coronavirus, we run a pretty big post house in Dumbo, Brooklyn. And none of the tools of the trade that we have there are really at play here, outside of our server, which exists as the ever-present backend for all of our remote work.

The assets are pulled down from wherever they originate. The masters are then housed behind an encrypted firewall, like we do for all of our TV shows at the post house. Our online editor is the gatekeeper. All the editors, assistant editors, producers, animators, sound folks — they all get a mirrored drive that they download locally, and we all get to work.
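
A hypothetical version of that mirrored-drive handoff could be as simple as an rsync pull from the firewalled server; the host, user and paths below are placeholders for illustration, not Senior Post's actual infrastructure.

import subprocess

# Hypothetical mirror pull so an editor can work from a local copy of the
# masters. Host, user and paths are placeholders, not real endpoints.
REMOTE = "editor@post-server.example.com:/projects/sgn/episode_07/"
LOCAL = "/Volumes/SGN_Mirror/episode_07/"

subprocess.run(
    ["rsync", "-avz", "--partial", "--progress", REMOTE, LOCAL],
    check=True,  # stop loudly if the transfer is interrupted
)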

Do you have a style guide?
We have a bible, which is a living document that we’ve made week over week. It has music cues, editing style, technique, structure, recurring themes, a living archive of all the notes that we’ve received and how we’ve addressed them. Also, any style that’s specific to segments, post processing, and any phasing or audio adjustments we make all live within a document that we give to whoever we onboard to the show.

Evan Wolf Buxbaum

Our post producers made this really elegant workflow that’s a combination of Vimeo and Slack where we post project files and review links and share notes. There’s nothing formal about this show, and that’s really cool. I mean, at the same time, as we’re doing this, we’re rapidly finishing and delivering the second season of Ramy on Hulu. It comes out on May 29.

I bet that workflow is a bit different than SGN’s.
It’s like bouncing between two poles. That show has a hierarchy, it’s formalized, there’s a production company, there’s a network, there’s a lot of infrastructure. This show is created in a group text with a bunch of friends.

What are you using to edit and color Some Good News?
We edit in Adobe Premiere, and that helps mitigate some of the challenges of the mixed media that comes in. We typically color inside of Adobe, and we use Pro Tools for our sound mix. We online and deliver out of Resolve, which is pretty much how we work on most of our things. Some of our shows edit in Avid Media Composer, but on our own productions we almost always post in Premiere — so when we can control the full pipeline, we tend to prefer Adobe software.

Are review and approvals with John and the producers done through iMessage in Dropbox too?
Yes, and we post links on Vimeo. Thankfully we actually produce Some Good News as well as post it, so that intersection is really fluid. With Ramy it’s a bit more formalized. We do notes together and, usually internally, we get a cut that we like. Then it goes to John, and he gives us his thoughts and we retool the edit; it’s like a rapid prototyping rather than a gated milestone. There are no network cuts or anything like that.

Joanna Naugle

For me, what’s super-interesting is that everyone’s ideas are merited and validated. I feel like there’s nothing that you shouldn’t say because this show has no agenda outside of making people happy, and everybody’s uniquely qualified to speak to that. With other projects, there are people who have an experience advantage, a technical advantage or some established thought leadership. Everybody knows what makes people happy. So you can make the show, I can make the show, my mom can make the show, and because of that, everything’s almost implicitly right or wrong.

Let’s talk about specific episodes, like the ones featuring the prom and Hamilton. What were some of the challenges of working with all of that footage? Maybe start with Hamilton?
That one was a really fun puzzle. My partner at Senior Post, Joanna Naugle, edited that. She drew on a lot of her experience editing music videos, performance content, comedy specials, multicam live tapings. It was a lot like a multicam live pre-taped event being put together.

We all love Hamilton, so that helps. This was a combination of performers pre-taping the entire song and a live performance. The editing technique really dissolves into the background, but it’s clear that there’s an abundance of skill that’s been brought to that. For me, that piece is a great showcase of the aesthetic of the show, which is that it should feel homemade and lo-fi, but there’s this undercurrent of a feat to the way that it’s put together.

Getting all of those people into the Zoom, getting everyone to sound right, having the ability to emphasize or de-emphasize different faces. To restructure the grid of the Zoom, if we needed to, to make sure that there’s more than one screen worth of people there and to make sure that everybody was visible and audible. It took a few days, but the whole show is made from Thursday to Sunday, so that’s a limiting factor, and it’s also this great challenge. It’s like a 48-hour film festival at a really high level.

What about the prom episode?
The prom episode was fantastic. We made the music performances the day before and preloaded them into the live player so that we could cut to them during the prom. Then we got to watch the prom. To be able to participate as an audience member in the content that you’re still creating is such a unique feeling and experience. The only agenda is happiness, and people need a prom, so there’s a service aspect of it, which feels really good.

John Krasinski setting up his shot.

Any challenges?
It’s hard to put things together that are flat, and I think one of the challenges that we found at the onset was that we weren’t getting multiple takes of anything, so we weren’t getting a lot of angles to play with. Things are coming in pretty baked from a production standpoint, so we’ve had to find unique and novel ways to be nonlinear when we want to emphasize and de-emphasize certain things. We want to present things in an expositional way, which is not that common. I couldn’t even tell you another thing that we’ve worked on that didn’t have any subjectivity to it.

Let’s talk sound. Is he just picking up audio from the iPhones or is he wearing a mic?
Nope. No mic. It’s audio from the iPhones that we just run through a few filters in Pro Tools. Nobody mics themselves. We do spend a lot of time balancing out the sound, but there’s not a lot of effect work.
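
As a loose illustration of that kind of cleanup (not the show's actual Pro Tools chain), a high-pass filter plus a simple level balance might look like this in Python; the corner frequency and target peak are assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

def clean_iphone_audio(samples, rate, highpass_hz=80, target_peak=0.7):
    """Roll off low-frequency room rumble and balance the overall level.

    samples: mono float array in [-1, 1]; rate: sample rate in Hz.
    highpass_hz and target_peak are illustrative, not the show's settings.
    """
    sos = butter(4, highpass_hz, btype="highpass", fs=rate, output="sos")
    filtered = sosfilt(sos, samples)
    peak = max(np.max(np.abs(filtered)), 1e-9)   # avoid divide-by-zero
    return filtered * (target_peak / peak)

# Example: one second of a 440 Hz test tone at 48 kHz.
rate = 48000
tone = 0.2 * np.sin(2 * np.pi * 440 * np.arange(rate) / rate)
print(round(float(clean_iphone_audio(tone, rate).max()), 3))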

Other than SGN and Ramy, what are some other shows you guys have worked on?
John Mulaney & the Sack Lunch Bunch, 2 Dope Queens, Random Acts of Flyness, Julio Torres: My Favorite Shapes by Julio Torres and others.

Anything that I haven’t asked that you think is important?
It’s really important for me to acknowledge that this is something that is enabling a New York-based production company and post house to work fully remotely. In doing this week over week, we’re really honing what we think are tangible practices that we can then turn around and evangelize out to the people that we want to work with in the future.

I don’t know when we’re going to get back to the post house, so being able to work on a show like this is providing this wonderful learning opportunity for my whole team to figure out what we can modulate from our workflow in the office to be a viable partner from home.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Soundwhale app intros new editing features for remote audio collaboration

Soundwhale, which makes a Mac and iOS-based remote audio collaboration app, has introduced a new suite of editing capabilities targeting teams working apart but together during this COVID crisis. It’s a virtual studio that lets engineers match sound to picture and lets actors, with no audio experience, record their lines. The company says this is done with minimal latency and no new hardware or additional specialized software required. The app also allows pro-quality mixing, recording and other post tasks, and can work alongside a user’s DAW of choice.
“Production teams are scattered and in self-isolation all around the world,” says Soundwhale founder Ameen Abdulla, who is an audio engineer. “They can’t get expensive hardware to everyone. They have to get people without any access to, or knowledge of, a digital audio workspace like Pro Tools to collaborate. That’s why we felt some urgency to launch more stand-alone editing options within Soundwhale, specifically designed for tasks like ADR.”



Soundwhale allows users to:
– Record against picture
– Control another user’s timeline and playback
– Manage recorded takes
– Cope with slow connections thanks to improved compression
– Optimize stream settings
– Share takes in other users’ timelines
– Customize I/O for different setups
– Do basic copy, paste, and moving of audio files
– Share any file by drag and drop
– Share screens and video chat

Soundwhale stems from Abdulla’s own challenges trying to perfect the post process from his recording studio, Mothlab, in Minneapolis. His clients were often on the West Coast and he needed to work with them remotely. Nothing available at the time worked very well, and drawing on his technical background, he set out to fix the issues, which included frustrating lags.

“Asynchronous edits and feedback are hell,” Abdulla notes. “As the show goes on, audio professionals need ways to edit and work with talent in real time over the Internet. Everybody’s experiencing this same thing. Everyone needs the same thing at the same time.”


Sound Devices producing 30,000 face shields per day

During times of crisis, people and companies step up. One of those companies is Wisconsin-based pro audio equipment manufacturer Sound Devices, which is producing more than 30,000 face shields each day to help keep frontline workers safe in the fight against COVID-19. The company has pulled together a coalition of local manufacturers in the Reedsburg, Wisconsin, area to achieve this number, including Columbia Parcar, VARC, Cellox and Hankscraft AJS.

Sound Devices realized it could simultaneously play a direct role in helping protect health care workers and keep local area production-line workers employed. Around 100 people in the Reedsburg area are working daily to bring in material, assemble and ship the FS-1 face shields. Sound Devices sells the shields at a nonprofit price and has already shipped nearly a quarter million shields around Wisconsin and the rest of the US.

“The real heroes in this operation have been our line workers,” says Lisa Wiedenfeld, VP of finance and operations at Sound Devices. “They have been coming in day after day and cranking out these face shields while maintaining strict safety standards including wearing face masks, 10-foot distancing and extensive sanitation procedures. Under normal circumstances, ramping up manufacturing on a high volume of a new product is challenging enough, let alone avoiding a dangerous virus at the same time. My hat is off to all of our workers.”

“We started production of our FS-1 and FS-1NL face shields on March 24th, producing about 400 per day. As we’ve increased production to 30,000 per day, one of the most difficult aspects has been procuring enough parts to build consistently,” said Matt Anderson, CEO/president of Sound Devices. “Luckily, we have an extremely resourceful purchasing team. They have tapped our excellent network of Wisconsin-based suppliers. When our production levels outstripped what our suppliers here could do, our overseas suppliers pitched in to augment the supply of parts. But getting parts sent to us has been extremely difficult due to the reduced capacity of shippers. This whole experience has been very challenging but rewarding.”

Sound Devices now has FS-1 (original) and FS-1NL (latex-free) shields in stock. Face shields may be purchased by anyone in the US directly from store.sounddevices.com or by contacting sales@sounddevices.com.


COVID-19: How our industry is stepping up

We’ve been using this space to talk about how companies are discounting products, raising money and introducing technology to help with remote workflows, as well as highlighting how pros are personally pitching in.

Here are the latest updates, followed by what we’ve gathered to date:

Adobe
Adobe has made a $4.5 million commitment to trusted organizations that are providing vital assistance to those most in need.

• Adobe is joining forces with other tech leaders in the Bay Area to support the COVID-19 Coronavirus Regional Response Fund of the Silicon Valley Community Foundation, a trusted foundation that serves a network of local nonprofits. Adobe’s $1 million donation will help provide low-income people in Santa Clara County with immediate financial assistance to help pay rent or meet other basic needs, through the Santa Clara County Homelessness Prevention System Financial Assistance Program. Additionally, Adobe is donating $250,000 to the Valley Medical Center Foundation to purchase life-saving ventilators for Bay Area hospitals.
• Adobe has donated $1 million to the COVID-19 Fund of the International Federation of Red Cross and Red Crescent Societies, the recognized global leader in providing rapid disaster relief and basic human and medical services. Adobe’s support will help aid vulnerable communities impacted by COVID-19 around the world. This is in addition to the $250,000 the company is donating to Direct Relief as a part of Adobe’s #HonorHeroes campaign.
• To support the community in India, Adobe is donating $1 million towards the American India Foundation (AIF) and the Akshaya Patra Foundation. The donation will help AIF source much-needed ventilators for hospitals, while the grant for Akshaya Patra will provide approximately 5 million meals to impacted families.

Harbor
Harbor is releasing Inspiration in Isolation, a new talk series that features filmmakers in candid conversation about their creative process during this unprecedented time and beyond. The web series aims to reveal the ideas and rituals that contribute to their creative process. The premiere episode features celebrated cinematographer Bradford Young and senior colorist Joe Gawler. The two, who are collaborators and friends, talk community, family, adapting to change and much more.

The full-length episodes will be released on Harbor’s new platform, HarborPresents, with additional content on Harbor’s social media (@HarborPictureCo).

HPA
The HPA has formed the HPA Industry Recovery Task Force, which will focus on sustainably resuming production and post services, with the aim of understanding how to enable content creation in an evolving world impacted by the pandemic.

The task force’s key objectives are:
• To serve as a forum for collaboration, communication and thought leadership regarding how to resume global production and post production in a sustainable fashion.
• To understand and influence evolving technical requirements, such as the impact of remote collaboration, work from home and other workflows that have been highlighted by the current crisis.
• To provide up-to-date information and access to emerging health and safety guidelines that will be issued by various governments, municipalities, unions, guilds, industry organizations and content creators.
• To provide collaborative support and guidance to those impacted by the crisis.

Genelec
Genelec is donating a percentage of every sale of its new Raw loudspeaker range to the Audio Engineering Society (AES) for the remainder of this year. Additionally, Genelec will fund 10 one-year AES memberships for those whose lives have been impacted by the COVID-19 crisis. A longtime sustaining member of AES, Genelec is making the donation to help sustain the society’s cash flow, which has been significantly affected by the coronavirus situation.

OWC
OWC has expanded its safety protocols as it continues to operate as an essential business in Illinois. The company has expanded its already strong standard operating practice in terms of cleanliness with additional surface disinfection actions, as well as both gloves and masks being used by its warehouse and build teams. Even before recent events, manufacturing teams used gloves to prevent fingerprinting units during build, but those gloves have new importance now. In addition, OWC has both MERV air filters in place and a UV air purifier, which combined are considered to be 99.999% effective in killing/capturing all airborne bacteria and viruses.

Red

For a limited time, existing DSMC2 and Red Ranger Helium and Gemini customers can purchase a Red Extended Warranty at a discounted price. Existing customers who are in their second year of warranty can pay the standard pricing they would receive within their first year instead of the markup price. For example, instead of paying $1,740 (the 20% markup), a DSMC2 Gemini owner who is within the second year of warranty can purchase an Extended Warranty for $1,450.

This promotion has been extended to June 30. Adding the Red Extended Warranty not only increases the warranty coverage period but also provides benefits such as priority repair, expedited shipping and premium technical support directly from Red. Customers also have access to the Red Rapid Replacement Program. The Extended Warranty is also transferable to new owners who complete a Transfer of Ownership with Red.

DejaSoft
DejaSoft has extended its offering of giving editors 50% off all their DejaEdit licenses — it now goes through the end of June. In addition, the company will help users implement DejaEdit in the best way possible to suit their workflow. DejaEdit allows editors to share media files and timelines automatically and securely with remote co-workers around the world, without having to be online continuously. It helps editors working on Avid Nexis, Media Composer and EditShare workflows across studios, production companies and post facilities ensure that media files, bins and timelines are kept up to date across multiple remote edit stations.

Assimilate
Assimilate is offering all of its products — including Scratch 9.2, Scratch VR 9.2, PlayPro 9.2, Scratch Web and the recently released Live Looks and Live Assist — for free through October 31. Users can register for free licenses. Online tutorials are here and free access to Lowepost online Scratch training is here.

B&H
B&H is partnering with suppliers to donate gear to the teams at Mount Sinai and other NYC hospitals to help health care professionals and first responders stay in touch with their loved ones. Some much-needed items are chargers, power sources, battery packs and mobile accessories. B&H is supporting the Mayor’s Fund to Advance New York City and Direct Relief.

FXhome
FXhome last month turned the attention of its “Pay What You Want” initiative to directing proceeds toward the fight against COVID-19. This month, in an effort to teach the community new skills and inspire them with new ideas to help them reinvent themselves, FXhome has launched a new, entirely free Master Class series designed to teach everything from basic editing to creating flashy title sequences, editing audio and, of course, basic VFX and compositing.

Nugen Audio 
Nugen Audio has a new “Staying Home, Staying Creative” initiative aimed at promoting collaboration and creativity in a time of social distancing. Included are a variety of videos, interviews and articles that will inspire new artistic approaches for post production workflows. The company is also providing temporary replacement licenses for any users who do not have access to their in-office workstations.

Already available on the Staying Creative web page is a special interview with audio post production specialist Keith Alexander. Building from his specialty in remote recording and sound design for broadcast, film and gaming, Alexander shares some helpful tips on how to work efficiently in a home-based setting and how to manage audio cleanup and broadcast-audio editing projects from home. There’s also an article focused on three ways to improve lo-fi drum recording in a less-than-ideal space.

Nugen is also offering temporary two-month licenses for current iLok customers, along with one additional Challenge Response license code authorization. The company has also reduced the prices of all products in its web store.

Tovusound 
Tovusound has extended its 20% discount until the end of the month and has added some new special offers.

The Spot Edward Ultimate Suite expansion, regularly $149, is now $79 with coupon. It adds the Spot creature footstep and movement instrument to the Edward footstep, cloth and props designer. Customers also get free WAV files with the purchase of all Edward instruments and expansions and with all Tovusound bundles. Anyone who purchased one of the applicable products after April 1 also has free access to the WAV files.

Tovusound will continue to donate an additional 10% of the sales price to the CleanOceanProject.org. Customers may claim their discounts by entering STAYHOME in the “apply coupon” field at checkout. All offers end on April 30.

 

Previous Updates

Object Matrix and Cinesys-Oceana
Object Matrix and Cinesys-Oceana are hosting a series of informal online Beer Roundtable events in the coming months. The series will discuss the various challenges of implementing hybrid technology for continuity, remote working and self-serve access to archive content. You can register for the next Beer Roundtable here. The sessions will be open, fun and relaxed. Participants are asked to grab themselves a drink and simply raise their glass when they wish to ask a question.

During the first session, Cinesys-Oceana CTO Brent Angle and Object Matrix CEO Jonathan Morgan will introduce what they believe to be the mandatory elements of the ultimate hybrid technology stack. This will be followed by a roundtable discussion hosted by Harry Skopas, director M&E solutions architecture and technical sales at Cinesys-Oceana, with guest appearances from the media and sports technology communities.

MZed
MZed, an online platform for master classes in filmmaking, photography and visual storytelling, is donating 20% of all sales to the Los Angeles Food Bank throughout April. For every new MZed Pro membership, $60 is donated, equating to 240 meals to feed hungry children, seniors and families. MZed serves the creative community, a large portion of which lives in the LA area and is being hit hard by the lockdown due to the coronavirus. MZed hopes to help play a role in keeping high-risk members of the community fed during a time of extreme uncertainty.

MZed has also launched a “Get One, Gift One” initiative. When someone purchases an MZed Pro membership, that person will not only be supporting the LA Food Bank but will instantly receive a Pro membership to give to someone else. MZed will email details upon purchase.

MZed offers hundreds of hours of training courses covering everything from photography and filmmaking to audio and lighting in courses like “The Art of Storytelling” with Alex Buono and Philip Bloom’s Cinematic Masterclass.

NAB Show
NAB Show’s new digital experience, NAB Show Express, will take place May 13-14. The platform is free and offers 24-hour access to three educational channels, on-demand content and a Solutions Marketplace featuring exhibitor product information, announcements and demos. Registration for the event will open on April 20 at NABShowExpress.com. Each channel will feature eight hours of content streamed daily and available on-demand to accommodate the global NAB Show audience. NAB Show Express will also offer NAB Show’s signature podcast, exploring relevant themes and featuring prominent speakers.

Additionally, NAB Show Express will feature three stand-alone training and executive leadership events for which separate registrations will be available soon. These include:
• Executive Leadership Summit (May 11), produced in partnership with Variety
• Cybersecurity & Content Protection Summit (May 12), produced in partnership with Content Delivery & Security Association (CDSA) and Media & Entertainment Services Alliance (MESA) – registration fees apply
• Post | Production World Online (May 17-19), produced in partnership with Future Media Conferences (FMC) – registration fees apply.

Atto 
Atto Technology is supporting content producers who face new workflow and performance challenges by making Atto Disk Benchmark for macOS more widely available and by updating Atto 360 tuning, monitoring and analytics software. Atto 360 for macOS and Linux have been updated for enhanced stability and include an additional tuning profile. The current Windows release already includes these updates. The software is free and can be downloaded directly from Atto.

Sigma
Sigma has launched a charitable giving initiative in partnership with authorized Sigma lens dealers nationwide. From now until June 30, 2020, 5% of all Sigma lens sales made through participating dealers will be donated to a charitable organization of the dealers’ choice. Donations will be made to organizations working on COVID-19 relief efforts to help ease the devastation many communities are feeling as a result of the global crisis. A full list of participating Sigma dealers and benefiting charities can be found here.

FXhome 
To support those who are putting their lives on the line to provide care and healing to those impacted by the global pandemic, FXhome is adding Partners In Health, Doctors Without Borders and the Center for Disaster Philanthropy as new beneficiaries of the FXhome “Pay What You Want” initiative.

Pay What You Want is a goodwill program inspired by the HitFilm Express community’s desire to contribute to the future development of HitFilm Express, the company’s free video editing and VFX software. Through the initiative, users can contribute financially, and those funds will be allocated for future development and improvements to HitFilm. Additionally, FXhome is contributing a percentage of the proceeds to organizations dedicated to global causes important to the company and its community. The larger the contribution from customers, the more FXhome will donate.

Besides adding the three new health-related beneficiaries, FXhome has extended its campaign to support each new cause from one month to three months, beginning in April and running through the end of June. A percentage of all proceeds of revenues generated during this time period will be donated to each cause.

Covid-19 Film and TV Emergency Relief Fund
Created by The Film and TV Charity in close partnership with the BFI, the new COVID-19 Film and TV Emergency Relief Fund provides support to the many thousands of active workers and freelancers who have been hit hardest by the closure of productions across the UK. The fund has received initial donations totaling £2.5 million from Netflix, the BFI, BBC Studios, BBC Content, WarnerMedia and several generous individuals.

It is being administered by The Film and TV Charity, with support from BFI staff. The Film and TV Charity and the BFI are covering all overheads, enabling donations to go directly to eligible workers and freelancers across film, TV and cinema. One-off grants of between £500 and £2,500 will be awarded based on need. Applications for the one-off grants can be made via The Film and TV Charity’s website. The application process will remain open for two weeks.

The Film and TV Charity also has a new COVID-19 Film and TV Repayable Grants Scheme offering support for industry freelancers waiting for payments under the Government’s Self-employment Income Support Scheme. Interest-free grants of up to £2,000 will be offered to those eligible for Self-employment Income Support but who are struggling with the wait for payments in June. The COVID-19 Film and TV Repayable Grants Scheme opens April 15. Applicants will have one week to make a claim via The Film and TV Charity’s website.

Lenovo
Lenovo is offering a free 120-day license of Mechdyne’s TGX Remote Desktop software, which uses Nvidia Quadro GPUs and a built-in video encoder to compress and send information from the host workstation to the end-point device to decode. This eliminates lag on complex and detailed application files.

Teams can share powerful, high-end workstation resources across the business, easily dialing up performance and powerful GPUs from their standard workstation to collaborate remotely with coworkers around the world.

Users keep data and company IP secure on-site, reducing the risk of data breaches, and can remotely administer computer hardware assets from anywhere, anytime.
Users install the trial on their host workstations and install the receiver software on their local devices to access their applications and projects as if they were in the office.

Ambidio 
To help sound editors, mixers and other post pros who suddenly find themselves working from home, Ambidio is making its immersive sound technology, Ambidio Looking Glass, available for free. Sound professionals can apply for a free license through Ambidio’s website. Ambidio is also waiving its per-title releasing fee for home entertainment titles during the current cinema shutdown. It applies to new titles that haven’t previously been released through Blu-ray, DVD, digital download or streaming. The free offer is available through May 31.

Ambidio Looking Glass can be used as a monitoring tool for theatrical and television projects requiring immersive sound. Ambidio Looking Glass produces immersive sound that approximates what can be achieved on a studio mix stage, except it is playable through standard stereo speaker systems. Editors and mixers working from home studios can use it to check their work and share it with clients, who can also hear the results without immersive sound playback systems.

“The COVID-19 pandemic is forcing sound editors and mixers to work remotely,” says Ambidio founder Iris Wu. “Many need to finish projects that require immersive sound from home studios that lack complex speaker arrays. Ambidio Looking Glass provides a way for them to continue working with dimensional sound and meet deadlines, even if they can’t get to a mix stage.”

Qumulo
Through July 2020, Qumulo is offering its cloud-native file software for free to public and private-sector medical and health care research organizations that are working to minimize the spread and impact of the COVID-19 virus.

“Research and health care organizations across the world are working tirelessly to find answers and collaborate faster in their COVID-19 vaccine mission,” said Matt McIlwain, chairman of the board of trustees of the Fred Hutchinson Cancer Research Center and managing partner at Madrona Venture Group. “It will be through the work of these professionals, globally sharing and analyzing all available data in the cloud, that a cure for COVID-19 will be discovered.”

Qumulo’s cloud-native file and data services allow organizations to use the cloud to capture, process, analyze and share data with researchers distributed across geographies. Qumulo’s software works seamlessly with the applications medical and health care researchers have been using for decades, as well as with artificial intelligence and analytics services more recently developed in the cloud.

Medical organizations can register to use Qumulo’s file software in the cloud, which will be deployable through the Amazon Web Services and Google Cloud marketplaces.

Goldcrest Post
Goldcrest Post has established the capability to conduct most picture and sound post production work remotely. Colorists, conform editors and other staff are now able to work from home or a remote site and connect to the facility’s central storage and technical resources via remote collaboration software. Clients can monitor work through similar secure, fast and reliable desktop connections.

The service allows Goldcrest to ensure theatrical and television projects remain on track while allowing clients to oversee work in as normal a manner as possible under current circumstances.

Goldcrest has set up a temporary color grading facility at a remote site convenient for its staff colorists. The site includes a color grading control panel, two color-calibrated monitors and a high-speed connection to the main Goldcrest facility. The company has also installed desktop workstations and monitors in the homes of editors and other staff involved in picture conforming and deliverables. Sound mixing is still being conducted on-site, but sound editorial and ancillary sound work is being done from home. In taking these measures, the facility has reduced its on-site staff to a bare minimum while keeping workflow disruption to a minimum.

Ziva Dynamics
Ziva Dynamics is making Ziva VFX character simulation software free for students and educators. The same tools used on Game of Thrones, Hellboy and John Wick: Chapter 3 are now available for noncommercial projects, offering students the chance to learn physics-based character creation before they graduate. Ziva VFX Academic licenses are fully featured and receive the same access and support as other Ziva products.

In addition to the software, Ziva Academic users will now receive free access to Ziva Dynamics’ simulation-ready assets Zeke the Lion (previously $10,000) and Lila the Cheetah. Thanks to Ziva VFX’s Anatomy Transfer feature, the Zeke rig has helped make squirrels, cougars, dogs and more for films like John Wick 3, A Dog’s Way Home and Primal.

Ziva Dynamics will also be providing a free Ziva Academic floating lab license to universities so students can access the software in labs across campuses whenever they want. Ziva VFX Academic licenses are free and open to any fully accredited institution, student, professor or researcher (an $1,800 value). New licenses can be found in the Ziva store and are provided following a few eligibility questions. Academic users on the original paid plan can now increase their license count for free.

OpenDrives 
OpenDrives’ OpenDrives Anywhere is an in-place private cloud model that enables customers with OpenDrives to work on the same project from multiple locations without compromising performance. With existing office infrastructure, teams already have an in-place private cloud and can extend its power to each of their remote professionals. No reinvestment in storage is needed.

Nothing changes from a workflow perspective except physical proximity. With simple adjustments, remote control of existing enterprise workstations can be extended via a secure connection. HP’s ZCentral Remote Boost (formerly RGS) software will facilitate remote access over a secure connection to your workstations, or Teradici can provide both dedicated external hardware and software solutions for this purpose, giving teams the ability to support collaborative workflows at low cost. OpenDrives can also get teams quickly set up in under two hours on a corporate VPN and in under 24 hours without.

Prime Focus Technologies 
Prime Focus Technologies (PFT), the technology arm of Prime Focus, has added new features and advanced security enhancements to Clear to help customers embrace the virtual work environment. In terms of security, Clear now has a new-generation HTML 5 player enabled with Hollywood-grade DRM encryption. There’s also support for just-in-time visual watermarking embedded within the stream for streaming through Clear as a secure alternative to generating watermarking on the client side.

Clear also has new features that make it easier to use, including direct and faster download from S3 and Azure storage, easier partner onboarding and an admin module enhancement with condensed permissions to easily handle custom user roles. Content acquisition is made easier with a host of new functionalities to simplify content acquisition processes and reduce dependencies as much as possible. Likewise, for easier content servicing, there is now automation in content localization, to make it easier to perform and review tasks on Clear. For content distribution, PFT has enabled on-demand cloud distribution on Clear through the most commonly used cloud technologies.

Brady and Stephenie Betzel
Many of you know postPerspective contributor and online video editor Brady Betzel from his great reviews and tips pieces. During this crisis, he is helping his wife, Stephenie, make masks for her sister (a nurse) and colleagues working at St. John’s Regional Medical Center in Oxnard, California, as well as for anyone else working on the front lines. She’s sewn over 300 masks so far and is not stopping. Creativity and sewing are not new to her; her day job also involves creating. You can check out her work on Facebook and Instagram.

Object Matrix 
Object Matrix co-founder Nick Pearce has another LinkedIn dispatch, this time launching Good News Friday, where folks from around the globe check in with good news! You can also watch it on YouTube. Pearce and crew are also offering video tips for surviving working from home. The videos are hosted by Pearce and updated weekly. Check them out here.

Conductor
Conductor is waiving charges for orchestrating renders in the cloud. Updated pricing is reflected in the cost calculator on Conductor’s Pricing page. These changes will last at least through May 2020. To help expedite any transition needs, the Conductor team will be on call for virtual render wrangling of cloud submissions, from debugging scenes and scripts to optimizing settings for cost, turnaround time, etc. If you need this option, then email support@conductortech.com.

Conductor is working with partners to set up online training sessions to help studios quickly adopt cloud strategies and workflows. The company will send out further notifications as the sessions are formalized. Conductor staff is also available for one-on-one studio sessions as needed for those with specific pipeline considerations.

Conductor’s president and CEO Mac Moore said this: “The sudden onset of this pandemic has put a tremendous strain on our industry, completely changing the way studios need to operate virtually overnight. Given Conductor was built on the ‘work from anywhere’ premise, I felt it our responsibility to help studios to the greatest extent possible during this critical time.”

Symply
Symply is providing as many remote workers in the industry as possible with a free 90-day license to SymplyConveyor, its secure, high-speed transfer and sync software. Symply techs will be available to install SymplyConveyor remotely on any PC, Mac or Linux workstation pair or server and workstation.

The no-obligation offer is available at gosymply.com. Users sign up, and as long as they are in the industry and have a need, Symply techs will install the software. The number of free 90-day licenses is limited only by Symply’s ability to install them given its limited resources.

Foundry
Foundry has reset its trial database so that users can access a new 30-day trial for all products regardless of the date of their last trial. The company continues to offer unlimited non-commercial use of Nuke and Mari. On the educational side, students who are unable to access school facilities can get a year of free access to Nuke, Modo, Mari and Katana.

They have also announced virtual events, including:

• Foundry LiveStream – A series of talks around projects, pipelines and tools.
• Foundry Webinars – A 30- to 40-minute technical deep dive into Foundry products, workflows and third-party tools.
• Foundry Skill-Ups – A 30-minute guide to improving your skills as a compositor/lighter/texture artist to get to that next level in your career.
• Foundry Sessions – Special conversations with our customers sharing insights, tips and tricks.
• Foundry Workflow Wednesdays – 10-minute weekly videos posted on social media showing tips and tricks with Nuke from our experts.

Alibi Music Library
Alibi Music Library is offering free whitelisted licensing of its Alibi Music and Sound FX catalogs to freelancers, agencies and production companies needing to create or update their demo reels during this challenging time.

Those who would like to take advantage of this opportunity can choose Demo Reel 2020 Gratis from the shopping cart feature on Alibi’s website next to any desired track(s). For more info, click here.

2C Creative
Caleb & Calder Sloan’s Awesome Foundation, the charity of 2C Creative founders Chris Sloan and Carla Kaufman Sloan, is running a campaign that will match individual donations (up to $250 each) to charities supporting first responders, organizations and those affected by COVID-19. 2C is a creative agency & production company serving the TV/streaming business with promos, brand integrations, trailers, upfront presentations and other campaigns. So far, the organization’s “COVID-19 Has Met Its Match” campaign has raised more than $50,000. While the initial deadline date for people to participate was April 6, this has now been extended to April 13. To participate, please visit ccawesomefoundation.org for a list of charities already vetted by the foundation or choose your own. Then, simply email a copy of your donation receipt to: cncawesomefoundation@gmail.com and they will match it!

Red Giant 
For the filmmaking education community, Red Giant is offering Red Giant Complete — the full set of tools including Trapcode Suite, Magic Bullet Suite, Universe, VFX Suite and Shooter Suite — free for students or faculty members of a university, college or high school. Instead of buying separate suites or choosing which tools best suit one’s educational needs or budget, students and teachers can get every tool Red Giant makes completely free of charge. All that’s required is a simple verification.

How to get a free Red Giant Complete license if you are a student, teacher or faculty member:
1. Use school or organization ID or any proof of current employment or enrollment for verification. More information on academic verification is available here.
2. Send your academic verification to academic@redgiant.com.
3. Wait for approval via email before purchasing.
4. Once you get approval, go to the Red Giant Complete Product Page and “buy” your free version. You will only be able to buy the free version if you have been pre-approved.

The free education subscription will last 180 days. When that time period ends, users will need to reverify their academic status to renew their free subscription.

Flanders Scientific
Remote collaboration and review benefit greatly from having the same type of display calibrated the same way in both locations. To help facilitate such workflow consistency, FSI is launching a limited-time “buy one, get one for $1,000 off” special on its most popular monitor, the DM240.

Nvidia
For those pros needing to power graphics workloads without local hardware, cloud providers, such as Amazon Web Services and Google Cloud, offer Nvidia Quadro Virtual Workstation instances to support remote, graphics-intensive work quickly without the need for any on-prem infrastructure. End-users only need a connected laptop or thin client, as the virtual workstations support the same Nvidia Quadro drivers and features as the physical Quadro GPUs used by pro artists and designers in local workstations.

Additionally, last week Nvidia expanded its free virtual GPU software evaluation to 500 licenses for 90 days to help companies support their remote workers with their existing GPU infrastructure. Nvidia vGPU software licenses — including Quadro Virtual Workstation — enable GPU-accelerated virtualization so that content creators, designers, engineers and others can continue their work. More details are available here. Nvidia has also posted a separate blog on virtual GPUs to help admins who are working to support remote employees.

Harman
Harman is offering a free e-learning program called Learning Sessions in conjunction with Harman Pro University.

The Learning Sessions and the Live Workshop Series provide a range of free on-demand and instructor-led webinars hosted by experts from around the world. The Industry Expert workshops feature tips and tricks from front-of-house engineers, lighting designers, technicians and other industry experts, while the Harman Expert workshops feature in-depth product and solution webinars by Harman product specialists.

• April 7—Lighting for Churches: Live and Video with Lucas Jameson and Chris Pyron
• April 9—Audio Challenges in Esports with Cameron O’Neill
• April 15—Special Martin Lighting Product Launch with Markus Klüesener
• April 16—Lighting Programming Workshop with Susan Rose
• April 23—Performance Manager: Beginner to Expert with Nowell Helms

Apple
Apple is offering free 90-day trials of Final Cut Pro X and Logic Pro X apps for all in order to help those working from home and looking for something new to master, as well as for students who are already using the tools in school but don’t have the apps on their home computers.

Avid
For its part, Avid is offering free temp licenses for remote users of the company’s creative tools. Commercial customers can get a free 90-day license for each registered user of Media Composer | Ultimate, Pro Tools, Pro Tools | Ultimate and Sibelius | Ultimate. For students whose school campuses are closed, any student of an Avid-based learning institution that uses Media Composer, Pro Tools or Sibelius can receive a free 90-day license for the same products.

Aris
Aris, a full-service production and post house based in Los Angeles, is partnering with ThinkLA to offer free online editing classes for those who want to sharpen their skills while staying close to home during this worldwide crisis. The series will be taught by Aris EP/founder Greg Bassenian, an award-winning writer and director who has edited numerous projects for clients including Coca-Cola, Chevy and Zappos.

mLogic
mLogic is offering a 15% discount on its mTape Thunderbolt 3 LTO-7 and LTO-8 solutions. The discount applies to orders placed on the mTape website through April 20th. Use discount code mLogicpostPerspective15%.

Xytech
Xytech has launched “Xytech After Dark,” a podcast focusing on trends in the media and broadcasting industries. The first two episodes are now available on iTunes, Spotify and all podcasting platforms.

Xytech’s Greg Dolan says the podcast “is not a forum to sell, but instead to talk about why we create the functionality in MediaPulse and the types of things happening in our industry.”

Hosted by Xytech’s Gregg Sandheinrich, the podcast will feature Xytech staff, along with special guests. The first two episodes cover topics including the recent HPA Tech Retreat (featuring HPA president Seth Hallen), as well as the cancellation of the NAB Show, the value of trade shows and the effects of COVID-19 on the industry.

Adobe
Adobe shared a guide to best practices for working from home. It’s meant to support creators and filmmakers who might be shifting to remote work and need to stay connected with their teams and continue to complete projects. You can find the guide here.

Adobe’s principal Creative Cloud evangelist, Jason Levine, hosted a live stream, Video Workflows With Team Projects, that focuses on remote workflows.

Additionally, Karl Soule, senior technical business development manager, hosted a stream focusing on remote video workflows and collaboration in the enterprise. If you sign up on this page, you can see his presentation.

Streambox
Streambox has introduced a pay-as-you-go software plan for video professionals who use its Chroma 4K, Chroma UHD, Chroma HD and Chroma X streaming encoder/decoder hardware. Since the software has been “decoupled” from the hardware platform, those who own the hardware can rent the software on a monthly basis, pause the subscription between projects and reinstate it as needed. By renting software for a fixed period, creatives can take on jobs without having to pay outright for technology that might otherwise have been impractical to own.

Frame.io 
Through the end of March, Frame.io is offering 2TB of free extra storage capacity for 90 days. Those who could use that additional storage to accommodate work-from-home workflows should email rapid-response@frame.io to get it set up.

Frame.io is also offering free Frame.io Enterprise plans for the next 90 days to support educational institutions, nonprofits and health care organizations that have been impacted. Please email rapid-response@frame.io to set up this account.

To help guide companies through this new reality of remote working, Frame.io is launching a new “Workflow From Home” series on YouTube, hosted by Michael Cioni, with the first episode launching Monday, March 23rd. Cioni will walk through everything artists need to keep post production humming as smoothly as possible. Subscribe to the Frame.io YouTube channel to get notified when it’s released.

EditShare
EditShare has made its web-based, remote production and collaboration tool, Flow Media Management, free through July 1st. Flow enables individuals as well as large creative workgroups to collaborate on story development, with capabilities to perform extensive review and approval from anywhere in the world. Those interested can complete this form and one of EditShare’s Flow experts will follow up.

Veritone 
Veritone will extend free access to its core applications — Veritone Essentials, Attribute and Digital Media Hub — for 60 days. Targeted to media and entertainment clients in radio, TV, film, sports and podcasting, Veritone Essentials, Attribute, and Digital Media Hub are designed to make data and content sharing easy, efficient and universal. The solutions give any workforce (whether in the office or remote) tools that accelerate workflows and facilitate collaboration. The solutions are fully cloud-based, which means that staff can access them from any home office in the world as long as there is internet access.

More information about the free access is here. Certain limitations apply. Offer is subject to change without notice.

SNS
In an effort to quickly help EVO users who are suddenly required to work on editing projects from home, SNS has released Nomad for on-the-go, work-from-anywhere, remote workflows. It is a simple utility that runs on any Mac or Windows system that’s connected to EVO.

Nomad helps users repurpose their existing ShareBrowser preview files into proxy files for offline editing. These proxy files are much smaller versions of the source media files, and therefore easier to use for remote work. They take up less space on the computer, take less time to copy and are easier to manage. Users can edit with these proxy files, and after they’re finished putting the final touches on the production, their NLE can export a master file using the full-quality, high-resolution source files.

Nomad is available immediately and free to all EVO customers.

Ftrack
Remote creative collaboration tool ftrack Review is free for all until May 31. This date might extend as the global situation continues to unfold. ftrack Review is an out-of-the-box remote review and approval tool that enables creative teams to collaborate on, review and approve media via their desktop or mobile browser. Contextual comments and annotations eliminate confusion and reduce reliance on email threads. ftrack Review accepts many media formats as well as PDFs. Every ftrack Review workspace receives 250 GB of storage.

Cinedeck 
Cinedeck’s cineXtools lets users edit and correct file deliveries from home. From now until April 3rd, pros can get a one-month license of cineXtools free of charge.

Adding precise and realistic Foley to The Invisible Man

Foley artists normally produce sound effects by mimicking the action of characters on a screen, but for Universal Pictures’ new horror-thriller, The Invisible Man, the Foley team from New York’s Alchemy Post Sound faced the novel assignment of creating the patter of footsteps and swish of clothing for a character who cannot be seen.

Directed by Leigh Whannell, The Invisible Man centers on Cecilia Kass (Elisabeth Moss), a Bay Area architect who is terrorized by her former boyfriend, Adrian Griffin (Oliver Jackson-Cohen), a wealthy entrepreneur who develops a digital technology that makes him invisible. Adrian causes Cecilia to appear to be going insane by drugging her, tampering with her work and committing similarly fiendish acts while remaining hidden from sight.

The film’s sound team was led by the LA-based duo of sound designer/supervising sound editor P.K. Hooker and re-recording mixer Will Files. Files recalls that he and Hooker had extensive conversations with Whannell during pre-production about the unique role sound would play in telling the film’s story. “Leigh encouraged us to think at right angles to the way we normally think,” he recalls. “He told us to use all the tools at our disposal to keep the audience on the edge of their seats. He wanted us to be bold and create something very special.”

Hooker and Files asked Alchemy Post Sound to create a huge assortment of sound effects for the film. The Foley team produced footsteps, floor creaks and fist fights, but its most innovative work involved sounds that convey Adrian’s onscreen presence when he is wearing his high-tech invisibility suit. “Sound effects let the audience know Adrian is around when they can’t see him,” explains lead Foley artist Leslie Bloome. “The Invisible Man is a very quiet film and so the sounds we added for Adrian needed to be very precise and real. The details and textures had to be spot on.”

Alchemy’s Andrea Bloome, Ryan Collison and Leslie Bloome

Foley mixer Ryan Collison adds that getting the Foley sound just right was exceedingly tough because it needed to communicate Adrian’s presence, but in a hesitant, ephemeral manner. “He’s trying to be as quiet as possible because he doesn’t want to be heard,” Collison explains. “You want the audience to hear him, but they should strain just a bit to do so.”

Many of Adrian’s invisible scenes were shot with a stand-in wearing a green suit who interacted with other actors and was later digitally removed. Alchemy’s Foley team had access to the original footage and used it in recording matching footsteps and body motions. “We were lucky to be able to perform Foley to what was originally shot on the set, but unlike normal Foley work, we were given artistic license to enhance the performance,” notes Foley artist Joanna Fang. “We could make him walk faster or slower, seem creepier or step with more creakiness than what was originally there.”

Foley sound was also used to suggest the presence of Adrian’s suit, which is made from neoprene and covered in tiny optical devices. “Every time Adrian moves his hand or throws a punch, we created the sound of his suit rustling,” Fang explains. “We used glass beads from an old chandelier and light bulb filaments for the tinkle of the optics and a yoga mat for the material of the suit itself. The result sounds super high-tech and has a menacing quality.”

Special attention was applied to Adrian’s footsteps. “The Invisible Man’s feet needed a very signature sound so that when you hear it, you know it’s him,” says Files. “We asked the Foley team for different options.”

Ultimately, Alchemy’s solution involved something other than shoes. “Like his suit, Adrian’s shoes are made of neoprene,” explains Bloome, whose team used Neumann KMR 81 mics, an Avid C24 Pro Tools mixing console, a Millennia HV-3D eight-channel preamp, an Apogee Maestro control interface and Adam A77X speakers. “So they make a soft sound, but we didn’t want it to sound like he’s wearing sneakers, so I pulled large rubber gloves over my feet and did the footsteps that way.”

Invisible Adrian makes his first appearance in the film’s opening scene when he invades Cecilia’s home while she is asleep in bed. For that scene, the Foley team created sounds for both the unseen Adrian and for Cecilia as she moves about her house looking for the intruder. “P.K. Hooker told us to imagine that we were a kid who’s come home late and is trying to sneak about the house without waking his parents,” recalls Foley editor Nick Seaman. “When Cecilia is tiptoeing through the kitchen, she stumbles into a dog food can. We made that sound larger than life, so that it resonates through the whole place. It’s designed to make the audience jump.”

Will Files

“P.K. wanted the scene to have more detail than usual to create a feeling of heightened reality,” adds Foley editor Laura Heinzinger. “As Cecilia moves through her house, sound reverberates all around her, as if she were in a museum.”

The creepiness was enhanced by the way the effects were mixed. “We trick the audience into feeling safe by turning down the sound,” explains Files. “We dial it down in pieces. First, we removed the music, and then the waves, so you just hear her bare feet and breath. Then, out of nowhere, comes this really loud sound, the bowl banging and dog food scattering across the floor. The Foley team provided multiple layers that we panned throughout the theater. It feels like this huge disaster because of how shocking it is.”

At another point in the film, Cecilia meets Adrian as she is about to get into her car. It’s raining and the droplets of water reveal the contours of his otherwise invisible frame. To add to the eeriness of the moment, Alchemy’s Foley team recorded the patter of raindrops. “We recorded drops differently depending on whether they were landing on the hood of the car or its trunk,” says Fang. “The drops that land on Adrian make a tinkling sound. We created that by letting water roll off my finger. I also stood on a ladder and dropped water onto a chamois for the sound of droplets striking Adrian’s suit.”

The film climaxes with a scene in a psychiatric hospital where Cecilia and several guards engage in a desperate struggle with the invisible Adrian. “It’s a chaotic moment but the footsteps help the audience track Adrian as the fight unfolds,” says Foley mixer Connor Nagy. “The audience knows where Adrian is, but the guards don’t. They hear him as he comes around corners and moves in and out of the room. The guards, meanwhile, are shaking in disbelief.”

“The Foley had a lot of detail and texture,” adds Files. “It was also done with finesse. And we needed that, because Foley was featured in a way it normally isn’t in the mix.”

Alchemy often uses Foley sound to suggest the presence of characters who are off screen, but this was the first instance when they were asked to create sound for a character whose presence onscreen derives from sound alone. “It was a total group effort,” says Bloome. “It took a combination of Foley performance, editing and mixing to convince the audience that there is someone on the screen in front of them who they can’t see. It’s freaky.”


Talking localization with Deluxe’s Chris Reynolds

In a world where cable networks and streaming services have made global content the norm, localization work is more important than ever. For example, Deluxe’s global localization team provides content creators with transcription, scripting, translation, audio description, subtitling and dubbing services. Their team is made up of 1,300 full-time employees and a distributed workforce of over 6,000 translators, scripting editors, AD writers and linguistic quality experts that cover more than 75 languages.

Chris Reynolds

We reached out to Chris Reynolds, Deluxe’s SVP/GM of worldwide localization, to find out more.

Can you talk about dubbing, which is a big part of this puzzle?
We use our own Deluxe-owned studios across the globe, along with our extensive partner network of more than 350 dubbing studios around the world. We also have technology partners that we call on for automated language detection, conform, transcription and translation tools.

What technology do you use for these services?
Our localization solution is part of Deluxe’s cloud-based platform, Deluxe One. It uses cloud-based automation and integrated web applications for our workforce to help content creators and distributors who need to localize content in order to reach audiences.

You seem to have a giant well of talent to pull from.
We’ve been building up our workforce for over 15 years. Today’s translations and audio mixes have to be culturally adapted so that content reflects the creative and emotional intent of writers, directors and actors. We want the content to resonate and the audience to react appropriately.

How did Deluxe build this network?
Deluxe launched its localization group over 15 years ago, and from the beginning we believed that you need a global workforce to support the demands of global audiences so they can access high-quality localized content quickly and affordably.

Because our localization platform and services have been developed to support Deluxe’s end-to-end media supply chain offering, we know how to provide quality results across multiple release windows.

We continue to refine our services to simplify reuse of localized assets across theatrical, broadcast and streaming platforms. The build-up of our distributed workforce was intentional and based on ensuring that we’re recruiting talent whose quality of work supports these goals. We match our people to the content and workflows that properly leverage their skill sets.

Can you talk about your workflow/supply chain? What tools do you call on?
We’ve been widening our use of automation and AI technologies. The goal is always to speed up our processes while maintaining pristine quality. This means expanding our use of automated speech recognition (ASR) and machine translation (MT), as well as implementing automated QC, conversion, conform, compare and task assignment features to streamline our localization supply chain. The integration of these technologies into our cloud-based localization platform has been a significant focus for us.

Is IMF part of that workflow?
IMF is absolutely a part of the workflow. In fact, driving its adoption is the rapid growth of localized international iterations for over-the-top (OTT), video on demand (VOD) and subscription video on demand (SVOD). Deluxe has been using localized component workflows, the core concept that IMF uses to simplify versioning, since their inception.

Is the workflow automated?
To an extent … adding new technology into our workflow is designed to make things more efficient. And these technologies are not meant as a replacement for our talent. Automation helps free up those artists from the more manual tasks and allows them to focus on the more creative aspects of localization.

By using automation in our workflows, we have been able to take on additional projects and explore new areas in M&E localization. We will continue to use workflow automation and AI/ML in our work.

Can you talk about transcription and how you handle that process?
Transcription is a critical part of the localization process and is a step that demands the highest possible quality. Whether we’re creating a script, delivering live or prerecorded captions, or creating an English template for subsequent translations, the initial transcription must be accurate.

Our teams use ASR to help speed up the process, but because the expectation is so high and many transcription tasks also require annotation that current AI technologies can’t deliver, our human workforce must review, qualify, amend and adapt the ASR output.

All of our transcription work undergoes a secondary QA at some point. Sometimes the initial deliverable is immediate, as is the case with live captions, but even then, revisions are often made during secondary key-outs or before the file is delivered for subsequent downstream use.

What are some of the biggest challenges for localization?
The rise in original content and global distribution, and the need to localize that content faster than ever, is probably the biggest general challenge. We also continue to see new competitors entering the already crowded market.

And it’s not just competitors — customers are challenging our industry standards too, with some bringing localization in house. To accommodate this change, we’re always adapting and refining workflows to fit what our customers need. We are always checking in with them to make sure our teams can anticipate change and create solutions that solve challenges before they impact the rest of the supply chain.

What are some projects that you’ve worked on recently?
Some examples are Star Wars: The Rise of Skywalker, The Mandalorian, The Irishman, Joker, Marriage Story and The Marvelous Mrs. Maisel.

Finally, taking into account the COVID-19 crisis, I imagine that worldwide content will be needed even more. How will this affect your part of the process?
The demand for in-home entertainment continues to climb, mainly driven by an uptick in OTT and gaming in light of these unprecedented events. We are working with creators, media owners and platforms to provide localization services that can help respond to this recent influx in the global distribution of films and series.

Unfortunately, because several productions and dubbing studios around the world have had to shut down, there will be delays getting new content out. We’re working closely with our customers to complete as much work as we can during this time so that everyone can ramp up quickly once things start back up.

We’re also seeing big increases in catalog content orders for streaming platforms. Our teams are helping by providing large-scale subtitle and audio conforms, creating any new subtitles as needed, and creating dubbed audio versions for those languages that are not affected by studio closures.


Mixing and sound design for NatGeo’s Cosmos: Possible Worlds

By Patrick Birk

National Geographic’s Cosmos returned for 2020 with Possible Worlds, writer/director/producer Ann Druyan’s reimagining of the house that Carl Sagan built. Through cutting-edge visuals combined with the earnest, insightful narration of astrophysicist Neil deGrasse Tyson, the series aims to show audiences how brilliant the future could be… if we learn to better understand the natural phenomena of which we are a part.

I recently spoke with supervising sound editor/founder Greg King and sound designer Jon Greasley of LA’s King Soundworks about how they tackled the challenges of bringing the worlds of forests and bees to life in Episode 6, “The Search for Intelligent Life on Earth.”

L-R: Greg King and Jon Greasley

In this episode, Neil deGrasse Tyson talks about ripples in space time. It sounds like drops of water, but it also sounds a little synthesized to me. Was that part of the process?
Jon Greasley: Sometimes we do use synthesized sound, but it depends on the project. For example, we use the synthesizer a great deal when we’re doing science-fiction work, like The Orville, to create user interface beeps, spaceship noises and things. But for this show, we stayed away from that because it’s about the organic universe around us, and how we fit into it.

We tried to stick with recordings of real things for this show, and then we did a lot of processing and manipulation, but we tried to do it in a way where everything still sounded grounded and organic and natural. So if there was an instance where we might perhaps want to use some sort of synth bass thing, we would instead, for example, use a real bass guitar or stringed instrument — things that provided the show with an organic feel.

Did you guys provide the score as well?
Greasley: No, Alan Silvestri did the score, but there’s just so much we can do. Everybody that works at King Soundworks, almost without exception, is a musician. We’ve got drummers, guitarists, bass players and keyboard players. Having a sense of musicality really helps with the work that we do, so those are just honestly tools in our tool kit that we can go to very leisurely because it’s second nature to us. There’s a bunch of guitars on the wall at our main office, and everybody’s pulling guitars and basses out and playing throughout the day.

Greg King: We even use a didgeridoo as one of the elements for the Imagination — the ship that Neil deGrasse Tyson flies around in — because we like the low, throbbing, oscillating tone and the pitch ranges we can get out of it.

Sometimes I wasn’t sure where the sound design and score intersected. How do you balance those two, and what was the creative process like between yourselves and Silvestri?
King: Alan is one of the top composers in Hollywood. Probably the biggest recent thing he did was the Avengers movies. He’s a super-pro, so he knows the score, he understands what territory the sound design is going to take and when each element is going to take center stage. More often than not, when we’re working with composers, that tends to be when things bump or don’t bump, but when you’re dealing with a pro like Alan, it’s innate with him — when score or design take over.

Due to the show’s production schedule, we were often getting VFX while we were mixing it, which required some improvisation. We’d get input from executive producers Brannon Braga and Ann Druyan, and once we had the VFX, if we needed to move Neil’s VO by a second, we could do that. We could start the music five seconds later, or maybe sound design would need to take over, and we get out of the music for 30 seconds. And conversely, if we just had 30 seconds of this intense sound design moment, we could get rid of our sound effects and sound design and let music carry this scene.

You pre-plan as much as you can, but because of the nature of this show, there was a lot of improvisation happening on the stage as we were mixing. We would very often just try things, and we were given the latitude by Ann and Brannon to try that stuff and experiment. The only rule was to tell the story better.

I heard that sense of musicality you’d mentioned, even in things like the backgrounds of the show. For example, Neil deGrasse Tyson’s walking through the forest, and you have it punctuated with woodpeckers.
Greasley: That was a good layer. There’s a sense of rhythm in nature anyway. We talk about this a lot… not necessarily being able to identify a constant or consistent beat or rhythm, but just the fact that the natural world has all of these ebbs and flows and rhythms and beats.

In music theory classes, they’ll talk about how there’s a reason 4/4 is the most common time signature, and it’s because so many things we do in life are in fours: walking or your heartbeat, anything like that. That’s the theory, anyway.

King: Exactly, because one of the overarching messages of this series is that we’re all connected, everything’s connected. We don’t live in isolation. So from the cellular level in our planet to within our bodies to this big macro level through the universe, things have a natural rhythm in a sense and communicate consciously or unconsciously. So we try to tie things together by building rhythmic beats and hits so they feel connected in some way.

Did you use all of the elements of sound design for that? Backgrounds? Effects?
King: Absolutely. Yeah, we’ll do that in the backgrounds, like when Neil deGrasse is walking across the calendar, we’ll be infusing that kind of thing. So as you go from scene to scene and episode to episode, there’s a natural feel to things. It doesn’t feel like individual events happening, but they’re somehow, even subconsciously, tied together.

It definitely contributed to an emotional experience by the end of the episode. For the mycelium network, what sound effects or recordings did you start off with? Sounds from nature, and then you process them?
King: Yes. And sparking. We had recordings, a whole bunch of different levels of sparking, and we took these electrical arcs and manipulated and processed them to give it that lighter, more organic feeling. Because when we saw the mycelium, we were thinking of connecting the communication of the bees, brain waves and mycelium, sending information among the different plants. That’s an example of things we’re all trying to tie together on that organic level.

It’s all natural, so we wanted to keep it feeling that way so that the mycelium sound would then tie into brain wave sounds or bees communicating.

Greasley: Some of the specific elements used in the mycelium include layers that are made from chimes, like metallic or wooden chimes, that are processed and manipulated. Then we used the sounds of gas — pressure release-type sound of air escaping. That gives you that delicate almost white noise, but in a really specific way. We use a lot of layers of those sorts of things to create the idea of those particles moving around and communicating with each other as well.

You stress the organic nature of the sound design, but at times elements sounded bitcrushed or digitized a bit, and that made sense to me. The way I understand things like neural networks is almost already in a digital context. Did you distort the sounds, mix them together to glue them?
Greasley: There’s definitely a ton of layers, and sometimes yeah, it can help to run everything through one process to help the elements stick. I don’t specifically bitcrush, although we did a lot of stuff with some time stretching. So sometimes you do end up with artifacting, and sometimes it’s desirable and sometimes it isn’t. There’s lots of reverb because reverb is one of the more naturalistic sounding processes you can do.

With reverbs in mind, how much of the reverbs on Tyson’s voice were recorded on set, and how much was added in post?
King: That’s a great question because the show’s production period was long. In one shot, Mr. Tyson may be standing on cliffs next to the ocean, and then the next time you see him he’s in this very lush forest. Not only are those filmed at different times, but because they’re traveling around so much, they often hire a local sound recordist. So not only is his voice recorded at different times and in different locations, but by different sets of equipment.

There’s also a bunch of in-studio narration, and that came in multiple parts as well. As they were editing, they discovered we needed to flesh out this line more, or now that we’ve cut it this way, we have to add this information, or change this cadence.

So now you had old studio recordings, new studio recordings and all the various different location recordings. And we’re trying to make it sound like it’s one continuous piece, so you don’t hear all those differences. We used a combination of reverbs so that when you went from one location, you didn’t have a jarring reverb change.

A lot of it was our ADR mixer Laird Fryer, who really took it upon himself to research those original production recordings so when Neil came into the studio here, he could match the microphones as much as possible. Then our ADR supervisor Elliot Thompson would go through and find the best takes that matched. It was actually one of the bigger tasks of the show.

Do you use automated EQ match tools as a starting point?
King: Absolutely. I use iZotope EQ matching all the time. That’s the starting point. And sometimes you get lucky and it matches great right away, and you go, “Wow, that was awesome. Fantastic.” But usually, it’s a starting point, and then it’ll be a combination of additional EQ by ear, and you’ll do reverb matching and EQing the reverb. Then I’ll use a multi-band compressor. I like the FabFilter multiband compressor, and I’ll use that to even further roll the EQ of the dialogue in a gentler way.

I’ve used all those different tools to try getting it as close as I could. And sometimes there will be a shift in the quality of his dialogue, but we decided that was a better way to go because maybe there was just a performance element of the way he delivered a line. So essentially the trade-off was to go with a minor mismatch to keep the performance.
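For readers curious what a match EQ is actually doing, here is a minimal, hypothetical Python sketch of the general idea (not iZotope’s or FabFilter’s algorithm): it compares the long-term average spectrum of a reference dialogue recording with a studio ADR take and turns the smoothed ratio into a corrective filter. The filenames, FFT size and gain limits are placeholder assumptions.

```python
# Hypothetical EQ-matching sketch: estimate the average spectrum of a
# reference dialogue file and of an ADR take, then filter the ADR with the
# smoothed gain ratio as a starting-point match EQ.
# Assumes mono WAV files at the same sample rate; filenames are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch, firwin2, lfilter

def avg_spectrum(path, n_fft=2048):
    rate, x = wavfile.read(path)
    freqs, psd = welch(x.astype(np.float64), fs=rate, nperseg=n_fft)
    return rate, freqs, np.sqrt(psd)                   # magnitude, not power

rate, freqs, ref_mag = avg_spectrum("location_dialogue.wav")   # reference tone
_,    _,     adr_mag = avg_spectrum("studio_adr.wav")          # take to match

# Gain curve that pushes the ADR spectrum toward the reference,
# limited to roughly +/- 12 dB so the correction stays gentle
gain = np.clip(ref_mag / np.maximum(adr_mag, 1e-9), 0.25, 4.0)

# Turn the curve into a linear-phase FIR filter (frequencies normalized 0..1)
norm_f = freqs / (rate / 2.0)
match_fir = firwin2(1025, norm_f, gain)

_, adr = wavfile.read("studio_adr.wav")
matched = lfilter(match_fir, [1.0], adr.astype(np.float64))
matched = np.clip(matched, -32768, 32767)              # avoid int16 overflow
wavfile.write("studio_adr_matched.wav", rate, matched.astype(np.int16))
```

Even under these assumptions, the output is only a first pass; as King describes, EQ by ear, reverb matching and gentle multiband compression still do the finishing work.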

What would your desert island EQ be?
King: We have different opinions. We both do sound effects, and we both do dialogue, so it’s a personal taste. On dialogue, right now I’m a real fan of the FabFilter suite of EQs. For music, I tend to use the McDSP EQs.

Greasley: We’re on the same page for the music EQs. When I’m mixing music, I love McDSP Channel G. Not only the EQ; the compressor is also fantastic on that particular plugin. I use that on all of my sound effects and sound design tracks too. Obviously, before you get to the mix, there’s a whole bunch of other stuff you could use from the design stage, but once I’m actually mixing it, the Channel G is my go-to.

VFX play a heavy role in both the mycelium network and the bee dances. Can you talk about how that affected your workflow/process?
Greasley: When we started prepping the design, some of the visuals were actually not very far along. It’s fun to watch the early cuts of the episodes because what’s ultimately going to end up being Neil standing there with a DNA strand floating above the palm of his hand begins with him standing in front of a greenscreen, and there’s the light bulb in a C stand in his hand.

Sometimes, we had to start working our sound concepts based almost purely on the description of what we were eventually going to be seeing. Based off that, and the conversations that we had with Ann Druyan and Brannon Braga in the spotting sessions, the sound concepts would have to develop in tandem with the visual concepts — both are based off of the intellectual concepts. Then on the mix stage, we would get some of these visual elements in, and we would have to tweak what we had done and what the rest of that team had done right up until the 11th hour.

Were your early sound sketches shown to the VFX department so they could take inspiration from that?
Greasley: That’s a good question. We did provide some stuff, not necessarily to the VFX department, but to the picture editing team. They would ask us to send things not to help so much with conceptualization of things, but with timings. So one of the things they asked us for early on was sounds for the Ship of the Imagination. They would lay those sounds in, and that helped them to get the rhythm of the show and to get a feel for where certain sounds are going to align.

I’m surprised to hear how early in the production process you began working on your sound design, based on how well the bee dance sounds match the light tracer along the back of the bee.
King: That was a lot of improvisation Jon and I were doing on the mix stage. We’re both sound designers, sound editors and mixers, so while we were mixing, we would be getting updates because part of the bee dance sequence is animated — pure hand-drawn animated stuff in the bee sequence — and some of it is actually beehive material, where they show you in a graphical way how the bees communicate with their wiggles and their waggles.

We then figured out a way to grab a bee sound and make it sound like it’s doing those movements, rhythms and wiggles. There’s a big satellite dish in the show, and at the end, you hear these communications coming through the computer panel that are suggested as alien transmissions. We actually took the communication methods we had developed for the bee wiggles and waggles and turned that into alien communication.

What did you process it with to achieve that?
King: Initially, we recorded actual bee sounds. We’re lucky that I live about an hour outside of LA in Santa Paula, which has beehives everywhere. We took constant bee sounds, edited them and used LFO filters to get the rhythms, and then we’d do sound editing for the starts and stops.

For the extraterrestrial communication at the end, we took the bee sounds and really thinned them out and processed them to make them sound a little more radio frequency/telecommunication-like. Then we also took shortwave radio sounds and ran that through the exact process of the LFO filters and the editing so we had the same rhythm. So while the sound is different, it sounds like a similar form of communication.

What I really learned from the series is that there’s all this communication going on that we aren’t aware of, and the mycelium’s a great example of that. I didn’t know different trees and plants communicated with each other — communicate the condition of the soil, root supply and pest invasion. It makes you see a forest in a different way.

It’s the same with the bees. I knew bees were intelligent insects, but I had no idea that a bee could pinpoint an exact location two or three miles away by a sophisticated form of communication. So that gave us the inspiration that runs through the whole series. We’re all dependent on each other; we’re all communicating with each other. In our sound design process, we wanted there to be a thread between all those forms of communication, whatever they are — that they’re basically all coming from the same place.

There’s a scene where a bee goes out to scout a new hive and ends up in a hollowed-out tree. It’s a single bee floating and moving up, down, left, right, front, back. I imagine you’d achieve that movement through panning and the depth would be through volume. Is there any mixing trick that you’re using to do the up and down?
Greasley: That’s such a level of detail. That’s cool that you even asked the question. Yes, left and right obviously; we’re in 5.1, so panning left and right, up and down. As with most things, it’s the simplest things that get you the best results. So EQ and reverb, essentially. You can create the illusion of height with the EQ. Say you do a notch at a certain frequency, and then as the bee flies up, you just roll the center of that frequency up higher. So you track the up and down movement of the bee with a little notch in EQ, and it gives you this extra sense of movement. Since the frequency is moving up and down, you can trick the ear and the brain into perceiving it as height because that’s what you’re looking at. It’s that great symbiosis of audio and video working together.

Then you can use a little bit of a woody-sounding reverb, like a convolution reverb that was recorded in a tight wood room, and then take that as the inside of this hollowed-out tree.
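As a rough illustration of the technique Greasley describes, here is a hypothetical Python sketch (not the mixers’ Pro Tools workflow) that sweeps a narrow notch filter upward over the course of a clip so the spectral dip tracks the bee’s climb. The input file, frequency range and Q are assumptions.

```python
# Hypothetical height-illusion sketch: a narrow EQ notch whose center
# frequency glides upward over the clip, suggesting vertical movement.
# Assumes a mono WAV file "bee.wav"; block-by-block filtering is simplified
# (no filter-state carry-over between blocks).
import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, lfilter

rate, audio = wavfile.read("bee.wav")
audio = audio.astype(np.float64)

block = 1024
start_hz, end_hz = 800.0, 4000.0          # notch glides up as the "bee" rises
out = np.copy(audio)                      # tail beyond the last block stays dry
n_blocks = len(audio) // block

for i in range(n_blocks):
    # Interpolate the notch center frequency across the clip
    f0 = start_hz + (end_hz - start_hz) * i / max(n_blocks - 1, 1)
    b, a = iirnotch(f0, Q=8.0, fs=rate)   # narrow notch at the current center
    seg = audio[i * block:(i + 1) * block]
    out[i * block:(i + 1) * block] = lfilter(b, a, seg)

wavfile.write("bee_height_sweep.wav", rate, out.astype(np.int16))
```

Panning, level and the wood-room convolution reverb Greasley mentions would be layered on top of something like this in an actual mix.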

King: A lot of pitch work was done with the bees too. Because when you record a bee, they’re so quiet; it basically goes “bzz” and it’s gone. So you actually end up using a lot of, let’s call them static bees, where the bee is buzzing. Now, you’re having to pitch that in fake dopplers to give the sense of movement. You’re going to have it pitched down as it gets further away and add more reverb, and then do an EQ layer on that, and the same as one’s approaching or one’s flying by. So you’re actually spending a lot of time just creating what feels like very natural sounds but that aren’t really possible to record.

A plugin like Serato Pitch ‘n Time is great for variable pitch too, because if you want something to sound like it’s moving away from you, you have a drop in pitch during the course of it, and the reverse for something approaching you.
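To make the “fake doppler” King describes a little more concrete, here is a small hypothetical sketch, unrelated to Serato Pitch ’n Time itself: a static, looping bee buzz is read back at a slowly falling rate so its pitch drops as the insect recedes, while its level fades with it. The filename, duration and pitch range are assumptions.

```python
# Hypothetical "fake doppler": resample a static bee buzz at a slowly
# decreasing rate so the pitch falls as the bee "moves away", and fade the
# level at the same time. A rising reverb send and EQ roll-off would be
# added on top in a real mix. Assumes a mono WAV file.
import numpy as np
from scipy.io import wavfile

rate, buzz = wavfile.read("bee_static.wav")        # placeholder static buzz
buzz = buzz.astype(np.float64)

dur_s = 3.0
n_out = int(dur_s * rate)
# Playback speed glides from 1.0 (unshifted) down to 0.85 (about -2.8 semitones)
speed = np.linspace(1.0, 0.85, n_out)
# Cumulative read position into the source; wrap to keep looping the buzz
read_pos = np.cumsum(speed) % len(buzz)
doppler = np.interp(read_pos, np.arange(len(buzz)), buzz)

# Fade the level as the bee recedes
gain = np.linspace(1.0, 0.3, n_out)
out = doppler * gain
wavfile.write("bee_flyaway.wav", rate, out.astype(np.int16))
```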

Greg King playing guitar

How do you get a single bee sound?
King: You gather a few bees and do a few different things. The easiest way is to get a few bees, bring them into your house, and release them at a brightly lit window. Then the bees are buzzing away like crazy to try to get out the window. You can just track it with the microphone. You’ll then have to go through and edit out any of the louder window knocks.

I’ve tried all different things through the years, like having them in jars and all that kind of stuff, but there’s too much acoustic to that. I’ve discovered that with flies, grasshoppers, or any of the larger winged insects that actually make a noise, doing it in the daytime against the window is the best way because they’ll go for a long time.

What was your biggest takeaway, as an artist, as a sound designer, from working on this project?
Greasley: It was so mind-blowing how much we learned from the people on Cosmos. The people that put the show together can accurately be described as geniuses, particularly Ann. She’s just so unbelievably smart.

Each episode had its individual challenges and taught us things in terms of the craft, but I think for me, the biggest takeaway on a personal and intellectual level is the interconnectedness of everything in the observable world. And the further we get with science, the more we’re able to observe, whether it’s at the subatomic quantum level or billions of light-years away.

Just the level to which all life and matter is interconnected and interdependent.

I also think we’re seeing practical examples of that right now with the coronavirus, in terms of unexpected consequences. It’s like a microcosm for what could happen in the future.
King: On a whole philosophical level, we’re at this particular point in time globally, where we seem to be going down a path of ignoring science, or denying science is there. And when you get to watch a series like Cosmos, you can see science is how we’re going to survive. If we learn to interact with nature, and use nature as a technology, as opposed to using nature as a resource, what we could eventually do is mind-blowing. So I think the timing of this is ideal.


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

Apple and Avid offer free temp licenses during COVID-19 crisis

Apple is offering free 90-day trials of Final Cut Pro X and Logic Pro X apps for all in order to help those working from home and looking for something new to master, as well as for students who are already using the tools in school but don’t have the apps on their home computers.

Apple Final Cut X

Apple is extending what is normally a 30-day trial for Final Cut Pro X, while a free trial is new to Logic Pro X. The extension to 90 days is for a limited time and will revert to 30 days across both apps in the future.

Trials for both Final Cut Pro X and Logic Pro X are now available. Customers can download the free trials on the web pages for Final Cut Pro X and Logic Pro X. The 90-day extension is also available to customers who have already downloaded the free 30-day trial of Final Cut Pro X.

For its part, Avid is offering free temp licenses for remote users of the company’s creative tools. Commercial customers can get a free 90-day license for each registered user of Media Composer | Ultimate, Pro Tools, Pro Tools | Ultimate and Sibelius | Ultimate. For students whose school campuses are closed, any student of an Avid-based learning institution that uses Media Composer, Pro Tools or Sibelius can receive a free 90-day license for the same products.

The offer is open through April 17.

Main Image: Courtesy of Avid

Sebastian Robertson, Mark Johnson on making Playing For Change’s The Weight

By Randi Altman

If you have any sort of social media presence, it’s likely that you have seen Playing For Change’s The Weight video featuring The Band’s Robbie Robertson, Ringo Starr, Lukas Nelson and musicians from all over the world. It’s amazing, and if you haven’t seen it, please click here now. Right now. Then come back and read how it was made.

L-R: Mark Johnson, Robbie Robertson, Sebastian Robertson, Raan Williams and Robin Moxey

The Weight was produced by Mark Johnson and Sebastian Robertson, Robbie’s son. It was a celebration of the 50th anniversary of The Band’s first studio album, Music From Big Pink, where the song “The Weight” first appeared. Raan Williams and Robin Moxey were also producers on the project.

Playing For Change (PFC) was co-founded by Johnson and Whitney Kroenke in 2002 with the goal to share the music of street musicians worldwide. And it seems the seed of the idea involved the younger Robertson and Johnson. “Mark Johnson is an old friend of mine,” explains Robertson. “I was sitting around in his apartment when he initially conceived the idea of Playing For Change. At first, it was a vehicle that brought street musicians into the spotlight, then it became world musicians, and then it evolved into a big musical celebration.”

Johnson explains further: “Playing For Change was born out of the idea that no matter how many things in life divide us, they will never be as strong as the power of music to bring us all together. We record and film songs around the world to reconnect all of us to our shared humanity and to show the world through the lens of music and art.” Pretty profound words considering current events.

Mermans Mosengo – Kinshasa Congo

Each went on with their busy lives, Robertson as a musician and composer, and Johnson traveling the world capturing all types of music. They reconnected a couple of years ago, and the timing was ideal. “I wanted to do something to commemorate the 50th anniversary of The Band’s Music From Big Pink — this beautiful album and this beautiful song that my dad wrote — so I brought it to Mark. I wanted to team up with some friends and we all came together to do something really special for him. That was the driving force behind the production of this video.”

To date, Playing For Change has created over 50 “Songs Around the World” videos — including The Grateful Dead’s Ripple and Jimi Hendrix’s All Along the Watchtower — and recorded and filmed over 1,000 musicians across more than 60 countries.

The Weight is beautifully shot and edited, featuring amazingly talented musicians, interesting locales and one of my favorite songs to sing along to. I reached out to Robertson and Johnson to talk through the production, post and audio post.

This was a big undertaking. All those musicians and locales… how did you choose the musicians that were going to take part in it?
Robertson: First, some friends and I went into the studio to record the very basic tracks of the song — the bass, drums, guitar, a piano and a scratch vocal. The first instrument that was added was my dad on rhythm and lead guitar. He heard this very kind of rough demo version of what we had done and played along with it. Then, slowly along the way, we started to replace all those rough instruments with other musicians around them. That’s basically how the process worked.

Larkin Poe – Venice, California

Was there an audition process, or people you knew, like Lukas Nelson and Marcus King? Or did Playing For Change suggest them?
Robertson: Playing For Change was responsible for the world musicians, and I brought in artists like Lukas, my dad, Ringo and Larkin Poe. They have this incredible syndicate of world musicians, so there is no auditioning. So we knew they were going to be amazing. We brought what we had, they added this flavor, and then the song started to take on a new identity because of all these incredible cultures that are added to it. And it just so happened that Lukas was in Los Angeles because he had been recording up at Shangri-La in Malibu. My friend Eric (Lynn) runs that studio, so we got in touch. Then we filmed Lukas.

Is Shangri-La where you initially went to record the very basic parts of the song?
Robertson: It is. The funny and kind of amazing coincidence is that Shangri-La was The Band’s clubhouse in the ’70s. Since then, producer Rick Rubin has taken over. That’s where the band recorded the studio songs of The Last Waltz (film). That’s where they recorded their album, Northern Lights – Southern Cross. Now, here we are 50 years later, recording The Weight.

Mark, how did you choose the locations for the musicians? They were all so colorful and visually stunning.
Johnson: We generally try to work with each musician to find an outdoor location that inspires them and a place that can give the audience a window into their world. Not every location is always so planned out, so we do a lot of improvising to find a suitable location to record and film music live outside.

Shooting Marcus King in Greenville, South Carolina

What did you shoot on? Did you have one DP/crew or use some from all over the world? Were you on set?
Johnson: Most of the PFC videos are recorded and filmed by one crew (Guigo Foggiatto and Joe Miller), including myself, an additional audio person and two camera operators. We work with a local guide to help us find both musicians and locations. We filmed The Weight around the world in 4K with Sony A7 cameras — one side angle, one zoom and a Ronin for more motion.

How did you capture the performances from an audio aspect, and who did the audio post?
Johnson: We record all the musicians around the world live and outside using the same mobile recording studio we’ve used since the beginning of our “Song Around the World” videos over 10 years ago. The only thing that has changed is the way we power everything. In the beginning it was golf cart batteries and then car batteries with big heavy equipment, but fortunately it evolved into lightweight battery packs.

We primarily use Grace mic preamps and Schoeps microphones, and our recording mantra comes from a good friend and musician named Keb’ Mo’. He once told us, “Sound is a feeling first, so if it feels good it will always sound good…” This inspires us to help the musicians to feel comfortable and aware that they are performing along with other musicians from around the world to create something bigger than themselves.

One interesting thing that often comes from this project that differs from life in the studio is that the musicians playing on our songs around the world tend to listen more and play less. They know they are only a part of the performance and so they try to find the best way to fit in and support the song without any ego. This reality makes the editing and mixing process much easier to handle in post.

Lukas Nelson – Austin, Texas

The Weight was recorded by the Playing For Change crew and mixed by Greg Morgenstein, Robin Moxey, Sebastian and me.

What about the editing? All that footage and lining up the song must have been very challenging. I’m assuming cutting your previous videos has given you a lot of experience with this.
Johnson: That is a great question, and one of the most challenging and rewarding parts of the process. It can get really complicated sometimes to edit because we have three cameras per shoot/musician and sometimes many takes of each performance. And sometimes we comp the audio. For example, the first section came from Take 1, the second from Take 6, etc. … and we need to match the video to correspond to each different audio take/performance. We always rough-mix the music first in Avid Pro Tools and then find the corresponding video takes in Adobe Premiere. Whenever we return from a trip, we add the new layer to the Pro Tools session, then the video edit and build the song as we go.

The Weight was a really big audio session in Pro Tools, with so many tracks and options to choose from as to who would play what fill or riff and who would sing each verse, and the video session was also huge, with about 20 performances around the world combined with all the takes that go along with them. One of the best parts of the process for me is soloing all the various instruments from around the world and seeing how amazingly they all fit together.

You edited this yourself? And who did the color grade?
Johnson: The video was colored by Jon Walls and Yasuhiro Takeuchi on Blackmagic DaVinci Resolve and edited by me, along with everyone’s help, using Premiere. The entire song and video took over a year to make, so we had time throughout the process to work together on the rough mixes and rough edits from each location and build it brick by brick as we went along the journey.

Sherieta Lewis and Roselyn Williams – Trenchtown, Jamaica

When your dad is on the bench playing and wearing headphones — and the other artists as well — what are they listening to? Are they listening to the initial sort of music that you recorded in studio, or was it as it evolved, adding the different instruments and stuff? Is that what he was listening to and playing along to?
Robertson: Yeah. My dad would listen to what we recorded, except in his case we muted the guitar, so he was now playing the guitar part. Then, as elements from my dad and Ringo are added, those [scratch] elements were removed from what we would call the demo. So then as it’s traveling around the world, people are hearing more and more of what the actual production is going to be. It was not long before all those scratch tracks were gone and people were listening to Ringo and my dad. Then we just started filling in with the singers and so on and so forth.

I’m assuming that each artist played the song from start to finish in the video, or at least for the video, and then the editor went in and cut different lines together?
Robertson: Yes and no. For example, we asked Lukas to do a very specific part as far as singing. He would sing his verse, and then he would sing a couple choruses and play guitar over his section. It varied like that. Sometimes when necessary, if somebody is playing percussion throughout the whole song, then they would listen to it from start to finish. But if somebody was just being asked to sing a specific section, they would just sing that section.

Rajeev Shrestha – Nepal

How was your dad’s reaction to all of it? From recording his own bit to watching it and listening to the final?
Robertson: He obviously came on board very early because we needed to get his guitar, and we wanted to get him filmed at the beginning of the process. He was kind of like, “I don’t know what the hell you guys are doing, but it seems cool.” And then by the time the end result came, he was like, “Oh my God.” Also, the response that his friends and colleagues had to it… I think they had a similar response to what you had, which is, A, how the hell did you do this? And, B, this is one of the most beautiful things I’ve ever seen.

It really is amazing. One of my favorite parts of the video is the very end, when your dad’s done playing, looks up and has that huge smile on his face.
Robertson: Yeah. It’s a pulling-at-the-heart-strings moment for me, because that was really a perfect picture of the feeling that I had when it all came together.

You’re a musician as well. What are you up to these days?
Robertson: I have a label under the Universal Production Music umbrella, called Sonic Beat Records. The focus of the label is on contemporary, up-to-the-minute super-slick productions. My collaboration with Universal has been a great one so far; we just started in the fall of 2019, so it’s really new. But I’m finding my way in that family, and they’ve welcomed me with open arms.

Another really fun collaboration was working with my dad on the score for Martin Scorsese’s The Irishman. That was a wonderful experience for me. I’m happy with how the music that we did turned out. Over the course of my life, my dad and I haven’t collaborated that much. We’ve just been father and son, and good friends, but as of late, we’ve started to put our forces together, and that has been a lot of fun.

L-R: Mark Johnson and Ahmed Al Harmi – Bahrain

Any other scores on the horizon?
Robertson: Yeah. I just did another score for a documentary film called Let There Be Drums!, which is a look into the mindset of rock and roll drummers. My friend, Justin Kreutzmann, directed it. He’s the son of Bill Kreutzmann, the drummer of the Grateful Dead. He gave me some original drum tracks of his dad’s and Mickey Hart’s, so I would have all these rhythmic elements to play with, and I got to compose a score on top of Mickey Hart and Bill Kreutzmann’s percussive and drumming works. That was a thrill of a lifetime.

Any final thoughts? And what’s next for you, Mark?
Johnson: One of the many amazing things that came out of making this video was our partnership with Sheik Abdulla bin Hamad bin Isa Al Khalifa from Bahrain, who works with us to help end the stereotype of terrorism through music by including musicians from the Middle East in our videos. In The Weight, you can watch an oud master in Bahrain cut to a sitar master in Nepal, followed by Robbie Robertson and Ringo Starr, and they all work so well together.

One of the best things about Playing For Change is that it never ends. There are always more songs to make, more musicians to record and more people to inspire through the power of music. One heart and one song at a time…


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.

Netflix’s Mindhunter: Skywalker’s audio adds to David Fincher’s vision

By Patrick Birk

Scott Lewis

I was late in discovering David Fincher’s gripping series on serial killers, Mindhunter. But last summer, I noticed the Netflix original lurking in my suggested titles and decided to give it a whirl. I burned through both seasons within a week. The show is both thrilling and chilling, but the majority of these moments are not achieved through blazing guns, jump scares and pyrotechnics. It instead focuses on the inner lives of multiple murderers and the FBI agents whose job it is to understand them through subtle but detail-rich conversation.

Sound plays a crucial role in setting the tone of the series and heightening tension through each narrative arc. I recently spoke to rerecording mixers Scott Lewis and Stephen Urata as well as supervising sound editor Jeremy Molod — all from Skywalker Sound — about their process creating a haunting and detail-laden soundtrack. Let’s start with Lewis and Urata and then work our way to Molod.

How is working with David Fincher? Does he have any directorial preferences when it comes to sound? I know he’s been big on loud backgrounds in crowded spaces since The Social Network.
Scott Lewis: David is extremely detail-oriented and knowledgeable about sound. So he would give us very in-depth notes about the mix… down to the decibel.

Stephen Urata: That level of attention to detail is one of the more challenging parts of working on a show like Mindhunter.

Working with a director who is so involved in the audio, does that limit your freedom at all?
Lewis: No. It doesn’t curtail your freedom, because when a director has a really clear vision, it’s more about crafting the track to be what he’s looking for. Ultimately, it’s the director’s show, and he has a way of bringing the best work out of people. I’m sure you heard about how he does hundreds of takes with actors to get many options. He takes a similar approach with sound in that we might give him multiple options for a certain scene or give him many different flavors of something to choose from. And he’ll push us to deliver the goods. For example, you might deliver a technically perfect mix but he’ll dig in until it’s exactly what he wants it to be.

Stephen Urata

Urata: Exactly. It’s not that he’s curtailing or handcuffing us from doing something creative. This project has been one of my favorites because it was just the editorial team and sound design, and then it would come to the mix stage. That’s where it would be just Scott and me in a mix room, just the two of us, and we’d get a shot at our own aesthetic and our own choices. It was really a lot of fun trying to nail down what our favorite version of the mix would be, and David really gave us that opportunity. If he wanted something else, he would have just said, “I want it like this and only do it like this.”

But at the same time, we would do something maybe completely different than he was expecting, and if he liked it, he would say, “I wasn’t thinking that, but if you’re going to go that direction, try this also.” So he wasn’t handcuffing us, he was pushing us.

Do you have an example of something that you guys brought to the table that Fincher wasn’t expecting and asked you to go with it?
Urata: The first thing we did was the train scene. It was the scene in an empty parking garage and there is the sound of an incoming train from two miles away. That was actually the first thing that we did. It was the middle of Episode 2 or something, and that’s where we started.

Where they’re talking to the BTK survivor, Kevin?
Lewis: Exactly.

Urata: He’s fidgeting and really uncomfortable telling his story, and David wanted to see if that scene would work at all, because it really relied heavily on sound. So we got our shot at it. He said, “This is the kind of direction I want you guys to go in.” Scott and I played off of each other for a good amount of time that first day, trying to figure out what the best version would be, and we presented it to him. I don’t remember him having that many notes on that first one, which is rare.

It really paid off. Among the mixes you showed Fincher, did you notice a trend in terms of his preferences?
Lewis: When I say we gave him options, it might be down to something like the Son of Sam scene. Throughout that scene we used subtle pitch-shifting to slowly lower his voice over the length of the scene, so that by the time he reveals that he actually isn’t crazy and he’s playing everybody, his voice drops a register. So when we present him options, it’s things like how much we’re pitching him down over time. It’s a constant review process.
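To make that idea concrete, here is a minimal sketch of a progressive pitch-down, assuming Python with librosa and soundfile installed. The file names, chunk count and semitone range are illustrative, and a real mix rig would automate the shift smoothly rather than stepping through segments like this.

```python
# Minimal sketch: gradually lower the pitch of a dialogue clip over its length.
# A real mix would automate this smoothly; here the clip is split into chunks,
# each shifted a little further down, as a rough illustration.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("dialogue_scene.wav", sr=None, mono=True)  # hypothetical file

n_chunks = 8          # number of segments across the scene
total_drop = -2.0     # total drop in semitones by the end of the scene

processed = []
for i, chunk in enumerate(np.array_split(y, n_chunks)):
    n_steps = total_drop * i / (n_chunks - 1)   # 0 at the start, total_drop at the end
    processed.append(librosa.effects.pitch_shift(chunk, sr=sr, n_steps=n_steps))

sf.write("dialogue_scene_pitched.wav", np.concatenate(processed), sr)
```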

The show takes place in the mid ‘70s and early ’80s. Were there any period-specific sounds or mixing tricks you used when it came to diegetic music and things like that?
Lewis: Oh yeah. Ren Klyce is the supervising sound designer on the show, and he’s fantastic. He’s the sound designer on all of David’s films. He is really good about making sure that we stay to the period. So with regard to mixing, panning is something that he’s really focused on because it’s the ‘70s. He’d tell us not to go nuts on the panning, the surrounds, that kind of thing; just keep it kind of down the middle. Also, futzes are a big thing in that show; music futzes, phone futzes … we did a ton of work on making sure that everything was period-specific and sounded right.

Are you using things like impulse responses and Altiverb or worldizing?
Lewis: I used a lot of Speakerphone by Audio Ease as well as EQ and reverb.
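For readers who want to experiment, here is a rough sketch of the band-limiting that sits at the heart of a phone or small-speaker futz, assuming Python with SciPy, NumPy and soundfile. It approximates only one piece of what a dedicated plugin like Speakerphone does, and the cutoff frequencies, drive amount and file names are illustrative.

```python
# Rough "futz" sketch: band-limit dialogue to a telephone-like range and add
# gentle soft clipping for a small-speaker character. Values are illustrative.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("clean_dialogue.wav")   # hypothetical file
if audio.ndim > 1:
    audio = audio.mean(axis=1)              # fold to mono

low_hz, high_hz = 300.0, 3400.0             # classic telephone bandwidth
sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
futzed = sosfiltfilt(sos, audio)

futzed = np.tanh(3.0 * futzed) / np.tanh(3.0)   # mild overdrive

sf.write("futzed_dialogue.wav", futzed, sr)
```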

What mixing choices did you make to immerse the viewer in Holden’s reality, i.e. the PTSD he experiences?
Lewis: When he’s experiencing anxiety, it’s really important to make sure that we’re telling the story that we’re setting out to tell. Through mixing, you can focus the viewers’ attention on what you want them to track. So that could be dialogue in the background of a scene, like the end of Episode 1, when he’s having a panic attack, and in the distance, his boss and Tench are talking. It was very important that you make out the dialogue there, even though you’re focusing on Holden having a panic attack. So it’s moments like that when it’s making sure that the viewer is feeling that claustrophobia but also picking up on the story point that we want you to follow.

Lewis: Also, Stephen did something really great there — there are sprinklers in the background and you don’t even notice, but the tension is building through them.

There’s a very intense moment when Holden’s trying to figure out who let their boss know about a missing segment of tape in an interview, and he accuses Greg, who leans back in his chair, and there’s a squeal in there that kind of ramps up the tension.
Urata: David’s really, really honed in on Foley in general — chair squeaks, the type of shoes somebody’s wearing, the squeak of the old wooden floor under their feet. All those things have to play with David. Like when Wendy’s creeping over to the stairwell to listen to her girlfriend and her ex-husband talking. David said, “I want to hear the wooden floor squeaking while she’s sneaking over.”

It’s not just the music crescendo-ing and making you feel really nervous or scared. It’s also the Foley work that’s happening in the scene: “I want to hear more of that or less of that,” or more backgrounds to just add to the sound pressure and build to the climax of the scene. David uses all those tools to accomplish the storytelling in the scene with sound.

How much ambience do you have built into the raw Foley tracks that you get, and how much is reverb added after the fact? Things like car door slams have so much body to them.
Urata: Some of those, like door slams, were recorded by Ren Klyce. Instead of just recording a door slam with a mic right next to the door and then adding reverb later on, he actually goes into a huge mansion and slams a huge door from 40 feet away and records that to make it sound really realistic. Sometimes we add it ourselves. I think the most challenging part about all of that is marrying and making all the sounds work together for the specific aesthetic of the soundtrack.

Do you have a go-to digital solution for that? Is it always something different or do you find yourself going to the same place?
Urata: It definitely varies. There’s a classic reverb, and a digital version of it: the Lexicon 480. We use that a good amount. It has a really great, natural film sound that people are familiar with. There are other ones, but it’s really just another tool. If it doesn’t work, we just have to use something else.
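Since impulse responses came up earlier, here is a minimal convolution-reverb sketch in Python with SciPy, NumPy and soundfile: convolve a dry recording with a captured impulse response, then blend it back with the dry signal. The file names and wet/dry balance are illustrative, and this is a generic technique rather than the team’s specific chain.

```python
# Minimal convolution-reverb sketch: convolve a dry sound with a recorded
# impulse response, then blend it with the dry signal.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("door_slam_dry.wav")      # hypothetical files
ir, ir_sr = sf.read("hall_impulse_response.wav")
assert sr == ir_sr, "impulse response must match the sample rate of the source"

if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir, mode="full")
wet /= np.max(np.abs(wet)) + 1e-12          # normalize the reverb tail

out = np.zeros_like(wet)
out[: len(dry)] += 0.7 * dry                # dry signal
out += 0.3 * wet                            # reverberant signal

sf.write("door_slam_reverb.wav", out, sr)
```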

Were there any super memorable ADR moments?
Lewis: I can just tell you that there’s a lot of ADR. Some whole scenes are ADR. After any Fincher show that I’ve mixed dialogue and ADR on, I am 10 times better than I was before I started. Because David’s so focused on storytelling, if there’s a subtle inflection that he’s looking for that he didn’t get on set, he will loop the line to make sure that he gets that nuance.

Did you coordinate with the composer? How do you like to mix the score so that it has a really complementary relationship to the rest of the elements?
Lewis: As re-recording mixers, they don’t involve us in the composition part of it; it just comes to us after they’ve spotted the score.

Jason Hill was the composer, and his score is great. It’s so spooky and eerie. It complements the sound design and sound effects layers really well, so a lot of it will kind of sit in there. The score is great and it’s not traditional. He’s not working with big strings and horns all over the place. He’s got a lot of synth and guitars and stuff. He would use a lot of analog gear as well. So when it comes to the mix, sometimes you get anomalies that you don’t commonly get, whether it’s hiss or other elements he’s adding to give it kind of an analog sound.

Lewis: And a lot of times we would keep that in because it’s part of his score.

Now let’s jump in with supervising sound editor Jeremy Molod.

As a sound editor, what was it like working with David Fincher?
Jeremy Molod: David and I have done about seven or eight films together, so by the time we started on Season Two of Mindhunter, we pretty much knew each other’s styles. I’m a huge fan of David’s movies. It’s a privilege to work with him because he’s such a good director, and the stuff he creates is so entertaining and beautifully done. I really admire his organization and how detailed he is. He really gets in there and gives us detail that no other director has ever given us.

Jeremy Molod

You worked with him on The Social Network. In college, my sound professors would always cite the famous bar scene, where Mark Zuckerberg and his girlfriend had to shout at each other over the backgrounds.
Molod: I remember that moment well. When we were mixing that scene, because the music was so loud and so pulsating, David said, “I don’t want this to sound like we’re watching a movie about a club; I want this to be like we’re in the club watching this.” To make it realistic, when you’re in the club, you’re straining to hear sounds and people’s voices. He said that’s what it should be like. Our mixer, David Parker, kept pushing the music up louder and louder, so you can barely make out those words.

I feel like I’m seeing iterations of that in Mindhunter as well.
Molod: Absolutely. That makes it more stressful and like you said, gives it a lot more tension.

Scott said that David’s down to the decibel in terms of how he likes his sound mixed. I’m assuming he’s that specific when it comes to the editorial as well?
Molod: That is correct. It’s actually even finer than that, down to the quarter decibel. He literally does that all the time. He gets really, really in there.

He does the same thing with editorial, and what I love about his process is he doesn’t just say, “I want this character to sound old and scared”; he gives real detail. He’ll say, “This guy’s very scared and he’s dirty and his shoelaces are untied and he’s got a rag and a piece of snot rag hanging out of his pocket. And you can hear the lint and the Swiss army knife with the toothpick part missing.” He gets into painting a picture; he wants us to literally translate it into sound, to make it sound like the picture he’s painting.

So he wanted to make Kevin sound really nervous in the truck scene. Kevin’s in the back and you don’t really see him too much. He’s blurred out. David really wanted to sell his fear by using sound, so we had him tapping his leg nervously, scratching the side of the car, kind of slapping his leg and obviously breathing really heavily and sniffing a lot, and it was those sounds that really helped sell that scene.

So while he does have the acumen and vocabulary within sound to talk to you on a technical level, he’ll give you direction in a similar way to how he would an actor.
Molod: Absolutely, and that’s always how I’ve looked at it. When he’s giving us direction, it’s actually the same way as he’s giving an actor direction to be a character. He’s giving the sound team direction to help those characters and help paint those characters and the scenes.

With that in mind, what was the dialogue editing process like? I’ve heard that his attention to detail really comes into play with inflection of lines. Were you organizing and pre-syncing the alternate takes as closely as you could with the picture selection?
Molod: We did that all the time. The inflection and the intonation and the cadence of the voices of the characters are really important to him, and he’s really good about figuring out which words of which takes he can stitch together to do it. So there might be two sentences that one actor says at one time and those sentences are actually made up of five different takes. And he does so many takes that we have a wealth of material to choose from.

We’d probably send about five or six versions to David to listen to and then he would make his notes. That would happen almost every day and we would start honing in on the performances he liked. Eventually he might say, “I don’t like any of them. You’ve got to loop this guy on the ADR stage.” He likes us to stitch the best little parts together like a puzzle.

What is the ADR stage like at Skywalker?
Molod: We actually did all of our ADR at Disney Studios in LA because David was down there, as were the actors. We did a fair amount of ADR on Mindhunter; there’s lots of it in there.

We usually have three or four microphones running during an ADR session, one of which will be a radio mic. The other three would be booms set in different locations, the same microphones that they use in production. We also throw in an extra [Sennheiser MKH 50] just to have it with the track of sound that we could choose from.

The process went great. We’d go through it, come back and give him about five or six choices, and then he would start making notes and we had to pin it down to the way he liked it. So by the time we got to the mix stage, the decision was done.

There was a scene where people are walking around talking after a murder had been committed, and what David really wanted was for them to be talking softly about this murder. So we had to go in and loop that whole scene again with them performing it at a quieter, more sustained volume. We couldn’t just turn it down. They had to perform it as if they were not quite whispering but trying to speak a little lower so no one could hear.

To what extent did loop groups play a part in the soundtrack? With the prominence of backgrounds in the show it seems like customization would be helpful, to have time-specific little bits of dialogue that might pop out.
Molod: We’ve used a group called the Loop Squad for all the features, the House of Cards shows and Mindhunter. We would send a list of all of our cues, get on the phone and explain what the reasoning was, what the storylines were. All their actors would, on their own, go and research everything that was happening at the time, so if they were just standing by a movie theater, they had something to talk about that was relevant to the period.

When it came to production sound on the show, which track did you normally find yourself working from?
Molod: In most scenes, they would have a couple of radio mics attached to the actors and they’d have several booms. Normally, there were maybe eight different microphones set up. You would have one general boom over the whole thing, plus a boom that was close to each character.

We almost always went with one of the booms, unless we were having trouble making out what they were saying. And then it depended on which actor was standing closest to the boom. One of the tricks our editors did to make it sound better was to phase-align the two. So if the boom wasn’t quite working on its own and neither was the radio mic, one of our tricks would be to make those two play together, accomplishing what we wanted: you could hear the line, but you also got the space of the room.
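To illustrate what making two mics “play together” can involve, here is a minimal Python sketch, assuming SciPy, NumPy and soundfile: it time-aligns a lav to the boom with cross-correlation before summing them, so the tracks reinforce each other instead of phasing. The file names and blend ratio are hypothetical, and this is a generic approach rather than the editors’ exact method.

```python
# Minimal sketch: align a lav mic to the boom via cross-correlation, then sum.
# File names and the 60/40 blend are illustrative assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

def load_mono(path):
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    return data, rate

boom, sr = load_mono("boom.wav")
lav, _ = load_mono("lav.wav")

n = min(len(boom), len(lav))
boom, lav = boom[:n], lav[:n]

# Lag (in samples) at which the lav lines up best with the boom.
lag = int(np.argmax(correlate(boom, lav, mode="full")) - (n - 1))
lav_aligned = np.roll(lav, lag)

blend = 0.6 * boom + 0.4 * lav_aligned
sf.write("dialogue_blend.wav", blend, sr)
```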

Were there any moments that you remember from the production tracks for effects?
Molod: Whenever we could use production effects, we always tried to get those in, because they always sound the most realistic and most pertinent to that scene and that location. If we can maintain any footsteps in the production, we always do because those always sound great.

Any kind of subtle things like creaks, bed creaks, the floor creaking, we always try to salvage those and those help a lot too. Fincher is very, very, very into Foley. We have Foley covering the whole thing, end to end. He gives us notes on everybody’s footsteps and we do tests of each character with different types of shoes on and different strides of walking, and we send it to him.

So much of the show’s drama plays out in characters’ internal worlds. In a lot of the prison interview scenes, I notice door slams here and there that I think serve to heighten the tension. Did you develop a kind of a logical language when it came to that, or did you find it was more intuitive?
Molod: No, we did have our own language for it, and that was based on Fincher’s direction. When it was really crazy, he wanted to hear the door slams and buzzers and keys jingling and tons of prisoners yelling offscreen. We spent days recording loop-group prisoners, and they would be sprinkled throughout the scene. And when the conversation turned to upsetting subject matter, we might ramp up the voices in the back.


Pat Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

A Closer Look: Delta Soundworks’ Ana Monte and Daniel Deboy

Delta Soundworks was co-founded by Ana Monte and Daniel Deboy back in 2016 in Heidelberg, Germany. This 3D/immersive audio post studio’s projects span installations, virtual reality, 360-degree films and gaming, as well as feature films, documentaries, TV shows and commercials. Its staff includes production sound mixers, recording engineers, sound designers, Foley artists, composers and music producers.

Below the partners answer some questions about their company and how they work.

How did Delta come about?
Ana Monte: Delta Soundworks grew from the combination of my creative background in film sound design and Daniel’s high-level understanding of the science of sound. I studied music industry and technology at California State University, Chico and I earned my master’s degree in film sound and sound design at the Film Academy Baden-Württemberg, here in Germany.

Daniel is a graduate of the Graz University of Technology, where he focused his studies on 3D audio and music production. He was honored with a Student Award from the German Acoustical Society (DEGA) for his research in the field of 3D sound reproduction. He has also received gold, silver and bronze awards from the Audio Engineering Society (AES) for his music recordings.

Can you talk about some recent projects?
Deboy: I think our biggest current project is working for The Science Dome at the Experimenta, a massive science center in Heilbronn, Germany. It’s a 360-degree theater with a 360-degree projection system and a 29-channel audio system, which is not standard. We create the entire sound production for all the theater’s in-house shows. For one of the productions, our composer Jasmin Reuter wrote a beautiful score, which we recorded with a chamber orchestra. It included a lot of sound design elements, like rally cars. We put all these pieces together and finally mixed them in a 3D format. It was a great ride for us.

Monte: The Science Dome has a very unique format. It’s not a standard planetarium, where everyone is looking up and to the middle, but rather a mixture of theater plus planetarium, wherein people look in front, above and behind. For example, there’s a children’s show with pirates who travel to the moon. They begin in the ocean with space projected above them, and the whole video rotates 180 degrees around the audience. It’s a very cool format and something that is pretty unique, not only in Europe, but globally. The partnership with the Experimenta is very important for us because they do their own productions and, eventually, they might license them to other planetariums.

With such a wide array of projects and requirements, tell us about your workflow.
Deboy: Delta is able to quickly and easily adjust to different workflows because we are, or at least love to be, at the edge of what’s possible. We are always happy to take on new and interesting projects, try out new workflows and designs, and look at up-and-coming techniques. I think that’s kind of a unique selling point for us. We are way more flexible than a typical post production house would be, and that includes our work for cinema sound production.

What are some tools you guys use in your work?
Deboy: Avid Pro Tools Ultimate, Reaper, Exponential Audio, iZotope RX 6 and Metric Halo 2882 3D. We also have had a license for Nugen Halo Upmix for a while, and we’ve been using it quite a bit for 5.1 production. We rely on it significantly for the Experimenta Science Dome projects because we also work with a lot of external source material from composers who deliver it in stereo format. Also, the Dome is not a 5.1/7.1 theater; it’s 29 channels. So, Upmix really helped us go from a stereo format to something that we could distribute in the room. I was able to adjust all my sources through the plugin and, ultimately, create a 3D mix. Using Nugen, you can really have fun with your audio.

Monte: I use Nugen Halo Upmix for sound design, especially to create atmosphere sounds, like a forest. I plug in my source and Upmix just works. It’s really great; I don’t have to spend hours tweaking the sound just to have it only serve as a bed to add extra elements on top. For example, maybe I want an extra bird chirping over there and then, okay, we’re in the forest now. It works really well for tasks like that.

Showrunner Derek Simonds talks USA Network’s The Sinner

By Iain Blair

Three years ago, USA Network’s Golden Globe- and Emmy-nominated series The Sinner snuck up behind viewers, grabbed them by the throat and left them gasping for air while they watched a seemingly innocent man stabbed to death at the beach. The second season pulled no punches either, focusing on the murder of a couple by a young boy.

Derek Simonds (right) on set with Bill Pullman.

Now the anthology is back with a third installment, which once again centers around Detective Harry Ambrose (Bill Pullman) as he begins a routine investigation of a tragic car accident on the outskirts of Dorchester, in upstate New York. Piece by piece, Ambrose gradually uncovers a hidden crime that pulls him into another dangerous and disturbing case focusing on Jamie Burns (Matt Bomer), a Dorchester resident, high school teacher and expectant father. The Season 3 finale airs at the end of the month on USA Network.

Also back is the show’s creator and showrunner Derek Simonds, whose credits include ABC’s limited series When We Rise and ABC’s 2015 limited series The Astronaut Wives Club. He has developed television pilots; written, directed and composed the score for his feature film Seven and a Match; and developed many independent film projects as a writer/producer, including the Oscar-nominated Sony Pictures Classics release Call Me by Your Name.

I recently spoke with Simonds about making the show — which is executive-produced by Jessica Biel (who starred in Season 1) and Michelle Purple through their company Iron Ocean — the Emmys, and his love of post.

The Sinner, “Part II” (Episode 302): Bill Pullman as Detective Lt. Harry Ambrose. (Photo by Peter Kramer/USA Network)

When you created this show, did you always envision it as a tortured human drama, a police procedural, or both?
(Laughs) Both, I guess. My previous writing and developing stuff for film and TV was never procedural-oriented. The opportunity with this show came with the book being developed and Jessica Biel being attached.  I was one of many writers vying for the chance to adapt it, and they chose my pitch. The reason the book and the bones of the show appealed to me was the “whydunnit” aspect at the core of Season 1. That really sold me, as I wasn’t very interested in doing a typical procedural mystery or a serial killer drama that was really plot-oriented.

The focus on motive and what trauma could have led to such a rash act — Cora (in Season 1) stabbing the stranger on the beach — that is essentially the mystery, the psychological mystery. So I’ve always been character-oriented in my writing. I love a good story and watching a plot unfold, but really my main interest as a writer and why I came onto the show is because it delves so deeply into character.

Fair to say that Season 3 marks a bit of a shift in the show?
Yes, and I’d say this season is less of a mystery and more of a psychological thriller. It really concerns Detective Harry Ambrose, who’s been in the two earlier seasons, and he encounters this more mundane event — a fatal, tragic car crash — something that happens all the time.

As he starts looking into it, he realizes that the survivor is not telling the whole story, and that some of the details just don’t add up. His intuition makes him look deeper, and he ends up getting into this relationship that is part suspect, part detective, part pursuer, part pursuee and part almost-friendship with this character played by Matt Bomer.

It also seems more philosophical in tone than the previous two seasons.
I think you’re right. The idea was born out of thinking about Dostoevsky and questions about “why do we kill?” Could it be for philosophical reasons, not just the result of trauma? Could it be that kind of decision? What is morality? Is it learned or is it invented? So there were all these questions and ideas, and I was also very excited to create a male character — not a helpless child or a woman, not someone so innocent — as the new character and have that character reflect Ambrose’s darker side and impulses back to him. So there was this doppelganger-y twinning going on.

Where do you shoot?
All out of New York City. We have stages in Brooklyn where we have our sets, and then we do a lot of location work all over the city and also just outside in Westchester and Rockland counties. They offer us great areas where we can cheat a more bucolic setting than we’re actually in.

It has more of a cinematic feel and look than most TV shows.
Thank you for noticing! As the creator, I’m biased about that, but I spend a lot of time and energy with my team and DP Radium Cheung and designers to really try and avoid the usual TV tropes and clichés and TV-style lighting and shooting every step of the way.

We try to think of every episode as a little film. In fact, every season is like a long film, as they’re stand-alone stories, and I embark on each season like, “OK, we’re making a 5½-hour movie,” and all the decisions we make are kind of holistic.

Do you like being a showrunner?
It’s a great privilege to be able to tell a story and make the decisions about what that story says, and to be able to make it on the scale that we do. It’s totally thrilling, and I love having a moment at the podium to talk about the culture and what’s on my mind through the characters. But I think it’s also one of the hardest jobs you could ever have. In fact, it’s really like having four or five jobs rolled into one, and it’s really, really exhausting, as you’re running between them at all times. So there’s this feeling that you never have enough time, or enough time to think as deeply in every area as you’d like. It takes its toll physically, but it’s so gratifying to get a new season done and out in the world.

Where do you post?
All in New York at Technicolor Postworks, and we do most of the editing there too, and all of our online work. We do all the sound work at Decibel 11, which is also in Manhattan. As for our VFX, we switch year to year, and this year we’re working with The Molecule.

Do you like the post process?
I love post. There’s so much relief once you finish production and that daily stress of worrying about whether you’ll get what you need is over. You can see what you have.

But you don’t have much time for post in TV as compared with film.
True. It’s an incredibly fast schedule, and as the EP I only have about four or five full days to sit down with the editor and re-cut an episode and consider what the best possible version of it is.

Let’s talk about editing. You have several editors, I assume because of the time factor. How does that work?
I really love the whole editing process, and I spend a lot of time cutting the episodes — 10 hours a day, or more, for those five days, fine-tuning all the cuts before we have to lock them. I’m not the type of showrunner who gives a few notes and goes off to the next room. I’m very hands-on, and we’ve had the same three editors except for one new guy this season.

Everyone comes back, so there’s a growing understanding of the tone and what the show is. So all three editors rotate on the eight episodes. The big editing challenges are refining performance as things become clearer from previous episodes, and running time. We’re a broadcast show, so we don’t have the leeway of a streaming show, and there’s a lot of hair-pulling over how to cut episodes down. That can be very stressful for me, as I feel we might be losing key content that brings a lot of nuance. There’s also keeping a consistent tone and rhythm, and I’m very specific about that.

You’re also a musician, so I assume you must spend a lot of time on sound and the music?
I do. I work in depth on the score with composer Ronit Kirchman, so that’s an aspect of post I really, really love, and where I spend far more time than a typical showrunner does. I understand it and can talk about it, and I have very specific ideas about what I want. But TV’s so different from movies. We do our final mix review in four hours per episode. With a movie you’d have four, five days. So there’s very little time for experimentation, and you have to have a very clear vision of what works.

Derek Simonds

How important are the Emmys to a show like this?
Very, and Jessica Biel was nominated for her role in Season 1, but we haven’t won yet. We have a lot of fans in the industry, but maybe we’re also a bit under the radar, a bit cultish.

The show could easily run for many more years. Will you do more seasons?
I hope so. The beauty of an anthology is that you can constantly refresh the story and introduce new characters, which is very appealing. If we keep going, I think it’ll pivot in a larger way to keep it really fresh. I just never want it to become predictable, where you sense a pattern.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Blackmagic releases Resolve 16.2, beefs up audio post tools

Blackmagic has updated its color, editing, VFX and audio post tool, DaVinci Resolve, to version 16.2. This new version features major Fairlight updates for audio post as well as many improvements for color correction, editing and more.

This new version has major new updates for editing in the Fairlight audio timeline when using a mouse and keyboard. This is because the new edit selection mode unlocks functionality previously only available via the audio editor on the full Fairlight console, so editing is much faster than before. In addition, the edit selection mode makes adding fades and cuts and even moving clips only a mouse click away. New scalable waveforms let users zoom in without adjusting the volume. Bouncing lets customers render a clip with custom sound effects directly from the Fairlight timeline.

Adding multiple clips is also easier, as users can now add them to the timeline vertically, not just horizontally, making it simpler to add multiple tracks of audio at once. Multichannel tracks can now be converted into linked groups directly in the timeline so users no longer have to change clips manually and reimport. There’s added support for frame boundary editing, which improves file export compatibility for film and broadcast deliveries. Frame boundary editing now adds precision so users can easily trim to frame boundaries without having to zoom all the way in the timeline. The new version supports modifier keys so that clips can be duplicated directly in the timeline using the keyboard and mouse. Users can also copy clips across multiple timelines with ease.

Resolve 16.2 also includes support for the Blackmagic Fairlight Sound Library with new support for metadata based searches, so customers don’t need to know the filename to find a sound effect. Search results also display both the file name and description, so finding the perfect sound effect is faster and easier than before.

MPEG-H 3D immersive surround sound audio bussing and monitoring workflows are now supported. Additionally, improved pan and balance behavior includes the ability to constrain panning.

Fairlight audio editing also has index improvements. The edit index is now available in the Fairlight page and works as it does in the other pages, displaying a list of all media used; users simply click on a clip to navigate directly to its location in the timeline. The track index now supports drag selections for mute, solo, record enable and lock as well as visibility controls so editors can quickly swipe through a stack of tracks without having to click on each one individually. Audio tracks can also be rearranged by clicking and dragging a single track or a group of tracks in the track index.

This new release also includes improvements in AAF import and export. AAF support has been refined so that AAF sequences can be imported directly to the timeline in use. Additionally, if the project features a different time scale, the AAF data can also be imported with an offset value to match. AAF files that contain multiple channels will also be recognized as linked groups automatically. The AAF export has been updated and now supports industry-standard broadcast wave files. Audio cross-fades and fade handles are now added to the AAF files exported from Fairlight and will be recognized in other applications.

For traditional Fairlight users, this new update makes major improvements in importing legacy Fairlight projects, including much faster opening of projects with over 1,000 media files.

Audio mixing is also improved. A new EQ curve preset for clip EQ in the inspector allows removal of troublesome frequencies. New FairlightFX filters include a new meter plug-in that adds a floating meter for any track or bus, so users can keep an eye on levels even if the monitoring panel or mixer are closed. There’s also a new LFE filter designed to smoothly roll off the higher frequencies when mixing low-frequency effects in surround.

Working with immersive sound workflows using the Fairlight audio editor has been updated and now includes dedicated controls for panning up and down. Additionally, clip EQ can now be altered in the inspector on the editor panel. Copy and paste functions have been updated, and now all attributes — including EQ, automation and clip gain — are copied. Sound engineers can set up their preferred workflow, including creating and applying their own presets for clip EQ. Plug-in parameters can also be customized or added so that users have fast access to their preferred tool set.

Clip levels can now be changed relatively, allowing users to adjust the overall gain while respecting existing adjustments. Clip levels can also be reset to unity, easily removing any level adjustments that might have previously been made. Fades can also be deleted directly from the Fairlight Editor, making it faster to do than before. Sound engineers can also now save their preferred track view so that they get the view they want without having to create it each time. More functions previously only available via the keyboard are now accessible using the panel, including layered editing. This also means that automation curves can now be selected via the keyboard or audio panel.

Continuing with the extensive improvements to Fairlight audio, there have also been major updates to the audio editor transport control. Track navigation is now improved and even works when nothing is selected. Users can navigate directly to the timecode entry window above the timeline from the audio editor panel, and there is added support for high-frame-rate timecodes. Timecode entry now supports values relative to the current CTI location, so the playhead can move along the timeline relative to its position rather than to a set timecode.

Support has also been added so the colon key can be used in place of the user typing 00. Master spill on console faders now lets users spill out all the tracks to a bus fader for quick adjustments in the mix. There’s also more precision with rotary controls on the panel and when using a mouse with a modifier key. Users can also change the layout and select either icon or text-only labels on the Fairlight editor. Legacy Fairlight users can now use the traditional — and perhaps more familiar — Fairlight layout. Moving around the timeline is even quicker with added support for “media left” and “media right” selection keys to jump the playhead forward and back.

This update also improves editing in Resolve. Loading and switching timelines on the edit page is now faster, with improved performance when working with a large number of audio tracks. Compound clips can now be made from in and out points so that editors can be more selective about which media they want to see directly in the edit page. There is also support for previewing timeline audio when performing live overwrites of video-only edits. Now when trimming, the duration will reflect the clip duration as users actively trim, so they can set a specific clip length. There is also support for a change transition duration dialog.

The media pool now includes metadata support for audio files with up to 24 embedded channels. Users can also duplicate clips and timelines into the same bin using copy and paste commands. The primary DaVinci Resolve screen can now run as a window when dual-screen mode is enabled. Smart filters now let users sort media based on metadata fields, including keywords and people tags, so users can find the clips they need faster.

Amazon’s The Expanse Season 4 gets HDR finish

The fourth season of the sci-fi series The Expanse, streaming via Amazon Prime Video, was finished in HDR for the first time. Deluxe Toronto handled end-to-end post services, including online editorial, sound remixing and color grading. The series was shot on ARRI Alexa Minis.

In preparation for production, cinematographer Jeremy Benning, CSC, shot anamorphic test footage at a quarry that would serve as the filming stand-in for the season’s new alien planet, Ilus. Deluxe Toronto senior colorist Joanne Rourke then worked with Benning, VFX supervisor Bret Culp, showrunner Naren Shankar and series regular Breck Eisner to develop looks that would convey the location’s uninviting and forlorn nature, keeping the overall look desaturated and removing color from the vegetation. Further distinguishing Ilus from other environments, production chose to display scenes on or above Ilus in a 2.39 aspect ratio, while those featuring Earth and Mars remained in a 16:9 format.

“Moving into HDR for Season 4 of our show was something Naren and I have wanted to do for a couple of years,” says Benning. “We did test HDR grading a couple seasons ago with Joanne at Deluxe, but it was not mandated by the broadcaster at the time, so we didn’t move forward. But Naren and I were very excited by those tests and hoped that one day we would go HDR. With Amazon as our new home [after airing on Syfy], HDR was part of their delivery spec, so those tests we had done previously had prepared us for how to think in HDR.

“Watching Season 4 come to life with such new depth, range and the dimension that HDR provides was like seeing our world with new eyes,” continues Benning. “It became even more immersive. I am very much looking forward to doing Season 5, which we are shooting now, in HDR with Joanne.”

Rourke, who has worked on every season of The Expanse, explains, “Jeremy likes to set scene looks on set so everyone becomes married to the look throughout editorial. He is fastidious about sending stills each week, and the intended directive of each scene is clear long before it reaches my suite. This was our first foray into HDR with this show, which was exciting, as it is well suited for the format. Getting that extra bit of detail in the highlights made such a huge visual impact overall. It allowed us to see the comm units, monitors, and plumes on spaceships as intended by the VFX department and accentuate the hologram games.”

After making adjustments and ensuring initial footage was even, Rourke then refined the image by lifting faces and story points and incorporating VFX. This was done with input provided by producer Lewin Webb; Benning; cinematographer Ray Dumas, CSC; Culp or VFX supervisor Robert Crowther.

To manage the show’s high volume of VFX shots, Rourke relied on Deluxe Toronto senior online editor Motassem Younes and assistant editor James Yazbeck to keep everything in meticulous order. (For that they used the Grass Valley Rio online editing and finishing system.) The pair’s work was also essential to Deluxe Toronto re-recording mixers Steve Foster and Kirk Lynds, who have both worked on The Expanse since Season 2. Once ready, scenes were sent in HDR via Streambox to Shankar for review at Alcon Entertainment in Los Angeles.

“Much of the science behind The Expanse is quite accurate thanks to Naren, and that attention to detail makes the show a lot of fun to work on and more engaging for fans,” notes Foster. “Ilus is a bit like the wild west, so the technology of its settlers is partially reflected in communication transmissions. Their comms have a dirty quality, whereas the ship comms are cleaner-sounding and more closely emulate NASA transmissions.”

Adds Lynds, “One of my big challenges for this season was figuring out how to make Ilus seem habitable and sonically interesting without familiar sounds like rustling trees or bird and insect noises. There are also a lot of amazing VFX moments, and we wanted to make sure the sound, visuals and score always came together in a way that was balanced and hit the right emotions story-wise.”

Foster and Lynds worked side by side on the season’s 5.1 surround mix, with Foster focusing on dialogue and music and Lynds on sound effects and design elements. When each had completed his respective passes using Avid Pro Tools workstations, they came together for the final mix, spending time on fine strokes, ensuring the dialogue was clear, and making adjustments as VFX shots were dropped in. Final mix playbacks were streamed to Deluxe’s Hollywood facility, where Naren could hear adjustments completed in real time.

In addition to color finishing Season 4 in HDR, Rourke also remastered the three previous seasons of The Expanse in HDR, using her work on Season 4 as a guide and finishing with Blackmagic DaVinci Resolve 15. Throughout the process, she was mindful to pull out additional detail in highlights without altering the original grade.

“I felt a great responsibility to be faithful to the show for the creators and its fans,” concludes Rourke. “I was excited to revisit the episodes and could appreciate the wonderful performances and visuals all over again.”

London’s Molinare launches new ADR suite

Molinare has officially opened a new ADR suite in its Soho studio in anticipation of increased ADR output and to complement last month’s CAS award-winning ADR work on Fleabag. Other recent ADR credits for the company include Good Omens, The Capture and Strike Back. Molinare sister company Hackenbacker also picked up some award love with a BAFTA TV Craft award and an AMPS award for Killing Eve.

Molinare and Hackenbacker’s audio setup includes nine mixing theaters, three of which have Dolby 5.1/7.1 Theatrical or Commercials & Trailers Certification, and one has full Dolby Atmos home entertainment mix capability.

Molinare works on high-end TV dramas, feature films, feature documentaries and TV reality programming. Recent audio credits include BBC One’s Dracula, The War of the Worlds from Mammoth Screen and Worzel Gummidge. Hackenbacker has recently worked on HBO’s Avenue 5 for returning director Armando Iannucci and Carnival Film’s Downton Abbey and has contributed to the latest season of Peaky Blinders.

Behind the Title: Harbor sound editor/mixer Tony Volante

“As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.”

Name: Tony Volante

Company: Harbor

Can you describe what Harbor does?
Harbor was founded in 2012 to serve the feature film, episodic and advertising industries. Harbor brings together production and post production under one roof — what we like to call “a unified process allowing for total creative control.”

Since then, Harbor has grown into a global company with locations in New York, Los Angeles and London. Harbor hones every detail throughout the moving-image-making process: live-action, dailies, creative and offline editorial, design, animation, visual effects, CG, sound and picture finishing.

What’s your job title?
Supervising Sound Editor/Re-Recording Mixer

What does that entail?
I supervise the sound editorial crew for motion pictures and TV series along with being the re-recording mixer on many of my projects. I put together the appropriate crew and schedule along with helping to finalize a budget through the bidding process. As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.

What would surprise people the most about what falls under that title?
How almost all the sound that someone hears in a movie has been replaced by a sound editor.

What’s your favorite part of the job?
Creatively collaborating with co-workers and hearing it all come together in the final mix.

What is your most productive time of day?
Whenever I can turn off my emails and can concentrate on mixing.

If you didn’t have this job, what would you be doing instead?
Fishing!

When did you know this would be your path?
I played drums in a rock band and got interested in sound at around 18 years old. I was always interested in the “sound” of an album along with the musicality. I found myself buying records based on who had produced and engineered them.

Can you name some recent projects?
Fosse/Verdon (FX) and Boys State, which just won the Grand Jury Prize at Sundance.

How has the industry changed since you began working?
Technology has improved workflows immensely and has helped us with the creative process. It has also opened up the door to accelerating schedules to the point of sacrificing artistic expression and detail.

Name three pieces of technology you can’t live without
Avid Pro Tools, my iPhone and my car’s navigation system.

How do you de-stress from it all?
I stand in the middle of a flowing stream fishing with my fly rod. If I catch something that’s a bonus!

Talking with 1917’s Oscar-nominated sound editing team

By Patrick Birk

Sam Mendes’ 1917 tells the harrowing story of Lance Corporals Will Schofield and Tom Blake, following the two young British soldiers on their perilous trek across no man’s land to deliver lifesaving orders to the Second Battalion of the Devonshire Regiment.

Oliver Tarney

The story is based on accounts of World War I by the director’s grandfather, Alfred Mendes. The production went to great lengths to create an immersive experience, placing the viewer alongside the protagonists in a painstakingly recreated world, woven together seamlessly, with no obvious cuts. The film’s sound department had to rise to the challenge of bringing this rarely portrayed sonic world to life.

We checked in with supervising sound editor Oliver Tarney and ADR/dialogue supervisor Rachael Tate, who worked out of London’s Twickenham Studios. Both Tarney and Tate are Oscar-nominated in the Sound Editing category. Their work was instrumental in transporting audiences to a largely forgotten time, helping to further humanize the monochrome faces of the trenches. I know that I will keep their techniques — from worldizing to recording more ambient Foley — in mind on the next project I work on.

Rachael Tate

A lot of the film is made up of quiet, intimate moments punctuated by extremely traumatic events. How did you decide on the most key sounds for those quiet moments?
Oliver Tarney: When Sam described how it was going to be filmed, it was expected that people would comment on how it was made from a technical perspective. But for Sam, it’s a story about the friendship between these two men and the courage and sacrifice that they show. Because of this, it was important to have those quieter moments when you aren’t just engaged in full-tilt action the whole time.

The other factor is that the film had no edits — or certainly no obvious edits (which actually meant many edits) — and was incredibly well-rehearsed. It would have been a dangerous thing to have had everything playing aggressively the whole way through. I think it would have been very fatiguing for the audience to watch something like that.

Rachael Tate: Also, you can’t rely on a cut in the normal way to inform pace and energy, so you are using things like music and sound to sort of ebb and flow the energy levels. So after the plane crash, for example, you’ll notice it goes very quiet, and also with the mine collapse, there’s a huge section of very little sound, and that’s on purpose so your ears can reacclimatize.

Absolutely, and I feel like that’s a good way to go — not to oversaturate the audience with the extreme end of the sound design. In other interviews, you said that you didn’t want it to seem overly processed.
Tarney: Well, we didn’t want the weapons to sound heroic in any way. We didn’t want it to seem like they were enjoying what they were doing. It’s very realistic; it’s brutal and harsh. Certainly, Schofield does shoot at people, but it’s out of necessity rather than enjoying his role there. In terms of dynamics, we broke the film up into a series of arcs, and we worked out that some would be five minutes, some would be nine minutes and so on.

In terms of the guns, we went more naturalistic in our recordings. We wanted the audience to feel everything from their perspective — that’s what Sam wanted with the entire film. Rather than having very direct recordings, we split our energies between that and very ambient recordings in natural spaces to make it feel more realistic. The distance that enemy fire was coming from is much more realistic than you would normally play in a film, and the same goes for the biplane recordings. We had microphones all across airfields to get that lovely phase-y kind of sound. For the dogfight with the planes, we sold the fact that you’re watching Blake and Schofield watching the dogfight rather than being drawn directly to the dogfight. I guess it was trying to mirror the visual, which would stick with the two leads.

Tate: We did the same with the crowd. We tried to keep it more realistic by using half actual territorial army guys, along with voice actors, rather than just being a crowdy-sounding crowd. When we put that into the mix, we also chose which bits to focus on — Sam described it as wanting it to be like a vignette, like an old photo. You have the brown edging that fades away in the corners. He wanted you to zoom in on them so much that the stuff around them is there, but at the level they would hear it. So, if there’s a crowd on the screen further back from them, in reality you wouldn’t really hear it. In most films you put something in everyone’s mouth, but we kept it pared right back so that you’re just listening to their voices and their breaths. This is similar to how it was done with the guns and effects.

You said you weren’t going for any Hollywood-type effects, but I did notice that there are some psychoacoustic cues, like when a bomb goes off in the bunker, and I think a tinnitus-type effect.
Tarney: There are a few areas where you have to go with a more conventional film language. When the plane’s very close — on the bridge perhaps — once he’s being fired upon, we start going into something that’s a little more conventional, and then we settle back into him. It was that thing that Sam mentioned, which was subjectivity versus objectivity; you can flip between them a little bit, otherwise it becomes too linear.

Tate: It needed to pack a punch.

Foley plays a massive part in this production. Assuming you used period weaponry and vehicles?
Tarney: Sam was so passionate about this project. When you visited the sets, the detail was just beautiful. They set the bar in terms of what we had to achieve realism-wise. We had real World War I rifles and machine guns, both British and German, and biplanes. We also did wild track Foley at the first trench and the last trench: the muddy trench and then the chalk one at the end.

Tate: We even put Blakeys on the boots.

Tarney: Yes, we bought various boots with different hobnails and metal tips.

That’s what a Blakey is?
Tate: The metal things that they put in the bottom of their shoes so that they didn’t slip around.

Tarney: And we went over the various surfaces and found which worked the best. Some were real hobnail boots, and some had metal stuck into them. We still wanted each character to have a certain personality; you don’t want everything sounding the same. We also recorded them without the nails, so when we were in a quieter part of the film, it was more like a normal boot. If you’d had that clang, clang, clang all the way through the film…

Tate: It would throw your attention away from what they were saying.

Tarney: With everything we did on the Foley, it was important to keep focus on them the whole time. We would work in layers, and as we would build up to one of the bigger events, we’d start introducing the heavier, more detailed Foley and take away the more diffuse, mellow Foley.

You only hear webbing and that kind of stuff at certain times because it would be too annoying. We would start introducing that as they went into more dangerous areas. You want them to feel conspicuous, too — when they’re in no man’s land, you want the audience to think, “Wow, there are two guys, alone, with absolutely no idea what’s out there. Is there a sniper? What’s the danger?” So once you start building up that tension, you make them a little bit louder again, so you’re aware they are a target.

How much ADR did the film require? I’m sure there was a lot of crew noise in the background.
Tate: Yes, there was a lot of crew noise — there were only two lines of “technical” ADR, which is when a line needs to be redone because the original could not be used/cleaned sufficiently. My priority was to try and keep as much production as possible. Because we started a couple of weeks after shooting started, and as they were piecing it together, it was as if it was locked. It’s not the normal way.

With this, I had the time to go deep and spectrally remove all the crew feet from the mics because they had low-end thuds on their clip mics, which couldn’t be avoided. The recordist, Stuart Wilson, did a great job, giving me a few options with the clip mics, and he was always trying to get a boom in wherever he could.

He had multiple lavaliers on the actors?
Tate: Yes, he had up to three on both those guys most of the time, and we went with the one on their helmets. It was like a mini boom. But, occasionally, they would get wind on them and stuff like that. That’s when I used iZotope RX 7. It was great having the time to do it. Ordinarily people might say, “Oh no, let’s ADR all the breaths there,” but I could get the breaths out. When you hear them breathing, that’s what they were doing at the time. There’s so much performance in them, I would hate to get them standing in a studio in London, you know, in jeans, trying to recreate that feeling.
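To make the spectral cleanup Tate describes a little more concrete, here is a minimal Python sketch of the general idea: attenuating low-frequency thuds in a clip-mic recording with STFT gating. It assumes a mono WAV file and arbitrary thresholds, and it is not the 1917 workflow or iZotope RX’s algorithm; dedicated tools do this far more intelligently.

    # Rough sketch only: gate low-frequency thuds in a dialogue clip.
    # Assumes a mono 16-bit WAV; file name and thresholds are hypothetical.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import stft, istft

    rate, audio = wavfile.read("clip.wav")
    audio = audio.astype(np.float64)

    f, t, Z = stft(audio, fs=rate, nperseg=2048)

    low = f < 150                                   # only treat bins below ~150 Hz
    Zlow = Z[low]
    floor = np.percentile(np.abs(Zlow), 20, axis=1, keepdims=True)  # per-bin noise floor
    thuds = np.abs(Zlow) > 6 * floor                # flag frames that spike well above it
    Zlow[thuds] *= 0.1                              # roughly 20 dB reduction on flagged bins
    Z[low] = Zlow

    _, cleaned = istft(Z, fs=rate, nperseg=2048)
    wavfile.write("clip_cleaned.wav", rate, np.clip(cleaned, -32768, 32767).astype(np.int16))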

So even if there’s slight artifacting, the littlest bit, you’d still go with that over ADR?
Tate: Absolutely. I would hope there’s not too much there though.

Tarney: Film editor Lee Smith and Sam have such a great working relationship; they really were on the same page putting this thing together. We had a big decision to make early on: Do we risk being really progressive and organize Foley recording sessions whilst they were still filming? Because, if everything was going according to plan, they were going to be really hungry for sound since there was no cutting once they had chosen the takes. If it didn’t go to plan, then we’d be forever swapping out seven-minute takes, which would be a nightmare to redo. We took a gamble and budgeted to spend the resources front heavy, and it worked out.

Tate: Lee Smith used to be a sound guy, which didn’t hurt.

I saw how detailed they were with the planning. The model of the town for figuring out the trajectory of the flare for lighting, for example.
Tate: They also mapped out the trenches so they were long enough to cover the amount of dialogue the actors were going to say — so the trenches went on for 500 yards. Before that, they were on theater stages with cardboard boxes to represent trenches, walking through them again and again. Everything was very well-planned.

Apart from dialogue and breaths, were there any pleasant surprises from the production audio that you were able to use in the final cut?
Tate: In the woods, toward the end of the film, Schofield stumbles out of the river and hears singing, and the singing that you hear is the guy doing it live. That’s the take. We didn’t get him in to sing and then put it on; that’s just his clip mic, heavily affected. We actually took his recording out into the New Forest, which is south of London.

A worldizing-type technique?
Tate: Yes, we found a remote part, and we played it and recorded it from different distances, and we had that woven against the original with a few plugins on it for the reverbs.

Tarney: We don’t know if Schofield is concussed and if he’s hallucinating. So we really wanted it to feel sort of ethereal, sort of wafting in and out on the wind — is he actually hearing this or not?

Tate: Yeah, we played the first few lines out of sequence, so you can’t really catch if there’s a melody. Just little bits on the breeze so that you’re not even quite sure what you’re hearing at that point, and it gradually comes to a more normal-sounding tune.

Tarney: Basically, that’s the thing with the whole film; things are revealed to the audience as they’re revealed to the lead characters.

Tate: There are no establishing shots.

Were there any elements of the sound design you wouldn’t expect to be in there that worked for one reason or another?
Tarney: No, there’s nothing… we were pretty accurate. Even the first thing you hear in the film — the backgrounds that were recorded in April.

Tate: In the field.

Tarney: Rachael and I went to Ypres in Belgium to visit the World War I museum and immerse ourselves in that world a little bit.

Tate: We didn’t really know that much about World War I. It wasn’t taught in my school, so I really didn’t know anything before I started this; we needed to educate ourselves.

Can you talk about the loop groups and dialing down to the finest details in terms of the vocabulary used?
Tate: Oh, God, I’ve got so many books, and we got military guys for that sort of flat way they operate. You can’t really explain that fresh to a voice actor and get them to do it properly. But the voice actors helped those guys perform and get out of their shells, and the military guys helped the voice actors in showing them how it’s done.

I gave them all many sheets of key words they could use, or conversation starters, so that they could improvise but stay on the right track in terms of content. Things like slang, poems from a cheap newspaper that was handed out to the soldiers. There was an officer’s manual, so I could tell them the right equipment and stuff. We didn’t want to get anything wrong.

That reminds me of this series of color photographs taken in the early 1900s in Russia. Automatically, it brings you so much closer to life at that point in time. Do you feel like you were able to achieve that via the sound design of this film?
Tarney: I think the whole project did that. When you’ve watched a film every day for six months, day in and day out, you can’t help but think about that era more, and it’s slightly embarrassing that it’s one generation past your grandparents.

How much more worldizing did you do, apart from the nice moment with the song?
Tarney: The Foley that you hear in the trench at the beginning and in the trench at the end is a combination between worldizing and sound designer Mike Fentum’s work. We both went down about three weeks before we started because Stuart Wilson gave us a heads up that they were wrapping at that location, so we spoke to the producer, and he gave us access.

So, in terms of worldizing, it’s not quite worldizing in the conventional sense of taking a recording and then playing it in a space. We actually went to the space and recorded the feet in that space, and the Foley supervisor Hugo Adams went to Salisbury Plain (the chalk trench at the end), and those were the first recordings that we edited and gave to Lee Smith. And then, we would get the two Foley artists that we had — Andrea King and Sue Harding — to top that with a performed pass against a screen. The whole film is layered between real recordings and studio Foley, and it’s the blend of natural presence and the performed studio Foley, with all the nuance and detail that you get from that.

Tate: Similarly, there’s the crowd that we recorded out on a field in the back lot of Shepperton with a 5.0 array; we did as much as we could without a screen, with them just acting and going through the motions. We had an authentic World War I stretcher, which we used with hilarious consequences. We got them to run up and down carrying their friends on stretchers, passing enormous tables to each other and so on, so that we had the energy of it. There is something about recording outside and that sort of natural slap that you get off the buildings. It was embedded with production quite seamlessly really, and you can’t really get the same from a studio. We had to do the odd individual line in there, but most of it was done out in a field.

When need be, were you using things like convolution reverbs, such as Audio Ease Altiverb, in the mix?
Tarney: Absolutely. As good as the recordings were, it’s only when you put it against picture that you really understand what it is you need to achieve. So we would definitely augment with a lot — Altiverb is a favorite. Re-recording mixer Mark Taylor and I, we would use that a lot to augment and just change perspective a little bit more.
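Altiverb is a convolution reverb: it places a dry sound in a space by convolving it with an impulse response recorded in that space. The sketch below shows the underlying operation in Python, with hypothetical mono files and an arbitrary dry/wet balance; it is the general technique, not Altiverb itself.

    # Convolution reverb in its simplest form: dry signal convolved with a
    # room impulse response, then blended. All file names are hypothetical.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    rate, dry = wavfile.read("foley_dry.wav")       # dry mono Foley recording
    rate_ir, ir = wavfile.read("trench_ir.wav")     # impulse response of the space
    assert rate == rate_ir, "resample so the IR matches the dry recording"

    dry = dry.astype(np.float64)
    ir = ir.astype(np.float64)
    ir /= np.max(np.abs(ir))                        # normalize the IR

    wet = fftconvolve(dry, ir)                      # reverberant version of the signal
    dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
    mix = 0.7 * dry_padded + 0.3 * wet              # simple dry/wet blend
    mix /= np.max(np.abs(mix))                      # keep the result out of clipping

    wavfile.write("foley_wet.wav", rate, (mix * 32767).astype(np.int16))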

Can you talk about the Atmos mix and what it brought to the film?
Tarney: I’ve worked on many films with Atmos, and it’s a great tool for us. Sam’s very performance-orientated and would like things to be more screen-focused. The minute you have to turn around, you’ve lost that connection with the lead characters. So, in general, we kept things a little more front-loaded than we might have done with another director, but I really liked the results. It’s actually all the more shocking when you hear the biplane going overhead when they’re in no man’s land.

Sam wanted to know all the way through, “Can I hear it in 5.1, 7.1 and Atmos?” We’d make sure that in the three mixes — other than the obvious — we had another plane coming over from behind. There’s not a wild difference in Atmos. The low end is nicer, and the discrete surrounds play really well, but it’s not a showy kind of mix in that sense. That would not have been true to everything we were trying to achieve, which was something real.

So Sam Mendes knows sound?
Tarney: He’s incredibly hungry to understand everything, in the best way possible. He’s very good at articulating what he wants and makes it his business to understand everything. He was fantastic. We would play him a section in 5.1, 7.1 and Atmos, and he would describe what he liked and disliked about each format, and we would then try to make each format have the same value as the other ones.


Patrick Birk is a musician and sound engineer at Silver Sound, a boutique sound house based in New York City.

Editor David Cea joins Chicago’s Optimus  

Chicago-based production and post house Optimus has added editor David Cea to its roster. With 15 years of experience in New York and Chicago, Cea brings a varied portfolio of commercial editing experience to Optimus.

Cea has cut spots for brands such as Bank of America, Chevrolet, Exxon, Jeep, Hallmark, McDonald’s, Microsoft and Target. He has partnered with many agencies, including BBDO, Commonwealth, DDB, Digitas, Hill Holliday, Leo Burnett, Mother and Saatchi & Saatchi.

“I grew up watching movies with my dad and knew I wanted to be a part of that magical process in some way,” explains Cea. “The combination of Goodfellas and Monty Python gave me all the fuel I needed to start my film journey. It wasn’t until I took an editing class in college that I discovered the part of filmmaking I wanted to pursue. The editor is the one who gets to shape the final product and bring out the true soul of the footage.”

After studying film at Long Island’s Hofstra University, Cea met Optimus editor and partner Angelo Valencia while working as his assistant at Whitehouse New York in 2005. Cea then moved on to hone his craft further at Cosmo Street in New York. Chicago became home for him in 2013 as he spent three years at Whitehouse. After heading back east for a couple of years, he returned to Chicago to put down roots.

While Avid Media Composer is Cea’s go-to choice for editing, he is also proficient in Adobe Premiere.

CAS Awards recognize GOT, Fleabag, Ford v Ferrari, more

The CAS Awards were held this past weekend, with the sound mixing team from Ford v Ferrari — Steven A. Morrow CAS, Paul Massey CAS, David Giammarco CAS, Tyson Lozensky, David Betancourt and Richard Duarte — taking home the Cinema Audio Society Award for Outstanding Sound Mixing Motion Picture – Live Action.

Game of Thrones – The Bells

Top honors for Motion Picture – Animated went to Toy Story 4 and the sound mixing team of Doc Kane CAS, Vince Caro CAS, Michael Semanick CAS, Nathan Nance, David Boucher and Scott Curtis. The CAS Award for Outstanding Sound Mixing Motion Picture – Documentary went to Making Waves: The Art of Cinematic Sound and the team of David J. Turner, Tom Myers, Dan Blanck and Frank Rinella.

Held in the Wilshire Grand Ballroom of the InterContinental Los Angeles Downtown, the awards were presented in seven categories for Outstanding Sound Mixing Motion Picture and Television and two Outstanding Product Awards. The evening saw CAS president Karol Urban pay tribute to recently retired CAS executive board member Peter R. Damski for his years of service to the organization. The contributions of re-recording mixer Tom Fleischman, CAS, were recognized as he received the CAS Career Achievement Award. Presenter Gary Bourgeois spoke to Fleischman’s commitment to excellence demonstrated in a career that spans over 40 years,  nearly 200 films and collaborations with dozens of notable directors.  

James Mangold

James Mangold received the CAS Filmmaker Award in a presentation that included remarks  by re-recording mixer Paul Massey, CAS, who was joined in the presentation by Harrison Ford. Mangold had even more to celebrate as he watched his sound team take top honors for Outstanding Achievement in Sound Mixing Motion Picture – Live Action. 

Here is the complete list of winners:

MOTION PICTURE – LIVE ACTION

Ford v Ferrari

Ford v Ferrari team

Production Mixer – Steven A. Morrow CAS 

Re-recording Mixer – Paul Massey CAS 

Re-recording Mixer – David Giammarco CAS 

Scoring Mixer – Tyson Lozensky

ADR Mixer – David Betancourt 

Foley Mixer – Richard Duarte

MOTION PICTURE – ANIMATED 

Toy Story 4

Original Dialogue Mixer – Doc Kane CAS

Original Dialogue Mixer – Vince Caro CAS

Re-recording Mixer – Michael Semanick CAS 

Re-recording Mixer – Nathan Nance

Scoring Mixer – David Boucher

Foley Mixer – Scott Curtis

 

MOTION PICTURE – DOCUMENTARY

Making Waves: The Art of Cinematic Sound

Production Mixer – David J. Turner 

Re-recording Mixer – Tom Myers 

Scoring Mixer – Dan Blanck

ADR Mixer – Frank Rinella

 

TELEVISION SERIES – 1 HOUR

Game of Thrones: The Bells

Production Mixer – Ronan Hill CAS 

Production Mixer – Simon Kerr 

Production Mixer – Daniel Crowley 

Re-recording Mixer – Onnalee Blank CAS 

Re-recording Mixer – Mathew Waters CAS 

Foley Mixer – Brett Voss CAS

TELEVISION SERIES – 1/2 HOUR 

TIE

Barry: ronny/lily

Production Mixer – Benjamin A. Patrick CAS 

Re-recording Mixer – Elmo Ponsdomenech CAS 

Re-recording Mixer – Jason “Frenchie” Gaya 

ADR Mixer – Aaron Hasson

Foley Mixer – John Sanacore CAS

 

Fleabag: Episode #2.6

Production Mixer – Christian Bourne 

Re-recording Mixer – David Drake 

ADR Mixer – James Gregory

 

TELEVISION MOVIE or LIMITED SERIES

Chernobyl: 1:23:45

Production Mixer – Vincent Piponnier 

Re-recording Mixer – Stuart Hilliker 

ADR Mixer – Gibran Farrah

Foley Mixer – Philip Clements

 

TELEVISION NON-FICTION, VARIETY or MUSIC SERIES or SPECIALS

David Bowie: Finding Fame

Production Mixer – Sean O’Neil 

Re-recording Mixer – Greg Gettens

 

OUTSTANDING PRODUCT – PRODUCTION

Sound Devices, LLC

Scorpio

 

OUTSTANDING PRODUCT – POST PRODUCTION 

iZotope

Dialogue Match

 

STUDENT RECOGNITION AWARD

Bo Pang

Chapman University

 

Main Image: Presenters Whit Norris, Elisha Cuthbert, Award winners Onnalee Blank, Ronan Hill and Brett Voss at the CAS Awards. (Tyler Curtis/ABImages) 

 

 

Wylie Stateman on Once Upon a Time… in Hollywood‘s Oscar nod for sound

By Beth Marchant

To director Quentin Tarantino, sound and music are primal forces in the creation of his idiosyncratic films. Often using his personal music collection to jumpstart his initial writing process and later to set a film’s tone in the opening credits, Tarantino always gives his images a deep, multi-sensory well to swim in. According to his music supervisor Mary Ramos, his bold use of music is as much a character as each film’s set of quirky protagonists.

Wylie Stateman – Credit: Andrea Resnick

Less showy than those memorable and often nostalgic set-piece songs, the sound design that holds them together is just as critically important to Tarantino’s aesthetic. In Once Upon a Time… in Hollywood it even replaces the traditional composed score. That’s one of many reasons why the film’s supervising sound editor Wylie Stateman, a long-time Tarantino collaborator, relished his latest Oscar-nominated project with the director (he previously received nominations for Django Unchained and Inglourious Basterds and has a lifetime total of nine Oscar nominations).

Before joining team Tarantino, Stateman sound designed some of the most iconic films of the ‘80s and ‘90s, including Tron, Footloose, Ferris Bueller’s Day Off (among 15 films he made with John Hughes), Born on the Fourth of July and Jerry Maguire. He also worked for many years with Oliver Stone, winning a BAFTA for his sound work on JFK. He went on to cofound the Topanga, California-based sound studio Twentyfourseven.

We talked to Stateman about how he interpreted Tarantino’s sound vision for his latest film — about a star having trouble evolving to new roles in Hollywood and his stuntman — revealing just how closely the soundtrack is connected to every camera move and cut.

How does Tarantino’s style as a director influence the way you approach the sound design?
I believe that sound is a very important department within the process of making any film. And so, when I met Quentin many years ago, I was meeting him under the guise that he wanted help and he wanted somebody who could focus their time, experience and attention on this very specific department called sound.

I’ve been very fortunate, especially on Quentin’s films, to also have a great production sound mixer and great re-recording mixers. We have both sides of the process in tremendously skilled and experienced hands. Mark Ulano, our production sound mixer, won an Oscar for Titanic. He knows how to deal with dialogue. He knows how to deal with a complex set, a set where there are a lot of moving parts.

On the other side of that, we have Mike Minkler doing the final re-recording mixing. Mike, who I worked with on JFK, is tremendously skilled with multiple Oscars to his credit. He’s just an amazing creative in terms of re-recording mixing.

The role that I like to play as supervising sound editor and designer is figuring out how to speak to the filmmaker in terms of sound. For this film, we realized we could drive the soundtrack without a composer by using the chosen songs and KHJ radio, selecting bits and pieces from the shows of the infamous DJ “Humble Harve” and from clips of all the other DJs on KHJ radio who really defined 1969 in Los Angeles.

And as the film shows, most people heard them over the car radio in car-centric LA.
The DJs were powerful messengers of popular culture. They were powerful messengers of what was happening in the minds and in the streets and in popular culture of that time. That was Quentin’s idea. When he wrote the script, he had written into it all of the KHJ radio segments, and he listens a lot, and he’s a real student of the filmmaking process and a real master.

On the student side, he’s constantly learning and he’s constantly looking and he’s constantly listening. On the master side, he then applies that to the characters that he wants to develop and those situations that he’s looking to be at the base and basis of his story. So, basically, Quentin comes to me for a better understanding of his intention in terms of sound, and he has a tremendous understanding to begin with. That’s what makes it so exciting.

When talking to Quentin and his editor Fred Raskin, who are both really deeply knowledgeable filmmakers, it can be quite challenging to stay in front of them and/or to chase behind them. It’s usually a combination of the two. But Quentin is a very generous collaborator, meaning he knows what he wants, but then he’s able to stop, listen and evaluate other ideas.

How did you find all of the clips we hear on the various radios?
Quentin went through hundreds of hours of archival material. And he has a tremendous working knowledge of music to begin with, and he’s also a real student of that period.

Can you talk about how you approached the other elements of specific, Tarantino-esque sound, like Cliff crunching on a celery stick in that bar scene?
Quentin’s movies are bold in the sense of some of the subject matter that he tackles, but they’re highly detailed and also very much inside his actors’ heads. So when you talk about crunching on a piece of celery, I interpret everything that Quentin imparts on his characters as having some kind of potential vocabulary in terms of sound. And that vocabulary… it applies to the camera. If the camera hides behind something and then comes out and reveals something or if the camera’s looking at a big, long shot — like Cliff Booth’s walk to George Spahn’s house down that open area in the Spahn Ranch — every one of those moves has a potential sound component and every editorial cut could have a vocabulary of sound to accompany it.

We also use those [combinations] to alter time, whether it’s to jump forward or jump back or just crash in. He does a lot of very explosive editing moves and all of that has an audio vocabulary. It’s been quite interesting to work with a filmmaker that sees picture and sound as sort of a romance and a dance. And the sound could lead the picture, or it could lag the picture. The sound can establish a mood, or it can justify a mood or an action. So it’s this constant push-pull.

Robert Bresson, the father of the French New Wave, basically said, “When the ear leads the eye, the eye becomes impatient. When the eye leads the ear, the ear becomes impatient. Use those impatiences.” So what I’m saying is that sound and pictures are this wonderful choreographed dance. Stimulate peoples’ ears and their eye is looking for something; stimulate their eyes and their ears are looking for something, and using those together is a really intimate and very powerful tool that Quentin, I think, is a master at.

How does the sound design help define the characters of Rick Dalton (Leonardo DiCaprio) and Cliff Booth (Brad Pitt)?
This is essentially a buddy movie. Rick Dalton is the insecure actor who’s watching a certain period — when they had great success and comfort — transition into a new period. You’re going from the John Wayne/True Grit way of making movies to Butch Cassidy and the Sundance Kid or Easy Rider, and Rick is not really that comfortable making this transition. His character is full of that kind of anxiety.

The Cliff Booth character is a very internally disturbed character. He’s an unsuccessful crafts/below-the-line person who’s got personal issues and is kind of typical of a character that’s pretty well-known in the filmmaking process. Rick Dalton’s anxious world is about heightened senses. But when he forgets his line during the bar scene in the Lancer set, the world doesn’t become noisy. The world becomes quiet. We go to silence because that’s what’s inside his head. He can’t remember the line and it’s completely silent. But you could play that same scene 180 degrees in the opposite direction and make him confused in a world of noise.

The year 1969 was very important in the history of filmmaking, and that’s another key to Rick’s and Cliff’s characters. If you look at 1969, it was the turning point in Hollywood when indie filmmaking was introduced. It was also the end of a great era of traditional studio fare and traditional acting, and was more defined by the looser, run-and-gun style of Easy Rider. In a way, the Peter Fonda/Dennis Hopper dynamic of Hopper’s film is somewhat similar to that of Rick Dalton and Cliff Booth.

I saw Easy Rider again recently and the ending hit me like a ton of bricks. The cultural panic, and the violence it invokes, is so palpable because you realize that clash of cultures never really went away; it’s still with us all these years later. Tarantino definitely taps into that tension in this film.
It’s funny that you say that because my wife and I went to the Cannes Film Festival with the team, and they were playing Easy Rider on the beach on a giant screen with a thousand seats in the sand. We walked up on it and we stood there for literally an hour and a half transfixed, just watching it. I hadn’t seen it in years.

What a great use of music and location photography! And then, of course, the story and the ending; it’s like, wow. It’s such a huge departure from True Grit and that generation that made that film. That’s what I love about Quentin, because he plays off the tension between those generations in so many ways in the film. We start out with Al Pacino, and they’re drinking whiskey sours, and then we go all the way through the gamut of what 1969 really felt like to the counterculture.

Was there anything unusual that you did in the edit to manipulate sound to make a scene work?
Sound design is a real design-level responsibility. We invent sound. We go to the libraries and we go to great lengths to record things in nature or wherever we can find it. In this case, we recorded all the cars. We apply a very methodical approach to sound.

Sound design, for me, is the art of shaping noise to suit the picture and to enhance the story. Great sound lives somewhere between the science of audio and the subjectivity of storytelling. The science part is really well-known, and it’s been perfected over many, many years with lots of talented artists and artisans. But the story part is what excites me, and it’s what excites Quentin. So it becomes what we don’t do that’s so interesting, like using silence instead of noise or creating a soundtrack without a composer. I don’t think you miss having score music. When we couldn’t figure out a song, we made sound design elements. So, yeah, we would make tension sounds.

Shaping noise is not something I could explain to you with an “an eye of newt plus a tail of yak” secret recipe. It’s a feeling. It’s just working with audio, shaping sound effects and noise to become imperceptibly conjoined with music. You can’t tell where the sound design is beginning and ending and where it transfers into more traditional song or music. That is the beauty of Quentin’s films. In terms of sound, the audio has shapes that are very musical.

His deep-cut versions of songs are so interesting, too. Using “California Dreamin’” by the Mamas and the Papas would have been way too obvious, so he uses a José Feliciano cover of it and puts the actual Mamas and the Papas into the film as walk-on characters.
Yeah. I love his choice of music. From Sharon and Roman listening to “Hush” by Deep Purple in the convertible, their hair flying, to going straight into “Son of a Lovin’ Man” after they arrive at the Playboy Mansion. Talk about 1969 and setting it off! It’s not from the San Francisco catalog; it’s just this lovely way that Quentin imagines time and can relate to it as sound and music. The world as it relates to sound is very different than the world of imagery. And the type of director that Quentin is, he’s a writer, he’s a director, and he’s a producer, so he really understands the coalescing of these disciplines.

You haven’t done a lot of interviews in the past. Why not?
I don’t do what I do to call attention to either myself or my work. Over the first 35 years of my career, there’s very little record of any conversation that I had outside of my team and directly with my filmmakers. But at this point in life, when we’re at the cusp of this huge streaming technology shift and everything is becoming more politically sensitive, with deep fakes in both image and audio, I think it’s time sound should have somebody step up and point out, “Hey, we are invisible. We are transitory.” Meaning, when you stop the electricity going to the speakers, the sound disappears, which is kind of an amazing thing. You can pause the picture and you can study it. Sound only exists in real time. It’s just the vibration in the air.

And to be clear, I don’t see motion picture sound as an art form. I see it, rather, as a form of art and it takes a long time to become a sculptor in sound who can work in a very simple style. After all, it’s the simplest lines that just blow your mind!

What blew your mind about this film, either while you worked on it or when you saw the finished product?
I really love the whole look of the film. I love the costumes, and I have great respect for the team that Quentin consistently pulls together. When I work on Quentin’s films, I never turn around and find somebody that doesn’t have a great idea or deep experience in their craft. Everywhere you turn, you bump into extraordinary talent.

Dakota Fanning’s scene at the Spahn Ranch… I mean, wow! Knocks my socks off. That’s really great stuff. It’s a remarkable thing to work with a director who has that kind of love for filmmaking and that allows for really talented people to also get in the sandbox and play.


Beth Marchant is a veteran journalist focused on the production and post community and contributes to “The Envelope” section of the Los Angeles Times. Follow her on Twitter @bethmarchant.

Behind the Title: Sound Lounge ADR mixer Pat Christensen

This ADR mixer was a musician as a kid and took engineering classes in college, making him perfect for this job.

Name: Pat Christensen

Company: Sound Lounge (@soundloungeny)

What’s your job title?
ADR mixer

What does Sound Lounge do?
Sound Lounge is a New York City-based audio post facility. We provide sound services for TV, commercials, feature films, television series, digital campaigns, games, podcasts and other media. Our services include sound design, editing and mixing; ADR recording and voice casting.

What does your job entail?
As an ADR mixer, I re-record dialogue for film and television. It’s necessary when dialogue can’t be recorded properly on the set, when there are creative reasons to replace it, or when additional dialogue is needed. My stage is set up differently from a standard mix stage, as it includes a voiceover booth for actors.

We also have an ADR stage with a larger recording environment to support groups of talent. The stage also allows us to enhance sound quality and record performances with greater dynamics, high and low. The recording environment is designed to be “dead,” that is without ambient sound. That results in a clean recording so when it gets to the next stage, the mixer can add reverb or other processing to make it fit the environment of the finished soundtrack.

What would people find most surprising about your job?
People who aren’t familiar with ADR are often surprised that it’s possible to make an actor’s voice lip-sync perfectly with the image on screen and be indistinguishable from dialogue recorded on the day.

What’s your favorite part of the job?
Interacting with people — the sound team, the director or the showrunner, and the actors. I enjoy helping directors in guiding the actors and being part of the creative process. I act as a liaison between the technical and creative sides. It’s fun and it’s different every day. There’s never a boring session.

What’s your least favorite?
I don’t know if there is one. I have a great studio and all the tools that I need. I work with good people. I love coming to work every day.

What’s your most productive time of the day?
Whenever I’m booked. It could be 9am. It could be 7am. I do night sessions. When the client needs the service, I am ready to go.

If you didn’t have this job, what would you be doing instead?
In high school, I played bass in a punk rock band. I learned the ins and outs of being a musician while taking classes in engineering. I also took classes in automotive technology. If I’d gone that route, I wouldn’t be working in a muffler shop; I’d be fine-tuning Formula 1 engines.

How early on did you know that sound would be your path?
My mom bought me a four-string Washburn bass for Christmas when I was in the eighth grade, but even then I was drawn to the technical side. I was super interested in learning about audio consoles and other gear and how they were used to record music. Luckily, my high school offered a radio and television class, which I took during my senior year. I fell in love with it from day one.

Silicon Valley

What are some of your recent projects?
I worked on the last season of HBO’s Silicon Valley and the second season of CBS’ God Friended Me. We also did Starz’s Power and the new Andy Samberg movie Palm Springs. There are many more credits on my IMDb page. I try to keep it up-to-date.

Is there a project that you’re most proud of?
Power. We’ve done all seven seasons. It’s been exciting to watch how successful that show has become. It’s also been fun working with the actors and getting to know many of them on a personal level. I enjoy seeing them whenever they come in. They trust me to bridge the gap between the booth and the original performance and deliver something that will be seen, and heard, by millions of people. It’s very fulfilling.

Name three pieces of technology you cannot live without.
A good microphone, a good preamp and good speakers. The speakers in my studio are ADAM A7Xs.

What social media channels do you follow?
Instagram and Facebook.

What do you do to relax?
I play hockey. On weekends, I enjoy getting on the ice, expending energy and playing hard. It’s a lot of fun. I also love spending time with my family.

67th MPSE Golden Reel Winners

By Dayna McCallum

The Motion Picture Sound Editors (MPSE) Golden Reel Awards shared the love among a host of films when handing out awards this past weekend at their 67th annual ceremony.

The feature film winners included Ford v Ferrari for effects/Foley, 1917 for dialogue/ADR, Rocketman for the musical category, Jojo Rabbit for musical underscore, Parasite for foreign-language feature, Toy Story 4 for animated feature, and Echo in the Canyon for feature documentary.

The Golden Reel Awards, recognizing outstanding achievement in sound editing, were presented in 23 categories, including feature films, long-form and short-form television, animation, documentaries, games, special venue and other media.

Academy Award-nominated producer Amy Pascal (Little Women) surprised Marvel’s Victoria Alonso when she presented her with the 2020 MPSE Filmmaker Award (re-recording mixer Kevin O’Connell and supervising sound editor Steven Ticknor were honorary presenters).

The 2020 MPSE Career Achievement Award was presented to Academy Award-winning supervising sound editor Cecelia “Cece” Hall by two-time Academy Award-winning supervising sound editor Stephen H. Flick.

“Business models, formats and distribution are all changing,” said MPSE president-elect Mark Lanza during the ceremony. “Original scripted TV shows have set a record in 2019. There were 532 original shows this year. This number is expected to surge in 2020. Our editors and supervisors are paving the way and making our product and the user experience better every year.”

Here is the complete list of winners:

Outstanding Achievement in Sound Editing – Animation Short Form

3 Below “Tales of Arcadia”

Netflix

Supervising Sound Editor: Otis Van Osten
Sound Designer: James Miller
Dialogue Editors: Jason Oliver, Carlos Sanches
Foley Artists: Aran Tanchum, Vincent Guisetti
Foley Editor: Tommy Sarioglou 

Outstanding Achievement in Sound Editing – Non-Theatrical Animation Long Form

Lego DC Batman: Family Matters

Warner Bros. Home Entertainment

Supervising Sound Editors: Rob McIntyre, D.J. Lynch
Sound Designer: Lawrence Reyes
Sound Effects Editor: Ezra Walker
ADR Editor: George Peters
Foley Editors: Aran Tanchum, Derek Swanson
Foley Artist: Vincent Guisetti

Outstanding Achievement in Sound Editing – Feature Animation

Toy Story 4

Walt Disney Studios Motion Pictures

Supervising Sound Editor: Coya Elliott
Sound Designer: Ren Klyce
Supervising Dialogue Editor: Cheryl Nardi
Sound Effects Editors: Kimberly Patrick, Qianbaihui Yang, Jonathon Stevens
Foley Editors: Thom Brennan, James Spencer
Foley Artists:  John Roesch, MPSE, Shelley Roden, MPSE

Outstanding Achievement in Sound Editing – Non-Theatrical Documentary

Serengeti

Discovery Channel

Supervising Sound Editor: Paul Cowgill
Foley Editor: Peter Davies 
Music Editor: Alessandro Baldessari
Foley Artist: Paul Ackerman

Outstanding Achievement in Sound Editing – Feature Documentary

Echo in the Canyon

Greenwich Entertainment

Sound Designer: Robby Stambler, MPSE
Dialogue Editor:  Sal Ojeda, MPSE

Outstanding Achievement in Sound Editing – Computer Cinematic

Call of Duty: Modern Warfare (2019)

Activision Blizzard
Audio Director: Stephen Miller
Supervising Sound Editor: Dave Rowe
Supervising Sound Designers: Charles Deenen, MPSE, Csaba Wagner
Supervising Music Editor:  Peter Scaturro

Lead Music Editor: Ted Kocher
Principal Sound Designer: Stuart Provine
Sound Designers: Bryan Watkins, Mark Ganus, Eddie Pacheco, Darren Blondin
Dialogue Lead: Dave Natale
Dialogue Editors: Chrissy Arya, Michael Krystek
Sound Editors: Braden Parkes, Nick Martin, Tim Walston, MPSE, Brent Burge, Alex Ephraim, MPSE, Samuel Justice, MPSE
Music Editors: Anthony Caruso, Scott Bergstrom, Adam Kallibjian, Ernest Johnson, Tao-Ping Chen, James Zolyak, Sonia Coronado, Nick Mastroianni, Chris Rossetti
Foley Artists: Gary Hecker, MPSE, Rick Owens, MPSE

Outstanding Achievement in Sound Editing – Computer Interactive Game Play
Call of Duty: Modern Warfare (2019)
Infinity Ward
Audio Director: Stephen Miller
Senior Lead Sound Designer: Dave Rowe
Senior Lead Technical Sound Designer: Tim Stasica
Supervising Music Editor: Peter Scaturro
Lead Music Editor: Ted Kocher
Principal Sound Designer: Stuart Provine
Senior Sound Designers: Chris Egert, Doug Prior
Supervising Sound Designers: Charles Deenen, MPSE, Csaba Wagner
Sound Designers: Chris Staples, Eddie Pacheco, MPSE, Darren Blondin, Andy Bayless, Ian Mika, Corina Bello, John Drelick, Mark Ganus
Dialogue Leads: Dave Natale, Bryan Watkins, Adam Boyd, MPSE, Mark Loperfido
Sound Editors: Braden Parkes, Nick Martin, Brent Burge, Tim Walston, Alex Ephraim, Samuel Justice
Dialogue Editors: Michael Krystek, Chrissy Arya, Cesar Marenco
Music Editors: Anthony Caruso, Scott Bergstrom, Adam Kallibjian, Ernest Johnson, Tao-Ping Chen, James Zolyak, Sonia Coronado, Nick Mastroianni, Chris Rossetti

Foley Artists: Gary Hecker, MPSE, Rick Owens, MPSE

Outstanding Achievement in Sound Editing – Non-Theatrical Feature

Togo

Disney+

Supervising Sound Editors: Odin Benitez, MPSE, Todd Toon, MPSE
Sound Designer: Martyn Zub, MPSE
Dialogue Editor: John C. Stuver, MPSE
Sound Effects Editors: Jason King, Adam Kopald, MPSE, Luke Gibleon, Christopher Bonis
ADR Editor: Dave McMoyler
Supervising Music Editor: Peter “Oso” Snell, MPSE
Foley Artists: Mike Horton, Tim McKeown
Supervising Foley Editor: Walter Spencer

Outstanding Achievement in Sound Editing – Special Venue

Vader Immortal: A Star Wars VR Series “Episode 1”

Oculus

Supervising Sound Editors: Kevin Bolen, Paul Stoughton
Sound Designer: Andy Martin
Supervising ADR Editors: Gary Rydstrom, Steve Slanec
Dialogue Editors: Anthony DeFrancesco, Christopher Barnett, MPSE, Benjamin A. Burtt, MPSE
Foley Artists: Shelley Roden, MPSE, Jana Vance

Outstanding Achievement in Sound Editing – Foreign Language Feature

Parasite

Neon

Supervising Sound Editor: Choi Tae Young
Sound Designer: Kang Hye Young
Supervising ADR Editor: Kim Byung In
Sound Effects Editor: Kang Hye Young
Foley Artists: Park Sung Gyun, Lee Chung Gyu
Foley Editor: Shin I Na
 

Outstanding Achievement in Sound Editing – Live Action Under 35:00

Barry “ronny/lily”

HBO

Supervising Sound Editors:  Sean Heissinger, Matthew E. Taylor
Sound Designer:  Rickley W. Dumm, MPSE
Sound Effects Editor: Mark Allen
Dialogue Editors:  John Creed, Harrison Meyle
Music Editor:  Michael Brake
Foley Artists:  Alyson Dee Moore, Chris Moriana 
Foley Editors:  John Sanacore, Clayton Weber

Outstanding Achievement in Sound Editing – Episodic Short Form – Music

Wu Tang: An American Saga “All In Together Now”

Hulu 

Music Editor: Shie Rozow

Outstanding Achievement in Sound Editing – Episodic Short Form – Dialogue/ADR

Modern Love “Take Me as I Am”

Prime Video
Supervising Sound Editor: Lewis Goldstein
Supervising ADR Editor: Gina Alfano, MPSE
Dialogue Editor:  Alfred DeGrand

Outstanding Achievement in Sound Editing – Episodic Short Form – Effects / Foley

The Mandalorian “Chapter One”

Disney+

Supervising Sound Editors: David Acord, Matthew Wood
Sound Effects Editors: Bonnie Wild, Jon Borland, Chris Frazier, Pascal Garneau, Steve Slanec
Foley Editor: Richard Gould
Foley Artists: Ronni Brown, Jana Vance

Outstanding Achievement in Sound Editing – Student Film (Verna Fields Award)

Heatwave

National Film and Television School

Supervising Sound Editor: Kevin Langhamer

Outstanding Achievement in Sound Editing – Single Presentation

El Camino: A Breaking Bad Movie

Netflix

Supervising Sound Editors: Nick Forshager, Todd Toon, MPSE
Supervising ADR Editor: Kathryn Madsen
Sound Effects Editor: Luke Gibleon
Dialogue Editor: Jane Boegel
Foley Editor: Jeff Cranford
Supervising Music Editor: Blake Bunzel
Music Editor: Jason Tregoe Newman
Foley Artists: Gregg Barbanell, MPSE, Alex Ullrich 

Outstanding Achievement in Sound Editing – Episodic Long Form – Music

Game of Thrones “The Long Night”

HBO 

Music Editor: David Klotz

Outstanding Achievement in Sound Editing – Episodic Long Form – Dialogue/ADR

Chernobyl “Please Remain Calm”

HBO

Supervising Sound Editor: Stefan Henrix
Supervising ADR Editor:  Harry Barnes
Dialogue Editor: Michael Maroussas

Outstanding Achievement in Sound Editing – Episodic Long Form – Effects / Foley

Chernobyl “1:23:45”

HBO

Supervising Sound Editor: Stefan Henrix
Sound Designer: Joe Beal
Foley Editors: Philip Clements, Tom Stewart
Foley Artist:  Anna Wright

Outstanding Achievement in Sound Editing – Feature Motion Picture – Music Underscore

Jojo Rabbit

Fox Searchlight Pictures

Music Editor: Paul Apelgren

Outstanding Achievement in Sound Editing – Feature Motion Picture – Musical

Rocketman

Paramount Pictures

Music Editors: Andy Patterson, Cecile Tournesac

Outstanding Achievement in Sound Editing – Feature Motion Picture – Dialogue/ADR

1917

Universal Pictures

Supervising Sound Editor: Oliver Tarney, MPSE
Dialogue Editor: Rachael Tate, MPSE

Outstanding Achievement in Sound Editing – Feature Motion Picture – Effects / Foley

Ford v Ferrari

Twentieth Century Fox 

Supervising Sound Editor: Donald Sylvester

Sound Designers: Jay Wilkenson, David Giammarco

Sound Effects Editor: Eric Norris, MPSE

Foley Editor: Anna MacKenzie

 Foley Artists: Dan O’Connell, John Cucci, MPSE, Andy Malcolm, Goro Koyama


Main Image Caption: Amy Pascal and Victoria Alonso

 

Skywalker Sound and Cinnafilm create next-gen audio toolset

Iconic audio post studio Skywalker Sound and Cinnafilm, maker of the PixelStrings media conversion technology, are working together on a new audio toolset expected to arrive in the first quarter of 2020.

As the paradigms of theatrical, broadcast and online content begin to converge, the need to properly conform finished programs to specifications suitable for a variety of distribution channels has become more important than ever. To ensure high fidelity is maintained throughout the conversion process, it is important to implement high-quality tools to aid in time-domain, level, spatial and file-format processing for all transformed content intended for various audiences and playout systems.

“PixelStrings represents our body of work in image processing and media conversions. It is simple, scalable and built for the future. But it is not just about image processing; it’s an ecosystem. We recognize success only happens by working with other like-minded technology companies. When Skywalker approached us with their ideas, it was immediate validation of this vision. We plan to put as much enthusiasm and passion into this new sound endeavor as we have in the past with picture — the customers will benefit as they see, and hear, the difference these tools make on the viewer experience,” says Cinnafilm CEO/founder Lance Maurer.

To address this need, Skywalker Sound has created an audio tool set based on proprietary signal processing and orchestration technology. Skywalker Audio Tools will offer an intelligent, automated audio pipeline with features including sample-accurate retiming, loudness and standards analysis and correction, downmixing, channel mapping and segment creation/manipulation — all faster than realtime. These tools will be available exclusively within Cinnafilm’s PixelStrings media conversion platform.
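Skywalker’s tools themselves are proprietary, but one of the listed features, loudness analysis and correction, can be illustrated with open-source components. The sketch below uses the pyloudnorm library to measure integrated loudness per ITU-R BS.1770 and normalize to a broadcast-style target; the file name and the -24 LUFS target are assumptions, not details of the Skywalker/Cinnafilm product.

    # Loudness analysis and correction sketch using soundfile and pyloudnorm.
    import soundfile as sf
    import pyloudnorm as pyln

    data, rate = sf.read("program_mix.wav")         # hypothetical finished mix
    meter = pyln.Meter(rate)                        # ITU-R BS.1770 meter
    loudness = meter.integrated_loudness(data)      # measured in LUFS

    target = -24.0                                  # common broadcast target (assumption)
    normalized = pyln.normalize.loudness(data, loudness, target)
    sf.write("program_mix_norm.wav", normalized, rate)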

Talking work and trends with Wave Studios New York

By Jennifer Walden

The ad industry is highly competitive by nature. Advertisers compete for consumers, ad agencies compete for clients and post houses compete for ad agencies. Now put all that in the dog-eat-dog milieu of New York City, and the market becomes more intimidating.

When you factor in the saturation level of the audio post industry in New York City — where audio facilities are literally stacked on top of each other (occupying different floors of the same building or located just down the hall from each other) — then the odds of a new post sound house succeeding seem dismal. But there’s always a place for those willing to work for it, as Wave Studios’ New York location is proving.

Wave Studios — a multi-national sound company with facilities in London and Amsterdam — opened its doors in NYC a little over a year ago. Co-founder/sound designer/mixer Aaron Reynolds worked on The New York Times “The Truth Is Worth It” ad campaign for Droga5 that earned two Grand Prix awards at the 2019 Cannes Lions International Festival of Creativity, and Reynolds’ sound design on the campaign won three Gold Lions. In addition, Wave Studios was recently named Sound Company of the Year 2019 at Germany’s Ciclope International Festival of Craft.

Here, Reynolds and Wave Studios New York executive producer Vicky Ferraro (who has two decades of experience in advertising and post) talk about what it takes to make it and what agency clients are looking for. They also share details on their creative approach to two standout spots they’ve done this year for Droga5.

How was your first year-plus in NYC? What were some challenges of being the new kid in town?
Vicky Ferraro: I joined Wave to help open the New York City office in May 2018. I had worked at Sound Lounge for 12 years, and I’ve worked on the ad agency side as well, so I’m familiar with the landscape.

One of the big challenges is that New York is quite a saturated market when it comes to audio. There are a lot of great audio places in the city. People have their favorite spots. So our challenges are to forge new relationships and differentiate ourselves from the competition, and figure out how to do that.

Also, the business model has changed quite a bit; a lot of agencies have in-house facilities. I used to work at Hogarth, so I’m quite familiar with how that side of the business works as well. You have a lot of brands that are working in-house with agencies.

So, opening a new spot was a little daunting despite all the success that Wave Studios in London and Amsterdam have had.
Aaron Reynolds: I worked in London, and we always had work from New York clients. We knew friends and people over here. Opening a facility in New York was something we always wanted to do, since 2007. The challenge was to get out there and tell people that we’re here. We were finally coming over from London and forging those relationships with clients we had worked with remotely.

New York has a slightly different work ethic in that they tend to do the sound design with us and then do the mix elsewhere. One challenge was to get across to our clients that we offer both, from start to finish.

Sound design and mixing are one and the same thing. When I’m doing my sound design, I’m thinking about how I want it to sound in the mix. It’s quite unique to do the sound design at one place and then do the mix somewhere else.

What are some trends you’re seeing in the New York City audio post scene? What are your advertising clients looking for?
Reynolds: On the work side, they come here for a creative sound design approach. They don’t want just a bit of sound here and a bit of sound there. They want something to be brought to the job through sound. That’s something that Wave has always done, and that’s been a bastion of our company. We have an idea, and we want to create the best sound design for the spot. It’s not just a case of, “bring me the sounds and we’ll do it for you.” We want to add a creative aspect to the work as well.

And what about format? Are clients asking for 5.1 mixes? Or stereo mixes still?
Reynolds: 99% of our work is done in stereo. Then, we’ll get the odd job mixed in 5.1 if it’s going to broadcast in 5.1 or play back in the cinema. But the majority of our mixes are still done in stereo.

Ferraro: That’s something that people might not be aware of, that most of our mixes are stereo. We deliver stereo and 5.1, but unless you’re watching in a 5.1 environment (and most people’s homes are not a 5.1 environment), you want to listen to a stereo mix. We’ve been talking about that with a lot of clients, and they’ve been appreciative of that as well.

Reynolds: If you tend to mix in 5.1 and then fold down to a stereo mix, you’re not getting a true stereo mix. It’s an artificial one. We’re saying, “Let’s do a stereo mix. And then let’s do a separate 5.1 mix. Then you’re getting the best of both.”

Most of what you’re listening to is stereo, so you want the best possible stereo mix you can have. You don’t want a second-rate mix when 99% of the media will be played in stereo.
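For context, a conventional fold-down does roughly this: the center and surround channels are attenuated (commonly by about 3 dB) and summed into the left and right fronts, with the LFE often dropped, rather than the material being rebalanced for two speakers. Here is a minimal sketch with hypothetical NumPy channel arrays; the coefficients follow the common ITU-style convention.

    # ITU-style 5.1-to-stereo fold-down; channels are hypothetical mono arrays.
    import numpy as np

    def fold_down_51(fl, fr, c, lfe, ls, rs):
        # Center and surrounds contribute at -3 dB (0.707); LFE is dropped.
        left = fl + 0.707 * c + 0.707 * ls
        right = fr + 0.707 * c + 0.707 * rs
        peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1.0)
        return left / peak, right / peak            # normalize to avoid clipping

    # Example call with one second of placeholder channels at 48 kHz:
    channels = [np.zeros(48000) for _ in range(6)]
    left, right = fold_down_51(*channels)

A dedicated stereo mix, by contrast, lets the mixer rebalance elements for two speakers instead of relying on fixed coefficients, which is the distinction Reynolds is drawing.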

What are some of the benefits and challenges of having studios in three countries? Do you collaborate on projects?
Ferraro: We definitely collaborate! It’s been a great selling point, and a fantastic time-saver in a lot of cases. Sometimes we’ll get a project from London or Amsterdam, or vice versa. We have two sound studios in New York, and sometimes a job will come in and if we can’t accommodate it, we can send it over to London. (This is especially true for unsupervised work.) Then they’ll do the work, and our client has it the next morning. Based on the time zone difference, it’s been a real asset, especially when we’re under the gun.

Aaron has a great list of clients that he works with in London and Amsterdam who continue to work with him here in New York. It’s been very seamless. It’s very easy to send a project from one studio to another.

Reynolds: We all work on the same system — Steinberg Nuendo — so if I send a job to London, I can have it back the next morning, open it up, and have the clients review it with me. I can carry on working in the same session. It’s almost as if we can work on a 24-hour cycle.

All the Wave Studios use Steinberg Nuendo as their DAW?
Reynolds: It’s audio post software designed with sound designers in mind. Pro Tools is more of a mixing tool, good for recording music and live bands. It’s good for mixing, but it’s not particularly great for doing sound design. Nuendo, on the other hand, has been built for sound design from the ground up. It has a lot of great built-in plugins. With Pro Tools you need to get a lot of third-party plugins. Having all these built-in plugins makes the software really solid and reliable.

When it comes to third-party plugins, we really don’t need that many because Nuendo has so many built in. But some of the most-used third-party plugins are reverbs, like Audio Ease’s Altiverb and Speakerphone.

I think we’re one of the only studios that uses Nuendo as our main DAW. But Wave has always been a bit rogue. When we first set up years ago, we were using Fairlight, which no one else was using at the time. We’ve always had the desire to use the best tool that we can for the job, which is not necessarily the “industry standard.” When it came to upgrading all of our systems, we were looking into Pro Tools and Nuendo, but one of the partners at Wave, Johnnie Burn, uses Nuendo for the film side. He found it to be really powerful, so we made the decision to put it in all the facilities.

Why should agencies choose an independent audio facility instead of keeping their work in-house? What’s the benefit for them?
Ferraro: I can tell you from firsthand knowledge that there are several benefits to going out-of-house. The main thing that draws clients to Wave Studios — and away from in-house — is the high level of creativity and experience that comes with our engineers. We bring a different perspective than what you get from an in-house team. While there is a lot of talent in-house, those models often rely on freelancers who aren’t as invested in the company, which poses challenges in building the brand. It’s a different approach to working and finishing a piece.

Those two aspects play into it — the creativity and having engineers dedicated to our studio. We’re not bringing in freelancers or working with an unknown pool of people. That’s important.

From my own experience, sometimes the approach can feel more formulaic. As an independent audio facility, our approach is very collaborative. There’s a partnership that we create with all of our clients as soon as they’re on board. Sometimes we get involved even before we have a job assigned, just to help them explore how to expand their ideas through sound, how they should be capturing the sound on-set, and how they should be thinking about audio post. It’s a very involved process.

Reynolds: What we bring is a creative approach. Elsewhere, that can be more formulaic, as Vicky said. Here, we want to be as creative as possible and treat jobs with attention and care.

Wave Studios is an international audio company. Is that a draw for clients?
Ferraro: One hundred percent. You’ve got to admit, it’s got a bit of cachet to it for sure. It’s rare to be a commercial studio with outposts in other countries. I think clients really like that, and it does help us bring a different perspective. Aaron’s perspective coming from London is very different from somebody in New York. It’s also cool because our other engineer is based in the New York market, and so his perspective is different from Aaron’s. In this way, we have a blend of both.

There have been some big commercial audio post houses go under, like Howard Schwartz and Nutmeg. What does it take for an audio post house in NYC to be successful in the long run?
Reynolds: The thing to do to maintain a good studio — whether in New York City or anywhere — is not to get complacent. Don’t ever rest on your laurels. Take every job you do as if it’s your first — have that much enthusiasm about it. Keep forging for the best, and that will always shine through. Keep doing the most creative work you can do, and that will make people want to come back. Don’t get tired. Don’t get lazy. Don’t get complacent. That’s the key.

Ferraro: I also think that you need to be able to evolve with the changing environment. You need to be aware of how advertising is changing, stay on top of the trends and move with it rather than resisting it.

What are some spots that you’ve done recently at Wave Studios NYC? How do they stand out, soundwise?
Reynolds: There’s a New York Times campaign that I have been working on for Droga5. A spot in there is called Fearlessness, which was all about a journalist investigating ISIS. The visuals tell a strong story, and so I wanted to do that in an acoustic sort of way. I wanted people to be able to close their eyes and hear all of the details of the journey the writer was taking and the struggles she came across. Bombs had blown up a derelict building, and they are walking through the rubble. I wanted the viewer to feel the grit of that environment.

There’s a distorted subway train sound that I added to the track that sets the tone and mood. We explored a lot of sounds for the piece. The soundscapes were created from different layers using sounds like twisting metals and people shouting in both English and Arabic, which we sourced from libraries like Bluezone and BBC, in particular. We wanted to create a tone that was uneasy and builds to a crescendo.

We’ve got a massive amount of sound libraries — about 500,000 sound effects — that are managed via Nuendo. We don’t need any independent search engine. It’s all built within the Nuendo system. Our sound effects libraries are shared across all of our facilities in all three countries, and it’s all accessed through Nuendo via a local server for each facility.

We did another interesting spot for Droga5 called Night Trails for Harley-Davidson’s electric motorcycle. In the spot, the guy is riding through the city at night, and all of the lights get drawn into his bike. Ringan Ledwidge, one of the industry’s top directors, directed the spot. Soundwise, we were working with the actual sound of the bike itself, and I elaborated on it to make it a little more futuristic. In certain places, I used the sound of hard drives spinning and accelerating to create an electric bike-by. I had to be quite careful with it because they do have an actual sound for the bike. I didn’t want to change it too much.

For the sound of the lights, I used whispers of people talking, which I stretched out. So as the bike goes past a streetlight, for example, you hear a vocal “whoosh” element as the light travels down into the bike. I wanted the sound of the lights not to be too electric, but more light and airy. That’s why I used whispers instead of buzzing electrical sounds. In one scene, the light bends around a telephone pole, and I needed the sound to be dynamic and match that movement. So I performed that with my voice, changing the pitch of my voice to give the sound a natural arc and bend.

Main Image: (L-R) Aaron Reynolds and Vicky Ferraro


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb fiber.

For our sound department, we are using an Avid Nexis System via 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first). All CPUs in the facility are tied via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff for the future. We are already using some cloud-based solutions for our building’s security system and CCTV systems as well as the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte sized active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.
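To make the placement policy described above more concrete, here is a minimal sketch in Python of how that kind of routing might work. This is not PanFS code; the tier names and the 64KB small-file cutoff are purely illustrative assumptions.

```python
# Illustrative sketch only -- not PanFS code. Thresholds and tier names are assumptions.
from dataclasses import dataclass

SMALL_FILE_LIMIT = 64 * 1024  # hypothetical cutoff for "small" files (64 KB)

@dataclass
class WriteRequest:
    path: str
    size_bytes: int
    is_metadata: bool = False

def choose_tier(req: WriteRequest) -> str:
    """Pick a storage tier the way an intelligent-placement policy might."""
    if req.is_metadata:
        return "nvme_ssd"       # low-latency metadata operations
    if req.size_bytes <= SMALL_FILE_LIMIT:
        return "high_iops_ssd"  # many small files are IOPS-bound
    return "hdd"                # large sequential media files are bandwidth-bound

print(choose_tier(WriteRequest("shotA/frame_0001.exr", 45_000_000)))  # -> hdd
print(choose_tier(WriteRequest(".dirent", 512, is_metadata=True)))    # -> nvme_ssd
```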

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server has multiple types of storage media internal to one server. Using intelligent data placement, we put data on the most appropriate tier automatically. Using this approach, we can often replace a performance tier and a tier two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems in office locations that provide centralized storage and easy access to disk arrays, servers, and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because it offers affordable, high-capacity storage and suits the low-write/high-read workloads typical of cold storage. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing, and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.
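As an illustration of how those lifecycle stages can map onto cloud storage tiers, here is a hedged sketch using AWS S3 lifecycle rules via boto3. The bucket name, prefix and day counts are hypothetical, and other providers offer equivalent age-based tiering policies.

```python
# Hypothetical example: age-based tiering for a media archive bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-archive",              # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-finished-projects",
                "Status": "Enabled",
                "Filter": {"Prefix": "projects/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # accessed less often
                    {"Days": 180, "StorageClass": "GLACIER"},     # rarely read
                ],
                "Expiration": {"Days": 3650},  # delete after ~10 years, if policy allows
            }
        ]
    },
)
```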

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different, so, consequently, almost no two systems are alike. The real artistry is to tailor storage systems according to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful to build effective systems, but they might introduce additional difficulties to the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or reoccurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfil all necessary media production requirements or ease up workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning and SSD disks. Our volumes range from 700TB to 1000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files when several users need access to the same file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility. We’ve added or replaced storage every year for the last five years. We now have more than 8PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently have using on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access is critical to being able to fulfill our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking, and PCIe expansion solution for digital imaging, VFX, video production, and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSD and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance, and substantially lower in power draw. Yet even with all the improving performance, costs are lower today versus 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to if one will take over the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it just doesn’t compete anywhere near with what the data rate demand is for editing, etc. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID and SSD, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and using NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage that are in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI and ML offer a clear advantage for storage when it comes to algorithms designed to automatically move content between storage tiers to optimize costs. This has been commonplace on the distribution side of the ecosystem for a long time with CDNs. ML and AI also have a great ability to impact the opex side of asset management and metadata by helping to automate very manual, repetitive data entry tasks through audio and image recognition, as an example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.
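As a rough sketch of that scan-and-index idea, the snippet below walks a storage pool and builds a simple catalog keyed by content hash. The mount point is hypothetical, and a production system would layer content recognition and a real database on top of something like this.

```python
# Minimal sketch: walk a storage pool and build a searchable index of what is on it.
import hashlib
import mimetypes
import os

def index_pool(root: str) -> dict:
    """Return {sha1: metadata} for every file found under root."""
    index = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            index[digest.hexdigest()] = {
                "path": path,
                "bytes": os.path.getsize(path),
                "mime": mimetypes.guess_type(name)[0] or "unknown",
            }
    return index

if __name__ == "__main__":
    catalog = index_pool("/mnt/pool01")  # hypothetical mount point
    print(f"indexed {len(catalog)} files")
```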

How has NVMe advanced over the past year?
Like any other storage medium, when it’s first introduced there are limited use cases that make sense financially, and only a certain few can afford to deploy it. As the technology scales, form factors change, and pricing becomes more competitive and in line with other storage options, it can become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios CTO Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time: more than one day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time: less than half a day. Tape-based and in the robot; no human intervention required.
Cold – Spinning disk. Access time: slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time: medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time: fast (online). SSD for generic active production storage.
Blazing – Chip-based. Access time: uber-fast (online). NVMe dedicated storage for 4K and 8K playback, databases and specific simulation workflows.
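As an illustration of how a pipeline might choose among tiers like these, here is a small sketch that picks the cheapest tier meeting a required access time. The numbers and the selection rule are assumptions for illustration, not Zoic’s actual tooling.

```python
# Illustrative only: pick the cheapest tier that still meets an access-time requirement.
# Access times loosely mirror the tier descriptions above; relative costs are made up.
TIERS = [
    # (name, worst-case access time in hours, relative cost per TB)
    ("cryogen", 24.0, 1),
    ("freezing", 12.0, 2),
    ("cold", 1.0, 4),
    ("warm", 0.25, 8),
    ("hot", 0.01, 20),
    ("blazing", 0.001, 50),
]

def pick_tier(max_wait_hours: float) -> str:
    """Cheapest tier whose worst-case access time fits within the allowed wait."""
    candidates = [t for t in TIERS if t[1] <= max_wait_hours]
    return min(candidates, key=lambda t: t[2])[0] if candidates else "blazing"

print(pick_tier(48))     # archive restore by tomorrow -> cryogen
print(pick_tier(0.005))  # 8K playback needed right now -> blazing
```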

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro vs. con, but where you are as a company. Many factors are involved, there is no one size that fits all, as many are led to believe, and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited for the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of TBs. If the workflow requires you to transfer this data back on-premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. Depending on the size of your company and the level of technical people on hand, the cloud can be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as being slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage a combination of any or all of these kinds of storage. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see are the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, along with advances in storage density in both spinning disk and solid state. All of these are interrelated and have the general effect of lowering costs for the end user. For those who have specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, the company offers cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off of the “shared storage expansion treadmill” and buying just enough on-site shared storage that fits your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage, but makes all of your content highly protected and highly available at the same time. Since most of your team probably use MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see are the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned, especially when that entire experience can now be delivered from the cloud, on demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — that is 20 times faster than an external spinning hard drive and draws only about 3.5W, that’s a game changer for data wrangling and camera cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster too. PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the available bandwidth that can feed that NVMe storage, the future is very, very bright for NVMe.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Creative Outpost buys Dolby-certified studios, takes on long-form

After acquiring the studio assets from now-closed Angell Sound, commercial audio house Creative Outpost is now expanding its VFX and audio offerings by entering the world of long-form audio. Already in picture post on its first Netflix series, the company is now open for long-form ADR, mix and review bookings.

“Space is at a premium in central Soho, so we’re extremely privileged to have been able to acquire four studios with large booths that can accommodate crowd sessions,” say Creative Outpost co-founders Quentin Olszewski and Danny Etherington. “Our new friends in the ADR world have been super helpful in getting the word out into the wider community, having seen the size, build quality and location of our Wardour Street studios and how they’ll meet the demands of the growing long-form SVOD market.”

With the Angell Sound assets in place, the team at Creative Outpost has completed a number of joint picture and sound projects for online and TV. Focusing two of its four studios primarily on advertising work, Creative Outpost has provided sound design and mix on campaigns including Barclays’ “Team Talk,” Virgin Mobile’s “Sounds Good,” Icee’s “Swizzle, Fizzle, Freshy, Freeze,” Green Flag’s “Who The Fudge Are Green Flag,” Santander’s “Antandec” and Coca Cola’s “Coaches.” Now, the team’s ambitions are to apply its experience from the commercial world to further include long-form broadcast and feature work. Its Dolby-approved studios were built by studio architect Roger D’Arcy.

The studios are running Avid Pro Tools Ultimate, Avid hardware controllers and Neumann U87 microphones. They are also set up for long-form/ADR work with EdiCue and EdiPrompt, Source-Connect Pro and ISDN capabilities, Sennheiser MKH 416 and DPA D:screet microphones.

“It’s an exciting opportunity to join Creative Outpost with the aim of helping them grow the audio side of the company,” says Dave Robinson, head of sound at Creative Outpost. “Along with Tom Lane — an extremely talented fellow ex-Angell engineer — we have spent the last few months putting together a decent body of work to build upon, and things are really starting to take off. As well as continuing to build our core short-form audio work, we are developing our long-form ADR and mix capabilities and have a few other exciting projects in the pipeline. It’s great to be working with a friendly, talented bunch of people, and I look forward to what lies ahead.”

 

Video: The Irishman’s focused and intimate sound mixing

Martin Scorsese’s The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, tells the story of organized crime in post-war America as seen through the eyes of World War II veteran Frank Sheeran (De Niro), a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. In the film, the actors have been famously de-aged, thanks to VFX house ILM, but it wasn’t just their faces that needed to be younger.

In this video interview, Academy Award-winning re-recording sound mixer and decades-long Scorsese collaborator Tom Fleischman — who will receive the Cinema Audio Society’s Career Achievement Award in January — talks about de-aging actors’ voices as well as the challenges of keeping the film’s sound focused and intimate.

“We really had to try and preserve the quality of their voices in spite of the fact we were trying to make them sound younger. And those edits are sometimes difficult to achieve without it being apparent to the audience. We tried to do various types of pitch changing, and we used different kinds of plugins. I listened to scenes from Serpico for Al Pacino and The King of Comedy for Bob De Niro and tried to match the voice quality of what we had from The Irishman to those earlier movies.”

Fleischman worked on the film at New York’s Soundtrack.

Enjoy the video:

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokemon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley for Creative Grading; and Netflix for Photon.

Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation) regulating the audio levels of TV commercials. At that time, many “laypeople” complained to me about how commercials were often so much louder than the TV programs. Over the past 10 years, I have seen the rise of audio meter plugins built to meet the requirements of the CALM Act, and those complaints have dropped dramatically.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the scale name at the time, but we will get into this more later. Today, we have countless viewing options, such as cable networks, a large variety of streaming services, the internet and movie theaters utilizing 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM for addressing today’s complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST and VST3. It can also be installed as a standalone application for Windows and OSX. By using its many presets, Loudness History Mode and countless parameters to view and customize, the VisLM2 can help an audio mixer monitor a mix to see when their programs are in and out of audio level spec using a variety of features.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.
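For reference, both LKFS and LUFS describe the same measurement defined in ITU-R BS.1770: the signal is K-weighted, the mean-square power of each channel is averaged over the measurement window, and the weighted sum is expressed in decibels. Roughly:

$$\text{Loudness} = -0.691 + 10\,\log_{10}\Big(\sum_{i} G_i\, z_i\Big)\ \text{LUFS}$$

where $z_i$ is the mean square of the K-weighted signal in channel $i$ and $G_i$ is a channel weight (1.0 for the front channels, about 1.41 for the surrounds, with the LFE excluded).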

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R BS.1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets of ITU-R BS.1770 — revision 01, 02, 03 and the current revision 04. Accessing the presets is easy, once you realize that they are not in the preset section of the plugin as one might think. Presets are located in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays that show Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary is used for short bursts, such as an impact or gun shot, while Short-Term is used for the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.
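To make the distinction concrete, here is a simplified sketch of the two window lengths involved: Momentary integrates over roughly the last 400 milliseconds and Short-Term over roughly the last three seconds (the window lengths come from the BS.1770/EBU R128 family). The code skips K-weighting and gating, so it only illustrates the windowing, not a compliant meter.

```python
# Simplified illustration of Momentary (400 ms) vs Short-Term (3 s) loudness windows.
# Real meters K-weight the audio first; that filter is omitted here for brevity.
import numpy as np

def windowed_loudness(samples: np.ndarray, sr: int, window_s: float) -> float:
    """Loudness-style reading (dB) over the last window_s seconds of a mono signal."""
    n = int(sr * window_s)
    tail = samples[-n:] if len(samples) >= n else samples
    mean_square = np.mean(tail ** 2) + 1e-12   # avoid log(0)
    return -0.691 + 10 * np.log10(mean_square)

sr = 48_000
t = np.arange(sr * 5) / sr
mix = 0.1 * np.sin(2 * np.pi * 1000 * t)   # 5 seconds of a quiet tone
mix[-sr // 10:] *= 8                        # loud burst in the last 100 ms

print("momentary :", round(windowed_loudness(mix, sr, 0.4), 1))
print("short-term:", round(windowed_loudness(mix, sr, 3.0), 1))
# The burst dominates the 400 ms momentary reading far more than the 3 s short-term one.
```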

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio spec and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom for staying below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult you might find it to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — you always want to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program material is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always the average reading over time. That means a 10-second loud segment in a two-hour program could be much louder than a 10-second loud segment in a 15-second commercial. That same loud 10 seconds can practically be averaged out of existence during a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.

True Peak is forever. Once the highest True Peak is detected, it will remain as the final True Peak reading for the entire length of the program material. That means a loud segment in the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only. In the final minute of the show, a single loud gunshot is heard. That one-second gunshot will determine the True Peak level reported for the other one hour, 59 minutes and 59 seconds of the program. Flawed logic? I can see how it could be. Spotify’s recommended levels are -14LUFS and -2dBTP. That gives you a much smaller range for dynamics compared to others such as network TV.
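
A back-of-the-envelope sketch makes the asymmetry obvious. These are not real meter internals; the Integrated figure here is an ungated energy average and the peak is a simple sample peak rather than an oversampled True Peak, but the behavior is the same: the shot barely moves a two-hour average while it completely sets the peak.

```python
import numpy as np

def integrated_lufs(x):
    # Ungated energy average; a real Integrated measurement adds K-weighting and gating.
    return -0.691 + 10 * np.log10(np.mean(x ** 2) + 1e-12)

def peak_dbfs(x):
    # Sample peak; a real True Peak meter oversamples (typically 4x) before finding the max.
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

fs = 1000                                                      # low rate just to keep the arrays small
dialogue = 0.05 * np.random.randn(2 * 3600 * fs)               # two quiet hours of "dialogue"
gunshot = 0.7 * np.sin(2 * np.pi * 200 * np.arange(fs) / fs)   # one loud second

show = np.concatenate([dialogue, gunshot])            # two-hour program ending with the shot
spot = np.concatenate([dialogue[:14 * fs], gunshot])  # the same shot inside a 15-second spot

print("Integrated, 2-hour show:", round(integrated_lufs(show), 1), "LUFS")
print("Integrated, 15-sec spot:", round(integrated_lufs(spot), 1), "LUFS")
print("Peak set by the shot:   ", round(peak_dbfs(show), 1), "dBFS")
```

The same one-second shot leaves the two-hour Integrated reading essentially where the quiet dialogue put it, lifts the 15-second spot by several dB, and single-handedly sets the peak.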

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear standout for me is the detailed and large history graph display known as Loudness History Mode. It is a realtime, continuously updating display of the mix levels. What it shows is up to you. There are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs will show, or hide, the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets are selected, there are a whole host of parameters that come along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the ITU-R BS.1770-4 preset is -24LUFS Integrated and -2dBTP. By default, both will show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed to be the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated, or average, loudness as I had originally thought. These values are for Short-Term loudness. The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. This is not a hard and fast rule. Simply put, if your Short-Term value stays somewhere between -20 and -28LUFS throughout most of a project, then you have a good chance of meeting your target of -24LUFS for the overall Integrated measurement. That is why the value range is often set up as a “green” zone on the loudness display.
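
If it helps, the green-zone logic amounts to a one-line check. This is my reading of how the alert band works, not Nugen’s implementation, and the target and width are just the defaults discussed above.

```python
def in_green_zone(short_term_lufs, target=-24.0, width=4.0):
    """True if a Short-Term reading sits inside the +/- `width` dB alert band."""
    return (target - width) <= short_term_lufs <= (target + width)

readings = [-26.5, -23.0, -19.2, -25.8]          # hypothetical Short-Term values
for r in readings:
    print(r, "LUFS ->", "OK" if in_green_zone(r) else "ALERT")
```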

The folks at Nugen point out that it isn’t practically possible to set up an alert or “red zone” for integrated loudness because this value is measured over the entire program. For that, you have to simply view the main reading of your Integrated loudness. Even so, I will know if I am getting there or not by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2 max and min alerts help keep you working within audio spec right from the start.

Another nice feature about the large history graph window is the Macro tab. Selecting the Macro feature will give you the ability to move back and forth anywhere along the duration of your mix displayed in the Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, then cue up your DAW to that same eight-minute mark to address changes in your mix.
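
The workflow the Macro tab enables (find the overage first, then cue the DAW) is easy to picture as a tiny scan over a loudness history. The timestamps, readings and the -20LUFS threshold below are all made up for illustration.

```python
def find_overages(history, max_short_term=-20.0):
    """history: list of (seconds_into_program, short_term_lufs) pairs."""
    return [(t, lufs) for t, lufs in history if lufs > max_short_term]

# One reading per minute of a hypothetical 10-minute stretch of program.
history = [(60 * i, lufs) for i, lufs in enumerate(
    [-25.0, -24.2, -23.8, -26.0, -24.9, -23.5, -24.4, -24.7, -19.1, -25.3])]

for t, lufs in find_overages(history):
    print(f"Overage at {int(t // 60):02d}:{int(t % 60):02d} -> {lufs} LUFS")
```

Here the single overage turns up at 08:00, so that is exactly where you would cue the session.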

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets for selecting what loudness spec you are working with. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries. I would venture a guess that very few audio mixers do either. So I was not surprised to see many presets that I was not familiar with. Common presets, in addition to ITU-R BS.1770, are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short. The descriptions lack any kind of real-world language, only techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better guide to what’s what regarding each preset becomes available sometime soon.

MasterCheck

But why no preset for Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU be even more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is probably why there is no Internet preset included with the VisLM2, as I had hoped there would be. Even so, Nugen offers its MasterCheck plugin for platforms such as Spotify and YouTube. MasterCheck is something I had been hoping for, and it would make the perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.
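
The practical difference comes down to a gain offset applied at playback. As a sketch (the -14 and -16LUFS targets below are common examples, not official numbers for any particular service):

```python
def playback_gain_db(measured_integrated_lufs, platform_target_lufs):
    """Gain a loudness-normalizing platform would apply when it plays the track back."""
    return platform_target_lufs - measured_integrated_lufs

mix = -18.0   # hypothetical Integrated reading of a finished mix
print("At a -14 LUFS playback target:", playback_gain_db(mix, -14.0), "dB")   # turned up 4 dB
print("At a -16 LUFS playback target:", playback_gain_db(mix, -16.0), "dB")   # turned up 2 dB
```

Either way, the platform does the turning up or down itself, which is why hitting an exact number matters less for those services than it does for a Netflix or broadcast delivery.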

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. I would call it the Wild West by comparison to broadcast TV. With so many streaming services popping up, particularly “off-brand” services, I would ask if we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus between them like with network TV.

That aside, one of the most popular streaming services is Netflix. So let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/-2dB Integrated, measured on dialogue. That means the dialogue level is averaged out over time, rather than averaging all program material, including music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This can lead to more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.
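
Paraphrased as a pass/fail check, using only the numbers quoted above (always confirm the current spec against Netflix’s own delivery documentation before relying on it):

```python
def meets_netflix_spec(dialogue_integrated_lufs, true_peak_dbtp):
    """Pass/fail against the numbers quoted above: -27 LUFS +/-2 dB dialogue-gated, -2 dBTP max."""
    loudness_ok = -29.0 <= dialogue_integrated_lufs <= -25.0
    peak_ok = true_peak_dbtp <= -2.0
    return loudness_ok and peak_ok

print(meets_netflix_spec(-26.4, -2.8))   # True: inside both limits
print(meets_netflix_spec(-26.4, -1.2))   # False: True Peak too hot
```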

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix deliverables will eventually require Atmos for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format, which included height channels long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono channel audio mix in 10.2 by simply leaving all other channels empty. I can only hope the same will be true for Netflix’s Atmos deliverables, so you can simply add or subtract the number of channels needed when you are outputting your final mix. Regardless, we can surely look to Nugen Audio to keep us updated with its Netflix preset in the VisLM2 should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original version of the VisLM, there are three updates that are worth looking at. First is the ability to resize and select what shows in the display. That helps with keeping the window active on your screen as you are working. It can be a small window so it doesn’t interfere with your other operations. Or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting any trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.
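
Why does punching in one section work at all? Because Integrated loudness is an energy average, the meter only needs to swap in that section’s new contribution and recombine; it does not need to re-hear the whole show. A simplified sketch of the idea, ignoring the gating a real meter also tracks:

```python
import math

def combine_integrated(sections):
    """sections: list of (duration_seconds, integrated_lufs) measured per section.
    Combines them as a duration-weighted energy average (gating ignored)."""
    total = sum(d for d, _ in sections)
    energy = sum(d * 10 ** (lufs / 10.0) for d, lufs in sections)
    return 10 * math.log10(energy / total)

# A one-hour show split into three 20-minute sections (readings are hypothetical).
before = [(1200, -24.5), (1200, -21.0), (1200, -25.0)]
after  = [(1200, -24.5), (1200, -23.5), (1200, -25.0)]   # only the hot middle section was turned down

print("Integrated before the fix:", round(combine_integrated(before), 1), "LUFS")
print("Integrated after the fix: ", round(combine_integrated(after), 1), "LUFS")
```

Turning down only the hot middle third pulls the combined reading from roughly -23 to roughly -24LUFS, with no need to replay the other 40 minutes.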

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened up the floodgates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

When it comes to managing today’s complicated and varied deliverables, along with the audio specs that go with them, Nugen Audio’s VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”

Review: iZotope’s Ozone 9 isn’t just for mastering

By Pat Birk

iZotope is back with its latest release, Ozone 9, and with it the company hopes to provide a comprehensive package of tools to streamline the audio engineer’s workflow. iZotope has been on my radar for a number of years now, as I have used its RX suite extensively for cleanup and restoration of production audio.

I have always been impressed by RX’s ability to improve poor location sound but was unfamiliar with the company’s more music-focused products. But, in addition to being an engineer, I am also a musician, so I was excited to try out the Ozone suite for myself.

Ozone is first and foremost a mastering suite. It features a series of EQ, compression, saturation and limiting modules meant to be used in a mastering chain — putting the final touches on your audio before it hits streaming platforms or physical media.

Since Ozone is primarily (though by no means solely) aimed at mastering engineers, the plugin features a host of options for manipulating a finished stereo mix, with all elements in place and no stems to adjust. Full disclosure: my mastering experience prior to this review consisted of loading up an instance of Waves’ Abbey Road TG Mastering Chain, applying a preset and playing with the settings from there. However, that didn’t stop me from loading a recent mix into Ozone and taking a crack at mastering it.

The Master Assistant feature helps create a starting point on your master. Note the colored lines beneath the waveform, which accurately depict the song’s structure.

Ozone has deeply integrated machine learning and I immediately found that the program lives up to the hype surrounding that technology. I loaded my song into the standalone app and analyzed it with the Master Assistant feature. I was asked to choose between the Vintage and Modern setting, select either a manual EQ setting or load a mastered song for reference and then tell Ozone whether the track was being mastered for streaming or CD.

Within about 15 seconds of making these selections and playing the track, Ozone had chained together a selection of EQs, compressors and limiters that added punch, clarity and, of course, loudness. I was really impressed with the ballpark iZotope’s AI had gotten my track into. Another really nice touch was the fact that Ozone had analyzed the track and assigned a series of colored lines beneath the waveform to represent each section of the song. It was dead on, and really streamlined the process of checking each section for making adjustments.

Vintage
As a musician who came up listening to the great recordings of the ‘60s and ‘70s, I often find myself wanting to add some analog credibility to my largely in-the-box productions. iZotope delivers in a big way here, incorporating four vintage modules to add as much tube and transistor warmth as you desire. The Vintage EQ module is based on the classic Pultec, emulating its distinctive curves and representing them graphically. My ears knew that a little goes a long way with Pultec-type EQs, but the graphic EQ really helped me understand what was going on more deeply.

The Vintage Compressor emulates analog feedback compressors such as the UREI 1176 and Teletronix LA-2A and is specifically designed to minimize the pumping effects that can appear when compression is overdone. I had to push the compressor pretty hard before I heard anything like that, and I found that it did a really nice job of subtly attenuating transients.

Vintage tape adds analog warmth, and this reviewer found it pulls the sound together.

The Vintage Limiter is based on the prized Fairchild 670 limiter and it does what a limiter is meant to do: raise the level of the mix and decrease dynamic range, all while adding a distinctive analog warmth to the signal. I’ve never gotten my hands on a Fairchild, but I know that this emulation sounds good, regardless of how true it is to the original.

The Master Assistant feature arranged all of these modules in a nicely gain-staged chain for me, and after some light tweaking, I was well within the neighborhood of what I was hoping for in a master. But I wanted to add a little more warmth, a little more “glue.” That’s where the Vintage Tape module came in. iZotope has based its tape emulation on the Studer A810. The company says that the plugin features all of the benefits of tape — added warmth, saturation and glue — without any of the wow, flutter and crosstalk that occurs on actual tape machines.

Adjustable tape speeds have a noticeable effect on frequency response, with 7.5ips being darker and 30ips being brighter. More tonal adjustments can be made via the bias and low and high emphasis controls, and saturation is controlled via the input drive control. The plugin departs from the realm of physical tape emulation with the added Harmonics control, which adds even harmonics to the signal, providing further warmth.

I appreciated the warmth and presence Vintage Tape added to the signal, but I did find myself missing some of the “imperfection” options included on other tape emulation plug-ins, such as the Waves J37 tape machine. Slight wow and flutter can add character to a recording and can be especially interesting if the tape emulator has a send-and-return section for setting up delays. But Ozone is a mastering suite, so I can see why these kinds of features weren’t included.

The vintage EQ purports to offer Pultec-style cuts and boosts.

Modern Sounds
Each of the vintage modules has a modern counterpart in the form of the Dynamics, Dynamic EQ, EQ and Exciter plugins. Each of these plugins is simple to operate, with a sleek, modern UI. Each plugin is also multiband with the EQs featuring up to eight bands, the Dynamic EQ featuring six and the Exciter and Dynamics modules featuring four bands each. This opens up a wide range of possibilities for precisely manipulating audio.

I was particularly intrigued by the Exciter’s ability to divide the frequency spectrum into four quadrants and apply a different type of analog harmonic excitement to each. Tube, transistor and tape saturation are all available, and the Exciter truly represents a modern method of using classic analog sound signatures.

The Modern modules will also be of interest to sound designers and other audio post pros. Dynamic EQ allows you to set a threshold and ratio at which a selected band will begin to affect audio. While this is, of course, useful for managing problems such as sibilance and other harsh frequencies in a musical context, problematic frequencies are just as prevalent in dialogue recording, if not more so. Used judiciously, Dynamic EQ has the potential to save a lot of time in a dialogue edit. Dynamic EQ or the multiband compression section of Ozone’s Dynamics module have the potential to rescue production audio.

Exciter allows for precise amounts of harmonic distortion to be added across four bands.

For instance, in the case of a fantastic performance during which the actor creates a loud transient noise by hitting a prop, the Dynamic EQ can easily tame the transient noise without greatly affecting the actor’s voice and without creating artifacts. And while the EQ modules in Ozone feature a wide selection of filter categories and precisely adjustable Qs, which will no doubt be useful throughout the design process, it is important to note that they are limited to 6dB boosts and 12dB cuts in gain. The plugin is still primarily aimed at the subtleties of mastering.
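
For anyone who has never used one, the core of a dynamic EQ band is just compressor math applied to a single frequency band: nothing happens below the threshold, and above it the excess is reduced according to the ratio. This is a textbook-style illustration, not iZotope’s algorithm, and the threshold and ratio values are arbitrary.

```python
def dynamic_eq_gain_db(band_level_db, threshold_db=-30.0, ratio=3.0):
    """Gain applied to one EQ band: 0 dB below the threshold, compressed above it."""
    if band_level_db <= threshold_db:
        return 0.0
    over = band_level_db - threshold_db
    return -(over - over / ratio)   # cut away the excess the ratio says to remove

for level in (-40.0, -30.0, -24.0, -18.0):
    print(f"band at {level} dB -> {dynamic_eq_gain_db(level):+.1f} dB")
```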

Dialogue Editors, Listen Up
Ozone’s machine learning does provide two more fantastic features for dialogue editors: Match EQ and Master Rebalance. Match EQ intelligently builds an EQ profile of a given audio selection and can apply it to another piece of audio. This can aid greatly in matching a lavalier mic to a boom track or incorporating ADR into a take. I also tested it by referencing George Harrison’s “What Is Life?” and applying its profile to a mix of my own song. I was shocked by how close the plugin got my mix to sounding like George’s.
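
Conceptually (and only conceptually, since iZotope has not published its method), matching works by comparing long-term average spectra and deriving a correction curve from their ratio. A minimal numpy/scipy sketch, assuming two mono signals at the same sample rate; the signals here are synthetic stand-ins.

```python
import numpy as np
from scipy.signal import welch

def match_eq_curve_db(reference, target, fs, nperseg=4096):
    """Per-frequency gain (dB) that nudges `target`'s average spectrum toward
    `reference`'s. Illustrative only: no smoothing, no limits on boost or cut."""
    f, p_ref = welch(reference, fs, nperseg=nperseg)
    _, p_tgt = welch(target, fs, nperseg=nperseg)
    return f, 10 * np.log10((p_ref + 1e-12) / (p_tgt + 1e-12))

fs = 48000
n = fs * 5
reference = np.random.randn(n)                                     # stand-in for a bright boom track
target = np.convolve(np.random.randn(n), [1.0, 0.5], mode="same")  # gently low-passed, lav-like stand-in

f, curve = match_eq_curve_db(reference, target, fs)
print("Correction suggested near 20 kHz:", round(float(np.interp(20000, f, curve)), 1), "dB")
```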

Ozone’s standard equalizer

Master Rebalance, meanwhile, is meant for a mastering engineer to be able to bring up or lower the vocals, bass, or drums in a song with only a stereo mix to work from. I tested it on music and was very impressed by how effectively it raised and cut each category without affecting the parts around it. But this will also have utility for dialogue editors — the module is so effective at recognizing the human voice that it can bring up dialogue within production tracks, further separating it from whatever noise is happening around it.

Match EQ yields impressive results and could be time-saving for music engineers crossing over into the audio post world — like those who do not own RX 7 Advanced, which features a similar module.

The Imager module also has potential for post. Its Stereoize feature can add an impressive amount of width to any track and has a multiband option, meaning you can, for example, keep the low frequencies tight and centered while spreading the mids and highs more widely across the stereo field. And while it is not a substitute for true stereo recording, the Stereoize feature can add depth to mono ambience and world tone recordings, making them usable in the right context.
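
For the curious, one old-school way to synthesize width from a mono source is complementary comb filtering: add a short delayed copy to one channel and subtract it from the other. I am not suggesting this is what Stereoize does internally, and Ozone’s multiband option would amount to applying something like this only above a crossover, but it illustrates the idea of manufacturing decorrelation.

```python
import numpy as np

def pseudo_stereo(mono, fs, delay_ms=12.0, mix=0.5):
    """Complementary-comb pseudo-stereo: a short delayed copy is added to the
    left channel and subtracted from the right, decorrelating the two."""
    d = int(fs * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(d), mono[:-d]])
    left = mono + mix * delayed
    right = mono - mix * delayed
    return np.stack([left, right], axis=1)

fs = 48000
room_tone = 0.1 * np.random.randn(fs * 2)    # stand-in for a mono ambience recording
stereo = pseudo_stereo(room_tone, fs)
print(stereo.shape)                          # (96000, 2)
```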

Master Rebalance features a simple interface for raising or lowering vocals, bass or drums in a finished mix.

The collection of plugins is available at three price points — Elements, Standard and Advanced — which allows engineers to get started with Ozone at any budget. Elements is a stripped-down package of Ozone’s bare essentials; Standard introduces the standalone app and a sizeable step up in feature set; and Advanced is replete with every mastering innovation iZotope has developed to date, including new toys like Low-End Focus and Master Rebalance. A complete list of each tier’s features can be found on iZotope’s website.

Summing Up
Ozone 9 integrates an immense amount of technology and research into a sleek, user-friendly package. For music recording and mastering engineers, this suite is a no-brainer. For other types of audio post engineers, the plugin provides enough perks to be interesting and useful, from editing to design to final mix. Ozone 9 Elements, Standard and Advanced editions are available now from iZotope.


Pat Birk is a musician and sound engineer at Silver Sound, a boutique sound house based in New York City.

Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces/delusion-inducing conditions. It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor’s New York City location for both sound and final color. This was colorist Joe Gawler’s first time working with Eggers, but it couldn’t have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and as it happens, Gawler is well versed in the world of black & white. He has remastered a tremendous number of classic movie titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films, including 8 ½. “To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful,” Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”

From production to post, Eggers’ goal was to create a film that felt like it could have been pulled from a 1930s film archive. “It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible,” says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime,” delivered in two different accents — Dafoe has an Irish-tinged seasoned sailor accent and Pattinson has an up-east Maine accent. Additionally, the production location made it difficult to record the dialogue, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that when it’s raining on screen, the lighthouse is leaking; you see the water in the shots because that’s how they shot it. “So the water sound is married to the dialogue. We wanted to have control over the water so the dialogue had to be looped. Rob wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too, like the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut and more (seagulls and waves) at Mystic Seaport. “We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife,” says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn’t feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a NAGRA to “get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia,” says Volpe. “We did try tape emulator software, but that didn’t yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they’d be expensive and we’d have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution.”

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units’ magnetic heads added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’s style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

The gritty and realistic sounds of Joker

By Jennifer Walden

The grit of Gotham City in Warner Bros.’ Joker is painted on in layers, but not in broad strokes of sound. Distinct details are meticulously placed around the Dolby Atmos surround field, creating a soundtrack that is full but not crowded and muddy — it’s alive and clear. “It’s critical to try to create a real feeling world so Arthur (Joaquin Phoenix) is that much more real, and it puts the audience in a place with him,” says re-recording mixer Tom Ozanich, who mixed alongside Dean Zupancic at Warner Bros. Sound in Burbank on Dub Stage 9.

L-R: Tom Ozanich, Unsun Song and Dean Zupancic on Dub Stage 9. Photo: Michael Dressel.

One main focus was to make a city that was very present and oppressive. Supervising sound editor Alan Robert Murray created specific elements to enhance this feeling, while dialogue supervisor Kira Roessler created loop group crowds and callouts that Ozanich could sprinkle throughout the film. Murray received an Oscar nomination in the category of Sound Editing for his work on Joker, while Ozanich, Zupancic and Tod Maitland were nominated for their Sound Mixing work.

During the street scene near the beginning of the film, Arthur is dressed as a clown and dancing on the sidewalk, spinning a “Going Out of Business” sign. Traffic passes to the left and pedestrians walk around Arthur, who is on the right side of the screen. The Atmos mix reflects that spatiality.

“There are multiple layers of sounds, like callouts of group ADR, specific traffic sounds and various textures of air and wind,” says Zupancic. “We had so many layers that afforded us the ability to play sounds discretely, to lean the traffic a little heavier into the surrounds on the left and use layers of voices and footsteps to lean discretely to the right. We could play very specific dimensions. We just didn’t blanket a bunch of sounds in the surrounds and blanket a bunch of sounds on the front screen. It was extremely important to make Gotham seem gritty and dirty with all those layers.”

The sound effects and callouts didn’t always happen conveniently between lines of principal dialogue. Director Todd Phillips wanted the city to be conspicuous… to feel disruptive. Ozanich says, “We were deliberate with Todd about the placement of literally every sound in the movie. There are a few spots where the callouts were imposing (but not quite distracting), and they certainly weren’t pretty. They didn’t occur in places where it doesn’t matter if someone is yelling in the background. That’s not how it works in real life; we tried to make it more like real life and let these voices crowd in on our main characters.”

Every space feels unique with Gotham City filtering in to varying degrees. For example, in Arthur’s apartment, the city sounds distant and benign. It’s not as intrusive as it is in the social worker’s (Sharon Washington) office, where car horns punctuate the strained conversation. Zupancic says, “Todd was very in tune with how different things would sound in different areas of the city because he grew up in a big city.”

Arthur’s apartment was further defined by director Phillips, who shared specifics like: The bedroom window faces an alley so there are no cars, only voices, and the bathroom window looks out over a courtyard. The sound editorial team created the appropriate tracks, and then the mixers — working in Pro Tools via Avid S6 consoles — applied EQ and reverb to make the sounds feel like they were coming from those windows three stories above the street.

In the Atmos mix, the clarity of the film’s apposite reverbs and related processing simultaneously helped to define the space on-screen and pull the sound into the theater to immerse the audience in the environment. Zupancic agrees. “Tom [Ozanich] did a fabulous job with all of the reverbs and all of the room sound in this movie,” he says. “His reverbs on the dialogue in this movie are just spectacular and spot on.”

For instance, Arthur is waiting in the green room before going on the Murray Franklin Show. Voices from the corridor filter through the door, and when Murray (Robert De Niro) and his stage manager open it to ask Arthur what’s with the clown makeup, the filtering changes on the voices. “I think a lot about the geography of what is happening, and then the physics of what is happening, and I factor all of those things together to decide how something should sound if I were standing right there,” explains Ozanich.

Zupancic says that Ozanich’s reverbs are actually multistep processes. “Tom’s not just slapping on a reverb preset. He’s dialing in and using multiple delays and filters. That’s the key. Sounds of things change in reality — reverbs, pitches, delays, EQ — and that is what you’re hearing in Tom’s reverbs.”

“I don’t think of reverb generically,” elaborates Ozanich, “I think of the components of it, like early reflections, as a separate thought related to the reverb. They are interrelated for sure, but that separation may be a factor of making it real.”

One reason the reverbs were so clear is because Ozanich mixed Joker’s score — composed by Hildur Guðnadóttir — wider than usual. “The score is not a part of the actual world, and my approach was to separate the abstract from the real,” explains Ozanich. “In Arthur’s world, there’s just a slight difference between the actual world, where the physical action is taking place, and Arthur’s headspace where the score plays. So that’s intended to have an ever-so-slight detachment from the real world, so that we experience that emotionally and leave the real space feeling that much more real.”

Atmos allows for discrete spatial placement, so Ozanich was able to pull the score apart, pull it into the theater (so it’s not coming from just the front wall), and then EQ each stem to enhance its defining characteristic — what Ozanich calls “tickling the ear.”

“When you have more directionality to the placement of sound, it pulls things wider because rather than it being an ambiguous surround space, you’re now feeling the specificity of something being 33% or 58% back off the screen,” he says.

Pulling the score away from the front and defining where it lived in the theater space gave more sonic real estate for the sounds coming from the L-C-Rs, like the distinct slap of a voice bouncing off a concrete wall or Foley sounds like the delicate rustling scratches of Arthur’s fingertips passing over a child’s paintings.

One of the most challenging scenes to mix in terms of effects was the bus ride, in which Arthur makes funny faces at a little boy, trying to make him laugh, only to be admonished by the boy’s mother. Director Phillips and picture editor Jeff Groth had very specific ideas about how that ‘70s-era bus should sound, and Zupancic wanted those sounds to play in the proper place in the space to achieve the director’s vision. “Buses of that era had an overhead rack where people could put packages and bags; we spent a lot of time getting those specific rattles where they should be placed, and where the motor should be and how it would sound from Arthur’s seat. It wasn’t a hard scene to mix; it was just complex. It took a lot of time to get all of that right. Now, the scene just goes by and you don’t pay attention to the little details; it just works,” says Zupancic.

Ozanich notes the opening was a challenging scene as well. The film begins in the clowns’ locker room. There’s a radio broadcast playing, clowns playing cards, and Arthur is sitting in front of a mirror applying his makeup. “Again, it’s not a terribly complex scene on the surface, but it’s actually one of the trickiest in the movie because there wasn’t a super clear lead instrument. There wasn’t something clearly telling you what you should be paying attention to,” says Ozanich.

The scene went through numerous iterations. One version had source music playing the whole time. Another had bits of score instead. There are multiple competing elements, like the radio broadcast and the clowns playing cards and sharing anecdotes. All those voices compete for the audience’s ear. “If it wasn’t tilted just the right way, you were paying attention to the wrong thing or you weren’t sure what you should be paying attention to, which became confusing,” says Ozanich.

In the end, the choice was made to pull out all the music and then shift the balance from the radio to the clowns as the camera passes by them. It then goes back to the radio briefly as the camera pushes in closer and closer on Arthur. “At this point, we should be focusing on Arthur because we’re so close to him. The radio is less important, but because you hear this voice it grabs your attention,” says Ozanich.

The problem was there were no production sounds for Arthur there, nothing to grab the audience’s ear. “I said, ‘He needs to make sound. It has to be subtle, but we need him to make some sound so that we connect to him and feel like he is right there.’ So Kira found some sounds of Joaquin from somewhere else in the film, and Todd did some stuff on a mic. We put the Foley in there and we cobbled together all of these things,” says Ozanich. “Now, it unquestionably sounds like there was a microphone open in front of him and we recorded that. But in reality, we had to piece it all together.”

“It’s a funny little dichotomy of what we are trying to do. There are certain things we are trying to make stick on the screen, to make you buy that the sound is happening right there with the thing that you’re looking at, and then at the same time, we want to pull sounds off of the screen to envelop the audience and put them into the space and not be separated by that plane of the screen,” observes Ozanich.

The Atmos mix on Joker is a prime example of how effective that dichotomy can be. The sound of the environments, like standing on the streets of Gotham or riding on the subway car, are distinct, dynamic, and ever-changing, and the sounds emanating from the characters are realistic and convincing. All of this serves to pull the audience into the story and get them emotionally invested in the tale of this sad, psychotic clown.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems that the job title of “editor” changes. Editors are no longer responsible only for shaping the story of a show; they are also responsible for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and they really are that easy, but you can also nuance the audio if you like. The Era 4 Pro plugins work not only with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise my system was able to toggle each plug-in off and on without any issue. Playback was seamless when all plugins were applied. Now I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone who has some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (but doing this can potentially require rendering). In addition, there is an output gain setting and a “Diff” mode that plays only the parts De-Esser is affecting. If you want to just try the “one button” approach, the Processing dial is really all you need to touch. In realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount, then dial it back 5% or 10%.
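
For context on what any de-esser is doing under the hood (generically speaking; Accusonus does not publish its method), the core move is narrow-band gain reduction that only engages when sibilant energy crosses a threshold. A rough sketch with arbitrary values:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def simple_deess(x, fs, lo=5000, hi=9000, threshold=0.02, reduction=0.5, win=256):
    """Attenuate a sibilance band only where its short-term energy is high.
    Generic illustration; real de-essers use smoother detection and crossovers."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, x)
    out = x.copy()
    for start in range(0, len(x) - win, win):
        seg = slice(start, start + win)
        if np.sqrt(np.mean(band[seg] ** 2)) > threshold:   # sibilant energy over threshold
            out[seg] = x[seg] - reduction * band[seg]       # pull down just that band
    return out

fs = 48000
voice = 0.05 * np.random.randn(fs)    # stand-in for one second of a vocal clip
processed = simple_deess(voice, fs)
print(processed.shape)
```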

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants the one-touch processing but also the ability to dive into the specific audio spectrum being affected and see how the falloff is being performed. In addition, there are presets such as male vocals, female speech, etc., to jump immediately to where you need help. I personally find the De-Esser Pro more useful than the De-Esser; I can really shape the plugin. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with the Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of the Era-D, it’s the only plugin not described by its own title, funnily enough; it is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — with a different percentage of processing applied to each region — but can also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use two mics to fix a suboptimal recording on one of the two mics, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It’s possible to only use one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Bundle Pro — and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Pro Bundle is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, there are five buttons that let you focus where the processing occurs: all frequencies (flat), high frequencies, low frequencies, high and low frequencies, and mid frequencies. I love clicking the power button to hear the difference with and without the noise removal, but also dialing the knob around to really get the noise removed without going overboard. Whether you are removing noise in video or audio, there is a fine art to noise reduction, and the Era 4 Noise Remover makes it easy … even for an online editor.
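
To give a sense of what “dialing in the amount of processing” can mean behind the scenes, here is classic spectral subtraction: learn a noise profile from a quiet stretch, then subtract a scaled version of it from every frame. This is a generic textbook approach, not Accusonus’ patented processing, and the `amount` knob simply plays the role of the processing dial.

```python
import numpy as np
from scipy.signal import stft, istft

def reduce_noise(x, fs, noise_clip, amount=0.8, nperseg=1024):
    """Spectral subtraction: estimate the noise spectrum from `noise_clip`,
    scale it by `amount` and subtract it from every frame of `x`."""
    _, _, noise_spec = stft(noise_clip, fs, nperseg=nperseg)
    noise_mag = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

    _, _, spec = stft(x, fs, nperseg=nperseg)
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - amount * noise_mag, 0.05 * mag)   # floor keeps "robot" artifacts down
    _, y = istft(cleaned * np.exp(1j * phase), fs, nperseg=nperseg)
    return y

fs = 48000
noise = 0.02 * np.random.randn(fs)                                 # one second of room hiss
speech = 0.3 * np.sin(2 * np.pi * 220 * np.arange(fs * 2) / fs)    # stand-in for a voice
cleaned = reduce_noise(speech + 0.02 * np.random.randn(fs * 2), fs, noise, amount=0.8)
print(len(cleaned), "samples processed")
```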

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus has also included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one disappears because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, to help normalize your dialogue. Think of the tight mode as being much more distinctive than a normal interview conversation; Accusonus describes tight as a more focused “radio” sound. The Emphasis button helps to address issues when the speaker turns away from a microphone and introduces tonal problems, and Breath control is a simple switch for taming audible breaths.

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Bundle Pro are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore lost audio due to clipping. If you recorded audio at high gain and it came out horribly, then it’s probably been clipped. De-Clipper tries to salvage this clipped audio by recreating overly saturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That’s when you should try De-Clipper. There are two modes: normal/standard use and one for trickier cases that take a little more processing power.

The final plugin, Plosive Remover, focuses on artifacts typically caused by “p” and “b” sounds. These can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops will easily be repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only what is being affected. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to touch. The Plosive Remover is another amazing plugin that, when you need it, does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, along with the installers; that’s also where you can grab them if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Killer Tracks rebrands as Universal Production Music

Production music company Killer Tracks has rebranded as Universal Production Music. The new name strengthens alignment with parent company Universal Music Group.

As part of its rebrand, Universal Production Music has launched a new US website. Using the new theme “Find Your Anthem,” the site provides intuitive tools for searching, sharing and collaborating, all designed to help users discover unique tracks to tell their stories and make their projects stand out. New features include a “My Account” section that allows users to control access, download tracks, manage licenses and pay invoices.

“Customers will gain faster access to tracks, simplified licensing and more great music,” notes VP of repertoire Carl Peel. “At the same time, they can still speak directly with our music search specialists for help in finding that perfect track and building playlists. Our licensing experts will continue to provide guidance with questions related to rights and usage.”

Drawing on a roster of talent that includes top composers, producers and artists, Universal Production Music releases more than 30 albums of original music each month. It also offers more than 150 curated playlists organized by theme.

“We look forward to working closely with our colleagues in the US to share insights into emerging musical trends, develop innovative services and pursue co-production ventures,” says Jane Carter, managing director of Universal Production Music, UK. “Most importantly, our customers will enjoy an even wider selection of premium music to bring their projects to life.”

Nugen’s new navigable alert solution for VisLM loudness metering tool

Nugen Audio will be at IBC with the latest updates to its VisLM loudness metering software. VisLM now offers a “Flag” feature that builds on the Alert functionality found in previous versions of the plug-in, letting users navigate through True Peak and short-term/momentary loudness alerts, as well as manual flags for other points of interest. The update also adds the latest maximum loudness range (LRA 18) to the Netflix preset, which will benefit productions supplying content to the SVOD platform, along with navigable/visual alerts that further simplify operation.

VisLM’s user interface is focused on the industry’s standard loudness parameters, such as the newly implemented LRA 18 for Netflix productions. Editors have access to detailed historical information that helps them hit the target every time, while additional loudness logging and timecode functions allow for analysis and proof of compliance.
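
Nugen hasn’t published VisLM’s internals, but the underlying measurement follows the ITU-R BS.1770 family of standards, so a quick offline sanity check can be scripted with open-source tools. The sketch below uses the pyloudnorm and soundfile Python libraries to report integrated loudness against a placeholder target; the file name and target are assumptions, and loudness range (LRA) would need a separate calculation.

```python
# Hedged sketch: offline integrated-loudness check with the open-source
# pyloudnorm library (BS.1770-style gated measurement). Not Nugen's code;
# the target value and file path are placeholders.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -24.0                          # swap in your delivery spec

data, rate = sf.read("mix_stem.wav")         # float samples, mono or multichannel
meter = pyln.Meter(rate)                     # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LUFS "
      f"({loudness - TARGET_LUFS:+.1f} LU from target)")
```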

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is the reason why the sound editing and mixing on Season 3 of HBO’s True Detective has been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; and re-recording mixers Tateum Kohut and Greg Orloff, who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5.

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on-set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn’t want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War and Modern Memory”? You’re dealing with three different timelines from three different eras: 1980, 1990 and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War and Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since; it still elicits the same feeling. That’s the mark of great filmmaking and storytelling, and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War and Modern Memory”? Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: One Thousand Birds sound designer Torin Geller

This sound pro was initially interested in working in a music studio, but once he got a taste of audio post, there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects: dialogue edit, sound design and mix, as well as help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how they can interplay in abstract and interesting ways in animation that aren’t necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.

The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from the divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb, and, as you can imagine, the sounds of those neighborhoods vary.

                           
Eric Hirsch and David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances’ friend Diane (Molly Shannon) has taken up residence in a Manhattan high-rise and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whir of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls’ basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies, and it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So, I surround myself with smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.

Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get… maybe, carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment and the cuts endured don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season; he is also nominated for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple of different dialects; that really helped to sell location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattley world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile when we go to the grittiness of the black-ops site in Yemen with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have off and on screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out, added more elements to expand the sound more into the surrounds. The show was mixed in home Dolby Atmos, so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them even farther than we had. Then we stripped out some to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

The guns were those original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

In the opening sequence with the bombs, there was debate about whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, even though in reality you’d hear an explosion that happens miles away much later than you’d see it.

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all… or rarely it’s one person. It does take a village and we had great support from the producers. They were very intentional on sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed and that was really appreciated.

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super good sounding episodes; it’s a great sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how the technology has changed so much. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used Atmos mixing; they really used the sound field and used extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
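
Auto-Align Post is a commercial plugin and its processing isn’t public, but the core problem it solves (two mics capturing the same line at slightly different distances) can be illustrated with a simple cross-correlation delay estimate. The Python sketch below shows that bare-bones idea only: it corrects a single fixed offset, not the time-varying alignment a real tool performs, and the function name and 2,000-sample search window are illustrative.

```python
# Bare-bones time-alignment sketch: estimate the offset between a boom and a
# lav recording of the same line via cross-correlation, shift the lav, then
# sum. Illustrative only; not how Auto-Align Post actually works.
import numpy as np
from scipy.signal import correlate, correlation_lags

def align_and_sum(boom, lav, max_lag=2000):
    xcorr = correlate(boom, lav, mode="full")
    lags = correlation_lags(boom.size, lav.size, mode="full")
    keep = np.abs(lags) <= max_lag               # ignore implausible offsets
    lag = lags[keep][np.argmax(xcorr[keep])]     # offset that best lines them up
    lav_aligned = np.roll(lav, lag)              # crude shift; pad in practice
    return 0.5 * (boom + lav_aligned)            # simple equal-weight blend
```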

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
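
To make the gain-staging idea concrete, here is a rough Python sketch of several gentle compression stages run in series instead of one heavy pass. It is a static approximation with made-up settings, not Cook’s actual chain.

```python
# Gain-staging sketch: several light compression "taps" in series rather than
# one squashing pass. Static gain computer (no attack/release) for brevity;
# the threshold, ratio and makeup values are illustrative only.
import numpy as np

def soft_compress(audio, threshold_db=-18.0, ratio=1.5, makeup_db=1.0):
    level_db = 20.0 * np.log10(np.abs(audio) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)      # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db    # gentle reduction + makeup
    return audio * 10.0 ** (gain_db / 20.0)

def staged_chain(audio, stages=5):
    for _ in range(stages):                              # five light taps
        audio = soft_compress(audio)
    return audio
```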

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
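
What riding the fader (or a side-chained compressor) accomplishes can be sketched in a few lines: follow the dialogue’s envelope and dip the background feed whenever dialogue is present. The Python below is a rough illustration with placeholder settings, not the actual Veep mix chain.

```python
# Side-chain-style ducking sketch: follow the dialogue envelope and pull the
# background TV feed down while dialogue is present. Roughly what riding a
# fader does by hand; every setting here is illustrative.
import numpy as np

def duck(background, dialogue, rate, dip_db=-9.0, win_ms=50.0):
    win = max(int(rate * win_ms / 1000.0), 1)
    kernel = np.ones(win) / win
    envelope = np.sqrt(np.convolve(dialogue ** 2, kernel, mode="same"))  # RMS-ish
    active = envelope > 0.01                      # crude "dialogue present" gate
    gain = np.where(active, 10.0 ** (dip_db / 20.0), 1.0)
    gain = np.convolve(gain, kernel, mode="same")          # smooth the fader moves
    return background * gain
```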

Going back to what has changed over the last three years, one of the big differences is that we have more time per episode to mix the show. We got more and more time from the first mix to the last, and now we have twice as much time to mix the show as when we started.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.
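
The PA treatment Cook describes (roll off the lows and highs, then let the sound bounce around the hall) can be roughed out in code. The Python sketch below band-limits the feed and stacks a few pre-delayed, decaying echoes as a stand-in for a proper reverb; every setting is a placeholder, not the chain used on the show.

```python
# PA "futz" sketch: band-limit the feed like a small speaker, then add a
# pre-delayed, decaying echo tail as a crude stand-in for a hall reverb.
# A real mix would use a convolution or algorithmic reverb instead.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def pa_futz(audio, rate, low_hz=250.0, high_hz=4000.0,
            predelay_ms=60.0, tail_gain=0.35, repeats=6):
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=rate, output="sos")
    futzed = sosfiltfilt(sos, audio)                 # small-speaker band-limit

    out = np.copy(futzed)
    delay = int(rate * predelay_ms / 1000.0)
    for n in range(1, repeats + 1):                  # each bounce later and quieter
        offset = n * delay
        if offset >= futzed.size:
            break
        echo = np.zeros_like(futzed)
        echo[offset:] = futzed[:futzed.size - offset] * tail_gain ** n
        out += echo
    return out
```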

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then, the level of all the loop group specifics and chanting — from the ramp up of the chanting from zero to full volume — we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear. There was also the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots put the guy in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of cleanup in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot wires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From their range of audio restoration tools within RX to their measurement and visualization tools in Ozone to their creative approach to VST effects and instruments like Iris, Breaktweaker a