
Localization: Removing language barriers from global content

By Jennifer Walden

Foreign films aren’t just for cinephiles anymore. Streaming platforms are serving up international content to the masses. There are incredible series — like Netflix’s Spanish series Money Heist, the Danish series The Rain and the German series Dark — that would otherwise have been unknown to American audiences. The same holds true for American content reaching foreign audiences. For instance, the Starz series American Gods is available in French. Great stories are always worth sharing, and language shouldn’t be the barrier that holds back the flood of global entertainment.

Now I know there are purists who feel a film or show should be experienced in its original language, but admit it, sometimes you just don’t feel like reading subtitles. (Or, if you do, you can certainly watch those aforementioned shows with subtitles and hear the original language.) So you pop on the audio for your preferred language and settle in.

Chris Carey in the Burbank studio

Dubbing used to be a poorly lipsynced affair, with bad voiceovers that didn’t remotely fit the characters on screen. Not so anymore. In fact, dubbing has evolved so much that it’s earned a new moniker — localization. The surge in globally produced content has dramatically increased the demand for localization. And as they say, practice makes perfect… or better, anyway.

Two major localization providers — BTI Studios and Iyuno Media Group — have recently joined forces under the Iyuno brand, which is now headquartered in London. Together, they have 40 studio facilities in 30 countries and support 82 languages, according to Chris Carey, the company’s chief revenue officer and managing director of the Americas.

Those are impressive numbers. But what does this mean for the localization end result?

Iyuno localizes audio locally: the language work for a specific market happens in that market. This means the language is current. The actors aren’t just fluent; they’re native speakers. “Dialects change really fast. Slang changes. Colloquialisms change. These things are changing all the time, and if you’re not in the market with the target audience you can miss a lot of things that a good geographically diverse network of performers can give you,” says Carey.

Language expertise doesn’t end with actor performance. There are also the scripts and subtitles to think about. Localization isn’t a straight translation. There’s the process of script adaptation in which words are chosen based on meaning (of course) but also on syllable count in order to match lipsync as closely as possible. It’s a feat that requires language fluency and creativity.

BTI France

“If you think about the Eastern languages, and the European and Eastern European languages, they use a lot of consonants and syllables to make a simple English word. So we’re rewriting the script to use a different word that means the same thing but will fit better with the actor on-screen. So when the actor says the line in Polish and it comes out of what appears to be the mouth of the American actor on-screen, the lipsync is better,” explains Carey.

Iyuno doesn’t just do translations — dubbing and subtitles — to and from English. Of the 82 languages it covers, it can translate any one of those into another. This process requires a network of global linguists and a cloud-based infrastructure that can support tons of video streaming and asset sharing — including the “dubbing script” that’s been adapted into the destination language.

The magic of localization is 49% script adaptation, 49% dialogue editing and 2% processing in Avid Pro Tools, like time shifting and time compression/expansion to finesse the sync. “You’re looking at the actors on screen and watching their lip movement and trying to adjust this different language to come out of their mouth as close as possible,” says Carey. “There isn’t an automated-fit sound tool that would apply for localization. The actor, the director and the engineer are in the studio together working on the sync, adjusting the lines and editing the takes.”
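
For the curious, that 2% of Pro Tools processing boils down to time shifting and time-stretching a recorded line until it sits inside the on-screen mouth movement. Below is a minimal, purely illustrative sketch of the time compression/expansion step using the open-source librosa library; it is not Iyuno’s actual workflow, and the file names and durations are hypothetical.

import librosa
import soundfile as sf

def fit_line_to_picture(dub_path, out_path, target_seconds):
    """Stretch or compress a dubbed line so it lasts target_seconds."""
    y, sr = librosa.load(dub_path, sr=None)      # keep the original sample rate
    current_seconds = len(y) / sr
    rate = current_seconds / target_seconds      # >1 compresses, <1 expands
    stretched = librosa.effects.time_stretch(y, rate=rate)
    sf.write(out_path, stretched, sr)

# e.g., squeeze a 2.3-second Polish line into the 2.0 seconds of lip movement
fit_line_to_picture("line_pl_take3.wav", "line_pl_fit.wav", target_seconds=2.0)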

As the voice record session is happening, “sometimes the actor will suggest a better way to say a line, too, and they’ll do an ‘as recorded script,’” says Carey. “They’ll make red lines and markups to the script, and all of that workflow we have managed into our technology platform, so we can deliver back to the customer the finished dub, the mix, and the ‘as recorded script’ with all of the adaptations and modifications that we had done.”

Darkest Hour is just one of the many titles they’ve worked on.

Iyuno’s technology platform (its cloud-based collaboration infrastructure) is custom-built. It can be modified and updated as needed to improve the workflow. “That backend platform does all the script management and file asset management; we are getting the workflow very efficient. We break all the scripts down into line counts by actor, so he/she can do the entire session’s worth of lines throughout that show. Then we’ll bring in the next actor to do it,” says Carey.

Pro Tools is the de facto DAW for all the studios in the Iyuno Media Group. Having one DAW as the standard makes it easy to share sessions between facilities. When it comes to mic selection, Carey says the studios’ engineers make those choices based on what’s best for each project. He adds, “And then factor in the acoustic space, which can impart a character to the sound in a variety of different ways. We use good studios that we built with great acoustic properties and use great miking techniques to create a sound that is natural and sounds like the original production.”

Iyuno is looking to improve the localization process even further by building up a searchable database of actors’ voices. “We’re looking at a bit more sophisticated science around waveform analysis. You can do a Fourier transform on the audio to get a spectral analysis of somebody’s voice. We’re looking at how to do that to build a sound-alike library so that when we have a show, we can listen to the actor we are trying to replace and find actors in our database that have a voice match for that. Then we can pull those actors in to do a casting test,” says Carey.
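
Iyuno hasn’t published how its voice-matching database works, but Carey’s description maps onto a standard audio-analysis recipe: reduce each voice to a spectral fingerprint derived from short-time Fourier transforms, then rank the library by similarity. The hedged sketch below uses averaged MFCCs from the open-source librosa library as the fingerprint; the function and file names are hypothetical.

import numpy as np
import librosa

def voice_fingerprint(wav_path):
    """Average MFCCs (built on short-time Fourier spectra) into one vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_sound_alikes(target_wav, library):
    """library maps actor name -> sample path; returns best matches first."""
    target = voice_fingerprint(target_wav)
    scores = {name: cosine(target, voice_fingerprint(path))
              for name, path in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)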

Subtitles
As for subtitles, Iyuno is moving toward a machine-assisted workflow. According to Carey, Iyuno is inputting data on language pairs (source and destination) into software that trains on that combination. Once it “learns” how to do those translations, the software will provide a first pass “in a pretty automated fashion, quite faster than a human would have done that. Then a human QCs it to make sure the words are right, makes some corrections, and fixes intentions that weren’t literal and need to be adjusted,” he says. “So we’re bringing a lot of advancement in with AI and machine learning to the subtitling world. We will expect that to continue to move pretty dramatically toward an all-machine-based workflow.”
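
The train-on-a-language-pair, machine-first-pass, human-QC pipeline Carey describes can be approximated with public tools. As a stand-in only (Iyuno’s in-house system is proprietary), the sketch below generates a rough first pass with an open-source MarianMT model from Hugging Face; the English-to-German pair and the sample lines are arbitrary examples.

from transformers import MarianMTModel, MarianTokenizer

def subtitle_first_pass(lines, model_name="Helsinki-NLP/opus-mt-en-de"):
    """Machine-generate draft subtitles; a human editor then QCs each line."""
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(lines, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

draft = subtitle_first_pass(["Where were you last night?",
                             "I can explain everything."])
# 'draft' now holds machine subtitles awaiting human correction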

But will machines eventually replace human actors on the performance side? Carey asks, “When were you moved by Google assistant, Alexa or Siri talking to you? I reckon we have another few turns of the technology crank before we can have a machine produce a really good emotional performance with a synthesized voice. It’s not there yet. We’re not going to have that too soon, but I think it’ll come eventually.”

Main Image: Starz’s American Gods – a localization client.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks his injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots put the guy in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of cleanup in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot-wires the car and peels out, hitting a motorcycle and a mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Digital Arts expands team, adds Nutmeg Creative talent

Digital Arts, an independently owned New York-based post house, has added several former Nutmeg Creative talent and production staff members to its roster — senior producer Lauren Boyle, sound designer/mixers Brian Beatrice and Frank Verderosa, colorist Gary Scarpulla, finishing editor/technical engineer Mark Spano and director of production Brian Donnelly.

“Growth of talent, technology, and services has always been part of the long-term strategy for Digital Arts, and we’re fortunate to welcome some extraordinary new talent to our staff,” says Digital Arts owner Axel Ericson. “Whether it’s long-form content for film and television, or working with today’s leading agencies and brands creating dynamic content, we have the talent and technology to make all of our clients’ work engaging, and our enhanced services bring their creative vision to fruition.”

Brian Donnelly, Lauren Boyle and Mark Spano.

As part of this expansion, Digital Arts will unveil additional infrastructure featuring an ADR stage/mix room. The current facility boasts several state-of-the-art audio suites, a 4K finishing theater/mixing dubstage, four color/finishing suites and expansive editorial and production space, which is spread over four floors.

The former Nutmeg team has hit the ground running, working with their long-time ad agency, network, animation and film studio clients. Gary Scarpulla worked on color for HBO’s Veep and Los Espookys, while Frank Verderosa has been working with agency Ogilvy on several Ikea campaigns. Beatrice mixed spots for Tom Ford’s cosmetics line.

In addition, Digital Arts’ in-house theater/mixing stage has proven to be a valuable resource for some of the most popular TV productions, including recent commentary recording sessions for the legendary HBO series Game of Thrones and the final season of Veep.

Especially noteworthy is colorist Ericson’s and finishing editor Mark Spano’s collaboration with Oscar-winning directors Karim Amer and Jehane Noujaim to bring to fruition the Netflix documentary The Great Hack.

Digital Arts also recently expanded its offerings to include production services. The company has already delivered projects for agencies Area 23, FCB Health and TCA.

“Digital Arts’ existing infrastructure was ideally suited to leverage itself into end-to-end production,” Donnelly says. “Now we can deliver from shoot to post.”

Tools employed across post are Avid Pro Tools, D-Control ES and S3 for audio post, and Avid Media Composer, Adobe Premiere and Blackmagic Resolve for editing. Color grading is via Resolve.

Main Image: (L-R) Frank Verderosa, Brian Beatrice and Gary Scarpulla

 

Mixing sounds of fantasy and reality for Rocketman

By Jennifer Walden

Paramount Pictures’ Rocketman is a musical fantasy about the early years of Elton John. The story is told through flashbacks, giving director Dexter Fletcher the freedom to bend reality. He blended memories and music to tell an emotional truth as opposed to delivering hard facts.

Mike Prestwood Smith

The story begins with Elton John (Taron Egerton) attending a group therapy session with other recovering addicts. Even as he’s sharing details of his life, he’s stretching the truth. “His recollection of the past is not reliable. He often fantasizes. He’ll say a truth that isn’t really the case, because when you flash back to his memory, it is not what he’s saying,” says BAFTA-winning re-recording mixer Mike Prestwood Smith, who handled the film’s dialogue and music. “So we’re constantly crossing the line of fantasy even in the reality sections.”

For Smith, finding the balance between fantasy and reality was what made Rocketman unique. There’s a sequence in which pre-teen Elton (Kit Connor) evolves into grown-up Elton to the tune of “Saturday Night’s Alright for Fighting.” It was a continuous shot, so the camera tracks pre-teen Elton playing the piano; he then gets into a bar fight that spills into an alleyway that leads to a fairground, where a huge choreographed dance number happens. Egerton (whose actual voice is featured) is singing the whole way, and there’s a full-on band under him, but specific effects from his surrounding environment poke through the mix. “We have to believe in this layer of reality that is gluing the whole thing together, but we never let that reality get in the way of enjoying the music.”

Smith helped the pre-recorded singing to feel in-situ by adding different reverbs — like Audio Ease’s Altiverb, Exponential Audio’s PhoenixVerb and Avid’s ReVibe. He created custom reverbs from impulse responses taken from the rooms on set to ground the vocal in that space and help sell the reality of it.
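
The operation underneath all of those tools is convolution: the dry vocal is convolved with a room’s impulse response, so the recording takes on that room’s reflections. As a rough sketch of the technique only (not Smith’s actual Altiverb chain), assuming mono WAV files at matching sample rates and hypothetical file names:

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def place_vocal_in_room(vocal_path, ir_path, out_path, wet=0.3):
    """Blend a dry vocal with a convolution-reverb version of itself."""
    vocal, sr = sf.read(vocal_path)
    ir, ir_sr = sf.read(ir_path)
    assert sr == ir_sr, "resample the impulse response to match the vocal first"
    wet_sig = fftconvolve(vocal, ir)[: len(vocal)]   # apply the room response
    wet_sig /= max(np.max(np.abs(wet_sig)), 1e-9)    # normalize the wet signal
    sf.write(out_path, (1 - wet) * vocal + wet * wet_sig, sr)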

For instance, when Elton is in the alleyway, Smith added a slap verb to Egerton’s voice to make it feel like it’s bouncing off the walls. “But once he gets into the main verses, we slowly move away from reality. There’s this flux between making the audience believe that this is happening and then suspending that belief for a bit so they can enjoy the song. It was a fine line and very subjective,” he says.

He and re-recording mixer/supervising sound editor Matthew Collinge spent a lot of time getting it to play just right. “We had to be very selective about the sound of reality,” says Smith. “The balance of that whole sequence was very complex. You can never do those scenes in one take.”

Another way Smith helped the pre-recorded vocals to sound realistic was by creating movement using subtle shifts in EQ. When Elton moves his head, Smith slightly EQ’d Egerton’s vocals to match. These EQ shifts “seem little, but collectively they have a big impact on selling that reality and making it feel like he’s actually performing live,” says Smith. “It’s one of those things that if you don’t know about it, then you just accept it as real. But getting it to sound that real is quite complicated.”

For example, there’s a scene in which Egerton is working out “Your Song,” and the camera cuts from upstairs to downstairs. “We are playing very real perspectives using reverb and EQ,” says Smith. Then, once Elton gets the song, he gives Bernie Taupin (Jamie Bell) a knowing look. The music gets fleshed out with a more complicated score, with strings and guitar. Next, Elton is recording the song in a studio. As he’s singing, he’s looking down and playing piano. Smith EQ’d all of that to add movement, so “it feels like that performance is happening at that time. But not one single sound of it is from that moment on set. There is a laugh from Bernie, a little giggle that he does, and that’s the only thing from the on-set performance. Everything else is manufactured.”
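
One plausible, hypothetical version of those head-movement EQ shifts is a gentle high-shelf cut as the singer turns off-mic, which slightly darkens the voice. The sketch below builds the standard Audio EQ Cookbook high-shelf biquad; the corner frequency and depth are illustrative guesses, not Smith’s settings.

import numpy as np
from scipy.signal import lfilter

def high_shelf(x, fs, f0=4000.0, gain_db=-2.0):
    """RBJ high-shelf biquad; a small negative gain darkens the vocal."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    cosw = np.cos(w0)
    alpha = np.sin(w0) / 2 * np.sqrt(2)              # shelf slope S = 1
    root = 2 * np.sqrt(A) * alpha
    b = np.array([A * ((A + 1) + (A - 1) * cosw + root),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - root)])
    a = np.array([(A + 1) - (A - 1) * cosw + root,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - root])
    return lfilter(b / a[0], a / a[0], x)            # filter the vocal samples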

In addition to EQ and reverb, Smith used plugins from Helsinki-based sound company Oeksound to help the studio recordings sound like production recordings. In particular, Oeksound’s Spiff plugin was useful for controlling transients “to get rid of that close-mic’d sound and make it feel more like it was captured on set,” Smith says. “Combining EQ and compression and adding reverb helped the vocals to sound like sync, but at the same time, I was careful not to take away too much from the quality of the recording. It’s always a fine line between those things.”

The most challenging transitions were going from dialogue into singing. Such was the case with quiet moments like “Your Song” and “Goodbye Yellow Brick Road.” In the latter, Elton quietly sings to his reflection in a mirror backstage. The music slowly builds up under his voice as he takes off down the hallway and by the time he hops into a cab outside it’s a full-on song. Part of what makes the fantasy feel real is that his singing feels like sync. The vocals had to sound impactful and engage the audience emotionally, but at the same time they had to sound believable — at least initially. “Once you’re into the track, you have the audience there. But getting in and out is hard. The filmmakers want the audience to believe what they’re seeing, that Taron was actually in the situations surrounded by a certain level of reality at any given point, even though it’s a fantasy,” says Smith.

The “Rocketman” song sequence is different though. Reality is secondary and the fantasy takes control, says Smith. “Elton happens to be having a drug overdose at that time, so his reality becomes incredibly subjective, and that gives us license to play it much more through the song and his vocal.”

During “Rocketman,” Elton is sinking to the bottom of a swimming pool, watching a younger version of himself play piano underwater. On the music side, Smith was able to spread the instruments around the Dolby Atmos surround field, placing guitar parts and effect-like orchestrations into speakers discretely and moving those elements into the ceiling and walls. The bubble sound effects and underwater atmosphere also add to the illusion of being submerged. “Atmos works really well when you have quiet, and you can place sounds in the sound field and really hear them. There’s a lot of movement musically in Rocketman and it’s wonderful to have that space to put all of these great elements into,” says Smith.

That sequence ends with Elton coming on stage at Dodger Stadium and hitting a baseball into the massive crowd. The whole audience — 100,000 people — sing the chorus with him. “The moment the crowd comes in is spine-tingling. You’re just so with him at that point, and the sound and the music are doing all of that work,” he explains.

The Music
The music was a key ingredient to the success of Rocketman. According to Smith, they were changing performances from Egerton and also orchestrations right through the post sound mix, making sure that each piece was the best it could be. “Taron [Egerton] was very involved; he was on the dub stage a lot. Once everything was up on the screen, he’d want to do certain lines again to get a better performance. So, he did pre-records, on-set performances and post recording as well,” notes Smith.

Smith needed to keep those tracks live through the mix to accommodate the changes, so he and Collinge chose Avid S6 control surfaces and mixed in-the-box as opposed to printing the tracks for a mix on a traditional large-format console. “To have locked down the music and vocals in any way would have been a disaster. I’ve always been a proponent of mixing inside Pro Tools mainly because workflow-wise, it’s very collaborative. On Rocketman, having the tracks constantly addressable — not just by me but for the music editors Cecile Tournesac and Andy Patterson as well — was vital. We were able to constantly tweak bits and pieces as we went along. I love the collaborative nature of making and mixing sound for film, and this workflow allows for that much more so than any other. I couldn’t imagine doing this any other way,” says Smith.

Smith and Collinge mixed in native Dolby Atmos at Goldcrest London in Theatre 1 and Theatre 2, and also at Warner Bros. De Lane Lea. “It was such a tight schedule that we had all three mixing stages going for the very end of it, because it got a bit crazy as these things do,” says Smith. “All the stages we mixed at had S6s, and I just brought the drives with me. At one point we were print mastering and creating M&Es on one stage and doing some fold-downs on a different stage, all with the same session. That made it so much more straightforward and foolproof.”

As for the fold-down from Atmos to 5.1, Smith says it was nearly seamless. The pre-recorded music tracks were mixed by music producer Giles Martin at Abbey Road. Smith pulled those tracks apart, spread them into the Atmos surround field and then folded them down to 5.1. “Ultimately, the mixing that Giles Martin did at Abbey Road was a great thing because it meant the fold-downs really had the best backbone possible. Also, the way that Dolby has been tweaking their fold-down processing, it’s become something special. The fold-downs were a lot easier than I thought they’d be,” concludes Smith.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Human opens new Chicago studio

Human, an audio and music company with offices in New York, Los Angeles and Paris, has opened a Chicago studio headed up by veteran composer/producer Justin Hori.

As a composer, Hori’s work has appeared in advertising, film and digital projects. “Justin’s artistic output in the commercial space is prolific,” says Human partner Gareth Williams. “There’s equal parts poise and fun behind his vision for Human Chicago. He’s got a strong kinship and connection to the area, and we couldn’t be happier to have him carve out our footprint there.”

From learning to DJ at age 13 to working at Gramaphone Records to studying music theory and composition at Columbia College, Hori’s immersion in the Chicago music scene has always influenced his work. He began his career at com/track and Comma Music before moving to open Comma’s Los Angeles office. From there, Hori joined Squeak E Clean, where he served as creative director for five years. He returned to Chicago in 2016.

Hori is known for producing unexpected yet perfectly spot-on pieces of music for advertising, including his track “Da Diddy Da,” which was used in the four-spot summer 2018 Apple iPad campaign. His work has won top industry honors, including D&AD Pencils, The One Show, Clio and AICP Awards, and a Cannes Gold Lion for Best Use of Original Music.

Meanwhile, Post Human, the audio post sister company run by award-winning sound designer and engineer Sloan Alexander, continues to build momentum with the addition of a second 5.1 mixing suite in NYC. Plans for similar build-outs in both LA and Chicago are currently underway.

With services spanning composition, sound design and mixing, Human works in advertising, broadcast, digital and film.

Posting director Darren Lynn Bousman’s horror film, St. Agatha

Atlanta’s Moonshine Post helped create a total post production pipeline — from dailies to finishing — for the film St. Agatha, directed by Darren Lynn Bousman (Saw II, Saw III, Saw IV, Repo! The Genetic Opera).

The project, from producers Seth and Sara Michaels, was co-edited by Moonshine’s Gerhardt Slawitschka and Patrick Perry and colored by Moonshine’s John Peterson.

St. Agatha is a horror film that shot in the town of Madison, Georgia. “The house we needed for the convent was perfect, as the area was one of the few places that had not burned down during the Civil War,” explains Seth Michaels. “It was our first time shooting in Atlanta, and the number one reason was because of the tax incentive. But we also knew Georgia had an infrastructure that could handle our production.”

What the producers didn’t know during production was that Moonshine Post could handle all aspects of post; the studio was initially brought in only for dailies. When the opportunity arose to do a producer’s cut, they returned to Moonshine Post.

Time and budget dictated everything, and Moonshine Post was able to offer two editors working in tandem to edit a final cut. “Why not cut in collaboration?” suggested Drew Sawyer, founder of Moonshine Post and executive producer. “It will cut the time in half, and you can explore different ideas faster.”

“We quite literally split the movie in half,” reports Perry, who, along with Slawitschka, cut on Adobe Premiere. “It’s a 90-minute film, and there was a clear break. It’s a little unusual, I will admit, but almost always when we are working on something, we don’t have a lot of time, so splitting it in half works.”

Patrick Perry

Gerhardt Slawitschka

“Since it was a producer’s cut, when it came to us it was in Premiere, and it didn’t make sense to switch over to Avid,” adds Slawitschka. “Patrick and I can use both interchangeably, but prefer Premiere; it offers a lot of flexibility.”

“The editors, Patrick and Gerhardt, were great,” says Sara Michaels. “They watched every single second of footage we had, so when we recut the movie, they knew exactly what we had and how to use it.”

“We have the same sensibilities,” explains Slawitschka. “On long-form projects we take a feature in tandem, maybe split it in half or in reels. Or, on a TV series, each of us takes a few episodes, compares notes and arrives at a ‘group mind,’ which is our language of how a project is working. On St. Agatha, Patrick and I took a bit of a risk and generated a four-page document of proposed thoughts and changes. Some very macro, some very micro.”

Colorist John Peterson, a partner at Moonshine Post, worked closely with the director on final color using Blackmagic’s Resolve. “From day one, the first looks we got from camera raw were beautiful.” Typically, projects shot in Atlanta ship back to a post house in a bigger city, “and maybe you see it and maybe you don’t. This one became a local win, we processed dailies, and it came back to us for a chance to finish it here,” he says.

Peterson liked working directly with the director on this film. “I enjoyed having him in session because he’s an artist. He knew what he was looking for. On the flashbacks, we played with a variety of looks to define which one we liked. We added a certain amount of film grain, and stylistically, for some scenes, we used heavy vignetting and heavy keys with isolation windows. Darren is a director, but he also knows the terminology, which gave me the opportunity to take his words and put them on the screen for him. At the end of the week, we had a successful film.”

John Peterson

The recent expansion of Moonshine Post, which included a partnership with audio company Bare Knuckles Creative and visual effects company Crafty Apes, “was necessary, so we could take on the kind of movies and series we wanted to work with,” explains Sawyer. “But we were very careful about what we took and how we expanded.”

They recently secured two AMC series, along with projects from Netflix. “We are not trying to do all the post in town, but we want to foster and grow the post production scene here so that we can continue to win people’s trust and solidify the Atlanta market,” he says.

Uncork’d Entertainment’s St. Agatha was in theaters and became available on-demand starting February 8. Look for it on iTunes, Amazon, Google Play, Vudu, Fandango Now, Xbox, Dish Network and local cable providers.

Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television, and branded content for clients such as NBC, Cosmopolitan and Vice, among others.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy

Behind the Title: New Math Managing Partner/EP Kala Sherman

Name: Kala Sherman

Company: New Math

Can you describe your company?
We are a bicoastal audio production company, with offices in NYC and LA, specializing in original music, sound design, audio mix and music supervision.

What’s your job title?
Managing Partner/EP

What does that entail?
I do everything from managing our staff to producing projects to sales and development.

What would surprise people the most about what’s underneath that title?
I am an untrained but really good psychotherapist.

New Math, New York

What have you learned over the years about running a business?
It’s highly competitive and you have to continue to hustle and push the creative product in order to stay relevant. Also, it’s paramount to assemble the best talent and treat them with the utmost respect; without our producers or composers there wouldn’t be a business.

A lot of it must be about trying to keep employees and clients happy. How do you balance that?
We face at least one root challenge: How do you keep both your clients and your creative staff happy? I think how you approach and sell an idea to the composers while still delivering what the client needs is a real art form. It gets tricky with limited music budgets these days, but I’ve found over the years that there are ways to structure the deals where the clients feel like they can get the music and sound design they need while the composers feel well-compensated and creatively fulfilled.

What’s your favorite part of the job?
I love the fact that we are creating music and I get to be part of that process.

What’s your least favorite?
Competitive demoing. Partnering with clients is just way more fun than knowing you are competing with other companies. And not too ironically, it usually results in the best and freshest creative product.

What is your favorite time of the day?
I love the evenings when I get home and hang with my daughter.

If you didn’t have this job, what would you be doing instead?
I always knew I had to work in music, so I would have probably stayed on the label side of the music business.

Can you name some recent clients?
Google, Trojan, Smirnoff, KFC, Chobani, Walmart, Zappos and ESPN.

Name three pieces of technology you can’t live without.
Spotify. Laptop. iPhone.

You recently added final mix capabilities in both of your locations. Can you talk about why now was the time?
We want to be a full-service audio company for our clients. It just makes sense when many of our clients want to work with one company for all their audio needs. If we are already providing the music and sound design, why not record the VO and provide the mix as well? Plus, it’s really fun to have clients in the studio.

What tools will be used for the mixing rooms?
Focal 5.1 monitor system in both the NY and LA mix rooms. Pro Tools mix system with the latest plugin suites. High-quality analog outboard gear from Neve, API, DW Fearn, Summit and more.

Any recent jobs in these studios you can talk about?
Yes. We just completed Chobani, Acuvue and Yellawood mixes.

Main Image: (L-R) New Math partners David Wittman, Kala Sherman, Raymond Loewy

Review: Audionamix IDC for cleaning dialogue

By Claudio Santos

Sound editing has many different faces. It is part of big-budget blockbuster movies and also an integral part of small hobby podcasting projects. Every project has its own sound needs. Some edit thousands upon thousands of sound effects. Others have to edit hundreds of hours of interviews. What most projects have in common, though, is that they circle around dialogue, whether in the form of character lines, interviews, narrators or any other format by which the spoken word guides the experience.

Now let’s be honest, dialogue is not always well recorded. Archival footage needs to be understood, even if the original recording was made with a microphone that was 20 feet away from the speaker in a basement full of machines. Interviews are often quickly recorded in the five minutes an artist has between two events while driving from point A to point B. And until electric cars are the norm, the engine sound will always be married to that recording.

The fact is, recordings are sometimes a little bit noisier than ideal, and it falls upon the sound editor to make it a little bit clearer.

To help with that endeavor, Audionamix has come out with the newest version of its IDC (Instant Dialogue Cleaner). I have been testing it on different kinds of material and must say that overall I’m very impressed with it. Let’s first get the awkward part of this conversation out of the way and see what the IDC is not:

– It is not a full-featured restoration workstation, such as iZotope RX.
– It does not depend on the cloud like other Audionamix plugins.
– It is not magic.

Honestly, all that is fine because what it does do, it does very well and in a very straightforward manner.

IDC aims to keep it simple. You get three controls plus output level and bypass. This makes trying out the plugin on different samples of audio a very quick task, which means you don’t waste time on clips that are beyond salvation.
The three controls you get are:
– Strength: The aggressiveness of the algorithm
– Background: Level of the separated background noise
– Speech: Level of the separated speech

Like all digital processing tools, things sound a bit techno-glitchy toward the extremes of the scales, but within reasonable parameters the plugin does a very good job of reducing background levels without garbling the speech too noticeably. I personally had fairly good results with strengths around 40% to 60% and background reductions of up to -24 dB. Anything more radical than that sounded heavily processed.

Now, it’s important to note that not all noise is the same. In fact, there are entirely different kinds of audio muck that obscure dialogue, and the IDC is more effective against some than others.

Noise reduction comparison between the original clip (1), Cedar DNS Two VST (2), Audionamix IDC (3) and iZotope RX 7 Voice Denoise (4). The clip features loud air conditioner noise in the background of close-mic’d dialogue. All plugins had their levels boosted by +4dB after processing.

– Constant broadband background noise (air conditioners, waterfalls, freezers): Here the IDC does fairly well. I couldn’t notice a lot of pumping at the beginning and end of phrases, and the background didn’t sound crippled either.

– Varying broadband background noise (distant cars passing, engines from inside cars): Here again, the IDC does a good job of increasing the dialogue/background ratio. It does introduce artifacts when the background noise spikes or varies very abruptly, but if the goal is to increase intelligibility then it is definitely a success in that area.

– Wind: On this kind of noise the IDC needs a helping hand from other processes. I tried to clean up some dialogue buried under heavy wind, and even though the wind was indeed lowered significantly, so was the speech under it, resulting in a pumping clip that went up and down following the shadow of the removed wind. I believe that with some pre-processing using high-pass filters and a little bit of limiting the results could have been better, so if you are buying this in an emergency to fix bad wind audio, I’d definitely keep that in mind. It does work well on light wind reduction, but there too it seems to benefit from some pre-processing.
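
To make the pre-processing idea concrete, here is a minimal sketch, assuming mono WAV files, of a high-pass filter that shaves low-frequency wind energy before the clip ever reaches IDC. It uses SciPy’s Butterworth design; the 120 Hz corner and fourth-order slope are guesses for illustration, not recommendations.

import soundfile as sf
from scipy.signal import butter, sosfilt

def highpass_for_wind(in_path, out_path, corner_hz=120, order=4):
    """Remove energy below corner_hz, where wind rumble lives."""
    audio, sr = sf.read(in_path)
    sos = butter(order, corner_hz, btype="highpass", fs=sr, output="sos")
    sf.write(out_path, sosfilt(sos, audio), sr)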

Summing Up
I am happily impressed by the plugin. It does not work miracles, but no one should really expect any tool to do so. It is great at improving the signal-to-noise ratio of your sound, and it does so with a very easy-to-use interface that lets you quickly decide whether you like the results or not. That alone is a plus worth keeping in mind.


Claudio Santos is a sound mixer and tech aficionado who works at Silver Sound in NYC. He has worked on sound projects ranging from traditional shows like Animal Planet’s I Was Prey to VR experiences like The Mile-Long Opera.

Making audio pop for Disney’s Mary Poppins Returns

By Jennifer Walden

As the song says, “It’s a jolly holiday with Mary.” And just in time for the holidays, there’s a new Mary Poppins musical to make the season bright. In theaters now, Disney’s Mary Poppins Returns is directed by Rob Marshall, who, with Chicago, Nine and Into the Woods on his resume, has become the master of modern musicals.

Renée Tondelli

In this sequel, Mary Poppins (Emily Blunt) comes back to help the now grown-up Michael (Ben Whishaw) and Jane Banks (Emily Mortimer) by attending to Michael’s three children: Annabel (Pixie Davies), John (Nathanael Saleh) and Georgie (Joel Dawson). It’s a much-needed reunion for the family, as Michael is struggling with the loss of his wife.

Mary Poppins Returns is another family reunion of sorts. According to Renée Tondelli, who, along with Eugene Gearty, supervised and co-designed the sound, director Marshall likes to use the same crews on all his films. “Rob creates families in each phase of the film, so we all have a shorthand with each other. It’s really the most wonderful experience you can have in a filmmaking process,” says Tondelli, who has worked with Marshall on five films, three of which were his musicals. “In the many years of working in this business, I have never worked with a more collaborative, wonderful, creative team than I have on Mary Poppins Returns. That goes for everyone involved, from the picture editor down to all of our assistants.”

Sound editorial took place in New York at Sixteen 19, the facility where the picture was being edited. Sound mixing was also done in New York, at Warner Bros. Sound.

In his musicals, Marshall weaves songs into scenes in a way that feels organic. The songs are coaxed from the emotional quotient of the story. That’s not only true for how the dialogue transitions into the singing, but also for how the music is derived from what’s happening in the scene. “Everything with Rob is incredibly rhythmic,” she says. “He has an impeccable sense of timing. Every breath, every footstep, every movement has a rhythmic cadence to it that relates to and works within the song. He does this with every artform in the production — with choreography, production design and sound design.”

From a sound perspective, Tondelli and her team worked to integrate the songs by blending the pre-recorded vocals with the production dialogue and the ADR. “We combined all of those in a micro editing process, often syllable by syllable, to create a very seamless approach so that you can’t really tell where they stop talking and start singing,” she says.

The Conversation
For example, near the beginning of the film, Michael is looking through the attic of their home on Cherry Tree Lane as he speaks to the spirit of his deceased wife, telling her how much he misses her in a song called “The Conversation.” Tondelli explains, “It’s a very delicate scene, and it’s a song that Michael was speaking/singing. We constantly cut between his pre-records and his production dialogue. It was an amazing collaboration between me, the supervising music editor Jennifer Dunnington and re-recording mixer Mike Prestwood Smith. We all worked together to create this delicate balance so you really feel that he is singing his song in that scene in that moment.”

Since Michael is moving around the attic as he’s performing the song, the environment affects the quality of the production sound. As he gets closer to the window, the sound bounces off the glass. “Mike [Prestwood Smith] really had his work cut out for him on that song. We were taking impulse responses from the end of the slates and feeding them into Audio Ease’s Altiverb to get the right room reverb on the pre-records. We did a lot of impulse responses and reverbs, and EQs to make that scene all flow, but it was worth it. It was so beautiful.”

The Bowl
They also captured impulse responses for another sequence, which takes place inside a ceramic bowl. The sequence begins with the three Banks children arguing over their mother’s bowl. They accidentally drop it and it breaks. Mary and Jack (Lin-Manuel Miranda) notice the bowl’s painted scenery has changed. The horse-drawn carriage now has a broken wheel that must be fixed. Mary spins the bowl and a gust of wind pulls them into the ceramic bowl’s world, which is presented in 2D animation. According to Tondelli, the sequence was hand-drawn, frame by frame, as an homage to the original Mary Poppins. “They actually brought some animators out of retirement to work on this film,” she says.

Tondelli and co-supervising sound editor/co-sound designer Eugene Gearty placed mics inside porcelain bowls, in a porcelain sink, and near marble tiles, which they thumped with rubber mallets, broken pieces of ceramic and other materials. The resulting ring-out was used to create reverbs that were applied to every element in the ceramic bowl sequence, from the dialogue to the Foley. “Everything they said, every step they took had to have this ceramic feel to it, so as they are speaking and walking it sounds like it’s all happening inside a bowl,” Tondelli says.

She first started working on this hand-drawn animation sequence when it showed little more than the actors against a greenscreen with a few pencil drawings. “The fastest and easiest way to make a scene like that come alive is through sound. The horse, which was possibly the first thing that was drawn, is pulling the carriage. It dances in this syncopated rhythm with the music, so it provides a rhythmic base. That was the first thing that we tackled.”

After the carriage is fixed, Mary and her troupe walk to the Royal Doulton Music Hall where, ultimately, Jack and Mary are going to perform. Traditionally, a music hall in London is very rowdy and boisterous. The audience is involved in the show and there’s an air of playfulness. “Rob said to me, ‘I want this to be an English music hall, Renée. You really have to make that happen.’ So I researched what music halls were like and how they sounded.”

Since the animation wasn’t complete, Tondelli consulted with the animators to find out who — or rather what — was going to be in the audience. “There were going to be giraffes dressed up in suits with hats and Indian elephants in beautiful saris, penguins on the stage dancing with Jack and Mary, flamingos, giant moose and rabbits, baby hippos and other animals. The only way I thought I could do this was to go to London and hire actors of all ages who could do animal voices.”

But there were some specific parameters that had to be met. Tondelli defines the world of Mary Poppins Returns as being “magical realism,” so the animals couldn’t sound too cartoony. They had to sound believably like animal versions of British citizens. Also, the actors had to be able to sing in their animal voices.

According to Tondelli, they recorded 15 actors at a time for a period of five days. “I would call out, ‘Who can do an orangutan?’ And then the actors would all do voices and we’d choose one. Then they would do the whole song and sing out and call out. We had all different accents — Cockney, Welsh and Scottish,” she says. “All the British Isles came together on this and, of course, they all loved Mary and knew all the songs so they sang along with her.”

On the Dolby Atmos mix, the music hall scene really comes alive. The audience’s voices are coming from the rafters and all around the walls and the music is reverberating into the space — which, by the way, no longer sounds like it’s in a ceramic bowl even though the music hall is in the ceramic bowl world. In addition to the animal voices, there are hooves and paws for the animals’ clapping. “We had to create the clapping in Foley because it wasn’t normal clapping,” explains Tondelli. “The music hall was possibly the most challenging, but also the funnest scene to do. We just loved it. All of us had a great time on it.”

The Foley
The Foley elements in Mary Poppins Returns often had to be performed in perfect sync with the music. On the big dance numbers, like “Trip the Light Fantastic,” the Foley was an essential musical element since the dances were reconstructed sonically in post. “Everything for this scene was wiped away, even the vocals. We ended up using a lot of the records for this one and a lot less production sound,” says Tondelli.

In “Trip the Light Fantastic,” Jack is bringing the kids back home through the park, and they emerge from a tunnel to see nearly 50 lamplighters on lampposts. Marshall and John DeLuca (choreographer/producer/screen story writer) arranged the dance to happen in multiple layers, with each layer doing something different. “The background dancers were doing hand slaps and leg swipes, and another layer was stepping on and off of these slate surfaces. Every time the dancers would jump up on the lampposts, they’d hit it and each would ring out in a different pitch,” explains Tondelli.

All those complex rhythms were performed in Foley in time to the music. It’s a pretty tall order to ask of any Foley artist, but Tondelli has the perfect solution for that dilemma. “I hire the co-choreographers (for this film, Joey Pizzi and Tara Hughes) or dancers that actually worked on the film to do the Foley. It’s something that I always do for Rob’s films. There’s such a difference in the performance,” she says.

Tondelli worked with the Foley team of Marko Costanzo and George Lara at c5 Sound in New York, who helped to build custom surfaces — like a slate-on-sand surface for the lamplighter dance — and arrange multi-surface layouts to optimally suit the Foley performer’s needs.

For instance, in the music hall sequence, the dance on stage incorporates books, so they needed three different surfaces: wood, leather and a papery-sounding surface set up in a logical, easily accessible way. “I wanted the dancer performing the Foley to go through the entire number while jumping off and on these different surfaces so you felt like it was a complete dance and not pieced together,” she says.

For the lamplighter dance, they had a big, thick pig iron pipe next to the slate floor so that the dancer performing the Foley could hit it every time the dancers on-screen jumped up on the lampposts. “So the performer would dance on the slate floor, then hit the pipe and then jump over to the wood floor. It was an amazingly syncopated rhythmic soundtrack,” says Tondelli.

“It was an orchestration, a beautiful sound orchestra, a Foley orchestra that we created and it had to be impeccably in sync. If there was a step out of place you’d hear it,” she continues. “It was really a process to keep it in sync through all the edit conforms and the changes in the movie. We had to be very careful doing the conforms and making the adjustments because even one small mistake and you would hear it.”

The Wind
Wind plays a prominent role in the story. Mary Poppins descends into London on a gust of wind. Later, they’re transported into the ceramic bowl world via a whirlwind. “It’s everywhere, from a tiny leaf blowing across the sidewalk to the huge gale in the park,” attests Tondelli. “Each one of those winds has a personality that Eugene [Gearty] spent a lot of time working on. He did amazing work.”

As far as the on-set fans and wind machines wreaking havoc on the production dialogue, Tondelli says there were two huge saving graces. First was production sound mixer Simon Hayes, who did a great job of capturing the dialogue despite the practical effects obstacles. Second was dialogue editor Alexa Zimmerman, who was a master at iZotope RX. All told, about 85% of the production dialogue made it into the film.

“My goal — and my unspoken order from Rob — was to not replace anything that we didn’t have to. He’s so performance-oriented. He arduously goes over every single take to make sure it’s perfect,” says Tondelli, who also points out that Marshall isn’t afraid of using ADR. “He will pick words from a take and he doesn’t care if it’s coming from a pre-record and then back to ADR and then back to production. Whichever has the best performance is what wins. Our job then is to make all of that happen for him.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

Quick Chat: Westwind Media president Doug Kent

By Dayna McCallum

Doug Kent has joined Westwind Media as president. The move is a homecoming of sorts for the audio post vet, who worked as a sound editor and supervisor at the facility when it opened its doors in 1997 (with Miles O’ Fun). He comes to Westwind after a long tenure at Technicolor.

While primarily known as an audio post facility, Burbank-based Westwind has grown into a three-acre campus comprising 10 buildings, which also house outposts for NBCUniversal and Technicolor, as well as media-focused companies Keywords Headquarters and Film Solutions.

We reached out to Kent to find out a little bit more about what is happening over at Westwind, why he made the move and the changes he has seen in the industry.

Why was now the right time to make this change, especially after being at one place for so long?
Well, 17 years is a really long time to stay at one place in this day and age! I worked with an amazing team, but Westwind presented a very unique opportunity for me. John Bidasio (managing partner) and Sunder Ramani (president of Westwind Properties) approached me with the role of heading up Westwind and teaming with them in shaping the growth of their media campus. It was literally an offer I couldn’t refuse. Because of the campus size and versatility of the buildings, I have always considered Westwind to have amazing potential to be one of the premier post production boutique destinations in the LA area. I’m very excited to be part of that growth.

You’ve worked at studios and facilities of all sizes in your career. What do you see as the benefit of a boutique facility like Westwind?
After 30 years in the post audio business — which seems crazy to say out loud — moving to a boutique facility allows me more flexibility. It also lets me be personally involved with the delivery of all work to our customers. Because of our relationships with other facilities, we are able to offer services to our customers all over the Los Angeles area. It’s all about drive time on Waze!

What does your new position at Westwind involve?
The size of our business allows me to actively participate with every service we offer, from business development to capital expenditures, while also working with our management team’s growth strategy for the campus. Our value proposition, as a nimble post audio provider, focuses on our high-quality brick-and-mortar facility, while we continue to expand our editorial and mix talent, working with many of the best mix facilities and sound designers in the LA area. Luckily, I now get to have a hand in all of it.

Westwind recently renovated two stages. Did Dolby Atmos certification drive that decision?
Netflix, Apple and Amazon all use Atmos materials for their original programming. It was time to move forward. These immersive technologies have changed the way filmmakers shape the overall experience for the consumer. These new object-based technologies enhance our ability to embellish and manipulate the soundscape of each production, creating a visceral experience for the audience that is more exciting and dynamic.

How to Get Away With Murder

Can you talk specifically about the gear you are using on the stages?
Currently, Westwind runs entirely on a Dante network design. We have four dub stages, including both of the Atmos stages, outfitted with Dante interfaces. The signal path from our Avid Pro Tools source machines — all the way to the speakers — is entirely in Dante and the BSS BLU link network. The monitor switching and stage are controlled through custom-made panels designed in Harman’s Audio Architect. The Dante network allows us to route signals with complete flexibility across our network.

What about some of the projects you are currently working on?
We provide post sound services to the team at ShondaLand for all their productions, including Grey’s Anatomy, which is now in its 15th year, Station 19, How to Get Away With Murder and For the People. We are also involved in the streaming content market, working on titles for Amazon, YouTube Red and Netflix.

Looking forward, what changes in technology and the industry do you see having the most impact on audio post?
The role of post production sound has greatly increased as technology has advanced. We have become an active part of the filmmaking process and have developed closer partnerships with the executive producers, showrunners and creative executives. Delivering great soundscapes to these filmmakers has become more critical as technology advances and audiences become more sophisticated.

The Atmos system creates an immersive audio experience for the listener and has become a foundation for future technology. The Atmos master contains all of the uncompressed audio and panning metadata, and can be updated by re-encoding whenever a new process is released. With streaming speeds becoming faster and storage becoming more easily available, home viewers will most likely soon be experiencing Atmos technology in their living room.

What haven’t I asked that is important?
Relationships are the most important part of any business and my favorite part of being in post production sound. I truly value my connections and deep friendships with film executives and studio owners all over the Los Angeles area, not to mention the incredible artists I’ve had the great pleasure of working with and claiming as friends. The technology is amazing, but the people are what make being in this business fulfilling and engaging.

We are in a remarkable time in film, but really an amazing time in what we still call “television.” There is growth and expansion and foundational change in every aspect of this industry. Being at Westwind gives me the flexibility and opportunity to be part of that change and to keep growing.

The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds emanating directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

How did you build the attack on the window in the underwater research station? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings that we had done a while ago of glass breaking. My parents were replacing this 8’ x 12’ glass window in their house and before they demolished the old one, I told them not to throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees, and it has a large diaphragm so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.
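
The mixers don’t spell out how such a pressure bed is built, but the felt-more-than-heard idea is easy to prototype. Below is a minimal Python sketch that renders a near-infrasonic tone with a slow swell; the 25Hz frequency, the envelope rate and the file name are illustrative assumptions, not settings from the film’s mix.

    import numpy as np
    import soundfile as sf

    sr = 48000                 # sample rate in Hz
    seconds = 30
    t = np.arange(seconds * sr) / sr

    # A tone near the bottom of the audible range, with a slow swell so it
    # reads as mounting pressure rather than a steady hum. All values are
    # illustrative, not taken from the film.
    freq = 25.0                                        # Hz
    swell = 0.5 + 0.5 * np.sin(2 * np.pi * 0.05 * t)   # one full swell every 20 seconds
    tone = 0.3 * swell * np.sin(2 * np.pi * freq * t)

    sf.write("pressure_bed.wav", tone, sr)

Played through full-range speakers, as in an Atmos room, a bed like this is felt in the chest more than heard, which is the effect Aadahl describes.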

The Meg is an action film, so there are shootings, explosions, ships getting smashed up and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.
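
A Doppler plug-in in the DAW typically handles shifts like these, but the underlying relationship is compact. Here is a small Python sketch of the textbook stationary-listener formula; the 20 m/s glider speed is an assumption for illustration, not a figure from the film.

    SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

    def doppler_ratio(source_speed: float, approaching: bool) -> float:
        """Pitch ratio heard by a stationary listener for a moving source."""
        v = source_speed if approaching else -source_speed
        return SPEED_OF_SOUND / (SPEED_OF_SOUND - v)

    # A glider passing at 20 m/s: pitch rises on approach, falls as it recedes.
    print(doppler_ratio(20.0, approaching=True))   # ~1.062, up about a semitone
    print(doppler_ratio(20.0, approaching=False))  # ~0.945, down about a semitone

The jump from the approach ratio to the recede ratio at the moment of the pass-by is what sells the speed of an object flying past the camera.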

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon popping session on Stage 10 at Warner Bros., where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that scene to do something else, and when the zorb popped it made me jump back because I forgot how powerful a moment that is. It was a really fun and funny moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge metal cargo container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpy. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.

Crafting sound for Emmy-winning Atlanta

By Jennifer Walden

FX Network’s dramedy series Atlanta, which recently won an Emmy for Outstanding Sound Editing for a Comedy or Drama Series (Half-Hour), tells the story of three friends from, well, Atlanta — a local rapper named Paper Boi whose star is on the rise (although the universe seems to be holding him down), his cousin/manager Earn and their head-in-the-clouds friend Darius.

Trevor Gates

Told through vignettes, each episode shows their lives from different perspectives instead of through a running narrative. This provides endless possibilities for creativity. One episode flows through different rooms at a swanky New Year’s party at Drake’s house; another ventures deep into the creepy woods where real animals (not party animals) make things tense.

It’s a playground for sound each week, and MPSE Award-winning supervising sound editor Trevor Gates of Formosa Group and his sound editorial team on Season 2 (aka Robbin’ Season) got their 2018 Emmy for their work on Episode 6, “Teddy Perkins,” in which Darius goes to pick up a piano from the home of an eccentric recluse but finds there’s more to the transaction than he bargained for.

Here, Gates discusses the episode’s precise use of sound and how the quiet environment was meticulously crafted to reinforce the tension in the story and to add to the awkwardness of the interactions between Darius and Teddy.

There’s very little music in “Teddy Perkins.” The soundtrack is mainly different ambiences and practical effects and Foley. Since the backgrounds play such an important role, can you tell me about the creation of these different ambiences?
Overall, Atlanta doesn’t really have a score. Music is pretty minimal and the only music that you hear is mainly source music — music coming from radios, cell phones or laptops. I think it’s an interesting creative choice by producers Hiro Murai and Donald Glover. In cases like the “Teddy Perkins” episode, we have to be careful with the sounds we choose because we don’t have a big score to hide behind. We have to be articulate with those ambient sounds and with the production dialogue.

Going into “Teddy Perkins,” Hiro (who directed the episode) and I talked about his goals for the sound. We wanted a quiet soundscape and for the house to feel cold and open. So, when we were crafting the sounds that most audience members will perceive as silence or quietness, we had very specific choices to make. We had to craft this moody air inside the house. We had to craft a few sounds for the outside world too because the house is located in a rural area.

There are a few birds but nothing overt, so that it’s not intrusive to the relationship between Darius (Lakeith Stanfield) and Teddy (Donald Glover). We had to be very careful in articulating our sound choices, to hold that quietness that was void of any music while also supporting the creepy, weird, tense dialogue between the two.

Inside the Perkins residence, the first ambience felt cold and almost oppressive. How did you create that tone?
That rumbly, oppressive air was the cold tone we were going for. It wasn’t a layer of tones; it was actually just one sound that I manipulated to be the exact frequency that I wanted for that space. There was a vastness and a claustrophobia to that space, although that sounds contradictory. That cold tone was kind of the hero sound of this episode. It was just one sound, articulately crafted, and supported by sounds from the environment.

There’s a tonal shift from the entryway into the parlor, where Darius and Teddy sit down to discuss the piano (and Teddy is eating that huge, weird egg). In there we have the sound of a clock ticking. I really enjoy using clocks. I like the meter that clocks add to a room.

In Ouija: Origin of Evil, we used the sound of a clock to hold the pace of some scenes. I slowed the clock down to just a tad over a second, and it really makes you lean in to the scene and hold what you perceive as silence. I took a page from that book for Atlanta. As you leave the cold air of the entryway, you enter into this room with a clock ticking and Teddy and Darius are sitting there looking at each other awkwardly over this weird/gross ostrich egg. The sound isn’t distracting or obtrusive; it just makes you lean into the awkwardness.
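
Gates doesn’t detail how the tick was retimed, but the trick of stretching a clock’s meter is simple to sketch. The Python below lays a one-shot tick onto a grid just slower than real time; the tick.wav file, the 1.05-second interval and the bed length are hypothetical stand-ins, not his actual materials.

    import numpy as np
    import soundfile as sf

    tick, sr = sf.read("tick.wav")      # hypothetical one-shot clock tick
    if tick.ndim > 1:
        tick = tick.mean(axis=1)        # fold to mono

    interval = 1.05                     # seconds between ticks, a tad over one
    duration = 60.0                     # length of the ambience bed in seconds
    out = np.zeros(int(duration * sr))

    # Place the tick on a grid slightly slower than a real clock's.
    for start in np.arange(0.0, duration - len(tick) / sr, interval):
        i = int(start * sr)
        out[i:i + len(tick)] += tick

    sf.write("slow_clock.wav", out / np.max(np.abs(out)), sr)

The slightly-too-long gap between ticks is what makes the listener lean in, exactly the effect Gates describes.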

It was important for us to get the mix for the episode right, to get the right level for the ambiences and tones, so that they are present but not distracting. It had to feel natural. It’s our responsibility to craft things that show the audience what we want them to see, and at the same time we have to suspend their disbelief. That’s what we do as filmmakers; we present the sonic spaces and visual images that traverse that fine line between creativity and realism.

That cold tone plays a more prominent role near the end of the episode, during the murder-suicide scene. It builds the tension until right before Benny pulls the trigger. But there’s another element there too: a musical stinger. Why did you choose to use music at that moment?
What’s important about this season of Atlanta is that Hiro and Donald have a real talent for surrounding themselves with exceptional people — from the picture department to the sound department to the music department and everyone on-set. Through the season it was apparent that this team of exceptional people functioned with extreme togetherness. We had a homogeneity about us. It was a bunch of really creative and smart people getting together in a room, creating something amazing.

We had a music department and although there isn’t much music and score, every once in a while we would break a rule that we set for ourselves on Season 2. The picture editor will be in the room with the music department and Hiro, and we’ll all make decisions together. That musical stinger wasn’t my idea exactly; it was a collective decision to use a stinger to drive the moment, to have it build and release at a specific time. I can’t attribute that sound to me only, but to this exceptional team on the show. We would bounce creative ideas off of each other and make decisions as a collective.

The effects in the murder-suicide scene do a great job of tension building. For example, when Teddy leans in on Darius, there’s that great, long floor creak.
Yeah, that was a good creak. It was important for us, throughout this episode, to make specific sound choices in many different areas. There are other episodes in the season that have a lot more sound than this episode, like “Woods,” where Paper Boi (Brian Tyree Henry) is getting chased through the woods after he was robbed. Or “Alligator Man,” with the shootout in the cold open. But that wasn’t the case with “Teddy Perkins.”

On this one, we had to make specific choices, like when Teddy leans over and there’s that long, slow creak. We tried to encompass the pace of the scene in one very specific sound, like the sound of the shackles being tightened onto Darius or the movement of the shotgun.

There’s another scene when Darius goes down into the basement, and he’s traveling through this area that he hasn’t been in before. We decided to create a world where he would hear sounds traveling through the space. He walks past a fan and then a water heater kicks on and there is some water gurgling through pipes and the clinking sound of the water heater cooling down. Then we hear Benny’s wheelchair squeak. For me, it’s about finding that one perfect sound that makes that moment. That’s hard to do because it’s not a composition of many sounds. You have one choice to make, and that’s what is going to make that moment special. It’s exciting to find that one sound. Sometimes you go through many choices until you find the right one.

There were great diegetic effects, like Darius spinning the globe, the sound of the piano going onto the elevator, and the elevator’s floor needle, buttons and dings. Did those come from Foley? Custom recordings? Library sounds?
I had a great Foley team on this entire season, led by Foley supervisor Geordy Sincavage. The sounds like the globe spinning came from the Foley team, so that was all custom recorded. The elevator needle moving down was a custom recording from Foley. All of the shackles and handcuffs and gun movements were from Foley.

The piano moving onto the elevator was something that we created from a combination of library effects and Foley sounds. I had sound effects editor David Barbee helping me out on this episode. He gave me some library sounds for the piano and I went in and gave it a little extra love. I accentuated the movement of the piano strings. It was like piano string vocalizations as Darius is moving the piano into the elevator and it goes over the little bumps. I wanted to play up the movements that would add some realism to that moment.

Creating a precise soundtrack is harder than creating a big action soundtrack. Well, there are different sets of challenges for both, but it’s all about being able to tell a story by subtraction. When there’s too much going on, people can’t feel the details; when you start taking things away, they can. “Teddy Perkins” is the case of having an extremely precise soundtrack, and that was successful thanks to the work of the Foley team, my effects editor and the dialogue editor.

The dialogue editor Jason Dotts is the unsung hero in this because we had to be so careful with the production dialogue track. When you have a big set — this old, creaky house and lots of equipment and crew noise — you have to remove all the extraneous noise that can take you out of the tension between Darius and Teddy. Jason had to go in with a fine-tooth comb and do surgery on the production dialogue just to remove every single small sound in order to get the track super quiet. That production track had to be razor-sharp and presented with extreme care. Then, with extreme care, we had to build the ambiences around it and add great Foley sounds for all the little nuances. Then we had to bake the cake together and have a great mix, a very articulate balance of sounds.

When we were all done, I remember Hiro saying to us that we realized his dream 100%. He alluded to the fact that this was an important episode going into it. I feel like I am a man of my craft and my fingerprint is very important to me, so I am always mindful of how I show my craft to the world. I will always take extreme care and go the extra mile no matter what, but it felt good to have something that was important to Hiro have such a great outcome for our team. The world responded. There were lots of Emmy nominations this year for Atlanta and that was an incredible thing.

Did you have a favorite scene for sound? Why?
It was cool to have something that we needed to craft and present in its entirety. We had to build a motif and there had to be consistency within that motif. It was awesome to build the episode as a whole. Some scenes were a bit different, like down in the basement. That had a different vibe. Then there were fun scenes like moving the piano onto the elevator. Some scenes had production challenges, like the scene with the film projector. Hiro had to shoot that scene with the projector running and that created a lot of extra noise on the production dialogue. So that was challenging from a dialogue editing standpoint and a mix standpoint.

Another challenging scene was when Darius and Teddy are in the “Father Room” of the museum. That was shot early on in the process and Donald wasn’t quite happy with his voice performance in that scene. Overall, Atlanta uses very minimal ADR because we feel that re-recorded performances can really take the magic out of a scene, but Donald wanted to redo that whole scene, and it came out great. It felt natural and I don’t think people realize that Donald’s voice was re-recorded in its entirety for that scene. That was a fun ADR session.

Donald came into the studio and once he got into the recording booth and got into the Teddy Perkins voice he didn’t get out of it until we were completely finished. So as Hiro and Donald are interacting about ideas on the performance, Donald stayed in the Teddy voice completely. He didn’t get out of it for three hours. That was an interesting experience to see Donald’s face as himself and hear Teddy’s voice.

Were there any audio tools that you couldn’t have lived without on this episode?
Not necessarily. This was an organic build and the tools that we used in this were really basic. We used some library sounds and recorded some custom sounds. We just wanted to make sure that we could make this as real and organic as possible. Our tool was to pick the best organic sounds that we could, whether we used source recordings or new recordings.

Of all the episodes in Season 2 of Atlanta, why did you choose “Teddy Perkins” for Emmy consideration?
Each episode had its different challenges. There were lots of different ways to tell the stories since each episode is different. I think that is something that is magical about Atlanta. Some of the episodes that stood out from a sound standpoint were Episode 1 “Alligator Man” with the shootout, and Episode 8 “Woods.” I had considered submitting “Woods” because it’s so surreal once Paper Boi gets into the woods. We created this submergence of sound, like the woods were alive. We took it to another level with the wildlife and used specific wildlife sounds to draw some feelings of anxiety and claustrophobia.

Even an episode like “Champagne Papi,” which seems like one of the most basic from a sound editorial perspective, was actually quite varied. They’re going between different rooms at a party and we had to build crowds that felt different but the same in each room. It had to feel like a real space with lots of people, and the different spaces had to feel like they belonged at the same party.

But when it came down to it, I feel like “Teddy Perkins” was special because there wasn’t music to hide behind. We had to do specific and articulate work, and make sharp choices. So it’s not the episode with the most sound but it’s the episode that has the most articulate sound. And we are very proud of how it turned out.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.

Review: Blackmagic’s Resolve 15

By David Cox

DaVinci Resolve 15 from Blackmagic Design has now been released. The big news is that Blackmagic’s compositing software Fusion has been incorporated into Resolve, joining the editing and audio mixing capabilities added to color grading in recent years. However, to focus just on this would hide a wide array of updates to Resolve, large and small, across the entire platform. I’ve picked out some of my favorite updates in each area.

For Colorists
Each time Blackmagic adds a new discipline to Resolve, colorists fear that the color features take a back seat. After all, Resolve was a color grading system long before anything else. But I’m happy to say there’s nothing to fear in Version 15, as there are several very nice color tweaks and new features to keep everyone happy.

I particularly like the new “stills store” functionality, which allows the colorist to find and apply a grade from any shot in any timeline in any project. Rather than just having access to manually saved grades in the gallery area, thumbnails of any graded shot can be viewed and copied, no matter which timeline or project they are in, even those not explicitly saved as stills. This is great for multi-version work, which is every project these days.

Grades saved as stills (and LUTS) can also be previewed on the current shot using the “Live Preview” feature. Hovering the mouse cursor over a still and scrubbing left and right will show the current shot with the selected grade temporarily applied. It makes quick work of finding the most appropriate look from an existing library.

Another new feature I like is called “Shared Nodes.” A color grading node can be set as “shared,” which creates a common grading node that can be inserted into multiple shots. Changing one instance, changes all instances of that shared node. This approach is more flexible and visible than using Groups, as the node can be seen in each node layout and can sit at any point in the process flow.

As well as the addition of multiple playheads, a popular feature in other grading systems, there is a plethora of minor improvements. For example, you can now drag the qualifier graphics to adjust settings, as opposed to just the numeric values below them. There are new features to finesse the mattes generated from the keying functions, as well as improvements to the denoise and face refinement features. Nodes can be selected with a single click instead of a double click. In fact, there are 34 color improvements or new features listed in the release notes.

For Editors
As with color, there are a wide range of minor tweaks all aimed at improving feel and ergonomics, particularly around dynamic trim modes, numeric timecode entry and the like. I really like one of the major new features, which is the ability to open multiple timelines on the screen at the same time. This is perfect for grabbing shots, sequences and settings from other timelines.

As someone who works a lot with VFX projects, I also like the new “Replace Edit” function, which is aimed at those of us that start our timelines with early drafts of VFX and then update them as improved versions come along. The new function allows updated shots to be dragged over their predecessors, replacing them but inheriting all modifications made, such as the color grade.

An additional feature to the existing markers and notes functions is called “Drawn Annotations.” An editor can point out issues in a shot with lines and arrows, then detail them with notes and highlight them with timeline markers. This is great as a “note to self” to fix later, or in collaborative workflows where notes can be left for other editors, colorists or compositors.

Previous versions of Resolve had very basic text titling. Thanks to the incorporation of Fusion, the edit page of Resolve now has a feature called Text+, a significant upgrade on the incumbent offering. It allows more detailed text control, animation, gradient fills, dotted outlines, circular typing and so on. Within Fusion there is a modifier called “Follower,” which enables letter-by-letter animation, allowing Text+ to compete with After Effects for type animation. On my beta test version of Resolve 15, this wasn’t available in the Edit page, which could be down to the beta status or an intent to keep the Text+ controls in the Edit page more streamlined.

For Audio
I’m not an audio guy, so my usefulness in reviewing these parts is distinctly limited. There are 25 listed improvements or new features, according to the release notes. One is the incorporation of Fairlight’s Automated Dialog Replacement processes, which creates a workflow for the replacement of unsalvageable originally recorded dialog.

There are also 13 new built-in audio effects plugins, such as Chorus, Echo and Flanger, as well as de-esser and de-hummer clean-up tools.

Another useful addition, both for audio mixers and editors, is the ability to import entire audio effects libraries, which can then be searched and star-rated from within the Edit and Fairlight pages.

Now With Added Fusion
So to the headline act — the incorporation of Fusion into Resolve. Fusion is a highly regarded node-based 2D and 3D compositing software package. I reviewed Version 9 in postPerspective last year [https://postperspective.com/review-blackmagics-fusion-9/]. Bringing it into Resolve links it directly to editing, color grading and audio mixing to create arguably the most agile post production suite available.

Combining Resolve and Fusion will create some interesting challenges for Blackmagic, who say that the integration of the two will be ongoing for some time. Their challenge isn’t just linking two software packages, each with their own long heritage, but in making a coherent system that makes sense to all users.

The issue is this: editors and colorists need to work at a fast pace, and want the minimum number of controls clearly presented. A compositor needs infinite flexibility and wants a button and value for every function, with a graph and ideally the ability to drive it with a mathematical expression or script. Creating an interface that suits both is near impossible. Dumbing down a compositing environment limits its ability, whereas complicating an editing or color environment destroys its flow.

Fusion occupies its own “page” within Resolve, alongside pages for “Color,” “Fairlight” (audio) and “Edit.” This is a good solution insofar as each interface can be tuned for its dedicated purpose. The transition into Fusion also works very well. A user can seamlessly move from Edit to Fusion to Color and back again, without delays, rendering or importing. If a user is familiar with Resolve and Fusion, it works very well indeed. If the user is not accustomed to high-end node-based compositing, then the Fusion page can be daunting.

I think the challenge going forward will be how to make the creative possibilities of Fusion more accessible to colorists and editors without compromising the flexibility a compositor needs. Certainly, there are areas in Fusion that can be made more obvious. As with many mature software packages, Fusion has the occasional hidden right click or alt-click function that is hard for new users to discover. But beyond that, the answer is probably to let a subset of Fusion’s ability creep into the Edit and Color pages, where more common tasks can be accommodated with simplified control sets and interfaces. This is actually already the case with Text+; a Fusion “effect” that is directly accessible within the Edit section.

Another possible area to help is Fusion Macros. This is an inbuilt feature within Fusion that allows a designer to create an effect and then condense it down to a single node, including just the specific controls needed for that combined effect. Currently, Macros that integrate the Text+ effect can be loaded directly in the Edit page’s “Title Templates” section.

I would encourage Blackmagic to open this up further to allow any sort of Macro to be added for video transitions, graphics generators and the like. This could encourage a vibrant exchange of user-created effects, which would arm editors and colorists with a vast array of immediate and community sourced creative options.

Overall, the incorporation of Fusion is a definite success in my view, whether used to empower multi-skilled post creatives or to provide a common environment for specialized creatives to collaborate. The volume of updates and the speed at which the Resolve software developers address the issues exposed during public beta trials remain nothing short of impressive.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Sony creates sounds for Director X’s Superfly remake

Columbia Pictures’ Superfly is a reimagining of Gordon Parks Jr.’s classic 1972 blaxploitation film of the same name. Helmed by Director X and written by Alex Tse, this new version transports the story of Priest from Harlem to modern-day Atlanta.

Steven Ticknor

Superfly’s sound team from Sony Pictures Post Production Services — led by supervising sound editor Steven Ticknor, supervising sound editor and re-recording mixer Kevin O’Connell, re-recording mixer Greg Orloff and sound designer Tony Lamberti — was tasked with bringing the sonic elements of Priest’s world to life. That included everything from building soundscapes for Atlanta’s neighborhoods and nightclubs to supplying the sounds of fireworks, gun battles and car chases.

“Director X and Joel Silver — who produced the movie alongside hip-hop superstar Future, who also curated and produced the film’s soundtrack — wanted the film to have a big sound, as big and theatrical as possible,” says Ticknor. “The film is filled with fights and car chases, and we invested a lot of detail and creativity into each one to bring out their energy and emotion.”

One element that received special attention from the sound team was the Lexus LC500 that Priest (Trevor Jackson) drives in the film. As the sports car was brand new, no pre-recorded sounds were available, so Ticknor and Lamberti dispatched a recording crew and professional driver to the California desert to capture every aspect of its unique engine sounds, tire squeals, body mechanics and electronics. “Our job is to be authentic, so we couldn’t use a different Lexus,” Ticknor explains. “It had to be that car.”

In one of the film’s most thrilling scenes, Priest and the Lexus LC500 are involved in a high-speed chase with a Lamborghini and a Cadillac Escalade. Sound artists added to the excitement by preparing sounds for every screech, whine and gear shift made by the cars, as well as explosions and other events happening alongside them and movements made by the actors behind the wheels.

It’s all much larger than life, says Ticknor, but grounded in reality. “The richness of the sound is a result of all the elements that go into it, the way they are recorded, edited and mixed,” he explains. “We wanted to give each car its own identity, so when you cut from one car revving to another car revving, it sounds like they’re talking to each other. The audience may not be able to articulate it, but they feel the emotion.”

Fights received similarly detailed treatment. Lamberti points to an action sequence in a barber shop as one of several scenes rendered partially in extreme slow motion. “It starts off in realtime before gradually shifting to slo-mo through the finish,” he says. “We had fun slowing down sounds, and processing them in strange and interesting ways. In some instances, we used sounds that had no literal relation to what was happening on the screen but, when slowed down, added texture. Our aim was to support the visuals with the coolest possible sound.”

Re-recording mixing was accomplished in the 125-seat Anthony Quinn Theater on an Avid S6 console with O’Connell handling dialogue and music and Orloff tackling sound effects and Foley. Like its 1972 predecessor, which featured an iconic soundtrack from Curtis Mayfield, the new film employs music brilliantly. Atlanta-based rapper Future, who shares producer credit, assembled a soundtrack that features Young Thug, Lil Wayne, Miguel, H.E.R. and 21 Savage.

“We were fortunate to have in Kevin and Greg, a pair of Academy Award-winning mixers, who did a brilliant job in blending music, dialogue and sound effects,” says Ticknor. “The mix sessions were very collaborative, with a lot of experimentation to build intensity and make the movie feel bigger than life. Everyone was contributing ideas and challenging each other to make it better, and it all came together in the end.”

Cinema Audio Society sets next awards date and timeline

The Cinema Audio Society (CAS) will be holding its 55th Annual CAS Awards on Saturday, February 16, 2019 at the InterContinental Los Angeles Downtown in the Wilshire Grand Ballroom. The CAS Awards recognize outstanding sound mixing in film and television as well as outstanding products for production and post. Recipients for the CAS Career Achievement Award and CAS Filmmaker Award will be announced later in the year.

The InterContinental Los Angeles Downtown is a new venue for the awards. They were held at the Omni Los Angeles Hotel at California Plaza last year.

The timeline for the awards is as follows:
• Entry submission form will be available online on the CAS website on Thursday, October 11, 2018.
• Entry submissions are due online by 5:00pm PST on Thursday, November 15, 2018.
• Outstanding product entry submissions are due online by 5:00pm PST on Friday, December 7, 2018.
• Nomination ballot voting begins online on Thursday, December 13, 2018.
• Nomination ballot voting ends online at 5:00pm PST on Thursday, January 3, 2019.
• Final nominees in each category will be announced on Tuesday, January 8, 2019.
• Final voting begins online on Thursday, January 24, 2019.
• Final voting ends online at 5:00pm PST on Wednesday, February 6, 2019.


Hobo’s Chris Stangroom on providing Quest doc’s sonic treatment

Following a successful film fest run that included winning a 2018 Independent Spirit Award, and being named a 2017 official selection at Sundance, the documentary Quest is having its broadcast premiere on PBS this month as part of their POV series.

Chris Stangroom

Filmed with vérité intimacy for nearly a decade, Quest follows the Rainey family who live in North Philadelphia. The story begins at the start of the Obama presidency with Christopher “Quest” Rainey, and his wife Christine (“Ma Quest”) raising a family, while also nurturing a community of hip-hop artists in their home music studio. It’s a safe space where all are welcome, but as the doc shows, this creative sanctuary can’t always shield them from the strife that grips their neighborhood.

New York-based audio post house Hobo, which is no stranger to indie documentary work (Weiner, Amanda Knox, Voyeur), lent its sonic skills to the film, including the entire sound edit (dialogue, effects and music), sound design, 5.1 theatrical and broadcast mixes.

We spoke with Hobo’s Chris Stangroom, supervising sound editor/re-recording mixer on the project, about the challenges he and the Hobo team faced in their quest on this film.

Broadly speaking, what did you and Hobo do on this project? How did you get involved?
We handled every aspect of the audio post on Quest for its Sundance Premiere, theatrical run and broadcast release of the film on POV.

This was my first time working with director Jonathan Olshefski, and I loved every minute of it. The entire team on Quest was focused on making this film better with every decision, and Jon had to be the final voice on everything. We were connected through my friend, producer Sabrina Gordon, who I had previously worked with on the film Undocumented. It was a pretty quick turn of events, as I think I got the first call about the film Thanksgiving weekend of 2016. We started working on the film the day after Christmas that year and finished the entire sound edit and mix two weeks later for the 2017 Sundance Film Festival.

How important is the audio mix/sound design in the overall cinematic experience of Quest? What was most important to Olshefski?
The sound of a film is half of the experience. I know it sounds cliché, but after years of working with clients on improving their films, the importance of a good sound mix and edit can’t be overstated. I have seen films come to life by simply adding Foley to a few intimate moments in a scene. It seems like such a small detail in the grand scheme of a film’s soundtrack, but feeling that intimacy with a character connects us to them in a visceral way.

Since Quest was a film not only about the Rainey family but also their neighborhood of North Philly, I spent a lot of time researching the sounds of Philadelphia. I gathered a lot of great references and insight from friends who had grown up in Philly, like the sounds of “ghetto birds” (helicopters), the motorbikes that are driven around constantly and the SEPTA buses. As Jon and I spoke about the film’s soundtrack, those kinds of sounds and ideas were exactly what he was looking for when we were out on the streets of North Philly. They gave the film an energy that made it vivid and alive.

The film was shot over a 10-year period. How did that prolonged production affect the audio post? Were there format issues or other technical issues you needed to overcome?
It presented some challenges, but luckily Jon always recorded with a lav or a boom on his camera for the interviews, so matching their sound qualities was easier than if he had just been using a camera mic. There are probably half a dozen “narrated” scenes in Quest that are built from interview sound bites, so bouncing around from interviews 10 years apart was tricky and required a lot of attention to detail.

In addition, Quest‘s phenomenal editor Lindsay Utz was cutting scenes up until the last day of our sound mix. So even once we got an entire scene sounding clean and balanced, it would then change and we’d have to add a new line from some other interview during that decade-long period. She definitely kept me on my toes, but it was all to make the film better.

Music is a big part of the family’s lives. Did the fact that they run a recording studio out of their home affect your work?
Yes. The first thing I did once we started on the film was to go down to Quest’s studio in Philly and record “impulse responses” (IRs) of the space, essentially recording the “sound” of a room or space. I wanted to bring that feeling of the natural reverbs in his studio and home to the film. I captured the live room where the artists would be recording, his control room in the studio and even the hallway leading to the studio with doors opened and closed, because sound changes and becomes more muffled as more doors are shut between the microphone and the sound source. The IRs helped me add incredible depth and the feeling that you were there with them when I was mixing the freestyle rap sessions and any scenes that took place in the home and studio.
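
For readers curious about the mechanics, convolving a dry recording with an IR is what “places” it in the captured space; this is what convolution-reverb tools like Altiverb do, with far more sophistication. Below is a minimal Python sketch, assuming mono 16-bit WAV files; the file names are invented.

```python
# Minimal convolution-reverb sketch: applying a recorded impulse response
# (IR) to a dry track. File names are invented; mono 16-bit WAVs assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def apply_ir(dry_path, ir_path, out_path, wet=0.35):
    sr_dry, dry = wavfile.read(dry_path)
    sr_ir, ir = wavfile.read(ir_path)
    assert sr_dry == sr_ir, "resample the IR to match the dry track first"

    dry = dry.astype(np.float64) / 32768.0
    ir = ir.astype(np.float64) / 32768.0

    # Convolving the dry signal with the IR "places" it in the recorded room.
    wet_sig = fftconvolve(dry, ir)[: len(dry)]
    wet_sig /= np.max(np.abs(wet_sig)) + 1e-12

    mix = (1.0 - wet) * dry + wet * wet_sig  # parallel dry/wet blend
    wavfile.write(out_path, sr_dry, (mix * 32767).astype(np.int16))

apply_ir("freestyle_dry.wav", "studio_live_room_ir.wav", "freestyle_in_room.wav")
```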

Jon and I also grabbed dozens of tracks that Quest had produced over the years, so that we could add them into the film in subtle ways, like when a car drives by or from someone’s headphones. It’s those kinds of little details that I love adding, like Easter eggs that only a handful of us know about. They make me smile whenever I watch a film.

Any particular scene or section or aspect of Quest that you found most challenging or interesting to work on?
The scenes following Quest’s daughter PJ’s injury, her stay in the hospital and her return home came with a lot of challenges. We used sound design and the score from the amazing composer T. Griffin to build the emotional arc, signaling that something dangerous and life-changing was about to happen.

Once we were in the hospital, we wanted the sound of everything to be very, very quiet. There is a scene in which Quest is whispering to PJ while she is in pain and trying to recover. The actual audio from that moment had a few nurses and women in the background having a loud conversation and occasionally laughing. It took the viewer immediately away from the emotions that we were trying to connect with, so we ended up scrapping that entire audio track and recreated the scene from scratch. Jon actually ended up getting in the sound booth and did some very low and quiet whispering of the kinds of phrases Quest said to his daughter. It took a couple hours to finesse that scene.

Lastly, there’s the scene when PJ gets out of the hospital and returns to a world that didn’t stop while she was recovering. We spent a lot of time shifting back and forth between the reality of what happened and the emotional journey PJ was going through trying to regain normalcy in her life. There was a lot of attention to detail in the mix on that scene because it had to be delivered correctly in order not to break the momentum that had been created.

What was the key technology you used on the project?
Avid Pro Tools, iZotope RX 5 Advanced, Audio Ease Altiverb, a Zoom H4n and a matched stereo pair of sE Electronics sE1a condenser mics.

Who else at Hobo was involved in Quest?
The entire Hobo team really stepped up on this project — namely our sound effects editors Stephen Davies, Diego Jimenez and Julian Angel; Foley artist Oscar Convers; and dialogue editor Jesse Peterson.

Chimney opens in New York City, hires team of post vets

Chimney, an independent content company specializing in film, television, spots and digital media, has opened a new facility in New York City. For over 20 years, the group has been producing and posting campaigns for brands, such as Ikea, Audi, H&M, Chanel, Nike, HP, UBS and more. Chimney was also the post partner for the feature films Chappaquiddick, Her, Atomic Blonde and Tinker Tailor Soldier Spy.

With this New York opening, Chimney now has 14 offices worldwide. Founded in Stockholm in 1995, the company opened its first US studio in Los Angeles last year. In addition to Stockholm, New York and LA, Chimney also has facilities in Singapore, Copenhagen, Berlin and Sydney, among other cities.

“Launching in New York is a benchmark long in the making, and the ultimate expression of our philosophy of ‘boutique-thinking with global power,’” says Henric Larsson, Chimney founder and COO. “Having a meaningful presence in all of the world’s economic centers with diverse cultural perspectives means we can create and execute at the highest level in partnership with our clients.”

The New York opening supports Chimney’s mission to connect its global talent and resources, effectively operating as a 24-hour, full-service content partner to brand, entertainment and agency clients, no matter where they are in the world.

Chimney has signed on several industry vets to spearhead the New York office. Leading the US presence is CEO North America Marcelo Gandola. His previous roles include COO at Harbor Picture Company; EVP at Hogarth; SVP of creative services at Deluxe Entertainment Services Group; and VP of operations at Company 3.

Colorist and director Lez Rudge serves as Chimney’s head of color North America. He is a former partner and senior colorist at Nice Shoes in New York. He has worked alongside Spike Lee and Darren Aronofsky, and on major brand campaigns for Maybelline, Revlon, NHL, Jeep, Humira, Spectrum and Budweiser.

Managing director Ed Rilli will spearhead the day-to-day logistics of the New York office. The former head of production at Nice Shoes, he has produced major campaigns for such brands as NFL, Ford, Jagermeister and Chase.

Sam O’Hare, chief creative officer and lead VFX artist, will oversee the VFX team. With experience in live-action directing, VFX supervision, still photography and architecture, O’Hare has an interdisciplinary background well suited to photorealistic CGI production.

In addition, Chimney has brought on cinematographer and colorist Vincent Taylor, who joins from MPC Shanghai, where he worked with brands such as Coca-Cola, Porsche, New Balance, Airbnb, BMW, Nike and L’Oréal.

The 6,000-square-foot office will feature Blackmagic Resolve color rooms, Autodesk Flame suites and a VFX bullpen, as well as multiple edit rooms, a DI theater and a Dolby Atmos mix stage through a joint venture with Gigantic Studios.

Main Image: (L-R) Ed Rilli, Sam O’Hare, Marcelo Gandola and Lez Rudge.

Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on an exploration to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, almost everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from sourced hydrophone recordings from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.

Effects editor Saoirse Christopherson capturing sounds on board a kayak on the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show’s sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R26 recorder with several contact mics strapped to the kayak’s hull, they spent the day grinding through, over and against the ice. “The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak,” says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
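
The slow-down-and-filter treatment Walpole describes can be roughed out in a few lines of Python. The sketch below (file name and parameter values are invented, not the ones used on the show) stretches a recording to four times its length, which also drops the pitch two octaves, then low-passes it to push the energy into the low-mids and lows.

```python
# Rough sketch of the treatment described: slow a recording down (which
# also drops its pitch), then filter to favor the low end.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, x = wavfile.read("kayak_contact_mic.wav")  # mono 16-bit WAV assumed
x = x.astype(np.float64) / 32768.0

# Stretch the timeline 4x: played back at the original sample rate, the
# sound lasts four times as long and sits two octaves lower.
factor = 4
idx = np.arange(0, len(x) - 1, 1.0 / factor)
slowed = np.interp(idx, np.arange(len(x)), x)

# Low-pass around 400 Hz to push the energy into the low-mids and lows.
sos = butter(4, 400, btype="low", fs=sr, output="sos")
rumble = sosfilt(sos, slowed)
rumble /= np.max(np.abs(rumble)) + 1e-12

wavfile.write("ice_groan_design.wav", sr, (rumble * 32767).astype(np.int16))
```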

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole uses sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn’t just battling the elements, scurvy, starvation and mutiny. They’re also being killed off by a polar bear-like creature called the Tuunbaq. It’s part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season, so Walpole created sonic hints as to the creature’s make-up.

Walpole worked with showrunner David Kajganich to find the creature’s voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. “Some of the recordings we used raw as well,” says Walpole. “This guy could make these crazy sounds. His voice could go so deep.”
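
Dehumaniser’s internals are proprietary, but the general creature-voice recipe it supports (layering a raw take with pitched-down and modulated copies of itself) can be sketched roughly as below; the file name and settings are invented.

```python
# Generic "creature voice" sketch: layer the raw take with pitched-down
# copies, and ring-modulate the deepest layer for growl. This is not
# Dehumaniser's algorithm; file name and settings are invented.
import numpy as np
from scipy.io import wavfile

sr, v = wavfile.read("creature_take.wav")  # mono 16-bit WAV assumed
v = v.astype(np.float64) / 32768.0

def pitch_down(sig, semitones):
    """Crude pitch drop by resampling (also stretches the layer in time)."""
    step = 2 ** (semitones / 12.0)  # step < 1 lengthens and deepens the take
    idx = np.arange(0, len(sig) - 1, step)
    return np.interp(idx, np.arange(len(sig)), sig)

deep = pitch_down(v, -12)    # one octave down
deeper = pitch_down(v, -19)

n = min(len(v), len(deep), len(deeper))
t = np.arange(n) / sr
ring = np.sin(2 * np.pi * 35.0 * t)  # 35 Hz ring mod roughens the bottom layer

layer = 0.5 * v[:n] + 0.35 * deep[:n] + 0.25 * deeper[:n] * ring
layer /= np.max(np.abs(layer)) + 1e-12
wavfile.write("creature_layer.wav", sr, (layer * 32767).astype(np.int16))
```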

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series is shot on a soundstage, there’s no usable bed of production sound to act as a jumping off point for the post sound team. But instead of that being a challenge, Walpole finds it liberating. “In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it’s the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It’s not often in your career that you get such a blank canvas of creation.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Michael Semanick: Mixing SFX, Foley for Star Wars: The Last Jedi

By Jennifer Walden

Oscar-winning re-recording mixer Michael Semanick from Skywalker Sound mixed the sound effects, Foley and backgrounds on Star Wars: The Last Jedi, which has earned an Oscar nomination for Sound Mixing.

Technically, this is not Semanick’s first experience with the Star Wars franchise — he’s credited as an additional mixer on Rogue One — but on The Last Jedi he was a key figure in fine-tuning the film’s soundtrack. He worked alongside re-recording mixers Ren Klyce and David Parker, and with director Rian Johnson, to craft a soundtrack that was bold and dynamic. (Look for next week’s Star Wars story, in which re-recording mixer Ren Klyce talks about his approach to mixing John Williams’ score.)

Michael Semanick

Recently, Semanick shared his story of what went into mixing the sound effects on The Last Jedi. He mixed at Skywalker in Nicasio, California, on the Kurosawa Stage.

You had all of these amazing elements — Skywalker’s effects, John Williams’ score and the dialogue. How did you bring clarity to what could potentially be a chaotic soundtrack?
Yes, there are a lot of elements that come in, and you have to balance these things. It’s easy on a film like this to get bombastic and assault the audience, but that’s one of the things that Rian didn’t want to do. He wanted to create dynamics in the track and get really quiet so that when it does get loud it’s not overly loud.

So when creating that I have to look at all of the elements coming in and see what we’re trying to do in each specific scene. I ask myself, “What’s this scene about? What’s this storyline? What’s the music doing here? Is that the thread that takes us to the next scene or to the next place? What are the sound effects? Do we need to hear these background sounds, or do we need just the hard effects?”

Essentially, it’s me trying to figure out how many frequencies are available and how much dialogue has to come through so the audience doesn’t lose the thread of the story. It’s about deciding when it’s right to feature the sound effects or take the score down to feature a big explosion and then bring the score back up.

It’s always a balancing act, and it’s easy to get overwhelmed and throw it all in there. I might need a line of dialogue to come through, so the backgrounds go. I don’t want to distract the audience. There is so much happening visually in the film that you can’t put sound on everything. Otherwise, the audience wouldn’t know what to focus on. At least that’s my approach to it.

How did you work with the director?
As we mixed the film with Rian, we found what types of sounds defined the film and what types of moments defined the film in terms of sound. For example, by the time you reach the scene when Vice Admiral Holdo (Laura Dern) jumps to hyperspace into the First Order’s fleet, everything goes really quiet. The sound there doesn’t go completely out — it feels like it goes out, but there’s sound. As soon as the music peaks, I bring in a low space tone. Well, if there was a tone in space, I imagine that is what it would sound like. So there is sound constantly through that scene, but the quietness goes on for a long time.

One of the great things about that scene was that it was always designed that way. While I noted how great that scene was, I didn’t really get it until I saw it with an audience. They became the soundtrack, reacting with gasps. I was at a screening in Seattle, and when we hit that scene you could hear that the people were just stunned, and one guy in the audience went, “Yeah!”

There are other areas in the film where we go extremely quiet or take the sound out completely. For example, when Rey (Daisy Ridley) and Kylo Ren (Adam Driver) first force-connect, the sound goes out completely… you only hear a little bit of their breathing. There’s one time when the force connection catches them off guard — when Kylo had just gotten done working out and Rey was walking somewhere — we took the sound completely out while she was still moving.

Rian loved it because when we were working on that scene we were trying to get something different. We used to have sound there, all the way through the scene. Then Rian said, “What happens if you just start taking some of the sounds out?” So, I started pulling sounds out and sure enough, when I got the sound all the way out — no music, no sounds, no backgrounds, no nothing — Rian was like, “That’s it! That just draws you in.” And it does. It pulls you into their moment. They’re pulled together even though they don’t want to be. Then we slowly brought it back in with their breathing, a little echo and a little footstep here or there. Having those types of dynamics worked into the film helped the scene at the end.

Rian shot and cut the picture so we could have these moments of quiet. It was already set up, visually and story-wise, to allow that to happen. When Rey goes into the mirror cave, it’s so quiet. You hear all the footsteps and the reverbs and reflections in there. The film lent itself to that.

What was the trickiest scene to mix in terms of the effects?
The moment Kylo Ren and Rey touch hands via the force connection. That was a real challenge. They’re together in the force connection, but they weren’t together physically. We were cutting back and forth from her place to Kylo Ren’s place. We were hearing her campfire and her rain. It was a very delicate balance between that and the music. We could have had the rain really loud and the music blasting, but Rian wanted the rain and fire to peel away as their hands were getting closer. It was so quiet and when they did touch there was just a bit of a low-end thump. Having a big sound there just didn’t have the intimacy that the scene demanded. It can be so hard to get the balance right to where the audience is feeling the same thing as the characters. The audience is going, “No, oh no.” You know what’s going to come, but we wanted to add that extra tension to it sonically. For me, that was one of the hardest scenes to get.

What about the action scenes?
They are tough because they take time to mix. You have to decide what you want to play. For example, when the ships are exploding as they’re trying to get away before Holdo rams her ship into the First Order’s, you have all of that stuff falling from the ceiling. We had to pick our moments. There’s all of this fire in the background and TIE fighters flying around, and you can’t hear them all or it will be a jumbled mess. I can mix those scenes pretty well because I just follow the story point. We need to hear this to go with that. We have to have a sound of falling down, so let’s put that in.

Is there a scene you had fun with?
The fight in Snoke’s (Andy Serkis) room, between Rey and Kylo Ren. That was really fun because it was like wham-bam, and you have the lightsaber flying around. In those moments, like when Rey throws the lightsaber, we drop the sound out for a split second so when Kylo turns it on it’s even more powerful.

That scene was the most fun, but the trickiest one was that force-touch scene. We went over it a hundred different ways, to just get it to feel like we were with them. For me, if the sound calls too much attention to itself, it’s pulling you out of the story, and that’s bad mixing. I wanted the audience to lean in and feel those hands about to connect. When you take the sound out and the music out, then it’s just two hands coming together slowly. It was about finding that balance to make the audience feel like they’re in that moment, in that little hut, and they’re about to touch and see into each other’s souls, so to speak. That was a challenge, but it was fun because when you get it, and you see the audience react, everyone feels good about that scene. I feel like I did something right.

What was one audio tool that you couldn’t live without on this mix?
For me, it was the AMS Neve DFC Gemini console. All the sounds came into that. The console was like an instrument that I played. I could bring any sound in from any direction, and I could EQ it and manipulate it. I could put reverb on it. I could give the director what he wanted. My editors were cutting the sound, but I had to have that console to EQ and balance the sounds. Sometimes it was about EQing frequencies out to make a sound fit better with other sounds. You have to find room for the sounds.

I could move around on it very quickly. I had Rian sitting behind me saying, “What if you roll back and adjust this or try that.” I could ease those faders up and down and hit it just right. I know how to use it so well that I could hear stuff ahead of what I was doing.

The Neve DFC was invaluable. I could take all the different sound formats and sample rates and it all came through the console, and in one place. It could blend all those sources together; it’s a mixing bowl. It brought all the sounds together so they could all talk to each other. Then I manipulated them and sent them out and that was the soundtrack — all driven by the director, of course.

Can you talk about working with the sound editor?
The editors are my right-hand people. They can shift things and move things and give me another sound. Maybe I need one with more mid-range because the one in there isn’t quite reading. We had a lot of that. Trying to get those explosions to work and to come through John Williams’ score, sometimes we needed something with more low-end and more thump or more crack. There was a handoff in some scenes.

On The Last Jedi, I had sound effects editor Jon Borland with me on the stage. Bonnie Wild had started the project and had prepped a lot of the sounds for several reels — her and Jon and Ren Klyce, who oversaw the whole thing. But Jon was my go-to person on the stage. He did a great job. It was a bit of a daunting task, but Jon is young and wants to learn and gave it everything he had. I love that.

What format was the main mix?
Everything was done in Atmos natively, then we downmixed to 7.1 and 5.1 and all the other formats. We were very diligent about having the downmixed versions match the Atmos mix the best that they could.
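
A fold-down from 7.1 to 5.1 is conceptually simple, although real delivery specs differ in their exact coefficients. Here is a toy Python version, assuming a 3 dB pad on each summed surround pair; it is an illustration, not the dub stage’s actual process.

```python
# Toy 7.1 -> 5.1 fold-down. Real delivery specs differ in coefficients;
# the pair-summing and -3 dB pad here are one common convention, assumed
# for illustration only.
import numpy as np

def downmix_71_to_51(ch, pad_db=-3.0):
    """ch: dict of equal-length arrays keyed L, R, C, LFE, Lss, Rss, Lrs, Rrs."""
    g = 10 ** (pad_db / 20.0)  # gain applied to each summed surround pair
    return {
        "L": ch["L"], "R": ch["R"], "C": ch["C"], "LFE": ch["LFE"],
        # Fold side and rear surrounds into the single 5.1 surround pair.
        "Ls": g * (ch["Lss"] + ch["Lrs"]),
        "Rs": g * (ch["Rss"] + ch["Rrs"]),
    }

# Stand-in channels: one second of quiet noise per leg at 48 kHz.
sr = 48000
chans = {k: np.random.randn(sr) * 0.01
         for k in ("L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs")}
five_one = downmix_71_to_51(chans)
print(sorted(five_one))  # ['C', 'L', 'LFE', 'Ls', 'R', 'Rs']
```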

Any final thoughts you’d like to share?
I’m so glad that Rian chose me to be part of the mix. This film was a lot of fun and a real collaborative effort. Rian is the one who really set that tone. He wanted to hear our ideas and see what we could do. He wasn’t sold on one thing. If something wasn’t working, he would try things out until it did. It was literally sorting out frequencies and getting transitions to work just right. Rian was collaborative, and that creates a room of collaboration. We wanted a great track for the audience to enjoy… a track that went with Rian’s picture.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Review: Blackmagic Resolve 14

By David Cox

Blackmagic has released Version 14 of its popular DaVinci Resolve “color grading” suite, following a period of open public beta development. I put color grading in quotes, because one of the most interesting aspects about the V14 release is how far-reaching Resolve’s ambitions have become, beyond simply color grading.

Fairlight audio within Resolve.

Prior to being purchased by Blackmagic, DaVinci Resolve was one of a small group of high-end color grading systems being offered in the industry. Blackmagic then extended the product to include editing, and Version 14 offers several updates in this area, particularly around speed and fluidity of use. A surprise addition is the incorporation of Fairlight Audio — a full-featured audio mixing platform capable of producing feature film quality 3D soundscapes. It is not just an external plugin, but an integrated part of the software.

This review concentrates on the color finishing aspects of Resolve 14, and on first view the core color tools remain largely unchanged save for a handful of ergonomic improvements. This is not surprising given that Resolve is already a mature grading product. However, Blackmagic has added some very interesting tools and features clearly aimed at enabling colorists to broaden their creative control. I have been a long-time advocate of the idea that a colorist doesn’t change the color of a sequence, but changes the mood of it. Manipulating the color is just one path to that result, so I am happy to see more creatively expansive facilities being added.

Face Refinement
One new feature that epitomizes Blackmagic’s development direction is the Face Refinement tool. It provides features to “beautify” a face and underlines two interesting development points. Firstly, it shows an intention by the developers to create a platform that allows users to extend their creative control across the traditional borders of “color” and “VFX.”

Secondly, such a feature incorporates more advanced programming techniques that seek to recognize objects in the scene. Traditional color and keying tools simply replace one color for another, without “understanding” what objects those colors are attached to. This next step toward a more intelligent diagnosis of scene content will lead to some exciting tools and Blackmagic has started off with face-feature tracking.

Face Refinement

The Face Refinement function works extremely well where it recognizes a face. There is no manual intervention — the tool simply finds a face in the shot and tracks all the constituent parts (eyes, lips, etc). Where there is more than one face detected, the system offers a simple box selector for the user to specify which face to track. Once the analysis is complete, the user has a variety of simple sliders to control the smoothness, color and detail of the face overall, but also specific controls for the forehead, cheeks, chin, lips, eyes and the areas around and below the eyes.
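
Resolve’s implementation isn’t public, but the detect-then-smooth idea can be illustrated with stock tools. The Python sketch below uses OpenCV’s bundled face detector and a bilateral filter, which smooths skin while preserving edges. It is a single-frame toy with an invented file name, not per-feature tracking.

```python
# Detect-then-smooth toy, not Resolve's algorithm: OpenCV's stock face
# detector finds faces, and a bilateral filter smooths skin within each
# detected box while preserving edges.
import cv2

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:  # one bounding box per detected face
    roi = img[y:y + h, x:x + w]
    img[y:y + h, x:x + w] = cv2.bilateralFilter(roi, 9, 60, 60)

cv2.imwrite("frame_refined.png", img)
```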

I found the face de-shine function particularly successful. A light touch with the controls yields pleasing results very quickly. A heavy touch is what you need if you want to make someone look like an android. I liked the fact that you can go negative with some controls and make a face look more haggard!

In my tests, the facial tracking was very effective for properly framed faces, even those with exaggerated expressions, headshakes and so on. But it would fail where the face became partially obscured, such as when the camera panned off the face. This led to all the added improvements popping off mid-shot. While the fully automatic operation makes it quick and simple to use, it affords no opportunity for the user to intervene and assist the facial tracking if it fails. All things considered though, this will be a big help and time saver for the majority of beauty work shots.

Resolve FX
New for Resolve 14 is a myriad of built-in effects called Resolve FX, all GPU-accelerated and available to be added in the edit “page” directly to clips, or in the color page attached to nodes. They are categorized into Blurs, Light, Color, Refine, Repair, Stylize, Texture and Warp. A few particularly caught my eye. For example, in “color,” the color compressor pulls nearby colors toward a central hue. This is handy for unifying the colors of an unevenly lit client logo into their precise brand reference, or dealing with blotchy skin. There is also a color space transform tool that enables LUT-less conversion between all the major color “spaces.”
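
Blackmagic doesn’t document how the color compressor works internally, but mechanically the effect can be imagined as pulling hues that sit within a window of a target hue toward that target. A rough single-image Python sketch, with the file name and all values illustrative:

```python
# A guess at the mechanics of a "color compressor": pull hues within a
# window of a target hue toward that target.
import cv2
import numpy as np

img = cv2.imread("logo_plate.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)

target, window, amount = 110.0, 20.0, 0.7  # OpenCV hue range is 0..179

h = hsv[..., 0]
# Signed circular distance from each pixel's hue to the target hue.
d = (h - target + 90.0) % 180.0 - 90.0
mask = np.abs(d) < window
h[mask] -= amount * d[mask]  # compress qualifying hues toward the target
hsv[..., 0] = h % 180.0

out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("logo_plate_unified.png", out)
```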

Color

The dehaze function derives a depth map by some mysterious magic to help improve contrast over distance. The “light” collection includes a decent lens flare that allows plenty of customizing. “Stylize” creates watercolor and outline looks, while Texture includes a film grain effect with several film-gauge presets. I liked the implementation of the new Warp function. Rather than using grids or splines, the user simply places “pins” in the image to drag certain areas around. Shift-adding a pin defines a locked position immune from dragging. All simple, intuitive and realtime, or close to it.

Multi-Skilled and Collaborative Workflows
A dilemma for the Resolve developers is likely to be where to draw the line between editing, color and VFX. Blackmagic also develops Fusion, so they have the advanced side of VFX covered. But in the middle, there are editors who want to make funky transitions and title sequences, and colorists who use more effects, mattes and tracking. Resolve runs out of ability in these areas quite quickly, and this forces the more adventurous editor or colorist into the alien environment of Fusion. The new features of Resolve help in this area, but a few additions, such as better keyframing of effects and an easier way to reference other timeline layers in the node panel, could help to extend Resolve’s ability to handle many common VFX-ish demands.

Some have criticized Blackmagic for turning Resolve into a multi-discipline platform, suggesting that this will create an industry of “jack of all trades and masters of none.” I disagree with this view for several reasons. Firstly, if an artist wants to major in a specific discipline, having a platform that can do more does not impede them. Secondly, I think the majority of content (if you include YouTube, etc.) is created by a single person or small teams, so the growth of multi-skilled post production people is simply an inevitable and logical progression which Blackmagic is sensibly addressing.

Edit

But for professional users within larger organisations, the cross-discipline features of Resolve take on a different meaning when viewed in the context of “collaboration.” Resolve 14 permits editors to edit, colorists to color and sound mixers to mix, all using different installations of the same platform, sharing the same media and contributing to the same project, even the same timeline. On the face of it, this promises to remove “conforms” and eradicate wasteful import/export processes and frustrating compatibility issues, while enabling parallel workflows across editing, color grading and audio.

For fast-turnaround projects, or projects where client approval cannot be sought until the project progresses beyond a “rough” stage, the potential advantages are compelling. Of course, the minor hurdle to get over will be to persuade editors and audio mixers to adopt Resolve as their chosen weapon. If they do, Blackmagic might well be on the way to providing collaborative utopia.

Summing Up
Resolve 14 is a massive upgrade from Resolve 12 (there wasn’t a Resolve 13 — who would have thought that a company called Blackmagic might be superstitious?). It provides a substantial broadening of ability that will suit multi-skilled smaller outfits and also fit as a grading/finishing platform and collaborative backbone in larger installations.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Richard King talks sound design for Dunkirk

Using historical sounds as a reference

By Mel Lambert

Writer/director Christopher Nolan’s latest film follows the fate of nearly 400,000 allied soldiers who were marooned on the beaches of Dunkirk, and the extraordinary plans to rescue them using small ships from nearby English seaports. Although, sadly, more than 68,000 soldiers were captured or killed during the Battle of Dunkirk and the subsequent retreat, more than 300,000 were rescued over a nine-day period in May 1940.

Uniquely, Dunkirk’s primary story arcs — the Mole, or harbor from which the larger ships can take off troops; the Sea, focusing on the English flotilla of small boats; and the Air, spotlighting the activities of Spitfire pilots who protect the beaches and ships from German air-force attacks — follow different timelines, with the Mole sequences being spread over a week, the Sea over a day and the Air over an hour. A Warner Bros. release, Dunkirk stars Fionn Whitehead, Mark Rylance, Cillian Murphy, Tom Hardy and Kenneth Branagh. (An uncredited Michael Caine is the voice heard during various radio communications.)

Richard King

Marking his sixth collaboration with Nolan, supervising sound editor Richard King worked previously on Interstellar (2014), The Dark Knight Rises, Inception, The Dark Knight and The Prestige. He brings his unique sound perspective to these complex narratives, often with innovative sound design. Born in Tampa, King attended the University of South Florida, graduating with a BFA in painting and film, and entered the film industry in 1985. He is the recipient of three Academy Awards for Best Achievement in Sound Editing for Inception, The Dark Knight and Master and Commander: The Far Side of the World (2003), plus two BAFTA Awards and four MPSE Golden Reel Awards for Best Sound Editing.

King, along with Alex Gibson, recently won the Academy Award for Achievement in Sound Editing for Dunkirk.

The Sound of History
“When we first met to discuss the film,” King recalls, “Chris [Nolan] told me that he wanted Dunkirk to be historically accurate but not slavishly so — he didn’t plan to make a documentary. For example, several [Junkers Ju 87] Stuka dive bombers appear in the film, but there are no high-quality recordings of these aircraft, which had sirens built into the wheel struts for intimidation purposes. There are no Stukas still flying, nor could I find any design drawings so we could build our own. Instead, we decided to re-imagine the sound with a variety of unrelated sound effects and ambiences, using the period recordings as inspiration. We went out into a nearby desert with some real air raid sirens, which we over-cranked to make them more and more piercing — and to add some analog distortion. To this more ‘pure’ version of the sound we added an interesting assortment of other disparate sounds. I find the result scary as hell and probably very close to what the real thing sounded like.”

For other period Axis and Allied aircraft, King was able to locate several British Supermarine Spitfire fighters and a Bristol Blenheim bomber, together with a German Messerschmitt Bf 109 fighter. “There are about 200 Spitfires in the world that still fly; three were used during filming of Dunkirk,” King continues. “We received those recordings, and in post recorded three additional Spitfires.”

King was able to place up to 24 microphones in various locations around the airframe near the engine — a supercharged V-12 Rolls-Royce Merlin liquid-cooled model of 27-liter capacity, and later 37-liter Griffon motors — as well as close to the exhaust and within the cockpit, as the pilots performed a number of aerial movements. “We used both mono and stereo mics to provide a wide selection for sound design,” he says.

King was looking for the sound of an “air ballet” with the aircraft moving quickly across the sky. “There are moments when the plane sounds are minimized to place the audience more in the pilot’s head, and there are sequences where the plane engines are more prominent,” he says. “We also wanted to recreate the vibrations of this vintage aircraft, which became an important sound design element and was inspired by the shuddering images. I remember that Chris went up in a trainer aircraft to experience the sensation for himself. He reported that it was extremely loud with lots of vibration.”

To match up with the edited visuals secured from 65/70mm IMAX and Super Panavision 65mm film cameras, King needed to produce a variety of aircraft sounds. “We had an ex-RAF pilot who had flown in modern dogfights recreate some of those wartime flying gymnastics. The planes don’t actually produce dramatic changes in the sound when throttling and maneuvering, so I came up with a simple and effective way to accentuate this somewhat. I wanted the planes to respond to the pilot’s stick and throttle movements immediately.”

For armaments, King’s sound effects recordists John Fasal and Eric Potter oversaw the recording of a vintage Bofors 40mm anti-aircraft cannon seen aboard the allied destroyers and support ships. “We found one in Napa Valley,” north of San Francisco, says King. “The owner had to make up live rounds, which we fired into a nearby hill. We also recorded a number of WWII British Lee-Enfield bolt-action rifles and German machine guns on a nearby range. We had to recreate the sound of the Spitfire’s guns, because the actual guns fitted to the Spitfires overheat when fired at sea level and cannot maintain the 1,000 rounds/minute rate we were looking for, except at altitude.”

King readily acknowledges the work at Warner Bros. Sound Services of sound effects editor Michael Mitchell, who worked on several scenes, including the ship sinkings, and sound effects editor Randy Torres, who worked with King on the plane sequences.

Group ADR was done primarily in the UK, “where we recorded at De Lane Lea and onboard a decommissioned WWII warship owned by the Imperial War Museum,” King recalls. “The HMS Belfast, which is moored on the River Thames in central London, was perfect for the reverberant interiors we needed for the various ships that sink in the film. We also secured some realistic Foley of people walking up and down ladders and on the superstructure.” Hugo Weng served as dialog editor and David Bach as supervising ADR editor.

Sounds for Moonstone, the key small boat whose fortunes the film follows across the English Channel, were recorded out of Marina del Rey in Southern California, including its motor and the water slaps against the hull. “We also secured some nice Foley on deck, as well as opening and closing of doors,” King says.

Conventional Foley was recorded at Skywalker Sound in Northern California by Shelley Roden, Scott Curtis and John Roesch. “Good Foley was very important for Dunkirk,” explains King. “It all needed to sound absolutely realistic and not like a Hollywood war movie, with a collection of WWII clichés. We wanted it to sound as it would for the film’s characters. John and his team had access to some great surfaces and textures, and a wonderful selection of props.” Michael Dressel served as supervising Foley editor.

In terms of sound design, King offers that he used historical sounds as a reference, to conjure up the terror of the Battle for Dunkirk. “I wanted it to feel like a well-recorded version of the original event. The book ‘Voices of Dunkirk,’ written by Joshua Levine and based on a compilation of first-hand accounts of the evacuation, inspired me and helped me shape the explosions on the beach, with the muffled ‘boom’ as the shells and bombs bury themselves in the sand and then explode. The under-water explosions needed to sound more like a body slam than an audible noise. I added other sounds that amped it a couple more degrees.”

The soundtrack was re-recorded in 5.1-channel format at Warner Bros. Sound Services Stage 9 in Burbank during a six-week mix, with Gary Rizzo handling dialog and Gregg Landaker overseeing sound effects and music — this was his last film before retiring. “There was almost no looping on the film aside from maybe a couple of lines,” King recalls. “Hugo Weng mined the recordings for every gem, and Gary [Rizzo] was brilliant at cleaning up the voices and pushing them through the barrage of sound provided by sound effects and music, somehow without making them sound pushed. Production recordist Mark Weingarten faced enormous challenges, contending with strong wind and salt spray, but he managed to record tracks Gary could work with.”

The sound designer reports that he provided some 20 to 30 tracks of dialog and ADR “with options for noisy environments,” plus 40 to 50 tracks of Foley, dependent on the action. This included shoes and hob-nailed army boots, and groups of 20, especially in the ship scenes. “The score by composer Hans Zimmer kept evolving as we moved through the mixing process,” says King. “Music editor Ryan Rubin and supervising music editor Alex Gibson were active participants in this evolution.”

“We did not want to repeat ourselves or repeat others’ work,” King concludes. “All sounds in this movie mean something. Every scene had to be designed with a hard-hitting sound. You need to constantly question yourself: ‘Is there a better sound we could use?’ Maybe something different that is appropriate to the sequence, that recreates the event in a new and fresh light? I am super-proud of this film and the track.”

Nolan — who was born in London to an American mother and an English father and whose family subsequently split their time between London and Illinois — has this quote on his IMDB page: “This is an essential moment in the history of the Second World War. If this evacuation had not been a success, Great Britain would have been obliged to capitulate. And the whole world would have been lost, or would have known a different fate: the Germans would undoubtedly have conquered Europe, the US would not have returned to war. Militarily it is a defeat; on the human plane it is a colossal victory.”

Certainly, the loss of life and supplies was profound — wartime Prime Minister Winston Churchill described Operation Dynamo as “the greatest military disaster in our long history.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for sound editing director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at the Technicolor Paramount lot. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”

One challenge of the mix and design was to make sure the audience knew the location of a sound by the texture of it. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we are in the basement, and the music and dialogue is happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound is coming from. Everything we did helped the picture show location.”
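
For readers curious about the mechanics, this kind of perspective treatment can be roughed out with a simple chain of filtering, delay and reverb. The Python sketch below illustrates the general technique only; it is not Porter's actual ReVibe/TL Space chain, and the cutoff, delay and wet/dry values are invented placeholders.

    import numpy as np
    from scipy.signal import butter, lfilter, fftconvolve

    def distant_perspective(x, sr, cutoff_hz=2000.0, delay_ms=30.0, wet=0.4):
        # Muffle: a low-pass filter stands in for console EQ rolling off highs.
        b, a = butter(4, cutoff_hz / (sr / 2), btype="low")
        muffled = lfilter(b, a, x)
        # Delay: pad the start to suggest sound arriving from another room.
        pad = np.zeros(int(sr * delay_ms / 1000.0))
        delayed = np.concatenate([pad, muffled])
        # Reverb: convolve with a synthetic decaying-noise "room."
        t = np.linspace(0, 1.0, sr)
        ir = np.random.randn(sr) * np.exp(-6.0 * t)
        wet_sig = fftconvolve(delayed, ir)[: len(delayed)]
        dry = np.concatenate([x, np.zeros(len(pad))])
        return (1 - wet) * dry + wet * wet_sig

Sweeping the cutoff down and the wet level up as a scene cuts farther from the source is, in spirit, what the console automation is doing.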

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what's happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. "We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy boot on wood floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he's angry. You can feel it in every footstep he takes. He's throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene," says Sullivan.

While the film often feels realistic, there were stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper's house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. "The editing sounds silky smooth. When we transition out of the chanting on-camera, then that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing," says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

"Girls do not do rewrites," says Jim Belushi's character, Wick McFadden, in Amazon Studios' series Good Girls Revolt. It's 1969, and he's the national editor at News of the Week, a fictional news magazine based in New York City. He's confronting the new researcher Nora Ephron (Grace Gummer), who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is, women aren't writers in 1969; they're only "researchers" following leads and gathering facts for the male writers.

When Nora's writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country's cultural climate. "If copy is good, it's good," she argues to Wick, testing the old conventions of workplace gender bias. Wick tells her not to make waves, but it's too late. Nora's actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom they constructed had an open floor plan with a bi-level design. The girls are located in “the pit” area downstairs from the male writers. The newsroom production set was hollow, which caused an issue with the actors’ footsteps that were recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. "We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage," says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work — sound editorial, Foley, ADR, loop group and final mix — was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialogue editor Sean Massey made every effort to save production dialogue when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, "the set mixer Joe Foglia was terrific," says Austin. "He captured some great tracks despite all these issues, and for that we're very thankful!"

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds, like typewriters clacking and dinging, the sound of rotary phones with whirring dials and bell-style ringers, the sound of papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library, from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and helped to blend in the ADR. "Executive producer/series creator Dana Calvo actually worked in an environment like this and so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom," explains Austin. "Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they're trying to break through this male-dominated barrier."

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of "the pit" and the more mellow writers' area. Austin says, "The girls' area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men's raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were 'in the pit' so to speak."

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city's continuous din. "We did a lot of texturing with loop group for the protestors," says Austin. He's worked on several period projects over the years and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. "I've collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It's very distinct. We wanted to sell New York of that time."

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.
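
Revoice Pro's processing is proprietary, but the basic "slowed down and pitched down" effect Austin describes can be approximated with plain resampling: stretching the waveform lowers speed and pitch together, tape-style. A minimal Python sketch, with an invented stretch factor:

    from scipy.signal import resample

    def varispeed_slowdown(x, factor=1.3):
        # Stretching to 1.3x the length plays back about 30% slower and
        # lower in pitch at the original sample rate, a crude stand-in
        # for the real tool.
        return resample(x, int(len(x) * factor))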

"We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out," describes Austin. They also used breathing sounds to draw in the viewer. "This one character, Diane (Hannah Barefoot), has a bad experience. She's crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs."

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialogue/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. "Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing," concludes Austin.

Quick Chat: Monkeyland Audio’s Trip Brock

By Dayna McCallum

Monkeyland Audio recently expanded its facility, including a new Dolby Atmos-equipped mixing stage. The Glendale-based Monkeyland Audio, where fluorescent lights are not allowed and creative expression is always encouraged, now offers three mixing stages, an ADR/Foley stage and six editorial suites.

Trip Brock, the owner of Monkeyland, opened the facility over 10 years ago, but the MPSE Golden Reel Award-winning supervising sound editor and mixer (All the Wilderness) started out in the business more than 23 years ago. We reached out to Brock to find out more about the expansion and where the name Monkeyland came from in the first place…

One of your two new stages is Dolby Atmos certified. Why was that important for your business?
We really believe in the Dolby Atmos format and feel it has a lot of growth potential in both the theatrical and television markets. We purpose-built our Atmos stage looking towards the future, giving our independent and studio clients a less expensive, yet completely state-of-the-art alternative to the Atmos stages found on the studio lots.

Can you talk specifically about the gear you are using on the new stages?
All of our stages are running the latest Avid Pro Tools HD 12 software across multiple Mac Pros with Avid HDX hardware. Our 7.1 mixing stage, Reposado, is based around an Avid Icon D-Control console, and Anejo, our Atmos stage, is equipped with dual 24-fader Avid S6 M40 consoles. Monitoring on Anejo is based on a 3-way JBL theatrical system, with 30 channels of discrete Crown DCi amplification, BSS processing and the DAD AX32 front end.

You’ve been in this business for over 23 years. How does that experience color the way you run your shop?
I stumbled into the post sound business coming from a music background, and immediately fell in love with the entire process. After all these years, having worked with and learned so much from so many talented clients and colleagues, I still love what I do and look forward to every day at the office. That’s what I look for and try to cultivate in my creative team — the passion for what we do. There are so many aspects and nuances in the audio post world, and I try to express that to my team — explore all the different areas of our profession, find which role really speaks to you and then embrace it!

You’ve got 10 artists on staff. Why is it important to you to employ a full team of talent, and how do you see that benefiting your clients?
I started Monkeyland as primarily a sound editorial company. Back in the day, this was much more common than the all-inclusive, independent post sound outfits offering ADR, Foley and mixing, which are more common today. The sound editorial crew always worked together in house as a team, which is a theme I've always felt was important to maintain as our company made the switch into full service. To us, keeping the team intact and working together at the same location allows for a lot more creative collaboration and synergy than, say, a set of editors all working by themselves remotely. Having staff in house also allows us flexibility when last-minute changes are thrown our way. We are better able to work and communicate as a team, which leads to a superior end product for our clients.

Can you name some of the projects you are working on and what you are doing for them?
We are currently mixing a film called The King's Daughter, starring Pierce Brosnan and William Hurt. We also recently completed full sound design and editorial, as well as the native Atmos mix, on a new post-apocalyptic feature we are really proud of called The Worthy. Other recent editorial and mixing projects include the latest feature from director Alan Rudolph, Ray Meets Helen; the 10-episode series Junior for director Zoe Cassavetes; and Three Days To Live, a new eight-episode true-crime series for NBC/Universal.

Most of your stage names are related to tequila… Why is that?
Haha — this is kind of a take-off from the naming of the company itself. When I was looking for a company name, I knew I didn’t want it to include the word “digital” or have any hint toward technology, which seemed to be the norm at the time. A friend in college used to tease me about my “unique” major in audio production, saying stuff like, “What kind of a degree is that? A monkey could be trained to do that.” Thus Monkeyland was born!

Same theory applied to our stage names. When we built the new stages and needed to name them, I knew I didn’t want to go with the traditional stage “A, B, C” or “1, 2, 3,” so we decided on tequila types — Anejo, Reposado, Plata, even Mezcal. It seems to fit our personality better, and who doesn’t like a good margarita after a great mix!

The sounds of Brooklyn play lead role in HBO’s High Maintenance

By Jennifer Walden

New Yorkers are jaded, and one of the many reasons is that just about anything they want can be delivered right to their door: Chinese food, prescriptions, craft beer, dry cleaning and weed. Yes, weed. This particular item is delivered by “The Guy,” the protagonist of HBO’s new series, High Maintenance.

The Guy (played by series co-creator Ben Sinclair) bikes around Brooklyn delivering pot to a cast of quintessentially quirky New York characters. Series creators Sinclair and Katja Blichfeld string together vignettes — using The Guy as the common thread — to paint a realistic picture of Brooklynites.

Nutmeg’s Andrew Guastella. Photo credit: Carl Vasile

“The Guy delivers weed to people, often going into their homes and becoming part of their lives,” explains sound editor/re-recording mixer Andrew Guastella at Nutmeg, a creative marketing and post studio based in New York. “I think that what a lot of viewers like about the show is how quickly you come to know complete strangers in a sort of intimate way.”

Blichfeld and Sinclair find inspiration for their stories from their own experiences, says Guastella, who follows suit in terms of sound. “We focus on the realism of the sound, and that’s what makes this show unique.” The sound of New York City is ever-present, just as it is in real life. “Audio post was essential for texturizing our universe,” says Sinclair. “There’s a loud and vibrant city outside of those apartment walls. It was important to us to feel the presence of a city where people live on top of each other.”

Big City Sounds
That edict for realism drives all sound-related decisions on High Maintenance. On a typical series, Guastella would strive to clean up every noise on the production dialogue, but for High Maintenance, the sounds of sirens, horns, traffic and even car alarms are left in the tracks, as long as they're not drowning out the dialogue. "It's okay to leave sounds in that aren't obtrusive and that sell the fact that they are in New York City," he says.

For example, a car alarm went off during a take. It wasn’t in the way of the dialogue but it did drop out on a cut, making it stand out. “Instead of trying to remove the alarm from the dialogue, I decided to let it roll and I added a chirp from a car alarm, as if the owner turned off the alarm [or locked the car], to help incorporate it into the track. A car alarm is a sound you hear all the time in New York.”

Exterior scenes are acceptably lively, and if an interior scene is feeling too quiet, Guastella can raise a neighborly ruckus. "In New York, there's always that noisy neighbor. Some show creators might be a little hesitant to use that because it could be distracting, but for this show, as long as it's real, Ben and Katja are cool with it," he says. During a particularly quiet interior scene, he tried adding the sounds of cars pulling away and other light traffic to fill up the space, but it wasn't enough, so Guastella asked the creators, "How do you feel about the neighbors next door arguing?" And they said, "That's real. That's New York. Let's try it out."

Guastella crafted a commotion based on his own experience of living in an apartment in Queens. Every night he and his wife would hear the downstairs neighbors fighting. “One night they were yelling and then all we heard was this loud, enormous slam. Hopefully, it was a door,” jokes Guastella. “Ben and Katja are always pulling from their own experiences, so I tried to do that myself with the soundtrack.”

Despite the skill of production sound mixer Dimitri Kouri, and a high tolerance for the ever-present sound of New York City, Guastella still finds himself cleaning dialogue tracks using iZotope's RX 5 Advanced. One of his favorite features is RX Connect. With this plug-in feature, he can select a region of dialogue in his Avid Pro Tools session and send that region directly to iZotope's standalone RX application, where he can edit, clean and process the dialogue. Once he's satisfied, he can return the cleaned-up dialogue right back in sync on the timeline of the Pro Tools session it came from.

“I no longer have to deal with exporting and importing audio files, which was not an efficient way to work,” he says. “And for me, it’s important that I work within the standalone application. There are plug-in versions of some RX tools, but for me, the standalone version offers more flexibility and the opportunity to use the highly detailed visual feedback of its audio-spectrum analyzer. The spectrogram makes using tools like Spectral Repair and De-click that much more effective and efficient. There are more ways to use and combine the tools in general.”

Guastella has been with the series since 2012, during its webisode days on Vimeo. Back then, it was a passion project, something he'd work on at home on his own time. From the beginning, he's handled everything audio: the dialogue cleaning and editing, the ambience builds, the Foley and the final mix. "Andrew [Guastella] brought his professional ear and was always such a pleasure to work with. He always delivered and was always on time," says Blichfeld.

The only aspect that Guastella doesn’t handle is the music. “That’s a combination of licensed music (secured by music supervisor Liz Fulton) and original composition by Chris Bear. The music is well-established by the time the episode gets to me,” he says.

On the Vimeo webisodes, Guastella would work an episode’s soundtrack into shape, and then send it to Blichfeld and Sinclair for notes. “They would email me or we would talk over the phone. The collaborative process wasn’t immediate,” he says. Now that HBO has picked up the series and renewed it for Season 2, Guastella is able to work on High Maintenance in his studio at Nutmeg, where he has access to all the amenities of a full-service post facility, such as sound effects libraries, an ADR booth, a 5.1 surround system and room to accommodate the series creators who like to hang around and work on the sound with Guastella. “They are very particular about sound and very specific. It’s great to have instant access to them. They were here more than I would’ve expected them to be and it was great spending all that time with them personally and professionally.”

In addition to being a series co-creator, co-writer and co-director with Blichfeld, Sinclair is also one of the show's two editors. This meant the creators were being pulled in several directions, which eventually prevented them from spending as much time in the studio with Guastella. "By the last three episodes of this season, I had absorbed all of their creative intentions. I was able to get an episode to the point of a full mix and they would come in just for a few hours to review and make tweaks."

With a bigger budget from HBO, Guastella is also able to record ADR when necessary, record loop group and perform Foley for the show at Nutmeg. “Now that we have a budget and the space to record actual Foley, we’re faced with the question of how much Foley do we want to do? When you Foley sound for every movement and footstep, it doesn’t always sound realistic, and the creators are very aware of that,” says Guastella.

5.1 Surround Mix
In addition to a minimalist approach, another way he keeps the Foley sounding real is by recording it in the real world. In Episode 3, the story is told from a dog’s POV. Using a TASCAM DR 680 digital recorder and a Sennheiser 416 shotgun mic, Guastella recorded an “enormous amount of Foley at home with my Beagle, Bailey, and my father-in-law’s Yorkie and Doberman. I did a lot of Foley recording at the dog park, too, to capture Foley for the dog outside.”

Another difference between the Vimeo episodes and the HBO series is the final mix format. “HBO requires a surround sound 5.1 mix and that’s something that demands the infrastructure of a professional studio, not my living room,” says Guastella. He takes advantage of the surround field by working with ambiences, creating a richer environment during exterior shots which he can then contrast with a closer, confined sound for the interior shots.

"This is a very dialogue-driven show so I'm not putting too much information in the surrounds. But there is so much sound in New York City, and you are really able to play with the perspective of the interior and exterior sounds," he explains. For example, the opening of Episode 3, "Grandpa," follows Gatsby the dog as he enters the front of his house and eventually exits out of the back. Guastella says he was "able to bring the exterior surrounds in with the characters, then gradually pan them from surround to a heavier LCR once he began approaching the back door and the backyard was in front of him."
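
One way to picture that move is as an equal-power crossfade between the surround pair and the front LCR bus, so the overall level stays steady while the image walks forward. A toy Python sketch (the 0-to-1 position control is an assumption for illustration, not Guastella's actual automation):

    import numpy as np

    def surround_to_lcr_gains(position):
        # position: 0.0 = fully in the surrounds, 1.0 = fully in the front LCR.
        # The sine/cosine pair keeps the summed power constant across the pan.
        theta = position * np.pi / 2
        return {"front": np.sin(theta), "surround": np.cos(theta)}

    print(surround_to_lcr_gains(0.5))  # halfway: both buses near 0.707 (-3 dB)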

The series may have made the jump from Vimeo to HBO but the soul of the show has changed very little, and that’s by design. “Ben, Katja, and Russell Gregory [the third executive producer] are just so loyal to the people who helped get this series off the ground with them. On top of that, they wanted to keep the show feeling how it did on the web, even though it’s now on HBO. They didn’t want to disappoint any fans that were wondering if the series was going to turn into something else… something that it wasn’t. It was really important to the show creators that the series stayed the same, for their fans and for them. Part of that was keeping on a lot of the people who helped make it what it was,” concludes Guastella.

Check out High Maintenance on HBO, Fridays at 11pm.


Jennifer Walden is a NJ-based audio engineer and writer. Follow her at @audiojeney.

The sound of sensory overload for Cinemax’s ‘Outcast’

By Jennifer Walden

As a cockroach crawls along the wall, each move is watched intensely by a boy whose white knuckles grip the headboard of his bed. His shallow breaths stop just before he head-butts the cockroach and sucks its bloody remains off the wall.

That is the fantastic opening scene of Robert Kirkman’s latest series, Outcast, airing now on Cinemax. Kirkman, writer/executive producer on The Walking Dead, sets his new horror series in the small town of Rome, West Virginia, where a plague of demonic-like possessions is infecting the residents.

Ben Cook

Outcast supervising sound editor Benjamin Cook, of 424 Post in Culver City, says the opening of the pilot episode featured some of his favorite moments in terms of sound design. Each scrape of the cockroach’s feet, every twitch of its antenna, and the juicy crunch of its demise were carefully crafted. Then, following the cockroach consumption, the boy heads to the pantry and snags a bag of chips. He mindlessly crunches away as his mother and sister argue in the kitchen. When the mother yells at the boy for eating chips after supper, he doesn’t seem to notice. He just keeps crunching away. The mother gets closer as the boy turns toward her and she sees that it’s not chips he’s crunching on but his own finger. This is not your typical child.

“The idea is that you want it to seem like he’s eating potato chips, but somewhere in there you need a crossover between the chips and the flesh and bone of his finger,” says Cook. Ultimately, the finger crunching was a combination of Foley — provided by Jeff Wilhoit, Brett Voss, and Dylan Tuomy-Wilhoit at Happy Feet Foley — and 424 Post’s sound design, created by Cook and his sound designers Javier Bennassar and Charles Maynes. “We love doing all of those little details that hopefully make our soundtracks stand out. I try to work a lot of detail into my shows as a general rule.”

Sensory Overload
While hitting the details is Cook’s m.o. anyway — as evidenced by his Emmy-nominated sound editing on Black Sails — it serves a double purpose in Outcast. When people are possessed in the world of Outcast, we imagine that they are more in tune with the micro details of the human experience. Every touch and every movement makes a sound.

“Whenever we are with a possessed person we try to play up the sense that they are overwhelmed by what they are experiencing because their body has been taken over,” says Cook. “Wherever this entity comes from it doesn’t have a physical body and so what the entity is experiencing inside the human body is kind of a sensory overload. All of the Foley and sound effects are really heightened when in that experience.”

Cook says he’s very fortunate to find shows where he and his team have a lot of creative freedom, as they do on Outcast. “As a sound person that is the best; when you really are a collaborator in the storytelling.”

His initial direction for sound came from Adam Wingard, the director on the pilot episode. Wingard asked for drones and distortion, for hard-edged sounds derived from organic sources. “There are definitely more processed kinds of sounds than I would typically use. We worked with the composer Atticus Ross, so there was a handoff between the music and the sound design in the show.”

Working with a stereo music track from composer Ross, Cook and his team could figure out their palette for the sound design well before they hit the dub stage. They tailored the sound design to the music so that both worked together without stepping on each other’s toes.

He explains that Outcast was similar to Black Sails in that they were building the episodes well before they mixed them. The 424 Post team had time to experiment with the design of key sounds, like the hissing, steaming sound that happens when series protagonist Kyle Barnes (Patrick Fugit) touches a possessed person, and the sound of the entity as it is ejected from a body in a jet of black, tar-like fluid, which then evaporates into thin air. For that sound, Cook reveals that they used everything from ocean waves to elephant sounds to bubbling goo. "The entity was tough because we had to find that balance between its physical presence and its spiritual presence because it dissipates back into its original plane, wherever it came from."

Sound Design and More
When defining the sound design for possessed people, one important consideration was what to do with their voice. Or, in this case, what not to do with their voice. Series creator Kirkman, who gave Cook carte blanche on the majority of the show’s sound work, did have one specific directive: “He didn’t want any changes to happen with their voice. He didn’t want any radical pitch shifting or any weird processing. He wanted it to sound very natural,” explains Cook, who shared the ADR workload with supervising dialogue editor Erin Oakley-Sanchez.

There was no processing to the voices at all. What you hear is what the actors were able to perform, the only exception being Joshua (Gabriel Bateman), an eight-year-old boy who is possessed. For him, the show runners wanted to hear a slight bit of difference to drive home the fact that his body had indeed been taken over. “We have Kyle beating up this kid and so we wanted to make sure that the viewers really got a sense that this wasn’t a kid he was beating up, but that he was beating up a monster,” explains Cook.

To pull off Joshua’s possessed voice, Oakley-Sanchez and Wingard had actor Bateman change his voice in different ways during their ADR session. Then, Cook doubled certain lines in the mix. “The approach was very minimalistic. We never layered in other animal sounds or anything like that. All of the change came from the actor’s performance,” Cook says.

Cook is a big proponent of using fresh sounds in his work. He used field recordings captured in Tennessee, Virginia and Florida to build the backgrounds. He recorded hard effects like doors, body hits and furniture crashing and breaking. There were other elements used as part of the sound design, like wind and water recordings. In Sound Particles — CGI-like software for sound design created by Nuno Fonseca — he was able to manipulate and warp sound elements to create unique sounds.

"Sound Particles has a really great UI to it, like virtual mics you can place and move to record things in a virtual 3D environment. It lets you create multiple instances of sound very easily. You can randomize things like pitch and timing. You can also automate the movements and create little vignettes that can be rendered out as a piece of audio that you can bring into Pro Tools or Nuendo or other audio workstations. It's a very fascinating concept and I've been using it a lot."
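
Sound Particles itself is closed software, but the core idea Cook describes (many instances of a source with randomized pitch and timing) can be mocked up in a few lines of Python. The copy count and random ranges below are invented purely for illustration:

    import numpy as np
    from scipy.signal import resample

    def particle_layer(x, sr, n=20, max_offset_s=2.0, pitch_spread=0.2, seed=0):
        rng = np.random.default_rng(seed)
        out = np.zeros(int(len(x) * (1 + pitch_spread)) + int(sr * max_offset_s))
        for _ in range(n):
            # Randomize each copy's pitch/speed by resampling...
            factor = 1.0 + rng.uniform(-pitch_spread, pitch_spread)
            copy = resample(x, int(len(x) * factor))
            # ...and randomize its start time.
            start = int(rng.uniform(0, max_offset_s) * sr)
            out[start : start + len(copy)] += copy / n
        return out

Randomized layering like this is how one recording of a door, a wing flap or a voice can become a believable swarm or crowd.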

Cook enjoys building rich backgrounds in shows, which he uses to help further the storyline. For example, in Episode 2 the police chief and his deputy take a trek through the woods and find an abandoned trailer. Cook used busier tracks with numerous layers of sounds at first, but as the chief and deputy get farther into the woods and closer to the abandoned trailer, the backgrounds become sparser and eerily quiet. Another good example happens in Episode 9, where there is a growing storm that builds throughout the whole episode. “It’s not a big player, just more of a subtext to the story. We do really simple things that hopefully translate and come across to people as little subtleties they can’t put their finger on,” says Cook.

Outcast is mixed in 5.1 by re-recording mixers Steve Pederson (dialogue/music) and Dan Leahy (effects/Foley/backgrounds) via Sony Pictures Post at Deluxe in Hollywood. Cook says, "They are super talented mixers who mostly do a lot of feature films and so they bring a theatrical vibe to the series."

New episodes of Outcast air Fridays at 10pm on Cinemax, with the season finale on August 12th. Outcast has been renewed for Season 2, and while Cook doesn't have any inside info on where the show will go next season, he says, "At the end of Season 1, we're not sure if the entity is alien or demonic, and they don't really give it away one way or another. I'm really excited to see what they do in Season 2. There is lots of room to go either way. I really like the characters, like the Reverend and Kyle — both have really great back stories. They're both so troubled and flawed and there is a lot to build on there."

Jennifer Walden is a New Jersey-based audio engineer and writer.

Silver Sound opens audio-focused virtual reality division

By Randi Altman

New York City’s Silver Sound has been specializing in audio post and production recording since 2003, but that’s not all they are. Through the years, along with some Emmy wins, they have added services that include animation and color grading.

When they see something that interests them, they investigate and decide whether or not to dive in. Well, virtual reality interests them, and they recently dove in by opening a VR division specializing in audio for 360 video, called SilVR. Recent clients include Google, 8112 Studios/National Geographic and AT&T.

Stories From The Network: 360° Race Car Experience for AT&T

I reached out to Silver Sound sound editor/re-recording mixer Claudio Santos to find out why now was the time to invest in VR.

Why did you open a VR division? Is it an audio-for-VR entity or are you guys shooting VR as well?
The truth is we are all a bunch of curious tinkerers. We just love to try different things and to be part of different projects. So as soon as 360 videos started appearing in different platforms, we found ourselves individually researching and testing how sound could be used in the medium. It really all comes down to being passionate about sound and wanting to be part of this exciting moment in which the standards and rules are yet to be discovered.

We primarily work with sound recording and post production audio for VR projects, but we can also produce VR projects that are brought to us by creators. We have been making small in-house shoots, so we are familiar with the logistics and technologies involved in a VR production and are more than happy to assist our clients with the knowledge we have gained.

What types of VR projects do you expect to be working on?
Right now we want to work on every kind of project. The industry as a whole is still learning what kind of content works best in VR and every project is a chance to try a new facet of the technology. With time we imagine producers and post production houses will naturally specialize in whichever genre fits them best, but for us at least this is something we are not hurrying to do.

What tools do you call on?
For recording we make use of a variety of ambisonic microphones that allow us to record true 360 sound on location. We set up our rig wirelessly so it can be untethered from cables, which are a big problem in a VR shoot where you can see in every direction. Besides the ambisonics we also record every character ISO with wireless lavs so that we have as much control as possible over the dialogue during post production.

Robin Shore uses a phone to control the 360 video on screen; the tracker on his head simulates the effect of moving around without a full headset.

For editing and mixing we do most of our work in Reaper, a DAW that has very flexible channel routing and non-standard multichannel processing. This allows us to comfortably work with ambisonics as well as mix formats and source material with different channel layouts.

To design and mix our sounds we use a variety of specialized plug-ins that give us control over the positioning, focus and movement of sources in the 360 sound field. Reverberation is also extremely important for believable spatialization, and traditional fixed-channel reverbs are usually unconvincing once you are in a 360 field. Because of that, we usually make use of convolution reverbs using ambisonic impulse responses.
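
Under the hood, convolution reverb is exactly what the name says: the dry signal convolved with a recorded impulse response. With an ambisonic impulse response, the source is convolved once per channel, so the reverb tail keeps the measured room's directionality. A bare-bones Python sketch, assuming a first-order (four-channel) B-format IR already loaded as a NumPy array:

    import numpy as np
    from scipy.signal import fftconvolve

    def ambisonic_reverb(dry_mono, ir_bformat):
        # ir_bformat: shape (4, n) impulse response in W/X/Y/Z channel order.
        # One convolution per channel yields a B-format reverb tail that
        # preserves the directions the recorded reflections arrived from.
        return np.stack([fftconvolve(dry_mono, ir_bformat[c]) for c in range(4)])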

When it comes to monitoring the video, everyone in the room wears headphones, even with multiple clients present. At first this seemed very weird, but it's important since that's the best way to reproduce what the end viewer will be experiencing. We have also devised a way for clients to use a separate controller to move the view around in the video during playback and editing. This gives a lot more freedom and makes the reviewing process much quicker and more dynamic.

How different is working in VR from traditional work? Do you wear different hats for different jobs?
That depends. While technically it is very different, with a whole different set of tools, technologies and limitations, the craft of designing good sound that aids in the storytelling and that immerses the audience in the experience is not very different from traditional media.

The goal is to affect the viewer emotionally and to transmit pieces of the story without making the craft itself apparent, but the approaches necessary to achieve this in each medium are very different because the final product is experienced differently. When watching a flat screen, you don’t need any cues to know where the next piece of essential action is going to happen because it is all contained by a frame that is completely in your field of view. That is absolutely not true in VR.

The user can be looking in any direction at any given time, so the sound often fills in the role of guiding the viewer to the next area of interest, and this reflects on how we manipulate the sounds in the mix. There is also a bigger expectation that sounds will be more realistic in a VR environment because the viewer is immersed in an experience that is trying to fool them into believing it is actually real. Because of that, many exaggerations and shorthands that are appropriate in traditional media become too apparent in VR projects.

So instead of saying we need to put on different hats when tackling traditional media or VR, I would say we just need a bigger hat that carries all we know about sound, traditional and VR, because neither exists in isolation anymore.

I am assuming that getting involved in VR projects as early as possible is hugely helpful to the audio. Can you explain?
VR shoots are still in their infancy. There's a whole new set of rules and standards, and a whole lot of experimentation that we are all still figuring out as an industry. Often a particular VR filming challenge is not only new to the crew but completely new in the sense that it might not have ever been done before.

In order to figure out the best creative and technical approaches to all these different situations it is extremely helpful to have someone on the team thinking about sound, otherwise it risks being forgotten and then the project is doomed to a quick fix in post, which might not explore the full potential of the medium.

This doesn't even take into consideration that the tools still often need to be adapted and tailored to fit the needs of a particular project, simply because new use cases are being discovered daily. This tailoring and exploration takes time and knowledge, so only by bringing a sound team early on into the project can they fully prepare to record and mix the sound without cutting corners.

Another important point to take into consideration is that the delivery requirements are still largely dependent on the specific platform selected for distribution. Technical standards are only now starting to be created and every project’s workflows must be adapted slightly to match these specific delivery requirements. It is much easier and more effective to plan the whole workflow with these specific requirements in mind than it is to change formats when the project is already in an advanced state.

What do clients need to know about VR that they might take for granted?
If we had to choose one thing to mention it would be that placing and localizing sounds in post takes a lot of time and care because each sound needs to be placed individually. It is easy to forget how much longer this takes than the traditional stereo or even surround panning because every single diegetic sound added needs to be panned. The difference might be negligible when dealing with a few sound effects, but depending on the action and the number of moving elements in the experience, it can add up very quickly.
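
The cost comes from the encode itself: "placing" a mono source means computing a separate gain for every ambisonic channel, per sound and per move. As a concrete illustration, a first-order B-format encode with the classic FuMa gains looks like this in Python:

    import numpy as np

    def encode_first_order(mono, az, el):
        # One mono source becomes four channels: W, X, Y, Z.
        w = mono * (1 / np.sqrt(2))          # omnidirectional component
        x = mono * np.cos(az) * np.cos(el)   # front-back
        y = mono * np.sin(az) * np.cos(el)   # left-right
        z = mono * np.sin(el)                # up-down
        return np.stack([w, x, y, z])

Every diegetic sound needs its own azimuth and elevation, and usually moving automation on top, which is why panning time scales with the number of sources.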

Working with sound for VR is still largely an area of experimentation and discovery, and we like to collaborate with our clients to ensure that we all push the limits of the medium. We are very open about our techniques and are always happy to explain what we do to our clients because we believe that communication is the best way to ensure all elements of a project work together to deliver a memorable experience.

Our main recent project is Red Velvet, for production company Station Film.

Larson Studios pulls off an audio post slam dunk for FX’s ‘Baskets’

By Jennifer Walden

Turnarounds for TV series are notoriously fast, but imagine a three-day sound post schedule for a single-camera half-hour episodic series? Does your head hurt yet? Thankfully, Larson Studios in Los Angeles has its workflow on FX’s Baskets down to a science. In the show, Zach Galifianakis stars as Chip Baskets, who works as a California rodeo clown after failing out of a prestigious French clown school.

So how do you crunch a week and a half’s worth of work into three days without sacrificing quality or creativity? Larson’s VP, Rich Ellis, admits they had to create a very aggressive workflow, which was made easier thanks to their experience working with Baskets post supervisor Kaitlin Menear on a few other shows.

Ellis says having a supervising sound editor — Cary Stacy — was key in setting up the workflow. “There are others competing for space in this market of single-camera half-hours, and they treat post sound differently — they don’t necessarily bring a sound supervisor to it. The mixer might be cutting and mixing and wrangling all of the other elements, but we felt that it was important to continue to maintain that traditional sound supervisor role because it actually helps the process to be more efficient when it comes to the stage.”

John Chamberlin and Cary Stacy

This allows re-recording mixer John Chamberlin to stay focused on the mix while sound supervisor Stacy handles any requests that pop up on stage, such as alternate lines or options for door creaks. "I think director Jonathan Krisel gave Cary at least seven honorary Emmy awards for door creaks over the course of our mix time," jokes Menear. "Cary can pull up a sound effect so quickly, and it is always exactly perfect."

Every second counts when there are only seven hours to mix an episode from top to bottom before post producer Menear, director Krisel and the episode’s picture editor join the stage for the two-hour final fixes and mix session. Having complete confidence in Stacy’s alternate selections, Chamberlin says he puts them into the session, grabs the fader and just lets it roll. “I know that Cary is going to nail it and I go with it.”

Even before the episode gets to the stage, Chamberlin knows that Stacy won't overload the session with unnecessary elements, which are time-consuming. Even still, Chamberlin says the mix is challenging in that it's a lot for one person to do. "Although there is care taken to not overload what is put on my plate when I sit down to mix, there are still 8 to 10 tracks of Foley, 24 or more tracks of backgrounds and, depending on the show, the mono and stereo sound effects can be 20 tracks. Dialogue is around 10 and music can be another 10 or 12, plus futz stuff, so it's a lot. You have to have a workflow that's efficient and you have to feel confident about what you're doing. It's about making decisions quickly."

Chamberlin mixed Baskets in 5.1 — using a Pro Tools 11 system with an Avid ICON D-Command — on Stage 4 at Larson Studios, where he’s mixed many other shows, such as Portlandia, Documentary Now, Man Seeking Woman, Dice, the upcoming Netflix series Easy, Comedy Bang Bang, Meltdown With Jonah and Kumail and Kroll Show. “I’m so used to how Stage 4 sounds that I know when the mix is in a good place.”

Another factor in the three-day turnaround is choosing to forgo loop group and to record ADR only when absolutely necessary. The post sound team relied on location sound mixer Russell White to capture all the lines as clearly as possible on set, which was a bit of a challenge with the non-principal characters.

Tricky On-Set Audio
According to Menear, director Krisel loves to cast non-actors in the majority of the parts. “In Baskets, outside of our three main roles, the other people are kind of random folk that Jonathan has collected throughout his different directing experiences,” she says. While that adds a nice flavor creatively, the inexperienced cast members tend to step on each other’s lines, or not project properly — problems you typically won’t have with experienced actors.

For example, Louie Anderson plays Chip’s mom Christine. “Louie has an amazing voice and it’s really full and resonant,” explains Chamberlin. “There was never a problem with Louie or the pro actors on the show. The principals were very well represented sonically, but the show has a lot of local extras, and that poses a challenge in the recording of them. Whether they were not talking loud enough or there was too much talking.”

A good example is the Easter brunch scene in Episode 104. Chip, his mother and grandmother encounter Martha (Chip’s insurance agent/pseudo-friend played by Martha Kelly) and her parents having brunch in the casino. They decide to join their tables together. “There were so many characters talking at the same time, and a lot of the side characters were just having their own conversations while we were trying to pay attention to the main characters,” says Stacy. “I had to duck those side conversations as much as possible when necessary. There was a lot of that finagling going on.”

Stacy used iZotope RX 5 features like Decrackle and Denoise to clean up the tracks, as well as the Spectral Repair feature for fixing small noises.

Multiple Locations
Another challenge for sound mixer White was that he had to quickly shoot in numerous locations for any given episode. That Easter brunch episode alone had at least eight different locations, including the casino floor, the casino’s buffet, inside and outside of a church, inside the car, and inside and outside of Christine’s house. “Russell mentioned how he used two rigs for recording because he would always have to just get up and go. He would have someone else collect all of the gear from one location while he went off to a new location,” explains Chamberlin. “They didn’t skimp on locations. When they wanted to go to a place they would go. They went to Paris. They went to a rodeo. So that has challenges for the whole team — you have to get out there and record it and capture it. Russell did a pretty fantastic job considering where he was pushed and pulled at any moment of the day or night.”

Sound Effects
White’s tracks also provided a wealth of production effects, which were a main staple of the sound design. The whole basis for the show, for picture and sound, was to have really funny, slapstick things happen, but have them play really straight. “We were cutting the show to feel as real and as normal as possible, regardless of what was actually happening,” says Menear. “Like when Chip was walking across a room full of clown toys and there were all of these strange noises, or he was falling down, or doing amazing gags. We played it as if that could happen in the real world.”

Stacy worked with sound effects editor TC Spriggs to cut in effects that supported the production effects, never sounding too slapstick or over the top, even if the action was. “There is an episode where Chip knocks over a table full of champagne glasses and trips and falls. He gets back up only to start dancing, breaking even more glasses,” describes Chamberlin.

That scene was a combination of effects and Foley provided by Larson’s Foley team of Adam De Coster (artist) and Tom Kilzer (recordist). “Foley sync had to be perfect or it fell apart. Foley and production effects had to be joined seamlessly,” notes Chamberlin. “The Foley is impeccably performed and is really used to bring the show to life.”

Spriggs also designed the numerous backgrounds. Whether it was the streets of Paris, the rodeo arena or the doldrums of Bakersfield, all the locations needed to sound realistic and simple yet distinct. On the mix side, Chamberlin used processing on the dialogue to help sell the different environments – basic interiors and exteriors, the rodeo arena and backstage dressing room, Paris nightclubs, Bakersfield dive bars, an outdoor rave concert, a volleyball tournament, hospital rooms, dream-like sequences and a flashback.

“I spent more time on the dialogue than any other element. Each place had to have its own appropriate sounding environments, typically built with reverbs and delays. This was no simple show,” says Chamberlin. For reverbs, Chamberlin used Avid’s ReVibe and Reverb One, and for futzing, he likes McDSP’s FutzBox and Audio Ease’s Speakerphone plug-ins.
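
Futz plug-ins like FutzBox and Speakerphone bundle distortion, codec artifacts and speaker impulse responses, but the heart of a telephone- or PA-style futz is a band-pass filter that strips the lows and highs. A stripped-down Python sketch (the 300 Hz to 3.4 kHz band is the classic telephone range, used here purely as an example):

    from scipy.signal import butter, sosfilt

    def futz(x, sr, low_hz=300.0, high_hz=3400.0):
        # Confine the signal to a narrow "small speaker" band.
        sos = butter(4, [low_hz / (sr / 2), high_hz / (sr / 2)],
                     btype="band", output="sos")
        return sosfilt(sos, x)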

One of Chamberlin's favorite scenes to mix was Chip's performance at the rodeo, where he does his last act as his French clown alter ego Renoir. Chip walks into the announcer booth with a gramophone and asks for a special song to be played. Chamberlin processed the music to account for the variable pitch of the gramophone, and also processed the track to sound like it was coming over the PA system. In the center of the ring you can hear the crowds and the announcer, and off-screen a bull snorts and grinds its hooves into the dirt before rushing at Chip.

Another great sequence happens in the Easter brunch episode, where we see Chip walking around the casino listening to a "Learn French" lesson through ear buds while smoking a broken cigarette and dreaming of being Renoir the clown on the streets of Paris. This scene summarizes Chip's sad clown situation in life. It's thoughtful, charming and lonely.

"We experimented with elaborate sound design for the voice of the narrator; however, we landed on keeping things relatively simple with just an iPhone futz," says Stacy. "I feel this worked out for the best, as nothing in this show was overdone. We brought in some very light backgrounds for Paris and tried to keep the transitions as smooth as possible. We actually had a very large build for the casino effects, but played them very subtly."

Adds Chamberlin, “We really wanted to enhance the inner workings of Chip and to focus in on him there. It takes a while in the show to get to the point where you understand Chip, but I think that is great. A lot of that has to do with the great writing and acting, but our support on the sound side, in particular on that Easter episode, was not to reinvent the wheel. Picture editors Micah Gardner and Michael Giambra often developed ideas for sound, and those had a great influence on the final track. We took what they did in picture editorial and just made it more polished.”

The post sound process on Baskets may be down and dirty, but the final product is amazing, says Menear. “I think our Larson Studios team on the show is awesome!”

Review: Avid Pro Tools 12

By Ron DiCesare

In 1990, I was working at a music studio where I did a lot of cut downs of 60s, 30s, 15s and 10s for TV and radio commercials. Back then we used ¼-inch analog tape with a razor blade to physically cut the tape. Since I did so many ¼-inch tape edits, the studio manager was forward-thinking enough to introduce a new 2-track digital editing system by Digidesign called Sound Tools. I took to it like a fish takes to water since I was already using computers, MIDI sequencers and drum machines — even replacing chips in drum machines — which is fitting since that is how Peter Gotcher and Evan Brooks started Digidesign back in 1984. (See my History of Audio Post here.)

A short time later, Pro Tools was introduced and everyone at the studio thought it was simply an upgrade to Sound Tools with a different name. We purchased the first available version of Pro Tools and launched it to discover that there were now 4 audio tracks instead of 2. My first thought was, "Oh no, what am I going to do with the 2 extra tracks?!" Fearing the worst, my second thought was, "Oh shit, I bet this thing no longer does crossfades and I will have to use those two extra tracks to 'ping pong' from one set of tracks to the other for fades." Thankfully, I quickly realized that not only could Pro Tools 1.0 do crossfades, but it could do a lot more, including revolutionizing the entire audio industry.

During my long history of working on Sound Tools and Pro Tools, I have seen all of the advancements with the software firsthand. I am pleased to say that Avid's latest version of Pro Tools, 12.3, includes some of the most helpful improvements yet.

Offerings and Pricing Options
Avid now offers its most flexible pricing ever for Pro Tools 12 — there are three different ways to purchase or upgrade. Just like before, Pro Tools can be purchased or upgraded outright, which is called a perpetual license. Don't let the word license scare you; it is still a one-time purchase. In addition to the perpetual license, there are two new ways to lease Pro Tools: on a monthly basis or on an annual subscription basis. This is an interesting step for Avid. The advantage to both types of subscriptions is that the user is eligible for all of the upgrades and tech support included with their subscription. This is an excellent way to ensure your program is always up to date, with bug fixes made along the way.

Offering such pricing flexibility does create a bit of confusion about which options are available, since there are three versions of Pro Tools, and pricing differs for first-time purchasers versus preexisting users who are upgrading.

The first available option is called Pro Tools First, a free version that is ideal for anyone looking to get on board with Pro Tools for the first time. However, to take full advantage of Pro Tools 12, the version reviewed here, you would need to purchase one of the two main versions, Pro Tools 12 or Pro Tools|HD 12.

Here is how the pricing breaks down: Pro Tools 12 Perpetual Licensing (AKA purchase outright) is $599. The monthly subscription with upgrade plan is $29.99 per month. The annual subscription with upgrade plan is $24.92 per month (or $299 annually).

Pricing can vary according to your situation, such as if you own previous versions or have let too much time lapse between upgrades. Suffice it to say that whatever your unique situation is, there is a purchase plan for you.

What’s Not New
The one thing product reviews rarely, if ever, cover is what has not changed. To me, what hasn’t changed is the first thing I want to know when I am working with any new version of existing software. I cannot stress enough the importance of being able to quickly and easily pick up exactly where I left off after upgrading. Unfortunately, I know how often a software’s new features can make my old way of working obsolete.

I can’t help but think of a notable recent example: the upgrade to FCP X no longer supported OMF for audio exports. What were they thinking? Keeping previous workflows intact is an extremely important issue to me. Immediately after my upgrade from Pro Tools 10 to Pro Tools|HD 12, I launched a session and it worked exactly as it did in version 10, eliminating any downtime for me.

One thing that is not new but is extremely important to mention is the switch from the original Digidesign Audio Engine to the Avid Audio Engine. This happened in Pro Tools 11. Even with the change to the Avid Audio Engine, I was not forced to abandon my old workflow. The advantage of the Avid Audio Engine is key — among other things, this is what allows for the long-overdue offline bounce, or faster-than-realtime bounce. And for anyone who is still on Pro Tools 10 or below, the offline bounce is a major reason to move to Pro Tools 12.

Because everyone uses Pro Tools in so many different and complex ways, I encourage you to visit Avid’s website, www.avid.com, for a list of all of the new and improved functions. There are too many new features and improvements to list each one in this review. That is why I came up with a list of my 12 favorite new features of Pro Tools|HD 12.

My 12 Favorite New Features of Pro Tools 12
1. Avid Application Manager. There is a new icon at the top of your screen called the Avid Application Manager. Clicking on it will launch a window allowing you to log into your account, keep up with any updates and view a list of any uninstalled plug-ins available, along with your support options. You can also verify what type of license you have and when it was activated. This is helpful if you have the month-to-month or annual subscription so you can see when your next renewal is. Even with the perpetual license, you can still see what upgrades and bug fixes are available at any time.

2. Buy or Rent Plug-ins. One very cool new feature is the option to buy or rent any plug-in from a new menu option directly in Pro Tools called The Marketplace. This is particularly useful if you are opening another person’s session that has used a plug-in you do not own or if you are opening your session at a studio where they do not own a particular plug-in that you have at your studio. The rent option is a great way to access any missing plug-ins without having to commit to them fully.

3. Pitch Shift Legacy. Call me crazy, but I am thrilled that Avid has included the original version of Pitch Shift in the audio suite. In Pro Tools 11, Pitch Shift was changed to a piano keyboard-based plug-in called Pitch 2. As cool as it is to base your work on the piano keyboard in Pitch 2, I missed some of the basic features found only in the original version. I am pleased to say that Avid now offers both versions of Pitch Shift in the audio suite — the new piano keyboard-based version and the original, now called Pitch Shift Legacy.

4. Track Commit. Track Commit is used for converting virtual instruments to audio files, and it can be used for saving processing power overall. Even if you do not use virtual instruments, it still can be a very useful function, offering you the option to “print” your plug-ins to the audio track. This is a great way of saving processing and plug-in power. You can also render your automation, including panning. All of this saves processing power and any possible confusion if someone else is working on your session down the line.

5. Clip Transparency. Some people may remember the days of ¼-inch tape editing that I mentioned at the start of this article. Back then, audio editing had to be done solely with your ears. When Sound Tools and Pro Tools came along, editing became a visual skill, too. Clip Transparency takes visual editing one step further. It allows you to see two clips superimposed over each other while moving them on the same audio track. This is ideal for anyone who needs to line up a new clip with an old clip, such as when doing ADR.

The best part is it’s not only for seeing two different clips overlaid at the same time; it can be used when you are moving a single region or clip along your audio track. Clip Transparency allows you to see the old position superimposed with the new position of the same clip while you are shifting it for comparison.

It is perfect for those countless times when I have zoomed in past the start of the clip and can’t see how much I am moving the clip relative to its old position. Clip Transparency now allows me to see how much I am shifting the audio, no matter what my zoom setting is. I never knew how much I needed this feature until I saw it in action. Clip Transparency is by far my favorite new feature of Pro Tools 12.

6. Batch Fade and Fade Presets. When you are working with multiple audio clips on your timeline, fading each of the clips can be time consuming, especially if each fade needs to be treated differently. Now with Batch Fade, you can create presets for fade-ins, fade-outs and crossfades. When multiple audio clips are selected, a much larger dialog window pops up with many more options to choose from. Of course, fading between two clips can still be done the old way, and the fade dialog box works the same as in previous versions. The new Batch Fade is an additional function that allows you to be more selective and have more options for your fades. Batch Fade is a great example of how your old workflow is preserved while new features are still added.
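
For anyone curious about what a crossfade actually computes, here is a minimal sketch of an equal-power crossfade in Python/NumPy. This is purely illustrative, not Avid’s implementation:

    import numpy as np

    def equal_power_crossfade(clip_a, clip_b, fade_len):
        # Equal-power (sine/cosine) gain curves keep perceived
        # loudness roughly constant through the overlap region.
        t = np.linspace(0.0, np.pi / 2.0, fade_len)
        fade_out = np.cos(t)  # gain applied to the outgoing clip
        fade_in = np.sin(t)   # gain applied to the incoming clip
        overlap = clip_a[-fade_len:] * fade_out + clip_b[:fade_len] * fade_in
        return np.concatenate([clip_a[:-fade_len], overlap, clip_b[fade_len:]])

Because cos^2 + sin^2 = 1, the summed power stays constant across the overlap, which is why equal-power curves are a common default for program material.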

7. The Dashboard. Launching a session now includes the Dashboard window at the start, which is an updated version of the Quick Start menu. You can quickly and easily see all of the available templates and your recent sessions. And, of course, you can create a new blank session. I like the new look and feel of Dashboard compared to Quick Start.

8. iPad Control. Pro Tools | Control is a free app now available in the App Store. iPad control is made possible by the introduction of EuControl v.3.3, which is the driver needed for your workstation. EuControl is a free download using your Avid account after you complete the registration in the Pro Tools | Control iOS app. Even though I do not own an iPad, I can see the advantage of controlling Pro Tools via the iPad when I am monitoring a mix at a distance from my DAW.

Mixing a film, for example, would be a great use of the iPad control since that would allow me to sit back farther away from the speakers, thus simulating the distance of the listener in a movie theater. Today, the line between phones and tablets is blurred with the introduction of the “phablet.” As it stands now, the app is only available for iPad. I suspect that will change in the future, but I have no confirmation of that.

9. Included virtual musical instruments. The latest versions of Xpand II and the First AIR Instruments Bundle are included with Pro Tools 12. Quite simply, I am blown away by how amazing these instruments sound. I have been a musician all of my life, but surprisingly I have never used any virtual instruments via MIDI in Pro Tools. I have always opted for a dedicated composing program for MIDI, dating way back to Studio Vision Pro (for those of you old enough to remember how cool that program was).

I know there are plenty of third-party virtual instruments available for Pro Tools, but these two instrument bundles included with Pro Tools 12 have really opened my eyes. Before Pro Tools 12, I found myself sharing and swapping files between a MIDI program (for me it’s Apple Logic) and Pro Tools. I have always preferred using a dedicated program for MIDI outside of Pro Tools, but now I am an instant convert to using only Pro Tools for MIDI, thanks to these versions of Xpand II and the First AIR Instruments Bundle.

Please visit Avid’s website for a list of the specifics, but some of my favorite virtual instruments are the acoustic pianos, synth basses and of course anything drums or percussion related.

10. Updated I/O and flexibility. I work mostly on TV commercials and media specifically for the web, so I am rarely asked to do surround sound mixing, especially anything in 7.1. Therefore I am not able to explore any of the new surround features, including the new templates for 7.1 mixing.

Even so, I can still mention the addition of the Default Monitor path in Pro Tools 12. Pro Tools will automatically downmix or upmix your session’s monitor path to the studio’s monitor path. For example, if an HD session is saved with a 5.1 monitor path and then opened on a system that only has a stereo monitor path available, the session’s 5.1 monitor path is automatically downmixed to the system’s stereo monitor outputs. This makes for even more flexibility when swapping sessions from one studio to another, regardless of whether or not there are surround sound monitoring capabilities.
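
Conceptually, a 5.1-to-stereo fold-down is just a weighted sum of channels. Here is a minimal sketch using the common ITU-style -3dB coefficients; these are typical values, not necessarily the exact weights Pro Tools applies:

    import numpy as np

    def downmix_51_to_stereo(ch):
        # ch is a dict of NumPy arrays: L, R, C, LFE, Ls, Rs.
        # Center and surrounds are folded in at -3 dB (0.707);
        # the LFE channel is conventionally omitted.
        g = 0.7071
        left = ch["L"] + g * ch["C"] + g * ch["Ls"]
        right = ch["R"] + g * ch["C"] + g * ch["Rs"]
        return np.stack([left, right], axis=-1)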

Another improvement relating to the I/O and surround capabilities is the addition of virtually unlimited busses. This will help anyone who has used up or exceeded previously allowed bus limitations when mixing in surround. The new Commit feature supports multichannel set-ups, which can improve your surround workflow.

And for any of the larger audio post facilities that may use Pro Tools in a much more complex way, such as getting several edit rooms to integrate, sync and play together, there are improvements in the Satellite Link workflow. These include a reset network button and transmit/receive play selection buttons in the transport window.

11. Track Bounce. Track Bounce is another feature I didn’t know I needed that much until I started using it. It is not to be confused with Track Commit. Track Bounce gives you the ability to select and bounce tracks or auxes as audio files when exporting. This can be one track, all the tracks or any combination of the tracks done in one single bounce.

For example, if you select a music track, a VO track and an FX track, you will get all three tracks as three discrete audio files in one single bounce using Track Bounce. This is essential for anyone who has to make splits or stems, especially in long format.

Imagine you have an hour program where you have a music track, a VO track and a sound effect track. In the past, you had to bounce each element as one realtime bounce three separate times. That meant it would take over three hours to complete. With Track Bounce in the offline bounce mode, you can output your stems in one single step in just minutes.
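
The time savings are easy to quantify. Here is a quick back-of-the-envelope comparison in Python, assuming a hypothetical 20x faster-than-realtime offline bounce speed:

    program_minutes = 60
    stems = 3  # music, VO, sound effects

    # Realtime: one full-length pass per stem
    realtime_total = program_minutes * stems      # 180 minutes

    # Offline: all selected stems render in a single pass,
    # at a hypothetical 20x realtime speed
    offline_total = program_minutes / 20          # 3 minutes

    print(realtime_total, offline_total)          # 180 3.0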

One friendly reminder: if you are using Track Bounce with any layered tracks, such as sound effects or music tracks, it will bounce each track as its own separate track rather than a mix of the specific layers. For example, selecting 10 tracks will result in 10 discrete audio files with one bounce, so it is important to know when Track Bounce is useful for you and when it is not.

12. Included Plug-ins. Of course, Pro Tools 12 is all about the plug-ins, and there are more plug-ins included than ever. This includes the First AIR Effects Bundle, Eleven Effects and Space. I find that I rarely use any third-party plug-ins since I am often going from studio to studio on a single project. Outside of noise reduction and LKFS metering, I rarely find the need to use anything other than the Avid plug-ins included with Pro Tools 12.

Cloud Collaboration and Avid Everywhere
In the near future, Avid will be offering Cloud Collaboration and Avid Everywhere. Avid will finally offer the ability to work on Pro Tools remotely using media located on a central cloud server accessible anywhere there is Internet access. When introduced, Cloud Collaboration will allow people in separate locations to access the same Pro Tools 12 session to share and update files instantly. This is perfectly suited for musicians collaborating on a song who do not live near each other.

More exciting to me is the potential of Cloud Collaboration to change the way we work in audio post by allowing access to all of your media remotely. This could benefit any audio facility that has multiple rooms with multiple engineers switching from room to room. Using Cloud Collaboration, there will be one central location for all your media accessible from any audio room. For engineers who need to switch rooms when working on a project, this will eliminate any file transfers or media dumps.

But I think the biggest benefit will be for any audio engineer like myself who is often working on a single project at multiple locations over the duration of the project. I am often working from my home studio, my client’s studio and a large audio post facility on the same project spread over several days, weeks or months. Each time I change studios, I have to make sure I transfer all of my sessions from one place to another using a flash drive, WeTransfer, Google Drive, etc. I have tried them all and they are all time consuming. And with multiple versions and constant audio revisions, it is very easy to lose track of which version is the most current and where it is.

Cloud Collaboration will solve this issue with one central location where I can access my session from anywhere that has Internet access. This is a giant leap forward, and I am looking forward to exploring it in-depth in a future review here on postPerspective.

Ron DiCesare is an audio pro whose spot work includes TV campaigns for Purina, NJ Lotto and Beggin’ Strips. His indie film work includes Con Artist, BAM 150 and Fishing without Nets. He is also involved with audio post for Vice Media on their news reports and web series, including Vice on HBO. You can contact him at rononizer@gmail.com.

London’s Halo adds dubbing suite

Last month, London’s Halo launched a dubbing suite, Studio 5, at its Noel Street facility. The studio is suited for TV mix work across all genres, as well as for DCP 5.1 and 7.1 theatrical projects, or as a pre-mix room for Halo’s Dolby Features licensed Studios 1 and 3. The new room is also pre-wired for Dolby Atmos.

The new studio features an HDX2 Pro Tools|HD 12 system, a 24-fader Avid S6 M40 and a custom Dynaudio 7.1 speaker system. This is all routed via a Colin Broad TMC-1-Penta-controlled DADAX32 digital audio matrix for maximum versatility and future scalability. Picture playback from Pro Tools is provided by an AJA Kona LHi card via a Barco 2K digital projector.

In addition, Halo has built a dedicated 5.1 audio editing room for its recently arrived head of sound editorial, Jay Price, to work from. Situated directly adjacent to the new studio, the room features a Pro Tools|HD 12 Native system and 5.1 Dynaudio Air 6 speakers.

Jigsaw24 and CB Electronics supplied the hardware and the installation know-how. Level Acoustic handled the design, and Munro Acoustics provided the custom speaker system.

The Revenant’s sound team takes home BAFTA

The Revenant sound team has won the Best Sound award at the British Academy of Film and Television Arts (BAFTA) Awards ceremony. Winning the award were supervising sound editor and Formosa Group talent Lon Bender, along with supervising sound editor Martin Hernandez, supervising sound editor/re-recording mixer Randy Thom, production sound mixer Chris Duesterdiek and re-recording mixers Frank A. Montano and Jon Taylor.

Other nominees in the category included Bridge of Spies, Mad Max: Fury Road, The Martian and Star Wars Episode VII: The Force Awakens.

“I am very pleased that our crew was recognized at the BAFTA Awards for their hard work and artistry,” said BAFTA winner and Formosa’s Lon Bender.  “It is an honor to have had our film included among all the other nominees this year.” 

Bender is also nominated for an Oscar for Best Sound Editing for The Revenant. His 30-plus-year career in sound editing includes BAFTA nominations for Shrek and The Last of the Mohicans, Oscar nominations for Drive and Blood Diamond, and BAFTA and Oscar wins for Braveheart, which he shared with Formosa’s Per Hallberg.

The British Academy of Film and Television Arts (BAFTA) is an independent charity that supports, develops and promotes the art forms of the moving image by identifying and rewarding excellence, inspiring practitioners and benefiting the public. 

Making our dialogue-free indie feature ‘Driftwood’

By Paul Taylor and Alex Megaro

Driftwood is a dialogue-free feature film that focuses on a woman and her captor in an isolated cabin. We chose to shoot entirely MOS… because we are insane. Or perhaps we were insane to shoot a dialogue-free feature in the first place, but our choice to remove sound recording from the set was both freeing and nerve-racking due to the potential post production nightmare that lay ahead.

Our decision was based on how, without speech to carry along the narrative, every sound would need to be enhanced to fill in the isolated world of our characters. We wanted draconian control over the soundscape, from every footstep to every door creak, but we also knew the sheer volume of work involved would put off all but the bravest post studios.

The film was shot in a week with a cast of three and a crew of three in a small cabin in Upstate New York. Our camera of choice was a Canon 5D Mark II with an array of Canon L-series lenses. We chose the 5D because we already owned it — so more bang for our buck — and also because it gave us a high-quality image, even with such a small body. Its ease of use allowed us to set up extremely quickly, which was important considering our extremely truncated shooting schedule. Having no sound team on set allowed us to move around freely without the concerns of planes passing overhead or cars rumbling in the distance delaying a shot.

The Audio Post
The editing was a wonderfully liberating experience in which we cut purely to image, never once needing to worry about speech continuity or a host of other factors that often come into play with dialogue-driven films. Driftwood was edited on Apple’s Final Cut Pro X, a program that can sometimes be a bit difficult for audio editing, but for this film it was a non-issue. The Magnetic Timeline was actually quite perfect for the way we constructed this film and made the entire process smooth and simple.

Once picture was locked, we brought the project to New York City’s Silver Sound Studios, who jumped at the chance to design the atmosphere for an entire feature from the ground up. We sat with the engineers at Silver Sound and went through Driftwood shot by shot, creating a master list of all the sounds we thought necessary to include. Some were obvious, such as footsteps, breathing and ticking clocks; others less so, such as the humming of an old refrigerator or the creaking of a wooden chair.

Once the initial list was set, we discussed whether to use stock audio or rerecord everything at the original location. Again, because we wanted complete control to create something wholly unique, we concluded it was important to return to the cabin and capture its particular character. Over the course of a few days, the Silver Sound gang rerecorded nearly every sound in the film, leaving only some basic Foley work to complete in their studio.

Once their library was complete, one of the last steps before mixing was to ADR all of the breathing. We had the actors come into the studio over a one-week period, during which they breathed, moaned and sighed inside Silver Sound’s recording booth. These subtle sounds are taken for granted in most films, but for Driftwood they were of the utmost importance. The way the actors would sigh or breathe could change the meaning behind that sound and change the subtext of the scene. If the characters cannot talk, then their expressions must be conveyed in other ways, and in this case we chose a more physiological track.

By the time we completed the film we had spent over a year recording and mixing the audio. The finished product is a world unto itself, a testament to the laborious yet incredibly exciting work performed by Silver Sound.

Driftwood was written, directed and photographed by Paul Taylor. It was produced and edited by Alex Megaro.

Quick Chat: Ian Stynes on mixing two Sundance films

By Kristine Pregot

A few years back, I had the pleasure of working with talented sound mixer Ian Stynes on a TV sketch comedy. It’s always nice working with someone you have collaborated with before. There is a comfort level and unspoken language that is hard to achieve any other way. This year we collaborated once again for So Yong Kim’s 2016 film Lovesong, which made its premiere at this year’s Sundance and had its grade at New York’s Nice Shoes via colorist Sal Malfitano.

Ian has been busy. In fact, another film he mixed recently had its premiere at Sundance as well — Other People, from director Chris Kelly.

Ian Stynes

Since we were both at the festival, I thought what better time to ask him how he approached mixing these two very different films.

Congrats on your two films at Sundance, Lovesong (which is our main image) and Other People. How did the screenings go?
Both screenings were great; it’s a different experience to see the movie in front of an excited audience. After working on a film for a few months, it’s easy to slip into only watching it from a technical standpoint — wondering if a certain section is loud enough, or if a particular sound effect works — but seeing it with an engaged crowd (especially as a world premiere at a place like Sundance) is like seeing it with fresh eyes again. You can’t help but get caught up.

What was the process like to work with each director for the film?
I’ve been lucky enough to work with some wonderful directors, and these movies were no exception. Chris Kelly, the director of Other People, who is a writer on a bunch of TV shows including SNL and Broad City, is so down to earth and funny. The movie was based on the true story of his mother, who died from cancer, so he was emotionally attached to the film in a unique way. He was very focused on what he wanted but also knew when to sit back and let me do my thing. This was Chris’s first movie, but you wouldn’t know it.

For Lovesong, I worked with director So Yong Kim once again. She makes all her films with her husband Bradley Rust Gray. They switch off with directorial duties but are both extremely involved in each other’s movies. This is my third time working on a film with the two of them — the other two were For Ellen with Paul Dano and Jon Heder, and Exploding Girl with Zoe Kazan. So is an amazing director to work with; it feels like a real collaboration mixing with her. She is creative and extremely focused with her vision, but always inclusive and kind to everyone involved in the crew.

With both films, a lot of work was done ahead of time. I try to get it to a very presentable place before the directors come in. This way we can focus on the creative tasks together. One of the fun parts of my job is that I get to sit in a room for a good while and work closely with creative and fun people on something that is very meaningful to them. It’s usually a bit of a bonding experience by the end of it.

How long did each film take you to mix?
I am also extremely lucky to work with some great people at Great City Post. I was the mixer, supervising sound editor and sound designer on both films, but I have an amazing team of people working with me.

Matt Schoenfeld did a huge amount of sound designing on both movies, as well as some of the mixing on Lovesong. Jay Culliton was the dialogue editor on Other People. Renne Bautista recorded Foley and dealt with various sound editing tasks. Shaun Brennan was the Foley artist, and additional editing was done by Daniel Heffernan and Houston Snyder. We are a small team but very efficient. We spent about eight to 10 weeks on each film.

Lovesong

How is it different to mix comedy than it is to mix a drama?
When you add sound to a film it’s important to think about how it is helping the story — how it augments or moves the story along. The first level of post sound work involves cleaning and removing anything that might take the viewer out of the world of the story (hearing mics, audio distortion, change in tone etc.).

Beyond that, different films need different things. Narrative features usually call for the sound to give energy to a film but not get in the way. Of course, there are always specific moments where the sound needs to stand out and take center stage. Most people aren’t aware of what post sound specifically entails, but they certainly notice when it is missing or badly done. Dramas usually have more intensity to the story, and comedies can be a bit lighter. This often informs the sound design, edit and mix. That said, every movie is still different.

What is your favorite sound design on a film of all time?
I love Ben Burtt, who did all the Star Wars movies. He also did Wall-E, which is such a great sound design movie. The first 40 or so minutes have no direct dialogue — all the audio is sound design. You might not realize it, but it is very effective. In the DVD extras, Ben Burtt did a doc about the sound for that movie. The documentary ends up being about the history of sound design itself. It’s so inspiring, even for non-sound people. Here is the link.

I urge anyone reading this to watch it. I guarantee it will get you thinking about sound for film in a way you never have before.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


The sound of VR at Sundance and Slamdance

By Luke Allen

If last year’s annual Park City film and cultural meet-up was where VR filmmaking first dipped its toes in the proverbial water, count 2016’s edition as its full-on coming-out party. With over 30 VR pieces as official selections at Sundance’s New Frontier sub-festival, and even more content debuting at Slamdance and elsewhere, festivalgoers this year can barely take two steps down Main Street without being reminded of the format’s ubiquitous presence.

When I first stepped onto the main demonstration floor of New Frontier (which could be described this year as a de-facto VR mini-festival), the first thing that struck me was, why was it so loud in there? I admit I’m biased since I’m a sound designer with a couple of VR films being exhibited around town, but I am definitely backed up by a consensus among content creators regarding sound’s importance to creating the immersive environment central to VR’s promise as a format (I know, please forgive the buzzwords). In seemingly direct defiance of this principle, Sundance’s two main public exhibition areas for all the latest and greatest content were inundated with the rhythmic bass lines of booming electronic music and noisy crowds.

I suppose you can’t blame the programmers for some of this — the crowds were unavoidable — but I can’t help contrasting the New Frontier experience with the way Slamdance handled its more limited VR offering. Both festivals required visitors to sign up for a viewing time, but while the majority of Sundance’s screenings involved strapping on a headset while seated on a crowded bench in the middle of the demonstration floor, Slamdance reserved a quiet room for the screening experience. Visitors were advised to keep their voices to a murmur while in the viewing chamber, and the screenings took place in an isolated corner seated on — crucially — a chair with full range of motion.

Why is this important? Consider the nature of VR: the viewer has the freedom to look around the environment at their own discretion, and the best content creators make full use of the 360 degrees at their disposal to craft the experience. A well-designed VR piece will use directional sound mixing to cue the viewer to look in different directions in order to further the story. It will also incorporate deep soundscapes that shift as one looks around the environment in order to immerse the viewer. Full range of motion, including horizontal rotation, is critical to allowing this exploration to take place.
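
This shifting-with-the-head behavior is typically achieved by delivering the mix in an Ambisonic format and counter-rotating the sound field against the viewer’s head orientation. Here is a minimal sketch of a first-order B-format yaw rotation in Python (channel conventions vary between toolchains, so treat this as illustrative):

    import numpy as np

    def rotate_bformat_yaw(w, x, y, z, yaw):
        # First-order B-format: W is omni, X/Y are the horizontal
        # figure-eight components, Z is height. A pure yaw rotation
        # mixes X and Y and leaves W and Z untouched. The player
        # applies the opposite of the head's yaw angle so sources
        # stay anchored to the scene as the viewer looks around.
        c, s = np.cos(yaw), np.sin(yaw)
        return w, c * x - s * y, s * x + c * y, z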

The Visitor, which I had the pleasure of experiencing in Slamdance’s VR sanctuary, put this concept to use nicely by placing the two lead characters 90 degrees apart from one another, forcing the viewer to look around the beautifully-staged set in order to follow the story. Director James Kaelan and the post sound team at WEVR used subtly shifting backgrounds and eerie footsteps to put the viewer right in the middle of their abstract world.

Sundance’s New Frontier VR Bar.

Resonance, an experience directed by Jessica Brillhart that I sound designed and engineered, features violinist Tim Fain performing in a variety of different locations, mostly abandoned, selected both for their visual beauty and their unique sonic character. We used an Ambisonic microphone on set in order to capture the full range of acoustic reflections and, with a lot of love in the mix room at Silver Sound, were able to recreate these incredible sonic landscapes while enhancing the directionality of Fain’s playing in order to help the viewer follow him through the piece. (Unfortunately, when Resonance was screening at Sundance’s New Frontier VR Bar, there was a loudspeaker playing Top 40 hits located about three feet above the viewer’s head.)

In both of these live-action VR films, sound and picture serve to enhance and guide the experience of the other, much like in traditional cinema, but in a new and more enchanting way. I have had many conversations with other festival attendees here in Park City in which we recall shared VR experiences much like shared dreams, so personal and haunting is this format. We can only hope that in future exhibitions more attention is paid to ensure that viewers have the quiet they need to fully experience the artists’ work.

Luke Allen is a sound designer at Silver Sound Studios in New York City. You can reach him at luke@silversound.us.

Behind the Title: Slick Sounds’ David Van Slyke

NAME: David F. Van Slyke

COMPANY: Slick Sounds Media Partners

CAN YOU DESCRIBE YOUR COMPANY?
Slick Sounds is a boutique sound design company that handles audio post — from dailies to the delivery of the DCP (Digital Cinema Package). We creatively apply the craft, especially the art of telling stories with sound. We partner with directors, picture editors, color timers, composers and mix stages.

WHAT’S YOUR JOB TITLE?
Lead Sound Designer and Re-Recording Mixer

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That I am also sales manager and CTO. I also attend conferences and regularly go to talks about how to get a jump on the new workflows. I’m constantly letting vendors know they can collaborate with us to create a cost-competitive product with professional standards that will pass a third-party QC.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Each project requires a unique sonic approach that I enjoy figuring out. The story speaks to me, and I interpret which aspect of creative sound is needed. I also do a lot of field recording. I love finding new source sounds.

WHAT IS YOUR PROCESS FOR SOUND DESIGNING?
It’s like a chef who is trying to come up with a new signature dish. You open a lot of items, chop things up, add some secret sauce, make a mess, and then you see what has the best flavors — and you trash the stuff that doesn’t taste good.

It has to be right. To me, and my clients, “right” is the feeling you get when you watch the final mix of a section or the whole piece. It creates the proper response in the viewer.

HOW DO YOU BEGIN?
I always start by getting in the zone. My room is dark and the dual 23-inch monitors are right in front of me; I lose myself in the fact that while I may not know exactly what to do at the start,  I am confident that I will figure it out. It’s fun to play in the unknown. I tap into creativity and come up with things that I later ask myself, “Where did that come from?”

CAN YOU WALK US THROUGH YOUR WORKFLOW?
I watch the picture several times and try to really get into the filmmaker’s head. Sometimes that means looking at it frame by frame. I can figure out which sounds I can create quickly and which story points I need to obsess over. The sound design must always sell what the picture is telling us. I obsess about big sound moments because they need to make a big impact on the viewer.

DOES YOUR PROCESS CHANGE DEPENDING ON THE TYPE OF PROJECT?
Yes, to a degree. This is where good training in the craft of sound work comes in. There are nuts-and-bolts things that just have to be banged out, and then there are signature sounds that take the most creative energy. I often do the creative part first, knowing the basic stuff will happen quickly.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
“I’d open a haberdashery” — that’s my favorite line from Spinal Tap.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
It took a little while since I enjoyed being a professional musician for a couple of years. I realized as a junior at Berklee College of Music that I needed a career with a more steady income than playing gigs or recording bands. My love of recording led me to sound design and into the digital revolution that has changed both the record and post industries.

“Immortality Parts I and II”

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I just finished mixing a feature documentary called Chris Brown — This is Me; the CSI series finale, which was a two-hour television movie called “Immortality” (pictured above); the pilot for Lucifer, a new Jerry Bruckheimer series coming out soon; and I am mixing 20-minute mini-docs for League of Legends.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
All of them… well, most of them. We give the same creative intensity to all our projects. It’s not done until it’s right! Some recent projects though are Dragon Nest: Warrior’s Dawn for Universal; Tyrus, which won the audience award at the San Diego Asian Film Festival; and Home — a Bruckheimer pilot that I’m currently sound designing and co-supervising — which will hopefully get picked up for next year.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools|HD, Serato Pitch ‘n’ Time Pro, iZotope RX5, Soundtoys and SoundMiner.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m not so good at social media. This is a referral business, and very few movies are sound designed because of a social media presence. Perhaps micro-budget projects find their sound designer through social media; however, if they have any budget at all, they want known talent on their project at a known professional facility with amenities.

So, I do old-fashioned social media — I go to lunch with clients I like to work with.

THIS IS AN INDUSTRY WITH TIGHT DEADLINES. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
First, I say, “That’s an impossible deadline, how can the timeframe keep getting smaller and smaller?” Then I figure out how to do it. Which means sometimes having to say no to jobs because they don’t give me enough time to do it “right.”

I live and breathe this gig, although it doesn’t always feel like work — it’s just fun!

Bandito Brothers: picking tools that fit their workflow

Los Angeles-based post, production and distribution company Bandito Brothers is known for its work on feature films such as Need for Speed, Act of Valor and Dust to Glory. They provide a variety of services — from shooting to post to visual effects — for spots, TV, films and other types of projects.

Lance Holte in the company’s broadcast color bay, working on DaVinci Resolve 12.

They are also known in our world for their Adobe-based workflows, using Premiere and After Effects in particular. But that’s not all they are. Recently, Bandito invested in Avid’s new ISIS|1000 shared storage system to help them work more collaboratively with very large and difficult-to-play files across all editing applications. The system — part of the Avid MediaCentral Platform — allows Bandito’s creative teams to collaborate efficiently regardless of which editing application they use.

“We’ve been using Media Composer since 2009, although our workflows and infrastructure have always been built around Premiere,” explains Lance Holte, senior director of post production, Bandito Brothers. “We tend to use Media Composer for offline editorial on projects that require more than a few editors/assistants to be working in the same project since Avid bin-locking in one project is a lot simpler than breaking a feature into 200 different scene-based Premiere projects.

“That said, almost every project we cut in Avid is conformed and finished in Premiere, and many projects — that only require two or three editors/assistants, or require a really quick turnaround time, or have a lot of After Effects-specific VFX work — are cut in Premiere. The major reason that we’ve partnered with Avid on their new shared storage is because it works really well with the Adobe suite and can handle a number of different editorial workflows.”

Bandito’s Mix Stage and Edit 4.

He says the ISIS|1000 gives them the collaborative power to share projects across a wide range of tools and staff, and to complete projects in less time. “The fact that it’s software-agnostic means everyone can use the right tools for the job, and we don’t need to have several different servers with different projects and workflows,” says Holte.

Bandito Brothers’ ISIS|1000 system is accessible from three separate buildings at its Los Angeles campus — for music, post production and finishing. Editors can access plates being worked on by its on-site visual effects company, or copy over an AAF or OMF file for the sound team to open in Avid Pro Tools in their shared workspace.

“Bandito uses Pro Tools for mixing, which also makes the ISIS|1000 handy, since we can quickly move media between mix and editorial anywhere across the campus,” concludes Holte.

Currently, Bandito Brothers is working on a documentary called EDM, as well as commercial campaigns for Audi, Budweiser and Red Bull.

When video editors need to deliver a CALM-compliant mix

Outpost Worldwide is a Kansas City-based production and post company that creates content for a variety of TV series, network game shows, reality shows, commercials and corporate videos.

Their television work includes shows like Strange: Exorcist, Garden Earth and Project Runway Latin America. Documentaries they have worked on include The Barber’s Diaries, No Shortcuts and Let Freedom Ring: The Lessons Are Priceless. Films include Fight Night, Dogs of Eden and Last Ounce of Courage.

With the passage of the CALM (Commercial Advertisement Loudness Mitigation) Act by Congress, the responsibility to ensure audio mixes conform to the loudness standard falls not only on audio mixers, but also on video editors for shows that did not budget for a separate audio post session.

For Mark Renz, senior video editor at Outpost Worldwide, the task of delivering compliant mixes directly from his Avid Media Composer system was an extra burden, eating time and effort that could otherwise go to creative editing. If a show gets rejected by their Extreme Reach content management and delivery system, then further delays and costs are incurred, either to send the show to audio post or to have the Extreme Reach system fix the loudness itself.

“While many of our shows have budget for audio post, I frequently will also work on shows that have no separate audio budget, so it’s down to me to make sure audio coming out of my Media Composer system is compliant with the CALM Act,” explains Renz. “Since the majority of my time is spent putting a compelling story together, you can imagine that worrying about loudness is not something I really have a lot of time for.”

This is where iZotope’s RX Loudness Control comes in. “There’s not a whole lot to say, because it just works,” he says. “It’s two mouse clicks, and it’s much faster than realtime. The first click quickly analyzes the audio and displays a graph showing any problem areas. If I have time, I can quickly go in and manually adjust an area if I want; otherwise, clicking ‘Render’ is all that’s required to generate a compliant final mix.”
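
Under the hood, this class of tool implements the ITU-R BS.1770 loudness measurement that the CALM Act points to (via ATSC A/85 and its -24 LKFS target). As a rough illustration of the measure-then-correct idea, here is a sketch using the open-source pyloudnorm library rather than iZotope’s plug-in; the file name is hypothetical:

    import soundfile as sf
    import pyloudnorm as pyln

    # "final_mix.wav" is a hypothetical file name for this example
    data, rate = sf.read("final_mix.wav")

    meter = pyln.Meter(rate)  # ITU-R BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)
    print(f"Measured integrated loudness: {loudness:.1f} LKFS")

    # Apply a static gain so the mix lands on the -24 LKFS target
    compliant = pyln.normalize.loudness(data, loudness, -24.0)
    sf.write("final_mix_compliant.wav", compliant, rate)

A dedicated tool like RX Loudness Control also handles true-peak limits and short-term corrections, which a single static gain cannot, so this sketch captures only the basic idea.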

Renz is the first to admit he’s not an “audio guy,” so being able to rely on a tool that guarantees a compliant audio mix has been liberating. “I don’t have to worry what someone else might be doing to the audio to force it into compliance,” he says.

Santa Monica’s Tono Studios expands to the OC

Eight-year-old Tono Studios, which offers commercial recording and voiceover facilities in Santa Monica, is expanding to Orange County. The audio post house’s new 1,336-square-foot studio will open in Costa Mesa in January.

“There is a big advertising market in Orange County and not many recording studios,” explains Tono CEO Raquel Ramirez, who is partnered with Jaime Zapata.

“We have a couple of clients who are based there and hate the commute to Santa Monica, so we are making it easier for them. We will also try getting business from other agencies.”

Clients needing or wanting to begin their projects at the Santa Monica facility can opt to finish at the Costa Mesa facility or vice versa, all in realtime. “They can, for example, record voiceovers at the Santa Monica facility and finalize the mix in Costa Mesa,” explains Ramirez. “The projects that go through both of our facilities will be totally seamless.”

Tono is a Mac-based studio housing Pro Tools HDX, Genelec speakers, Brent Averill mic-pres, Neumann TLM 103 and Sennheiser MKH 416 mics, a Samsung 55-inch curved 4K Ultra HD TV monitor, plug-ins and a sound effects library.

“We are taking advantage of new technologies to communicate and collaborate with our main studios in Santa Monica using video conferencing, realtime recording and cloud-based backups,” concludes Ramirez, adding that Tono will be offering video editing services in the near future.

Raquel Ramirez and Jaime Zapata are pictured in our main image.

Harbor Picture Company opens theatrical sound mixing division

Harbor Picture Company has launched Harbor Grand, a multi-million-dollar studio servicing audio mixing for theatrical projects. This comes after a year of planning and construction, and with support from Empire State Development.

Harbor Sound is the sound post division of Harbor Picture Company, which offers dailies, offline editorial, VFX, picture post, sound post, digital deliverables and commercial production services. Harbor occupies 50,000 square feet in SoHo, New York, and offers talent, technical infrastructure and engineering to the feature film, television and commercial industries.

The new facility features 26-foot ceilings, a private lounge, editorial space and a kitchen. The mix stage itself is equipped to mix Dolby Atmos, IMAX and 7.1 and 5.1 surround sound, with 2K, 4K and 3D projection capability, and is built around a Euphonix System 5 console.

According to Harbor, this opening not only represents the most significant expansion of a post company since the New York State Post-Production Tax Credit was strengthened in 2012, it is also the largest sound mixing stage of its kind in New York.

“Under Governor Cuomo’s leadership, a strengthened Film Tax Credit Program has made New York a top destination for both production and post-production work, creating job opportunities for thousands of people and contributing significantly to the state’s economy,” says Empire State Development president/CEO/commissioner Howard Zemsky. “Harbor Picture Company’s expansion from 30 to 65 employees helps demonstrate the industry’s continued growth in New York State.”

“Having a mix stage of this level in New York City will be a powerful tool to continuously attract Hollywood-level features and TV shows to complete their post-production in New York City. We now have the infrastructure, the technology and the experience to deal with projects at a premier level,” says Harbor’s president, Zak Tucker. “As larger productions choose to locate the post-production process in New York, a significant number of artistic, technical and management jobs will become available in the city, supporting the idea that New York is a primary destination for post production.”

To encourage Harbor Sound to increase employment at its headquarters in Manhattan, Empire State Development has offered $550,000 in performance-based Excelsior Jobs Program tax credits, which are tied directly to the creation of new jobs in the post-production industry. This specific incentive complements the impact of the legislation that Governor Cuomo signed in 2012 to strengthen the state’s existing post incentive program in order to attract additional film post activity to all regions of New York State.

This law increased the percentage of tax credits available for projects that did not film in New York but qualify for credits for post-production work done in New York State. The qualified film and television post-production credit increased from 10 percent to 30 percent in the New York metropolitan commuter region.

Harbor’s facility expansion is part of an emerging trend of private companies boosting the creative economy by looking to New York as a home for their content-driven enterprises. For instance, Amazon’s Alpha House and Netflix’s Orange is the New Black are both being filmed at Kaufman Astoria Studios, while YouTube opened a 20,000-square-foot facility in Chelsea. The Weinstein Company also struck a deal with Netflix, bringing $60 million into NYC to produce a sequel to Crouching Tiger, Hidden Dragon.

Creating the sonic world of ‘Macbeth’

By Jennifer Walden

On December 4, we will all have the opportunity to hail Michael Fassbender as he plays Macbeth in director Justin Kurzel’s film adaptation of the classic Shakespeare play. And while Macbeth is considered to be the Bard’s darkest tragedy, audiences at the Cannes Film Festival premiere felt there was nothing tragic about Kurzel’s fresh take on it.

As evidenced in his debut film, The Snowtown Murders, Kurzel’s passion for dark imagery fits The Weinstein Co.’s Macbeth like a custom-fitted suit of armor. “The Snowtown Murders was brutal, beautiful, uncompromising and original, and I felt sure Justin would approach Macbeth with the same vision,” says freelance supervising sound editor Steve Single. “He’s a great motivator and demanded more of the team than almost any director I’ve worked with, but we always felt that we were an important part of the process. We all put more of ourselves into this film, not only for professional pride, but to make sure we were true to Justin’s expectations and vision.”

Single, who was also the re-recording mixer on the dialogue/music, worked with London-based sound designers Markus Stemler and Alastair Sirkett to translate Kurzel’s abstract and esoteric ideas — like imagining the sound of mist — and place them in the reality of Macbeth’s world. Whether it was the sound of sword clashes or chimes for the witches, Kurzel looked beyond traditional sound devices. “He wanted the design team to continually look at what elements they were adding from a very different perspective,” explains Single.

L-R: Gilbert Lake, Steve Single and Alastair Sirkett.

Sirkett notes that Kurzel’s bold cinematic style — immediately apparent by the slow-motion-laced battle sequence in the opening — led him and Stemler to make equally bold choices in sound. Adds Stemler, “I love it when films have a strong aesthetic, and it was the same with the sound design. Justin certainly pushed all of us to go for the rather unconventional route here and there.  In terms of the creative process, I think that’s a truly wonderful situation.”

Gathering, Creating Sounds
Stemler and Sirkett split up the sound design work by the different worlds, as Kurzel referred to them, to ensure that each world sounded distinctly different, with its own unique sonic fingerprint. Stemler focused on the world of the battles, the witches and the village of Inverness. “The theme of the world of the witches was certainly a challenge. Chimes had always been a key element in Justin’s vision,” says Stemler, whose approach to sound design often begins with a Schoeps mic and a Sound Devices recorder.

As he started to collect and record a variety of chimes, rainmakers and tiny bells, Stemler realized that just shaking them wasn’t going to give him the atmospheric layer he was looking for. “It needed to be way softer and smoother. In the process I found some nacre chimes (think mother-of-pearl shells) that had a really nice resonance, but the ‘clonk’ sound just didn’t fit. So I spent ages trying to kind of pet the chimes so I would only get their special resonance. That was quite a patience game.”

By having distinct sonic themes for each “world,” re-recording mixers Single and Gilbert Lake (who handled the effects/Foley/backgrounds) were able to transition back and forth between those sonic themes, diving into the next ‘world’ without fully leaving the previous one.

There’s the “gritty reality of the situation Macbeth appears to be forging, the supernatural world of the witches whose prophecy has set out his path for him, the deterioration of Macbeth’s mental state, and how Macbeth’s actions resonate with the landscape,” says Lake, explaining the contrast between the different worlds. “It was a case of us finding those worlds together and then being conscious about how they relate to one another, sometimes contrasting and sometimes blending.”

Sirkett notes that the sonic themes were particularly important when crafting Macbeth’s craziness. “Justin wanted to use sound to help with Macbeth’s deterioration into paranoia and madness, whether it be using the sound of the witches, harking back to the prophecy or the initial battle and the violence that had occurred there. Weaving that into the scenes as we moved forward was always going to be a tricky balancing act, but I think with the sounds that we created, the fantastic music from composer Jed Kurzel, and with Steve [Single] and Gilly [Lake] mixing, we’ve achieved something quite amazing.”

Sirkett details a moment of Macbeth’s madness in which he recalls the memory of war. “I spent a lot of time finding elements from the opening battle — whether it be swords, clashes or screams — that worked well once they were processed to feel as though they were drifting in and out of his mind without the audience being able to quite grasp what they were hearing, but hopefully sensing what they were and the implication of the violence that had occurred.”

Sirkett used Audio Ease’s Altiverb 7 XL in conjunction with a surround panning tool called Spanner by The Cargo Cult “to get some great sounds and move them accurately around the theatre to help give a sense of unease for those moments that Justin wanted to heighten Macbeth’s state of mind.”
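
At its core, any panner computes a set of per-speaker gains that preserve total power as a source moves. Here is a minimal sketch of a constant-power pan between two adjacent speakers (illustrative only; Spanner’s actual multichannel algorithms are more sophisticated):

    import numpy as np

    def constant_power_pan(mono, position):
        # position runs from 0.0 (fully in speaker A) to 1.0
        # (fully in speaker B). Sine/cosine gains keep the summed
        # power constant, avoiding the mid-pan level dip that a
        # simple linear fade would cause.
        theta = position * np.pi / 2.0
        return mono * np.cos(theta), mono * np.sin(theta)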

The Foley, Score, Mix
The Foley team on Macbeth included Foley mixer Adam Mendez and Foley artist Ricky Butt from London’s Twickenham Studios. Additional Foley for the armies and special sounds for the witches were provided by Foley artist Carsten Richter and Foley mixer Marcus Sujata at Tonstudio Hanse Warns in Berlin, Germany. Sirkett points to the sonic details of the costumes that Macbeth and Banquo (Paddy Considine) wore for the opening battle. “Their costumes look huge, heavy and bloodied by the end of the opening battle. When they were moving about or removing items, you felt the weight, blood and sweat that was in them and how it was almost sticking to their bodies,” he says.

Composer Jed Kurzel’s score often interweaves with the sound design, at times melting into the soundscape and at other times taking the lead. Stemler notes the quiet church scene in which Lady Macbeth sits in the chapel of an abandoned village. Dust particles gently descend to the sound of delicate bells twinkling in the background. “They prepare for the moment where the score is sneaking in almost like an element of the wind.  It took us some time in the mix to find that perfect balance between the score and our sound elements. We had great fun with that kind of dance between the elements.”

During the funeral of Macbeth’s child in the opening of the film, the score by Jed Kurzel (the director’s brother) emotes a gentle mournfulness as it blends with the lashing wind and rain sound effects. Single feels the score is almost like another character. “Bold and unexpected, it was an absolute pleasure to bring each cue into the mix. From the rolling reverse percussion of the opening credits to the sublime theme for Lady Macbeth’s decline into madness, he crafted a score that is really very special.”

Single and Lake mixed Macbeth in 5.1 at Warner Bros.’ De Lane Lea studio in London, using an AMS Neve DFC console. On Lake’s side of the board, he loved mixing the final showdown between Macbeth and Macduff — a beautifully edited sequence where the rhythm of the fighting perfectly plays against Jed Kurzel’s score.

“We wanted the action to feel like Macbeth and Macduff were wrenching their weapons from the earth and bringing the full weight of their ambitions down on one another,” says Lake. “Markus [Stemler] steered clear of traditional sword hits and shings and I tried to be as dynamic as possible and to accentuate the weight and movement of their actions.”

To create original sword sounds, Stemler took the biggest screw wrench he could find and recorded himself banging on every big piece of metal available in their studio’s warehouse. “I hit old heaters, metal staircases, stands and pipes. I definitely left a lot of damage,” he jokes. After a bit of processing, those sounds became major elements in the sword sounds.

Director Kurzel wanted the battle sequences to immerse the audience in the reality of war, and to show how deeply it affects Macbeth to be in the middle of all that violence. “I think the balance between ‘real’ action and the slo-mo gives you a chance to take in the horror unfolding,” says Lake. “Jed’s music is very textural and it was about finding the right sounds to work with it and knowing when to back off with the effects and let it become more about the score. It was one of those rare and fortunate events where everyone is pulling in the same direction without stepping on each other’s toes!”

L-R: Alastair Sirkett, Steve Single and Gilbert Lake.

While the famous adage “copy is king” holds true for any project, in a Shakespeare adaptation the copy is as untouchable as Vito Corleone in The Godfather. “You have in Macbeth some of the most beautiful and insightful language ever written and you have to respect that,” says Single. His challenge was to make every piece of poetic verse intelligible while still keeping the intimacy that director Kurzel and the actors had worked for on-set, which, Single notes, was not an easy task. “The film was shot entirely on location, during the worst storms in the UK for the past 100 years. Add to this an abundance of smoke machines and heavy Scottish accents, and it soon became apparent that no matter how good production sound mixer Stuart Wilson’s recordings were — he did a great job under very tough conditions — there was going to be a lot of cleaning to do and some difficult decisions about ADR.”

Even though there was a good bit of ADR recorded, in the end Single found he was able to restore and polish much of the original recordings, always making sure that in the process of achieving clarity the actors’ performances were maintained. In the mix, Single says it was about placing the verse in each scene first and then building up the soundtrack around that. “This was made especially easy by having such a good effects mixer in Gilly Lake,” he concludes.

Jennifer Walden is a New Jersey-based writer and audio engineer.

Quick Chat: Walter Biscardi on his new Creative Hub co-op

Post production veteran and studio owner Walter Biscardi has opened The Creative Hub within his Atlanta-area facility Biscardi Creative Media (BCM). This co-op workspace for creatives is designed to offer indie filmmakers and home-based video producers a place to work, screen their work, meet with clients and collaborate.

Biscardi has had this idea in the back of his head for the past few years, but it was how he started his post company that inspired The Creative Hub. After spending years at CNN and in the corporate world, Biscardi launched his post business in 2001, working out of a spare bedroom in his house. In 2003 he added 1,200 square feet to the back of his house, where he ran the company until 2010. In January 2011 he moved into his current facility. So he knows a thing or two about starting small and growing a business naturally.

Color grading

Let’s find out more.

Why was this the right time to launch this co-op?
The tools keep getting smaller and more powerful, so it’s easier than ever to work at home. But from time to time there is still a need for “bigger iron” to help get the job done. There’s also a need for peripherals, such as the Tangent Element panels and FSI monitors for color grading, but making that investment for just one project isn’t feasible. Or maybe you’re planning a large project and would like to lay out your storyboards and plans where everyone can see them. Our conference room has 30 feet of corkboard and a 10-foot dry-erase wall that is killer for production planning.

How will it work?
We have a beautiful space here, and oftentimes we have rooms available for use. In the “traditional post production world” you would charge $50-$175/hour just for the suite, but many indie filmmakers — and even many long-form projects like reality shows and episodics — just don’t have that kind of budget. So I looked at co-op office spaces for inspiration on how to set up a pricing structure that would allow the maximum benefit for indie creatives and yet allow us to pay the bills. We came up with a basic hourly/daily/weekly/monthly pricing structure that’s easy to follow, with no commitments.

I think the time has been right for the co-op creative space for at least two years now; it just took this much time for me to finally get my act together and get everything down on paper.

What’s great about the co-op space, too, is that we hope it’ll foster collaboration by getting folks out of their houses for the day and into a common space where you can bounce ideas off each other and create those “Hey, can you come look at this?” moments. You see a lot of that online, but being able to actually talk to a person in the same room always leads to much better collaboration than a thread of responses to your online video.

One of the edit rooms

Can you talk more about the pricing and room availability?
Depending on the room, we have availability by the hour, day, week and month. Prices are very straightforward, such as $100/day for a fully furnished edit suite. (See pricing here.) That includes the workstation, dual monitors, a Flanders Scientific reference monitor and two KRK Rokit 5 audio monitors. Those rates are definitely below “market value,” but we have the space and the gear, and we’re happy to open our doors and let filmmakers and creatives come on in and have some fun in our sandbox.

The caveat to all the low pricing is that it is restricted to standard business hours only; right now that’s 8am-6pm. This is in line with most of the co-ops I researched. If folks want 24-hour or otherwise extended access to the space, that would be priced according to their needs, with rates reverting to more standard market rates and overnight costing more. We’ll see how this goes, and if it takes off we could always run a second shift at night to help maintain a lower rate in those hours.

What about gear?
For editorial, graphics, animation, sound and design, we have the full Adobe Creative Cloud in every suite. Four of the suites run Mac and one room runs Windows. Every suite has a Flanders Scientific reference monitor connected via AJA or BMD hardware.

Color grading is offered via Blackmagic’s DaVinci Resolve and Adobe’s SpeedGrade on a Mac Pro with a Tangent Element control surface and an FSI OLED reference monitor.

The sound mixing theater features a Pro Tools|HD 5.1 mixing system with Genelec audio monitoring. The main system is a Mac Pro. The theater has an eight-foot projection screen and can serve as a screening room for up to 12 people or a classroom for workshops with seating for up to 18 people. It’s a great workshop space.

None of our pricing includes high-speed storage as we assume people will bring their own. We do have 96TB of high-speed networked storage on site, which is available for $15/TB per day should it be needed.

So you are mostly Adobe CC based?
Adobe is provided because that’s what we use here, so it’s already on all of the systems. By not having to invest in additional software, we can keep the rates low. We do have Avid Media Composer and Final Cut Pro on site, but they are older versions. If we get enough requests for Avid and FCP, we can update our software packages at a later date.

———
Walter Biscardi is a staple on social media. Follow him at @walterbiscardi.

Mark Mothersbaugh on scoring ‘The Last Man on Earth’

By Jennifer Walden

So, the toilet pool is a growing national trend — at least according to the uber-talented composer, artist and musician Mark Mothersbaugh. His recent composing work on the Fox TV series The Last Man on Earth, executive produced and partially directed by Phil Lord and Chris Miller of The Lego Movie fame, inspired him to install his very own toilet pool. For those of you who haven’t seen the series yet and are confused as to what a toilet pool is, well, it’s simple: it’s a pool that you use as a toilet. If this doesn’t make you want to watch the show, now available on Hulu, I don’t know what will.

To be fair, Mothersbaugh actually credits his daughters with the implementation of his own toilet pool. “I came home one day and the kids had cut a hole in the diving board so we all started using that,” he explains, admitting they were a little self-conscious about it initially, but he feels it’s a great way to get use of the swimming pool during the winter. Now that spring is here it’s got him thinking about using the pool for its original purpose again. “We’re going to have to get out the leaf-net and clean it up a little bit because the kids are going to want to swim this summer.” Mothersbaugh reports the toilet pool doesn’t smell as bad as you’d imagine, thanks to help from the pool chemicals and Mother Nature. “The sun kind of helps. The birds and the squirrels they come and peck at it, so they help clean it too.”

Mark Mothersbaugh

OK, so we’ve digressed in a way only Mark Mothersbaugh could. Let’s get back to The Last Man on Earth, which tells the story of last-man-on-earth Phil Miller — played by series creator/writer Will Forte — and how he copes with the transition from complete isolation, compliments of a super virus that wipes out nearly all of humanity, to potential repopulation with last-woman-on-earth Carol Pilbasian (30 Rock’s Kristen Schaal). As the last remnants of civilization begin to show up, thanks to Phil’s “Alive in Tucson” billboards, things get hilariously complicated. Season 1’s twisted ending sets up some surprising situations for next year’s Season 2.

From a creative standpoint, Mothersbaugh, co-founder of Mutato Muzika in West Hollywood, says the show offered an opportunity to create an unconventional score. “The world of TV doesn’t typically allow for you to make many big steps in any direction in the music department,” he says. “Music usually plays a backseat role, and it’s very tricky to work in anything new or original. For The Last Man on Earth, we were able to play around with music almost the way Phil was playing with the soccer balls and tennis balls by putting faces on them [a la Tom Hanks in Cast Away]. It’s a primitive substitute for people, so we made substitute instruments, like using pieces of wire wrapped around nails in wood and created things on our own.”

Initially, Mothersbaugh experimented with an instrument he designed about five years ago called an “orchestrion,” which uses 65 antique and modern bird calls made out of wood, metal, plastic, rubber and leather. “I was writing a lot of music with it and then we realized we couldn’t use it all the time because there are no birds in this world.” But he was able to use the orchestrion as a “psychological enhancer” for Phil’s toilet pool cleaning score in the episode called “She Drives Me Crazy.” The orchestrion, layered into an upbeat score, makes little squeaky-clean sounds as Phil starts cleaning the pool, but soon the clean squeaks transform into longer laments when Phil’s frustration makes him curl up in fetal position. The orchestrion blends with a sappy piano as Phil finally breaks down and sobs beside his filthy toilet pool.

Fun With Porta-Potties
At the end of the “She Drives Me Crazy” episode, Phil installs a porta-potty in his house. Mothersbaugh wrote a Mariachi-band-inspired song featuring a vocal performance by Forte himself. “He is our producer, so when he says, ‘I shall show you that not only am I an actor but I am a singer,’ then you just say, ‘Okay dude,’” jokes Mothersbaugh, who notes that a few other composers in his studio lined up to sing the temp vocals on the track. “A couple guys in the studio have more porta-potty-appropriate voices than me, so they volunteered.”

So why a Mariachi sound for the “Porta-Potty” tune? Mothersbaugh connects the dots. “Mariachi bands play a lot of weddings and outside events like receptions and summertime festivals where they probably use porta-potties. So it just seemed appropriate.” Perfectly sensible!

Weeks before writing the song, Mothersbaugh had real-life porta-potty inspiration parked outside Mutato Muzika on the Sunset Strip. “We are right on the path for the marathon and for some reason, right in front of my studio, they put a row of six porta-potties,” he explains. “During the marathon, runners would run in, use the porta-potty and run out.” Mothersbaugh went across the street to snap a few photos of the porta-potties in front of Mutato Muzika’s green, flying saucer-shaped building, which was designed by Brazilian architect Oscar Niemeyer. “It’s a great photo,” he confirms.

Fun With Keyboards
The theme that starts Episode 1, which Mothersbaugh dubs the “Simple Man’s Theme,” can be heard in multiple variations throughout the series. Mothersbaugh notes it’s very versatile and can be adapted for both humorous and sad situations. It was written on a Critter & Guitari pocket piano, a gift from Mothersbaugh’s daughters. “They found this little thumb piano with not very many keys. What’s great about it is it has a recorder in it. You have the ability to not only record and play back a loop, but you can also change the pitch and timing of it, and record changing the pitch and the timing. You can make the notes detune and have this crazy, weird tremolo effect,” he explains.

Another keyboard used for The Last Man on Earth came from director Jon Turteltaub, who worked with Mothersbaugh a few years ago on the Last Vegas score. Turteltaub brought an old Wurlitzer to Mothersbaugh’s studio for the Ray Charles-inspired electric piano sound for Last Vegas and just left it there. “Jon got it by accident,” says Mothersbaugh. “He bought a house and everything was moved out of it except for this one old electric piano. The prior owners, Eydie Gormé and Steve Lawrence, said they used this piano for years to compose on, so they just left it with the house because it belongs there. So much music was written on it.” Not knowing what to do with the Wurlitzer, Turteltaub put it in storage until Last Vegas. “A number of the cues for that film were recorded on that old Wurlitzer, and that was also our secret weapon for a lot of the electric piano sounds in The Last Man On Earth, including the ‘Simple Man’s Theme.’”

There’s a poster on Mutato Muzika’s website, in the Notes section, promoting Mothersbaugh’s work on The Last Man On Earth. It shows Forte’s character, Phil, wallowing in a margarita pool. The text over the image reads, “Would you get clean? Or bathe in margarita?” To this question Mothersbaugh answers, “I never was a heavy drinker, but I started doing some projects in Mexico, both art projects and music projects, and it’s like tequila solves everything down there. The idea of a margarita pool seems to me like a good one, one that I could imagine others replicating. There’s probably more margarita pools than toilet pools because of this show. They should even market one of those. If you did it right, you could fill the outer ring with ice so that would keep the margarita mix chilled even when your body was in it. That would be nice. Okay, next year, margarita pools for everyone in the press.” Brilliantly stated, Mr. Mothersbaugh. I’ll take my margarita pool in pink, please.

Jennifer Walden is a New Jersey-based writer and audio engineer.

Bringing Dolby Atmos to the home via Blu-ray

Formosa’s Tim Hoogenakker walks us through his re-mastering process

By Mel Lambert

There is no denying that Dolby Atmos immersive soundtracks have added an extra allure for cinema audiences — as evidenced by the enhanced success of action movies at the box office — but, until very recently, it was an experience reserved for your local multiplex. All of that is about to change with the availability of Atmos-capable receivers from a number of leading vendors, including Denon, Marantz, Onkyo, Continue reading

Blog: HP’s latest laptops, displays, virtual workstations

By Cory Choy

I started my career as a sound designer and re-recording mixer, but as independent film imploded in the early 2000s and was beginning to be replaced by Internet video, I found myself adding more and more “work hats” to my collection. In order to succeed in this business I needed to adapt to the ever-changing NYC film and TV landscape, and that meant adding more disciplines to my professional offerings.

So now, in addition to post sound, I find myself also post producing — mostly color correction, but I do a bunch of other things as well.

While I have access to state-of-the art sound workstations at Silver Sound, a New York City- Continue reading

Animated PSA shows benefits of solar-powered lanterns

Agency FCB Garfinkel called on design/animation company The Studio and music house Found Objects, both in New York City, for an animated PSA that shows how the Luci inflatable solar lantern by MPowerd gives areas without electricity access to light.

The two-minute piece shows two young boys in Africa walking to school and how their world opens up thanks to the books they read, educating them about life outside their remote village. There is a colorful dream sequence showing one boy’s world getting brighter. When they get home and the sun goes down, they can’t read anymore. But one of the boys picks up a Luci solar lantern, giving him safe access to light — no need for kerosene-based lanterns. See the video here.

According to The Studio’s creative director/president Mary Nittolo, “Our original inspiration was the look of traditional shadow puppets, but as we went along we came to feel that we needed to use more familiar visual tropes. The shadow puppets concept would have looked amazing, but we didn’t think it would have made the same connection with viewers. So we ended up with an aesthetic that includes a number of influences, including shadow puppets, traditional zoetropes, cut paper animation and even a little Disney’s The Lion King.”

The Studio called on Autodesk Maya and Adobe After Effects.

What was the biggest challenge on this one? “Really the most challenging part was coming up with the overall look of the animation,” explains Nittolo. “Once we settled on an aesthetic, it all came together. We did some tests, but what stands out for me about this is how much agreement there was among everyone involved about how it should look. Once the initial animation was done and we heard the beautiful voiceover, it was clear this was something special.”

Nittolo not only enjoyed the work, but also believed in the PSA’s message. “It was a privilege to work on this project and bring not only attention, but an affordable solution, to an overwhelming problem,” she adds. “Across sub-Saharan Africa, 90 million primary students are without electricity. And, each year, indoor pollution from dirty fuels results in four million deaths. What MPowerd gives is an opportunity to make a modest investment that can be life altering.”

Quick Chat: The Hit House’s Sally House on new Lexus spots

LA-based The Hit House created and produced original music and sound design for the new Lexus NX campaign via Team One Advertising. The Corner Shop produced and Wilfrid Brimo directed. Jump Editorial’s Richard Cooperman provided the cut.

The What You Get Out of It spot features a man in a parking garage, opening a large shipping container. Suddenly people start appearing and entering the container with random items, such as a bike, luggage and a dog. The man then closes the doors and they fall away, revealing a white Lexus filled with all the people and their stuff. They drive away together.

The other commercial in this campaign, which promotes the Lexus NX Hybrid, F Sport and Turbo models, is called Moving. The Hit House (@HitHouseMusic) describes the music it created as industrial and contemporary.

Continue reading

Mixtape Club discusses Android campaign for Google Creative Lab

Toward the end of 2014, New York-based animation/production house Mixtape Club partnered with Google Creative Lab to create a multi-platform campaign for the Android mobile OS. Mixtape Club created five 30-second spots and one 15-second spot for TV. Also included in the campaign was a multi-screen animation for 10 digital out-of-home installations on newsstands throughout Manhattan. To get a taste of the campaign, click here.

Mixtape Club got involved early, contributing concept development as well as character animation. Its sister company Huma-Huma provided the sound and music, including the music that plays during the title card sequence at the end of the spots.

postPerspective reached out to Mixtape Club partners and creative directors Chris Lenox Smith Continue reading