
Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on a voyage to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

 Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, almost everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from sourced hydrophone recordings from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.

Effects editor Saoirse Christopherson capturing sounds on board a kayak in the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show’s sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R26 recorder with several contact mics strapped to the kayak’s hull, they spent the day grinding through, over and against the ice. “The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak,” says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
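The tape-style treatment Walpole describes, slowing a recording so that its pitch drops along with its speed, can be sketched as a naive resample. This is an illustrative sketch of the general technique, not his actual toolchain; the function name and the test tone are invented for the example.

```python
import numpy as np

def slow_and_pitch_down(samples: np.ndarray, factor: float) -> np.ndarray:
    """Stretch audio by `factor` via naive resampling: reading the same
    samples back at 1/factor speed both lengthens the sound and lowers
    its pitch (factor=2.0 drops it one octave)."""
    n_out = int(len(samples) * factor)
    # Fractional positions in the original signal for each output sample
    positions = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(positions, np.arange(len(samples)), samples)

# A one-second 440 Hz tone slowed by 2x plays back as 220 Hz
# when reproduced at the same sample rate.
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
slowed = slow_and_pitch_down(tone, 2.0)
```

Dedicated tools layer anti-aliasing filters and artifact control on top of this basic idea, and EQ would then be applied to emphasize the low-mid and low-end content, as described above.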

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole uses sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn’t just battling the elements, scurvy, starvation and mutiny. They’re also being killed off by a polar bear-like creature called the Tuunbaq. It’s part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season so Walpole created sonic hints as to the creature’s make-up.

Walpole worked with showrunner David Kajganich to find the creature’s voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. “Some of the recordings we used raw as well,” says Walpole. “This guy could make these crazy sounds. His voice could go so deep.”

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series is shot on a soundstage, there’s no usable bed of production sound to act as a jumping-off point for the post sound team. But instead of that being a challenge, Walpole finds it liberating. “In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it’s the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It’s not often in your career that you get such a blank canvas of creation.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Michael Semanick: Mixing SFX, Foley for Star Wars: The Last Jedi

By Jennifer Walden

Oscar-winning re-recording mixer Michael Semanick from Skywalker Sound mixed the sound effects, Foley and backgrounds on Star Wars: The Last Jedi, which has earned an Oscar nomination for Sound Mixing.

Technically, this is not Semanick’s first experience with the Star Wars franchise — he’s credited as an additional mixer on Rogue One — but on The Last Jedi he was a key figure in fine-tuning the film’s soundtrack. He worked alongside re-recording mixers Ren Klyce and David Parker, and with director Rian Johnson, to craft a soundtrack that was bold and dynamic. (Look for next week’s Star Wars story, in which re-recording mixer Ren Klyce talks about his approach to mixing John Williams’ score.)

Michael Semanick

Recently, Semanick shared his story of what went into mixing the sound effects on The Last Jedi. He mixed at Skywalker in Nicasio, California, on the Kurosawa Stage.

You had all of these amazing elements — Skywalker’s effects, John Williams’ score and the dialogue. How did you bring clarity to what could potentially be a chaotic soundtrack?
Yes, there are a lot of elements that come in, and you have to balance these things. It’s easy on a film like this to get bombastic and assault the audience, but that’s one of the things that Rian didn’t want to do. He wanted to create dynamics in the track and get really quiet so that when it does get loud it’s not overly loud.

So when creating that I have to look at all of the elements coming in and see what we’re trying to do in each specific scene. I ask myself, “What’s this scene about? What’s this storyline? What’s the music doing here? Is that the thread that takes us to the next scene or to the next place? What are the sound effects? Do we need to hear these background sounds, or do we need just the hard effects?”

Essentially, it’s me trying to figure out how many frequencies are available and how much dialogue has to come through so the audience doesn’t lose the thread of the story. It’s about deciding when it’s right to feature the sound effects or take the score down to feature a big explosion and then bring the score back up.

It’s always a balancing act, and it’s easy to get overwhelmed and throw it all in there. I might need a line of dialogue to come through, so the backgrounds go. I don’t want to distract the audience. There is so much happening visually in the film that you can’t put sound on everything. Otherwise, the audience wouldn’t know what to focus on. At least that’s my approach to it.

How did you work with the director?
As we mixed the film with Rian, we found what types of sounds defined the film and what types of moments defined the film in terms of sound. For example, by the time you reach the scene when Vice Admiral Holdo (Laura Dern) jumps to hyperspace into the First Order’s fleet, everything goes really quiet. The sound there doesn’t go completely out — it feels like it goes out, but there’s sound. As soon as the music peaks, I bring in a low space tone. Well, if there was a tone in space, I imagine that is what it would sound like. So there is sound constantly through that scene, but the quietness goes on for a long time.

One of the great things about that scene was that it was always designed that way. While I noted how great that scene was, I didn’t really get it until I saw it with an audience. They became the soundtrack, reacting with gasps. I was at a screening in Seattle, and when we hit that scene you could hear that the people were just stunned, and one guy in the audience went, “Yeah!”

There are other areas in the film where we go extremely quiet or take the sound out completely. For example, when Rey (Daisy Ridley) and Kylo Ren (Adam Driver) first force-connect, the sound goes out completely… you only hear a little bit of their breathing. There’s one time when the force connection catches them off guard — when Kylo had just gotten done working out and Rey was walking somewhere — we took the sound completely out while she was still moving.

Rian loved it because when we were working on that scene we were trying to get something different. We used to have sound there, all the way through the scene. Then Rian said, “What happens if you just start taking some of the sounds out?” So, I started pulling sounds out and sure enough, when I got the sound all the way out — no music, no sounds, no backgrounds, no nothing — Rian was like, “That’s it! That just draws you in.” And it does. It pulls you into their moment. They’re pulled together even though they don’t want to be. Then we slowly brought it back in with their breathing, a little echo and a little footstep here or there. Having those types of dynamics worked into the film helped the scene at the end.

Rian shot and cut the picture so we could have these moments of quiet. It was already set up, visually and story-wise, to allow that to happen. When Rey goes into the mirror cave, it’s so quiet. You hear all the footsteps and the reverbs and reflections in there. The film lent itself to that.

What was the trickiest scene to mix in terms of the effects?
The moment Kylo Ren and Rey touch hands via the force connection. That was a real challenge. They’re together in the force connection, but they weren’t together physically. We were cutting back and forth from her place to Kylo Ren’s place. We were hearing her campfire and her rain. It was a very delicate balance between that and the music. We could have had the rain really loud and the music blasting, but Rian wanted the rain and fire to peel away as their hands were getting closer. It was so quiet and when they did touch there was just a bit of a low-end thump. Having a big sound there just didn’t have the intimacy that the scene demanded. It can be so hard to get the balance right to where the audience is feeling the same thing as the characters. The audience is going, “No, oh no.” You know what’s going to come, but we wanted to add that extra tension to it sonically. For me, that was one of the hardest scenes to get.

What about the action scenes?
They are tough because they take time to mix. You have to decide what you want to play. For example, when the ships are exploding as they’re trying to get away before Holdo rams her ship into the First Order’s, you have all of that stuff falling from the ceiling. We had to pick our moments. There’s all of this fire in the background and TIE fighters flying around, and you can’t hear them all or it will be a jumbled mess. I can mix those scenes pretty well because I just follow the story point. We need to hear this to go with that. We have to have a sound of falling down, so let’s put that in.

Is there a scene you had fun with?
The fight in Snoke’s (Andy Serkis) room, between Rey and Kylo Ren. That was really fun because it was like wham-bam, and you have the lightsaber flying around. In those moments, like when Rey throws the lightsaber, we drop the sound out for a split second so when Kylo turns it on it’s even more powerful.

That scene was the most fun, but the trickiest one was that force-touch scene. We went over it a hundred different ways, to just get it to feel like we were with them. For me, if the sound calls too much attention to itself, it’s pulling you out of the story, and that’s bad mixing. I wanted the audience to lean in and feel those hands about to connect. When you take the sound out and the music out, then it’s just two hands coming together slowly. It was about finding that balance to make the audience feel like they’re in that moment, in that little hut, and they’re about to touch and see into each other’s souls, so to speak. That was a challenge, but it was fun because when you get it, and you see the audience react, everyone feels good about that scene. I feel like I did something right.

What was one audio tool that you couldn’t live without on this mix?
For me, it was the AMS Neve DFC Gemini console. All the sounds came into that. The console was like an instrument that I played. I could bring any sound in from any direction, and I could EQ it and manipulate it. I could put reverb on it. I could give the director what he wanted. My editors were cutting the sound, but I had to have that console to EQ and balance the sounds. Sometimes it was about EQing frequencies out to make a sound fit better with other sounds. You have to find room for the sounds.

I could move around on it very quickly. I had Rian sitting behind me saying, “What if you roll back and adjust this or try that.” I could ease those faders up and down and hit it just right. I know how to use it so well that I could hear stuff ahead of what I was doing.

The Neve DFC was invaluable. I could take all the different sound formats and sample rates and it all came through the console, and in one place. It could blend all those sources together; it’s a mixing bowl. It brought all the sounds together so they could all talk to each other. Then I manipulated them and sent them out and that was the soundtrack — all driven by the director, of course.

Can you talk about working with the sound editor?
The editors are my right-hand people. They can shift things and move things and give me another sound. Maybe I need one with more mid-range because the one in there isn’t quite reading. We had a lot of that. Trying to get those explosions to work and to come through John Williams’ score, sometimes we needed something with more low-end and more thump or more crack. There was a handoff in some scenes.

On The Last Jedi, I had sound effects editor Jon Borland with me on the stage. Bonnie Wild had started the project and had prepped a lot of the sounds for several reels — her and Jon and Ren Klyce, who oversaw the whole thing. But Jon was my go-to person on the stage. He did a great job. It was a bit of a daunting task, but Jon is young and wants to learn and gave it everything he had. I love that.

What format was the main mix?
Everything was done in Atmos natively, then we downmixed to 7.1 and 5.1 and all the other formats. We were very diligent about having the downmixed versions match the Atmos mix the best that they could.
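A fixed channel-bed fold-down of the kind Semanick alludes to can be illustrated with the common ITU-R BS.775-style coefficients, where the centre and surround channels are folded into the stereo pair at -3 dB. A real Atmos renderer is object-based and considerably more involved, so treat this as a simplified sketch with invented function and channel names.

```python
import numpy as np

ATT = 10 ** (-3 / 20)  # -3 dB, approximately 0.707

def downmix_51_to_stereo(ch):
    """Static 5.1 -> stereo fold-down.
    ch: dict mapping channel names L, R, C, LFE, Ls, Rs to np.ndarray.
    The LFE is conventionally omitted from the stereo fold-down."""
    left = ch["L"] + ATT * ch["C"] + ATT * ch["Ls"]
    right = ch["R"] + ATT * ch["C"] + ATT * ch["Rs"]
    return left, right
```

Checking a downmix against the native mix, as the team did, then amounts to comparing this folded-down result with the dedicated lower-format mix and adjusting until they match as closely as possible.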

Any final thoughts you’d like to share?
I’m so glad that Rian chose me to be part of the mix. This film was a lot of fun and a real collaborative effort. Rian is the one who really set that tone. He wanted to hear our ideas and see what we could do. He wasn’t sold on one thing. If something wasn’t working, he would try things out until it did. It was literally sorting out frequencies and getting transitions to work just right. Rian was collaborative, and that creates a room of collaboration. We wanted a great track for the audience to enjoy… a track that went with Rian’s picture.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Review: Blackmagic Resolve 14

By David Cox

Blackmagic has released Version 14 of its popular DaVinci Resolve “color grading” suite, following a period of open public beta development. I put color grading in quotes, because one of the most interesting aspects about the V14 release is how far-reaching Resolve’s ambitions have become, beyond simply color grading.

Fairlight audio within Resolve.

Prior to being purchased by Blackmagic, DaVinci Resolve was one of a small group of high-end color grading systems being offered in the industry. Blackmagic then extended the product to include editing, and Version 14 offers several updates in this area, particularly around speed and fluidity of use. A surprise addition is the incorporation of Fairlight Audio — a full-featured audio mixing platform capable of producing feature-film-quality 3D soundscapes. It is not just an external plugin, but an integrated part of the software.

This review concentrates on the color finishing aspects of Resolve 14, and on first view the core color tools remain largely unchanged save for a handful of ergonomic improvements. This is not surprising given that Resolve is already a mature grading product. However, Blackmagic has added some very interesting tools and features clearly aimed at enabling colorists to broaden their creative control. I have been a long-time advocate of the idea that a colorist doesn’t change the color of a sequence, but changes the mood of it. Manipulating the color is just one path to that result, so I am happy to see more creatively expansive facilities being added.

Face Refinement
One new feature that epitomizes Blackmagic’s development direction is the Face Refinement tool. It provides features to “beautify” a face and underlines two interesting development points. Firstly, it shows an intention by the developers to create a platform that allows users to extend their creative control across the traditional borders of “color” and “VFX.”

Secondly, such a feature incorporates more advanced programming techniques that seek to recognize objects in the scene. Traditional color and keying tools simply replace one color for another, without “understanding” what objects those colors are attached to. This next step toward a more intelligent diagnosis of scene content will lead to some exciting tools and Blackmagic has started off with face-feature tracking.

Face Refinement

The Face Refinement function works extremely well where it recognizes a face. There is no manual intervention — the tool simply finds a face in the shot and tracks all the constituent parts (eyes, lips, etc). Where there is more than one face detected, the system offers a simple box selector for the user to specify which face to track. Once the analysis is complete, the user has a variety of simple sliders to control the smoothness, color and detail of the face overall, but also specific controls for the forehead, cheeks, chin, lips, eyes and the areas around and below the eyes.

I found the face de-shine function particularly successful. A light touch with the controls yields pleasing results very quickly. A heavy touch is what you need if you want to make someone look like an android. I liked the fact that you can go negative with some controls and make a face look more haggard!

In my tests, the facial tracking was very effective for properly framed faces, even those with exaggerated expressions, headshakes and so on. But it would fail where the face became partially obscured, such as when the camera panned off the face. This led to all the added improvements popping off mid shot. While the fully automatic operation makes it quick and simple to use, it affords no opportunity for the user to intervene and assist the facial tracking if it fails. All things considered though, this will be a big help and time saver for the majority of beauty work shots.

Resolve FX
New for Resolve 14 are a myriad of built-in effects called Resolve FX, all GPU-accelerated and available to be added in the edit “page” directly to clips, or in the color page attached to nodes. They are categorized into Blurs, Light, Color, Refine, Repair, Stylize, Texture and Warp. A few particularly caught my eye. For example, in “Color,” the color compressor brings together nearby colors to a central hue. This is handy for unifying the colors of an unevenly lit client logo into their precise brand reference, or dealing with blotchy skin. There is also a color space transform tool that enables LUT-less conversion between all the major color “spaces.”
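The color compressor’s described behavior, pulling nearby hues toward a central hue, can be sketched per pixel in a few lines. This is an illustrative guess at the operation, not Resolve’s actual algorithm, and the parameter names are invented.

```python
import colorsys

def compress_hue(rgb, target_hue, window, amount):
    """Pull hues within `window` of target_hue toward it by `amount` (0..1).
    rgb and target_hue are in the 0..1 range used by colorsys."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Shortest signed hue distance on the color circle [0, 1)
    d = (h - target_hue + 0.5) % 1.0 - 0.5
    if abs(d) < window:
        h = (target_hue + d * (1 - amount)) % 1.0
    return colorsys.hsv_to_rgb(h, s, v)
```

With `amount=1.0`, every hue inside the window snaps to the target, which is the logo-unifying case described above; smaller amounts merely narrow the spread of hues.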

Color

The dehaze function derives a depth map by some mysterious magic to help improve contrast over distance. The “light” collection includes a decent lens flare that allows plenty of customizing. “Styles” creates watercolor and outline looks while Texture includes a film grain effect with several film-gauge presets. I liked the implementation of the new Warp function. Rather than using grids or splines, the user simply places “pins” in the image to drag certain areas around. Shift-adding a pin defines a locked position immune from dragging. All simple, intuitive and realtime, or close to it.

Multi-Skilled and Collaborative Workflows
A dilemma for the Resolve developers is likely to be where to draw the line between editing, color and VFX. Blackmagic also develops Fusion, so they have the advanced side of VFX covered. But in the middle, there are editors who want to make funky transitions and title sequences, and colorists who use more effects, mattes and tracking. Resolve runs out of ability in these areas quite quickly and this forces the more adventurous editor or colorist into the alien environment of Fusion. The new features of Resolve help in this area, but a few additions to Resolve, such as better keyframing of effects and easier ability to reference other timeline layers in the node panel could help to extend Resolve’s ability to handle many common VFX-ish demands.

Some have criticized Blackmagic for turning Resolve into a multi-discipline platform, suggesting that this will create an industry of “jack of all trades and masters of none.” I disagree with this view for several reasons. Firstly, if an artist wants to major in a specific discipline, having a platform that can do more does not impede them. Secondly, I think the majority of content (if you include YouTube, etc.) is created by a single person or small teams, so the growth of multi-skilled post production people is simply an inevitable and logical progression which Blackmagic is sensibly addressing.

Edit

But for professional users within larger organisations, the cross-discipline features of Resolve take on a different meaning when viewed in the context of “collaboration.” Resolve 14 permits editors to edit, colorists to color and sound mixers to mix, all using different installations of the same platform, sharing the same media and contributing to the same project, even the same timeline. On the face of it, this promises to remove “conforms” and eradicate wasteful import/export processes and frustrating compatibility issues, while enabling parallel workflows across editing, color grading and audio.

For fast-turnaround projects, or projects where client approval cannot be sought until the project progresses beyond a “rough” stage, the potential advantages are compelling. Of course, the minor hurdle to get over will be to persuade editors and audio mixers to adopt Resolve as their chosen weapon. If they do, Blackmagic might well be on the way to providing collaborative utopia.

Summing Up
Resolve 14 is a massive upgrade from Resolve 12 (there wasn’t a Resolve 13 — who would have thought that a company called Blackmagic might be superstitious?). It provides a substantial broadening of ability that will suit multi-skilled smaller outfits and also fit as a grading/finishing platform and collaborative backbone in larger installations.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Richard King talks sound design for Dunkirk

Using historical sounds as a reference

By Mel Lambert

Writer/director Christopher Nolan’s latest film follows the fate of nearly 400,000 allied soldiers who were marooned on the beaches of Dunkirk, and the extraordinary plans to rescue them using small ships from nearby English seaports. Although, sadly, more than 68,000 soldiers were captured or killed during the Battle of Dunkirk and the subsequent retreat, more than 300,000 were rescued over a nine-day period in May 1940.

Uniquely, Dunkirk’s primary story arcs — the Mole, or harbor from which the larger ships can take off troops; the Sea, focusing on the English flotilla of small boats; and the Air, spotlighting the activities of Spitfire pilots who protect the beaches and ships from German air-force attacks — follow different timelines, with the Mole sequences being spread over a week, the Sea over a day and the Air over an hour. A Warner Bros. release, Dunkirk stars Fionn Whitehead, Mark Rylance, Cillian Murphy, Tom Hardy and Kenneth Branagh. (An uncredited Michael Caine is the voice heard during various radio communications.)

Richard King

Marking his sixth collaboration with Nolan, supervising sound editor Richard King worked previously on Interstellar (2014), The Dark Knight Rises, Inception, The Dark Knight and The Prestige. He brings his unique sound perspective to these complex narratives, often with innovative sound design. Born in Tampa, King attended the University of South Florida, graduating with a BFA in painting and film, and entered the film industry in 1985. He is the recipient of three Academy Awards for Best Achievement in Sound Editing for Inception, The Dark Knight and Master and Commander: The Far Side of the World (2003), plus two BAFTA Awards and four MPSE Golden Reel Awards for Best Sound Editing.

King, along with Alex Gibson, recently won the Academy Award for Achievement in Sound Editing for Dunkirk.

The Sound of History
“When we first met to discuss the film,” King recalls, “Chris [Nolan] told me that he wanted Dunkirk to be historically accurate but not slavishly so — he didn’t plan to make a documentary. For example, several [Junkers Ju 87] Stuka dive bombers appear in the film, but there are no high-quality recordings of these aircraft, which had sirens built into the wheel struts for intimidation purposes. There are no Stukas still flying, nor could I find any design drawings so we could build our own. Instead, we decided to re-imagine the sound with a variety of unrelated sound effects and ambiences, using the period recordings as inspiration. We went out into a nearby desert with some real air raid sirens, which we over-cranked to make them more and more piercing — and to add some analog distortion. To this more ‘pure’ version of the sound we added an interesting assortment of other disparate sounds. I find the result scary as hell and probably very close to what the real thing sounded like.”

For other period Axis and Allied aircraft, King was able to locate several British Supermarine Spitfire fighters and a Bristol Blenheim bomber, together with a German Messerschmitt Bf 109 fighter. “There are about 200 Spitfires in the world that still fly; three were used during filming of Dunkirk,” King continues. “We received those recordings, and in post recorded three additional Spitfires.”

King was able to place up to 24 microphones in various locations around the airframe near the engine — a supercharged 27-liter Rolls-Royce Merlin liquid-cooled V-12, and in later models the 37-liter Griffon — as well as close to the exhaust and within the cockpit, as the pilots performed a number of aerial movements. “We used both mono and stereo mics to provide a wide selection for sound design,” he says.

King was looking for the sound of an “air ballet” with the aircraft moving quickly across the sky. “There are moments when the plane sounds are minimized to place the audience more in the pilot’s head, and there are sequences where the plane engines are more prominent,” he says. “We also wanted to recreate the vibrations of this vintage aircraft, which became an important sound design element and was inspired by the shuddering images. I remember that Chris went up in a trainer aircraft to experience the sensation for himself. He reported that it was extremely loud with lots of vibration.”

To match up with the edited visuals secured from 65/70mm IMAX and Super Panavision 65mm film cameras, King needed to produce a variety of aircraft sounds. “We had an ex-RAF pilot who had flown in modern dogfights recreate some of those wartime flying gymnastics. The planes don’t actually produce dramatic changes in sound when throttling and maneuvering, so I came up with a simple and effective way to accentuate this somewhat. I wanted the planes to respond to the pilot’s stick and throttle movements immediately.”

For armaments, King’s sound effects recordists John Fasal and Eric Potter oversaw the recording of a vintage Bofors 40mm anti-aircraft cannon, as seen aboard the allied destroyers and support ships. “We found one in Napa Valley [north of San Francisco],” says King. “The owner had to make up live rounds, which we fired into a nearby hill. We also recorded a number of WWII British Lee-Enfield bolt-action rifles and German machine guns on a nearby range. We had to recreate the sound of the Spitfire’s guns, because the actual guns fitted to the Spitfires overheat when fired at sea level and cannot maintain the 1,000 rounds/minute rate we were looking for, except at altitude.”

King readily acknowledges the work at Warner Bros. Sound Services of sound effects editor Michael Mitchell, who worked on several scenes, including the ship sinkings, and sound effects editor Randy Torres, who worked with King on the plane sequences.

Group ADR was done primarily in the UK, “where we recorded at De Lane Lea and onboard a decommissioned WWII warship owned by the Imperial War Museum,” King recalls. “HMS Belfast, which is moored on the River Thames in central London, was perfect for the reverberant interiors we needed for the various ships that sink in the film. We also secured some realistic Foley of people walking up and down ladders and on the superstructure.” Hugo Weng served as dialog editor and David Bach as supervising ADR editor.

Sounds for Moonstone, the key small boat whose fortunes the film follows across the English Channel, were recorded out of Marina del Rey in Southern California, including its motor and water slaps against the hull. “We also secured some nice Foley on deck, as well as opening and closing of doors,” King says.

Conventional Foley was recorded at Skywalker Sound in Northern California by Shelley Roden, Scott Curtis and John Roesch. “Good Foley was very important for Dunkirk,” explains King. “It all needed to sound absolutely realistic and not like a Hollywood war movie, with a collection of WWII clichés. We wanted it to sound as it would for the film’s characters. John and his team had access to some great surfaces and textures, and a wonderful selection of props.” Michael Dressel served as supervising Foley editor.

In terms of sound design, King offers that he used historical sounds as a reference, to conjure up the terror of the Battle for Dunkirk. “I wanted it to feel like a well-recorded version of the original event. The book ‘Voices of Dunkirk,’ written by Joshua Levine and based on a compilation of first-hand accounts of the evacuation, inspired me and helped me shape the explosions on the beach, with the muffled ‘boom’ as the shells and bombs bury themselves in the sand and then explode. The under-water explosions needed to sound more like a body slam than an audible noise. I added other sounds that amped it a couple more degrees.”

The soundtrack was re-recorded in 5.1-channel format at Warner Bros. Sound Services Stage 9 in Burbank during a six-week mix, with Gary Rizzo handling dialog and Gregg Landaker overseeing sound effects and music — Dunkirk was his last film before retiring. “There was almost no looping on the film aside from maybe a couple of lines,” King recalls. “Hugo Weng mined the recordings for every gem, and Gary [Rizzo] was brilliant at cleaning up the voices and pushing them through the barrage of sound provided by sound effects and music somehow without making them sound pushed. Production recordist Mark Weingarten faced enormous challenges, contending with strong wind and salt spray, but he managed to record tracks Gary could work with.”

The sound designer reports that he provided some 20 to 30 tracks of dialog and ADR “with options for noisy environments,” plus 40 to 50 tracks of Foley, depending on the action. This included shoes and hob-nailed army boots, and groups of 20, especially in the ship scenes. “The score by composer Hans Zimmer kept evolving as we moved through the mixing process,” says King. “Music editor Ryan Rubin and supervising music editor Alex Gibson were active participants in this evolution.”

“We did not want to repeat ourselves or repeat others’ work,” King concludes. “All sounds in this movie mean something. Every scene had to be designed with a hard-hitting sound. You need to constantly question yourself: ‘Is there a better sound we could use?’ Maybe something different, appropriate to the sequence, that recreates the event in a new and fresh light? I am super-proud of this film and the track.”

Nolan — who was born in London to an American mother and an English father and whose family subsequently split their time between London and Illinois — has this quote on his IMDb page: “This is an essential moment in the history of the Second World War. If this evacuation had not been a success, Great Britain would have been obliged to capitulate. And the whole world would have been lost, or would have known a different fate: the Germans would undoubtedly have conquered Europe, the US would not have returned to war. Militarily it is a defeat; on the human plane it is a colossal victory.”

Certainly, the loss of life and supplies was profound — wartime Prime Minister Winston Churchill described Operation Dynamo as “the greatest military disaster in our long history.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for her sound editing on director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B-24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that the Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at the Technicolor at Paramount lot. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”

One challenge of the mix and design was to make sure the audience knew the location of a sound by the texture of it. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we are in the basement, and the music and dialogue are happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound was coming from. Everything we did helped the picture establish location.”

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what’s happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. “We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy boot on wood floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he’s angry. You can feel it in every footstep he takes. He’s throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene,” says Sullivan.

While the film often feels realistic, there were stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper’s house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. “The editing sounds silky smooth. When we transition out of the chanting on-camera, then that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing,” says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

“Girls do not do rewrites,” says Jim Belushi’s character, Wick McFadden, in Amazon Studios’ series Good Girls Revolt. It’s 1969, and he’s the national editor at News of the Week, a fictional news magazine based in New York City. He’s confronting the new researcher Nora Ephron (Grace Gummer), who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is, women aren’t writers in 1969; they’re only “researchers,” following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender-bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom they constructed had an open floor plan with a bi-level design. The girls are located in “the pit” area downstairs from the male writers. The newsroom production set was hollow, which caused an issue with the actors’ footsteps that were recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. “We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage,” says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work — sound editorial, Foley, ADR, loop group and final mix — was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialog editor Sean Massey made every effort to save production dialog when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, “the set mixer Joe Foglia was terrific,” says Austin. “He captured some great tracks despite all these issues, and for that we’re very thankful!”

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds, like typewriters clacking and dinging, the sound of rotary phones with whirring dials and bell-style ringers, the sound of papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library, from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and helped to blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this and so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years, and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialog/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.

Quick Chat: Monkeyland Audio’s Trip Brock

By Dayna McCallum

Monkeyland Audio recently expanded its facility, including a new Dolby Atmos equipped mixing stage. The Glendale-based Monkeyland Audio, where fluorescent lights are not allowed and creative expression is always encouraged, now offers three mixing stages, an ADR/Foley stage and six editorial suites.

Trip Brock, the owner of Monkeyland, opened the facility over 10 years ago, but the MPSE Golden Reel Award-winning supervising sound editor and mixer (All the Wilderness) started out in the business more than 23 years ago. We reached out to Brock to find out more about the expansion and where the name Monkeyland came from in the first place…

One of your two new stages is Dolby Atmos certified. Why was that important for your business?
We really believe in the Dolby Atmos format and feel it has a lot of growth potential in both the theatrical and television markets. We purpose-built our Atmos stage looking towards the future, giving our independent and studio clients a less expensive, yet completely state-of-the-art alternative to the Atmos stages found on the studio lots.

Can you talk specifically about the gear you are using on the new stages?
All of our stages are running the latest Avid Pro Tools HD 12 software across multiple Mac Pros with Avid HDX hardware. Our 7.1 mixing stage, Reposado, is based around an Avid Icon D-Control console, and Anejo, our Atmos stage, is equipped with dual 24-fader Avid S6 M40 consoles. Monitoring on Anejo is based on a 3-way JBL theatrical system, with 30 channels of discrete Crown DCi amplification, BSS processing and the DAD AX32 front end.

You’ve been in this business for over 23 years. How does that experience color the way you run your shop?
I stumbled into the post sound business coming from a music background, and immediately fell in love with the entire process. After all these years, having worked with and learned so much from so many talented clients and colleagues, I still love what I do and look forward to every day at the office. That’s what I look for and try to cultivate in my creative team — the passion for what we do. There are so many aspects and nuances in the audio post world, and I try to express that to my team — explore all the different areas of our profession, find which role really speaks to you and then embrace it!

You’ve got 10 artists on staff. Why is it important to you to employ a full team of talent, and how do you see that benefiting your clients?
I started Monkeyland as primarily a sound editorial company. Back in the day, this was much more common than the all-inclusive, independent post sound outfits offering ADR, Foley and mixing, which are more common today. The sound editorial crew always worked together in house as a team, which is a theme I’ve always felt was important to maintain as our company made the switch into full service. To us, keeping the team intact and working together at the same location allows for a lot more creative collaboration and synergy than, say, a set of editors all working by themselves remotely. Having staff in house also allows us flexibility when last-minute changes are thrown our way. We are better able to work and communicate as a team, which leads to a superior end product for our clients.

Can you name some of the projects you are working on and what you are doing for them?
We are currently mixing a film called The King’s Daughter, starring Pierce Brosnan and William Hurt. We also recently completed full sound design and editorial, as well as the native Atmos mix, on a new post-apocalyptic feature we are really proud of called The Worthy. Other recent editorial and mixing projects include the latest feature from director Alan Rudolph, Ray Meets Helen, the 10-episode series Junior for director Zoe Cassavetes, and Three Days To Live, a new eight-episode true-crime series for NBC/Universal.

Most of your stage names are related to tequila… Why is that?
Haha — this is kind of a take-off from the naming of the company itself. When I was looking for a company name, I knew I didn’t want it to include the word “digital” or have any hint toward technology, which seemed to be the norm at the time. A friend in college used to tease me about my “unique” major in audio production, saying stuff like, “What kind of a degree is that? A monkey could be trained to do that.” Thus Monkeyland was born!

Same theory applied to our stage names. When we built the new stages and needed to name them, I knew I didn’t want to go with the traditional stage “A, B, C” or “1, 2, 3,” so we decided on tequila types — Anejo, Reposado, Plata, even Mezcal. It seems to fit our personality better, and who doesn’t like a good margarita after a great mix!

The sounds of Brooklyn play lead role in HBO’s High Maintenance

By Jennifer Walden

New Yorkers are jaded, and one of the many reasons is that just about anything they want can be delivered right to their door: Chinese food, prescriptions, craft beer, dry cleaning and weed. Yes, weed. This particular item is delivered by “The Guy,” the protagonist of HBO’s new series, High Maintenance.

The Guy (played by series co-creator Ben Sinclair) bikes around Brooklyn delivering pot to a cast of quintessentially quirky New York characters. Series creators Sinclair and Katja Blichfeld string together vignettes — using The Guy as the common thread — to paint a realistic picture of Brooklynites.

Nutmeg’s Andrew Guastella. Photo credit: Carl Vasile

“The Guy delivers weed to people, often going into their homes and becoming part of their lives,” explains sound editor/re-recording mixer Andrew Guastella at Nutmeg, a creative marketing and post studio based in New York. “I think that what a lot of viewers like about the show is how quickly you come to know complete strangers in a sort of intimate way.”

Blichfeld and Sinclair find inspiration for their stories from their own experiences, says Guastella, who follows suit in terms of sound. “We focus on the realism of the sound, and that’s what makes this show unique.” The sound of New York City is ever-present, just as it is in real life. “Audio post was essential for texturizing our universe,” says Sinclair. “There’s a loud and vibrant city outside of those apartment walls. It was important to us to feel the presence of a city where people live on top of each other.”

Big City Sounds
That edict for realism drives all sound-related decisions on High Maintenance. On a typical series, Guastella would strive to clean up every noise on the production dialogue, but for High Maintenance, the sounds of sirens, horns, traffic and even car alarms are left in the tracks, as long as they’re not drowning out the dialogue. “It’s okay to leave sounds in that aren’t obtrusive and that sell the fact that they are in New York City,” he says.

For example, a car alarm went off during a take. It wasn’t in the way of the dialogue but it did drop out on a cut, making it stand out. “Instead of trying to remove the alarm from the dialogue, I decided to let it roll and I added a chirp from a car alarm, as if the owner turned off the alarm [or locked the car], to help incorporate it into the track. A car alarm is a sound you hear all the time in New York.”

Exterior scenes are acceptably lively, and if an interior scene is feeling too quiet, Guastella can raise a neighborly ruckus. “In New York, there’s always that noisy neighbor. Some show creators might be a little hesitant to use that because it could be distracting, but for this show, as long as it’s real, Ben and Katja are cool with it,” he says. During a particularly quiet interior scene, he tried adding the sounds of cars pulling away and other light traffic to fill up the space, but it wasn’t enough, so Guastella asked the creators, “How do you feel about the neighbors next door arguing?” And they said, “That’s real. That’s New York. Let’s try it out.”

Guastella crafted a commotion based on his own experience of living in an apartment in Queens. Every night he and his wife would hear the downstairs neighbors fighting. “One night they were yelling and then all we heard was this loud, enormous slam. Hopefully, it was a door,” jokes Guastella. “Ben and Katja are always pulling from their own experiences, so I tried to do that myself with the soundtrack.”

Despite the skill of production sound mixer Dimitri Kouri, and a high tolerance for the ever-present sound of New York City, Guastella still finds himself cleaning dialogue tracks using iZotope’s RX 5 Advanced. One of his favorite features is RX Connect. With this plug-in feature, he can select a region of dialogue in his Avid Pro Tools session and send that region directly to iZotope’s standalone RX application, where he can edit, clean and process the dialogue. Once he’s satisfied, he can return that cleaned-up dialogue right back in sync on the timeline of the Pro Tools session he originally sent it from.

“I no longer have to deal with exporting and importing audio files, which was not an efficient way to work,” he says. “And for me, it’s important that I work within the standalone application. There are plug-in versions of some RX tools, but for me, the standalone version offers more flexibility and the opportunity to use the highly detailed visual feedback of its audio-spectrum analyzer. The spectrogram makes using tools like Spectral Repair and De-click that much more effective and efficient. There are more ways to use and combine the tools in general.”

Guastella has been with the series since 2012, during its webisode days on Vimeo. Back then, it was a passion project, something he’d work on at home on his own time. From the beginning, he’s handled everything audio: the dialogue cleaning and editing, the ambience builds, Foley and the final mix. “Andrew [Guastella] brought his professional ear and was always such a pleasure to work with. He always delivered and was always on time,” says Blichfeld.

The only aspect that Guastella doesn’t handle is the music. “That’s a combination of licensed music (secured by music supervisor Liz Fulton) and original composition by Chris Bear. The music is well-established by the time the episode gets to me,” he says.

On the Vimeo webisodes, Guastella would work an episode’s soundtrack into shape, and then send it to Blichfeld and Sinclair for notes. “They would email me or we would talk over the phone. The collaborative process wasn’t immediate,” he says. Now that HBO has picked up the series and renewed it for Season 2, Guastella is able to work on High Maintenance in his studio at Nutmeg, where he has access to all the amenities of a full-service post facility, such as sound effects libraries, an ADR booth, a 5.1 surround system and room to accommodate the series creators who like to hang around and work on the sound with Guastella. “They are very particular about sound and very specific. It’s great to have instant access to them. They were here more than I would’ve expected them to be and it was great spending all that time with them personally and professionally.”

In addition to being a series co-creator, co-writer and co-director with Blichfeld, Sinclair is also one of the show’s two editors. This meant the pair were being pulled in several directions, which eventually prevented them from spending so much time in the studio with Guastella. “By the last three episodes of this season, I had absorbed all of their creative intentions. I was able to get an episode to the point of a full mix and they would come in just for a few hours to review and make tweaks.”

With a bigger budget from HBO, Guastella is also able to record ADR when necessary, record loop group and perform Foley for the show at Nutmeg. “Now that we have a budget and the space to record actual Foley, we’re faced with the question of how much Foley do we want to do? When you Foley sound for every movement and footstep, it doesn’t always sound realistic, and the creators are very aware of that,” says Guastella.

5.1 Surround Mix
In addition to a minimalist approach, another way he keeps the Foley sounding real is by recording it in the real world. In Episode 3, the story is told from a dog’s POV. Using a Tascam DR-680 digital recorder and a Sennheiser 416 shotgun mic, Guastella recorded an “enormous amount of Foley at home with my Beagle, Bailey, and my father-in-law’s Yorkie and Doberman. I did a lot of Foley recording at the dog park, too, to capture Foley for the dog outside.”

Another difference between the Vimeo episodes and the HBO series is the final mix format. “HBO requires a surround sound 5.1 mix and that’s something that demands the infrastructure of a professional studio, not my living room,” says Guastella. He takes advantage of the surround field by working with ambiences, creating a richer environment during exterior shots which he can then contrast with a closer, confined sound for the interior shots.

“This is a very dialogue-driven show so I’m not putting too much information in the surrounds. But there is so much sound in New York City, and you are really able to play with perspective of the interior and exterior sounds,” he explains. For example, the opening of Episode 3, “Grandpa,” follows Gatsby the dog as he enters the front of his house and eventually exits out of the back. Guastella says he was able to “bring the exterior surrounds in with the characters, then gradually pan them from surround to a heavier LCR once [Gatsby] began approaching the back door and the backyard was in front of him.”
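
That gradual surround-to-LCR move is, at its core, an equal-power crossfade between the rear and front channels. A minimal sketch of the idea (not Guastella's actual mix automation, and the function name is hypothetical):

```python
import math

def front_surround_gains(mix):
    """Equal-power crossfade between surround and front (LCR) placement.

    mix = 0.0 -> sound sits fully in the surrounds;
    mix = 1.0 -> sound sits fully in the front LCR array.
    Returns (front_gain, surround_gain). Illustrative only.
    """
    theta = mix * math.pi / 2
    return math.sin(theta), math.cos(theta)

# Ramping mix from 0 to 1 pulls the backyard ambience forward
# while keeping total acoustic power constant:
for mix in (0.0, 0.25, 0.5, 0.75, 1.0):
    front, surround = front_surround_gains(mix)
    assert abs(front**2 + surround**2 - 1.0) < 1e-9
```

The sine/cosine pair is the standard equal-power pan law; a linear crossfade would dip in loudness at the midpoint of the move.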

The series may have made the jump from Vimeo to HBO but the soul of the show has changed very little, and that’s by design. “Ben, Katja, and Russell Gregory [the third executive producer] are just so loyal to the people who helped get this series off the ground with them. On top of that, they wanted to keep the show feeling how it did on the web, even though it’s now on HBO. They didn’t want to disappoint any fans that were wondering if the series was going to turn into something else… something that it wasn’t. It was really important to the show creators that the series stayed the same, for their fans and for them. Part of that was keeping on a lot of the people who helped make it what it was,” concludes Guastella.

Check out High Maintenance on HBO, Fridays at 11pm.


Jennifer Walden is a NJ-based audio engineer and writer. Follow her at @audiojeney.

The sound of sensory overload for Cinemax’s ‘Outcast’

By Jennifer Walden

As a cockroach crawls along the wall, each move is watched intensely by a boy whose white knuckles grip the headboard of his bed. His shallow breaths stop just before he head-butts the cockroach and sucks its bloody remains off the wall.

That is the fantastic opening scene of Robert Kirkman’s latest series, Outcast, airing now on Cinemax. Kirkman, writer/executive producer on The Walking Dead, sets his new horror series in the small town of Rome, West Virginia, where a plague of demonic-like possessions is infecting the residents.

Ben Cook

Outcast supervising sound editor Benjamin Cook, of 424 Post in Culver City, says the opening of the pilot episode featured some of his favorite moments in terms of sound design. Each scrape of the cockroach’s feet, every twitch of its antenna, and the juicy crunch of its demise were carefully crafted. Then, following the cockroach consumption, the boy heads to the pantry and snags a bag of chips. He mindlessly crunches away as his mother and sister argue in the kitchen. When the mother yells at the boy for eating chips after supper, he doesn’t seem to notice. He just keeps crunching away. The mother gets closer as the boy turns toward her and she sees that it’s not chips he’s crunching on but his own finger. This is not your typical child.

“The idea is that you want it to seem like he’s eating potato chips, but somewhere in there you need a crossover between the chips and the flesh and bone of his finger,” says Cook. Ultimately, the finger crunching was a combination of Foley — provided by Jeff Wilhoit, Brett Voss, and Dylan Tuomy-Wilhoit at Happy Feet Foley — and 424 Post’s sound design, created by Cook and his sound designers Javier Bennassar and Charles Maynes. “We love doing all of those little details that hopefully make our soundtracks stand out. I try to work a lot of detail into my shows as a general rule.”

Sensory Overload
While hitting the details is Cook’s m.o. anyway — as evidenced by his Emmy-nominated sound editing on Black Sails — it serves a double purpose in Outcast. When people are possessed in the world of Outcast, we imagine that they are more in tune with the micro details of the human experience. Every touch and every movement makes a sound.

“Whenever we are with a possessed person we try to play up the sense that they are overwhelmed by what they are experiencing because their body has been taken over,” says Cook. “Wherever this entity comes from it doesn’t have a physical body and so what the entity is experiencing inside the human body is kind of a sensory overload. All of the Foley and sound effects are really heightened when in that experience.”

Cook says he’s very fortunate to find shows where he and his team have a lot of creative freedom, as they do on Outcast. “As a sound person that is the best; when you really are a collaborator in the storytelling.”

His initial direction for sound came from Adam Wingard, the director on the pilot episode. Wingard asked for drones and distortion, for hard-edged sounds derived from organic sources. “There are definitely more processed kinds of sounds than I would typically use. We worked with the composer Atticus Ross, so there was a handoff between the music and the sound design in the show.”

Working with a stereo music track from composer Ross, Cook and his team could figure out their palette for the sound design well before they hit the dub stage. They tailored the sound design to the music so that both worked together without stepping on each other’s toes.

He explains that Outcast was similar to Black Sails in that they were building the episodes well before they mixed them. The 424 Post team had time to experiment with the design of key sounds, like the hissing, steaming sound that happens when series protagonist Kyle Barnes (Patrick Fugit) touches a possessed person, and the sound of the entity as it is ejected from a body in a jet of black, tar-like fluid, which then evaporates into thin air. For that sound, Cook reveals that they used everything from ocean waves to elephant sounds to bubbling goo. “The entity was tough because we had to find that balance between its physical presence and its spiritual presence because it dissipates back into its original plane, wherever it came from.”

Sound Design and More
When defining the sound design for possessed people, one important consideration was what to do with their voice. Or, in this case, what not to do with their voice. Series creator Kirkman, who gave Cook carte blanche on the majority of the show’s sound work, did have one specific directive: “He didn’t want any changes to happen with their voice. He didn’t want any radical pitch shifting or any weird processing. He wanted it to sound very natural,” explains Cook, who shared the ADR workload with supervising dialogue editor Erin Oakley-Sanchez.

There was no processing to the voices at all. What you hear is what the actors were able to perform, the only exception being Joshua (Gabriel Bateman), an eight-year-old boy who is possessed. For him, the show runners wanted to hear a slight bit of difference to drive home the fact that his body had indeed been taken over. “We have Kyle beating up this kid and so we wanted to make sure that the viewers really got a sense that this wasn’t a kid he was beating up, but that he was beating up a monster,” explains Cook.

To pull off Joshua’s possessed voice, Oakley-Sanchez and Wingard had actor Bateman change his voice in different ways during their ADR session. Then, Cook doubled certain lines in the mix. “The approach was very minimalistic. We never layered in other animal sounds or anything like that. All of the change came from the actor’s performance,” Cook says.

Cook is a big proponent of using fresh sounds in his work. He used field recordings captured in Tennessee, Virginia and Florida to build the backgrounds. He recorded hard effects like doors, body hits and furniture crashing and breaking. There were other elements used as part of the sound design, like wind and water recordings. In Sound Particles (CGI-like software for sound design created by Nuno Fonseca), he was able to manipulate and warp sound elements to create unique sounds.

“Sound Particles has a really great UI, with features like virtual mics you can place and move to record things in a virtual 3D environment. It lets you create multiple instances of sound very easily. You can randomize things like pitch and timing. You can also automate the movements and create little vignettes that can be rendered out as a piece of audio that you can bring into Pro Tools or Nuendo or other audio workstations. It’s a very fascinating concept and I’ve been using it a lot.”
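
The "particle" idea Cook describes borrows from CGI particle systems: spawn many copies of one source sound, each with its own randomized start time and pitch offset, then render the swarm. A rough sketch of that generation step, assuming hypothetical names and ranges (this is not Sound Particles' actual API):

```python
import random

def make_particles(n, max_delay=2.0, pitch_spread=3.0, seed=7):
    """Spawn n 'particles' from one source sound.

    Each particle gets a randomized start time (seconds) and a
    randomized pitch shift (semitones), mimicking the randomize
    controls Cook mentions. A renderer would then play every copy
    with these offsets and sum the result. Illustrative only.
    """
    rng = random.Random(seed)  # seeded so a render is reproducible
    return [
        {
            "start_sec": rng.uniform(0.0, max_delay),
            "pitch_semitones": rng.uniform(-pitch_spread, pitch_spread),
        }
        for _ in range(n)
    ]

# A hundred slightly detuned, staggered copies of one debris hit
# reads as a single large, complex event rather than a loop:
particles = make_particles(100)
```

Seeding the generator matters in practice: it lets the same "random" vignette be re-rendered identically after a mix note.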

Cook enjoys building rich backgrounds in shows, which he uses to help further the storyline. For example, in Episode 2 the police chief and his deputy take a trek through the woods and find an abandoned trailer. Cook used busier tracks with numerous layers of sounds at first, but as the chief and deputy get farther into the woods and closer to the abandoned trailer, the backgrounds become sparser and eerily quiet. Another good example happens in Episode 9, where there is a growing storm that builds throughout the whole episode. “It’s not a big player, just more of a subtext to the story. We do really simple things that hopefully translate and come across to people as little subtleties they can’t put their finger on,” says Cook.

Outcast is mixed in 5.1 by re-recording mixers Steve Pederson (dialogue/music) and Dan Leahy (effects/Foley/backgrounds) via Sony Pictures Post at Deluxe in Hollywood. Cook says, “They are super talented mixers who mostly do a lot of feature films and so they bring a theatrical vibe to the series.”

New episodes of Outcast air Fridays at 10pm on Cinemax, with the season finale on August 12th. Outcast has been renewed for Season 2, and while Cook doesn’t have any inside info on where the show will go next season, he says, “At the end of Season 1, we’re not sure if the entity is alien or demonic, and they don’t really give it away one way or another. I’m really excited to see what they do in Season 2. There is lots of room to go either way. I really like the characters, like the Reverend and Kyle — both have really great back stories. They’re both so troubled and flawed and there is a lot to build on there.”

Jennifer Walden is a New Jersey-based audio engineer and writer.

Silver Sound opens audio-focused virtual reality division

By Randi Altman

New York City’s Silver Sound has been specializing in audio post and production recording since 2003, but that’s not all they are. Through the years, along with some Emmy wins, they have added services that include animation and color grading.

When they see something that interests them, they investigate and decide whether or not to dive in. Well, virtual reality interests them, and they recently dove in by opening a VR division specializing in audio for 360 video, called SilVR. Recent clients include Google, 8112 Studios/National Geographic and AT&T.

Stories From The Network: 360° Race Car Experience for AT&T

I reached out to Silver Sound sound editor/re-recording mixer Claudio Santos to find out why now was the time to invest in VR.

Why did you open a VR division? Is it an audio-for-VR entity or are you guys shooting VR as well?
The truth is we are all a bunch of curious tinkerers. We just love to try different things and to be part of different projects. So as soon as 360 videos started appearing in different platforms, we found ourselves individually researching and testing how sound could be used in the medium. It really all comes down to being passionate about sound and wanting to be part of this exciting moment in which the standards and rules are yet to be discovered.

We primarily work with sound recording and post production audio for VR projects, but we can also produce VR projects that are brought to us by creators. We have been making small in-house shoots, so we are familiar with the logistics and technologies involved in a VR production and are more than happy to assist our clients with the knowledge we have gained.

What types of VR projects do you expect to be working on?
Right now we want to work on every kind of project. The industry as a whole is still learning what kind of content works best in VR and every project is a chance to try a new facet of the technology. With time we imagine producers and post production houses will naturally specialize in whichever genre fits them best, but for us at least this is something we are not hurrying to do.

What tools do you call on?
For recording we make use of a variety of ambisonic microphones that allow us to record true 360 sound on location. We set up our rig wirelessly so it can be untethered from cables, which are a big problem in a VR shoot where you can see in every direction. Besides the ambisonics, we also record every character’s ISO with wireless lavs so that we have as much control as possible over the dialogue during post production.

Robin Shore using a phone to control the 360 video on screen, and on his head is a tracker that simulates the effect of moving around without a full headset.

For editing and mixing we do most of our work in Reaper, a DAW that has very flexible channel routing and non-standard multichannel processing. This allows us to comfortably work with ambisonics as well as mix formats and source material with different channel layouts.

To design and mix our sounds we use a variety of specialized plug-ins that give us control over the positioning, focus and movement of sources in the 360 sound field. Reverberation is also extremely important for believable spatialization, and traditional fixed-channel reverbs are usually unconvincing once you are in a 360 field. Because of that, we usually make use of convolution reverbs using ambisonic impulse responses.
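
At its core, a convolution reverb is just the dry signal convolved with a recorded impulse response; the ambisonic twist Santos describes is running that same operation once per B-format channel with an IR captured by an ambisonic mic. A minimal single-channel sketch (real reverbs use FFT-based fast convolution, not this direct form):

```python
def convolve(dry, ir):
    """Direct-form convolution of a dry signal with an impulse response.

    One channel shown for clarity. A first-order ambisonic convolution
    reverb would run this once per B-format channel (W, X, Y, Z),
    pairing each with the matching channel of an ambisonic IR, so the
    reverb's directional character survives into the 360 field.
    """
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

# Sanity check: a unit impulse through an IR reproduces the IR exactly,
# which is why IRs are captured by recording a real space's response
# to an impulse-like source in the first place.
assert convolve([1.0], [0.5, 0.25, 0.125]) == [0.5, 0.25, 0.125]
```

This is why the IR's format has to match the mix format: a stereo IR collapses the spatial information an ambisonic IR preserves.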

When it comes to monitoring the video, especially with multiple clients in the room, everyone in the room is wearing headphones. At first this seemed very weird, but it’s important since that’s the best way to reproduce what the end viewer will be experiencing. We have also devised a way for clients to use a separate controller to move the view around in the video during playback and editing. This gives a lot more freedom and makes the reviewing process much quicker and more dynamic.

How different is working in VR from traditional work? Do you wear different hats for different jobs?
That depends. While technically it is very different, with a whole different set of tools, technologies and limitations, the craft of designing good sound that aids in the storytelling and that immerses the audience in the experience is not very different from traditional media.

The goal is to affect the viewer emotionally and to transmit pieces of the story without making the craft itself apparent, but the approaches necessary to achieve this in each medium are very different because the final product is experienced differently. When watching a flat screen, you don’t need any cues to know where the next piece of essential action is going to happen because it is all contained by a frame that is completely in your field of view. That is absolutely not true in VR.

The user can be looking in any direction at any given time, so the sound often fills in the role of guiding the viewer to the next area of interest, and this reflects on how we manipulate the sounds in the mix. There is also a bigger expectation that sounds will be more realistic in a VR environment because the viewer is immersed in an experience that is trying to fool them into believing it is actually real. Because of that, many exaggerations and shorthands that are appropriate in traditional media become too apparent in VR projects.

So instead of saying we need to put on different hats when tackling traditional media or VR, I would say we just need a bigger hat that carries all we know about sound, traditional and VR, because neither exists in isolation anymore.

I am assuming that getting involved in VR projects as early as possible is hugely helpful to the audio. Can you explain?
VR shoots are still in their infancy. There’s a whole new set of rules, standards and whole lot of experimentation that we are all still figuring out as an industry. Often a particular VR filming challenge is not only new to the crew but completely new in the sense that it might not have ever been done before.

In order to figure out the best creative and technical approaches to all these different situations it is extremely helpful to have someone on the team thinking about sound, otherwise it risks being forgotten and then the project is doomed to a quick fix in post, which might not explore the full potential of the medium.

This doesn’t even take into consideration that the tools still often need to be adapted and tailored to fit the needs of a particular project, simply because new use cases are being discovered daily. This tailoring and exploration takes time and knowledge, so only by bringing a sound team early on into the project can they fully prepare to record and mix the sound without cutting corners.

Another important point to take into consideration is that the delivery requirements are still largely dependent on the specific platform selected for distribution. Technical standards are only now starting to be created and every project’s workflows must be adapted slightly to match these specific delivery requirements. It is much easier and more effective to plan the whole workflow with these specific requirements in mind than it is to change formats when the project is already in an advanced state.

What do clients need to know about VR that they might take for granted?
If we had to choose one thing to mention, it would be that placing and localizing sounds in post takes a lot of time and care because each sound needs to be placed individually. It is easy to forget how much longer this takes than traditional stereo or even surround panning, because every single diegetic sound added needs to be panned. The difference might be negligible when dealing with a few sound effects, but depending on the action and the number of moving elements in the experience, it can add up very quickly.
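
To see why per-sound placement adds up, it helps to look at what "panning" means in ambisonics: each mono source is encoded into the B-format channels from its own azimuth and elevation. A sketch using the standard first-order FuMa encoding equations (illustrative of the workflow, not Silver Sound's actual tooling):

```python
import math

def encode_fuma(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order B-format (FuMa: W, X, Y, Z).

    Every diegetic sound in the scene needs its own (azimuth, elevation),
    and moving sounds need these re-evaluated over time -- this is the
    per-sound placement cost the interview describes.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample * (1.0 / math.sqrt(2.0))          # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)     # front/back
    y = sample * math.sin(az) * math.cos(el)     # left/right
    z = sample * math.sin(el)                    # up/down
    return w, x, y, z

# A sound dead ahead at ear level lands only in W and X:
w, x, y, z = encode_fuma(1.0, 0.0, 0.0)
```

Contrast this with stereo, where a single left/right balance knob covers the whole job; in 360 every source carries two placement dimensions, and the decoder later folds the W/X/Y/Z channels down to whatever speaker or headphone layout the viewer has.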

Working with sound for VR is still largely an area of experimentation and discovery, and we like to collaborate with our clients to ensure that we all push the limits of the medium. We are very open about our techniques and are always happy to explain what we do to our clients because we believe that communication is the best way to ensure all elements of a project work together to deliver a memorable experience.
