
Quick Chat: Westwind Media president Doug Kent

By Dayna McCallum

Doug Kent has joined Westwind Media as president. The move is a homecoming of sorts for the audio post vet, who worked as a sound editor and supervisor at the facility when it opened its doors in 1997 (with Miles O’ Fun). He comes to Westwind after a long-tenured position at Technicolor.

While primarily known as an audio post facility, Burbank-based Westwind has grown into a three-acre campus comprising 10 buildings, which also house outposts for NBCUniversal and Technicolor, as well as media-focused companies Keywords Headquarters and Film Solutions.

We reached out to Kent to find out a little bit more about what is happening over at Westwind, why he made the move and the changes he has seen in the industry.

Why was now the right time to make this change, especially after being at one place for so long?
Well, 17 years is a really long time to stay at one place in this day and age! I worked with an amazing team, but Westwind presented a unique opportunity for me. John Bidasio (managing partner) and Sunder Ramani (president of Westwind Properties) approached me with the role of heading up Westwind and teaming with them in shaping the growth of their media campus. It was literally an offer I couldn’t refuse. Because of the campus size and versatility of the buildings, I have always considered Westwind to have amazing potential to be one of the premier post production boutique destinations in the LA area. I’m very excited to be part of that growth.

You’ve worked at studios and facilities of all sizes in your career. What do you see as the benefit of a boutique facility like Westwind?
After 30 years in the post audio business — which seems crazy to say out loud — moving to a boutique facility allows me more flexibility. It also lets me be personally involved with the delivery of all work to our customers. Because of our relationships with other facilities, we are able to offer services to our customers all over the Los Angeles area. It’s all about drive time on Waze!

What does your new position at Westwind involve?
The size of our business allows me to actively participate with every service we offer, from business development to capital expenditures, while also working with our management team’s growth strategy for the campus. Our value proposition, as a nimble post audio provider, focuses on our high-quality brick-and-mortar facility, while we continue to expand our editorial and mix talent working with many of the best mix facilities and sound designers in the LA area. Luckily, I now get to have a hand in all of it.

Westwind recently renovated two stages. Did Dolby Atmos certification drive that decision?
Netflix, Apple and Amazon all use Atmos materials for their original programming. It was time to move forward. These immersive technologies have changed the way filmmakers shape the overall experience for the consumer. These new object-based technologies enhance our ability to embellish and manipulate the soundscape of each production, creating a visceral experience for the audience that is more exciting and dynamic.

How to Get Away With Murder

Can you talk specifically about the gear you are using on the stages?
Currently, Westwind runs entirely on a Dante network design. We have four dub stages, including both of the Atmos stages, outfitted with Dante interfaces. The signal path from our Avid Pro Tools source machines — all the way to the speakers — is entirely in Dante and the BSS Blu link network. The monitor switching and stage are controlled through custom-made panels designed in Harman’s Audio Architect. The Dante network allows us to route signals with complete flexibility across our network.

What about some of the projects you are currently working on?
We provide post sound services to the team at ShondaLand for all their productions, including Grey’s Anatomy, which is now in its 15th year, Station 19, How to Get Away With Murder and For the People. We are also involved in the streaming content market, working on titles for Amazon, YouTube Red and Netflix.

Looking forward, what changes in technology and the industry do you see having the most impact on audio post?
The role of post production sound has greatly increased as technology has advanced. We have become an active part of the filmmaking process and have developed closer partnerships with the executive producers, showrunners and creative executives. Delivering great soundscapes to these filmmakers has become more critical as technology advances and audiences become more sophisticated.

The Atmos system creates an immersive audio experience for the listener and has become a foundation for future technology. The Atmos master contains all of the uncompressed audio and panning metadata, and can be updated by re-encoding whenever a new process is released. With streaming speeds becoming faster and storage becoming more easily available, home viewers will most likely soon be experiencing Atmos technology in their living room.

What haven’t I asked that is important?
Relationships are the most important part of any business and my favorite part of being in post production sound. I truly value my connections and deep friendships with film executives and studio owners all over the Los Angeles area, not to mention the incredible artists I’ve had the great pleasure of working with and claiming as friends. The technology is amazing, but the people are what make being in this business fulfilling and engaging.

We are in a remarkable time in film, but really an amazing time in what we still call “television.” There is growth and expansion and foundational change in every aspect of this industry. Being at Westwind gives me the flexibility and opportunity to be part of that change and to keep growing.

The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

The attack on the window in the underwater research station, how did you build that sequence? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings that we had done a while ago of glass breaking. My parents were replacing this 8’ x 12’ glass window in their house and before they demolished the old one, I told them to not throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees, and it has a large diaphragm so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.

The Meg is an action film, so there are shootings, explosions, ships getting smashed up, and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon popping session on Stage 10 at Warner Bros. where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that to do something else in the scene, and when the zorb popped it made me jump back because I forgot how powerful a moment that is. It was a really fun, and funny, moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge metal cargo container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpie. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.

Crafting sound for Emmy-winning Atlanta

By Jennifer Walden

FX Network’s dramedy series Atlanta, which recently won an Emmy for Outstanding Sound Editing For A Comedy or Drama Series (Half-Hour), tells the story of three friends from, well, Atlanta — a local rapper named Paper Boi whose star is on the rise (although the universe seems to be holding him down), his cousin/manager Earn and their head-in-the-clouds friend Darius.

Trevor Gates

Told through vignettes, each episode shows their lives from different perspectives instead of through a running narrative. This provides endless possibilities for creativity. One episode flows through different rooms at a swanky New Year’s party at Drake’s house; another ventures deep into the creepy woods where real animals (not party animals) make things tense.

It’s a playground for sound each week, and MPSE-award-winning supervising sound editor Trevor Gates of Formosa Group and his sound editorial team on Season 2 (aka Robbin’ Season) got their 2018 Emmy based on the work they did on Episode 6, “Teddy Perkins,” in which Darius goes to pick up a piano from the home of an eccentric recluse but finds there’s more to the transaction than he bargained for.

Here, Gates discusses the episode’s precise use of sound and how the quiet environment was meticulously crafted to reinforce the tension in the story and to add to the awkwardness of the interactions between Darius and Teddy.

There’s very little music in “Teddy Perkins.” The soundtrack is mainly different ambiences and practical effects and Foley. Since the backgrounds play such an important role, can you tell me about the creation of these different ambiences?
Overall, Atlanta doesn’t really have a score. Music is pretty minimal and the only music that you hear is mainly source music — music coming from radios, cell phones or laptops. I think it’s an interesting creative choice by producers Hiro Murai and Donald Glover. In cases like the “Teddy Perkins” episode, we have to be careful with the sounds we choose because we don’t have a big score to hide behind. We have to be articulate with those ambient sounds and with the production dialogue.

Going into “Teddy Perkins,” Hiro (who directed the episode) and I talked about his goals for the sound. We wanted a quiet soundscape and for the house to feel cold and open. So, when we were crafting the sounds that most audience members will perceive as silence or quietness, we had very specific choices to make. We had to craft this moody air inside the house. We had to craft a few sounds for the outside world too because the house is located in a rural area.

There are a few birds but nothing overt, so that it’s not intrusive to the relationship between Darius (Lakeith Stanfield) and Teddy (Donald Glover). We had to be very careful in articulating our sound choices, to hold that quietness that was void of any music while also supporting the creepy, weird, tense dialogue between the two.

Inside the Perkins residence, the first ambience felt cold and almost oppressive. How did you create that tone?
That rumbly, oppressive air was the cold tone we were going for. It wasn’t a layer of tones; it was actually just one sound that I manipulated to be the exact frequency that I wanted for that space. There was a vastness and a claustrophobia to that space, although that sounds contradictory. That cold tone was kind of the hero sound of this episode. It was just one sound, articulately crafted, and supported by sounds from the environment.

There’s a tonal shift from the entryway into the parlor, where Darius and Teddy sit down to discuss the piano (and Teddy is eating that huge, weird egg). In there we have the sound of a clock ticking. I really enjoy using clocks. I like the meter that clocks add to a room.

In Ouija: Origin of Evil, we used the sound of a clock to hold the pace of some scenes. I slowed the clock down to just a tad over a second, and it really makes you lean in to the scene and hold what you perceive as silence. I took a page from that book for Atlanta. As you leave the cold air of the entryway, you enter into this room with a clock ticking and Teddy and Darius are sitting there looking at each other awkwardly over this weird/gross ostrich egg. The sound isn’t distracting or obtrusive; it just makes you lean into the awkwardness.

It was important for us to get the mix for the episode right, to get the right level for the ambiences and tones, so that they are present but not distracting. It had to feel natural. It’s our responsibility to craft things that show the audience what we want them to see, and at the same time we have to suspend their disbelief. That’s what we do as filmmakers; we present the sonic spaces and visual images that traverse that fine line between creativity and realism.

That cold tone plays a more prominent role near the end of the episode, during the murder-suicide scene. It builds the tension until right before Benny pulls the trigger. But there’s another element too there, a musical stinger. Why did you choose to use music at that moment?
What’s important about this season of Atlanta is that Hiro and Donald have a real talent for surrounding themselves with exceptional people — from the picture department to the sound department to the music department and everyone on-set. Throughout the season it was apparent that this team of exceptional people functioned with extreme togetherness; there was a cohesion about us. It was a bunch of really creative and smart people getting together in a room, creating something amazing.

We had a music department, and although there isn’t much music or score, every once in a while we would break a rule that we had set for ourselves for Season 2. The picture editor would be in the room with the music department and Hiro, and we’d all make decisions together. That musical stinger wasn’t my idea exactly; it was a collective decision to use a stinger to drive the moment, to have it build and release at a specific time. I can’t attribute that sound to me only, but to this exceptional team on the show. We would bounce creative ideas off of each other and make decisions as a collective.

The effects in the murder-suicide scene do a great job of tension building. For example, when Teddy leans in on Darius, there’s that great, long floor creak.
Yeah, that was a good creak. It was important for us, throughout this episode, to make specific sound choices in many different areas. There are other episodes in the season that have a lot more sound than this episode, like “Woods,” where Paper Boi (Brian Tyree Henry) is getting chased through the woods after he was robbed. Or “Alligator Man,” with the shootout in the cold open. But that wasn’t the case with “Teddy Perkins.”

On this one, we had to make specific choices, like when Teddy leans over and there’s that long, slow creak. We tried to encompass the pace of the scene in one very specific sound, like the sound of the shackles being tightened onto Darius or the movement of the shotgun.

There’s another scene when Darius goes down into the basement, and he’s traveling through this area that he hasn’t been in before. We decided to create a world where he would hear sounds traveling through the space. He walks past a fan and then a water heater kicks on and there is some water gurgling through pipes and the clinking sound of the water heater cooling down. Then we hear Benny’s wheelchair squeak. For me, it’s about finding that one perfect sound that makes that moment. That’s hard to do because it’s not a composition of many sounds. You have one choice to make, and that’s what is going to make that moment special. It’s exciting to find that one sound. Sometimes you go through many choices until you find the right one.

There were great diegetic effects, like Darius spinning the globe, and the sound of the piano going onto the elevator, and the floor needle and the buttons and dings. Did those come from Foley? Custom recordings? Library sounds?
I had a great Foley team on this entire season, led by Foley supervisor Geordy Sincavage. The sounds like the globe spinning came from the Foley team, so that was all custom recorded. The elevator needle moving down was a custom recording from Foley. All of the shackles and handcuffs and gun movements were from Foley.

The piano moving onto the elevator was something that we created from a combination of library effects and Foley sounds. I had sound effects editor David Barbee helping me out on this episode. He gave me some library sounds for the piano and I went in and gave it a little extra love. I accentuated the movement of the piano strings. It was like piano string vocalizations as Darius is moving the piano into the elevator and it goes over the little bumps. I wanted to play up the movements that would add some realism to that moment.

Creating a precise soundtrack is harder than creating a big action soundtrack. Well, there are different sets of challenges for both, but it’s all about being able to tell a story by subtraction. When there’s too much going on, people can’t feel the details; when you start taking things away, they can. “Teddy Perkins” is the case of having an extremely precise soundtrack, and that was successful thanks to the work of the Foley team, my effects editor and the dialogue editor.

The dialogue editor Jason Dotts is the unsung hero in this because we had to be so careful with the production dialogue track. When you have a big set — this old, creaky house and lots of equipment and crew noise — you have to remove all the extraneous noise that can take you out of the tension between Darius and Teddy. Jason had to go in with a fine-tooth comb and do surgery on the production dialogue just to remove every single small sound in order to get the track super quiet. That production track had to be razor-sharp and presented with extreme care. Then we had to build the ambiences around it with the same care and add great Foley sounds for all the little nuances. Then we had to bake the cake together with a great mix, a very articulate balance of sounds.

When we were all done, I remember Hiro saying to us that we realized his dream 100%. He alluded to the fact that this was an important episode going into it. I feel like I am a man of my craft and my fingerprint is very important to me, so I am always mindful of how I show my craft to the world. I will always take extreme care and go the extra mile no matter what, but it felt good to have something that was important to Hiro have such a great outcome for our team. The world responded. There were lots of Emmy nominations this year for Atlanta and that was an incredible thing.

Did you have a favorite scene for sound? Why?
It was cool to have something that we needed to craft and present in its entirety. We had to build a motif and there had to be consistency within that motif. It was awesome to build the episode as a whole. Some scenes were a bit different, like down in the basement. That had a different vibe. Then there were fun scenes like moving the piano onto the elevator. Some scenes had production challenges, like the scene with the film projector. Hiro had to shoot that scene with the projector running and that created a lot of extra noise on the production dialogue. So that was challenging from a dialogue editing standpoint and a mix standpoint.

Another challenging scene was when Darius and Teddy are in the “Father Room” of the museum. That was shot early on in the process and Donald wasn’t quite happy with his voice performance in that scene. Overall, Atlanta uses very minimal ADR because we feel that re-recorded performances can really take the magic out of a scene, but Donald wanted to redo that whole scene, and it came out great. It felt natural and I don’t think people realize that Donald’s voice was re-recorded in its entirety for that scene. That was a fun ADR session.

Donald came into the studio and once he got into the recording booth and got into the Teddy Perkins voice he didn’t get out of it until we were completely finished. So as Hiro and Donald are interacting about ideas on the performance, Donald stayed in the Teddy voice completely. He didn’t get out of it for three hours. That was an interesting experience to see Donald’s face as himself and hear Teddy’s voice.

Were there any audio tools that you couldn’t have lived without on this episode?
Not necessarily. This was an organic build and the tools that we used in this were really basic. We used some library sounds and recorded some custom sounds. We just wanted to make sure that we could make this as real and organic as possible. Our tool was to pick the best organic sounds that we could, whether we used source recordings or new recordings.

Of all the episodes in Season 2 of Atlanta, why did you choose “Teddy Perkins” for Emmy consideration?
Each episode had its different challenges. There were lots of different ways to tell the stories since each episode is different. I think that is something that is magical about Atlanta. Some of the episodes that stood out from a sound standpoint were Episode 1 “Alligator Man” with the shootout, and Episode 8 “Woods.” I had considered submitting “Woods” because it’s so surreal once Paper Boi gets into the woods. We created this submergence of sound, like the woods were alive. We took it to another level with the wildlife and used specific wildlife sounds to draw some feelings of anxiety and claustrophobia.

Even an episode like “Champagne Papi,” which seems like one of the most basic from a sound editorial perspective, was actually quite varied. They’re going between different rooms at a party and we had to build spaces of people that felt different but the same in each room. It had to feel like a real space with lots of people, and the different spaces had to feel like they belonged at the same party.

But when it came down to it, I feel like “Teddy Perkins” was special because there wasn’t music to hide behind. We had to do specific and articulate work, and make sharp choices. So it’s not the episode with the most sound but it’s the episode that has the most articulate sound. And we are very proud of how it turned out.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.

Review: Blackmagic’s Resolve 15

By David Cox

DaVinci Resolve 15 from Blackmagic Design has now been released. The big news is that Blackmagic’s compositing software Fusion has been incorporated into Resolve, joining the editing and audio mixing capabilities added to color grading in recent years. However, to focus just on this would hide a wide array of updates to Resolve, large and small, across the entire platform. I’ve picked out some of my favorite updates in each area.

For Colorists
Each time Blackmagic adds a new discipline to Resolve, colorists fear that the color features take a back seat. After all, Resolve was a color grading system long before anything else. But I’m happy to say there’s nothing to fear in Version 15, as there are several very nice color tweaks and new features to keep everyone happy.

I particularly like the new “stills store” functionality, which allows the colorist to find and apply a grade from any shot in any timeline in any project. Rather than just having access to manually saved grades in the gallery area, thumbnails of any graded shot can be viewed and copied, no matter which timeline or project they are in, even those not explicitly saved as stills. This is great for multi-version work, which is every project these days.

Grades saved as stills (and LUTS) can also be previewed on the current shot using the “Live Preview” feature. Hovering the mouse cursor over a still and scrubbing left and right will show the current shot with the selected grade temporarily applied. It makes quick work of finding the most appropriate look from an existing library.

Another new feature I like is called “Shared Nodes.” A color grading node can be set as “shared,” which creates a common grading node that can be inserted into multiple shots. Changing one instance, changes all instances of that shared node. This approach is more flexible and visible than using Groups, as the node can be seen in each node layout and can sit at any point in the process flow.
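
The mechanics of a shared node are essentially reference semantics: one grade object inserted into several node graphs, rather than a copy per shot. This toy sketch (plain Python, not the Resolve API; all class and field names are invented for illustration) shows why editing one instance updates them all:

```python
# Hypothetical model of a "shared node": the same grade object is
# referenced from several shots' node graphs, so a change made through
# any one shot is visible in all of them.

class GradeNode:
    def __init__(self, lift=0.0, gain=1.0):
        self.lift = lift
        self.gain = gain

class Shot:
    def __init__(self, name):
        self.name = name
        self.nodes = []  # ordered node graph for this shot

shared = GradeNode(lift=0.02, gain=1.1)
shots = [Shot("sc01"), Shot("sc02"), Shot("sc03")]
for s in shots:
    s.nodes.append(shared)       # same object, not a copy
    s.nodes.append(GradeNode())  # plus an independent per-shot trim

# Changing one instance changes all instances of the shared node.
shared.gain = 1.25
print([s.nodes[0].gain for s in shots])  # → [1.25, 1.25, 1.25]
```

A group, by contrast, hides this relationship outside the node graph; modeling the shared grade as a visible node in each tree is what makes the approach more flexible.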

As well as the addition of multiple play-heads, a popular feature in other grading systems, there is a plethora of minor improvements. For example, you can now drag the qualifier graphics to adjust settings, as opposed to just the numeric values below them. There are new features to finesse the mattes generated from the keying functions, as well as improvements to the denoise and face refinement features. Nodes can be selected with a single click instead of a double click. In fact, there are 34 color improvements or new features listed in the release notes.

For Editors
As with color, there are a wide range of minor tweaks all aimed at improving feel and ergonomics, particularly around dynamic trim modes, numeric timecode entry and the like. I really like one of the major new features, which is the ability to open multiple timelines on the screen at the same time. This is perfect for grabbing shots, sequences and settings from other timelines.

As someone who works a lot with VFX projects, I also like the new “Replace Edit” function, which is aimed at those of us that start our timelines with early drafts of VFX and then update them as improved versions come along. The new function allows updated shots to be dragged over their predecessors, replacing them but inheriting all modifications made, such as the color grade.

An additional feature to the existing markers and notes functions is called “Drawn Annotations.” An editor can point out issues in a shot with lines and arrows, then detail them with notes and highlight them with timeline markers. This is great as a “note to self” to fix later, or in collaborative workflows where notes can be left for other editors, colorists or compositors.

Previous versions of Resolve had very basic text titling. Thanks to the incorporation of Fusion, the edit page of Resolve now has a feature called Text+, a significant upgrade on the incumbent offering. It allows more detailed text control, animation, gradient fills, dotted outlines, circular typing and so on. Within Fusion there is a modifier called “Follower,” which enables letter-by-letter animation, allowing Text+ to compete with After Effects for type animation. On my beta test version of Resolve 15, this wasn’t available in the Edit page, which could be down to the beta status or an intent to keep the Text+ controls in the Edit page more streamlined.

For Audio
I’m not an audio guy, so my usefulness in reviewing these parts is distinctly limited. There are 25 listed improvements or new features, according to the release notes. One is the incorporation of Fairlight’s Automated Dialog Replacement processes, which creates a workflow for replacing unsalvageable original production dialog.

There are also 13 new built-in audio effects plugins, such as Chorus, Echo and Flanger, as well as de-esser and de-hummer clean-up tools.

Another useful addition both for audio mixers and editors is the ability to import entire audio effects libraries, which can then be searched and star-rated from within the Edit and Fairlight pages.

Now With Added Fusion
So to the headline act — the incorporation of Fusion into Resolve. Fusion is a highly regarded node-based 2D and 3D compositing software package. I reviewed Version 9 in postPerspective last year [https://postperspective.com/review-blackmagics-fusion-9/]. Bringing it into Resolve links it directly to editing, color grading and audio mixing to create arguably the most agile post production suite available.

Combining Resolve and Fusion will create some interesting challenges for Blackmagic, who say that the integration of the two will be ongoing for some time. Their challenge isn’t just linking two software packages, each with their own long heritage, but in making a coherent system that makes sense to all users.

The issue is this: editors and colorists need to work at a fast pace, and want the minimum number of controls clearly presented. A compositor needs infinite flexibility and wants a button and value for every function, with a graph and ideally the ability to drive it with a mathematical expression or script. Creating an interface that suits both is near impossible. Dumbing down a compositing environment limits its ability, whereas complicating an editing or color environment destroys its flow.

Fusion occupies its own “page” within Resolve, alongside pages for “Color,” “Fairlight” (audio) and “Edit.” This is a good solution insofar as each interface can be tuned for its dedicated purpose. The handoff into Fusion also works very well. A user can seamlessly move from Edit to Fusion to Color and back again, without delays, rendering or importing. If a user is familiar with Resolve and Fusion, it works very well indeed. If the user is not accustomed to high-end node-based compositing, then the Fusion page can be daunting.

I think the challenge going forward will be how to make the creative possibilities of Fusion more accessible to colorists and editors without compromising the flexibility a compositor needs. Certainly, there are areas in Fusion that can be made more obvious. As with many mature software packages, Fusion has the occasional hidden right click or alt-click function that is hard for new users to discover. But beyond that, the answer is probably to let a subset of Fusion’s ability creep into the Edit and Color pages, where more common tasks can be accommodated with simplified control sets and interfaces. This is actually already the case with Text+, a Fusion “effect” that is directly accessible within the Edit section.

Another possible area to help is Fusion Macros. This is an inbuilt feature within Fusion that allows a designer to create an effect and then condense it down to a single node, including just the specific controls needed for that combined effect. Currently, Macros that integrate the Text+ effect can be loaded directly in the Edit page’s “Title Templates” section.

I would encourage Blackmagic to open this up further to allow any sort of Macro to be added for video transitions, graphics generators and the like. This could encourage a vibrant exchange of user-created effects, which would arm editors and colorists with a vast array of immediate and community sourced creative options.

Overall, the incorporation of Fusion is a definite success in my view, whether used to empower multi-skilled post creatives or to provide a common environment for specialized creatives to collaborate. The volume of updates, and the speed at which the Resolve software developers address the issues exposed during public beta trials, remain nothing short of impressive.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Sony creates sounds for Director X’s Superfly remake

Columbia Pictures’ Superfly is a reimagining of Gordon Parks Jr.’s classic 1972 blaxploitation film of the same name. Helmed by Director X and written by Alex Tse, this new version transports the story of Priest from Harlem to modern-day Atlanta.

Superfly’s sound team from Sony Pictures Post Production Services — led by supervising sound editor Steven Ticknor, supervising sound editor and re-recording mixer Kevin O’Connell, re-recording mixer Greg Orloff and sound designer Tony Lamberti — was tasked with bringing the sonic elements of Priest’s world to life. That included everything from building soundscapes for Atlanta’s neighborhoods and nightclubs to supplying the sounds of fireworks, gun battles and car chases.

“Director X and Joel Silver — who produced the movie alongside hip-hop superstar Future, who also curated and produced the film’s soundtrack — wanted the film to have a big sound, as big and theatrical as possible,” says Ticknor. “The film is filled with fights and car chases, and we invested a lot of detail and creativity into each one to bring out their energy and emotion.”

One element that received special attention from the sound team was the Lexus LC500 that Priest (Trevor Jackson) drives in the film. As the sports car was brand new, no pre-recorded sounds were available, so Ticknor and Lamberti dispatched a recording crew and professional driver to the California desert to capture every aspect of its unique engine sounds, tire squeals, body mechanics and electronics. “Our job is to be authentic, so we couldn’t use a different Lexus,” Ticknor explains. “It had to be that car.”

In one of the film’s most thrilling scenes, Priest and the Lexus LC500 are involved in a high-speed chase with a Lamborghini and a Cadillac Escalade. Sound artists added to the excitement by preparing sounds for every screech, whine and gear shift made by the cars, as well as explosions and other events happening alongside them and movements made by the actors behind the wheels.

It’s all much larger than life, says Ticknor, but grounded in reality. “The richness of the sound is a result of all the elements that go into it, the way they are recorded, edited and mixed,” he explains. “We wanted to give each car its own identity, so when you cut from one car revving to another car revving, it sounds like they’re talking to each other. The audience may not be able to articulate it, but they feel the emotion.”

Fights received similarly detailed treatment. Lamberti points to an action sequence in a barber shop as one of several scenes rendered partially in extreme slow motion. “It starts off in realtime before gradually shifting to slo-mo through the finish,” he says. “We had fun slowing down sounds, and processing them in strange and interesting ways. In some instances, we used sounds that had no literal relation to what was happening on the screen but, when slowed down, added texture. Our aim was to support the visuals with the coolest possible sound.”
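
Slowing a recording down in the simplest way also drops its pitch, which is part of the "strange and interesting" color Lamberti describes. As a rough illustration (not the team's actual process), the naive stretch can be sketched with linear interpolation onto a longer time base; real tools also offer pitch-preserving stretches:

```python
import numpy as np

def slow_down(samples: np.ndarray, factor: float) -> np.ndarray:
    """Stretch `samples` by `factor` (2.0 = twice as long).

    Resamples onto a longer time base with linear interpolation;
    played back at the original rate, the result is slower AND
    pitched down -- the classic slo-mo treatment.
    """
    n_out = int(len(samples) * factor)
    src_pos = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(src_pos, np.arange(len(samples)), samples)

# Example: a 1 kHz tone at 48 kHz, slowed 4x, sounds like 250 Hz.
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
slowed = slow_down(tone, 4.0)
print(len(tone), len(slowed))  # → 48000 192000
```
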

Re-recording mixing was accomplished in the 125-seat Anthony Quinn Theater on an Avid S6 console with O’Connell handling dialogue and music and Orloff tackling sound effects and Foley. Like its 1972 predecessor, which featured an iconic soundtrack from Curtis Mayfield, the new film employs music brilliantly. Atlanta-based rapper Future, who shares producer credit, assembled a soundtrack that features Young Thug, Lil Wayne, Miguel, H.E.R. and 21 Savage.

“We were fortunate to have in Kevin and Greg, a pair of Academy Award-winning mixers, who did a brilliant job in blending music, dialogue and sound effects,” says Ticknor. “The mix sessions were very collaborative, with a lot of experimentation to build intensity and make the movie feel bigger than life. Everyone was contributing ideas and challenging each other to make it better, and it all came together in the end.”

Cinema Audio Society sets next awards date and timeline

The Cinema Audio Society (CAS) will be holding its 55th Annual CAS Awards on Saturday, February 16, 2019 at the InterContinental Los Angeles Downtown in the Wilshire Grand Ballroom. The CAS Awards recognize outstanding sound mixing in film and television as well as outstanding products for production and post. Recipients for the CAS Career Achievement Award and CAS Filmmaker Award will be announced later in the year.

The InterContinental Los Angeles Downtown is a new venue for the awards. They were held at the Omni Los Angeles Hotel at California Plaza last year.

The timeline for the awards is as follows:
• Entry submission form will be available online on the CAS website on Thursday, October 11, 2018.
• Entry submissions are due online by 5:00pm PST on Thursday, November 15, 2018.
• Outstanding product entry submissions are due online by 5:00pm PST on Friday, December 7, 2018.
• Nomination ballot voting begins online on Thursday, December 13, 2018.
• Nomination ballot voting ends online at 5:00pm PST on Thursday, January 3, 2019.
• Final nominees in each category will be announced on Tuesday, January 8, 2019.
• Final voting begins online on Thursday, January 24, 2019.
• Final voting ends online at 5:00pm PST on Wednesday, February 6, 2019.


Hobo’s Chris Stangroom on providing Quest doc’s sonic treatment

Following a successful film fest run that included winning a 2018 Independent Spirit Award, and being named a 2017 official selection at Sundance, the documentary Quest is having its broadcast premiere on PBS this month as part of their POV series.

Filmed with vérité intimacy for nearly a decade, Quest follows the Rainey family who live in North Philadelphia. The story begins at the start of the Obama presidency with Christopher “Quest” Rainey, and his wife Christine (“Ma Quest”) raising a family, while also nurturing a community of hip-hop artists in their home music studio. It’s a safe space where all are welcome, but as the doc shows, this creative sanctuary can’t always shield them from the strife that grips their neighborhood.

New York-based audio post house Hobo, which is no stranger to indie documentary work (Weiner, Amanda Knox, Voyeur), lent its sonic skills to the film, including the entire sound edit (dialogue, effects and music), sound design, 5.1 theatrical and broadcast mixes.

We spoke with Hobo’s Chris Stangroom, supervising sound editor/re-recording mixer on the project, about the challenges he and the Hobo team faced in their quest on this film.

Broadly speaking what did you and Hobo do on this project? How did you get involved?
We handled every aspect of the audio post on Quest for its Sundance Premiere, theatrical run and broadcast release of the film on POV.

This was my first time working with director Jonathan Olshefski and I loved every minute of it. The entire team on Quest was focused on making this film better with every decision, and he had to be the final voice on everything. We were connected through my friend, producer Sabrina Gordon, who I had previously worked with on the film Undocumented. It was a pretty quick turn of events, as I think I got the first call about the film Thanksgiving weekend of 2016. We started working on the film the day after Christmas that year and finished two weeks later, delivering the entire sound edit and mix for the 2017 Sundance Film Festival.

How important is the audio mix/sound design in the overall cinematic experience of Quest? What was most important to Olshefski?
The sound of a film is half of the experience. I know it sounds cliché, but after years of working with clients on improving their films, the importance of a good sound mix and edit can’t be overstated. I have seen films come to life by simply adding Foley to a few intimate moments in a scene. It seems like such a small detail in the grand scheme of a film’s soundtrack, but feeling that intimacy with a character connects us to them in a visceral way.

Since Quest was a film not only about the Rainey family but also their neighborhood of North Philly, I spent a lot of time researching the sounds of Philadelphia. I gathered a lot of great references and insight from friends who had grown up in Philly, like the sounds of “ghetto birds” (helicopters), the motorbikes that are driven around constantly and the SEPTA buses. As Jon and I spoke about the film’s soundtrack, those kinds of sounds and ideas were exactly what he was looking for when we were out on the streets of North Philly. It created an energy to the film that made it vivid and alive.

The film was shot over a 10-year period. How did that prolonged production affect the audio post? Were there format issues or other technical issues you needed to overcome?
It presented some challenges, but luckily Jon always recorded with a lav or a boom on his camera for the interviews, so matching their sound qualities was easier than if he had just been using a camera mic. There are probably half a dozen “narrated” scenes in Quest that are built from interview sound bites, so bouncing around from interviews 10 years apart was tricky and required a lot of attention to detail.

In addition, Quest‘s phenomenal editor Lindsay Utz was cutting scenes up until the last day of our sound mix. So even once we got an entire scene sounding clean and balanced, it would then change and we’d have to add a new line from some other interview during that decade-long period. She definitely kept me on my toes, but it was all to make the film better.

Music is a big part of the family’s lives. Did the fact that they run a recording studio out of their home affect your work?
Yes. The first thing I did once we started on the film was to go down to Quest’s studio in Philly and record “impulse responses” (IRs) of the space, essentially recording the “sound” of a room or space. I wanted to bring that feeling of the natural reverbs in his studio and home to the film. I captured the live room where the artists would be recording, his control room in the studio and even the hallway leading to the studio with doors opened and closed, because sound changes and becomes more muffled as more doors are shut between the microphone and the sound source. The IRs helped me add incredible depth and the feeling that you were there with them when I was mixing the freestyle rap sessions and any scenes that took place in the home and studio.
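
The use Stangroom describes for those IRs is convolution reverb: convolving a dry signal with the room's recorded impulse response "places" the sound in that space. This toy sketch shows the core operation (the signals here are invented for illustration; production plugins like Altiverb use fast partitioned convolution rather than a direct call):

```python
import numpy as np

def apply_ir(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Place `dry` in the room described by impulse response `ir`."""
    wet = np.convolve(dry, ir)       # room response = dry convolved with IR
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping

# Toy example: a click (unit impulse) through a decaying "room" tail.
dry = np.zeros(10)
dry[0] = 1.0                 # the click
ir = 0.5 ** np.arange(5)     # fake 5-tap exponentially decaying IR
print(apply_ir(dry, ir)[:5].tolist())  # → [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Because convolving a unit impulse with the IR returns the IR itself, the output's first taps reproduce the room's decay exactly, which is why capturing a clean IR captures the "sound" of the space.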

Jon and I also grabbed dozens of tracks that Quest had produced over the years, so that we could add them into the film in subtle ways, like when a car drives by or from someone’s headphones. It’s those kinds of little details that I love adding, like Easter eggs that only a handful of us know about. They make me smile whenever I watch a film.

Any particular scene or section or aspect of Quest that you found most challenging or interesting to work on?
The scenes involving Quest’s daughter PJ’s injury, her stay in the hospital and her return home had a lot of challenges that came along with them. We used sound design and the score from the amazing composer T. Griffin to create the emotional arc that something dangerous and life-changing was about to happen.

Once we were in the hospital, we wanted the sound of everything to be very, very quiet. There is a scene in which Quest is whispering to PJ while she is in pain and trying to recover. The actual audio from that moment had a few nurses and women in the background having a loud conversation and occasionally laughing. It took the viewer immediately away from the emotions that we were trying to connect with, so we ended up scrapping that entire audio track and recreated the scene from scratch. Jon actually ended up getting in the sound booth and did some very low and quiet whispering of the kinds of phrases Quest said to his daughter. It took a couple hours to finesse that scene.

Lastly, there’s the scene when PJ gets out of the hospital and returns to a world that didn’t stop while she was recovering. We spent a lot of time shifting back and forth between the reality of what happened and the emotional journey PJ was going through trying to regain normalcy in her life. There was a lot of attention to detail in the mix on that scene because it had to be delivered correctly in order not to break the momentum that had been created.

What was the key technology you used on the project?
Avid Pro Tools, iZotope RX 5 Advanced, Audio Ease Altiverb, a Zoom H4n and a matched stereo pair of sE Electronics sE1a condenser mics.

Who else at Hobo was involved in Quest?
The entire Hobo team really stepped up on this project — namely our sound effects editors Stephen Davies, Diego Jimenez and Julian Angel; Foley artist Oscar Convers; and dialogue editor Jesse Peterson.

Chimney opens in New York City, hires team of post vets

Chimney, an independent content company specializing in film, television, spots and digital media, has opened a new facility in New York City. For over 20 years, the group has been producing and posting campaigns for brands such as Ikea, Audi, H&M, Chanel, Nike, HP, UBS and more. Chimney was also the post partner for the feature films Chappaquiddick, Her, Atomic Blonde and Tinker Tailor Soldier Spy.

With this New York opening, Chimney now has 14 offices worldwide. Founded in Stockholm in 1995, the company opened its first US studio in Los Angeles last year. In addition to Stockholm, New York and LA, Chimney also has facilities in Singapore, Copenhagen, Berlin and Sydney, among other cities.

“Launching in New York is a benchmark long in the making, and the ultimate expression of our philosophy of ‘boutique-thinking with global power,’” says Henric Larsson, Chimney founder and COO. “Having a meaningful presence in all of the world’s economic centers with diverse cultural perspectives means we can create and execute at the highest level in partnership with our clients.”

The New York opening supports Chimney’s mission to connect its global talent and resources, effectively operating as a 24-hour, full-service content partner to brand, entertainment and agency clients, no matter where they are in the world.

Chimney has signed on several industry vets to spearhead the New York office. Leading the US presence is CEO North America Marcelo Gandola. His previous roles include COO at Harbor Picture Company; EVP at Hogarth; SVP of creative services at Deluxe Entertainment Services Group; and VP of operations at Company 3.

Colorist and director Lez Rudge serves as Chimney’s head of color North America. He is a former partner and senior colorist at Nice Shoes in New York. He has worked alongside Spike Lee and Darren Aronofsky, and on major brand campaigns for Maybelline, Revlon, NHL, Jeep, Humira, Spectrum and Budweiser.

Managing director Ed Rilli will spearhead the day-to-day logistics of the New York office. The former head of production at Nice Shoes, his resume includes producing major campaigns for such brands as NFL, Ford, Jagermeister and Chase.

Sam O’Hare, chief creative officer and lead VFX artist, will oversee the VFX team. Bringing experience in live-action directing, VFX supervision, still photography and architecture, O’Hare’s interdisciplinary background makes him well suited for photorealistic CGI production.

In addition, Chimney has brought on cinematographer and colorist Vincent Taylor, who joins from MPC Shanghai, where he worked with brands such as Coca-Cola, Porsche, New Balance, Airbnb, BMW, Nike and L’Oréal.

The 6,000-square-foot office will feature Blackmagic Resolve color rooms, Autodesk Flame suites and a VFX bullpen, as well as multiple edit rooms, a DI theater and a Dolby Atmos mix stage through a joint venture with Gigantic Studios.

Main Image: (L-R) Ed Rilli, Sam O’Hare, Marcelo Gandola and Lez Rudge.

Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on an exploration to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

 Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London, says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, mostly everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from sourced hydrophone recordings from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.
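Slowing a recording down and pitching it down together, as Walpole describes, is what happens when the waveform is stretched and replayed at the original sample rate. A naive sketch of that stretch (the function name and the 440 Hz test tone are illustrative, not from the production, which would have used a proper sampler or DAW time/pitch tool):

```python
import numpy as np

def slow_and_pitch_down(x, factor):
    # Naive resample: stretch the waveform by `factor`, so playback
    # at the original sample rate is both slower and lower in pitch
    # (pitch drops by 12 * log2(factor) semitones).
    n_out = int(len(x) * factor)
    idx = np.linspace(0.0, len(x) - 1, n_out)
    return np.interp(idx, np.arange(len(x)), x)

sr = 48000
t = np.arange(sr) / sr
ping = np.sin(2 * np.pi * 440.0 * t)   # stand-in for an "ice ping"
deep = slow_and_pitch_down(ping, 4.0)  # 4x slower, two octaves lower
```

A four-fold stretch turns a small crack into something with the weight of "massive sheets of ice," which is why the technique is a staple for scaling sounds up.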

Effects editor Saoirse Christopherson capturing sounds aboard a kayak on the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show's sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R26 recorder with several contact mics strapped to the kayak's hull, they spent the day grinding through, over and against the ice. "The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak," says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
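The low-end emphasis Walpole describes can be approximated with a simple one-pole low-pass filter. This is a hedged sketch of the idea only (the actual mix used studio EQ and TC Electronic reverbs, not this filter, and the 300 Hz cutoff is an assumed value):

```python
import numpy as np

def one_pole_lowpass(x, sr, cutoff_hz):
    # Single-pole IIR low-pass: rolls off the highs so the
    # low-mid "grinding" content dominates the result.
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.empty(len(x))
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1.0 - a) * s + a * acc
        y[i] = acc
    return y

sr = 48000
t = np.arange(sr // 2) / sr
low = np.sin(2 * np.pi * 100.0 * t)      # low rumble: mostly preserved
high = np.sin(2 * np.pi * 8000.0 * t)    # high hiss: strongly attenuated
low_out = one_pole_lowpass(low, sr, 300.0)
high_out = one_pole_lowpass(high, sr, 300.0)
rms = lambda s: float(np.sqrt(np.mean(s * s)))
```

Content well below the cutoff passes nearly untouched while content far above it is cut by tens of decibels, which is the "bring out the low-mid to low-end" move in miniature.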

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole uses sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn't just battling the elements, scurvy, starvation and mutiny. They're also being killed off by a polar bear-like creature called the Tuunbaq. It's part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season, so Walpole created sonic hints as to the creature's make-up.

Walpole worked with showrunner David Kajganich to find the creature's voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. "Some of the recordings we used raw as well," says Walpole. "This guy could make these crazy sounds. His voice could go so deep."

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series was shot on a soundstage, there was no usable bed of production sound to act as a jumping-off point for the post sound team. But instead of that being a challenge, Walpole found it liberating. "In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it's the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It's not often in your career that you get such a blank canvas of creation."


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.