
Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of streaming services like Netflix is that strict program lengths don’t encumber content creators. A season can run 13 episodes, or 10, or fewer. An episode can run 15 minutes over the traditional hour or come in at 33 minutes. Traditional rules don’t apply, and the story is allowed to dictate the length.

This was a huge advantage for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer, Wylie Stateman, of Twenty Four Seven Sound explains why. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a good-versus-evil conflict set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which for sound earned a 2013 Oscar nom, an MPSE nom and a BAFTA film nom) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

On Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, as well as a nomination for editing the dialogue and ADR. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds (L-R): Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambiences and gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round is very different sounding than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know first-hand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Sky to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialogue, add properly leveled sound design and send that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow, the sound team could very quickly share concepts with the other creative departments, including the music and VFX teams. The workflow was enhanced by Pix, a cloud-based film management/collaboration tool that let the showrunners, VFX supervisor, composer, sound team and picture team share content and notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialog and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was for the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is lots of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music, playing with repeated rhythms. Breaking the anticipated rhythm helped catch the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedules still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one to the last day. The Pro Tools session that they started with on day one was the same session they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.

Pacific Rim: Uprising’s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on this grand a scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu on June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo Del Toro (a fan of the Portal game series) had actress Ellen McLain as the voice of Gipsy Avenger since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with Waves MaxxBass, a sub-harmonic synthesis plug-in, to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”
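
The idea behind a sub-harmonic layer can be illustrated with a toy octave divider. The NumPy sketch below is only a rough illustration of the general principle, not Waves’ MaxxBass algorithm (which works psychoacoustically); all function names here are made up for the example.

```python
import numpy as np

def octave_down_layer(x, sr):
    """Toy sub-harmonic generator: toggle a square wave on every
    positive-going zero crossing, yielding a tone one octave below the
    input's fundamental (the classic analog 'octave divider' trick),
    then shape it with the input's smoothed amplitude envelope."""
    crossings = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0))
    square = np.empty(len(x))
    sign, last = 1.0, 0
    for c in crossings:
        square[last:c] = sign
        sign, last = -sign, c
    square[last:] = sign
    # One-pole smoothing (~10 ms) of the rectified input as an envelope,
    # so the sub-octave follows the dynamics of the original sound.
    a = np.exp(-1.0 / (0.01 * sr))
    env, acc = np.empty(len(x)), 0.0
    for i, e in enumerate(np.abs(x)):
        acc = a * acc + (1.0 - a) * e
        env[i] = acc
    return square * env

def fatten(x, sr, mix=0.5):
    """Mix the sub-octave layer under the dry signal for a deeper sound."""
    return x + mix * octave_down_layer(x, sr)
```

Feeding a 220Hz tone through `octave_down_layer` produces a square wave whose fundamental sits at 110Hz, which is where the perceived extra weight comes from when it is mixed back under the original.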

That was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja, and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, organic machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”


Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their whole library when looking for Pacific Rim: Uprising sounds. It has its own library of specially designed, signature sounds that are all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, such as when the Kaiju come together to form the Mega-Kaiju. “As the action escalates, it goes off-camera, it was more of a shadow and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second-to-none but the people there are even better than that. “We have been very fortunate to work with great people, from Steven DeKnight our director to Dylan Highsmith our picture editor to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on an exploration to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London, says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, mostly everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from sourced hydrophone recordings from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.

Effects editor Saoirse Christopherson capturing sounds on board a kayak in the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show’s sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R26 recorder with several contact mics strapped to the kayak’s hull, they spent the day grinding through, over and against the ice. “The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak,” says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
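This slow-down-and-filter treatment is essentially tape-style varispeed plus a low-pass. A minimal NumPy sketch of the general idea follows; it is not Walpole’s actual chain (his reverbs and EQ settings aren’t public), and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def varispeed(x, factor):
    """Tape-style slowdown: resampling by `factor` (> 1) stretches the
    recording and lowers its pitch together, like playing a reel back
    at a fraction of its original speed."""
    positions = np.linspace(0.0, len(x) - 1.0, int(len(x) * factor))
    return np.interp(positions, np.arange(len(x)), x)

def one_pole_lowpass(x, sr, cutoff_hz):
    """First-order low-pass to favor the low-mid and low frequencies."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y, acc = np.empty(len(x)), 0.0
    for i, s in enumerate(x):
        acc = a * acc + (1.0 - a) * s
        y[i] = acc
    return y

# e.g. slow a hydrophone recording to quarter speed, then roll off
# above ~500 Hz (names illustrative):
# ice = one_pole_lowpass(varispeed(hydrophone, 4.0), sr, 500.0)
```

Slowing a recording by a factor of four this way drops every component two octaves, which is why small ice pings and cracks start to read as massive sheets grinding against each other.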

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole uses sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn’t just battling the elements, scurvy, starvation and mutiny. They’re also being killed off by a polar bear-like creature called the Tuunbaq. It’s part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season, so Walpole created sonic hints as to the creature’s makeup.

Walpole worked with showrunner David Kajganich to find the creature’s voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. “Some of the recordings we used raw as well,” says Walpole. “This guy could make these crazy sounds. His voice could go so deep.”

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series is shot on a soundstage, there’s no usable bed of production sound to act as a jumping-off point for the post sound team. But instead of that being a challenge, Walpole finds it liberating. “In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it’s the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It’s not often in your career that you get such a blank canvas of creation.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Review: Krotos Reformer Pro for customizing sounds

By Robin Shore

Krotos has got to be one of the most innovative developers of sound design tools in the industry right now. That is a strong statement, but I stand by it. This Scottish company has become well known over the past few years for its Dehumaniser line of products, which bring a fresh approach to the creation of creature vocals and monster sounds. Recently, they released a new DAW plugin, Reformer Pro, which aims to give sound editors creative new ways of accessing and manipulating their sound effects.

Reformer Pro brings a procedural approach to working with sound effects libraries. According to their manual, “Reformer Pro uses an input to control and select segments of prerecorded audio automatically, and recompiles them in realtime, based on the characteristics of the incoming signal.” In layman’s terms this means you can “perform” sound effects from a library in realtime, using only a microphone and your voice.

It’s dead simple to use. A menu inside the plugin lets you choose from a list of libraries that have been pre-analyzed for use with Reformer Pro. Once you’ve loaded up the library you want, all that’s left to do is provide some sort of sonic input and let the magic happen. Whatever sound you put in will be instantly “reformed” into a new sound effect of your choosing. A number of libraries come bundled in when you buy Reformer Pro and additional libraries can be purchased from the Krotos website. The choice to include the Black Leopard library as a default when you first open the plugin was a very good one. There is just something so gratifying about breathing and grunting into a microphone and hearing a deep menacing growl come out the speakers instead of your own voice. It made me an immediate fan.
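Reduced to its core idea, the manual’s description resembles a concatenative matcher: segment a library into grains, then for each frame of the incoming signal pick the grain that best matches it. The sketch below is a drastic simplification with invented names; whatever analysis Krotos actually performs certainly goes far beyond a single loudness feature.

```python
# A toy illustration of the idea behind "reforming" audio: analyze a
# library of short grains, then for each frame of a live input pick the
# grain whose loudness best matches it, so the output follows the
# input's dynamics and rhythm. Grain data and names are invented.

def rms(frame):
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def reform(input_signal, grains, frame_size=4):
    """Replace each input frame with the library grain closest in RMS."""
    out = []
    for start in range(0, len(input_signal) - frame_size + 1, frame_size):
        frame = input_signal[start:start + frame_size]
        best = min(grains, key=lambda g: abs(rms(g) - rms(frame)))
        out.extend(best)
    return out

# Hypothetical "leopard" grains: quiet breath, mid rumble, loud growl.
grains = [[0.01] * 4, [0.3] * 4, [0.9] * 4]
voice_input = [0.0, 0.0, 0.0, 0.0, 0.85, 0.9, 0.8, 0.95]  # whisper, then shout
growl = reform(voice_input, grains)
```

Even this crude matcher shows why the tool feels performable: a quiet breath into the mic selects quiet library material, a shout selects loud material, and the timing of the output is locked to the input.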

There are a few knobs and switches that let you tweak the response characteristics of Reformer Pro’s output, but for the most part you’ll be using sound to control things, and the amount of control you can get over the dynamics and rhythm of Reformer Pro’s output is impressive. While my immediate instinct was to drive Reformer Pro by vocalizing through a mic, any sound source can work well as an input. I also got great results by rubbing and tapping my fingers directly against the grill of a microphone and by dragging the mic across the surface of my desk.

Things get even more interesting if you start feeding pre-recorded audio into Reformer Pro. Using a Foley footstep track as the input for a library of cloth and leather sounds creates a realistic and perfectly synced rustle track. A howling wind used as the input for a library of creaks and rattles can add a nice layer of texture to a scene’s ambience tracks. Pumping music through Reformer Pro can generate some really wacky sounds and is a great way to find inspiration and test out abstract sound design ideas.

If the only libraries you could use with Reformer Pro were the 100 or so available on the Krotos website, it would still be a fun and innovative tool, but its utility would be pretty limited. What makes Reformer Pro truly powerful is its analysis tool. This lets you create custom libraries out of sounds from your own collection. The possibilities here are practically endless: as long as a sound exists, it can be turned into a unique new library. To be sure, some sounds are better suited for this than others, but it doesn’t take long at all to figure out what kind of sounds will work best, and I was pleasantly surprised with how well most of the custom libraries I created turned out. This is a great way to breathe new life into an old sound effects collection.

Summing Up
Reformer Pro adds a sense of liveliness, creativity and, most importantly, fun to the often tedious task of syncing sound effects to picture. Anyone who spends their days working with sound effects would be doing themselves a disservice by not taking Reformer Pro for a test drive. I imagine most will be both impressed and excited by its novel approach to sound effects editing and design.


Robin Shore is an audio engineer at NYC’s Silver Sound Studios.

Behind the Title: PlushNYC partner/mixer Mike Levesque, Jr.

NAME: Michael Levesque, Jr.

COMPANY: PlushNYC

CAN YOU DESCRIBE YOUR COMPANY?
We provide audio post production services.

WHAT’S YOUR JOB TITLE?
Partner/Mixer/Sound Designer

WHAT DOES THAT ENTAIL?
The foundation of it all for me is that I’m a mixer and a sound designer. I became a studio owner/partner organically because I didn’t want to work for someone else. The core of my role is giving my clients what they want from an audio post perspective. The other parts of my job entail managing the staff, working through technical issues, empowering senior employees to excel in their careers and coaching junior staff when given the opportunity.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Every day I find myself being the janitor in many ways! I’m a huge advocate of leading by example and I feel that no task is too mundane for any team member to take on. So I don’t cast shade on picking up a mop or broom, and I also handle everything else above that. I’m a part of a team, and everyone on the team participates.

During our latest facility remodel, I took a very hands-on approach. As a bit of a weekend carpenter, I naturally gravitate toward building things, and that was no different in the studio!

WHAT TOOLS DO YOU USE?
Avid Pro Tools. I’ve been operating on Pro Tools since 1997 and was one of the early adopters. Initially, I started out on analog ¼-inch tape and later moved to the digital editing system SSL ScreenSound. I’ve been using Pro Tools since its humble beginnings, and that is my tool of choice.

WHAT’S YOUR FAVORITE PART OF THE JOB?
For me, my favorite part about the job is definitely working with the clients. That’s where I feel I am able to put my best self forward. In those shoes, I have the most experience. I enjoy the conversation that happens in the room, the challenges that I get from the variety of projects and working with the creatives to bring their sonic vision to life. Because of the amount of time I spend in the studio with my clients, one of the great results, besides the work, is wonderful long-term friendships. You get to meet a lot of different people and experience a lot of different walks of life, and that’s incredibly rewarding for me.

WHAT’S YOUR LEAST FAVORITE?
We’ve been really lucky to have regular growth over the years, but the logistics of that can be challenging at times. Expansion in NYC is a constant uphill battle!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The train ride in. With no distractions, I’m able to get the most work done. It’s quiet and allows me to be able to plan my day out strategically while my clarity is at its peak. That way I can maximize my day and analyze and prioritize what I want to get done before the hustle and bustle of the day begins.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I weren’t a mixer/sound designer, I would likely be a general contractor or in a role where I was dealing with building and remodeling houses.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started when I was 19 and I knew pretty quickly that this was the path for me. When I first got into it, I wanted to be a music producer. Being a novice musician, it was very natural for me.

Borgata

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I recently worked on a large-scale project for Frito-Lay, a project for ProFlowers and Shari’s Berries for Valentine’s Day, a spot for Massage Envy and a campaign for the Broadway show Rocktopia. I’ve also worked on a number of projects for Vevo, including pieces for The World According To… series for artists — that includes a recent one with Jaden Smith. I also recently worked on a spot with SapientRazorfish New York for Borgata Casino that goes on a colorful, dreamlike tour of the casino’s app.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Back in the early 2000s, I mixed a DVD box set called Journey Into the Blues, a PBS film series from Martin Scorsese that won a Grammy for Best Historical Album and Best Album Notes.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– My cell phone to keep me connected to every aspect of life.
– My Garmin GPS Watch to help me analytically look at where I’m performing in fitness.
– Pro Tools to keep the audio work running!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m an avid triathlete, so personal wellness is a very big part of my life. Training daily is a really good stress reliever, and it allows me to focus both at work and at home with the kids. It’s my meditation time.

Super Bowl: Heard City’s audio post for Tide, Bud and more

By Jennifer Walden

New York audio post house Heard City put their collaborative workflow design to work on the Super Bowl ad campaign for Tide. Philip Loeb, partner/president of Heard City, reports that their facility is set up so that several sound artists can work on the same project simultaneously.

Loeb also helped to mix and sound design many of the other Super Bowl ads that came to Heard City, including ads for Budweiser, Pizza Hut, Blacture, Tourism Australia and the NFL.

Here, Loeb and mixer/sound designer Michael Vitacco discuss the approach and the tools that their team used on these standout Super Bowl spots.

Philip Loeb

Tide’s It’s a Tide Ad campaign via Saatchi & Saatchi New York
Is every Super Bowl ad really a Tide ad in disguise? A string of commercials touting products from beer to diamonds, and even a local ad for insurance, is interrupted by David Harbour (of Stranger Things fame). He declares that those ads are actually just Tide commercials, as everyone is wearing such clean clothes.

Sonically, what’s unique about this spot?
Loeb: These spots, four in total, involved sound design and mixing, as well as ADR. One of our mixers, Evan Mangiamele, conducted an ADR session with David Harbour, who was in Hawaii, and we integrated that into the commercial. In addition, we recorded a handful of different characters for the lead-ins for each of the different vignettes because we were treating each of those as different commercials. We had to be mindful of a male voiceover starting one and then a female voiceover starting another so that they were staggered.

There was one vignette for Old Spice, and since the ads were for P&G, we did get the Old Spice mnemonic and we did try something different at the end, with one version featuring the character singing the mnemonic and one of him whistling it. There were many different variations and we just wanted, in the end, to get part of the mnemonic into the joke at the end.

The challenge with the Tide campaign, in particular, was to make each of these vignettes feel like it was a different commercial and to treat each one as such. There’s an overall mix level that goes into that but we wanted certain ones to have a little bit more dynamic range than the others. For example, there is a cola vignette that’s set on a beach with people taking a selfie. David interrupts them by saying, “No, it’s a Tide ad.”

For that spot, we had to record a voiceover that was very loud and energetic to go along with a loud and energetic music track. That vignette cuts into the “personal digital assistant” (think Amazon’s Alexa) spot. We had to be very mindful of these ads flowing into each other while making it clear to the viewer that these were different commercials with different products, not one linear ad. Each commercial required its own voiceover, its own sound design, its own music track, and its own tone.

One vignette was about car insurance featuring a mechanic in a white shirt under a car. That spot isn’t letterbox like the others; it’s 4:3 because it’s supposed to be a local ad. We made that vignette sound more like a local ad; it’s a little over-compressed, a little over-equalized and a little videotape sounding. The music is mixed a little low. We wanted it to sound like the dialogue is really up front so as to get the message across, like a local advertisement.

What’s your workflow like?
Loeb: At Heard City, our workflow is unique in that we can have multiple mixers working on the same project simultaneously. This collaborative process makes our work much more efficient, and that was our original intent when we opened the company six years ago. The model came to us by watching the way that the bigger VFX companies work. Each artist takes a different piece of the project and then all of the work is combined at the end.

We did that on the Tide campaign, and there was no other way we could have done it due to the schedule. Also, we believe this workflow provides a much better product. One sound artist can be working specifically on the sound design while another can be mixing. So as I was working on mixing, Evan was flying in his sound design to me. It was a lot of fun working on it like that.

What tools helped you to create the sound?
Loeb: One plug-in we’re finding to be very helpful is the iZotope Neutron. We put that on the master bus and we have found many settings that work very well on broadcast projects. It’s a very flexible tool.

Vitacco: The Neutron has been incredibly helpful overall in balancing out the mix. There are some very helpful custom settings that have helped to create a dynamic mix for air.

Tourism Australia Dundee via Droga5 New York
Danny McBride and Chris Hemsworth star in this movie-trailer-turned-tourism-ad for Australia. It starts out as a movie trailer for a new addition to the Crocodile Dundee film franchise — well, rather, a spoof of it. There’s epic music featuring a didgeridoo and title cards introducing the actors and setting up the premise for the “film.” Then there’s talk of miles of beaches and fine wine and dining. It all seems a bit fishy, but finally Danny McBride confirms that this is, in fact, actually a tourism ad.

Sonically, what’s unique about this spot?
Vitacco: In this case, we were creating a fake movie trailer that’s a misdirect for the audience, so we aimed to create sound design that was both in the vein of being big and epic and also authentic to the location of the “film.”

One of the things that movie trailers often draw upon is a consistent mnemonic to drive home a message. So I helped to sound design a consistent mnemonic for each of the title cards that come up.

For this I used some Native Instruments toolkits, like “Rise & Hit” and “Gravity,” and Tonsturm’s Whoosh software to supplement some existing sound design to create that consistent and branded mnemonic.

In addition, we wanted to create an authentic sonic palette for the Australian outback where a lot of the footage was shot. I had to be very aware of the species of animals and insects that were around, so I drew upon sound effects that were specifically from Australia, authentic to the continent.

Another factor that came into play was that anytime you are dealing with a spot that has a lot of soundbites, especially ones recorded outside, there tends to be a lot of noise reduction taking place. I didn’t have to hit it too hard because everything was recorded very well. For cleanup, I used the iZotope RX 6 — both the RX Connect and the RX Denoiser. I relied on that heavily, as well as the Waves WNS plug-in, just to make sure that things were crisp and clear. That allowed me the flexibility to add my own ambient sound and have more control over the mix.

Michael Vitacco

In RX, I really like to use the Denoiser instead of the Dialogue Denoiser tool when possible. I’ll pull out the handles of the production sound and grab a long sample of noise. Then I’ll use the Denoiser because I find that works better than the Dialogue Denoiser.
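The workflow Vitacco describes, grabbing a long noise-only sample from the handles and letting the tool learn from it, can be caricatured in the time domain. RX actually works spectrally, so this sketch with invented names only captures the profile-then-suppress idea, not how the plug-in operates.

```python
# A crude, time-domain analogue of "learning" a noise profile: measure
# the level of a noise-only region, then attenuate any frame that isn't
# well above that floor. Real denoisers work on the spectrum; this
# sketch (invented names and numbers) only shows the general idea.

def rms(frame):
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def denoise(signal, noise_sample, frame_size=4, margin=2.0, duck=0.1):
    floor = rms(noise_sample)                  # learned noise floor
    out = []
    for start in range(0, len(signal), frame_size):
        frame = signal[start:start + frame_size]
        if rms(frame) < floor * margin:        # frame is mostly noise
            frame = [x * duck for x in frame]  # duck it instead of muting
        out.extend(frame)
    return out

noise = [0.02, -0.03, 0.025, -0.02]            # noise-only handle
recording = [0.02, -0.02, 0.03, -0.03,         # noise...
             0.6, -0.5, 0.55, -0.6]            # ...then dialogue
cleaned = denoise(recording, noise)
```

Ducking rather than muting the noisy frames mirrors why editors prefer gentle noise reduction: a hard gate leaves unnatural silences, while attenuation leaves room to lay in controlled ambience afterward, as Vitacco describes.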

Budweiser Stand By You via David Miami
The phone rings in the middle of the night. A man gets out of bed, prepares to leave and kisses his wife good-bye. His car radio announces that a natural disaster is affecting thousands of families who are in desperate need of aid. The man arrives at a Budweiser factory and helps to organize the production of canned water instead of beer.

Sonically, what’s unique about this spot?
Loeb: For this spot, I did a preliminary mix where I handled the effects, the dialogue and the music. We set the preliminary tone for that as to how we were going to play the effects throughout it.

The spot starts with a husband and wife asleep in bed and they’re awakened by a phone call. Our sound focused on the dialogue and effects upfront, and also the song. I worked on this with another fantastic mixer here at Heard City, Elizabeth McClanahan, who comes from a music background. She put her ears to the track and did an amazing job of remixing the stems.

On the master track in the Pro Tools session, she used iZotope’s Neutron, as well as the FabFilter Pro-L limiter, which helps to contain the mix. One of the tricks on a dynamic mix like that — which starts off with that quiet moment in the morning and then builds with the music in the end — is to keep it within the restrictions of the CALM Act and other specifications that stipulate dynamic range and not just average loudness. We had to be mindful of how we were treating those quiet portions and the lower portions so that we still had some dynamic range but we weren’t out of spec.
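The balancing act Loeb describes can be illustrated with a toy loudness check. Real broadcast delivery specs behind the CALM Act measure LKFS per ITU-R BS.1770, with K-weighting and gating; the sketch below, with invented names and numbers, only shows the idea of averaging windowed levels against a target so that a quiet opening and a loud ending fit one overall budget.

```python
import math

# A much-simplified sketch of a broadcast loudness check: compute
# windowed RMS levels in dBFS and compare the average to a target.
# This is NOT BS.1770; it is an illustration of the averaging idea.

def db(level):
    return 20 * math.log10(max(level, 1e-9))

def window_levels(samples, window=4):
    levels = []
    for start in range(0, len(samples), window):
        w = samples[start:start + window]
        rms = (sum(x * x for x in w) / len(w)) ** 0.5
        levels.append(db(rms))
    return levels

def in_spec(samples, target_db=-24.0, tolerance_db=2.0):
    levels = window_levels(samples)
    average = sum(levels) / len(levels)
    return abs(average - target_db) <= tolerance_db

# A quiet morning scene followed by a loud music ending, hypothetically.
mix = [0.01] * 4 + [0.3] * 4
ok = in_spec(mix)
```

The quiet windows pull the average down, which is exactly why a dynamic mix can keep its soft opening and loud finish and still land on the target: the spec constrains the average, and the mixer manages the spread around it.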


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @AudioJeney.

Coco’s sound story — music, guitars and bones

By Jennifer Walden

Pixar’s animated Coco is a celebration of music, family and death. In the film, a young Mexican boy named Miguel (Anthony Gonzalez) dreams of being a musician just like his great-grandfather, even though his family is dead-set against it. On the evening of Día de los Muertos (the Mexican holiday called Day of the Dead), Miguel breaks into the tomb of legendary musician Ernesto de la Cruz (Benjamin Bratt) and tries to steal his guitar. The attempted theft transforms Miguel into a spirit, and as he flees the tomb he meets his deceased ancestors in the cemetery.

Together they travel to the Land of the Dead where Miguel discovers that in order to return to life he must have the blessing of his family. The matriarch, great-grandmother Mamá Imelda (Alanna Ubach) gives her blessing with one stipulation, that Miguel can never be a musician. Feeling as though he cannot live without music, Miguel decides to seek out the blessing of his musician great-grandfather.

Music is intrinsically tied to the film’s story, and therefore to the film’s soundtrack. Ernesto de la Cruz’s guitar is like another character in the film. The Skywalker Sound team handled all the physical guitar effects, from subtle to destructive. Although they didn’t handle any of the music, they covered everything from fret handling and body thumps to string breaks and smashing sounds. “There was a lot of interaction between music and effects, and a fine balance between them, given that the guitar played two roles,” says supervising sound editor/sound designer/re-recording mixer Christopher Boyes, who was just nominated for a CAS award for his mixing work on Coco. His Skywalker team on the film included co-supervising sound editor J.R. Grubbs, sound effects editors Justin Doyle and Jack Whittaker, and sound design assistant Lucas Miller.

Boyes bought a beautiful guitar from a pawn shop in Petaluma near their Northern California location, and he and his assistant Miller spent a day recording string sounds and handling sounds. “Lucas said that one of the editors wanted us to cut the guitar strings,” says Boyes. “I was reluctant to cut the strings on this beautiful guitar, but we finally decided to do it to get the twang sound effects. Then Lucas said that we needed to go outside and smash the guitar. This was not an inexpensive guitar. I told him there was no way we were going to smash this guitar, and we didn’t! That was not a sound we were going to create by smashing the actual guitar! But we did give it a couple of solid hits just to get a nice rhythmic sound.”

To capture the true essence of Día de los Muertos in Mexico, Boyes and Grubbs sent effects recordists Daniel Boyes, Scott Guitteau, and John Fasal to Oaxaca to get field recordings of the real 2016 Día de los Muertos celebrations. “These recordings were essential to us and director Lee Unkrich, as well as to Pixar, for documenting and honoring the holiday. As such, the recordings formed the backbone of the ambience depicted in the track. I think this was a crucial element of our journey,” says Boyes.

Just as the celebration sound of Día de los Muertos was important, so too was the sound of Miguel’s town. The team needed to provide a realistic sense of a small Mexican town to contrast with the phantasmagorical Land of the Dead, and the recordings that were captured in Mexico were a key building block for that environment. Co-supervising sound editor Grubbs says, “Those recordings were invaluable when we began to lay the background tracks for locations like the plaza, the family compound, the workshop, and the cemetery. They allowed us to create a truly rich and authentic ambience for Miguel’s hometown.”

Bone Collecting
Another prominent set of sounds in Coco are the bones. Boyes notes that director Unkrich had specific guidelines for how the bones should sound. Characters like Héctor (Gael García Bernal), who are stuck in the Land of the Dead and are being forgotten by those still alive, needed to have more rattle-y sounding bones, as if the skeleton could come apart easily. “Héctor’s life is about to dissipate away, just as we saw with his friend Chicharrón [Edward James Olmos] on the docks, so their skeletal structure is looser. Héctor’s bones demonstrated that right from the get-go,” he explains.

In contrast, if someone is well remembered, such as de la Cruz, then the skeletal structure should sound tight. “In Miguel’s family, Papá Julio [Alfonso Arau] comically bursts apart many times, but he goes back together as a pretty solid structure,” explains Boyes. “Lee [Unkrich] wanted to dig into that dynamic first of all, to have that be part of the fabric that tells the story. Certain characters are going to be loose because nobody remembers them and they’re being forgotten.”

Creating the bone sounds was the biggest challenge for Boyes as a sound designer. Unkrich wanted to hear the complexity of the bones, from the clatter and movement down to the detail of cartilage. “I was really nervous about the bones challenge because it’s a sound that’s not easily embedded into a track without calling attention to itself, especially if it’s not done well,” admits Boyes.

Boyes started his bone sound collection by recording a mobile he built using different elements, like real bones, wooden dowels, little stone chips and other things that would clatter and rattle. Then one day Boyes stumbled onto an interesting bone sound while making a coconut smoothie. “I cracked an egg into the smoothie and threw the eggshell into the empty coconut hull and it made a cool sound. So I played with that. Then I was hitting the coconut on concrete, and from all of those sources I created a library of bone sounds.” Foley also contributed to the bone sounds, particularly for the literal, physical movements, like walking.

According to Grubbs, the bone sounds were designed and edited by the Skywalker team and then presented to the directors over several playbacks. The final sound of the skeletons is a product of many design passes, which were carefully edited in conjunction with the Foley bone recordings and sometimes used in combination with the Foley.

L-R: J.R. Grubbs and Chris Boyes

Because the film is so musical, the bone tracks needed to have a sense of rhythm and timing. To hit moments in a musical way, Boyes loaded bone sounds and other elements into Native Instruments’ Kontakt and played them via a MIDI keyboard. “One place for the bones that was really fun was when Héctor went into the security office at the train station,” says Boyes.

“Héctor comes apart and his fingers do a little tap dance. That kind of stuff really lent to the playfulness of his character and it demonstrated the looseness of his skeletal structure.”
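Playing sampled sounds from a MIDI keyboard the way Boyes describes rests on a standard sampler mapping, the same scheme tools like Kontakt use: each key repitches a zone’s root sample by a semitone ratio. A minimal sketch, with invented sample names:

```python
# A sketch of the sampler idea behind playing bone sounds from a MIDI
# keyboard: each key selects the nearest sample zone and repitches it
# by an equal-temperament ratio. The zone layout and file names here
# are invented for illustration.

def playback_rate(note, root_note=60):
    """One semitone = a 2**(1/12) change in playback rate."""
    return 2.0 ** ((note - root_note) / 12.0)

def trigger(note, keymap, root_note=60):
    """Return (sample_name, rate) for a MIDI note, nearest-zone style."""
    zone = min(keymap, key=lambda z: abs(z - note))
    return keymap[zone], playback_rate(note, root_note)

# Hypothetical bone-sound zones spread across the keyboard.
keymap = {48: "bone_clatter.wav", 60: "coconut_crack.wav",
          72: "eggshell_grind.wav"}

sample, rate = trigger(69, keymap)  # A above middle C
```

Driving sound effects from a keyboard this way is what lets a designer hit picture cuts and musical beats in real time instead of nudging regions on a timeline.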

From a sound perspective, Boyes feels that Coco is a great example of how movies should be made. During editorial, he and Grubbs took numerous trips to Pixar to sit down with the directors and the picture department. For several months before the final mix, they played sequences for Unkrich that they wanted to get direction on. “We would play long sections of just sound effects, and Lee — being such a student of filmmaking and being an animator — is quite comfortable with diving down into the nitty-gritty of just simple elements. It was really a collaborative and healthy experience. We wanted to create the track that Lee wanted and wanted to make sure that he knew what we were up to. He was giving us direction the whole way.”

The Mix
Boyes mixed alongside re-recording mixer Michael Semanick (music/dialogue) on Skywalker’s Kurosawa Stage. They mixed in native Dolby Atmos on a DFC console. While Boyes mixed, effects editor Doyle handled last-minute sound effects needs on the stage, and Grubbs ran the logistics of the show. Grubbs notes that although he and Boyes have worked together for a long time this was the first time they’ve shared a supervising credit.

“J.R. [Grubbs] and I have been working together for probably 30 years now,” says Boyes. “He always helped to run the show in a very supervisory way, so I just felt it was time he started getting credit for that. He’s really kept us on track, and I’m super grateful to him.”

One helpful audio tool for Boyes during the mix was the Valhalla Room reverb, which he used on Miguel’s footsteps inside de la Cruz’s tomb. “Normally, I don’t use plug-ins at all when I’m mixing. I’m a traditional mixer who likes to use a console and TC Electronic’s TC 6000 and the Lexicon 480 reverb as outboard gear. But in this one case, the Valhalla Room plug-in had a preset that really gave me a feeling of the stone tomb.”

Unkrich allowed Semanick and Boyes to have a first pass at the soundtrack to get it to a place they felt was playable, and then he took part in the final mix process with them. “I just love Lee’s respect for us; he gives us time to get the soundtrack into shape. Then, he sat there with us for 9 to 10 hours a day, going back and forth, frame by frame at times and section by section. Lee could hear everything, and he was able to give us definitive direction throughout. The mix was achieved by and directed by Lee, every frame. I love that collaboration because we’re here to bring his vision and Pixar’s vision to the screen. And the best way to do that is to do it in the collaborative way that we did,” concludes Boyes.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Behind the Titles: Something’s Awry Productions

NAME: Amy Theorin

NAME: Kris Theorin

NAME: Kurtis Theorin

COMPANY: Something’s Awry Productions

CAN YOU DESCRIBE YOUR COMPANY?
We are a family-owned production company that writes, creates and produces funny, shareable web content and commercials, mostly for the toy industry. We are known for our slightly offbeat but intelligent humor and stop-motion animation. We also create short films of our own, both animated and live-action.

WHAT’S YOUR JOB TITLE?
Amy: Producer, Marketing Manager, Business Development
Kris: Director, Animator, Editor, VFX, Sound Design
Kurtis: Creative Director, Writer

WHAT DOES THAT ENTAIL?
Amy: A lot! I am the point of contact for all the companies and agencies we work with. I oversee production schedules and all social media and marketing for the company. Because we operate out of a small town in Pennsylvania, we rely on Internet service companies such as Tongal, Backstage.com, Voices.com, Design Crowd and Skype to keep us connected with the national brands and talent we work with, who are mostly based in LA and New York. Ten years ago, I don’t think we could have done what we are doing without living in a hub like LA or NYC.

Kris: I handle most of production, post-production and some pre-production. Specifically, storyboarding, shooting, animating, editing, sound design, VFX and so on.

Kurtis: A lot of writing. I basically write everything that our company does, including commercials, pitches and shorts. I help out on our live-action shoots and occasionally direct. I make props and sets for our animation. I am also Something’s Awry’s resident voice actor.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Amy: Probably that playing with toys is something we get paid to do! Building Lego sets and setting up Hot Wheels jumps is all part of the job, and we still get excited when we get a new toy delivery — who wouldn’t? We also get to explore our inner child on a daily basis.

Hot Wheels

Kurtis: A lot of the arts and crafts knowledge I gathered from my childhood has become very useful in my job. We have to make a lot of weird things and knowing how to use clay and construction paper really helps.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Amy: See above. Seriously, we get to play with toys for a living! Being on set and working with actors and crew in cool locations is also great. I also like it when our videos exceed our clients’ expectations.

Kris: The best part of my job is being able to work with all kinds of different toys and just getting the chance to make these weird and entertaining movies out of them.

Kurtis: Having written something and seeing others react positively to it.

WHAT’S YOUR LEAST FAVORITE?
Amy/Kris: Working through the approval process with rounds of changes and approvals from multiple departments throughout a large company. Sometimes it goes smoothly and sometimes it doesn’t.

Kurtis: Sitting down to write.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Amy: Since most of the companies we work with are on the West Coast, my day kicks into high gear around 4:00pm East Coast time.

Kris: I work best in the morning.

Kurtis: My day often consists of hours of struggling to sit down and write followed by about three to four hours where I am very focused and get everything done. Most often those hours occur from 4pm to 7pm, but it varies a lot.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Amy: Probably helping to organize events somewhere. I am not happy unless I am planning or organizing a project or event of some sort.

Kris: Without this job, I’d likely go into some kind of design career or something involving illustration. For me, drawing is one of my secondary interests after filming.

Kurtis: I’d be telling stories in another medium. Whether I’d be making a living doing it is another question.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Amy: I have always loved advertising and creative projects. When I was younger, I was the advertising manager for PNC Bank, but I left the corporate world when I had kids and started my own photography business, which I operated for 10 years. Once my kids became interested in film I wanted to foster that interest, and here we are!

Kris: Filmmaking is something I’ve always had an interest in. I started when I was just eight years old and from there it’s always something I loved to do. The moment when I first realized this would be something I’d follow for an actual career was really around 10th grade, when I started doing it more on a professional level by creating little videos here and there for company YouTube channels. That’s when it all started to sink in that this could actually be a career for me.

Kurtis: I knew I wanted to tell stories very early on. Around 10 years old or so I started doing some home movies. I could get people to laugh and react to the films I made. It turned out to be the medium I could most easily tell stories in so I have stuck with it ever since.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Amy: We are currently in the midst of two major projects — one is a six-video series for Hot Wheels that involves creating six music videos with original songs parodying different music genres. The other is a 12-episode series for Warner Bros.’ Scooby-Doo that features live-action and stop-motion animation. Each episode is a mini-mystery that Scooby and the gang solve. The series focuses on the imaginations of different children and the stories they tell.

We also have two short animations currently on the festival circuit. One is a hybrid of Lovecraft and a Scooby-Doo chase scene called Mary and Marsha in the Manor of Madness. The other is a dark fairytale called The Gift of the Woods.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Amy: Although I am proud of a lot of our projects, I am most proud of the fact that even though we are such a small company, and live in the middle of nowhere, we have been able to work with companies around the world like Lego, Warner Bros. and Mattel. Things we create are seen all over the world, which is pretty cool for us.

Lego

Kris: The Lego Yellow Submarine Beatles film we created is what I’m most proud of. It just turned out to be this nice blend of wacky visuals, crazy action and short, concise storytelling that I try to achieve with most of my films.

Kurtis: I really like the way Mary and Marsha in the Manor of Madness turned out. So far it is the closest we have come to creating something with a unique feel and a sense of energetic momentum, two long-term goals I have for our work. We also recently wrapped filming on a 12-episode branded-content web series. It is our biggest project yet, and I am proud that we were able to handle its production really well.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Amy: Skype, my iPad and the rise of online technology companies such as Tongal, Voices.com, Backstage.com and DesignCrowd that help us get our job done.

Kris: Laptop computers, Wacom drawing tablets and iPhones.

Kurtis: My laptop (and its software, Adobe Premiere and Final Draft), my iPhone and my Kindle.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Amy: Being in this position I like to know what is going on in the industry, so I follow Ad Age, Ad Week, Ad Freak, Mashable, Toy Industry News, iO9, Geek Tyrant and, of course, all the social media channels of our clients like Lego, Warner Bros., Hot Wheels and StikBots. We are also on Twitter (@AmyTheorin), Instagram (@Somethingsawryproductions) and Facebook (Somethingsawry).

Kris: Mostly YouTube and Facebook.

Kurtis: I follow the essays of Film Crit Hulk. His work on screenwriting and storytelling is incredibly well done and eye-opening. Other than that I try to keep up with news, and I follow a handful of serialized web-comics. I try to read, watch and play a lot of different things to get new ideas. You never know when the spaghetti westerns of Sergio Leone might give you the idea for your next toy commercial.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Amy: I don’t usually, but I do like to listen to podcasts. Some of my favorites are How I Built This; Yeah, That’s Probably an Ad; and Fresh Air.

Kris: I listen to whatever pop songs are most popular at the time. Currently, that would be Taylor Swift’s “Look What You Made Me Do.”

Kurtis: I listen to an eclectic mix of soundtracks, classic rock songs I’ve heard in movies, alternative songs I heard in movies, anime theme songs… basically songs I heard with a movie or game and can’t get out of my head. As for particular artists, I am partial to They Might Be Giants, Gorillaz, Queen, and the scores of Ennio Morricone, Darren Korb, Jeff Williams, Shoji Meguro and Yoko Kanno.

IS WORKING WITH FAMILY EASIER OR MORE DIFFICULT THAN WORKING/MANAGING IN A REGULAR AGENCY?
Amy: Both! I actually love working with my sons, and our skill sets are very complementary. I love to organize and my kids don’t. Being family, we can be very upfront with each other in terms of voicing our opinions without having to worry about hurting each other’s feelings.

We know at the end of the day we will always be there for each other no matter what. It sounds cliché but it’s true I think. We have a network of people we also work with on a regular basis who we have great relationships with as well. Sometimes it is hard to turn work off and just be a family though, and I find myself talking with them about projects more often than what is going on with them personally. That’s something I need to work on I guess!

Kris: It’s great because you can more easily communicate and share ideas with each other. It’s generally a lot more open. After a while, it really is just like working within an agency. Everything is fine-tuned and you have worked out a pipeline for creating and producing your videos.

Kurtis: I find it much easier. We all know how we do our best work and what our strengths are. It certainly helps that my family is very good at what they do. Not to mention working from home means I get to set my own hours and don’t have a commute. Sometimes it’s difficult to stay motivated when you’re not in a professional office setting but overall the pros far outweigh the cons.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Amy: I try to take time out to walk our dog, but mostly I love it so much I don’t mind working on projects all the time. If I don’t have something to work on I am not a happy camper. Sometimes I have to remember that not everyone is working on the weekends, so I can’t bother them with work questions!

Kris: It really helps that I don’t often get stressed. At least, not after doing this job for as long as I have. You really learn how to cope with it all. Oftentimes, it’s more just getting exhausted from working long hours. I’ll often just watch some YouTube videos at the end of a day or maybe a movie if there’s something I really want to see.

Kurtis: I like to read and watch interesting stories. I play a lot of games: board games, video games, table-top roleplaying. I also find bike riding improves my mood a lot.

Richard King talks sound design for Dunkirk

Using historical sounds as a reference

By Mel Lambert

Writer/director Christopher Nolan’s latest film follows the fate of nearly 400,000 allied soldiers who were marooned on the beaches of Dunkirk, and the extraordinary plans to rescue them using small ships from nearby English seaports. Although, sadly, more than 68,000 soldiers were captured or killed during the Battle of Dunkirk and the subsequent retreat, more than 300,000 were rescued over a nine-day period in May 1940.

Uniquely, Dunkirk’s primary story arcs — the Mole, or harbor from which the larger ships can take off troops; the Sea, focusing on the English flotilla of small boats; and the Air, spotlighting the activities of Spitfire pilots who protect the beaches and ships from German air-force attacks — follow different timelines, with the Mole sequences being spread over a week, the Sea over a day and the Air over an hour. A Warner Bros. release, Dunkirk stars Fionn Whitehead, Mark Rylance, Cillian Murphy, Tom Hardy and Kenneth Branagh. (An uncredited Michael Caine is the voice heard during various radio communications.)

Richard King

Marking his sixth collaboration with Nolan, supervising sound editor Richard King worked previously on Interstellar (2014), The Dark Knight Rises, Inception, The Dark Knight and The Prestige. He brings his unique sound perspective to these complex narratives, often with innovative sound design. Born in Tampa, King attended the University of South Florida, graduating with a BFA in painting and film, and entered the film industry in 1985. He is the recipient of three Academy Awards for Best Achievement in Sound Editing for Inception, The Dark Knight and Master and Commander: The Far Side of the World (2003), plus two BAFTA Awards and four MPSE Golden Reel Awards for Best Sound Editing.

King, along with Alex Gibson, recently won the Academy Award for Achievement in Sound Editing for Dunkirk.

The Sound of History
“When we first met to discuss the film,” King recalls, “Chris [Nolan] told me that he wanted Dunkirk to be historically accurate but not slavishly so — he didn’t plan to make a documentary. For example, several [Junkers Ju 87] Stuka dive bombers appear in the film, but there are no high-quality recordings of these aircraft, which had sirens built into the wheel struts for intimidation purposes. There are no Stukas still flying, nor could I find any design drawings so we could build our own. Instead, we decided to re-imagine the sound with a variety of unrelated sound effects and ambiences, using the period recordings as inspiration. We went out into a nearby desert with some real air raid sirens, which we over-cranked to make them more and more piercing — and to add some analog distortion. To this more ‘pure’ version of the sound we added an interesting assortment of other disparate sounds. I find the result scary as hell and probably very close to what the real thing sounded like.”

For other period Axis and Allied aircraft, King was able to locate several British Supermarine Spitfire fighters and a Bristol Blenheim bomber, together with a German Messerschmitt Bf 109 fighter. “There are about 200 Spitfires in the world that still fly; three were used during filming of Dunkirk,” King continues. “We received those recordings, and in post recorded three additional Spitfires.”

King was able to place up to 24 microphones in various locations around the airframe near the engine — a supercharged V-12 Rolls-Royce Merlin liquid-cooled model of 27-liter capacity, and later 37-liter Griffon motors — as well as close to the exhaust and within the cockpit, as the pilots performed a number of aerial movements. “We used both mono and stereo mics to provide a wide selection for sound design,” he says.

King was looking for the sound of an “air ballet” with the aircraft moving quickly across the sky. “There are moments when the plane sounds are minimized to place the audience more in the pilot’s head, and there are sequences where the plane engines are more prominent,” he says. “We also wanted to recreate the vibrations of this vintage aircraft, which became an important sound design element and was inspired by the shuddering images. I remember that Chris went up in a trainer aircraft to experience the sensation for himself. He reported that it was extremely loud with lots of vibration.”

To match up with the edited visuals secured from 65/70mm IMAX and Super Panavision 65mm film cameras, King needed to produce a variety of aircraft sounds. “We had an ex-RAF pilot who had flown in modern dogfights recreate some of those wartime flying gymnastics. The planes don’t actually produce dramatic changes in sound when throttling and maneuvering, so I came up with a simple and effective way to accentuate this somewhat. I wanted the planes to respond to the pilot’s stick and throttle movements immediately.”

For armaments, King’s sound effects recordists John Fasal and Eric Potter oversaw the recording of a vintage Bofors 40mm anti-aircraft cannon seen aboard the allied destroyers and support ships. “We found one in Napa Valley,” north of San Francisco, says King. “The owner had to make up live rounds, which we fired into a nearby hill. We also recorded a number of WWII British Lee-Enfield bolt-action rifles and German machine guns on a nearby range. We had to recreate the sound of the Spitfire’s guns, because the actual guns fitted to the Spitfires overheat when fired at sea level and cannot maintain the 1,000 rounds/minute rate we were looking for, except at altitude.”

King readily acknowledges the work at Warner Bros. Sound Services of sound effects editor Michael Mitchell, who worked on several scenes, including the ship sinkings, and sound effects editor Randy Torres, who worked with King on the plane sequences.

Group ADR was done primarily in the UK, “where we recorded at De Lane Lea and onboard a decommissioned WWII warship owned by the Imperial War Museum,” King recalls. “The HMS Belfast, which is moored on the River Thames in central London, was perfect for the reverberant interiors we needed for the various ships that sink in the film. We also secured some realistic Foley of people walking up and down ladders and on the superstructure.” Hugo Weng served as dialog editor and David Bach as supervising ADR editor.

Sounds for Moonstone, the key small boat whose fortunes the film follows across the English Channel, were recorded out of Marina del Rey in Southern California, “including its motor and water slaps against the hull,” King says. “We also secured some nice Foley on deck, as well as opening and closing of doors.”

Conventional Foley was recorded at Skywalker Sound in Northern California by Shelley Roden, Scott Curtis and John Roesch. “Good Foley was very important for Dunkirk,” explains King. “It all needed to sound absolutely realistic and not like a Hollywood war movie, with a collection of WWII clichés. We wanted it to sound as it would for the film’s characters. John and his team had access to some great surfaces and textures, and a wonderful selection of props.” Michael Dressel served as supervising Foley editor.

In terms of sound design, King offers that he used historical sounds as a reference, to conjure up the terror of the Battle for Dunkirk. “I wanted it to feel like a well-recorded version of the original event. The book ‘Voices of Dunkirk,’ written by Joshua Levine and based on a compilation of first-hand accounts of the evacuation, inspired me and helped me shape the explosions on the beach, with the muffled ‘boom’ as the shells and bombs bury themselves in the sand and then explode. The under-water explosions needed to sound more like a body slam than an audible noise. I added other sounds that amped it a couple more degrees.”

The soundtrack was re-recorded in 5.1-channel format at Warner Bros. Sound Services Stage 9 in Burbank during a six-week mix, with Gary Rizzo handling dialog and Gregg Landaker overseeing sound effects and music; Dunkirk was Landaker’s last film before retiring. “There was almost no looping on the film aside from maybe a couple of lines,” King recalls. “Hugo Weng mined the recordings for every gem, and Gary [Rizzo] was brilliant at cleaning up the voices and pushing them through the barrage of sound provided by sound effects and music, somehow without making them sound pushed. Production recordist Mark Weingarten faced enormous challenges, contending with strong wind and salt spray, but he managed to record tracks Gary could work with.”

The sound designer reports that he provided some 20 to 30 tracks of dialog and ADR “with options for noisy environments,” plus 40 to 50 tracks of Foley, dependent on the action. This included shoes and hob-nailed army boots, and groups of 20, especially in the ship scenes. “The score by composer Hans Zimmer kept evolving as we moved through the mixing process,” says King. “Music editor Ryan Rubin and supervising music editor Alex Gibson were active participants in this evolution.”

“We did not want to repeat ourselves or repeat others’ work,” King concludes. “All sounds in this movie mean something. Every scene had to be designed with a hard-hitting sound. You need to constantly question yourself: ‘Is there a better sound we could use? Maybe something different, appropriate to the sequence, that recreates the event in a new and fresh light?’ I am super-proud of this film and the track.”

Nolan — who was born in London to an American mother and an English father and whose family subsequently split their time between London and Illinois — has this quote on his IMDb page: “This is an essential moment in the history of the Second World War. If this evacuation had not been a success, Great Britain would have been obliged to capitulate. And the whole world would have been lost, or would have known a different fate: the Germans would undoubtedly have conquered Europe, the US would not have returned to war. Militarily it is a defeat; on the human plane it is a colossal victory.”

Certainly, the loss of life and supplies was profound — wartime Prime Minister Winston Churchill described Operation Dynamo as “the greatest military disaster in our long history.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

The sounds of Spider-Man: Homecoming

By Jennifer Walden

Columbia Pictures and Marvel Studios’ Spider-Man: Homecoming, directed by Jon Watts, casts Tom Holland as Spider-Man, a role he first played in 2016 for Marvel Studios’ Captain America: Civil War (directed by Joe and Anthony Russo).

Homecoming reprises a few key character roles, like Tony Stark/Iron Man (Robert Downey Jr.) and Aunt May Parker (Marisa Tomei), and it picks up a thread of Civil War’s storyline. In Civil War, Peter Parker/Spider-Man helped Tony Stark’s Avengers in their fight against Captain America’s Avengers. Homecoming picks up after that battle, as Parker settles back into his high school life while still fighting crime on the side to hone his superhero skills. He seeks to prove himself to Stark but ends up becoming entangled with the supervillain Vulture (Michael Keaton).

Steven Ticknor

Spider-Man: Homecoming supervising sound editors/sound designers Steven Ticknor and Eric A. Norris — working at Culver City’s Sony Pictures Post Production Services — both brought Spidey experience to the film. Ticknor was a sound designer on director Sam Raimi’s Spider-Man (2002) and Norris was supervising sound editor/sound designer on director Marc Webb’s The Amazing Spider-Man 2 (2014). With experiences from two different versions of Spider-Man, together Ticknor and Norris provided a well-rounded knowledge of the superhero’s sound history for Homecoming. They knew what had worked in the past and how to make this Spider-Man sound fresh. “This film took a ground-up approach but we also took into consideration the magnitude of the movie,” says Ticknor. “We had to keep in mind that Spider-Man is one of Marvel’s key characters and he has a huge fan base.”

Web Slinging
Being a sequel, Ticknor and Norris honored the sound of Spider-Man’s web slinging ability that was established in Captain America: Civil War, but they also enhanced it to create a subtle difference between Spider-Man’s two suits in Homecoming. There’s the teched-out Tony Stark-built suit that uses the Civil War web-slinging sound, and then there’s Spider-Man’s homemade suit. “I recorded a couple of 5,000-foot magnetic tape cores unraveling very fast, and to that I added whooshes and other elements that gave a sense of speed. Underneath, I had some of the web sounds from the Tony Stark suit. That way the sound for the homemade suit had the same feel as the Stark suit but with an old-school flair,” explains Ticknor.

One new feature of Spider-Man’s Stark suit is that it has expressive eye movements. His eyes can narrow or grow wide with surprise, and those movements are articulated with sound. Norris says, “We initially went with a thin servo-type sound, but the filmmakers were looking for something less electrical. We had the idea to use the lens of a DSLR camera to manually zoom it in and out, so there’s no motor sound. We recorded it up close in the quiet environment of an unused ADR stage. That’s the primary sound for his eye movement.”

Droney
Another new feature is the addition of Droney, a small reconnaissance drone that pops off of Spider-Man’s suit and flies around. The sound of Droney was one of director Watts’ initial focus points. He wanted it to sound fun and have a bit of personality. He wanted Droney “to be able to vocalize in a way, sort of like Wall-E,” explains Norris.

Ticknor had the idea of creating Droney’s sound using a turbo toy — a small toy that has a mouthpiece and a spinning fan. Blowing into the mouthpiece makes the fan spin, which generates a whirring sound. The faster the fan spins, the higher the pitch of the generated sound. By modulating the pitch, they created a voice-like quality for Droney. Norris and sound effects editor Andy Sisul performed and recorded an array of turbo toy sounds to use during editorial. Ticknor also added in the sound of a reel-to-reel machine rewinding, which he sped up and manipulated “so that it sounded like Droney was fluttering as it was flying,” Ticknor says.
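That pitch-modulation trick, varying the playback speed of a steady whir so it takes on a voice-like contour, can be sketched in a few lines. This is a hypothetical illustration (the `pitch_modulate` helper and the synthetic sine-wave "whir" are stand-ins, not the editors' actual tools or workflow):

```python
import numpy as np

def pitch_modulate(samples, contour):
    """Resample a steady sound so its pitch follows `contour`,
    an array of pitch ratios (1.0 = original, 2.0 = octave up).
    Hypothetical helper for illustration only."""
    # Per-sample pitch ratio, interpolated from the coarse contour.
    ratios = np.interp(np.arange(len(samples)),
                       np.linspace(0, len(samples) - 1, len(contour)),
                       contour)
    # Variable-speed playback: the read position advances faster
    # wherever the contour asks for higher pitch.
    positions = np.cumsum(ratios)
    positions = positions[positions < len(samples) - 1]
    idx = positions.astype(int)
    frac = positions - idx
    # Linear interpolation between neighboring samples.
    return samples[idx] * (1 - frac) + samples[idx + 1] * frac

# A steady 220 Hz "fan whir" stand-in, swept up an octave and back down.
rate = 22050
t = np.arange(rate) / rate
whir = np.sin(2 * np.pi * 220 * t)
contour = np.concatenate([np.linspace(1.0, 2.0, 50),
                          np.linspace(2.0, 1.0, 50)])
voiced = pitch_modulate(whir, contour)
```

Because the average ratio is above 1.0, the modulated result plays out faster (and shorter) than the source, the same way a sped-up reel-to-reel rewind rises in pitch.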

The Vulture
Supervillain the Vulture offers a unique opportunity for sound design. His alien-tech enhanced suit incorporates two large fans that give him the ability to fly. Norris, who was involved in the initial sound design of Vulture’s suit, created whooshes using Whoosh by Melted Sounds — a whoosh generator that runs in Native Instruments Reaktor. “You put individual samples in there and it creates a whoosh by doing a Doppler shift and granular synthesis as a way of elongating short sounds. I fed different metal ratcheting sounds into it because Vulture’s suit almost has these metallic feathers. We wanted to articulate the sound of all of these different metallic pieces moving together. I also fed sword shings into it and came up with these whooshes that helped define the movement as the Vulture was flying around,” he says. Sound designer/re-recording mixer Tony Lamberti was also instrumental in creating Vulture’s sound.
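The two ideas Norris names, granular synthesis to elongate a short sample and a Doppler-style pitch sweep to sell the fly-by, can be roughed out as follows. Everything here (the `granular_whoosh` helper, the noise stand-in for a metal-ratchet recording) is an assumption for illustration; it is not Melted Sounds' Whoosh or its actual algorithm:

```python
import numpy as np

np.random.seed(0)

def granular_whoosh(grain_src, rate, duration, pitch_span=(1.5, 0.6)):
    """Stretch a short source to `duration` seconds by overlapping
    windowed grains, while sweeping the pitch ratio downward to
    mimic a source passing by (Doppler-like). Sketch only."""
    out = np.zeros(int(rate * duration))
    grain_len = min(1024, len(grain_src))
    window = np.hanning(grain_len)
    hop = grain_len // 2
    n_grains = (len(out) - grain_len) // hop
    for i in range(n_grains):
        # Pitch ratio falls over the course of the whoosh.
        ratio = np.interp(i / n_grains, [0, 1], pitch_span)
        # Read each grain from a random spot, resampled by `ratio`.
        start = np.random.randint(0, len(grain_src) - int(grain_len * ratio) - 1)
        src_idx = start + np.arange(grain_len) * ratio
        grain = np.interp(src_idx, np.arange(len(grain_src)), grain_src)
        out[i * hop:i * hop + grain_len] += grain * window
    return out / max(1e-9, np.abs(out).max())  # normalize to full scale

rate = 22050
metal = np.random.randn(4096)  # stand-in for a short metal-ratchet recording
whoosh = granular_whoosh(metal, rate, duration=1.5)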

Alien technology is prevalent in the film. For instance, it’s a key ingredient to Vulture’s suit. The film’s sound needed to reflect the alien influence but also had to feel realistic to a degree. “We started with synthesized sounds, but we then had to find something that grounded it in reality,” reports Ticknor. “That’s always the balance of creating sound design. You can make it sound really cool, but it doesn’t always connect to the screen. Adding organic elements — like wind gusts and debris — make it suddenly feel real. We used a lot of synthesized sounds to create Vulture, but we also used a lot of real sounds.”

The Washington Monument
One of the big scenes that Ticknor handled was the Washington Monument elevator sequence. Spider-Man stands on the top of the Washington Monument and prepares to jump over a helicopter that looms ever closer. He clears the helicopter’s blades and shoots a web onto the helicopter’s skid, using that to sling himself through a window just in time to shoot another web that grabs onto the compromised elevator car that contains his friends. “When Spider-Man jumps over the helicopter, I couldn’t wait to make that work perfectly,” says Ticknor. “When he is flying over the helicopter blades it sounds different. It sounds more threatening. Sound creates an emotion but people don’t realize how sound is creating the emotion because it is happening so quickly sometimes.”

To achieve a more threatening blade sound, Ticknor added in scissor slicing sounds, which he treated using a variety of tools like zPlane Elastique Pitch 2 and plug-ins from FabFilter and Soundtoys, all within the Avid Pro Tools 12 environment. “This made the slicing sound like it was about to cut his head off. I took the helicopter blades and slowed them down and added low-end sweeteners to give a sense of heaviness. I put all of that through the plug-ins and basically experimented. The hardest part of sound design is experimenting and finding things that work. There’s also music playing in that scene as well. You have to make the music play with the sound design.”

When designing sounds, Ticknor likes to generate a ton of potential material. “I make a library of sound effects — it’s like a mad science experiment. You do something and then wonder, ‘How did I just do that? What did I just do?’ When you are in a rhythm, you do it all because you know there is no going back. If you just do what you need, it’s never enough. You always need more than you think. The picture is going to change and the VFX are going to change and timings are going to change. Everything is going to change, and you need to be prepared for that.”

Syncing to Picture
To help keep the complex soundtrack in sync with the evolving picture, Norris used Conformalizer by Cargo Cult. Using the EDL of picture changes, Conformalizer makes the necessary adjustments in Pro Tools to resync the sound to the new picture.

Norris explains some key benefits of Conformalizer. “First, when you’re working in Pro Tools you can only see one picture at a time, so you have to go back and forth between the two different pictures to compare. With Conformalizer, you can see the two different pictures simultaneously. It also does a mathematical computation on the two pictures in a separate window, a difference window, which shows the differences in white. It highlights all the subtle visual effects changes that you may not have noticed.

Eric Norris

For example, in the beginning of the film, Peter leaves school and heads out to do some crime fighting. In an alleyway, he changes from his school clothes into his Spider-Man suit. As he’s changing, he knocks into a trash can and a couple of rats fall out and scurry away. Those rats were CG and they didn’t appear until the end of the process. So the rats in the difference window were bright white while everything else was a dark color.”
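At heart, the difference window is a per-pixel comparison of the two picture versions: identical pixels cancel to black, changed pixels (like late-arriving CG rats) stand out in white. A tiny sketch of the idea, with made-up frame data (the `difference_frame` helper is illustrative, not Conformalizer's implementation):

```python
import numpy as np

def difference_frame(old_frame, new_frame):
    """Pixels that changed between the two picture versions show up
    bright; unchanged areas stay dark. Frames are arrays in [0, 1]."""
    diff = np.abs(new_frame.astype(float) - old_frame.astype(float))
    return diff / max(float(diff.max()), 1e-9)  # biggest change maps to white (1.0)

# Old frame: empty alley. New frame: same alley plus one bright CG element.
old = np.zeros((4, 6))
new = old.copy()
new[2, 3] = 0.8            # the added visual effect
out = difference_frame(old, new)
```

Only the changed pixel survives the subtraction, which is why subtle VFX revisions that the eye would miss pop out immediately in such a view.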

Another benefit is that the Conformalizer change list can be used on multiple Pro Tools sessions. Most feature films have the sound effects, including Foley and backgrounds, in one session. For Spider-Man: Homecoming, it was split into multiple sessions, with Foley and backgrounds in one session and the sound effects in another.

“Once you get that change list you can run it on all the Pro Tools sessions,” explains Norris. “It saves time and it helps with accuracy. There are so many sounds and details that match the visuals and we need to make sure that we are conforming accurately. When things get hectic, especially near the end of the schedule, and we’re finalizing the track and still getting new visual effects, it becomes a very detail-oriented process and any tools that can help with that are greatly appreciated.”
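The conform step itself boils down to applying an offset map, derived from the picture-change EDL, to every sound event's position, and running that same map over each Pro Tools session. A toy sketch with plain tuples rather than real sessions (the `conform` function and the change-list format are illustrative assumptions):

```python
def conform(events, changes):
    """Re-time sound events after a picture change.
    `changes` is a list of (old_start, old_end, offset) tuples:
    material inside [old_start, old_end) slides by `offset` seconds;
    events in a removed range come back as None, flagged for review.
    Sketch only -- real conform tools operate on Pro Tools sessions."""
    retimed = []
    for t, name in events:
        new_t = None
        for old_start, old_end, offset in changes:
            if old_start <= t < old_end:
                new_t = t + offset
                break
        retimed.append((new_t, name))
    return retimed

# Example: 2 s of picture was cut between 10 s and 12 s.
changes = [(0.0, 10.0, 0.0),      # head of reel unchanged
           (12.0, 3600.0, -2.0)]  # everything after the cut slides earlier
events = [(4.0, "door"), (11.0, "rat scurry"), (30.0, "web shot")]
print(conform(events, changes))
# → [(4.0, 'door'), (None, 'rat scurry'), (28.0, 'web shot')]
```

Because the change list is just data, the identical map can be replayed on the effects session, the Foley/backgrounds session and any others, which is the accuracy and time saving Norris describes.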

Creating the soundtrack for Spider-Man: Homecoming required collaboration on a massive scale. “When you’re doing a film like this, it just has to run well. Unless you’re really organized, you’ll never be able to keep up. That’s the beautiful thing, when you’re organized you can be creative. Everything was so well organized that we got an opportunity to be super creative and for that, we were really lucky. As a crew, we were so lucky to work on this film,” concludes Ticknor.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.