
Audio post vet Paul Rodriguez has passed away

It is with a heavy heart that we share the news that post sound vet and all-around nice guy Paul Rodriguez passed away September 26th in Los Angeles of cardiac arrest after a brief hospitalization. He was 65.

Rodriguez was president of South Lake Audio Services and VP of audio services and development at Roundabout Entertainment in Burbank, where he oversaw post production sound for projects including HBO’s Westworld. He was also a long-time board member of the Motion Picture Sound Editors (MPSE) and served as its treasurer for eight years. He produced the organization’s annual MPSE Golden Reel Awards ceremony.

An active member of the professional sound community for more than 30 years, Rodriguez served in executive, sales and creative capacities at Todd-AO/Soundelux, Wilshire Stages, 4MC and EFX Systems. He was also co-owner of the Eagle Eye Film Company, a supplier of picture editing systems. He joined Roundabout Entertainment in 2015. Known for his infectious humor and gregarious personality, Rodriguez was a tireless ambassador for the art of entertainment sound and enjoyed universal respect and affection among his industry colleagues and friends.

“Paul will be remembered for the energy, wisdom and true dedication he gave to the sound industry,” said MPSE president Tom McCarthy. “His passing leaves a great void on our board and in the hearts of our members.”

postPerspective had the opportunity to interview Paul at NAB this past April. He was funny and smart and a pleasure to be around. His positive attitude and humor were contagious.

Rodriguez is survived by his son Hunter, daughter-in-law Abbie and granddaughter Charlie; daughter Rachael and son-in-law Manny Wong; daughter Alexa and her partner James Gill; his former wife, Catheryn Rodriguez; and several sisters.

Donations in Rodriguez’s name may be made to Montrose Church, Best Friends Animal Society or Alzheimer’s Association.

Eleven’s Ben Freer celebrates 10 years, Jordan Meltzer now mixer

Eleven, a Santa Monica-based audio boutique, has some mixer news. Ben Freer is celebrating his 10th year with the studio, and Jordan Meltzer has been promoted to mixer and sound designer.

A Manchester native with a California upbringing, Freer was inspired by all things sound from a young age and was first introduced to Eleven as an intern in 2007. He was mentored by Eleven founder/mixer Jeff Payne and quickly climbed the ranks to become an official staff member that same year. Freer has mixed for renowned clients in the advertising and multimedia industries, including Toyota, GMC, T-Mobile, Nike, H&R Block, The Weeknd and Lorde.

“When I started at Eleven, I didn’t know much about audio mixing, I just knew that I wanted to immerse myself in it,” says Freer. “Working with the industry’s best and eventually getting my own mix room has been an incredibly humbling experience.”

Los Angeles native Jordan Meltzer got hooked on sound and began gravitating toward the craft after seeing The Who perform at the Hollywood Bowl at age 9. He played in bands while growing up in the San Fernando Valley, eventually completing a BA in audio post production at Emerson College. After joining Eleven as an intern, like Freer, he climbed the ranks and took on the role of assistant mixer, building his portfolio on a variety of films and commercials with clients including HP, Dodge, Disney, FitBit and Sam Smith. Meltzer’s contributions led to his recent promotion to mixer and sound designer.

“Climbing the Eleven ladder has been fulfilling, satisfying and challenging,” says Meltzer. “I remember sitting in the studio as an intern with Ben and Jeff, trying to learn and absorb it all. I always saw myself sitting in the chair, and it’s truly an honor to now be recognized as a mixer at such a warm, supportive and creative company.”

Main Image: L-R: Ben Freer and Jordan Meltzer

Emmy Awards: American Horror Story: Roanoke

A chat with supervising sound editor Gary Megregian

By Jennifer Walden

Moving across the country and buying a new house is an exciting and scary process, but when it starts raining teeth at that new residence, the scary pretty much cancels out the exciting. That’s the situation that Matt and Shelby, a couple from Los Angeles, find themselves in for American Horror Story’s sixth season on FX Networks. After moving into an old mansion in Roanoke, North Carolina, they discover that the dwelling and the local neighbors aren’t so accepting of outsiders.

American Horror Story: Roanoke explores a true-crime-style format that uses re-enactments to play out the drama. The role of Matt is played by Andre Holland in “reality” and by Cuba Gooding, Jr. in the re-enactments. Shelby is played by Lily Rabe and Sarah Paulson, respectively. It’s an interesting approach that added a new dynamic to an already creative series.

Gary Megregian, an Emmy-winning supervising sound editor at Technicolor at Paramount, is currently working on his seventh season of American Horror Story, coming to FX in early September. He took some time out to talk about Season 6’s first episode, “Chapter 1,” for which he and his sound editorial team have been nominated for an Emmy for Outstanding Sound Editing for a Limited Series. They won the Emmy in 2013, and this year marks their sixth nomination.

American Horror Story: Roanoke is structured as a true-crime series with re-enactments. What opportunities did this format offer you sound-wise?
This season was a lot of fun in that we had both the realistic world and the creative world to play in. The first half of the series dealt more with re-enactments than the reality-based segments, especially in Chapter 1. Aside from some interview segments, it was all re-enactments. The re-enactments were where we had more creative freedom for design. It gave us a chance to create a voice for the house and the otherworldly elements.

Gary Megregian

Was series creator Ryan Murphy still your point person for sound direction? For Chapter 1, did he have specific ideas for sound?
Ryan Murphy is definitely the single voice in all of his shows, but my point person for sound direction is his executive producer Alexis Martin Woodall, as well as each episode’s picture editor.

Having worked with them for close to eight years now, there’s a lot of trust. I usually have a talk with them early each season about what direction Ryan wants to go in and then talk to the picture editor and assistant as they’re building the show.

The first night in the house in Roanoke, Matt and Shelby hear this pig-like scream coming from outside. That sound occurs often throughout the episode. How did that sound come to be? What went into it?
The pig sounds are definitely a theme that goes through Season 6, but they started all the way back in Season 1 with the introduction of Piggy Man. Originally, when Shelby and Matt first hear the pig we had tried designing something that fell more into an otherworldly sound, but Ryan definitely wanted it to be real. Other times, when we see Piggy Man we went back to the design we used in Season 1.

The doors in the house sound really cool, especially that back door. What were the sources for the door sounds? Did you do any processing on the recordings to make them spookier?
Thanks. Some of the doors came from our library at Technicolor and some were from a crowd-sourced project from New Zealand-based sound designer Tim Prebble. I had participated in a project where he asked everyone involved to record a complete set of opens, closes, knocks, squeaks, etc. for 10 doors. When all was said and done, I gained a library of over 100GB of amazing door recordings. That’s my go-to for interesting doors.

As far as processing goes, nothing out of the ordinary was used. It’s all about finding the right sound.

When Shelby and Lee (Adina Porter) are in the basement, they watch this home movie featuring Piggy Man. Can you tell me about the sound work there?
The home movie was a combination of the production dialogue, Foley, the couple of instances of pig squeals and Piggy Man design, along with VHS and CRT noise. For dialogue, we didn’t clean up the production tracks too much, and Foley was used to help ground it. Once we got to the mix stage, re-recording mixers Joe Earle and Doug Andham helped bring it all together in their treatment.

What was your favorite scene to design? Why? What went into the sound?
One of my favorite scenes is the hail/teeth storm when Shelby’s alone in the house. I love the way it starts slow and builds from the inside, hearing the teeth on the skylight and windows. Once we step outside it opens up to surround us. I think our effects editor/designer Tim Cleveland did a great job on this scene. We used a number of hail/rain recordings along with Foley to help with some of the detail work, especially once we step outside.

Were there any audio tools that were helpful when working on Chapter 1? Can you share specific examples of how you used them?
I’m going to sound like many others in this profession, but I’d say iZotope RX. Ryan is not a big fan of ADR, so we have to make the production work. I can count on one hand the number of times we’ve had any actors in for ADR last season. That’s a testament to our production mixer Brendan Beebe and dialogue editor Steve Stuhr. While the production is well covered and well recorded, Steve still has his work cut out for him to present a track that’s clean. The iZotope RX suite helps with that.
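RX bundles far more sophisticated processing than this, but the core idea behind that kind of dialogue cleanup — profile the noise, then attenuate frequency bins that don’t rise above it — can be sketched in a few lines. This is an illustrative toy of generic spectral gating, not iZotope’s algorithm; the tone and noise below are synthetic stand-ins for production dialogue.

```python
import numpy as np

def spectral_gate(audio, noise_clip, frame=1024, hop=512, threshold_db=6.0):
    """Crude spectral noise gate: build a per-bin noise profile from a
    noise-only clip, then zero STFT bins that don't exceed it."""
    window = np.hanning(frame)

    def stft(x):
        frames = [x[i:i + frame] * window for i in range(0, len(x) - frame, hop)]
        return np.array([np.fft.rfft(f) for f in frames])

    noise_mag = np.abs(stft(noise_clip)).mean(axis=0)            # noise profile
    spec = stft(audio)
    gate = np.abs(spec) > noise_mag * 10 ** (threshold_db / 20)  # keep loud bins
    spec_clean = spec * gate

    # Overlap-add resynthesis back to a time-domain signal.
    out = np.zeros(len(audio))
    for i, f in enumerate(spec_clean):
        out[i * hop : i * hop + frame] += np.fft.irfft(f, n=frame) * window
    return out

# Example: a 440 Hz tone buried in noise, gated against a noise-only clip.
rng = np.random.default_rng(0)
sr = 16000
noise_clip = rng.standard_normal(sr // 2) * 0.1
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
audio = tone + rng.standard_normal(sr) * 0.1
cleaned = spectral_gate(audio, noise_clip)
```

The tone sits far above the noise profile, so its bins survive the gate while most noise-only bins are zeroed.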

Why did you choose Chapter 1 for Emmy consideration for its sound editorial?
One of the things I love about working on American Horror Story is that every season is like starting a new show. It’s fun to establish the sound and the tone of a show, and Chapter 1 is no exception. It’s a great representation of our crew’s talent and I’m really happy for them that they’re being recognized for it. It’s truly an honor.

Emmy Awards: HBO’s The Night Of

Nominee Nicholas Renbeck, supervising sound editor/re-recording mixer

By Jennifer Walden

The HBO drama series The Night Of tells the tale of Nasir “Naz” Khan, a young Pakistani-American man accused of brutally murdering a young woman in her uptown Manhattan home. The series takes the audience on a tour of New York City’s penal system, from the precinct to the morgue, into the courtroom and out to Rikers Island. It also explores different neighborhoods, from uptown Manhattan across the East River into Queens. Each location has a rich tapestry of sound, a vibrant background upon which the drama plays out.

Supervising sound editor/re-recording mixer Nicholas Renbeck, from c5 Sound in New York, has been nominated for two Emmys for his work on the show: one for Outstanding Sound Editing for a Limited Series for Ep. 2, “Subtle Beast,” and one for Outstanding Sound Mixing for a Limited Series for Ep. 1, “The Beach.” He’s already won a 2017 Golden Reel Award for Best Sound Editing on The Night Of.

Here he shares insight on building the expressive backgrounds and mixing the effects to create a rich world around the actors.

Nicholas Renbeck

How did you get involved with the show?
They were looking to do the sound in New York and c5 Sound was one of the places they were considering. I interviewed for the job and ended up getting it.

I flew out to Los Angeles while they were wrapping up locking the picture cut. Just prior to going, they had sent me screening links to watch the series, all but the last episode. So I viewed the first seven episodes pretty much straight in a row, and in less than 24 hours I got on the plane and flew out to LA to spot the entire show with Steve Zaillian (series creator/director/writer), still not knowing what happens in the last episode. While on the plane I had all these possible sound ideas swirling around in my head, mixed with this deep desire to know what happens in the final episode.

Then upon arriving I sat and did a spotting session with Steve and Nick Houy, the picture editor. We watched all eight episodes over a two-day period and talked about the sound concerns and possibilities.

This was your first time working with showrunners Richard Price and Steven Zaillian. Did they have specific plans for how they wanted to use sound in the show?
Steve had a definite vision for where he wanted to go with the show. He had very specific ideas on what it would sound like in the prison, or what the city should sound like depending on the neighborhood. When I sat down with them, they already had a lot of sounds in their Avid Media Composer that they were working with. Actually, much more than any show I’ve worked on before.

Warren Shaw (a fellow supervising sound editor/sound designer who was New York-based but went out to Los Angeles a little while ago) had been brought onto the show early on while they were still cutting. Warren did some great initial sound design for them on a few of the later episodes. I got to hear what his ideas were and we brought his work, along with everything they had in the Avid, into our working sound sessions. Then Ruy Garcia, Wyatt Sprague (sound design/effects editors) and I kept going further, adding more elements and refining ideas.

I find there’s always a transitional step when moving from a mono or stereo Avid track into a 5.1 surround environment. Everybody up to this point is used to listening to things in a certain way. Now we’ve added four more speakers, and there’s a readjustment process that happens. So, I spent a good amount of time working to present all the material in a way that would play to the strengths of a 5.1 sound environment.

What came about was a wonderful combination of all our ideas up to that point. I would make a full 5.1 sound effects premix in one of c5’s sound design suites for an entire episode, then bring Steve in and get his reaction, and then afterward build from that. What we learned from working with Steve on Episode 102 we would then take and apply to Episode 103, building as we went.

How did they want the prison to sound? What descriptions did they give?
You hear this low rumbling tone, this presence of heaviness. That really spoke to Steve’s idea of what he wanted the prison atmosphere to encompass. We found sounds and tones to mold that mood, working to create what that feeling is like when the prison is busy and full of activity. We also created the flip side of what that oppressive sound is when the lights are out and we are alone with Naz [Riz Ahmed] in this very scary place that’s now quiet. We kept working to give the cell block a heaviness so that it feels like it’s pulling you down as you go through these scenes with Naz and see what his life has become at this point.

Marissa Littlefield, our ADR supervisor, Steve and I had conversations about what we needed in terms of added voices and how we would handle that. We did a lot of interesting casting for loop group, with a focus on being specific to the locations around the city. We definitely put our loop group coordinators Dann Fink and Bruce Winant (of Loopers Unlimited) through the paces of casting. It was nice to be able to combine those added voices from the loop group with the substantial production recording that was done on set, along with a number of sounds we had in our personal sound libraries. I think we were pretty successful at creating those different locations based on both voices and sound atmospheres.

What about the reverb work for the prison and the precinct? You have dry loop group recordings, so what reverbs did you use to help fit those into the environments?
I jump back and forth between Avid’s ReVibe II and Space and Audio Ease’s Altiverb. In doing some of his design work, I know Ruy liked to use Soundtoys’ EchoBoy delay for some fun stuff, and I believe Michael Berry (re-recording mixer on music/dialog/ADR/Foley) used ReVibe II and Altiverb for most of the show. So there was a variety of different reverbs and effects that we would use.

In some cases, we would apply reverb directly to the sound file, and in other cases we would wait until we got to the mix. In terms of the loop group voices, Michael Berry spent time figuring out where he wanted those to sit — how far back in the environment they would play and how they would play against the effects tracks that we created. We found a nice balance there.
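Applying reverb directly to a sound file, as opposed to riding it on the mix stage, is conceptually just convolution with a room impulse response — the principle behind IR reverbs like Altiverb. A minimal sketch under that assumption, with a synthetic decaying-noise impulse response standing in for a measured one (this is not any plugin’s actual processing):

```python
import numpy as np

def apply_reverb(dry, impulse_response, wet_mix=0.3):
    """Convolve a dry signal with a room impulse response, then blend
    the reverberant result back with the original."""
    wet = np.convolve(dry, impulse_response)                  # dry * IR = reverb tail
    padded_dry = np.pad(dry, (0, len(wet) - len(dry)))        # align lengths
    return (1.0 - wet_mix) * padded_dry + wet_mix * wet

# Synthetic half-second "room": exponentially decaying noise as a stand-in IR.
rng = np.random.default_rng(0)
sr = 16000
ir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0.0, 8.0, sr // 2))
dry = np.sin(2 * np.pi * 440 * np.arange(sr // 10) / sr)      # 100 ms tone
wet = apply_reverb(dry, ir)
```

The output is longer than the input because the reverb tail rings on after the dry sound stops — which is exactly why committing reverb to a file is a one-way decision, while waiting for the mix keeps the balance adjustable.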

Where did you mix “The Beach” episode? What console did you use?
Michael Berry was in charge of all the dialog, ADR, music and Foley premixing, which he did at PostWorks/Technicolor in New York, on the Avid S5. I did the sound effects premixing at c5 Sound, in a 5.1 design/mix room on an Avid D-Command. The final mix then happened at PostWorks/Technicolor. All of the sound editorial was done at c5.

What were some challenges you had while mixing “The Beach” and how did you handle them?
The trickiest scene for us was the one under the George Washington Bridge. The production tracks were challenging due to the noise of the river and the George Washington Bridge overhead. However, the performances were so good we really wanted to save them at all costs. Sara Stern (dialogue editor) worked for a good while to clean up the initial dialogue, and then Michael [Berry] really worked at those tracks to find a way to save and salvage the on-camera performances. iZotope RX5 (RX6 wasn’t out yet) was our friend in a big way.

Then we had to figure out where the atmospheres wanted to be because the performances are so strong that you don’t want to put the effects or the music over what the actors are doing. You don’t want to overpower that or take away from what is happening on-screen. There’s a lot of subtlety in our decisions. A little went a long way.

Did you have a favorite scene in terms of mixing sound effects on your side of the board?
I really liked the opening section of the Queens neighborhood during the day and going into the night with the drive into Manhattan. The whole driving sequence into the city in the cab has some real nice moments…the juxtaposing of the interiors of the house and cab with the city’s night exteriors.

Of all the episodes you could’ve picked from Season 1, why did you choose the mix on “The Beach” for Emmy consideration?
It’s the first episode and it really grabs you. I was just sitting there on the edge of my seat watching it for the first time. The performances were so powerful and our challenge was to add to that. How can you help build on that?

Steve, Michael and I felt this was the right episode to go with. It has interesting atmospheric sounds, the music is strong and the performances are strong. Across the board, the music, the effects and the dialogue were all there nicely represented.

Let’s talk about the sound editing on “Subtle Beast,” which is up for Emmy consideration. What were some opportunities you had for creative sound on this episode?
What was nice about “Subtle Beast” is that we had so many different and interesting locations to address and figure out. There is the morgue, which is the hallway and the waiting area, the parking lot outside and the morgue itself. All of those were fantastic spots where we could design the backgrounds and sound effects to create the mood. This episode showcased most of the locations from the first episode again. And we see Naz being brought from the police precinct in the van across town to the holding cell under the courthouse, which is a great sequence. Then finally Naz goes into the transport to Rikers Island. You have this array of locations in which to create this rich tapestry of sound.

Nothing is huge. There are no large gun battles or things of that nature. There are just many different locations for which we can create some interesting moods.

You did a fantastic job on the backgrounds. They are so expressive. I particularly like when the transport van is backing up to the precinct to pick up the prisoners. You hear the music playing from inside the van and it’s bouncing around the street outside.
There is some fantastic music editing by Dan Evans Farkas and Grant Conway that is happening there as well. It was nice to figure out, from an editorial sense, how to get in all your editing food groups — your sound effects, your music, your production, your loop group, ADR and Foley. There were a lot of good moments in that episode. In looking at the episodes we could have chosen, I felt that “Subtle Beast” was the strongest for us.

In terms of sound editing on “Subtle Beast,” what was the most challenging scene?
I’m not sure about most challenging, but the most engaging sequence for me was the trip from the police precinct in the van to the night holding cell. Once that van pulls in and Naz is being marched down the hall it’s a ride of sound, music and tension. And, possibly, fear.

There’s so much to work with. From the point at which the van is backing up, we’ve got the odd metal double doors on the van, then the juxtaposition of the van with Detective Box’s (Bill Camp) car drive and John Stone (John Turturro) going home to his brownstone. All these actions are intercutting with each other. When the van pulls up at Baxter Street, we lose the music and are left with these echoing footsteps and police radio surrounded by the dripping water of the location. Then finally it’s down into the night holding cells with the distant yelling voices. Naz doesn’t know what’s coming, but it doesn’t sound good. So that was one of the more intense and fun spots for me personally.

In building these backgrounds, what were some of your sources? Being in New York, were you able to go out and capture local ambiences? Or was it completely crafted in post?
We did some recordings around town to pick up what we needed. Since c5 is based in New York, we have a really great library of New York sounds to pull from. Also, the production location recordists did a great job of capturing stuff as well so we were able to use a number of those sounds in our sound bed. I would say 85 percent of the ambiences were created in post, and the other 15 percent was what was recorded on set.

Strangely enough I personally have lived in two of the main locations of the series: the Upper West Side of Manhattan — on the exact street of Andrea’s brownstone — and Jackson Heights, Queens, where Naz’s family lives. So I was well aware of what these neighborhoods sounded like at all hours of the day and night and would use my own internal “appropriate location audio filter” when working on those locations. At the end of the day that’s sort of a silly side note, but I like to think it helps us stay true to the sounds of those neighborhoods.

Beyond the background sounds, but in keeping with what we crafted in post, once we get to Rikers I think it’s worth noting that the entire cellblock set had a floor of painted plywood. So it really fell to our Foley department to make sure all our footfalls on concrete were covered and ready to take center stage if called upon. The whole Foley team, led by Marko Costanzo (artist), George Lara (recordist) and Steve Visscher (supervising Foley editor), did a wonderful job.

Anything else you’d like to share about The Night Of?
It was a show that involved a lot of really good collaboration in terms of sound and music. I personally feel very fortunate to have had such a good sound crew comprising so many talented people, and very lucky for the opportunity to get to mix next to Michael Berry and see the care and skill he brings to the process. I am also very appreciative of the support we got along the way from everybody at HBO, our wonderful post supervisor Lori Slomka, as well as our picture editor Nick Houy and his crew.

Lastly, I think through our conversations and discussions with Steve Zaillian we were successful in figuring out how best to shape and mold the tracks into something that is very compelling to watch and listen to and I hope people really enjoy it.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Behind the Title: 3008 Editorial’s Matt Cimino and Greg Carlson

NAMES: Matt Cimino and Greg Carlson

COMPANY: 3008 Editorial in Dallas

WHAT’S YOUR JOB TITLE?
Cimino: We are sound designers/mixers.

WHAT DOES THAT ENTAIL?
Cimino: Audio is a storytelling tool. Our job is to enhance the story directly or indirectly and create the illusion of depth, space and a sense of motion with creative sound design and then mix that live in the environment of the visuals.

Carlson: And whenever someone asks, I always tend to prioritize sound design before mixing. Although I love every aspect of what we do, when a spot hits my room as a blank slate, it’s really the sound design that can take it down a hundred different paths. And for me, it doesn’t get better than that.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Carlson: I’m not sure a brief job title can encompass what anyone really does. I am a composer as well as a sound designer/mixer, so I bring that aspect into my work. I love musical elements that help stitch a unified sound into a project.

Cimino: That there really isn’t “a button” for that!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Carlson: The freedom. Having the opportunity to take a project where I think it should go and along the way, pushing it to the edge and back. Experimenting and adapting makes every spot a completely new trip.

Matt Cimino

Cimino: I agree. It’s the challenge of creating an expressive and aesthetically pleasing experience by taking the soundtrack to a whole new level.

WHAT’S YOUR LEAST FAVORITE?
Cimino: Not much. However, being an imperfect perfectionist, I get pretty bummed when I do not have enough time to perfect the job.

Carlson: People always say, “It’s so peaceful and quiet in the studio, as if the world is tuned out.” The downside of that is producer-induced near heart attacks. See, when you’re rocking out at max volume and facing away from the door, well, people tend to come in and accidentally scare you to death.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Cimino: I’m a morning person!

Carlson: Time is an abstract notion in a dark room with no windows, so no time in particular. However, the funniest time of day is when you notice you’re listening about 15 dB louder than the start of the day. Loud is better.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cimino: Carny. Or Evel Knievel.

Carlson: Construction/carpentry. Before audio, I had lots of gritty “hands-on” jobs. My dad taught me about work ethic, to get my hands dirty and to take pride in everything. I take that same approach with every spot I touch. Now I just sit in a nice chair while doing it.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Cimino: I’ve had a love for music since high school. I used to read all the liner notes on my vinyl. One day I remember going through my father’s records and thinking at that moment, I want to be that “sound engineer” listed in the notes. This led me to study audio at Columbia College in Chicago. I quickly gravitated towards post production audio classes and training. When I wasn’t recording and mixing music, I was doing creative sound design.

Carlson: I was always good with numbers and went to Michigan State to be an accountant. But two years in, I was unhappy. All I wanted was to work on music and compose, so I switched to audio engineering and never looked back. I knew the second I walked into my first studio, I had found my calling. People always say there isn’t a dream job; I disagree.

CAN YOU DESCRIBE YOUR COMPANY?
Cimino: A fun, stress-free environment full of artistry and technology.

Carlson: It is a place I look forward to every day. It’s like a family, solely focused on great creative.

CAN YOU NAME SOME RECENT SPOTS YOU HAVE WORKED ON?
Cimino: Snapple, RAM, Jeep, Universal Orlando, Cricket Wireless, Maserati.

Carlson: AT&T, Lay’s, McDonald’s, Bridgestone Golf.

Greg Carlson

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Carlson: It’s nearly impossible to pick one, but there is a project I see as pivotal in my time here in Dallas. It was shortly after I arrived six years ago. I think it was a boost to my confidence and, in turn, enhanced my style. The client was The Home Depot and the campaign was “Let’s Do This.” A creative I admire greatly here in town gave me the chance to spearhead the sonic approach for the work. There are many moments, milestones and memories, but this was a special project to me.

Cimino: There are so many. One of the most fun campaigns I worked on was for Snapple, where each spot opened with the “pop!” of the Snapple cap. I recorded several pops (close-miked) and selected one that I manipulated to sound larger than life while still retaining the sound of the brand’s signature cap being popped open. After the cap pops, the spot transforms into an exploding fruit infusion. The sound was created by smashing Snapple bottles for the glass break; crushing, smashing and squishing fruit with my hands; and using a hydrophone to record splashing and underwater sounds to create the slow-motion effect of the fruit morphing. So much fun.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cimino: During a mix, my go-tos are iZotope, Soundtoys and Slate Digital. Outside the studio I can’t live without my Apple!

Carlson: Pro Tools, all things iZotope, Native Instruments.

THIS IS A HIGH-STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cimino: Family and friends. I love watching my kiddos play select soccer. Relaxing pool or beachside with a craft cider. Or on a single path/trail with my mountain bike.

Carlson: I work on my home, build things, like to be outside. When I need to detach for a bit, I prefer dangerous power tools or being on a body of water.

The sounds of Spider-Man: Homecoming

By Jennifer Walden

Columbia Pictures and Marvel Studios’ Spider-Man: Homecoming, directed by Jon Watts, casts Tom Holland as Spider-Man, a role he first played in 2016 for Marvel Studios’ Captain America: Civil War (directed by Joe and Anthony Russo).

Homecoming reprises a few key character roles, like Tony Stark/Iron Man (Robert Downey Jr.) and Aunt May Parker (Marisa Tomei), and it picks up a thread of Civil War’s storyline. In Civil War, Peter Parker/Spider-Man helped Tony Stark’s Avengers in their fight against Captain America’s Avengers. Homecoming picks up after that battle, as Parker settles back into his high school life while still fighting crime on the side to hone his superhero skills. He seeks to prove himself to Stark but ends up becoming entangled with the supervillain Vulture (Michael Keaton).

Steven Ticknor

Spider-Man: Homecoming supervising sound editors/sound designers Steven Ticknor and Eric A. Norris — working at Culver City’s Sony Pictures Post Production Services — both brought Spidey experience to the film. Ticknor was a sound designer on director Sam Raimi’s Spider-Man (2002), and Norris was supervising sound editor/sound designer on director Marc Webb’s The Amazing Spider-Man 2 (2014). With experience on two different versions of Spider-Man, together Ticknor and Norris provided a well-rounded knowledge of the superhero’s sound history for Homecoming. They knew what had worked in the past and what it would take to make this Spider-Man sound fresh. “This film took a ground-up approach, but we also took into consideration the magnitude of the movie,” says Ticknor. “We had to keep in mind that Spider-Man is one of Marvel’s key characters and he has a huge fan base.”

Web Slinging
Since the film is a sequel, Ticknor and Norris honored the sound of Spider-Man’s web-slinging ability that was established in Captain America: Civil War, but they also enhanced it to create a subtle difference between Spider-Man’s two suits in Homecoming. There’s the teched-out Tony Stark-built suit that uses the Civil War web-slinging sound, and then there’s Spider-Man’s homemade suit. “I recorded a couple of 5,000-foot magnetic tape cores unraveling very fast, and to that I added whooshes and other elements that gave a sense of speed. Underneath, I had some of the web sounds from the Tony Stark suit. That way the sound for the homemade suit had the same feel as the Stark suit but with an old-school flair,” explains Ticknor.

One new feature of Spider-Man’s Stark suit is that it has expressive eye movements. His eyes can narrow or grow wide with surprise, and those movements are articulated with sound. Norris says, “We initially went with a thin servo-type sound, but the filmmakers were looking for something less electrical. We had the idea to use the lens of a DSLR camera to manually zoom it in and out, so there’s no motor sound. We recorded it close-up in the quiet environment of an unused ADR stage. That’s the primary sound for his eye movement.”

Droney
Another new feature is the addition of Droney, a small reconnaissance drone that pops off of Spider-Man’s suit and flies around. The sound of Droney was one of director Jon Watts’ initial focus points. He wanted it to sound fun and to have a bit of personality. He wanted Droney “to be able to vocalize in a way, sort of like Wall-E,” explains Norris.

Ticknor had the idea of creating Droney’s sound using a turbo toy — a small toy that has a mouthpiece and a spinning fan. Blowing into the mouthpiece makes the fan spin, which generates a whirring sound. The faster the fan spins, the higher the pitch of the generated sound. By modulating the pitch, they created a voice-like quality for Droney. Norris and sound effects editor Andy Sisul performed and recorded an array of turbo toy sounds to use during editorial. Ticknor also added in the sound of a reel-to-reel machine rewinding, which he sped up and manipulated “so that it sounded like Droney was fluttering as it was flying,” Ticknor says.
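Ticknor’s reel-to-reel trick — speeding a recording up so it flutters — is essentially resampling: reading through the audio faster than it was recorded raises its pitch and shortens it. A minimal sketch of that idea in Python (the generated tone is a hypothetical stand-in for the turbo-toy recording; all names are illustrative):

```python
import math

def tone(freq, dur, sr=16000):
    """A sine tone as a list of float samples (stand-in for a recording)."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(int(dur * sr))]

def repitch(samples, rate):
    """Read through the source `rate` times faster than normal.

    rate > 1 raises the pitch (like speeding up a reel-to-reel machine);
    rate < 1 lowers it. Linear interpolation between source samples.
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out

def upward_crossings(samples):
    """Count negative-to-positive zero crossings (a crude pitch proxy)."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)

whir = tone(220, 1.0)          # hypothetical stand-in for the turbo-toy whir
fluttery = repitch(whir, 1.5)  # sped up: pitch rises by a factor of 1.5
```

Sweeping `rate` up and down over time, rather than holding it fixed, is what produces the flutter Ticknor describes.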

The Vulture
Supervillain the Vulture offers a unique opportunity for sound design. His alien-tech enhanced suit incorporates two large fans that give him the ability to fly. Norris, who was involved in the initial sound design of Vulture’s suit, created whooshes using Whoosh by Melted Sounds — a whoosh generator that runs in Native Instruments Reaktor. “You put individual samples in there and it creates a whoosh by doing a Doppler shift and granular synthesis as a way of elongating short sounds. I fed different metal ratcheting sounds into it because Vulture’s suit almost has these metallic feathers. We wanted to articulate the sound of all of these different metallic pieces moving together. I also fed sword shings into it and came up with these whooshes that helped define the movement as the Vulture was flying around,” he says. Sound designer/re-recording mixer Tony Lamberti was also instrumental in creating Vulture’s sound.
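Granular elongation of the kind Norris describes works by slicing a short sample into small windowed grains and overlap-adding them further apart in time than they were read, stretching the sound without shifting its pitch (the plugin layers a Doppler pitch sweep on top of this). A rough sketch of the granular idea, not Melted Sounds’ actual algorithm:

```python
import math

def hann(n):
    """Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def granular_stretch(samples, stretch, grain=256):
    """Time-stretch by overlap-adding small windowed grains.

    Grains are read from the source every `hop_in` samples but written
    every `hop_in * stretch` samples, so a short sound is elongated
    roughly `stretch` times without shifting its pitch.
    """
    hop_in = grain // 4
    hop_out = int(hop_in * stretch)
    win = hann(grain)
    n_grains = max(1, (len(samples) - grain) // hop_in)
    out = [0.0] * (n_grains * hop_out + grain)
    for g in range(n_grains):
        src, dst = g * hop_in, g * hop_out
        for i in range(grain):
            out[dst + i] += samples[src + i] * win[i]
    return out

# A short decaying "shing" stand-in, elongated roughly threefold.
shing = [math.sin(0.5 * n) * math.exp(-n / 800) for n in range(4000)]
whoosh_bed = granular_stretch(shing, 3)
```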

Alien technology is prevalent in the film. For instance, it’s a key ingredient to Vulture’s suit. The film’s sound needed to reflect the alien influence but also had to feel realistic to a degree. “We started with synthesized sounds, but we then had to find something that grounded it in reality,” reports Ticknor. “That’s always the balance of creating sound design. You can make it sound really cool, but it doesn’t always connect to the screen. Adding organic elements — like wind gusts and debris — makes it suddenly feel real. We used a lot of synthesized sounds to create Vulture, but we also used a lot of real sounds.”

The Washington Monument
One of the big scenes that Ticknor handled was the Washington Monument elevator sequence. Spider-Man stands on the top of the Washington Monument and prepares to jump over a helicopter that looms ever closer. He clears the helicopter’s blades and shoots a web onto the helicopter’s skid, using that to sling himself through a window just in time to shoot another web that grabs onto the compromised elevator car that contains his friends. “When Spider-Man jumps over the helicopter, I couldn’t wait to make that work perfectly,” says Ticknor. “When he is flying over the helicopter blades it sounds different. It sounds more threatening. Sound creates an emotion but people don’t realize how sound is creating the emotion because it is happening so quickly sometimes.”

To achieve a more threatening blade sound, Ticknor added in scissor slicing sounds, which he treated using a variety of tools like zPlane Elastique Pitch 2 and plug-ins from FabFilter and Soundtoys, all within the Avid Pro Tools 12 environment. “This made the slicing sound like it was about to cut his head off. I took the helicopter blades and slowed them down and added low-end sweeteners to give a sense of heaviness. I put all of that through the plug-ins and basically experimented. The hardest part of sound design is experimenting and finding things that work. There’s also music playing in that scene as well. You have to make the music play with the sound design.”

When designing sounds, Ticknor likes to generate a ton of potential material. “I make a library of sound effects — it’s like a mad science experiment. You do something and then wonder, ‘How did I just do that? What did I just do?’ When you are in a rhythm, you do it all because you know there is no going back. If you just do what you need, it’s never enough. You always need more than you think. The picture is going to change and the VFX are going to change and timings are going to change. Everything is going to change, and you need to be prepared for that.”

Syncing to Picture
To help keep the complex soundtrack in sync with the evolving picture, Norris used Conformalizer by Cargo Cult. Using the EDL of picture changes, Conformalizer makes the necessary adjustments in Pro Tools to resync the sound to the new picture.

Norris explains some key benefits of Conformalizer. “First, when you’re working in Pro Tools you can only see one picture at a time, so you have to go back and forth between the two different pictures to compare. With Conformalizer, you can see the two different pictures simultaneously. It also does a mathematical computation on the two pictures in a separate window, a difference window, which shows the differences in white. It highlights all the subtle visual effects changes that you may not have noticed.

Eric Norris

For example, in the beginning of the film, Peter leaves school and heads out to do some crime fighting. In an alleyway, he changes from his school clothes into his Spider-Man suit. As he’s changing, he knocks into a trash can and a couple of rats fall out and scurry away. Those rats were CG and they didn’t appear until the end of the process. So the rats in the difference window were bright white while everything else was a dark color.”

Another benefit is that the Conformalizer change list can be used on multiple Pro Tools sessions. Most feature films have the sound effects, including Foley and backgrounds, in one session. For Spider-Man: Homecoming, it was split into multiple sessions, with Foley and backgrounds in one session and the sound effects in another.

“Once you get that change list you can run it on all the Pro Tools sessions,” explains Norris. “It saves time and it helps with accuracy. There are so many sounds and details that match the visuals and we need to make sure that we are conforming accurately. When things get hectic, especially near the end of the schedule, and we’re finalizing the track and still getting new visual effects, it becomes a very detail-oriented process and any tools that can help with that are greatly appreciated.”
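The conform step Norris describes boils down to mapping each clip’s position in the old cut to its position in the new one via the change list, and applying the same mapping to every session. A toy illustration of that mapping — the data layout here is made up for clarity; Conformalizer reads real change-list/EDL files and moves the regions inside Pro Tools itself:

```python
def conform(clips, changes):
    """Map clip positions from an old picture cut to a new one.

    clips:   list of (start_frame, name) pairs in the old cut.
    changes: list of (old_start, old_end, new_start) segments describing
             where each surviving stretch of the old picture now begins.
    Clips that fall in a deleted stretch are dropped.
    """
    out = []
    for start, name in clips:
        for old_start, old_end, new_start in changes:
            if old_start <= start < old_end:
                out.append((start - old_start + new_start, name))
                break  # clip matched a surviving segment
    return sorted(out)

# Hypothetical change: 30 frames of new VFX inserted at frame 100 of the old cut.
changes = [(0, 100, 0), (100, 200, 130)]

# The same change list conforms every session identically.
fx_session = [(10, "web_thwip"), (150, "rat_scurry")]
foley_session = [(150, "trash_can_rattle")]
```

Running the same `changes` over the effects session and the Foley/backgrounds session keeps them frame-accurate with each other, which is the time- and accuracy-saving Norris points to.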

Creating the soundtrack for Spider-Man: Homecoming required collaboration on a massive scale. “When you’re doing a film like this, it just has to run well. Unless you’re really organized, you’ll never be able to keep up. That’s the beautiful thing, when you’re organized you can be creative. Everything was so well organized that we got an opportunity to be super creative and for that, we were really lucky. As a crew, we were so lucky to work on this film,” concludes Ticknor.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Sound — Wonder Woman’s superpower

By Jennifer Walden

When director Patty Jenkins first met with supervising sound editor James Mather to discuss Warner Bros. Wonder Woman, they had a conversation about the physical effects of low-frequency sound energy on the human body, and how it could be used to manipulate an audience.

“The military spent a long time investigating sound cannons that could fire frequencies at groups of people and debilitate them,” explains Mather. “They found that the lower frequencies were far more effective than the very high frequencies. With the high frequencies, you can simply plug your ears and block the sound. The low-end frequencies, however, impact the fluid content of the human body. Frequencies around 5Hz-9Hz can’t be heard, but can have physiological, almost emotional effects on the human body. Patty was fascinated by all of that. So, we had a very good sound-nerd talk at our first meeting — before we even talked about the story of the film.”

Jenkins was fascinated by the idea of sound playing a physical role as well as a narrative one, and that direction informed all of Mather’s sound editorial choices for Wonder Woman. “I was amazed by Patty’s intent, from the very beginning, to veer away from very high-end sounds. She did not want to have those featured heavily in the film. She didn’t want too much top-end sonically,” says Mather, who handled sound editorial at his Soundbyte Studios in West London.

James Mather (far right) and crew take to the streets.

Soundbyte Studios offers creative supervision, sound design, Foley and dialog editing. The facility is equipped with Pro Tools 12 systems and Avid S6 and S3 consoles. Their client list includes top studios like Warner Bros., Disney, Fox, Paramount, DreamWorks, Aardman and Pathe. Mather’s team includes dialog supervisor Simon Chase, and sound effects editors Jed Loughran and Samir Fočo. When Mather begins a project, he likes to introduce his team to the director as soon as possible “so that they are recognized as contributors to the soundtrack,” he says. “It gives the team a better understanding of who they are working with and the kind of collaboration that is expected. I always find that if you can get everyone to work as a collaborative team and everyone has an emotional investment or personal investment in the project, then you get better work.”

Following Jenkins’s direction, Mather and his team designed a tranquil sound for the Amazonian paradise of Themyscira. They started with ambience tracks that the film’s sound recordist Chris Munro captured while they were on-location in Italy. Then Mather added Mediterranean ambiences that he and his team had personally collected over the years. Mather embellished the ambience with songbirds from Asia, Australasia and the Amazon. Since there are white peacocks roaming the island, he added in modified peacock sounds. Howler monkeys and domestic livestock, like sheep and goats, round out the track. Regarding the sheep and goats, Mather says, “We pitched them and manipulated them slightly so that they didn’t sound quite so ordinary, like a natural history film. It was very much a case of keeping the soundtrack relatively sparse. We did not use crickets or cicadas — although there were lots there while they were filming — because we wanted to stay away from the high-frequency sounds.”

Waterfalls are another prominent feature of Themyscira, according to Mather, but thankfully they weren’t really there on location, so the sound recordings were relatively clean. The post sound team had complete control over the volume, distance and frequency range of the waterfall sounds. “We very much wanted the low-end roar and rumble of the waterfalls rather than high-end hiss and white noise.”

The sound of paradise is serene in contrast to London and the front lines of World War I. Mather wanted to exaggerate that difference by overplaying the sound of boats, cars and crowds as Steve [Chris Pine] and Diana [Gal Gadot] arrived in London. “This was London at its busiest and most industrial time. There were structures being built on a major scale so the environment was incredibly active. There were buses still being drawn by horses, but there were also cars. So, you have this whole mishmash of old and new. We wanted to see Diana’s reaction to being somewhere that she has never experienced before, with sounds that she has never heard and things she has never seen. The world is a complete barrage of sensory information.”

They recorded every vehicle they could in the film, from planes and boats to the motorcycle that Steve uses to chase after Diana later on in the film. “This motorcycle was like nothing we had ever seen before,” explains Mather. “We knew that we would have to go and record it because we didn’t have anything in our sound libraries for it.”

The studio spent days preparing the century-old motorcycle for the recording session. “We got about four minutes of recording with it before it fell apart,” admits Mather. “The chain fell off, the sprockets broke and then it went up in smoke. It was an antique and probably shouldn’t have been used! The funny thing is that it sounded like a lawnmower. We could have just recorded a lawnmower and it would’ve sounded the same!”

(Mather notes that the motorcycle Steve rides on-screen was a modern version of the century-old one they got to record.)

Goosing Sounds
Mather and his sound team have had numerous opportunities to record authentic weapons, cars, tanks, planes and other specific war-era machines and gear for projects they’ve worked on. While they always start with those recordings as their sound design base, Mather says the audience’s expectation of a sound is typically different from the real thing. “The real sound is very often disappointing. We start with the real gun or real car that we recorded, but then we start to work on them, changing the texture to give them a little bit more punch or bite. We might find that we need to add some gun mechanisms to make a gun sound a bit snappier or a bit brighter and not so dull. It’s the same with the cars. You want the car to have character, but you also want it to be slightly faster or more detailed than it actually sounds. By the nature of filmmaking, you will always end up slightly embellishing the real sound.”

Take the gun battles in Wonder Woman, for instance. They unfold in an obvious sequence: the gun fires, the bullet travels toward its target and then there is a noticeable impact. “This film has a lot of slow-motion bullets firing, so we had to amp up the sense of what was propelling that very slow-motion bullet. Recording the sound of a moving bullet is very hard. All of that had to be designed for the film,” says Mather.

In addition to the real era-appropriate vehicles, Wonder Woman has imaginary, souped-up creations too, like a massive bomber. For the bomber’s sound, Mather sought out artist Joe Rush who builds custom Mad Max-style vehicles. They recorded all of Rush’s vehicles, which had a variety of different V8, V12 and V6 engines. “They all sound very different because the engines are on solid metal with no suspension,” explains Mather. “The sound was really big and beefy, loud and clunky and it gave you a sense of a giant war monster. They had this growl and weight and threat that worked well for the German machines, which were supposed to feel threatening. In London, you had these quaint buses being drawn by horses, and the counterpoint to that were these military machines that the Germans had, which had to be daunting and a bit terrifying.

“One of the limitations of the WWI-era soundscapes is the lack of some very useful atmospheric sounds. We used tannoy (loudspeaker) effects on the German bomb factory to hint at the background activity, but had to be very sparing as these were only just invented in that era. (Same thing with the machine guns — a far more mechanical version than the ‘retatatat’ of the familiar WWII versions).”

One of Mather’s favorite scenes to design starts on the frontlines as Diana makes her big reveal as Wonder Woman. She crosses No Man’s Land and deflects the enemies’ fire with her bulletproof bracelets and shield. “We played with that in so many different ways because the music was such an important part of Patty’s vision for the film. She very much wanted the music to carry the narrative. Sound effects were there to be literal in many ways. We were not trying to overemphasize the machismo of it. The story is about the people and not necessarily the action they were in. So that became a very musical-based moment, which was not the way I would have normally done it. I learned a lot from Patty about the different ways of telling the story.”

The Powers
Following that scene, Wonder Woman recaptures the Belgian village they were fighting for by running ahead and storming into the German barracks. Mather describes it as a Guy Ritchie-style fight, with Wonder Woman taking on 25 German soldiers. “This is the first time that we really get to see her use all of her powers: the lasso, her bracelets, her shield, and even her shin guards. As she dances her way around the room, it goes from realtime into slow motion and back into realtime. She is repelling bullets, smashing guns with her back, using her shield as a sliding mat and doing slow-motion kicks. It is a wonderfully choreographed scene and it is her first real action scene.”

The scene required a fluid combination of realistic sounds and subdued, slow-motion sounds. “It was like pushing and pulling the soundtrack as things slowed down and then sped back up. That was a lot of fun.”

The Lasso
Where would Wonder Woman be without her signature lasso of truth? In the film, she often uses the lasso as a physical weapon, but there was an important scene where the lasso was called upon for its truth-finding power. Early in the film, Steve’s plane crashes and he’s washed onto Themyscira’s shore. The Amazonians bind Steve with the lasso and interrogate him. Eventually the lasso of truth overpowers him and he divulges his secrets. “There is quite a lot of acting on Chris Pine’s part to signify that he’s uncomfortable and is struggling,” says Mather. “We initially went by his performance, which gave the impression that he was being burned. He says, ‘This is really hot,’ so we started with sizzling and hissing sounds as if the rope was burning him. Again, Patty felt strongly about not going into the high-frequency realm because it distracts from the dialogue, so we wanted to keep the sound in a lower, more menacing register.”

Mather and his team experimented with adding a multitude of different elements, including low whispering voices, to see if they added a sense of personality to the lasso. “We kept the sizzling, but we pitched it down to make it more watery and less high-end. Then we tried a dozen or so variations of themes. Eventually we stayed with this blood-flow sound, which is like an arterial blood flow. It has a slight rhythm to it and if you roll off the top end and keep it fairly muted then it’s quite an intriguing sound. It feels very visceral.”

The last elements Mather added to the lasso were recordings he captured of two stone slabs grinding against each other in a circular motion, like a mill. “It created this rotating, undulating sound that almost has a voice. So that created this identity, this personality. It was very challenging. We also struggled with this when we did the Harry Potter films, to make an inert object have a character without making it sound a bit goofy and a bit sci-fi. All of those last elements we put together, we kept that very low. We literally raised the volume as you see Steve’s discomfort and then let it peel away every time he revealed the truth. As he was fighting it, the sound would rise and build up. It became a very subtle, but very meaningful, vehicle to show that the rope was actually doing something. It wasn’t burning him but it was doing something that was making him uncomfortable.”

The Mix
Wonder Woman was mixed at De Lane Lea (Warner Bros. London) by re-recording mixers Chris Burdon and Gilbert Lake. Mather reveals that the mixing process was exhausting, but not because of the people involved. “Patty is a joy to work with,” he explains. “What I mean is that working with frequencies that are so low and so loud is exhausting. It wasn’t even the volume; it was being exposed to those low frequencies all day, every day for nine weeks or so. It was exhausting, and it really took its toll on everybody.”

In the mix, Jenkins chose to have Rupert Gregson-Williams’s score lead nearly all of the action sequences. “Patty’s sensitivity and vision for the soundtrack was very much about the music and the emotion of the characters,” says Mather. “She was very aware of the emotional narrative that the music would bring. She did not want to lean too heavily on the sound effects. She knew there would be scenes where there would be action and there would be opportunities to have sound design, but I found that we were not pushing those moments as hard as you would expect. The sound design highs weren’t so high that you felt bereft of momentum and pace when those sound design heavy scenes were finished. We ended up maintaining a far more interesting soundtrack that way.”

With superhero films like Batman v Superman: Dawn of Justice and Spider-Man, the audience expects a sound design-heavy track, but Jenkins’s music-led approach to Wonder Woman provides a refreshing spin on superhero film soundtracks. “The soundtrack is less supernatural and more down to earth,” says Mather. “I don’t think it could’ve been any other way. It’s not a predictable soundtrack and I really enjoyed that.”

Mather really enjoys collaborating with people who have different ideas and different approaches. “What was exciting about doing this film was that I was able to work with someone who had an incredibly strong idea about the soundtrack and yet was very happy to let us try different routes and options. Patty was very open to listening to different ideas, and willing to take the best from those ideas while still retaining a very strong vision of how the soundtrack was going to play for the audience. This is Patty’s DC story, her opportunity to open up the DC universe and give the audience a new look at a character. She was an extraordinary person to work with and for me that was the best part of the process. In the time of remakes, it’s nice to have a film that is fresh and takes a different approach.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @AudioJeney.

Netflix’s The Last Kingdom puts Foley to good use

By Jennifer Walden

What is it about long-haired dudes strapped with leather, wielding swords and riding horses alongside equally fierce female warriors charging into bloody battles? There is a magic to this bygone era that has transfixed TV audiences, as evidenced by the success of HBO’s Game of Thrones, History Channel’s Vikings series and one of my favorites, The Last Kingdom, now on Netflix.

The Last Kingdom, based on a series of historical fiction novels by Bernard Cornwell, is set in late 9th century England. It tells the tale of Saxon-born Uhtred of Bebbanburg who is captured as a child by Danish invaders and raised as one of their own. Uhtred gets tangled up in King Alfred of Wessex’s vision to unite the three separate kingdoms (Wessex, Northumbria and East Anglia) into one country called England. He helps King Alfred battle the invading Danish, but Uhtred’s real desire is to reclaim his rightful home of Bebbanburg from his duplicitous uncle.

Mahoney Audio Post
The sound of the series is gritty and rich with leather, iron and wood elements. The soundtrack’s tactile quality is the result of extensive Foley work by Mahoney Audio Post, which has been with the series since the first season. “That’s great for us because we were able to establish all the sound for each character, village, environment and more, right from the first episode,” says Foley recordist/editor/sound designer Arran Mahoney.

Mahoney Audio Post is a family-operated audio facility in Sawbridgeworth, Hertfordshire, UK. Arran Mahoney explains the studio’s family ties. “Clare Mahoney (mum) and Jason Swanscott (cousin) are our Foley artists, with over 30 years of experience working on high-end TV shows and feature films. My brother Billy Mahoney and I are the Foley recordists and editors/sound designers. Billy Mahoney, Sr. (dad) is the founder of the company and has been a dubbing mixer for over 40 years.”

Their facility, built in 2012, houses a mixing suite and two separate audio editing suites, each with Avid Pro Tools HD Native systems, Avid Artist mixing consoles and Genelec monitors. The facility also has a purpose-built soundproof Foley stage featuring 20 different surfaces including grass, gravel, marble, concrete, sand, pebbles and multiple variations of wood.

Foley artists Clare Mahoney and Jason Swanscott.

Their mic collection includes a Røde NT1-A cardioid condenser microphone and a Røde NTG3 supercardioid shotgun microphone, which they use individually for close-miking or in combination to create more distant perspectives when necessary. They also have two other studio staples: a Neumann U87 large-diaphragm condenser mic and a Sennheiser MKH-416 short shotgun mic.

Going Medieval
Over the years, the Mahoney Foley team has collected thousands of props. For The Last Kingdom specifically, they visited a medieval weapons maker and bought a whole armory of items: swords, shields, axes, daggers, spears, helmets, chainmail, armor, bridles and more. And it’s all put to good use on the series. Mahoney notes, “We cover every single thing that you see on-screen as well as everything you hear off of it.” That includes all the feet (human and horses), cloth, and practical effects like grabs, pick-ups/put downs, and touches. They also cover the battle sequences.

Mahoney says they use 20 to 30 tracks of Foley just to create the layers of detail that the battle scenes need. Starting with the cloth pass, they cover the Saxon chainmail and the Vikings’ leather and fur armor. Then they do basic cloth and leather movements to cover non-warrior characters and villagers. They record a general weapons track, played at low volume, to provide a base layer of sound.

Next they cover the horses from head to hoof, with bridles and saddles, and Foley for the horses’ feet. When asked what’s the best way to Foley horse hooves, Mahoney asserts that it is indeed with coconuts. “We’ve also purchased horseshoes to add to the stable atmospheres and spot FX when required,” he explains. “We record any abnormal horse movements, i.e. crossing a drawbridge or moving across multiple surfaces, and sound designers take care of the rest. Whenever muck or gravel is needed, we buy fresh material from the local DIY stores and work it into our grids/pits on the Foley stage.”

The battle scenes also require Foley for all the grabs, hits and bodyfalls. For the blood and gore, they use a variety of fruit and animal flesh.

Then there’s a multitude of feet to cover the storm of warriors rushing at each other. All the boots they used were wrapped in leather to create an authentic sound that’s true to the time. Mahoney notes that they didn’t want to capture “too much heel in the footsteps, while also trying to get a close match to the sync sound in the event of ADR.”

Surfaces include stone and marble for the Saxon castles of King Alfred and the other noble lords. For the wooden palisades and fort walls, Mahoney says they used a large wooden base accompanied by wooden crates, plinths, boxes and an added layer of controlled creaks to give an aged effect to everything. On each series, they used 20 rolls of fresh grass, lots of hay for the stables, leaves for the forest, and water for all the sea and river scenes. “There were many nights cleaning the studio after battle sequences,” he says.

In addition to the aforementioned props of medieval weapons, grass, mud, bridles and leather, Mahoney says they used an unexpected prop: “The Viking cloth tracks were actually done with samurai suits. They gave us the weight needed to distinguish the larger size of a Danish man compared to a Saxon.”

Their favorite scenes to Foley, and by far the most challenging, were the battle scenes. “Those need so much detail and attention. It gives us a chance to shine on the soundtrack. The way that they are shot/edited can be very fast paced, which lends itself well to micro details. It’s all action, very precise and in your face,” he says. But if they had to pick one favorite scene, Mahoney says it would be “Uhtred and Ragnar storming Kjartan’s stronghold.”

Another challenging-yet-rewarding opportunity for Foley was during the slave ship scenes. Uhtred and his friend are sold into slavery as rowers on a Viking ship, which holds a crew of nearly 30 men. The Mahoney team brought the slave ship to life by building up layers of detail. “There were small wood creaks with small variations of wood and big creaks with larger variations of wood. For the big creaks, we used leather and a broomstick to work into the wood, creating a deep creak sound by twisting the three elements against each other. Then we would pitch shift or EQ to create size and weight. When you put the two together it gives detail and depth. Throw in a few tracks of rigging and pulleys for good measure and you’re halfway there,” says Mahoney.

For the sails, they used a two-mic setup to record huge canvas sheets to create a stereo wrap-around feel. For the rowing effects, they used sticks, brooms and wood rubbing, bouncing, or knocking against large wooden floors and solid boxes. They also covered all the characters’ shackles and chains.

Foley is a very effective way to draw the audience in close to a character or to help the audience feel closer to the action on-screen. For example, near the end of Season 2’s finale, a loyal subject of King Alfred has fallen out of favor. He’s eventually imprisoned and prepares to take his own life. The sound of his fingers running down the blade and the handling of his knife make the gravity of his decision palpable.

Mahoney shares another example of using Foley to draw the audience in — during the scene when Sven is eaten by Thyra’s wolves (following Uhtred and Ragnar storming Kjartan’s stronghold). “We used oranges and melons for Sven’s flesh being eaten and for the blood squirts. Then we created some tracks of cloth and leather being ripped. Specially manufactured claw props were used for the frantic, ravenous wolf feet,” he says. “All the action was off-screen so it was important for the audience to hear in detail what was going on, to give them a sense of what it would be like without actually seeing it. Also, Thyra’s reaction needed to reflect what was going on. Hopefully, we achieved that.”

Post developments at the AES Berlin Convention

By Mel Lambert

The AES Convention returned to Berlin after a three-year absence, and once again demonstrated that the Audio Engineering Society can organize a series of well-attended paper programs, seminars and workshops, in addition to an exhibition of familiar brands, for the European tech-savvy post community. 

Held at the Maritim Hotel in the creative heart of Berlin in late May, the 142nd AES Convention was co-chaired by Sascha Spors from University of Rostock in Germany and Nadja Wallaszkovits from the Austrian Academy of Sciences. According to AES executive director Bob Moses, attendance was 1,800 — a figure at least 10% higher than last year’s gathering in Paris — with post professionals from several overseas countries, including China and Australia.

During the opening ceremonies, current AES president Alex Case stated that, “AES conventions represent an ideal interactive meeting place,” whereas “social media lacks the one-on-one contact that enhances our communications bandwidth with colleagues and co-workers.” Keynote speaker Dr. Alex Arteaga, whose research integrates aesthetic and philosophical practices, addressed the thorny subject of “Auditory Architecture: Bringing Phenomenology, Aesthetic Practices and Engineering Together,” arguing that when considering the differences between audio soundscapes, “our experience depends upon the listening environment.” His underlying message was that a full appreciation of the various ways in which we hear immersive sounds requires a deeper understanding of how listeners interact with that space.

As part of his Richard C. Heyser Memorial Lecture, Prof. Dr. Jorg Sennheiser outlined “A Historic Journey in Audio-Reality: From Mono to AMBEO,” during which he reviewed the basis of audio perception and the interdependence of hearing with other senses. “Our enjoyment and appreciation of audio quality is reflected in the continuous development from single- to multi-channel reproduction systems that are benchmarked against sonic reality,” he offered. “Augmented and virtual reality call for immersive audio, with multiple stakeholders working together to design the future of audio.”

Post-Focused Technical Papers
There were several interesting technical papers that covered the changing requirements of the post community, particularly in the field of immersive playback formats for TV and cinema. With the new ATSC 3.0 digital television format scheduled to come online soon, including object-based immersive sound, there is increasing interest in techniques for capturing surround material and then delivering it to consumer audiences.

In a paper titled “The Median-Plane Summing Localization in Ambisonics Reproduction,” Bosun Xie from the South China University of Technology in Guangzhou explained that, while one aim of Ambisonics playback is to recreate the perception of a virtual source in arbitrary directions, practical techniques are unable to recreate the correct high-frequency spectra in binaural pressures, which serve as front-back and vertical localization cues. Current research shows that the changes in interaural time difference (ITD) that result from head-turning during Ambisonics playback match those of a real source, and hence provide a dynamic cue for vertical localization, especially in the median plane. In addition, the low-frequency virtual source direction can be approximately evaluated using a set of panning laws.

“Exploring the Perceptual Sweet Area in Ambisonics,” presented by Matthias Frank from the University of Music in Graz, Austria, noted that the theoretical sweet spot does not match the larger listening area needed in the real world. He described a method to experimentally determine the perceptual sweet area, assessing the localization of both dry and reverberant sound using different Ambisonic encoding orders.

Another paper, “Perceptual Evaluation of Synthetic Early Binaural Room Impulse Responses Based on a Parametric Model,” presented by Philipp Stade from the Technical University of Berlin, described how an acoustical environment can be modeled using sound-field analysis plus spherical head-related impulse responses (HRIRs), with the results compared against measured counterparts. The listening experiment showed comparable performance that was, in the main, independent of room and test signals. (Perhaps surprisingly, a simple synthesis of direct sound and diffuse reverberation yielded almost the same results as the parametric model.)

“Influence of Head Tracking on the Externalization of Auditory Events at Divergence between Synthesized and Listening Room Using a Binaural Headphone System,” presented by Stephan Werner from the Technical University of Ilmenau, Germany, reported on a study using a binaural headphone system that considered the influence of head tracking on the localization of auditory events. Impulse responses were recorded from a five-channel loudspeaker setup in two acoustically different rooms. Results revealed that head tracking increased sound externalization, but that it did not overcome the room-divergence effect.

Heiko Purnhagen from Dolby Sweden, in a paper called “Parametric Joint Channel Coding of Immersive Audio,” described a coding scheme that can deliver channel-based immersive audio content in such formats as 7.1.4, 5.1.4 or 5.1.2 at very low bit rates. Based on a generalized approach for parametric spatial coding of groups of two, three or more channels using a single downmix channel, together with a compact parametrization that guarantees full covariance reinstatement in the decoder, the scheme is implemented as the standardized A-JCC tool in Dolby AC-4.

Hardware Choices for Post Users
Several manufacturers demonstrated compact near-field audio monitors targeted at editorial suites and pre-dub stages. Adam Audio focused on its new near/mid-field S Series, which uses the firm’s ART (Accelerating Ribbon Technology) ribbon tweeter. The series comprises five models — the S2V, S3H, S3V, S5V and S5H — for horizontal or vertical orientation. The firm’s newly developed LF and mid-range drivers, with custom-designed waveguides for the tweeter — and the MF driver on the larger, multi-way models — are powered by a new DSP engine that “provides crossover optimization, voicing options and expansion potential,” according to the firm’s head of marketing, Andre Zeugner.

The Eve Audio SC203 near-field monitor features a three-inch LF/MF driver plus an AMT ribbon tweeter, and is supplied with a v-shaped rubberized pad that allows the user to decouple the loudspeaker from its base and reduce unwanted resonances, while positioning it flat or at a 7.5- or 15-degree angle. An adapter enables mounting directly on any microphone or speaker stand with a 3/8-inch thread. Integral DSP and a rear-mounted passive radiator are said to reinforce LF reproduction, extending the response down to 62Hz (-3dB).

Genelec showcased The Ones, a series of point-source monitors comprising the current three-way Model 8351 plus the new, more compact three-way Models 8331 and 8341. All three units include a coaxial MF/HF driver plus two acoustically concealed LF drivers for vertical or horizontal operation. A new Minimum Diffraction Enclosure (MDE) is featured, together with the firm’s loudspeaker management and alignment software via a dedicated Cat5 network port.

The Neumann KH-80 DSP near-field monitor is designed to offer automatic system alignment using the firm’s control software that is said to “mathematically model dispersion to deliver excellent detail in any surroundings.” The two-way active system features a four-inch LF/MF driver and one-inch HF tweeter with an elliptical, custom-designed waveguide. The design is described as offering a wide horizontal dispersion to ensure a wide sweet spot for the editor/mixer, and a narrow vertical dispersion to reduce sound reflections off the mix console.

To handle multiple monitoring sources and loudspeaker arrays, the Trinnov D-Mon Series controllers enable stereo to 7.1-channel monitoring from both analog and digital I/Os using Ethernet- and/or MIDI-based communication protocols and a fast-switching matrix. An internal mixer creates various combinations of stems, main or aux mixes from discrete inputs. An Optimizer processor offers tuning of the loudspeaker array to match studio acoustics.

Unveiled at last year’s AES Convention in Paris, the Eventide H9000 multichannel/multi-element processing system has been under constant development during the past 12 months, with new functions targeted at film and TV post, including EQ, dynamics and reverb effects. DSP elements can be run in parallel or in series to create multiple, fully programmable channel strips per engine. Control plug-ins for Avid Pro Tools and other DAWs are being finalized, together with Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking.

Filmton, the German association for film sound professionals, explained to AES visitors its objective “to reinforce the importance of sound at an elemental level for the film community.” The association promotes the appreciation of film sound, together with the local film industry and its policy toward the public, while providing “an expert platform for technical, creative and legal issues.”

Lawo demonstrated the new mc²96 Grand Audio production console, an IP-based networkable design for video post production, available with up to 200 on-surface faders. Innovative features include automatic gain control across multiple channels and miniature TFT color screens above each fader that display LiveView thumbnails of the incoming channel sources.

Stage Tec showed new processing features for its Crescendo Platinum TV post console, courtesy of v4.3 software, including an automixer based on gain sharing that can be used on every input channel, loudness metering to EBU R128 for sum and group channels, a de-esser on every channel path, and scene automation with individual user-adjustable blend curves and times for each channel.

Avid demonstrated native support for the new 7.1.2 Dolby Atmos channel-bed format — basically the familiar 7.1 bed with two added height channels — for editorial suites and consumer remastering, plus several upgrades for Pro Tools, including new panning software for object-based audio and the ability to switch between automatable object and bus outputs. Pro Tools HD is said to be the only DAW natively supporting in-the-box Atmos mixing for this 10-channel 7.1.2 format. Full integration for Atmos workflows is now offered for control surfaces such as the Avid S6.

There was a new update to Nugen Audio’s popular Halo Upmix plug-in for Pro Tools — in addition to stereo to 5.1, 7.1 or 9.1 conversion, it is now capable of delivering 7.1.2-channel mixes for Dolby Atmos soundtracks.

A dedicated Dante Pavilion featured several manufacturers that offer network-capable products, including Solid State Logic, whose Tempest multi-path processing engine and router is now fully Audinate Dante-capable for T Series control surfaces with unique arbitration and ownership functions; Bosch RTS intercom systems featuring Dante connectivity with OCA system control; HEDD/Heinz Electrodynamic Designs, whose Series One monitor speakers feature both Dante and AES67/Ravenna ports; Focusrite, whose RedNet series of modular pre-amps and converters offer “enhanced reliability, security and selectivity” via Dante, according to product specialist for EMEA/Germany, Dankmar Klein; and NTP Technology’s DAD Series DX32R and RV32 Dante/MADI router bridges and control room monitor controllers, which are fully compatible with Dante-capable consoles and outboard systems, according to the firm’s business development manager Jan Lykke.

What’s Next for AES
The next European AES convention will be held in Milan in the spring of 2018. “The society is also planning a new format for the fall convention in New York,” said Moses, as the AES is now aligning with the National Association of Broadcasters. “Next January we will be holding a new type of event in Anaheim, California, to be titled AES @ NAMM.” Further details will be unveiled next month. He also explained that there will be no West Coast AES convention next year; instead, the AES will return to New York in the autumn of 2018 with another joint AES/NAB gathering at the Jacob K. Javits Convention Center.


Mel Lambert is an LA-based writer and photographer. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Creating sounds of science for Bill Nye: Science Guy

By Jennifer Walden

Bill Nye, the science hero of a generation of school children, has expanded his role in the science community over the years. His transformation from TV scientist to CEO of The Planetary Society (the world’s largest non-profit space advocacy group) is the subject of Bill Nye: Science Guy — a documentary directed by David Alvarado and Jason Sussberg.

The doc premiered in the US at the SXSW Film Festival and had its international premiere at the Hot Docs Canadian International Documentary Festival in Toronto.

Peter Albrechtsen – Credit: Povl Thomsen

Supervising sound editor/sound designer Peter Albrechtsen, MPSE, started working with directors Alvarado and Sussberg in 2013 on their first feature-length documentary The Immortalists. When they began shooting the Bill Nye documentary in 2015, Albrechtsen was able to see the rough cuts and started collecting sounds and ambiences for the film. “I love being part of projects very early on. I got to discuss some sonic and musical ideas with David and Jason. On documentaries, the actual sound design schedule isn’t typically very long. It’s great knowing the vibe of the film as early as I can so I can then be more focused during the sound editing process. I know what the movie needs and how I should prioritize my work. That was invaluable on a complicated, complex and multilayered movie like this one.”

Before diving in, Albrechtsen, dialogue editor Jacques Pedersen, sound effects editor Morten Groth Brandt and sound effects recordist/assistant sound designer Mikkel Nielsen met up for a jam session — as Albrechtsen calls it — to share the directors’ notes for sound and discuss their own ideas. “It’s a great way of getting us all on the same page and to really use everyone’s talents,” he says.

Albrechtsen and his Danish sound crew had less than seven weeks for sound editorial at Offscreen in Copenhagen. They divided their time evenly between dialogue editing and sound effects editing. During that time, Foley artist Heikki Kossi spent three days on Foley at H5 Film Sound in Kokkola, Finland.

Foley artist Heikki Kossi. Credit: Clas-Olav Slotte

Bill Nye: Science Guy mixes many different media sources — clips from Bill Nye’s TV shows from the ‘90s, YouTube videos, home videos on 8mm film, TV broadcasts from different eras, as well as the filmmakers’ own footage. It’s a potentially headache-inducing combination. “Some of the archival material was in quite bad shape, but my dialogue editor Jacques Pedersen is a magician with iZotope RX and he did a lot of healthy cleaning up of all the rough pieces and low-res stuff,” says Albrechtsen. “The 8mm videos actually didn’t have any sound, so Heikki Kossi did some Foley that helped it to come alive when we needed it to.”

Sound Design
Albrechtsen’s sound edit was also helped by the directors’ dedication to sound. They were able to acquire the original sound effects library from Bill Nye’s ‘90s TV show, making it easy for the post sound team to build out the show’s soundscape from stereo to surround, and also to make it funnier. “A lot of humor in the old TV show came from the imaginative soundtrack that was often quite cartoonish, exaggerated and hilariously funny,” he explains. “I’ve done sound for quite a few documentaries now and I’ve never tried adding so many cartoonish sound effects to a track. It made me laugh.”

The directors’ dedication goes even deeper, with director Sussberg handling the production sound himself when they’re out shooting. He records dialogue with both a boom mic and radio mics, and also records wild tracks of room tones and ambience. He even captures special sound signatures for specific locations when applicable.

For example, Nye visits the creationist theme park called Noah’s Ark, built by Christian fundamentalist Ken Ham. The indoor park features life-size dioramas and animatronics to explain creationism. There are lots of sound effects and demonstrations playing from multiple speaker setups. Sussberg recorded all of them, providing Albrechtsen with the means of creating an authentic sound collage.

“People might think we added lots of sounds for these sequences, but actually we just orchestrated what was already there,” says Albrechtsen. “At moments, it’s like a cacophony of noises, with corny dinosaur screams, savage human screams and violent war noises. When I heard the sounds from the theme park that David and Jason had recorded, I didn’t believe my own ears. It’s so extreme.”

Albrechtsen approaches his sound design with texture in mind. Not every sound needs to be clean. Adding texture, like crackling or hiss, can change the emotional impact of a sound. For example, while creating the sound design for the archival footage of several rocket launches, Albrechtsen pulled clean effects of rocket launches and explosions from Tonsturm’s “Massive Explosions” sound effects library and transferred those recordings to old NAGRA tape. “The special, warm, analogue distortion that this created fit perfectly with the old, dusty images.”

In one of Albrechtsen’s favorite sequences in the film, there’s a failure during launch and the rocket explodes. The camera falls over and the video glitches. He used different explosions panned around the room, and he panned several low-pitched booms directly to the subwoofer, using the Waves LoAir plug-in for added punch. “When the camera falls over, I panned explosions into the surrounds and as the glitches appear I used different distorted textures to enhance the images,” he says. “Pete Horner did an amazing job on mixing that sequence.”

For the emotional sequences, particularly those exploring Nye’s family history, and the genetic disorder passed down from Nye’s father to his two siblings, Albrechtsen chose to reduce the background sounds and let the Foley pull the audience in closer to Nye. “It’s amazing what just a small cloth rustle can do to get a feeling of being close to a person. Foley artist Heikki Kossi is a master at making these small sounds significant and precise, which is actually much more difficult than one would think.”

For example, during a scene in which Nye and his siblings visit a clinic, Albrechtsen deliberately chose harsh, atonal backgrounds that create an uncomfortable atmosphere. Then, as Nye shares his worries about the disease, Albrechtsen slowly takes the backgrounds out so that only the delicate Foley for Nye plays. “I love creating multilayered background ambiences and they really enhanced many moments in the film. When we removed these backgrounds for some of the more personal, subjective moments the effect was almost spellbinding. Sound is amazing, but silence is even better.”

Bill Nye: Science Guy has layers of material taking place in both the past and present, in outer space and in Nye’s private space, Albrechtsen notes. “I was thinking about how to make them merge more. I tried making many elements of the soundtrack fit more with each other.”

For instance, Nye’s brother has a huge model train railway set up. It’s a legacy from their childhood. So when Nye visits his childhood home, Albrechtsen plays the sound of a distant train. In the 8mm home movies, the Nye family is at the beach. Albrechtsen’s sound design includes echoes of seagulls and waves. Later in the film, when Nye visits his sister’s home, he puts in distant seagulls and waves. “The movie is constantly jumping through different locations and time periods. This was a way of making the emotional storyline clearer and strengthening the overall flow. The sound makes the images more connected.”

One significant story point is Nye’s growing involvement with The Planetary Society. Before his death, Carl Sagan conceptualized a solar sail — a sail for use in space that could harness the sun’s energy and use it as a means of propulsion. The Planetary Society worked hard to actualize Sagan’s solar sail idea. Albrechtsen needed to give the solar sail a sound in the film. “How does something like that sound? Well, in the production sound you couldn’t really hear the solar sail and when it actually appeared it just sounded like boring, noisy cloth rustle. The light sail really needed an extraordinary, unique sound to make you understand the magnitude of it.”

So they recorded different kinds of materials, in particular a Mylar blanket, which has a glittery and reflective surface. Then Albrechtsen tried different pitches and panning of those recordings to create a sense of its extraordinary size.

While they handled post sound editorial in Denmark, the directors were busy cutting the film stateside with picture editor Annu Lilja. When working over long distances, Albrechtsen likes to send lots of QuickTimes with stereo downmixes so the directors can hear what’s happening. “For this film, I sent a handful of sound sketches to David and Jason while they were busy finishing the picture editing,” he explains. “Since we’ve done several projects together we know each other very well. David and Jason totally trust me and I know that they like their soundtracks to be very detailed, dynamic and playful. They want the sound to be an integral part of the storytelling and are open to any input. For this movie, they even did a few picture recuts because of some sound ideas I had.”

The Mix
For the two-week final mix, Albrechtsen joined re-recording mixer Pete Horner at Skywalker Sound in Marin County, California. Horner started mixing on the John Waters stage — a small mix room featuring a 5.1 setup of Meyer Sound Acheron speakers and an Avid ICON D-Command control surface — while Albrechtsen finished the sound design and premixed the effects against William Ryan Fritch’s score in a separate editing suite. Then Albrechtsen sat with Horner for another week, as Horner crafted the final 5.1 mix.

One of Horner’s mix challenges was to keep the dialogue paramount while still pushing the layered soundscapes that help tell the story. Horner says, “Peter [Albrechtsen] provided a wealth of sounds to work with, which in the spirit of the original Bill Nye show were very playful. But this, of course, presented a challenge because there were so many sounds competing for attention. I would say this is a problem that most documentaries would be envious of, and I certainly appreciated it.”

Once they had the effects playing along with the dialogue and music, Horner and Albrechtsen worked together to decide which sounds were contributing the most and which were distracting from the story. “The result is a wonderfully rich, sometimes manic track,” says Horner.

Albrechtsen adds, “On a busy movie like this, it’s really in the mix where everything comes together. Pete [Horner] is a truly brilliant mixer and has the same musical approach to sound as me. He is an amazing listener. The whole soundtrack — both sound and score — should really be like one piece of music, with ebbs and flows, peaks and valleys.”

Horner explains their musical approach to mixing as “the understanding that the entire palette of sound coming through the faders can be shaped in a way that elicits an emotional response in the audience. Music is obviously musical, but sound effects are also very musical since they are made up of pitches and rhythmic sounds as well. I’ve come to feel that dialogue is also musical — the person speaking is embedding their own emotions into the way they speak using both pitch (inflection or emphasis) and rhythm (pace and pauses).”

“I’ll go even further to say that the way the images are cut by the picture editor is inherently musical. The pace of the cuts suggests rhythm and tempo, and a ‘hard cut’ can feel like a strong downbeat, as emotionally rich as any orchestral stab. So I think a musical approach to mixing is simply internalizing the ‘music’ that is already being communicated by the composer, the sound designer, the picture editor and the characters on the screen, and with the guidance of the director shaping the palette of available sounds to communicate the appropriate complexity of emotion,” says Horner.

In the mix, Horner embraces the documentary’s intention of expressing the duality of Nye’s life: his celebrity versus his private life. He gives the example of the film’s opening, which starts with sounds of a crowd gathering to see Nye. Then it cuts to Nye backstage as he’s preparing for his performance by quietly tying his bowtie in a mirror. “Here the exceptional Foley work of Heikki Kossi creates the sense of a private, intimate moment, contrasting with the voice of the announcer, which I treated as if it’s happening through the wall in a distant auditorium.”

Next it cuts to that announcer, whose voice is clearly amplified and echoing around the auditorium of excited fans. There’s an interview with a fan and his friends who are waiting to take their seats. The fan describes his experience of watching Nye’s TV show in the classroom as a kid and how they’d all chant “Bill, Bill, Bill” as the TV cart rolled in. Underneath, the sound of the auditorium crowd chanting “Bill, Bill, Bill” plays as the picture cuts to Nye waiting in the wings.

Horner says, “Again, the Foley here keeps us close to Bill while the crowd chants are in deep echo. Then the TV show theme kicks on, blasting through the PA. I embraced the distorted nature of the production recording and augmented it with hall echo and a liberal use of the subwoofer. The energy in this moment is at a peak as Bill takes the stage exclaiming, ‘I love you guys!’ and the title card comes on. This is a great example of how the scene was already cut to communicate the dichotomy within Bill, between his private life and his public persona. By recognizing that intention, the sound team was able to express that paradox more viscerally.”


Jennifer Walden is a New Jersey-based audio engineer and writer.