Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces/delusion-inducing conditions. It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor’s New York City location for both sound and final color. This was colorist Joe Gawler’s first time working with Eggers, but it couldn’t have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and as it happens, Gawler is well versed in the world of black & white. He has remastered a tremendous number of classic titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films, including 8 ½. “To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful,” Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”
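Opacity blending of this kind reduces to simple per-pixel arithmetic. As a rough sketch (this is the standard linear mix, not Resolve's actual internals, and the pixel values are invented for illustration), dialing down a VFX layer's opacity looks like:

```python
def blend_opacity(base, overlay, opacity):
    """Linearly mix an overlay (e.g., a VFX smoke element) over a base
    plate, pixel by pixel. opacity=0.0 shows only the plate; 1.0 shows
    the overlay at full strength."""
    return [b * (1.0 - opacity) + o * opacity for b, o in zip(base, overlay)]

# Illustrative grayscale pixel values, not data from the film.
plate = [0.20, 0.50, 0.80]   # black & white plate with the smokestack
smoke = [0.90, 0.90, 0.90]   # rendered smoke element
print(blend_opacity(plate, smoke, 0.3))  # smoke reduced to 30% opacity
```

Lowering `opacity` toward zero lets more of the underlying plate show through, which is the "punch through one layer to see more or less of another" effect Gawler describes.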

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”
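Crushing blacks can be pictured as a transfer function on the gray scale: everything below a chosen toe level bottoms out at pure black, and the rest is rescaled so highlights survive. A minimal sketch, with an illustrative threshold rather than any value used on the film:

```python
def crush_blacks(pixels, threshold=0.06):
    """Map every value at or below `threshold` to pure black (0.0),
    then rescale the remainder so full white (1.0) is preserved."""
    out = []
    for p in pixels:
        v = (p - threshold) / (1.0 - threshold)
        out.append(min(1.0, max(0.0, v)))  # clip to the legal range
    return out

# A gray ramp: the darkest steps bottom out, the highlights survive.
print(crush_blacks([0.0, 0.03, 0.06, 0.5, 1.0]))
```

On a waveform scope this reads as the trace "bottoming out": the shadow values pile up at zero instead of retaining texture.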

From production to post, Eggers’ goal was to create a film that felt like it could have been pulled from a 1930s film archive. “It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible,” says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime,” delivered in two different accents — Dafoe has an Irish-tinged seasoned-sailor accent and Pattinson has a Down East Maine accent. Additionally, the production location made the dialogue difficult to record, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that whenever it’s raining on screen, the lighthouse is leaking; the water is visible in the shots because they shot it that way. “So the water sound is married to the dialogue. We wanted to have control over the water, so the dialogue had to be looped. Rob wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too, like the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut, and seagulls and waves at Mystic Seaport. “We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife,” says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn’t feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a NAGRA to “get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia,” says Volpe. “We did try tape emulator software, but that didn’t yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they’d be expensive and we’d have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution.”

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units’ magnetic heads added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.
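The RND 542 is an analog 500-series module, so its behavior can't be reproduced exactly in software, but the character it adds (gentle compression of peaks plus added harmonics) is often approximated digitally with a soft-clipping curve. A minimal sketch of that general idea, using a generic tanh saturator rather than anything modeled on the 542's circuit:

```python
import math

def tape_saturate(sample, drive=2.0):
    """Soft-clip one audio sample in [-1.0, 1.0] with a tanh curve,
    a common digital stand-in for magnetic-tape saturation. Higher
    `drive` adds more harmonic distortion; the output is normalized
    so a full-scale input stays full scale."""
    return math.tanh(drive * sample) / math.tanh(drive)

# Quiet material passes with mild coloration; peaks get squeezed.
for s in (0.1, 0.5, 1.0):
    print(f"{s:4.1f} -> {tape_saturate(s):.3f}")
```

Because the curve is fixed rather than automated, how hard it "hits" depends entirely on the level of the incoming material, which matches Volpe's description of finding one setting and letting it rip.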

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’ style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

The gritty and realistic sounds of Joker

By Jennifer Walden

The grit of Gotham City in Warner Bros.’ Joker is painted on in layers, but not in broad strokes of sound. Distinct details are meticulously placed around the Dolby Atmos surround field, creating a soundtrack that is full but not crowded and muddy — it’s alive and clear. “It’s critical to try to create a real feeling world so Arthur (Joaquin Phoenix) is that much more real, and it puts the audience in a place with him,” says re-recording mixer Tom Ozanich, who mixed alongside Dean Zupancic at Warner Bros. Sound in Burbank on Dub Stage 9.

L-R: Tom Ozanich, Unsun Song and Dean Zupancic on Dub Stage 9. Photo: Michael Dressel.

One main focus was to make a city that was very present and oppressive. Supervising sound editor Alan Robert Murray created specific elements to enhance this feeling, while dialogue supervisor Kira Roessler created loop group crowds and callouts that Ozanich could sprinkle throughout the film.

During the street scene near the beginning of the film, Arthur is dressed as a clown and dancing on the sidewalk, spinning a “Going Out of Business” sign. Traffic passes to the left and pedestrians walk around Arthur, who is on the right side of the screen. The Atmos mix reflects that spatiality.

“There are multiple layers of sounds, like callouts of group ADR, specific traffic sounds and various textures of air and wind,” says Zupancic. “We had so many layers that afforded us the ability to play sounds discretely, to lean the traffic a little heavier into the surrounds on the left and use layers of voices and footsteps to lean discretely to the right. We could play very specific dimensions. We didn’t just blanket a bunch of sounds in the surrounds and a bunch of sounds on the front screen. It was extremely important to make Gotham seem gritty and dirty with all those layers.”

The sound effects and callouts didn’t always happen conveniently between lines of principal dialogue. Director Todd Phillips wanted the city to be conspicuous… to feel disruptive. Ozanich says, “We were deliberate with Todd about the placement of literally every sound in the movie. There are a few spots where the callouts were imposing (but not quite distracting), and they certainly weren’t pretty. They didn’t occur in places where it doesn’t matter if someone is yelling in the background. That’s not how it works in real life; we tried to make it more like real life and let these voices crowd in on our main characters.”

Every space feels unique with Gotham City filtering in to varying degrees. For example, in Arthur’s apartment, the city sounds distant and benign. It’s not as intrusive as it is in the social worker’s (Sharon Washington) office, where car horns punctuate the strained conversation. Zupancic says, “Todd was very in tune with how different things would sound in different areas of the city because he grew up in a big city.”

Arthur’s apartment was further defined by director Phillips, who shared specifics like: The bedroom window faces an alley so there are no cars, only voices, and the bathroom window looks out over a courtyard. The sound editorial team created the appropriate tracks, and then the mixers — working in Pro Tools via Avid S6 consoles — applied EQ and reverb to make the sounds feel like they were coming from those windows three stories above the street.

In the Atmos mix, the film’s apposite reverbs and related processing simultaneously define the space on-screen and pull the sound into the theater, immersing the audience in the environment. Zupancic credits his mixing partner. “Tom [Ozanich] did a fabulous job with all of the reverbs and all of the room sound in this movie,” he says. “His reverbs on the dialogue in this movie are just spectacular and spot on.”

For instance, Arthur is waiting in the green room before going on the Murray Franklin Show. Voices from the corridor filter through the door, and when Murray (Robert De Niro) and his stage manager open it to ask Arthur what’s with the clown makeup, the filtering changes on the voices. “I think a lot about the geography of what is happening, and then the physics of what is happening, and I factor all of those things together to decide how something should sound if I were standing right there,” explains Ozanich.

Zupancic says that Ozanich’s reverbs are actually multistep processes. “Tom’s not just slapping on a reverb preset. He’s dialing in and using multiple delays and filters. That’s the key. Sounds of things change in reality — reverbs, pitches, delays, EQ — and that is what you’re hearing in Tom’s reverbs.”

“I don’t think of reverb generically,” elaborates Ozanich, “I think of the components of it, like early reflections, as a separate thought related to the reverb. They are interrelated for sure, but that separation may be a factor of making it real.”
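Treating early reflections as their own component, separate from the diffuse reverb tail, is easy to picture as a handful of discrete delay taps. A toy sketch of just that stage (the tap times and gains here are invented for illustration, not settings from the mix):

```python
def early_reflections(signal, taps):
    """Add a few discrete delayed, attenuated copies of a dry signal:
    the 'early reflections' stage of a reverb. `taps` is a list of
    (delay_in_samples, gain) pairs."""
    out = list(signal) + [0.0] * max(delay for delay, _ in taps)
    for delay, gain in taps:
        for i, sample in enumerate(signal):
            out[i + delay] += sample * gain
    return out

# A single impulse picks up two reflections at 1 and 3 samples.
print(early_reflections([1.0, 0.0, 0.0, 0.0], [(1, 0.6), (3, 0.3)]))
# -> [1.0, 0.6, 0.0, 0.3, 0.0, 0.0, 0.0]
```

Varying the tap pattern independently of the tail is one way a mixer can suggest a nearby concrete wall versus a distant one without changing the overall reverb character.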

One reason the reverbs were so clear is because Ozanich mixed Joker’s score — composed by Hildur Guðnadóttir — wider than usual. “The score is not a part of the actual world, and my approach was to separate the abstract from the real,” explains Ozanich. “In Arthur’s world, there’s just a slight difference between the actual world, where the physical action is taking place, and Arthur’s headspace where the score plays. So that’s intended to have an ever-so-slight detachment from the real world, so that we experience that emotionally and leave the real space feeling that much more real.”

Atmos allows for discrete spatial placement, so Ozanich was able to pull the score apart, pull it into the theater (so it’s not coming from just the front wall), and then EQ each stem to enhance its defining characteristic — what Ozanich calls “tickling the ear.”

“When you have more directionality to the placement of sound, it pulls things wider because rather than it being an ambiguous surround space, you’re now feeling the specificity of something being 33% or 58% back off the screen,” he says.
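Percent placements like those imply a pan position resolved into speaker gains. A common way to keep loudness steady while moving a source is a constant-power pan law; this is a generic sketch of that law, not the math of the Atmos renderer itself:

```python
import math

def constant_power_pan(position):
    """Constant-power pan law: `position` 0.0 favors one speaker,
    1.0 the other. The two gains always satisfy a^2 + b^2 = 1, so
    perceived loudness stays even as a sound is placed, say, 33%
    or 58% back off the screen."""
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

for pos in (0.0, 0.33, 0.58, 1.0):
    a, b = constant_power_pan(pos)
    print(f"{pos:.2f}: gains {a:.3f}/{b:.3f}, power {a*a + b*b:.3f}")
```

The point of the law is that the power sum stays at 1.0 for every position, so a sound can glide off the screen wall without dipping or bumping in level.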

Pulling the score away from the front and defining where it lived in the theater space gave more sonic real estate for the sounds coming from the L-C-Rs, like the distinct slap of a voice bouncing off a concrete wall or Foley sounds like the delicate rustling scratches of Arthur’s fingertips passing over a child’s paintings.

One of the most challenging scenes to mix in terms of effects was the bus ride, in which Arthur makes funny faces at a little boy, trying to make him laugh, only to be admonished by the boy’s mother. Director Phillips and picture editor Jeff Groth had very specific ideas about how that ‘70s-era bus should sound, and Zupancic wanted those sounds to play in the proper place in the space to achieve the director’s vision. “Buses of that era had an overhead rack where people could put packages and bags; we spent a lot of time getting those specific rattles where they should be placed, and where the motor should be and how it would sound from Arthur’s seat. It wasn’t a hard scene to mix; it was just complex. It took a lot of time to get all of that right. Now, the scene just goes by and you don’t pay attention to the little details; it just works,” says Zupancic.

Ozanich notes the opening was a challenging scene as well. The film begins in the clowns’ locker room. There’s a radio broadcast playing, clowns playing cards, and Arthur is sitting in front of a mirror applying his makeup. “Again, it’s not a terribly complex scene on the surface, but it’s actually one of the trickiest in the movie because there wasn’t a super clear lead instrument. There wasn’t something clearly telling you what you should be paying attention to,” says Ozanich.

The scene went through numerous iterations. One version had source music playing the whole time. Another had bits of score instead. There are multiple competing elements, like the radio broadcast and the clowns playing cards and sharing anecdotes. All those voices compete for the audience’s ear. “If it wasn’t tilted just the right way, you were paying attention to the wrong thing or you weren’t sure what you should be paying attention to, which became confusing,” says Ozanich.

In the end, the choice was made to pull out all the music and then shift the balance from the radio to the clowns as the camera passes by them. It then goes back to the radio briefly as the camera pushes in closer and closer on Arthur. “At this point, we should be focusing on Arthur because we’re so close to him. The radio is less important, but because you hear this voice it grabs your attention,” says Ozanich.

The problem was there were no production sounds for Arthur there, nothing to grab the audience’s ear. “I said, ‘He needs to make sound. It has to be subtle, but we need him to make some sound so that we connect to him and feel like he is right there.’ So Kira found some sounds of Joaquin from somewhere else in the film, and Todd did some stuff on a mic. We put the Foley in there and we cobbled together all of these things,” says Ozanich. “Now, it unquestionably sounds like there was a microphone open in front of him and we recorded that. But in reality, we had to piece it all together.”

“It’s a funny little dichotomy of what we are trying to do. There are certain things we are trying to make stick on the screen, to make you buy that the sound is happening right there with the thing that you’re looking at, and then at the same time, we want to pull sounds off of the screen to envelop the audience and put them into the space and not be separated by that plane of the screen,” observes Ozanich.

The Atmos mix on Joker is a prime example of how effective that dichotomy can be. The sounds of the environments, like standing on the streets of Gotham or riding in the subway car, are distinct, dynamic and ever-changing, and the sounds emanating from the characters are realistic and convincing. All of this serves to pull the audience into the story and get them emotionally invested in the tale of this sad, psychotic clown.



The editors of Ad Astra: John Axelrad and Lee Haugen

By Amy Leland

The new Brad Pitt film Ad Astra follows astronaut Roy McBride (Pitt) as he journeys deep into space in search of his father, astronaut Clifford McBride (Tommy Lee Jones). The elder McBride disappeared years before, and his experiments in space might now be endangering all life on Earth. Much of the film features Pitt’s character alone in space with his thoughts, creating a happy challenge for the film’s editing team, who have a long history of collaboration with each other and the film’s director James Gray.

L-R: Lee Haugen and John Axelrad

Co-editors John Axelrad, ACE, and Lee Haugen share credits on three previous films — Haugen served as Axelrad’s apprentice editor on Two Lovers, and the two co-edited The Lost City of Z and Papillon. Ad Astra’s director, James Gray, was also at the helm of Two Lovers and The Lost City of Z. A lot can be said for long-time collaborations.

When I had the opportunity to speak with Axelrad and Haugen, I was eager to find out more about how this shared history influenced their editing process and the creation of this fascinating story.

What led you both to film editing?
John Axelrad: I went to film school at USC and graduated in 1990. Like everyone else, I wanted to be a director; everyone who goes to film school wants that. I then focused on studying cinematography, but several years into film school I realized that I didn’t like being on set.

Not long ago, I spoke to Fred Raskin about editing Once Upon a Time… in Hollywood. He originally thought he was going to be a director, but then he figured out he could tell stories in an air-conditioned room.
Axelrad: That’s exactly it. Air conditioning plays a big role in my life; I can tell you that much. I get a lot of enjoyment out of putting a movie together and of being in my own head creatively and really working with the elements that make the magic. In some ways, there are a lot of parallels with the writer when you’re an editor; the difference is I’m not dealing with a blank page and words — I’m dealing with images, sound and music, and how it all comes together. A lot of people say the first draft is the script, the second draft is the shoot, and the third draft is the edit.

L-R: John and Lee at the Papillon premiere.

I started off as an assistant editor, working for some top editors for about 10 years in the ’90s, including Anne V. Coates. I was an assistant on Out of Sight when Anne Coates was nominated for the Oscar. Those 10 years of experience really prepped me for dealing with what it’s like to be the lead editor in charge of a department — dealing with the politics, the personalities and the creative content and learning how to solve problems. I started cutting on my own in the late ‘90s, and in the early 2000s, I started editing feature films.

When did you meet your frequent collaborator James Gray?
Axelrad: I had done a few horror features, and then I hooked up with James on We Own the Night, and that went very well. Then we did Two Lovers after that. That’s where Lee Haugen came in — and I’ll let him tell his side of the story — but suffice it to say that I’ve done five films for James Gray, and Lee Haugen rose up through the ranks and became my co-editor on The Lost City of Z. Then we edited the movie Papillon together, so it was just natural that we would do Ad Astra together as a team.

What about you, Lee? How did you wind your way to where we are now?
Lee Haugen: Growing up in Wisconsin, any time I had a school project, like writing a story or an article, I would turn it into a short video or short film instead. Back then I had to shoot on VHS and edit tape to tape by pushing play, hitting record and timing it. It took forever, but that was when I really discovered that I loved editing.

So I went to school with a focus on wanting to be an editor. After graduating from Wisconsin, I moved to California and found my way into reality television. That was the mid-2000s and it was the boom of reality television; there were a lot of jobs that offered me the chance to get in the hours needed for becoming a member of the Editors Guild as well as more experience on Avid Media Composer.

After about a year of that, I realized working the night shift as an assistant editor on reality television shows was not my real passion. I really wanted to move toward features. I was listening to a podcast by Patrick Don Vito (editor of Green Book, among other things), and he mentioned John Axelrad. I met John on an interview for We Own the Night when I first moved out here, but I didn’t get the job. But a year or two later, I called him, and he said, “You know what? We’re starting another James Gray movie next week. Why don’t you come in for an interview?” I started working with John the day I came in. I could not have been more fortunate to find this group of people that gave me my first experience in feature films.

Then I had the opportunity to work on a lower-budget feature called Dope, and that was my first feature editing job by myself. The success of the film at Sundance really helped launch my career. Then things came back around. John was finishing up Krampus, and he needed somebody to go out to Northern Ireland to edit the assembly of The Lost City of Z with James Gray. So, it worked out perfectly, and from there, we’ve been collaborating.

Axelrad: Ad Astra is my third time co-editing with Lee, and I find our working as a team to be a naturally fluid and creative process. It’s a collaboration entailing many months of sharing perspectives, ideas and insights on how best to approach the material, and one that ultimately benefits the final edit. Lee wouldn’t be where he is if he weren’t a talent in his own right. He proved himself, and here we are together.

How has your collaborative process changed and grown from when you were first working together (John, Lee and James) to now, on Ad Astra?
Axelrad: This is my fifth film with James. He’s a marvelous filmmaker, and one of the reasons he’s so good is that he really understands the subtlety and power of editing. He’s very neoclassical in his approach, and he challenges the viewer since we’re all accustomed to faster cutting and faster pacing. But with James, it’s so much more of a methodical approach. James is very performance-driven. It’s all about the character, it’s all about the narrative and the story, and we really understand his instincts. Additionally, you need to develop a shorthand with the director and truly understand what he wants.

Working with Lee, it was just a natural process to have the two of us cutting. I would work on a scene, and then I could say, “Hey Lee, why don’t you take a stab at it?” Or vice versa. When James was in the editing room working with us, he would often work intensely with one of us and then switch rooms and work with the other. I think we each really touched almost everything in the film.

Haugen: I agree with John. Our way of working is very collaborative — that includes John and me, but also our assistant editors and additional editors. It’s a process that we feel benefits the film as a whole; when we have different perspectives, it can help us explore different options that can raise the film to another level. And when James comes in, he’s extremely meticulous. As John said, he and I both touched every single scene, and I think we’ve even touched every frame of the film.

Axelrad: To add to what Lee said, about involving our whole editing team, I love mentoring, and I love having my crew feel very involved. Not just technical stuff, but creatively. We worked with a terrific guy, Scott Morris, who is our first assistant editor. Ultimately, he got bumped up during the course of the film and got an additional editor credit on Ad Astra.

We involve everyone, even down to the post assistant. We want to hear their ideas and make them feel like a welcome part of a collaborative environment. They obviously have to focus on their primary tasks, but I think it just makes for a much happier editing room when everyone feels part of a team.

How did you manage an edit that was so collaborative? Did you have screenings of dailies or screenings of cuts?
Axelrad: During dailies it was just James, and we would send edits for him to look at. But James doesn’t really start until he’s in the room. He really wants to explore every frame of film and try all the infinite combinations, especially when you’re dealing with drama and dealing with nuance and subtlety and subtext. Those are the scenes that take the longest. When I put together the lunar rover chase, it was almost easier in some ways than some of the intense drama scenes in the film.

Haugen: As the dailies came in, John and I would each take a scene and do a first cut. And then, once we had something to present, we would call everybody in to watch the scene. We would get everybody’s feedback and see what was working, what wasn’t working. If there were any problems that we could address before moving to the next scene, we would. We liked to get the outside point of view, because once you get further and deeper into the process of editing a film, you do start to lose perspective. To be able to bring somebody else in to watch a scene and to give you feedback is extremely helpful.

One thing that John established with me on Two Lovers — my first editing job on a feature — was allowing me to come and sit in the room during the editing. After my work was done, I was welcome to sit in the back of the room and just observe the interaction between John and James. We continued that process with this film, just to give those people experience and to learn and to observe how an edit room works. That helped me become an editor.

John, you talked about how the action scenes are often easier to cut than the dramatic scenes. It seems like that would be even more true with Ad Astra, because so much of this film is about isolation. How does that complicate the process of structuring a scene when it’s so much about a person alone with his own thoughts?
Axelrad: That was the biggest challenge, but one we were prepared for. To James’ credit, he’s not precious about his written words; he’s not precious about the script. Some directors might say, “Oh no, we need to mold it to fit the script,” but he allows the actors to work within a space. The script is a guide for them, and they bring so much to it that it changes the story. That’s why I always say that we serve the ego of the movie. The movie, in a way, informs us what it wants to be, and what it needs to be. And in the case of this, Brad gave us such amazing nuanced performances. I believe you can sometimes shape the best performance around what is not said through the more nuanced cues of facial expressions and gestures.

So, as an editor, when you can craft something that transcends what is written and what is photographed and achieve a compelling synergy of sound, music and performance — to create heightened emotions in a film — that’s what we’re aiming for. In the case of his isolation, we discovered early on that having voiceover and really getting more interior was important. That wasn’t initially part of the cut, but James had written voiceover, and we began to incorporate that, and it really helped make this film into more of an existential journey.

The further he goes out into space, the deeper we go into his soul, and it’s really a dive into the subconscious. In that sequence where he dives underwater in the cooling liquid of the rocket, then emerges and climbs up the rocket, it’s almost like a dream. Like how in our dreams we have superhuman strength as a way to conquer our demons and our fears. The intent really was to make the film very hypnotic. Some people get it and appreciate it.

As an editor, sound often determines the rhythm of the edit, but one of the things that was fascinating with this film is how deafeningly quiet space likely is. How do you work with the material when it’s mostly silent?
Haugen: Early on, James established that he wanted to make the film as realistic as possible. Sound, or lack of sound, is a huge part of space travel. So the hard part is when you have, for example, the lunar rover chase on the moon, and you play it completely silent; it’s disarming and different and eerie, which was very interesting at first.

But then we started to explore how we could make this sound more realistic or find a way to amplify the action beats through sound. One way was, when things were hitting him or things were vibrating off of his suit, he could feel the impacts and he could hear the vibrations of different things going on.

Axelrad: It was very much part of our rhythm, of how we cut it together, because we knew James wanted to be as realistic as possible. We did what we could with the soundscapes that were allowable for a big studio film like this. And, as Lee mentioned, playing it from Roy’s perspective — being in the space suit with him. It was really just to get into his head and hear things how he would hear things.

Thanks to Max Richter’s beautiful score, we were able to hone the rhythms to induce a transcendental state. We had Gary Rydstrom and Tom Johnson mix the movie for us at Skywalker, and they were the ultimate creators of the balance of the rhythms of the sounds.

Did you work with music in the cut?
Axelrad: James loves to temp with classical music. In previous films, we used a lot of Puccini. In this film, there was a lot of Wagner. But Max Richter came in fairly early in the process and developed such beautiful themes, and we began to incorporate his themes. That really set the mood.

When you’re working with your composer and sound designer, you feed off each other. So things that they would do would inspire us, and we would change the edits. I always tell the composers when I work with them, “Hey, if you come up with something, and you think musically it’s very powerful, let me know, and I am more than willing to pitch changing the edit to accommodate.” Max’s music editor, Katrina Schiller, worked in-house with us and was hugely helpful, since Max worked out of London.

We tend not to want to cut with music because initially you want the edit not to have music as a Band-Aid to cover up a problem. But once we feel the picture is working, and the rhythm is going, sometimes the music will just fit perfectly, even as temp music. And if the rhythms match up to what we’re doing, then we know that we’ve done it right.

What is next for the two of you?
Axelrad: I’m working on a lower-budget movie right now, a Lionsgate feature film. The title is under wraps, but it stars Janelle Monáe, and it’s kind of a socio-political thriller.

What about you Lee?
Haugen: I jumped onto another film as well. It’s an independent film starring Zoe Saldana. It’s called Keyhole Garden, and it’s this very intimate drama that takes place on the border between Mexico and America. So it’s a very timely story to tell.


Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. She also has a feature documentary in post, a feature screenplay in development, and a new doc in pre-production. She is an editor for CBS Sports Network and recently edited the feature “Sundown.” You can follow Amy on social media on Twitter at @amy-leland and Instagram at @la_directora.

Behind the Title: One Thousand Birds sound designer Torin Geller

This sound pro was initially interested in working in a music studio, but once he got a taste of audio post, there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects: dialogue edit, sound design and mix, as well as help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how they can interplay in abstract, interesting ways in animation that aren’t necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.

Human opens new Chicago studio

Human, an audio and music company with offices in New York, Los Angeles and Paris, has opened a Chicago studio headed up by veteran composer/producer Justin Hori.

As a composer, Hori’s work has appeared in advertising, film and digital projects. “Justin’s artistic output in the commercial space is prolific,” says Human partner Gareth Williams. “There’s equal parts poise and fun behind his vision for Human Chicago. He’s got a strong kinship and connection to the area, and we couldn’t be happier to have him carve out our footprint there.”

From learning to DJ at age 13 to working at Gramaphone Records to studying music theory and composition at Columbia College, Hori’s immersion in the Chicago music scene has always influenced his work. He began his career at com/track and Comma Music, before moving to open Comma’s Los Angeles office. From there, Hori joined Squeak E Clean, where he served as creative director for the past five years. He returned to Chicago in 2016.

Hori is known for producing unexpected yet perfectly spot-on pieces of music for advertising, including his track “Da Diddy Da,” which was used in the four-spot summer 2018 Apple iPad campaign. His work has won top industry honors including D&AD Pencils, The One Show, Clio and AICP Awards and the Cannes Gold Lion for Best Use of Original Music.

Meanwhile, Post Human, the audio post sister company run by award-winning sound designer and engineer Sloan Alexander, continues to build momentum with the addition of a second 5.1 mixing suite in NYC. Plans for similar build-outs in both LA and Chicago are currently underway.

With services ranging from composition, sound design and mixing, Human works in advertising, broadcast, digital and film.

Audio post pro Julienne Guffain joins Sonic Union

NYC-based audio post studio Sonic Union has added sound designer/mix engineer Julienne Guffain to its creative team. Working across Sonic Union’s Bryant Park and Union Square locations, Guffain brings over a decade of experience in audio post production to her new role. She has worked on television, film and branded projects for clients such as Google, Mountain Dew, American Express and Cadillac among others.

A Virginia native, Guffain came to Manhattan to attend New York University’s Tisch School of the Arts. She found herself drawn to sound in film, and it was at NYU where she cut her teeth as a Foley artist and mixer on student films and independent projects. She landed her first industry gig at Hobo Audio, working with clients such as The History Channel, The Discovery Channel and mixing the Emmy-winning television documentary series “Rising: Rebuilding Ground Zero.”

Making her way to Crew Cuts, she began lending her talents to a wide range of spot and brand projects, including the documentary feature “Public Figure,” which examines the psychological effects of constant social media use. It is slated for a festival run later this year.

 

Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and artists. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company’s mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to their crew, Shindig has brought on on-site composer Austin Shupe, a former colleague from Hum. Along with Shindig’s in-house composers, the team uses a large pool of freelance talent, matching the genre and/or style that is best suited for a project.

Shindig’s licensing arm has launched a searchable boutique online music library. The studio has tagged every track in its existing catalogue of compositions in a simple, searchable manner on its website, providing new direct access for producers, creatives and editors.

Shindig’s executive team includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O’Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work ranges from recording five mariachi pieces for El Pollo Loco with Vitro, to working with multiple composers to craft five decades of music for Honda’s Evolution commercial via Muse, to orchestrating a virtuoso piano/violin cover of Twisted Sister’s “I Wanna Rock” for a Mitsubishi spot out of BSSP.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking about what app he’s looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot, specifically about the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was to settle on the sound texture of the woman’s voice coming out of the speaker of the gas pump. We tried about five different filtering profiles before settling on the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my SoundMiner server. I added some Foley sound effects of the man handling the gas pump too.
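At its core, that kind of "futz" treatment is band-limiting: a voice is filtered down to the narrow range a tiny speaker can reproduce. Here's a minimal sketch in Python with NumPy/SciPy; the cutoff frequencies and the stand-in signal are illustrative assumptions, not the actual FutzBox settings used on the spot.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Minimal "futz" sketch: band-limit a signal so it reads as a small speaker.
# All parameters are illustrative, not the settings used on the Sunoco spot.
sr = 48000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)  # stand-in for a recorded voice track

# 4th-order Butterworth bandpass covering roughly a tiny-speaker range
sos = butter(4, [300, 3400], btype="bandpass", fs=sr, output="sos")
futzed = sosfilt(sos, voice)

# The 220 Hz fundamental sits below the passband, so it comes out
# attenuated -- that thinning is what makes a voice sound "speaker-like."
```

Plugins like FutzBox layer distortion and noise on top of this, but the band-limiting step is what sells the source.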

Any challenges on this spot?
Besides designing the sound processing on the music and the woman’s voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and not matching from shot to shot. I used iZotope 6 to clean up the dialogue and used its ambience match to create a seamless background ambience. iZotope 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.

Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24 X 13 projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.

Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios to be in and out of sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that, and how it allows me to interact with folks at the studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how there is always so much to learn about our industry through how folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS, but I’d like to have the ability to say if the world lost power tomorrow, I’d be okay in the woods. I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.

Sony creates sounds for Director X’s Superfly remake

Columbia Pictures’ Superfly is a reimagining of Gordon Parks Jr.’s classic 1972 blaxploitation film of the same name. Helmed by Director X and written by Alex Tse, this new version transports the story of Priest from Harlem to modern-day Atlanta.

Steven Ticknor

Superfly’s sound team from Sony Pictures Post Production Services — led by supervising sound editor Steven Ticknor, supervising sound editor and re-recording mixer Kevin O’Connell, re-recording mixer Greg Orloff and sound designer Tony Lamberti — was tasked with bringing the sonic elements of Priest’s world to life. That included everything from building soundscapes for Atlanta’s neighborhoods and nightclubs to supplying the sounds of fireworks, gun battles and car chases.

“Director X and Joel Silver — who produced the movie alongside hip-hop superstar Future, who also curated and produced the film’s soundtrack — wanted the film to have a big sound, as big and theatrical as possible,” says Ticknor. “The film is filled with fights and car chases, and we invested a lot of detail and creativity into each one to bring out their energy and emotion.”

One element that received special attention from the sound team was the Lexus LC500 that Priest (Trevor Jackson) drives in the film. As the sports car was brand new, no pre-recorded sounds were available, so Ticknor and Lamberti dispatched a recording crew and professional driver to the California desert to capture every aspect of its unique engine sounds, tire squeals, body mechanics and electronics. “Our job is to be authentic, so we couldn’t use a different Lexus,” Ticknor explains. “It had to be that car.”

In one of the film’s most thrilling scenes, Priest and the Lexus LC500 are involved in a high-speed chase with a Lamborghini and a Cadillac Escalade. Sound artists added to the excitement by preparing sounds for every screech, whine and gear shift made by the cars, as well as explosions and other events happening alongside them and movements made by the actors behind the wheels.

It’s all much larger than life, says Ticknor, but grounded in reality. “The richness of the sound is a result of all the elements that go into it, the way they are recorded, edited and mixed,” he explains. “We wanted to give each car its own identity, so when you cut from one car revving to another car revving, it sounds like they’re talking to each other. The audience may not be able to articulate it, but they feel the emotion.”

Fights received similarly detailed treatment. Lamberti points to an action sequence in a barber shop as one of several scenes rendered partially in extreme slow motion. “It starts off in realtime before gradually shifting to slo-mo through the finish,” he says. “We had fun slowing down sounds, and processing them in strange and interesting ways. In some instances, we used sounds that had no literal relation to what was happening on the screen but, when slowed down, added texture. Our aim was to support the visuals with the coolest possible sound.”
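The simplest version of that slowing-down trick is naive resampling, or "varispeed": stretch a clip to more samples and play it back at the original rate, which lowers pitch along with speed. This is a sketch of the general technique, not Lamberti's actual processing chain; dedicated film-sound tools typically use granular or phase-vocoder time-stretching so pitch can be controlled separately.

```python
import numpy as np
from scipy.signal import resample

# Naive "varispeed" slow-down sketch: resample a clip to 4x the samples and
# play it back at the original rate, so it lasts 4x longer and drops two
# octaves in pitch. Illustrative only -- not the film's actual workflow.
sr = 48000
t = np.arange(sr // 4) / sr            # a 0.25-second clip
clip = np.sin(2 * np.pi * 440 * t)     # stand-in for a recorded impact

slowdown = 4
slowed = resample(clip, len(clip) * slowdown)  # now 1 second at sr
```

Slowing a sound this way also pulls its spectrum down, which is why slowed recordings take on the unfamiliar, textural quality Lamberti describes.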

Re-recording mixing was accomplished in the 125-seat Anthony Quinn Theater on an Avid S6 console with O’Connell handling dialogue and music and Orloff tackling sound effects and Foley. Like its 1972 predecessor, which featured an iconic soundtrack from Curtis Mayfield, the new film employs music brilliantly. Atlanta-based rapper Future, who shares producer credit, assembled a soundtrack that features Young Thug, Lil Wayne, Miguel, H.E.R. and 21 Savage.

“We were fortunate to have, in Kevin and Greg, a pair of Academy Award-winning mixers who did a brilliant job in blending music, dialogue and sound effects,” says Ticknor. “The mix sessions were very collaborative, with a lot of experimentation to build intensity and make the movie feel bigger than life. Everyone was contributing ideas and challenging each other to make it better, and it all came together in the end.”

Hobo’s Chris Stangroom on providing Quest doc’s sonic treatment

Following a successful film festival run that included winning a 2018 Independent Spirit Award and being named a 2017 official selection at Sundance, the documentary Quest is having its broadcast premiere on PBS this month as part of the POV series.

Chris Stangroom

Filmed with vérité intimacy for nearly a decade, Quest follows the Rainey family who live in North Philadelphia. The story begins at the start of the Obama presidency with Christopher “Quest” Rainey, and his wife Christine (“Ma Quest”) raising a family, while also nurturing a community of hip-hop artists in their home music studio. It’s a safe space where all are welcome, but as the doc shows, this creative sanctuary can’t always shield them from the strife that grips their neighborhood.

New York-based audio post house Hobo, which is no stranger to indie documentary work (Weiner, Amanda Knox, Voyeur), lent its sonic skills to the film, including the entire sound edit (dialogue, effects and music), sound design, 5.1 theatrical and broadcast mixes.

We spoke with Hobo’s Chris Stangroom, supervising sound editor/re-recording mixer on the project, about the challenges he and the Hobo team faced in their quest on this film.

Broadly speaking what did you and Hobo do on this project? How did you get involved?
We handled every aspect of the audio post on Quest for its Sundance Premiere, theatrical run and broadcast release of the film on POV.

This was my first time working with director Jonathan Olshefski, and I loved every minute of it. The entire team on Quest was focused on making this film better with every decision, and he had to be the final voice on everything. We were connected through my friend, producer Sabrina Gordon, who I had previously worked with on the film Undocumented. It was a pretty quick turn of events, as I think I got the first call about the film Thanksgiving weekend of 2016. We started working on the film the day after Christmas that year and finished the entire sound edit and mix two weeks later for the 2017 Sundance Film Festival.

How important is the audio mix/sound design in the overall cinematic experience of Quest? What was most important to Olshefski?
The sound of a film is half of the experience. I know it sounds cliché, but after years of working with clients on improving their films, the importance of a good sound mix and edit can’t be overstated. I have seen films come to life by simply adding Foley to a few intimate moments in a scene. It seems like such a small detail in the grand scheme of a film’s soundtrack, but feeling that intimacy with a character connects us to them in a visceral way.

Since Quest was a film not only about the Rainey family but also their neighborhood of North Philly, I spent a lot of time researching the sounds of Philadelphia. I gathered a lot of great references and insight from friends who had grown up in Philly, like the sounds of “ghetto birds” (helicopters), the motorbikes that are driven around constantly and the SEPTA buses. As Jon and I spoke about the film’s soundtrack, those kinds of sounds and ideas were exactly what he was looking for when we were out on the streets of North Philly. It created an energy to the film that made it vivid and alive.

The film was shot over a 10-year period. How did that prolonged production affect the audio post? Were there format issues or other technical issues you needed to overcome?
It presented some challenges, but luckily Jon always recorded with a lav or a boom on his camera for the interviews, so matching their sound qualities was easier than if he had just been using a camera mic. There are probably half a dozen “narrated” scenes in Quest that are built from interview sound bites, so bouncing around from interviews 10 years apart was tricky and required a lot of attention to detail.

In addition, Quest‘s phenomenal editor Lindsay Utz was cutting scenes up until the last day of our sound mix. So even once we got an entire scene sounding clean and balanced, it would then change and we’d have to add a new line from some other interview during that decade-long period. She definitely kept me on my toes, but it was all to make the film better.

Music is a big part of the family’s lives. Did the fact that they run a recording studio out of their home affect your work?
Yes. The first thing I did once we started on the film was to go down to Quest’s studio in Philly and record “impulse responses” (IRs) of the space, essentially recording the “sound” of a room or space. I wanted to bring that feeling of the natural reverbs in his studio and home to the film. I captured the live room where the artists would be recording, his control room in the studio and even the hallway leading to the studio with doors opened and closed, because sound changes and becomes more muffled as more doors are shut between the microphone and the sound source. The IRs helped me add incredible depth and the feeling that you were there with them when I was mixing the freestyle rap sessions and any scenes that took place in the home and studio.
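Once captured, an IR is applied to a dry recording by convolution — the principle behind convolution reverbs such as the Altiverb that Stangroom lists in his toolkit. A minimal sketch in Python with NumPy/SciPy, using synthesized decaying noise as a stand-in for a real room recording:

```python
import numpy as np
from scipy.signal import fftconvolve

# Convolution reverb sketch: convolving a dry signal with a room's impulse
# response (IR) places that sound "in" the room. The IR here is synthetic
# decaying noise standing in for an actual recording of a space.
sr = 48000
dry = np.zeros(sr)
dry[0] = 1.0                            # one-second dry "click"

t = np.arange(sr // 2) / sr
ir = np.random.randn(sr // 2) * np.exp(-6 * t)  # toy half-second room tail

wet = fftconvolve(dry, ir)              # room character applied to the click
wet /= np.max(np.abs(wet))              # normalize to avoid clipping
# Full convolution length is len(dry) + len(ir) - 1 samples: the output
# rings on after the input stops, just like a real room.
```

Because the IRs were recorded in Quest's actual studio and hallways, convolving dialogue or music with them recreates those exact spaces rather than a generic algorithmic reverb.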

Jon and I also grabbed dozens of tracks that Quest had produced over the years so that we could weave them into the film in subtle ways, like playing from a passing car or from someone’s headphones. It’s those kinds of little details that I love adding, like Easter eggs that only a handful of us know about. They make me smile whenever I watch a film.

Any particular scene or section or aspect of Quest that you found most challenging or interesting to work on?
The scenes involving Quest’s daughter PJ’s injury, her stay in the hospital and her return home came with a lot of challenges. We used sound design and the score from the amazing composer T. Griffin to build the sense that something dangerous and life-changing was about to happen.

Once we were in the hospital, we wanted the sound of everything to be very, very quiet. There is a scene in which Quest is whispering to PJ while she is in pain and trying to recover. The actual audio from that moment had a few nurses and women in the background having a loud conversation and occasionally laughing. It took the viewer immediately away from the emotions that we were trying to connect with, so we ended up scrapping that entire audio track and recreating the scene from scratch. Jon actually got in the sound booth and did some very low, quiet whispering of the kinds of phrases Quest said to his daughter. It took a couple of hours to finesse that scene.

Lastly, there’s the scene when PJ gets out of the hospital and returns to a world that didn’t stop while she was recovering. We spent a lot of time shifting back and forth between the reality of what happened and the emotional journey PJ was going through while trying to regain normalcy in her life. That scene demanded a lot of attention to detail in the mix, because it had to be delivered correctly to avoid breaking the momentum that had been built.

What was the key technology you used on the project?
Avid Pro Tools, iZotope RX 5 Advanced, Audio Ease Altiverb, a Zoom H4n and a matched stereo pair of sE Electronics sE1a condenser mics.

Who else at Hobo was involved in Quest?
The entire Hobo team really stepped up on this project — namely our sound effects editors Stephen Davies, Diego Jimenez and Julian Angel; Foley artist Oscar Convers; and dialogue editor Jesse Peterson.

Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode counts. A season can have 13 episodes, or 10, or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned a 2013 Oscar nom for sound, an MPSE nom and a BAFTA film nom for sound) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, as well as a nomination for editing the dialogue and ADR. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds: (L-R) Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialog and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through a cloud-based film management/collaboration tool called Pix. Pix let the showrunners, VFX supervisor, composer, sound team and picture team share content and share notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialog and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music by playing with repeated rhythms. Breaking the anticipated rhythm helped keep the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.

Pacific Rim: Uprising‘s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on such a grand scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu starting June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots who must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo Del Toro (a fan of the Portal game series) had actress Ellen McLain as the voice of Gipsy Avenger since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound featuring theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with Waves MaxxBass, a sub-harmonic synthesis plug-in, to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”
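MaxxBass itself is a proprietary Waves processor, but the oldest trick in the sub-harmonic family is the octave divider: flip a square wave’s polarity once per cycle of the input, and the result is a tone one octave below. The Python sketch below is a purely illustrative toy version of that idea, not how the plug-in is actually implemented.

```python
import math

def octave_down(samples):
    """Naive octave divider: flip a square wave's polarity at each
    upward zero crossing of the input. One flip per input cycle means
    the output completes a cycle every TWO input cycles -- an octave below.
    """
    out, state, prev = [], 1.0, samples[0]
    for x in samples:
        if prev <= 0.0 < x:   # upward zero crossing detected
            state = -state
        out.append(state)
        prev = x
    return out

# Four cycles of a sine in -> two cycles of a square out (half the frequency)
sine = [math.sin(2 * math.pi * i / 8) for i in range(32)]
sub = octave_down(sine)
```

In practice the raw square is filtered and mixed under the original signal; the point is just that halving the rate of polarity flips halves the perceived pitch.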

That was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja, and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear which palette of sounds was working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic-machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw bird shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”


Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their whole library when looking for Pacific Rim: Uprising sounds. The show has its own library of specially designed signature sounds, all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, such as when the Kaiju come together to form the Mega-Kaiju. “As the action escalates, it goes off-camera, it was more of a shadow and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second-to-none but the people there are even better than that. “We have been very fortunate to work with great people, from Steven DeKnight our director to Dylan Highsmith our picture editor to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Capturing, creating historical sounds for AMC’s The Terror

By Jennifer Walden

It’s September 1846. Two British ships — the HMS Erebus and HMS Terror — are on an exploration to find the Northwest Passage to the Pacific Ocean. The expedition’s leader, British Royal Navy Captain Sir John Franklin, leaves the Erebus to dine with Captain Francis Crozier aboard the Terror. A small crew rows Franklin across the frigid, ice-choked Arctic Ocean that lies north of Canada’s mainland to the other vessel.

The opening overhead shot of the two ships in AMC’s new series The Terror (Mondays 9/8c) gives the audience an idea of just how large those ice chunks are in comparison with the ships. It’s a stunning view of the harsh environment, a view that was completely achieved with CGI and visual effects because this series was actually shot on a soundstage at Stern Film Studio, north of Budapest, Hungary.

 Photo Credit: Aidan Monaghan/AMC

Emmy- and BAFTA-award-winning supervising sound editor Lee Walpole of Boom Post in London, says the first cut he got of that scene lacked the VFX, and therefore required a bit of imagination. “You have this shot above the ships looking down, and you see this massive green floor of the studio and someone dressed in a green suit pushing this boat across the floor. Then we got the incredible CGI, and you’d never know how it looked in that first cut. Ultimately, mostly everything in The Terror had to be imagined, recorded, treated and designed specifically for the show,” he says.

Sound plays a huge role in the show. Literally everything you hear (except dialogue) was created in post — the constant Arctic winds, the footsteps out on the packed ice and walking around on the ship, the persistent all-male murmur of 70 crew members living in a 300-foot space, the boat creaks, the ice groans and, of course, the creature sounds. The pervasive environmental sounds sell the harsh reality of the expedition.

Thanks to the sound and the CGI, you’d never know this show was shot on a soundstage. “It’s not often that we get a chance to ‘world-create’ to that extent and in that fashion,” explains Walpole. “The sound isn’t just there in the background supporting the story. Sound becomes a principal character of the show.”

Bringing the past to life through sound is one of Walpole’s specialties. He’s created sound for The Crown, Peaky Blinders, Klondike, War & Peace, The Imitation Game, The King’s Speech and more. He takes a hands-on approach to historical sounds, like recording location footsteps in Lancaster House for the Buckingham Palace scenes in The Crown, and recording the sounds on-board the Cutty Sark for the ships in To the Ends of the Earth (2005). For The Terror, his team spent time on-board the Golden Hind, which is a replica of Sir Francis Drake’s ship of the same name.

During a 5am recording session, the team — equipped with a Sound Devices 744T recorder and a Schoeps CMIT 5U mic — captured footsteps in all of the rooms on-board, pick-ups and put-downs of glasses and cups, drops of various objects on different surfaces, gun sounds and a selection of rigging, pulleys and rope moves. They even recorded hammering. “We took along a wooden plank and several hammers,” describes Walpole. “We laid the plank across various surfaces on the boat so we could record the sound of hammering resonating around the hull without causing any damage to the boat itself.”

They also recorded footsteps in the ice and snow and reached out to other sound recordists for snow and ice footsteps. “We wanted to get an authentic snow creak and crunch, to have the character of the snow marry up with the depth and freshness of the snow we see at specific points in the story. Any movement from our characters out on the pack ice was track-laid, step-by-step, with live recordings in snow. No studio Foley feet were recorded at all,” says Walpole.

In The Terror, the ocean freezes around the two ships, immobilizing them in pack ice that extends for miles. As the water continues to freeze, the ice grows and it slowly crushes the ships. In the distance, there’s the sound of the ice growing and shifting (almost like tectonic plates), which Walpole created from sourced hydrophone recordings from a frozen lake in Canada. The recordings had ice pings and cracking that, when slowed and pitched down, sounded like massive sheets of ice rubbing against each other.
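Slowing and pitching down together is the classic “varispeed” (tape-speed) treatment: read a recording back more slowly and its pitch drops with it, which is how a small hydrophone ping can become a huge tectonic groan. A minimal, hypothetical Python sketch of the idea (real tools resample with much better interpolation):

```python
def varispeed(samples, factor):
    """Tape-style slow-down: stretch the waveform by `factor`.

    Reading the source more slowly lengthens it AND drops its pitch
    together (factor 2.0 = half speed = one octave down).
    """
    out = []
    n = len(samples)
    i = 0.0
    step = 1.0 / factor               # advance through the source more slowly
    while i < n - 1:
        lo = int(i)
        frac = i - lo
        # linear interpolation between neighbouring source samples
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
        i += step
    return out

# Half-speed playback: 4 samples in, 6 out, every frequency halved
slowed = varispeed([0.0, 1.0, 0.0, -1.0], 2.0)
```

This is distinct from modern pitch-shifters that change pitch without changing duration; for sound design work, the coupled slow-down is often exactly the effect you want.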

Effects editor Saoirse Christopherson capturing sounds on board a kayak in the Thames River.

The sounds of the ice rubbing against the ships were captured by one of the show’s sound effects editors, Saoirse Christopherson, who, along with an assistant, boarded a kayak and paddled out onto the frozen Thames River. Using a Røde NT2 and a Roland R26 recorder with several contact mics strapped to the kayak’s hull, they spent the day grinding through, over and against the ice. “The NT2 was used to directionally record both the internal impact sounds of the ice on the hull and also any external ice creaking sounds they could generate with the kayak,” says Walpole.

He slowed those recordings down significantly and used EQ and filters to bring out the low-mid to low-end frequencies. “I also fed them through custom settings on my TC Electronic reverbs to bring them to life and to expand their scale,” he says.
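Walpole’s actual chain ran through dedicated EQs and TC Electronic reverbs, but the first two steps he describes — slowing a recording down (which also drops its pitch) and filtering to favor the low-mid range — can be sketched in a few lines. This is a minimal illustration, not his toolchain; the function names and parameters are my own.

```python
import math

def slow_down(samples, factor):
    """Stretch a waveform by `factor` via linear interpolation.
    Played back at the original rate, the result is both slower and
    pitched down by the same factor (tape-style varispeed)."""
    out = []
    n = len(samples)
    for i in range(int((n - 1) * factor) + 1):
        pos = i / factor
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, n - 1)]
        out.append(samples[j] * (1 - frac) + frac * nxt)
    return out

def low_pass(samples, cutoff_hz, sample_rate):
    """One-pole low-pass: rolls off the highs, keeping low-mid weight."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# A 1kHz test tone, slowed 4x (so it sounds at 250Hz), then darkened.
sr = 8000
tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(sr // 10)]
processed = low_pass(slow_down(tone, 4.0), 500, sr)
```

A four-times stretch is roughly two octaves down, which is why hydrophone pings treated this way start to read as massive ice sheets rather than small cracks.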

The pressure of the ice is slowly crushing the ships, and as the season progresses the situation escalates to the point where the crew can’t imagine staying there another winter. To tell that story through sound, Walpole began with recordings of windmill creaks and groans. “As the situation gets more dire, the sound becomes shorter and sharper, with close, squealing creaks that sound as though the cabins themselves are warping and being pulled apart.”

In the first episode, the Erebus runs aground on the ice and the crew tries to hack and saw the ice away from the ship. Those sounds were recorded by Walpole attacking the frozen pond in his backyard with axes and a saw. “That’s my saw cutting through my pond, and the axe material is used throughout the show as they are chipping away around the boat to keep the pack ice from engulfing it.”

Whether the crew is on the boat or on the ice, the sound of the Arctic is ever-present. Around the ships, the wind rips over the hulls and howls through the rigging on deck. It gusts and moans outside the cabin windows. Out on the ice, the wind constantly groans or shrieks. “Outside, I wanted it to feel almost like an alien planet. I constructed a palette of designed wind beds for that purpose,” says Walpole.

He treated recordings of wind howling through various cracks to create a sense of blizzard winds outside the hull. He also sourced recordings of wind at a disused Navy bunker. “It’s essentially these heavy stone cells along the coast. I slowed these recordings down a little and softened all of them with EQ. They became the ‘holding airs’ within the boat. They felt heavy and dense.”

Below Deck
In addition to the heavy-air atmospheres, another important sound below deck was that of the crew. The ships were entirely occupied by men, so Walpole needed a wide and varied palette of male-only walla to sustain a sense of life on-board. “There’s not much available in sound libraries, or in my own library — and certainly not enough to sustain a 10-hour show,” he says.

So they organized a live crowd recording session with a group of men from CADS — an amateur dramatics society from Churt, just outside of London. “We gave them scenarios and described scenes from the show and they would act it out live in the open air for us. This gave us a really varied palette of worldized effects beds of male-only crowds that we could sit the loop group on top of. It was absolutely invaluable material in bringing this world to life.”

Visually, the rooms and cabins are sometimes quite similar, so Walpole used sound to help the audience understand where they are on the ship. In his cutting room, he had the floor plans of both ships taped to the walls so he could see their layouts. Life on the ship is mainly concentrated on the lower deck — the level directly below the upper deck. Here is where the men sleep. It also has the canteen area, various cabins and the officers’ mess.

Below that is the Orlop deck, where there are workrooms and storerooms. Then below that is the hold, which is permanently below the waterline. “I wanted to be very meticulous about what you would hear at the various levels on the boat and indeed the relative sound level of what you are hearing in these locations,” explains Walpole. “When we are on the lower two decks, you hear very little of the sound of the men above. The soundscapes there are instead focused on the creaks and the warping of the hull and the grinding of the ice as it crushes against the boat.”

One of Walpole’s favorite scenes is the beginning of Episode 4. Capt. Francis Crozier (Jared Harris) is sitting in his cabin listening to the sound of the pack ice outside, and the room sharply tilts as the ice shifts the ship. The scene offers an opportunity to tell a cause-and-effect story through sound. “You hear the cracks and pings of the ice pack in the distance and then that becomes localized with the kayak recordings of the ice grinding against the boat, and then we hear the boat and Crozier’s cabin creak and pop as it shifts. This ultimately causes his bottle to go flying across the table. I really enjoyed having this tale of varying scales. You have this massive movement out on the ice and the ultimate conclusion of it is this bottle sliding across the table. It’s very much a sound moment because Crozier is not really saying anything. He’s just sitting there listening, so that offered us a lot of space to play with the sound.”

The Tuunbaq
The crew in The Terror isn’t just battling the elements, scurvy, starvation and mutiny. They’re also being killed off by a polar bear-like creature called the Tuunbaq. It’s part animal, part mythical creature that is tied to the land and spirits around it. The creature is largely unseen for the first part of the season so Walpole created sonic hints as to the creature’s make-up.

Walpole worked with showrunner David Kajganich to find the creature’s voice. Kajganich wanted the creature to convey a human intelligence, and he shared recordings of human exorcisms as reference material. They hired voice artist Atli Gunnarsson to perform parts to picture, which Walpole then fed into the Dehumaniser plug-in by Krotos. “Some of the recordings we used raw as well,” says Walpole. “This guy could make these crazy sounds. His voice could go so deep.”

Those performances were layered into the track alongside recordings of real bears, which gave the sound the correct diaphragm, weight, and scale. “After that, I turned to dry ice screeches and worked those into the voice to bring a supernatural flavor and to tie the creature into the icy landscape that it comes from.”

Lee Walpole

In Episode 3, an Inuit character named Lady Silence (Nive Nielsen) is sitting in her igloo and the Tuunbaq arrives snuffling and snorting on the other side of the door flap. Then the Tuunbaq begins to “sing” at her. To create that singing, Walpole reveals that he pulled Lady Silence’s performance of The Summoning Song (the song her people use to summon the Tuunbaq to them) from a later episode and fed that into Dehumaniser. “This gave me the creature’s version. So it sounds like the creature is singing the song back to her. That’s one for the diehards who will pick up on it and recognize the tune,” he says.

Since the series is shot on a soundstage, there’s no usable bed of production sound to act as a jumping off point for the post sound team. But instead of that being a challenge, Walpole finds it liberating. “In terms of sound design, it really meant we had to create everything from scratch. Sound plays such a huge role in creating the atmosphere and the feel of the show. When the crew is stuck below decks, it’s the sound that tells you about the Arctic world outside. And the sound ultimately conveys the perils of the ship slowly being crushed by the pack ice. It’s not often in your career that you get such a blank canvas of creation.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Review: Krotos Reformer Pro for customizing sounds

By Robin Shore

Krotos has got to be one of the most innovative developers of sound design tools in the industry right now. That is a strong statement, but I stand by it. This Scottish company has become well known over the past few years for its Dehumaniser line of products, which bring a fresh approach to the creation of creature vocals and monster sounds. Recently, they released a new DAW plugin, Reformer Pro, which aims to give sound editors creative new ways of accessing and manipulating their sound effects.

Reformer Pro brings a procedural approach to working with sound effects libraries. According to their manual, “Reformer Pro uses an input to control and select segments of prerecorded audio automatically, and recompiles them in realtime, based on the characteristics of the incoming signal.” In layman’s terms this means you can “perform” sound effects from a library in realtime, using only a microphone and your voice.
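Reformer Pro’s analysis is proprietary, but the matching idea the manual describes can be shown in toy form: measure the incoming signal frame by frame and, for each frame, emit the prerecorded grain whose level is the closest match. Everything below — the frame size, the RMS matching criterion, the names — is an illustrative assumption, far simpler than Krotos’ actual processing.

```python
def frame_rms(samples, frame_len):
    """RMS level of each non-overlapping frame of the signal."""
    return [
        (sum(s * s for s in samples[i:i + frame_len]) / frame_len) ** 0.5
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def reform(input_sig, grains, frame_len=4):
    """For each input frame, output the library grain whose RMS is
    closest to that frame's RMS, so the result follows the input's
    dynamics and rhythm (a toy stand-in for Reformer Pro's matching)."""
    grain_levels = [frame_rms(g, len(g))[0] for g in grains]
    out = []
    for level in frame_rms(input_sig, frame_len):
        best = min(range(len(grains)),
                   key=lambda k: abs(grain_levels[k] - level))
        out.extend(grains[best])
    return out

# A pretend two-grain library: a loud-then-quiet input pattern
# selects the matching grains in order.
quiet = [0.1, -0.1, 0.1, -0.1]
loud = [0.9, -0.9, 0.9, -0.9]
result = reform([0.8, -0.8, 0.8, -0.8, 0.05, -0.05, 0.05, -0.05],
                [quiet, loud], frame_len=4)
```

This is why breathing into a mic “performs” a leopard growl: the envelope of your voice drives which recorded material plays, and when.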

It’s dead simple to use. A menu inside the plugin lets you choose from a list of libraries that have been pre-analyzed for use with Reformer Pro. Once you’ve loaded up the library you want, all that’s left to do is provide some sort of sonic input and let the magic happen. Whatever sound you put in will be instantly “reformed” into a new sound effect of your choosing. A number of libraries come bundled in when you buy Reformer Pro and additional libraries can be purchased from the Krotos website. The choice to include the Black Leopard library as a default when you first open the plugin was a very good one. There is just something so gratifying about breathing and grunting into a microphone and hearing a deep menacing growl come out the speakers instead of your own voice. It made me an immediate fan.

There are a few knobs and switches that let you tweak the response characteristics of Reformer Pro’s output, but for the most part you’ll be using sound to control things, and the amount of control you can get over the dynamics and rhythm of Reformer Pro’s output is impressive. While my immediate instinct was to drive Reformer Pro by vocalizing through a mic, any sound source can work well as an input. I also got great results by rubbing and tapping my fingers directly against the grill of a microphone and by dragging the mic across the surface of my desk.

Things get even more interesting if you start feeding pre-recorded audio into Reformer Pro. Using a Foley footstep track as the input for a library of cloth and leather sounds creates a realistic and perfectly synced rustle track. A howling wind used as the input for a library of creaks and rattles can add a nice layer of texture to a scene’s ambience tracks. Pumping music through Reformer Pro can generate some really wacky sounds and is a great way to find inspiration and test out abstract sound design ideas.

If the only libraries you could use with Reformer Pro were the 100 or so available on the Krotos website, it would still be a fun and innovative tool, but its utility would be pretty limited. What makes Reformer Pro truly powerful is its analysis tool. This lets you create custom libraries out of sounds from your own collection. The possibilities here are practically endless. As long as a sound exists, it can be turned into a unique new library. To be sure, some sounds are better for this than others, but it doesn’t take long at all to figure out what kinds of sounds work best, and I was pleasantly surprised with how well most of the custom libraries I created turned out. This is a great way to breathe new life into an old sound effects collection.

Summing Up
Reformer Pro adds a sense of liveliness, creativity and, most importantly, fun to the often tedious task of syncing sound effects to picture. Anyone who spends their days working with sound effects would be doing themselves a disservice by not taking Reformer Pro for a test drive. I imagine most will be both impressed and excited by its novel approach to sound effects editing and design.


Robin Shore is an audio engineer at NYC’s Silver Sound Studios

Behind the Title: PlushNYC partner/mixer Mike Levesque, Jr.

NAME: Michael Levesque, Jr.

COMPANY: PlushNYC

CAN YOU DESCRIBE YOUR COMPANY?
We provide audio post production.

WHAT’S YOUR JOB TITLE?
Partner/Mixer/Sound Designer

WHAT DOES THAT ENTAIL?
The foundation of it all for me is that I’m a mixer and a sound designer. I became a studio owner/partner organically because I didn’t want to work for someone else. The core of my role is giving my clients what they want from an audio post perspective. The other parts of my job entail managing the staff, working through technical issues, empowering senior employees to excel in their careers and coaching junior staff when given the opportunity.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Every day I find myself being the janitor in many ways! I’m a huge advocate of leading by example and I feel that no task is too mundane for any team member to take on. So I don’t cast shade on picking up a mop or broom, and also handle everything else above that. I’m a part of a team, and everyone on the team participates.

During our latest facility remodel, I took a very hands-on approach. As a bit of a weekend carpenter, I naturally gravitate toward building things, and that was no different in the studio!

WHAT TOOLS DO YOU USE?
Avid Pro Tools. I’ve been operating on Pro Tools since 1997 and was one of the early adopters. Initially, I started out on analog ¼-inch tape and later moved to the digital editing system SSL ScreenSound. I’ve been using Pro Tools since its humble beginnings, and that is my tool of choice.

WHAT’S YOUR FAVORITE PART OF THE JOB?
For me, my favorite part about the job is definitely working with the clients. That’s where I feel I am able to put my best self forward. In those shoes, I have the most experience. I enjoy the conversation that happens in the room, the challenges that I get from the variety of projects and working with the creatives to bring their sonic vision to life. Because of the amount of time I spend in the studio with my clients, one of the great results, besides the work itself, is wonderful long-term friendships. You get to meet a lot of different people and experience a lot of different walks of life, and that’s incredibly rewarding for me.

WHAT’S YOUR LEAST FAVORITE?
We’ve been really lucky to have regular growth over the years, but the logistics of that can be challenging at times. Expansion in NYC is a constant uphill battle!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The train ride in. With no distractions, I’m able to get the most work done. It’s quiet and allows me to be able to plan my day out strategically while my clarity is at its peak. That way I can maximize my day and analyze and prioritize what I want to get done before the hustle and bustle of the day begins.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I weren’t a mixer/sound designer, I would likely be a general contractor or in a role where I was dealing with building and remodeling houses.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started when I was 19 and I knew pretty quickly that this was the path for me. When I first got into it, I wanted to be a music producer. Being a novice musician, it was very natural for me.

Borgata

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I recently worked on a large-scale project for Frito-Lay, a project for ProFlowers and Shari’s Berries for Valentine’s Day, a spot for Massage Envy and a campaign for the Broadway show Rocktopia. I’ve also worked on a number of projects for Vevo, including pieces for The World According To… series for artists — that includes a recent one with Jaden Smith. I also recently worked on a spot with SapientRazorfish New York for Borgata Casino that goes on a colorful, dreamlike tour of the casino’s app.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Back in the early 2000s, I mixed a DVD box set called Journey Into the Blues, a PBS film series from Martin Scorsese that won a Grammy for Best Historical Album and Best Album Notes.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– My cell phone to keep me connected to every aspect of life.
– My Garmin GPS Watch to help me analytically look at where I’m performing in fitness.
– Pro Tools to keep the audio work running!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m an avid triathlete, so personal wellness is a very big part of my life. Training daily is a really good stress reliever, and it allows me to focus both at work and at home with the kids. It’s my meditation time.

Super Bowl: Heard City’s audio post for Tide, Bud and more

By Jennifer Walden

New York audio post house Heard City put their collaborative workflow design to work on the Super Bowl ad campaign for Tide. Philip Loeb, partner/president of Heard City, reports that their facility is set up so that several sound artists can work on the same project simultaneously.

Loeb also helped to mix and sound design many of the other Super Bowl ads that came to Heard City, including ads for Budweiser, Pizza Hut, Blacture, Tourism Australia and the NFL.

Here, Loeb and mixer/sound designer Michael Vitacco discuss the approach and the tools that their team used on these standout Super Bowl spots.

Philip Loeb

Tide’s It’s a Tide Ad campaign via Saatchi & Saatchi New York
Is every Super Bowl ad really a Tide ad in disguise? A string of commercials touting products from beer to diamonds, and even a local ad for insurance, are interrupted by David Harbour (of Stranger Things fame). He declares that those ads are actually just Tide commercials, as everyone is wearing such clean clothes.

Sonically, what’s unique about this spot?
Loeb: These spots, four in total, involved sound design and mixing, as well as ADR. One of our mixers, Evan Mangiamele, conducted an ADR session with David Harbour, who was in Hawaii, and we integrated that into the commercial. In addition, we recorded a handful of different characters for the lead-ins for each of the different vignettes because we were treating each of those as different commercials. We had to be mindful of a male voiceover starting one and then a female voiceover starting another so that they were staggered.

There was one vignette for Old Spice, and since the ads were for P&G, we did get the Old Spice mnemonic and we did try something different at the end — with one version featuring the character singing the mnemonic and one of him whistling it. There were many different variations and we just wanted, in the end, to get part of the mnemonic into the joke.

The challenge with the Tide campaign, in particular, was to make each of these vignettes feel like it was a different commercial and to treat each one as such. There’s an overall mix level that goes into that but we wanted certain ones to have a little bit more dynamic range than the others. For example, there is a cola vignette that’s set on a beach with people taking a selfie. David interrupts them by saying, “No, it’s a Tide ad.”

For that spot, we had to record a voiceover that was very loud and energetic to go along with a loud and energetic music track. That vignette cuts into the “personal digital assistant” (think Amazon’s Alexa) spot. We had to be very mindful of these ads flowing into each other while making it clear to the viewer that these were different commercials with different products, not one linear ad. Each commercial required its own voiceover, its own sound design, its own music track, and its own tone.

One vignette was about car insurance featuring a mechanic in a white shirt under a car. That spot isn’t letterbox like the others; it’s 4:3 because it’s supposed to be a local ad. We made that vignette sound more like a local ad; it’s a little over-compressed, a little over-equalized and a little videotape sounding. The music is mixed a little low. We wanted it to sound like the dialogue is really up front so as to get the message across, like a local advertisement.

What’s your workflow like?
Loeb: At Heard City, our workflow is unique in that we can have multiple mixers working on the same project simultaneously. This collaborative process makes our work much more efficient, and that was our original intent when we opened the company six years ago. The model came to us by watching the way that the bigger VFX companies work. Each artist takes a different piece of the project and then all of the work is combined at the end.

We did that on the Tide campaign, and there was no other way we could have done it due to the schedule. Also, we believe this workflow provides a much better product. One sound artist can be working specifically on the sound design while another can be mixing. So as I was working on mixing, Evan was flying in his sound design to me. It was a lot of fun working on it like that.

What tools helped you to create the sound?
One plug-in we’re finding to be very helpful is the iZotope Neutron. We put that on the master bus and we have found many settings that work very well on broadcast projects. It’s a very flexible tool.

Vitacco: The Neutron has been incredibly helpful overall in balancing out the mix. There are some very helpful custom settings that have helped to create a dynamic mix for air.

Tourism Australia Dundee via Droga5 New York
Danny McBride and Chris Hemsworth star in this movie-trailer-turned-tourism-ad for Australia. It starts out as a movie trailer for a new addition to the Crocodile Dundee film franchise — well, rather, a spoof of it. There’s epic music featuring a didgeridoo and title cards introducing the actors and setting up the premise for the “film.” Then there’s talk of miles of beaches and fine wine and dining. It all seems a bit fishy, but finally Danny McBride confirms that this is, in fact, actually a tourism ad.

Sonically, what’s unique about this spot?
Vitacco: In this case, we were creating a fake movie trailer that’s a misdirect for the audience, so we aimed to create sound design that was both in the vein of being big and epic and also authentic to the location of the “film.”

One of the things that movie trailers often draw upon is a consistent mnemonic to drive home a message. So I helped to sound design a consistent mnemonic for each of the title cards that come up.

For this I used some Native Instruments toolkits, like “Rise & Hit” and “Gravity,” and Tonsturm’s Whoosh software to supplement some existing sound design to create that consistent and branded mnemonic.

In addition, we wanted to create an authentic sonic palette for the Australian outback where a lot of the footage was shot. I had to be very aware of the species of animals and insects that were around. I drew upon sound effects that were specifically from Australia. All sound effects were authentic to that entire continent.

Another factor that came into play was that anytime you are dealing with a spot that has a lot of soundbites, especially ones recorded outside, there tends to be a lot of noise reduction taking place. I didn’t have to hit it too hard because everything was recorded very well. For cleanup, I used the iZotope RX 6 — both the RX Connect and the RX Denoiser. I relied on that heavily, as well as the Waves WNS plug-in, just to make sure that things were crisp and clear. That allowed me the flexibility to add my own ambient sound and have more control over the mix.

Michael Vitacco

In RX, I really like to use the Denoiser instead of the Dialogue Denoiser tool when possible. I’ll pull out the handles of the production sound and grab a long sample of noise. Then I’ll use the Denoiser because I find that works better than the Dialogue Denoiser.

Budweiser Stand By You via David Miami
The phone rings in the middle of the night. A man gets out of bed, prepares to leave and kisses his wife good-bye. His car radio announces that a natural disaster is affecting thousands of families who are in desperate need of aid. The man arrives at a Budweiser factory and helps to organize the production of canned water instead of beer.

Sonically, what’s unique about this spot?
Loeb: For this spot, I did a preliminary mix where I handled the effects, the dialogue and the music. We set the preliminary tone for that as to how we were going to play the effects throughout it.

The spot starts with a husband and wife asleep in bed and they’re awakened by a phone call. Our sound focused on the dialogue and effects upfront, and also the song. I worked on this with another fantastic mixer here at Heard City, Elizabeth McClanahan, who comes from a music background. She put her ears to the track and did an amazing job of remixing the stems.

On the master track in the Pro Tools session, she used iZotope’s Neutron, as well as the FabFilter Pro-L limiter, which helps to contain the mix. One of the tricks on a dynamic mix like that — which starts off with that quiet moment in the morning and then builds with the music in the end — is to keep it within the restrictions of the CALM Act and other specifications that stipulate dynamic range and not just average loudness. We had to be mindful of how we were treating those quiet portions and the lower portions so that we still had some dynamic range but we weren’t out of spec.
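The CALM Act points broadcasters at ATSC A/85, which measures loudness per ITU-R BS.1770 (LKFS) rather than plain RMS. As a simplified sketch of the kind of check Loeb describes — an overall level near a target, plus a bounded spread between the quietest and loudest sections — the following uses RMS in dB as a stand-in; the target and tolerance numbers are illustrative only, not the actual spec values.

```python
import math

def rms_db(samples):
    """Average level of a signal in dB (RMS; real broadcast specs use
    ITU-R BS.1770 gated loudness, this is a simplified stand-in)."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def check_mix(sections, target_db=-24.0, tolerance_db=2.0,
              max_range_db=15.0):
    """Check that a mix's overall level sits near a target and that
    the spread between its quietest and loudest sections stays in
    bounds. Returns (overall level, spread, pass/fail)."""
    levels = [rms_db(s) for s in sections]
    overall = rms_db([x for s in sections for x in s])
    spread = max(levels) - min(levels)
    in_spec = (abs(overall - target_db) <= tolerance_db
               and spread <= max_range_db)
    return overall, spread, in_spec
```

The point of the two-sided check mirrors what the mixers describe: you can’t just ride the quiet morning scene down and the music out arbitrarily, because both the average and the quiet-to-loud range have to land inside the delivery window.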


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @AudioJeney.

Coco’s sound story — music, guitars and bones

By Jennifer Walden

Pixar’s animated Coco is a celebration of music, family and death. In the film, a young Mexican boy named Miguel (Anthony Gonzalez) dreams of being a musician just like his great-grandfather, even though his family is dead-set against it. On the evening of Día de los Muertos (the Mexican holiday called Day of the Dead), Miguel breaks into the tomb of legendary musician Ernesto de la Cruz (Benjamin Bratt) and tries to steal his guitar. The attempted theft transforms Miguel into a spirit, and as he flees the tomb he meets his deceased ancestors in the cemetery.

Together they travel to the Land of the Dead where Miguel discovers that in order to return to life he must have the blessing of his family. The matriarch, great-grandmother Mamá Imelda (Alanna Ubach) gives her blessing with one stipulation, that Miguel can never be a musician. Feeling as though he cannot live without music, Miguel decides to seek out the blessing of his musician great-grandfather.

Music is intrinsically tied to the film’s story, and therefore to the film’s soundtrack. Ernesto de la Cruz’s guitar is like another character in the film. The Skywalker Sound team handled all the physical guitar effects, from subtle to destructive. Although they didn’t handle any of the music, they covered everything from fret handling and body thumps to string breaks and smashing sounds. “There was a lot of interaction between music and effects, and a fine balance between them, given that the guitar played two roles,” says supervising sound editor/sound designer/re-recording mixer Christopher Boyes, who was just nominated for a CAS award for his mixing work on Coco. His Skywalker team on the film included co-supervising sound editor J.R. Grubbs, sound effects editors Justin Doyle and Jack Whittaker, and sound design assistant Lucas Miller.

Boyes bought a beautiful guitar from a pawn shop in Petaluma near their Northern California location, and he and his assistant Miller spent a day recording string sounds and handling sounds. “Lucas said that one of the editors wanted us to cut the guitar strings,” says Boyes. “I was reluctant to cut the strings on this beautiful guitar, but we finally decided to do it to get the twang sound effects. Then Lucas said that we needed to go outside and smash the guitar. This was not an inexpensive guitar. I told him there was no way we were going to smash this guitar, and we didn’t! That was not a sound we were going to create by smashing the actual guitar! But we did give it a couple of solid hits just to get a nice rhythmic sound.”

To capture the true essence of Día de los Muertos in Mexico, Boyes and Grubbs sent effects recordists Daniel Boyes, Scott Guitteau, and John Fasal to Oaxaca to get field recordings of the real 2016 Día de los Muertos celebrations. “These recordings were essential to us and director Lee Unkrich, as well as to Pixar, for documenting and honoring the holiday. As such, the recordings formed the backbone of the ambience depicted in the track. I think this was a crucial element of our journey,” says Boyes.

Just as the celebration sound of Día de los Muertos was important, so too was the sound of Miguel’s town. The team needed to provide a realistic sense of a small Mexican town to contrast with the phantasmagorical Land of the Dead, and the recordings that were captured in Mexico were a key building block for that environment. Co-supervising sound editor Grubbs says, “Those recordings were invaluable when we began to lay the background tracks for locations like the plaza, the family compound, the workshop, and the cemetery. They allowed us to create a truly rich and authentic ambiance for Miguel’s home town.”

Bone Collecting
Another prominent set of sounds in Coco are the bones. Boyes notes that director Unkrich had specific guidelines for how the bones should sound. Characters like Héctor (Gael García Bernal), who are stuck in the Land of the Dead and are being forgotten by those still alive, needed to have more rattle-y sounding bones, as if the skeleton could come apart easily. “Héctor’s life is about to dissipate away, just as we saw with his friend Chicharrón [Edward James Olmos] on the docks, so their skeletal structure is looser. Héctor’s bones demonstrated that right from the get-go,” he explains.

In contrast, if someone is well remembered, such as de la Cruz, then the skeletal structure should sound tight. “In Miguel’s family, Papá Julio [Alfonso Arau] comically bursts apart many times, but he goes back together as a pretty solid structure,” explains Boyes. “Lee [Unkrich] wanted to dig into that dynamic first of all, to have that be part of the fabric that tells the story. Certain characters are going to be loose because nobody remembers them and they’re being forgotten.”

Creating the bone sounds was the biggest challenge for Boyes as a sound designer. Unkrich wanted to hear the complexity of the bones, from the clatter and movement down to the detail of cartilage. “I was really nervous about the bones challenge because it’s a sound that’s not easily embedded into a track without calling attention to itself, especially if it’s not done well,” admits Boyes.

Boyes started his bone sound collection by recording a mobile he built using different elements, like real bones, wooden dowels, little stone chips and other things that would clatter and rattle. Then one day Boyes stumbled onto an interesting bone sound while making a coconut smoothie. “I cracked an egg into the smoothie and threw the eggshell into the empty coconut hull and it made a cool sound. So I played with that. Then I was hitting the coconut on concrete, and from all of those sources I created a library of bone sounds.” Foley also contributed to the bone sounds, particularly for the literal, physical movements, like walking.

According to Grubbs, the bone sounds were designed and edited by the Skywalker team and then presented to the directors over several playbacks. The final sound of the skeletons is a product of many design passes, which were carefully edited in conjunction with the Foley bone recordings and sometimes used in combination with the Foley.

L-R: J.R. Grubbs and Chris Boyes

Because the film is so musical, the bone tracks needed to have a sense of rhythm and timing. To hit moments in a musical way, Boyes loaded bone sounds and other elements into Native Instruments’ Kontakt and played them via a MIDI keyboard. “One place for the bones that was really fun was when Héctor went into the security office at the train station,” says Boyes.

“Héctor comes apart and his fingers do a little tap dance. That kind of stuff really lent to the playfulness of his character and it demonstrated the looseness of his skeletal structure.”

From a sound perspective, Boyes feels that Coco is a great example of how movies should be made. During editorial, he and Grubbs took numerous trips to Pixar to sit down with the directors and the picture department. For several months before the final mix, they played sequences for Unkrich that they wanted to get direction on. “We would play long sections of just sound effects, and Lee — being such a student of filmmaking and being an animator — is quite comfortable with diving down into the nitty-gritty of just simple elements. It was really a collaborative and healthy experience. We wanted to create the track that Lee wanted and wanted to make sure that he knew what we were up to. He was giving us direction the whole way.”

The Mix
Boyes mixed alongside re-recording mixer Michael Semanick (music/dialogue) on Skywalker’s Kurosawa Stage. They mixed in native Dolby Atmos on a DFC console. While Boyes mixed, effects editor Doyle handled last-minute sound effects needs on the stage, and Grubbs ran the logistics of the show. Grubbs notes that although he and Boyes have worked together for a long time, this was the first time they’d shared a supervising credit.

“J.R. [Grubbs] and I have been working together for probably 30 years now,” says Boyes. “He always helped to run the show in a very supervisory way, so I just felt it was time he started getting credit for that. He’s really kept us on track, and I’m super grateful to him.”

One helpful audio tool for Boyes during the mix was the Valhalla Room reverb, which he used on Miguel’s footsteps inside de la Cruz’s tomb. “Normally, I don’t use plug-ins at all when I’m mixing. I’m a traditional mixer who likes to use a console and TC Electronic’s TC 6000 and the Lexicon 480 reverb as outboard gear. But in this one case, the Valhalla Room plug-in had a preset that really gave me a feeling of the stone tomb.”

Unkrich allowed Semanick and Boyes to have a first pass at the soundtrack to get it to a place they felt was playable, and then he took part in the final mix process with them. “I just love Lee’s respect for us; he gives us time to get the soundtrack into shape. Then, he sat there with us for 9 to 10 hours a day, going back and forth, frame by frame at times and section by section. Lee could hear everything, and he was able to give us definitive direction throughout. The mix was achieved by and directed by Lee, every frame. I love that collaboration because we’re here to bring his vision and Pixar’s vision to the screen. And the best way to do that is to do it in the collaborative way that we did,” concludes Boyes.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Behind the Titles: Something’s Awry Productions

NAME: Amy Theorin

NAME: Kris Theorin

NAME: Kurtis Theorin

COMPANY: Something’s Awry Productions

CAN YOU DESCRIBE YOUR COMPANY?
We are a family-owned production company that writes, creates and produces funny, shareable web content and commercials, mostly for the toy industry. We are known for our slightly offbeat but intelligent humor and stop-motion animation. We also create short films of our own, both animated and live-action.

WHAT’S YOUR JOB TITLE?
Amy: Producer, Marketing Manager, Business Development
Kris: Director, Animator, Editor, VFX, Sound Design
Kurtis: Creative Director, Writer

WHAT DOES THAT ENTAIL?
Amy: A lot! I am the point of contact for all the companies and agencies we work with. I oversee production schedules, all social media and marketing for the company. Because we operate out of a small town in Pennsylvania, we rely on Internet service companies such as Tongal, Backstage.com, Voices.com, Design Crowd and Skype to keep us connected with the national brands and talent we work with, who are mostly based in LA and New York. I don’t think we could have done what we are doing 10 years ago without living in a hub like LA or NYC.

Kris: I handle most of production, post production and some pre-production. Specifically, storyboarding, shooting, animating, editing, sound design, VFX and so on.

Kurtis: A lot of writing. I basically write everything that our company does, including commercials, pitches and shorts. I help out on our live-action shoots and occasionally direct. I make props and sets for our animation. I am also Something’s Awry’s resident voice actor.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Amy: Probably that playing with toys is something we get paid to do! Building Lego sets and setting up Hot Wheels jumps is all part of the job, and we still get excited when we get a new toy delivery — who wouldn’t? We also get to explore our inner child on a daily basis.

Hot Wheels

Kurtis: A lot of the arts and crafts knowledge I gathered from my childhood has become very useful in my job. We have to make a lot of weird things and knowing how to use clay and construction paper really helps.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Amy: See above. Seriously, we get to play with toys for a living! Being on set and working with actors and crew in cool locations is also great. I also like it when our videos exceed our clients’ expectations.

Kris: The best part of my job is being able to work with all kinds of different toys and just getting the chance to make these weird and entertaining movies out of them.

Kurtis: Having written something and seeing others react positively to it.

WHAT’S YOUR LEAST FAVORITE?
Amy/Kris: Working through the approval process with rounds of changes and approvals from multiple departments throughout a large company. Sometimes it goes smoothly and sometimes it doesn’t.

Kurtis: Sitting down to write.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Amy: Since most of the companies we work with are on the West Coast, my day kicks into high gear around 4:00pm East Coast time.

Kris: I work best in the morning.

Kurtis: My day often consists of hours of struggling to sit down and write followed by about three to four hours where I am very focused and get everything done. Most often those hours occur from 4pm to 7pm, but it varies a lot.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Amy: Probably helping to organize events somewhere. I am not happy unless I am planning or organizing a project or event of some sort.

Kris: Without this job, I’d likely go into some kind of design career or something involving illustration. For me, drawing is one of my secondary interests after filming.

Kurtis: I’d be telling stories in another medium. Would I be making a living doing it is another question.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Amy: I have always loved advertising and creative projects. When I was younger, I was the advertising manager for PNC Bank, but I left the corporate world when I had kids and started my own photography business, which I operated for 10 years. Once my kids became interested in film, I wanted to foster that interest, and here we are!

Kris: Filmmaking is something I’ve always had an interest in. I started when I was just eight years old and from there it’s always something I loved to do. The moment when I first realized this would be something I’d follow for an actual career was really around 10th grade, when I started doing it more on a professional level by creating little videos here and there for company YouTube channels. That’s when it all started to sink in that this could actually be a career for me.

Kurtis: I knew I wanted to tell stories very early on. Around 10 years old or so I started doing some home movies. I could get people to laugh and react to the films I made. It turned out to be the medium I could most easily tell stories in so I have stuck with it ever since.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Amy: We are currently in the midst of two major projects — one is a six-video series for Hot Wheels that involves creating six original song music videos parodying different music genres. The other is a 12-episode Scooby-Doo series for Warner Bros. that features live-action and stop-motion animation. Each episode is a mini-mystery that Scooby and the gang solve. The series focuses on the imaginations of different children and the stories they tell.

We also have two short animations currently on the festival circuit. One is a hybrid of Lovecraft and a Scooby-Doo chase scene called Mary and Marsha in the Manor of Madness. The other is a dark fairytale called The Gift of the Woods.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Amy: Although I am proud of a lot of our projects I am most proud of the fact that even though we are such a small company, and live in the middle of nowhere, we have been able to work with companies around the world like Lego, Warner Bros. and Mattel. Things we create are seen all over the world, which is pretty cool for us.

Lego

Kris: The Lego Yellow Submarine Beatles film we created is what I’m most proud of. It just turned out to be this nice blend of wacky visuals, crazy action and short, concise storytelling that I try to do with most of my films.

Kurtis: I really like the way Mary and Marsha in the Manor of Madness turned out. So far it is the closest we have come to creating something with a unique feel and a sense of energetic momentum: two long-term goals I have for our work. We also recently wrapped filming for a 12-episode branded content web series. It is our biggest project yet, and I am proud that we were able to handle the production of it really well.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Amy: Skype, my iPad and the rise of online technology companies such as Tongal, Voices.com, Backstage.com and DesignCrowd that help us get our job done.

Kris: Laptop computers, Wacom drawing tablets and iPhones.

Kurtis: My laptop (and its software, Adobe Premiere and Final Draft), my iPhone and my Kindle.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Amy: Being in this position I like to know what is going on in the industry, so I follow Ad Age, Ad Week, Ad Freak, Mashable, Toy Industry News, iO9, Geek Tyrant and, of course, all the social media channels of our clients like Lego, Warner Bros., Hot Wheels and StikBots. We are also on Twitter (@AmyTheorin), Instagram (@Somethingsawryproductions) and Facebook (Somethingsawry).

Kris: Mostly YouTube and Facebook.

Kurtis: I follow the essays of Film Crit Hulk. His work on screenwriting and storytelling is incredibly well done and eye-opening. Other than that I try to keep up with news and I follow a handful of serialized webcomics. I try to read, watch and play a lot of different things to get new ideas. You never know when the spaghetti westerns of Sergio Leone might give you the idea for your next toy commercial.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Amy: I don’t usually, but I do like to listen to podcasts. Some of my favorites are How I Built This; Yeah, That’s Probably an Ad; and Fresh Air.

Kris: I listen to whatever pop songs are most popular at the time. Currently, that would be Taylor Swift’s “Look What You Made Me Do.”

Kurtis: I listen to an eclectic mix of soundtracks, classic rock songs I’ve heard in movies, alternative songs I heard in movies, anime theme songs… basically songs I heard with a movie or game and can’t get out of my head. As for particular artists, I am partial to They Might Be Giants, Gorillaz, Queen, and the scores of Ennio Morricone, Darren Korb, Jeff Williams, Shoji Meguro and Yoko Kanno.

IS WORKING WITH FAMILY EASIER OR MORE DIFFICULT THAN WORKING/MANAGING IN A REGULAR AGENCY?
Amy: Both! I actually love working with my sons, and our skill sets are very complementary. I love to organize and my kids don’t. Being family, we can be very upfront with each other in terms of sharing our opinions without having to worry about hurting each other’s feelings.

We know at the end of the day we will always be there for each other no matter what. It sounds cliché but it’s true I think. We have a network of people we also work with on a regular basis who we have great relationships with as well. Sometimes it is hard to turn work off and just be a family though, and I find myself talking with them about projects more often than what is going on with them personally. That’s something I need to work on I guess!

Kris: It’s great because you can more easily communicate and share ideas with each other. It’s generally a lot more open. After a while, it really is just like working within an agency. Everything is fine-tuned and you have worked out a pipeline for creating and producing your videos.

Kurtis: I find it much easier. We all know how we do our best work and what our strengths are. It certainly helps that my family is very good at what they do. Not to mention working from home means I get to set my own hours and don’t have a commute. Sometimes it’s difficult to stay motivated when you’re not in a professional office setting but overall the pros far outweigh the cons.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Amy: I try to take time out to walk our dog, but mostly I love it so much I don’t mind working on projects all the time. If I don’t have something to work on I am not a happy camper. Sometimes I have to remember that not everyone is working on the weekends, so I can’t bother them with work questions!

Kris: It really helps that I don’t often get stressed. At least, not after doing this job for as long as I have. You really learn how to cope with it all. Oftentimes, it’s more just getting exhausted from working long hours. I’ll often just watch some YouTube videos at the end of a day or maybe a movie if there’s something I really want to see.

Kurtis: I like to read and watch interesting stories. I play a lot of games: board games, video games, tabletop roleplaying. I also find bike riding improves my mood a lot.

Richard King talks sound design for Dunkirk

Using historical sounds as a reference

By Mel Lambert

Writer/director Christopher Nolan’s latest film follows the fate of nearly 400,000 allied soldiers who were marooned on the beaches of Dunkirk, and the extraordinary plans to rescue them using small ships from nearby English seaports. Although, sadly, more than 68,000 soldiers were captured or killed during the Battle of Dunkirk and the subsequent retreat, more than 300,000 were rescued over a nine-day period in May 1940.

Uniquely, Dunkirk’s primary story arcs — the Mole, or harbor from which the larger ships can take on troops; the Sea, focusing on the English flotilla of small boats; and the Air, spotlighting the activities of Spitfire pilots who protect the beaches and ships from German air-force attacks — follow different timelines, with the Mole sequences being spread over a week, the Sea over a day and the Air over an hour. A Warner Bros. release, Dunkirk stars Fionn Whitehead, Mark Rylance, Cillian Murphy, Tom Hardy and Kenneth Branagh. (An uncredited Michael Caine is the voice heard during various radio communications.)

Richard King

Marking his sixth collaboration with Nolan, supervising sound editor Richard King worked previously on Interstellar (2014), The Dark Knight Rises, Inception, The Dark Knight and The Prestige. He brings his unique sound perspective to these complex narratives, often with innovative sound design. Born in Tampa, King attended the University of South Florida, graduating with a BFA in painting and film, and entered the film industry in 1985. He is the recipient of three Academy Awards for Best Achievement in Sound Editing for Inception, The Dark Knight and Master and Commander: The Far Side of the World (2003), plus two BAFTA Awards and four MPSE Golden Reel Awards for Best Sound Editing.

King, along with Alex Gibson, recently won the Academy Award for Achievement in Sound Editing for Dunkirk.

The Sound of History
“When we first met to discuss the film,” King recalls, “Chris [Nolan] told me that he wanted Dunkirk to be historically accurate but not slavishly so — he didn’t plan to make a documentary. For example, several [Junkers Ju 87] Stuka dive bombers appear in the film, but there are no high-quality recordings of these aircraft, which had sirens built into the wheel struts for intimidation purposes. There are no Stukas still flying, nor could I find any design drawings so we could build our own. Instead, we decided to re-imagine the sound with a variety of unrelated sound effects and ambiences, using the period recordings as inspiration. We went out into a nearby desert with some real air raid sirens, which we over-cranked to make them more and more piercing — and to add some analog distortion. To this more ‘pure’ version of the sound we added an interesting assortment of other disparate sounds. I find the result scary as hell and probably very close to what the real thing sounded like.”

For other period Axis and Allied aircraft, King was able to locate several British Supermarine Spitfire fighters and a Bristol Blenheim bomber, together with a German Messerschmitt Bf 109 fighter. “There are about 200 Spitfires in the world that still fly; three were used during filming of Dunkirk,” King continues. “We received those recordings, and in post recorded three additional Spitfires.”

King was able to place up to 24 microphones in various locations around the airframe near the engine — a supercharged V-12 Rolls-Royce Merlin liquid-cooled model of 27-liter capacity and, later, 37-liter Griffon motors — as well as close to the exhaust and within the cockpit, as the pilots performed a number of aerial maneuvers. “We used both mono and stereo mics to provide a wide selection for sound design,” he says.

King was looking for the sound of an “air ballet,” with the aircraft moving quickly across the sky. “There are moments when the plane sounds are minimized to place the audience more in the pilot’s head, and there are sequences where the plane engines are more prominent,” he says. “We also wanted to recreate the vibrations of this vintage aircraft, which became an important sound design element and was inspired by the shuddering images. I remember that Chris went up in a trainer aircraft to experience the sensation for himself. He reported that it was extremely loud with lots of vibration.”

To match up with the edited visuals secured from 65/70mm IMAX and Super Panavision 65mm film cameras, King needed to produce a variety of aircraft sounds. “We had an ex-RAF pilot who had flown in modern dogfights recreate some of those wartime flying gymnastics. The planes don’t actually produce dramatic changes in sound when throttling and maneuvering, so I came up with a simple and effective way to accentuate this somewhat. I wanted the planes to respond to the pilot’s stick and throttle movements immediately.”

For armaments, King’s sound effects recordists John Fasal and Eric Potter oversaw the recording of a vintage Bofors 40mm anti-aircraft cannon seen aboard the allied destroyers and support ships. “We found one in Napa Valley, north of San Francisco,” says King. “The owner had to make up live rounds, which we fired into a nearby hill. We also recorded a number of WWII British Lee-Enfield bolt-action rifles and German machine guns on a nearby range. We had to recreate the sound of the Spitfire’s guns, because the actual guns fitted to the Spitfires overheat when fired at sea level and cannot maintain the 1,000 rounds/minute rate we were looking for, except at altitude.”

King readily acknowledges the work at Warner Bros. Sound Services of sound effects editor Michael Mitchell, who worked on several scenes, including the ship sinkings, and sound effects editor Randy Torres, who worked with King on the plane sequences.

Group ADR was done primarily in the UK, “where we recorded at De Lane Lea and onboard a decommissioned WWII warship owned by the Imperial War Museum,” King recalls. “The HMS Belfast, which is moored on the River Thames in central London, was perfect for the reverberant interiors we needed for the various ships that sink in the film. We also secured some realistic Foley of people walking up and down ladders and on the superstructure.” Hugo Weng served as dialog editor and David Bach as supervising ADR editor.

Sounds for Moonstone, the key small boat whose fortunes the film follows across the English Channel, were recorded out of Marina del Rey in Southern California, including its motor and water slaps against the hull. “We also secured some nice Foley on deck, as well as opening and closing of doors,” King says.

Conventional Foley was recorded at Skywalker Sound in Northern California by Shelley Roden, Scott Curtis and John Roesch. “Good Foley was very important for Dunkirk,” explains King. “It all needed to sound absolutely realistic and not like a Hollywood war movie, with a collection of WWII clichés. We wanted it to sound as it would for the film’s characters. John and his team had access to some great surfaces and textures, and a wonderful selection of props.” Michael Dressel served as supervising Foley editor.

In terms of sound design, King offers that he used historical sounds as a reference, to conjure up the terror of the Battle of Dunkirk. “I wanted it to feel like a well-recorded version of the original event. The book ‘Voices of Dunkirk,’ written by Joshua Levine and based on a compilation of first-hand accounts of the evacuation, inspired me and helped me shape the explosions on the beach, with the muffled ‘boom’ as the shells and bombs bury themselves in the sand and then explode. The underwater explosions needed to sound more like a body slam than an audible noise. I added other sounds that amped it a couple more degrees.”

The soundtrack was re-recorded in 5.1-channel format at Warner Bros. Sound Services Stage 9 in Burbank during a six-week mix, with Gary Rizzo handling dialog and Gregg Landaker overseeing sound effects and music; it was Landaker’s last film before retiring. “There was almost no looping on the film aside from maybe a couple of lines,” King recalls. “Hugo Weng mined the recordings for every gem, and Gary [Rizzo] was brilliant at cleaning up the voices and pushing them through the barrage of sound provided by sound effects and music, somehow without making them sound pushed. Production recordist Mark Weingarten faced enormous challenges, contending with strong wind and salt spray, but he managed to record tracks Gary could work with.”

The sound designer reports that he provided some 20 to 30 tracks of dialog and ADR “with options for noisy environments,” plus 40 to 50 tracks of Foley, depending on the action. This included shoes and hob-nailed army boots, as well as groups of 20, especially in the ship scenes. “The score by composer Hans Zimmer kept evolving as we moved through the mixing process,” says King. “Music editor Ryan Rubin and supervising music editor Alex Gibson were active participants in this evolution.”

“We did not want to repeat ourselves or repeat others’ work,” King concludes. “All sounds in this movie mean something. Every scene had to be designed with a hard-hitting sound. You need to constantly question yourself: ‘Is there a better sound we could use?’ Maybe something different that is appropriate to the sequence that recreates the event in a new and fresh light? I am super-proud of this film and the track.”

Nolan — who was born in London to an American mother and an English father and whose family subsequently split their time between London and Illinois — has this quote on his IMDb page: “This is an essential moment in the history of the Second World War. If this evacuation had not been a success, Great Britain would have been obliged to capitulate. And the whole world would have been lost, or would have known a different fate: the Germans would undoubtedly have conquered Europe, the US would not have returned to war. Militarily it is a defeat; on the human plane it is a colossal victory.”

Certainly, the loss of life and supplies was profound — wartime Prime Minister Winston Churchill described Operation Dynamo as “the greatest military disaster in our long history.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

The sounds of Spider-Man: Homecoming

By Jennifer Walden

Columbia Pictures and Marvel Studios’ Spider-Man: Homecoming, directed by Jon Watts, casts Tom Holland as Spider-Man, a role he first played in 2016 for Marvel Studios’ Captain America: Civil War (directed by Joe and Anthony Russo).

Homecoming reprises a few key character roles, like Tony Stark/Iron Man (Robert Downey Jr.) and Aunt May Parker (Marisa Tomei), and it picks up a thread of Civil War’s storyline. In Civil War, Peter Parker/Spider-Man helped Tony Stark’s Avengers in their fight against Captain America’s Avengers. Homecoming picks up after that battle, as Parker settles back into his high school life while still fighting crime on the side to hone his superhero skills. He seeks to prove himself to Stark but ends up becoming entangled with the supervillain Vulture (Michael Keaton).

Steven Ticknor

Spider-Man: Homecoming supervising sound editors/sound designers Steven Ticknor and Eric A. Norris — working at Culver City’s Sony Pictures Post Production Services — both brought Spidey experience to the film. Ticknor was a sound designer on director Sam Raimi’s Spider-Man (2002) and Norris was supervising sound editor/sound designer on director Marc Webb’s The Amazing Spider-Man 2 (2014). With experience on two different versions of Spider-Man, Ticknor and Norris together provided a well-rounded knowledge of the superhero’s sound history for Homecoming. They knew what had worked in the past and what to do to make this Spider-Man sound fresh. “This film took a ground-up approach but we also took into consideration the magnitude of the movie,” says Ticknor. “We had to keep in mind that Spider-Man is one of Marvel’s key characters and he has a huge fan base.”

Web Slinging
Being a sequel, Ticknor and Norris honored the sound of Spider-Man’s web slinging ability that was established in Captain America: Civil War, but they also enhanced it to create a subtle difference between Spider-Man’s two suits in Homecoming. There’s the teched-out Tony Stark-built suit that uses the Civil War web-slinging sound, and then there’s Spider-Man’s homemade suit. “I recorded a couple of 5,000-foot magnetic tape cores unraveling very fast, and to that I added whooshes and other elements that gave a sense of speed. Underneath, I had some of the web sounds from the Tony Stark suit. That way the sound for the homemade suit had the same feel as the Stark suit but with an old-school flair,” explains Ticknor.

One new feature of Spider-Man’s Stark suit is that it has expressive eye movements. His eyes can narrow or grow wide with surprise, and those movements are articulated with sound. Norris says, “We initially went with a thin servo-type sound, but the filmmakers were looking for something less electrical. We had the idea to use the lens of a DSLR camera and manually zoom it in and out, so there’s no motor sound. We recorded it close-up in the quiet environment of an unused ADR stage. That’s the primary sound for his eye movement.”

Droney
Another new feature is the addition of Droney, a small reconnaissance drone that pops off of Spider-Man’s suit and flies around. The sound of Droney was one of director Watts’ initial focus points. He wanted it to sound fun and have a bit of personality. He wanted Droney “to be able to vocalize in a way, sort of like Wall-E,” explains Norris.

Ticknor had the idea of creating Droney’s sound using a turbo toy — a small toy that has a mouthpiece and a spinning fan. Blowing into the mouthpiece makes the fan spin, which generates a whirring sound. The faster the fan spins, the higher the pitch of the generated sound. By modulating the pitch, they created a voice-like quality for Droney. Norris and sound effects editor Andy Sisul performed and recorded an array of turbo toy sounds to use during editorial. Ticknor also added in the sound of a reel-to-reel machine rewinding, which he sped up and manipulated “so that it sounded like Droney was fluttering as it was flying,” Ticknor says.
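Speeding up a recording to raise its pitch, as Ticknor did with the reel-to-reel rewind, is at heart just resampling: reading the source faster than real time. A minimal plain-Python sketch of variable-speed playback follows; the linear interpolation and the constant-rate example are illustrative assumptions, not a description of his actual tools.

```python
import math

def resample_varispeed(samples, rate_curve):
    """Read `samples` at a varying speed; rate_curve(i) > 1 raises pitch.

    Uses linear interpolation between source samples and stops at the
    end of the source. A varying rate_curve gives the fluttering,
    modulated-pitch effect; a constant rate is a plain speed change.
    """
    out, pos, i = [], 0.0, 0
    while pos < len(samples) - 1:
        lo = int(pos)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
        pos += rate_curve(i)
        i += 1
    return out

# Example: a steady tone read twice as fast comes out an octave up
# (half the length, twice the frequency).
tone = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
octave_up = resample_varispeed(tone, lambda i: 2.0)
```

Feeding a slowly oscillating `rate_curve` instead of a constant produces the voice-like pitch wobble described above.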

The Vulture
Supervillain the Vulture offers a unique opportunity for sound design. His alien-tech enhanced suit incorporates two large fans that give him the ability to fly. Norris, who was involved in the initial sound design of Vulture’s suit, created whooshes using Whoosh by Melted Sounds — a whoosh generator that runs in Native Instruments Reaktor. “You put individual samples in there and it creates a whoosh by doing a Doppler shift and granular synthesis as a way of elongating short sounds. I fed different metal ratcheting sounds into it because Vulture’s suit almost has these metallic feathers. We wanted to articulate the sound of all of these different metallic pieces moving together. I also fed sword shings into it and came up with these whooshes that helped define the movement as the Vulture was flying around,” he says. Sound designer/re-recording mixer Tony Lamberti was also instrumental in creating Vulture’s sound.
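The granular elongation Norris mentions works by copying small overlapping windowed "grains" of a short source to more widely spaced output positions, stretching its duration. This sketch shows the basic overlap-add idea in plain Python; the grain size, hop and Hann window are generic choices, not the Melted Sounds implementation.

```python
import math

def granular_stretch(samples, factor, grain=256, hop=64):
    """Time-stretch `samples` by `factor` via overlap-add of windowed grains.

    Each output hop pulls a Hann-windowed grain from the proportional
    source position, so a short metallic hit can be elongated into a
    longer texture. Assumes len(samples) >= grain.
    """
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / grain) for n in range(grain)]
    out_len = int(len(samples) * factor)
    out = [0.0] * (out_len + grain)   # headroom for the final grain
    out_pos = 0
    while out_pos < out_len:
        # Source position advances `factor` times slower than output.
        src = max(0, min(int(out_pos / factor), len(samples) - grain))
        for n in range(grain):
            out[out_pos + n] += samples[src + n] * window[n]
        out_pos += hop
    return out[:out_len]
```

Sweeping a pitch shift across the stretched result would approximate the Doppler component of the whoosh; that part is omitted here for brevity.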

Alien technology is prevalent in the film. For instance, it’s a key ingredient to Vulture’s suit. The film’s sound needed to reflect the alien influence but also had to feel realistic to a degree. “We started with synthesized sounds, but we then had to find something that grounded it in reality,” reports Ticknor. “That’s always the balance of creating sound design. You can make it sound really cool, but it doesn’t always connect to the screen. Adding organic elements — like wind gusts and debris — make it suddenly feel real. We used a lot of synthesized sounds to create Vulture, but we also used a lot of real sounds.”

The Washington Monument
One of the big scenes that Ticknor handled was the Washington Monument elevator sequence. Spider-Man stands on the top of the Washington Monument and prepares to jump over a helicopter that looms ever closer. He clears the helicopter’s blades and shoots a web onto the helicopter’s skid, using that to sling himself through a window just in time to shoot another web that grabs onto the compromised elevator car that contains his friends. “When Spider-Man jumps over the helicopter, I couldn’t wait to make that work perfectly,” says Ticknor. “When he is flying over the helicopter blades it sounds different. It sounds more threatening. Sound creates an emotion but people don’t realize how sound is creating the emotion because it is happening so quickly sometimes.”

To achieve a more threatening blade sound, Ticknor added in scissor-slicing sounds, which he treated using a variety of tools like zPlane Elastique Pitch 2 and plug-ins from FabFilter and Soundtoys, all within the Avid Pro Tools 12 environment. “This made the slicing sound like it was about to cut his head off. I took the helicopter blades and slowed them down and added low-end sweeteners to give a sense of heaviness. I put all of that through the plug-ins and basically experimented. The hardest part of sound design is experimenting and finding things that work. There’s also music playing in that scene as well. You have to make the music play with the sound design.”

When designing sounds, Ticknor likes to generate a ton of potential material. “I make a library of sound effects — it’s like a mad science experiment. You do something and then wonder, ‘How did I just do that? What did I just do?’ When you are in a rhythm, you do it all because you know there is no going back. If you just do what you need, it’s never enough. You always need more than you think. The picture is going to change and the VFX are going to change and timings are going to change. Everything is going to change, and you need to be prepared for that.”

Syncing to Picture
To help keep the complex soundtrack in sync with the evolving picture, Norris used Conformalizer by Cargo Cult. Using the EDL of picture changes, Conformalizer makes the necessary adjustments in Pro Tools to resync the sound to the new picture.

Norris explains some key benefits of Conformalizer. “First, when you’re working in Pro Tools you can only see one picture at a time, so you have to go back and forth between the two different pictures to compare. With Conformalizer, you can see the two different pictures simultaneously. It also does a mathematical computation on the two pictures in a separate window, a difference window, which shows the differences in white. It highlights all the subtle visual effects changes that you may not have noticed.

Eric Norris

For example, in the beginning of the film, Peter leaves school and heads out to do some crime fighting. In an alleyway, he changes from his school clothes into his Spider-Man suit. As he’s changing, he knocks into a trash can and a couple of rats fall out and scurry away. Those rats were CG and they didn’t appear until the end of the process. So the rats in the difference window were bright white while everything else was a dark color.”
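The “difference window” Norris describes boils down to a per-pixel comparison of two versions of the cut: unchanged areas come out dark, and late-arriving visual effects light up white. A minimal sketch of the idea, with the frame sizes and threshold invented for illustration:

```python
import numpy as np

def difference_frame(old, new, threshold=8):
    """Absolute per-pixel difference of two grayscale frames (0-255).
    Unchanged pixels come out black; anything altered between cuts --
    like a CG element added late -- shows up bright white."""
    diff = np.abs(new.astype(np.int16) - old.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

old = np.zeros((4, 4), dtype=np.uint8)
new = old.copy()
new[1:3, 1:3] = 200            # stand-in for a newly added CG element
mask = difference_frame(old, new)
print(int(mask.sum() // 255))  # 4 changed pixels light up
```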

Another benefit is that the Conformalizer change list can be used on multiple Pro Tools sessions. Most feature films have the sound effects, including Foley and backgrounds, in one session. For Spider-Man: Homecoming, it was split into multiple sessions, with Foley and backgrounds in one session and the sound effects in another.

“Once you get that change list you can run it on all the Pro Tools sessions,” explains Norris. “It saves time and it helps with accuracy. There are so many sounds and details that match the visuals and we need to make sure that we are conforming accurately. When things get hectic, especially near the end of the schedule, and we’re finalizing the track and still getting new visual effects, it becomes a very detail-oriented process and any tools that can help with that are greatly appreciated.”
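Conceptually, a change list is a set of timeline offsets that can be replayed against any number of sessions. The toy sketch below illustrates that idea only; the data layout is invented, and Conformalizer's actual change-list format is far richer.

```python
def conform(clips, change_list):
    """Apply a picture-change list to one session's clips.
    clips: list of (name, start_frame); change_list: list of
    (at_frame, delta_frames) -- frames inserted (+) or removed (-)
    at a given point in the new cut."""
    out = []
    for name, start in clips:
        shift = sum(delta for at, delta in change_list if start >= at)
        out.append((name, start + shift))
    return out

change_list = [(1000, +24), (5000, -12)]   # 1s added, 0.5s trimmed (at 24fps)
fx_session = [("glass_break", 900), ("rat_scurry", 1200), ("siren", 6000)]
foley_session = [("footsteps", 4800), ("cloth", 5200)]

# the same change list conforms every session, which is the time-saver
# Norris describes
for session in (fx_session, foley_session):
    print(conform(session, change_list))
```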

Creating the soundtrack for Spider-Man: Homecoming required collaboration on a massive scale. “When you’re doing a film like this, it just has to run well. Unless you’re really organized, you’ll never be able to keep up. That’s the beautiful thing, when you’re organized you can be creative. Everything was so well organized that we got an opportunity to be super creative and for that, we were really lucky. As a crew, we were so lucky to work on this film,” concludes Ticknor.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: Nylon Studios creative director Simon Lister

NAME: Simon Lister

COMPANY: Nylon Studios

CAN YOU DESCRIBE YOUR COMPANY?
Nylon Studios is a New York- and Sydney-based music and sound house offering original composition and sound design for films and commercials. I am based in the Sydney location.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
I help manage and steer the company, while also serving as a sound designer, client liaison, soundtrack creative and thinker.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People are constantly surprised with the amount of work that goes into making a soundtrack.

WHAT TOOLS DO YOU USE?
I use Avid Pro Tools and some really cool plug-ins.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is being able to bring a film to life through sound.

WHAT’S YOUR LEAST FAVORITE?
At times, clients can be so stressed and make things difficult. However, sometimes we just need to sit back and look at how lucky we are to be in such a fun industry. So in that case, we try our best to make the client’s experience with us as relaxing and seamless as possible.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Lunchtime.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Anything that involves me having a camera in my hand and taking pictures.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was pretty young. I got a great break when I was 19 years old in one of the best music studios in New Zealand and haven’t stopped since. Now, I’ve been doing this for 31 years (cough).

Honda Civic spot

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
In the last couple of months I think I’ve counted several different car brand spots we’ve worked on, including Honda, Hyundai, Subaru, Audi and Toyota. All great spots to sink our teeth and ears into.

Also, we have been working on the great wildlife series Tales by Light, which airs on National Geographic and Netflix.

For Every Child

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It would be having the opportunity to film and direct my own commercial, For Every Child, for UNICEF’s global rebranding TVC. We had the amazing voiceover of Liam Neeson and the incredible singing voice of Lisa Gerrard (Gladiator, Heat, Black Hawk Down).

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My camera, my computer and my motorbike.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I ride motorbikes throughout Morocco, Baja, the Himalayas, Mongolia, Vietnam, Thailand and New Zealand, and in the traffic of India.

Audio post vet Rex Recker joins Digital Arts in NYC

Rex Recker has joined the team at New York City’s Digital Arts as a full-time audio post mixer and sound designer. Recker, who co-founded NYC’s AudioEngine after working as VP and audio post mixer at Photomag recording studios, is an award-winning mixer with a long list of credits. Over the span of his career he has worked on countless commercials with clients including McCann Erickson, JWT, Ogilvy & Mather, BBDO, DDB, HBO and Warner Books.

Over the years, Recker has developed a following of clients who seek him out for his expertise in surround sound mixing for commercials airing via broadcast, the Web and cinema. In addition to spots, Recker also mixes long-form projects, including broadcast specials and documentaries.

Since joining the Digital Arts team, Recker has already worked on several commercial campaigns, promos and trailers for such clients as Samsung, SlingTV, Ford, Culturelle, Orvitz, NYC Department of Health, and HBO Documentary Films.

Digital Arts, owned by Axel Ericson, is an end-to-end production, finishing and audio facility.

Creating sounds of science for Bill Nye: Science Guy

By Jennifer Walden

Bill Nye, the science hero of a generation of school children, has expanded his role in the science community over the years. His transformation from TV scientist to CEO of The Planetary Society (the world’s largest non-profit space advocacy group) is the subject of Bill Nye: Science Guy — a documentary directed by David Alvarado and Jason Sussberg.

The doc premiered in the US at the SXSW Film Festival and had its international premiere at the Hot Docs Canadian International Documentary Festival in Toronto.

Peter Albrechtsen – Credit: Povl Thomsen

Supervising sound editor/sound designer Peter Albrechtsen, MPSE, started working with directors Alvarado and Sussberg in 2013 on their first feature-length documentary The Immortalists. When they began shooting the Bill Nye documentary in 2015, Albrechtsen was able to see the rough cuts and started collecting sounds and ambiences for the film. “I love being part of projects very early on. I got to discuss some sonic and musical ideas with David and Jason. On documentaries, the actual sound design schedule isn’t typically very long. It’s great knowing the vibe of the film as early as I can so I can then be more focused during the sound editing process. I know what the movie needs and how I should prioritize my work. That was invaluable on a complicated, complex and multilayered movie like this one.”

Before diving in, Albrechtsen, dialogue editor Jacques Pedersen, sound effects editor Morten Groth Brandt and sound effects recordist/assistant sound designer Mikkel Nielsen met up for a jam session — as Albrechtsen calls it — to share the directors’ notes for sound and discuss their own ideas. “It’s a great way of getting us all on the same page and to really use everyone’s talents,” he says.

Albrechtsen and his Danish sound crew had less than seven weeks for sound editorial at Offscreen in Copenhagen. They divided their time evenly between dialogue editing and sound effects editing. During that time, Foley artist Heikki Kossi spent three days on Foley at H5 Film Sound in Kokkola, Finland.

Foley artist Heikki Kossi. Credit: Clas-Olav Slotte

Bill Nye: Science Guy mixes many different media sources — clips from Bill Nye’s TV shows from the ‘90s, YouTube videos, home videos on 8mm film, TV broadcasts from different eras, as well as the filmmakers’ own footage. It’s a potentially headache-inducing combination. “Some of the archival material was in quite bad shape, but my dialogue editor Jacques Pedersen is a magician with iZotope RX and he did a lot of healthy cleaning up of all the rough pieces and low-res stuff,” says Albrechtsen. “The 8mm videos actually didn’t have any sound, so Heikki Kossi did some Foley that helped it to come alive when we needed it to.”

Sound Design
Albrechtsen’s sound edit was also helped by the directors’ dedication to sound. They were able to acquire the original sound effects library from Bill Nye’s ‘90s TV show, making it easy for the post sound team to build out the show’s soundscape from stereo to surround, and also to make it funnier. “A lot of humor in the old TV show came from the imaginative soundtrack that was often quite cartoonish, exaggerated and hilariously funny,” he explains. “I’ve done sound for quite a few documentaries now and I’ve never tried adding so many cartoonish sound effects to a track. It made me laugh.”

The directors’ dedication goes even deeper, with director Sussberg handling the production sound himself when they’re out shooting. He records dialogue with both a boom mic and radio mics, and also records wild tracks of room tones and ambience. He even captures special sound signatures for specific locations when applicable.

For example, Nye visits the creationist theme park called Noah’s Ark, built by Christian fundamentalist Ken Ham. The indoor park features life-size dioramas and animatronics to explain creationism. There are lots of sound effects and demonstrations playing from multiple speaker setups. Sussberg recorded all of them, providing Albrechtsen with the means of creating an authentic sound collage.

“People might think we added lots of sounds for these sequences, but actually we just orchestrated what was already there,” says Albrechtsen. “At moments, it’s like a cacophony of noises, with corny dinosaur screams, savage human screams and violent war noises. When I heard the sounds from the theme park that David and Jason had recorded, I didn’t believe my own ears. It’s so extreme.”

Albrechtsen approaches his sound design with texture in mind. Not every sound needs to be clean. Adding texture, like crackling or hiss, can change the emotional impact of a sound. For example, while creating the sound design for the archival footage of several rocket launches, Albrechtsen pulled clean effects of rocket launches and explosions from Tonsturm’s “Massive Explosions” sound effects library and transferred those recordings to old NAGRA tape. “The special, warm, analogue distortion that this created fit perfectly with the old, dusty images.”

In one of Albrechtsen’s favorite sequences in the film, there’s a failure during launch and the rocket explodes. The camera falls over and the video glitches. He used different explosions panned around the room, and he panned several low-pitched booms directly to the subwoofer, using the Waves LoAir plug-in for added punch. “When the camera falls over, I panned explosions into the surrounds and as the glitches appear I used different distorted textures to enhance the images,” he says. “Pete Horner did an amazing job on mixing that sequence.”
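Panning booms around the room and feeding the subwoofer are both simple signal operations at heart: an equal-power pan law for placement, and a band-limited send so only low frequencies reach the LFE channel. The sketch below is a deliberately crude stand-in for what a dedicated plug-in like LoAir does, with the filter and numbers invented for illustration.

```python
import math

def constant_power_pan(sample, pos):
    """pos in [0, 1]: 0 = hard left, 1 = hard right. The equal-power
    law keeps loudness steady as an explosion sweeps across the room."""
    theta = pos * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)

def lfe_send(samples, sr, cutoff=80.0):
    """One-pole low-pass: only content below roughly `cutoff` Hz
    reaches the sub, adding weight without muddying the mains."""
    coeff = math.exp(-2 * math.pi * cutoff / sr)
    out, acc = [], 0.0
    for x in samples:
        acc = (1 - coeff) * x + coeff * acc
        out.append(acc)
    return out

left, right = constant_power_pan(1.0, 0.5)  # boom centered in the room
print(round(left, 3), round(right, 3))      # equal power to both sides
```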

For the emotional sequences, particularly those exploring Nye’s family history, and the genetic disorder passed down from Nye’s father to his two siblings, Albrechtsen chose to reduce the background sounds and let the Foley pull the audience in closer to Nye. “It’s amazing what just a small cloth rustle can do to get a feeling of being close to a person. Foley artist Heikki Kossi is a master at making these small sounds significant and precise, which is actually much more difficult than one would think.”

For example, during a scene in which Nye and his siblings visit a clinic Albrechtsen deliberately chose harsh, atonal backgrounds that create an uncomfortable atmosphere. Then, as Nye shares his worries about the disease, Albrechtsen slowly takes the backgrounds out so that only the delicate Foley for Nye plays. “I love creating multilayered background ambiences and they really enhanced many moments in the film. When we removed these backgrounds for some of the more personal, subjective moments the effect was almost spellbinding. Sound is amazing, but silence is even better.”

Bill Nye: Science Guy has layers of material taking place in both the past and present, in outer space and in Nye’s private space, Albrechtsen notes. “I was thinking about how to make them merge more. I tried making many elements of the soundtrack fit more with each other.”

For instance, Nye’s brother has a huge model train railway set up. It’s a legacy from their childhood. So when Nye visits his childhood home, Albrechtsen plays the sound of a distant train. In the 8mm home movies, the Nye family is at the beach. Albrechtsen’s sound design includes echoes of seagulls and waves. Later in the film, when Nye visits his sister’s home, he puts in distant seagulls and waves. “The movie is constantly jumping through different locations and time periods. This was a way of making the emotional storyline clearer and strengthening the overall flow. The sound makes the images more connected.”

One significant story point is Nye’s growing involvement with The Planetary Society. Before Carl Sagan’s death, Sagan conceptualized a solar sail — a sail for use in space that could harness the sun’s energy and use it as a means of propulsion. The Planetary Society worked hard to actualize Sagan’s solar sail idea. Albrechtsen needed to give the solar sail a sound in the film. “How does something like that sound? Well, in the production sound you couldn’t really hear the solar sail and when it actually appeared it just sounded like boring, noisy cloth rustle. The light sail really needed an extraordinary, unique sound to make you understand the magnitude of it.”

So they recorded different kinds of materials, in particular a Mylar blanket, which has a glittery and reflective surface. Then Albrechtsen tried different pitches and panning of those recordings to create a sense of its extraordinary size.

While they handled post sound editorial in Denmark, the directors were busy cutting the film stateside with picture editor Annu Lilja. When working over long distances, Albrechtsen likes to send lots of QuickTimes with stereo downmixes so the directors can hear what’s happening. “For this film, I sent a handful of sound sketches to David and Jason while they were busy finishing the picture editing,” he explains. “Since we’ve done several projects together we know each other very well. David and Jason totally trust me and I know that they like their soundtracks to be very detailed, dynamic and playful. They want the sound to be an integral part of the storytelling and are open to any input. For this movie, they even did a few picture recuts because of some sound ideas I had.”

The Mix
For the two-week final mix, Albrechtsen joined re-recording mixer Pete Horner at Skywalker Sound in Marin County, California. Horner started mixing on the John Waters stage — a small mix room featuring a 5.1 setup of Meyer Sound’s Acheron speakers and an Avid ICON D-Command control surface, while Albrechtsen finished the sound design and premixed the effects against William Ryan Fritch’s score in a separate editing suite. Then Albrechtsen sat with Horner for another week, as Horner crafted the final 5.1 mix.

One of Horner’s mix challenges was to keep the dialogue paramount while still pushing the layered soundscapes that help tell the story. Horner says, “Peter [Albrechtsen] provided a wealth of sounds to work with, which in the spirit of the original Bill Nye show were very playful. But this, of course, presented a challenge because there were so many sounds competing for attention. I would say this is a problem that most documentaries would be envious of, and I certainly appreciated it.”

Once they had the effects playing along with the dialogue and music, Horner and Albrechtsen worked together to decide which sounds were contributing the most and which were distracting from the story. “The result is a wonderfully rich, sometimes manic track,” says Horner.

Albrechtsen adds, “On a busy movie like this, it’s really in the mix where everything comes together. Pete [Horner] is a truly brilliant mixer and has the same musical approach to sound as me. He is an amazing listener. The whole soundtrack — both sound and score — should really be like one piece of music, with ebbs and flows, peaks and valleys.”

Horner explains their musical approach to mixing as “the understanding that the entire palette of sound coming through the faders can be shaped in a way that elicits an emotional response in the audience. Music is obviously musical, but sound effects are also very musical since they are made up of pitches and rhythmic sounds as well. I’ve come to feel that dialogue is also musical — the person speaking is embedding their own emotions into the way they speak using both pitch (inflection or emphasis) and rhythm (pace and pauses).”

“I’ll go even further to say that the way the images are cut by the picture editor is inherently musical. The pace of the cuts suggests rhythm and tempo, and a ‘hard cut’ can feel like a strong downbeat, as emotionally rich as any orchestral stab. So I think a musical approach to mixing is simply internalizing the ‘music’ that is already being communicated by the composer, the sound designer, the picture editor and the characters on the screen, and with the guidance of the director shaping the palette of available sounds to communicate the appropriate complexity of emotion,” says Horner.

In the mix, Horner embraces the documentary’s intention of expressing the duality of Nye’s life: his celebrity versus his private life. He gives the example of the film’s opening, which starts with sounds of a crowd gathering to see Nye. Then it cuts to Nye backstage as he’s preparing for his performance by quietly tying his bowtie in a mirror. “Here the exceptional Foley work of Heikki Kossi creates the sense of a private, intimate moment, contrasting with the voice of the announcer, which I treated as if it’s happening through the wall in a distant auditorium.”

Next it cuts to that announcer, and his voice is clearly amplified and echoing around the auditorium of excited fans. There’s an interview with a fan and his friends who are waiting to take their seats. The fan describes his experience of watching Nye’s TV show in the classroom as a kid and how they’d all chant “Bill, Bill, Bill” as the TV cart rolled in. Underneath plays the sound of the auditorium crowd chanting “Bill, Bill, Bill” as the picture cuts to Nye waiting in the wings.

Horner says, “Again, the Foley here keeps us close to Bill while the crowd chants are in deep echo. Then the TV show theme kicks on, blasting through the PA. I embraced the distorted nature of the production recording and augmented it with hall echo and a liberal use of the subwoofer. The energy in this moment is at a peak as Bill takes the stage exclaiming, ‘I love you guys!’ and the title card comes on. This is a great example of how the scene was already cut to communicate the dichotomy within Bill, between his private life and his public persona. By recognizing that intention, the sound team was able to express that paradox more viscerally.”


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for sound editing on director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B-24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at the Technicolor at Paramount lot. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”

One challenge of the mix and design was to make sure the audience knew the location of a sound by the texture of it. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we are in the basement, and the music and dialogue is happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound is coming from. Everything we did helped the picture show location.”
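The perspective treatment Porter describes (distance, delay and muffling for sounds heard from another room) can be approximated with attenuation, an acoustic travel-time delay and some high-frequency loss. The sketch below is a deliberately crude illustration of the principle, not the ReVibe/TL Space processing itself; every number in it is invented.

```python
def offstage_perspective(samples, sr, distance_m=10.0, damping=0.3):
    """Crude 'through the wall' treatment: attenuate, delay by the
    acoustic travel time, and smear highs with a two-tap average."""
    delay = int(sr * distance_m / 343.0)   # speed of sound ~343 m/s
    out = [0.0] * delay                    # silence until sound arrives
    prev = 0.0
    for x in samples:
        muffled = damping * (x + prev) / 2  # cheap high-frequency loss
        out.append(muffled)
        prev = x
    return out

dry = [1.0, 0.0, 0.0, 0.0]                       # a single click
wet = offstage_perspective(dry, sr=343, distance_m=1.0)
print(len(wet), round(wet[1], 2))                # delayed, quieter, duller
```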

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what’s happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. “We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy boot on wood floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he’s angry. You can feel it in every footstep he takes. He’s throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene,” says Sullivan.

While the film often feels realistic, there were stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper’s house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. “The editing sounds silky smooth. When we transition out of the chanting on-camera, then that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing,” says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Music house Wolf at the Door opens in Venice

Wolf at the Door has opened in Venice, California, providing original music, music supervision and sound design for the ad industry and, occasionally, films. Founders Alex Kemp and Jimmy Haun have been making music for some time: Kemp was a composer at Chicago-based Catfish Music and Spank, and a former creative director of Hum in Santa Monica. Haun spent over 10 years as the senior composer at Elias, in addition to being a session musician.

Between the two of them, they’ve been signed to four major labels, written music for 11 Super Bowl spots and composed for top agencies, including W+K, Goodby, Chiat Day, Team One and Arnold, working with directors like David Fincher, Lance Acord, Stacy Wall and Gore Verbinski.

In addition to making music, Kemp linked up with his longtime friend Scott Brown, a former creative director at agencies including Chiat Day, 72andSunny and Deutsch, to start a surf shop and brand featuring hand-crafted surfboards — Lone Wolfs Objets d’Surf.

With the Wolf at the Door recording studio and production office existing directly behind the Lone Wolfs retail store, Kemp and his partners bounce between different creative projects daily: writing music for spots, designing handmade Lone Wolfs surfboards, recording bands in the studio, laying out their own magazine, or producing their own original branded content.

Episodes of their original surf talk show/Web series Everything’s Not Working have featured guest pro surfers, including Dion Agius, Nabil Samadani and Eden Saul.

Wolf at the Door recently worked on an Experian commercial directed by the Malloy Brothers for the Martin Agency, as well as a Century Link spot directed by Malcom Venville for Arnold Worldwide. Kemp worked closely with Venville on the casting and arrangement for the spot, and traveled to Denver to record the duet of singer Kelvin Jones’ “Call You Home” with Karissa Lee, a young singer Kemp found specifically for the project.

“Our approach to music is always driven by who the brand is and what ideas the music needs to support,” says Kemp. “The music provides the emotional context.” Paying attention to messaging is something that goes hand in hand with carving out their own brand and making their own content. “The whole model seemed ready for a reset. And personally speaking, I like to live and work at a place where being inspired dictates the actions we take, rather than the other way around.”

Main Image L-R: Jimmy Haun and Alex Kemp.

Lime opens sound design division led by Michael Anastasi, Rohan Young

Santa Monica’s Lime Studios has launched a sound design division. LSD (Lime Sound Design), featuring newly signed sound designer Michael Anastasi and Lime sound designer/mixer Rohan Young, has already created sound design for national commercial campaigns.

“Having worked with Michael since his early days at Stimmung and then at Barking Owl, he was always putting out some of the best sound design work, a lot of which we were fortunate to be final mixing here at Lime,” says executive producer Susie Boyajan, who collaborates closely with Lime and LSD owner Bruce Horwitz and the other company partners — mixers Mark Meyuhas and Loren Silber. “Having Michael here provides us with an opportunity to be involved earlier in the creative process, and provides our clients with a more streamlined experience for their audio needs. Rohan and Michael were often competing for some of the same work, and share a huge client base between them, so it made sense for Lime to expand and create a new division centered around them.”

Boyajan points out that “all of the mixers at Lime have enjoyed the sound design aspect of their jobs, and are really talented at it, but having a new division with LSD that operates differently than our current, hourly sound design structure makes sense for the way the industry is continuing to change. We see it as a real advantage that we can offer clients both models.”

“I have always considered myself a sound designer that mixes,” notes Young. “It’s a different experience to be involved early on and try various things that bring the spot to life. I’ve worked closely with Michael for a long time. It became more and more apparent to both of us that we should be working together. Starting LSD became a no-brainer. Our now-shared resources, with the addition of a Foley stage and location audio recordists only make things better for both of us and even more so for our clients.”

Young explains that setting up LSD as its own sound design division, as opposed to bringing in Michael to sound design at Lime, allows clients to separate the mix from the sound design on their production if they choose.

Anastasi joins LSD from Barking Owl, where he spent the last seven years creating sound design for high-profile projects and building long-term creative collaborations with clients. Michael recalls his fortunate experiences recording sounds with John Fasal, and Foley sessions with John Roesch and Alyson Dee Moore as having taught him a great deal of his craft. “Foley is actually what got me to become a sound designer,” he explains.

Projects that Anastasi has worked on include the PSA on human trafficking called Hide and Seek, which won an AICP Award for Sound Design. He also provided sound design for the feature film Casa De Mi Padre, starring Will Ferrell, and served as its sound supervisor. For Nike’s Together project, a two-minute black-and-white piece featuring LeBron James, Anastasi traveled back to LeBron’s hometown of Cleveland to record 500+ extras.

Lime is currently building new studios for LSD, featuring a team of sound recordists and a stand-alone Foley room. The LSD team is currently in the midst of a series of projects launching this spring, including commercial campaigns for Nike, Samsung, StubHub and Adobe.

Main Image: Michael Anastasi and Rohan Young.

The sound of John Wick: Chapter 2 — bigger and bolder

The director and audio team share their process.

By Jennifer Walden

To achieve the machine-like precision of assassin John Wick for director Chad Stahelski’s signature gun-fu-style action films, Keanu Reeves (Wick) goes through months of extensive martial arts and weapons training. The result is worth the effort. Wick is fast, efficient and thorough. You cannot fake his moves.

In John Wick: Chapter 2, Wick is still trying to retire from his career as a hitman, but he’s asked for one last kill. Bound by a blood oath, it’s a job Wick can’t refuse. Reluctantly, he goes to work, but by doing so, he’s dragged further into the assassin lifestyle he’s desperate to leave behind.

Chad Stahelski

Stahelski builds a visually and sonically engaging world on-screen, and then fills it full of meticulously placed bullet holes. His inspiration for John Wick comes from his experience as a stuntman and martial arts stunt coordinator for Lilly and Lana Wachowski on The Matrix films. “The Wachowskis are some of the best world creators in the film industry. Much of what I know about sound and lighting has to do with their perspective that every little bit helps define the world. You just can’t do it visually. It’s the sound and the look and the vibe — the combination is what grabs people.”

Before the script on John Wick: Chapter 2 was even locked, Stahelski brainstormed with supervising sound editor Mark Stoeckinger and composer Tyler Bates — alumni of the first Wick film — and cinematographer Dan Laustsen on how they could go deeper into Wick’s world this time around. “It was so collaborative and inspirational. Mark and his team talked about how to make it sound bigger and more unique; how to make this movie sound as big as we wanted it to look. This sound team was one of my favorite departments to work with. I’ve learned more from those guys about sound in these last two films than I thought I had learned in the last 15 years,” says Stahelski.

Supervising sound editor Stoeckinger, at the Formosa Group in West Hollywood, knows action films. Mission: Impossible II and III, both Jack Reacher films, Iron Man 3 and the upcoming (April) The Fate of the Furious are just a part of his film sound experience. Gun fights, car chases, punches and impacts — Stoeckinger knows that all those big sound effects in an action film can compete with the music and dialogue for space in a scene. “The more sound elements you have, the more delicate the balancing act is,” he explains. “The director wants his sounds to be big and bold. To achieve that, you want to have a low-frequency punch to the effects. Sometimes, the frequencies in the music can steal all that space.”

The Sound of Music
Composer Bates’s score was big and bold, with lots of percussion, bass and strong guitar chords that existed in the same frequency range as the gunshots, car engines and explosions. “Our composer is very good at creating a score that is individual to John Wick,” says Stahelski. “I listened to just the music, and it was great. I listened to just the sound design, and that was great. When we put them together we couldn’t understand what was going on. They overlapped that much.”

During the final mix at Formosa’s Stage B on The Lot, re-recording mixers Andy Koyama and Martyn Zub — who both mixed the first John Wick — along with Gabe Serrano, approached the fight sequences with effects leading the mix, since those needed to match the visuals. Then Koyama made adjustments to the music stems to give the sound effects more room.

“Andy made some great suggestions, like if we lowered the bass here then we can hear the effects punch more,” says Stahelski. “That gave us the idea to go back to our composers, to the music department and the music editor. We took it to the next level conceptually. We had Tyler [Bates] strip out a lot of the percussion and bass sounds. Mark realized we have so many gunshots, so why not use those as the percussion? The music was influenced by the amount of gunfire, sound design and the reverb that we put into the gunshots.”

Mark Stoeckinger

The music and sound departments collaborated through the last few weeks of the final mix. “It was a really neat, synergistic effect of the sound and music complementing each other. I was super happy with the final product,” says Stahelski.

Putting the Gun in Gun-Fu
As its name suggests, gun-fu involves a range of guns — handguns, shotguns and assault rifles. It was up to sound designer Alan Rankin to create a variety of distinct gun effects that not only sounded different from weapon to weapon but also differentiated between John Wick’s guns and the bad guys’ guns. To help Wick’s guns sound more powerful and complex than his foes’, Rankin added different layers of air, boom and mechanical effects. To distinguish one weapon from another, Rankin layered the sounds of several different guns together to make a unique sound.

The result is the type of gun sound that Stoeckinger likes to use on the John Wick films. “Even before this film officially started, Alan would present gun ideas. He’d say, ‘What do you think about this sound for the shotgun? Or, ‘How about this gun sound?’ We went back and forth many times, and once we started the film, he took it well beyond that.”

Rankin developed the sounds further by processing his effects with EQ and limiting to help the gunshots punch through the mix. “We knew we would inevitably have to turn the gunshots down in the mix due to conflicts with music or dialogue, or just because of the sheer quantity of shots needed for some of the scenes,” Rankin says.
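The layering-plus-limiting approach Rankin describes (summing separate air, boom and mechanical elements, then limiting the result so it still punches when turned down in the mix) can be sketched in a few lines of Python with NumPy. This is only a toy illustration, not the actual tools or signal chain used on the film; the synthetic signals below simply stand in for recorded elements.

```python
import numpy as np

def layer_gunshot(layers, gains):
    """Sum time-aligned mono layers (air, boom, mechanical) with per-layer gains."""
    mix = np.zeros_like(layers[0], dtype=np.float64)
    for sig, g in zip(layers, gains):
        mix += g * sig
    return mix

def hard_limit(signal, ceiling=0.9):
    """Very crude peak limiter: clamp samples to +/- ceiling.
    Real limiters use lookahead and smoothed gain reduction."""
    return np.clip(signal, -ceiling, ceiling)

# Toy example: three synthetic layers standing in for recorded gun elements.
sr = 48000
t = np.arange(sr // 10) / sr                                # 100 ms
boom = np.exp(-t * 30) * np.sin(2 * np.pi * 60 * t)         # low-frequency thump
air = np.exp(-t * 80) * np.random.default_rng(0).standard_normal(t.size) * 0.3
mech = np.exp(-t * 200) * np.sin(2 * np.pi * 2000 * t)      # metallic click

shot = hard_limit(layer_gunshot([boom, air, mech], [1.0, 0.8, 0.5]))
```

A production limiter would use lookahead and smoothed gain reduction rather than a hard clamp; the clamp here just guarantees the summed layers never exceed the ceiling.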

Each gun battle was designed entirely in post, since the guns on-screen weren’t shooting live rounds. Rankin spent months designing and evolving the weapons and bullet effects in the fight sequences. He says, “Occasionally there would be a production sound we could use to help sell the space, but for the most part it’s all a construct.”

There were unique hurdles for each fight scene, but Rankin feels the catacombs were the most challenging from a design standpoint, and Zub agrees in terms of mix. “In the catacombs there’s a rapid-fire sequence with lots of shots and ricochets, with body hits and head explosions. It’s all going on at the same time. You have to be delicate with each gunshot so that they don’t all sound the same. It can’t sound repetitive and boring. So that was pretty tricky.”

To keep the gunfire exciting, Zub played with the perspective, the dynamics and the sound layers to make each shot unique. “For example, a shotgun sound might be made up of eight different elements. So in any given 40-second sequence, you might have 40 gunshots. To keep them all from sounding the same, you go through each element of the shotgun sound and either turn some layers off, tune some of them differently or put different reverb on them. This gives each gunshot its own unique character. Doing that keeps the soundtrack more interesting and that helps to tell the story better,” says Zub. For reverb, he used the PhoenixVerb Surround Reverb plug-in to create reverbs in 7.1.
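Zub’s variation technique (muting some layers, re-tuning others and re-balancing the rest so no two of the 40 shots sound identical) can be sketched as a randomized variation pass. Again, this is an illustrative NumPy sketch rather than the workflow used on the mix, and the sine waves are placeholders for real gun elements.

```python
import numpy as np

def vary_shot(layers, rng):
    """Build one gunshot variant: randomly drop layers, detune the
    survivors slightly by resampling, and randomize their gains."""
    out = np.zeros(max(len(l) for l in layers))
    for layer in layers:
        if rng.random() < 0.25:            # roughly 1 in 4 layers muted outright
            continue
        ratio = rng.uniform(0.94, 1.06)    # about +/- 1 semitone of detune
        idx = np.arange(0, len(layer), ratio)
        detuned = np.interp(idx, np.arange(len(layer)), layer)
        gain = rng.uniform(0.6, 1.0)
        n = min(len(out), len(detuned))
        out[:n] += gain * detuned[:n]
    return out

rng = np.random.default_rng(42)
base_layers = [np.sin(2 * np.pi * f * np.arange(4800) / 48000)
               for f in (60, 400, 2000)]   # stand-ins for shotgun elements
variants = [vary_shot(base_layers, rng) for _ in range(40)]
```

Each of the 40 “shots” above ends up with a different combination of active layers, tunings and gains, which is the repetition-avoidance idea Zub describes; the per-shot reverb changes he mentions would be applied on top of this.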

Another challenge was the fight sequence at the museum. To score the first part of Wick’s fight, director Stahelski chose a classical selection from Vivaldi… but with a twist. Instead of relying solely on traditional percussion, “Mark’s team intermixed gunshots with the music,” notes Stahelski. “That is one of my favorite overall sound sequences.”

At the museum, there’s a multi-level mirrored room exhibit with moving walls. In there, Wick faces several opponents. “The mirror room battle was challenging because we had to represent the highly reflective space in which the gunshots were occurring,” explains Rankin. “Martyn [Zub] was really diligent about keeping the sounds tight and contained so the audience doesn’t get worn out from the massive volume of gunshots involved.”

Their goal was to make as much distinction as possible between the gunshot and the bullet impact sounds since visually there were only a few frames between the two. “There was lots of tweaking the sync of those sounds in order to make sure we got the necessary visceral result that the director was looking for,” says Rankin.

Stahelski adds, “The mirror room has great design work. The moment a gun fires, it just echoes through the whole space. As you change the guns, you change the reverb and change the echo in there. I really dug that.”

On the dialogue side, the mirror room offered Koyama an opportunity to play with the placement of the voices. “You might be looking at somebody, but because it’s just a reflection, Andy has their voice coming from a different place in the theater,” Stoeckinger explains. “It’s disorienting, which is what it is supposed to be. The visuals inspired what the sound does. The location design — how they shot it and cut it — that let us play with sound.”

The Manhattan Bridge
Koyama’s biggest challenge on dialogue came in a scene where Laurence Fishburne’s character, The Bowery King, talks to Wick while they’re standing on a rooftop near the busy Manhattan Bridge. Koyama used iZotope RX 5 to help clean up the traffic noise. “The dialogue was very difficult to understand and Laurence was not available for ADR, so we had to save it. With some magic we managed to save it, and it actually sounds really great in the film.”
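iZotope RX itself is proprietary, but the core idea behind broadband noise cleanup of this kind (learn a noise profile from a dialogue-free stretch, then subtract it from the signal’s spectrum frame by frame) can be illustrated with a bare-bones spectral subtraction in Python. RX’s actual processing is far more sophisticated; this is only a sketch of the underlying principle.

```python
import numpy as np

def spectral_subtract(noisy, noise_clip, frame=1024, hop=512):
    """Toy spectral subtraction: average a noise magnitude spectrum from a
    dialogue-free clip, subtract it from each frame of the noisy signal,
    then resynthesize with overlap-add."""
    win = np.hanning(frame)

    def frames(x):
        n = 1 + (len(x) - frame) // hop
        return np.stack([x[i * hop:i * hop + frame] * win for i in range(n)])

    # Noise profile: mean magnitude spectrum of the noise-only clip.
    noise_mag = np.abs(np.fft.rfft(frames(noise_clip), axis=1)).mean(axis=0)

    spec = np.fft.rfft(frames(noisy), axis=1)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract the profile
    cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), axis=1)

    # Overlap-add resynthesis back to one signal.
    out = np.zeros(len(noisy))
    for i, fr in enumerate(cleaned):
        out[i * hop:i * hop + frame] += fr
    return out

# Toy example: a 440 Hz "voice" buried in broadband "traffic" noise.
rng = np.random.default_rng(1)
tone = np.sin(2 * np.pi * 440 * np.arange(8192) / 48000)
noisy = tone + rng.standard_normal(8192) * 0.05
denoised = spectral_subtract(noisy, rng.standard_normal(4096) * 0.05)
```

Naive subtraction like this introduces “musical noise” artifacts, which is exactly why dedicated tools such as RX exist.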

Once Koyama cleaned the production dialogue, Stoeckinger was able to create an unsettling atmosphere there by weaving tonal sound elements with a “traffic on a bridge” roar. “For me personally, building weird spaces is fun because it’s less literal,” says Stoeckinger.

Stahelski strives for a detailed and deep world in his John Wick films. He chooses Stoeckinger to lead his sound team because Stoeckinger’s “work is incredibly immersive, incredibly detailed,” says the director. “The depths that he goes, even if it is just a single sound or tone or atmosphere, Mark has a way to penetrate the visuals. I think his work stands out so far above most other sound design teams. I love my sound department and I couldn’t be happier with them.”


Jennifer Walden is a New Jersey-based writer and audio engineer.

Alvaro Rodríguez

Behind the Title: Histeria Music’s chief audio engineer Alvaro Rodríguez

NAME: Alvaro Rodríguez

COMPANY: Histeria Music (@histeriamusic)

CAN YOU DESCRIBE YOUR COMPANY?
Miami’s Histeria Music is a music production and audio post company. Since its foundation in 2003 we have focused on supporting our clients’ communication needs with powerful music and sound that convey a strong message and create a bond with the audience. We offer full audio post production, music production, and sound design services for advertising, film, TV, radio, video games and the corporate world.

WHAT’S YOUR JOB TITLE?
CEO/Chief Audio Engineer

WHAT DOES THAT ENTAIL?
As an audio post engineer, I work on 5.1 and stereo mixing, ADR and voiceover recordings, voiceover castings and talent direction, music search and editing, dialogue cleanup, remote recording via ISDN and/or Source Connect and sound design.

Studio A

As the owner and founder of the studio, I take care of a ton of things. I make sure our final productions are of the highest quality possible, and handle client services, PR, bookkeeping, social media and marketing. Sometimes it’s a bit overwhelming but I wouldn’t trade it for anything else!

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Some people might think that I just sit behind a console, pushing buttons, trying to make things sound pretty. In reality, I do much more than that. I advise creatives and copywriters on script changes that might better fit whatever project we are recording. I also direct talent, using creative vocabulary to ensure that their delivery is adequate and their performance hits the emotion we are trying to achieve. I get to sound design, edit and move audio clips around on my DAW, almost as if I were composing a piece of music, adding my own sound to the creative process.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Sound design! I love it when I get a video from any of our clients that has no sound whatsoever, not even a scratch recording of a voiceover. This gives me the opportunity to add my signature sound and be as creative as possible and help tell a story. I also love working on radio spots. Since there is no video to support the audio, I usually get to be a bigger part of the creative process once we start putting together the spots. Everything from the way the talent is recorded to the sounds and the way phrases and words are edited together is something I’ll never get tired of doing.

WHAT’S YOUR LEAST FAVORITE?
Sales. It’s tricky because, as the owner, when you succeed it’s the best feeling in the world, but it can also be very frustrating and overwhelming at times.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
During work it has to be that moment you get the email saying the spots have been approved and are ready for traffic. On a personal level, it’s when I take my nine-year-old to soccer practice, usually around 6pm.

Studio B

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Wow, I have no idea how to answer this question. I can’t see myself doing anything else, really, although I’ll add that I am an avid home brewer and enjoy the craft quite a bit.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Ever since I was a kid I had this fascination with things that make sounds. I was always drawn to a guitar or simply buckets I could smack and make some sort of a rhythmic pattern. After high school, I went to college and started studying business administration, following in my dad’s and brother’s footsteps. Not to anyone’s surprise, I quit after the second semester and ended up doing a bit of soul searching. Long story short, I ended up attending Full Sail University, where I graduated from the Recording Arts program back in 2000.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This year started with a great and fun project for us: we are recording ADR for the Netflix series Bloodline. We are also currently working on the audio post and film scoring of a short film called Andante, based on a story by Argentinian author Julio Cortázar.

Also worth mentioning: we recently concluded the audio post for seasons one and two of the MTV show Ridículos, the Spanish- and Portuguese-language adaptation of the original English version of Ridiculousness, which currently airs in Latin America and Brazil.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The first project I ever did for the advertising industry. I was 23 and a recent graduate of Full Sail. All the stars and planets aligned and a campaign for Budweiser — both for the general and US Hispanic markets — landed in my lap. This came from Del Rivero Messianu DDB (currently known as ALMA DDB, Ad Age’s 2017 multicultural agency of the year).

I was living with my parents at the time and had a small home studio in the garage. No Pro Tools, no Digi Beta, just good-old Cool Edit and a VHS player (yes, I manually pressed play on the VHS and Cool Edit to sync my music to picture). Long story short, I ended up writing and producing the music for that TV spot. This led me to open the doors of Histeria Music to the public in 2003.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iZotope’s RX Post Production Suite, Telos Zephyr Xstream ISDN box and Source Connect. I also use the FabFilter Pro-Q 2 quite a bit.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Facebook, Twitter and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I live in Miami and the beach is my backyard, so I find myself relaxing for hours at the beach on weekends. I love to spend time with my family during my son’s soccer practices and games. When I am really stressed and need to be alone, I tend to brew some crafty beers at home. Great hobby!

Jon Hamm

Audio post for Jon Hamm’s H&R Block spots goes to Eleven

If you watch broadcast television at all, you’ve likely seen the ubiquitous H&R Block spots featuring actor Jon Hamm of Mad Men fame. The campaign out of Fallon Worldwide features eight spots — all take place either on a film set or a studio backlot, and all feature Hamm in costume for a part. Whether he’s breaking character dressed in traditional Roman garb to talk about how H&R Block can help with your taxes, or chatting up a zombie during a lunch break, he’s handsome, funny and on point: use H&R Block for your tax needs. Simon McQuoid from Imperial Woodpecker directed.


Jeff Payne

The campaign’s audio post was completed at Eleven in Santa Monica. Eleven founder Jeff Payne worked the spots. “As well as mixing, I created sound design for all of the spots. The objective was to make the sound design feel very realistic and to enhance the scenes in a natural way, rather than a sound design way. For example, on the spot titled Donuts the scene was set on a studio backlot with a lot of extras moving around, so it was important to create that feel without distracting from the dialogue, which was very subtle and quiet. On the spot titled Switch, there was a very energetic music track and fast cutting scenes, but again it needed support with realistic sounds that gave all the scenes more movement.”

Payne says the major challenge for all the spots was to make the dialogue feel seamless. “There were many different angle shots with different microphones that needed to be evened out so that the dialogue sounded smooth.”

In terms of tools, all editing and mixing was done with Avid’s Pro Tools HDX system and S6 console. Sound design was done through Soundminer software.

Jordan Meltzer was assistant mixer on the campaign, and Melissa Elston executive produced for Eleven. Arcade provided the edit, Timber the VFX and post and color was via MPC.

Behind the Title: Stir Post Audio sound designer/mixer Nick Bozzone

NAME: Nick Bozzone

COMPANY: Chicago’s Stir Post Audio (@STIRpost)

DESCRIBE YOUR COMPANY:
Stir Post Audio is made up of engineers, mixers, sound designers and producers who transform audio mixes into what we call “sonic power shots.”

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
As a post sound professional, there are many different disciplines of audio that I use on a day-to-day basis — voiceover recording/mic techniques (ADR included), creative sound designing, voiceover and music editing, 5.1 and stereo broadcast (LKFS) mixing, as well as providing a positive (and fun) voice in the room.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The term sound designer encompasses more than simply spotting stock sound effects to picture; it’s an opportunity to be as creative as my mind allows. It’s a chance at making a sonic signature — a signature that, most of the time, is associated with the product itself. I have been very fortunate through my career so far to have worked on these types of commercial campaigns and short films… projects that have allowed me to stretch my sonic imagination.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is when it’s time to mix. Mixing can be just as creative as sound design, if not more so. There are a lot of technical aspects to mixing heavy-hitting commercials. Most of the time there are a bunch of very dynamic elements going on at the same time. The finesse of a great mix is the ability to take all of these things, bring them together and have each one sitting in its own spot.

WHAT’S YOUR LEAST FAVORITE?
It may be my least favorite part, but it’s a necessary evil… archiving!

WHAT’S YOUR FAVORITE TIME OF THE DAY?
During work, it’s when the whole room gives my mix a thumbs up. During the weekend, it’s definitely around sunset. For whatever reason, no matter how tired I am, around sunset is when my body kicks into its second wind and I become a night owl (or at least I used to be one before my daughter was born five months ago).

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
“If you love what you do, you’ll never work a day in your life.” That was told to me when I entered college, and I took that quote to heart. Originally, I thought that I wanted to be a creative writer and then I had an interest in being a hypnotherapist. Both were interesting to me, but neither one was holding my interest for very long. Thankfully, I took an introductory class in Pro Tools. That one class showed me that there could be a future in sound. You never know where you’ll get your inspiration.

Nick creating sounds for Mist Twst.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Many projects that come through our doors require quite a bit of strategy with regard to the intention or emotion of the project. I worked on the re-branding campaign for Pepsi’s Sierra Mist, which changed its name to Mist Twst.

There were a lot of very specific sound design elements I created in that session. The intention was to not just make an everyday run-of-the-mill soda commercial; we wanted it to feel crisp, clean and natural like the drink. So, we went to the store and bought a bunch of different fruits and vegetables, and recorded ourselves cutting, squeezing, and dropping them into a fizzy glass of Mist Twst. We even recorded ourselves opening soda cans at different speeds and pouring soda into glasses with and without ice.

I also worked on a really fun 5 Gum radio campaign that won a Radio Mercury Award. The concept was a “truth or dare” commercial geared toward people streaming music with headphones on. It allows the listener to choose whether to play along with listening to the left headphone for a truth, or the right headphone to do a dare.

We did a campaign for Aleve with a beautiful film showing a grandfather on an outing with his granddaughter at an amusement park when suddenly he throws his back out. The entire park grinds to a halt as a result, both visually and sonically. There was a lot of sound design involved in this process, and it was a very fun and creative experience.

Kerrygold

For a recent package of TV spots for Kerrygold, the Irish dairy group, created by Energy BBDO, my main goal for “Made for this Moment” was to let the gentle music track and great lyrics have center stage and breathe, as if they were their own character in the story. My approach to the sound design was to fill out each scene with subtle elements that are almost felt rather than heard… nothing poking through further than anything else, and nothing competing with the music, only enhancing the overall mood.

Cory Melious

Behind the Title: Heard City senior sound designer/mixer Cory Melious

NAME: Cory Melious

COMPANY: Heard City (@heardcity)

CAN YOU DESCRIBE YOUR COMPANY?
We are an audio post production company.

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
I provide final mastering of the audio soundtrack for commercials, TV shows and movies. I combine the production audio recorded on set (typically dialogue), narration, music (whether it’s an original composition or an artist’s track) and sound effects (often created by me) into one 5.1 surround soundtrack that plays on both TV and the Internet.

Heard City

WHAT WOULD SURPRISE PEOPLE ABOUT WHAT FALLS UNDER THAT TITLE?
I think most people without a production background think the sound of a spot just “is.” They don’t really think about how or why it happens. Once I start explaining the sonic layers we combine to make up the final mix they are really surprised.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The part that really excites me is the fact that each spot offers its own unique challenge. I take raw audio elements and tweak and mold them into a mix. Working with the agency creatives, we’re able to develop a mix that helps tell the story being presented in the spot. In that respect I feel like my job changes day in and day out and feels fresh every day.

WHAT’S YOUR LEAST FAVORITE?
Working late! There are a lot of late hours in creative jobs.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I really like finishing a job. It’s that feeling of accomplishment when, after a few hours, I’m able to take some pretty rough-sounding dialog and manipulate that into a smooth-sounding final mix. It’s also when the clients we work with are happy during the final stages of their project.

WHAT TOOLS DO YOU USE ON A DAY-TO-DAY BASIS?
Avid Pro Tools, iZotope RX, Waves Mercury, Altiverb and ReVibe.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
One of my many hobbies is making furniture. My dad is a carpenter and taught me how to build at a very young age. If I never had the opportunity to come to New York and make a career here, I’d probably be building and making furniture near my hometown of Seneca Castle, New York.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I think this profession chose me. When I was a kid I was really into electronics and sound. I was both the drummer and the front-of-house sound mixer for my high school band. Mixing from behind the speakers definitely presents some challenges! I went on to college to pursue a career in music recording, but when I got an internship in New York at a premier post studio, I truly fell in love with creating sound for picture.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, I’ve worked on Chobani, Google, Microsoft, and Budweiser. I also did a film called The Discovery for Netflix.

The Discovery for Netflix.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I’d probably have to say Chobani. That was a challenging campaign because the athletes featured in it were very busy. In order to capture the voiceover properly I was sent to Orlando and Los Angeles to supervise the narration recording and make sure it was suitable for broadcast. The spots ran during the Olympics, so they had to be top notch.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iPhone, iPad and depth finder. I love boating and can’t imagine navigating these waters without knowing the depth!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on the basics — Facebook, LinkedIn and Instagram. I dabble with Snapchat occasionally and will even open up Twitter once in a while to see what’s trending. I’m a fan of photography and nature, so I follow a bunch of outdoor Instagrammers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I joke with my friends that all of my hobbies are those of retired folks — sailing, golfing, fly fishing, masterful dog training, skiing, biking, etc. I joke that I’m practicing for retirement. I think hobbies that force me to relax and get out of NYC are really good for me.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

“Girls do not do rewrites,” says Jim Belushi’s character, Wick McFadden, in Amazon Studios’ series Good Girls Revolt. It’s 1969, and he’s the national editor at News of the Week, a fictional news magazine based in New York City. He’s confronting the new researcher Nora Ephron (Grace Gummer), who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is, in 1969 women aren’t writers; they’re only “researchers” following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender-bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom they constructed had an open floor plan with a bi-level design. The girls are located in “the pit” area downstairs from the male writers. The newsroom production set was hollow, which caused an issue with the actors’ footsteps that were recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. “We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage,” says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work — sound editorial, Foley, ADR, loop group and final mix — was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialogue editor Sean Massey made every effort to save production dialogue when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, “the set mixer Joe Foglia was terrific,” says Austin. “He captured some great tracks despite all these issues, and for that we’re very thankful!”

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds, like typewriters clacking and dinging, the sound of rotary phones with whirring dials and bell-style ringers, the sound of papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library, from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this and so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years, and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.
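Revoice Pro and D-Verb are doing the real work here, but the core “slow down and pitch down” smear can be approximated with plain resampling, where reading samples back at a fraction of the original rate lowers pitch and stretches time together. A minimal illustrative sketch in Python (not the show’s actual processing chain):

```python
import numpy as np

def slow_and_pitch_down(signal: np.ndarray, factor: float) -> np.ndarray:
    """Resample a mono signal so it plays `factor` times slower.

    Reading samples at 1/factor speed lowers pitch and stretches time
    together (the classic slowed-tape smear). Dedicated tools like
    Revoice Pro can decouple pitch and time; this sketch does not.
    """
    n_out = int(len(signal) * factor)
    # Fractional read positions into the original signal, 1/factor apart.
    positions = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(positions, np.arange(len(signal)), signal)

sr = 48000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 440 * t)        # stand-in for a loop-group line
smeared = slow_and_pitch_down(voice, 2.0)  # twice as long, an octave lower
```

Feeding only selected sections of a line through a function like this, as the editors did with the loop group, is what makes the perspective feel warped rather than simply wrong.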

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialogue/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.

The sound of fighting in Jack Reacher: Never Go Back

By Jennifer Walden

Tom Cruise is one tough dude, and not just on the big screen. Cruise, who seems to be aging very gracefully, famously likes to do his own stunts, much to the dismay of many film studio execs.

Cruise’s most recent tough guy turn is in the sequel to 2014’s Jack Reacher. Jack Reacher: Never Go Back, which is in theaters now, is based on the protagonist in author Lee Child’s series of novels. Reacher, as viewers quickly find out, is a hands-on type of guy — he’s quite fond of hand-to-hand combat where he can throw a well-directed elbow or headbutt a bad guy square in the face.

Supervising sound editor Mark P. Stoeckinger, based at Formosa Group’s Santa Monica location, has worked on numerous Cruise films, including both Jack Reachers, Mission: Impossible II and III, The Last Samurai and he helped out on Edge of Tomorrow. Stoeckinger has a ton of respect for Cruise, “He’s my idol. Being about the same age, I’d love to be as active and in shape as he is. He’s a very amazing guy because he is such a hard worker.”

The audio post crew on ‘Jack Reacher: Never Go Back.’ Mark Stoeckinger is on the right.

Because he does his own stunts, and thanks to the physicality of Jack Reacher’s fighting style, sometimes Cruise gets a bruise or two. “I know he goes through a fair amount of pain, because he’s so extreme,” says Stoeckinger, who strives to make the sound of Reacher’s punches feel as painful as they are intended to be. If Reacher punches through a car window to hit a guy in the face, Stoeckinger wants that sound to have power. “Tom wants to communicate the intensity of the impacts to the audience, so they can appreciate it. That’s why it was performed that way in the first place.”

To give Reacher’s fights that visceral, intense feel, Stoeckinger takes a multi-frequency approach. He layers high-frequency sounds, like swishes and slaps to signify speed, with low-end impacts to add weight. The layers are always an amalgamation of sound effects and Foley.

Stoeckinger prefers pulling hit impacts from sound libraries, or creating impacts specifically with “oomph” in mind. Then he uses Foley to flesh out the fight, filling in the details to connect the separate sound effects elements in a way that makes the fights feel organic.
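The multi-frequency layering Stoeckinger describes can be sketched numerically: a bright, fast-decaying noise burst stands in for the swish, a low decaying sine for the weight, and the two are summed and normalized. All frequencies, decay times and mix levels below are illustrative guesses, not values from the film:

```python
import numpy as np

def punch_impact(sr: int = 48000, dur: float = 0.25) -> np.ndarray:
    """Layer a high-frequency 'swish' with a low-end impact.

    High layer: noise burst with a fast decay, signifying speed.
    Low layer: 60 Hz decaying sine, adding weight.
    All parameters here are illustrative, not production values.
    """
    n = int(sr * dur)
    t = np.arange(n) / sr
    rng = np.random.default_rng(0)
    swish = rng.standard_normal(n) * np.exp(-t * 40.0)        # bright, quick
    thump = np.sin(2 * np.pi * 60.0 * t) * np.exp(-t * 12.0)  # low, slower
    mix = 0.4 * swish + thump
    return mix / np.max(np.abs(mix))  # normalize to full scale

hit = punch_impact()
```

In production these layers would be curated library recordings and Foley rather than synthesis, but the principle is the same: speed lives in the highs, pain lives in the lows.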

The Sounds of Fighting
Under Stoeckinger’s supervision, a fight scene’s sound design typically begins with sound effects. This allows his sound team to start immediately, working with what they have at hand. On Jack Reacher: Never Go Back this task was handed over to sound effects editor Luke Gibleon at Formosa Group. Once the sound effects were in place, Stoeckinger booked the One Step Up Foley stage with Foley artist Dan O’Connell. “Having the effects in place gives us a very clear idea of what we want to cover with Foley,” he says. “Between Luke and Dan, the fight soundscapes for the film came to life.”

The culminating fight sequence, where Reacher inevitably prevails over the bad guy, was Stoeckinger’s favorite to design. “The arc of the film built up to this fight scene, so we got to use some bigger sounds. Although, it still needed to seem as real as a Hollywood fight scene can be.”

The sound there features low-frequency embellishments that help the audience to feel the fight and not just hear it. The fight happens during a rowdy street festival in New Orleans in honor of the Day of the Dead. Crowds cavort with noisemakers, bead necklaces rain down, music plays and fireworks explode. “Story wise, the fireworks were meant to mask any gunshots that happened in the scene,” he says. “So it was about melding those two worlds — the fight and the atmosphere of the crowds — to help mask what we were doing. That was fun and challenging.”

The sounds of the street festival scene were all created in post since there was music playing during filming that wasn’t meant to stay on the track. The location sound did provide a sonic map of the actual environment, which Stoeckinger considered when rebuilding the scene. He also relied on field recordings captured by Larry Blake, who lives in New Orleans. “Then we searched for other sounds that were similar because we wanted it to sound fun and festive but not draw the ear too much since it’s really just the background.”

Stoeckinger sweetened the crowd sounds with recordings they captured of various noisemakers, tambourines, bead necklaces and group ADR to add mid-field and near-field detail when desired. “We tried to recreate the scene, but also gave it a Hollywood touch by adding more specifics and details to bring it more to life in various shots, and bring the audience closer to it or further away from it.”

Stoeckinger also handled design on the film’s other backgrounds. His objective was to keep the locations feeling very real, so he used a combination of practical effects they recorded and field recordings captured by effects editor Luke Gibleon, in addition to library effects. “Luke [Gibleon] has a friend with access to an airport, so Luke did some field recordings of the baggage area and various escalators with people moving around. He also captured recordings of downtown LA at night. All of those field recordings were important in giving the film a natural sound.”

There were numerous locations in this film. One was when Reacher meets up with a teenage girl whom he’s protecting from the bad guys. She lives in a sketchy part of town, so to reinforce the sketchiness of the neighborhood, Stoeckinger added nearby train tracks to the ambience and created street walla that had an edgy tone. “It’s nothing that you see outside of course, but sound-wise, in the ambient tracks, we can paint that picture,” he explains.
In another location, Stoeckinger wanted to sell the idea that they were on a dock, so he added in a boat horn. “They liked the boat horn sound so much that they even put a ship in the background,” he says. “So we had little sounds like that to help ground you in the location.”

Tools and the Mix
At Formosa, Stoeckinger has his team work together in one big Avid Pro Tools 12 session that included all of their sounds: the Foley, the backgrounds, sound effects, loop group and design elements. “We shared it,” he says. “We had a ‘check out’ system, like, ‘I’m going to check out reel three and work on this sequence.’ I did some pre-mixing, where I went through a scene or reel and decided what’s working or what sections needed a bit more. I made a mark on a timeline and then handed that off to the appropriate person. Then they opened it up and did some work. This master session circulated between two or three of us that way.” Stoeckinger, Gibleon and sound designer Alan Rankin, who handled guns and miscellaneous fight sounds, worked on this section of the film.

All the sound effects, backgrounds, and Foley were mixed on a Pro Tools ICON, and kept virtual from editorial to the final mix. “That was helpful because all the little pieces that make up a sound moment, we were able to adjust them as necessary on the stage,” explains Stoeckinger.

Premixing and the final mixes were handled at Twentieth Century Fox Studios on the Howard Hawks Stage by re-recording mixers James Bolt (effects) and Andy Nelson (dialogue/music). Their console arrangement was a hybrid, with the effects being mixed on an Avid ICON, and the dialogue and music mixed on an AMS Neve DFC console.

Stoeckinger feels that Nelson did an excellent job of managing the dialogue, particularly for moments where noisy locations may have intruded upon subtle line deliveries. “In emotional scenes, if you have a bunch of noise that happens to be part of the dialogue track, that detracts from the scene. You have to get all of the noise under control from a technical standpoint.” On the creative side, Stoeckinger appreciated Nelson’s handling of Henry Jackman’s score.

On effects, Stoeckinger feels Bolt did an amazing job in working the backgrounds into the Dolby Atmos surround field, like placing PA announcements in the overheads, pulling birds, cars or airplanes into the surrounds. While Stoeckinger notes this is not an overtly Atmos film, “it helped to make the film more spatial, helped with the ambiences and they did a little bit of work with the music too. But, they didn’t go crazy in Atmos.”

Lucky Post helps with the funny for McDonald’s McPick 2 spots

Lucky Post editor Travis Aitken and sound designer Scottie Richardson were part of the new campaign for McDonald’s, via agency Moroch, that reminds us that there are many things you cannot choose, but you can “McPick 2.”

The campaign — shot by production house Poster with directors Plástico and Sebastian Caporelli — highlights humor in the subtleties of life. Parents features a not-so-cool, but well-meaning, dad and his teenage son talking about texts and “selfies” while enjoying a McPick 2 meal from McDonald’s. His son explains the picture he is showing him isn’t a selfie, but his father defends it, saying, “Yeah, it is. I took it myself.”

Passengers features a little guy sandwiched between two big, muscular guys in a three-seater row on an airplane. The only thing that makes him feel better is that he chose to bring a McPick 2 meal with him.

“Performance comedy, like these spots, is at its best when you’re seeing people interacting in frame,” says editor Aitken, who cut using Adobe Premiere. “You don’t want to manipulate too much in the edit — it is finding the best performances and allowing them to play out. In that sense, editing with dialogue comedy is punctuation. It’s vastly different than other genres — beauty, for example, where you are editing potentially unrelated images and music to create the story. Here, the story is in front of you.”

According to sound designer Richardson, “My job was to make sure dialogue was clear and create ambient noise that provided atmosphere but didn’t overwhelm the scenes. I used Avid Pro Tools with Soundminer and Sony Oxford noise reduction to provide balance and let the performances shine.”

The executive producer for Dallas-based Lucky Post was Jessica Berry. MPC’s Ricky Gausis provided the color grade.

SuperExploder’s Jody Nazzaro creates sounds of love for Popeyes, Comedy Central

Sound designer/mixer Jody Nazzaro from New York audio house SuperExploder teamed up with Comedy Central to help tell the story of a boy who needs to be more “Southern Fair” if he wants to land the girl of his dreams in a new :60 parody movie trailer Southern Crossed Lovers for Popeyes.

Poor Chester!

In the faux trailer, a young couple meets and falls in love at a country fair, but the girl’s parents disapprove, saying he’s “not Fair enough” for her. They are pushing her toward Chester, the red suspenders-wearing corn dog dipper. In the end, our love-struck hero shows up in a traditional southern suit holding a box of Popeyes Southern Fair tenders and Cajun fries, quickly winning the heart of the girl’s father.

The direction that Nazzaro got from the client was what every artist wants to hear: “We trust your instincts, go for it. Make it feel like a trailer.”

According to Nazzaro, “This project aligned with essentially the new standard of sound design and mixing for broadcast networks. The money isn’t there for ISDNs and phone patches anymore, and most talent records at home with the producer on the phone and sends the VO files via file sharing.”

He received the picture reference as a 1920×1080 ProRes QuickTime and an AAF from Adobe Premiere. “Clean production dialogue was sent that I conformed as they cut with the camera mix. Once I prepped the session in Pro Tools, I began to clean up the dialogue in iZotope RX 5 Advanced and build the ambience tracks,” he explains. “I added some Foley, edited the music and enhanced the dramatic music swell a bit with Omnisphere.”

He mixed the spot in stereo and 5.1, in case they needed it for cinema release — which he says is standard workflow for him now — and sent it off for approval. It was approved on his first mix pass.

“It was a lot of fun working on a non-standard project with a twist — making it feel like a real trailer,” says Nazzaro. “With the audio, I felt like less was more. I wanted to let the voiceover and the dialogue carry it into a comedic misdirection.”

VR Audio: Crytek goes to new heights for VR game ‘The Climb’

By Jennifer Walden

Dealing with locomotion, such as walking and especially running, is a challenge for VR content developers — but what hasn’t been a challenge in creating VR content? Climbing, on the other hand, has proved to be a simple, yet interesting, locomotion that independent game developer Crytek found to be sustainable for the duration of a full-length game.

Crytek, known for the Crysis game series, recently released their first VR game title, The Climb, a rock climbing adventure exclusively for the Oculus Rift. Players climb, swing and jump their way up increasingly difficult rock faces modeled after popular climbing destinations in places like Indonesia, the Grand Canyon and The Alps.

Crytek’s director of audio, Simon Pressey, says their game engine, CryEngine, is capable of UltraHD resolutions higher than 8K. They could have taken GPS data of anywhere in the world and turned that into a level on The Climb. “But to make the climbing interesting and compelling, we found that real geography wasn’t the way to go. Still, we liked the idea of representing different areas of the world,” he says. While the locations Crytek designed aren’t perfect geographical imitations, geologically they’re pretty accurate. “The details of how the rocks look up close — the color, the graininess and texture — they are as close to photorealistic as we can get in the Oculus Rift. We are running at a resolution that the Rift can handle. So how detailed it looks depends on the Rift’s capabilities.”

Keep in mind that this is first-generation VR technology. “It’s going to get better,” promises Pressey. “By the third-generation of this, I’m sure we’ll have visuals you can’t tell apart from reality.”

Simon Pressey

The Sound Experience
Since the visuals aren’t perfect imitations of reality, the audio is vital for maintaining immersion and supporting the game play. Details in the audio actually help the brain process the visuals faster. Even still, flaws and all, first-gen VR headsets give the player a stronger connection to his/her actions in-game than was previously possible with traditional 2D (flat screen) games. “You can look away from the screen in a traditional game, but you can’t in VR. When you turn around in The Climb, you can see a thousand feet below you. You can see that it’s a long way down, and it feels like a long way down.”

One key feature of the Oculus Rift is the integrated audio — it comes equipped with headphones. For Pressey, that meant knowing the exact sound playback system of the end user, a real advantage from a design and mix standpoint. “We were designing for a known playback variable. We knew that it would be a binaural experience. Early on we started working with the Oculus-provided 3D encoder plug-in for Audiokinetic’s Wwise, which Oculus includes with their audio SDK. That plug-in provides HRTF binaural encoding, adding the z-axis that you don’t normally experience even with surround sound,” says Pressey.

He explains that the sounds start as mono source-points, positioned in a 3D space using middleware like Wwise. Then, using the Oculus audio SDK via the middleware, those audio signals are downmixed to binaural stereo with HRTF (head-related transfer function) processing, which adds a spatialized effect to the sounds. So even though the player is listening through two speakers, he/she perceives sounds as coming from the left, the right, in front, behind, above and below.
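True HRTF processing convolves each source with measured, direction-dependent filter pairs; as a much-simplified stand-in, interaural level and time differences alone can already place a mono source to the left or right. A toy sketch of that idea (not the Oculus SDK’s actual algorithm):

```python
import numpy as np

def pan_binaural(mono: np.ndarray, azimuth_deg: float, sr: int = 48000) -> np.ndarray:
    """Crude binaural pan using only ILD and ITD.

    Real HRTFs apply direction-dependent filtering per ear; here we
    approximate just the interaural level difference (equal-power pan)
    and the interaural time difference (up to ~0.66 ms across a head).
    Positive azimuth places the source to the right.
    """
    az = np.radians(azimuth_deg)
    # Equal-power gains: equal at 0 degrees, one ear silent at +/-90.
    left_gain = np.cos((az + np.pi / 2) / 2)
    right_gain = np.sin((az + np.pi / 2) / 2)
    # The far ear hears the sound slightly later.
    delay = int(abs(np.sin(az)) * sr * 0.00066)
    left = left_gain * mono
    right = right_gain * mono
    if azimuth_deg > 0:    # source on the right: delay the left ear
        left = np.concatenate([np.zeros(delay), left])[: len(mono)]
    elif azimuth_deg < 0:  # source on the left: delay the right ear
        right = np.concatenate([np.zeros(delay), right])[: len(mono)]
    return np.stack([left, right])

sig = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
hard_right = pan_binaural(sig, 90.0)  # nearly silent in the left ear
```

What this toy version cannot do is elevation or front/back discrimination, which is precisely what the per-direction filtering of a real HRTF adds.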

Since most VR is experienced with headphones, Pressey feels there is an opportunity to improve the binaural presentation of the audio [i.e., better headphones or in-ear monitors], and to improve 3D positional audio with personalized HRTFs and Ambisonics. “While the visuals are still very apparently a representation of reality, the audio is perceived as realistic, even if it is a totally manufactured reality. The headphone environment is very intimate and allows greater use of dynamic range, so subtle mixes and more realistic recordings and rendering are sort of mandatory.”

Realistic Sound
Pressey leads the Crytek audio team, and together they collaborated on The Climb’s audio design, which includes many different close-up hand movements and grabs that signify the quality of the player’s grip. There are sweaty, wet sounding hand grabs. There are drier, firmer hand grabs for when a player’s hands are freshly chalked. There are rock crumbles for when holds crumble away.

At times a player needs to wipe dirt away from a hold, or brush aside vegetation. These are very subtle details that in most games wouldn’t be sounded, says Pressey. “But in VR, we are going into very subtle detail. Like, when you rub your hands over plants searching for grips, we are following your movement speed to control how much sound it makes as you ruffle the leaves.” It’s that level of detail that makes the immersion work. Even though in real life a sound so small would probably be masked by other environmental sounds, in the intimacy of VR, those sounds engage the player in the action of climbing.

Breathing and heartbeat elements also pull a player into the game experience. After moving through several holds, a player’s hands get sweaty, and the breathing sound becomes more labored. If the hold crumbles or if a player is losing his/her grip, the audio design employs a heartbeat sound. “It is not like your usual game situation where you hear a heartbeat if you have low health. In The Climb you actually think, ‘I’ve got to jump!’ Your heart is racing, and after you make the jump and chalk your hands, then your heartbeat and your breathing slow down, and you physically relax,” he says.

Crytek’s aim was to make The Climb believable, to have realistic qualities, dynamic environments and a focused sound to mimic the intensity of focus felt when concentrating on important life or death decisions. They wanted the environment sounds to change, such as the wind changing as a player moves around a corner. But, they didn’t want to intentionally draw the player’s attention away from climbing.

For example, there’s a waterfall near one of the climbs, and the sound for it plays subtly in the background. If the player turns to look at it, then the waterfall sound fades up. They are able to focus the player’s attention by attenuating non-immediate sounds. “You don’t want to hear that waterfall as the focus of your attention and so we steer the sound. But, if that is what you’re focusing on, then we want to be more obvious,” explains Pressey.
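That gaze-driven mixing can be modeled as a gain curve over the angle between the player’s view direction and the sound source, ducking off-gaze sources toward a floor level. A hypothetical sketch of the idea (in practice this would be driven through middleware parameters, e.g. a Wwise RTPC, rather than raw code):

```python
import numpy as np

def focus_gain(view_dir, source_dir, floor_db: float = -18.0) -> float:
    """Duck a sound the further it sits from the player's gaze.

    Gain is full (0 dB) when looking straight at the source and falls
    linearly in dB toward `floor_db` directly behind the player.
    The curve shape and floor value are illustrative, not from The Climb.
    """
    v = np.asarray(view_dir, dtype=float)
    s = np.asarray(source_dir, dtype=float)
    cos_angle = np.dot(v, s) / (np.linalg.norm(v) * np.linalg.norm(s))
    focus = (cos_angle + 1.0) / 2.0   # 1.0 = in view, 0.0 = behind
    gain_db = floor_db * (1.0 - focus)
    return 10.0 ** (gain_db / 20.0)

looking_at = focus_gain([0, 0, 1], [0, 0, 1])     # waterfall in view
looking_away = focus_gain([0, 0, 1], [0, 0, -1])  # waterfall behind
```

Feeding head-tracking data through a curve like this each frame is what lets the waterfall fade up only when the player actually turns to look at it.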

The Crytek audio team

The Crytek audio team records, designs and edits sounds in Steinberg’s Nuendo 7, which works directly with Audiokinetic’s Wwise middleware that connects directly to the CryEngine. The audio team, which has been working this way for the past two years, feels the workflow is very iterative, with the audio flowing easily in that pipeline from Nuendo 7 to Wwise to CryEngine and back again. They are often able to verify the audio in-game without needing to request code support. If a sound isn’t working in-game, it can be tweaked in Wwise or completely reworked in Nuendo. All aspects of the pipeline are version controlled and built for sharing work across the audio team.

“It’s a really tight workflow and we can do things quickly. In the game world, speed is everything,” says Pressey. “The faster you get your game to market the sooner you recoup on your very heavy R&D.”

Two factors that propelled this workflow are the collaboration between Crytek, Audiokinetic and Steinberg in designing software tailored to the specific needs of game audio pros, and Crytek’s overhaul of CryEngine where they removed the integrated FMOD-based audio engine in favor of using an external audio engine. Running the audio engine separate from the game engine not only improves the game engine efficiency, it also allows updates to the audio engine as needed without fear of breaking the game engine.

Within hours of Wwise releasing an update, for example, Pressey says their system can be up to date. “Previously, it could’ve been a long and complicated process to incorporate the latest updates. There was always the risk of crashing the whole system by making a change because the code was so mixed up with the rest of the system. By separating them we can always be running the latest versions of things without risking anything.”

Having that adaptability is essential for VR content creation since the industry is changing all the time. For example, Sony’s PS4 VR headset release is slated for this fall, so they’re releasing a new SDK about every week or so, according to Pressey.

CryEngine is freely available for anyone to use. VR games developed with CryEngine will work for any VR platform. CryEngine is also audio middleware agnostic, meaning it can talk to any audio middleware, be it Wwise, FMOD or proprietary middleware. Users can choose a workflow that best suits the needs of their game.

Pressey finds creating for VR to be an intensely experimental process, for every discipline involved in game development. While most members on the Crytek team have solved problems relating to a new IP or a new console, Pressey says, “We were not prepared for this amount of new. We were all used to knowing what we were doing, and now we are experimenting with no net to fall back on. The experience is surprisingly different; the interaction using your eye and head tracking is much more physical. It is more intimate. There is an undeniable and inescapable immersion, in that you can’t look away as the game world is all around you. You can’t switch off your ears.” The first time Pressey put on a VR headset, he knew there was no going back. “Before that, I had no real idea. It is the difference between reading about a country and visiting it.”

Upcoming Release
Crytek will be presenting a new VR release titled Robinson — The Journey at E3 this month, and Pressey gives us a few hints as to what the game experience might be like. He says that VR offers new ways of storytelling, such as nonlinear storytelling. “Crytek and the CryEngine team have developed a radically new Dynamic Response System to allow the game to be intelligent in what dialog gets presented to the player at what time. Aspects of a story can be sewn together and presented based on the player’s approach to the game. This technology takes the idea of RPG-like branching storylines to a new level, and allows narrative progression in what I hope will be new and exciting territory for VR.”

The Climb uses this Dynamic Response System in a limited capacity during the tutorial where the instructor is responsive to the player’s actions. “Previously, to be that responsive, a narrative designer or level designer would have to write pages of logic to do what our new system does very simply,” concludes Pressey.

Jennifer Walden is an audio engineer and writer based in New Jersey.

Larson Studios pulls off an audio post slam dunk for FX’s ‘Baskets’

By Jennifer Walden

Turnarounds for TV series are notoriously fast, but imagine a three-day sound post schedule for a single-camera half-hour episodic series? Does your head hurt yet? Thankfully, Larson Studios in Los Angeles has its workflow on FX’s Baskets down to a science. In the show, Zach Galifianakis stars as Chip Baskets, who works as a California rodeo clown after failing out of a prestigious French clown school.

So how do you crunch a week and a half’s worth of work into three days without sacrificing quality or creativity? Larson’s VP, Rich Ellis, admits they had to create a very aggressive workflow, which was made easier thanks to their experience working with Baskets post supervisor Kaitlin Menear on a few other shows.

Ellis says having a supervising sound editor — Cary Stacy — was key in setting up the workflow. “There are others competing for space in this market of single-camera half-hours, and they treat post sound differently — they don’t necessarily bring a sound supervisor to it. The mixer might be cutting and mixing and wrangling all of the other elements, but we felt that it was important to continue to maintain that traditional sound supervisor role because it actually helps the process to be more efficient when it comes to the stage.”

John Chamberlin and Cary Stacy

This allows re-recording mixer John Chamberlin to stay focused on the mix while sound supervisor Stacy handles any requests that pop up on stage, such as alternate lines or options for door creaks. “I think director Jonathan Krisel gave Cary at least seven honorary Emmy awards for door creaks over the course of our mix time,” jokes Menear. “Cary can pull up a sound effect so quickly, and it is always exactly perfect.”

Every second counts when there are only seven hours to mix an episode from top to bottom before post producer Menear, director Krisel and the episode’s picture editor join the stage for the two-hour final fixes and mix session. Having complete confidence in Stacy’s alternate selections, Chamberlin says he puts them into the session, grabs the fader and just lets it roll. “I know that Cary is going to nail it and I go with it.”

Even before the episode gets to the stage, Chamberlin knows that Stacy won’t overload the session with unnecessary elements, which are time consuming. Even still, Chamberlin says the mix is challenging in that it’s a lot for one person to do. “Although there is care taken to not overload what is put on my plate when I sit down to mix, there are still 8 to 10 tracks of Foley, 24 or more tracks of backgrounds and, depending on the show, the mono and stereo sound effects can be 20 tracks. Dialogue is around 10 and music can be another 10 or 12, plus futz stuff, so it’s a lot. You have to have a workflow that’s efficient and you have to feel confident about what you’re doing. It’s about making decisions quickly.”

Chamberlin mixed Baskets in 5.1 — using a Pro Tools 11 system with an Avid ICON D-Command — on Stage 4 at Larson Studios, where he’s mixed many other shows, such as Portlandia, Documentary Now, Man Seeking Woman, Dice, the upcoming Netflix series Easy, Comedy Bang Bang, Meltdown With Jonah and Kumail and Kroll Show. “I’m so used to how Stage 4 sounds that I know when the mix is in a good place.”

Another factor in the three-day turnaround is the choice to forgo loop group and to use ADR only when it’s absolutely necessary. The post sound team relied on location sound mixer Russell White to capture all the lines as clearly as possible on set, which was a bit of a challenge with the non-principal characters.

Tricky On-Set Audio
According to Menear, director Krisel loves to cast non-actors in the majority of the parts. “In Baskets, outside of our three main roles, the other people are kind of random folk that Jonathan has collected throughout his different directing experiences,” she says. While that adds a nice flavor creatively, the inexperienced cast members tend to step on each other’s lines, or not project properly — problems you typically won’t have with experienced actors.

For example, Louie Anderson plays Chip’s mom Christine. “Louie has an amazing voice and it’s really full and resonant,” explains Chamberlin. “There was never a problem with Louie or the pro actors on the show. The principals were very well represented sonically, but the show has a lot of local extras, and that poses a challenge in the recording of them. Whether they were not talking loud enough or there was too much talking.”

A good example is the Easter brunch scene in Episode 104. Chip, his mother and grandmother encounter Martha (Chip’s insurance agent/pseudo-friend played by Martha Kelly) and her parents having brunch in the casino. They decide to join their tables together. “There were so many characters talking at the same time, and a lot of the side characters were just having their own conversations while we were trying to pay attention to the main characters,” says Stacy. “I had to duck those side conversations as much as possible when necessary. There was a lot of that finagling going on.”

Stacy used iZotope RX 5 features like Decrackle and Denoise to clean up the tracks, as well as the Spectral Repair feature for fixing small noises.

Multiple Locations
Another challenge for sound mixer White was that he had to record quickly in numerous locations for any given episode. That Easter brunch episode alone had at least eight different locations, including the casino floor, the casino’s buffet, inside and outside of a church, inside the car, and inside and outside of Christine’s house. “Russell mentioned how he used two rigs for recording because he would always have to just get up and go. He would have someone else collect all of the gear from one location while he went off to a new location,” explains Chamberlin. “They didn’t skimp on locations. When they wanted to go to a place they would go. They went to Paris. They went to a rodeo. So that has challenges for the whole team — you have to get out there and record it and capture it. Russell did a pretty fantastic job considering where he was pushed and pulled at any moment of the day or night.”

Sound Effects
White’s tracks also provided a wealth of production effects, which were a main staple of the sound design. The whole basis for the show, for picture and sound, was to have really funny, slapstick things happen, but have them play really straight. “We were cutting the show to feel as real and as normal as possible, regardless of what was actually happening,” says Menear. “Like when Chip was walking across a room full of clown toys and there were all of these strange noises, or he was falling down, or doing amazing gags. We played it as if that could happen in the real world.”

Stacy worked with sound effects editor TC Spriggs to cut in effects that supported the production effects, never sounding too slapstick or over the top, even if the action was. “There is an episode where Chip knocks over a table full of champagne glasses and trips and falls. He gets back up only to start dancing, breaking even more glasses,” describes Chamberlin.

That scene was a combination of effects and Foley provided by Larson’s Foley team of Adam De Coster (artist) and Tom Kilzer (recordist). “Foley sync had to be perfect or it fell apart. Foley and production effects had to be joined seamlessly,” notes Chamberlin. “The Foley is impeccably performed and is really used to bring the show to life.”

Spriggs also designed the numerous backgrounds. Whether it was the streets of Paris, the rodeo arena or the doldrums of Bakersfield, all the locations needed to sound realistic and simple yet distinct. On the mix side, Chamberlin used processing on the dialogue to help sell the different environments – basic interiors and exteriors, the rodeo arena and backstage dressing room, Paris nightclubs, Bakersfield dive bars, an outdoor rave concert, a volleyball tournament, hospital rooms and dream-like sequences and a flashback.

“I spent more time on the dialogue than any other element. Each place had to have its own appropriate sounding environments, typically built with reverbs and delays. This was no simple show,” says Chamberlin. For reverbs, Chamberlin used Avid’s ReVibe and Reverb One, and for futzing, he likes McDSP’s FutzBox and Audio Ease’s Speakerphone plug-ins.
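A futz, whatever the plug-in, essentially band-limits and distorts a signal so it reads as a small, cheap speaker. Below is a rough, hypothetical sketch of that general idea — not the FutzBox or Speakerphone algorithms, just the two basic stages (band-pass, then gentle saturation), with made-up corner frequencies:

```python
import numpy as np
from scipy.signal import butter, lfilter

def pa_futz(audio, sr, low_hz=300.0, high_hz=3400.0, drive=4.0):
    """Crude PA/phone-style futz: band-limit the signal, then soft-clip it.

    low_hz/high_hz/drive are illustrative values, not taken from any plug-in.
    """
    # Band-pass to the narrow midrange a small speaker actually reproduces.
    nyquist = sr / 2
    b, a = butter(4, [low_hz / nyquist, high_hz / nyquist], btype="band")
    narrow = lfilter(b, a, audio)
    # Gentle tanh saturation stands in for an overdriven little driver.
    return np.tanh(drive * narrow) / np.tanh(drive)

sr = 48_000
# One second of broadband test noise standing in for a music or dialogue cue.
music = np.random.default_rng(0).standard_normal(sr) * 0.1
futzed = pa_futz(music, sr)
print(futzed.shape)  # (48000,)
```

In practice the futzed signal would then be mixed under (or crossfaded with) the clean track to shift perspective, which matches the in-scene/over-the-PA transitions described above.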

One of Chamberlin’s favorite scenes to mix was Chip’s performance at the rodeo, where he does his last act as his French clown alter ego Renoir. Chip walks into the announcer booth with a gramophone and asks for a special song to be played. Chamberlin processed the music to account for the variable pitch of the gramophone, and also processed the track to sound like it was coming over the PA system. In the center of the ring you can hear the crowds and the announcer, and off-screen a bull snorts and grinds its hooves into the dirt before rushing at Chip.

Another great sequence happens in the Easter brunch episode where we see Chip walking around the casino listening to a “Learn French” lesson through ear buds while smoking a broken cigarette and dreaming of being Renoir the clown on the streets of Paris. This scene summarizes Chip’s sad clown situation in life. It’s thoughtful, charming and lonely.

“We experimented with elaborate sound design for the voice of the narrator, however, we landed on keeping things relatively simple with just an iPhone futz,” says Stacy. “I feel this worked out for the best, as nothing in this show was overdone. We brought in some very light backgrounds for Paris and tried to keep the transitions as smooth as possible. We actually had a very large build for the casino effects, but played them very subtly.”

Adds Chamberlin, “We really wanted to enhance the inner workings of Chip and to focus in on him there. It takes a while in the show to get to the point where you understand Chip, but I think that is great. A lot of that has to do with the great writing and acting, but our support on the sound side, in particular on that Easter episode, was not to reinvent the wheel. Picture editors Micah Gardner and Michael Giambra often developed ideas for sound, and those had a great influence on the final track. We took what they did in picture editorial and just made it more polished.”

The post sound process on Baskets may be down and dirty, but the final product is amazing, says Menear. “I think our Larson Studios team on the show is awesome!”

Behind the Title: Slick Sounds’ David Van Slyke

NAME: David F. Van Slyke

COMPANY: Slick Sounds Media Partners

CAN YOU DESCRIBE YOUR COMPANY?
Slick Sounds is a boutique sound design company that handles audio post — from dailies to the delivery of the DCP (Digital Cinema Package). We creatively apply the craft, especially the art of telling stories with sound. We partner with directors, picture editors, color timers, composers and mix stages.

WHAT’S YOUR JOB TITLE?
Lead Sound Designer and Re-Recording Mixer

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That I am also sales manager and CTO. I also attend conferences and regularly go to talks about how to get a jump on the new workflows. I’m constantly letting vendors know they can collaborate with us to create a cost-competitive product with professional standards that will pass a third-party QC.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Each project requires a unique sonic approach that I enjoy figuring out. The story speaks to me, and I interpret which aspect of creative sound is needed. I also do a lot of field recording. I love finding new source sounds.

WHAT IS YOUR PROCESS FOR SOUND DESIGNING?
It’s like a chef who is trying to come up with a new signature dish. You open a lot of items, chop things up, add some secret sauce, make a mess, and then you see what has the best flavors — and you trash the stuff that doesn’t taste good.

It has to be right. To me, and my clients, “right” is the feeling you get when you watch the final mix of a section or the whole piece. It creates the proper response in the viewer.

HOW DO YOU BEGIN?
I always start by getting in the zone. My room is dark and the dual 23-inch monitors are right in front of me; I lose myself in the fact that while I may not know exactly what to do at the start, I am confident that I will figure it out. It’s fun to play in the unknown. I tap into creativity and come up with things that I later ask myself, “Where did that come from?”

CAN YOU WALK US THROUGH YOUR WORKFLOW?
I watch the picture several times and try to really get into the filmmaker’s head. Sometimes that means looking at it frame by frame. I can figure out which sounds to create quickly and which story points I need to obsess about. The sound design must always sell what the picture is telling us. I obsess about big sound moments because they need to make a big impact on the viewer.

DOES YOUR PROCESS CHANGE DEPENDING ON THE TYPE OF PROJECT?
Yes, to a degree. This is where good training in the craft of sound work comes in. There are nuts-and-bolts things that just have to be banged out, and then there are signature sounds that take the most creative energy. I often do the creative part first, knowing the basic stuff will happen quickly.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
“I’d open a haberdashery” — that’s my favorite line from Spinal Tap.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
It took a little while since I enjoyed being a professional musician for a couple of years. I realized as a junior at Berklee College of Music that I needed a career that had more steady income than playing gigs or recording bands. My love of recording led me to sound design and into the digital revolution that has changed both the record and post industries.

“Immortality Parts I and II”

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I just finished mixing a feature documentary called Chris Brown — This is Me; the CSI series finale, which was a two-hour television movie called “Immortality” (pictured above); the pilot for Lucifer, a new Jerry Bruckheimer series coming out soon; and I am mixing 20-minute mini-docs for League of Legends.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
All of them… well, most of them. We give the same creative intensity to all our projects. It’s not done until it’s right! Some recent projects though are Dragon Nest: Warrior’s Dawn for Universal; Tyrus, which won the audience award at the San Diego Asian Film Festival; and Home — a Bruckheimer pilot that I’m currently sound designing and co-supervising — which will hopefully get picked up for next year.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools|HD, Serato Pitch ‘n’ Time Pro, iZotope RX5, Soundtoys and SoundMiner.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m not so good at social media. This is a referral business and very few movies are sound designed because of a social media presence. Perhaps micro-budget projects find their sound designer through social media; however, if they have any budget at all, they want known talent on their project at a known professional facility with amenities.

So, I do old-fashioned social media — I go to lunch with clients I like to work with.

THIS IS AN INDUSTRY WITH TIGHT DEADLINES. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
First, I say, “That’s an impossible deadline, how can the timeframe keep getting smaller and smaller?” Then I figure out how to do it. Which means sometimes having to say no to jobs because they don’t give me enough time to do it “right.”

I live and breathe this gig, although it doesn’t always feel like work — it’s just fun!

Sound Design for ‘The Hunger Games: Mockingjay — Part 2’

Warner Bros. Sound re-teams with director Francis Lawrence for the final chapter

By Jennifer Walden

It’s the final installment of The Hunger Games, and all the cards are on the table. Katniss Everdeen encourages all the districts to band together and turn against the Capitol, but President Snow is ready for their attack.

In true Hunger Games style, he decides to broadcast the invasion and has rigged the city to be a maze full of traps, called pods, which unleash deadly terrors on the rebel attackers. The pods trigger things like flamethrowers, a giant wave of toxic oil, a horde of subhuman creatures called “mutts,” heat-lasers and massive machine guns, all of which are brought to life on-screen thanks to the work of the visual effects team led by VFX supervisor Charles Gibson.

Warner Bros. Sound supervising sound editor/re-recording mixer Jeremy Peirson, who began working with director Francis Lawrence on The Hunger Games franchise during Catching Fire, knew what to expect in terms of VFX. He and Lawrence developed a workflow where Peirson was involved very early on, with a studio space set up in the cutting room.

Picture and Sound Working Together
Without much in the way of VFX early in the post process, Lawrence relied on Peirson’s sound design to help sell the idea of what was happening on-screen. It’s like a rough pencil sketch of how the scene might sound. As the visuals started coming in, Peirson redesigned, refined and recorded more elements to better fit the scene. “As we move through the process, sometimes the ideas change,” he explains. “Unfortunately, sound is usually the last step before we finish the film. The visual effects were coming in pretty late in the game and sometimes we got surprised, and they’re completely different. All the work we did in trying to prepare ourselves for the final version changed. You just have to roll with it basically.”

Despite having to rework a scene four or five times, there were advantages to this workflow. One was having constant input from director Lawrence. He was able to hear the sound take shape from a very rough point, and guide Peirson’s design. “Francis popped in a couple times a day to listen to what I was doing. He’d say, ‘Yes this is the right direction’ or ‘No, I was thinking more purple or more bold.’ It allowed for this unique situation where we could fine-tune how the movie is going to sound starting very early in the process,” he says.

Jeremy Peirson

Another advantage to being embedded with the picture department is that sound is able to inform how the picture is cut. “Sometimes they will give me a scene and ask me to quickly create the sound for it so they can re-cut the scene to make it better. That’s always a fun collaboration, when the picture department and sound department can work so closely together,” Peirson states.

The Gun Pod
One of Peirson’s most challenging “pods” to design sound for was the gun pod, where two .50 caliber machine guns were blasting away a concrete archway, causing it to collapse. Peirson needed to build detail and clarity into a scene that had bullets and rubble spraying everywhere. To do this, he spent hours recording specific, individual impacts. “I bought a bunch of brick and tile of various different kinds, and I took a 12-pound shot-put, raised it up about 10 feet and dropped it onto these things to get individual impacts, as well as clatter and debris.”

In the edit, he finessed the rhythm of the impacts, spacing them out so there was a distinguishable variety of sounds and it wasn’t just a wash. “It’s not a single note of sound,” he says. “It was a wide palette of impacts. Each individual impact was hand placed throughout the whole sequence. I tried to differentiate the sound of the wall from the pavement and the grass, the stairs and the metal pole which happened to be in that particular area.”

For Mockingjay — Part 1, Peirson, sound recordist John Fasal and sound designer Bryan O. Watkins did a bullet-by and bullet-ricochet recording session. All of that material came into play for Mockingjay — Part 2, in addition to new material, such as the gun sounds captured by Peirson, Fasal, Watkins and sound designer Mitch Osias.

For one of their gun recording sessions, Peirson notes they headed to an industrial park where they were able to capture the gun sounds in a mock-urban environment that would match the acoustics of the city streets on-screen. “We wanted to know how the guns would echo off the buildings and down the alleys — how that would sound from various distances.”

They took it one step further by recording gun sounds inside a warehouse that simulated the underground subway environment in the film. “We were able to record them in different ways, putting the guns in certain spots in the warehouse so we could get a tighter, closer feel that sounded very different from an outside perspective,” he says.

With four recordists, they were able to capture 26 individual sets of recordings for each gunshot — some mono, some stereo and some quad recordings. “We used a large range of mics, everything from Neumann to Schoeps to Sennheiser to AKG. You name it and we probably used it.”

When building a gun sound in the edit, Peirson started by selecting a close-up gunshot, then he added an acoustic flavor to that gun. “We didn’t always pick the same type of gun for the acoustic response,” he explains. “It was a lot of hand-cutting to make sure everything was in sync since certain guns fire at different rates; some fire faster and some are slower, but they had to be in the same range as the initial close-up sound.”
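One way to see the fire-rate problem Peirson describes: if the acoustic-flavor layer comes from a gun that fired at a different rate, its shots drift out of sync with the close-up almost immediately, so the layer has to be conformed shot by shot. The toy sketch below (hypothetical rates; a real editor would hand-cut or use a pitch-preserving time tool, since a plain stretch like this also shifts pitch) shows how stretching by the ratio of fire rates locks the shot spacing:

```python
import numpy as np

sr = 48_000  # sample rate for this toy example

def burst(rate_hz, seconds, sr):
    """Toy gunfire: a unit click at every shot interval."""
    x = np.zeros(int(seconds * sr))
    x[::int(sr / rate_hz)] = 1.0
    return x

close = burst(10, 1.0, sr)   # chosen close-up perspective: 10 rounds/sec
flavor = burst(8, 1.0, sr)   # slower-firing gun used for its acoustics

# Time-stretch the flavor layer by the ratio of fire rates (8 -> 10 rps):
# resample its timeline so shots land every sr/10 samples instead of sr/8.
ratio = 8 / 10
src_times = np.arange(len(flavor))
dst_times = np.arange(int(len(flavor) * ratio)) / ratio
stretched = np.interp(dst_times, src_times, flavor)

# Both layers now click every 4800 samples, staying in sync indefinitely.
print(np.flatnonzero(close > 0.5)[:3].tolist())      # [0, 4800, 9600]
print(np.flatnonzero(stretched > 0.5)[:3].tolist())  # [0, 4800, 9600]
```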

Another challenge was designing the mutts — the subhuman lizard-like creatures that inhabit the underground area. Peirson says, “Anytime you have creatures — and we had a lot of creatures — you can design the perfect sound for each one, but how do you sell the difference between all of these creatures when you’re surrounded by 30 or 40 of them?”

Even though there may have been a large group of mutts, within that the characters were only fighting a few of them at any given time. They needed to sound the same, yet different. Peirson’s design also had to factor in how the sound would work against the music, and it had to evolve with the VFX as well.

As the re-recording mixer on the effects, Peirson was able to mix a sound as he was designing it. If something wasn’t working, he could get rid of it right away. “I didn’t need to carry it around and then pick and choose later. By the time we got to the stage, we had the opportunity to refine the whole sonic palette so we only had what we wanted.”

He found that moving to the larger space of the dub stage, and hearing how the sound design plays with the music, generated new ideas for sound. “We added a bit of a different flavor to help the sound cut through, or added a little bit of detail that was getting lost in the music.”

Since composer James Newton Howard scored all four films in The Hunger Games series, Peirson had a wealth of demos and themes to reference when designing the sound. They were a good indication of what frequency range he could work within and still have the effects cut through the music. “We had an idea of how it would sound, but when you get that fully recorded score, it’s a totally different ballgame in terms of scope. It kicks that demo up a huge notch.”

The Mix
Peirson and re-recording mixer Skip Lievsay — who worked on dialogue and music — crafted the final mix first in Dolby Atmos on Warner Bros. Stage 6 in Burbank, using three Avid ICONs. “This was a completely in-the-box virtual mix,” says Peirson. “We had sound effects on one Pro Tools system, dialogue on another and music on a third system. My sound effects session, which had close to 730 tracks, was a completely virtual mix, meaning there were no physically recorded pre-dubs.”

Using the final Atmos mix as their guide, Peirson and Lievsay then mixed the film in Barco Auro-3D, DTS:X, IMAX and IMAX 12.0, plus 7.1, 5.1 and two-track. “That’s every single format that I know of right now for film,” he concludes. “It was an interesting exercise in seeing the difference between all those formats.”

With Oscar season getting into full swing, we wouldn’t be surprised if the sound team on Mockingjay — Part 2 gets a nod.

Skywalker’s Randy Thom helps keep it authentic for ‘Peanuts’

By Jennifer Walden

Snoopy, Woodstock, Charlie Brown, Lucy… all the classic Peanuts characters hit the big screen earlier this month thanks to the Blue Sky Studios production The Peanuts Movie (20th Century Fox).

For those of you who might have worried that the Peanuts gang would “go Hollywood,” there is no need for concern. These beloved characters look and sound like they did in the Charles M. Schulz TV specials — which started airing in the 1960s — but they have been updated to fit the theatrical expectations of 2015.

While the latest technology has given depth and texture to these 2D characters, director Steve Martino and the Schulz family made sure the film didn’t stray far from Charles Schulz’s original creations.

Randy Thom

According to Skywalker Sound supervising sound editor/sound designer/re-recording mixer Randy Thom, “Steve Martino (from Blue Sky) spent most of the year hanging out in Santa Rosa, California, which is where the Schulz family still lives. He worked with them very closely to make sure that this film had the same feel and look as not only the cartoon strip, but also the TV specials. They did a wonderful job of staying true to all those visual and sonic tropes that we so much associate with Peanuts.”

Thom and the Skywalker sound team, based at the Skywalker Ranch in Marin County, California, studied the style of sound effects used in the original Peanuts TV specials and aimed to evoke those sounds as closely as they could for The Peanuts Movie, while also adding a modern vibe. “Often, on animated films, the first thing the director tells us is that it shouldn’t sound like a cartoon — they don’t want it to be cartoony with sound effects,” explains Thom, who holds an Oscar for his sound design on the animated feature The Incredibles, and has two Oscar nominations for his sound editing on The Polar Express and Ratatouille. “In The Peanuts Movie, we were liberated to play around with boings and other classic cartoon type sounds. We even tried to invent some of our own.”

The Red Baron and Subtle Sounds
The sound design is a mix of Foley effects, performed at Skywalker by Foley artists Sean England and Ronni Pittman, and cartoon classics like zips, boinks and zings. One challenge was creating a kid-friendly machine gun sound for Snoopy’s Red Baron air battles. “It couldn’t be scary, but it had to suggest the kinds of guns that were used on those planes in that era,” says Thom. The solution? Thom vocalized “ett-ett-ett-ett-ett” sounds, which they processed and combined with a “rat-tat-tat-tat-tat” rhythm that they banged out on pots and pans. The result is a faux machine gun that’s easy on little ears.

Another key element in the Red Baron sequences was the sound of the planes. Charles Schulz’s son, Craig, who was very involved with the film, owns a vintage WWI plane that, amazingly, still flies. “Craig [Schulz] flew the plane and a couple of people on our sound team rode in it. They were very brave and kept the recorder running the whole time,” says Thom, who completed the sound edit and premix in Avid Pro Tools 12.

They captured recordings on the plane, as well as from the ground as the plane performed a few acrobatic aerial maneuvers. During the final 7.1 mix in Mix G at Skywalker Sound, via the Neve DFC console, Thom says the challenge was to make the film sound exciting without being too dynamic. The final plane sounds were very mellow without any harsh upper frequencies or growly tones. “We had to be careful of the nature of the sounds,” he says. “If you make the airplanes too scary or intimidating, or sound too animalistic, little kids are going to be scared and cover their ears. We wanted to make sure it was fun without being scary.”

Many of the scenes in The Peanuts Movie have subtle sound design, with Foley being a big part of the track. There are a few places where sound gets to deliver the joke. One of Thom’s favorite scenes was when Charlie Brown visits the library to find the book “Leo’s Toy Store.”

“The library is supposed to be quiet and we had to be very playful with the sound of Charlie’s feet squeaking on the floor and making too much noise,” says Thom. “After he leaves the library, he slides down the hillside in the snow and ice and ends up running right through a house. That was a fun sequence also.”

One surprising piece of the soundtrack was the music. The name Vince Guaraldi is practically synonymous with Peanuts. His jazzy compositions are part of the Peanuts cultural lexicon. If someone says Peanuts, it instantly calls to mind the melody of Guaraldi’s “Linus and Lucy” tune. And while “Linus and Lucy” is part of the film’s soundtrack, the majority of the score consists of orchestral compositions by Christophe Beck. “The music is mostly orchestral but even that has a Peanuts feel somehow,” concludes Thom.

Setting the audio tone of ‘Everest’

Glenn Freemantle sounds off on making this film’s audio authentic

By Jennifer Walden

Immovable, but not insurmountable, Mount Everest has always loomed large in the minds of ambitious adventurers who seek to test their mettle against nature’s most imposing obstacle course and its unpredictable weather.

Reaching the summit takes more than just determination, it requires training, teamwork and a bit of stubborn resolve not to die. Even then, there’s no guarantee that what, or who, goes up will come down. Director Baltasar Kormákur’s film Everest, from Universal Studios, is based on the tragic true story of two separate expeditions that sought to reach the summit on the same day, May 10, 1996, only to be bested by a frigid tempest.

Glenn Freemantle

Supervising sound editor/sound designer Glenn Freemantle at Sound24, based at Pinewood Studios in Iver Heath, Buckinghamshire, UK, was in charge of building Everest’s blustery sound personality. All the wind, snow and ice sounds that lash the film’s characters were carefully crafted in post and designed to take the viewer on a journey up the mountain.

“Starting at the bottom and going right to the top, you feel like you are moving through the different camps,” explains Freemantle. “We tried to make each location as interesting as possible. The film is all about nature; it’s all about how the viewer would feel on that mountain. We always wanted the viewer to feel that journey that they were on.”

In addition to Freemantle, Sound24’s crew includes sound design editors Eilam Hoffman, Niv Adiri, Ben Barker, Tom Sayers and sound effects editors Danny Freemantle and Dillon Bennett.

Capturing Wind
Glenn Freemantle and his sound team collected thousands of wind sounds, like strong winter winds from along the shores of western England, Ireland and Scotland. They recorded wide canyon winds and sand storms in the deserts of Israel, and on Santorini, they recorded strong tonal mountain winds. At the base camp on Mount Everest, they set out recorders day and night to capture what it sounded like there at different times. “At the base camp on Everest, even if we didn’t use all the recordings from there, we got the sense of the real environment, exactly what it was like. From a cinematic point of view, we used that as a basis, but obviously we were also trying to tell a story with the sound,” he says.

To capture ambience from various altitudes on Everest, Freemantle sent two small recording set-ups with the camera crew who filmed at the top of Everest. “The equipment had to be small, portable and resistant to the extreme conditions,” he explains. For these set-ups, owner of Telinga Microphones, Klas Strandberg, created a small, custom-made omnidirectional mic for an A/B set-up, as well as a pair of cardioid mics in XY configuration that were connected to two Sony D100 recorders.

The best way to record wind is to have it sing through something, so on their wind capturing outings, Freemantle and crew brought along an assortment of items — sieves, coat hangers, bits of metal, pans, all sorts of oddities that would produce different tones as the wind moved through and around them. They also set up tents, like those used in the film, to capture the tent movements in the wind. “We used a multi-mic set-up to record the sound so you felt like you were in the middle of all of these situations. We put the mics in the corners and in the center of the tent, and then we shook it. We also left them up for the night,” he says.

They used Sennheiser MKH8020s, MKH8050s and MKH8040s paired with multiple Sound Devices 744T and 722 recorders set at 192kHz/24-bit. For high-frequency winds, they chose the Sanken CO-100K, which can capture sounds up to 100kHz. “This allowed us to pitch down the inaudible wind to audible frequencies (between 20Hz and 20kHz) and create the bass for powerful tonal winds.”
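Pitching down works because slowing playback divides every frequency in the recording by the same ratio, pulling ultrasonic content into the audible band. A minimal sketch of the arithmetic (illustrative numbers only; in a session this would be done with a varispeed or pitch tool rather than by hand):

```python
import numpy as np

# Hypothetical example: a 40 kHz tonal wind component (inaudible) captured
# at 192 kHz, as with the ultrasonic-capable rig described above.
capture_sr = 192_000
playback_sr = 48_000
ultrasonic_hz = 40_000

t = np.arange(capture_sr) / capture_sr                   # 1 s of capture
recording = np.sin(2 * np.pi * ultrasonic_hz * t + 0.1)  # stand-in for wind

# The pitch-down trick: keep the samples, play them at the lower rate.
# Every frequency is divided by capture_sr / playback_sr.
pitch_ratio = capture_sr / playback_sr                   # 4x lower and slower
audible_hz = ultrasonic_hz / pitch_ratio
print(audible_hz)  # 10000.0 -- now well inside the 20 Hz - 20 kHz band

# Sanity check: count zero crossings over the stretched playback duration.
crossings = np.sum(recording[:-1] * recording[1:] < 0)
playback_seconds = len(recording) / playback_sr          # clip now lasts 4 s
estimated_hz = crossings / 2 / playback_seconds          # ~10 kHz
```

The same ratio applies to duration: the clip plays four times longer, which is why such pitched-down material is typically used for slow, weighty bass layers like the tonal winds described here.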

With wind being a main player in the sound, Freemantle’s design focused on its dynamics. Changing the speed of the wind, the harshness of the wind and also the weight of the wind kept it interesting. “We were moving the sound all the time, and that was really effective. There was a 20-minute section of storm in there, which wasn’t easy to build,” explains Freemantle. “We would mix a scene for a day and then walk away. You can exhaust your ears mixing a film like this.”

Having the opportunity to revisit the stormy sequences allowed the sound team to compare the different storms and wind-swept scenes, and make adjustments. One of their biggest challenges was making sure each storm didn’t feel too big, or lack dynamics. “We wanted to have something different happening for each storm or camp so the audience could feel the journey of these people. It had to build up to the big storm at the end. We’d have to look at the whole film to make sure we weren’t going wrong. The sound needed to progress.”

In addition to wind, Freemantle and his team recorded sounds of snow and ice. They purchased a few square meters of snow and froze big chunks of ice for their recording sessions. “We got all the gear the actors were wearing and we put the jackets and things into the freezer overnight, so they would have that feeling, that frozen texture, that they would have out there in the weather,” he says. “We tried to do everything we could to make it sound as real as possible. It’s exhausting how that weather makes you feel, and it was all from a human point of view that we tried to create the weather that was around them.”

ADR
The weather sounds weren’t the only thing to be recreated for Everest. The soundtrack also hosts a sizable amount of ADR thanks to massive wind machines that were constantly blowing on set, and the actors having to wear masks didn’t help the dialogue intelligibility either. “That’s why the film is 90 percent re-recorded dialogue,” shares Freemantle. “Sound mixer Adrian Bell did a hell of a job in those conditions, but they are wearing all of these masks so you can hardly hear them. Everything had to be redone.”

The dialogue was so muffled at times that it was difficult for the picture-editing department to cut Everest. Director Kormákur asked for a quick ADR track of the whole film, using sound-alike actors when the real ones weren’t available. In addition, he also asked for a rough sound design and Foley pass, giving Freemantle about a week to mock it up. “You couldn’t follow the film. They couldn’t run it for the producers to get a sense of the story because you couldn’t hear what the actors were saying,” he says. “So we recreated the whole dialogue sequence for the film, and we quickly cut — from our sound libraries — all the footsteps and we did a quick cloth pass so they had a complete soundtrack in a very short period of time.”

During the ADR session for the final tracks, Freemantle notes the actors wore weight vests and straps around their chests to make it difficult for them to breathe and talk, all in an effort to recreate the experience of what is happening to them on screen. As CG was being added to the picture, with more sprays of snow and ice, the actors could react to the environment even more.

“Having to re-create their performances was a curse in one way, but it was a blessing because then we had control over every single sound in the soundtrack. We had control of every part of their breathing, every noise from their gear and outfits. We have everything so we could pull the perspective in the sound at any given moment and not bring along a lot of muck with it.”

Everest was mixed in three immersive formats: Dolby Atmos, Barco Auro-3D and IMAX 12.0. “Each one of the formats works really well and you really feel like you are in the film,” reports Freemantle. “The weight of the sound hits you in the theater. There is a lot of bass in there. With sound, you are moving the air around, so you are feeling it when the storm hits. The presence of the bass hits you in the chest.”

But it’s not a continuous aural onslaught — there are highs and lows, with rumbly wind fighting against the side of the mountain on Hillary Step and hissing wind higher up towards the summit. “You have to have detail and the sounds should be helping to tell the story,” he says. “It’s not about how much you put in — in the end, it’s about what you take out when you finish. That’s very important. You don’t want the film to be just a massive noise.”

The Mix
Everest was mixed natively in the Dolby Atmos theatre at Pinewood Studios by Freemantle and re-recording mixers Niv Adiri, CAS, and Ian Tapp, CAS. Sound24’s tried-and-tested Avid setup helped bring the sounds of Everest to life: the powerful Avid System 5 large-format console running Pro Tools 11 with EUCON control. Their goal was to put the audience on the mountain with the climbers without overwhelming them with a constant barrage of sound. “The journey the characters are going through is both mental and physical, and mixing in Atmos helped us bring these emotions to the audience,” says Adiri. Since director Kormákur’s focus was on the human tragedy, the dialogue scenes were intimately shot. This enabled the mixers to shift the balance towards dialogue in these sequences and maintain the emotional contact with the characters. In the Atmos format they could position sounds around the audience to immerse them in the scene without having the sounds sit on top of the dialogue. “The sheer weight and power of the sound that the Atmos system produces was perfect for this film, particularly in the storm sequence, where we were able to make the sound an almost physical experience for the audience, yet still maintain the clarity of the dialogue and not make the whole thing unbearable to watch,” says Tapp.

Once the final Atmos mix was approved by director Kormákur, the tracks were taken to Galaxy Studios in Mol, Belgium, for the Barco Auro-3D mix, and then it was on to Toronto’s Technicolor for the 12.0 IMAX mix. Despite the change in format, the integrity of the film was kept the same. The mix they defined in Atmos was the blueprint for the other formats.

For Freemantle, the best part of making Everest was being able to capture the journey: making the audience feel like they are moving up the mountain, making them feel cold and distressed. “You want to feel that contact, that physical contact like you are in it, like the snow is hitting your face and the jacket around you. When people watch it you want them to experience it because it’s a true story and you want them to feel it. If they are feeling it, then they are feeling the emotion of it.”

For more on Everest, read our interview with editor Mick Audsley.

Jennifer Walden is a New Jersey-based audio engineer and writer.

Creating the sonic world of ‘Macbeth’

By Jennifer Walden

On December 4, we will all have the opportunity to hail Michael Fassbender as he plays Macbeth in director Justin Kurzel’s film adaptation of the classic Shakespeare play. And while Macbeth is considered to be the Bard’s darkest tragedy, audiences at the Cannes Film Festival premiere felt there was nothing tragic about Kurzel’s fresh take on it.

As evidenced in his debut film, The Snowtown Murders, Kurzel’s passion for dark imagery fits The Weinstein Co’s Macbeth like a custom-fitted suit of armor. “The Snowtown Murders was brutal, beautiful, uncompromising and original, and I felt sure Justin would approach Macbeth with the same vision,” says freelance supervising sound editor Steve Single. “He’s a great motivator and demanded more of the team than almost any director I’ve worked with, but we always felt that we were an important part of the process. We all put more of ourselves into this film, not only for professional pride, but to make sure we were true to Justin’s expectations and vision.”

Single, who was also the re-recording mixer on the dialogue/music, worked with London-based sound designers Markus Stemler and Alastair Sirkett to translate Kurzel’s abstract and esoteric ideas — like imagining the sound of mist — and place them in the reality of Macbeth’s world. Whether it was the sound of sword clashes or chimes for the witches, Kurzel looked beyond traditional sound devices. “He wanted the design team to continually look at what elements they were adding from a very different perspective,” explains Single.

L-R: Gilbert Lake, Steve Single and Alastair Sirkett.

Sirkett notes that Kurzel’s bold cinematic style — immediately apparent in the slow-motion-laced battle sequence that opens the film — led him and Stemler to make equally bold choices in sound. Adds Stemler, “I love it when films have a strong aesthetic, and it was the same with the sound design. Justin certainly pushed all of us to go for the rather unconventional route here and there. In terms of the creative process, I think that’s a truly wonderful situation.”

Gathering, Creating Sounds
Stemler and Sirkett split up the sound design work by different worlds, as Kurzel referred to them, to ensure that each world sounded distinctly different, with its own unique sonic fingerprint. Stemler focused on the world of the battles, the witches and the village of Inverness. “The theme of the world of the witches was certainly a challenge. Chimes had always been a key element in Justin’s vision,” says Stemler, whose approach to sound design often begins with a Schoeps mic and a Sound Devices recorder.

As he started to collect and record a variety of chimes, rainmakers and tiny bells, Stemler realized that just shaking them wasn’t going to give him the atmospheric layer he was looking for. “It needed to be way softer and smoother. In the process I found some nacre chimes (think mother-of-pearl shells) that had a really nice resonance, but the ‘clonk’ sound just didn’t fit. So I spent ages trying to kind of pet the chimes so I would only get their special resonance. That was quite a patience game.”

By having distinct sonic themes for each “world,” re-recording mixers Single and Gilbert Lake (who handled the effects/Foley/backgrounds) were able to transition back and forth between those sonic themes, diving into the next ‘world’ without fully leaving the previous one.

There’s the “gritty reality of the situation Macbeth appears to be forging, the supernatural world of the witches whose prophecy has set out his path for him, the deterioration of Macbeth’s mental state, and how Macbeth’s actions resonate with the landscape,” says Lake, explaining the contrast between the different worlds. “It was a case of us finding those worlds together and then being conscious about how they relate to one another, sometimes contrasting and sometimes blending.”

Sirkett notes that the sonic themes were particularly important when crafting Macbeth’s craziness. “Justin wanted to use sound to help with Macbeth’s deterioration into paranoia and madness, whether it be using the sound of the witches, harking back to the prophecy or the initial battle and the violence that had occurred there. Weaving that into the scenes as we moved forward was always going to be a tricky balancing act, but I think with the sounds that we created, the fantastic music from composer Jed Kurzel, and with Steve [Single] and Gilly [Lake] mixing, we’ve achieved something quite amazing.”

Sirkett details a moment of Macbeth’s madness in which he recalls the memory of war. “I spent a lot of time finding elements from the opening battle — whether it be swords, clashes or screams — that worked well once they were processed to feel as though they were drifting in and out of his mind without the audience being able to quite grasp what they were hearing, but hopefully sensing what they were and the implication of the violence that had occurred.”

Sirkett used Audio Ease’s Altiverb 7 XL in conjunction with a surround panning tool called Spanner by The Cargo Cult “to get some great sounds and move them accurately around the theatre to help give a sense of unease for those moments that Justin wanted to heighten Macbeth’s state of mind.”
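Spanner’s internals aren’t public, but the underlying idea of moving a sound smoothly between speakers is a pan law. A constant-power law keeps perceived loudness steady as the source travels, which matters when a sound is circling the theatre. Here is a minimal two-speaker sketch in Python (the function name and mapping are illustrative, not from the film’s actual toolchain; real surround panners extend the same idea to many speaker feeds):

```python
import numpy as np

def constant_power_pan(sample, pan):
    """Pan a mono sample between two speakers; pan runs from -1 (hard left) to 1 (hard right).

    Sine/cosine gains keep left**2 + right**2 constant at every position,
    so perceived loudness stays steady as the source moves.
    """
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * np.cos(theta), sample * np.sin(theta)

# Centre position: equal gain (about 0.707) to both speakers
left, right = constant_power_pan(1.0, 0.0)
```

Sweeping `pan` over time against an audio buffer produces the kind of continuous movement described above, without the level dip that a naive linear crossfade introduces at the centre.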

The Foley, Score, Mix
The Foley team on Macbeth included Foley mixer Adam Mendez and Foley artist Ricky Butt from London’s Twickenham Studios. Additional Foley for the armies and special sounds for the witches were provided by Foley artist Carsten Richter and Foley mixer Marcus Sujata at Tonstudio Hanse Warns in Berlin, Germany. Sirkett points to the sonic detail that went into the costumes Macbeth and Banquo (Paddy Considine) wore for the opening battle. “Their costumes look huge, heavy and bloodied by the end of the opening battle. When they were moving about or removing items, you felt the weight, blood and sweat that was in them and how it was almost sticking to their bodies,” he says.

Composer Jed Kurzel’s score often interweaves with the sound design, at times melting into the soundscape and at other times taking the lead. Stemler notes the quiet church scene in which Lady Macbeth sits in the chapel of an abandoned village. Dust particles gently descend to the sound of delicate bells twinkling in the background. “They prepare for the moment where the score is sneaking in almost like an element of the wind.  It took us some time in the mix to find that perfect balance between the score and our sound elements. We had great fun with that kind of dance between the elements.”

During the funeral of Macbeth’s child at the opening of the film, the score by Jed Kurzel (the director’s brother) conveys a gentle mournfulness as it blends with the lashing wind and rain sound effects. Single feels the score is almost like another character. “Bold and unexpected, it was an absolute pleasure to bring each cue into the mix. From the rolling reverse percussion of the opening credits to the sublime theme for Lady Macbeth’s decline into madness, he crafted a score that is really very special.”

Single and Lake mixed Macbeth in 5.1 at Warner Bros.’ De Lane Lea studio in London, using an AMS Neve DFC console. On Lake’s side of the board, he loved mixing the final showdown between Macbeth and Macduff — a beautifully edited sequence where the rhythm of the fighting perfectly plays against Jed Kurzel’s score.

“We wanted the action to feel like Macbeth and Macduff were wrenching their weapons from the earth and bringing the full weight of their ambitions down on one another,” says Lake. “Markus [Stemler] steered clear of traditional sword hits and shings and I tried to be as dynamic as possible and to accentuate the weight and movement of their actions.”

To create original sword sounds, Stemler took the biggest screw wrench he could find and recorded himself banging on every big piece of metal available in their studio’s warehouse. “I hit old heaters, metal staircases, stands and pipes. I definitely left a lot of damage,” he jokes. After a bit of processing, those sounds became major elements in the sword sounds.

Director Kurzel wanted the battle sequences to immerse the audience in the reality of war, and to show how deeply it affects Macbeth to be in the middle of all that violence. “I think the balance between “real” action and the slo-mo gives you a chance to take in the horror unfolding,” says Lake. “Jed’s music is very textural and it was about finding the right sounds to work with it and knowing when to back off with the effects and let it become more about the score. It was one of those rare and fortunate events where everyone is pulling in the same direction without stepping on each other’s toes!”

L-R: Alastair Sirkett, Steve Single and Gilbert Lake.

To paraphrase the famous quote: if “Copy is King” holds true for any project, then in a Shakespeare adaptation the copy is as untouchable as Vito Corleone in The Godfather. “You have in Macbeth some of the most beautiful and insightful language ever written and you have to respect that,” says Single. His challenge was to make every piece of poetic verse intelligible while still keeping the intimacy that director Kurzel and the actors had worked for on set, which, Single notes, was not an easy task. “The film was shot entirely on location, during the worst storms in the UK for the past 100 years. Add to this an abundance of smoke machines and heavy Scottish accents, and it soon became apparent that no matter how good production sound mixer Stuart Wilson’s recordings were — he did a great job under very tough conditions — there was going to be a lot of cleaning to do and some difficult decisions about ADR.”

Even though there was a good bit of ADR recorded, in the end Single found he was able to restore and polish much of the original recordings, always making sure that in the process of achieving clarity the actors’ performances were maintained. In the mix, Single says it was about placing the verse in each scene first and then building up the soundtrack around that. “This was made especially easy by having such a good effects mixer in Gilly Lake,” he concludes.

Jennifer Walden is a New Jersey-based writer and audio engineer.

‘Jurassic World’: Dinos find their inner animal via Skywalker Sound

By Jennifer Walden

So the makers of Jurassic World are, quite obviously, asking you to suspend your disbelief. You’re asked to accept that a group of super-positive-thinking (or maybe just super-stupid) people thought that opening a theme park where dinosaurs once again roam the earth was a good idea, especially since things went so spectacularly bad at the parks featured in earlier movies.

Yes, the idea of re-opening the same park again and again and expecting that people won’t get eaten again and again is a bit crazy, right? Well, the only thing insane about Jurassic World is that it is insanely awesome! Yeah, ok, people get eaten, but that’s part of the fun, no?

Also part of the fun is the new bad-ass, ‘roided-out dino, Indominus Rex. What does it sound like? It’s a mix of whale, wild pig, tiger, monkey, fox and dolphin. The fun doesn’t stop there — all the sounds you love from the Jurassic Park franchise — Gary Rydstrom’s boss T-rex and those iconic raptors — have been brought back and updated for Jurassic World. Think raptor 4.0!

Al Nelson

Supervising sound editor/sound designer Al Nelson from Skywalker Sound says, “One of the things that we were very intent on was honoring the original sounds and being consistent with the story. We’ve gone back to the same island so in theory, many of the creatures there have been carried on.”

What’s Old Is New Again
In 1993, Skywalker sound designer Gary Rydstrom first introduced audiences to what a raptor sounds like in Jurassic Park, and it’s been consistent ever since. In Jurassic World, the main raptor screams, screeches and growls are the original sounds Rydstrom created for Jurassic Park, using African geese hisses, dolphins and the now-famous “tortoises having sex” sound.

“We just augmented their vocal library,” explains Nelson. “The raptors in Jurassic World are more interactive and have individual personalities. We wanted additional vocals that were positive, communicative and that would evoke a different side of their personality. We took those actual mastered sounds that Gary [Rydstrom] had on the first films and diversified from there.”

Armed with a Schoeps M/S rig with cardioid mics and a Sound Devices 744T portable digital recorder, Nelson set out to get new sounds from the animals originally used for Rydstrom’s raptors. But, he soon discovered that each animal is different. “These particular geese I recorded gave me great recordings but they were just too bird-like. They weren’t fitting in with the sounds that Gary had initially designed. I expected to be able to redo what Gary had done, but that raptor sound is really a credit to Gary, who generated something that was so unique.”
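An M/S rig like the one Nelson carried records a mid (forward-facing) channel and a side (figure-8) channel rather than left/right; stereo is recovered afterwards with a sum/difference decode, which is one reason the format is popular for field recording. A minimal NumPy sketch of the standard matrix (the `width` control is a common extra on M/S decoders, not something the article specifies):

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode mid/side signals to left/right stereo.

    Standard sum/difference matrix: L = M + S, R = M - S.
    width scales the side channel (0.0 collapses to mono).
    """
    s = width * np.asarray(side)
    return np.asarray(mid) + s, np.asarray(mid) - s

# Toy signals: a "mid" tone plus a small "side" component
sr = 48000
t = np.arange(sr) / sr
mid = 0.4 * np.sin(2 * np.pi * 440 * t)
side = 0.1 * np.sin(2 * np.pi * 220 * t)
left, right = ms_decode(mid, side)
```

Because the stereo width can be dialed in (or removed entirely) after the fact, an M/S recording gives a designer more room to reshape a capture than a fixed left/right pair.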

With some guidance from Rydstrom, Nelson researched new animals that might fit in with the original raptors. “We wanted to honor the tradition of the first Jurassic Park, where Gary and his assistant at the time, Chris Boyes — now a re-recording mixer on Jurassic World — went out and recorded lots of new, original sounds from animals,” says Nelson, who read books on how and why animals communicate in order to make his time recording in the field as productive as possible.

“Many times when you go to a zoo you stand in front of an animal and it stares back at you. It’s not going to communicate unless it has a reason,” explains Nelson. Based on his research, Nelson contacted animal sanctuaries, zoos and animal specialists to find out if their animals can roar on cue or react to particular people or vocalize in certain situations like feeding time or vet visits.

“Situations where the animals are compelled to vocalize are what you’re looking for,” says Nelson. “Animals in interesting situations will make interesting sounds.” The new raptor communication sounds use leopard and kinkajou hisses, dolphins, geese, Asian otters and baby baboons. “For the more sympathetic sounds, we went back to birds, namely a toucan and a penguin. The Gentoo penguin in particular had this guttural, chittery sound that worked well with the raptors’ communications.”

The New Guys
Jurassic World features a giant, carnivorous marine reptile called the Mosasaur, and a genetically engineered hybrid dino dubbed the Indominus Rex — two very large and very toothy creatures. Helping to build the Mosasaur was sound designer Pete Horner, who is also the re-recording mixer on the dialogue and music in Jurassic World. The Mosasaur is composed of several water-based animals, like walruses, whales and dolphins.

Says Nelson, “The walrus is a great source for animal design since it’s one of the biggest sounding animals. The walrus sound has a lot of body, but it also has tonal aspects to it. It’s got a great guttural, water-animal quality. Pete also used beluga and pilot whales and some dolphins to give it a tonal throaty sound. That huge snapping chomp was also a big part of the Mosasaur — when it rears up out of the water and takes a big bite out of a great white shark.”

The Indominus Rex is genetically engineered to scare the pants off park visitors. Its genes may include material from the Tyrannosaurus Rex, but Nelson points out that it’s a vastly different creature: “The Tyrannosaurus is a purebred, the real deal, but the Indominus Rex is kind of a mutant. The overall character of it needed to be big, but it also needed to sound broken, nasty and gnarly.”

Director Colin Trevorrow compared it to a toddler that is having a tantrum, where the sound starts low and then builds to a wail. “It doesn’t know what it is, or where it is. It just knows that it hates everything and it wants to eat everything. It’s a cranky, pissy creature,” says Nelson.

Working in Avid Pro Tools 11, in combination with Native Instruments Kontakt, Nelson created an Indominus Rex sound that is irritating, squeal-y and angry. It’s a big creature, so Nelson chose whale sounds, pitched down tiger chuffs and the sounds of huge pigs that were fighting with each other. “They didn’t sound like what you would expect a pig to sound like… with a squeal. They were doing these crazy growls that had this big, guttural, deep sound.”

For the howl-y and scream-y layers of the Indominus Rex, Nelson recorded a little fennec fox. “It was just irate and scared, and it was bellowing and screaming,” describes Nelson. His online search for animals that scream and screech turned up stories of people in parts of the Midwest and the Northeast being woken in the middle of the night by a blood-curdling sound.

“It’s these cute little foxes that come out of the woods and just do these blood-curdling squeals. I pitched them down a little bit, but in the end some of the sounds weren’t manipulated nearly as much as I expected they would be.”
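Pitching a recording down “a little bit,” as described above, can be done most simply by resampling: stretch the waveform onto a longer time grid and play it back at the original rate, which lowers every frequency by the same ratio (and lengthens the sound). A hypothetical NumPy sketch of that approach, not the actual tools used on the film:

```python
import numpy as np

def pitch_down(audio, semitones):
    """Lower pitch by resampling; also slows the sound down.

    Dropping n semitones stretches the signal by a factor of 2**(n/12).
    """
    ratio = 2.0 ** (semitones / 12.0)        # 12 semitones -> 2x length
    n_out = int(round(len(audio) * ratio))
    old_idx = np.arange(len(audio))
    new_idx = np.linspace(0, len(audio) - 1, n_out)
    return np.interp(new_idx, old_idx, audio)  # linear-interp resample

# A 440 Hz test tone dropped an octave comes out at 220 Hz, twice as long
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
low = pitch_down(tone, 12)
```

Production pitch shifters usually add time correction so duration is preserved, but for creature design the slowed-down, darkened character of a plain resample is often exactly the point.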

He also recorded wild pigs, spider monkeys, macaques and a howler monkey that would go bananas for his animal handler’s singing. “This young animal caretaker introduced us to the howler monkey and shyly said he’ll vocalize when she sings to him. So I eventually convinced her to sing and the howler monkey started hooting and hollering with this deep raspy growly sound. It was very scary and very out of control in a lot of ways.”

The Others
While the Indominus Rex commanded much of Nelson’s creative attention, he had a lot of fun creating the dinosaur sounds for the raptor named Blue. “Colin [Trevorrow] was very supportive in pushing the boundaries and going too far for the tweaky personality things I got to do for Blue,” explains Nelson, who also enjoyed the emotional scenes, like when the Apatosauruses were dying. “That was a special scene and a big focus; it was all eyes on this event. Colin, from the beginning, said he wanted people to be crying after that scene.”

Nelson feels the success of Jurassic World’s sound is a direct result of a Skywalker Sound team effort. “It was great to have Pete Horner help me out on the Mosasaur. All the vehicles, the crowds and the personality of the park itself were a lot of work as well. Scott Guitteau was great on sound effects. There was Gwendolyn Yates Whittle, who wrangled all this dialogue with Stuart McCowan, and Ben Burtt who wrangled the Foley with Nia Hansen.

“There are a lot of people in there who I was constantly bouncing ideas off of. At the end of each day Gary [Rydstrom] would come in and comment on sounds. When you’re in the trenches, you want to be with your friends. You want to be with people who you respect and are invested in the sound in the same way that you are. That’s the great thing about working here at Skywalker, we are all together.”

Jennifer Walden is a New Jersey-based audio engineer and writer.

White Light Audio combines production, post sound services

These three USC Film School grads take on audio their way

By Mel Lambert

“White Light Audio is a boutique sound facility that provides premium sound services for all phases of film and digital media production,” states Ginge Cox who, with co-founder Heika Burnison and sound designer/mixer Jan Bezouska, forms the core of this new operation based in East LA. “We can work on any segment of an entertainment project and be present for every phase — from a project’s inception to final delivery.” Recent projects have included a number of feature films, shorts, episodic series, commercials, documentaries, voiceovers and trailers.

The three young filmmakers met at USC Film School while completing MFA studies in film/TV production, each with a specialization in sound engineering and design. “We’re a passionate collective of tech creatives that live for all things audio,” says Burnison. “And we are one of the only independent sound studios to offer both production and post sound services in-house.”

Jan Bezouska, Heika Burnison and Ginge Cox.

To date, White Light Audio has provided production and/or post services for over 30 projects, including the films Devil’s Night (2014), Lake Los Angeles (2014), Vimana (2014), It’s Better in Italian (2015), Suburbanite (2014) and three upcoming features this summer, including James Franco’s Actors Anonymous, based on his acclaimed novel.

“We started by offering sound services and equipment to fellow USC students who needed to film their own productions,” Cox recalls. “We wanted to offer reasonable sound packages but, at the same time, provide the best sound quality possible. The idea grew into White Light Audio and our one-stop solution.”

“Our ADR and Foley stage features a number of covered pits for textured footsteps and the like,” adds Bezouska. “The sound editorial and 5.1-channel re-recording space boasts a state-of-the-art Slate Pro Audio Raven MTi MultiTouch production console for our Avid Pro Tools HD systems; 5.1 monitoring is via Focal SM Series loudspeakers. We also love working with Waves Diamond Bundle plug-ins and iZotope RX 4 Advanced.”

Bezouska says the Raven MTi is a very intuitive edit/mix surface that streamlines the studio’s editorial and mixing workflows. He calls the 27-inch, six-touch display intuitive and fast. The surface connects to a DAW computer via standard DVI and USB 2.0 ports and uses the industry-standard Ney-Fi protocol to provide remote control of all DAW edit, mix and plug-in parameters.

Audio Pre-Pro
White Light Audio (@whitelightaudio) offers full production services for film and TV shoots, including location scouting. “We will visit a client’s intended shooting locations to review potential sound challenges,” Cox explains. “This allows us to anticipate audio issues that might occur on shoot days and eliminate the sound problems that lead to unnecessary costs during post.”

“Our pre-production meetings let us develop sound solutions during the critical stages before principal photography,” Burnison notes, “and help develop creative choices to deliver the most effective audio for a client’s project.” During production recording, WLA provides a two-to-three-person crew that manages a Schoeps CMIT-5U shotgun mic, a Zaxcom Nomad digital mixer/recorder, a Zaxcom IFB 100, Zaxcom TRX900LA wireless transmitters and QRX100 receivers, and Countryman B6 and Sanken COS-11 lavalier mics, plus Comtek 216 wireless monitoring.

“During editorial post, our sound editors work with a client’s production audio to prepare elements that best complement the edited images,” Cox continues. “We can pull from an extensive library of sound files in our digital libraries, or record original tracks to create a world of SFX. In addition, we have access to several talented Foley artists, props and surfaces to create unique sounds. We can also handle ADR and voice-over recordings in our custom-designed sound booth.”

While Burnison acknowledges that the studio’s production and post equipment is the latest and greatest, she points to their training and love of sound as key to their success. “Thanks to USC, we were fortunate enough to study under some of the greatest sound designers in the business. Because of that, our creative team possesses an appreciation of the needs of both production and post — we recognize the vital relationship between the two.

“Having produced many of our own graduate-student films from the ground up, we know the importance of the collaborative environment that a sound department needs to develop with directors, producers and other crew. Our goal is to help them achieve their dream during a complex process. Sound is such a powerful tool – creatively and emotionally. We love showing people how the right audio work can elevate a cinematic experience,” Burnison concludes.

Behind the Title: SuperExploder’s Jody Nazzaro

NAME: Jody Nazzaro

COMPANY: SuperExploder (@SuperExploder)

CAN YOU DESCRIBE YOUR COMPANY?
We’re a boutique audio house specializing in mixing, sound design and music composition for every screen and genre. Music supervision and licensing, as well as voice casting, round out our capabilities.

WHAT’S YOUR JOB TITLE?
Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
The sound designer/mixer, and to some extent composer, responsibilities have become blurred and intertwined for some time now. I predominantly sound design and mix commercials and promos for cinema, TV and IP delivery. When it comes to network promos, I often record voiceover, sound design and mix multiple spot lengths with many versions within a four-hour booking.

At the agency level, heavier sound design and original music jobs still occupy their own arena. I may sound design a spot, but not mix it, and I often mix spots another sound designer has crafted. It’s wonderful to collaborate with my colleagues, and it’s the mastering of the different disciplines that makes what I do exciting and challenging.

In Studio R

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
In addition to the above, I sometimes act as producer, voice coach, writer, Foley artist, voiceover talent, therapist, sommelier, concierge and quality control technician.

WHAT TOOLS DO YOU USE?
I use Avid Pro Tools and its arsenal of available plug-ins and synthesizers to translate what I’m hearing in my head into what I want the world to hear. My trusty Dolby Cat 430 and iZotope RX 4 allow me to fix problematic audio, and I love my RTW TouchMonitor for metering.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Usually in post, the final sound mix is one of the last steps of the project. I enjoy it when clients come to my studio for the mix after what is sometimes a month-long process of pre-production, shoot, edit, VFX, etc. By the time they show up to the studio, they’re pretty burnt out. It’s my task to inject some fresh life and vibrancy into their spot, and at the end of the session, have them feel happy about shipping a final product they’re proud of.

WHAT’S YOUR LEAST FAVORITE?
Sometimes clients get rough-cut love and it’s hard to get them to budge on what is a better mix decision for telling the story. That, and when no one pays attention while we’re recording voiceover.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning, before everyone arrives. I get to sit in a beautiful quiet studio and focus on the task at hand. No music, no phones, no email. It’s sort of a calm-before-the-storm pre-session meditation.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Flying bush planes and making wine in the Northwest.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I had a “communication arts” class in 8th grade and wanted to be a film editor. Then after college, at the company I was working for, an audio assistant position opened up. It paid more so I took it and that’s when I fell in love with the power of sound.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I just worked with Showtime/HBO to sound design and mix the open for the Mayweather vs. Pacquiao fight. I mixed a really nice spot with The Vault for Eastbay.com, and some funny work for ESPN and Amazon. I also just wrapped the sound design package for Spike TV’s rebrand.

From HBO’s open for the Mayweather vs. Pacquiao fight.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I put equal effort into every job all the time. Regardless of budget or profile. I think that’s what my clients have come to expect and it’s what they appreciate.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I can name three I’d like to un-invent! Smart phones, email and MP3 encoding!

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes. Whatever is in the spot I’m mixing… over and over and over and over again!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
We go down the shore whenever we can, and I chase my 19-month-old son around. The sound of his giggle makes it all go away.

Quick Chat: Lucky Post’s Scottie Richardson on ‘Reclaim the Kitchen’

Wolf Appliances and agency The Richards Group recently launched the “Reclaim the Kitchen” campaign meant to inspire families to prepare and eat meals together. At the center of this initiative is a film that shows audiences the joys of home cooking. Lucky Post’s Scottie Richardson, based in Dallas, created the sound design, music edit and the final mix for the three-minute, stop-motion piece directed by Brikk.

In Reclaim the Kitchen, the viewer’s perspective is that of sitting at a dinner table or preparing tasty dishes. There are statistics, meal suggestions and recipes. You can see the film on http://www.reclaimthekitchen.com, a site created to offer “tools to cook with confidence.”

We checked in with Richardson in order to dig a bit deeper into the sound and music.

When did you first meet with the agency?
I was put on hold by The Richards Group producer David Rucker, but didn’t truly know what the job would be. I only knew that I was working on a video for Wolf and I had two days to work on my own before the creative team would come in. David and I had just worked on a huge Chrysler campaign so there was a strong trust factor going in.

Scottie Richardson

What direction did they give or were they open to ideas?
The concept behind the Wolf project is “reclaim the kitchen,” getting people together to share home-cooked meals. It’s not meant to badger viewers; it’s more like, “Wow, what have I been missing? I can do this!” Inspiring and whimsical. In terms of the sound, I was just told to “do what I do.” That’s a dream on any project — to have the time to immerse yourself in the narrative. I wanted the sound design to match the integrity of this initiative.

There are many layers of sounds — the home, cooking, technology. What were you trying to convey with the audio?
The creatives did an unbelievable job crafting a sweeping yet simple message. Preparing a meal isn’t just about the food. Time, money and family are all important, and those factors are influenced by myriad circumstances, but rather than ignore them, the film addresses them head-on. The sounds outside the kitchen are designed to resonate with viewers, to put them in the moments that influence their meal decisions. Phones are often seen as tools that distract us from family time, but they can be used to help with family participation — Googling recipes, meal-planning apps, converting measurements — there are ways to use these tools mindfully and together. Fast food may be a modern solution to creating more time with your family, but if that time is allocated to a mission in the kitchen, your time is invested in camaraderie.

In some cases the sound was meant to add atmosphere, while in other cases it was to specifically key off of what was happening on screen. We used it to interplay with the voiceover script. There is a scene near the end that opens on a tight close-up; you hear the ticking of a stopwatch as the camera pulls out to reveal that it is actually a food scale. That sound accents the voiceover talking about “time” rather than the image, and it provides a nice juxtaposition. Overall, the goal was to key off of the verbal cues and visuals with both sound design and music edits so they were additional characters in the narrative. In some sections, we chose an absence of sound to allow moments to breathe and stand out. This piece was designed to inspire people to recalibrate, be somewhat introspective and learn, but not feel intimidated, so creating moments to process was crucial.

Can you talk about creating the sounds?
I have a large sound effects library that I’ve built over the last 20 years. I start with the logical pre-recorded, time-tested sounds as a baseline. On this particular job I pulled from my stock library but also Foley’d lots of sounds. I like to be musical with sound design, so I am constantly making sure the sounds work with the music track in tone or pitch. Sometimes that’s using reverb and delay to match the music and its space, or pitching things up or down. Being a musician, I like to use musical elements like cymbals or shakers to accent things as well. These elements integrated well on this project because Breed’s music track was so lively and elegant.

What about the mix?
At the end of the day, what the voiceover is saying is important, like the vocals to a great song. I made sure all of that was clear, then I experimented with the music and sound design. I built a nice bed for the voice to lie in that would hopefully let the poignancy of the message resonate. Sometimes it’s best to just feel the sound and not actually be able to articulate what it is. You miss it if it is gone, but you can’t actually say, “That was a scooter running over an umbrella.”

You wore a few different hats on this one. Can you talk about that?
Well, at first it was as a sound designer. I created a soundscape from beginning to end of the cut. Then I brought in the editor’s sound design and went through to see if anything clashed or to see what needed to be replaced or enhanced. When agency creative Dave Longfield came into the session, he had very specific things he wanted to try with the music, so we spent a half-day cutting up the music stems and trying out things to hit with picture. Breed’s music was amazing. It balanced the narrative with energy and the intelligence of the message. After that, we edited dialog, trying out various takes and pacing that felt right. This was followed by the mixing stage to bring it all together.

What tools do you call on?
This was all built and mixed in Avid Pro Tools. One tool I use often for sound design is Omnisphere by Spectrasonics. It allows me to make sound effects more musical and really transform them into something new.


Where do you find inspiration?
Honestly, all over. Art, movies, music. One of my favorite groups as a young kid was The Art Of Noise. I just loved how they made music out of door slams and breaking glass. I love how layering many sounds together makes one solid sound. I enjoy seeing a good movie and hearing a sound you wouldn’t expect for what you are seeing, or how the absence of sound speaks better than having one.

Finally, how did this project influence you personally?
I am truly the healthiest I have been in a long, long time. I have not had fast food in over three months and we have been cooking as a family every night for dinner. I’m avidly researching recipes and trying to one-up the next meal. This is a project that changed my lifestyle for the better. I didn’t see that coming.

Oscar-winner Ben Wilkins on Whiplash’s audio mix, edit

This BAFTA- and Oscar-winner walks us through his process.

By Randi Altman

When I first spoke with Ben Wilkins, he was freshly back from the Oscar-nominee luncheon in Hollywood and about to head to his native England to attend the BAFTAs. Wilkins was nominated by both academies for his post sound work on Sony Picture Classics’ Whiplash, the Damien Chazelle-directed film about an aspiring jazz drummer and his brutal instructor.

Wilkins (@tonkasound) didn’t return to LA empty handed — he, along with his fellow sound re-recording mixers, took home both awards.

Quick Chat: The Hit House’s Sally House on new Lexus spots

LA-based The Hit House created and produced original music and sound design for the new Lexus NX campaign via Team One Advertising. The Corner Shop produced and Wilfrid Brimo directed. Jump Editorial’s Richard Cooperman provided the cut.

The What You Get Out of It spot features a man in a parking garage, opening a large shipping container. Suddenly people start appearing and entering the container with random items, such as a bike, luggage and a dog. The man then closes the doors and they fall away, revealing a white Lexus filled with all the people and their stuff. They drive away together.

The other commercial in this campaign, which promotes the Lexus NX Hybrid, F Sport and Turbo models, is called Moving. The Hit House (@HitHouseMusic) describes the music they created as industrial and contemporary.