Augmenting Patriots Day’s sound with archival audio

By Jennifer Walden

Fresh off the theatrical release of his dramatized disaster film Deepwater Horizon, director Peter Berg brings another current event to the big screen with Patriots Day. The film recounts the Boston Marathon bombing by combining Berg’s cinematic footage with FBI-supplied archival material from the actual bombing and investigation.

Once again, Berg chose to partner with Technicolor’s supervising sound editor/re-recording mixer Dror Mohar, who contributed to the soundtracks of Berg’s Deepwater Horizon (2016) and Lone Survivor (2013). He earned an MPSE Award nomination for sound editing on the latter.

According to Mohar, Berg’s intention for Patriots Day was not to make a film about tragedy and terrorism, but rather to tell the story of a community’s courage in the face of this disaster. “This was personal for Peter [Berg]. His conviction about not exploiting or sensationalizing any of it was in every choice he made,” says Mohar. “He was vigilant about the cinematic attributes never compromising the authenticity and integrity of the story of the events and the people who were there — the law enforcement, victims and civilians. Peter wanted to evolve and explore the sound continuously. My compass throughout was to create a soundtrack that was as immersive as it was genuine.”

From a sound design perspective, Mohar was conscious of keeping the qualities and character of the sounds in check — favoring raw, visceral sounds over treated or polished ones. He avoided oversized “Hollywood” treatments. For example, Mohar notes the Watertown shootout sequence. The lead-up to the firefight was inspired by a source audio recording of the Watertown shootout captured by a neighbor on a handheld camera.

“Two things grabbed my attention — the density of the firefight, which sounded like Chinese New Year, and the sound of wind chimes from a nearby home,” he explains. “Within what sounded like war and chaos, there was a sweet sound that referenced home, family, porch… This shootout is happening in a residential area, in the middle of everyday life. Throughout the film, I wanted to maintain the balance between emotional and visceral sounds. Working closely with picture editors Colby Parker Jr. and Gabriel Fleming, we experimented with sound design that aligned directly with the dramatic effect of the visuals versus designs that counteracted the drama and created an experience that was less comfortable but ultimately more emotional.”

Tension was another important aspect of the design. The bombing disrupted life, and not just the lives of those immediately or physically affected by the bombing. Mohar wanted the sound to express those wider implications. “When the city is hit, it affects everyone. Something in that time period is just not the same. I used a variety of recordings of calls to prayer and crowds of people from all over the world to create soundscapes that you could expect to hear in a city but not in Boston. I incorporated these in different times throughout the film. They aren’t in your face, but used subtly.”

The Mix
On the mix, Mohar and re-recording mixer Mike Prestwood-Smith chose a realistic approach to their sonic treatments.

Prestwood-Smith notes that for an event as recent and close to the heart as the Boston Marathon bombing, the goal was to have respect for the people who were involved — to make Patriots Day feel real and not sensationalized in any sense. “We wanted it to feel believable, like you are witnessing it, rather than entertaining people. We want to be entertaining, engaging and dramatic, but ultimately we don’t want this to feel gratuitous, as though we are using these events to our advantage. That’s a tightrope to tread, not just for sound but for everything, like the shooting and the performances. All of it.”

Mohar reinforces the idea of enabling the audience to feel the events of the bombing first-hand through sound. “When we experience an event that shocks us, like a car crash, or in this case, an act of terror, the way we experience time is different. You assess what’s right there in front of you and what is truly important. I wanted to leverage this characteristic in the soundtrack to represent what it would be like to be there in real time, objectively, and to create a singular experience.”

Archival Footage
Mohar and Prestwood-Smith had access to enormous amounts of archival material from the FBI, which was strategically used throughout the soundtrack. In the first two reels, up to and including the bombing, Prestwood-Smith explains that picture editors Fleming and Parker Jr. intercut between the dramatized footage and the archived footage “literally within seconds of each other. Whole scenes became a dance between the original footage and the footage that Peter shot. In many cases, you’re not aware of the difference between the two and I think that is a very clever and articulate thing they accomplished. The sound had to adhere to that and it had to make you feel like you were never really shifting from one thing to the other.”

It was not a simple task to transition from the Hollywood-quality sound of the dramatized footage to sound captured on iPhones and low-resolution cameras. Prestwood-Smith notes that he and Mohar were constantly evolving the qualities of the sounds and mix treatments so all elements would integrate seamlessly. “We needed to keep a balance between these very different sound sources and make them feel coherently part of one story rather than shifting too much between them all. That was probably the most complex part of the soundtrack.”

Berg’s approach to perspective — showing the event from a reporter’s point of view as opposed to a spectator’s point of view — helped the sound team interweave the archival material and fictionalized material. For example, Prestwood-Smith reports the crowd sounds were 90 percent archival material, played from the perspective of different communication sources, like TV broadcasts, police radio transmissions and in-ear exchanges from production crews on the scene. “These real source sounds are mixed with the actors’ dialogue to create a thread that always keeps the story together as we alternate through archival and dramatized picture edits.”

While intercutting various source materials for the marathon and bombing sequences, Mohar and Prestwood-Smith worked shot by shot, determining for each whether to highlight an archival sound, carry the sound across from the previous shot or go with another specific sound altogether, regardless of whether it was one they created or one that was from the original captured audio.

“There would be archival footage with screaming on it that would go across to another shot and connect the archive footage to the dramatized, or sometimes not. We literally worked inch-by-inch to make it feel like it all belonged in one place,” explains Prestwood-Smith. “We did it very boldly. We embraced it rather than disguised it. Part of what makes the soundtrack so dynamic is that we allow each shot to speak in its genuine way. In the earlier reels, where there is more of the archival footage, the dynamics of it really shift dramatically.”

Patriots Day is not meant to be a clinical representation of the event. It is not a documentary. By dramatizing the Boston Marathon bombing, Berg delivers a human story on an emotional level. He uses music to help articulate the feeling of a scene and guide the audience through the story emotionally.

“On an emotional level, the music did an enormous amount of heavy lifting because so much of the sound work was really there to give the film a sense of captured reality and truth,” says Prestwood-Smith. “The music is one of the few things that allows the audience to see the film — the event — slightly differently. It adds more emotion where we want it to but without ever tipping the balance too far.”

The Score
Composers Trent Reznor and Atticus Ross gave each cue a definitive role. Their music helps the audience decompress for certain moments before being thrust right back into the action. “Their compositions were so intentional and so full of character and attitude. It’s not generic,” says Mohar. “Each cue feels like a call to action. The tracks have eyes and mouths and teeth. It’s very intentional. The music is not just an emotional element; it’s part of the sound design and sound overall. The sound and music work together to contribute equally to this film.”

“The way that we go back and forth between the archival footage and the dramatized footage was the same way we went from designed audio to source audio, from music to musical, from sound effects to sound effective,” he continues. “On each scene, we decided to either blur the line between music and effects, between archival sound and designed sound, or to have a hard line between each.”

To complement the music, Mohar experimented with rhythmic patterns of sounds to reinforce the level of intensity of certain scenes. “I brought in mechanical keyboards of various types, ages and material, and recorded different typing rhythms on them. These sounds were used in many of the Black Falcon terminal scenes. I used softer sounding keyboards with slower tempos when I wanted the level of tension to be lower, and then accelerated them into faster tempos with harsher sounding keyboards as the drama in the terminal increased,” he says. “By using modest, organic sounds I could create a subliminal sense of tension. I treated the recordings with a combination of plug-ins, delays, reverbs and EQs to create sounds that were not assertive.”
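
For readers curious how a treatment like that breaks down in practice, here is a minimal, hypothetical sketch of the general idea, not Mohar’s actual chain: a typing recording is sped up by resampling, rolled off on top so the keys read as texture rather than transients, and blurred with a quiet feedback delay. The file name, tempo, cutoff and delay settings are all assumptions for illustration, using only NumPy and SciPy.

# A rough, hypothetical sketch (not Mohar's actual chain): speed up a
# typing-rhythm recording, soften it with a low-pass filter and blur the
# attacks with a quiet feedback delay. Assumes a 16-bit mono WAV named
# "keyboard_typing.wav"; NumPy/SciPy only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter, resample

sr, typing = wavfile.read("keyboard_typing.wav")
typing = typing.astype(np.float64) / 32768.0        # 16-bit PCM to float

tempo = 1.25                                        # 25 percent faster (pitch rises too)
typing_fast = resample(typing, int(len(typing) / tempo))

b, a = butter(2, 2500 / (sr / 2), btype="low")      # gentle low-pass "EQ"
soft = lfilter(b, a, typing_fast)

delay = int(0.120 * sr)                             # 120 ms feedback delay
out = np.copy(soft)
for n in range(delay, len(out)):
    out[n] += 0.25 * out[n - delay]

out /= max(1e-9, np.max(np.abs(out)))               # normalize and write
wavfile.write("keyboard_tension_layer.wav", sr, (out * 32767).astype(np.int16))

Pushing the cutoff down and the tempo up moves the layer from the “softer, slower” end toward the “harsher, faster” end of the scale Mohar describes.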

Dialogue
In terms of dialogue, the challenge was to get the archive material and the dramatized material to live in the same space emotionally and technically, says Prestwood-Smith. “There were scenes where Mark Wahlberg’s character is asking for ambulances or giving specific orders and playing underneath that dialogue is real, archival footage of people who have just been hurt by these explosions talking on their phones. Getting those two things to feel integrated was a complex thing to do. The objective was to make the sound believable. ‘Is this something I can believe?’ That was the focus.”

Prestwood-Smith used a combination of Avid and FabFilter plug-ins for EQ and dynamics, and created reverbs using Exponential Audio’s PhoenixVerb and Audio Ease’s Altiverb.

Staying in The Box
From sound editorial through to the final mix, Mohar and Prestwood-Smith chose to keep the film in Pro Tools. Staying in the box offered the best workflow solution for Patriots Day. Mohar designed and mixed the first phase of the film at his studio at Technicolor’s Tribeca West location in Los Angeles, a satellite of Technicolor’s main sound facility at Paramount, while Prestwood-Smith worked out of his own mix room in London. The two collaborated remotely, sharing their work back and forth, continuously developing the mix to match the changing picture edit. “We were on a very accelerated schedule, and they were cutting the film all the way through mastering. Having everything in the box meant that we could constantly evolve the soundtrack,” says Prestwood-Smith.

7.1 Surround Mix
Mohar and Prestwood-Smith met up for the final 7.1 surround mix at 424 Post in Hollywood and mixed the immersive versions at Technicolor Hollywood.

While some mix teams prefer to split the soundtrack, with one mixer on music and dialogue and the other handling sound effects and Foley, Mohar and Prestwood-Smith have a much more fluid approach. There is no line drawn across the board; they share the tracks equally.

“Mike has great taste and instincts; he doesn’t operate like a mixer. He operates like a filmmaker and I look to him to make the final decisions and direct the shape of the soundtrack,” explains Mohar. “The best thing about working with Mike is that it’s truly collaborative; no part of the mix belonged to just one person. Anything was up for grabs and the sound as a whole belonged to the story. It makes the mix more unified, and I wouldn’t have it any other way.”


Jennifer Walden is a New Jersey-based audio pro and writer. 

Behind the Title: Heard City senior sound designer/mixer Cory Melious

NAME: Cory Melious

COMPANY: Heard City (@heardcity)

CAN YOU DESCRIBE YOUR COMPANY?
We are an audio post production company.

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
I provide final mastering of the audio soundtrack for commercials, TV shows and movies. I combine the production audio recorded on set (typically dialog), narration, music (whether it’s an original composition or an artist’s track) and sound effects (often created by me) into one 5.1 surround soundtrack that plays on both TV and the Internet.

WHAT WOULD SURPRISE PEOPLE ABOUT WHAT FALLS UNDER THAT TITLE?
I think most people without a production background think the sound of a spot just “is.” They don’t really think about how or why it happens. Once I start explaining the sonic layers we combine to make up the final mix they are really surprised.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The part that really excites me is the fact that each spot offers its own unique challenge. I take raw audio elements and tweak and mold them into a mix. Working with the agency creatives, we’re able to develop a mix that helps tell the story being presented in the spot. In that respect I feel like my job changes day in and day out and feels fresh every day.

WHAT’S YOUR LEAST FAVORITE?
Working late! There are a lot of late hours in creative jobs.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I really like finishing a job. It’s that feeling of accomplishment when, after a few hours, I’m able to take some pretty rough-sounding dialog and manipulate that into a smooth-sounding final mix. It’s also when the clients we work with are happy during the final stages of their project.

WHAT TOOLS DO YOU USE ON A DAY-TO-DAY BASIS?
Avid Pro Tools, Izotope RX, Waves Mercury, Altiverb and Revibe.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
One of my many hobbies is making furniture. My dad is a carpenter and taught me how to build at a very young age. If I never had the opportunity to come to New York and make a career here, I’d probably be building and making furniture near my hometown of Seneca Castle, New York.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I think this profession chose me. When I was a kid I was really into electronics and sound. I was both the drummer and the front of house sound mixer for my high school band. Mixing from behind the speakers definitely presents some challenges! I went on to college to pursue a career in music recording, but when I got an internship in New York at a premier post studio, I truly fell in love with creating sound for picture.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, I’ve worked on Chobani, Google, Microsoft, and Budweiser. I also did a film called The Discovery for Netflix.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I’d probably have to say Chobani. That was a challenging campaign because the athletes featured in it were very busy. In order to capture the voiceover properly I was sent to Orlando and Los Angeles to supervise the narration recording and make sure it was suitable for broadcast. The spots ran during the Olympics, so they had to be top notch.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iPhone, iPad and depth finder. I love boating and can’t imagine navigating these waters without knowing the depth!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on the basics — Facebook, LinkedIn and Instagram. I dabble with SnapChat occasionally and will even open up Twitter once in a while to see what’s trending. I’m a fan of photography and nature, so I follow a bunch of outdoor Instagrammers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I joke with my friends that all of my hobbies are those of retired folks — sailing, golfing, fly fishing, masterful dog training, skiing, biking, etc. I joke that I’m practicing for retirement. I think hobbies that force me to relax and get out of NYC are really good for me.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

“Girls do not do rewrites,” says Jim Belushi’s character, Wick McFadden, in Amazon Studios’ series Good Girls Revolt. It’s 1969, and he’s the national editor at News of the Week, a fictional news magazine based in New York City. He’s confronting the new researcher Nora Ephron (Grace Gummer) who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is it’s 1969 and women aren’t writers; they’re only “researchers” following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender-bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom they constructed had an open floor plan with a bi-level design. The girls are located in “the pit” area downstairs from the male writers. The newsroom production set was hollow, which caused an issue with the actors’ footsteps that were recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. “We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage,” says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work — sound editorial, Foley, ADR, loop group and final mix — was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialog editor Sean Massey made every effort to save production dialog when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, “the set mixer Joe Foglia was terrific,” says Austin. “He captured some great tracks despite all these issues, and for that we’re very thankful!”

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds, like typewriters clacking and dinging, the sound of rotary phones with whirring dials and bell-style ringers, the sound of papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library, from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and helped to blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this and so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years, and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.
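
As a point of reference for that kind of treatment, the sketch below shows one crude way to get a “slowed and smeared” feel with generic tools rather than Revoice Pro or D-Verb: the clip is stretched and pitched down by resampling, then blurred by convolving it with a short decaying noise burst standing in for a reverb tail. The file name and settings are illustrative assumptions.

# A crude, hypothetical take on the "altered state" treatment (not Revoice Pro
# or D-Verb): slow and pitch the loop group down by resampling, then smear it
# with a decaying noise burst used as a stand-in impulse response.
# Assumes a 16-bit mono WAV named "loop_group.wav"; NumPy/SciPy only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample, fftconvolve

sr, voices = wavfile.read("loop_group.wav")
voices = voices.astype(np.float64) / 32768.0

slowed = resample(voices, int(len(voices) * 1.3))   # ~30 percent slower and lower

ir_len = int(1.5 * sr)                              # 1.5 s synthetic "reverb" tail
rng = np.random.default_rng(0)
ir = rng.standard_normal(ir_len) * np.exp(-4.0 * np.linspace(0, 1, ir_len))
smeared = fftconvolve(slowed, ir)[: len(slowed)]
smeared /= max(1e-9, np.max(np.abs(smeared)))

mix = 0.5 * slowed + 0.5 * smeared                  # blend dry and smeared voices
mix /= max(1e-9, np.max(np.abs(mix)))
wavfile.write("loop_group_warped.wav", sr, (mix * 32767).astype(np.int16))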

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialog/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.

Sony Pictures Post adds home theater dub stage

By Mel Lambert

Reacting to the increasing popularity of home theater systems that offer immersive sound playback, Sony Pictures Post Production has added a new mix stage to accommodate next-generation consumer audio formats.

Located in the landmark Thalberg Building on the Sony Pictures lot in Culver City, the new Home Theater Immersive Mix Stage features a flexible array of loudspeakers that can accommodate not only Dolby Atmos and Barco Auro-3D immersive consumer formats, but also other configurations as they become available, including DTS:X, as well as conventional 5.1- and 7.1-channel legacy formats.

The new room has already seen action on an Auro-3D consumer mix for director Paul Feig’s Ghostbusters and director Antoine Fuqua’s Magnificent Seven in both Atmos and Auro-3D. It is scheduled to handle home theater mixes for director Morten Tyldum’s new sci-fi drama Passengers, which will be overseen by Kevin O’Connell and Will Files, the re-recording mixers who worked on the theatrical release.

L-R: Nathan Oishi; Diana Gamboa, director of Sony Pictures Post Sound; Kevin O’Connell, re-recording mixer on ‘Passengers’; and Tom McCarthy.

“This new stage keeps us at the forefront in immersive sound, providing an ideal workflow and mastering environment for home theaters,” says Tom McCarthy, EVP of Sony Pictures Post Production Services. “We are empowering mixers to maximize the creative potential of these new sound formats, and deliver rich, enveloping soundtracks that consumers can enjoy in the home.”

Reportedly, Sony is one of the few major post facilities that currently can handle both Atmos and Auro-3D immersive formats. “We intend to remain ahead of the game,” McCarthy says.

The consumer mastering process involves repurposing original theatrical release soundtrack elements for a smaller domestic environment at reduced playback levels suitable for Blu-ray, 4K Ultra HD disc and digital delivery. The Home Atmos format involves a 7.4.1 configuration, with a horizontal array of seven loudspeakers — three up-front, two side channels and two rear surrounds — in addition to four overhead/height and a subwoofer/LFE channel. The consumer Auro-3D format, in essence, involves a pair of 5.1-channel loudspeaker arrays — left, center, right plus two rear surround channels — located one above the other, with all speakers approximately six feet from the listening position.
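
Expressed as data, the layouts described above look roughly like the sketch below. The speaker labels are illustrative shorthand, not official Dolby or Auro channel names.

# Illustrative channel layouts based on the description above; the labels are
# informal shorthand rather than official Dolby or Auro naming.
HOME_LAYOUTS = {
    "dolby_atmos_home": {
        "floor":  ["L", "C", "R", "Lss", "Rss", "Lrs", "Rrs"],  # 7 ear-level speakers
        "height": ["Ltf", "Rtf", "Ltr", "Rtr"],                 # 4 overhead speakers
        "lfe":    ["LFE"],                                       # 1 subwoofer channel
    },
    "auro_3d_consumer": {
        "lower": ["L", "C", "R", "Ls", "Rs"],        # lower layer
        "upper": ["HL", "HC", "HR", "HLs", "HRs"],   # matching layer directly above
        "lfe":   ["LFE"],
    },
    "legacy_7.1": {"floor": ["L", "C", "R", "Lss", "Rss", "Lrs", "Rrs"], "lfe": ["LFE"]},
    "legacy_5.1": {"floor": ["L", "C", "R", "Ls", "Rs"], "lfe": ["LFE"]},
}

def channel_count(layout: dict) -> int:
    """Total number of discrete speaker feeds in a layout."""
    return sum(len(group) for group in layout.values())

assert channel_count(HOME_LAYOUTS["dolby_atmos_home"]) == 12   # 7 + 4 + 1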

Formerly an executive screening room, the new 600-square-foot stage is designed to replicate the dimensions and acoustics of a typical home-theater environment. According to the facility’s director of engineering, Nathan Oishi, “The room features a 24-fader Avid S6 control surface console with Pan/Post modules. The four in-room Avid Pro Tools HDX 3 systems provide playback and record duties via Apple 12-Core Mac Pro CPUs with MADI interfaces and an 8TB Promise Pegasus hard disk RAID array, plus a wide array of plug-ins. Picture playback is from a Mac Mini and Blackmagic HD Extreme video card with a Brainstorm DCD8 Clock for digital sync.”

An Avid/DAD AX32 Matrix controller handles monitor assignments, which then route to a BSS BLU 806 programmable EQ that handles all of the standard B-chain duties for distribution to the room’s loudspeaker array. These comprise a total of 13 JBL LSR-708i two-way loudspeakers and two JBL 4642A dual-15-inch subwoofers powered by Crown DCI Series networked amplifiers. Atmos panning within Pro Tools is accommodated by the familiar Dolby Rendering and Mastering Unit (RMU).

During September’s “Sound for Film and Television Conference,” Dolby’s Gary Epstein demo’d Atmos. ©2016 Mel Lambert.

“A Delicate Audio custom truss system, coupled with Adaptive Technologies speaker mounts, enables the near-field monitor loudspeakers to be re-arranged and customized as necessary,” adds Oishi. “Flexibility is essential, since we designed the room to seamlessly and fully support both Dolby Atmos and Auro formats, while building in sufficient routing, monitoring and speaker flexibility to accommodate future immersive formats. Streaming and VR deliverables are upon us, and we will need to stay nimble enough to quickly adapt to new specifications.”

Regarding the choice of a mixing controller for the new room, McCarthy says that he is committed to integrating more Avid S6 control surfaces into the facility’s workflow, as witnessed by their current use within several theatrical stages on the Sony lot. “Our talent is demanding it,” he states. “Mixing in the box lets our editors and mixers keep their options open until print mastering. It’s a more efficient process, both creatively and technically.”

The new Immersive Mix Stage will also be used as a “Flex Room” for Atmos pre-dubs when other stages on the lot are occupied. “We are also planning to complete a dedicated IMAX re-recording stage early next year,” reports McCarthy.

“As home theaters grow in sophistication, consumers are demanding immersive sound, ultra HD resolution and high-dynamic range,” says Rich Berger, SVP of digital strategy at Sony Pictures Home Entertainment. “This new stage allows our technicians to more closely replicate a home theater set-up.”

“The Sony mix stage adds to the growing footprint of Atmos-enabled post facilities and gives the Hollywood creative community the tools they need to deliver an immersive experience to consumers,” states Curt Behlmer, Dolby’s SVP of content solutions and industry relations.

Adds Auro Technologies CEO Wilfried Van Baelen, “Having major releases from Sony Pictures Home Entertainment incorporate Auro-3D helps provide this immersive experience and ensures audiences are able to enjoy films as the creator intended.”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

TrumpLand gets quick turnaround via Technicolor Postworks

Michael Moore in TrumpLand is a 73-minute film that documents a one-man show performed by Moore over two nights in October to a mostly Republican crowd at a theater in Ohio. It made its premiere just 11 days later at New York’s IFC Center.

The very short timeframe between live show and theatrical debut included a brisk five days at Technicolor PostWorks New York, where sound and picture were finalized. [Editor’s note: The following isn’t any sort of political statement. It’s just a story about a very quick post turnaround and the workflow involved. Enjoy!]

Michael Kurihara was supervising sound editor and re-recording mixer on the project. He was provided with the live feeds from more than a dozen microphones used to record the event. “Michael had a hand-held mic and a podium mic, and there were boom mics throughout the crowd,” Kurihara recalls. “They set it up like they were recording an orchestra with mics everywhere. I was able to use those boom mics and some on stage to push sound into the surrounds to really give you the feeling that you are sitting in the theater.”

Kurihara’s main objectives, naturally, were to ensure that the dialogue was clear and that the soundtrack, which included elements from both nights, was consistent, but he also worked to capture the flavor of the event. He notes, for example, that Moore wanted to preserve the way that he used his microphone to produce comic effects. “He did a funny bit about the Clinton Foundation, and used the mic the way stand-up comics do, holding it closer or further away to underscore the joke,” Kurihara says. “By holding the mic at different angles, he makes the sound warmer or punchier.”

Kurihara adds that the mix sessions did not follow a conventional, linear path as creative editorial was still ongoing. “That made it a particularly exciting project,” he notes. “We were never just mixing. Editorial changes continued to arrive right up to the point of print.”

Focusing on Picture
Colorist Allie Ames handled the film’s picture finishing. Similar to Kurihara, her task was to cement visual consistency while maintaining the immediacy of the live event. She worked from a conformed version of the film, supplied by the editing team.

According to Ames, “It already had a beautiful look from the way it was staged and shot, therefore, my goal was to embrace and enhance the intimacy of the location and create a consistent look that would draw the film audience into the world of the theatrical audience without distracting from Michael’s stage performance.”

Moore and his producers attended most of the sound mixing and picture grading sessions. “It was an unusual and exciting process,” says Ames. “Usually, you have weeks to finish a film, but in this case we had to get it out quickly. It was an honor to contribute to this project.”

Technicolor PostWorks has provided post services for several of Moore’s documentaries, including Where to Invade Next, which debuted earlier this year. For TrumpLand the facility created deliverables for the premiere at IFC, and subsequent theatrical and Netflix releases.

Says Moore, “Simply put, there would have been no TrumpLand movie without Technicolor PostWorks. They have a dedicated team of artists who are passionate about filmmaking, and especially about documentaries. In this instance, they went above and beyond what was asked of them to ensure we were ready in record time for our premiere — and they did so without compromising quality or creativity. I did my previous film with them a year ago and in just 14 months they were already using technology so new it made our 2015 experience feel so… 2015.”

The sound of fighting in Jack Reacher: Never Go Back

By Jennifer Walden

Tom Cruise is one tough dude, and not just on the big screen. Cruise, who seems to be aging very gracefully, famously likes to do his own stunts, much to the dismay of many film studio execs.

Cruise’s most recent tough guy turn is in the sequel to 2014’s Jack Reacher. Jack Reacher: Never Go Back, which is in theaters now, is based on the protagonist in author Lee Child’s series of novels. Reacher, as viewers quickly find out, is a hands-on type of guy — he’s quite fond of hand-to-hand combat where he can throw a well-directed elbow or headbutt a bad guy square in the face.

Supervising sound editor Mark P. Stoeckinger, based at Formosa Group’s Santa Monica location, has worked on numerous Cruise films, including both Jack Reacher films, Mission: Impossible II and III and The Last Samurai, and he helped out on Edge of Tomorrow. Stoeckinger has a ton of respect for Cruise: “He’s my idol. Being about the same age, I’d love to be as active and in shape as he is. He’s a very amazing guy because he is such a hard worker.”

The audio post crew on ‘Jack Reacher: Never Go Back.’ Mark Stoeckinger is on the right.

Because he does his own stunts, and thanks to the physicality of Jack Reacher’s fighting style, sometimes Cruise gets a bruise or two. “I know he goes through a fair amount of pain, because he’s so extreme,” says Stoeckinger, who strives to make the sound of Reacher’s punches feel as painful as they are intended to be. If Reacher punches through a car window to hit a guy in the face, Stoeckinger wants that sound to have power. “Tom wants to communicate the intensity of the impacts to the audience, so they can appreciate it. That’s why it was performed that way in the first place.”

To give the fights that visceral, intense Reacher feel, Stoeckinger takes a multi-frequency approach. He layers high-frequency sounds, like swishes and slaps to signify speed, with low-end impacts to add weight. The layers are always an amalgamation of sound effects and Foley.
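
In rough code terms, that layering idea reads something like the sketch below: a high-passed “swish” supplies the speed, a low-passed impact supplies the weight, and the two are summed. This is only an illustration of the principle, not Stoeckinger’s recipe, and the file names, cutoffs and gains are assumptions.

# Illustration of multi-frequency layering (not Stoeckinger's actual recipe):
# high-pass a "swish" for speed, low-pass an "impact" for weight, then sum.
# Assumes 16-bit mono WAVs "swish.wav" and "impact.wav" at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def load(path):
    sr, x = wavfile.read(path)
    return sr, x.astype(np.float64) / 32768.0

sr, swish  = load("swish.wav")
_,  impact = load("impact.wav")

hp = butter(4, 2000 / (sr / 2), btype="high", output="sos")   # keep the speed
lp = butter(4, 200 / (sr / 2), btype="low", output="sos")     # keep the weight

n = max(len(swish), len(impact))
layer = np.zeros(n)
layer[: len(swish)]  += 0.7 * sosfilt(hp, swish)
layer[: len(impact)] += 1.0 * sosfilt(lp, impact)

layer /= max(1e-9, np.max(np.abs(layer)))
wavfile.write("punch_layered.wav", sr, (layer * 32767).astype(np.int16))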

Stoeckinger prefers pulling hit impacts from sound libraries, or creating impacts specifically with “oomph” in mind. Then he uses Foley to flesh out the fight, filling in the details to connect the separate sound effects elements in a way that makes the fights feel organic.

The Sounds of Fighting
Under Stoeckinger’s supervision, a fight scene’s sound design typically begins with sound effects. This allows his sound team to start immediately, working with what they have at hand. On Jack Reacher: Never Go Back this task was handed over to sound effects editor Luke Gibleon at Formosa Group. Once the sound effects were in place, Stoeckinger booked the One Step Up Foley stage with Foley artist Dan O’Connell. “Having the effects in place gives us a very clear idea of what we want to cover with Foley,” he says. “Between Luke and Dan, the fight soundscapes for the film came to life.”

The culminating fight sequence, where Reacher inevitably prevails over the bad guy, was Stoeckinger’s favorite to design. “The arc of the film built up to this fight scene, so we got to use some bigger sounds. Although, it still needed to seem as real as a Hollywood fight scene can be.”

The sound there features low-frequency embellishments that help the audience to feel the fight and not just hear it. The fight happens during a rowdy street festival in New Orleans in honor of the Day of the Dead. Crowds cavort with noisemakers, bead necklaces rain down, music plays and fireworks explode. “Story wise, the fireworks were meant to mask any gunshots that happened in the scene,” he says. “So it was about melding those two worlds — the fight and the atmosphere of the crowds — to help mask what we were doing. That was fun and challenging.”

The sounds of the street festival scene were all created in post since there was music playing during filming that wasn’t meant to stay on the track. The location sound did provide a sonic map of the actual environment, which Stoeckinger considered when rebuilding the scene. He also relied on field recordings captured by Larry Blake, who lives in New Orleans. “Then we searched for other sounds that were similar because we wanted it to sound fun and festive but not draw the ear too much since it’s really just the background.”

Stoeckinger sweetened the crowd sounds with recordings they captured of various noisemakers, tambourines, bead necklaces and group ADR to add mid-field and near-field detail when desired. “We tried to recreate the scene, but also gave it a Hollywood touch by adding more specifics and details to bring it more to life in various shots, and bring the audience closer to it or further away from it.”

Stoeckinger also handled design on the film’s other backgrounds. His objective was to keep the locations feeling very real, so he used a combination of practical effects they recorded and field recordings captured by effects editor Luke Gibleon, in addition to library effects. “Luke [Gibleon] has a friend with access to an airport, so Luke did some field recordings of the baggage area and various escalators with people moving around. He also captured recordings of downtown LA at night. All of those field recordings were important in giving the film a natural sound.”

There were numerous locations in this film. One is when Reacher meets up with a teenage girl whom he’s protecting from the bad guys. She lives in a sketchy part of town, so to reinforce the sketchiness of the neighborhood, Stoeckinger added nearby train tracks to the ambience and created street walla that had an edgy tone. “It’s nothing that you see outside of course, but sound-wise, in the ambient tracks, we can paint that picture,” he explains.

In another location, Stoeckinger wanted to sell the idea that they were on a dock, so he added in a boat horn. “They liked the boat horn sound so much that they even put a ship in the background,” he says. “So we had little sounds like that to help ground you in the location.”

Tools and the Mix
At Formosa, Stoeckinger had his team work together in one big Avid Pro Tools 12 session that included all of their sounds: the Foley, the backgrounds, sound effects, loop group and design elements. “We shared it,” he says. “We had a ‘check out’ system, like, ‘I’m going to check out reel three and work on this sequence.’ I did some pre-mixing, where I went through a scene or reel and decided what’s working or what sections needed a bit more. I made a mark on a timeline and then handed that off to the appropriate person. Then they opened it up and did some work. This master session circulated between two or three of us that way.” Stoeckinger, Gibleon and sound designer Alan Rankin, who handled guns and miscellaneous fight sounds, worked on the film this way.
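
The “check out” system he describes amounts to a simple mutual-exclusion ledger. The toy sketch below captures the idea; the function names and editor names are hypothetical.

# A toy version of the "check out" idea: a shared ledger records which editor
# currently holds each reel so two people don't open the same section at once.
# Everything here (names, functions) is hypothetical illustration.
checked_out = {}   # reel number -> editor name

def check_out(reel: int, editor: str) -> bool:
    """Claim a reel if nobody else holds it; return True on success."""
    if reel in checked_out:
        print(f"Reel {reel} is already out with {checked_out[reel]}")
        return False
    checked_out[reel] = editor
    return True

def check_in(reel: int, editor: str) -> None:
    """Release a reel once the pass on it is done."""
    if checked_out.get(reel) == editor:
        del checked_out[reel]

check_out(3, "editor_a")   # "I'm going to check out reel three"
check_out(3, "editor_b")   # blocked until reel three is checked back in
check_in(3, "editor_a")
check_out(3, "editor_b")   # now succeeds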

All the sound effects, backgrounds, and Foley were mixed on a Pro Tools ICON, and kept virtual from editorial to the final mix. “That was helpful because all the little pieces that make up a sound moment, we were able to adjust them as necessary on the stage,” explains Stoeckinger.

Premixing and the final mixes were handled at Twentieth Century Fox Studios on the Howard Hawks Stage by re-recording mixers James Bolt (effects) and Andy Nelson (dialogue/music). Their console arrangement was a hybrid, with the effects being mixed on an Avid ICON, and the dialogue and music mixed on an AMS Neve DFC console.

Stoeckinger feels that Nelson did an excellent job of managing the dialogue, particularly for moments where noisy locations may have intruded upon subtle line deliveries. “In emotional scenes, if you have a bunch of noise that happens to be part of the dialogue track, that detracts from the scene. You have to get all of the noise under control from a technical standpoint.” On the creative side, Stoeckinger appreciated Nelson’s handling of Henry Jackman’s score.

On effects, Stoeckinger feels Bolt did an amazing job in working the backgrounds into the Dolby Atmos surround field, like placing PA announcements in the overheads, pulling birds, cars or airplanes into the surrounds. While Stoeckinger notes this is not an overtly Atmos film, “it helped to make the film more spatial, helped with the ambiences and they did a little bit of work with the music too. But, they didn’t go crazy in Atmos.”

iZotope intros mixing plug-in Neutron at AES show

iZotope was at last week’s AES show in LA with Neutron, their newest plug-in, which is geared toward simplifying and enhancing the mixing process. Neutron’s Track Assistant saves you time by listening to your audio and recommending custom starting points for tracks. According to iZotope, analysis intelligence within Neutron allows Track Assistant to automatically detect instruments, recommend the placement of EQ nodes and set optimal settings for other modules. Users still maintain full control over all their mix decisions, but Track Assistant gives them more time to focus on their creative take on the mix.

Neutron’s Masking Meter allows you to visually identify and fix perceptual frequency collisions between instruments, which can result in guitars masking lead vocals, bass covering up drums and other issues that can cause a “muddy” or overly crowded mix. Easily tweak each track to carve away muddiness and reveal new sonic possibilities.
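
To make the masking idea concrete, here is a toy analysis along the same lines, emphatically not iZotope’s algorithm: average the spectra of two stems, split them into log-spaced bands and flag bands where both stems carry significant energy. File names and thresholds are assumptions.

# A toy masking check (not iZotope's algorithm): flag log-spaced bands where
# two stems both carry significant average energy. Assumes 16-bit mono WAVs
# "vocal.wav" and "guitar.wav" at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def band_energy(path, bands=30):
    sr, x = wavfile.read(path)
    x = x.astype(np.float64) / 32768.0
    f, _, Z = stft(x, fs=sr, nperseg=2048)
    mag = np.mean(np.abs(Z), axis=1)                 # average magnitude spectrum
    edges = np.geomspace(40, sr / 2, bands + 1)      # log-spaced band edges
    e = [mag[(f >= lo) & (f < hi)].sum() for lo, hi in zip(edges[:-1], edges[1:])]
    return edges, np.array(e)

edges, vocal  = band_energy("vocal.wav")
_,     guitar = band_energy("guitar.wav")

vocal /= max(1e-9, vocal.max())
guitar /= max(1e-9, guitar.max())
for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
    if vocal[i] > 0.5 and guitar[i] > 0.5:           # both loud in the same band
        print(f"possible masking around {lo:.0f}-{hi:.0f} Hz")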

“[Neutron] has a deep understanding of the tracks and where they compete with one another, and it offers subtle enhancements to the sound based on that understanding,” explains iZotope CEO/co-founder Mark Ethier.

Neutron can be used on every track, offering zero-latency, CPU-efficient performance. It offers static/dynamic EQ, two multiband compressors, a multiband Transient Shaper, a multiband Exciter and a True Peak Limiter.

What the plug-in offers:
• The ability to automatically detect different instruments — such as vocals, dialogue, guitar, bass, and drums — and then apply the spectral shaping technology within Neutrino to provide subtle clarity and balance to each track.
• Recommendations for optimal starting points using Track Assistant, including EQ nodes, compressor thresholds, saturation types and multiband crossover points.
• It carves out sonic space using the Masking Meter to help each instrument sit better in the mix.
• The ability to create a mix with five mixing processors integrated into one CPU-efficient channel strip, offering both clean digital and warm vintage-flavored processing.
• There is surround support [Advanced only] for audio post pros who need to enhance the audio-for-picture experience.
• There are individual plug-ins [Advanced Only] for the Equalizer, Compressor, Transient Shaper and Exciter.

Neutron and Neutron Advanced are available now. Neutron Advanced will also be available as part of iZotope’s new Music Production Bundle 2. This combines iZotope’s latest products with its other tools, including Ozone 7 Advanced, Nectar 2 Production Suite, VocalSynth, Trash 2 Expanded, RX Plug-in Pack and Insight.

Neutron, Neutron Advanced and the Music Production Bundle 2 will be discounted through October 31, 2016: Neutron will be available for $199 (reg. $249); Neutron Advanced will be available for $299 (reg. $349); and the Music Production Bundle 2 will be available for $499 (reg. $699).

Deepwater Horizon’s immersive mix via Twenty Four Seven Sound

By Jennifer Walden

The Peter Berg-directed film Deepwater Horizon, in theaters now, opens on a black screen with recorded testimony from real-life Deepwater Horizon crew member Mike Williams recounting his experience of the disastrous oil spill that began April 20, 2010 in the Gulf of Mexico.

“This documentary-style realism moves into a wide, underwater immersive soundscape. The transition sets the music and sound design tone for the entire film,” explains Eric Hoehn, re-recording mixer at Twenty Four Seven Sound in Topanga Canyon, California. “We intentionally developed the immersive mixes to drop the viewer into this world physically, mentally and sonically. That became our mission statement for the Dolby Atmos design on Deepwater Horizon. Dolby empowered us with the tools and technology to take the audience on this tightrope journey between anxiety and real danger. The key is not to push the audience into complete sensory overload.”

L-R: Eric Hoehn and Wylie Stateman.  Photo Credit: Joe Hutshing

The 7.1 mix on Deepwater Horizon was crafted first with sound designer Wylie Stateman and re-recording mixers Mike Prestwood Smith (dialogue/music) and Dror Mohar (sound effects) at Warner Bros in New York City. Then Hoehn mixed the immersive versions, but it wasn’t just a technical upmix. “We spent four weeks mixing the Dolby Atmos version, teasing out sonic story-point details such as the advancing gas pressure, fire and explosions,” Hoehn explains. “We wanted to create a ‘wearable’ experience, where your senses actually become physically involved with the tension and drama of the picture. At times, this movie is very much all over you.”

The setting for Deepwater Horizon is interesting in that the vertical landscape of the 25-story oil rig is more engrossing than the horizontal landscape of the calm sea. This dynamic afforded Hoehn the opportunity to really work with the overhead Atmos environment, making the audience feel as though they’re experiencing the story and not just witnessing it. “The story takes place 40 miles out at sea on a floating oil drilling platform. The challenge was to make this remote setting experiential for the audience,” Hoehn explains. “For visual artists, the frame is the boundary. For us, working in Atmos, the format extends the boundaries into the auditorium. We wanted the audience to feel as if they too were trapped with our characters aboard the Deepwater Horizon. The movement of sound into the theater adds to the sense of disorientation and confusion that they’re viewing on screen, making the story more immediate and disturbing.”

In their artistic approach to the Atmos mix, Stateman and sound effects designers Harry Cohen and Sylvain Lasseur created an additional sound design layer — specific Atmos objects that help to reinforce the visuals by adding depth and weight via sound. For example, during a sequence after a big explosion and blow out, Mike Williams (Mark Wahlberg) wakes up with a pile of rubble and a broken door on top of him. Twisted metal, confusing announcements and alarms were designed from scratch to become objects that added detail to the space above the audience. “I think it’s one of the most effective Atmos moments in the film. You are waking up with Williams in the aftermath of this intense, destructive sequence. The entire rig is overwhelmed by off-stage explosions, twisting metal, emergency announcements and hissing steam. Things are falling apart above you and around you,” details Hoehn.

Hoehn shares another example: during a scene on the drill deck they created sound design objects to describe the height and scale of the 25-story oil derrick. “We put those sounds into the environment by adding delays and echoes that make it feel like those sounds are pinging around high above you. We wanted the audience to sense the vertical layers of the Deepwater Horizon oil rig,” says Hoehn, who created the delays and echoes using a multichannel delay plug-in called Slapper by The Cargo Cult. “I had separate mix control over the objects and the acoustic echoes applied. I could put the discrete echoes in distinct places in the Atmos environment. It was an agitative design element. It was designed to make the audience feel oriented and at the same time disoriented.”
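
Stripped to its bones, a multichannel delay of that kind is just a set of taps, each with its own time, gain and output channel. The sketch below is a bare-bones stand-in, not the Slapper plug-in, and the file name, tap times and four-channel layout are assumptions.

# A bare-bones stand-in for a multichannel "echo" delay (not the Slapper
# plug-in): each tap is written to a different output channel so the repeats
# can sit at distinct positions in the room. Assumes a 16-bit mono WAV
# "derrick_clank.wav"; the channel ordering is purely illustrative.
import numpy as np
from scipy.io import wavfile

sr, clank = wavfile.read("derrick_clank.wav")
clank = clank.astype(np.float64) / 32768.0

# (delay in seconds, gain, output channel) -- 0=front, 1=side, 2=rear, 3=overhead
taps = [(0.00, 1.00, 0), (0.18, 0.50, 1), (0.37, 0.35, 2), (0.61, 0.25, 3)]

n_out = len(clank) + int(max(t for t, _, _ in taps) * sr)
out = np.zeros((n_out, 4))
for t, g, ch in taps:
    start = int(t * sr)
    out[start:start + len(clank), ch] += g * clank

out /= max(1e-9, np.max(np.abs(out)))
wavfile.write("derrick_echoes_quad.wav", sr, (out * 32767).astype(np.int16))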

The additional sounds they created were not an attempt to reimagine the soundtrack, but rather a means of enhancing what was there. “We were deliberate about what we added,” Hoehn explains. “As a team we strived to maximize the advantages of an Atmos theater, which allows us to keep a film mentally, physically and sonically intense. That was the filmmaker’s primary goal.”

The landscape in Deepwater Horizon doesn’t just tower over the audience; it extends under them as well. The underwater scenes were an opportunity to feature the music since these “sequences don’t contain metal banging and explosions. These moments allow the music to give an emotional release,” says Hoehn.

Hoehn explains that the way music exists in Atmos is sort of like a big womb of sound; it surrounds the audience. The underwater visuals depict the catastrophic failure of the blowout preventer — a valve that can close off the well and prevent an uncontrolled flow of oil, and the music punctuates this emotional and pivotal point in the film. It gives a sense of calm that contrasts what’s happening on screen. Sonically, it’s also a contrast to the stressful soundscape happening on-board the rig. Hoehn says, “It’s good for such an intense film and story to have moments where you can find comfort, and I think that is where the music provides such emotional depth. It provides that element of comfort between the moments where your senses are being flooded. We played with dynamic range, going to silence and using the quiet to heighten the anticipation of a big release.”

Hoehn mixed the Atmos version in Twenty Four Seven Sound’s Dolby Atmos lab, which uses an Avid S6 console running Pro Tools 12 and features Meyer Acheron mains and 26 JBL AC28 monitors for the surrounds and overheads. It is an environment designed to provide sonic precision so that when the mixer turns a knob or pushes a fader, the change can instantly be heard. “You can feel your cause-and-effect happen immediately. Sometimes when you’re in a bigger room, you are battling the acoustics of the space. It’s helpful to work under a magnifying glass, particularly on a soundtrack that is as detailed as Deepwater Horizon’s,” says Hoehn.

Hoehn spent a month on the Atmos mix, which served as the basis for the other immersive formats, such as the IMAX 5 and IMAX 12 mixes. “The IMAX versions maintain the integrity of our Atmos design,” says Hoehn, “A lot of care had to be taken in each of the immersive versions to make sure the sound worked in service of the storytelling process.”

Bring On VR
In addition to the theatrical release, Hoehn discussed the prospect of a Deepwater Horizon VR experience. “Working with our friends at Dolby, we’re looking at virtual reality and experimenting with sequences from Deepwater Horizon. We are working to convert the Atmos mix to a headset, virtual sound environment,” says Hoehn. He explains that binaural sound or surround sound in headphones presents its own design challenges; it’s not just a direct lift of the 7.1 or Atmos mix.

“Atmos mixing for a theatrical sound pressure environment is different than the sound pressure environment in headphones,” explains Hoehn. “It’s a different sound pressure that you have to design for, and the movement of sounds needs to be that much more precise. Your brain needs to track movement and so maybe you have less objects moving around. Or, you have one sound object hand off to another object and it’s more of a parade of sound. When you’re in a theater, you can have audio coming from different locations and your brain can track it a lot easier because of the fixed acoustical environment of a movie theater. So that’s a really interesting challenge that we are excited to sink our teeth into.”
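
For a sense of what “designing for headphones” involves at the most basic level, the sketch below places a single mono object to one side using only an interaural time and level difference. Real binaural and Atmos headphone renderers do far more (HRTF filtering, room modeling); the file name and numbers here are assumptions.

# The crudest possible headphone-placement sketch (not how a real binaural or
# Atmos renderer works): delay and attenuate one ear to suggest a source to
# the listener's right. Assumes a 16-bit mono WAV named "object.wav".
import numpy as np
from scipy.io import wavfile

sr, obj = wavfile.read("object.wav")
obj = obj.astype(np.float64) / 32768.0

azimuth = 60.0                                         # degrees to the right
itd = 0.0007 * np.sin(np.radians(azimuth))             # up to ~0.7 ms time difference
ild = 10 ** (-6.0 * abs(np.sin(np.radians(azimuth))) / 20)  # far ear up to ~6 dB down

shift = int(abs(itd) * sr)
left  = np.pad(obj, (shift, 0)) * ild                  # far ear: later and quieter
right = np.pad(obj, (0, shift))                        # near ear: untouched

stereo = np.stack([left, right], axis=1)
stereo /= max(1e-9, np.max(np.abs(stereo)))
wavfile.write("object_headphone_pan.wav", sr, (stereo * 32767).astype(np.int16))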

Jennifer Walden is a New Jersey-based audio engineer and writer.

Call of the Wild — Tarzan’s iconic yell

By Jennifer Walden

For many sound enthusiasts, Tarzan’s iconic yell is the true legend of that story. Was it actually actor Johnny Weissmuller performing the yell? Or was it a product of post sound magic involving an opera singer, a dog, a violin and a hyena played backwards as MGM Studios claims? Whatever the origin, it doesn’t impact how recognizable that yell is, and this fact wasn’t lost on the filmmakers behind the new Warner Bros. movie The Legend of Tarzan.

The updated version is not a far cry from the original, but it is more guttural and throaty, and less like a yodel. It has an unmistakable animalistic quality. While we may never know the true story behind the original Tarzan yell, postPerspective went behind the scenes to learn how the new one was created.

Supervising sound editor/sound designer Glenn Freemantle and sound designer/re-recording mixer Niv Adiri at Sound24, a multi-award winning audio post company located on the lot of Pinewood Film Studios in Buckinghamshire, UK, reveal that they went through numerous iterations of the new Tarzan yell. “We had quite a few tries on that but in the end it’s quite a simple sound. It’s actor Alexander Skarsgård’s voice and there are some human and animal elements, like gorillas, all blended together in it,” explains Freemantle.

Since the new yell always plays in the distance, it needed to feel powerful and raw, as though Tarzan is waking up the jungle. To emphasize this, Freemantle says, “We have animal sounds rushing around the jungle after the Tarzan yell, as if he is taking control of it.”

The jungle itself is a marvel of sight and sound. Freemantle notes that everything in the film, apart from the actors on screen, was generated afterward — the Congo, the animals, even the villages and people, a harbor with ships and an action sequence involving a train. Everything.

The film was shot on a back lot of Warner Bros. Studios in Leavesden, UK, so making the CGI-created Congo feel like the real deal was essential. They wanted the Congo to feel alive, and have the sound change as the characters moved through the space. Another challenge was grounding all the CG animals — the apes, wildebeests, ostriches, elephants, lions, tigers, and other animals — in that world.

When Sound24 first started on the film, a year and a half before its theatrical release, Freemantle says there was very little to work with visually. “Basically it was right from the nuts and bolts up. There was nothing there, nothing to see in the beginning apart from still pictures and previz. Then all the apes, animals and jungles were put in and gradually the visuals were built up. We were building temp mixes for the editors to use in their cut, so it was like a progression of sound over time,” he says.

Sound24’s sound design got increasingly detailed as the visuals presented more details. They went from building ambient background for different parts of Africa — from the deep jungle to the open plains — at different times of the day and night to covering footsteps for the CG gorillas. The sound design team included Ben Barker, Tom Sayers, and Eilam Hoffman, with sound effects editing by Dan Freemantle and Robert Malone. Editing dialogue and ADR was Gillian Dodders. Foley was recorded at Shepperton Studios by Foley mixer Glen Gathard.

Capturing Sounds
Since capturing their own field recordings in the Congo would have proved too challenging, Sound24 opted to source sound recordings authentic to that area. They also researched and collected the best animal sounds they could find, which were particularly useful for the gorilla design.

The sound design team gave the gorillas a range of reactions, from massive roars and growls to smaller grunts and snorts. They cut and layered different animal sounds, including processed human vocalizations, to create that wide range of gorilla voices.

There were three main gorillas, and each sounds a bit different, but the most domineering of all was Akut. During a fight between Akut and Tarzan, Adiri notes that in the mix they wanted to communicate Akut’s presence and power through sound. “We tried to create dynamics within Akut’s voice so that you feel that he is putting a lot of effort into the fight. You see him breathing hard and moving, so his voice had to have his movement in it. We had to make it dynamic and make sure that there was space for the hits, and the falls, and whatever is happening visually. We had to make sure that all of the sounds are really tied to the animal and you feel that he’s not some super ape, but he’s real,” Adiri says. They also designed sounds for the gang of gorillas that came to egg Akut on during the fight.

The Mix
All the effects, Foley and backgrounds were edited and premixed in Avid Pro Tools 11. Since Sound24 had been working on The Legend of Tarzan for over a year, keeping everything in the box allowed them to update their session over time and still have access to previous elements and temp mixes. “The mix was evolving throughout the sound editorial process. Once we had that first temp mix, we just kept working with that, remixing sounds and reworking scenes, but it was all done in the box up until the final mix. We never started the mix from scratch on the dub stage,” says Adiri.

For the final Dolby Atmos mix at Warner Bros. De Lane Lea Studios in London, Adiri and Freemantle brought their Avid S6 console into the studio. “That surface was brilliant for us,” says Adiri, who mixed the effects/Foley/backgrounds. He shared the board with re-recording mixer Ian Tapp, who handled dialogue and music.

Adiri feels the Atmos surround field worked best for quiet moments, like during a wide aerial shot of the jungle where the camera moves down through the canopy to the jungle floor. There he was able to move through layers of sounds, from the top speakers down, and have the ambience change as the camera’s position changed. Throughout the jungle scenes, he used the Atmos surrounds to place birds and distant animal cries, slowly panning them around the theater to make the audience feel as though they are surrounded by a living jungle.

He also likes to use the overhead speakers for rain ambience. “It’s nice to use them in quieter scenes when you can really feel the space, moving sounds around in a more subliminal way, rather than using them to be in-your-face. Rain is always good because it’s a bright sound. You know that it is coming from above you. It’s good for that very directional sort of sound.”
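
For readers curious what that overhead placement looks like in practice, here is a minimal, purely illustrative Python sketch (numpy only, with white noise standing in for a real rain recording) of crossfading an ambience between a floor-level bed and the overhead speakers using a constant-power pan law. It is a conceptual sketch of the idea, not Sound24’s actual Atmos workflow.

    import numpy as np

    # Illustrative only: drift a "rain" ambience from the surround bed
    # toward the overhead speakers with a constant-power law, so the
    # overall energy stays steady as the sound moves upward.
    sr = 48000
    t = np.linspace(0.0, 10.0, 10 * sr, endpoint=False)
    rain = np.random.randn(t.size) * 0.1        # stand-in for a rain recording

    height = np.linspace(0.0, 1.0, t.size)      # 0 = all bed, 1 = all overhead
    bed_gain = np.cos(height * np.pi / 2)       # constant-power crossfade
    top_gain = np.sin(height * np.pi / 2)

    bed_feed = rain * bed_gain                  # routed to the surround bed
    top_feed = rain * top_gain                  # routed to the overhead speakers

Because bed_gain**2 + top_gain**2 always equals 1, the rain seems to drift upward without getting louder or softer — the kind of subliminal movement Adiri describes.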

Ambience wasn’t the only sound that Adiri worked with in Atmos. He also used it to pan the sounds of monkeys swinging through the trees and soaring overhead, and for Tarzan’s swinging. “We used it for these dynamic moments in the storytelling rather than filling up those speakers all the time. For the moments when we do use the Atmos field, it’s striking and that becomes a moment to remember, rather than just sound all the time,” concludes Freemantle.

Jennifer Walden is a New Jersey-based writer and audio engineer. 

Larson Studios pulls off an audio post slam dunk for FX’s ‘Baskets’

By Jennifer Walden

Turnarounds for TV series are notoriously fast, but imagine a three-day sound post schedule for a single-camera, half-hour episodic series. Does your head hurt yet? Thankfully, Larson Studios in Los Angeles has its workflow on FX’s Baskets down to a science. In the show, Zach Galifianakis stars as Chip Baskets, who works as a California rodeo clown after failing out of a prestigious French clown school.

So how do you crunch a week and a half’s worth of work into three days without sacrificing quality or creativity? Larson’s VP, Rich Ellis, admits they had to create a very aggressive workflow, which was made easier thanks to their experience working with Baskets post supervisor Kaitlin Menear on a few other shows.

Ellis says having a supervising sound editor — Cary Stacy — was key in setting up the workflow. “There are others competing for space in this market of single-camera half-hours, and they treat post sound differently — they don’t necessarily bring a sound supervisor to it. The mixer might be cutting and mixing and wrangling all of the other elements, but we felt that it was important to continue to maintain that traditional sound supervisor role because it actually helps the process to be more efficient when it comes to the stage.”

John Chamberlin and Cary Stacy

This allows re-recording mixer John Chamberlin to stay focused on the mix while sound supervisor Stacy handles any requests that pop up on stage, such as alternate lines or options for door creaks. “I think director Jonathan Krisel gave Cary at least seven honorary Emmy awards for door creaks over the course of our mix time,” jokes Menear. “Cary can pull up a sound effect so quickly, and it is always exactly perfect.”

Every second counts when there are only seven hours to mix an episode from top to bottom before post producer Menear, director Krisel and the episode’s picture editor join the stage for the two-hour final fixes and mix session. Having complete confidence in Stacy’s alternate selections, Chamberlin says he puts them into the session, grabs the fader and just lets it roll. “I know that Cary is going to nail it and I go with it.”

Even before the episode gets to the stage, Chamberlin knows that Stacy won’t overload the session with unnecessary elements, which would eat up time. Even so, Chamberlin says the mix is challenging in that it’s a lot for one person to do. “Although there is care taken to not overload what is put on my plate when I sit down to mix, there are still 8 to 10 tracks of Foley, 24 or more tracks of backgrounds and, depending on the show, the mono and stereo sound effects can be 20 tracks. Dialogue is around 10 and music can be another 10 or 12, plus futz stuff, so it’s a lot. You have to have a workflow that’s efficient and you have to feel confident about what you’re doing. It’s about making decisions quickly.”

Chamberlin mixed Baskets in 5.1 — using a Pro Tools 11 system with an Avid ICON D-Command — on Stage 4 at Larson Studios, where he’s mixed many other shows, such as Portlandia, Documentary Now, Man Seeking Woman, Dice, the upcoming Netflix series Easy, Comedy Bang Bang, Meltdown With Jonah and Kumail and Kroll Show. “I’m so used to how Stage 4 sounds that I know when the mix is in a good place.”

Another factor in the three-day turnaround is the choice to forgo loop group and to record ADR only when it’s absolutely necessary. The post sound team relied on location sound mixer Russell White to capture all the lines as clearly as possible on set, which was a bit of a challenge with the non-principal characters.

Tricky On-Set Audio
According to Menear, director Krisel loves to cast non-actors in the majority of the parts. “In Baskets, outside of our three main roles, the other people are kind of random folk that Jonathan has collected throughout his different directing experiences,” she says. While that adds a nice flavor creatively, the inexperienced cast members tend to step on each other’s lines, or not project properly — problems you typically won’t have with experienced actors.

For example, Louie Anderson plays Chip’s mom Christine. “Louie has an amazing voice and it’s really full and resonant,” explains Chamberlin. “There was never a problem with Louie or the pro actors on the show. The principals were very well represented sonically, but the show has a lot of local extras, and that poses a challenge in the recording of them. Whether they were not talking loud enough or there was too much talking.”

A good example is the Easter brunch scene in Episode 104. Chip, his mother and grandmother encounter Martha (Chip’s insurance agent/pseudo-friend played by Martha Kelly) and her parents having brunch in the casino. They decide to join their tables together. “There were so many characters talking at the same time, and a lot of the side characters were just having their own conversations while we were trying to pay attention to the main characters,” says Stacy. “I had to duck those side conversations as much as possible when necessary. There was a lot of that finagling going on.”
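
“Ducking” here simply means pulling one element down whenever a more important one is active. As a purely conceptual sketch — not the chain actually used at Larson Studios — a sidechain-style ducker could be roughed out in Python like this (the function name, arrays and parameter values are all hypothetical):

    import numpy as np
    from scipy.signal import lfilter

    def duck(side_dialogue, main_dialogue, sr=48000, depth_db=-9.0, release_s=0.25):
        """Attenuate a side-conversation track whenever the principal
        dialogue is active. Illustrative sketch only."""
        # One-pole envelope follower on the principal dialogue.
        alpha = np.exp(-1.0 / (release_s * sr))
        env = lfilter([1 - alpha], [1, -alpha], np.abs(main_dialogue))
        env /= env.max() + 1e-12                 # normalize to 0..1
        # Slide gain from unity toward the ducking depth as the envelope rises.
        gain = 10 ** ((env * depth_db) / 20.0)
        return side_dialogue * gain

On the stage this happens with faders and automation rather than code, but the principle is the same: the louder the principal dialogue, the further the side conversations are pulled down.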

Stacy used iZotope RX 5 features like Decrackle and Denoise to clean up the tracks, as well as the Spectral Repair feature for fixing small noises.
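
Broadband denoisers of this kind generally work by estimating a noise floor and attenuating spectral content that sits near it. The sketch below is a toy spectral gate in Python (scipy) that assumes the first half second of the clip is quiet; it illustrates the general idea, not iZotope’s actual algorithm.

    import numpy as np
    from scipy.signal import stft, istft

    def spectral_gate(audio, sr=48000, noise_seconds=0.5, reduction_db=-18.0):
        """Toy spectral-gating denoiser: learn a per-bin noise floor from a
        quiet stretch at the head of the clip, then duck bins near that floor."""
        f, frames, spec = stft(audio, fs=sr, nperseg=1024)
        mag = np.abs(spec)
        hop = 512                                  # default overlap is nperseg // 2
        noise_frames = max(int(noise_seconds * sr / hop), 1)
        floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)
        # Attenuate time-frequency bins that sit within 2x the noise floor.
        gain = np.where(mag < 2.0 * floor, 10 ** (reduction_db / 20.0), 1.0)
        _, cleaned = istft(spec * gain, fs=sr, nperseg=1024)
        return cleaned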

Multiple Locations
Another challenge for sound mixer White was covering numerous locations quickly for any given episode. That Easter brunch episode alone had at least eight different locations, including the casino floor, the casino’s buffet, inside and outside of a church, inside the car, and inside and outside of Christine’s house. “Russell mentioned how he used two rigs for recording because he would always have to just get up and go. He would have someone else collect all of the gear from one location while he went off to a new location,” explains Chamberlin. “They didn’t skimp on locations. When they wanted to go to a place they would go. They went to Paris. They went to a rodeo. So that has challenges for the whole team — you have to get out there and record it and capture it. Russell did a pretty fantastic job considering where he was pushed and pulled at any moment of the day or night.”

Sound Effects
White’s tracks also provided a wealth of production effects, which were a main staple of the sound design. The whole basis for the show, for picture and sound, was to have really funny, slapstick things happen, but have them play really straight. “We were cutting the show to feel as real and as normal as possible, regardless of what was actually happening,” says Menear. “Like when Chip was walking across a room full of clown toys and there were all of these strange noises, or he was falling down, or doing amazing gags. We played it as if that could happen in the real world.”

Stacy worked with sound effects editor TC Spriggs to cut in effects that supported the production effects, never sounding too slapstick or over the top, even if the action was. “There is an episode where Chip knocks over a table full of champagne glasses and trips and falls. He gets back up only to start dancing, breaking even more glasses,” describes Chamberlin.

That scene was a combination of effects and Foley provided by Larson’s Foley team of Adam De Coster (artist) and Tom Kilzer (recordist). “Foley sync had to be perfect or it fell apart. Foley and production effects had to be joined seamlessly,” notes Chamberlin. “The Foley is impeccably performed and is really used to bring the show to life.”

Spriggs also designed the numerous backgrounds. Whether it was the streets of Paris, the rodeo arena or the doldrums of Bakersfield, all the locations needed to sound realistic and simple yet distinct. On the mix side, Chamberlin used processing on the dialogue to help sell the different environments: basic interiors and exteriors, the rodeo arena and backstage dressing room, Paris nightclubs, Bakersfield dive bars, an outdoor rave concert, a volleyball tournament, hospital rooms, dream-like sequences and a flashback.

“I spent more time on the dialogue than any other element. Each place had to have its own appropriate sounding environments, typically built with reverbs and delays. This was no simple show,” says Chamberlin. For reverbs, Chamberlin used Avid’s ReVibe and Reverb One, and for futzing, he likes McDSP’s FutzBox and Audio Ease’s Speakerphone plug-ins.
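
Futzing, in this context, means degrading dialogue so it reads as a phone, radio or small PA speaker. As a bare-bones stand-in for what dedicated plug-ins like FutzBox and Speakerphone do far more convincingly, a sketch might band-limit the signal to a telephone-style passband and add mild saturation (the function name and parameter values below are hypothetical):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def phone_futz(dialogue, sr=48000, low_hz=300.0, high_hz=3400.0, drive=4.0):
        """Rough 'futz': narrow the dialogue to a phone-style band and add
        gentle saturation so it reads as a small, lo-fi speaker. Sketch only."""
        sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
        narrow = sosfilt(sos, dialogue)
        return np.tanh(narrow * drive) / np.tanh(drive)   # normalized soft clip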

One of Chamberlin’s favorite scenes to mix was Chip’s performance at the rodeo, where he does his last act as his French clown alter ego Renoir. Chip walks into the announcer booth with a gramophone and asks for a special song to be played. Chamberlin processed the music to account for the variable pitch of the gramophone, and also processed the track to sound like it was coming over the PA system. In the center of the ring you can hear the crowds and the announcer, and off-screen a bull snorts and grinds its hooves into the dirt before rushing at Chip.
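
That variable-pitch quality, often called wow, can be approximated by reading a track back at a playback speed that drifts slowly around normal. Below is a small illustrative Python sketch with made-up parameters; it is a conceptual stand-in, not the processing chain Chamberlin actually used on the gramophone cue.

    import numpy as np

    def add_wow(music, sr=48000, wow_hz=0.7, depth=0.01):
        """Gramophone-style pitch wobble: vary the playback rate around 1.0
        and resample by interpolation. Illustrative sketch only."""
        n = music.size
        t = np.arange(n) / sr
        rate = 1.0 + depth * np.sin(2 * np.pi * wow_hz * t)   # slow speed drift
        read_pos = np.clip(np.cumsum(rate), 0, n - 1)         # warped read index
        return np.interp(read_pos, np.arange(n), music)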

Another great sequence happens in the Easter brunch episode, where we see Chip walking around the casino listening to a “Learn French” lesson through earbuds while smoking a broken cigarette and dreaming of being Renoir the clown on the streets of Paris. This scene summarizes Chip’s sad-clown situation in life. It’s thoughtful, charming and lonely.

“We experimented with elaborate sound design for the voice of the narrator; however, we landed on keeping things relatively simple with just an iPhone futz,” says Stacy. “I feel this worked out for the best, as nothing in this show was overdone. We brought in some very light backgrounds for Paris and tried to keep the transitions as smooth as possible. We actually had a very large build for the casino effects, but played them very subtly.”

Adds Chamberlin, “We really wanted to enhance the inner workings of Chip and to focus in on him there. It takes a while in the show to get to the point where you understand Chip, but I think that is great. A lot of that has to do with the great writing and acting, but our support on the sound side, in particular on that Easter episode, was not to reinvent the wheel. Picture editors Micah Gardner and Michael Giambra often developed ideas for sound, and those had a great influence on the final track. We took what they did in picture editorial and just made it more polished.”

The post sound process on Baskets may be down and dirty, but the final product is amazing, says Menear. “I think our Larson Studios team on the show is awesome!”