Category Archives: Audio Mixing

Mixing the sounds of history for Marshall

By Jennifer Walden

Director Reginald Hudlin’s courtroom drama Marshall tells the story of Thurgood Marshall (Chadwick Boseman) during his early career as a lawyer. The film centers on a case Marshall took in Connecticut in the early 1940s. He defended a black chauffeur named Joseph Spell (Sterling K. Brown) who was charged with attempted murder and sexual assault of his rich, white employer Eleanor Strubing (Kate Hudson).

At that time, racial discrimination and segregation were widespread even in the North, and Marshall helped to shed light on racial inequality by taking on Spell’s case and making sure he got a fair trial. It’s a landmark court case that is not only of huge historical consequence but is still relevant today.

Mixers Anna Behlmer and Craig Mann

“Marshall is so significant right now with what’s happening in the world,” says Oscar-nominated re-recording mixer Anna Behlmer, who handled the effects on the film. “It’s not often that you get to work on a biographical film of someone who lived and breathed and did amazing things as far as freedom for minorities. Marshall began the NAACP [Legal Defense Fund] and argued Brown v. Board of Education to stop the segregation of the schools. So, in that respect, I felt the weight and the significance of this film.”

Oscar-winning supervising sound editor/re-recording mixer Craig Mann handled the dialogue and music. Behlmer and Mann mixed Marshall in 5.1 surround on a Euphonix System 5 console on Stage 2 at Technicolor at Paramount in Hollywood.

In the film, crowds gather on the steps outside the courthouse — a mixture of supporters and opponents shouting their opinions on the case. When dealing with shouting crowds in a film, Mann likes to record the loop group for those scenes outside. “We recorded in Technicolor’s backlot, which gives a nice slap off all the buildings,” says Mann, who miked the group from two different perspectives to capture the feeling that they’re actually outside. For the close-mic rig, Mann used an L-C-R setup with two Schoeps CMC641s for left and right and a CMIT 5U for center, feeding into a Tascam HS-P82 8-channel recorder.

“We used the CMIT 5U mic because that was the production boom mic and we knew we’d be intermingling our recordings with the production sound, because they recorded some sound on the courthouse stairs,” says Mann. “We matched that up so that it would anchor everything in the center.”

For the distant rig, Mann went with a Sanken CSS-5 set to record in stereo, feeding a Sound Devices 722. Since they were running two setups simultaneously, Mann says they beeped everyone with a bullhorn to get slate sync for the two rigs. Then to match the timing of the chanting with production sound, they had a playback rig with eight headphone feeds out to chosen leaders from the 20-person loop group. “The people wearing headphones could sync up to the production chanting and those without headphones followed along with the people who had them on.”

Inside the courtroom, the atmosphere is quiet and tense. Mann recorded the loop group (inside the studio this time) reacting as non-verbally as possible. “We wanted to use the people in the gallery as a tool for tension. We do all of that without being too heavy handed, or too hammy,” he says.

Sound Effects
On the effects side, the Foley — provided by Foley artist John Sievert and his team at JRS Productions in Toronto — was a key element in the courtroom scenes. Each chair creak and paper shuffle plays to help emphasize the drama. Behlmer references a quiet scene in which Thurgood is arguing with Sam Friedman (Josh Gad), his co-counsel on the case. “They weren’t arguing with their voices. Instead, they were shuffling papers and shoving things back and forth. The defendant even asks if everything is OK with them. Those sounds helped to convey what was going on without them speaking,” she says.

You can hear the chair creak as Judge Foster (James Cromwell) leans forward and raises an eyebrow and hear people in the gallery shifting in their seats as they listen to difficult testimony or shocking revelations. “Something as simple as people shifting on the bench to underscore how uncomfortable the moment was, those sounds go a long way when you do a film like this,” says Behlmer.

During the testimony, there are flashback sequences that illustrate each person’s perception of what happened during the events in question. The flashback effect is partially created through the picture (the flashbacks are colored differently) and partially through sound. Mann notes that early on, they made the decision to omit most of the sounds during the flashbacks so that the testimony wouldn’t be overshadowed.

“The spoken word was so important,” adds Behlmer. “It was all about clarity, and it was about silence and tension. There were revelations in the courtroom that made people gasp and then there were uncomfortable pauses. There was a delicacy with which this mix had to be done, especially with regards to Foley. When a film is really quiet and delicate and tense, then every little nuance is important.”

Away from the courthouse, the film has a bit of fun. There’s a jazz club scene in which Thurgood and his friends cut loose for the evening. A band and a singer perform on stage to a packed club. The crowd is lively. Men and women are talking and laughing and there’s the sound of glasses clinking. Behlmer mixed the crowds by following the camera movement to reinforce what’s on-screen.

Music
On the music side, Mann’s challenge was to get the brass — the trumpet and trombone — to sit in a space that didn’t interfere too much with the dialogue. On the other hand, Mann still wanted the music to feel exciting. “We had to get the track all jazz-clubbed up. It was about finding a reverb that was believable for the space. It was about putting the vocals and brass upfront and having the drums and bass be accompaniment.”

Having the stems helped Mann to not only mix the music against the dialogue but to also fit the music to the image on-screen. During the performance, the camera is close-up and sweeping along the band. Mann used the music stems to pan the instruments to match the scene. The shot cuts away from the performance to Thurgood and his friends at a table in the back of the club. Using the stems, Mann could duck out of the singer’s vocals and other louder elements to make way for the dialogue. “The music was very dynamic. We had to be careful that it didn’t interfere too much with the dialogue, but at the same time we wanted it to play.”

On the score, Mann used Exponential Audio’s R4 reverb to set the music back into the mix. “I set it back a bit farther than I normally would have just to give it some space, so that I didn’t have to turn it down for dialogue clarity. It got it to shine but it was a little distant compared to what it was intended to be.”

Behlmer and Mann feel the mix was pretty straightforward. Their biggest obstacle was the schedule. The film had to be mixed in just ten days. “I didn’t even have pre-dubs. It was just hang and go. I was hearing everything for the first time when I sat down to mix it — final mix it,” explains Behlmer.

With Mann working the music and dialogue faders, co-supervising sound editor Bruce Tanis was supplying Behlmer with elements she needed during the final mix. “I would say Bruce was my most valuable asset. He’s the MVP of Marshall for the effects side of the board,” she says.

On the dialogue side, Mann says his gear MVP was iZotope RX 6. With so many quiet moments, the dialogue was exposed. It played prominently, without music or busy backgrounds to help hide any flaws. And the director wanted to preserve the on-camera performances so ADR was not an option.

“We tried to use alts to work our way out of a few problems, and we were successful. But there were a few shots in the courtroom that began as tight shots on boom and then cut wide, so the boom had to pull back and we had to jump onto the lavs there,” concludes Mann. “Having iZotope to help tie those together, so that the cut was imperceptible, was key.”


Jennifer Walden is a NJ-based audio engineer and writer. Follow her on Twitter @audiojeney.

Blade Runner 2049’s dynamic and emotional mix

By Jennifer Walden

“This film has more dynamic range than any movie we’ve ever mixed,” explains re-recording mixer Doug Hemphill of the Blade Runner 2049 soundtrack. He and re-recording mixer Ron Bartlett, from Formosa Group, worked with director Denis Villeneuve to make sure the audio matched the visual look of the film. From the pounding sound waves of Hans Zimmer and Benjamin Wallfisch’s score to the overwhelming wash of Los Angeles’s street-level soundscape, there’s massive energy in the film’s sonic peaks.

L-R: Ron Bartlett, Denis Villeneuve, Joe Walker, Ben Wallfisch and Doug Hemphill. Credit: Clint Bennett

The first time K (Ryan Gosling) arrives in Los Angeles in the film, the audience is blasted with a Vangelis-esque score that is reminiscent of the original Blade Runner, and that was ultimately the goal there — to envelop the audience in the Blade Runner experience. “That was our benchmark for the biggest, most enveloping sound sequence — without being harsh or loud. We wanted the audience to soak it in. It was about filling out the score, using all the elements in Hans Zimmer’s and Ben Wallfisch’s arsenal there,” says Bartlett, who handled the dialogue and music in the mix.

He and Villeneuve went through a wealth of musical elements — all of which were separated so Villeneuve could pick the ones he liked. His preference gravitated toward the analog synth sounds, like the Yamaha CS-80, which composer Vangelis used in his 1982 Blade Runner composition. “We featured those synth sounds throughout the movie,” says Bartlett. “I played with the spatial aspects, spreading certain elements into the room to envelop you in the score. It was very immersive that way.”

Bartlett notes that initially there were sounds from the original Blade Runner in their mix, like huge drum hits from the original score that were converted into 7.1 versions by supervising sound editor Mark Mangini at Formosa Group. Bartlett used those drum hits as punctuation throughout the film, for scene changes and transitions. “Those hits were everywhere. Actually, they’re the first sound in the movie. Then you can hear those big drum hits in the Vegas walk. That Vegas walk had another score with it, but we kept stripping it away until we were down to just those drum hits. It’s so dramatic.”

But halfway into the final mix for Blade Runner 2049, Mangini phoned Bartlett to tell him that the legal department said they couldn’t use any of those sounds from the original film. They’d need to replace them immediately. “Since I’m a percussionist, Mark asked if I could remake the drum hits. I stayed up until 3am and redid them all in my studio in 7.1, and then brought them in and replaced them throughout the movie. Mark had to make all these new spinner sounds and replace those in the film. That was an interesting moment,” reveals Bartlett.

Sounds of the City
Los Angeles 2049 is a multi-tiered city. Each level offers a different sonic experience. The zen-like prayer that’s broadcast at the top level gradually transforms into a cacophony the closer one gets to street level. Advertisements, announcements, vehicles, music from storefronts and vending machine sounds mix with multi-language crowds — there’s Russian, Vietnamese, Korean, Japanese, and the list goes on. The city is bursting with sound, and Hemphill enhanced that experience with Cargo Cult’s Spanner, using it on the crowd effects in the scene where K sits outside Bibi’s Bar to spread the crowds around the theater and “give the audience a sense of this crush of humanity,” he says.

The city experience could easily be chaotic, but Hemphill and Bartlett made careful choices on the stage to “rack the focus” — determining for the audience what they should be listening to. “We needed to create the sense that you’re in this overpopulated city environment, but it still had to make sense. The flow of the sound is like ‘musique concrète.’ The sounds have a rhythm and movement that’s musical. It’s not random. There’s a flow,” explains Hemphill, who has an Oscar for his work on The Last of the Mohicans.

Bartlett adds that their goal was to keep a sense of clarity as the camera traveled through the street scene. If there was a big, holographic ad in the forefront, they’d focus on that, and as the scene panned away another sound would drive the mix. “We had to delete some of the elements and then move sounds around. It was a difficult scene and we took a long time on it but we’re happy with the clarity.”

On the quiet end of the spectrum, the film’s soundtrack shines. Spaces are defined with textural ambiences and handcrafted reverbs. Bartlett worked with a new reverb called DSpatial created by Rafael Duyos. “Mark Mangini and I helped to develop DSpatial. It’s a very unique reverb,” says Bartlett.

According to the website, DSpatial Reverb is a space modeler and renderer that offers 48 decorrelated outputs. It doesn’t use recorded impulse responses; instead it uses modeled IRs. This allows the user to select and tweak a series of parameters, like surface texture and space size, to model the acoustic and physical characteristics of any room. “It’s a decorrelated reverb, meaning you can add as many channels as you like and pan them into every Dolby Atmos speaker that is in the room. That wasn’t the only reverb we used, but it was the main one we used in specific environments in the film,” says Bartlett.
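
For readers unfamiliar with the term, “decorrelated” means each output channel gets a reverb tail with the same tonal character but statistically independent fine structure, so the tails stay diffuse instead of collapsing into a single phantom image when spread around the room. The sketch below shows one common way to derive such a set of tails, using random-phase decorrelation in Python; it is a minimal illustration under stated assumptions, not DSpatial's actual algorithm, and all values are illustrative.

```python
# A minimal sketch (not DSpatial's actual algorithm) of random-phase
# decorrelation: each output keeps the source IR's magnitude spectrum but
# gets an independent random phase, so the tails sound alike yet remain
# uncorrelated when panned to many different speakers.
import numpy as np

def decorrelated_irs(ir: np.ndarray, n_channels: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    mag = np.abs(np.fft.rfft(ir))
    outs = []
    for _ in range(n_channels):
        phase = rng.uniform(-np.pi, np.pi, size=mag.shape)  # new phase per channel
        outs.append(np.fft.irfft(mag * np.exp(1j * phase), n=len(ir)))
    return np.stack(outs)

# Example: 48 decorrelated tails from a one-second synthetic exponential decay
sr = 48000
t = np.arange(sr) / sr
source_ir = np.random.default_rng(1).standard_normal(sr) * np.exp(-6.0 * t)
tails = decorrelated_irs(source_ir, n_channels=48)
print(tails.shape)  # (48, 48000)
```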

In combination with DSpatial, Bartlett used Audio Ease’s Altiverb, FabFilter reverbs and Cargo Cult’s Slapper delay to help create the multifaceted reflections that define the spaces on-screen so well. “We tried to make each space different,” says Bartlett. “We tried to evoke an emotion through the choices of reverbs and delays. It was never just one reverb or delay. I used two or three. It was very interesting creating those textures and creating those rooms.”

For example, Niander Wallace’s (Jared Leto) private office in the Wallace Corporation building is a cold, lonely space. Water surrounds a central platform; reflections play on the imposing stone walls. “The way that Roger Deakins lit it was just stunning,” says Bartlett. “It really evoked a cool emotion. That’s what is so intangible about what we do, creating those emotions out of sound.” In addition to DSpatial, Altiverb and FabFilter reverbs, he used Cargo Cult’s Slapper delay, which “added a soft rolling, slight echo to Jared Leto’s voice that made him feel a little more God-like. It gave his voice a unique presence without being distracting.”

Another stunning example of Bartlett’s reverb work was K’s entrance into Rick Deckard’s (Harrison Ford) casino hideout. The space is dead quiet; then K opens the door and the sound rings out and slowly dissipates. It conveys the feeling that this is a vast, isolated, and empty space. “It was a combination of three reverbs and a delay that made that happen, so the tail had a really nice shine to it,” says Bartlett.

One of the most difficult rooms to find artistically, says Bartlett, was that of the memory maker, Dr. Ana Stelline (Carla Juri). “Everyone had a different idea of what that dome might sound like. We experimented with four or five different approaches to find a good place with that.”

The reverbs that Bartlett creates are never static. They change to fit the camera perspective. Bartlett needed several different reverb and delay processing chains to define how Dr. Stelline’s voice would react in the environment. For example, “There are some long shots, and I had a longer, more distant reverb. I bled her into the ceiling a little bit in certain shots so that in the dome it felt like the sound was bouncing off the ceiling and coming down at you. When she gets really close to the glass, I wanted to get that resonance of her voice bouncing off of the glass. Then when she’s further in the dome, creating that birthday memory, there is a bit broader reverb without that glass reflection in it,” he says.

On K’s side of the glass, the reverb is tighter to match the smaller dimensions and less reflective characteristics of that space. “The key to that scene was to not be distracting while going in and out of the dome, from one side of the glass to the other,” says Bartlett. “I had to treat her voice a little bit so that it felt like she was behind the glass, but if she was way too muffled it would be too distracting from the story. You have to stay with those characters in the story, otherwise you’re doing a disservice by trying to be clever with your mixing.

“The idea is to create an environment so you don’t feel like someone mixed it. You don’t want to smell the mixing,” he continues. “You want to make it feel natural and cool. If we can tell when we’ve made a move, then we’ll go back and smooth that out. We try to make it so you can’t tell someone’s mixing the sound. Instead, you should just feel like you’re there. The last thing you want to do is to make something distracting. You want to stay in the story. We are all about the story.”

Mixing Tools
Bartlett and Hemphill mixed Blade Runner 2049 at Sony Pictures Post in the William Holden Theater using two Avid S6 consoles running Avid Pro Tools 12.8.2, which features complete Dolby Atmos integration. “It’s nice to have Atmos panners on each channel in Pro Tools. You just click on the channel and the panner pops up. You don’t want to go to just one panner with one joystick all the time so it was nice to have it on each channel,” says Bartlett.

Hemphill feels the main benefit of having the latest gear — the S6 consoles and the latest version of Pro Tools — is that it gives them the ability to carry their work forward. “In times past, before we had this equipment and this level of Pro Tools, we would do temp dubs and then we would scrap a lot of that work. Now, we are working with main sessions all the way from the temp mix through to the final. That’s very important to how this soundtrack was created.”

For instance, the dialogue required significant attention due to the use of practical effects on set, like weather machines for rain and snow. All the dialogue work they did during the temp dubs was carried forward into the final mix. “Production sound mixer Mac Ruth did an amazing job while working in those environments,” explains Bartlett. “He gave us enough to work with and we were able to use iZotope RX 6 to take out noise that was distracting. We were careful not to dig into the dialogue too much because when you start pulling out too many frequencies, you ruin the timbre and quality of the dialogue — the humanness.”
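
Bartlett's point about not digging in too far maps onto how most spectral noise reduction works: the processor attenuates frequency bins that look like the measured noise floor, and the harder it attenuates, the more of the voice's own spectrum goes with it. The sketch below is a generic spectral-gate illustration of that trade-off, not iZotope RX's algorithm; the threshold and reduction amounts are arbitrary assumptions.

```python
# A generic spectral-gate sketch (not iZotope RX's algorithm): bins that sit
# near the measured noise profile are turned down by a fixed, modest amount
# rather than zeroed, which is the "don't dig in too much" trade-off.
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, noise_clip, sr=48000, reduction_db=12.0, nfft=2048):
    # Noise profile: average magnitude per frequency bin from a noise-only region
    _, _, noise_spec = stft(noise_clip, fs=sr, nperseg=nfft)
    noise_profile = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

    _, _, spec = stft(audio, fs=sr, nperseg=nfft)
    mag, phase = np.abs(spec), np.angle(spec)

    # Attenuate bins below a threshold tied to the noise profile (values arbitrary)
    gain = np.where(mag < 2.0 * noise_profile, 10 ** (-reduction_db / 20.0), 1.0)
    _, cleaned = istft(mag * gain * np.exp(1j * phase), fs=sr, nperseg=nfft)
    return cleaned

# Synthetic example: a low tone buried in broadband noise
sr = 48000
rng = np.random.default_rng(0)
noise_only = 0.1 * rng.standard_normal(sr)
noisy_voice = 0.5 * np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr) + 0.1 * rng.standard_normal(2 * sr)
cleaned = spectral_gate(noisy_voice, noise_only, sr=sr)
```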

One dialogue-driven scene that underwent a substantial transformation from temp dub to final mix was the underground sequence in which Freysa (Hiam Abbass) makes a revelation about the replicant child. “The actress was talking in this crazy accent and it was noisy and hard to understand what was happening. It’s a very strong expositional moment in the movie. It’s a very pivotal point,” says Bartlett. They looped the actress for that entire scene and worked to get her ADR performance to sound natural in context with the other sounds. “That scene came such a long way, and it really made the movie for me. Sometimes you have to dig a little deeper to tell the story properly but we got it. When K sits down in the chair, you feel the weight. You feel that he’s crushed by that news. You really feel it because the setup was there.”

Blade Runner 2049 is ultimately a story that questions the essence of human existence. While equipment and technique were an important part of the post process, in the end it was all about conveying the emotion of the story through the soundtrack.

“With Denis [Villeneuve], it’s very much feel-based. When you hear a sound, it brings to mind memories immediately. Denis is the type of director that is plugged into the emotionality of sound usage. The idea more than anything else is to tell the story, and the story of this film is what it means to be a human being. That was the fuel that drove me to do the best possible work that I could,” concludes Hemphill.


Jennifer Walden is a NJ-based writer and audio engineer. Follow her on Twitter @audiojeney.


MPSE to present John Paul Fasal with Career Achievement Award

The Motion Picture Sound Editors (MPSE) will present sound designer and sound recordist John Paul Fasal with its 2018 MPSE Career Achievement Award. A 30-year veteran of the sound industry, Fasal has contributed to more than 150 motion pictures and is best known for his work in field recording.

Among his many credits are Top Gun, Master and Commander: The Far Side of the World, Interstellar, The Dark Knight, American Sniper and this year’s Dunkirk. Fasal will receive his award at the MPSE Golden Reel Awards ceremony, February 18, 2018 in Los Angeles.

“John is a master of his craft, an innovator who has pioneered many new recording techniques, and a restless, creative spirit who will stop at nothing to capture the next great sound,” says MPSE president Tom McCarthy.

The MPSE Career Achievement Award recognizes “sound artists who have distinguished themselves by meritorious works as both an individual and fellow contributor to the art of sound for feature film, television and gaming and for setting an example of excellence for others to follow.”

Fasal joins a distinguished list of sound innovators, including 2017 Career Achievement recipient Harry Cohen, Richard King, John Roesch, Skip Lievsay, Randy Thom, Larry Singer, Walter Murch and George Watters II.

“Sound artists typically work behind the scenes, out of the limelight, and so to be recognized in this way by my peers is humbling,” says Fasal. “It is an honor to join the past recipients of this award, many of whom are both colleagues and friends.”

Fasal began his career as a musician and songwriter, but gravitated toward post production sound in the 1980s. Among his first big successes was Top Gun for which he recorded and designed many of the memorable jet aircraft sound effects. He has been a member of the sound teams on several films that have won Academy Awards in sound categories, including Inception, The Dark Knight, Letters From Iwo Jima, Master and Commander: The Far Side of the World, The Hunt for Red October and Pearl Harbor.

Fasal has worked as a sound designer and recordist throughout his career, but in recent years has increasingly focused on field recording. He enjoys especially high regard for his ability to capture the sounds of planes, ships, automobiles and military weaponry. “The equipment has changed dramatically over the course of my career, but the philosophy behind the craft remains the same,” he says. “It still involves the layering of sounds to create a sonic picture and help tell the story.”



Creating sounds for Battle of the Sexes

By Jennifer Walden

Fox Searchlight’s biographical sports drama Battle of the Sexes delves into the personal lives of tennis players Bobby Riggs (Steve Carell) and Billie Jean King (Emma Stone) during the time surrounding their famous televised tennis match in 1973, known as the Battle of the Sexes. Directors Jonathan Dayton and Valerie Faris faithfully recreated the sports event using real-life tennis players Vince Spadea and Kaitlyn Christian as body doubles for Carell and Stone, and they used the original event commentary by announcer Howard Cosell to add an air of authenticity.

Oscar-nominated supervising sound editors Ai-Ling Lee (also sound designer/re-recording mixer) and Mildred Iatrou, from Fox Studios Post Production in LA, began their work during the director’s cut. Lee was on-site at Hula Post providing early sound support to film editor Pamela Martin, feeding her era-appropriate effects, like telephones, cars and cameras, and working on scenes that the directors wanted to tackle right away.

For director Dayton, the first priority scene was Billie Jean’s trip to a hair salon where she meets Marilyn Barnett (Andrea Riseborough). It’s the beginning of a romantic relationship and Dayton wanted to explore the idea of ASMR (autonomous sensory meridian response, mainly an aural experience that causes the skin on the scalp and neck to tingle in a pleasing way) to make the haircut feel close and sensual. Lee explains that ASMR videos are popular on YouTube, and topping the list of experience triggers are hair dryers blowing, cutting hair and running fingers through hair. After studying numerous examples, Lee discovered “the main trick to ASMR is to have the sound source be very close to the mic and to use slow movements,” she says. “If it’s cutting hair, the scissors move very slow and deliberate, and they’re really close to the mic and you have close-up breathing.”

Lee applied those techniques to the recordings she made for the hair salon scene. Using a Sennheiser MKH 8040 and MKH 30 in an MS setup, Lee recorded the up-close sound of slowly cutting a wig’s hair. She also recorded several hair dryers slowly panning back and forth to find the right sound and speed that would trigger an ASMR feeling. “For the hairdryers, you don’t want an intense sound or something that’s too loud. The right sound is one that’s soothing. A lot of it comes down to just having quiet, close-up, sensual movement,” she says.

Ai-Ling Lee capturing the sound of hair being cut.

Recording the sounds was the easy part. Getting that experience to translate in a theater environment was the challenge because most ASMR videos are heard through headphones as a binaural, close experience. “In the end, I just took the mid-side recording and mixed it by slowly panning the sound across the front speakers and a little bit into the surrounds,” explains Lee. “Another trick to making that scene work was to slowly melt away the background sounds of the busy salon, so that it felt like it was just the two of them there.”
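
For context, a mid-side (M/S) pair decodes to left and right with simple sum-and-difference math, and the amount of side signal controls how wide the decoded image feels. The sketch below shows that decode plus a slow, constant-power pan across the front pair; it is a minimal illustration under stated assumptions, not Lee's actual session, and the gain and pan values are made up.

```python
# Minimal M/S decode and slow front-pair pan sketch (values are assumptions).
import numpy as np

def ms_decode(mid: np.ndarray, side: np.ndarray, side_gain: float = 1.0):
    """Standard M/S decode: L = M + g*S, R = M - g*S (g sets stereo width)."""
    return mid + side_gain * side, mid - side_gain * side

def constant_power_pan(mono: np.ndarray, start: float = -1.0, end: float = 1.0):
    """Sweep a mono signal slowly from left (-1) to right (+1) across the fronts."""
    pos = np.linspace(start, end, len(mono))              # pan position per sample
    theta = (pos + 1.0) * np.pi / 4.0                     # map [-1, 1] to [0, pi/2]
    return mono * np.cos(theta), mono * np.sin(theta)     # constant-power L/R gains

# Example with a synthetic M/S pair: decode, fold to mono, then pan it slowly
sr = 48000
t = np.arange(sr * 4) / sr
mid = 0.3 * np.sin(2 * np.pi * 200 * t)                   # placeholder content
side = 0.1 * np.sin(2 * np.pi * 210 * t)
left, right = ms_decode(mid, side, side_gain=0.7)          # lower side gain narrows the image
l_pan, r_pan = constant_power_pan(0.5 * (left + right))
```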

Updating the Commentary
As Lee was working on the ASMR sound experience, Iatrou was back at Fox Studios working on another important sequence — the final match. The directors wanted to have Howard Cosell’s original commentary play in the film but the only recording available was a mixed mono track of the broadcast, complete with cheering crowds and a marching band playing underneath.

“At first, the directors sent us the pieces that they wanted to use and we brightened it a little because it was very dull sounding. They also asked us if we could get rid of the music, which we were not able to do,” says Iatrou.

As a work-around, the directors asked Iatrou to record Cosell’s lines using a soundalike. “We did a huge search. Our ADR/group leader Johnny Gidcomb at Loop De Loop held auditions of people who could do Howard Cosell. We did around 50 auditions and sent those to the directors. Finally, we got one guy they really liked.”

L-R: Mildred Iatrou and Ai-Ling Lee.

They spent a day recording the Cosell soundalike, using the same make and model mic that was used by Cosell and nearly all newscasters of that period — the Electro-Voice 635A Apple. Even with the “new” Cosell and the proper mic, the directors felt it still wasn’t right. “They really wanted to use Howard Cosell,” says Iatrou. “We ended up using all Howard Cosell in the film except for a word or a few syllables here and there, which we cut in from the Cosell soundalike. During the mix, re-recording mixer Ron Bartlett (dialogue/music) had to do very severe noise reduction in the segments with the music underneath. Then we put other music on top to help mask the degree of noise reduction that we did.”

Another challenge with the Howard Cosell commentary was that he wasn’t alone. Rosie Casals was also a commentator at the event. In the film, Rosie is played by actress Natalie Morales. Iatrou recorded Morales performing Casals’ commentary using the Electro-Voice 635A Apple mic. She then used iZotope RX 6’s EQ Match feature to help Morales’ lines sound similar to Cosell’s. “For the final mix, Ron Bartlett put more time and energy into getting the EQ to match. It’s interesting because we didn’t want Rosie’s lines to be as distressed as Cosell’s. We had to find this balance between making it work with Howard Cosell’s material and making it a tiny bit better.”
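
EQ matching in general works by comparing the long-term average spectrum of a reference recording with that of the target and deriving a corrective curve from the ratio. The sketch below is a generic version of that idea, not RX 6's EQ Match implementation; the clamp on the correction echoes Iatrou's point about not making the new lines as distressed as the archival ones, and all parameter values are assumptions.

```python
# A generic spectrum-matching sketch (not RX 6's EQ Match implementation):
# derive a gain curve from the ratio of the reference's average spectrum to
# the target's, clamp it so the correction stays gentle, and apply it as a
# linear-phase FIR filter.
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def match_eq(target, reference, sr=48000, numtaps=1025, max_change_db=12.0):
    # Long-term average spectra of both recordings
    freqs, p_ref = welch(reference, fs=sr, nperseg=4096)
    _, p_tgt = welch(target, fs=sr, nperseg=4096)

    # Per-bin gain that nudges the target toward the reference, within limits
    gain = np.sqrt(p_ref / np.maximum(p_tgt, 1e-12))
    gain = np.clip(gain, 10 ** (-max_change_db / 20), 10 ** (max_change_db / 20))

    # Build a linear-phase FIR from the gain curve and filter the target line
    fir = firwin2(numtaps, freqs / (sr / 2), gain)
    return lfilter(fir, [1.0], target)
```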

After cutting Rosie’s new lines with Cosell’s original commentary, Iatrou turned her attention to the ambience. She played through the original match’s 90-minute mixed mono track to find clear sections of crowds, murmuring and cheering to cut under Rosie’s lines, so they would have a natural transition into Cosell’s lines. “For example, if there was a swell of the cheer on Howard Cosell’s line then I’d have to find a similar cheer to extend the sound under the actress’s line to fill it in.”

Crowd Sounds
To build up authentic crowd sounds for the recreated Battle of the Sexes match, Iatrou had the loop group perform call-outs that she and Lee heard in the original broadcast, like a woman yelling, “Come on Billie!” and a man shouting, “Come on Bobby baby!”

“The crowd is another big character in the match,” says Lee. “As the game went on, it felt like more of the women were cheering for Billie Jean and more of the men were cheering for Bobby Riggs. In the real broadcast, you hear one guy cheer for Bobby Riggs and then a woman would immediately cheer on Billie Jean. The guy would try to out cheer her and she would cheer back. It’s this whole secondary situation going on and we have that in the film because we wanted to make sure we were as authentic as possible.”

Lee also wanted the tennis rackets to sound authentic. She tracked down a wooden racket and an aluminum racket and had them restrung with a gut material at a local tennis store. She also had them strung with less tension than a modern racket. Then Lee and an assistant headed to an outdoor tennis court and recorded serves, bounces, net impacts, ball-bys and shoe squeaks using two mic setups — both with a Schoeps MK 41 and an MK 8 in an MS setup, paired with Sound Devices 702 and 722 recorders. “We miked it close and far so that it has some natural outdoor sound.”

Lee edited her recordings of tennis sounds and sporting event crowds with the production effects captured by sound mixer Lisa Pinero. “Lisa did a really good job of miking everything, and we were able to use some of the production crowd sounds, especially for the Margaret Court vs. Bobby Riggs match that happens before the final Battle of the Sexes match. In the final match, some of the tennis ball hits were layers of what I recorded and the production hits.”

Foley
Another key sonic element in the recreated Battle of the Sexes match was the Foley work by Dan O’Connell and John Cucci of One Step Up, located on the Fox Studios lot. During the match, Billie Jean’s strategy was to wear out the older and out-of-shape Bobby Riggs by making him run all over the court. “As the game went on, I wanted Bobby’s footsteps to feel heavier, with more thumps, as though he’s running out of steam trying to get the ball,” explains Lee. “Dan O’Connell did a good job of creating that heavy stomping foot, but with a slight wood resonance too. We topped that with shoe squeaks — some that Dan did and some that I recorded.”

The final Battle of the Sexes match was by far the most challenging scene to mix, says Lee. Re-recording mixers Bartlett and Doug Hemphill, as well as Lee, mixed the film in 7.1 surround at Formosa Group’s Hollywood location on Stage A using Avid S6 consoles. In the final match, they had Cosell’s original commentary blended with actress Morales’ commentary as Rosie Casals. There was music and layered crowds with call-outs. Production sound, field recordings, and Foley meshed to create the diegetic effects. “There were so many layers involved. Deciding how the sounds build and choosing what to play when, with the crowds being tied to Howard Cosell, made it challenging to balance that sequence,” concludes Lee.


Jennifer Walden is a New Jersey-based audio engineer and writer.


Jeff Haboush and Chris Newman join Cinema Audio Society board

The Cinema Audio Society has added re-recording mixer Jeffrey J. Haboush, CAS, and production sound mixer Chris Newman, CAS, to its board. They will be filling the vacancies left by the recent passing of production mixer Ed Greene, CAS and the retirement of re-recording mixer Mary Jo Lang, CAS.

“Adding new board members at this time is bittersweet, but we are proud and inspired by the fact that we can welcome two dynamic and valued members of the sound community to fill shoes that we thought might be impossible to fill,” says CAS president Mark Ulano.

With over 200 feature and television mixing credits, Haboush has four Oscar nominations along with CAS, BAFTA and Emmy nominations. One of those Emmy nominations led to a win. His career began in 1978 at B&B Sound Studios in Burbank. In 1989, he moved to Warner Bros./Goldwyn Sound, and in 1999 to Sony Studios. Currently, Haboush can be found bouncing between Technicolor and Smart Post Sound mixing stages.

In a career that spans more than 40 years, Newman has been the production sound mixer on more than 85 feature films and garnered eight Oscar nominations with three wins for The English Patient, Amadeus and The Exorcist.

Newman was honored in 2013 with the CAS Career Achievement Award. He also won a CAS Award for Outstanding Sound Mixing for The English Patient and has BAFTA wins for Fame and Amadeus. Prior to working on feature films, he spent a decade on documentaries, including working for Ted Yates’s NBC unit in Southeast Asia in 1966. Having taught sound and filmmaking in Europe, Brazil, Mexico and at NYU and Columbia University, Newman currently teaches both sound and production at the School of Visual Arts in New York.

Main Image: (L-R) Chris Newman and Jeff Haboush.


Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, who co-founded and helmed the sound company Blast and is now partnered with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen. Their staff will work out of both the Union Square and Bryant Park locations.

In other staffing news, mix engineer Owen Shearer advances to also serve as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.



Tackling VR storytelling challenges with spatial audio

By Matthew Bobb

From virtual reality experiences for brands to top film franchises, VR is making a big splash in entertainment and evolving the way creators tell stories. But, as with any medium and its production, bringing a narrative to life is no easy feat, especially when it’s immersive. VR comes with its own set of challenges unique to the platform’s capacity to completely transport viewers into another world and replicate reality.

Making high-quality immersive experiences, especially for a film franchise, is extremely challenging. Creators must place the viewer into a storyline crafted by the studios and properly guide them through the experience in a way that allows them to fully grasp the narrative. One emerging strategy is to emphasize audio — specifically, 360 spatial audio. VR offers a sense of presence no other medium today can offer. Spatial audio offers an auditory presence that augments a VR experience, amplifying its emotional effects.

My background as audio director for VR experiences includes top film franchises such as Warner Bros. and New Line Cinema’s IT: Float — A Cinematic VR Experience, The Conjuring 2 — Experience Enfield VR 360, Annabelle: Creation VR — Bee’s Room, and the upcoming Greatest Showman VR experience for 20th Century Fox. In the emerging world of VR, I have seen production teams encounter numerous challenges that call for creative solutions. For some of the most critical storytelling moments, it’s crucial for creators to understand the power of spatial audio and its potential to solve some of the most prevalent challenges that arise in VR production.

Most content creators — even some of those involved in VR filmmaking — don’t fully know what 360 spatial audio is or how its implementation within VR can elevate an experience. With any new medium, there are early adopters who are passionate about the process. As the next wave of VR filmmakers emerge, they will need to be informed about the benefits of spatial audio.

Guiding Viewers
Spatial audio is an incredible tool that helps make a VR experience feel believable. It can present sound from several locations, which allows viewers to identify their position within a virtual space in relation to the surrounding environment. With the ability to provide location-based sound from any direction and distance, spatial audio can then be used to produce directional auditory cues that capture the viewer’s attention and compel them to look in a certain direction.
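
One common way such directional cues are authored for 360 playback is to encode a mono sound at a target direction into first-order ambisonics, which the playback engine then decodes for whatever speaker or headphone layout the viewer has. The sketch below shows that encode; it is a minimal illustration with assumed angles, not taken from any of the projects discussed here.

```python
# Minimal first-order ambisonics (B-format, AmbiX/SN3D) encode of a mono cue
# at a chosen direction; azimuth is measured counterclockwise from front, so
# +90 degrees places the cue to the viewer's left. Angles are illustrative.
import numpy as np

def encode_foa(mono: np.ndarray, azimuth_deg: float, elevation_deg: float = 0.0):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono * 1.0                       # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)   # left/right component
    z = mono * np.sin(el)                # up/down component
    x = mono * np.cos(az) * np.cos(el)   # front/back component
    return np.stack([w, y, z, x])        # ACN channel order: W, Y, Z, X

# Example: a short click (a "footstep" cue) placed hard left of the viewer
sr = 48000
cue = np.zeros(sr)
cue[:64] = 1.0
bformat = encode_foa(cue, azimuth_deg=90.0)
print(bformat.shape)  # (4, 48000)
```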

VR is still unfamiliar territory for a lot of people, and the viewing process isn’t as straightforward as a 2D film or game, so dropping viewers into an experience can leave them feeling lost and overwhelmed. Inexperienced viewers are also more apprehensive and rarely move around or turn their heads while in a headset. Spatial audio cues prompting them to move or look in a specific direction are critical, steering them to instinctively react and move naturally. On Annabelle: Creation VR — Bee’s Room, viewers go into the experience knowing it’s from the horror genre and may be hesitant to look around. We strategically used audio cues, such as footsteps, slamming doors and a record player that mysteriously turns on and off, to encourage viewers to turn their head toward the sound and the chilling visuals that await.

Lacking Footage
Spatial audio can also be a solution for challenging scene transitions, or when there is a dearth of visuals to work with in a sequence. Well-crafted aural cues can paint a picture in a viewer’s mind without bombarding the experience with visuals that are often unnecessary.

A big challenge when creating VR experiences for beloved film franchises is the need for the VR production team to work in tandem with the film’s production team, making recording time extremely limited. When working on IT: Float, we were faced with the challenge of having a time constraint for shooting Pennywise the Clown. Consequently, there was not an abundance of footage of him to place in the promotional VR experience. Beyond a lack of footage, they also didn’t want to give away the notorious clown’s much-anticipated appearance before the film’s theatrical release. The solution to that production challenge was spatial audio. Pennywise’s voice was strategically used to lead the experience and guide viewers throughout the sewer tunnels, heightening the suspense while also providing the illusion that he was surrounding the viewer.

Avoiding Visual Overkill
Similar to film and video games, sound is half of the experience in VR. With the unique perspective the medium offers, creators no longer have to fully rely on a visually-heavy narrative, which can overwhelm the viewer. Instead, audio can take on a bigger role in the production process and make the project a well-rounded sensory experience. In VR, it’s important for creators to leverage sensory stimulation beyond visuals to guide viewers through a story and authentically replicate reality.

As VR storytellers, we are reimagining ways to immerse viewers in new worlds. It is crucial for us to leverage the power of audio to smooth out bumps in the road and deliver a vivid sense of physical presence unique to this medium.


Matthew Bobb is the CEO of the full-service audio company Spacewalk Sound. He is a spatial audio expert whose work can be seen in top VR experiences for major film franchises.


The sound of Netflix’s The Defenders

By Jennifer Walden

Netflix’s The Defenders combines the stories of four different Marvel shows already on the streaming service: Daredevil, Iron Fist, Luke Cage and Jessica Jones. In the new show, the previously independent superheroes find themselves all wanting to battle the same foe — a cultish organization called The Hand, which plans to destroy New York City. Putting their differences aside, the superheroes band together to protect their beloved city.

Supervising sound editor Lauren Stephens, who works at Technicolor at Paramount, has earned two Emmy nominations for her sound editing work on Daredevil. And she supervised the sound for each of the aforementioned Marvel series, with the exception of Jessica Jones. So when it came to designing The Defenders she was very conscious of maintaining the specific sonic characteristics they had already established.

“We were dedicated to preserving the palette of each of the previous Marvel characters’ neighborhoods and sound effects,” she explains. “In The Defenders, we wanted viewers of the individual series to recognize the sound of Luke’s Harlem and Daredevil’s Hell’s Kitchen, for example. In addition, we kept continuity for all of the fight material and design work established in the previous four series. I can’t think of another series besides Better Call Saul that borrows directly from its predecessors’ sound work.”

But it wasn’t all borrowed material. Eventually, Luke Cage (Mike Colter), Daredevil (Charlie Cox), Jessica Jones (Krysten Ritter), Iron Fist (Finn Jones) and Elektra Natchios (Elodie Yung) come together to fight The Hand’s leader Alexandra Reid (Sigourney Weaver). “We experience new locations, and new fighting techniques and styles,” says Stephens. “Not to mention that half the city gets destroyed by The Hand. We haven’t had that happen in the previous series.”

Even though these Netflix/Marvel series are based on superheroes, the sound isn’t overly sci-fi. It’s as though the superheroes have more practical superhuman abilities. Stephens says their fight sounds are all real punches and impacts, with some design elements added only when needed, such as when Iron Fist’s iron fist is activated. “At the heart of our punches, for instance, is the sound of a real fist striking a side of beef,” she says. “It sounds like you’d expect, and then we amp it up when we mix. We record a ton of cloth movement and bodies scraping and sliding and tumbling in Foley. Those elements connect us to the humans on-screen.”

Since most of the violence plays out in hand-to-hand combat, it takes a lot of editing to make those fight scenes, and it involves contributions from several sound departments. Stephens has her hard effects team, led by sound designer Jordon Wilby (who has worked on all the Netflix/Marvel series), cut sound effects for every single punch, grab, flip, throw and land. In addition, they cut metal shings and whooshes, impacts and drops for weapons, crashes and bumps into walls and furniture, and all the gunshot material.

Stephens then has the Technicolor Foley team — Foley artists Zane Bruce and Lindsay Pepper and mixer Antony Zeller — cover all the footsteps, cloth “scuffle,” wall bumps, body falls and grabs. Additionally, she has dialogue editor Christian Buenaventura clean up any dialogue that occurs within or around the fight scenes. With group ADR, they replace every grunt and effort for each individual in the fight so that they have ultimate control over every element during the mix.

Stephens finds Gallery’s SpotStudio to be very helpful for cueing all the group ADR. “I shoot a lot of group ADR for the fights and to help create the right populated feel for NYC. SpotStudio is a slick program that interfaces well with Avid’s Pro Tools. It grabs timecode location of ADR cues and can then output that to many word processing programs. Personally, I use FileMaker Pro. I can make great cuesheets that are easy to format and use for engineers and talent.”
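
The hand-off Stephens describes (cue timecodes exported from the spotting tool into a database that formats the cue sheets) is essentially structured data moving between programs. The sketch below shows that shape of data as a plain CSV export; the field names and cue entries are hypothetical, not SpotStudio's or FileMaker Pro's actual formats.

```python
# A plain-CSV sketch of a spotting-tool hand-off; the field names and the
# two cue entries are hypothetical examples.
import csv

cues = [
    {"cue": "G101", "character": "GROUP", "tc_in": "01:02:13:04",
     "tc_out": "01:02:16:20", "note": "NYC street walla, mid distance"},
    {"cue": "G102", "character": "GROUP", "tc_in": "01:04:02:11",
     "tc_out": "01:04:06:00", "note": "fight efforts, six fighters"},
]

with open("adr_cuesheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["cue", "character", "tc_in", "tc_out", "note"])
    writer.writeheader()
    writer.writerows(cues)
```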

All that effort results in fight scenes that feel “relentless and painful,” says Stephens. “I want them to have movement, tons of detail and a wide range of dynamics. I want the fights to sound great wherever our fans are listening.”

The most challenging fight in The Defenders happens in the season finale, when the superheroes fight The Hand in the sublevels of a building. “That underground fight was the toughest simply because it was endless and shot with a 360-degree turn. I focused on what was on-screen and continued those sounds just until the action passed out of frame. This kept our tracks from getting too cluttered but still gives us the right idea that 60 people are going at it,” concludes Stephens.


The challenges of dialogue and ice in Game of Thrones ‘Beyond the Wall’

By Jennifer Walden

Fire-breathing dragons and hordes of battle-ready White Walkers are big attention grabbers on HBO’s Game of Thrones, but they’re not the sole draw for audiences. The stunning visual effects and sound design are just the gravy on the meat and potatoes of a story that has audiences asking for more.

Every line of dialogue is essential for following the tangled web of storylines. It’s also important to take in the emotional nuances of the actors’ performances. Striking the balance between clarity and dynamic delivery isn’t an easy feat. When a character speaks in a gruff whisper because, emotionally, it’s right for the scene, it’s the job of the production sound crew and the post sound crew to make that delivery work.

At Formosa Group’s Hollywood location, an Emmy-winning post sound team works together to put as much of the on-set performances on the screen as possible. They are supervising sound editor Tim Kimmel, supervising dialogue editor Paul Bercovitch and dialogue/music re-recording mixer Onnalee Blank.

Tim Kimmel and Onnalee Blank

“The production sound crew does such a phenomenal job on the show,” says Kimmel. “They have to face so many issues on set, between the elements and the costumes. Even though we have to do some ADR, it would be a whole lot more if we didn’t have such a great sound crew on-set.”

In Season 7, Episode 6, “Beyond the Wall,” the sound team faced a number of challenges. Starting at the beginning of this episode, Jon Snow (Kit Harington) and his band of fighters trek beyond the wall to capture a White Walker. As they walk across a frozen, windy landscape, they pass the time by getting to know each other more. Here the threads of their individual stories from past seasons start to weave together. Important connections are being made in each line of dialogue.

Those snowy scenes were shot in Iceland and the actors wore metal spikes on their shoes to help them navigate the icy ground. Unfortunately, the spikes also made their footsteps sound loud and crunchy, and that got recorded onto the production tracks.

Another challenge came from their costumes. They wore thick coats of leather and fur, which muffled their dialogue at times or pressed against the mic and created a scratchy sound. Wind was also a factor, sometimes buffeting across the mic and causing a low rumble on the tracks.

“What’s funny is that parts of the scene would be really tough to get cleaned up because the wind is blowing and you hear the spikes on their shoes — you hear costume movements. Then all of a sudden they stop and talk for a minute and the wind stops and it’s the most pristine, quiet, perfect recording you can think of,” explains Kimmel. “It almost sounded like it was shot on a soundstage. In Iceland, when the wind isn’t blowing and the actors aren’t moving, it’s completely quiet and still. So it was tough to get those two to match.”

As supervising sound editor, Kimmel is the first to assess the production dialogue tracks. He goes through an episode and marks priority sections for supervising dialogue editor Bercovitch to tackle first. Bercovitch says, “That helps Tim [Kimmel] put together his ADR plan. He wants to try to pare down that list as much as possible. For Beyond the Wall, he wanted me to start with the brotherhood’s walk-and-talk north of the wall.”

Bercovitch began his edit by trying to clean up the existing dialogue. For that opening sequence, he used iZotope RX 6’s Spectral Repair to clean up the crunchy footsteps and the rumble of heavy winds. Next, he searched for usable alt takes from the lav and boom tracks, looking for a clean syllable or a full line to cut in as needed. Once Bercovitch was done editing, Kimmel could determine what still needed to be covered in ADR. “For the walk-and-talk beyond the wall, the production sound crew really did a phenomenal job. We didn’t have to loop that scene in its entirety. How they got as good of recordings as they did is honestly beyond me.”

Since most of the principal actors are UK and Ireland-based, the ADR is shot in London at Boom Post with ADR supervisor Tim Hands. “Tim [Hands] records 90% of the ADR for each season. Occasionally, we’ll shoot it here if the actor is in LA,” notes Kimmel.

Hands had more lines than usual to cover on Beyond the Wall because of the battle sequence between the brotherhood and the army of the dead. The principal actors came in to record grunts, efforts and breaths, which were then cut to picture. The battle also included Bercovitch’s selects of usable production sound from that sequence.

Re-recording mixer Blank went through all of those elements on dub Stage 1 at Formosa Hollywood using an Avid S6 console to control the Pro Tools 12 session. She chose vocalizations that weren’t “too breathy, or sound like it’s too much effort because it just sounds like a whole bunch of grunts happening,” she says. “I try to make the ADR sound the same as the production dialogue choices by using EQ, and I only play sounds for whoever is on screen because otherwise it just creates too much confusion.”

One scene that required extensive ADR was for Arya (Maisie Williams) and Sansa (Sophie Turner) on the catwalk at Winterfell. In the seemingly peaceful scene, the sisters share an intimate conversation about their father as snow lightly falls from the sky. Only it wasn’t so peaceful. The snow was created by a loud snow machine that permeated the production sound, which meant the dialogue on the entire scene needed to be replaced. “That is the only dialogue scene that I had no hand in and I’ve been working on the show for three seasons now,” says Bercovitch.

For Bercovitch, his most challenging scenes to edit were ones that might seem like they’d be fairly straightforward. On Dragonstone, Daenerys (Emilia Clarke) and Tyrion (Peter Dinklage) are in the map room having a pointed discussion on succession for the Iron Throne. It’s a talk between two people in an interior environment, but Bercovitch points out that the change of camera perspective can change the sound of the mics. “On this particular scene and on a lot of scenes in the show, you have the characters moving around within the scene. You get a lot of switching between close-ups and longer shots, so you’re going between angles with a usable boom to angles where the boom is not usable.”

There’s a similar setup with Sansa and Brienne (Gwendoline Christie) at Winterfell. The two characters discuss Brienne’s journey to parley with Cersei (Lena Headey) in Sansa’s stead. Here, Bercovitch faced the same challenge of matching mic perspectives, and also had the added challenge of working around sounds from the fireplace. “I have to fish around in the alt takes — and there were a lot of alts — to try to get those scenes sounding a little more consistent. I always try to keep the mic angles sounding consistent even before the dialogue gets to Onnalee (Blank). A big part of her job is dealing with those disparate sound sources and trying to make them sound the same. But my job, as I see it, is to make those sound sources a little less disparate before they get to her.”

One tool that’s helped Bercovitch achieve great dialogue edits is iZotope’s RX 6. “It doesn’t necessarily make cleaning dialogue faster,” he says. “It doesn’t save me a ton of time, but it allows me to do so much more with my time. There is so much more that you can do with iZotope RX 6 that you couldn’t previously do. It still takes nitpicking and detailed work to get the dialogue to where you want it, but iZotope is such an incredibly powerful tool that you can get the result that you want.”

On the dub stage, Blank says one of her most challenging scenes was the opening walk-and-talk sequence beyond the wall. “Half of that was ADR, half was production, and to make it all sound the same was really challenging. Those scenes took me four days to mix.”

Her other challenge was the ADR scene with Arya and Sansa in Winterfell, since every line there was looped. To help the ADR sound natural, as if it’s coming from the scene, Blank processes and renders multiple tracks of fill and backgrounds with the ADR lines and then re-records that back into Avid Pro Tools. “That really helps it sit back into the screen a little more. Playing the Foley like it’s another character helps too. That really makes the scene come alive.”

Bercovitch explains that the final dialogue you hear in a series doesn’t start out that way. It takes a lot of work to get the dialogue to sound like it would in reality. “That’s the thing about dialogue. People hear dialogue all day, every day. We talk to other people and it doesn’t take any work for us to understand when other people speak. Since it doesn’t take any work in one’s life why would it require a lot of work when putting a film together? There’s a big difference between the sound you hear in the world and recorded sound. Once it has been recorded you have to take a lot of care to get those recordings back to a place where your brain reads it as intelligible. And when you’re switching from angle to angle and changing mic placement and perspective, all those recordings sound different. You have to stitch those together and make them sound consistent so it sounds like dialogue you’d hear in reality.”

Achieving great sounding dialogue is a team effort — from production through post. “Our post work on the dialogue is definitely a team effort, from Paul’s editing and Tim Hands’ shooting the ADR so well to Onnalee getting the ADR to match with the production,” explains Kimmel. “We figure out what production we can use and what we have to go to ADR for. It’s definitely a team effort and I am blessed to be working with such an amazing group of people.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Broadway Video’s Sue Pelino and team win Emmy

Sue Pelino and the sound mixing team at New York City’s Broadway Video have won the Emmy for Outstanding Sound Mixing for a Variety Series or Special for their work on the 2017 Rock & Roll Hall of Fame Induction Ceremony that aired on HBO in April. Pelino served as re-recording mixer on the project.

Says Pelino, who is VP of audio post production at Broadway Video, “Our goal in preparing the televised package was to capture the true essence of the night. We wanted viewers to experience the energy and feel as if they were sitting in the tenth row of the Barclays Center. It’s a remarkable feeling to know that we have achieved that goal.”

Pelino is already the proud owner of two Emmy awards and has nine nominations under her belt. Her career as an audio post production engineer is rooted in her early years playing guitar in rock bands and recording original songs in her home studio.

Additional members of the winning sound team for the 2017 Rock & Roll Hall of Fame Induction Ceremony — produced by HBO Entertainment in association with Playtone, Line by Line Productions, Alex Coletti Productions and the Rock & Roll Hall of Fame Foundation — include Al Centrella, John Harris, Dave Natale, Jay Vicari, Erik Von Ranson and Simon Welch.