Mark Mangini keynotes The Art of Sound Design at Sony Studios

Panels focus on specifics of music, effects and dialog sound design, and immersive soundtracks

By Mel Lambert

Defining a sound designer as somebody “who uses sound to tell stories,” Mark Mangini, MPSE, was adamant that “sound editors and re-recording mixers should be authors of a film’s content, and take creative risks. Art doesn’t get made without risk.”

A sound designer/re-recording mixer at Hollywood’s Formosa Group Features, Mangini outlined his sound design philosophy during a keynote speech at the recent conference The Art of Sound Design: Music, Effects and Dialog in an Immersive World, held at Sony Pictures Studios in Culver City.

Mangini has received three Academy Award nominations, for The Fifth Element (1997), Aladdin (1992) and Star Trek IV: The Voyage Home (1986).

Acknowledging that an immersive soundtrack should fully engage the audience, Mangini outlined two ways to achieve that goal. “Physically, we can place sound around an audience, but we also need to engage them emotionally with the narrative, using sound to tell the story,” he explained to the 500-member audience. “We all need to better understand the role that sound plays in the filmmaking process. For me, sound design is storytelling — that may sound obvious, but it’s worth reminding ourselves on a regular basis.”

While an understanding of the tools available to a sound designer is important, Mangini readily concedes, “Too much emphasis on technology keeps us out of the conversation; we are just seen as technicians. Sadly, we are all too often referred to as ‘The Sound Guy.’ How much better would it be for us if the director asked to speak with the ‘Audiographer,’ for example? Or the ‘Director of Sound’ or the ‘Sound Artist’ — terms that better describe what we actually do? After all, we don’t refer to a cinematographer as ‘The Image Guy.’”

Mangini explained that he always tries to emphasize the why and not the how, and is not tempted to imitate somebody else’s work. “After all, when you imitate you ensure that you will only be ‘almost’ as good as the person or thing you imitate. To understand the ‘why,’ I break down the script into story arcs and develop a sound script so I can reference the dramatic beats rather than the visual cues, and articulate the language of storytelling using sound.”

Past Work
Offering up examples of his favorite work as a soundtrack designer, Mangini provided two clips during his keynote. “While working on Star Trek [in 2009] with supervising sound editor Mark Stoeckinger, director J. J. Abrams gave me two days to prepare — with co-designer Mark Binder — a new soundtrack for the two-minute mind meld sequence. J. J. wanted something totally different from what he already had. We scrapped the design work we did on the first day, because it was only different, not better. On day two we rethought how sound could tell the story that J. J. wanted to tell. Having worked on three previous Star Trek projects [different directors], I was familiar with the narrative. We used a complex combination of orchestral music and sound effects that turned the sequence on its head; I’m glad to say that J. J. liked what we did for his film.”

The two collaborators received the following credit: “Mind Meld Soundscape by Mark Mangini and Mark Binder.”

Turning to his second soundtrack example, Mangini recalled receiving a call from Australia about the in-progress soundtrack for George Miller’s Mad Max: Fury Road, the director’s fourth outing with the franchise. “The mix they had prepared in Sydney just wasn’t working for George. I was asked to come down and help re-invigorate the track. One of the obstacles to getting this mix off the ground was the sheer abundance of material to choose from. When you have so many choices on a soundtrack, the mix can be an agonizing process of ‘Sound Design by Elimination.’ We needed to tell him, ‘Abandon what you have and start over.’ It was up to me, as an artist, to tell George that his V8 needed an overhaul and not just a tune-up!”

“We had 12 weeks, working at Formosa with co-supervising sound editor Scott Hecker — and at Warner Bros. Studios with re-recording mixers Chris Jenkins and Gregg Rudloff — to come up with what George Miller was looking for. We gave each vehicle [during the extended car-chase sequence that opens the film] a unique character with sound, and carefully defined [the protagonist Max Rockatansky’s] changing mental state during the film. The desert chase became ‘Moby Dick,’ with the war rig as the white whale. We focused on narrative decisions as we reconstructed the soundtrack, always referencing ‘the why’ for our design choices in order to provide a meaningful sonic immersion. Miller has been quoted as saying, ‘Mad Max is a film where we see with our ears.’ This from a director who has been making films for 40 years!”

His advice to fledgling sound designers? Mangini kept it succinct: “Ask yourself why, not how. Be the author of content, take risks, tell stories.”

Creating a Sonic Immersive Experience
Subsequent panels during the all-day conference addressed how to design immersive music, sound effects and dialog elements for film and TV soundtracks. For many audiences, a 5.1-channel format is sufficient for carrying music, effects and dialog in a surround experience, but a 7.1-channel layout — which adds side speakers — together with the newer Dolby Atmos, Barco/Auro 3D and DTS:X/MDA formats can extend that sense of immersion.

“During editorial for Guardians of the Galaxy we had so many picture changes that the re-recording mixers needed all of the music stems and breakouts we could give them,” said music editor Will Kaplan, MPSE, from Warner Bros. Studio Facilities, during the “Music: Composing, Editing and Mixing Beyond 5.1” panel. It was presented by Formosa Group and moderated by scoring mixer Dennis Sands, CAS. “In a quieter movie we can deliver an entire orchestral track that carries the emotion of a scene.”

Music: Composing, Editing and Mixing Beyond 5.1 panel (L-R): Andy Koyama, Bill Abbott, Joseph Magee, moderator Dennis Sands, Steven Saltzman and Will Kaplan.

Describing his collaboration with Tim Burton, music editor Bill Abbott, MPSE from Formosa reported that the director “liked to hear an entire orchestral track for its energy, and then we recorded it section by section with the players remaining on the stage, which can get expensive!”

Joseph Magee, CAS, (supervising music mixer on such films as Pitch Perfect 2, The Wedding Ringer, Saving Mr. Banks and The Muppets) likes to collaborate closely with the effects editor to decide who handles which elements from each song. “Who gets the snaps and dance shoes? How do we divide up the synchronous ambience and the design ambience? The synchronous ambience from the set might carry tails from the sing-offs, and needs careful matching. What if they pitch shift the recorded music in post? We then need to change the pitch of the music captured in the audience mics using DAW plug-ins.”

“I like to invite the sound designer to the music spotting session,” advised Abbott, “and discuss who handles what — is it a music cue or a sound effect?”

“We need to immerse audiences with sound and use the surrounds for musical elements,” explained Formosa’s re-recording mixer, Andy Koyama, CAS. “That way we have more real estate in the front channels for sound effects.”

“We should get the sound right on the set because it can save a lot of processing time on the dub stage,” advised production mixer Lee Orloff, CAS, during the “A Dialog on Dialog: From Set to Screen” panel moderated by Jeff Wexler, CAS.

A Dialog on Dialog: From Set to Screen panel (L-R): Lee Orloff, Teri Dorman, CAS president Mark Ulano, moderator Jeff Wexler, Gary Bourgeois, Marla McGuire and Steve Tibbo.

“I recall working on The Patriot, where the director [Roland Emmerich] chose to create ground mist using smoke machines known as Smoker Boats,” recalled Orloff, who received Oscar and BAFTA Awards for Terminator 2: Judgment Day (1991). “The trouble was that they contained noisy lawnmower engines, whose sound could be heard under all of the dialog tracks. We couldn’t do anything about it! But, as it turned out, that low-level noise added to the sense of being there.”

“I do all of my best work in pre-production,” added Wexler, “by working out the noise problems we will face on location. It is more than just the words that we capture; a properly recorded performance tells you so much about the character.”

“I love it when the production track is full of dynamics,” added dialog/music re-recording mixer Gary Bourgeois, CAS. “The voice is an instrument; if I mask out everything that is not needed I lose the ‘essence’ of the character’s performance. The clarity of dialog is crucial.”

“We have tools that can clean up dialog,” conceded supervising sound editor Marla McGuire, MPSE, “but if we apply them too often and too deeply it takes the life out of the track.”

“Sound design can make an important scene more impactful, but you need to remember that you’re working in the service of the film,” advised sound designer/supervising sound editor Richard King, MPSE, during the “Sound Effects: How Far Can You Go?” panel, moderated by David Bondelevitch, MPSE, CAS.

Sound Effects: How Far Can You Go? panel (L-R): Mandell Winter, Scott Gershin, moderator David Bondelevitch, Greg Hedgpath, Richard King and Will Files.

In terms of music co-existing with sound effects, Formosa’s Scott Gershin, MPSE, advised, “During a plane crash sequence, I pitch shifted the sound effect to match the music.”

“I like to go to the music spotting session and ask if the director wants the music to serve as a rhythmic or thematic/tonal part of the soundtrack,” added sound effects re-recording mixer Will Files from Fox Post Production Services. “I just take the other one. Or if it’s all rhythm — a train ride, for example — we’ll agree to split [the elements].”

“On the stage, I’m constantly shifting sync and pitch shifting the sound effects to match the music track,” stated Gershin. “For Pacific Rim we had many visual effects arriving late with picture changes. Director Guillermo del Toro received so many new eight-frame VFX cues he wanted to use that the music track ended up looking like bar code” in the final Pro Tools sessions.

In terms of working with new directors, “I like to let them see some good movies with good sound design to start the conversation,” offered Files. “I front load the process by giving the director and picture editors a great-sounding temp track, using dialog predubs that they can load into the Avid Media Composer, to get them used to our sound ideas. It also helps the producers dazzle the studio!”

“Successful soundtrack design is a collaborative effort from production sound onwards,” advised re-recording mixer Mike Minkler, CAS, during “The Mix: Immersive Sound, Film and Television” panel, presented by DTS and moderated by Mix editor Tom Kenny. “It’s about storytelling. Somebody has to be the story’s guardian during the mix,” stated Minkler, who received Academy Awards for Dreamgirls (2006), Chicago (2002) and Black Hawk Down (2001). “Filmmaking is the ultimate collaboration. We need to be aware of what the director wants and what the picture needs. To establish your authority you need to gain their confidence.”

“For immersive mixes, you should start in Dolby Atmos as your head mix,” advised Jeremy Pearson, CAS, who is currently re-recording The Hunger Games: Mockingjay – Part 2 at Warner Bros. Studio. He also worked in that format on Mockingjay – Part 1 and Catching Fire. “Atmos is definitely the way to go; it’s what everyone can sign off on. In terms of creative decisions during an Atmos mix, I always ask myself, ‘Am I helping the story by moving a sound, or distracting the audience?’ After all, the story is up on the screen. We can enhance sound depth to put people into the scene, or during calmer, gentler scenes you can pinpoint sounds that engage the audience with the narrative.”

Kim Novak Theater at Sony Pictures Studios

Kim Novak Theater at Sony Pictures Studios.

Minkler reported that he is currently working on director Quentin Tarantino’s The Hateful Eight, “which will be released initially for two weeks in a three-hour version on 70mm film to 100 screens, with an immersive 5.1-channel soundtrack mastered to 35mm analog mag.”

Subsequently, the film will be released next year in a slightly different version via a conventional digital DCP.

“Our biggest challenge,” reported Matt Waters, CAS, sound effects re-recording mixer for HBO’s award-winning Game of Thrones, “is getting everything completed in time. Changes are critical, and we might spend half a day on a sequence and then have only 10 minutes to update the mix when we receive picture changes.”

“When we receive new visuals,” added Onnalee Blank, CAS, who handles music and dialog re-recording on the show, “[the showrunners] tell us, ‘it will not change the sound.’ But if the boats become dragons…”

Photos by Mel Lambert.

Mel Lambert is principal of Content Creators, an LA-based editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

