
Baby Driver editors — Syncing cuts to music

By Mel Lambert

Writer/director Edgar Wright’s latest outing is a major departure from his usual dark comedies. Unlike his Three Flavours Cornetto film trilogy — Shaun of the Dead, Hot Fuzz and The World’s End — and Scott Pilgrim vs. the World, TriStar Pictures’ Baby Driver is best described as a romantic musical disguised as a car-chase thriller.

Wright’s regular pair of London-based picture editors, Paul Machliss, ACE, and Jonathan Amos, ACE, also brought a special brand of magic to the production. Machliss, who had worked with Wright on Scott Pilgrim, The World’s End and his TV series Spaced for Channel 4, recalls that, “very early on, Edgar decided that I should come along on the shoot in Atlanta to ensure that we had the material he’d already storyboarded in a series of complex animatics for the film [using animator Steve Markowski and editor Evan Schiff]. Jon Amos joined us when we returned to London for sound and picture post production, primarily handling the action sequences, at which he excels.”

Developed by Wright over the past two decades, Baby Driver tells the story of an eponymous getaway driver (Ansel Elgort), who uses earphones to drown out the “hum-in-the-drum” of tinnitus — the result of a childhood car accident — and to orchestrate his life to carefully chosen music. But now indebted to a sinister kingpin named Doc (Kevin Spacey), Baby becomes part of a seriously focused gang of bank robbers, including Buddy and Darling (Jon Hamm and Eiza González), Bats (Jamie Foxx) and Griff (Jon Bernthal). Debora, Baby’s love interest (Lily James), dreams of heading west “in a car I can’t afford, with a plan I don’t have.” Imagine, in a sense, Jim McBride’s Breathless rubbing metaphorical shoulders with Tony Scott’s True Romance.

The film also is indebted to Wright’s 2003 music video for Mint Royale’s Blue Song, during which UK comedian/actor Noel Fielding danced in a stationary getaway car. In that same vein, Baby Driver comprises a sequence of linked songs that tightly choreograph the action and underpin the dramatic arcs being played out, often keying off the songs’ lyrics.

The film’s opener, for example, features Elgort partly lip-syncing to “Bellbottoms,” by the Jon Spencer Blues Explosion, as the villains commit their first robbery. In subsequent scenes, our hero’s movements follow the opening bass riffs of The Damned’s “Neat Neat Neat,” then Golden Earring’s “Radar Love,” before Queen’s “Brighton Rock” adds complex guitar cacophony to a key encounter scene.

Even the film’s opening titles are accompanied by Baby performing a casual coffee run in a continuous three-minute take to Bob & Earl’s “Harlem Shuffle” — a scene that reportedly took 28 takes on the first day of principal photography in Atlanta. And the percussion and horns of “Tequila” provide syncopation for a protracted gunfight. Fold in “Egyptian Reggae,” “Unsquare Dance,” and “Easy,” followed by “Debora,” and it’s easy to appreciate that Wright uses music as a key structural component of this film. The director also brought in music video choreographer Ryan Heffington to achieve the timing precision he needed.

The swift action is reflected in a fast style of editing, including whip pans and crash zooms, with cuts that are tightly synchronized to the music. “Whereas the majority of Edgar’s previous TV series and films have been parodies, for Baby Driver he had a very different idea,” explains Machliss. Wright had accumulated a playlist of over 30 songs that would inspire various scenes in his script. “It’s something that’s very much a part of my previous films,” says director Wright, “and I thought of this idea of how to take that a stage further by having a character who listens to music the entire time.”

“Edgar had organized a table read of his script in the spring of 2012 in Los Angeles, at which he recorded all of the dialog,” says Machliss. “Taking that recording, some sound effects and the music tracks, I put together a 100-minute ‘radio play’ that was effectively the whole film in audio-only form that Edgar could then use as a selling tool to convince the studios that he had a viable idea. Remember, Baby Driver was a very different format for him and not what he is traditionally known for.”

Australian-born Machliss was on set to ensure that the gunshots, lighting effects, actors and camera movements, plus car hits, all happened to the beat of the accompanying music. “We were working with music that we could not alter or speed up or slow down,” he says. “We were challenged to make sure that each sequence fit in the time frame of the song, as well as following the cadence of the music.”

Almost 95% of the music included in the first draft of Wright’s script made it into the final movie, according to Machliss. “I laid up the relevant animatic as a video layer in my Avid Media Composer and then confirmed how each take worked against the choreographed timeline. This way I always had a reference to it as we were filming. It was a very useful guide to see if we were staying on track.”

Editing On Location
During the Atlanta shoot, Machliss used Apple ProRes digital files captured by an In2Core QTake video assist that was recording video taps from the production’s 35mm cameras. “I connected my Mac via Ethernet so I could create a network to the video assist’s storage. I had access to the QuickTime files the instant the operator stopped recording. I could use Avid’s AMA function to place the clip in the timeline without the need for transcoding. This allowed almost instantaneous feedback to Edgar as the sequence was built up.”

Paul Machliss on set.

While on location, Machliss used a 15-inch MacBook Pro, Avid Mojo DX and a JVC video monitor “which could double as a second screen for the Media Composer or show full-screen video output via the Mojo DX.” He also had a Wacom tablet, an 8TB Thunderbolt drive, a LaCie 500GB rugged drive — “which would shuttle my media between set and editorial” — and a UPS “so that I wouldn’t lose power if the supply was shut down by the sparks!”

LA’s FotoKem handled film processing, with negative scanning by Efilm. DNx files were sent to Company 3 in Atlanta for picture editorial, “where we would also review rushes in 2K sent down the line from Efilm,” says Machliss. “All DI on-lining and grading took place at Molinare in London.” Bill Pope, ASC, was the film’s director of photography.

Picture and Sound Editorial in London
Instead of hiring out editorial suites at a commercial facility in London, Wright and his post teams opted for a different approach. Like an increasing number of London-based productions, they elected to rent an entire floor in an office building.

They located a suitable space on Berners Street, north of the Soho-based film community. As Machliss recalls: “That allowed us to have the picture editorial team in the same space as the sound crew,” which was headed up by Wright’s long-time collaborator Julian Slater, who served as sound designer, supervising sound editor and re-recording mixer on Baby Driver. “Having ready access to Julian and his team meant that we could collaborate very closely — as we had on Edgar’s other films — and share ideas on a regular basis,” as the 10-week Director’s Cut progressed.

British-born Slater then moved across Soho to Goldcrest Films for sound effects pre-dubs, while his co-mixer, Tim Cavagin, worked on dialog and Foley pre-mixes at Twickenham Studios. Print mastering of the Dolby Atmos soundtrack occurred in February 2017 at Goldcrest, with Slater handling music and SFX, while Cavagin oversaw dialog and Foley. “Following Edgar’s concept of threading together the highly choreographed songs with linking scenes, Jon and I began the cut in London against the pre-assembled material from Atlanta,” says Machliss.

To assist Machliss during his picture cut, the film’s sound designer had provided a series of audio stems for his Avid. “Julian [Slater] had been working on his sound effects and dialog elements since principal photography ended in Atlanta. He had prepared separate, color-coded left-center-right stems of the music, dialog and SFX elements he was working on. I laid these [high-quality tracks] into Media Composer so I could better appreciate the intricacies of Julian’s evolving soundtrack. It worked a lot better than a normal rough mix of production dialog, rough sound effects and guide music.”

“From its inception, this was a movie for which music and sound design worked together as a whole piece,” Slater recalls. “There is a large amount of syncopation of the diegetic sounds [implied by the film’s action] to the music track Baby is listening to. Sometimes it’s obvious because the action was filmed with that purpose in mind. For example, walking in tempo to the music track or guns being fired in tempo. But many times it’s more subtle, including police sirens or distant trains that have been pitched and timed to the music,” and hence blend into the overall musical journey. “We strived to always do this to support the story, and to never distract from it.”

Because of the lead character’s tinnitus, Slater worked with pitch changes to interweave elements of the film’s soundtrack. “Whenever Baby is not listening to music, his tinnitus is present to some degree. But it became apparent very soon in our design process that strident, high-pitched ‘whistle tones’ would not work for a sustained period of time. Working closely with composer Steven Price, we developed a varied set of methods to convey the tinnitus — it’s rarely the same sound twice. Much of the time, the tinnitus is pitched according to either the outgoing or incoming music track. This then enabled us to use more of it, yet at the same time be quite subtle.”
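As a toy illustration of that pitching idea — not Slater’s actual toolchain, and with invented function names — the standard equal-temperament formula can map a cue’s key to a high “whistle tone” frequency:

```python
# Hypothetical sketch: pitch a tinnitus-style tone to the key of a music
# cue using equal temperament (A4 = 440 Hz = MIDI note 69).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_freq(midi_note: int) -> float:
    """Equal-temperament frequency (Hz) for a MIDI note number."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def tinnitus_freq(key: str, octave: int = 7) -> float:
    """Place the whistle tone on the cue's root note, several octaves up."""
    midi_note = 12 * (octave + 1) + NOTE_NAMES.index(key)
    return note_to_freq(midi_note)

print(tinnitus_freq("A"))  # A7 → 3520.0
```

A cue in A, for instance, would yield a tone at 3,520Hz (A7), three octaves above concert A — high enough to read as tinnitus, yet consonant with the incoming track.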

Meticulous Planning for Set Pieces and Car Chases
Picture editor Amos joined the project at the start of the Director’s Cut to handle the film’s set pieces. He says, “These set pieces were conceptually very different from the vast majority of action scenes in that they were literally built up around the music and then visualized. Meticulous development and planning went into these sequences before the shoot even began, which was decisive in making the action become musical. For example, the ‘Tequila’ gunfight started as a piece of music by Button Down Brass. It was then laced with gunfire and SFX pitched to the music, and in time with the drum hits — this was done at the script stage by Mark Nicholson (aka Osymyso, a UK musician/DJ) who specializes in mashup/bastard pop and breakbeat.”

Storyboards then grew around this scripted sound collage, which became a precise shot list for the filmed sequences. “Guns were rigged to go off in time with the music; it was all a very deliberate thing,” adds Amos. “Clearly, there was a lot of editing still to be done, but this approach illustrates that there’s a huge difference between something that is shot and edited to music, and something that is built around the music.”

“All the car chases for Baby Driver were meticulously planned, and either prevised or storyboarded,” Amos explains. “This ensured that the action would always fit into the time slot permitted within the music. The first car chase [against the song ‘Bellbottoms’] is divided into 13 sections, to align to different progressions in the music. One of the challenges resulted from the decision to never edit the music, which meant that none of these could overrun. Stunts were tested and filmed by second unit director Darrin Prescott, and the footage passed back to editorial to test against the timing allowed in the animatic. If a stunt couldn’t be achieved in the time allowed, it was revised and tweaked until it worked. This detailed planning gave the perfect backbone to the sequences.”

Amos worked on the sequences sequentially, “using the animatic and Paul’s on-set assembly as reference,” and began to break down all the footage into rolls that aligned to specific passages of the music. “There was a vast amount of footage for all the set pieces, and things are not always shot in order. So generally I spent a lot of time breaking the material down very methodically. I then began to make selects and started to build the sequences from scratch, section by section. Once I completed a pass, I spent some time building up my sound layers. I find this helps evolve the cut, generating another level of picture ideas that further tighten the syncopation of sound and picture.”

Amos’ biggest challenge, despite all the planning, was finding ways to condense the material into its pre-determined time slot. “The real world never moves quite like animatics and boards. We had very specific points in every track where certain actions had to take place; we called these anchor points. When working on a section, we would often work backwards from the anchor point knowing, for instance, that we only had 20 seconds to tell a particular part of the story. Initially, it can seem quite restrictive, but the edits become so precise.

Jonathan Amos

“The time restriction led to a level of kineticism and syncopation that became a defining feature of the movie. While the music may be the driving force of the action scenes, editorial choices were always rooted in the story and the characters. If you lose sight of the characters, the audience will disengage with the sequence, and you’ll lose all the tension you’ve worked so hard to create. Every shot choice was therefore very considered, and we worked incredibly hard to ensure we never wasted a frame, telling the story in the most compelling, rhythmic and entertaining way we could.”
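The anchor-point budgeting Amos describes amounts to simple interval arithmetic. A minimal sketch, with hypothetical names rather than the production’s actual tooling:

```python
# Hypothetical sketch: given timestamps (seconds) where on-screen actions
# must land in an unalterable song, compute how long each story beat
# between consecutive anchor points is allowed to run.
def section_budgets(anchors: list[float]) -> list[float]:
    """Seconds available between consecutive anchor points."""
    return [round(b - a, 2) for a, b in zip(anchors, anchors[1:])]

# e.g. anchor points at 0:00, 0:20, 0:55 and 1:30 of a cue
print(section_budgets([0.0, 20.0, 55.0, 90.0]))  # [20.0, 35.0, 35.0]
```

Working backwards from an anchor, as Amos describes, simply means cutting the preceding beat to fit its fixed budget.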

“Once we had our cut,” Machliss summarizes, “we could return the tracks to Julian for re-conforming,” to accommodate edit changes. “It was an excellent way of working, with full-sounding edit mixes.”

Summing up his experience on Baby Driver, Machliss considers the film to be “the hardest job I’ve ever done, but the most fun I’ve ever had. Ultimately, our task was to create a film that on one level could be purely enjoyed as an exciting/dramatic piece of cinema, but, on repeated viewing, would reveal all the little elements ‘under the surface’ that interlock together — which makes the film unique. It’s a testament to Edgar’s singular vision and, in that regard, he is a tremendously exciting director to work with.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Dell partners with Sony on Spider-Man film, showcases VR experience

By Jay Choi

Sony Pictures Imageworks used Dell technology during the creation of Spider-Man: Homecoming. To celebrate, Dell and Sony held a press junket in New York City that included tech demos and details on the film, as well as the Spider-Man: Homecoming Virtual Reality Experience. While I’m a huge Spider-Man fan, I am not biased in saying it was spectacular.

To begin the VR demo, users are given the same suit Tony Stark designs for Peter Parker in Captain America: Civil War and Spider-Man: Homecoming. The first action you perform is grabbing the mask and putting on the costume. You then jump into a tutorial that teaches you the web-shooter mechanics (which map intuitively to your VR controllers).

You are then tasked with stopping the villainous Vulture from attacking you and the city of New York. Admittedly, I didn’t get too far into the demo. I was a bit confused as to where to progress, but also absolutely stunned by the mechanics and details. Along with pulling triggers to fire webs, each button accessed a different type of web cartridge in your web shooter. So, like Spidey, I had to be both strategic and adaptive in each changing scenario. I actually felt like I was shooting webs and pulling large crates around… I honestly spent most of my time seeing how far the webs could go and what they could stick to — it was amazing!

The Tech
With the power of thousands of workstations, servers and over a petabyte of storage from Dell, Sony Pictures Imageworks and other studios, such as MPC and Method, were able to create the visual effects for the Spider-Man: Homecoming film. The Virtual Reality Experience actually pulled the same models, assets and details used in the film, giving users a truly awesome and immersive experience.

When I asked what this particular VR experience would cost your typical consumer, I was told that, when developing the game, Dell researched the major VR headsets and workstations and set a performance benchmark, so most consumers should be able to run the experience without a significant difference in quality.

Along with the VR game, Dell also showcased its new gaming laptop: the Inspiron 15 7000. With a quad-core 7th-generation Intel Core H-class processor and Nvidia GeForce GTX 1050/1050 Ti graphics, the laptop is marketed for hardcore gaming. It has a tough-yet-sleek design that’s appealing to the eye. However, I was more impressed with its power and potential. The junket had one of these new Inspiron laptops running the recently rebooted Killer Instinct fighting game (which, ironically, was my very first video game on the Super Nintendo… I guess violent video games did an okay job raising me). As a fighting game fanatic and occasional competitor, I have to say the game ran very smoothly. I couldn’t spot latency between inputs from the USB-connected Xbox One controllers or any frame skipping. It does what it says it can do!

The Inspiron 15 7000 was also featured in the Spider-Man: Homecoming film, used by Jacob Batalon’s character, Ned, to aid Peter Parker in his web-tastic mission.

I was also lucky enough to try out Sony Future Lab Program’s projector-based interactive Find Spider-Man game, where the game’s “screen” is projected on a table from a depth-perceiving projector lamp. A blank board served as a scroll to maneuver a map of New York City, while piles of movable blocks were recognized as buildings and individual floors. Sometimes Spidey was found sitting on the roof, while other times he was hiding inside on one of the floors.

All in all, the Dell and Sony Pictures Imageworks partnership provided some sensational insight into what being Spider-Man is like through their technology and innovation, and I hope to see it evolve even further alongside more Spider-Man: Homecoming films.

The Spider-Man: Homecoming Virtual Reality Experience arrives on June 30th for all major VR platforms. Marvel’s Spider-Man: Homecoming releases in theaters on July 7th.


Jay Choi is a Korean-American screenwriter, who has an odd fascination with Lego minifigures, a big heart for his cat Sula, and an obsession with all things Spider-Man. He is currently developing an animated television pitch he sold to Nickelodeon and resides in Brooklyn.

A closer look at Southpaw’s audio

Director Antoine Fuqua and the film’s sound team talked about their process during a panel at Sony Pictures.

By Mel Lambert

With Oscar buzz swirling around the film Southpaw, director Antoine Fuqua paid tribute to his sound crew on The Weinstein Company’s drama during a screening and Q&A session on the Cary Grant Stage at Sony Pictures in Culver City — the same venue where the film’s soundtrack was re-recorded earlier this year.

The event was co-moderated by Cinema Audio Society president Mark Ulano and Motion Picture Sound Editors president Frank Morrone; it was introduced by MPSE president-elect Tom McCarthy, Sony Pictures Studios’ EVP of post production facilities.

The film depicts the decline and rise of former World Light Heavyweight Champion Billy Hope (Jake Gyllenhaal), who turns to trainer Tick Wills (Forest Whitaker) for help getting his life back on track after losing his wife (Rachel McAdams) in a tragic accident and his daughter Leila (Oona Laurence) to child protective services. Once the custody of his daughter falls into question, Hope decides to regain his former life by returning to the ring for a grudge match in Las Vegas with Miguel “Magic” Escobar (Miguel Gomez).

“Boxing is a violent sport,” Fuqua told the large audience of industry pros and guests. “It’s always best to be ready to train or you’re going to get hurt! I spent a lot of time with the actors preparing them for their roles, and on Jake’s pivotal relationship with his daughter, but I had to make sure that Jake’s character wasn’t too consumed by anger. If you don’t control your anger [in the boxing ring] you cannot control your performance.”

Fuqua is best known for his work on Training Day, as well as The Replacement Killers, King Arthur, Shooter, Olympus Has Fallen and The Equalizer. He has also directed a number of music videos for artists such as Prince, Stevie Wonder and Coolio. The latter’s Gangsta’s Paradise rap video won a Young Generators Award.

Director Antoine Fuqua with his Southpaw sound crew.

Director Antoine Fuqua (center, leather jacket) with the panel.

Fuqua revealed that he has worked with most of the crew since Training Day (2001), his major directorial debut. “I like to give them a copy of the script as early as possible so that they can prepare” for the editorial and post process. “The script shows me the ‘nuts and bolts’ of the film,” stated production mixer Ed Novick. “It shows the planned environments and gives me an idea of how I can capture the sound. Most of the key boxing matches were staged as a TV event, like an audience watching an HBO Production, for example. I placed mics in the corners of the boxing ring, on the referee and around the audience areas.”

“I drove Ed crazy,” Fuqua said. “I gave the actors the freedom to improvise; Jake is that type of actor and he just went with it! But often we had no idea where we were heading — we were just riffing a lot of the time to get the fire going — but Ed did an amazing job of securing what we were looking for.”

“The actors were very cooperative and very accommodating to my needs,” said Novick. “They wore mics while fighting, and Jake and Rachel helped me get great tracks.”

“Sound secured from the set is always the best,” added the film’s dialog/music re-recording mixer, Steve Pederson. “There was very little ADR on this film — most of it is production.”

“We developed a wide range of crowd sounds, which became our medium shots,” explained supervising sound editor Mandell Winter, MPSE.

Sound designer David Esparza and supervising sound editor Mandell Winter

“We made a number of ambience recordings during HBO boxing matches in Las Vegas using microphones located around the perimeter of the boxing ring and under the balcony, as well as mounting a DPA 5100 surround mic below the press box and camera platforms,” added sound designer David Esparza, MPSE. “We covered every angle we could to place the action into the middle of the ring using the sound of real crowds, and not effects libraries.”

As sound effects re-recording mixer Dan Leahy stated: “We used a combination of close-up and distant sounds to accurately locate the audience in the center of the fighting action.”

“It’s all about using sound to reinforce the feeling and emotion of a scene,” stressed Fuqua.

Picture editor John Refoua, ACE, added that “the sound also drove the cut. We had an initial mix with pre-cut effects — the final mix evolved with effects being cut at different audio frequencies to heighten the crowd’s excitement. It was an amazing process to witness, to have the soundtrack evolve during that period.”

“You could feel the heart beat rising,” Fuqua added.

For the major fight at the end of the film, Refoua recalled that there were 12 cameras running simultaneously, including a handful of Canon EOS-5D DSLRs assigned to the press. “That was a lot of footage,” he recalled. “We looked at it all, one shot at a time, and made decisions about which one worked better than another.”

Originally, the final boxing match was choreographed for six rounds, “but we then cut it into 12,” continued Refoua. “We stretched and took alternate takes to build the other rounds.”

Regarding the use of a haunting score by the late James Horner, music editor Joe E. Rand said that the composer was drawn to the film because of the intimate father/daughter relationship, “and looked to different harmonic structures and balances” to reinforce that core element.

But the sound for one pivotal scene didn’t run as expected. “For the graveyard scene [between Gyllenhaal and Laurence, at the grave of the lead character’s wife] we lost most of the radio mics,” reported Winter. “We had a lot of RF hits and [because of camera angles] the boom mic wasn’t close to the actors. The only viable track was Oona [Laurence]’s lavaliere, which still had RF dropouts on it — iZotope RX saved the day.”

“We needed to use iZotope to extract the signal from the RF noise,” recalled re-recording mixer Pederson. “Mandell [Winter] and I were surprised it worked out so well.”

“No director can make a movie by themselves,” concluded Fuqua. “The sound crew all came up with creative ideas that I needed to hear. After all, moviemaking is a highly collaborative effort.”

Mel Lambert is principal of Content Creators, an LA-based editorial service. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Sony gives Rita Hayworth, Gene Kelly a 4K makeover in ‘Cover Girl’

Sony Pictures Entertainment has completed an all-new 4K restoration of Cover Girl, director Charles Vidor’s 1944 Technicolor musical starring Rita Hayworth and Gene Kelly. The restoration, completed under the supervision of Sony’s Grover Crisp, premiered at New York’s Museum of Modern Art during To Save and Project, its 13th international festival of film preservation.

Cover Girl was Columbia Pictures’ first big film shot in the Technicolor three-strip process. For the new 4K restoration, the team went back to the original three-strip nitrate camera negatives.

“There was a preservation initiative with this film in the 1990s that involved making some positive intermediate elements for video transfer, but our current process dictates that we source the most original materials possible to come up with the best visual result for our 4K workflow,” recalls Crisp, who is EVP of asset management, film restoration and digital mastering at Sony Pictures. “The technical capabilities that we have now allow us to digitally recombine the three separate black and white negatives to create a color image that is virtually free of the fuzzy registration issues inherent in the traditional analog work, in addition to the usual removal of scratches and other physical flaws in the film.”

Crisp says they tried to stay as true to the Technicolor look as possible. “That specific kind of look is impossible to match exactly as it was in the original work from the 1940s and 1950s for a variety of reasons. With original sources for reference, however, it gives us a good target to aim for.”

The greater color range facilitated the recreation of a Technicolor look that is as authentic as possible, especially where original dye transfer prints were available as reference points.

In terms of challenges, Crisp says that aside from the usual number of torn frames, scratches and dirt embedded in the emulsion of the film, there is always the issue of color breathing when working with the three-strip Technicolor films. “It is an inconsistent problem and can be very difficult to address,” he explains. “Kevin Manbeck at MTI Film has developed algorithms to compensate and correct for this problem, and that is a big advancement.”

The film was scanned at Cineric in New York City on their proprietary 4K wetgate scanner.

“Working with our colorist, Sheri Eisenberg, we strived to get the colors, with deep blacks and vibrant reds, right.”

She called on the FilmLight Baselight 8 for the color at Deluxe (formerly ColorWorks) in Culver City. “It is a very robust color correction system, and one that we have used for years on our work,” says Crisp. “The lion’s share of the image restoration was done at L’Immagine Ritrovata, a film restoration and conservation facility in Bologna, Italy. They use a variety of software for image cleanup, though much of this kind of work is manual. This means a lot of individuals sitting at digital workstations working on one frame at a time. At MTI Film, here in Los Angeles, some of the final image restoration was completed, mostly for the removal of gate hairs in numerous shots, something that is very difficult to achieve without leaving digital artifacts.”

Mark Mangini keynotes The Art of Sound Design at Sony Studios

Panels focus on specifics of music, effects and dialog sound design, and immersive soundtracks

By Mel Lambert

Defining a sound designer as somebody “who uses sound to tell stories,” Mark Mangini, MPSE, was adamant that “sound editors and re-recording mixers should be authors of a film’s content, and take creative risks. Art doesn’t get made without risk.”

A sound designer/re-recording mixer at Hollywood’s Formosa Group Features, Mangini outlined his sound design philosophy during a keynote speech at the recent The Art of Sound Design: Music, Effects and Dialog in an Immersive World conference, which took place at Sony Pictures Studios in Culver City.

Mangini is the recipient of three Academy Award nominations, for The Fifth Element (1997), Aladdin (1992) and Star Trek IV: The Voyage Home (1986).

Acknowledging that an immersive soundtrack should fully engage the audience, Mangini outlined two ways to achieve that goal. “Physically, we can place sound around an audience, but we also need to engage them emotionally with the narrative, using sound to tell the story,” he explained to the 500-member audience. “We all need to better understand the role that sound plays in the filmmaking process. For me, sound design is storytelling — that may sound obvious, but it’s worth reminding ourselves on a regular basis.”

While an understanding of the tools available to a sound designer is important, Mangini readily concedes, “Too much emphasis on technology keeps us out of the conversation; we are just seen as technicians. Sadly, we are all too often referred to as ‘The Sound Guy.’ How much better would it be for us if the director asked to speak with the ‘Audiographer,’ the ‘Director of Sound’ or the ‘Sound Artist’ — terms that better describe what we actually do? After all, we don’t refer to a cinematographer as ‘The Image Guy.’”

Mangini explained that he always tries to emphasize the why and not the how, and is not tempted to imitate somebody else’s work. “After all, when you imitate you ensure that you will only be ‘almost’ as good as the person or thing you imitate. To understand the ‘why,’ I break down the script into story arcs and develop a sound script so I can reference the dramatic beats rather than the visual cues, and articulate the language of storytelling using sound.”

Past Work
Offering up examples of his favorite work as a soundtrack designer, Mangini provided two clips during his keynote. “While working on Star Trek [in 2009] with supervising sound editor Mark Stoeckinger, director J. J. Abrams gave me two days to prepare — with co-designer Mark Binder — a new soundtrack for the two-minute mind meld sequence. J. J. wanted something totally different from what he already had. We scrapped the design work we did on the first day, because it was only different, not better. On day two we rethought how sound could tell the story that J. J. wanted to tell. Having worked on three previous Star Trek projects [different directors], I was familiar with the narrative. We used a complex combination of orchestral music and sound effects that turned the sequence on its head; I’m glad to say that J. J. liked what we did for his film.”

The two collaborators received the following credit: “Mind Meld Soundscape by Mark Mangini and Mark Binder.”

Turning to his second soundtrack example, Mangini recalled receiving a call from Australia about the in-progress soundtrack for George Miller’s Mad Max: Fury Road, the director’s fourth outing with the franchise. “The mix they had prepared in Sydney just wasn’t working for George. I was asked to come down and help re-invigorate the track. One of the obstacles to getting this mix off the ground was the sheer abundance of material to choose from. When you have so many choices on a soundtrack, the mix can be an agonizing process of ‘Sound Design by Elimination.’ We needed to tell him, ‘Abandon what you have and start over.’ It was up to me, as an artist, to tell George that his V8 needed an overhaul and not just a tune-up!”

“We had 12 weeks, working at Formosa with co-supervising sound editor Scott Hecker — and at Warner Bros Studios with re-recording mixers Chris Jenkins and Greg Rudloff — to come up with what George Miller was looking for. We gave each vehicle [during the extended car-chase sequence that opens the film] a unique character with sound, and carefully defined [the lead protagonist Max Rockatansky’s] changing mental state during the film. The desert chase became ‘Moby Dick,’ with the war rig as the white whale. We focused on narrative decisions as we reconstructed the soundtrack, always referencing ‘the why’ for our design choices in order to provide a meaningful sonic immersion. Miller has been quoted as saying, ‘Mad Max is a film where we see with our ears.’ This from a director who has been making films for 40 years!”

His advice to fledgling sound designers? Mangini kept it succinct: “Ask yourself why, not how. Be the author of content, take risks, tell stories.”

Creating a Sonic Immersive Experience
Subsequent panels during the all-day conference addressed how to design immersive music, sound effects and dialog elements used on film and TV soundtracks. For many audiences, a 5.1-channel format is sufficient for carrying music, effects and dialog in an immersive, surround experience, but 7.1-channel — with added side speakers, in addition to the new Dolby Atmos, Barco/Auro 3D and DTS:X/MDA formats — can extend that immersive experience.

“During editorial for Guardians of the Galaxy we had so many picture changes that the re-recording mixers needed all of the music stems and breakouts we could give them,” said music editor Will Kaplan, MPSE, from Warner Bros. Studio Facilities, during the “Music: Composing, Editing and Mixing Beyond 5.1” panel. It was presented by Formosa Group and moderated by scoring mixer Dennis Sands, CAS. “In a quieter movie we can deliver an entire orchestral track that carries the emotion of a scene.”

‘Music:Composing, Editing and Mixing Beyond 5.1’ panel (L-R): Andy Koyama, Bill Abbott, Joseph Magee, moderator Dennis Sands, Steven Saltzman and Will Kaplan.

Describing his collaboration with Tim Burton, music editor Bill Abbott, MPSE, from Formosa reported that the director “liked to hear an entire orchestral track for its energy, and then we recorded it section by section with the players remaining on the stage, which can get expensive!”

Joseph Magee, CAS, (supervising music mixer on such films as Pitch Perfect 2, The Wedding Ringer, Saving Mr. Banks and The Muppets) likes to collaborate closely with the effects editor to decide who handles which elements from each song. “Who gets the snaps and dance shoes? How do we divide up the synchronous ambience and the design ambience? The synchronous ambience from the set might carry tails from the sing-offs, and needs careful matching. What if they pitch shift the recorded music in post? We then need to change the pitch of the music captured in the audience mics using DAW plug-ins.”

“I like to invite the sound designer to the music spotting session,” advised Abbott, “and discuss who handles what — is it a music cue or a sound effect?”

“We need to immerse audiences with sound and use the surrounds for musical elements,” explained Formosa’s re-recording mixer, Andy Koyama, CAS. “That way we have more real estate in the front channels for sound effects.”

“We should get the sound right on the set because it can save a lot of processing time on the dub stage,” advised production mixer Lee Orloff, CAS, during the “A Dialog on Dialog: From Set to Screen” panel moderated by Jeff Wexler, CAS.

‘A Dialog on Dialog: From Set to Screen’ panel (L-R): Lee Orloff, Teri Dorman, CAS president Mark Ulano, moderator Jeff Wexler, Gary Bourgeois, Marla McGuire and Steve Tibbo.

“I recall working on The Patriot, where the director [Roland Emmerich] chose to create ground mist using smoke machines known as Smoker Boats,” recalled Orloff, who received Oscar and BAFTA Awards for Terminator 2: Judgment Day (1991). “The trouble was that they contained noisy lawnmower engines, whose sound could be heard under all of the dialog tracks. We couldn’t do anything about it! But, as it turned out, that low-level noise added to the sense of being there.”

“I do all of my best work in pre-production,” added Wexler, “by working out the noise problems we will face on location. It is more than just the words that we capture; a properly recorded performance tells you so much about the character.”

“I love it when the production track is full of dynamics,” added dialog/music re-recording mixer Gary Bourgeois, CAS. “The voice is an instrument; if I mask out everything that is not needed I lose the ‘essence’ of the character’s performance. The clarity of dialog is crucial.”

“We have tools that can clean up dialog,” conceded supervising sound editor Marla McGuire, MPSE, “but if we apply them too often and too deeply it takes the life out of the track.”

“Sound design can make an important scene more impactful, but you need to remember that you’re working in the service of the film,” advised sound designer/supervising sound editor Richard King, MPSE, during the “Sound Effects: How Far Can You Go?” panel moderated by David Bondelevitch, MPSE, CAS.

‘Sound Effects: How Far Can You Go?’ panel L-R: Mandell Winter, Scott Gershin, moderator David Bondelevitch, Greg Hedgpath, Richard King and Will Files.

In terms of music co-existing with sound effects, Formosa’s Scott Gershin, MPSE, advised, “During a plane crash sequence, I pitch shifted the sound effect to match the music.”

“I like to go to the music spotting session and ask if the director wants the music to serve as a rhythmic or thematic/tonal part of the soundtrack,” added sound effects re-recording mixer Will Files from Fox Post Production Services. “I just take the other one. Or if it’s all rhythm — a train ride, for example — we’ll agree to split [the elements].”

“On the stage, I’m constantly shifting sync and pitch shifting the sound effects to match the music track,” stated Gershin. “For Pacific Rim we had many visual effects arriving late with picture changes. Director Guillermo del Toro received so many new eight-frame VFX cues he wanted to use that the music track ended up looking like a bar code” in the final Pro Tools sessions.

In terms of working with new directors, “I like to let them see some good movies with good sound design to start the conversation,” offered Files. “I front load the process by giving the director and picture editors a great-sounding temp track using dialog predubs that they can load into Avid Media Composer to get them used to our sound ideas. It also helps the producers dazzle the studio!”

“Successful soundtrack design is a collaborative effort from production sound onwards,” advised re-recording mixer Mike Minkler, CAS, during “The Mix: Immersive Sound, Film and Television” panel, presented by DTS and moderated by Mix editor Tom Kenny. “It’s about storytelling. Somebody has to be the story’s guardian during the mix,” stated Minkler, who received Academy Awards for Dreamgirls (2006), Chicago (2002) and Black Hawk Down (2001). “Filmmaking is the ultimate collaboration. We need to be aware of what the director wants and what the picture needs. To establish your authority you need to gain their confidence.”

“For immersive mixes, you should start in Dolby Atmos as your head mix,” advised Jeremy Pearson, CAS, who is currently re-recording The Hunger Games: Mockingjay – Part 2 at Warner Bros. Studio. He also worked in that format on Mockingjay – Part 1 and Catching Fire. “Atmos is definitely the way to go; it’s what everyone can sign off on. In terms of creative decisions during an Atmos mix, I always ask myself, ‘Am I helping the story by moving a sound, or distracting the audience?’ After all, the story is up on the screen. We can enhance sound depth to put people into the scene, or during calmer, gentler scenes you can pinpoint sounds that engage the audience with the narrative.”

Kim Novak Theater at Sony Pictures Studios.

Minkler reported that he is currently working on director Quentin Tarantino’s The Hateful Eight, “which will be released initially for two weeks in a three-hour version on 70mm film to 100 screens, with an immersive 5.1-channel soundtrack mastered to 35mm analog mag.”

Subsequently, the film will be released next year in a slightly different version via a conventional digital DCP.

“Our biggest challenge,” reported Matt Waters, CAS, sound effects re-recording mixer for HBO’s award-winning Game of Thrones, “is getting everything completed in time. Changes are critical and we might spend half a day on a sequence and then have only 10 minutes to update the mix when we receive picture changes.”

“When we receive new visuals,” added Onnalee Blank, CAS, who handles music and dialog re-recording on the show, “[the showrunners] tell us, ‘it will not change the sound.’ But if the boats become dragons…”

Photos by Mel Lambert.

Mel Lambert is principal of Content Creators, an LA-based editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.