Category Archives: Audio Mixing

Post developments at the AES Berlin Convention

By Mel Lambert

The AES Convention returned to Berlin after a three-year absence, and once again demonstrated that the Audio Engineering Society can organize a series of well-attended paper programs, seminars and workshops, in addition to an exhibition of familiar brands, for the European tech-savvy post community. 

Held at the Maritim Hotel in the creative heart of Berlin in late May, the 142nd AES Convention was co-chaired by Sascha Spors from the University of Rostock in Germany and Nadja Wallaszkovits from the Austrian Academy of Sciences. According to AES executive director Bob Moses, attendance was 1,800 — a figure at least 10% higher than last year’s gathering in Paris — with post professionals from several overseas countries, including China and Australia.

During the opening ceremonies, current AES president Alex Case stated that, “AES conventions represent an ideal interactive meeting place,” adding that “social media lacks the one-on-one contact that enhances our communications bandwidth with colleagues and co-workers.” Keynote speaker Dr. Alex Arteaga, whose research integrates aesthetic and philosophical practices, addressed the thorny subject of “Auditory Architecture: Bringing Phenomenology, Aesthetic Practices and Engineering Together,” arguing that when considering the differences between audio soundscapes, “our experience depends upon the listening environment.” His underlying message was that a full appreciation of the various ways in which we hear immersive sounds requires a deeper understanding of how listeners interact with that space.

As part of his Richard C. Heyser Memorial Lecture, Prof. Dr. Jorg Sennheiser outlined “A Historic Journey in Audio-Reality: From Mono to AMBEO,” during which he reviewed the basis of audio perception and the interdependence of hearing with other senses. “Our enjoyment and appreciation of audio quality is reflected in the continuous development from single- to multi-channel reproduction systems that are benchmarked against sonic reality,” he offered. “Augmented and virtual reality call for immersive audio, with multiple stakeholders working together to design the future of audio.”

Post-Focused Technical Papers
There were several interesting technical papers that covered the changing requirements of the post community, particularly in the field of immersive playback formats for TV and cinema. With the new ATSC 3.0 digital television format scheduled to come online soon, including object-based immersive sound, there is increasing interest in techniques for capturing surround material and then delivering the same to consumer audiences.

In a paper titled “The Median-Plane Summing Localization in Ambisonics Reproduction,” Bosun Xie from the South China University of Technology in Guangzhou explained that, while one aim of Ambisonics playback is to recreate the perception of a virtual source in arbitrary directions, practical techniques are unable to recreate the correct high-frequency spectra in binaural pressures that serve as front-back and vertical localization cues. Current research shows that the changes in interaural time difference/ITD that result from head-turning during Ambisonics playback match those of a real source, and hence provide a dynamic cue for vertical localization, especially in the median plane. In addition, the LF virtual source direction can be approximately evaluated by using a set of panning laws.
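The head-turning cue the paper describes can be sketched numerically. Here is a minimal example, assuming the classic spherical-head Woodworth approximation for ITD (the head radius and speed of sound are illustrative defaults, not values from the paper):

```python
import math

def woodworth_itd(azimuth, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a far-field
    source at `azimuth` radians, per the spherical-head Woodworth model."""
    return (head_radius / c) * (azimuth + math.sin(azimuth))

# A source in the median plane produces no ITD on its own...
print(woodworth_itd(0.0))  # 0.0

# ...but turning the head 10 degrees makes the source 10 degrees lateral
# relative to the ears, creating a small, measurable ITD (roughly 89 µs)
print(round(woodworth_itd(math.radians(10)) * 1e6, 1))
```

For a source behind the listener the same head turn produces an ITD of the opposite sign, which is exactly the dynamic front-back disambiguation the research relies on.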

“Exploring the Perceptual Sweet Area in Ambisonics,” presented by Matthias Frank from the University of Music in Graz, Austria, described how the sweet-spot area of typical Ambisonics reproduction does not match the large listening area needed in real-world applications. He described a method to experimentally determine the perceptual sweet area by assessing the localization of both dry and reverberant sounds using different Ambisonic encoding orders.

Another paper, “Perceptual Evaluation of Synthetic Early Binaural Room Impulse Responses Based on a Parametric Model,” presented by Philipp Stade from the Technical University of Berlin, described how an acoustical environment can be modeled using sound-field analysis plus spherical head-related impulse responses/HRIRs — and the results compared with measured counterparts. Apparently, the selected listening experiment showed comparable performance and, in the main, was independent of room and test signals. (Perhaps surprisingly, the synthesis of direct sound and diffuse reverberation yielded almost the same results as the parametric model.)

“Influence of Head Tracking on the Externalization of Auditory Events at Divergence between Synthesized and Listening Room Using a Binaural Headphone System,” presented by Stephan Werner from the Technical University of Ilmenau, Germany, reported on a study using a binaural headphone system that considered the influence of head tracking on the localization of auditory events. Recordings were conducted of impulse responses from a five-channel loudspeaker set-up in two different acoustic rooms. Results revealed that head tracking increased sound externalization, but that it did not overcome the room-divergence effect.

Heiko Purnhagen from Dolby Sweden, in a paper called “Parametric Joint Channel Coding of Immersive Audio,” described a coding scheme that can deliver channel-based immersive audio content in such formats as 7.1.4, 5.1.4 or 5.1.2 at very low bit rates. Based on a generalized approach for parametric spatial coding of groups of two, three or more channels using a single downmix channel, together with a compact parametrization that guarantees full covariance reinstatement in the decoder, the coding scheme is implemented using the standardized A-JCC tool in Dolby AC-4.
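The covariance-reinstatement idea can be sketched generically: the decoder rebuilds a channel group from the single downmix plus decorrelated signals, then shapes them so their covariance matches the transmitted parameters. The toy whitening/coloring construction below is the standard linear-algebra trick such schemes rely on, not Dolby's actual A-JCC algorithm; all signals and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# a "group" of three correlated channels (samples x channels)
common = rng.standard_normal((4096, 1))
group = common @ np.array([[1.0, 0.8, 0.5]]) + 0.3 * rng.standard_normal((4096, 3))

n = len(group)
cov = group.T @ group / n        # channel covariance, sent as side parameters
downmix = group.mean(axis=1)     # the single downmix channel

# decoder side: the downmix plus two decorrelated signals as raw material
basis = np.column_stack([downmix, rng.standard_normal((4096, 2))])
b_cov = basis.T @ basis / n

# whiten the basis, then color it with the Cholesky factor of the
# transmitted covariance: the outputs then reproduce that covariance
white = basis @ np.linalg.inv(np.linalg.cholesky(b_cov)).T
out = white @ np.linalg.cholesky(cov).T

print(np.allclose(out.T @ out / n, cov))  # True
```

The point of the "compact parametrization" in the paper is that only the downmix and a small covariance description cross the channel, yet the decoder can restore the full inter-channel relationships.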

Hardware Choices for Post Users
Several manufacturers demonstrated compact near-field audio monitors targeted at editorial suites and pre-dub stages. Adam Audio focused on its new near/mid-field S Series, which uses the firm’s ART (Accelerating Ribbon Technology) tweeter. The five models comprise the S2V, S3H, S3V, S5V and S5H, available in horizontal (H) or vertical (V) orientations. The firm’s newly developed LF and mid-range drivers, with custom-designed waveguides for the tweeter (and for the MF driver on the larger, multi-way models), are powered by a new DSP engine that “provides crossover optimization, voicing options and expansion potential,” according to the firm’s head of marketing, Andre Zeugner.

The Eve Audio SC203 near-field monitor features a three-inch LF/MF driver plus an AMT ribbon tweeter, and is supplied with a v-shaped rubberized pad that allows the user to decouple the loudspeaker from its base and reduce unwanted resonances while positioning it flat or at a 7.5- or 15-degree angle. An adapter enables mounting directly on any microphone or speaker stand with a 3/8-inch thread. Integral DSP and a passive radiator located at the rear are said to reinforce LF reproduction to provide a response down to 62Hz (-3dB).

Genelec showcased The Ones, a series of point-source monitors comprising the current three-way Model 8351 plus the new two-way Model 8331 and three-way Model 8341. All three units include a coaxial MF/HF driver plus two acoustically concealed LF drivers for vertical and horizontal operation. A new Minimum Diffraction Enclosure/MDE is featured, together with the firm’s loudspeaker management and alignment software via a dedicated Cat5 network port.

The Neumann KH-80 DSP near-field monitor is designed to offer automatic system alignment using the firm’s control software that is said to “mathematically model dispersion to deliver excellent detail in any surroundings.” The two-way active system features a four-inch LF/MF driver and one-inch HF tweeter with an elliptical, custom-designed waveguide. The design is described as offering a wide horizontal dispersion to ensure a wide sweet spot for the editor/mixer, and a narrow vertical dispersion to reduce sound reflections off the mix console.

To handle multiple monitoring sources and loudspeaker arrays, the Trinnov D-Mon Series controllers enable stereo to 7.1-channel monitoring from both analog and digital I/Os using Ethernet- and/or MIDI-based communication protocols and a fast-switching matrix. An internal mixer creates various combinations of stems, main or aux mixes from discrete inputs. An Optimizer processor offers tuning of the loudspeaker array to match studio acoustics.

Unveiled at last year’s AES Convention in Paris, the Eventide H9000 multichannel/multi-element processing system has been under constant development during the past 12 months, with new functions targeted at film and TV post, including EQ, dynamics and reverb effects. DSP elements can be run in parallel or in series to create multiple, fully-programmable channel strips per engine. Control plug-ins for Avid Pro Tools and other DAWs are being finalized, together with Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking.

Filmton, the German association for film sound professionals, explained to AES visitors its objective “to reinforce the importance of sound at an elemental level for the film community.” The association promotes the appreciation of film sound, together with the local film industry and its policy toward the public, while providing “an expert platform for technical, creative and legal issues.”

Philipp Sehling

Lawo demonstrated the new mc²96 Grand Audio production console, an IP-based networkable design for video post production, available with up to 200 on-surface faders. Innovative features include automatic gain control across multiple channels and miniature TFT color screens above each fader that display LiveView thumbnails of the incoming channel sources.

Stage Tec showed new processing features for its Crescendo Platinum TV post console, courtesy of v4.3 software, including an automixer based on gain sharing that can be used on every input channel, loudness metering to EBU R128 for sum and group channels, a de-esser on every channel path, and scene automation with individual user-adjustable blend curves and times for each channel.

Avid demonstrated native support for the new 7.1.2 Dolby Atmos channel-bed format — basically the familiar 7.1-channel bed with two height channels — for editorial suites and consumer remastering, plus several upgrades for Pro Tools, including new panning software for object-based audio and the ability to switch between automatable object and bus outputs. Pro Tools HD is said to be the only DAW natively supporting in-the-box Atmos mixing for this 10-channel 7.1.2 format. Full integration for Atmos workflows is now offered for control surfaces such as the Avid S6.
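The channel arithmetic behind the 7.1.2 bed is easy to spell out. A quick sketch (the channel labels follow common Dolby naming conventions and are illustrative):

```python
# The 7.1.2 Atmos bed: the familiar 7.1 layout plus two overhead
# ("top surround") channels, ten discrete channels in total.
bed_7_1 = ["L", "R", "C", "LFE", "Lss", "Rss", "Lsr", "Rsr"]
heights = ["Lts", "Rts"]
bed_7_1_2 = bed_7_1 + heights

print(len(bed_7_1_2))  # 10
```

The x.y.z shorthand reads as x full-range ear-level channels, y LFE channel, z height channels, which is why 7.1.2 is the 10-channel format the article refers to.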

Jon Schorah

There was a new update to Nugen Audio’s popular Halo Upmix plug-in for Pro Tools — in addition to stereo to 5.1, 7.1 or 9.1 conversion it is now capable of delivering 7.1.2-channel mixes for Dolby Atmos soundtracks.

A dedicated Dante Pavilion featured several manufacturers that offer network-capable products, including Solid State Logic, whose Tempest multi-path processing engine and router is now fully Audinate Dante-capable for T Series control surfaces with unique arbitration and ownership functions; Bosch RTS intercom systems featuring Dante connectivity with OCA system control; HEDD/Heinz Electrodynamic Designs, whose Series One monitor speakers feature both Dante and AES67/Ravenna ports; Focusrite, whose RedNet series of modular pre-amps and converters offer “enhanced reliability, security and selectivity” via Dante, according to product specialist for EMEA/Germany, Dankmar Klein; and NTP Technology’s DAD Series DX32R and RV32 Dante/MADI router bridges and control room monitor controllers, which are fully compatible with Dante-capable consoles and outboard systems, according to the firm’s business development manager Jan Lykke.

What’s Next For AES
The next European AES convention will be held in Milan during the spring of 2018. “The society also is planning a new format for the fall convention in New York,” said Moses, as the AES is now aligning with the National Association of Broadcasters. “Next January we will be holding a new type of event in Anaheim, California, to be titled AES @ NAMM.” Further details will be unveiled next month. He also explained there will be no West Coast AES Convention next year. Instead the AES will return to New York in the autumn of 2018 with another joint AES/NAB gathering at the Jacob K. Javits Convention Center.


Mel Lambert is an LA-based writer and photographer. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Recording live musicians in 360

By Luke Allen

I’ve had the opportunity to record live musicians in a couple of different in-the-field scenarios for 360 video content. In some situations — such as the ubiquitous 360 rock concert video — simply having access to the board feed is all one needs to create a pretty decent spatial mix (although the finer points of that type of mix would probably fill up a whole different article).

But what if you’re shooting in an acoustically interesting space where intimacy and immersion are the goal? What if you’re in the field in the middle of a rainstorm without access to AC power? It’s clear that in most cases, some combination of ambisonic capture and close micing is the right approach.

What I’ve found is that in all but a few elaborate set-ups, a mobile ambisonic recording rig (in my case, built around the Zaxcom Nomad and Soundfield SPS-200) — in addition to three to four omni-directional lavs for close micing — is more than sufficient to achieve excellent results. Last year, I had the pleasure of recording a four-piece country ensemble in a few different locations around Ireland.

Micing a Pub
For this particular job, I had the SPS and four lavs. For most of the day I had planted one Sanken COS-11 on the guitar, one on the mandolin, one on the lead singer and a DPA 4061 inside the upright bass (which sounded great!). Then, for the final song, the band wanted to add a fiddle to the mix — yet I was out of mics to cover everything. We had moved into the partially enclosed porch area of a pub with the musicians perched in a corner about six feet from the camera. I decided to roll the dice and trust the SPS to pick up the fiddle, which I figured would be loud enough in the small space that a lav wouldn’t be used much in the mix anyway. In post, the gamble paid off.

I was glad to have kept the quieter instruments mic’d up (especially the singer and the bass), while the fiddle lead parts sounded fantastic on the ambisonic recordings alone. This is one huge reason why it’s worth using higher-end ambisonic mics: you can trust them to provide fidelity for more than just ambient recordings.
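For readers curious how a close-mic'd lav ends up inside a spatial mix like this: folding a mono spot mic into a first-order ambisonic bed amounts to encoding it at the instrument's direction. A minimal sketch using the standard FuMa B-format panning equations (the scenario and gain of 1.0 are illustrative):

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """Encode one mono sample into first-order B-format (FuMa W, X, Y, Z)
    for a source at the given azimuth/elevation in radians."""
    w = sample / math.sqrt(2)                             # omni component
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z

# a lav signal panned 30 degrees to the left of camera, at ear level
w, x, y, z = encode_bformat(1.0, math.radians(30), 0.0)
print(round(y, 3))  # 0.5 — half-strength in the left-right component
```

Applied per-sample (or per-buffer) across each lav track, this is how the spot mics and the SPS-200's native B-format output can be summed into one ambisonic mix.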

An Orchestra
In another recent job, I was mixing for a 360 video of an orchestra. During production we moved the camera/sound rig around to different locations in a large rehearsal stage in London. Luckily, on this job we were able to also run small condensers into a board for each orchestra section, providing flexibility in the mix. Still, in post, the director wanted the spatial effect to be very perceptible and dynamic as we jump around the room during the lively performance. The SPS came in handy once again; not only does it offer good first-order spatial fidelity but a wide enough dynamic range and frequency response to be relied on heavily in the mix in situations where the close-mic recordings sounded flat. It was amazing opening up those recordings and listening to the SPS alone through a decent HRTF — it definitely exceeded my expectations.

It’s always good to be as prepared as possible when going into the field, but you don’t always have the budget or space for tons of equipment. In my experience, one high-quality and reliable ambisonic mic, along with some auxiliary lavs and maybe a long shotgun, are a good starting point for any field recording project for 360 video involving musicians.


Sound designer and composer Luke Allen is a veteran spatial audio designer and engineer, and a principal at SilVR in New York City. He can be reached at luke@silversound.us.


Nutmeg and Nickelodeon team up to remix classic SpongeBob songs

New York creative studio Nutmeg Creative was called on by Nickelodeon to create trippy music-video-style remixes of some classic SpongeBob SquarePants songs for the kids network’s YouTube channel. Catchy, sing-along kids’ songs have been an integral part of SpongeBob since its debut in 1999.

Though there are dozens of unofficial fan remixes on YouTube, Nickelodeon frequently turns to Nutmeg for official remixes: vastly reimagined versions accompanied by trippy, trance-inducing visuals that inevitably go viral. It all starts with the music, and the music is inspired by the show.

Infused with the manic energy of classic Warner Bros. Looney Tunes, SpongeBob is simultaneously slapstick and surreal, with an upbeat vibe that has attracted a cult-like following from the get-go. Now in its 10th season, SpongeBob attracts fans that span two generations: kids who grew up watching SpongeBob now have kids of their own.

The show’s sensibility and multi-generational audience inform the approach of Nutmeg sound designer, mixer and composer JD McMillin, whose remixes of three popular and vintage SpongeBob songs have become viral hits: Krusty Krab Pizza and Ripped My Pants from 1999, and The Campfire Song Song (yes, that’s correct) from 2004. With musical styles ranging from reggae, hip-hop and trap/EDM to stadium rock, drum and bass and even Brazilian dance, McMillin’s remixes expand the appeal of the originals with ear candy for whole new audiences. That’s why, when Nickelodeon provides a song to Nutmeg, McMillin is given free rein to remix it.

“No one from Nick is sitting in my studio babysitting,” he says. “They could, but they don’t. They know that if they let me do my thing they will get something great.”

“Nickelodeon gives us a lot of creative freedom,” says executive producer Mike Greaney. “The creative briefs are, in a word, brief. There are some parameters, of course, but, ultimately, they give us a track and ask us to make something new and cool out of it.”

All three remixes have collectively racked up hundreds of thousands of views on YouTube, with The Campfire Song Song remix generating 655K views in less than 24 hours on the SpongeBob Facebook page.

McMillin credits the success to the fact that Nutmeg serves as a creative collaborative force: what he delivers is more reinvention than remix.

“We’re not just mixing stuff,” he says. “We’re making stuff.”

Once Nick signs off on the audio, that approach continues with the editorial. Editors Liz Burton, Brian Donnelly and Drew Hankins each bring their own unique style and sensibility, with graphic effects designer Stephen C. Walsh adding the finishing touches.

But Greaney isn’t always content with cut, shaken and stirred clips from the show, going the extra mile to deliver something unexpected. Case in point: he recently donned a pair of red track pants and high-kicked in front of a greenscreen to add a suitably outrageous element to the Ripped My Pants remix.

In terms of tools used for audio work, Nutmeg used Ableton Live, Native Instruments Maschine and Avid Pro Tools. For editorial they called on Avid Media Composer, Sapphire and Boris FX. Graphics were created in Adobe After Effects and Mocha Pro.


Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have just offered audio-for-VR services only, but instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but determining which objects will lead the viewer, via their sound cues, into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.
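A sketch of what "audio that moves with the viewer" means in practice: each object is re-panned from its angle relative to the current head orientation, so it stays fixed in the 360 scene as the viewer looks around. This is a simplified equal-power stereo version (the sign conventions and the stereo reduction are assumptions for illustration; real object mixers render binaurally):

```python
import math

def pan_gains(rel_azimuth):
    """Equal-power left/right gains for a source at rel_azimuth radians
    relative to the head (-pi/2 = hard left, +pi/2 = hard right)."""
    theta = (rel_azimuth + math.pi / 2) / 2   # map to 0..pi/2
    return math.cos(theta), math.sin(theta)   # (left, right)

def object_gains(object_azimuth, head_yaw):
    """Keep an object fixed in the scene by re-panning it from the
    object's angle relative to the current head orientation."""
    return pan_gains(object_azimuth - head_yaw)

# object dead ahead; the viewer turns their head 45 degrees to the right,
# so the object should now sit toward the listener's left
left, right = object_gains(0.0, math.radians(45))
print(left > right)  # True
```

Because the gains are cosine/sine pairs, total power is constant for any head orientation, so objects move smoothly around the listener without level jumps.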

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to take into account the extra time and instruction for your clients to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid Pro Tools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey


Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for sound editing director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at the Technicolor at Paramount lot. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”

One challenge of the mix and design was to make sure the audience knew the location of a sound by the texture of it. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we are in the basement, and the music and dialogue is happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound is coming from. Everything we did helped the picture show location.”

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what’s happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. “We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy boot on wood floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he’s angry. You can feel it in every footstep he takes. He’s throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene,” says Sullivan.

While the film often feels realistic, there were stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper’s house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. “The editing sounds silky smooth. When we transition out of the chanting on-camera, then that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing,” says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.


Lime opens sound design division led by Michael Anastasi, Rohan Young

Santa Monica’s Lime Studios has launched a sound design division. LSD (Lime Sound Design), featuring newly signed sound designer Michael Anastasi and Lime sound designer/mixer Rohan Young, has already created sound design for national commercial campaigns.

“Having worked with Michael since his early days at Stimmung and then at Barking Owl, he was always putting out some of the best sound design work, a lot of which we were fortunate to be final mixing here at Lime,” says executive producer Susie Boyajan, who collaborates closely with Lime and LSD owner Bruce Horwitz and the other company partners — mixers Mark Meyuhas and Loren Silber. “Having Michael here provides us with an opportunity to be involved earlier in the creative process, and provides our clients with a more streamlined experience for their audio needs. Rohan and Michael were often competing for some of the same work, and share a huge client base between them, so it made sense for Lime to expand and create a new division centered around them.”

Boyajan points out that “all of the mixers at Lime have enjoyed the sound design aspect of their jobs, and are really talented at it, but having a new division with LSD that operates differently than our current, hourly sound design structure makes sense for the way the industry is continuing to change. We see it as a real advantage that we can offer clients both models.”

“I have always considered myself a sound designer that mixes,” notes Young. “It’s a different experience to be involved early on and try various things that bring the spot to life. I’ve worked closely with Michael for a long time. It became more and more apparent to both of us that we should be working together. Starting LSD became a no-brainer. Our now-shared resources, with the addition of a Foley stage and location audio recordists only make things better for both of us and even more so for our clients.”

Young explains that setting up LSD as its own sound design division, as opposed to bringing in Michael to sound design at Lime, allows clients to separate the mix from the sound design on their production if they choose.

Anastasi joins LSD from Barking Owl, where he spent the last seven years creating sound design for high-profile projects and building long-term creative collaborations with clients. Michael recalls his fortunate experiences recording sounds with John Fasal, and Foley sessions with John Roesch and Alyson Dee Moore as having taught him a great deal of his craft. “Foley is actually what got me to become a sound designer,” he explains.

Projects that Anastasi has worked on include the PSA on human trafficking called Hide and Seek, which won an AICP Award for Sound Design. He also provided sound design for the feature film Casa De Mi Padre, starring Will Ferrell, and served as its sound supervisor. For Nike’s Together project featuring LeBron James, a two-minute black-and-white piece, Anastasi traveled back to LeBron’s hometown of Cleveland to record 500-plus extras.

Lime is currently building new studios for LSD, featuring a team of sound recordists and a stand-alone Foley room. The LSD team is in the midst of a series of projects launching this spring, including commercial campaigns for Nike, Samsung, StubHub and Adobe.

Main Image: Michael Anastasi and Rohan Young.


The sound of John Wick: Chapter 2 — bigger and bolder

The director and audio team share their process.

By Jennifer Walden

To achieve the machine-like precision of assassin John Wick for director Chad Stahelski’s signature gun-fu-style action films, Keanu Reeves (Wick) goes through months of extensive martial arts and weapons training. The result is worth the effort. Wick is fast, efficient and thorough. You cannot fake his moves.

In John Wick: Chapter 2, Wick is still trying to retire from his career as a hitman, but he’s asked for one last kill. Bound by a blood oath, it’s a job Wick can’t refuse. Reluctantly, he goes to work, but by doing so, he’s dragged further into the assassin lifestyle he’s desperate to leave behind.

Chad Stahelski

Stahelski builds a visually and sonically engaging world on-screen, and then fills it full of meticulously placed bullet holes. His inspiration for John Wick comes from his experience as a stunt man and martial arts stunt coordinator for Lilly and Lana Wachowski on The Matrix films. “The Wachowskis are some of the best world creators in the film industry. Much of what I know about sound and lighting has to do with their perspective that every little bit helps define the world. You just can’t do it visually. It’s the sound and the look and the vibe — the combination is what grabs people.”

Before the script on John Wick: Chapter 2 was even locked, Stahelski brainstormed with supervising sound editor Mark Stoeckinger and composer Tyler Bates — alumni of the first Wick film — and cinematographer Dan Laustsen on how they could go deeper into Wick’s world this time around. “It was so collaborative and inspirational. Mark and his team talked about how to make it sound bigger and more unique; how to make this movie sound as big as we wanted it to look. This sound team was one of my favorite departments to work with. I’ve learned more from those guys about sound in these last two films than I thought I had learned in the last 15 years,” says Stahelski.

Supervising sound editor Stoeckinger, at the Formosa Group in West Hollywood, knows action films. Mission: Impossible II and III, both Jack Reacher films, Iron Man 3 and the upcoming (April) The Fate of the Furious are just part of his film sound experience. Gun fights, car chases, punches and impacts — Stoeckinger knows that all those big sound effects in an action film can compete with the music and dialogue for space in a scene. “The more sound elements you have, the more delicate the balancing act is,” he explains. “The director wants his sounds to be big and bold. To achieve that, you want to have a low-frequency punch to the effects. Sometimes, the frequencies in the music can steal all that space.”

The Sound of Music
Composer Bates’s score was big and bold, with lots of percussion, bass and strong guitar chords that existed in the same frequency range as the gunshots, car engines and explosions. “Our composer is very good at creating a score that is individual to John Wick,” says Stahelski. “I listened to just the music, and it was great. I listened to just the sound design, and that was great. When we put them together we couldn’t understand what was going on. They overlapped that much.”

During the final mix at Formosa’s Stage B on The Lot, re-recording mixers Andy Koyama and Martyn Zub — who both mixed the first John Wick — along with Gabe Serrano, approached the fight sequences with effects leading the mix, since those needed to match the visuals. Then Koyama made adjustments to the music stems to give the sound effects more room.

“Andy made some great suggestions, like if we lowered the bass here then we can hear the effects punch more,” says Stahelski. “That gave us the idea to go back to our composers, to the music department and the music editor. We took it to the next level conceptually. We had Tyler [Bates] strip out a lot of the percussion and bass sounds. Mark realized we have so many gunshots, so why not use those as the percussion? The music was influenced by the amount of gunfire, sound design and the reverb that we put into the gunshots.”

Mark Stoeckinger

The music and sound departments collaborated through the last few weeks of the final mix. “It was a really neat, synergistic effect of the sound and music complementing each other. I was super happy with the final product,” says Stahelski.

Putting the Gun in Gun-Fu
As its name suggests, gun-fu involves a range of guns — handguns, shotguns and assault rifles. It was up to sound designer Alan Rankin to create a variety of distinct gun effects that not only sounded different from weapon to weapon but also differentiated between John Wick’s guns and the bad guys’ guns. To make Wick’s guns sound more powerful and complex than his foes’, Rankin added different layers of air, boom and mechanical effects. To distinguish one weapon from another, Rankin layered the sounds of several different guns together to make a unique sound.

The result is the type of gun sound that Stoeckinger likes to use on the John Wick films. “Even before this film officially started, Alan would present gun ideas. He’d say, ‘What do you think about this sound for the shotgun? Or, ‘How about this gun sound?’ We went back and forth many times, and once we started the film, he took it well beyond that.”

Rankin developed the sounds further by processing his effects with EQ and limiting to help the gunshots punch through the mix. “We knew we would inevitably have to turn the gunshots down in the mix due to conflicts with music or dialogue, or just because of the sheer quantity of shots needed for some of the scenes,” Rankin says.
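Rankin’s EQ-and-limiting pass can be loosely illustrated in code. This is not his actual processing chain, just a minimal sketch under stated assumptions (instantaneous clamping instead of a real limiter’s attack/release envelope, and made-up sample values) of why limiting helps a gunshot punch through at lower fader levels:

```python
def punch_up(samples, gain=2.0, ceiling=0.9):
    """Boost a signal, then clamp its peaks to a ceiling.

    A deliberately crude stand-in for the EQ/limiting pass described
    above: real mix limiters use attack/release envelopes rather than
    an instant clamp. The point survives, though: limiting raises the
    average level while peaks stay capped, so a gunshot stays audible
    when its fader comes down in a busy mix.
    """
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# A unit-scale transient before and after: peaks cap at the ceiling
# while the quieter body of the sound comes up in level.
pulse = [0.0, 0.25, 1.0, -1.0, 0.1]
limited = punch_up(pulse)
```

Run on the sample pulse, the full-scale peaks land at ±0.9 while the low-level tail doubles in amplitude, which is the trade a limiter makes.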

Each gun battle was designed entirely in post, since the guns on-screen weren’t shooting live rounds. Rankin spent months designing and evolving the weapons and bullet effects in the fight sequences. He says, “Occasionally there would be a production sound we could use to help sell the space, but for the most part it’s all a construct.”

There were unique hurdles for each fight scene, but Rankin feels the catacombs were the most challenging from a design standpoint, and Zub agrees in terms of mix. “In the catacombs there’s a rapid-fire sequence with lots of shots and ricochets, with body hits and head explosions. It’s all going on at the same time. You have to be delicate with each gunshot so that they don’t all sound the same. It can’t sound repetitive and boring. So that was pretty tricky.”

To keep the gunfire exciting, Zub played with the perspective, the dynamics and the sound layers to make each shot unique. “For example, a shotgun sound might be made up of eight different elements. So in any given 40-second sequence, you might have 40 gunshots. To keep them all from sounding the same, you go through each element of the shotgun sound and either turn some layers off, tune some of them differently or put different reverb on them. This gives each gunshot its own unique character. Doing that keeps the soundtrack more interesting and that helps to tell the story better,” says Zub. For reverb, he used the PhoenixVerb Surround Reverb plug-in to create reverbs in 7.1.
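Zub’s approach — muting layers, retuning them and swapping reverbs per shot — is essentially the variation logic game-audio engines use. A minimal Python sketch, with purely hypothetical layer names and parameter ranges (nothing here comes from the film’s actual sessions):

```python
import random

# Hypothetical component layers for one shotgun sound; a real design
# session would reference actual recorded elements.
LAYERS = ["crack", "body", "low_boom", "mech", "air", "tail_a", "tail_b", "debris"]

def design_shot(rng):
    """Return one gunshot variant: a random subset of layers with per-layer tweaks."""
    chosen = rng.sample(LAYERS, rng.randint(4, len(LAYERS)))  # mute some layers
    return {
        layer: {
            "pitch_semitones": round(rng.uniform(-2.0, 2.0), 2),  # re-tune the layer
            "gain_db": round(rng.uniform(-6.0, 0.0), 1),
            "reverb": rng.choice(["room", "hall", "catacomb"]),   # vary the space
        }
        for layer in chosen
    }

def design_sequence(num_shots, seed=0):
    """Design a run of shots, e.g. 40 gunshots in a 40-second sequence."""
    rng = random.Random(seed)
    return [design_shot(rng) for _ in range(num_shots)]

shots = design_sequence(40)
```

Because each shot draws its own layer subset, tuning and reverb, no two entries in `shots` need read identically — the coded analogue of the hand-crafted variation Zub describes.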

Another challenge was the fight sequence at the museum. To score the first part of Wick’s fight, director Stahelski chose a classical selection from Vivaldi… but with a twist. Instead of relying solely on traditional percussion, “Mark’s team intermixed gunshots with the music,” notes Stahelski. “That is one of my favorite overall sound sequences.”

At the museum, there’s a multi-level mirrored room exhibit with moving walls. In there, Wick faces several opponents. “The mirror room battle was challenging because we had to represent the highly reflective space in which the gunshots were occurring,” explains Rankin. “Martyn [Zub] was really diligent about keeping the sounds tight and contained so the audience doesn’t get worn out from the massive volume of gunshots involved.”

Their goal was to make as much distinction as possible between the gunshot and the bullet impact sounds since visually there were only a few frames between the two. “There was lots of tweaking the sync of those sounds in order to make sure we got the necessary visceral result that the director was looking for,” says Rankin.

Stahelski adds, “The mirror room has great design work. The moment a gun fires, it just echoes through the whole space. As you change the guns, you change the reverb and change the echo in there. I really dug that.”

On the dialogue side, the mirror room offered Koyama an opportunity to play with the placement of the voices. “You might be looking at somebody, but because it’s just a reflection, Andy has their voice coming from a different place in the theater,” Stoeckinger explains. “It’s disorienting, which is what it is supposed to be. The visuals inspired what the sound does. The location design — how they shot it and cut it — that let us play with sound.”

The Manhattan Bridge
Koyama’s biggest challenge on dialogue was during a scene where Laurence Fishburne’s character The Bowery King is talking to Wick while they’re standing on a rooftop near the busy Manhattan Bridge. Koyama used iZotope RX 5 to help clean up the traffic noise. “The dialogue was very difficult to understand and Laurence was not available for ADR, so we had to save it. With some magic we managed to save it, and it actually sounds really great in the film.”

Once Koyama cleaned the production dialogue, Stoeckinger was able to create an unsettling atmosphere there by weaving tonal sound elements with a “traffic on a bridge” roar. “For me personally, building weird spaces is fun because it’s less literal,” says Stoeckinger.

Stahelski strives for a detailed and deep world in his John Wick films. He chooses Stoeckinger to lead his sound team because Stoeckinger’s “work is incredibly immersive, incredibly detailed,” says the director. “The depths that he goes, even if it is just a single sound or tone or atmosphere, Mark has a way to penetrate the visuals. I think his work stands out so far above most other sound design teams. I love my sound department and I couldn’t be happier with them.”


Jennifer Walden is a New Jersey-based writer and audio engineer.

CAS and MPSE bestow craft honors to audio pros, filmmakers

By Mel Lambert

While the Academy Awards spotlight films released during the past year, members of the Cinema Audio Society (CAS) and Motion Picture Sound Editors (MPSE) focus on both film and TV productions.

The 53rd CAS Awards — held at the Omni Los Angeles Hotel on February 18, and hosted once again by comedian Elayne Boosler — celebrated the lifetime contributions of production mixer John Pritchett with the CAS Career Achievement Award for his multiple film credits. The award was presented by re-recording mixer Scott Millan, CAS, and actor/producer Jack Black, with a special video tribute from actor/director/producer Tom Hanks. Quoting seasoned sound designer Walter Murch, Millan shared, “Dialog is the backbone of a film.”

“Sound mixing is like plastic surgery,” Black advised. “You only notice it when it’s done badly.”

Actor/director Jon Favreau received the CAS Filmmaker Award from actor/writer Seth MacFarlane, film composer John Debney and CAS president Mark Ulano. Clips from the director’s key offerings, including The Jungle Book, Chef, Cowboys & Aliens, Iron Man and Iron Man 2, were followed by pre-recorded congratulations from Stan Lee and Ed Asner. “Production and post production are invisible arts,” said Favreau. “Because if you do it right, it’s invisible. If you want to look good on the set you need to understand sound.”

Presenters Robert Forster and Melissa Hoffman flanking winners of the CAS Award for Outstanding Sound Mixing Motion Picture for La La Land.

The CAS Award for Outstanding Sound Mixing Motion Picture — Live Action went to the team behind La La Land: production mixer Steven Morrow, CAS; re-recording mixers Andy Nelson, CAS, and Ai-Ling Lee, scoring mixer Nicholai Baxter, ADR mixer David Betancourt and Foley mixer James Ashwill. “It was a blast to work with Andy Nelson and the Fox Sound Department,” said Lee. The film’s director, Damien Chazelle, also was on hand to support his award-winning crew. Other nominees included Doctor Strange, Hacksaw Ridge, Rogue One: A Star Wars Story and Sully.

The CAS Award for Outstanding Sound Mixing Motion Picture — Animated went to Finding Dory and original dialogue mixer Doc Kane, CAS, re-recording mixers Nathan Nance and Michael Semanick, CAS, scoring mixer Thomas Vicari, CAS, and Foley mixer Scott Curtis. “I’ve got the best job in the world,” Kane offered, “recording all these talented people.”

Kevin O’Connell and Angela Sarafyan flanking Dennis Hamlin and Peter Horner, winners of the CAS Award for Outstanding Sound Mixing Motion Picture — Documentary.

During a humorous exchange with his co-presenter Angela Sarafyan, an actress who starred in HBO’s Westworld series, re-recording mixer Kevin O’Connell, CAS, was asked why the 21-time Oscar-nominee had not — as yet — received an Academy Award. Pausing briefly to collect his thoughts, O’Connell replied that he thought the reasons were three-fold. “First, because I do not work at Skywalker Sound,” he said, referring to Disney Studios’ post facility in Northern California, which has hosted a number of nominated sound projects. “Secondly, I do not work on musicals,” he continued, referring to the high number of Oscar and similar nominations this year for La La Land. “And third, because I do not sit next to Andy Nelson,” an affectionate reference to the popular re-recording engineer’s multiple Oscar wins and current nomination for La La Land. (For O’Connell it seems the 21st time is the charm. He walked away from this year’s Oscar with a statuette for his work on Hacksaw Ridge.)

O’Connell and Sarafyan then presented the first-ever CAS Award for Outstanding Sound Mixing Motion Picture — Documentary to the team that worked on The Music of Strangers: Yo-Yo Ma and The Silk Road Ensemble: production mixers Dimitri Tisseyre and Dennis Hamlin, plus re-recording mixer Peter Horner.

The CAS Award for Outstanding Sound Mixing Television Movie or Miniseries went to The People v. O.J. Simpson: American Crime Story and production mixer John Bauman, re-recording mixers Joe Earle, CAS, and Doug Andham, CAS, ADR mixer Judah Getz and Foley mixer John Guentner. The award for Television Series — 1-Hour went to Game of Thrones: Battle of the Bastards and production mixers Ronan Hill, CAS, and Richard Dyer, CAS, re-recording mixers Onnalee Blank, CAS, and Mathew Waters, CAS, and Foley mixer Brett Voss, CAS. “Game of Thrones was a great piece of art to work on,” said Blank.

L-R: Game of Thrones: Battle of the Bastards team — Onnalee Blank, Brett Voss and Mathew Waters, with Karol Urban and Clyde Kusatsu.

The award for Television Series — 1/2-Hour went to Modern Family: The Storm and production mixer Stephen A. Tibbo, CAS, and re-recording mixers Dean Okrand, CAS, and Brian R. Harman, CAS. The award for Television Non-Fiction, Variety or Music Series or Specials went to Grease Live! and production mixer J. Mark King, music mixer Biff Dawes, playback and SFX mixer Eric Johnston and Pro Tools playback music mixer Pablo Munguía.

The CAS Student Recognition Award went to Wenrui “Sam” Fan from Chapman University. Outstanding Product Awards went to Cedar Audio for its DNS2 Dynamic Noise Suppression Unit and McDSP for its SA-2 dialog processor.

Other presenters included Nancy Cartwright (The Simpsons), Robert Forster (Jackie Brown), Janina Gavankar (Sleepy Hollow), Clyde Kusatsu (SAG-AFTRA VP and Madam Secretary), Rhea Seehorn (Better Call Saul) and Nondumiso Tembe (Six).

MPSE
Held on February 19 at the Westin Bonaventure Hotel in downtown Los Angeles, opening remarks for the 64th MPSE Golden Reel Awards came from MPSE president Tom McCarthy. “Digital technology is creating new workflows for our sound artists. We need to take the initiative and drive technology, and not let technology drive us,” he said, citing recent and upcoming MPSE Sound Advice confabs. “The horizons for sound are expanding, particularly virtual reality. Immersive formats from Dolby, Auro, DTS and IMAX are enriching the cinematic experience.”

Scott Gershin, MPSE Filmmaker Award recipient Guillermo Del Toro and Tom McCarthy.

The annual MPSE Filmmaker Award was presented to writer/director Guillermo del Toro by supervising sound editor/sound designer Scott Gershin, who has worked with him for the past 15 years on such films as Hellboy II: The Golden Army (2008) and Pacific Rim (2013). “Sound editing is an opportunity in storytelling,” the director offered. “There is always a balance we need to strike between sound effects and music. It’s a delicate tango. Sound design and editing is a curatorial position. I always take that partnership seriously in my films.”

Referring to recent presidential decisions to erect border walls and tighten immigration controls, del Toro was candid in his position. “I’m a Mexican,” he stated. “Giving me this award [means] that the barriers people are trying to erect between us are false,” he stressed, to substantial audience applause.

Supervising sound editor/sound designer Wylie Stateman and producer Shannon McIntosh presented the MPSE Career Achievement Award to supervising sound editor/sound designer Harry Cohen, who has worked on more than 150 films, including many directed by Quentin Tarantino, who made a surprise appearance to introduce the award recipient. “I aspired to be a performing musician,” Cohen acknowledged, “and was 31 when I became an editor. Sound design is a craft. You refine the director’s creativity through your own lens.” He also emphasized the mentoring process within the sound community, “which leads to a free flow of information.”

The remaining Golden Reel Awards comprised several dozen categories encompassing feature films, long- and short-form TV, animation, documentaries and other media.

The Best Sound Editing In Feature Film — Music Score award went to Warcraft: The Beginning and music editors Michael Bauer and Peter Myles. The Best Sound Editing In Feature Film — Music, Musical Feature award went to La La Land music editor Jason Ruder.

The Hacksaw Ridge team included (L-R) Michelle Perrone, Kimberly Harris, Justine Angus, Jed Dodge, Robert Mackenzie, Liam Price and Tara Webb.

The Best Sound Editing In Feature Film — Dialog/ADR award went to director Mel Gibson’s Hacksaw Ridge and supervising sound editor Andy Wright, supervising ADR editors Justine Angus and Kimberly Harris, dialog editor Jed Dodge and ADR editor Michele Perrone. The Best Sound Editing In Feature Film — FX/Foley award also went to Hacksaw Ridge and supervising sound editor Robert Mackenzie, Foley editors Steve Burgess and Alex Francis, plus sound effects editors Liam Price, Tara Webb and Steve Burgess.

The MPSE Best Sound & Music Editing: Television Animation award went to Albert and supervising sound editor Jeff Shiffman, MPSE, dialogue editors Michael Petak and Anna Adams, Foley editor Tess Fournier, music editor Brad Breeck, plus SFX editors Jessey Drake, MPSE, Tess Fournier and Jeff Shiffman, MPSE. The Best Sound & Music Editing: Television Documentary Short-Form award went to Sonic Sea and supervising sound editor Trevor Gates, dialog editor Ryan Briley and SFX editors Ron Aston and Christopher Bonis. The Best Sound & Music Editing: Television Documentary Long-Form award went to My Beautiful Broken Brain and supervising sound editor Nick Ryan, dialog editor Claire Ellis and SFX editor Tom Foster. The Best Sound & Music Editing: Animation — Feature Film award went to Moana and supervising sound editor Tim Nielsen, supervising dialog editor Jacob Riehle, Foley editors Thom Brennan and Matthew Harrison, music editors Earl Ghaffari and Dan Pinder, plus SFX editors Jonathan Borland, Pascal Garneau and Lee Gilmore. The Best Sound & Music Editing: Documentaries — Feature Film award went to The Music of Strangers: Yo-Yo Ma and The Silk Road Ensemble and supervising sound editor Pete Horner, sound designer Al Nelson and SFX editor Andre Zweers.

The Verna Fields Award in Sound Editing in Student Films was a tie, with $1,500 checks being awarded to Fishwitch, directed by Adrienne Dowling from the National Film and Television School, and Icarus by supervising sound editor/sound designer Zoltan Juhasz from Dodge College of Film and Media Arts, Chapman University.

The MPSE Best Sound & Music Editing: Special Venue award went to supervising sound editor/sound designer Jamey Scott for his work on director Patrick Osborne’s Pearl, a panoramic virtual reality presentation that was also nominated for the Best Animated Short Oscar. The Best Sound Editing In Television: Short Form — Music Score award went to music editor David Klotz for his work on Stranger Things, Chapter Three: Holly Jolly. “The show’s composers — Kyle Dixon and Michael Stein — were an inspiration to work with,” said Klotz, “as was the sound team at Technicolor.” The Best Sound Editing In Television: Short Form — Music, Musical award was another tie, between music editors Jason Tregoe Newman and Bryant J. Fuhrmann for Mozart in the Jungle — Now I Will Sing and music editor Jamieson Shaw for The Get Down — Raise Your Words, Not Your Voice.

The winning Westworld team included Thomas E. de Gorter (center), Matthew Sawelson, Geordy Sincavage, Michael Head, Mark R. Allen and Marc Glassman.

The Best Sound Editing In Television: Short Form — Dialog/ADR award went to the team from Penny Dreadful III, including supervising sound editor Jane Tattersall, supervising dialogue editor David McCallum, dialog editor Elma Bello, and ADR editors Dale Sheldrake and Paul Conway. The Best Sound Editing In Television: Short Form — FX/Foley award went to Westworld — Trompe L’Oeil and supervising sound editors Thomas E. de Gorter, MPSE, and Matthew Sawelson, MPSE, Foley editors Geordy Sincavage and Michael Head, and sound designers Mark R. Allen, MPSE, and Marc Glassman, MPSE. The same post team won the Best Sound Editing In Television: Long Form — FX/Foley award for Westworld — The Bicameral Mind. The Best Sound Editing In Television: Long Form — Dialog/ADR award went to The Night Of — Part 1 The Beach and supervising sound editor Nicholas Renbeck and dialog editors Sara Stern, Luciano Vignola and Odin Benitez.

Presenters included actor Erich Riegelmann, actress Julie Parker, Avid director of strategic solutions Rich Nevens, SFX editor Liam Price, producer/journalist Geoff Keighley, Formosa Interactive VP of creative services Paul Lipson, CAS president Mark Ulano, actress Andrene Ward-Hammond, supervising sound editors Mark Lanza and Bernard Weiser, picture editor Sabrina Plisco, and Technicolor VP/head of theatrical sound Jeff Eisner.

MPSE president McCarthy offered that the future for entertainment sound has no boundaries. “It is impossible to predict what new challenges will be presented to practitioners of our craft in the years to come,” he said. “It is up to all of us to meet those challenges with creativity, professionalism and skill. MPSE membership now extends around the world. We are building a global network of sound professionals in order to help artists collaborate and share ideas with their peers.”

A complete list of MPSE Golden Reel Awards can be found on its website.

Main Image (L-R): John Debney, CAS Filmmaker Award recipient Jon Favreau, Seth MacFarlane and Mark Ulano. 

CAS images – Alex J. Berliner/ABImages
MPSE Images – Chris Schmitt Photography


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Quick Chat: Scott Gershin from The Sound Lab at Technicolor

By Randi Altman

Veteran sound designer and feature film supervising sound editor Scott Gershin is leading the charge at the recently launched The Sound Lab at Technicolor, which, in addition to film and television work, focuses on immersive storytelling.

Gershin has more than 100 films to his credit, including American Beauty (which earned him a BAFTA nomination), Guillermo del Toro’s Pacific Rim and Dan Gilroy’s Nightcrawler. But films aren’t the only genre that Gershin has tackled — in addition to television work (he has an Emmy nom for the TV series Beauty and the Beast), this audio post pro has created the sound for game titles such as Resident Evil, Gears of War and Fable. One of his most recent projects was contributing to id Software’s Doom.

We recently reached out to Gershin to find out more about his workflow and this new Burbank-based audio entity.

Can you talk about what makes this facility different than what Technicolor has at Paramount? 
The Sound Lab at Technicolor works in concert with our other audio facilities, tackling film, broadcast and gaming projects. In doing so we are able to use Technicolor’s world-class dubbing, ADR and Foley stages.

One of the focuses of The Sound Lab is to identify and use cutting-edge technologies and workflows, not only in traditional mediums but also in new forms of entertainment such as VR, AR and 360 video/films, as well as in dedicated installations using mixed reality. The Sound Lab at Technicolor is made up of audio artists from multiple industries who create a “brain trust” for our clients.

Scott Gershin and The Sound Lab team.

As an audio industry veteran, how has the world changed since you started?
I was one of the first sound people to use computers in the film industry. When I moved from the music industry into film post production, I brought that knowledge and experience with me. It gave me access to a huge number of tools that helped me tell better stories with audio. The same happened when I expanded into the game industry.

Learning the interactive tools of gaming is now helping me navigate into these new immersive industries, combining my film experience to tell stories and my gaming experience using new technologies to create interactive experiences.

One of the biggest changes I’ve seen is that there are so many opportunities for the audience to ingest entertainment — creating competition for their time — whether it’s traveling to a theatre, watching TV (broadcast, cable and streaming) on a new 60- or 70-inch TV, or playing video games alone on a phone or with friends on a console.

There are so many choices, which means that the creators and publishers of content have to share a smaller piece of the pie. This forces budgets to be smaller since the potential audience size is smaller for that specific project. We need to be smarter with the time that we have on projects and we need to use the technology to help speed up certain processes — allowing us more time to be creative.

Can you talk about your favorite tools?
There are so many great technologies out there. Each one adds a different color to my work and provides me with information that is crucial to my sound design and mix. For example, Nugen has great metering and loudness tools that help me zero in on my clients’ LKFS requirements. With each client having their own loudness requirements, the tools allow me to stay creative and still meet them.
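The target-matching step Gershin describes can be illustrated with a toy meter. This is not Nugen’s algorithm, just a bare-bones sketch with made-up function names and targets; a real LKFS/LUFS measurement (ITU-R BS.1770, which professional meters implement) adds K-weighting filters and gating that are skipped here:

```python
import math

def rms_loudness_db(samples):
    """Rough loudness estimate in dB from a plain RMS average.

    Deliberate simplification: a real LKFS/LUFS meter (ITU-R BS.1770)
    adds K-weighting filters and gated measurement, both omitted here.
    """
    if not samples:
        return float("-inf")
    mean_square = sum(s * s for s in samples) / len(samples)
    if mean_square == 0.0:
        return float("-inf")
    return 10.0 * math.log10(mean_square)

def meets_spec(samples, target_db, tolerance_db=2.0):
    """Check a mix against a client loudness target, e.g. -24 dB +/- 2 dB."""
    return abs(rms_loudness_db(samples) - target_db) <= tolerance_db

# One second of a full-scale 440 Hz sine at 48 kHz: its RMS is
# 1/sqrt(2), i.e. roughly -3 dB relative to full scale.
tone = [math.sin(2 * math.pi * 440 * n / 48000.0) for n in range(48000)]
```

The workflow point is the `meets_spec` check: each client supplies a different `target_db`, and the mix is nudged until the meter lands inside the tolerance window.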

Audi’s The Duel

What are some recent projects you’ve worked on?
I’ve been working on a huge variety of projects lately. I just finished a commercial for Audi called The Duel, a VR piece called My Brother’s Keeper, 10 webisodes of The Strain and a VR music piece for Pentatonix. Each one had a different requirement.

What is your typical workflow like?
When I get a job in, I look at what the project is trying to accomplish. What is the story or the experience about? I ask myself: how can I use my craft, shaping audio, to better enhance the experience? Once I understand how I am going to approach the project creatively, I look at what the release platform will be. What are the technical challenges, and what frequency and spatial options are open to me — whether that means a film in Dolby Atmos or a VR project on the Rift? Once I understand both the creative and technical challenges, I start working within the schedule allotted to me.

Speed and flow are essential… the tools need to be like musical instruments to me, where it goes from brain to fingers. I have a bunch of monitors in front of me, each one supplying me with different and crucial information. It’s one of my favorite places to be — flying the audio starship and exploring the never-ending vista of the imagination. (Yeah, I know it’s corny, but I love what I do!)

The A-List: The sound of La La Land

By Jennifer Walden

Director/writer Damien Chazelle’s musical La La Land has landed an incredible 14 Oscar nominations — not to mention fresh BAFTA wins for Best Film, Best Cinematography, Original Music and Best Leading Actress, in addition to many, many other accolades.

The story follows aspiring actress Mia (Emma Stone) who meets the talented-but-struggling jazz pianist Sebastian (Ryan Gosling) at a dinner club, where he’s just been fired from his gig of plinking out classic Christmas tunes for indifferent diners. Mia throws out a compliment as Sebastian approaches, but he just breezes right past, ignoring her completely. Their paths cross again at a Los Angeles pool party, and this time Mia makes a lasting impression on Sebastian. They eventually fall in love, but their life together is complicated by the realities of making their own dreams happen.

Sounds of the City
La La Land is a love story but it’s also a love letter to Los Angeles, says supervising sound editor Ai-Ling Lee, who shares an Oscar nomination for Best Sound Editing on the film with co-supervising sound editor Mildred Iatrou Morgan. One of Chazelle’s initial directives was to have the cityscape sound active and full of life. “He gave me film references, like Boogie Nights and Mean Streets, even though the latter was a New York film. He liked the amount of sound coming out from the city, but wanted a more romantic approach to the soundscape on La La Land. He likes the idea of the city always being bustling,” says Lee.

Mildred Iatrou Morgan and Ai-Ling Lee. Photo Credit: Jeffrey Harlacker

In addition to La La Land’s musical numbers, director Chazelle wanted to add musical moments throughout the film, some obvious, like the car radios in the opening traffic jam, and some more subtle. Lee explains, “You always hear music coming from different sources in the city, like music coming out of a car going by or mariachi music coming from down the hallway of Sebastian’s apartment building.” The culturally diverse incidental music, traffic sounds, helicopters, and local LA birds, like mourning doves, populate the city soundscape and create a distinct Los Angeles vibe.

For Lee’s sound editorial and sound design, she worked in a suite at EPS-Cineworks in Burbank — the same facility where the picture editor and composer were working. “Damien and Tom Cross [film editor] were cutting the picture there, and Justin Hurwitz the composer was right next door to them, and I was right across the hall from them. It was a very collaborative environment so it was easy to bring someone over to review a scene or sounds. I could pop over there to see them if I had any questions,” says Lee, who was able to design sound against the final music tracks. That was key to helping those two sound elements gel into one cohesive soundtrack.

Bursting Into Song
Director Chazelle’s other initial concern for sound was the music, particularly how the spoken dialogue would transition into the studio-recorded songs. That’s where supervising sound editor Morgan got to flex her dialogue editing muscles. “Milly [Morgan] knows this style of ADR, having worked on musicals before,” says Lee. “Damien wanted the dialogue to seamlessly transition into a musical moment. He didn’t want it to feel like suddenly we’re playing a pre-recorded song. He liked to have things sound more natural, with realistic, grounded sounds, to help blend the music into the scene.”

To achieve a smooth dialogue transition, Morgan recorded ADR for every line that led into a song, ensuring she had a good handoff between production dialogue and studio-recorded dialogue, which would transition more cleanly into the studio-recorded music. “I cued that way for La La Land, but I ended up not having to use a lot of that. The studio-recorded vocals and the production sound were beautifully recorded using the same mics in both cases. They were matching very well, and in some cases I was able to go with the more emotional, natural-sounding songs that were sung on-set,” says Morgan, who worked from her suite at 20th Century Fox Studios along with ADR editor Galen Goodpaster.

Mia’s audition song, “The Fools Who Dream,” was one track that Morgan and the director were most concerned about. As Mia gives her impromptu audition she goes from speaking softly to suddenly singing, and then she starts singing louder. That would have been difficult to recreate in post because her performance on-set — captured by production mixer Steven Morrow — was so beautiful and emotional. The trouble was there were creaking noises on the track. Morgan explains, “As Mia starts singing, the camera moves in on her. It moves through the office and through the desk. It was a breakaway desk and they broke it apart so that the camera could move through it. That created all the creaking I heard on the track.”

Morgan was able to save the live performance by editing in clean ambience between words, and finding alternate takes that weren’t ruined by the creaking noise. She used Elastic Audio inside Pro Tools, as well as the Pro Tools TCE tool (time compression/expansion tool) to help tweak the alt takes into place. “I had to go through all of the outtakes, word by word, syllable by syllable, and find ones that fit in with the singing, and didn’t have creaks on them… and fit in terms of sync. It was very painstaking. It took me a couple of days to do it but it was a very rewarding result. That took a lot of time but it was so worth it because that was a really important moment in the movie,” says Morgan.
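Pro Tools’ TCE and Elastic Audio change a clip’s duration while preserving its pitch, which is what let Morgan nudge alternate takes into sync. The professional algorithms are proprietary, but the underlying overlap-add (OLA) idea can be sketched in a few lines: read analysis frames from the input at one hop size, write them to the output at another, and cross-fade with a window. Real elastic-audio processing adds transient detection and phase alignment that this toy version omits, so treat it as an illustration only.

```python
import math

def ola_stretch(samples, factor, frame=1024, hop=256):
    """Naive overlap-add time stretch. Frames are read from the input
    at a hop of hop/factor and written to the output at a hop of
    `hop`, so factor > 1.0 lengthens the audio and factor < 1.0
    shortens it, without resampling (and hence without a pitch shift
    for steady material). A Hann window cross-fades the frames."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame) for n in range(frame)]
    out_len = int(len(samples) * factor)
    out = [0.0] * (out_len + frame)
    norm = [1e-9] * (out_len + frame)  # avoids division by zero at the edges
    k = 0
    while True:
        src = int(k * hop / factor)  # analysis position in the input
        dst = k * hop                # synthesis position in the output
        if src + frame > len(samples) or dst + frame > len(out):
            break
        for n in range(frame):
            out[dst + n] += samples[src + n] * win[n]
            norm[dst + n] += win[n]
        k += 1
    # Normalize by the summed window so overlapping frames sum to unity
    return [o / w for o, w in zip(out[:out_len], norm[:out_len])]
```

On real singing, plain OLA smears transients and can phase between frames, which is why Morgan still had to hunt for takes that fit “in terms of sync” rather than stretching freely.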

Reality Steps In
Not all on-set song performances could be used in the final track, so putting the pre-recorded songs in the space helped to make the transition into musical moments feel more realistic. Precisely crafted backgrounds, made with sounds that fit the tone of the impending song, gradually step aside as the music takes over. But not all of the real-world sounds go away completely. Foley helped to ground a song into the reality on screen by marrying it to the space. For example, Mia’s roommates invite her to a party in a song called “Someone in the Crowd.” Diegetic sounds, such as the hairdryer, the paper fan flicking open, occasional footsteps, and clothing rustles helped the pre-recorded song fit naturally into the scene. Additionally, Morgan notes that production mixer Morrow “did an excellent job of miking the actors with body mics and boom mics, even during the musical numbers that were sung to playback, like ‘Someone in the Crowd,’ just in case there was something to capture that we could use. There were a couple of little vocalizations that we were able to use in the number.”

Foley also played a significant role in the tap dance song “A Lovely Night.” Originally performed as a soft shoe dance number, director Chazelle decided to change it to a tap dance number in post. Lee reveals, “We couldn’t use the production sound since there was music playback in the scene for the actors to perform to. So, we had to fully recreate everything with the sound. Damien had a great idea to try to replace the soft shoe sound with tap shoes. It was an excellent idea because the tap sound plays so much better with the dance music than the soft shoe sound does.”

Lee enlisted Mandy Moore, the dance choreographer on the film, and several dancers to re-record the Foley on that scene. Working with Foley artist Dan O’Connell of One Step Up, on the Jane Russell Foley Stage at 20th Century Fox Studios, they tried various weights of tap shoes on different floor surfaces before narrowing it down to the classic “Fred and Ginger” sound that Chazelle was looking for. “Even though they are dancing on asphalt, we ended up using a wooden floor surface on the Foley stage. Damien was very precise about playing up a step here and playing up a scuff there, because it plays better against the music. It was really important to have the taps done to the rhythm of the song as opposed to being in sync with the picture. It fools your brain. Once you have everything in rhythm with the music, the rest flows like butter,” says Lee. She cut the tap dance Foley to picture according to Chazelle’s tastes, and then invited Moore to listen to the mix to make sure that the tap dance routine was realistic from a dancer’s point of view.

Inside the Design
One of Lee’s favorite scenes to design was the opening sequence of the film, which starts with the sound of a traffic jam on a Los Angeles freeway. The sound begins in mono with a long horn honk over a black and white Cinemascope logo. As the picture widens and the logo transitions into color, Lee widens the horn honk into stereo and then into the surrounds. From that, the sound builds to a few horns and cars idling. Morgan recorded a radio announcer to establish the location as Los Angeles. The 1812 Overture plays through a car radio, and the sound becomes futzed as the camera pans to the next car in the traffic jam. With each car the camera passes the radio station changes. “This is Los Angeles and it is a mixed cultural city. Damien wanted to make sure there was a wide variety of music styles, so Justin [Hurwitz] gave me a bunch of different music choices, an eclectic selection to choose from,” says Lee. She added radio tuning sounds, car idling sounds, and Foley of tapping on the steering wheel to ground the scene in reality. “We made sure that the sound builds but doesn’t overpower the first musical number. The first trumpet hit comes through this traffic soundscape, and gradually the real city sounds give way to the first song, ‘Another Day of Sun.’”
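“Futzing” a source — making music sound like it is playing through a car radio or a small speaker — is normally done with dedicated worldizing plug-ins. At its core it is band-limiting: small speakers keep the midrange and discard the lows and highs. A minimal sketch using a single band-pass biquad (coefficients from the widely used RBJ Audio EQ Cookbook) gives the idea; real futz chains add distortion, codec artifacts and level changes that this illustration leaves out, and the center frequency and Q here are arbitrary choices.

```python
import math

def futz(samples, rate=48000, center=1200.0, q=0.8):
    """Crude "car radio" futz: one band-pass biquad centred on the
    midrange, using the RBJ cookbook constant-peak-gain BPF
    coefficients. Frequencies far from `center` are attenuated,
    mimicking a small speaker's limited bandwidth."""
    w0 = 2 * math.pi * center / rate
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:  # direct-form I biquad
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        out.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out
```

Fed a 1.2 kHz tone this filter passes it at nearly full level, while a 60 Hz tone comes through heavily attenuated — the thin, distant quality the camera pans between in the opening sequence.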

One scene that stood out for Morgan was after Mia’s play, when she’s in her dressing room feeling sad that the theater was mostly empty for her performance. Not even Sebastian showed up. As she’s sitting there, we hear two men from the audience disparaging her and her play. Initially, Chazelle and his assistant recorded a scratch track for that off-stage exchange, but he asked Morgan to reshoot it with actors. “He wanted it to sound very naturalistic, so we spent some time finding just the right actors who didn’t sound like actors. They sound like regular people,” says Morgan.

She had the actors improvise their lines on why they hated the play: how superficial it was and how pretentious it was. Following some instruction from Chazelle, they cut the scene together. “We screened it and it was too mean, so we had to tone it back a little,” shares Morgan. “That was fun because I don’t always get to do that, to create an ADR scene from scratch. Damien is meticulous. He knows what he wants and he knows what he doesn’t want. But in this case, he didn’t know exactly what they should say. He had an idea. So I’d do my version and he’d give me ideas, and it went back and forth. That was a big challenge for me but a very enjoyable one.”

The Mix
In addition to sound editing, Lee also mixed the final soundtrack with re-recording mixer Andy Nelson at Fox Studios in Los Angeles. She and Nelson share an Oscar nomination for Best Sound Mixing on La La Land. Lee says, “Andy and I had made a film together before, called Wild, directed by Jean-Marc Vallée. So it made sense for me to do both the sound design and to mix the effects. Andy mixed the music and dialogue. And Jason Ruder was the music editor.”

From design to mix, Chazelle’s goal was to have La La Land sound natural — as though it was completely natural for these people to burst into song as they went through their lives. “He wanted to make sure it sounded fluid. With all the work we did, we wanted to make the film sound natural. The sound editing isn’t in your face. When you watch the movie as a whole, it should feel seamless. The sound shouldn’t take you out of the experience and the music shouldn’t stand apart from the sound. The music shouldn’t sound like a studio recording,” concludes Lee. “That was what we were trying to achieve, this invisible interaction of music and sound that ultimately serves the experience.”


Jennifer Walden is a New Jersey-based audio engineer and writer.