
Genelec intros The Ones series of near-field monitors

Genelec is now offering point-source monitoring with The Ones series, featuring the established 8351 three-way monitor along with the new 8341 and 8331. These small three-way coaxial monitors are housed in enclosures no larger than a traditional two-way Genelec 8040 or 8030. Their coaxial driver design provides accurate imaging and improved sound quality, both on- and off-axis, vertically as well as horizontally. Also, there are no visible woofers.

Like the 8351, both the 8341 and 8331 can be oriented horizontally or vertically using an adjustable IsoPod base for isolation. But while the 8341 and 8331 both echo the 8351 in form and function, the new models have been entirely reengineered and feature ultra-compact dimensions: 13.78 in. x 9.33 in. x 9.57 in. [350 mm x 237 mm x 243 mm] for the 8341, and 11.77 in. x 7.44 in. x 8.70 in. [299 mm x 189 mm x 212 mm] for the 8331.

Innovations include a motor assembly in which the midrange and the tweeter share the same compact magnet system, reducing size and weight with no reduction in response. The midrange coaxial driver cone is now composed of concentric sections, optimizing midrange linearity — as does the DCW (Directivity Control Waveguide), which covers the entire front face of the enclosure. Despite the size of the 8341 and 8331, each unit incorporates three stages of dedicated Class D amplification.

Short-term maximum output capacity is 110 dB SPL for the 8341 (at 1 m) and 104 dB SPL for the 8331 (at 1 m), with accuracy better than ±1.5 dB. The respective frequency responses start at 38 Hz and 45 Hz (-6 dB) and extend beyond 40 kHz for both the analog and digital inputs.

The coaxial design allows for ultra-near-field listening, creating a dramatic improvement in the direct sound-to-reverberant sound ratio and further reducing the room’s influence while monitoring. The listening distance may be as short as 16 inches, with no loss of precision.

The Ones were recently used by Richard Chycki for his latest project, a 5.1 mix of The Tragically Hip – A National Celebration.

Capturing Foley for Epix’s Berlin Station

Now in its second season on Epix, the drama series Berlin Station centers on undercover agents, diplomats and whistleblowers inhabiting a shadow world inside the German capital.

Leslie Bloome

Working under the direction of series supervising sound editor Ruy Garcia, Westchester, New York-based Foley studio Alchemy Post Sound is providing Berlin Station with cinematic sound. Practical effects, like the clatter of weapons and clinking glass, are recorded on the facility’s main Foley stage. Certain environmental effects are captured on location at sites whose ambience matches the show’s settings. Interior footsteps, meanwhile, are recorded in the facility’s new “live” room, a 1,300-square-foot space with natural reverb that’s used to replicate the environment of rooms with concrete, linoleum and tile floors.

“Garcia wants a soundtrack with a lot of detail and depth of field,” explains lead Foley artist and Alchemy Post founder Leslie Bloome. “So, it’s important to perform sounds in the proper perspective. Our entire team of editors, engineers and Foley artists needs to be on point regarding the location and depth of field of the sounds we’re recording. Our aim is to make every setting feel like a real place.”

A frequent task for the Foley team is to come up with sounds for high-tech cameras, surveillance equipment and other spy gadgetry. Foley artist Joanna Fang notes that sophisticated wall safes appear in several episodes, each one featuring differing combinations of electronic, latch and door sounds. She adds that in one episode a character has a microchip concealed in his suit jacket and the Foley team needed to invent the muffled crunch the chip makes when the man is frisked. “It’s one of those little ‘non-sounds’ that Foley specializes in,” she says. “Most people take it for granted, but it helps tell the story.”

The team is also called on to create Foley effects associated with specific exterior and interior locations. This can include everything from seedy safe houses and bars to modern office suites and upscale hotel rooms. When possible, Alchemy prefers to record such effects on location at sites closely resembling those pictured on-screen. Bloome says that recording things like creaky wood floors on location results in effects that sound more real. “The natural ambience allows us to grab the essence of the moment,” he explains, “and keep viewers engaged with the scene.”

Footsteps are another regular Foley task. Fang points out that there is a lot of cat-and-mouse action with one character following another or being pursued, and the patter of footsteps adds to the tension. “The footsteps are kind of tough,” she says. “Many of the characters are either diplomats or spies, and they all wear hard-soled shoes. It’s hard to build contrast, so we end up creating a hierarchy: dark, powerful heels for strong characters, lighter shoes for secondary roles.”

For interior footsteps, large theatrical curtains are used to adjust the ambience in the live stage to fit the scene. “If it’s an office or a small room in a house, we draw the curtains to cut the room in half; if it’s a hotel lobby, we open them up,” Fang explains. “It’s amazing. We’re not only creating depth and contrast by using different types of shoes and walking surfaces, we’re doing it by adjusting the size of the recording space.”

Alchemy edits their Foley in-house and delivers pre-mixed and synced Foley that can be dropped right into the final mix seamlessly. “The things we’re doing with location Foley and perspective mixing are really cool,” says Foley editor and mixer Nicholas Seaman. “But it also means the responsibility for getting the sound right falls squarely on our shoulders. There is no ‘fix in the mix.’ From our point of view, the Foley should be able to stand on its own. You should be able to watch a scene and understand what’s going on without hearing a single line of dialogue.”

The studio used Neumann U87 and KMR 81 microphones with a Millennia mic pre and an Apogee converter, recording into Avid Pro Tools through a C24 console. In addition to recording a lot of guns, Alchemy also borrowed a doomsday prep kit for some of the sounds.

The challenge to deliver sound effects that can stand up to that level of scrutiny keeps the Foley team on its toes. “It’s a fascinating show,” says Fang. “One moment, we’re inside the station with the usual office sounds and in the next edit, we’re in the field in the middle of a machine gun battle. From one episode to the next, we never know what’s going to be thrown at us.”


Review: Blackmagic’s DaVinci Resolve 14 for editing

By Brady Betzel

Resolve 14 has really stepped up Blackmagic’s NLE game with many great new updates over the past few months. While I typically think of Resolve as a high-end color correction and finishing tool, this review will focus on the Editing tab.

Over the last two years, Resolve has grown from a high-end color correction and finishing app to include a fully capable nonlinear editor, media organizer and audio editing tool. Fairlight is not currently at the same level as Avid Pro Tools, but it is still capable, and at a price of free, or at most $299, you can’t lose. For this review, I am using the $299 version, which has a few perks — higher-than-UHD resolutions; timelines above 60 frames per second; the all-important spatial and/or temporal noise reduction; many plugins, like the new face tracker; multi-user collaboration; and much more. The free version will work with resolutions up to UHD at up to 60fps and still gives you access to all of the powerful base tools, like Fairlight and the mighty color correction tool set.

Disclaimer: While I really will try to focus on the Editing tab, I can’t make any promises I won’t wander.

Digging In
My favorite updates to Resolve 14’s Editing tab revolve around collaboration and conforming functions, but I even appreciate some smaller updates like responsiveness while trimming and video scopes on the edit page. And don’t forget the audio waveforms being visible on the source monitor!

With these new additions, among others, I really do think that Resolve is also becoming a workable nonlinear editor much like industry standards such as Avid Media Composer, Adobe Premiere Pro and Apple Final Cut Pro X. You can work from ingest to output all within one app. When connected to a collaborative project, there is now bin locking, bin sharing and even a chat window.

Multicam works as expected with up to 16 cameras in one split view. I couldn’t figure out how to watch all of the angles in the source monitor while playing down the sequence in the record monitor, so I did a live switch (something I love to do in Media Composer). I also couldn’t figure out how to adjust the multicam after it had been created, if, say, the audio was one frame out of sync or I needed to add another angle later on. But the multicam worked and did its job by allowing me to sync by in point, out point, timecode, sound or marker. In addition, you can make the multicam a different frame rate than your timeline, which is handy.

[Editor’s Note: Blackmagic says: “There are a few ways to do that. You can right click on the multicam clip and select ‘open in timeline.’ Or you can pause over any segment of a multicam clip, click on a different angle and swap out the shots. Most importantly, you get into multicam edit mode by clicking on the drop down menu on the lower left hand corner of the source viewer and selecting Multicam mode.”]

Another addition is the Position Lock located in the middle right, above the timeline. The Position Lock keeps all of your clips locked in time in your timeline. What is really interesting about this is that it still allows you to trim and apply other effects to clips while locking the position of your clips in place. This is extremely handy when doing conforms and online passes of effects when you don’t want the timing and position of clips to change. It’s a great safety net. There are some more fancy additions, like re-time curves directly editable in the timeline. But what I would really love is a comprehensive overhaul of the Title Tool that would allow for direct manipulation of the text on top of the video. It would be nice to have a shortcut to use the title as a matte for other footage for some quick and fancy titling effects, but maybe that is what Fusion is for? The Title Tool works fine and will now give you nice crisp text even when blown up. The bezier curves really come in handy here to make animations ease in and out nicely.

If you start and finish within Resolve 14, your experience will most likely be pretty smooth. For anyone coming from another NLE — like Media Composer or Premiere — there are a few things you will have to get used to, but overall it feels like the interface designers of Resolve 14 kept the interface familiar for those “older” editors, yet also packed it with interesting features to keep the “YouTube” editors’ interest piqued. As someone who’s partial to Media Composer, I really like that you can choose between frame view and a clips-only view in the timeline, leaving out thumbnails and waveforms.

I noticed a little bit of a lag when editing with the thumbnail frames turned on. I also saw recently that Dave Dugdale on YouTube found an interesting solution to this possible bug. Essentially, one of the thumbnail views of the timeline was a little slower at redrawing when zooming into a close view in a sequence. Regardless, I like to work without thumbnails, and that view seemed to work fluidly for me.

After working for about 12 minutes, I realized I hadn’t saved my work and Resolve hadn’t auto-saved. This is when I remembered hearing about the new “Live Save” feature. It’s a little tricky to find, but the Live Save feature lives under the DaVinci Resolve menu > User > Auto Save and is off by default — I really think this should be changed. Turn this function on and your Resolve project will continually save, which in turn saves you from unnecessary conniptions when your project crashes and you try to find the spot that was last saved.

Coming from another NLE, the hardest thing for me to get used to in a new app was the keyboard layouts and shortcuts. Typically, trimming works similarly across apps, and overwriting, ripple edits, dissolves and other edit functions don’t change, but the placement of their shortcuts does. In Resolve 14, you can access the keyboard shortcut commands in the same spot as Live Save, under the Keyboard Mapping menu under User. From here you can get grounded quickly by choosing a preset that is similar to your NLE of choice — Premiere, FCP X, Media Composer — or Resolve’s default keyboard layout, which isn’t terrible. If this could be updated to the way apps like Premiere and Avid design their keyboard layouts, it would be a lot easier to navigate; those apps show a physical representation of a keyboard that lets you drag your shortcuts to and from it in realtime.

Right now, Resolve’s keyboard mapper is text-based and a little cumbersome. Overall, Resolve’s keyboard shortcuts (when in the Editing tab) are pretty standard, but it would serve you well to go through basic moves like trimming, trimming the heads and tails of clips, or trimming by plus or minus the total frames you want to trim.

Something else I discovered when trimming: when you go into actual “trim mode,” it isn’t like other NLEs where you can immediately start trimming. I had to click on the trim point with my mouse or pen before I could use keyboard shortcuts to trim. This is possibly a bug, but what I would really love is for trimming icons to appear at the A and B sides of the nearest clips on the selected tracks when you enter “trim mode.” This would allow you to immediately trim using keyboard shortcuts without any mouse clicks. In my mind, the more mouse clicks it takes to accomplish a task, the more time is wasted. This leads to having less time to spend on “important” stuff like story, audio, color, etc. When time equals money, every mouse click means money out of my pocket. [Note from Blackmagic: “In our trim tools you can also enter trim mode by hitting T on the keyboard. We did not put in specific trim tool icons on purpose because we have an all-in-one content sensitive trim tool that changes based on where you place the cursor. And if you prefer trimming with realtime playback, hit W for dynamic trim mode, and then click on the cut you want to trim with before hitting JKL to play the trim.”]

I have always treated Resolve as another app in my post workflow — I wasn’t able to use it all the way from start to finish. So in homage to the old way of working, a.k.a. “a round trip workflow,” I wanted to send a Media Composer sequence to Resolve by way of a linked AAF, then conform the media clips and work from there. I had a few objectives, but the main one was to make sure my clips and titles came over. Next was to see if any third-party effects would translate into Resolve from Media Composer and, finally, I wanted to conform an “updated” AAF to the original sequence using Resolve’s new “Compare with Current Timeline” command.

This was a standard 1080p, 23.98 sequence (transcoded to a single mezzanine DNxHD 175X codec with 12-frame handles) with plenty of slates, titles, clips, speed ramps, and Boris Continuum Complete (BCC) and Sapphire effects. Right off the bat, all of the clip-based media came over fine and in its correct time and place in the timeline. Unfortunately, the titles did not come over and were offline — none of them were recognized as titles, so they couldn’t be edited. Dissolves came over correctly; however, none of the third-party BCC or Sapphire effects came across. I didn’t really expect the third-party effects to come over, but at some point, in order to be a proper conforming application, Resolve will need to figure out a way to translate those when sequences are sent over from an NLE. This is more of a grand wish, but in order to be a force as an all-in-one app for post finishing, this is a puzzle that will need to be solved.

Otherwise, those who want to use alternative nonlinear editing systems will have to continue using their NLE as the editor, Resolve as a color-only solution and the NLE as their finisher. And from what I can tell, Blackmagic wants Resolve to be your last stop in the post pipeline. Obviously, if you start your edit in Resolve and use third-party OpenFX (OFX) like BCC or Sapphire, you shouldn’t have any problems.
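
If you do this kind of conform prep often, it’s worth knowing that Resolve also exposes a Python scripting API (it has grown considerably since version 14, so treat this as a sketch against the current API rather than a recipe for 14.2). A minimal example, assuming the DaVinciResolveScript module that ships with Resolve is on your Python path; the media paths are placeholders:

```python
# Hedged sketch: import media and an AAF timeline via Resolve's scripting API.
# DaVinciResolveScript ships with Resolve; the paths below are hypothetical.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")  # attach to the running Resolve app
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Bring in the mezzanine media first so the AAF's clips can relink
media_pool.ImportMedia(["/path/to/mezzanine/media"])

# Import the AAF as a new timeline; returns None if the import fails
timeline = media_pool.ImportTimelineFromFile("/path/to/locked_cut.aaf")
if timeline:
    print("Imported timeline:", timeline.GetName())
else:
    print("AAF import failed; check media paths and handles")
```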

Last on my list was to test the new Compare with Current Timeline command. For this option to pop up when you right-click, you must be in the Media tab with the sequence you want to compare against loaded. You then need to find the sequence you want to compare from, right-click on it and click Compare with Current Timeline. Once you click the sequences you want to compare, a new window will pop up with the option to view the Diff Index. The Diff Index is a text-based list of each new edit, shown next to a timeline view that visually compares the edits between the two sequences. This visual representation of the edits is where you apply those changes. There are marks identifying what has changed, and if you want to apply those changes you must right-click and hit Apply Changes. My suggestion is to duplicate your sequence before you apply changes (actually, as a general rule, you should be constantly duplicating your sequence as a backup). The Compare with Current Timeline function is pretty incredible. I tested it using an AAF I had created in Media Composer and compared it against an AAF made from the same sequence but with some “creative” changes and trimmed clips — essentially a locked sequence that suddenly became unlocked while in online/color and needed to reflect the latest changes from the offline edit.
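
Conceptually, the Diff Index is doing what any sequence diff does: aligning two ordered lists of edit events and flagging the spans that differ. Here is a toy illustration in Python (emphatically not Resolve’s actual algorithm), with each edit represented as a hypothetical (clip name, record in, record out) tuple:

```python
import difflib

# Each edit event: (clip name, record in, record out), in timeline order
old_cut = [("A001", 0, 48), ("B002", 48, 120), ("C003", 120, 200)]
new_cut = [("A001", 0, 48), ("B002", 48, 96), ("D004", 96, 176)]

def diff_index(old_events, new_events):
    """Toy 'Diff Index': yield the edit spans that changed between two cuts."""
    sm = difflib.SequenceMatcher(a=old_events, b=new_events, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            yield tag, old_events[i1:i2], new_events[j1:j2]

for change in diff_index(old_cut, new_cut):
    print(change)
# ('replace', [('B002', 48, 120), ('C003', 120, 200)],
#             [('B002', 48, 96), ('D004', 96, 176)])
```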

I wasn’t able to test Resolve 14 in a shared-project environment, so I couldn’t test a simultaneous update coming from another editor. But this can come in really handy for anyone who has to describe any changes made to a particular sequence or for that pesky online editor that needs to conform a new edit while not losing all their work.

I can’t wait to see the potential of this update, especially if we can get Resolve to recognize third-party effects from other NLEs. Now don’t get me wrong, I’m not oblivious to the fact that asking Resolve engineers to figure out how to recognize third-party effects in an AAF workflow is a pie-in-the-sky scenario. If it were easy, it probably would have already been done. But it is a vital feature if Blackmagic wants Resolve to be looked at like a Flame or Media Composer, but with high-end color and audio finishing built in. While I’m at it, I can’t help but think that Resolve may eventually include Fusion as another tab, maybe as a paid add-on, which would help close the circle toward being an all-in-one post production solution.

Summing Up
In the end, Resolve 14 has all the makings of becoming someone’s choice as a sole post workflow solution. Blackmagic has really stepped up to the plate and made a workable and fully functional NLE. And, oh yeah, not to mention it is one of the top color correction tools used in the world.

I did this review of the editing tab using Blackmagic Design’s DaVinci Resolve 14.2. Find the latest version here. And check out our other Resolve review — this one from a color and finishing perspective.

Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. Follow him on Twitter @allbetzroff.

Coco’s sound story — music, guitars and bones

By Jennifer Walden

Pixar’s animated Coco is a celebration of music, family and death. In the film, a young Mexican boy named Miguel (Anthony Gonzalez) dreams of being a musician just like his great-grandfather, even though his family is dead-set against it. On the evening of Día de los Muertos (the Mexican holiday called Day of the Dead), Miguel breaks into the tomb of legendary musician Ernesto de la Cruz (Benjamin Bratt) and tries to steal his guitar. The attempted theft transforms Miguel into a spirit, and as he flees the tomb he meets his deceased ancestors in the cemetery.

Together they travel to the Land of the Dead where Miguel discovers that in order to return to life he must have the blessing of his family. The matriarch, great-grandmother Mamá Imelda (Alanna Ubach) gives her blessing with one stipulation, that Miguel can never be a musician. Feeling as though he cannot live without music, Miguel decides to seek out the blessing of his musician great-grandfather.

Music is intrinsically tied to the film’s story, and therefore to the film’s soundtrack. Ernesto de la Cruz’s guitar is like another character in the film. The Skywalker Sound team handled all the physical guitar effects, from subtle to destructive. Although they didn’t handle any of the music, they covered everything from fret handling and body thumps to string breaks and smashing sounds. “There was a lot of interaction between music and effects, and a fine balance between them, given that the guitar played two roles,” says supervising sound editor/sound designer/re-recording mixer Christopher Boyes, who was just nominated for a CAS award for his mixing work on Coco. His Skywalker team on the film included co-supervising sound editor J.R. Grubbs, sound effects editors Justin Doyle and Jack Whittaker, and sound design assistant Lucas Miller.

Boyes bought a beautiful guitar from a pawn shop in Petaluma, near Skywalker’s Northern California home, and he and his assistant Miller spent a day recording string sounds and handling sounds. “Lucas said that one of the editors wanted us to cut the guitar strings,” says Boyes. “I was reluctant to cut the strings on this beautiful guitar, but we finally decided to do it to get the twang sound effects. Then Lucas said that we needed to go outside and smash the guitar. This was not an inexpensive guitar. I told him there was no way we were going to smash this guitar, and we didn’t! That was not a sound we were going to create by smashing the actual guitar! But we did give it a couple of solid hits just to get a nice rhythmic sound.”

To capture the true essence of Día de los Muertos in Mexico, Boyes and Grubbs sent effects recordists Daniel Boyes, Scott Guitteau, and John Fasal to Oaxaca to get field recordings of the real 2016 Día de los Muertos celebrations. “These recordings were essential to us and director Lee Unkrich, as well as to Pixar, for documenting and honoring the holiday. As such, the recordings formed the backbone of the ambience depicted in the track. I think this was a crucial element of our journey,” says Boyes.

Just as the celebration sound of Día de los Muertos was important, so too was the sound of Miguel’s town. The team needed to provide a realistic sense of a small Mexican town to contrast with the phantasmagorical Land of the Dead, and the recordings that were captured in Mexico were a key building block for that environment. Co-supervising sound editor Grubbs says, “Those recordings were invaluable when we began to lay the background tracks for locations like the plaza, the family compound, the workshop, and the cemetery. They allowed us to create a truly rich and authentic ambiance for Miguel’s home town.”

Bone Collecting
Another prominent set of sounds in Coco is the bones. Boyes notes that director Unkrich had specific guidelines for how the bones should sound. Characters like Héctor (Gael García Bernal), who are stuck in the Land of the Dead and are being forgotten by those still alive, needed to have more rattle-y sounding bones, as if the skeleton could come apart easily. “Héctor’s life is about to dissipate away, just as we saw with his friend Chicharrón [Edward James Olmos] on the docks, so their skeletal structure is looser. Héctor’s bones demonstrated that right from the get-go,” he explains.

In contrast, if someone is well remembered, such as de la Cruz, then the skeletal structure should sound tight. “In Miguel’s family, Papá Julio [Alfonso Arau] comically bursts apart many times, but he goes back together as a pretty solid structure,” explains Boyes. “Lee [Unkrich] wanted to dig into that dynamic first of all, to have that be part of the fabric that tells the story. Certain characters are going to be loose because nobody remembers them and they’re being forgotten.”

Creating the bone sounds was the biggest challenge for Boyes as a sound designer. Unkrich wanted to hear the complexity of the bones, from the clatter and movement down to the detail of cartilage. “I was really nervous about the bones challenge because it’s a sound that’s not easily embedded into a track without calling attention to itself, especially if it’s not done well,” admits Boyes.

Boyes started his bone sound collection by recording a mobile he built using different elements, like real bones, wooden dowels, little stone chips and other things that would clatter and rattle. Then one day Boyes stumbled onto an interesting bone sound while making a coconut smoothie. “I cracked an egg into the smoothie and threw the eggshell into the empty coconut hull and it made a cool sound. So I played with that. Then I was hitting the coconut on concrete, and from all of those sources I created a library of bone sounds.” Foley also contributed to the bone sounds, particularly for the literal, physical movements, like walking.

According to Grubbs, the bone sounds were designed and edited by the Skywalker team and then presented to the directors over several playbacks. The final sound of the skeletons is a product of many design passes, which were carefully edited in conjunction with the Foley bone recordings and sometimes used in combination with the Foley.

L-R: J.R. Grubbs and Chris Boyes

Because the film is so musical, the bone tracks needed to have a sense of rhythm and timing. To hit moments in a musical way, Boyes loaded bone sounds and other elements into Native Instruments’ Kontakt and played them via a MIDI keyboard. “One place for the bones that was really fun was when Héctor went into the security office at the train station,” says Boyes.

“Héctor comes apart and his fingers do a little tap dance. That kind of stuff really lent to the playfulness of his character and it demonstrated the looseness of his skeletal structure.”

From a sound perspective, Boyes feels that Coco is a great example of how movies should be made. During editorial, he and Grubbs took numerous trips to Pixar to sit down with the directors and the picture department. For several months before the final mix, they played sequences for Unkrich that they wanted to get direction on. “We would play long sections of just sound effects, and Lee — being such a student of filmmaking and being an animator — is quite comfortable with diving down into the nitty-gritty of just simple elements. It was really a collaborative and healthy experience. We wanted to create the track that Lee wanted and wanted to make sure that he knew what we were up to. He was giving us direction the whole way.”

The Mix
Boyes mixed alongside re-recording mixer Michael Semanick (music/dialogue) on Skywalker’s Kurosawa Stage. They mixed in native Dolby Atmos on a DFC console. While Boyes mixed, effects editor Doyle handled last-minute sound effects needs on the stage, and Grubbs ran the logistics of the show. Grubbs notes that although he and Boyes have worked together for a long time this was the first time they’ve shared a supervising credit.

“J.R. [Grubbs] and I have been working together for probably 30 years now,” says Boyes. “He always helped to run the show in a very supervisory way, so I just felt it was time he started getting credit for that. He’s really kept us on track, and I’m super grateful to him.”

One helpful audio tool for Boyes during the mix was the Valhalla Room reverb, which he used on Miguel’s footsteps inside de la Cruz’s tomb. “Normally, I don’t use plug-ins at all when I’m mixing. I’m a traditional mixer who likes to use a console and TC Electronic’s System 6000 and the Lexicon 480 reverb as outboard gear. But in this one case, the Valhalla Room plug-in had a preset that really gave me a feeling of the stone tomb.”

Unkrich allowed Semanick and Boyes to have a first pass at the soundtrack to get it to a place they felt was playable, and then he took part in the final mix process with them. “I just love Lee’s respect for us; he gives us time to get the soundtrack into shape. Then, he sat there with us for 9 to 10 hours a day, going back and forth, frame by frame at times and section by section. Lee could hear everything, and he was able to give us definitive direction throughout. The mix was achieved by and directed by Lee, every frame. I love that collaboration because we’re here to bring his vision and Pixar’s vision to the screen. And the best way to do that is to do it in the collaborative way that we did,” concludes Boyes.

Jennifer Walden is a New Jersey-based audio engineer and writer.

The 54th annual CAS Award nominees

The Cinema Audio Society announced the nominees for the 54th Annual CAS Awards for Outstanding Achievement in Sound Mixing. There are seven creative categories for 2017, and the Outstanding Product nominations were revealed as well.

Here are this year’s nominees:

Baby Driver

Motion Picture – Live Action

Baby Driver

Production Mixer – Mary H. Ellis, CAS

Re-recording Mixer – Julian Slater, CAS

Re-recording Mixer – Tim Cavagin

Scoring Mixer – Gareth Cousins, CAS

ADR Mixer – Mark Appleby

Foley Mixer – Glen Gathard


Dunkirk

Production Mixer – Mark Weingarten, CAS

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Gary Rizzo, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Thomas J. O’Connell

Foley Mixer – Scott Curtis

Star Wars: The Last Jedi

Production Mixer – Stuart Wilson, CAS

Re-recording Mixer – David Parker

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Ren Klyce

Scoring Mixer – Shawn Murphy

ADR Mixer – Doc Kane, CAS

Foley Mixer – Frank Rinella

The Shape of Water

Production Mixer – Glen Gauthier

Re-recording Mixer – Christian T. Cooke, CAS

Re-recording Mixer – Brad Zoern, CAS

Scoring Mixer – Peter Cobbin

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Peter Persaud, CAS

Wonder Woman

Production Mixer – Chris Munro, CAS

Re-recording Mixer – Chris Burdon

Re-recording Mixer – Gilbert Lake, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Nick Kray

Foley Mixer – Glen Gathard


Motion Picture – Animated

The Lego Batman Movie

Cars 3

Original Dialogue Mixer – Doc Kane, CAS

Re-recording Mixer – Tom Meyers

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Nathan Nance

Scoring Mixer – David Boucher

Foley Mixer – Blake Collins


Coco

Original Dialogue Mixer – Vince Caro

Re-recording Mixer – Christopher Boyes

Re-recording Mixer – Michael Semanick

Scoring Mixer – Joel Iwataki

Foley Mixer – Blake Collins

Despicable Me 3

Original Dialogue Mixer – Carlos Sotolongo

Re-recording Mixer – Randy Thom, CAS

Re-recording Mixer – Tim Nielsen

Re-recording Mixer – Brandon Proctor

Scoring Mixer – Greg Hayes

Foley Mixer – Scott Curtis


Ferdinand

Original Dialogue Mixer – Bill Higley, CAS

Re-recording Mixer – Randy Thom, CAS

Re-recording Mixer – Lora Hirschberg

Re-recording Mixer – Leff Lefferts

Scoring Mixer – Shawn Murphy

Foley Mixer – Scott Curtis

The Lego Batman Movie

Original Dialogue Mixer – Jason Oliver

Re-recording Mixer – Michael Semanick

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Wayne Pashley

Scoring Mixer – Stephen Lipson

Foley Mixer – Lisa Simpson


Motion Picture – Documentary

An Inconvenient Sequel: Truth to Power

Production Mixer – Gabriel Monts

Re-recording Mixer – Kent Sparling

Re-recording Mixer – Gary Rizzo, CAS

Re-recording Mixer – Zach Martin

Scoring Mixer – Jeff Beal

Foley Mixer – Jason Butler

Long Strange Trip

Eric Clapton: Life in 12 Bars

Re-recording Mixer – Tim Cavagin

Re-recording Mixer – William Miller

ADR Mixer – Adam Mendez, CAS

Gaga: Five Foot Two

Re-recording Mixer – Jonathan Wales, CAS

Re-recording Mixer – Jason Dotts


Jane

Production Mixer – Lee Smith

Re-recording Mixer – David E. Fluhr, CAS

Re-recording Mixer – Warren Shaw

Scoring Mixer – Derek Lee

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Ryan Maguire

Long Strange Trip

Production Mixer – David Silberberg

Re-recording Mixer – Bob Chefalas

Re-recording Mixer – Jacob Ribicoff


Television Movie Or Mini-Series

Big Little Lies: “You Get What You Need”

Production Mixer – Brendan Beebe, CAS

Re-recording Mixer – Gavin Fernandes, CAS

Re-recording Mixer – Louis Gignac

Black Mirror: “USS Callister”

Production Mixer – John Rodda, CAS

Re-recording Mixer – Tim Cavagin

Re-recording Mixer – Dafydd Archard

Re-recording Mixer – Will Miller

ADR Mixer – Nick Baldock

Foley Mixer – Sophia Hardman

Fargo: “The Narrow Escape Problem”

Production Mixer – Michael Playfair, CAS

Re-recording Mixer – Kirk Lynds, CAS

Re-recording Mixer – Martin Lee

Scoring Mixer – Michael Perfitt

Sherlock: “The Lying Detective”

Production Mixer – John Mooney, CAS

Re-recording Mixer – Howard Bargroff

Scoring Mixer – Nick Wollage

ADR Mixer – Peter Gleaves, CAS

Foley Mixer – Jamie Talbutt

Twin Peaks: “Gotta Light?”

Production Mixer – Douglas Axtell

Re-recording Mixer – Dean Hurley

Re-recording Mixer – Ron Eng


Television Series – 1-Hour

Better Call Saul: “Lantern”

Production Mixer – Phillip W. Palmer, CAS

Re-recording Mixer – Larry B. Benjamin, CAS

Re-recording Mixer – Kevin Valentine

ADR Mixer – Matt Hovland

Foley Mixer – David Michael Torres, CAS

Game of Thrones: “Beyond the Wall”

Game of Thrones

Production Mixer – Ronan Hill, CAS

Production Mixer – Richard Dyer, CAS

Re-recording Mixer – Onnalee Blank, CAS

Re-recording Mixer – Mathew Waters, CAS

Foley Mixer – Brett Voss, CAS

Stranger Things: “The Mind Flayer”

Production Mixer – Michael P. Clark, CAS

Re-recording Mixer – Joe Barnett

Re-recording Mixer – Adam Jenkins

ADR Mixer – Bill Higley, CAS

Foley Mixer – Anthony Zeller, CAS

The Crown: “Misadventure”

Production Mixer – Chris Ashworth

Re-recording Mixer – Lee Walpole

Re-recording Mixer – Stuart Hilliker

Re-recording Mixer – Martin Jensen

ADR Mixer – Rory de Carteret

Foley Mixer – Philip Clements

The Handmaid’s Tale: “Offred”

Production Mixer – John J. Thomson, CAS

Re-recording Mixer – Lou Solakofski

Re-recording Mixer – Joe Morrow

Foley Mixer – Don White


Television Series – 1/2 Hour

Ballers: “Yay Area”

Production Mixer – Scott Harber, CAS

Re-recording Mixer – Richard Weingart, CAS

Re-recording Mixer – Michael Colomby, CAS

Re-recording Mixer – Mitch Dorf

Black-ish: “Juneteenth, The Musical”

Production Mixer – Tom N. Stasinis, CAS

Re-recording Mixer – Peter J. Nusbaum, CAS

Re-recording Mixer – Whitney Purple

Modern Family: “Lake Life”

Production Mixer – Stephen A. Tibbo, CAS

Re-recording Mixer – Dean Okrand, CAS

Re-recording Mixer – Brian R. Harman, CAS

Silicon Valley: “Hooli-Con”

Production Mixer – Benjamin A. Patrick, CAS

Re-recording Mixer – Elmo Ponsdomenech

Re-recording Mixer – Todd Beckett

Veep: “Omaha”

Production Mixer – William MacPherson, CAS

Re-recording Mixer – John W. Cook II, CAS

Re-recording Mixer – Bill Freesh, CAS


Television Non-Fiction, Variety Or Music Series Or Specials

American Experience: “The Great War – Part 3”

Production Mixer – John Jenkins

Re-Recording Mixer – Ken Hahn

Anthony Bourdain: Parts Unknown: “Oman”

Re-Recording Mixer – Benny Mouthon, CAS

Anthony Bourdain: Parts Unknown

Deadliest Catch: “Last Damn Arctic Storm”

Re-Recording Mixer – John Warrin

Rolling Stone: “Stories from the Edge”

Production Mixer – David Hocs

Production Mixer – Tom Tierney

Re-Recording Mixer – Tom Fleischman, CAS

Who Killed Tupac?: “Murder in Vegas”

Production Mixer – Steve Birchmeier

Re-Recording Mixer – John Reese


Nominations For Outstanding Product – Production

DPA – DPA Slim

Lectrosonics – Duet Digital Wireless Monitor System

Sonosax – SX-R4+

Sound Devices – MixPre-10T Recorder

Zaxcom – ZMT3-Phantom


Nominations For Outstanding Product – Post Production

Dolby – Dolby Atmos Content Creation Tools

FabFilter – Pro-Q 2 Equalizer

Exponential Audio – R4 Reverb

iZotope – RX 6 Advanced

Todd-AO – Absentia DX

The Awards will be presented at a ceremony on February 24 at the Omni Los Angeles Hotel at California Plaza. This year’s CAS Career Achievement Award will be presented to re-recording mixer Anna Behlmer, the CAS Filmmaker Award will be given to Joe Wright and the Edward J. Greene Award for the Advancement of Sound will be presented to Tomlinson Holman, CAS. The Student Recognition Award winner will also be named and will receive a cash prize.

Main Photo: Wonder Woman

Dynasty composer Paul Leonard-Morgan

By Randi Altman

Scottish-born composer Paul Leonard-Morgan, who has a BAFTA Award and an Emmy nomination, has a resume that is as eclectic as it is long. He has worked on television (Limitless), films (The Numbers Station) and games (Dawn of War III). He has also produced music for artists such as No Doubt (Push and Shove).

In addition to the Wormwood miniseries for Netflix, one of Leonard-Morgan’s most recent projects is the soundtrack for The CW’s reboot of the show Dynasty. We recently reached out to him to talk about the show, the way he works and what’s next.

L-R: Dynasty showrunner Sallie Patrick, Paul Leonard-Morgan and director Brad Silberling with various musicians.

The name Dynasty comes with certain expectations and history. Did you use the original as an inspiration or borrow bits from the original as an homage?
I remember watching Dynasty as a child, but other than the main theme I couldn’t begin to tell you what the music was like, other than that it was pretty orchestral — Bill Conti is such a phenomenal composer. So right from the outset, our showrunner Sallie Patrick, director Brad Silberling and I wanted to do a title sequence with a modernized version of the iconic theme. People don’t tend to do title sequences these days, so it was very cool of The CW to let us do it.

We got a bunch of players into Capitol Studios and overlaid the orchestra onto my beats and synths. I brought in an old friend, Grammy-winning producer Troy Nokaan, to pump up the beats a bit. And, of course, there was Tom Hooten, principal trumpet player with the LA Philharmonic. For me, this is what the whole series’ ethos is about — tying the old to the new. Recording these players in the iconic Capitol Studios, where people like Sinatra recorded… we got such a vintage vibe going on. But then we added modern beats and synths – that’s what the whole score has become: a cool ’80s twist on modern sounds and orchestra. But other than the titles, the rest of the score does its own thing.

Can you talk about what the show’s producers wanted for the score? Did you have a lot of input?
We had detailed discussions at the start about what we wanted to achieve. Everything to do with the ’80s is so trendy now, from fashion to music, but there’s a fine line between adding ’80s elements to give the music a nice edge and creating an ’80s pastiche, which sounds dated.

I produce a lot of bands, so I started taking some of those beats and then adding in lots of analog synths. And then our scoring sessions added an orchestra. I was really keen to use a string section, as I felt that Dynasty is so iconic that giving it a small section would add that touch of class. The beats — the clicks, claps and kicks — are what give the Fallon character her swagger; the synths give it the pace, and the orchestra gives it the cinematic quality. I was keen to find a sound that would become instantly recognizable as that Dynasty sound.

How would you describe your score? 

Can you walk us through your process? How do you begin? What inspires you? 
I start by watching the episode with the director, editor and writer and then have a spotting session. We work out where the music should come in and out, but even that is open to interpretation, as sometimes their vision might be different from mine. They might imagine short musical cues, where I’m envisaging longer, shaped pieces.

For example, there’s a piece in the episode I’ve just finished (110) that lasts the entire part 4. Obviously, it’s not full-on drums the whole time, but doing cues like that gives it some real shape and adds filmic qualities to the visuals. After the spotting sessions, I go away and start writing. After a while, you get a feel for what’s working and what’s not — when to leave the dialogue alone and when to try and help it. We’re all pretty keen on not making the music too emotionally leading in this series. We want to let the acting do that, instead of sign-posting every happy/sad moment. When everyone’s happy, we’ll start orchestrating the music, get the parts ready, and then go off to Capitol, or another studio, to record the real players.

The schedule is pretty crazy — I have a week to score each episode. So while we’re recording the real players, the dub is in its final day. As we finish mixing each cue, we then start sending them over the Internet to the dub stage, where they quickly lay them in and balance the levels with dialogue and FX. They’re lucky that I don’t get the chance to go and sit in the dub much, as we’re literally mixing to the last second!

What tools do you use to create a score?
I use MOTU’s Digital Performer to write, produce and pre-mix, then everything gets transferred to Avid Pro Tools for the main recording session and final mix. Obviously, I have a million samples and lots of original analog synths.

You work in many different parts of the music world — TV, films and games. Do you have a preference? How are those hats different, or are they not very different at all?
It sounds like a cop-out, but I really don’t have a preference. I like working in different fields, as I always feel that brings a freshness and a different take to the next project, consciously and subconsciously. For example, I was scoring a series of plays for The National Theatre in London a few years ago — at the same time I was scoring the film Walking With Dinosaurs in LA and the game Battlefield Hardline — and that theatre score was so different from many things I’d done before. But it led to me working with the incredible filmmaker Errol Morris on his film The B-Side, and subsequently his new Netflix series Wormwood.

Dynasty came more from my work with bands. I like working in different genres, as it keeps pushing me out of my comfort zone, which I feel is really important as an artist.

You are building a new studio. Can you talk about that?
It’s been a process! Two weeks to go! Before I moved to LA with my family, I had just completed building my studio in Glasgow, Scotland. Then we moved over here, as I was living on planes between the UK and the US. This was about three years ago. I’ve been renting a studio, but finally the time came to buy a house and it’s got a huge guesthouse in the backyard (2,000 square feet), so I decided to get it properly treated.

We pulled down most of the inside and spent the last six months soundproofing and giving it the proper acoustic treatments, etc. But it’s insane, as I’ve hardly been out of my studio in Santa Monica while the build process has been going on, so the contractors have been FaceTiming me to show me how the progress is going. Trying to make decisions after a week of 20-hour days is hard.

I was keen to move to a place that had birds and nature. Coming from Scotland I like my space, which is not the easiest thing to find in LA. I insisted on having tons of windows in the studios for daylight to pour in — something that is great for me, but awful acoustically, so the acoustic guys spent weeks designing it so the glass wouldn’t affect the sound! But it’s looking fantastic, and I’ll have the ability to record up to 20 players in there. The irony is, having moved to what I thought was a pretty quiet neighborhood, I have a mega-famous hip-hop artist right next to me. His soundproofing had better be as good as mine!

What’s next for you project-wise?
Other than the rest of the season on Dynasty (we’re not even halfway there yet!), I’m working on a game score for the next year and a half, and have a new film starting in the New Year. I’ll also be working with my team on The Grand Tour, Amazon’s big series. Errol Morris’ Wormwood was recently released on Netflix — that’s been a life highlight for me!

Behind the Title: Butter Music and Sound’s Chip Herter

NAME: Chip Herter

COMPANY: NYC’s Butter Music+Sound/Haystack Music

Butter creates custom music compositions for advertising/film/TV. Haystack Music is the internal music catalog from Butter, featuring works from our composers, emerging artists and indie labels.

Director of Creative Sync Services

The role was designed to be a catch-all for all things creative music licensing. This includes music supervision (curating music for projects from the music industry at large, by way of record labels and publishers) and creative direction from our own Haystack Music library.

Rights management is an understated aspect of the role. It requires the ability to immediately know who the key players are in the ownership of a song, so that we can estimate costs for using a song on behalf of our clients and license a track with ease.

The best tool in my toolbox is the team that supports me every day.

I have a keen interest in putting the spotlight on new and emerging music. Be it a new piece written by one of our composers or an emerging act that I want to introduce to a larger audience.

Losing work to anyone else. It is a natural part of the job, but I can’t help getting personally invested in every project I work on. So the loss feels real, but in turn I always learn something from it.

Morning, for sure. Coffee and music? Yes, please!

Most likely working for a PR agency. I love to write, and I am good at it (so I’m told).

I was a late bloomer. I was 26 when I took my first internship as a music producer at Crispin Porter+Bogusky. From my first day on the job, I knew this was my higher calling. Anyone who geeks out over the language in a music license like me is destined to do this for a living.

Lexus Innovations

We recently worked on a campaign for Lexus with Team One USA called Innovations that was particularly great, and the response to the music was very positive. Recently, we also worked on projects for Levi’s, Nescafé, Starbucks and Keurig… coffee likes us, I guess!

I was fortunate to work with Wieden+Kennedy on their Coca-Cola Super Bowl ad in 2015. I placed a song from the band Hundred Waters, who have gone on to do remarkable things since. The spot carried a very positive message about anti-bullying, and it was great to work on something with such social awareness.

WiFi, Bluetooth and Spotify.

I don’t take for granted that my favorite pastime — going to concerts — is a fringe benefit of the job. When I am not listening to music, I am almost always listening to a podcast or a standup comedian. I also enjoy acting like a child with my two-year-old son as much as I can. I learn a lot from him about not taking myself too seriously.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually, copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage and let you preview and select which parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting    Recorded Resolution (per lens)    Processed Resolution (equirectangular)
5Kp30      2704×2624                         4992×2496
3Kp60      1568×1504                         2880×1440
Stills     3104×3000                         5760×2880
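
Stitching, at its core, means resampling each fisheye image onto an equirectangular (longitude/latitude) grid and blending the two halves at the seam. Fusion Studio’s calibrated stitcher is proprietary, but the underlying projection math is standard. Here is a minimal sketch for one ideal, forward-facing equidistant fisheye (real lenses need per-unit calibration and seam blending):

```python
import numpy as np

def equirect_to_fisheye(width, height, fish_w, fish_h, fov_deg=190.0):
    """For each pixel of a width x height equirectangular frame, compute the
    (px, py) sample position in a single forward-facing equidistant fisheye
    image covering fov_deg. Positions outside the lens's view become NaN."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    lon = (u / width - 0.5) * 2.0 * np.pi   # -pi .. pi across the frame
    lat = (0.5 - v / height) * np.pi        # +pi/2 (top) .. -pi/2 (bottom)

    # Unit direction vector for each pixel; +z is the lens's optical axis
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant fisheye: image radius is proportional to the off-axis angle
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    max_theta = np.radians(fov_deg) / 2.0
    r = theta / max_theta                   # 0 at center, 1 at the FOV edge

    norm = np.hypot(x, y)
    norm[norm == 0] = 1.0                   # avoid divide-by-zero at center
    px = fish_w / 2.0 * (1.0 + r * x / norm)
    py = fish_h / 2.0 * (1.0 - r * y / norm)

    px[theta > max_theta] = np.nan          # direction not seen by this lens
    py[theta > max_theta] = np.nan
    return px, py
```

Sampling the 2704×2624 fisheye at those coordinates (and doing the same for the rear lens, rotated 180 degrees) fills the 4992×2496 output frame.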

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution, and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p footage is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or the people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat-screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, setting the audio channels to Adaptive so that all four channels stay on a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
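
Under the hood, a first-order ambisonic rotation is just a small matrix applied across the four channels, which is why the audio can be re-steered to follow a picture rotation. Here is a hedged sketch of a pure yaw rotation, assuming FuMa-style W/X/Y/Z channel ordering (AmbiX files, such as YouTube uploads, order the channels W, Y, Z, X, and the sign convention for positive yaw varies by tool):

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw_deg):
    """Rotate a first-order B-format signal about the vertical axis.
    w = omni, x = front/back, y = left/right, z = up/down; all are
    equal-length sample arrays. W and Z are untouched by a pure yaw;
    the X/Y dipole pair rotates like a 2D vector."""
    a = np.radians(yaw_deg)
    x_r = np.cos(a) * x - np.sin(a) * y
    y_r = np.sin(a) * x + np.cos(a) * y
    return w, x_r, y_r, z
```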

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5K/p30 and which were recorded at 3K/p60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they are recorded at heights of up to 80 feet, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it easier to mount and shoot with than the 360 cameras I had used before, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility to rotate the camera while moving. And most importantly, I am much happier with the footage I get when shooting with it.

Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Mixing the sounds of history for Marshall

By Jennifer Walden

Director Reginald Hudlin’s courtroom drama Marshall tells the story of Thurgood Marshall (Chadwick Boseman) during his early career as a lawyer. The film centers on a case Marshall took in Connecticut in the early 1940s. He defended a black chauffeur named Joseph Spell (Sterling K. Brown) who was charged with attempted murder and sexual assault of his rich, white employer Eleanor Strubing (Kate Hudson).

At that time, racial discrimination and segregation were widespread even in the North, and Marshall helped to shed light on racial inequality by taking on Spell’s case and making sure he got a fair trial. It’s a landmark court case that is not only of huge historical consequence but is still relevant today.

Mixers Anna Behlmer and Craig Mann

“Marshall is so significant right now with what’s happening in the world,” says Oscar-nominated re-recording mixer Anna Behlmer, who handled the effects on the film. “It’s not often that you get to work on a biographical film of someone who lived and breathed and did amazing things as far as freedom for minorities. Marshall founded the NAACP’s Legal Defense Fund and argued Brown v. Board of Education, which stopped the segregation of the schools. So, in that respect, I felt the weight and the significance of this film.”

Oscar-winning supervising sound editor/re-recording mixer Craig Mann handled the dialogue and music. Behlmer and Mann mixed Marshall in 5.1 surround on a Euphonix System 5 console on Stage 2 at Technicolor at Paramount in Hollywood.

In the film, crowds gather on the steps outside the courthouse — a mixture of supporters and opponents shouting their opinions on the case. When dealing with shouting crowds in a film, Mann likes to record the loop group for those scenes outside. “We recorded in Technicolor’s backlot, which gives a nice slap off all the buildings,” says Mann, who miked the group from two different perspectives to capture the feeling that they’re actually outside. For the close-mic rig, Mann used an L-C-R setup with two Schoeps CMC641s for left and right and a CMIT 5U for center, feeding into a Tascam HS-P82 eight-channel recorder.

“We used the CMIT 5U mic because that was the production boom mic and we knew we’d be intermingling our recordings with the production sound, because they recorded some sound on the courthouse stairs,” says Mann. “We matched that up so that it would anchor everything in the center.”

For the distant rig, Mann went with a Sanken CSS-5 set to record in stereo, feeding a Sound Devices 722. Since they were running two setups simultaneously, Mann says they beeped everyone with a bullhorn to get slate sync for the two rigs. Then to match the timing of the chanting with production sound, they had a playback rig with eight headphone feeds out to chosen leaders from the 20-person loop group. “The people wearing headphones could sync up to the production chanting and those without headphones followed along with the people who had them on.”

Inside the courtroom, the atmosphere is quiet and tense. Mann recorded the loop group (inside the studio this time) reacting as non-verbally as possible. “We wanted to use the people in the gallery as a tool for tension. We do all of that without being too heavy-handed, or too hammy,” he says.

Sound Effects
On the effects side, the Foley — provided by Foley artist John Sievert and his team at JRS Productions in Toronto — was a key element in the courtroom scenes. Each chair creak and paper shuffle plays to help emphasize the drama. Behlmer references a quiet scene in which Thurgood is arguing with his co-counsel on the case, Sam Friedman (Josh Gad). “They weren’t arguing with their voices. Instead, they were shuffling papers and shoving things back and forth. The defendant even asks if everything is OK with them. Those sounds helped to convey what was going on without them speaking,” she says.

You can hear the chair creak as Judge Foster (James Cromwell) leans forward and raises an eyebrow and hear people in the gallery shifting in their seats as they listen to difficult testimony or shocking revelations. “Something as simple as people shifting on the bench to underscore how uncomfortable the moment was, those sounds go a long way when you do a film like this,” says Behlmer.

During the testimony, there are flashback sequences that illustrate each person’s perception of what happened during the events in question. The flashback effect is partially created through the picture (the flashbacks are colored differently) and partially through sound. Mann notes that early on, they made the decision to omit most of the sounds during the flashbacks so that the testimony wouldn’t be overshadowed.

“The spoken word was so important,” adds Behlmer. “It was all about clarity, and it was about silence and tension. There were revelations in the courtroom that made people gasp and then there were uncomfortable pauses. There was a delicacy with which this mix had to be done, especially with regards to Foley. When a film is really quiet and delicate and tense, then every little nuance is important.”

Away from the courthouse, the film has a bit of fun. There’s a jazz club scene in which Thurgood and his friends cut loose for the evening. A band and a singer perform on stage to a packed club. The crowd is lively. Men and women are talking and laughing and there’s the sound of glasses clinking. Behlmer mixed the crowds by following the camera movement to reinforce what’s on-screen.

On the music side, Mann’s challenge was to get the brass — the trumpet and trombone — to sit in a space that didn’t interfere too much with the dialogue. On the other hand, Mann still wanted the music to feel exciting. “We had to get the track all jazz-clubbed up. It was about finding a reverb that was believable for the space. It was about putting the vocals and brass upfront and having the drums and bass be accompaniment.”

Having the stems helped Mann not only mix the music against the dialogue but also fit the music to the image on-screen. During the performance, the camera sweeps in close along the band. Mann used the music stems to pan the instruments to match the scene. The shot cuts away from the performance to Thurgood and his friends at a table in the back of the club. Using the stems, Mann could duck out of the singer’s vocals and other louder elements to make way for the dialogue. “The music was very dynamic. We had to be careful that it didn’t interfere too much with the dialogue, but at the same time we wanted it to play.”

On the score, Mann used Exponential Audio’s R4 reverb to set the music back into the mix. “I set it back a bit farther than I normally would have just to give it some space, so that I didn’t have to turn it down for dialogue clarity. It got it to shine but it was a little distant compared to what it was intended to be.”

Behlmer and Mann feel the mix was pretty straightforward. Their biggest obstacle was the schedule. The film had to be mixed in just ten days. “I didn’t even have pre-dubs. It was just hang and go. I was hearing everything for the first time when I sat down to mix it — final mix it,” explains Behlmer.

With Mann working the music and dialogue faders, co-supervising sound editor Bruce Tanis was supplying Behlmer with elements she needed during the final mix. “I would say Bruce was my most valuable asset. He’s the MVP of Marshall for the effects side of the board,” she says.

On the dialogue side, Mann says his gear MVP was iZotope RX 6. With so many quiet moments, the dialogue was exposed. It played prominently, without music or busy backgrounds to help hide any flaws. And the director wanted to preserve the on-camera performances so ADR was not an option.

“We tried to use alts to work our way out of a few problems, and we were successful. But there were a few shots in the courtroom that began as tight shots on the boom and then cut wide, so the boom had to pull back and we had to jump onto the lavs there,” concludes Mann. “Having iZotope to help tie those together, so that the cut was imperceptible, was key.”

Jennifer Walden is a NJ-based audio engineer and writer. Follow her on Twitter @audiojeney.

Blade Runner 2049’s dynamic and emotional mix

By Jennifer Walden

“This film has more dynamic range than any movie we’ve ever mixed,” explains re-recording mixer Doug Hemphill of the Blade Runner 2049 soundtrack. He and re-recording mixer Ron Bartlett, from Formosa Group, worked with director Denis Villeneuve to make sure the audio matched the visual look of the film. From the pounding sound waves of Hans Zimmer and Benjamin Wallfisch’s score to the overwhelming wash of Los Angeles’s street-level soundscape, there’s massive energy in the film’s sonic peaks.

L-R: Ron Bartlett, Denis Villeneuve, Joe Walker, Ben Wallfisch and Doug Hemphill. Credit: Clint Bennett

The first time K (Ryan Gosling) arrives in Los Angeles in the film, the audience is blasted with a Vangelis-esque score that is reminiscent of the original Blade Runner, and that was ultimately the goal there — to envelop the audience in the Blade Runner experience. “That was our benchmark for the biggest, most enveloping sound sequence — without being harsh or loud. We wanted the audience to soak it in. It was about filling out the score, using all the elements in Hans Zimmer’s and Ben Wallfisch’s arsenal there,” says Bartlett, who handled the dialogue and music in the mix.

He and Villeneuve went through a wealth of musical elements — all of which were separated so Villeneuve could pick the ones he liked. His preference gravitated toward the analog synth sounds, like the Yamaha CS-80, which composer Vangelis used in his 1982 Blade Runner composition. “We featured those synth sounds throughout the movie,” says Bartlett. “I played with the spatial aspects, spreading certain elements into the room to envelop you in the score. It was very immersive that way.”

Bartlett notes that initially there were sounds from the original Blade Runner in their mix, like huge drum hits from the original score that were converted into 7.1 versions by supervising sound editor Mark Mangini at Formosa Group. Bartlett used those drum hits as punctuation throughout the film, for scene changes and transitions. “Those hits were everywhere. Actually, they’re the first sound in the movie. Then you can hear those big drum hits in the Vegas walk. That Vegas walk had another score with it, but we kept stripping it away until we were down to just those drum hits. It’s so dramatic.”

But halfway into the final mix for Blade Runner 2049, Mangini phoned Bartlett to tell him that the legal department said they couldn’t use any of those sounds from the original film. They’d need to replace them immediately. “Since I’m a percussionist, Mark asked if I could remake the drum hits. I stayed up until 3am and redid them all in my studio in 7.1, and then brought them in and replaced them throughout the movie. Mark had to make all these new spinner sounds and replace those in the film. That was an interesting moment,” reveals Bartlett.

Sounds of the City
The Los Angeles of 2049 is a multi-tiered city, and each level offers a different sonic experience. The zen-like prayer broadcast at the top level gradually transforms into a cacophony the closer one gets to street level. Advertisements, announcements, vehicles, music from storefronts and vending machine sounds mix with multi-language crowds — there’s Russian, Vietnamese, Korean, Japanese, and the list goes on. The city is bursting with sound, and Hemphill enhanced that experience in the scene where K sits outside Bibi’s Bar by using Cargo Cult’s Spanner on the crowd effects to pan the crowds around the theater and “give the audience a sense of this crush of humanity,” he says.

The city experience could easily be chaotic, but Hemphill and Bartlett made careful choices on the stage to “rack the focus” — determining for the audience what they should be listening to. “We needed to create the sense that you’re in this overpopulated city environment, but it still had to make sense. The flow of the sound is like ‘musique concrète.’ The sounds have a rhythm and movement that’s musical. It’s not random. There’s a flow,” explains Hemphill, who has an Oscar for his work on The Last of the Mohicans.

Bartlett adds that their goal was to keep a sense of clarity as the camera traveled through the street scene. If there was a big, holographic ad in the forefront, they’d focus on that, and as the scene panned away another sound would drive the mix. “We had to delete some of the elements and then move sounds around. It was a difficult scene and we took a long time on it but we’re happy with the clarity.”

On the quiet end of the spectrum, the film’s soundtrack shines. Spaces are defined with textural ambiences and handcrafted reverbs. Bartlett worked with a new reverb called DSpatial created by Rafael Duyos. “Mark Mangini and I helped to develop DSpatial. It’s a very unique reverb,” says Bartlett.

According to the website, DSpatial Reverb is a space modeler and renderer that offers 48 decorrelated outputs. It doesn’t use recorded impulse responses; instead it uses modeled IRs. This allows the user to select and tweak a series of parameters, like surface texture and space size, to model the acoustic and physical characteristics of any room. “It’s a decorrelated reverb, meaning you can add as many channels as you like and pan them into every Dolby Atmos speaker that is in the room. That wasn’t the only reverb we used, but it was the main one we used in specific environments in the film,” says Bartlett.
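
DSpatial’s actual algorithm isn’t public, but the “decorrelated outputs” idea itself is a standard trick in spatial audio: run copies of a signal through flat-magnitude, random-phase filters so every speaker feed keeps the same spectrum while the phase relationships differ. A generic textbook sketch in Python/NumPy, not DSpatial’s method:

```python
import numpy as np

def decorrelation_filters(n_outputs, fir_len=1024, seed=0):
    """Build flat-magnitude, random-phase FIRs: each output preserves the
    input's spectrum but gets a distinct phase response, so the copies
    measure as decorrelated while sounding tonally identical."""
    rng = np.random.default_rng(seed)
    n_bins = fir_len // 2 + 1
    filters = []
    for _ in range(n_outputs):
        phase = rng.uniform(-np.pi, np.pi, n_bins)
        phase[0] = phase[-1] = 0.0      # keep DC and Nyquist bins real
        filters.append(np.fft.irfft(np.exp(1j * phase), n=fir_len))
    return filters

def decorrelate(mono, n_outputs=8):
    """Fan one mono signal out to n decorrelated copies."""
    return [np.convolve(mono, h)[: len(mono)]
            for h in decorrelation_filters(n_outputs)]
```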

In combination with DSpatial, Bartlett used Audio Ease’s Altiverb, FabFilter reverbs and Cargo Cult’s Slapper delay to help create the multifaceted reflections that define the spaces on-screen so well. “We tried to make each space different,” says Bartlett. “We tried to evoke an emotion through the choices of reverbs and delays. It was never just one reverb or delay. I used two or three. It was very interesting creating those textures and creating those rooms.”

For example, in the Wallace Corporation building, the private office of Niander Wallace (Jared Leto) is a cold, lonely space. Water surrounds a central platform; reflections play on the imposing stone walls. “The way that Roger Deakins lit it was just stunning,” says Bartlett. “It really evoked a cool emotion. That’s what is so intangible about what we do, creating those emotions out of sound.” In addition to DSpatial, Altiverb and FabFilter reverbs, he used Cargo Cult’s Slapper delay, which “added a soft rolling, slight echo to Jared Leto’s voice that made him feel a little more God-like. It gave his voice a unique presence without being distracting.”

Another stunning example of Bartlett’s reverb work is K’s entrance into Rick Deckard’s (Harrison Ford) casino hideout. The space is dead quiet; then K opens the door, and the sound rings out and slowly dissipates. It conveys the feeling that this is a vast, isolated and empty space. “It was a combination of three reverbs and a delay that made that happen, so the tail had a really nice shine to it,” says Bartlett.

One of the most difficult rooms to find artistically, says Bartlett, was that of the memory maker, Dr. Ana Stelline (Carla Juri). “Everyone had a different idea of what that dome might sound like. We experimented with four or five different approaches to find a good place with that.”

The reverbs that Bartlett creates are never static. They change to fit the camera perspective. Bartlett needed several different reverb and delay processing chains to define how Dr. Stelline’s voice would react in the environment. For example, “There are some long shots, and I had a longer, more distant reverb. I bled her into the ceiling a little bit in certain shots so that in the dome it felt like the sound was bouncing off the ceiling and coming down at you. When she gets really close to the glass, I wanted to get that resonance of her voice bouncing off of the glass. Then when she’s further in the dome, creating that birthday memory, there is a bit broader reverb without that glass reflection in it,” he says.

On K’s side of the glass, the reverb is tighter to match the smaller dimensions and less reflective characteristics of that space. “The key to that scene was to not be distracting while going in and out of the dome, from one side of the glass to the other,” says Bartlett. “I had to treat her voice a little bit so that it felt like she was behind the glass, but if she was way too muffled it would be too distracting from the story. You have to stay with those characters in the story, otherwise you’re doing a disservice by trying to be clever with your mixing.

“The idea is to create an environment so you don’t feel like someone mixed it. You don’t want to smell the mixing,” he continues. “You want to make it feel natural and cool. If we can tell when we’ve made a move, then we’ll go back and smooth that out. We try to make it so you can’t tell someone’s mixing the sound. Instead, you should just feel like you’re there. The last thing you want to do is to make something distracting. You want to stay in the story. We are all about the story.”

Mixing Tools
Bartlett and Hemphill mixed Blade Runner 2049 at Sony Pictures Post in the William Holden Theater using two Avid S6 consoles running Avid Pro Tools 12.8.2, which features complete Dolby Atmos integration. “It’s nice to have Atmos panners on each channel in Pro Tools. You just click on the channel and the panner pops up. You don’t want to go to just one panner with one joystick all the time so it was nice to have it on each channel,” says Bartlett.

Hemphill feels the main benefit of having the latest gear — the S6 consoles and the latest version of Pro Tools — is that it gives them the ability to carry their work forward. “In times past, before we had this equipment and this level of Pro Tools, we would do temp dubs and then we would scrap a lot of that work. Now, we are working with main sessions all the way from the temp mix through to the final. That’s very important to how this soundtrack was created.”

For instance, the dialogue required significant attention due to the use of practical effects on set, like weather machines for rain and snow. All the dialogue work they did during the temp dubs was carried forward into the final mix. “Production sound mixer Mac Ruth did an amazing job while working in those environments,” explains Bartlett. “He gave us enough to work with and we were able to use iZotope RX 6 to take out noise that was distracting. We were careful not to dig into the dialogue too much because when you start pulling out too many frequencies, you ruin the timbre and quality of the dialogue — the humanness.”

One dialogue-driven scene that made a substantial transformation from temp dub to final mix was the underground sequence in which Freysa (Hiam Abbass) makes a revelation about the replicant child. “The actress was talking in this crazy accent and it was noisy and hard to understand what was happening. It’s a very strong expositional moment in the movie. It’s a very pivotal point,” says Bartlett. They looped the actress for that entire scene and worked to get her ADR performance to sound natural in context with the other sounds. “That scene came such a long way, and it really made the movie for me. Sometimes you have to dig a little deeper to tell the story properly but we got it. When K sits down in the chair, you feel the weight. You feel that he’s crushed by that news. You really feel it because the setup was there.”

Blade Runner 2049 is ultimately a story that questions the essence of human existence. While equipment and technique were an important part of the post process, in the end it was all about conveying the emotion of the story through the soundtrack.

“With Denis [Villeneuve], it’s very much feel-based. When you hear a sound, it brings to mind memories immediately. Denis is the type of director that is plugged into the emotionality of sound usage. The idea more than anything else is to tell the story, and the story of this film is what it means to be a human being. That was the fuel that drove me to do the best possible work that I could,” concludes Hemphill.

Jennifer Walden is a NJ-based writer and audio engineer. Follow her on Twitter @audiojeney.