Category Archives: Audio

Emmy Awards: OJ: Made in America composer Gary Lionelli

By Jennifer Walden

The aftermath of a tragic event plays out in front of the eyes of the nation. OJ Simpson, wanted for the gruesome murders of his ex-wife and her friend, fails to turn himself in to the authorities. News helicopters track the slow police pursuit that trails Simpson back to his Rockingham residence, where officers plan to take him into custody. Decades later, three-time Emmy-winning composer Gary Lionelli is presented with the opportunity to score that iconic Bronco chase.

Here, Lionelli talks about his approach to scoring ESPN’s massive documentary OJ: Made in America. His score on Part 3 is currently up for Emmy consideration for Outstanding Music Composition for a Limited Series. The entire OJ: Made in America score is available digitally through Lakeshore Records.

Gary Lionelli

Scoring OJ: Made in America seems like such a huge undertaking. It’s a five-part series, and each part is over 90 minutes long. How did you tackle this beast?
I’d never scored anything that long within such a short timeframe. Because each part was so long, it wasn’t like doing a TV series but more like scoring five 90-minute films back-to-back. I just focused on one cue at a time, putting one foot in front of the other so I wouldn’t feel overwhelmed by the full scope of the work and could relax enough to write the score! I knew I’d get to the finish line at some point, but it seemed so far away most of the time that I just didn’t want to dwell on that.

When you got this project, did they deliver it as one crazy, long piece? Or did they give it to you in its separate parts?
I got everything at once, which was totally mind-boggling. When you get any project, you need to watch it before you start working on it. For this one, it meant watching a seven-and-a-half-hour film, which was a feat in and of itself. The scale was just huge on this. Looking back, my eyelids still twitch.

It was a pretty nerve-racking time because the schedule was really tight. That was one of the most challenging parts of doing this project. I could have used a year to write this music, because five films are ordinarily what I’d do in a year, not six months. But all of us who write music for film know that you have to work within extreme deadlines as a matter of course. So you say yes, and you find a way to do it.

So you basically locked yourself up for 14 hours a day, and just plugged away at it?
Right, except it was actually about 15 hours a day, seven days a week, with no breaks. I finished the score 11 days before its theatrical release, which is insane. But, hey, that part is all in the past now, and it’s great to see the film out there getting such attention. One thing that made it worthwhile to me in the end was the quality of the filmmaking — I was riveted by the film the whole time I was working on it.

When composing, you worked only on one part at a time and not with an overall story arc in mind?
I watched all five parts over the course of four days. Once I’d watched the first two parts, I couldn’t wait to start writing so I did that for a bit and then went back to watch the rest.

The director Ezra Edelman wanted me to first score the infamous Bronco chase, which is in Part 3. It’s a 30-minute segment of that particular episode. It was a long sequence of events, all having to do with the chase itself, the events leading up to it and the aftermath of it. So that is what I scored first. It’s kind of strange to dive into a film by first scoring such a pivotal, iconic event. But it worked out — what I wrote for that segment stuck.

It was strange to be writing music for something I had seen on television 20 years before – just to think that there I was, watching the Bronco chase on TV along with everyone else, not having the remotest idea that 20 years down the line I was going to be writing music for this real-life event. It’s just a very odd thing.

The Bronco chase wasn’t a high-speed chase. It was a long police escort back to OJ’s house. The music you wrote for this segment was so brooding and it fit perfectly…
I loved when Zoe Tur, the helicopter pilot, said they were giving OJ a police motorcade. That’s exactly what he got. So I didn’t want to score the sequence by commenting literally on what was happening — what people were doing, or the fact that this was a “chase.” What I tried to do was focus on the subtext, which was the tragedy of the circumstances, and have that direct the course of the music, supplying an overarching musical commentary.

For your instrumentation, did the director let you be carried away by your own muse? Or did he request specific instruments?
He was specific about two things: one, that there would be a trumpet in the score, and two, he wanted an oboe. Other than those two instruments, it was up to me. I have a trumpet player, Jeff Bunnell, who I’ve worked with before. It’s a great partnership because he’s a gifted improviser, and sometimes he knows what I want even when I don’t. He did a fantastic job on the score.

I also had a 40-piece string section recorded at the Eastwood Scoring Stage at Warner Bros. Studios. We used players here in town and they added a lot, really bringing the score to life.

Were you conducting the orchestra? Or did you stay close to the engineer in the booth?
I wanted to be next to the recording engineer so I could hear everything as it was being recorded. I had a conductor instead. Besides, I’m a terrible conductor.

What instruments did you choose for the Bronco chase score?
For one of the scenes, I used layers of distorted electric guitars. Another element of the score was musical sound manipulation of acoustic instruments through electronics. It’s a time-consuming way to conjure up sounds, with all the trial and error involved, but the results can sometimes give a film an identity beyond what you can do with an orchestra alone.

So you recorded real instruments and then processed them? Can you share an example of your processing chain?
Sometimes I will get my guitar out and play a phrase. I’ll take that phrase and play it backwards, drop it two octaves, put it through a ring modulator, and then I’ll chop it up into short segments and use that to create a rhythmic pattern. The result is nothing like a real guitar. I didn’t necessarily know what I was going for at the start, but then I’d end up with this cool beat. Then I’d build a cue around that.

The original sound could be anything. I could tap a pencil on a desk and then drop that three octaves, time compress it and do all sorts of other processing. The result is a weird drum sound that no one’s ever heard before. It’s all sorts of experimentation, with the end result being a sound that has some originality and that piques the interest of the person watching the film.
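To make the chain Lionelli describes concrete — reverse a phrase, drop it two octaves, ring-modulate it, then chop it into a rhythmic pattern — here is a minimal numpy sketch. The sample rate, the synthetic "phrase" and the beat pattern are all placeholder assumptions for illustration, not anything from his actual sessions:

```python
import numpy as np

SR = 48_000  # assumed sample rate (Hz)

def pitch_drop(x, octaves):
    """Naive pitch drop by resampling: the sound also gets proportionally
    longer, which suits sound design more than melodic use."""
    factor = 2 ** octaves  # two octaves down -> 4x slower/longer
    idx = np.arange(0, len(x) * factor) / factor
    return np.interp(idx, np.arange(len(x)), x)

def ring_mod(x, freq):
    """Multiply by a sine carrier: classic ring modulation."""
    t = np.arange(len(x)) / SR
    return x * np.sin(2 * np.pi * freq * t)

def chop(x, slice_len, pattern):
    """Cut the sound into equal slices and resequence them into a beat."""
    slices = [x[i * slice_len:(i + 1) * slice_len]
              for i in range(len(x) // slice_len)]
    return np.concatenate([slices[i % len(slices)] for i in pattern])

# A one-second windowed tone stands in for a recorded guitar phrase.
phrase = np.sin(2 * np.pi * 220 * np.arange(SR) / SR) * np.hanning(SR)

# Reverse, drop two octaves, ring-modulate, then chop into a rhythm.
beat = chop(ring_mod(pitch_drop(phrase[::-1], 2), 30.0),
            SR // 8, [0, 3, 1, 3, 2, 3])
```

The resampling approach to pitch dropping is deliberately crude; it smears transients in ways that, as Lionelli notes, can leave the result sounding nothing like the source instrument.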

To break that down a little further, what program do you work in?
I work in Pro Tools. I went from Digital Performer to Logic, and then to Pro Tools — I think most film composers use Logic or Cubase, but there are a growing number who actually use Pro Tools. I don’t need MIDI to jump through a lot of hoops. I just need to record basic lines because most of that stuff gets replaced by real players anyhow.

When you work in Pro Tools, it’s already the delivery format for the orchestra, so you eliminate a conversion step. I’ve been using Pro Tools for the past four years, and so far it’s been working out great. It has some limitations in MIDI, but not that many and nothing that I can’t work around.

What are some of your favorite processing plug-ins?
For pitching, I use Melodyne by Celemony and Serato’s Pitch ‘n’ Time. There’s a new pitch shifter in Pro Tools called X-Form that’s also good. I also use Waves SoundShifter — whatever seems to do a better job for what I’m working on. I always experiment to see which one works the best to give me the sound I’m looking for.

Besides pitch shifters, I use GRM Tools by Ina-GRM. They make weird plug-ins, like one called Shift, that really convolute sound to the point where you can take a drum or rhythmic guitar and turn it into a hi-hat sound — not a real hi-hat, but a weird one. You never know what you’re going to get from this plug-in, and that’s why I like it so much.

I also use a lot of Soundtoys plug-ins, like Crystallizer, which can really change sounds in unexpected ways. Soundtoys has great distortion plug-ins too. I’m always on the hunt for something new.

A lot of times I use hardware, like guitar pedals. It’s great to turn real knobs and get results and ideas from that. Sometimes the hardware will have a punchier sound, and maybe you can do more extreme things with it. It’s all about experimentation.

You’ve talked before about using a Guitarviol. Was that how you created the long, suspended bass notes in the Bronco chase score?
Yes, I did use the Guitarviol in that and in other places in the score, too. It’s a very weird instrument, because it looks like a cello but doesn’t sound like one, and it definitely doesn’t sound like a guitar. It has a weird, almost Middle Eastern sound to it, and that makes you want to play in that scale sometimes. Sometimes I’ll use it to write an idea, and then I’ll have my cellist play the same thing on cello.

The Guitarviol is built by Jonathan Wilson, who lives in Los Angeles. He had no idea when he invented this thing that it was going to get adopted by the film composer community here in town. But it has, and he can’t make them fast enough.

Do you end up layering the Guitarviol and the cello in the mix? Or do you just go with straight cello?
It’s usually just straight cello. There are a couple of cellists I use who are great. I don’t want to dilute their performance by having mine in the background. The Guitarviol is an inspiration to write something for the cellists to hear, and then I’ll just have them take over from there.

The overall sound of Part 3 is very brooding, and the percussion choices have complementary deep tones. Can you tell me about some of the choices you made there?
Those are all real drums. I don’t use any samples. I love playing real drums. I have a real timpani, a big Brazilian Surdo drum, a gigantic steel bass drum that sounds like a Caribbean steel drum but only two octaves lower (it has a really odd sound), and I have a classic Ludwig Beatles drum kit. I have a marimba and a collection of small percussion instruments. I use them all.

Sometimes I will pitch the recordings down to make them sound bigger. The Surdo by itself sounds huge, and when you pitch that down half an octave it’s even bigger. So I used all of those instruments and I played them. I don’t think I used a single drum sample on the entire score.
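The "pitch it down to sound bigger" trick Lionelli mentions can be sketched with simple resampling: dropping the pitch also stretches the hit in time, and both effects contribute to the bigger impression. The sample rate and the synthetic drum hit below are placeholder assumptions:

```python
import numpy as np

SR = 48_000  # assumed sample rate (Hz)

def pitch_down(x, semitones):
    """Pitch a recording down by resampling; half an octave is 6 semitones.
    The output is proportionally longer than the input."""
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(x) * ratio) / ratio
    return np.interp(idx, np.arange(len(x)), x)

# Hypothetical drum hit: a decaying 60 Hz thump standing in for a Surdo.
t = np.arange(SR) / SR
hit = np.sin(2 * np.pi * 60 * t) * np.exp(-4 * t)

bigger = pitch_down(hit, 6)  # half an octave down: ~42 Hz fundamental
```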

When you use percussion samples, you have to hunt around your entire hard drive for a great tom-tom or a Taiko drum. It’s so much easier to run over to one in your studio and just play it. You never know how it’s going to sound, depending on how you mic it that day. And it’s more inspiring to play the real thing. You get great variation: every time you hit the drum it sounds different, but a sample pretty much sounds the same every time you trigger it.

For striking, did you choose mallets, brushes, sticks, your hands, or other objects?
For the Surdo, I used my hands. I use marimba mallets and timpani mallets for the other instruments. For example, I’ll use timpani mallets for the big steel bass drum. Sometimes I’ll use timpani mallets on my drum kit’s bass drum, because it gives a different sound. It has a more orchestral sound, not like a kick drum from a rock band.

I’m always experimenting. I use brushes a lot on cymbals, and I use the brushes on the steel drum because it gives it a weird sound. You can even use brushes on the timpani, and that creates a strange sound. There are definitely no rules. Whatever you think or can imagine having an effect on the drum, you just try it out. You never know what you’ll get — it’s always good to give it a chance.

In addition to the Bronco chase scene, are there any other tracks that stood out for you in Part 3?
When you score something this long, at a certain point everything starts to run together in your mind. You don’t remember what cue belongs to what scene. But there are many that I can remember. During the jury section of that episode, I used an oboe for Johnnie Cochran speaking to the jury. That was an interesting pairing, the oboe and Johnnie Cochran. In a way, the oboe became an extension of his voice during his closing argument. I can’t really explain why it worked, but somehow it was the right match.

For the beginning of Part 3, when the police arrive because there was a phone call from Nicole Brown Simpson saying she was afraid of OJ, the cue there was very understated. It had a lot of strange, low sounds to it. That one comes to mind.

At the end of Part 3, they go to OJ’s Rockingham residence, and his lawyers had staged the setting. I did a cue there that was sort of quizzical in a way, just to show the ridiculousness of the whole thing. It was like a farce, the way they set up his residence. So I made the score take a right turn into a different area for that part. It gets away from the dark, brooding undercurrent that the rest of Part 3’s score had.

Of all the parts you could have submitted for Emmy consideration, why did you choose Part 3?
It was a toss-up between Part 2 and Part 3. Part 2 had some of the more major trumpet themes, more of the signature sound with the trumpet and the orchestra. But there were a few examples of that in Part 3, too.

I just felt the Bronco chase, score-wise, had a lot of variation to it, and that it moved in a way that was unpredictable. I ultimately thought that was the way to go, though it was a close race between Part 2 and Part 3.

I found out later that ESPN had submitted Part 3 for Emmy consideration in other categories, so there is a bit of synergy there.

—————-

Jennifer Walden is a New Jersey-based audio engineer and writer.

Behind the Title: 3008 Editorial’s Matt Cimino and Greg Carlson

NAMES: Matt Cimino and Greg Carlson

COMPANY: 3008 Editorial in Dallas

WHAT’S YOUR JOB TITLE?
Cimino: We are sound designers/mixers.

WHAT DOES THAT ENTAIL?
Cimino: Audio is a storytelling tool. Our job is to enhance the story directly or indirectly and create the illusion of depth, space and a sense of motion with creative sound design and then mix that live in the environment of the visuals.

Carlson: And whenever someone asks, I always tend to prioritize sound design before mixing. Although I love every aspect of what we do, when a spot hits my room as a blank slate, it’s really the sound design that can take it down a hundred different paths. And for me, it doesn’t get better than that.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Carlson: I’m not sure a brief job title can encompass what anyone really does. I am a composer as well as a sound designer/mixer, so I bring that aspect into my work. I love musical elements that help stitch a unified sound into a project.

Cimino: That there really isn’t “a button” for that!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Carlson: The freedom. Having the opportunity to take a project where I think it should go and along the way, pushing it to the edge and back. Experimenting and adapting makes every spot a completely new trip.

Matt Cimino

Cimino: I agree. It’s the challenge of creating an expressive and aesthetically pleasing experience by taking the soundtrack to a whole new level.

WHAT’S YOUR LEAST FAVORITE?
Cimino: Not much. However, being an imperfect perfectionist, I get pretty bummed when I do not have enough time to perfect the job.

Carlson: People always say, “It’s so peaceful and quiet in the studio, as if the world is tuned out.” The downside of that is producer-induced near heart attacks. See, when you’re rocking out at max volume and facing away from the door, well, people tend to come in and accidentally scare you to death.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Cimino: I’m a morning person!

Carlson: Time is an abstract notion in a dark room with no windows, so no time in particular. However, the funniest time of day is when you notice you’re listening about 15 dB louder than the start of the day. Loud is better.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cimino: Carny. Or Evel Knievel.

Carlson: Construction/carpentry. Before audio, I had lots of gritty “hands-on” jobs. My dad taught me about work ethic, to get my hands dirty and to take pride in everything. I take that same approach with every spot I touch. Now I just sit in a nice chair while doing it.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Cimino: I’ve had a love for music since high school. I used to read all the liner notes on my vinyl. One day I remember going through my father’s records and thinking at that moment, I want to be that “sound engineer” listed in the notes. This led me to study audio at Columbia College in Chicago. I quickly gravitated towards post production audio classes and training. When I wasn’t recording and mixing music, I was doing creative sound design.

Carlson: I was always good with numbers and went to Michigan State to be an accountant. But two years in, I was unhappy. All I wanted was to work on music and compose, so I switched to audio engineering and never looked back. I knew the second I walked into my first studio, I had found my calling. People always say there isn’t a dream job; I disagree.

CAN YOU DESCRIBE YOUR COMPANY?
Cimino: A fun, stress-free environment full of artistry and technology.

Carlson: It is a place I look forward to every day. It’s like a family, solely focused on great creative.

CAN YOU NAME SOME RECENT SPOTS YOU HAVE WORKED ON?
Cimino: Snapple, RAM, Jeep, Universal Orlando, Cricket Wireless, Maserati.

Carlson: AT&T, Lay’s, McDonald’s, Bridgestone Golf.

Greg Carlson

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Carlson: It’s nearly impossible to pick one, but there is a project I see as pivotal in my time here in Dallas. It was shortly after I arrived six years ago. I think it was a boost to my confidence and, in turn, enhanced my style. The client was The Home Depot and the campaign was Let’s Do This. A creative I admire greatly here in town gave me the chance to spearhead the sonic approach for the work. There are many moments, milestones and memories, but this was a special project to me.

Cimino: There are so many. One of the most fun campaigns I worked on was for Snapple, where each spot opened with the “pop!” of the Snapple cap. I recorded several pops (close-miked) and selected one that I manipulated to sound larger than life while still retaining the sound of the brand’s signature cap pop being opened. After the cap pops, the spot transforms into an exploding fruit infusion. The sound was created by smashing Snapple bottles for the glass break, crushing, smashing and squishing fruit with my hands, and using a hydrophone to record splashing and underwater sounds to create the slow-motion effect of the fruit morphing. So much fun.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cimino: During a mix, my go-tos are iZotope, Soundtoys and Slate Digital. Outside the studio I can’t live without my Apple!

Carlson: Pro Tools, all things iZotope, Native Instruments.

THIS IS A HIGH-STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cimino: Family and friends. I love watching my kiddos play select soccer. Relaxing pool or beachside with a craft cider. Or on a single path/trail with my mountain bike.

Carlson: I work on my home, build things, like to be outside. When I need to detach for a bit, I prefer dangerous power tools or being on a body of water.


Multiple Emmy-winner Edward J. Greene, CAS, has passed away

Ed Greene died peacefully in Los Angeles on August 9, with his family by his side. He was 82 years old and is survived by his wife and children.

Born and raised in New York City, Greene attended Rensselaer Polytechnic Institute. He began his pro audio career with a summer job in 1954 at Allegro Studios in New York, doing voice and piano demos for music publishers. Within two years the studio was doing full recording sessions. Greene joined the Army in 1956 and served as a recording engineer for the US Army Band and Chorus in Washington, DC. Upon discharge, he co-founded Edgewood Studios in Washington with partners radio and television commentator Charles Osgood and composer George Wilkins. Some of his recordings are legendary, including Charlie Byrd and Stan Getz’s “Jazz Samba” and Ramsey Lewis’ “The In Crowd.”

In 1970, Greene came to California as chief engineer for MGM Records and worked with Sammy Davis Jr., The Osmonds, Lou Rawls and the prominent artists of that time. When many of these artists started doing television programs, he was asked to participate. He was brought into television mixing by Frank Sinatra, at a production meeting for Sinatra’s first broadcast.

Greene mixed many music, variety and award shows, and earned a well-deserved reputation as the “go-to” guy for live television drama, such as ER, Fail Safe and The West Wing Live. He garnered 22 Emmy wins, his most recent in 2015, and an astonishing 61 Emmy nominations (ranking him third for most nominations and second for most wins by an individual). He was a member of the Television Academy when it formed its current incarnation in 1977.

 

Greene had a special affinity for mixing live broadcasts. His live productions included decades of The Kennedy Center Honors, The Grammy Awards, The Tony Awards, The Academy Awards and The SAG Awards. He also mixed the Live from Lincoln Center specials, Carnegie Hall, Live at 100, numerous Macy’s Thanksgiving Day Parades, the Tournament of Roses Parade, The AFI Life Achievement Awards, The 52nd Presidential Inaugural Gala, The 1996 Summer Olympics, The 2002 Winter Olympics Opening and Closing Ceremonies and years of American Idol. His live production work garnered him a Cinema Audio Society Award and four additional CAS nominations.

Greene served on the Board of Directors of the Cinema Audio Society from 2005 until his death. In 2007, he was presented with the CAS Career Achievement Award, recognizing his career, his willingness to mentor and his contribution to the art of sound.

 


Richard King talks sound design for Dunkirk

Using historical sounds as a reference

By Mel Lambert

Currently garnering critical acclaim for its stunning and immersive soundtrack — particularly the IMAX showcase screenings — writer/director Christopher Nolan’s latest film follows the fate of nearly 400,000 allied soldiers who were marooned on the beaches of Dunkirk, and the extraordinary plans to rescue them using small ships from nearby English seaports. Although, sadly, more than 68,000 soldiers were captured or killed during the Battle of Dunkirk and the subsequent retreat, more than 300,000 were rescued over a nine-day period in May 1940.

Uniquely, Dunkirk’s primary story arcs — the Mole, or harbor from which the larger ships can take off troops; the Sea, focusing on the English flotilla of small boats; and the Air, spotlighting the activities of Spitfire pilots who protect the beaches and ships from German air-force attacks — follow different timelines, with the Mole sequences being spread over a week, the Sea over a day and the Air over an hour. A Warner Bros. release, Dunkirk stars Fionn Whitehead, Mark Rylance, Cillian Murphy, Tom Hardy and Kenneth Branagh. (An uncredited Michael Caine is the voice heard during various radio communications.)

Richard King

Marking his sixth collaboration with Nolan, supervising sound editor Richard King worked previously on Interstellar (2014), The Dark Knight Rises, Inception, The Dark Knight and The Prestige. He brings his unique sound perspective to these complex narratives, often with innovative sound design. Born in Tampa, King attended the University of South Florida, graduating with a BFA in painting and film, and entered the film industry in 1985. He is the recipient of three Academy Awards for Best Achievement in Sound Editing for Inception, The Dark Knight and Master and Commander: The Far Side of the World (2003), plus two BAFTA Awards and four MPSE Golden Reel Awards for Best Sound Editing.

The Sound of History
“When we first met to discuss the film,” King recalls, “Chris [Nolan] told me that he wanted Dunkirk to be historically accurate but not slavishly so — he didn’t plan to make a documentary. For example, several [Junkers Ju 87] Stuka dive bombers appear in the film, but there are no high-quality recordings of these aircraft, which had sirens built into the wheel struts for intimidation purposes. There are no Stukas still flying, nor could I find any design drawings so we could build our own. Instead, we decided to re-imagine the sound with a variety of unrelated sound effects and ambiences, using the period recordings as inspiration. We went out into a nearby desert with some real air raid sirens, which we over-cranked to make them more and more piercing — and to add some analog distortion. To this more ‘pure’ version of the sound we added an interesting assortment of other disparate sounds. I find the result scary as hell and probably very close to what the real thing sounded like.”
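King's description, speeding up ("over-cranking") siren recordings to make them more piercing and adding analog distortion, can be approximated digitally. The sketch below uses a synthetic siren and a tanh waveshaper as stand-ins; it is only an illustration of the general idea, not how the Dunkirk team actually worked (they used real air raid sirens and analog gear):

```python
import numpy as np

SR = 48_000  # assumed sample rate (Hz)

def overcrank(x, speed):
    """Play a recording back faster than it was recorded, raising its
    pitch (speed=1.5 shifts everything up roughly a fifth)."""
    idx = np.arange(0, len(x), speed)
    return np.interp(idx, np.arange(len(x)), x)

def soft_clip(x, drive):
    """tanh waveshaping: a common digital stand-in for analog-style
    distortion, normalized so output peaks stay at or below 1.0."""
    return np.tanh(drive * x) / np.tanh(drive)

# Placeholder siren: a two-second wavering tone standing in for the
# real air raid siren recordings.
t = np.arange(2 * SR) / SR
siren = np.sin(2 * np.pi * (400 + 300 * np.sin(2 * np.pi * 0.5 * t)) * t)

piercing = soft_clip(overcrank(siren, 1.5), drive=4.0)
```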

For other period Axis and Allied aircraft, King was able to locate several British Supermarine Spitfire fighters and a Bristol Blenheim bomber, together with a German Messerschmitt Bf 109 fighter. “There are about 200 Spitfires in the world that still fly; three were used during filming of Dunkirk,” King continues. “We received those recordings, and in post recorded three additional Spitfires.”

King was able to place up to 24 microphones in various locations around the airframe, near the engine (a supercharged 27-liter Rolls-Royce Merlin liquid-cooled V-12; later Spitfires used 37-liter Griffon motors), as well as close to the exhaust and within the cockpit, as the pilots performed a number of aerial maneuvers. “We used both mono and stereo mics to provide a wide selection for sound design,” he says.

King was looking for the sound of an “air ballet,” with the aircraft moving quickly across the sky. “There are moments when the plane sounds are minimized to place the audience more in the pilot’s head, and there are sequences where the plane engines are more prominent,” he says. “We also wanted to recreate the vibrations of this vintage aircraft, which became an important sound design element and was inspired by the shuddering images. I remember that Chris went up in a trainer aircraft to experience the sensation for himself. He reported that it was extremely loud with lots of vibration.”

To match up with the edited visuals secured from 65/70mm IMAX and Super Panavision 65mm film cameras, King needed to produce a variety of aircraft sounds. “We had an ex-RAF pilot who had flown in modern dogfights recreate some of those wartime flying gymnastics. The planes don’t actually produce dramatic changes in sound when throttling and maneuvering, so I came up with a simple and effective way to accentuate this somewhat. I wanted the planes to respond to the pilot’s stick and throttle movements immediately.”

For armaments, King’s sound effects recordists John Fasal and Eric Potter oversaw the recording of a vintage Bofors 40mm anti-aircraft cannon, like those seen aboard the Allied destroyers and support ships. “We found one in Napa Valley,” north of San Francisco, says King. “The owner had to make up live rounds, which we fired into a nearby hill. We also recorded a number of WWII British Lee-Enfield bolt-action rifles and German machine guns on a nearby range. We had to recreate the sound of the Spitfire’s guns, because the actual guns fitted to the Spitfires overheat when fired at sea level and cannot maintain the 1,000 rounds/minute rate we were looking for, except at altitude.”

King readily acknowledges the work at Warner Bros Sound Services of sound-effects editor Michael Mitchell, who worked on several scenes, including the ship sinkings, and sound effects editor Randy Torres, who worked with King on the plane sequences.

Group ADR was done primarily in the UK, “where we recorded at De Lane Lea and onboard a decommissioned WWII warship owned by the Imperial War Museum,” King recalls. “The HMS Belfast, which is moored on the River Thames in central London, was perfect for the reverberant interiors we needed for the various ships that sink in the film. We also secured some realistic Foley of people walking up and down ladders and on the superstructure.” Hugo Weng served as dialog editor and David Bach as supervising ADR editor.

Sounds for Moonstone, the key small boat whose fortunes the film follows across the English Channel, were recorded out of Marina del Rey in Southern California, including its motor and water slaps against the hull. “We also secured some nice Foley on deck, as well as opening and closing of doors,” King says.

Conventional Foley was recorded at Skywalker Sound in Northern California by Shelley Roden, Scott Curtis and John Roesch. “Good Foley was very important for Dunkirk,” explains King. “It all needed to sound absolutely realistic and not like a Hollywood war movie, with a collection of WWII clichés. We wanted it to sound as it would for the film’s characters. John and his team had access to some great surfaces and textures, and a wonderful selection of props.” Michael Dressel served as supervising Foley editor.

In terms of sound design, King offers that he used historical sounds as a reference, to conjure up the terror of the Battle for Dunkirk. “I wanted it to feel like a well-recorded version of the original event. The book ‘Voices of Dunkirk,’ written by Joshua Levine and based on a compilation of first-hand accounts of the evacuation, inspired me and helped me shape the explosions on the beach, with the muffled ‘boom’ as the shells and bombs bury themselves in the sand and then explode. The under-water explosions needed to sound more like a body slam than an audible noise. I added other sounds that amped it a couple more degrees.”

The soundtrack was re-recorded in 5.1-channel format at Warner Bros. Sound Services Stage 9 in Burbank during a six-week mix, with Gary Rizzo handling dialog and Gregg Landaker overseeing sound effects and music; Dunkirk was Landaker’s last film before retiring. “There was almost no looping on the film aside from maybe a couple of lines,” King recalls. “Hugo Weng mined the recordings for every gem, and Gary [Rizzo] was brilliant at cleaning up the voices and pushing them through the barrage of sound provided by sound effects and music, somehow without making them sound pushed. Production recordist Mark Weingarten faced enormous challenges, contending with strong wind and salt spray, but he managed to record tracks Gary could work with.”

The sound designer reports that he provided some 20 to 30 tracks of dialog and ADR “with options for noisy environments,” plus 40 to 50 tracks of Foley, depending on the action. This included shoes and hobnailed army boots, and groups of 20, especially in the ship scenes. “The score by composer Hans Zimmer kept evolving as we moved through the mixing process,” says King. “Music editor Ryan Rubin and supervising music editor Alex Gibson were active participants in this evolution.”

“We did not want to repeat ourselves or repeat others’ work,” King concludes. “All sounds in this movie mean something. Every scene had to be designed with a hard-hitting sound. You need to constantly question yourself: ‘Is there a better sound we could use? Maybe something different that is appropriate to the sequence, that recreates the event in a new and fresh light?’ I am super-proud of this film and the track.”

Nolan — who was born in London to an American mother and an English father and whose family subsequently split their time between London and Illinois — has this quote on his IMDb page: “This is an essential moment in the history of the Second World War. If this evacuation had not been a success, Great Britain would have been obliged to capitulate. And the whole world would have been lost, or would have known a different fate: the Germans would undoubtedly have conquered Europe, the US would not have returned to war. Militarily it is a defeat; on the human plane it is a colossal victory.”

Certainly, the loss of life and supplies was profound — wartime Prime Minister Winston Churchill described Operation Dynamo as “the greatest military disaster in our long history.”


Mel Lambert has been involved with the production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.


The sounds of Spider-Man: Homecoming

By Jennifer Walden

Columbia Pictures and Marvel Studios’ Spider-Man: Homecoming, directed by Jon Watts, casts Tom Holland as Spider-Man, a role he first played in 2016 for Marvel Studios’ Captain America: Civil War (directed by Joe and Anthony Russo).

Homecoming reprises a few key character roles, like Tony Stark/Iron Man (Robert Downey Jr.) and Aunt May Parker (Marisa Tomei), and it picks up a thread of Civil War’s storyline. In Civil War, Peter Parker/Spider-Man helped Tony Stark’s Avengers in their fight against Captain America’s Avengers. Homecoming picks up after that battle, as Parker settles back into his high school life while still fighting crime on the side to hone his superhero skills. He seeks to prove himself to Stark but ends up becoming entangled with the supervillain Vulture (Michael Keaton).

Steven Ticknor

Spider-Man: Homecoming supervising sound editors/sound designers Steven Ticknor and Eric A. Norris — working at Culver City’s Sony Pictures Post Production Services — both brought Spidey experience to the film. Ticknor was a sound designer on director Sam Raimi’s Spider-Man (2002), and Norris was supervising sound editor/sound designer on director Marc Webb’s The Amazing Spider-Man 2 (2014). With experience on two different versions of Spider-Man, Ticknor and Norris together brought a well-rounded knowledge of the superhero’s sound history to Homecoming. They knew what had worked in the past and what to do to make this Spider-Man sound fresh. “This film took a ground-up approach, but we also took into consideration the magnitude of the movie,” says Ticknor. “We had to keep in mind that Spider-Man is one of Marvel’s key characters and he has a huge fan base.”

Web Slinging
Because the film is a sequel, Ticknor and Norris honored the sound of Spider-Man’s web-slinging ability that was established in Captain America: Civil War, but they also enhanced it to create a subtle difference between Spider-Man’s two suits in Homecoming. There’s the teched-out Tony Stark-built suit that uses the Civil War web-slinging sound, and then there’s Spider-Man’s homemade suit. “I recorded a couple of 5,000-foot magnetic tape cores unraveling very fast, and to that I added whooshes and other elements that gave a sense of speed. Underneath, I had some of the web sounds from the Tony Stark suit. That way the sound for the homemade suit had the same feel as the Stark suit but with an old-school flair,” explains Ticknor.

One new feature of Spider-Man’s Stark suit is that it has expressive eye movements. His eyes can narrow or grow wide with surprise, and those movements are articulated with sound. Norris says, “We initially went with a thin servo-type sound, but the filmmakers were looking for something less electrical. We had the idea to use the lens of a DSLR camera to manually zoom it in and out, so there’s no motor sound. We recorded it close up in the quiet environment of an unused ADR stage. That’s the primary sound for his eye movement.”

Droney
Another new feature is the addition of Droney, a small reconnaissance drone that pops off of Spider-Man’s suit and flies around. The sound of Droney was one of director Watts’ initial focus points. He wanted it to sound fun and have a bit of personality. He wanted Droney “to be able to vocalize in a way, sort of like Wall-E,” explains Norris.

Ticknor had the idea of creating Droney’s sound using a turbo toy — a small toy that has a mouthpiece and a spinning fan. Blowing into the mouthpiece makes the fan spin, which generates a whirring sound. The faster the fan spins, the higher the pitch of the generated sound. By modulating the pitch, they created a voice-like quality for Droney. Norris and sound effects editor Andy Sisul performed and recorded an array of turbo toy sounds to use during editorial. Ticknor also added in the sound of a reel-to-reel machine rewinding, which he sped up and manipulated “so that it sounded like Droney was fluttering as it was flying,” Ticknor says.

The Vulture
Supervillain the Vulture offers a unique opportunity for sound design. His alien-tech enhanced suit incorporates two large fans that give him the ability to fly. Norris, who was involved in the initial sound design of Vulture’s suit, created whooshes using Whoosh by Melted Sounds — a whoosh generator that runs in Native Instruments Reaktor. “You put individual samples in there and it creates a whoosh by doing a Doppler shift and granular synthesis as a way of elongating short sounds. I fed different metal ratcheting sounds into it because Vulture’s suit almost has these metallic feathers. We wanted to articulate the sound of all of these different metallic pieces moving together. I also fed sword shings into it and came up with these whooshes that helped define the movement as the Vulture was flying around,” he says. Sound designer/re-recording mixer Tony Lamberti was also instrumental in creating Vulture’s sound.
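The Doppler-plus-granular approach Norris describes can be illustrated with a minimal sketch. This is not the Melted Sounds plug-in; `doppler_whoosh` is a hypothetical function that loops a short source sample at a time-varying playback rate, so the pitch sweeps downward the way a fly-by would:

```python
import numpy as np

def doppler_whoosh(sample, sr, duration=1.5, f_ratio=1.5):
    """Turn a short source sample into a whoosh by sweeping the
    playback rate, mimicking a source flying past the listener."""
    n_out = int(duration * sr)
    t = np.linspace(0, 1, n_out)
    # Smooth rate curve: f_ratio (approaching) -> 1 -> 1/f_ratio (receding)
    rate = f_ratio ** np.cos(np.pi * t)
    pos = np.cumsum(rate)                 # fractional read positions
    pos = pos % (len(sample) - 1)         # loop the short source sample
    i = pos.astype(int)
    frac = pos - i
    out = (1 - frac) * sample[i] + frac * sample[i + 1]  # linear interpolation
    # Amplitude envelope: loudest at the fly-by point
    out *= np.hanning(n_out)
    return out
```

Feeding metallic ratchets or sword shings into a generator like this (plus the granular elongation the real plug-in adds) is how short hard sounds become long, defined movements.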

Alien technology is prevalent in the film. For instance, it’s a key ingredient in Vulture’s suit. The film’s sound needed to reflect the alien influence but also had to feel realistic to a degree. “We started with synthesized sounds, but we then had to find something that grounded it in reality,” reports Ticknor. “That’s always the balance of creating sound design. You can make it sound really cool, but it doesn’t always connect to the screen. Adding organic elements — like wind gusts and debris — makes it suddenly feel real. We used a lot of synthesized sounds to create Vulture, but we also used a lot of real sounds.”

The Washington Monument
One of the big scenes that Ticknor handled was the Washington Monument elevator sequence. Spider-Man stands on the top of the Washington Monument and prepares to jump over a helicopter that looms ever closer. He clears the helicopter’s blades and shoots a web onto the helicopter’s skid, using that to sling himself through a window just in time to shoot another web that grabs onto the compromised elevator car that contains his friends. “When Spider-Man jumps over the helicopter, I couldn’t wait to make that work perfectly,” says Ticknor. “When he is flying over the helicopter blades it sounds different. It sounds more threatening. Sound creates an emotion but people don’t realize how sound is creating the emotion because it is happening so quickly sometimes.”

To achieve a more threatening blade sound, Ticknor added in scissor slicing sounds, which he treated using a variety of tools like zPlane Elastique Pitch 2 and plug-ins from FabFilter and Soundtoys, all within the Avid Pro Tools 12 environment. “This made the slicing sound like it was about to cut his head off. I took the helicopter blades and slowed them down and added low-end sweeteners to give a sense of heaviness. I put all of that through the plug-ins and basically experimented. The hardest part of sound design is experimenting and finding things that work. There’s also music playing in that scene as well. You have to make the music play with the sound design.”
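The slow-down-and-sweeten recipe Ticknor describes (playing the recording back slower to lower its pitch, then layering in low-end weight) can be sketched as follows. The function names are hypothetical; the real work was of course done with the plug-ins named above, not code:

```python
import numpy as np

def slow_down(x, factor=2.0):
    """Play back at 1/factor speed, which also lowers the pitch,
    via simple linear interpolation."""
    n_out = int(len(x) * factor)
    pos = np.linspace(0, len(x) - 1, n_out)
    i = np.minimum(pos.astype(int), len(x) - 2)
    frac = pos - i
    return (1 - frac) * x[i] + frac * x[i + 1]

def add_low_end(x, sr, freq=40.0, amount=0.3):
    """Layer a sub-frequency sine under the slowed sound for heaviness."""
    t = np.arange(len(x)) / sr
    return x + amount * np.sin(2 * np.pi * freq * t)
```

A dedicated pitch tool like Elastique can shift pitch and time independently; this naive resampling couples the two, which is often exactly the "slowed blades" effect described.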

When designing sounds, Ticknor likes to generate a ton of potential material. “I make a library of sound effects — it’s like a mad science experiment. You do something and then wonder, ‘How did I just do that? What did I just do?’ When you are in a rhythm, you do it all because you know there is no going back. If you just do what you need, it’s never enough. You always need more than you think. The picture is going to change and the VFX are going to change and timings are going to change. Everything is going to change, and you need to be prepared for that.”

Syncing to Picture
To help keep the complex soundtrack in sync with the evolving picture, Norris used Conformalizer by Cargo Cult. Using the EDL of picture changes, Conformalizer makes the necessary adjustments in Pro Tools to resync the sound to the new picture.

Norris explains some key benefits of Conformalizer. “First, when you’re working in Pro Tools you can only see one picture at a time, so you have to go back and forth between the two different pictures to compare. With Conformalizer, you can see the two different pictures simultaneously. It also does a mathematical computation on the two pictures in a separate window, a difference window, which shows the differences in white. It highlights all the subtle visual effects changes that you may not have noticed.

Eric Norris

For example, in the beginning of the film, Peter leaves school and heads out to do some crime fighting. In an alleyway, he changes from his school clothes into his Spider-Man suit. As he’s changing, he knocks into a trash can and a couple of rats fall out and scurry away. Those rats were CG and they didn’t appear until the end of the process. So the rats in the difference window were bright white while everything else was a dark color.”
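The difference window Norris describes boils down to a per-pixel comparison of two versions of the same frame. Here is a minimal sketch of that idea, assuming 8-bit RGB frames as NumPy arrays; `difference_frame` is a hypothetical name, not Conformalizer’s actual implementation:

```python
import numpy as np

def difference_frame(frame_a, frame_b, threshold=16):
    """Absolute per-pixel difference between two cuts of the same frame.
    Unchanged areas come out dark; new VFX elements show up bright white."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    mask = diff.max(axis=-1)  # largest change across the RGB channels
    return np.where(mask >= threshold, 255, mask).astype(np.uint8)
```

In the rats example above, the CG rats would be the only bright region in an otherwise dark difference image.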

Another benefit is that the Conformalizer change list can be used on multiple Pro Tools sessions. Most feature films have the sound effects, including Foley and backgrounds, in one session. For Spider-Man: Homecoming, it was split into multiple sessions, with Foley and backgrounds in one session and the sound effects in another.

“Once you get that change list you can run it on all the Pro Tools sessions,” explains Norris. “It saves time and it helps with accuracy. There are so many sounds and details that match the visuals and we need to make sure that we are conforming accurately. When things get hectic, especially near the end of the schedule, and we’re finalizing the track and still getting new visual effects, it becomes a very detail-oriented process and any tools that can help with that are greatly appreciated.”
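Conceptually, a change list maps regions of the old cut to positions in the new cut, and the same list can then be applied to every session. A toy sketch of that idea (the data shapes here are illustrative, not Conformalizer’s actual format):

```python
# Each change-list entry: (old_start, old_end, new_start), all in frames.
def conform_clips(clips, changes):
    """Re-spot clip start times from an old cut to a new cut.
    clips: list of (name, start_frame) tuples.
    Clips whose region was cut out of the new picture are dropped."""
    out = []
    for name, start in clips:
        for old_start, old_end, new_start in changes:
            if old_start <= start < old_end:
                out.append((name, start - old_start + new_start))
                break
    return out
```

Running the same change list over the effects session and the Foley/backgrounds session keeps the split sessions in lockstep, which is the accuracy benefit Norris describes.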

Creating the soundtrack for Spider-Man: Homecoming required collaboration on a massive scale. “When you’re doing a film like this, it just has to run well. Unless you’re really organized, you’ll never be able to keep up. That’s the beautiful thing, when you’re organized you can be creative. Everything was so well organized that we got an opportunity to be super creative and for that, we were really lucky. As a crew, we were so lucky to work on this film,” concludes Ticknor.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Nugen adds 3D Immersive Extension to Halo Upmix

Nugen Audio has updated its Halo Upmix with a new 3D Immersive Extension, adding further options beyond the existing Dolby Atmos bed track capability. The 3D Immersive Extension now provides ambisonic-compatible output as an alternative to channel-based output for VR, game and other immersive applications. This makes it possible to upmix, re-purpose or convert channel-based audio for an ambisonic workflow.

With this 3D Immersive Extension, Halo fully supports Avid’s newly announced Pro Tools 12.8, now with native 7.1.2 stems for Dolby Atmos mixing. The combination of Pro Tools 12.8 and the Halo 3D Immersive Extension can provide a more fluid workflow for audio post pros handling multi-channel and object-based audio formats.

Halo Upmix is available immediately at a list price of $499 for both OS X and Windows, with support for Avid AAX, AudioSuite, VST2, VST3 and AU formats. The new 3D Immersive Extension replaces the Halo 9.1 Extension and can now be purchased for $199. Owners of the existing Halo 9.1 Extension can upgrade to the Halo 3D Immersive Extension for no additional cost. Support for native 7.1.2 stems in Avid Pro Tools 12.8 is available on launch.


Behind the Title: Nylon Studios creative director Simon Lister

NAME: Simon Lister

COMPANY: Nylon Studios

CAN YOU DESCRIBE YOUR COMPANY?
Nylon Studios is a New York- and Sydney-based music and sound house offering original composition and sound design for films and commercials. I am based in the Australia location.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
I help manage and steer the company, while also serving as a sound designer, client liaison, soundtrack creative and thinker.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People are constantly surprised by the amount of work that goes into making a soundtrack.

WHAT TOOLS DO YOU USE?
I use Avid Pro Tools and some really cool plug-ins.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is being able to bring a film to life through sound.

WHAT’S YOUR LEAST FAVORITE?
At times, clients can be so stressed and make things difficult. However, sometimes we just need to sit back and look at how lucky we are to be in such a fun industry. So in that case, we try our best to make the client’s experience with us as relaxing and seamless as possible.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Lunchtime.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Anything that involves me having a camera in my hand and taking pictures.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was pretty young. I got a great break when I was 19 years old in one of the best music studios in New Zealand and haven’t stopped since. Now, I’ve been doing this for 31 years (cough).

Honda Civic spot

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
In the last couple of months I think I’ve counted several different car brand spots we’ve worked on, including Honda, Hyundai, Subaru, Audi and Toyota. All great spots to sink our teeth and ears into.

Also, we have been working on the great wildlife series Tales by Light, which airs on National Geographic and Netflix.

For Every Child

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It would be having the opportunity to film and direct my own commercial, For Every Child, for UNICEF’s global rebranding TVC. We had the amazing voiceover of Liam Neeson and the incredible singing voice of Lisa Gerrard (Gladiator, Heat, Black Hawk Down).

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My camera, my computer and my motorbike.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I ride motorbikes through Morocco, Baja, the Himalayas, Mongolia, Vietnam, Thailand and New Zealand, and in the traffic of India.


Harbor’s Bobby Johanson discusses ADR for TV and film

By Jennifer Walden

A lot of work comes in and out of the ADR department at New York City’s Harbor Picture Company. A lot.

Over the past year alone, ADR mixer Bobby Johanson has been cranking out ADR and loop group for films such as Beauty and the Beast, The Light Between Oceans, Patriots Day, The Girl on the Train, Triple 9, Hail, Caesar! and more.

His expertise goes beyond film, though. Johanson also does ADR for series, including Amazon’s Red Oaks and their upcoming series The Marvelous Mrs. Maisel, and Netflix’s Master of None, which we will touch on lightly in a bit. First, let’s talk about the art of ADR.

According to Johanson, “Last week, I did full days on three different films. Some weeks we record full days, nights and weekends, depending on the season, film festivals, what’s in post, actor availability and everything else that goes on with scheduling. Some sessions will book for two hours out of a day, while another client will want eight hours because of actor availability.”

With so many projects passing through his studio, efficiency is essential, but not at the cost of a job well done. “You have an actor on the stage and the director in the room, and you have to make things efficient,” says Johanson. “You have to play lines back as they are going to be in the show. You want to play the line and hear, ‘Was that ADR?’ Instantly, it’s a whole new world. People have been burned by not so good ADR in the past, and I feel like that compromises the performance. It’s very important for the talent to feel like they’re in good hands, so they forget about the technical side and just focus on their acting.”

Johanson got his start in ADR at New York’s Sound One facility, first as a messenger running reels around, and then moving up to the machine room when there was an opening for Sound One’s new ADR stage. “We didn’t really have anyone teaching us. The job was shown to us once; then we just had to figure out how to thread the dubbers and the projector. Once we got those hung, we would sit in the ADR studio and watch. I picked up a lot of my skills old-school. I’ve learned to incorporate those techniques into current technology and that works well for us.”

Tools
Gear-wise, one staple of his ADR career has been the Soundmaster ADR control system. Johanson calls it an “old-school tool,” probably 25 years old at this point, but he hasn’t found anything faster for recording ADR. “I used it at Sound One, and I used it at Digital Cinema, and now I use it here at Harbor. Until someone can invent another ADR synchronizer, this is the best for me.”

Johanson integrates the Soundmaster system with Avid Pro Tools 12 and works as a two-man team with ADR recordist Mike Rivera. “You can’t beat the efficiency and the attention to detail that you can get with the two-man team.”

Rivera tags the takes and makes minor edits while Johanson focuses on the director and the talent. “Because we are working on a synchronizer, the ADR recordist can do things that you couldn’t do if you were just shooting straight to Pro Tools,” explains Johanson. “We can actually edit on the fly and instantly play back the line in sync. I have the time to get the reverb on it and sweeten it. I can mix the line in because I’m not cutting it or pulling it into the track. That is being done while the system is moving on the pre-roll for a playback.”

For reverb, Johanson chooses an outboard Lexicon PCM80. This puts the controls easily within reach, and he can quickly add or change the reverb on the fly, helping the clean ADR line to sync into the scene. “The reverb unit is pretty old, but it is single-handedly the easiest reverb unit that you can use. There are four room sizes, and then you can adjust the delay of the reverb four times. I have been using this reverb for so many years now that I can match any reverb from any movie or TV show because I know this unit so well.”

Another key piece of gear in his set-up is an outboard Eventide H3000 SE sampler, which Johanson uses to sample the dialogue line they need to replace and play it back over and over for the actor to re-perform. “We offer a variety of ways to do ADR, like using beeps and having the actor perform to picture, but many actors prefer an older method that goes back to ‘looping.’ Back in the day, you would just run a line over and over again and the actor would emulate it. Then we put the select take of that line to picture. It’s a method that 60 percent of our actors who come in here love to do, and I can do that using the sampler.”

He also uses the sampler for playback. By sampling background noise from the scene, he can play that under the ADR line during playback and it helps the ADR to sit in the scene. “I keep the sampler and reverb as outboard gear because I can control them quickly. I’m doing things freestyle and we don’t have to stop the session. We don’t have to stop the system and wait for a playback or wait to do a record pass. Because we are a two-man operation, I can focus on these pieces of gear while Mike is tagging the takes with their cue numbers and managing them in the Pro Tools session for delivery. I can’t find an easier or quicker way to do what I do.”

While Johanson’s set-up may lack the luster of newly minted audio tools, it’s hard to argue with results. It’s not a case of “if it’s not broke then don’t fix it,” but rather a case of “don’t mess with perfection.”

Master of None
The set-up served them well while recording ADR and loop group for Netflix’s Emmy-winning comedy series Master of None. “Kudos to production sound mixer Michael Barosky because there wasn’t too much dialogue that we needed to replace with ADR for Season 2,” says Johanson. “But we did do a lot of loop group — sweetening backgrounds and walla, and things like that.”

For the Italian episodes, they brought in bilingual actors to record Italian language loop group. One scene that stood out for Johanson was the wedding scene in Italy, where the guests start jumping into the swimming pool. “We have a nice-sized ADR stage and so that frees us up to do a lot of movement. We were directing the actors to jump in front of the mic and run by the mic, to give us the effect of people jumping into the pool. That worked quite nicely in the track.”


VR Audio — Differences between A Format and B Format

By Claudio Santos

A Format and B Format. What is the difference between them after all? Since things can get pretty confusing, especially with such non-descriptive nomenclature, we thought we’d offer a quick reminder of what each is in the spatial audio world.

A Format and B Format are two analog audio standards that are part of the ambisonics workflow.

A Format is the raw recording of the four individual cardioid capsules in an ambisonics microphone. Since each microphone model places its capsules at slightly different positions, A Format is somewhat specific to the microphone model.

B Format is the standardized format derived from the A Format. The first channel carries the amplitude information of the signal, while the other channels determine the directionality through phase relationships between each other. Once you get your sound into B Format you can use a variety of ambisonic tools to mix and alter it.
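For a tetrahedral microphone, the first-order A-to-B conversion is a simple sum-and-difference matrix. A minimal sketch follows, using the common capsule arrangement (front-left-up, front-right-down, back-left-down, back-right-up); real converters also apply mic-specific filtering and capsule-spacing compensation, which is omitted here:

```python
import numpy as np

def a_to_b_format(flu, frd, bld, bru):
    """First-order A-to-B conversion for a tetrahedral ambisonic mic.
    Inputs are the four capsule signals as equal-length arrays."""
    w = flu + frd + bld + bru   # omnidirectional amplitude channel
    x = flu + frd - bld - bru   # front-back
    y = flu - frd + bld - bru   # left-right
    z = flu - frd - bld + bru   # up-down
    return np.stack([w, x, y, z])
```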

It’s worth remembering that the B Format also has a few variations on the standard itself; the most important to understand are Channel Order and Normalization standards.

Ambisonics in B Format consists of four channels of audio — one channel carries the amplitude signal while the others represent directionality in a sphere through phase relationships. Since this can only be achieved by combining the channels, it is important that:

– The channels follow a known order
– The relative level between the amplitude channel and the others is known, so they can be properly combined

Each of these characteristics has a few variations, the most notable ones being:

– Channel Order: the Furse-Malham standard and the ACN standard
– Normalization (level): the MaxN standard and the SN3D standard

The combination of these variations results in two different B Format standards:
– Furse-Malham (FuMa) – an older standard that is still supported by a variety of plug-ins and other ambisonic processing tools
– AmbiX (ACN channel order with SN3D normalization) – a modern standard that has been widely adopted by distribution platforms such as YouTube

Regardless of the format you deliver your ambisonics file in, it is vital to keep track of the standards used throughout your chain and make the necessary conversions where appropriate. Otherwise, rotations and mirrors will end up in the wrong direction and the whole soundsphere will break down into a mess.
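As a concrete example of such a conversion, first-order FuMa differs from AmbiX in only two ways: channel order (FuMa's WXYZ versus ACN's WYZX) and the level of W (FuMa attenuates W by 3dB under MaxN; SN3D does not). A minimal sketch, assuming the signal is a (4, n_samples) NumPy array:

```python
import numpy as np

def fuma_to_ambix(b):
    """Convert first-order FuMa (W,X,Y,Z order, MaxN normalization)
    to AmbiX (ACN order W,Y,Z,X with SN3D normalization).
    b: array of shape (4, n_samples) in FuMa channel order."""
    w, x, y, z = b
    w = w * np.sqrt(2.0)           # undo FuMa's -3 dB on W (MaxN -> SN3D)
    return np.stack([w, y, z, x])  # reorder FuMa WXYZ -> ACN WYZX
```

Higher-order material needs per-degree normalization factors as well, so this sketch covers only the first-order case discussed above.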


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

Audio post vet Rex Recker joins Digital Arts in NYC

Rex Recker has joined the team at New York City’s Digital Arts as a full-time audio post mixer and sound designer. Recker, who co-founded NYC’s AudioEngine after working as VP and audio post mixer at Photomag recording studios, is an award-winning mixer with a long list of credits. Over the span of his career he has worked on countless commercials with clients including McCann Erickson, JWT, Ogilvy & Mather, BBDO, DDB, HBO and Warner Books.

Over the years, Recker has developed a following of clients who seek him out for his audio post mixer talents — they seek his expertise in surround sound audio mixing for commercials airing via broadcast, Web and cinemas. In addition to spots, Recker also mixes long-form projects, including broadcast specials and documentaries.

Since joining the Digital Arts team, Recker has already worked on several commercial campaigns, promos and trailers for such clients as Samsung, SlingTV, Ford, Culturelle, Orvitz, NYC Department of Health, and HBO Documentary Films.

Digital Arts, owned by Axel Ericson, is an end-to-end production, finishing and audio facility.