
Behind the Title: 3008 Editorial’s Matt Cimino and Greg Carlson

NAMES: Matt Cimino and Greg Carlson

COMPANY: 3008 Editorial in Dallas

WHAT’S YOUR JOB TITLE?
Cimino: We are sound designers/mixers.

WHAT DOES THAT ENTAIL?
Cimino: Audio is a storytelling tool. Our job is to enhance the story directly or indirectly and create the illusion of depth, space and a sense of motion with creative sound design and then mix that live in the environment of the visuals.

Carlson: And whenever someone asks, I always tend to prioritize sound design before mixing. Although I love every aspect of what we do, when a spot hits my room as a blank slate, it’s really the sound design that can take it down a hundred different paths. And for me, it doesn’t get better than that.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Carlson: I’m not sure a brief job title can encompass what anyone really does. I am a composer as well as a sound designer/mixer, so I bring that aspect into my work. I love musical elements that help stitch a unified sound into a project.

Cimino: That there really isn’t “a button” for that!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Carlson: The freedom. Having the opportunity to take a project where I think it should go and along the way, pushing it to the edge and back. Experimenting and adapting makes every spot a completely new trip.

Matt Cimino

Cimino: I agree. It’s the challenge of creating an expressive and aesthetically pleasing experience by taking the soundtrack to a whole new level.

WHAT’S YOUR LEAST FAVORITE?
Cimino: Not much. However, being an imperfect perfectionist, I get pretty bummed when I do not have enough time to perfect the job.

Carlson: People always say, “It’s so peaceful and quiet in the studio, as if the world is tuned out.” The downside of that is producer-induced near heart attacks. See, when you’re rocking out at max volume and facing away from the door, well, people tend to come in and accidentally scare you to death.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Cimino: I’m a morning person!

Carlson: Time is an abstract notion in a dark room with no windows, so no time in particular. However, the funniest time of day is when you notice you’re listening about 15 dB louder than the start of the day. Loud is better.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cimino: Carny. Or Evel Knievel.

Carlson: Construction/carpentry. Before audio, I had lots of gritty “hands-on” jobs. My dad taught me about work ethic, to get my hands dirty and to take pride in everything. I take that same approach with every spot I touch. Now I just sit in a nice chair while doing it.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Cimino: I’ve had a love for music since high school. I used to read all the liner notes on my vinyl. One day I remember going through my father’s records and thinking at that moment, I want to be that “sound engineer” listed in the notes. This led me to study audio at Columbia College in Chicago. I quickly gravitated towards post production audio classes and training. When I wasn’t recording and mixing music, I was doing creative sound design.

Carlson: I was always good with numbers and went to Michigan State to be an accountant. But two years in, I was unhappy. All I wanted was to work on music and compose, so I switched to audio engineering and never looked back. I knew the second I walked into my first studio, I had found my calling. People always say there isn’t a dream job; I disagree.

CAN YOU DESCRIBE YOUR COMPANY?
Cimino: A fun, stress-free environment full of artistry and technology.

Carlson: It is a place I look forward to every day. It’s like a family, solely focused on great creative.

CAN YOU NAME SOME RECENT SPOTS YOU HAVE WORKED ON?
Cimino: Snapple, RAM, Jeep, Universal Orlando, Cricket Wireless, Maserati.

Carlson: AT&T, Lay’s, McDonald’s, Bridgestone Golf.

Greg Carlson

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Carlson: It’s nearly impossible to pick one, but there is a project I see as pivotal in my time here in Dallas. It was shortly after I arrived six years ago. I think it was a boost to my confidence and in turn, enhanced my style. The client was The Home Depot and the campaign was Let’s Do This. A creative I admire greatly here in town gave me the chance to spearhead the sonic approach for the work. There are many moments, milestones and memories, but this was a special project to me.

Cimino: There are so many. One of the most fun campaigns I worked on was for Snapple, where each spot opened with the “pop!” of the Snapple cap. I recorded several pops (close-miked) and selected one that I manipulated to sound larger than life but also retain the sound of the brand’s signature cap pop being opened. After the cap pops, the spot transforms into an exploding fruit infusion. The sound was created by smashing Snapple bottles for the glass break, crushing, smashing and squishing fruit with my hands, and using a hydrophone to record splashing and underwater sounds to create the slow-motion effect of the fruit morphing. So much fun.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cimino: During a mix, my go-tos are iZotope, Soundtoys and Slate Digital. Outside the studio I can’t live without my Apple!

Carlson: Pro Tools, all things iZotope, Native Instruments.

THIS IS A HIGH-STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cimino: Family and friends. I love watching my kiddos play select soccer. Relaxing pool or beachside with a craft cider. Or on a single path/trail with my mountain bike.

Carlson: I work on my home, build things, like to be outside. When I need to detach for a bit, I prefer dangerous power tools or being on a body of water.

The sounds of Spider-Man: Homecoming

By Jennifer Walden

Columbia Pictures and Marvel Studios’ Spider-Man: Homecoming, directed by Jon Watts, casts Tom Holland as Spider-Man, a role he first played in 2016 for Marvel Studios’ Captain America: Civil War (directed by Joe and Anthony Russo).

Homecoming reprises a few key character roles, like Tony Stark/Iron Man (Robert Downey Jr.) and Aunt May Parker (Marisa Tomei), and it picks up a thread of Civil War’s storyline. In Civil War, Peter Parker/Spider-Man helped Tony Stark’s Avengers in their fight against Captain America’s Avengers. Homecoming picks up after that battle, as Parker settles back into his high school life while still fighting crime on the side to hone his superhero skills. He seeks to prove himself to Stark but ends up becoming entangled with the supervillain Vulture (Michael Keaton).

Steven Ticknor

Spider-Man: Homecoming supervising sound editors/sound designers Steven Ticknor and Eric A. Norris — working at Culver City’s Sony Pictures Post Production Services — both brought Spidey experience to the film. Ticknor was a sound designer on director Sam Raimi’s Spider-Man (2002) and Norris was supervising sound editor/sound designer on director Marc Webb’s The Amazing Spider-Man 2 (2014). Between them, Ticknor and Norris brought a well-rounded knowledge of the superhero’s sound history to Homecoming. They knew what had worked in the past and what it would take to make this Spider-Man sound fresh. “This film took a ground-up approach, but we also took into consideration the magnitude of the movie,” says Ticknor. “We had to keep in mind that Spider-Man is one of Marvel’s key characters and he has a huge fan base.”

Web Slinging
Since Homecoming is a sequel, Ticknor and Norris honored the sound of Spider-Man’s web-slinging ability that was established in Captain America: Civil War, but they also enhanced it to create a subtle difference between Spider-Man’s two suits in Homecoming. There’s the teched-out Tony Stark-built suit that uses the Civil War web-slinging sound, and then there’s Spider-Man’s homemade suit. “I recorded a couple of 5,000-foot magnetic tape cores unraveling very fast, and to that I added whooshes and other elements that gave a sense of speed. Underneath, I had some of the web sounds from the Tony Stark suit. That way the sound for the homemade suit had the same feel as the Stark suit but with an old-school flair,” explains Ticknor.

One new feature of Spider-Man’s Stark suit is that it has expressive eye movements. His eyes can narrow or grow wide with surprise, and those movements are articulated with sound. Norris says, “We initially went with a thin servo-type sound, but the filmmakers were looking for something less electrical. We had the idea to use the lens of a DSLR camera to manually zoom it in and out, so there’s no motor sound. We recorded it up close in the quiet environment of an unused ADR stage. That’s the primary sound for his eye movement.”

Droney
Another new feature is the addition of Droney, a small reconnaissance drone that pops off of Spider-Man’s suit and flies around. The sound of Droney was one of director Watts’ initial focus points. He wanted it to sound fun and have a bit of personality. He wanted Droney “to be able to vocalize in a way, sort of like Wall-E,” explains Norris.

Ticknor had the idea of creating Droney’s sound using a turbo toy — a small toy that has a mouthpiece and a spinning fan. Blowing into the mouthpiece makes the fan spin, which generates a whirring sound. The faster the fan spins, the higher the pitch of the generated sound. By modulating the pitch, they created a voice-like quality for Droney. Norris and sound effects editor Andy Sisul performed and recorded an array of turbo toy sounds to use during editorial. Ticknor also added in the sound of a reel-to-reel machine rewinding, which he sped up and manipulated “so that it sounded like Droney was fluttering as it was flying,” Ticknor says.
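The pitch-modulation trick Ticknor describes — wobbling the pitch of a steady whirring source until it takes on a voice-like quality — can be sketched in a few lines of DSP. This is an illustrative stand-in, not the team’s actual process: a synthetic tone plays the role of the turbo toy, and a sine LFO modulates the playback rate.

```python
import numpy as np

SR = 48000  # sample rate in Hz (an assumption for this sketch)

def vibrato(x: np.ndarray, rate_hz: float = 6.0, depth: float = 0.3) -> np.ndarray:
    """Impose a slow pitch wobble on a steady sound by reading through it
    at a modulated rate -- a rough stand-in for the voice-like pitch
    movement described above."""
    n = len(x)
    t = np.arange(n) / SR
    # Instantaneous playback rate wobbles around 1.0 at the LFO rate.
    inst_rate = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)
    read_pos = np.cumsum(inst_rate) % (n - 1)  # wrap to stay inside the source
    lo = read_pos.astype(int)
    frac = read_pos - lo
    # Linear interpolation between neighboring source samples.
    return x[lo] * (1 - frac) + x[lo + 1] * frac

# A steady 300 Hz tone stands in for the turbo toy's fan whir.
tone = np.sin(2 * np.pi * 300 * np.arange(SR) / SR)
wobbled = vibrato(tone)
```

In practice the modulation would be performed by hand (or by blowing into the toy), not by a fixed LFO, but the principle — pitch movement reads as vocal expression — is the same.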

The Vulture
Supervillain the Vulture offers a unique opportunity for sound design. His alien-tech enhanced suit incorporates two large fans that give him the ability to fly. Norris, who was involved in the initial sound design of Vulture’s suit, created whooshes using Whoosh by Melted Sounds — a whoosh generator that runs in Native Instruments Reaktor. “You put individual samples in there and it creates a whoosh by doing a Doppler shift and granular synthesis as a way of elongating short sounds. I fed different metal ratcheting sounds into it because Vulture’s suit almost has these metallic feathers. We wanted to articulate the sound of all of these different metallic pieces moving together. I also fed sword shings into it and came up with these whooshes that helped define the movement as the Vulture was flying around,” he says. Sound designer/re-recording mixer Tony Lamberti was also instrumental in creating Vulture’s sound.
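The generator Norris describes elongates short samples with granular synthesis and a Doppler-style pitch glide. A rough, hypothetical sketch of that idea — not the Melted Sounds/Reaktor implementation — overlaps short windowed grains read from the source at a sweeping playback rate:

```python
import numpy as np

SR = 48000  # sample rate in Hz (an assumption for this sketch)

def doppler_whoosh(sample: np.ndarray, duration: float = 2.0,
                   rate_start: float = 2.0, rate_end: float = 0.5) -> np.ndarray:
    """Stretch a short sample into a whoosh: overlap-add windowed grains
    while sweeping the grain playback rate, a crude Doppler-style glide."""
    n_out = int(duration * SR)
    grain = 2048          # grain length in samples
    hop = grain // 2      # 50% overlap between grains
    window = np.hanning(grain)
    out = np.zeros(n_out + grain)
    pos = 0
    while pos + grain < n_out:
        t = pos / n_out                                # 0..1 progress
        rate = rate_start + (rate_end - rate_start) * t  # swept playback rate
        # Read one grain from the source at the current rate (linear interp).
        idx = (np.arange(grain) * rate + t * len(sample)) % (len(sample) - 1)
        lo = idx.astype(int)
        frac = idx - lo
        g = sample[lo] * (1 - frac) + sample[lo + 1] * frac
        out[pos:pos + grain] += g * window
        pos += hop
    peak = np.max(np.abs(out))
    return out[:n_out] / peak if peak > 0 else out[:n_out]

# Noise stands in here for the metal-ratchet or sword recordings fed in.
rng = np.random.default_rng(0)
short_hit = rng.standard_normal(SR // 10)  # a 100 ms source sample
whoosh = doppler_whoosh(short_hit)
```

Feeding in different sources (metal ratchets, sword shings) changes the character of the result while the sweep supplies the sense of fly-by motion, which matches the workflow Norris describes.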

Alien technology is prevalent in the film. For instance, it’s a key ingredient to Vulture’s suit. The film’s sound needed to reflect the alien influence but also had to feel realistic to a degree. “We started with synthesized sounds, but we then had to find something that grounded it in reality,” reports Ticknor. “That’s always the balance of creating sound design. You can make it sound really cool, but it doesn’t always connect to the screen. Adding organic elements — like wind gusts and debris — makes it suddenly feel real. We used a lot of synthesized sounds to create Vulture, but we also used a lot of real sounds.”

The Washington Monument
One of the big scenes that Ticknor handled was the Washington Monument elevator sequence. Spider-Man stands on the top of the Washington Monument and prepares to jump over a helicopter that looms ever closer. He clears the helicopter’s blades and shoots a web onto the helicopter’s skid, using that to sling himself through a window just in time to shoot another web that grabs onto the compromised elevator car that contains his friends. “When Spider-Man jumps over the helicopter, I couldn’t wait to make that work perfectly,” says Ticknor. “When he is flying over the helicopter blades it sounds different. It sounds more threatening. Sound creates an emotion but people don’t realize how sound is creating the emotion because it is happening so quickly sometimes.”

To achieve a more threatening blade sound, Ticknor added in scissor slicing sounds, which he treated using a variety of tools, like zPlane Elastique Pitch 2 and plug-ins from FabFilter and Soundtoys, all within the Avid Pro Tools 12 environment. “This made the slicing sound like it was about to cut his head off. I took the helicopter blades and slowed them down and added low-end sweeteners to give a sense of heaviness. I put all of that through the plug-ins and basically experimented. The hardest part of sound design is experimenting and finding things that work. There’s also music playing in that scene as well. You have to make the music play with the sound design.”

When designing sounds, Ticknor likes to generate a ton of potential material. “I make a library of sound effects — it’s like a mad science experiment. You do something and then wonder, ‘How did I just do that? What did I just do?’ When you are in a rhythm, you do it all because you know there is no going back. If you just do what you need, it’s never enough. You always need more than you think. The picture is going to change and the VFX are going to change and timings are going to change. Everything is going to change, and you need to be prepared for that.”

Syncing to Picture
To help keep the complex soundtrack in sync with the evolving picture, Norris used Conformalizer by Cargo Cult. Using the EDL of picture changes, Conformalizer makes the necessary adjustments in Pro Tools to resync the sound to the new picture.
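The underlying conforming idea — remapping every sound event’s timecode from the old cut to the new one using a change list — can be illustrated with a toy example. The segment format below is invented for this sketch; real EDLs and Conformalizer’s own handling are far richer:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    """One stretch of picture that survives from the old cut to the new."""
    old_start: float  # where the stretch began in the old cut (seconds)
    old_end: float    # where it ended in the old cut
    new_start: float  # where the same stretch begins in the new cut

def conform_time(t: float, change_list: list[Segment]) -> Optional[float]:
    """Map a timestamp in the old cut to the new cut.
    Returns None if the material containing t was deleted."""
    for seg in change_list:
        if seg.old_start <= t < seg.old_end:
            return seg.new_start + (t - seg.old_start)
    return None  # this region no longer exists in the new picture

# Example: 10 s of picture was trimmed between 60 s and 70 s of the old cut.
changes = [
    Segment(0.0, 60.0, 0.0),     # head of the reel is untouched
    Segment(70.0, 600.0, 60.0),  # everything after the trim slides earlier
]

print(conform_time(45.0, changes))   # before the trim: 45.0 (unchanged)
print(conform_time(120.0, changes))  # after the trim: 110.0 (10 s earlier)
print(conform_time(65.0, changes))   # inside the trimmed region: None
```

Running the same mapping over every clip in every Pro Tools session is what keeps sound, Foley and backgrounds in sync as the picture changes — which is the time-saving batch behavior Norris describes next.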

Norris explains some key benefits of Conformalizer. “First, when you’re working in Pro Tools you can only see one picture at a time, so you have to go back and forth between the two different pictures to compare. With Conformalizer, you can see the two different pictures simultaneously. It also does a mathematical computation on the two pictures in a separate window, a difference window, which shows the differences in white. It highlights all the subtle visual effects changes that you may not have noticed.

Eric Norris

“For example, in the beginning of the film, Peter leaves school and heads out to do some crime fighting. In an alleyway, he changes from his school clothes into his Spider-Man suit. As he’s changing, he knocks into a trash can and a couple of rats fall out and scurry away. Those rats were CG and they didn’t appear until the end of the process. So the rats in the difference window were bright white while everything else was a dark color.”

Another benefit is that the Conformalizer change list can be used on multiple Pro Tools sessions. Most feature films have the sound effects, including Foley and backgrounds, in one session. For Spider-Man: Homecoming, it was split into multiple sessions, with Foley and backgrounds in one session and the sound effects in another.

“Once you get that change list you can run it on all the Pro Tools sessions,” explains Norris. “It saves time and it helps with accuracy. There are so many sounds and details that match the visuals and we need to make sure that we are conforming accurately. When things get hectic, especially near the end of the schedule, and we’re finalizing the track and still getting new visual effects, it becomes a very detail-oriented process and any tools that can help with that are greatly appreciated.”

Creating the soundtrack for Spider-Man: Homecoming required collaboration on a massive scale. “When you’re doing a film like this, it just has to run well. Unless you’re really organized, you’ll never be able to keep up. That’s the beautiful thing, when you’re organized you can be creative. Everything was so well organized that we got an opportunity to be super creative and for that, we were really lucky. As a crew, we were so lucky to work on this film,” concludes Ticknor.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Sound — Wonder Woman’s superpower

By Jennifer Walden

When director Patty Jenkins first met with supervising sound editor James Mather to discuss Warner Bros. Wonder Woman, they had a conversation about the physical effects of low-frequency sound energy on the human body, and how it could be used to manipulate an audience.

“The military spent a long time investigating sound cannons that could fire frequencies at groups of people and debilitate them,” explains Mather. “They found that the lower frequencies were far more effective than the very high frequencies. With the high frequencies, you can simply plug your ears and block the sound. The low-end frequencies, however, impact the fluid content of the human body. Frequencies around 5Hz-9Hz can’t be heard, but can have physiological, almost emotional effects on the human body. Patty was fascinated by all of that. So, we had a very good sound-nerd talk at our first meeting — before we even talked about the story of the film.”

Jenkins was fascinated by the idea of sound playing a physical role as well as a narrative one, and that direction informed all of Mather’s sound editorial choices for Wonder Woman. “I was amazed by Patty’s intent, from the very beginning, to veer away from very high-end sounds. She did not want to have those featured heavily in the film. She didn’t want too much top-end sonically,” says Mather, who handled sound editorial at his Soundbyte Studios in West London.

James Mather (far right) and crew take to the streets.

Soundbyte Studios offers creative supervision, sound design, Foley and dialog editing. The facility is equipped with Pro Tools 12 systems and Avid S6 and S3 consoles. Their client list includes top studios like Warner Bros., Disney, Fox, Paramount, DreamWorks, Aardman and Pathe. Mather’s team includes dialog supervisor Simon Chase, and sound effects editors Jed Loughran and Samir Fočo. When Mather begins a project, he likes to introduce his team to the director as soon as possible “so that they are recognized as contributors to the soundtrack,” he says. “It gives the team a better understanding of who they are working with and the kind of collaboration that is expected. I always find that if you can get everyone to work as a collaborative team and everyone has an emotional investment or personal investment in the project, then you get better work.”

Following Jenkins’s direction, Mather and his team designed a tranquil sound for the Amazonian paradise of Themyscira. They started with ambience tracks that the film’s sound recordist Chris Munro captured while they were on-location in Italy. Then Mather added Mediterranean ambiences that he and his team had personally collected over the years. Mather embellished the ambience with songbirds from Asia, Australasia and the Amazon. Since there are white peacocks roaming the island, he added in modified peacock sounds. Howler monkeys and domestic livestock, like sheep and goats, round out the track. Regarding the sheep and goats, Mather says, “We pitched them and manipulated them slightly so that they didn’t sound quite so ordinary, like a natural history film. It was very much a case of keeping the soundtrack relatively sparse. We did not use crickets or cicadas — although there were lots of them there while they were filming — because we wanted to stay away from the high-frequency sounds.”

Waterfalls are another prominent feature of Themyscira, according to Mather, but thankfully they weren’t actually on the island, so the location recordings were relatively clean. The post sound team had complete control over the volume, distance and frequency range of the waterfall sounds. “We very much wanted the low-end roar and rumble of the waterfalls rather than high-end hiss and white noise.”

The sound of paradise is serene in contrast to London and the front lines of World War I. Mather wanted to exaggerate that difference by overplaying the sound of boats, cars and crowds as Steve [Chris Pine] and Diana [Gal Gadot] arrived in London. “This was London at its busiest and most industrial time. There were structures being built on a major scale, so the environment was incredibly active. There were buses still being drawn by horses, but there were also cars. So, you have this whole mishmash of old and new. We wanted to see Diana’s reaction to being somewhere that she has never experienced before, with sounds that she has never heard and things she has never seen. The world is a complete barrage of sensory information.”

They recorded every vehicle in the film that they could, from planes and boats to the motorcycle that Steve uses to chase after Diana later on. “This motorcycle was like nothing we had ever seen before,” explains Mather. “We knew that we would have to go and record it because we didn’t have anything in our sound libraries for it.”

The studio spent days preparing the century-old motorcycle for the recording session. “We got about four minutes of recording with it before it fell apart,” admits Mather. “The chain fell off, the sprockets broke and then it went up in smoke. It was an antique and probably shouldn’t have been used! The funny thing is that it sounded like a lawnmower. We could have just recorded a lawnmower and it would’ve sounded the same!”

(Mather notes that the motorcycle Steve rides on-screen was a modern version of the century-old one they got to record.)

Goosing Sounds
Mather and his sound team have had numerous opportunities to record authentic weapons, cars, tanks, planes and other specific war-era machines and gear for projects they’ve worked on. While they always start with those recordings as their sound design base, Mather says the audience’s expectation of a sound is typically different from the real thing. “The real sound is very often disappointing. We start with the real gun or real car that we recorded, but then we start to work on them, changing the texture to give them a little bit more punch or bite. We might find that we need to add some gun mechanisms to make a gun sound a bit snappier or a bit brighter and not so dull. It’s the same with the cars. You want the car to have character, but you also want it to be slightly faster or more detailed than it actually sounds. By the nature of filmmaking, you will always end up slightly embellishing the real sound.”

Take the gun battles in Wonder Woman, for instance. They follow an obvious sequence: the gun fires, the bullet travels toward its target and then there is a noticeable impact. “This film has a lot of slow-motion bullets firing, so we had to amp up the sense of what was propelling that very slow-motion bullet. Recording the sound of a moving bullet is very hard. All of that had to be designed for the film,” says Mather.

In addition to the real era-appropriate vehicles, Wonder Woman has imaginary, souped-up creations too, like a massive bomber. For the bomber’s sound, Mather sought out artist Joe Rush who builds custom Mad Max-style vehicles. They recorded all of Rush’s vehicles, which had a variety of different V8, V12 and V6 engines. “They all sound very different because the engines are on solid metal with no suspension,” explains Mather. “The sound was really big and beefy, loud and clunky and it gave you a sense of a giant war monster. They had this growl and weight and threat that worked well for the German machines, which were supposed to feel threatening. In London, you had these quaint buses being drawn by horses, and the counterpoint to that were these military machines that the Germans had, which had to be daunting and a bit terrifying.

“One of the limitations of the WWI-era soundscapes is the lack of some very useful atmospheric sounds. We used tannoy (loudspeaker) effects on the German bomb factory to hint at the background activity, but had to be very sparing as these were only just invented in that era. (Same thing with the machine guns — a far more mechanical version than the ‘retatatat’ of the familiar WWII versions).”

One of Mather’s favorite scenes to design starts on the frontlines as Diana makes her big reveal as Wonder Woman. She crosses No Man’s Land and deflects the enemies’ fire with her bulletproof bracelets and shield. “We played with that in so many different ways because the music was such an important part of Patty’s vision for the film. She very much wanted the music to carry the narrative. Sound effects were there to be literal in many ways. We were not trying to overemphasize the machismo of it. The story is about the people and not necessarily the action they were in. So that became a very musical-based moment, which was not the way I would have normally done it. I learned a lot from Patty about the different ways of telling the story.”

The Powers
Following that scene, Wonder Woman recaptures the Belgian village the soldiers were fighting for by running ahead and storming into the German barracks. Mather describes it as a Guy Ritchie-style fight, with Wonder Woman taking on 25 German soldiers. “This is the first time that we really get to see her use all of her powers: the lasso, her bracelets, her shield, and even her shin guards. As she dances her way around the room, it goes from realtime into slow motion and back into realtime. She is repelling bullets, smashing guns with her back, using her shield as a sliding mat and doing slow-motion kicks. It is a wonderfully choreographed scene and it is her first real action scene.”

The scene required a fluid combination of realistic sounds and subdued, slow-motion sounds. “It was like pushing and pulling the soundtrack as things slowed down and then sped back up. That was a lot of fun.”

The Lasso
Where would Wonder Woman be without her signature lasso of truth? In the film, she often uses the lasso as a physical weapon, but there was an important scene where the lasso was called upon for its truth-finding power. Early in the film, Steve’s plane crashes and he’s washed onto Themyscira’s shore. The Amazonians bind Steve with the lasso and interrogate him. Eventually the lasso of truth overpowers him and he divulges his secrets. “There is quite a lot of acting on Chris Pine’s part to signify that he’s uncomfortable and is struggling,” says Mather. “We initially went by his performance, which gave the impression that he was being burned. He says, ‘This is really hot,’ so we started with sizzling and hissing sounds as if the rope was burning him. Again, Patty felt strongly about not going into the high-frequency realm because it distracts from the dialogue, so we wanted to keep the sound in a lower, more menacing register.”

Mather and his team experimented with adding a multitude of different elements, including low whispering voices, to see if they added a sense of personality to the lasso. “We kept the sizzling, but we pitched it down to make it more watery and less high-end. Then we tried a dozen or so variations of themes. Eventually we stayed with this blood-flow sound, which is like an arterial blood flow. It has a slight rhythm to it and if you roll off the top end and keep it fairly muted then it’s quite an intriguing sound. It feels very visceral.”

The last elements Mather added to the lasso were recordings he captured of two stone slabs grinding against each other in a circular motion, like a mill. “It created this rotating, undulating sound that almost has a voice. So that created this identity, this personality. It was very challenging. We also struggled with this when we did the Harry Potter films, to make an inert object have a character without making it sound a bit goofy and a bit sci-fi. All of those last elements we put together, we kept that very low. We literally raised the volume as you see Steve’s discomfort and then let it peel away every time he revealed the truth. As he was fighting it, the sound would rise and build up. It became a very subtle, but very meaningful, vehicle to show that the rope was actually doing something. It wasn’t burning him but it was doing something that was making him uncomfortable.”

The Mix
Wonder Woman was mixed at De Lane Lea (Warner Bros. London) by re-recording mixers Chris Burdon and Gilbert Lake. Mather reveals that the mixing process was exhausting, but not because of the people involved. “Patty is a joy to work with,” he explains. “What I mean is that working with frequencies that are so low and so loud is exhausting. It wasn’t even the volume; it was being exposed to those low frequencies all day, every day for nine weeks or so. It was exhausting, and it really took its toll on everybody.”

In the mix, Jenkins chose to have Rupert Gregson-Williams’s score lead nearly all of the action sequences. “Patty’s sensitivity and vision for the soundtrack was very much about the music and the emotion of the characters,” says Mather. “She was very aware of the emotional narrative that the music would bring. She did not want to lean too heavily on the sound effects. She knew there would be scenes where there would be action and there would be opportunities to have sound design, but I found that we were not pushing those moments as hard as you would expect. The sound design highs weren’t so high that you felt bereft of momentum and pace when those sound design-heavy scenes were finished. We ended up maintaining a far more interesting soundtrack that way.”

With superhero films like Batman v Superman: Dawn of Justice and Spider-Man, the audience expects a sound design-heavy track, but Jenkins’s music-led approach to Wonder Woman provides a refreshing spin on superhero film soundtracks. “The soundtrack is less supernatural and more down to earth,” says Mather. “I don’t think it could’ve been any other way. It’s not a predictable soundtrack and I really enjoyed that.”

Mather really enjoys collaborating with people who have different ideas and different approaches. “What was exciting about doing this film was that I was able to work with someone who had an incredibly strong idea about the soundtrack and yet was very happy to let us try different routes and options. Patty was very open to listening to different ideas, and willing to take the best from those ideas while still retaining a very strong vision of how the soundtrack was going to play for the audience. This is Patty’s DC story, her opportunity to open up the DC universe and give the audience a new look at a character. She was an extraordinary person to work with and for me that was the best part of the process. In the time of remakes, it’s nice to have a film that is fresh and takes a different approach.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @AudioJeney

Netflix’s The Last Kingdom puts Foley to good use

By Jennifer Walden

What is it about long-haired dudes strapped with leather, wielding swords and riding horses alongside equally fierce female warriors charging into bloody battles? There is a magic to this bygone era that has transfixed TV audiences, as evidenced by the success of HBO’s Game of Thrones, History Channel’s Vikings series and one of my favorites, The Last Kingdom, now on Netflix.

The Last Kingdom, based on a series of historical fiction novels by Bernard Cornwell, is set in late 9th century England. It tells the tale of Saxon-born Uhtred of Bebbanburg who is captured as a child by Danish invaders and raised as one of their own. Uhtred gets tangled up in King Alfred of Wessex’s vision to unite the three separate kingdoms (Wessex, Northumbria and East Anglia) into one country called England. He helps King Alfred battle the invading Danish, but Uhtred’s real desire is to reclaim his rightful home of Bebbanburg from his duplicitous uncle.

Mahoney Audio Post
The sound of the series is gritty and rich with leather, iron and wood elements. The soundtrack’s tactile quality is the result of extensive Foley work by Mahoney Audio Post, which has been with the series since the first season. “That’s great for us because we were able to establish all the sound for each character, village, environment and more, right from the first episode,” says Foley recordist/editor/sound designer Arran Mahoney.

Mahoney Audio Post is a family-operated audio facility in Sawbridgeworth, Hertfordshire, UK. Arran Mahoney explains the studio’s family ties. “Clare Mahoney (mum) and Jason Swanscott (cousin) are our Foley artists, with over 30 years of experience working on high-end TV shows and feature films. My brother Billy Mahoney and I are the Foley recordists and editors/sound designers. Billy Mahoney, Sr. (dad) is the founder of the company and has been a dubbing mixer for over 40 years.”

Their facility, built in 2012, houses a mixing suite and two separate audio editing suites, each with Avid Pro Tools HD Native systems, Avid Artist mixing consoles and Genelec monitors. The facility also has a purpose-built soundproof Foley stage featuring 20 different surfaces including grass, gravel, marble, concrete, sand, pebbles and multiple variations of wood.

Foley artists Clare Mahoney and Jason Swanscott.

Their mic collection includes a Røde NT1-A cardioid condenser microphone and a Røde NTG3 supercardioid shotgun microphone, which they use individually for close-miking or in combination to create more distant perspectives when necessary. They also have two other studio staples: a Neumann U87 large-diaphragm condenser mic and a Sennheiser MKH-416 short shotgun mic.

Going Medieval
Over the years, the Mahoney Foley team has collected thousands of props. For The Last Kingdom specifically, they visited a medieval weapons maker and bought a whole armory of items: swords, shields, axes, daggers, spears, helmets, chainmail, armor, bridles and more. And it’s all put to good use on the series. Mahoney notes, “We cover every single thing that you see on-screen as well as everything you hear off of it.” That includes all the feet (human and horses), cloth, and practical effects like grabs, pick-ups/put downs, and touches. They also cover the battle sequences.

Mahoney says they use 20 to 30 tracks of Foley just to create the layers of detail that the battle scenes need. Starting with the cloth pass, they cover the Saxons’ chainmail and the Vikings’ leather and fur armor. Then they do basic cloth and leather movements to cover non-warrior characters and villagers. They record a general weapons track, played at low volume, to provide a base layer of sound.

Next they cover the horses from head to hoof, with bridles and saddles, and Foley for the horses’ feet. When asked what’s the best way to Foley horse hooves, Mahoney asserts that it is indeed with coconuts. “We’ve also purchased horseshoes to add to the stable atmospheres and spot FX when required,” he explains. “We record any abnormal horse movements, i.e. crossing a drawbridge or moving across multiple surfaces, and sound designers take care of the rest. Whenever muck or gravel is needed, we buy fresh material from the local DIY stores and work it into our grids/pits on the Foley stage.”

The battle scenes also require Foley for all the grabs, hits and bodyfalls. For the blood and gore, they use a variety of fruit and animal flesh.

Then there’s a multitude of feet to cover the storm of warriors rushing at each other. All the boots they used were wrapped in leather to create an authentic sound that’s true to the time. Mahoney notes that they didn’t want to capture “too much heel in the footsteps, while also trying to get a close match to the sync sound in the event of ADR.”

Surfaces include stone and marble for the Saxon castles of King Alfred and the other noble lords. For the wooden palisades and fort walls, Mahoney says they used a large wooden base accompanied by wooden crates, plinths, boxes and an added layer of controlled creaks to give an aged effect to everything. On each series, they used 20 rolls of fresh grass, lots of hay for the stables, leaves for the forest, and water for all the sea and river scenes. “There were many nights cleaning the studio after battle sequences,” he says.

In addition to the aforementioned props of medieval weapons, grass, mud, bridles and leather, Mahoney says they used an unexpected prop: “The Viking cloth tracks were actually done with samurai suits. They gave us the weight needed to distinguish the larger size of a Danish man compared to a Saxon.”

Their favorite scenes to Foley, and by far the most challenging, were the battle scenes. “Those need so much detail and attention. It gives us a chance to shine on the soundtrack. The way that they are shot/edited can be very fast paced, which lends itself well to micro details. It’s all action, very precise and in your face,” he says. But if they had to pick one favorite scene, Mahoney says it would be “Uhtred and Ragnar storming Kjartan’s stronghold.”

Another challenging-yet-rewarding opportunity for Foley was during the slave ship scenes. Uhtred and his friend are sold into slavery as rowers on a Viking ship, which holds a crew of nearly 30 men. The Mahoney team brought the slave ship to life by building up layers of detail. “There were small wood creaks with small variations of wood and big creaks with larger variations of wood. For the big creaks, we used leather and a broomstick to work into the wood, creating a deep creak sound by twisting the three elements against each other. Then we would pitch shift or EQ to create size and weight. When you put the two together it gives detail and depth. Throw in a few tracks of rigging and pulleys for good measure and you’re halfway there,” says Mahoney.

For the sails, they used a two-mic setup to record huge canvas sheets to create a stereo wrap-around feel. For the rowing effects, they used sticks, brooms and wood rubbing, bouncing, or knocking against large wooden floors and solid boxes. They also covered all the characters’ shackles and chains.

Foley is a very effective way to draw the audience in close to a character or to help the audience feel closer to the action on-screen. For example, near the end of Season 2’s finale, a loyal subject of King Alfred has fallen out of favor. He’s eventually imprisoned and prepares to take his own life. The sound of his fingers running down the blade and the handling of his knife make the gravity of his decision palpable.

Mahoney shares another example of using Foley to draw the audience in — during the scene when Sven is eaten by Thyra’s wolves (following Uhtred and Ragnar storming Kjartan’s stronghold). “We used oranges and melons for Sven’s flesh being eaten and for the blood squirts. Then we created some tracks of cloth and leather being ripped. Specially manufactured claw props were used for the frantic, ravenous wolf feet,” he says. “All the action was off-screen so it was important for the audience to hear in detail what was going on, to give them a sense of what it would be like without actually seeing it. Also, Thyra’s reaction needed to reflect what was going on. Hopefully, we achieved that.”

Post developments at the AES Berlin Convention

By Mel Lambert

The AES Convention returned to Berlin after a three-year absence, and once again demonstrated that the Audio Engineering Society can organize a series of well-attended paper programs, seminars and workshops, in addition to an exhibition of familiar brands, for the European tech-savvy post community. 

Held at the Maritim Hotel in the creative heart of Berlin in late May, the 142nd AES Convention was co-chaired by Sascha Spors from the University of Rostock in Germany and Nadja Wallaszkovits from the Austrian Academy of Sciences. According to AES executive director Bob Moses, attendance was 1,800 — a figure at least 10% higher than last year’s gathering in Paris — with post professionals attending from several overseas countries, including China and Australia.

During the opening ceremonies, current AES president Alex Case stated that “AES conventions represent an ideal interactive meeting place,” whereas “social media lacks the one-on-one contact that enhances our communications bandwidth with colleagues and co-workers.” Keynote speaker Dr. Alex Arteaga, whose research integrates aesthetic and philosophical practices, addressed the thorny subject of “Auditory Architecture: Bringing Phenomenology, Aesthetic Practices and Engineering Together,” arguing that when considering the differences between audio soundscapes, “our experience depends upon the listening environment.” His underlying message was that a full appreciation of the various ways in which we hear immersive sounds requires a deeper understanding of how listeners interact with that space.

As part of his Richard C. Heyser Memorial Lecture, Prof. Dr. Jorg Sennheiser outlined “A Historic Journey in Audio-Reality: From Mono to AMBEO,” during which he reviewed the basis of audio perception and the interdependence of hearing with other senses. “Our enjoyment and appreciation of audio quality is reflected in the continuous development from single- to multi-channel reproduction systems that are benchmarked against sonic reality,” he offered. “Augmented and virtual reality call for immersive audio, with multiple stakeholders working together to design the future of audio.”

Post-Focused Technical Papers
There were several interesting technical papers covering the changing requirements of the post community, particularly in the field of immersive playback formats for TV and cinema. With the new ATSC 3.0 digital television format scheduled to come online soon, including object-based immersive sound, there is increasing interest in techniques for capturing surround material and then delivering it to consumer audiences.

In a paper titled “The Median-Plane Summing Localization in Ambisonics Reproduction,” Bosun Xie from the South China University of Technology in Guangzhou explained that, while one aim of Ambisonics playback is to recreate the perception of a virtual source in arbitrary directions, practical techniques are unable to recreate the correct high-frequency spectra in binaural pressures, which serve as front-back and vertical localization cues. Current research shows that the changes in interaural time difference/ITD that result from head-turning during Ambisonics playback match those of a real source, and hence provide a dynamic cue for vertical localization, especially in the median plane. In addition, the low-frequency virtual source direction can be approximately evaluated using a set of panning laws.
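For readers unfamiliar with the ITD cue the paper builds on, it can be approximated with the classic Woodworth spherical-head model. This is a standard textbook approximation, not code from the paper itself; the head radius and speed of sound below are typical illustrative values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source at
    the given azimuth, using the Woodworth spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(woodworth_itd(0))    # 0.0 -- a source dead ahead produces no ITD
print(woodworth_itd(90))   # roughly 0.65 ms for an average-sized head
```

The point the paper makes is that as the head turns, this azimuth-dependent delay changes, and listeners exploit that change as a dynamic localization cue.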

“Exploring the Perceptual Sweet Area in Ambisonics,” presented by Matthias Frank from the University of Music in Graz, Austria, described how the theoretical sweet-spot area falls short of the large listening area needed in the real world. He described a method for experimentally determining the perceptual sweet area, which is not limited to assessing the localization of both dry and reverberant sound at different Ambisonic encoding orders.

Another paper, “Perceptual Evaluation of Synthetic Early Binaural Room Impulse Responses Based on a Parametric Model,” presented by Philipp Stade from the Technical University of Berlin, described how an acoustical environment can be modeled using sound-field analysis plus spherical head-related impulse responses/HRIRs — and the results compared with measured counterparts. The listening experiment showed comparable performance that was, in the main, independent of room and test signals. (Perhaps surprisingly, the synthesis of direct sound and diffuse reverberation yielded almost the same results as the parametric model.)

“Influence of Head Tracking on the Externalization of Auditory Events at Divergence between Synthesized and Listening Room Using a Binaural Headphone System,” presented by Stephan Werner from the Technical University of Ilmenau, Germany, reported on a study using a binaural headphone system that considered the influence of head tracking on the localization of auditory events. Recordings were conducted of impulse responses from a five-channel loudspeaker set-up in two different acoustic rooms. Results revealed that head tracking increased sound externalization, but that it did not overcome the room-divergence effect.

Heiko Purnhagen from Dolby Sweden, in a paper called “Parametric Joint Channel Coding of Immersive Audio,” described a coding scheme that can deliver channel-based immersive audio content in such formats as 7.1.4, 5.1.4, or 5.1.2 at very low bit rates. Based on a generalized approach for parametric spatial coding of groups of two, three or more channels using a single downmix channel, together with a compact parametrization that guarantees full covariance re-instatement in the decoder, the coding scheme is implemented using Dolby AC-4’s A-JCC standardized tool.
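As a loose illustration of the parametric idea the paper describes (one downmix channel plus compact side parameters per channel group), here is a toy numpy sketch. It handles only the simple case of strongly correlated channels and deliberately omits the time/frequency tiling and full covariance re-instatement that Dolby's actual A-JCC tool performs.

```python
import numpy as np

def encode(group):
    """Downmix a group of channels (shape: channels x samples) to a single
    channel plus one gain parameter per channel -- a toy sketch of
    parametric joint channel coding, not the AC-4 algorithm."""
    downmix = group.sum(axis=0)
    energy = np.sum(downmix ** 2) + 1e-12       # avoid divide-by-zero
    gains = np.array([np.sum(ch * downmix) / energy for ch in group])
    return downmix, gains

def decode(downmix, gains):
    """Reconstruct an approximation of each channel from the downmix."""
    return np.outer(gains, downmix)

rng = np.random.default_rng(0)
base = rng.standard_normal(1024)
group = np.stack([0.8 * base, 0.5 * base, 0.2 * base])  # correlated channels
dmx, g = encode(group)
approx = decode(dmx, g)
print(np.allclose(approx, group, atol=1e-6))  # True for fully correlated channels
```

For less correlated channels, a real codec would additionally transmit covariance parameters and use decorrelators in the decoder; that is the part this sketch leaves out.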

Hardware Choices for Post Users
Several manufacturers demonstrated compact near-field audio monitors targeted at editorial suites and pre-dub stages. Adam Audio focused on its new near/mid-field S Series, which uses the firm’s ART (Accelerating Ribbon Technology) ribbon tweeter. The five models (the S2V, S3H, S3V, S5V and S5H) are designed for horizontal or vertical orientation. The firm’s newly developed LF and mid-range drivers with custom-designed waveguides for the tweeter — and MF driver on the larger, multiway models — are powered by a new DSP engine that “provides crossover optimization, voicing options and expansion potential,” according to the firm’s head of marketing, Andre Zeugner.

The Eve Audio SC203 near-field monitor features a three-inch LF/MF driver plus an AMT ribbon tweeter, and is supplied with a v-shaped rubberized pad that allows the user to decouple the loudspeaker from its base and reduce unwanted resonances while angling it flat or at a 7.5- or 15-degree angle. An adapter enables mounting directly on any microphone or speaker stand with a 3/8-inch thread. Integral DSP and a passive radiator located at the rear are said to reinforce LF reproduction, providing a response down to 62Hz (-3dB).

Genelec showcased The Ones, a series of point-source monitors comprising the current three-way Model 8351 plus the new two-way Model 8331 and three-way Model 8341. All three units include a co-axial MF/HF driver plus two acoustically concealed LF drivers for vertical and horizontal operation. A new Minimum Diffraction Enclosure/MDE is featured, together with the firm’s loudspeaker management and alignment software via a dedicated Cat5 network port.

The Neumann KH-80 DSP near-field monitor is designed to offer automatic system alignment using the firm’s control software that is said to “mathematically model dispersion to deliver excellent detail in any surroundings.” The two-way active system features a four-inch LF/MF driver and one-inch HF tweeter with an elliptical, custom-designed waveguide. The design is described as offering a wide horizontal dispersion to ensure a wide sweet spot for the editor/mixer, and a narrow vertical dispersion to reduce sound reflections off the mix console.

To handle multiple monitoring sources and loudspeaker arrays, the Trinnov D-Mon Series controllers enable stereo to 7.1-channel monitoring from both analog and digital I/Os using Ethernet- and/or MIDI-based communication protocols and a fast-switching matrix. An internal mixer creates various combinations of stems, main or aux mixes from discrete inputs. An Optimizer processor offers tuning of the loudspeaker array to match studio acoustics.

Unveiled at last year’s AES Convention in Paris, the Eventide H9000 multichannel/multi-element processing system has been under constant development during the past 12 months, with new functions targeted at film and TV post, including EQ, dynamics and reverb effects. DSP elements can be run in parallel or in series to create multiple, fully programmable channel strips per engine. Control plug-ins for Avid Pro Tools and other DAWs are being finalized, together with Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking.

Filmton, the German association for film sound professionals, explained to AES visitors its objective “to reinforce the importance of sound at an elemental level for the film community.” The association promotes the appreciation of film sound, together with the local film industry and its policy toward the public, while providing “an expert platform for technical, creative and legal issues.”

Philipp Sehling

Lawo demonstrated the new mc²96 Grand Audio production console, an IP-based networkable design for video post production, available with up to 200 on-surface faders. Innovative features include automatic gain control across multiple channels and miniature TFT color screens above each fader that display LiveView thumbnails of the incoming channel sources.

Stage Tec showed new processing features for its Crescendo Platinum TV post console, courtesy of v4.3 software, including an automixer based on gain sharing that can be used on every input channel, loudness metering to EBU R128 for sum and group channels, a de-esser on every channel path, and scene automation with individual user-adjustable blend curves and times for each channel.

Avid demonstrated native support for the new 7.1.2 Dolby Atmos channel-bed format — basically the familiar 7.1-channel bed with two height channels — for editorial suites and consumer remastering, plus several upgrades for Pro Tools, including new panning software for object-based audio and the ability to switch between automatable object and bus outputs. Pro Tools HD is said to be the only DAW natively supporting in-the-box Atmos mixing for this 10-channel 7.1.2 format. Full integration for Atmos workflows is now offered for control surfaces such as the Avid S6.
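The channel arithmetic behind the bed names can be made explicit; the speaker labels below are illustrative shorthand rather than Dolby's official nomenclature.

```python
# Nominal speaker layouts for the beds discussed above (labels are
# illustrative, not Dolby's official channel naming).
BEDS = {
    "7.1":   ["L", "R", "C", "LFE", "Lss", "Rss", "Lsr", "Rsr"],
    # 7.1.2 = the 7.1 bed plus a left/right pair of top (height) channels
    "7.1.2": ["L", "R", "C", "LFE", "Lss", "Rss", "Lsr", "Rsr",
              "Ltop", "Rtop"],
}
print(len(BEDS["7.1.2"]))  # 10 -- the "10-channel 7.1.2 format"
```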

Jon Schorah

There was a new update to Nugen Audio’s popular Halo Upmix plug-in for Pro Tools — in addition to stereo-to-5.1, 7.1 or 9.1 conversion, it is now capable of delivering 7.1.2-channel mixes for Dolby Atmos soundtracks.

A dedicated Dante Pavilion featured several manufacturers that offer network-capable products, including Solid State Logic, whose Tempest multi-path processing engine and router is now fully Audinate Dante-capable for T Series control surfaces with unique arbitration and ownership functions; Bosch RTS intercom systems featuring Dante connectivity with OCA system control; HEDD/Heinz Electrodynamic Designs, whose Series One monitor speakers feature both Dante and AES67/Ravenna ports; Focusrite, whose RedNet series of modular pre-amps and converters offer “enhanced reliability, security and selectivity” via Dante, according to product specialist for EMEA/Germany, Dankmar Klein; and NTP Technology’s DAD Series DX32R and RV32 Dante/MADI router bridges and control room monitor controllers, which are fully compatible with Dante-capable consoles and outboard systems, according to the firm’s business development manager Jan Lykke.

What’s Next For AES
The next European AES convention will be held in Milan during the spring of 2018. “The society is also planning a new format for the fall convention in New York,” said Moses, as the AES is now aligning with the National Association of Broadcasters. “Next January we will be holding a new type of event in Anaheim, California, to be titled AES @ NAMM.” Further details will be unveiled next month. He also explained that there will be no West Coast AES convention next year; instead, the AES will return to New York in the autumn of 2018 with another joint AES/NAB gathering at the Jacob K. Javits Convention Center.


Mel Lambert is an LA-based writer and photographer. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Creating sounds of science for Bill Nye: Science Guy

By Jennifer Walden

Bill Nye, the science hero of a generation of school children, has expanded his role in the science community over the years. His transformation from TV scientist to CEO of The Planetary Society (the world’s largest non-profit space advocacy group) is the subject of Bill Nye: Science Guy — a documentary directed by David Alvarado and Jason Sussberg.

The doc premiered in the US at the SXSW Film Festival and had its international premiere at the Hot Docs Canadian International Documentary Festival in Toronto.

Peter Albrechtsen – Credit: Povl Thomsen

Supervising sound editor/sound designer Peter Albrechtsen, MPSE, started working with directors Alvarado and Sussberg in 2013 on their first feature-length documentary The Immortalists. When they began shooting the Bill Nye documentary in 2015, Albrechtsen was able to see the rough cuts and started collecting sounds and ambiences for the film. “I love being part of projects very early on. I got to discuss some sonic and musical ideas with David and Jason. On documentaries, the actual sound design schedule isn’t typically very long. It’s great knowing the vibe of the film as early as I can so I can then be more focused during the sound editing process. I know what the movie needs and how I should prioritize my work. That was invaluable on a complicated, complex and multilayered movie like this one.”

Before diving in, Albrechtsen, dialogue editor Jacques Pedersen, sound effects editor Morten Groth Brandt and sound effects recordist/assistant sound designer Mikkel Nielsen met up for a jam session — as Albrechtsen calls it — to share the directors’ notes for sound and discuss their own ideas. “It’s a great way of getting us all on the same page and to really use everyone’s talents,” he says.

Albrechtsen and his Danish sound crew had less than seven weeks for sound editorial at Offscreen in Copenhagen. They divided their time evenly between dialogue editing and sound effects editing. During that time, Foley artist Heikki Kossi spent three days on Foley at H5 Film Sound in Kokkola, Finland.

Foley artist Heikki Kossi. Credit: Clas-Olav Slotte

Bill Nye: Science Guy mixes many different media sources — clips from Bill Nye’s TV shows from the ‘90s, YouTube videos, home videos on 8mm film, TV broadcasts from different eras, as well as the filmmakers’ own footage. It’s a potentially headache-inducing combination. “Some of the archival material was in quite bad shape, but my dialogue editor Jacques Pedersen is a magician with iZotope RX and he did a lot of healthy cleaning up of all the rough pieces and low-res stuff,” says Albrechtsen. “The 8mm videos actually didn’t have any sound, so Heikki Kossi did some Foley that helped it to come alive when we needed it to.”

Sound Design
Albrechtsen’s sound edit was also helped by the directors’ dedication to sound. They were able to acquire the original sound effects library from Bill Nye’s ‘90s TV show, making it easy for the post sound team to build out the show’s soundscape from stereo to surround, and also to make it funnier. “A lot of humor in the old TV show came from the imaginative soundtrack that was often quite cartoonish, exaggerated and hilariously funny,” he explains. “I’ve done sound for quite a few documentaries now and I’ve never tried adding so many cartoonish sound effects to a track. It made me laugh.”

The directors’ dedication goes even deeper, with director Sussberg handling the production sound himself when they’re out shooting. He records dialogue with both a boom mic and radio mics, and also records wild tracks of room tones and ambience. He even captures special sound signatures for specific locations when applicable.

For example, Nye visits the creationist theme park called Noah’s Ark, built by Christian fundamentalist Ken Ham. The indoor park features life-size dioramas and animatronics to explain creationism. There are lots of sound effects and demonstrations playing from multiple speaker setups. Sussberg recorded all of them, providing Albrechtsen with the means of creating an authentic sound collage.

“People might think we added lots of sounds for these sequences, but actually we just orchestrated what was already there,” says Albrechtsen. “At moments, it’s like a cacophony of noises, with corny dinosaur screams, savage human screams and violent war noises. When I heard the sounds from the theme park that David and Jason had recorded, I didn’t believe my own ears. It’s so extreme.”

Albrechtsen approaches his sound design with texture in mind. Not every sound needs to be clean. Adding texture, like crackling or hiss, can change the emotional impact of a sound. For example, while creating the sound design for the archival footage of several rocket launches, Albrechtsen pulled clean effects of rocket launches and explosions from Tonsturm’s “Massive Explosions” sound effects library and transferred those recordings to old NAGRA tape. “The special, warm, analogue distortion that this created fit perfectly with the old, dusty images.”
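A rough digital analogue of that treatment is to soft-clip a clean signal (standing in for tape saturation) and mix in low-level noise for hiss. This numpy sketch is purely illustrative, an assumption about how one might approximate the effect in the box; it is not the NAGRA transfer chain the team actually used.

```python
import numpy as np

def add_tape_texture(x, drive=3.0, hiss_level=0.01, seed=0):
    """Toy 'aged tape' treatment: tanh soft-clipping stands in for
    analogue saturation, and low-level Gaussian noise stands in for
    tape hiss. Illustrative only -- not the film's actual workflow."""
    rng = np.random.default_rng(seed)
    saturated = np.tanh(drive * x) / np.tanh(drive)  # normalized soft clip
    hiss = hiss_level * rng.standard_normal(len(x))
    return saturated + hiss

# A clean 60 Hz rumble, one second at 48 kHz, as a stand-in source.
clean = np.sin(2 * np.pi * 60 * np.linspace(0, 1, 48000))
textured = add_tape_texture(clean)
```

Raising `drive` pushes more harmonics into the signal, which is the "warm distortion" quality the passage describes.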

In one of Albrechtsen’s favorite sequences in the film, there’s a failure during launch and the rocket explodes. The camera falls over and the video glitches. He used different explosions panned around the room, and he panned several low-pitched booms directly to the subwoofer, using the Waves LoAir plug-in for added punch. “When the camera falls over, I panned explosions into the surrounds and as the glitches appear I used different distorted textures to enhance the images,” he says. “Pete Horner did an amazing job on mixing that sequence.”

For the emotional sequences, particularly those exploring Nye’s family history, and the genetic disorder passed down from Nye’s father to his two siblings, Albrechtsen chose to reduce the background sounds and let the Foley pull the audience in closer to Nye. “It’s amazing what just a small cloth rustle can do to get a feeling of being close to a person. Foley artist Heikki Kossi is a master at making these small sounds significant and precise, which is actually much more difficult than one would think.”

For example, during a scene in which Nye and his siblings visit a clinic, Albrechtsen deliberately chose harsh, atonal backgrounds that create an uncomfortable atmosphere. Then, as Nye shares his worries about the disease, Albrechtsen slowly takes the backgrounds out so that only the delicate Foley for Nye plays. “I love creating multilayered background ambiences and they really enhanced many moments in the film. When we removed these backgrounds for some of the more personal, subjective moments the effect was almost spellbinding. Sound is amazing, but silence is even better.”

Bill Nye: Science Guy has layers of material taking place in both the past and present, in outer space and in Nye’s private space, Albrechtsen notes. “I was thinking about how to make them merge more. I tried making many elements of the soundtrack fit more with each other.”

For instance, Nye’s brother has a huge model train railway set up. It’s a legacy from their childhood. So when Nye visits his childhood home, Albrechtsen plays the sound of a distant train. In the 8mm home movies, the Nye family is at the beach, so Albrechtsen’s sound design includes echoes of seagulls and waves. Later in the film, when Nye visits his sister’s home, Albrechtsen puts in distant seagulls and waves. “The movie is constantly jumping through different locations and time periods. This was a way of making the emotional storyline clearer and strengthening the overall flow. The sound makes the images more connected.”

One significant story point is Nye’s growing involvement with The Planetary Society. Before his death, Carl Sagan conceptualized a solar sail — a sail for use in space that could harness the sun’s energy and use it as a means of propulsion. The Planetary Society worked hard to actualize Sagan’s solar sail idea. Albrechtsen needed to give the solar sail a sound in the film. “How does something like that sound? Well, in the production sound you couldn’t really hear the solar sail and when it actually appeared it just sounded like boring, noisy cloth rustle. The light sail really needed an extraordinary, unique sound to make you understand the magnitude of it.”

So they recorded different kinds of materials, in particular a Mylar blanket, which has a glittery and reflective surface. Then Albrechtsen tried different pitches and panning of those recordings to create a sense of its extraordinary size.
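Pitching a small-object recording down is a common way to suggest enormous size: playing the material back slower lowers its pitch and stretches it in time. A crude resample-based version can be sketched in a few lines of numpy; this is illustrative only, an assumed stand-in rather than the film's actual toolchain.

```python
import numpy as np

def pitch_down(x, semitones, ):
    """Crude pitch/speed shift by linear-interpolation resampling.
    Lowering the pitch also lengthens the sound, which helps a small
    crinkling prop read as a vast structure. Illustrative sketch only."""
    ratio = 2.0 ** (-semitones / 12.0)          # < 1 lowers the pitch
    n_out = int(len(x) / ratio)                 # slower playback = longer
    old_idx = np.arange(len(x))
    new_idx = np.linspace(0, len(x) - 1, n_out)
    return np.interp(new_idx, old_idx, x)

# Noise as a stand-in for one second (48 kHz) of Mylar-blanket crinkle.
crinkle = np.random.default_rng(1).standard_normal(48000)
huge = pitch_down(crinkle, semitones=12)        # an octave down
print(len(huge))  # 96000 -- twice as long as the source
```

In practice a sound designer would use a dedicated pitch-shifter (which can change pitch without changing duration), but the resampling version captures the size-scaling intuition.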

While they handled post sound editorial in Denmark, the directors were busy cutting the film stateside with picture editor Annu Lilja. When working over long distances, Albrechtsen likes to send lots of QuickTimes with stereo downmixes so the directors can hear what’s happening. “For this film, I sent a handful of sound sketches to David and Jason while they were busy finishing the picture editing,” he explains. “Since we’ve done several projects together we know each other very well. David and Jason totally trust me and I know that they like their soundtracks to be very detailed, dynamic and playful. They want the sound to be an integral part of the storytelling and are open to any input. For this movie, they even did a few picture recuts because of some sound ideas I had.”

The Mix
For the two-week final mix, Albrechtsen joined re-recording mixer Pete Horner at Skywalker Sound in Marin County, California. Horner started mixing on the John Waters stage — a small mix room featuring a 5.1 setup of Meyer Sound’s Acheron speakers and an Avid ICON D-Command control surface, while Albrechtsen finished the sound design and premixed the effects against William Ryan Fritch’s score in a separate editing suite. Then Albrechtsen sat with Horner for another week, as Horner crafted the final 5.1 mix.

One of Horner’s mix challenges was to keep the dialogue paramount while still pushing the layered soundscapes that help tell the story. Horner says, “Peter [Albrechtsen] provided a wealth of sounds to work with, which in the spirit of the original Bill Nye show were very playful. But this, of course, presented a challenge because there were so many sounds competing for attention. I would say this is a problem that most documentaries would be envious of, and I certainly appreciated it.”

Once they had the effects playing along with the dialogue and music, Horner and Albrechtsen worked together to decide which sounds were contributing the most and which were distracting from the story. “The result is a wonderfully rich, sometimes manic track,” says Horner.

Albrechtsen adds, “On a busy movie like this, it’s really in the mix where everything comes together. Pete [Horner] is a truly brilliant mixer and has the same musical approach to sound as me. He is an amazing listener. The whole soundtrack — both sound and score — should really be like one piece of music, with ebbs and flows, peaks and valleys.”

Horner explains their musical approach to mixing as “the understanding that the entire palette of sound coming through the faders can be shaped in a way that elicits an emotional response in the audience. Music is obviously musical, but sound effects are also very musical since they are made up of pitches and rhythmic sounds as well. I’ve come to feel that dialogue is also musical — the person speaking is embedding their own emotions into the way they speak using both pitch (inflection or emphasis) and rhythm (pace and pauses).”

“I’ll go even further to say that the way the images are cut by the picture editor is inherently musical. The pace of the cuts suggests rhythm and tempo, and a ‘hard cut’ can feel like a strong downbeat, as emotionally rich as any orchestral stab. So I think a musical approach to mixing is simply internalizing the ‘music’ that is already being communicated by the composer, the sound designer, the picture editor and the characters on the screen, and with the guidance of the director shaping the palette of available sounds to communicate the appropriate complexity of emotion,” says Horner.

In the mix, Horner embraces the documentary’s intention of expressing the duality of Nye’s life: his celebrity versus his private life. He gives the example of the film’s opening, which starts with sounds of a crowd gathering to see Nye. Then it cuts to Nye backstage as he’s preparing for his performance by quietly tying his bowtie in a mirror. “Here the exceptional Foley work of Heikki Kossi creates the sense of a private, intimate moment, contrasting with the voice of the announcer, which I treated as if it’s happening through the wall in a distant auditorium.”

Next it cuts to that announcer, and his voice is clearly amplified and echoing all around the auditorium of excited fans. There’s an interview with a fan and his friends who are waiting to take their seats. The fan describes his experience of watching Nye’s TV show in the classroom as a kid and how they’d all chant “Bill, Bill, Bill” as the TV cart rolled in. Underneath it plays the sound of the auditorium crowd chanting “Bill, Bill, Bill” as the picture cuts to Nye waiting in the wings.

Horner says, “Again, the Foley here keeps us close to Bill while the crowd chants are in deep echo. Then the TV show theme kicks on, blasting through the PA. I embraced the distorted nature of the production recording and augmented it with hall echo and a liberal use of the subwoofer. The energy in this moment is at a peak as Bill takes the stage exclaiming, ‘I love you guys!’ and the title card comes on. This is a great example of how the scene was already cut to communicate the dichotomy within Bill, between his private life and his public persona. By recognizing that intention, the sound team was able to express that paradox more viscerally.”


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have just offered audio-for-VR services only, but instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but also determining which objects, via their sound cues, will lead the viewer into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.
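
To make Mackey’s point concrete: at the core of most object-based renderers is a panner that places a mono source in the listening field by its position. The sketch below is purely illustrative (it is not Hobo’s actual tooling) and shows the classic equal-power pan law for a single object’s azimuth:

```python
import math

def pan_object(sample: float, azimuth_deg: float) -> tuple[float, float]:
    """Equal-power pan of a mono sample to stereo by object azimuth.

    azimuth_deg: -90 (hard left) .. +90 (hard right), 0 = center.
    Illustrative only; real 360 renderers work in ambisonics or with
    many loudspeaker/HRTF channels, not a simple stereo pair.
    """
    # Map azimuth onto 0..pi/2 so that L^2 + R^2 stays constant,
    # i.e. perceived power does not change as the object moves.
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    left = sample * math.cos(theta)
    right = sample * math.sin(theta)
    return left, right

# A centered object splits power equally between the two channels.
l, r = pan_object(1.0, 0.0)
```

The equal-power (rather than equal-amplitude) law is what keeps an object from audibly dipping in level as it sweeps past the listener, which matters when a sound cue is meant to pull the viewer’s attention around the scene.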

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to take into account the extra time and instruction for your clients to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid Pro Tools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey

Behind the Title: Sounding Sweet audio producer/MD Ed Walker

NAME: Ed Walker

COMPANY: Sounding Sweet (@sounding_sweet)

CAN YOU DESCRIBE YOUR STUDIO?
We are a UK-based independent recording and audio production company with a recording studio in Stratford Upon Avon, Warwickshire, and separate postproduction facilities in Leamington Spa. Our recording studio is equipped with the latest technology, including a 7.1 surround sound dubbing suite and two purpose-built voiceover booths, which double as Foley studios and music recording spaces when necessary. We are also fully equipped to record ADR, via Source Connect and ISDN.

WHAT’S YOUR JOB TITLE?
Audio producer, sound engineer and managing director — take your pick.

WHAT DOES THAT ENTAIL?
As we are a small business, I am very hands-on, and my responsibilities change on a daily basis. They may include pitching to new clients, liaising with existing clients, overseeing projects from start to finish and ensuring our audio deliveries as a team are over and above what the client is expecting.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Creating and implementing interactive sound into video games is a technical challenge. While I don’t write code myself, as part of working in this industry, I have had to develop a technical understanding of game development and software programming in order to communicate effectively and achieve my audio vision.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I often get the opportunity to go out and record supercars and motorbikes, as well as occasionally recording celebrity voiceovers in the studio. We work with clients both locally and globally, often working across different time zones. We are definitely not a 9-to-5 business.

WHAT’S YOUR LEAST FAVORITE?
Working through the night during crunch periods is hard. However, we understand that the main audio effort is usually applied toward the end of a project, so we are kind of used to it.

WHAT’S YOUR FAVORITE TIME OF THE DAY?
I would have to say first thing in the morning. My studio is so close to home that I get to see my family before I go to work.

IF YOU DID NOT HAVE THIS JOB WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t producing audio I would have to be doing something equally creative. I need an outlet for my thoughts and emotions, perhaps video editing or creating visual effects.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I have always loved music, as both my parents are classically trained musicians. After trying to learn lots of different instruments, I realized that I had more of an affinity with sound recording. I studied “Popular Music and Recording” at university. Later on, I realized that a lot of the music recording skills I had learned were transferable to creating sound effects for computer games.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
– BMW 7 Series launch in Bahrain — sound design
– Jaguar F-Pace launch in Bahrain — sound design
– Forza Horizon 3 for Microsoft/Playground Games — audio design
– Guitar Hero Live for Activision — audio design

Forza Horizon 3 Lamborghini

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I worked as a sound designer at Codemasters for several years, and I have very fond memories of working on Dirt 2. It sounded awesome back in 2009 in surround sound on the Xbox 360! More recently, Sounding Sweet’s work for Playground Games on Forza Horizon 3 was a lot of fun, and I am very proud of what we achieved.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT?
A portable sound recorder, an iPhone and a kettle.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Facebook, LinkedIn and Twitter

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
All kinds of music — classics, reggae, rock, electronic, the Stones, Led Zeppelin… the list is truly endless.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
My wife is half Italian, so we often visit her “homeland” to see the family. This really is the time when I get to switch off.

Lime opens sound design division led by Michael Anastasi, Rohan Young

Santa Monica’s Lime Studios has launched a sound design division. LSD (Lime Sound Design), featuring newly signed sound designer Michael Anastasi and Lime sound designer/mixer Rohan Young, has already created sound design for national commercial campaigns.

“Having worked with Michael since his early days at Stimmung and then at Barking Owl, he was always putting out some of the best sound design work, a lot of which we were fortunate to be final mixing here at Lime,” says executive producer Susie Boyajan, who collaborates closely with Lime and LSD owner Bruce Horwitz and the other company partners — mixers Mark Meyuhas and Loren Silber. “Having Michael here provides us with an opportunity to be involved earlier in the creative process, and provides our clients with a more streamlined experience for their audio needs. Rohan and Michael were often competing for some of the same work, and share a huge client base between them, so it made sense for Lime to expand and create a new division centered around them.”

Boyajan points out that “all of the mixers at Lime have enjoyed the sound design aspect of their jobs, and are really talented at it, but having a new division with LSD that operates differently than our current, hourly sound design structure makes sense for the way the industry is continuing to change. We see it as a real advantage that we can offer clients both models.”

“I have always considered myself a sound designer that mixes,” notes Young. “It’s a different experience to be involved early on and try various things that bring the spot to life. I’ve worked closely with Michael for a long time. It became more and more apparent to both of us that we should be working together. Starting LSD became a no-brainer. Our now-shared resources, with the addition of a Foley stage and location audio recordists only make things better for both of us and even more so for our clients.”

Young explains that setting up LSD as its own sound design division, as opposed to bringing in Michael to sound design at Lime, allows clients to separate the mix from the sound design on their production if they choose.

Anastasi joins LSD from Barking Owl, where he spent the last seven years creating sound design for high-profile projects and building long-term creative collaborations with clients. Michael recalls his fortunate experiences recording sounds with John Fasal, and Foley sessions with John Roesch and Alyson Dee Moore as having taught him a great deal of his craft. “Foley is actually what got me to become a sound designer,” he explains.

Projects that Anastasi has worked on include the PSA on human trafficking called Hide and Seek, which won an AICP Award for Sound Design. He also provided sound design to the feature film Casa De Mi Padre, starring Will Ferrell, and was sound supervisor as well. For Nike’s Together project, featuring LeBron James, a two-minute black-and-white piece, Anastasi traveled back to LeBron’s hometown of Cleveland to record 500+ extras.

Lime is currently building new studios for LSD, featuring a team of sound recordists and a stand-alone Foley room. The LSD team is currently in the midst of a series of projects launching this spring, including commercial campaigns for Nike, Samsung, StubHub and Adobe.

Main Image: Michael Anastasi and Rohan Young.

The sound of John Wick: Chapter 2 — bigger and bolder

The director and audio team share their process.

By Jennifer Walden

To achieve the machine-like precision of assassin John Wick for director Chad Stahelski’s signature gun-fu-style action films, Keanu Reeves (Wick) goes through months of extensive martial arts and weapons training. The result is worth the effort. Wick is fast, efficient and thorough. You cannot fake his moves.

In John Wick: Chapter 2, Wick is still trying to retire from his career as a hitman, but he’s asked for one last kill. Bound by a blood oath, it’s a job Wick can’t refuse. Reluctantly, he goes to work, but by doing so, he’s dragged further into the assassin lifestyle he’s desperate to leave behind.

Chad Stahelski

Stahelski builds a visually and sonically engaging world on-screen, and then fills it full of meticulously placed bullet holes. His inspiration for John Wick comes from his experience as a stunt man and martial arts stunt coordinator for Lilly and Lana Wachowski on The Matrix films. “The Wachowskis are some of the best world creators in the film industry. Much of what I know about sound and lighting has to do with their perspective that every little bit helps define the world. You just can’t do it visually. It’s the sound and the look and the vibe — the combination is what grabs people.”

Before the script on John Wick: Chapter 2 was even locked, Stahelski brainstormed with supervising sound editor Mark Stoeckinger and composer Tyler Bates — alumni of the first Wick film — and cinematographer Dan Laustsen on how they could go deeper into Wick’s world this time around. “It was so collaborative and inspirational. Mark and his team talked about how to make it sound bigger and more unique; how to make this movie sound as big as we wanted it to look. This sound team was one of my favorite departments to work with. I’ve learned more from those guys about sound in these last two films than I thought I had learned in the last 15 years,” says Stahelski.

Supervising sound editor Stoeckinger, at the Formosa Group in West Hollywood, knows action films. Mission Impossible II and III, both Jack Reacher films, Iron Man 3, and the upcoming (April) The Fate of the Furious, are just a part of his film sound experience. Gun fights, car chases, punches and impacts — Stoeckinger knows that all those big sound effects in an action film can compete with the music and dialogue for space in a scene. “The more sound elements you have, the more delicate the balancing act is,” he explains. “The director wants his sounds to be big and bold. To achieve that, you want to have a low-frequency punch to the effects. Sometimes, the frequencies in the music can steal all that space.”

The Sound of Music
Composer Bates’s score was big and bold, with lots of percussion, bass and strong guitar chords that existed in the same frequency range as the gunshots, car engines and explosions. “Our composer is very good at creating a score that is individual to John Wick,” says Stahelski. “I listened to just the music, and it was great. I listened to just the sound design, and that was great. When we put them together we couldn’t understand what was going on. They overlapped that much.”

During the final mix at Formosa’s Stage B on The Lot, re-recording mixers Andy Koyama and Martyn Zub — who both mixed the first John Wick — along with Gabe Serrano, approached the fight sequences with effects leading the mix, since those needed to match the visuals. Then Koyama made adjustments to the music stems to give the sound effects more room.

“Andy made some great suggestions, like if we lowered the bass here then we can hear the effects punch more,” says Stahelski. “That gave us the idea to go back to our composers, to the music department and the music editor. We took it to the next level conceptually. We had Tyler [Bates] strip out a lot of the percussion and bass sounds. Mark realized we have so many gunshots, so why not use those as the percussion? The music was influenced by the amount of gunfire, sound design and the reverb that we put into the gunshots.”

Mark Stoeckinger

The music and sound departments collaborated through the last few weeks of the final mix. “It was a really neat, synergistic effect of the sound and music complementing each other. I was super happy with the final product,” says Stahelski.

Putting the Gun in Gun-Fu
As its name suggests, gun-fu involves a range of guns — handguns, shotguns and assault rifles. It was up to sound designer Alan Rankin to create a variety of distinct gun effects that not only sounded different from weapon to weapon but also differentiated between John Wick’s guns and the bad guys’ guns. To help Wick’s guns sound more powerful and complex than his foes’, Rankin added different layers of air, boom and mechanical effects. To distinguish one weapon from another, Rankin layered the sounds of several different guns together to make a unique sound.

The result is the type of gun sound that Stoeckinger likes to use on the John Wick films. “Even before this film officially started, Alan would present gun ideas. He’d say, ‘What do you think about this sound for the shotgun?’ Or, ‘How about this gun sound?’ We went back and forth many times, and once we started the film, he took it well beyond that.”

Rankin developed the sounds further by processing his effects with EQ and limiting to help the gunshots punch through the mix. “We knew we would inevitably have to turn the gunshots down in the mix due to conflicts with music or dialogue, or just because of the sheer quantity of shots needed for some of the scenes,” Rankin says.
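
The limiting Rankin mentions is worth unpacking: a limiter clamps a sound’s peaks so the whole shot can be pushed louder, and survive being turned down, without clipping. This is a deliberately minimal sketch (a real mix-stage limiter adds attack/release smoothing and look-ahead, which this omits):

```python
def limit(samples, threshold=0.8):
    """Hard-knee peak limiter: scale any sample above the threshold
    back down to it, leaving quieter material untouched.

    Simplified for illustration; production limiters smooth the gain
    changes over time so the reduction itself is inaudible.
    """
    out = []
    for s in samples:
        peak = abs(s)
        gain = threshold / peak if peak > threshold else 1.0
        out.append(s * gain)
    return out
```

Taming the peaks this way raises the average-to-peak ratio of each gunshot, which is what lets it still “punch” after being pulled down under music or dialogue.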

Each gun battle was designed entirely in post, since the guns on-screen weren’t shooting live rounds. Rankin spent months designing and evolving the weapons and bullet effects in the fight sequences. He says, “Occasionally there would be a production sound we could use to help sell the space, but for the most part it’s all a construct.”

There were unique hurdles for each fight scene, but Rankin feels the catacombs were the most challenging from a design standpoint, and Zub agrees in terms of mix. “In the catacombs there’s a rapid-fire sequence with lots of shots and ricochets, with body hits and head explosions. It’s all going on at the same time. You have to be delicate with each gunshot so that they don’t all sound the same. It can’t sound repetitive and boring. So that was pretty tricky.”

To keep the gunfire exciting, Zub played with the perspective, the dynamics and the sound layers to make each shot unique. “For example, a shotgun sound might be made up of eight different elements. So in any given 40-second sequence, you might have 40 gunshots. To keep them all from sounding the same, you go through each element of the shotgun sound and either turn some layers off, tune some of them differently or put different reverb on them. This gives each gunshot its own unique character. Doing that keeps the soundtrack more interesting and that helps to tell the story better,” says Zub. For reverb, he used the PhoenixVerb Surround Reverb plug-in to create reverbs in 7.1.
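
Zub’s approach can be sketched programmatically. The snippet below is illustrative only (the layer and reverb names are hypothetical, not from the film’s sessions): for each shot, some element layers are randomly muted, others retuned, and a different reverb chosen, so consecutive shots built from the same eight-or-so elements never sound identical:

```python
import random

def vary_shot(layers, rng):
    """Render one gunshot as a varied combination of its element layers.

    Per shot: occasionally drop a layer, retune the survivors slightly,
    and assign each a reverb, mirroring the variation Zub describes.
    """
    reverbs = ["room", "hall", "catacomb"]   # hypothetical reverb presets
    shot = []
    for name in layers:
        if rng.random() < 0.25:              # occasionally mute this layer
            continue
        detune = rng.uniform(-2.0, 2.0)      # semitones of retuning
        shot.append((name, round(detune, 2), rng.choice(reverbs)))
    return shot

rng = random.Random(7)
layers = ["crack", "boom", "mech", "air", "tail"]  # illustrative element names
a = vary_shot(layers, rng)
b = vary_shot(layers, rng)   # a second shot comes out different
```

Even this toy version shows why the technique works: with independent per-layer decisions, the space of possible shot variants is enormous, so a 40-shot sequence never repeats itself.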

Another challenge was the fight sequence at the museum. To score the first part of Wick’s fight, director Stahelski chose a classical selection from Vivaldi… but with a twist. Instead of relying solely on traditional percussion, “Mark’s team intermixed gunshots with the music,” notes Stahelski. “That is one of my favorite overall sound sequences.”

At the museum, there’s a multi-level mirrored room exhibit with moving walls. In there, Wick faces several opponents. “The mirror room battle was challenging because we had to represent the highly reflective space in which the gunshots were occurring,” explains Rankin. “Martyn [Zub] was really diligent about keeping the sounds tight and contained so the audience doesn’t get worn out from the massive volume of gunshots involved.”

Their goal was to make as much distinction as possible between the gunshot and the bullet impact sounds since visually there were only a few frames between the two. “There was lots of tweaking the sync of those sounds in order to make sure we got the necessary visceral result that the director was looking for,” says Rankin.

Stahelski adds, “The mirror room has great design work. The moment a gun fires, it just echoes through the whole space. As you change the guns, you change the reverb and change the echo in there. I really dug that.”

On the dialogue side, the mirror room offered Koyama an opportunity to play with the placement of the voices. “You might be looking at somebody, but because it’s just a reflection, Andy has their voice coming from a different place in the theater,” Stoeckinger explains. “It’s disorienting, which is what it is supposed to be. The visuals inspired what the sound does. The location design — how they shot it and cut it — that let us play with sound.”

The Manhattan Bridge
Koyama’s biggest challenge on dialogue was during a scene where Laurence Fishburne’s character The Bowery King is talking to Wick while they’re standing on a rooftop near the busy Manhattan Bridge. Koyama used iZotope RX 5 to help clean up the traffic noise. “The dialogue was very difficult to understand and Laurence was not available for ADR, so we had to save it. With some magic we managed to save it, and it actually sounds really great in the film.”

Once Koyama cleaned the production dialogue, Stoeckinger was able to create an unsettling atmosphere there by weaving tonal sound elements with a “traffic on a bridge” roar. “For me personally, building weird spaces is fun because it’s less literal,” says Stoeckinger.

Stahelski strives for a detailed and deep world in his John Wick films. He chooses Stoeckinger to lead his sound team because Stoeckinger’s “work is incredibly immersive, incredibly detailed,” says the director. “The depths that he goes, even if it is just a single sound or tone or atmosphere, Mark has a way to penetrate the visuals. I think his work stands out so far above most other sound design teams. I love my sound department and I couldn’t be happier with them.”


Jennifer Walden is a New Jersey-based writer and audio engineer.

Quick Chat: Scott Gershin from The Sound Lab at Technicolor

By Randi Altman

Veteran sound designer and feature film supervising sound editor Scott Gershin is leading the charge at the recently launched The Sound Lab at Technicolor, which, in addition to film and television work, focuses on immersive storytelling.

Gershin has more than 100 films to his credit, including American Beauty (which earned him a BAFTA nomination), Guillermo del Toro’s Pacific Rim and Dan Gilroy’s Nightcrawler. But films aren’t the only genre that Gershin has tackled — in addition to television work (he has an Emmy nom for the TV series Beauty and the Beast), this audio post pro has created the sound for game titles such as Resident Evil, Gears of War and Fable. One of his most recent projects was contributing to id Software’s Doom.

We recently reached out to Gershin to find out more about his workflow and this new Burbank-based audio entity.

Can you talk about what makes this facility different than what Technicolor has at Paramount? 
The Sound Lab at Technicolor works in concert with our other audio facilities, tackling film, broadcast and gaming projects. In doing so we are able to use Technicolor’s world-class dubbing, ADR and Foley stages.

One of the focuses of The Sound Lab is to identify and use cutting-edge technologies and workflows not only in traditional mediums, but in those new forms of entertainment such as VR, AR, 360 video/films, as well as dedicated installations using mixed reality. The Sound Lab at Technicolor is made up of audio artists from multiple industries who create a “brain trust” for our clients.

Scott Gershin and The Sound Lab team.

As an audio industry veteran, how has the world changed since you started?
I was one of the first sound people to use computers in the film industry. When I moved from the music industry into film post production, I brought that knowledge and experience with me. It gave me access to a huge number of tools that helped me tell better stories with audio. The same happened when I expanded into the game industry.

Learning the interactive tools of gaming is now helping me navigate into these new immersive industries, combining my film experience to tell stories and my gaming experience using new technologies to create interactive experiences.

One of the biggest changes I’ve seen is that there are so many opportunities for the audience to ingest entertainment — creating competition for their time — whether it’s traveling to a theatre, watching TV (broadcast, cable and streaming) on a new 60- or 70-inch TV, or playing video games alone on a phone or with friends on a console.

There are so many choices, which means that the creators and publishers of content have to share a smaller piece of the pie. This forces budgets to be smaller since the potential audience size is smaller for that specific project. We need to be smarter with the time that we have on projects and we need to use the technology to help speed up certain processes — allowing us more time to be creative.

Can you talk about your favorite tools?
There are so many great technologies out there. Each one adds a different color to my work and provides me with information that is crucial to my sound design and mix. For example, Nugen has great metering and loudness tools that help me zero in on my clients’ LKFS requirements. With each client having their own loudness requirements, the tools allow me to stay creative and meet their requirements.
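
For readers unfamiliar with LKFS: delivery specs typically demand an integrated program loudness near a target such as -24, within a small tolerance. The full ITU-R BS.1770 measurement adds K-weighting filters and gating; the sketch below shows only the basic mean-square measurement such meters are built on, as a simplified illustration (not Nugen’s implementation):

```python
import math

def rms_loudness_db(samples):
    """Very simplified program-loudness estimate in dB relative to full scale.

    Real LKFS metering (ITU-R BS.1770) applies K-weighting and gating
    before this mean-square step; both are omitted here for clarity.
    """
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_sq)

def meets_spec(samples, target_db=-24.0, tolerance_db=2.0):
    """Check a mix against a delivery target, e.g. -24 with a +/-2 window."""
    return abs(rms_loudness_db(samples) - target_db) <= tolerance_db
```

A meter like this is what lets a mixer work creatively and still know, at a glance, whether the delivery will pass a broadcaster’s loudness QC.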

Audi’s The Duel

What are some recent projects you’ve worked on?
I’ve been working on a huge variety of projects lately. Recently, I finished a commercial for Audi called The Duel, a VR piece called My Brother’s Keeper, 10 Webisodes of The Strain and a VR music piece for Pentatonix. Each one had a different requirement.

What is your typical workflow like?
When I get a job in, I look at what the project is trying to accomplish. What is the story or the experience about? I ask myself, how can I use my craft, shaping audio, to better enhance the experience? Once I understand how I am going to approach the project creatively, I look at what the release platform will be. What are the technical challenges, and what frequencies and spatial options are open to me? Whether that means a film in Dolby Atmos or a VR project on the Rift. Once I understand both the creative and technical challenges, then I start working within the schedule allotted me.

Speed and flow are essential… the tools need to be like musical instruments to me, where it goes from brain to fingers. I have a bunch of monitors in front of me, each one supplying me with different and crucial information. It’s one of my favorite places to be — flying the audio starship and exploring the never-ending vista of the imagination. (Yeah, I know it’s corny, but I love what I do!)

The A-List: The sound of La La Land

By Jennifer Walden

Director/writer Damien Chazelle’s musical La La Land has landed an incredible 14 Oscar nominations — not to mention fresh BAFTA wins for Best Film, Best Cinematography, Original Music and Best Leading Actress, in addition to many, many other accolades.

The story follows aspiring actress Mia (Emma Stone) who meets the talented-but-struggling jazz pianist Sebastian (Ryan Gosling) at a dinner club, where he’s just been fired from his gig of plinking out classic Christmas tunes for indifferent diners. Mia throws out a compliment as Sebastian approaches, but he just breezes right past, ignoring her completely. Their paths cross again at a Los Angeles pool party, and this time Mia makes a lasting impression on Sebastian. They eventually fall in love, but their life together is complicated by the realities of making their own dreams happen.

Sounds of the City
La La Land is a love story but it’s also a love letter to Los Angeles, says supervising sound editor Ai-Ling Lee, who shares an Oscar nomination for Best Sound Editing on the film with co-supervising sound editor Mildred Iatrou Morgan. One of Chazelle’s initial directives was to have the cityscape sound active and full of life. “He gave me film references, like Boogie Nights and Mean Streets, even though the latter was a New York film. He liked the amount of sound coming out from the city, but wanted a more romantic approach to the soundscape on La La Land. He likes the idea of the city always being bustling,” says Lee.

Mildred Iatrou Morgan and Ai-Ling Lee. Photo Credit: Jeffrey Harlacker

In addition to La La Land’s musical numbers, director Chazelle wanted to add musical moments throughout the film, some obvious, like the car radios in the opening traffic jam, and some more subtle. Lee explains, “You always hear music coming from different sources in the city, like music coming out of a car going by or mariachi music coming from down the hallway of Sebastian’s apartment building.” The culturally diverse incidental music, traffic sounds, helicopters, and local LA birds, like mourning doves, populate the city soundscape and create a distinct Los Angeles vibe.

For Lee’s sound editorial and sound design, she worked in a suite at EPS-Cineworks in Burbank — the same facility where the picture editor and composer were working. “Damien and Tom Cross [film editor] were cutting the picture there, and Justin Hurwitz the composer was right next door to them, and I was right across the hall from them. It was a very collaborative environment so it was easy to bring someone over to review a scene or sounds. I could pop over there to see them if I had any questions,” says Lee, who was able to design sound against the final music tracks. That was key to helping those two sound elements gel into one cohesive soundtrack.

Bursting Into Song
Director Chazelle’s other initial concern for sound was the music, particularly how the spoken dialogue would transition into the studio-recorded songs. That’s where supervising sound editor Morgan got to flex her dialogue editing muscles. “Milly [Morgan] knows this style of ADR, having worked on musicals before,” says Lee. “Damien wanted the dialogue to seamlessly transition into a musical moment. He didn’t want it to feel like suddenly we’re playing a pre-recorded song. He liked to have things sound more natural, with realistic grounded sounds, to help blend the music into the scene,” says Lee.

To achieve a smooth dialogue transition, Morgan recorded ADR for every line that led into a song to ensure she had a good transition between production dialogue and studio recorded dialogue, which would transition more cleanly into the studio-recorded music. “I cued that way for La La Land, but I ended up not having to use a lot of that. The studio recorded vocals and the production sound were beautifully recorded using the same mics in both cases. They were matching very well and I was able to go with the more emotional, natural sounding songs that were sung on-set in some cases,” says Morgan, who worked from her suite at 20th Century Fox studios along with ADR editor Galen Goodpaster.

Mia’s audition song, “The Fools Who Dream,” was one track that Morgan and the director were most concerned about. As Mia gives her impromptu audition she goes from speaking softly to suddenly singing, and then she starts singing louder. That would have been difficult to recreate in post because her performance on-set — captured by production mixer Steven Morrow — was so beautiful and emotional. The trouble was there were creaking noises on the track. Morgan explains, “As Mia starts singing, the camera moves in on her. It moves through the office and through the desk. It was a breakaway desk and they broke it apart so that the camera could move through it. That created all the creaking I heard on the track.”

Morgan was able to save the live performance by editing in clean ambience between words and finding alternate takes that weren’t ruined by the creaking noise. She used Elastic Audio inside Pro Tools, as well as the Pro Tools TCE (time compression/expansion) tool, to help tweak the alt takes into place. “I had to go through all of the outtakes, word by word, syllable by syllable, and find ones that fit in with the singing, didn’t have creaks on them… and fit in terms of sync. It was very painstaking. It took me a couple of days, but it was so worth it because that was a really important moment in the movie,” says Morgan.

Reality Steps In
Not all on-set song performances could be used in the final track, so putting the pre-recorded songs in the space helped to make the transition into musical moments feel more realistic. Precisely crafted backgrounds, made with sounds that fit the tone of the impending song, gradually step aside as the music takes over. But not all of the real-world sounds go away completely. Foley helped to ground a song into the reality on screen by marrying it to the space. For example, Mia’s roommates invite her to a party in a song called “Someone in the Crowd.” Diegetic sounds, such as the hairdryer, the paper fan flicking open, occasional footsteps, and clothing rustles helped the pre-recorded song fit naturally into the scene. Additionally, Morgan notes that production mixer Morrow “did an excellent job of miking the actors with body mics and boom mics, even during the musical numbers that were sung to playback, like ‘Someone in the Crowd,’ just in case there was something to capture that we could use. There were a couple of little vocalizations that we were able to use in the number.”

Foley also played a significant role in the tap dance song “A Lovely Night.” Originally performed as a soft shoe dance number, director Chazelle decided to change it to a tap dance number in post. Lee reveals, “We couldn’t use the production sound since there was music playback in the scene for the actors to perform to. So, we had to fully recreate everything with the sound. Damien had a great idea to try to replace the soft shoe sound with tap shoes. It was an excellent idea because the tap sound plays so much better with the dance music than the soft shoe sound does.”

Lee enlisted Mandy Moore, the dance choreographer on the film, and several dancers to re-record the Foley for that scene. Working with Foley artist Dan O’Connell of One Step Up, on the Jane Russell Foley Stage at 20th Century Fox Studios, they tried various weights of tap shoes on different floor surfaces before narrowing it down to the classic “Fred and Ginger” sound that Chazelle was looking for. “Even though they are dancing on asphalt, we ended up using a wooden floor surface on the Foley stage. Damien was very precise about playing up a step here and playing up a scuff there, because it plays better against the music. It was really important to have the taps done to the rhythm of the song as opposed to being in sync with the picture. It fools your brain. Once you have everything in rhythm with the music, the rest flows like butter,” says Lee. She cut the tap dance Foley to picture according to Chazelle’s tastes, and then invited Moore to listen to the mix to make sure that the tap dance routine was realistic from a dancer’s point of view.

Inside the Design
One of Lee’s favorite scenes to design was the opening sequence of the film, which starts with the sound of a traffic jam on a Los Angeles freeway. The sound begins in mono with a long horn honk over a black and white Cinemascope logo. As the picture widens and the logo transitions into color, Lee widens the horn honk into stereo and then into the surrounds. From that, the sound builds to a few horns and cars idling. Morgan recorded a radio announcer to establish the location as Los Angeles. The 1812 Overture plays through a car radio, and the sound becomes futzed as the camera pans to the next car in the traffic jam. With each car the camera passes the radio station changes. “This is Los Angeles and it is a mixed cultural city. Damien wanted to make sure there was a wide variety of music styles, so Justin [Hurwitz] gave me a bunch of different music choices, an eclectic selection to choose from,” says Lee. She added radio tuning sounds, car idling sounds, and Foley of tapping on the steering wheel to ground the scene in reality. “We made sure that the sound builds but doesn’t overpower the first musical number. The first trumpet hit comes through this traffic soundscape, and gradually the real city sounds give way to the first song, ‘Another Day of Sun.’”
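The mono-to-stereo widening Lee describes is, at its core, a panning move. As a generic illustration only (not the film's actual mix technique), an equal-power pan keeps total perceived loudness constant as a mono source spreads across the stereo field:

```python
import math

def equal_power_pan(sample, pos):
    """Equal-power pan: pos 0.0 = hard left, 1.0 = hard right.
    Automating pos over time is one simple way to 'widen' a mono
    source across a stereo field; a -3 dB center keeps total
    acoustic power constant at every pan position."""
    theta = pos * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)

l, r = equal_power_pan(1.0, 0.5)  # centered
print(round(l, 3), round(r, 3))   # 0.707 0.707 -- both channels at -3 dB
print(round(l * l + r * r, 3))    # 1.0 -- total power is preserved
```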

One scene that stood out for Morgan was after Mia’s play, when she’s in her dressing room feeling sad that the theater was mostly empty for her performance. Not even Sebastian showed up. As she’s sitting there, we hear two men from the audience disparaging her and her play. Initially, Chazelle and his assistant recorded a scratch track for that off-stage exchange, but he asked Morgan to re-record it with actors. “He wanted it to sound very naturalistic, so we spent some time finding just the right actors who didn’t sound like actors. They sound like regular people,” says Morgan.

She had the actors improvise their lines on why they hated the play, how superficial it was and how pretentious it was. Following some instruction from Chazelle, they cut the scene together. “We screened it and it was too mean, so we had to tone it back a little,” shares Morgan. “That was fun because I don’t always get to do that, to create an ADR scene from scratch. Damien is meticulous. He knows what he wants and he knows what he doesn’t want. But in this case, he didn’t know exactly what they should say. He had an idea. So I did my version and he gave me ideas and it went back and forth. That was a big challenge for me but a very enjoyable one.”

The Mix
In addition to sound editing, Lee also mixed the final soundtrack with re-recording mixer Andy Nelson at Fox Studios in Los Angeles. She and Nelson share an Oscar nomination for Best Sound Mixing on La La Land. Lee says, “Andy and I had made a film together before, called Wild, directed by Jean-Marc Vallée. So it made sense for me to do both the sound design and to mix the effects. Andy mixed the music and dialogue. And Jason Ruder was the music editor.”

From design to mix, Chazelle’s goal was to have La La Land sound natural — as though it was completely natural for these people to burst into song as they went through their lives. “He wanted to make sure it sounded fluid. With all the work we did, we wanted to make the film sound natural. The sound editing isn’t in your face. When you watch the movie as a whole, it should feel seamless. The sound shouldn’t take you out of the experience and the music shouldn’t stand apart from the sound. The music shouldn’t sound like a studio recording,” concludes Lee. “That was what we were trying to achieve, this invisible interaction of music and sound that ultimately serves the experience.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Alvaro Rodríguez

Behind the Title: Histeria Music’s chief audio engineer Alvaro Rodríguez

NAME: Alvaro Rodríguez

COMPANY: Histeria Music (@histeriamusic)

CAN YOU DESCRIBE YOUR COMPANY?
Miami’s Histeria Music is a music production and audio post company. Since its founding in 2003, we have focused on supporting our clients’ communication needs with powerful music and sound that convey a strong message and create a bond with the audience. We offer full audio post production, music production and sound design services for advertising, film, TV, radio, video games and the corporate world.

WHAT’S YOUR JOB TITLE?
CEO/Chief Audio Engineer

WHAT DOES THAT ENTAIL?
As an audio post engineer, I work on 5.1 and stereo mixing, ADR and voiceover recordings, voiceover castings and talent direction, music search and editing, dialogue cleanup, remote recording via ISDN and/or Source Connect and sound design.

Studio A


As the owner and founder of the studio, I take care of a ton of things. I make sure our final productions are of the highest quality possible, and handle client services, PR, bookkeeping, social media and marketing. Sometimes it’s a bit overwhelming but I wouldn’t trade it for anything else!

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Some people might think that I just sit behind a console pushing buttons, trying to make things sound pretty. In reality, I do much more than that. I advise creatives and copywriters on script changes that might better fit whatever project we are recording. I also direct talent using creative vocabulary to ensure that their delivery is right and their performance hits the emotion we are trying to achieve. I get to sound design, edit and move audio clips around in my DAW, almost as if I were composing a piece of music, adding my own sound to the creative process.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Sound design! I love it when I get a video from any of our clients that has no sound whatsoever, not even a scratch recording of a voiceover. This gives me the opportunity to add my signature sound and be as creative as possible and help tell a story. I also love working on radio spots. Since there is no video to support the audio, I usually get to be a bigger part of the creative process once we start putting together the spots. Everything from the way the talent is recorded to the sounds and the way phrases and words are edited together is something I’ll never get tired of doing.

WHAT’S YOUR LEAST FAVORITE?
Sales. It’s tricky because as the owner when you succeed, it’s the best feeling in the world, but it can be very frustrating and overwhelming sometimes.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
During work, it has to be that moment you get the email saying the spots have been approved and are ready for traffic. On a personal level, it’s when I take my nine-year-old to soccer practice, usually around 6pm.

Studio B

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Wow, I have no idea how to answer this question. I can’t see myself doing anything else, really, although I’ll add that I am an avid home brewer and enjoy the craft quite a bit.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Ever since I was a kid, I had a fascination with things that make sounds. I was always drawn to a guitar, or simply buckets I could smack to make some sort of rhythmic pattern. After high school, I went to college and started studying business administration, only to follow in my dad and brother’s footsteps. To no one’s surprise, I quit after the second semester and ended up doing a bit of soul searching. Long story short, I ended up attending Full Sail University, where I graduated from the Recording Arts program back in 2000.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This year started with a great and fun project for us: we are recording ADR for the Netflix series Bloodline. We are also currently working on the audio post and film scoring for a short film called Andante, based on a story by Argentinian author Julio Cortázar.

Also worth mentioning: we recently concluded the audio post for seasons one and two of the MTV show Ridículos, the Spanish- and Portuguese-language adaptation of the original English version of Ridiculousness, which currently airs in Latin America and Brazil.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The first project I ever did for the advertising industry. I was 23 and a recent graduate of Full Sail. All the stars and planets aligned and a campaign for Budweiser — both for the general and US Hispanic markets — landed in my lap. This came from Del Rivero Messianu DDB (currently known as ALMA DDB, Ad Age’s 2017 multicultural agency of the year).

I was living with my parents at the time and had a small home studio in the garage. No Pro Tools, no Digi Beta, just good old Cool Edit and a VHS player (yes, I manually pressed play on the VHS and in Cool Edit to sync my music to picture). Long story short, I ended up writing and producing the music for that TV spot, which inevitably led to me opening the doors of Histeria Music to the public in 2003.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iZotope’s RX Post Production Suite, Telos Zephyr Xstream ISDN box and Source Connect. I also use the FabFilter Pro-Q 2 quite a bit.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Facebook, Twitter and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I live in Miami and the beach is my backyard, so I find myself relaxing for hours at the beach on weekends. I love to spend time with my family during my son’s soccer practices and games. When I am really stressed and need to be alone, I tend to brew some crafty beers at home. Great hobby!

Netflix's Stranger Things

AES LA Section & SMPTE Hollywood: Stranger Things sound

By Mel Lambert

The most recent joint AES/SMPTE meeting at the Sportsmen’s Lodge in Studio City showcased the talents of the post production crew that worked on the recent Netflix series Stranger Things at Technicolor’s facilities in Hollywood.

Over 160 attendees came to hear how supervising sound editor Brad North, sound designer Craig Henighan, sound effects editor Jordan Wilby, music editor David Klotz and dialog/music re-recording mixer Joe Barnett worked their magic on last year’s eight-episode Season One. (Sadly, effects re-recording mixer Adam Jenkins was unable to attend the gathering.) Stranger Things, from co-creators Matt Duffer and Ross Duffer, is scheduled to return mid-year for Season Two.

L-R: Jordan Wilby, Brad North, Craig Henighan, Joe Barnett, David Klotz and Mel Lambert. Photo Credit: Steve Harvey.

Attendees heard how the crew developed each show’s unique 5.1-channel soundtrack, from editorial through re-recording — including an ‘80s-style, synth-based music score, from Austin-based composers Kyle Dixon and Michael Stein, that is key to the show’s look and feel — courtesy of a full-range surround sound playback system supplied by Dolby Labs.

“We drew our inspiration — subconsciously, at least — from sci-fi films like Alien, The Thing and Predator,” Henighan explained. The designer also revealed how he developed a characteristic sound for the monster that appears in key scenes. “The basic sound is that of a seal,” he said. “But it wasn’t as simple as just using a seal vocal, although it did provide a hook — an identifiable sound around which I could center the rest of the monster sounds. It’s fantastic to take what is normally known as a nice, light, fun-loving sound and use it in a terrifying way!” Tim Prebble, a New Zealand-based sound designer and owner of sound effects company Hiss and A Roar, offers a range of libraries, including SD003 Seal Vocals.

Gear used includes Avid Pro Tools DAWs — everybody works in the box — and an Avid 64-fader, dual-operator S6 console at the Technicolor Seward Stage. The composers use Apple Logic Pro to record and edit their AAF-format music files.


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

 

Jon Hamm

Audio post for Jon Hamm’s H&R Block spots goes to Eleven

If you watch broadcast television at all, you’ve likely seen the ubiquitous H&R Block spots featuring actor Jon Hamm of Mad Men fame. The campaign out of Fallon Worldwide features eight spots — all take place either on a film set or a studio backlot, and all feature Hamm in costume for a part. Whether he’s breaking character dressed in traditional Roman garb to talk about how H&R Block can help with your taxes, or chatting up a zombie during a lunch break, he’s handsome, funny and on point: use H&R Block for your tax needs. Simon McQuoid from Imperial Woodpecker directed.


Jeff Payne

The campaign’s audio post was completed at Eleven in Santa Monica. Eleven founder Jeff Payne worked on the spots. “As well as mixing, I created sound design for all of the spots. The objective was to make the sound design feel very realistic and to enhance the scenes in a natural way, rather than a sound design way. For example, on the spot titled Donuts the scene was set on a studio back lot with a lot of extras moving around, so it was important to create that feel without distracting from the dialogue, which was very subtle and quiet. On the spot titled Switch, there was a very energetic music track and fast cutting scenes, but again it needed support with realistic sounds that gave all the scenes more movement.”

Payne says the major challenge for all the spots was to make the dialogue feel seamless. “There were many different angle shots with different microphones that needed to be evened out so that the dialogue sounded smooth.”

In terms of tools, all editing and mixing was done with Avid’s Pro Tools HDX system and S6 console. Sound design was done through Soundminer software.

Jordan Meltzer was assistant mixer on the campaign, and Melissa Elston executive produced for Eleven. Arcade provided the edit, Timber the VFX and post and color was via MPC.

Behind the Title: Stir Post Audio sound designer/mixer Nick Bozzone

NAME: Nick Bozzone

COMPANY: Chicago’s Stir Post Audio (@STIRpost)

DESCRIBE YOUR COMPANY:
Stir Post Audio is made up of engineers, mixers, sound designers and producers who transform audio mixes into what we call “sonic power shots.”

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
As a post sound professional, there are many different disciplines of audio that I use on a day-to-day basis — voiceover recording/mic techniques (ADR included), creative sound designing, voiceover and music editing, 5.1 and stereo broadcast (LKFS) mixing, as well as providing a positive (and fun) voice in the room.
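The LKFS mixing Bozzone mentions refers to broadcast loudness measurement per ITU-R BS.1770. As a rough, simplified sketch of the underlying math only — the real spec adds K-weighting filters, 400 ms gated blocks and a -0.691 dB offset, all omitted here — loudness boils down to the log of a mean-square level:

```python
import math

def simple_loudness_db(samples):
    """Rough loudness estimate: 10*log10 of the mean square of the
    samples. Real LKFS (ITU-R BS.1770) also applies K-weighting,
    gating and a -0.691 dB offset; this sketch omits all of those."""
    ms = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(ms)

# A full-scale 1 kHz sine at 48 kHz has a mean square of 0.5,
# so it measures about -3.01 dB relative to full scale.
sr = 48000
sine = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
print(round(simple_loudness_db(sine), 2))  # -3.01
```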

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The term sound designer encompasses more than simply spotting stock sound effects to picture; it’s an opportunity to be as creative as my mind allows. It’s a chance to create a sonic signature — a signature that, most of the time, is associated with the product itself. I have been very fortunate throughout my career so far to have worked on these types of commercial campaigns and short films… projects that have allowed me to stretch my sonic imagination.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is when it’s time to mix. Mixing can be just as creative as sound design, if not more so. There are a lot of technical aspects to mixing heavy-hitting commercials. Most of the time there are many very dynamic elements going on at the same time. The finesse of a great mix is the ability to take all of these things, bring them together and have each one sitting in its own spot.

WHAT’S YOUR LEAST FAVORITE?
It may be my least favorite part, but it’s a necessary evil… archiving!

WHAT’S YOUR FAVORITE TIME OF THE DAY?
During work, it’s when the whole room gives my mix a thumbs up. During the weekend, it’s definitely around sunset. For whatever reason, no matter how tired I am, around sunset is when my body kicks into its second wind and I become a night owl (or at least I used to be one before my daughter was born five months ago).

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
“If you love what you do, you’ll never work a day in your life.” That was told to me when I entered college, and I took that quote to heart. Originally, I thought that I wanted to be a creative writer and then I had an interest in being a hypnotherapist. Both were interesting to me, but neither one was holding my interest for very long. Thankfully, I took an introductory class in Pro Tools. That one class showed me that there could be a future in sound. You never know where you’ll get your inspiration.

Nick creating sounds for Mist Twst.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Many projects that come through our doors require quite a bit of strategy with regard to the intention or emotion of the project. I worked on the re-branding campaign for Pepsi’s Sierra Mist, which changed its name to Mist Twst.

There were a lot of very specific sound design elements I created in that session. The intention was to not just make an everyday run-of-the-mill soda commercial; we wanted it to feel crisp, clean and natural like the drink. So, we went to the store and bought a bunch of different fruits and vegetables, and recorded ourselves cutting, squeezing, and dropping them into a fizzy glass of Mist Twst. We even recorded ourselves opening soda cans at different speeds and pouring soda into glasses with and without ice.

I also worked on a really fun 5 Gum radio campaign that won a Radio Mercury Award. The concept was a “truth or dare” commercial geared toward people streaming music with headphones on. It allows the listener to choose whether to play along with listening to the left headphone for a truth, or the right headphone to do a dare.

We did a campaign for Aleve with a beautiful film showing a grandfather on an outing with his granddaughter at an amusement park when suddenly he throws his back out. The entire park grinds to a halt as a result — visually and audio-wise. There was a lot of sound design involved in this process, and it was a very fun and creative experience.

Kerrygold

For a recent package of TV spots for Kerrygold, the Irish dairy group, created by Energy BBDO, my main goal for “Made for this Moment” was to let the gentle music track and great lyrics have center stage and breathe, as if they were their own character in the story. My approach to the sound design was to fill out each scene with subtle elements that are almost felt rather than heard… nothing poking through further than anything else, and nothing competing with the music, only enhancing the overall mood.

Focusing on sound bars at CES 2017

By Tim Hoogenakker

My day job is as a re-recording mixer and sound editor working on long-form projects, so when I attended this year’s Consumer Electronics Show in Las Vegas, I homed in on the leading trends in home audio playback. It was important for me to see what the manufacturers are planning regarding multi-channel audio reproduction for the home. From the look of it, sound bars seem to be leading the charge. My focus was primarily on immersive sound bars, single-box audio components capable of playing Dolby Atmos and DTS:X as close to their original format as they can.

Klipsch TheaterBar


Now I must admit, I’ve kicked and screamed about sound bars in the past, audibly rolling my eyes at the concept. We audio mixers are used to working in perfect discrete surround environments, but I wanted to keep an open mind. Whether we as sound professionals like it or not, this is where consumer product technology is headed. Besides, I didn’t see quite the same glitz and glam around discrete surround speaker systems at CES.

Here are some basic details with immersive sound bars in general:

1. In addition to the front channels, they often have up-firing drivers on the left and right edges (normally on the top and sides) that are intended to reflect sound off the walls and ceiling of the room. This is to replicate the immersive experience as much as possible. Sure, this isn’t exact replication, but I’ll certainly give manufacturers praise for their creativity.
2. Because of the required reflectivity, the walls have to be flat enough to reflect the signal, yet still balanced enough that it doesn’t sound like you’re sitting in the middle of your shower.
3. There is definitely a sweet spot in the seating position when listening to sound bars. If you move off-axis toward the sides, you may experience somewhat of a wash, but considering what they’re trying to replicate, it’s an interesting take.
4. They usually have an auto-tuning microphone system that measures the room to calibrate the sound as accurately as possible.
5. I’m convinced that there’s a conspiracy by the manufacturers to make each and every sound bar, in physical appearance, resemble the enigmatic Monolith in 2001: A Space Odyssey… as if someone literally just knocked it over.

Yamaha YSP5600

My first real immersive sound bar experience happened last year with the Yamaha YSP-5600, which comes loaded with 40 (yes 40!) drivers. It’s a very meaty 26-pound sound bar with a height of 8.5 inches and width of 3.6 feet. I heard a few projects that I had mixed in Dolby Atmos played back on this system. Granted, even when correctly tuned it’s not going to sound the same as my dubbing stage or with dedicated home theater speakers, but knowing this I was pleasantly surprised. A few eyebrows were raised for sure. It was fun playing demo titles for friends, watching them turn around and look for surround speakers that weren’t there.

A number of the sound bars displayed at CES bring me to my next point, which honestly is a bit of a complaint. Many were very thin in physical design, often labeled “ultra-thin.” To me, that means very small drivers, which in turn means an elevated crossover frequency handing everything below it to the subwoofer(s). Sure, I understand that they need to look sleek to sell and to be acceptable for room aesthetics, but I’m an audio nerd. I WANT those low- to mid-frequencies carried through the drivers; don’t just jam ALL the low- and mid-frequencies into the sub. It’ll be interesting to see how this plays out as these products reach market during the year.
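The crossover described above is, at its simplest, a filter deciding which frequencies the small drivers keep and which get handed to the sub. A minimal first-order sketch (real crossovers use much steeper slopes, and the 500 Hz cutoff here is a hypothetical value, not a measured spec from any of these products):

```python
import math

def one_pole_lowpass(x, fc, fs):
    """First-order low-pass: a toy model of the sub path of a
    crossover. Content below fc passes; content above is rolled
    off at 6 dB/octave (real crossovers are steeper)."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

fs, fc = 48000, 500  # hypothetical 500 Hz crossover for an "ultra-thin" bar
low  = [math.sin(2 * math.pi * 50   * n / fs) for n in range(fs)]
high = [math.sin(2 * math.pi * 5000 * n / fs) for n in range(fs)]
# The sub path keeps the 50 Hz tone nearly intact but strongly
# attenuates 5 kHz (skip the first tenth-second to settle the filter):
print(round(rms(one_pole_lowpass(low, fc, fs)[fs // 10:]), 2))   # ~0.7
print(round(rms(one_pole_lowpass(high, fc, fs)[fs // 10:]), 2))  # ~0.07
```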

Sony HT-ST5000

Besides immersive audio, most of these sound bars will play from a huge variety of sources, formats and specs, such as Blu-ray, Blu-ray UHD, DVD, DVD-Audio, streaming via network and USB, as well as connections for Wi-Fi, Bluetooth and 4K pass-through.

Some of these sound bars — like many things at CES 2017 — are supported with Amazon Alexa and Google Home. So, instead of fighting over the remote control, you and your family can now confuse Alexa with arguments over controlling your audio between “Game of Thrones” and Paw Patrol.

Finally, I probably won’t be installing a sound bar on my dub stage for reference anytime soon, but I do feel that professionally it’s very important for me to know the pros and the cons — and the quirks — so we can be aware how our audio mixes will translate through these systems. And considering that many major studios and content creators are becoming increasingly ready to make immersive formats their default deliverable standard, especially now with Dolby Vision, I’d say it’s a necessary responsibility.

Looking forward to seeing what NAB has up its sleeve on this as well.

Here are some of the more notable soundbars debuted:

LG SJ9

Sony HT-ST5000: This sound bar is compatible with Google Home. Sony says it works well with ceilings as high as 17 feet. It’s not DTS:X-capable yet, but Sony said that will happen by the end of the year.

LG SJ9: The LG SJ9 sound bar is currently described by LG as “4K high resolution audio” (which is an impossible claim). It’s possible that they mean it will pass through a 4K signal, but the LG folks couldn’t clarify. That snafu aside, it has very wide dimensionality, which helps with stereo imaging. It will be Dolby Vision/HDR-capable via a future firmware upgrade.

The Klipsch “Theaterbar”: This is another eyebrow-raiser. It’ll be released in Q4 2017. There’s no information on the web yet, but they were showcasing it at CES.

Pioneer Elite FS-EB70: There’s no information on the web yet, but they were showcasing this at CES.

Onkyo SBT-A500 Network: Also no information but it was shown at CES.


Formosa Group re-recording mixer and sound editor Tim Hoogenakker has over 20 years of experience in audio post for music, features and documentaries, television and home entertainment formats. He had stints at Prince’s Paisley Park Studios and POP Sound before joining Formosa.

Stranger Things

Upcoming AES LA meeting features Netflix’s Stranger Things sound team

On January 31, the AES LA Section monthly meeting will showcase the sound editorial and re-recording of the Netflix series Stranger Things. Attendees will hear first-hand how the sound team creates the 5.1-channel soundtrack, including the haunting ’80s-style, synth-based score that is key to the show’s look and feel. A second season from the Duffer Brothers is scheduled to start later this year.

For those of you not familiar with the show, it’s set in Indiana in 1983 and focuses on a 12-year-old boy gone missing and the resulting search for him by the police chief and his friends.

The editorial team for Stranger Things is headed up by supervising sound editor Brad North, who works closely with sound designer Craig Henighan, sound effects editor Jordan Wilby and music editor David Klotz. The re-recording crew, working at the Technicolor Seward stage, is Joe Barnett, who handles dialogue and music, and Adam Jenkins, who handles sound effects.

“We drew our inspiration — subconsciously, at least — from such sci-fi films as Alien, The Thing and Predator,” Henighan recalls. Part sci-fi, part horror and part family drama, Stranger Things is often considered an homage to ’80s movies like Close Encounters of the Third Kind and E.T.

The joint AES/SMPTE January meeting, which will be held at the Sportsmen’s Lodge in Studio City on Tuesday, January 31, is open to both AES and SMPTE members and non-members.

Panelists will include Adam Jenkins, Jordan Wilby, Joe Barnett, David Klotz, Brad North and Craig Henighan.

Patriots Day

Augmenting Patriots Day‘s sound with archival audio

By Jennifer Walden

Fresh off the theatrical release of his dramatized disaster film Deepwater Horizon, director Peter Berg brings another current event to the big screen with Patriots Day. The film recounts the Boston Marathon bombing by combining Berg’s cinematic footage with FBI-supplied archival material from the actual bombing and investigation.

Once again, Berg chose to partner with Technicolor’s supervising sound editor/re-recording mixer Dror Mohar, who contributed to the soundtracks of Berg’s Deepwater Horizon (2016) and Lone Survivor (2013). He earned an MPSE Award nomination for sound editing on the latter.

According to Mohar, Berg’s intention for Patriots Day was not to make a film about tragedy and terrorism, but rather to tell the story of a community’s courage in the face of this disaster. “This was personal for Peter [Berg]. His conviction about not exploiting or sensationalizing any of it was in every choice he made,” says Mohar. “He was vigilant about the cinematic attributes never compromising the authenticity and integrity of the story of the events and the people who were there — the law enforcement, victims and civilians. Peter wanted to evolve and explore the sound continuously. My compass throughout was to create a soundtrack that was as immersive as it was genuine.”

From a sound design perspective, Mohar was conscious of keeping the qualities and character of the sounds in check — favoring raw, visceral sounds over treated or polished ones, and avoiding oversized “Hollywood” treatments. As an example, he points to the Watertown shootout sequence. The lead-up to the firefight was inspired by source audio of the actual shootout captured by a neighbor on a handheld camera.

“Two things grabbed my attention — the density of the firefight, which sounded like Chinese New Year, and the sound of wind chimes from a nearby home,” he explains. “Within what sounded like war and chaos, there was a sweet sound that referenced home, family, porch… This shootout is happening in a residential area, in the middle of everyday life. Throughout the film, I wanted to maintain the balance between emotional and visceral sounds. Working closely with picture editors Colby Parker Jr. and Gabriel Fleming, we experimented with sound design that aligned directly with the dramatic effect of the visuals versus designs that counteracted the drama and created an experience that was less comfortable but ultimately more emotional.”

Tension was another important aspect of the design. The bombing disrupted life, and not just the lives of those immediately or physically affected by the bombing. Mohar wanted the sound to express those wider implications. “When the city is hit, it affects everyone. Something in that time period is just not the same. I used a variety of recordings of calls to prayer and crowds of people from all over the world to create soundscapes that you could expect to hear in a city but not in Boston. I incorporated these in different times throughout the film. They aren’t in your face, but used subtly.”

The Mix
On the mix, Mohar and re-recording mixer Mike Prestwood-Smith chose a realistic approach to their sonic treatments.

Prestwood-Smith notes that for an event as recent and close to the heart as the Boston Marathon bombing, the goal was to have respect for the people who were involved — to make Patriots Day feel real and not sensationalized in any sense. “We wanted it to feel believable, like you are witnessing it, rather than entertaining people. We want to be entertaining, engaging and dramatic, but ultimately we don’t want this to feel gratuitous, as though we are using these events to our advantage. That’s a tightrope to tread, not just for sound but for everything, like the shooting and the performances. All of it.”

Mohar reinforces the idea of enabling the audience to feel the events of the bombing first-hand through sound. “When we experience an event that shocks us, like a car crash, or in this case, an act of terror, the way we experience time is different. You assess what’s right there in front of you and what is truly important. I wanted to leverage this characteristic in the soundtrack to represent what it would be like to be there in real time, objectively, and to create a singular experience.”

Archival Footage
Mohar and Prestwood-Smith had access to enormous amounts of archival material from the FBI, which was strategically used throughout the soundtrack. In the first two reels, up to and including the bombing, Prestwood-Smith explains that picture editors Fleming and Parker Jr. intercut between the dramatized footage and the archived footage “literally within seconds of each other. Whole scenes became a dance between the original footage and the footage that Peter shot. In many cases, you’re not aware of the difference between the two and I think that is a very clever and articulate thing they accomplished. The sound had to adhere to that and it had to make you feel like you were never really shifting from one thing to the other.”

It was not a simple task to transition from the Hollywood-quality sound of the dramatized footage to sound captured on iPhones and low-resolution cameras. Prestwood-Smith notes that he and Mohar were constantly evolving the qualities of the sounds and mix treatments so all elements would integrate seamlessly. “We needed to keep a balance between these very different sound sources and make them feel coherently part of one story rather than shifting too much between them all. That was probably the most complex part of the soundtrack.”

Berg’s approach to perspective — showing the event from a reporter’s point of view as opposed to a spectator’s point of view — helped the sound team interweave the archival material and fictionalized material. For example, Prestwood-Smith reports the crowd sounds were 90 percent archival material, played from the perspective of different communication sources, like TV broadcasts, police radio transmissions and in-ear exchanges from production crews on the scene. “These real source sounds are mixed with the actors’ dialogue to create a thread that always keeps the story together as we alternate through archival and dramatized picture edits.”

While intercutting various source materials for the marathon and bombing sequences, Mohar and Prestwood-Smith worked shot by shot, determining for each whether to highlight an archival sound, carry the sound across from the previous shot or go with another specific sound altogether, regardless of whether it was one they created or one that was from the original captured audio.

“There would be archival footage with screaming on it that would go across to another shot and connect the archive footage to the dramatized, or sometimes not. We literally worked inch-by-inch to make it feel like it all belonged in one place,” explains Prestwood-Smith. “We did it very boldly. We embraced it rather than disguised it. Part of what makes the soundtrack so dynamic is that we allow each shot to speak in its genuine way. In the earlier reels, where there is more of the archival footage, the dynamics of it really shift dramatically.”

Patriots Day is not meant to be a clinical representation of the event. It is not a documentary. By dramatizing the Boston Marathon bombing, Berg delivers a human story on an emotional level. He uses music to help articulate the feeling of a scene and guide the audience through the story emotionally.

“On an emotional level, the music did an enormous amount of heavy lifting because so much of the sound work was really there to give the film a sense of captured reality and truth,” says Prestwood-Smith. “The music is one of the few things that allows the audience to see the film — the event — slightly differently. It adds more emotion where we want it to but without ever tipping the balance too far.”

The Score
Composers Trent Reznor and Atticus Ross gave each cue a definitive role. Their music helps the audience decompress for certain moments before being thrust right back into the action. “Their compositions were so intentional and so full of character and attitude. It’s not generic,” says Mohar. “Each cue feels like a call to action. The tracks have eyes and mouths and teeth. It’s very intentional. The music is not just an emotional element; it’s part of the sound design and sound overall. The sound and music work together to contribute equally to this film.”

“The way that we go back and forth between the archival footage and the dramatized footage was the same way we went from designed audio to source audio, from music to musical, from sound effects to sound effective,” he continues. “On each scene, we decided to either blur the line between music and effects, between archival sound and designed sound, or to have a hard line between each.”

To complement the music, Mohar experimented with rhythmic patterns of sounds to reinforce the level of intensity of certain scenes. “I brought in mechanical keyboards of various types, ages and material, and recorded different typing rhythms on them. These sounds were used in many of the Black Falcon terminal scenes. I used softer sounding keyboards with slower tempos when I wanted the level of tension to be lower, and then accelerated them into faster tempos with harsher sounding keyboards as the drama in the terminal increased,” he says. “By using modest, organic sounds I could create a subliminal sense of tension. I treated the recordings with a combination of plug-ins, delays, reverbs and EQs to create sounds that were not assertive.”

Dialogue
In terms of dialogue, the challenge was to get the archive material and the dramatized material to live in the same space emotionally and technically, says Prestwood-Smith. “There were scenes where Mark Wahlberg’s character is asking for ambulances or giving specific orders and playing underneath that dialogue is real, archival footage of people who have just been hurt by these explosions talking on their phones. Getting those two things to feel integrated was a complex thing to do. The objective was to make the sound believable. ‘Is this something I can believe?’ That was the focus.”

Prestwood-Smith used a combination of Avid and FabFilter plug-ins for EQ and dynamics, and created reverbs using Exponential Audio’s PhoenixVerb and Audio Ease’s Altiverb.

Staying in The Box
From sound editorial through to the final mix, Mohar and Prestwood-Smith chose to keep the film in Pro Tools. Staying in the box offered the best workflow solution for Patriots Day. Mohar designed and mixed the first phase of the film at his studio at Technicolor’s Tribeca West location in Los Angeles, a satellite of Technicolor’s main sound facility at Paramount, while Prestwood-Smith worked out of his own mix room in London. The two collaborated remotely, sharing their work back and forth, continuously developing the mix to match the changing picture edit. “We were on a very accelerated schedule, and they were cutting the film all the way through mastering. Having everything in the box meant that we could constantly evolve the soundtrack,” says Prestwood-Smith.

7.1 Surround Mix
Mohar and Prestwood-Smith met up for the final 7.1 surround mix at 424 Post in Hollywood and mixed the immersive versions at Technicolor Hollywood.

While some mix teams prefer to split the soundtrack, with one mixer on music and dialogue and the other handling sound effects and Foley, Mohar and Prestwood-Smith have a much more fluid approach. There is no line drawn across the board; they share the tracks equally.

“Mike has great taste and instincts; he doesn’t operate like a mixer. He operates like a filmmaker, and I look to him to make the final decisions and direct the shape of the soundtrack,” explains Mohar. “The best thing about working with Mike is that it’s truly collaborative; no part of the mix belonged to just one person. Anything was up for grabs and the sound as a whole belonged to the story. It makes the mix more unified, and I wouldn’t have it any other way.”


Jennifer Walden is a New Jersey-based audio pro and writer. 

Cory Melious

Behind the Title: Heard City senior sound designer/mixer Cory Melious

NAME: Cory Melious

COMPANY: Heard City (@heardcity)

CAN YOU DESCRIBE YOUR COMPANY?
We are an audio post production company.

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
I provide final mastering of the audio soundtrack for commercials, TV shows and movies. I combine the production audio recorded on set (typically dialog), narration, music (whether an original composition or an artist’s track) and sound effects (often created by me) into one 5.1 surround soundtrack that plays on both TV and the Internet.

Heard City

WHAT WOULD SURPRISE PEOPLE ABOUT WHAT FALLS UNDER THAT TITLE?
I think most people without a production background think the sound of a spot just “is.” They don’t really think about how or why it happens. Once I start explaining the sonic layers we combine to make up the final mix they are really surprised.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The part that really excites me is the fact that each spot offers its own unique challenge. I take raw audio elements and tweak and mold them into a mix. Working with the agency creatives, we’re able to develop a mix that helps tell the story being presented in the spot. In that respect I feel like my job changes day in and day out and feels fresh every day.

WHAT’S YOUR LEAST FAVORITE?
Working late! There are a lot of late hours in creative jobs.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I really like finishing a job. It’s that feeling of accomplishment when, after a few hours, I’m able to take some pretty rough-sounding dialog and manipulate that into a smooth-sounding final mix. It’s also when the clients we work with are happy during the final stages of their project.

WHAT TOOLS DO YOU USE ON A DAY-TO-DAY BASIS?
Avid Pro Tools, iZotope RX, Waves Mercury, Altiverb and ReVibe.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
One of my many hobbies is making furniture. My dad is a carpenter and taught me how to build at a very young age. If I never had the opportunity to come to New York and make a career here, I’d probably be building and making furniture near my hometown of Seneca Castle, New York.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I think this profession chose me. When I was a kid I was really into electronics and sound. I was both the drummer and the front-of-house sound mixer for my high school band. Mixing from behind the speakers definitely presents some challenges! I went on to college to pursue a career in music recording, but when I got an internship in New York at a premier post studio, I truly fell in love with creating sound for picture.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, I’ve worked on Chobani, Google, Microsoft, and Budweiser. I also did a film called The Discovery for Netflix.

The Discovery for Netflix.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I’d probably have to say Chobani. That was a challenging campaign because the athletes featured in it were very busy. In order to capture the voiceover properly I was sent to Orlando and Los Angeles to supervise the narration recording and make sure it was suitable for broadcast. The spots ran during the Olympics, so they had to be top notch.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iPhone, iPad and depth finder. I love boating and can’t imagine navigating these waters without knowing the depth!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on the basics — Facebook, LinkedIn and Instagram. I dabble with Snapchat occasionally and will even open up Twitter once in a while to see what’s trending. I’m a fan of photography and nature, so I follow a bunch of outdoor Instagrammers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I joke with my friends that all of my hobbies are those of retired folks — sailing, golfing, fly fishing, masterful dog training, skiing, biking, etc. I joke that I’m practicing for retirement. I think hobbies that force me to relax and get out of NYC are really good for me.

The A-List: Jackie and Neruda director Pablo Larraín

By Iain Blair

Chilean director Pablo Larraín has been hailed as one of the most ambitious, iconoclastic, daring — and important — political filmmakers of his generation thanks to such films as No, a drama about the 1988 plebiscite that brought an end to the Pinochet era; Tony Manero, about a man obsessed with John Travolta’s disco dancing character from Saturday Night Fever; and The Club, a drama about disgraced priests.

Writer Iain Blair and director Pablo Larraín.

He’s also one of the hardest-working directors in the business, with two major releases out before Christmas. First up is Fox’s Jackie, about one of the greatest icons of the 20th Century. It stars Natalie Portman as first lady Jackie Kennedy and is set in the immediate aftermath of the Kennedy assassination. That’s followed by Neruda, which focuses on the life of Pablo Neruda, one of the greatest poets of the 20th Century. Neruda is Chile’s Oscar submission, and Jackie, Larraín’s first English-language film, is also getting a lot of Oscar and awards season buzz.

I talked to Larraín about making the films and his workflow.

Why make back-to-back films?
I never planned it this way. I was going to make Neruda, and then we had to push it six months for a lot of reasons. My last film, The Club, won an award at Berlin, and Darren Aronofsky headed up the jury and asked me to direct Jackie, which he produced. So I ended up doing Jackie right after Neruda.

So what does a Chilean director shooting in Paris bring to such an iconic American subject?
The view of an outsider, maybe. We were doing a lot of post on Neruda in Paris, and the film was mainly made and cut there at Film Factory. Natalie was also living there, so it all came together organically. We built all the interiors there — the White House and so on.

Jackie

Neither film is your run-of-the-mill biopic. Can you talk about Jackie, which has a lot of time compression, random memories and flashbacks?
I don’t like normal biopics. They’re very tricky to do, I think. More than anything we wanted to find and discover the specific sensibility that was Jackie and examine all the events that happened after the assassination. It was also about capturing specific emotions and showing her strengths and weaknesses, and all the paradoxes and controversies that surrounded her. So we approached it from fiction. Good biopics aren’t really biographical; they just try to capture a sense of the person more through atmosphere and emotions than a linear plot and structure.

You must have done a lot of research?
Extensive — looking at newsreels, interviews, reading books. Before all that, I had a very superficial idea of her as this person who was mainly concerned about clothes and style and furniture. But as I researched her character, I discovered just what an incredible woman she was. And for me, it’s also the story of a mother.

Jackie

What were the main technical challenges of making this?
The biggest challenge for me was, of course, making my first film in English. It wasn’t easy to do. My other biggest challenge was making a film about a woman. In my films, the main characters have always been men, so that was the biggest one for me to deal with and understand.

Do you like the post process?
I love it — and more and more, the editing. It’s just so beautiful when you sit with the editor, and every scene you’ve shot is now cut in that first cut. Then you go, “Alright, where do we go now, to really shape the film?” You start moving scenes around and playing with the narrative. I think it was Truffaut who said that when you shoot, you have to fight with the script, and then when you edit, you have to fight with the shoot, and it’s so true. I’ve learned over the years to really embrace post and editing.

You worked with editor Sebastián Sepúlveda on Jackie. Tell us about that relationship and how it worked.
He began cutting while we were shooting, and when we wrapped we finished cutting it at Primo Solido, in Santiago, Chile. We did all the pre-mixes there too.

This is obviously not a VFX-driven piece, but as with any period piece the VFX play a big role.
Absolutely, and Garage, a VFX company in Santiago, did about 80 percent of them. They did a great job. We also used Mikros and Digital District in Paris. I like working with visual effects when I have to, but I’m not really a greenscreen guy (laughs). Both films were fun to do in terms of the effects work, and you can’t tell that they’re visual effects — all the backgrounds and so on are very photorealistic, and I love that illusion… that magic. Then there’s a lot of work erasing all the modern things and doing all the cleanup. It’s the kind of post work that’s most successful when no one notices it. (Check out our interview with Jackie editor Sebastián Sepúlveda.)

Neruda

Let’s talk about Neruda, which is also not a typical biopic, but more of a “policier” thriller.
Yes, it’s less about Neruda himself and more about what we call the “Nerudian world.” It’s about what he created and what happened when he went into hiding when the political situation changed in Chile. We created this fictional detective who’s hunting him as a way of exploring his life.

Along with Jackie, he was a real person. Did you feel an extra responsibility in making two films about such icons?
Yes, of course, but if you think about it too much it can just paralyze you. You’re trying to capture a sense of the person, their world, and we shot Neruda in Chile, Buenos Aires and a little bit in Paris.

What did you shoot the films on?
We shot Jackie on Super 16 film, and Neruda on Red. I still love shooting on film more than digital, but we had a great experience with the Red cameras, and we used some old Soviet anamorphic lenses from the ‘60s that I found in LA about eight years ago. We got a beautiful look with them. Then we did all the editing in Paris with Hervé Schneid, but with a little help at the end from Sebastián Sepúlveda to finish it in time for its Cannes debut. We changed quite a few things — especially the music.

Neruda

Can you talk about the importance of music and sound in both of the films?
Well, film is an audio-visual medium, so sound is half the movie. It triggers mood, emotion, atmosphere, so it’s crucial to the image you’re looking at, and I spend a lot of time working on the music and sound with my team — I love that part of post too. When I work with my editors, I always ask them to cut to sound and work with sound as well, even if they don’t like to work that way.

How is the movie industry in Chile?
I think it’s healthy, and people are always challenging themselves, especially the younger generation. It’s full of great documentaries — and people who’ve never worked with film, only digital. It’s exciting.

What’s next?
I don’t quite know, but I’m developing several projects. It’s whatever happens first.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

The sound of fighting in Jack Reacher: Never Go Back

By Jennifer Walden

Tom Cruise is one tough dude, and not just on the big screen. Cruise, who seems to be aging very gracefully, famously likes to do his own stunts, much to the dismay of many film studio execs.

Cruise’s most recent tough guy turn is in the sequel to 2014’s Jack Reacher. Jack Reacher: Never Go Back, which is in theaters now, is based on the protagonist in author Lee Child’s series of novels. Reacher, as viewers quickly find out, is a hands-on type of guy — he’s quite fond of hand-to-hand combat where he can throw a well-directed elbow or headbutt a bad guy square in the face.

Supervising sound editor Mark P. Stoeckinger, based at Formosa Group’s Santa Monica location, has worked on numerous Cruise films, including both Jack Reacher films, Mission: Impossible II and III and The Last Samurai; he also helped out on Edge of Tomorrow. Stoeckinger has a ton of respect for Cruise: “He’s my idol. Being about the same age, I’d love to be as active and in shape as he is. He’s a very amazing guy because he is such a hard worker.”

The audio post crew on ‘Jack Reacher: Never Go Back.’ Mark Stoeckinger is on the right.

Because he does his own stunts, and thanks to the physicality of Jack Reacher’s fighting style, sometimes Cruise gets a bruise or two. “I know he goes through a fair amount of pain, because he’s so extreme,” says Stoeckinger, who strives to make the sound of Reacher’s punches feel as painful as they are intended to be. If Reacher punches through a car window to hit a guy in the face, Stoeckinger wants that sound to have power. “Tom wants to communicate the intensity of the impacts to the audience, so they can appreciate it. That’s why it was performed that way in the first place.”

To give the fights that visceral, intense Reacher feel, Stoeckinger takes a multi-frequency approach. He layers high-frequency sounds, like swishes and slaps to signify speed, with low-end impacts to add weight. The layers are always an amalgamation of sound effects and Foley.

Stoeckinger prefers pulling hit impacts from sound libraries, or creating impacts specifically with “oomph” in mind. Then he uses Foley to flesh out the fight, filling in the details to connect the separate sound effects elements in a way that makes the fights feel organic.

The Sounds of Fighting
Under Stoeckinger’s supervision, a fight scene’s sound design typically begins with sound effects. This allows his sound team to start immediately, working with what they have at hand. On Jack Reacher: Never Go Back this task was handed over to sound effects editor Luke Gibleon at Formosa Group. Once the sound effects were in place, Stoeckinger booked the One Step Up Foley stage with Foley artist Dan O’Connell. “Having the effects in place gives us a very clear idea of what we want to cover with Foley,” he says. “Between Luke and Dan, the fight soundscapes for the film came to life.”

The culminating fight sequence, where Reacher inevitably prevails over the bad guy, was Stoeckinger’s favorite to design. “The arc of the film built up to this fight scene, so we got to use some bigger sounds. Although, it still needed to seem as real as a Hollywood fight scene can be.”

The sound there features low-frequency embellishments that help the audience to feel the fight and not just hear it. The fight happens during a rowdy street festival in New Orleans in honor of the Day of the Dead. Crowds cavort with noisemakers, bead necklaces rain down, music plays and fireworks explode. “Story wise, the fireworks were meant to mask any gunshots that happened in the scene,” he says. “So it was about melding those two worlds — the fight and the atmosphere of the crowds — to help mask what we were doing. That was fun and challenging.”

The sounds of the street festival scene were all created in post since there was music playing during filming that wasn’t meant to stay on the track. The location sound did provide a sonic map of the actual environment, which Stoeckinger considered when rebuilding the scene. He also relied on field recordings captured by Larry Blake, who lives in New Orleans. “Then we searched for other sounds that were similar because we wanted it to sound fun and festive but not draw the ear too much since it’s really just the background.”

Stoeckinger sweetened the crowd sounds with recordings they captured of various noisemakers, tambourines, bead necklaces and group ADR to add mid-field and near-field detail when desired. “We tried to recreate the scene, but also gave it a Hollywood touch by adding more specifics and details to bring it more to life in various shots, and bring the audience closer to it or further away from it.”

Stoeckinger also handled design on the film’s other backgrounds. His objective was to keep the locations feeling very real, so he used a combination of practical effects they recorded and field recordings captured by effects editor Luke Gibleon, in addition to library effects. “Luke [Gibleon] has a friend with access to an airport, so Luke did some field recordings of the baggage area and various escalators with people moving around. He also captured recordings of downtown LA at night. All of those field recordings were important in giving the film a natural sound.”

There were numerous locations in this film. One finds Reacher meeting up with a teenage girl whom he’s protecting from the bad guys. She lives in a sketchy part of town, so to reinforce the sketchiness of the neighborhood, Stoeckinger added nearby train tracks to the ambience and created street walla that had an edgy tone. “It’s nothing that you see outside of course, but sound-wise, in the ambient tracks, we can paint that picture,” he explains.
In another location, Stoeckinger wanted to sell the idea that they were on a dock, so he added in a boat horn. “They liked the boat horn sound so much that they even put a ship in the background,” he says. “So we had little sounds like that to help ground you in the location.”

Tools and the Mix
At Formosa, Stoeckinger had his team work together in one big Avid Pro Tools 12 session that included all of their sounds: the Foley, the backgrounds, sound effects, loop group and design elements. “We shared it,” he says. “We had a ‘check out’ system, like, ‘I’m going to check out reel three and work on this sequence.’ I did some pre-mixing, where I went through a scene or reel and decided what’s working or what sections needed a bit more. I made a mark on a timeline and then handed that off to the appropriate person. Then they opened it up and did some work. This master session circulated between two or three of us that way.” Stoeckinger, Gibleon and sound designer Alan Rankin, who handled guns and miscellaneous fight sounds, worked on this section of the film.

All the sound effects, backgrounds, and Foley were mixed on a Pro Tools ICON, and kept virtual from editorial to the final mix. “That was helpful because all the little pieces that make up a sound moment, we were able to adjust them as necessary on the stage,” explains Stoeckinger.

Premixing and the final mixes were handled at Twentieth Century Fox Studios on the Howard Hawks Stage by re-recording mixers James Bolt (effects) and Andy Nelson (dialogue/music). Their console arrangement was a hybrid, with the effects mixed on an Avid ICON and the dialogue and music mixed on an AMS Neve DFC console.

Stoeckinger feels that Nelson did an excellent job of managing the dialogue, particularly for moments where noisy locations may have intruded upon subtle line deliveries. “In emotional scenes, if you have a bunch of noise that happens to be part of the dialogue track, that detracts from the scene. You have to get all of the noise under control from a technical standpoint.” On the creative side, Stoeckinger appreciated Nelson’s handling of Henry Jackman’s score.

On effects, Stoeckinger feels Bolt did an amazing job in working the backgrounds into the Dolby Atmos surround field, like placing PA announcements in the overheads, pulling birds, cars or airplanes into the surrounds. While Stoeckinger notes this is not an overtly Atmos film, “it helped to make the film more spatial, helped with the ambiences and they did a little bit of work with the music too. But, they didn’t go crazy in Atmos.”

Behind the Title: Sound mixer/sound designer Rob DiFondi

Name: Rob DiFondi

Company: New York City’s Sound Lounge

Can you describe your company?
Sound Lounge is an audio post company that provides creative services for TV and radio commercials, feature films, television series, digital campaigns, gaming and other emerging media. Artist-owned and operated, we’re made up of an incredibly diverse, talented and caring group of people who all love the advertising and film worlds.

We recently celebrated Sound Lounge’s 18th birthday. I’m proud to say I’ve been a part of the SL family for over 13 years now, and I couldn’t ask for a better group of friends to hang out with every day.

What’s your job title?
Senior Mixer/Sound Designer

What does that entail?
I have actors in my booth all day recording VO (voiceover) for different commercials. My clients (usually brands, ad agencies, production companies or editorial shops) hang in my room, and together we get the best possible read from the actor while they’re in the booth. I then craft sound design for the spot by either pulling sound effects from my library or recreating the necessary sounds myself (a.k.a. “Foley”). Once that’s set, I’ll take the lines the actor recorded, the sound effects I created, and any music, and then mix them all together so the spot sounds perfect (and is legal for TV broadcast)!

Being a mixer in the advertising post world isn’t easy. I also have to be able to provide a solid lunch recommendation — I always need to make sure I know where my clients can get the best sushi in the Flatiron district!

What would surprise people the most about what falls under that title?
That most of us are musicians who wanted to be rock stars but thought better of it. Maybe that isn’t so surprising though.

Sound Lounge

What’s your favorite part of the job?
The people, and the social part of the advertising industry. This business is filled with so many kind, funny and talented people, and it’s so nice to have them be a part of your life. And how can you beat partying every year at MoMA for the AICP Gala?

What’s your least favorite?
Probably the lack of travel. I love our office, but it would be fun to do my job in different cities once in a while.

What is your favorite time of the day?
Walking in my front door and seeing my wife and kids.

If you didn’t have this job, what would you be doing instead?
Something that involves beaches and nice weather.

How early on did you know this would be your path?
I totally fell into this profession. I went to school to become a music engineer/producer. I had no idea there was a whole industry for mixing TV spots. Once I got into it though, I knew immediately that I loved it.

Can you name some recent projects you have worked on?
I worked on some really nice pieces for Maybelline, Google, Lincoln and TD Ameritrade.

What is the project that you are most proud of?
Miracle Stain, a Super Bowl commercial that I mixed for Tide a few years back. I finished the mix at 10pm on Thursday and got a call at 2am that there had been some changes, so I had to come back to work in the middle of the night. I tweaked the mix until the sun came up and had it ready to ship by 9am. It was one of those very epic projects that had all the classic markings of a Super Bowl spot.

Name three pieces of technology you can’t live without.
My iPhone, my DSLR camera and iZotope RX.

What social media channels do you follow?
I’m a big Instagram guy. I love seeing people’s lives told through photos. Facebook is so 2015.

Do you listen to music while you work? Care to share your favorite music to work to?
Since I work in audio I can’t listen to music while I work, but when I’m not working I listen to a lot of modern country music, Dave Matthews Band (not afraid to say it!), prog metal and pretty much everything in between.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
I just leased a Jeep Wrangler Unlimited. There’s nothing like putting the top down and taking a drive to the beach!

The sound of two worlds for The Lost City of Z

By Jennifer Walden

If you are an explorer, your goal is to go where no one has gone before, or maybe it’s to unearth and rediscover a long-lost world. Director James Gray (The Immigrant) takes on David Grann’s nonfiction book The Lost City of Z, which follows the adventures of British explorer Colonel Percival Fawcett, who in 1925 disappeared with his son in the Amazon jungle while on a quest to locate an ancient lost city.

Gray’s biographical film, which premiered October 15 at the 54th New York Film Festival, takes an interpretive approach to the story by exploring Fawcett’s inner landscape, which is at odds with his physical location — whether he’s in England or the Amazon, his thoughts drift between the two incongruent worlds.

Once Gray returned from filming The Lost City of Z in the jungles of Colombia, he met up with supervising sound editor/sound designer Robert Hein at New York’s Harbor Picture Company. Having worked together on The Immigrant years ago, Hein says he and Gray have an understanding of each other’s aesthetics. “He has very high goals for himself, and I try to have that also. I enjoy our collaboration; we keep pushing the envelope. We have a mutual appreciation for making a film the greatest it can be. It’s an evolution, and we keep pushing the film to new places.”

The Sound of Two Worlds
Gray felt Hein and Harbor Picture Company would be the perfect partner to handle the challenging sound job for The Lost City of Z. “It involved the creation of two very different worlds: Victorian England and the jungle, both set against the backdrop of World War I. Therefore, we wanted someone who naturally thinks outside the box, someone who doesn’t only look at the images on the screen, but takes chances and does things outside the realm of what you originally had in mind, and Bob [Hein] and his crew are those people.”

Bob Hein

Gray tasked Hein with designing a soundscape that could merge Fawcett’s physical location with his inner world. Fawcett (Charlie Hunnam) is presented with physical attacks and struggles, but it’s his inner struggle that Gray wanted to focus on. Hein explains, “Fawcett is a conflicted character. A big part of the film is his longing for two worlds: the Amazon and England. When he’s in one place, his mind is in the other, so that was very challenging to pull off.”

To help convey Fawcett’s emotional and mental conflicts, Hein introduced the sounds of England into the Amazon, and vice-versa, subtly blending the two worlds. Through sound, the audience escapes the physical setting and goes into Fawcett’s mind. For example, the film opens with the sounds of the jungle, to which Hein added an indigenous Amazonian battle drum that transforms into the drumming of an English soldier, since Fawcett is physically with a group of soldiers preparing for a hunt. Hein explains that Fawcett’s belief that the Amazonians were just as civilized as Europeans (maybe even more so) was a controversial idea at the time. Merging their drumming wasn’t just a means of carrying the audience from the Amazon to England; it was also a comment on the two civilizations.

“In a way, it’s kind of emblematic of the whole sound design,” explains Hein. “It starts out as one thing but then it transforms into another. We did that throughout the film. I think it’s very beautiful and engaging. Through the sound you enter into his world, so we did a lot of those transitions.”

In another scene, Fawcett is traveling down a river in the jungle and he’s thinking about his family in England. Here, Hein adds an indigenous bird calling, and as the scene develops he blends the sound of that bird with an English church bell. “It’s very subtle,” he says. “The sounds just merge. It’s the merging of two worlds. It’s a feeling more than an obvious trick.”

During a WWI battle scene, Fawcett leads a charge of troops out of their trench. Here Hein adds sounds related to the Amazon in juxtaposition to Fawcett’s immediate situation. “Right before he goes into war, he’s back in the jungle even though he is physically in the trenches. What you hear in his head are memories of the jungle. You hear the indigenous Amazonians, but unless you’re told what it is you might not know.”

A War Cry
According to Hein, one of the big events in the film occurs when Fawcett is being attacked by Amazonians. They are shooting at him but he refuses to accept defeat. Fawcett holds up his Bible and an arrow goes tearing into the book. At that moment, the film takes the audience inside Fawcett’s mind as his whole life flashes by. “The sound is a very big part of that because you hear memories of England and memories of his life and his family, but then you start to hear an indigenous war cry that I changed dramatically,” explains Hein. “It doesn’t sound like something that would come out of a human voice. It’s more of an ethereal, haunted reference to the war cry.”

As Fawcett comes back to reality that sound gets erased by the jungle ambience. “He’s left alone in the jungle, staring at a tribe of Indians that just tried to kill him. That was a very effective sound design moment in this film.”

To turn that war cry into an ethereal sound, Hein used a granular synthesizer plug-in called Paulstretch (or Paul’s Extreme Sound Stretch), created by software engineer Paul Nasca. “Paulstretch turns sounds almost into music,” he says. “It’s an old technology, but it does some very special things. You can set it for a variety of effects. I would play around with it until I found what I liked. There were a lot of versions of a lot of different ideas as we went along.”
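Nasca has documented Paulstretch's core idea, and it is simple enough to sketch: step through the input with a small analysis hop, keep each grain's FFT magnitudes but randomize its phases (which smears transients into a sustained wash), then overlap-add the grains at a larger synthesis hop. Below is a minimal NumPy sketch of that idea; it is an illustration, not the plug-in's actual code, and the parameter names are our own.

```python
import numpy as np

def paulstretch(signal, stretch, window_size=1024):
    """Minimal Paulstretch-style extreme time-stretch (illustrative sketch).

    The analysis hop is `stretch` times smaller than the synthesis hop,
    so the output is roughly `stretch` times longer than the input.
    Randomizing the spectral phases turns a sharp sound into a
    sustained, 'ethereal' texture.
    """
    window = np.hanning(window_size)
    hop = window_size // 2                 # synthesis hop (50% overlap)
    in_hop = hop / stretch                 # slower analysis hop => longer output
    n_frames = int((len(signal) - window_size) / in_hop)
    out = np.zeros(n_frames * hop + window_size)
    for i in range(n_frames):
        start = int(i * in_hop)
        grain = signal[start:start + window_size] * window
        spectrum = np.fft.rfft(grain)
        # keep magnitudes, randomize phases: the core Paulstretch trick
        phases = np.random.uniform(0, 2 * np.pi, len(spectrum))
        grain = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases))
        out[i * hop:i * hop + window_size] += grain * window
    return out
```

Larger stretch values push the result further toward the sustained, haunted texture Hein describes for the war cry.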

It’s all part of the creative process, which Gray is happy to explore. “What’s great is that James [Gray] is excited about sound,” says Hein. “He would hang out and we would play things together and we would talk about the film, about the main character, and we would arrive at sounds together.”

Drones
Additionally, Hein sound designed drones to highlight the anxiety and trepidation that Fawcett feels. “The drones are low, sub-frequency sounds but they present a certain atmosphere that conveys dread. These elements are very subtle. You don’t get hit over the head with them,” he says.

The drones and all the sound design were created from natural sounds from the Amazon or England. For example, to create a low-end drone, they would start with jungle sounds — imagine a bee’s nest or an Amazonian instrument — and then manipulate those. “Everything was done to immerse the audience in the world of The Lost City of Z in its purest sense,” says Hein, who worked closely with Harbor’s sound editors Glenfield Payne, Damian Volpe and Dave Paterson. “They did great work and were crucial in the sound design.”

The Amazon
Gray also asked that Hein design the indigenous Amazon world exactly the way that it should be, as real as it could be. Hein says, “It’s very hard to find the correct sound to go along with the images. A lot of my endeavor was researching and finding people who did recordings in the Amazon.”

He scoured the Smithsonian Institution Archives, and did hours of research online, looking for audio preservationists who captured field recordings of indigenous Amazonians. “There was one amazing coincidence,” says Hein. “There’s a scene in the movie where the Indians are using an herbal potion to stun the fish in the river. That’s how they do it so as not to over-fish their environment. James [Gray] had found this chant that he wanted to have there, but that chant wasn’t actually a fishing chant. Fortunately, I found a recording of the actual fishing chant online. It’s beautifully done. I contacted the recordist and he gave us the rights to use it.”

Filming in the Amazon under very difficult conditions presented Hein with another post production challenge. “Location sound recording in the jungle is challenging because there were loud insects, rain and thunder. There were even far-afield trucks and airplanes that didn’t exist at the time.”

Gray was very concerned that sections of the location dialogue would be unusable. “The performances in the film are so great because they went deep into the Amazon jungle to shoot this film. Physically being in that environment I’m sure was very stressful, and that added a certain quality to the actors’ performances that would have been very difficult to replace with ADR,” says Hein, who carefully cleaned up the dialogue using several tools, including iZotope’s RX 5 Advanced audio restoration software. “With RX 5 Advanced, we could microscopically choose which sounds we wanted to keep and which sounds we wanted to remove, and that’s done visually. RX gives you a visual map of the audio and you can paint out sounds that are unnecessary. It’s almost like Photoshop for sound.”
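The "Photoshop for sound" workflow rests on the short-time Fourier transform: the audio becomes a time/frequency image, the engineer selects a rectangle, and the offending bins are attenuated or resynthesized before the image is inverted back to audio. iZotope's actual repair algorithms are proprietary; this hypothetical `suppress_region` function shows only the simplest version of the idea, plain attenuation of a selected box.

```python
import numpy as np

def suppress_region(signal, sr, t0, t1, f0, f1, n_fft=1024):
    """Crude spectral 'paint-out' (illustrative only, not RX's method).

    STFT the signal, attenuate bins inside the selected time/frequency
    rectangle, then overlap-add back to audio.
    """
    hop = n_fft // 2
    win = np.hanning(n_fft)
    n_frames = (len(signal) - n_fft) // hop
    freqs = np.fft.rfftfreq(n_fft, 1 / sr)
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for i in range(n_frames):
        s = i * hop
        frame = np.fft.rfft(signal[s:s + n_fft] * win)
        if t0 <= s / sr < t1:                 # frame is inside the time selection
            band = (freqs >= f0) & (freqs <= f1)
            frame[band] *= 0.01               # -40 dB on the selected band
        out[s:s + n_fft] += np.fft.irfft(frame) * win
        norm[s:s + n_fft] += win ** 2         # track window overlap for normalization
    return out / np.maximum(norm, 1e-8)
```

A real tool would fill the hole by interpolating from the surrounding spectrum rather than simply ducking it, which is what lets a distant truck vanish without leaving an audible notch.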

Hein shared the cleaned dialogue tracks with Gray, who was thrilled. “He was so excited about them. He said, ‘I can use my location sound!’ That was a big part of the project.”

ADR and The Mix
While much of the dialogue was saved, there were still a few problematic scenes that required ADR, including a scene that was filmed during a tropical rainstorm, and another that was shot on a noisy train as it traveled over the mountains in Colombia. Harbor’s ADR supervisor Bobby Johanson, who has worked with Gray on previous films, recorded everything on Harbor’s ADR stage that is located just down the hall from Hein’s edit suite and the dub stage.

Gray says, “Harbor is not just great for New York; it’s great, period. It is this fantastic place where they’ve got soundstages that are 150 feet away from the editing rooms, which is incredibly convenient. I knew they could handle the job, and it was really a perfect scenario.”

The Lost City of Z was mixed in 5.1 surround on an Avid/Euphonix System 5 console by re-recording mixers Tom Johnson (dialogue/music) and Josh Berger (effects, Foley, backgrounds) in Studio A at Harbor Sound’s King Street location in Soho. It was also reviewed on the Harbor Grand stage, which is the largest theatrical mix stage in New York. The team used the 5.1 environment to create the feeling of being engulfed by the jungle. Fawcett’s trips, some of which lasted years, were grueling and filled with disease and death. “The jungle is a scary place to be! We really wanted to make sure that the audience understood the magnitude of Percy’s trips to the Amazon,” says Berger. “There are certain scenes where we used sound to heighten the audience’s perspective of how erratic and punishing the jungle can be, i.e. when the team gets caught in rapids or when they come under siege from various Indian tribes.”

Johnson, who typically mixes at Skywalker Sound, had an interesting approach to the final mix. Hein explains that Johnson would first play a reel with every available sound in it — all the dialogue and ADR, all the sound effects and Foley — and the music. “We played it all in the reel,” says Hein. “It would be overwhelming. It would be unmixed and at times chaotic. But it gave us a very good idea of how to approach the mix.”

As they worked through the film, the sound would evolve in unexpected ways. What they heard toward the end of the first pass influenced their approach on the beginning of the second pass. “The film became a living being. We became very flexible about how the sound design was coming in and out of different scenes. The sound became very integrated into the film as a whole. It was really great to experience that,” shares Hein.

As Johnson and Berger mixed, Hein was busy creating new sound design elements for the visual effects that were still coming in. For example, the final version of the arrows shot in the film didn’t arrive until the last minute. “The arrows had to have a real special quality about them. They were very specific in communicating just how dangerous the situation actually was and what they were up against,” says Hein.

Later in the film, Amazonians throw tomahawks at Fawcett and his son as they run through the jungle. “Those tomahawks were never in the footage,” he says. “We had just an idea of them until days before we finished the mix. There was also a jaguar that comes out of the jungle and threatens them. That also came in at the last minute.”

While Hein created new sound elements in his edit suite next to the dub stage, Gray was able to join him for critique and collaboration before those sounds were sent next door to the dub stage. “Working with James is a high-energy, creative blast and super fun. He’s constantly coming up with new ideas and challenges. He spends every minute in the mix encouraging us, challenging us and, best of all, making us laugh a lot. He’s a great storyteller, and his knowledge of film and film history is remarkable. Working with James Gray is a real highlight in my career,” concludes Hein.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Napoleon Audio launched, Gregg Singer named EP

Veteran agency and audio producer Gregg Singer has joined The Napoleon Group in New York City as executive producer of its newly-launched Napoleon Audio.

Singer intends to integrate Napoleon’s audio capabilities with the group’s soup-to-nuts offerings, which span previz through live action production and post. “We’re creating a true full-service audio production company within The Napoleon Group,” he explains. “This will encompass everything from audio recording and mixing to in-studio direction and supervision, creative writing, sound design, original and stock music, music supervision and licensing, voice-over work and on-camera casting.”

Napoleon Audio’s rooms have mirrored gear and shared ISDN capabilities, a common network and separate isolation booths that can be linked or paired with either control room for simultaneous recording. Alongside the suites is an acoustically-treated stage that connects to the control rooms and enables the capture of live performances. In addition to audio post, the new division will offer trafficking, talent services, location recording and foreign language services, he adds.

Singer himself has an eclectic background, spanning everything from TV and radio production, marketing, advertising and creative development to sales, management, budgeting and strategic planning.  A film and television graduate of the Newhouse School at Syracuse, he got his start working on commercial shoots in New York. He then transitioned to the agency side and worked his way up through the production department, working as a producer, senior producer and head of production at such shops as JWT, BBDO, Bozell/Eskew, Cline Davis Mann and Kirshenbaum & Bond.

Singer left the agency world and joined audio post facility Sound Lounge in 2002 to launch a full-service audio production company. He left Sound Lounge in 2011 and was most recently partner and EP at Propeller Music Group.

The color and sound of Netflix’s The Get Down

The Get Down, Baz Luhrmann’s new series for Netflix, tells the story of the birth of hip-hop in the late 1970s in New York’s South Bronx. The show depicts a world filled with colorful characters pulsating to the rhythms of an emerging musical form.

Shot on the Red Dragon and Weapon in 6K, sound and picture finishing for the full series was completed over several months at Technicolor PostWorks New York. Re-recording mixers Martin Czembor and Eric Hirsch, working under Luhrmann’s direction and alongside supervising sound designer Ruy Garcia, put the show’s dense soundtrack into its final form.

The Get Down

Colorist John Crowley, meanwhile, collaborated with Luhrmann, cinematographer William Rexer and executive producer Catherine Martin in polishing its look. “Every episode is like a movie,” says Czembor. “And the expectations, on all levels, were set accordingly. It was complex, challenging, unique… and super fascinating.”

The Get Down’s soundtrack features original music from composer Elliott Wheeler, along with classic hip-hop tracks and flashes of disco, new wave, salsa and even opera. And the music isn’t just ambiance; it is intricately woven into the story. To illustrate the creative process, the show might seamlessly transform a character’s attempt to work out a song lyric into a full-blown finished song.

According to Garcia, the show’s music team began working on the project from the writing stage. “Baz uses songs as plot devices — they become part of the story. The music works together with the sound effects, which are also very musical. We tuned the trains, the phones and other sounds and synced them to the music. When a door closes, it closes on the beat.”

Ruy Garcia

The blending of story, music, dialogue and sound came together in the mix. Hirsch, who mixed Foley and effects, recalls an intensive trial-and-error process to arrive at a layering that felt right. “There was more music in this show than anything I’ve previously worked on,” he says. “It was a challenge to find enough sound effects to fill out the world without stepping on the music. We looked for places where they could breathe.”

In terms of tools, they used Avid Pro Tools 12 HD for sound and music, ADR Manager for ADR cueing and Soundminer for sound effects library management. For sound design they called on Altiverb, Speakerphone and SoundToys EchoBoy to create spaces, and iZotope Iris for sampling. “We mixed using two Avid Pro Tools HDX2 systems and a double-operator Avid S6 control surface,” explains Garcia. “The mix sessions were identical to the editorial sessions, including plug-ins, to allow seamless exchange of material and elaborate conformations.”

Music plays a crucial role in the series’ numerous montage sequences, acting as a bridge as the action shifts between various interconnecting storylines. “In Episode 2, Cadillac interrogates two gang members about a nightclub shooting, as Shaolin and Zeke are trying to work out the ‘get down’ — finding the break for a hip-hop beat,” recalls Czembor. “The way those two scenes are cut together with the music is great! It has an amazing intensity.”

Czembor, who mixed dialogue and music, describes the mix as a collaborative process. During the early phases, he and Hirsch worked closely with Wheeler, Garcia and other members of the sound and picture editing teams. “We spent several days pre-mixing the dialogue, effects and music to get it into a basic shape that we all liked,” he explains. “Then Baz would come in and offer ideas on what to push and where to take it next. It was a fun process. With Baz, bigger and bolder is always better.”

The team mostly called on Garcia’s personal sound library, “plus a lot of vintage New York E train and subway recordings from some very generous fellow sound editors,” he says. “Shaolin Fantastic’s kung-fu effects come from an old British DJ’s effects record. We also recorded and edited extensive Foley, which was edited against the music reference guide.”

The Color of Hip-Hop
Bigger and bolder also applied to the picture finishing. Crowley notes that cinematographer William Rexer employed a palette of rich reddish brown, avocado and other colors popular during the ‘70s, all elevated to levels slightly above simple realism. During grading sessions with Rexer, Martin and Luhrmann, Crowley spent time enhancing the look within the FilmLight Baselight, sharpening details and using color to complement the tone of the narrative. “Baz uses color to tell the story,” he observes. “Each scene has its own look and emotion. Sometimes, individual characters have their own presence.”

John Crowley

Crowley points to a scene where Mylene gives an electrifying performance in a church (photo above). “We made her look like a superstar,” he recalls. “We darkened the edges and did some vignetting to make her the focus of attention. We softened her image and added diffusion so that she’s poppy and glows.”

The series uses archival news clips, documentary material and stock footage as a means of framing the story in the context of contemporary events. Crowley helped blend this old material with the new through the use of digital effects. “In transitioning from stock to digital, we emulated the gritty 16mm look,” he explains. “We used grain, camera shake, diffusion and a color palette of warm tones. Then, once we got into a scene that was shot digitally, we would gradually ride the grain out, leaving just a hint.”
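Crowley's 16mm recipe (soften the image, warm the palette, add grain) can be roughed out numerically. This sketch works on a float RGB frame in [0, 1]; the function name, parameters and specific gains are illustrative assumptions, not settings from Baselight or any other grading tool.

```python
import numpy as np

def degrade_to_16mm(frame, grain_amount=0.04, diffusion=1, seed=0):
    """Rough '16mm' treatment for a float RGB frame (illustrative sketch):
    cheap box-blur 'diffusion', a warm color shift, then monochrome grain.
    """
    rng = np.random.default_rng(seed)
    out = frame.astype(float).copy()
    # separable box blur as a stand-in for optical diffusion
    k = 2 * int(diffusion) + 1
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    # warm the palette: lift red, pull blue slightly
    out[..., 0] *= 1.05
    out[..., 2] *= 0.95
    # monochrome grain shared across channels, like film grain
    grain = rng.normal(0.0, grain_amount, frame.shape[:2])[..., None]
    return np.clip(out + grain, 0.0, 1.0)
```

Riding the grain out of a scene, as Crowley describes, would amount to animating `grain_amount` toward zero over a sequence of frames.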

Crowley says it’s unusual for a television series to employ such complex, nuanced color treatments. “This was a unique project created by a passionate group of artists who had a strong vision and knew how to achieve it,” he says.

AES: Avid intros Pro Tools 12.6 and new MTRX audio interface

Avid was at AES in LA with several new tools and updates for audio post pros. New releases include Pro Tools 12.6 software and Pro Tools MTRX, an audio interface for Pro Tools, HDX and HD Native.

Avid Pro Tools 12.6 delivers new editing capabilities, including Clip Effects and layered editing features, making it possible to edit and prepare mixes faster. Production can also be accelerated using automatic playlist creation and selection using shortcut keys. Enhanced “in-the-box” dubber workflows have also been included.

Pro Tools MTRX, developed by Digital Audio Denmark, gives Pro Tools users the superior sonic quality of DAD’s A to D and D to A converters, along with flexible monitoring, I/O and routing capabilities, all in one unit. MTRX will let users gain extended monitor control and flexible routing with Pro Tools S6, S3 and other EUCON surfaces, use the converter as a high-performance 64-channel Pro Tools HD interface, and get automatic sample rate conversion on AES inputs. MTRX (our main photo) will be available later this year.

Tony Cariddi

During AES LA, we caught up with Tony Cariddi, director of product and solutions marketing for Avid, to see what he had to say about where Avid is going next. “What we have seen in the industry is that there is no shortage of innovation and there are new solutions for problems that are always emerging,” says Cariddi. “But what happens when you have all of these different solutions is it puts a lot of pressure on the user to make sure everything works together seamlessly. So what you’ll see from Avid Everywhere going forward is a continuation of trying to connect our own products closer together on the MediaCentral Platform, so it’s really fluid for our users, but also for people to be able to integrate other solutions into that platform just as easily.

“We also have to be responsive to how people want to access our tools,” he continued. “What kind of packages are they looking for? Do they want to subscribe? Do they want to buy? Enterprise licensing? Floating license? So you’ll probably see bundles and new ways to access licensing and new flexible ways to maybe rent the software when you need it. We’re trying to be very responsive to the multifaceted needs of the industry, and part of that is workflow, part of that is financial and part of that is the integration of everything.”

The 2016 HPA Award nominees

This year’s HPA Award nominees have been announced. Launched in 2006, the HPA Awards recognize outstanding achievement in editing, sound, visual effects and color grading for work in television, commercials and feature films.

The winners will receive their trophies during the 11th Annual HPA Awards ceremony on November 17 at the Skirball Cultural Center in Los Angeles. The 2016 HPA Award nominees are:

Outstanding Color Grading – Feature Film

Carol

Carol
John Dowdell // Goldcrest Post Productions Ltd

The Revenant
Steven J. Scott // Technicolor Production Services

Brooklyn
Asa Shoul // Molinare

The Martian
Stephen Nakamura // Company 3

The Jungle Book
Steven J. Scott // Technicolor Production Services

Outstanding Color Grading – Television

Vinyl – E.A.B
Steven Bodner // Deluxe/Encore NY

Fargo – The Myth of Sisyphus
Mark Kueper // Technicolor

Show Me A Hero

Outlander – Faith
Steven Porter // MTI Film

Gotham – By Fire
Paul Westerbeck // Encore Hollywood

Show Me A Hero – Part 1
Sam Daley // Technicolor PostWorks NY

Outstanding Color Grading – Commercial

Fallout 4 – The Wanderer
Siggy Ferstl // Company 3

Toyota Prius – Poncho
Sofie Borup // Company 3

NASCAR – Team
Lez Rudge // Nice Shoes

Audi R8 – Commander
Stefan Sonnenfeld // Company 3

Apple Music – History of Sound
Gregory Reese // The Mill

Pennzoil – Joyride Circuit
Dave Hussey // Company 3

Hennessy – Odyssey
Tom Poole // Company 3

Outstanding Editing – Feature Film

The Martian

The Martian
Pietro Scalia, ACE

The Revenant
Stephen Mirrione, ACE

The Big Short
Hank Corwin, ACE

Sicario
Joe Walker, ACE

Spotlight
Tom McArdle, ACE

Outstanding Editing – Television

Body Team 12
David Darg // RYOT Films

Underground – The Macon 7
Zack Arnold, Ian Tan // Sony Pictures Television

Vinyl – Pilot
David Tedeschi

Roots – Night One
Martin Nicholson, ACE, Greg Babor

Game of Thrones – Battle of the Bastards
Tim Porter, ACE

Outstanding Editing – Commercial

Wilson – Nothing Without It
Doobie White // Therapy Studios

Nespresso – Training Day
Chris Franklin // Big Sky Edit

Saucony – Be A Seeker
Lenny Mesina // Therapy Studios

Samsung – Teresa
Kristin McCasey // Therapy Studios

Outstanding Sound – Feature Film

Room
Steve Fanagan, Niall Brady, Ken Galvin // Ardmore Sound

Room

Eye In The Sky
Craig Mann, Adam Jenkins, Bill R. Dean, Chase Keehn // Technicolor Creative Services

Batman v Superman: Dawn of Justice
Scott Hecker // Formosa Group
Chris Jenkins, Michael Keller // Warner Bros. Post Production Services

Zootopia
David Fluhr, CAS, Gabriel Guy, CAS, Addison Teague // Walt Disney Company

Sicario
Alan Murray, Tom Ozanich, John Reitz // Warner Bros. Post Production Services

Outstanding Sound – Television

Outlander – Prestonpans
Nello Torri, Alan Decker // NBCUniversal Post Sound

Game of Thrones – Battle of the Bastards
Tim Kimmel, MPSE, Paula Fairfield, Mathew Waters, CAS, Onnalee Blank, CAS, Bradley C. Katona, Paul Bercovitch // Formosa Group

Preacher

Preacher – See
Richard Yawn, Mark Linden, Tara Paul // Sony Sound

Marco Polo – One Hundred Eyes
David Paterson, Roberto Fernandez, Alexa Zimmerman, Glenfield Payne, Rachel Chancey // Harbor Picture Company

House of Cards – Chapter 45
Jeremy Molod, Ren Klyce, Nathan Nance, Scott R. Lewis, Jonathan Stevens // Skywalker Sound

Outstanding Sound – Commercial

Sainsbury’s – ­Mog’s Christmas Calamity
Anthony Moore, Neil Johnson // Factory

Save the Children UK – Still The Most Shocking Second A Day
Jon Clarke // Factory

Wilson – Nothing Without It
Doobie White // Therapy Studios

Honda – Paper
Phil Bolland // Factory

Honda – Ignition
Anthony Moore // Factory

Outstanding Visual Effects – Feature Film

Star Wars: The Force Awakens
Jay Cooper, Yanick Dusseault, Rick Hankins, Carlos Munoz, Polly Ing // Industrial Light & Magic

The Jungle Book
Robert Legato, Andrew R. Jones
Adam Valdez, Charley Henley // MPC
Keith Miller // Weta Digital

Captain America: Civil War
Russell Earl, Steve Rawlins, Francois Lambert, Pat Conran, Rhys Claringbull // Industrial Light & Magic

The Martian
Chris Lawrence, Neil Weatherley, Bronwyn Edwards, Dale Newton // Framestore

Teenage Mutant Ninja Turtles: Out of the Shadows
Pablo Helman, Robert Weaver, Kevin Martel, Shawn Kelly, Nelson Sepulveda // Industrial Light & Magic

Outstanding Visual Effects – Television

Supergirl – Pilot
Armen V. Kevorkian, Andranik Taranyan, Gevork Babityan, Elaina Scott, Art Sayan // Encore VFX

Ripper Street – The Strangers’ Home
Ed Bruce, Nicholas Murphy, Denny Cahill, John O’Connell // Screen Scene

Black Sails – XXI
Erik Henry // Starz
Matt Dougan // Digital Domain
Martin Ogren, Jens Tenland, Nicklas Andersson // ILP

The Flash – Guerilla Warfare
Armen V. Kevorkian, Thomas J. Conners, Andranik Taranyan, Gevork Babityan, Jason Shulman // Encore VFX

Game of Thrones: Iloura

Game of Thrones – Battle of the Bastards
Joe Bauer, Eric Carney // Fire & Blood Productions
Derek Spears // Rhythm & Hues Studios
Glenn Melenhorst // Iloura
Matthew Rouleau // Rodeo FX

Outstanding Visual Effects – Commercial

Sainsbury’s – Mog’s Christmas Calamity
Ben Cronin, Grant Walker, Rafael Camacho // Framestore

Microsoft Xbox – Halo 5: The Hunt Begins
Ben Walsh, Ian Holland, Brian Delmonico, Brian Burke // Method

AT&T – Power of &
James Dick, Corrina Wilson, Euna Kho, Callum McKeveny // Framestore

Kohler – Never Too Next
Andy Boyd, Jake Montgomery, Zachary DiMaria, David Hernandez // JAMM

Gatorade – Sports Fuel
JD Yepes, Richard Shallcross // Framestore

The sounds ‘Inside Amy Schumer’

By Jennifer Walden

After four seasons of Comedy Central’s Inside Amy Schumer, it’s hard to imagine still being shocked by the comic’s particular brand of comedy. Somehow Schumer still manages to make jokes that leave you uncomfortable — jokes that are so, so wrong, but so, so funny that you can’t help but laugh. Like the image-building pop song that Schumer and her three friends belt out karaoke-style in the show opener for episode 407, “Psychopath Test.”

The empowering lyrics, provided by Schumer’s writing team, proclaim, “Just be strong cause haters gonna hate. So falsely accuse a college lacrosse team of rape. Go ahead, girl, you a hero. Selfie sticks bitch take pics at Ground Zero. You’re a perfect 10, between nine and eleven. Never forget, you are beautiful.”

The variety-show approach to the half-hour series only adds fuel to Schumer’s creative fire. There are on-set scripted skits, man-on-the-street interviews, comedy club performances and random music videos like Milk Milk Lemonade — with music created by composers Chris Maxwell and Phil Hernandez of Elegant Too in New York City. The duo also created the catchy music for episode 407’s “You are Beautiful” karaoke song.

Great City Post
Chief audio engineer Ian Stynes at Great City Post in NYC has been handling the show’s sound post since the pilot episode, with additional mixing by Jay Culliton. He says that although the show’s budget is not quite what people would expect, everybody involved with Inside Amy Schumer has high expectations for it. “We’re always asking, ‘Is this the best we can do?’ The process can be a little down and dirty sometimes, but working with the producers and editors at Running Man Post is amazing. They’ve really fine-tuned things at this point. They’re very good at prepping it for sound post,” Stynes shares.

Stynes and Culliton typically spend four days cleaning, doing the edit, the sound design and pre-mix of an episode using Avid Pro Tools 12. Then Stynes is joined at Great City Post by show creator/executive producer Dan Powell, producer Ryan Cunningham from Running Man, executive producer Kevin Kane, executive producer/head writer Jessi Klein and Amy Schumer for a half-day of final playback and edits. “Amy has been in for almost every mix session this season. She is very hands-on and really fun to work with,” says Stynes.

While the production tracks, provided by sound mixer Matt McLarty, are always well recorded, Stynes admits there was a sketch in episode 408, “Everyone for Themselves,” that was particularly challenging from a production sound standpoint. In the sketch, 16 people are in a Lamaze class voicing their concerns over their kids being assholes. “There was no boom and it’s a wide shot, so there were 16 lav mics, some of which were cutting in and out,” says Stynes, who has years of experience in cutting dialogue for narrative features.

In terms of tools, in addition to Pro Tools, he uses iZotope RX5 for cleaning, Avid’s EQ III for frequency management, and various Waves compressors (including his new favorite — Kramer PIE) to help the scene play smoothly. “You spend a lot of time and effort to put something together and hope that no one will think twice about the fact that you did anything at all with the dialogue edit.”
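The internals of those plug-ins aren’t documented here, but the dynamics-smoothing role Stynes describes for his compressors can be illustrated with a bare-bones feed-forward compressor. This is a generic sketch for illustration only, not the Waves or Kramer PIE algorithm; the threshold, ratio and time constants are arbitrary assumptions:

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=50.0):
    """Bare-bones feed-forward compressor: an envelope follower feeding a
    static gain curve. All settings are illustrative, not a real preset."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(np.asarray(x, dtype=float))
    for n, s in enumerate(x):
        level = abs(s)
        # fast attack when the signal rises, slower release when it falls
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        env_db = 20.0 * np.log10(max(env, 1e-9))
        over = env_db - threshold_db
        gain_db = 0.0 if over <= 0.0 else -over * (1.0 - 1.0 / ratio)
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out
```

Peaks above the threshold get pulled down while quieter material passes untouched, which is the “play smoothly” effect in miniature.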

Another interesting sound-driven scene is Bridget Everett’s closing performance in episode 408. She performs a call-to-action song called “Eat It, Eat It” on stage at a comedy club. She’s singing along live to a backing track. In post, Stynes had tracks with Everett’s handheld mic and boom mics, plus the vocal splits and other stems from the composed tracks. “There were pretty good options to work with in the mix,” says Stynes.

Coming from a music background, Stynes is not shy about diving into the music stems to do different treatments on the vocals, or even significantly altering the tracks. In the first season, for a 1920s-era sketch called “A Porn Star is Born,” Amy and her friend discover cocaine. Stynes says, “Composer Chris Anderson wrote an awesome old-timey song for a montage in it. It came to me split out into 30 tracks of stems because the showrunners wanted to make a part of it sound vintage. I ended up speeding up a whole section and edited in different things. Sometimes the music editing can get pretty complicated on this show.”

The show’s format offers plenty of opportunity on the sound design side as well. Last season, in episode 303, “80s Ladies,” there is a sketch where Schumer is in a bar with her friends and she decides to ride a mechanical bull to look sexy. Instead, she ends up failing, hurting herself badly and not looking sexy at all.

“There can be so many layers that we get to create and mix in. The show is very cinematic at times,” says Stynes. “For this scene there was score, and source music. There were a bunch of production tracks to clean up. There was a DJ in the room that we recorded ADR lines for, so we had to match his production lines. There was bar ambience and background walla. There were sounds for Amy getting knocked about on the bull. We added a whole layer for the mechanical bull speeding up and slowing down. That was a fun scene.”

He recalls another fun sound design scene, from Season 2, “Schumerenka vs. Everett,” in which Schumer and Everett compete in a tennis match. Schumer distracts the judges and fools the commentators into thinking she is better than she is by being a sexy tennis player. “Once again, it can be very similar to working on a full-length narrative feature film. We added in all of the ambiences, reactions for the crowds, the footsteps on the tennis court, the ball and body movements, and so on.”

Jay and Ian at work.

There is no Foley on the show, so footsteps, touches and clothing movements are cut in when necessary from Great City Post’s extensive collection of sound effects, which, luckily for the Schumer team, includes hundreds of fart sounds. “There was a really great skit where a murderer comes into Amy’s house. The thing is, when she gets scared… she farts. So she is unsuccessfully hiding from the murderer in the closet and farting. We spent 20 minutes going through every fart we had in our library, analyzing them, like, ‘Oh no, it should be juicier.’ It’s a weird way to spend your time,” says Stynes, “but all things considered, it’s not such a bad way to earn your paycheck.”

Post sound editing and mixing is done in Pro Tools 12. Stynes likes the upgrade to this version specifically for the offline bounce feature, which is a real time-saver when generating the significant amount of deliverables that Comedy Central requires. “They want a 5.1 mix, stereo mix, a version without VO fully mixed, bleeped mono, non-bleeped mono, dual mono, all sorts of stems for everything,” says Stynes.

Now instead of spending a half hour on every single split bounce, each bounce only takes a minute. Still, Stynes says, “It takes a few hours to version it out, and bounce it down, and put together all the different deliverables Comedy Central requires for the show.”
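A deliverables list like that tends to be managed with checklists or small scripts rather than by hand. As a purely hypothetical sketch (the version list and filename pattern below are illustrative assumptions, not Comedy Central’s actual delivery spec), the bookkeeping might look like:

```python
# Version list and filename pattern are illustrative assumptions,
# not Comedy Central's actual delivery spec.
DELIVERABLES = [
    ("5.1 mix", "51"),
    ("stereo mix", "st"),
    ("fully mixed, no VO", "no_vo"),
    ("bleeped mono", "mono_bleep"),
    ("non-bleeped mono", "mono_clean"),
    ("dual mono", "dual_mono"),
]

def bounce_names(show: str, episode: int) -> list[str]:
    """One output filename per required deliverable version."""
    return [f"{show}_ep{episode:03d}_{tag}.wav" for _, tag in DELIVERABLES]
```

With offline bounce cutting each pass to about a minute, the slow part becomes exactly this kind of versioning and naming, which is why mixers script it.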

“The best part about working on Inside Amy Schumer really is that everyone sincerely wants it to be the best show it can be and does their best to follow through with that sentiment. From the showrunners and producers to the team at Running Man to all the folks here at Great City — right up to the top with Amy Schumer actually coming in to the mix sessions and getting involved. Really, it’s very rewarding and a lot of fun,” Stynes concludes.

Formosa Group outgrowing space, moving to new home in 2017

In early 2017, Formosa Group will be moving their home base from West Hollywood to a new creative office campus at 959 North Seward Street, named Hollywood 959. The audio post house will be taking up two floors of the green building, which has LEED Silver certification.

“Over the three years since our initial launch, we’ve experienced tremendous growth,” says Robert C. Rosenthal, CEO of Formosa Group. “As our requirements on The Lot expanded, we grabbed edit/office space where we could, resulting in us residing in multiple buildings. This move will allow us to be in a contiguous space with sound editorial, operations and administration together.”

After the move, the company will have over 30,000 square feet at Hollywood 959, covering sound design, sound and music editorial, mixing — there will be multiple Atmos sound design rooms and one Atmos mixing environment — and ADR capabilities, along with administrative and support staff. The new creative campus itself features large outdoor gathering areas, an on-site commissary and ample parking.

Formosa will continue to operate its other facilities in West Hollywood, Santa Monica, West LA and Burbank.

Call of the Wild — Tarzan’s iconic yell

By Jennifer Walden

For many sound enthusiasts, Tarzan’s iconic yell is the true legend of that story. Was it actually actor Johnny Weissmuller performing the yell? Or was it a product of post sound magic involving an opera singer, a dog, a violin and a hyena played backwards as MGM Studios claims? Whatever the origin, it doesn’t impact how recognizable that yell is, and this fact wasn’t lost on the filmmakers behind the new Warner Bros. movie The Legend of Tarzan.

The updated version is not a far cry from the original, but it is more guttural and throaty, and less like a yodel. It has an unmistakable animalistic quality. While we may never know the true story behind the original Tarzan yell, postPerspective went behind the scenes to learn how the new one was created.

Supervising sound editor/sound designer Glenn Freemantle and sound designer/re-recording mixer Niv Adiri at Sound24, a multi-award winning audio post company located on the lot of Pinewood Film Studios in Buckinghamshire, UK, reveal that they went through numerous iterations of the new Tarzan yell. “We had quite a few tries on that but in the end it’s quite a simple sound. It’s actor Alexander Skarsgård’s voice and there are some human and animal elements, like gorillas, all blended together in it,” explains Freemantle.

Since the new yell always plays in the distance, it needed to feel powerful and raw, as though Tarzan is waking up the jungle. To emphasize this, Freemantle says, “We have animal sounds rushing around the jungle after the Tarzan yell, as if he is taking control of it.”

The jungle itself is a marvel of sight and sound. Freemantle notes that everything in the film, apart from the actors on screen, was generated afterward — the Congo, the animals, even the villages and people, a harbor with ships and an action sequence involving a train. Everything.

The film was shot on a back lot of Warner Bros. Studios in Leavesden, UK, so making the CGI-created Congo feel like the real deal was essential. They wanted the Congo to feel alive, and have the sound change as the characters moved through the space. Another challenge was grounding all the CG animals — the apes, wildebeests, ostriches, elephants, lions, tigers, and other animals — in that world.

When Sound24 first started on the film, a year and a half before its theatrical release, Freemantle says there was very little to work with visually. “Basically it was right from the nuts and bolts up. There was nothing there, nothing to see in the beginning apart from still pictures and previz. Then all the apes, animals and jungles were put in and gradually the visuals were built up. We were building temp mixes for the editors to use in their cut, so it was like a progression of sound over time,” he says.

Sound24’s sound design got increasingly detailed as the visuals presented more details. They went from building ambient background for different parts of Africa — from the deep jungle to the open plains — at different times of the day and night to covering footsteps for the CG gorillas. The sound design team included Ben Barker, Tom Sayers, and Eilam Hoffman, with sound effects editing by Dan Freemantle and Robert Malone. Editing dialogue and ADR was Gillian Dodders. Foley was recorded at Shepperton Studios by Foley mixer Glen Gathard.

Capturing Sounds
Since capturing their own field recordings in the Congo would have proved too challenging, Sound24 opted to source sound recordings authentic to that area. They also researched and collected the best animal sounds they could find, which were particularly useful for the gorilla design.

Sound24’s sound design team designed the gorillas to have a range of reactions, from massive roars and growls to smaller grunts and snorts. They cut and layered different animal sounds, including processed human vocalizations, to create a wide range of gorilla sounds.

There were three main gorillas, and each sounds a bit different, but the most domineering of all was Akut. During a fight between Akut and Tarzan, Adiri notes that in the mix, they wanted to communicate Akut’s presence and power through sound. “We tried to create dynamics within Akut’s voice so that you feel that he is putting a lot of effort into the fight. You see him breathing hard and moving, so his voice had to have his movement in it. We had to make it dynamic and make sure that there was space for the hits, and the falls, and whatever is happening visually. We had to make sure that all of the sounds are really tied to the animal and you feel that he’s not some super ape, but he’s real,” Adiri says. They also designed sounds for the gang of gorillas that came to egg on Akut in his fight.

The Mix
All the effects, Foley and backgrounds were edited and premixed in Avid Pro Tools 11. Since Sound24 had been working on The Legend of Tarzan for over a year, keeping everything in the box allowed them to update their session over time and still have access to previous elements and temp mixes. “The mix was evolving throughout the sound editorial process. Once we had that first temp mix we just kept working with that, remixing sounds and reworking scenes but it was all done in the box up until the final mix. We never started the mix from scratch on the dub stage,” says Adiri.

For the final Dolby Atmos mix at Warner Bros. De Lane Lea Studios in London, Adiri and Freemantle brought their Avid S6 console to the studio. “That surface was brilliant for us,” says Adiri, who mixed the effects/Foley/backgrounds. He shared the board with re-recording mixer Ian Tapp, who was on dialogue/music.

Adiri feels the Atmos surround field worked best for quiet moments, like during a wide aerial shot of the jungle where the camera moves down through the canopy to the jungle floor. There he was able to move through layers of sounds, from the top speakers down, and have the ambience change as the camera’s position changed. Throughout the jungle scenes, he used the Atmos surrounds to place birds and distant animal cries, slowly panning them around the theater to make the audience feel as though they are surrounded by a living jungle.

He also likes to use the overhead speakers for rain ambience. “It’s nice to use them in quieter scenes when you can really feel the space, moving sounds around in a more subliminal way, rather than using them to be in-your-face. Rain is always good because it’s a bright sound. You know that it is coming from above you. It’s good for that very directional sort of sound.”

Ambience wasn’t the only sound that Adiri worked with in Atmos. He also used it to pan the sounds of monkeys swinging through the trees and soaring overhead, and for Tarzan’s swinging. “We used it for these dynamic moments in the storytelling rather than filling up those speakers all the time. For the moments when we do use the Atmos field, it’s striking and that becomes a moment to remember, rather than just sound all the time,” concludes Freemantle.

Jennifer Walden is a New Jersey-based writer and audio engineer. 

Behind the Title: Nutmeg creative director Dave Rogan

NAME: Dave Rogan

COMPANY: New York City’s Nutmeg Creative

CAN YOU DESCRIBE NUTMEG?
We are a single-resource creative partner that brings targeted communications to life for brands, networks and ad agencies. A post resource for nearly 40 years, Nutmeg also provides audio, editing, color and graphics, in addition to interactive, identity and social.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
It depends on the day, on the project and on the client. In some cases, I am acting in the traditional role of agency creative director, coming up with original ideas that meet stated goals for the project. In many other cases, I am guiding a project from genesis to completion, adding a creative perspective or ensuring that our clients’ expectations are met or surpassed.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Almost anything can fall under that title — from original concepting and scripting, to “MacGyvering” a makeshift tracking marker out of a stick on set for an effects-heavy spot.

WHAT’S YOUR FAVORITE PART OF THE JOB?
When I was an ad agency CD, it was almost impossible to get the creative, production and post talent on the same page at the same time, to my satisfaction. Half of my job was making sure everyone was current on any rolling changes, adaptations to, or special challenges presented by the creative. But because Nutmeg has creative, interactive, production and post all under one roof, we’re able to think through every stage of the project together from the get-go. Instances of something unexpected popping up are almost non-existent because so many heads are in the game at the same time.

WHAT’S YOUR LEAST FAVORITE?
I seem to always be flying the day before or after a holiday. Last year, I flew home from a shoot on Thanksgiving morning. Wasn’t in love with that, I must admit.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I’m a morning person — I love waking up before the sun and watching it rise.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be trying to convince Dream Theater they need a second keyboard player.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I knew I wanted to be in a creative professional environment from a very early age — before college. I’ve been lucky enough to ride industry trends and continual reinvention to a place where I am still able to continue to shape creative communications in any number of ways on a day-to-day basis.

Paragard

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
My current projects range from working on a critically celebrated pharma campaign for a disease called nontuberculous mycobacteria — NTM, for short — to a series of hilarious spots for a female contraceptive to an animated PSA aimed at wiping out polio in the Third World. There is also the forthcoming launch of a famous Broadway reboot. It varies every day with no rhyme or reason, and I love it.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Scale-wise, it certainly pales in comparison to most of the projects I’m involved in here, but I take a special amount of pride in Nutmeg’s semi-finalist submission to the Doritos Crash the Super Bowl contest (image below) a year ago. It was a spot I wrote, directed and co-produced with our internal production team with almost no budget. To know that people really enjoyed it was thrilling and very satisfying for all of us.

Doritos Crash the Super Bowl — Dave Rogan

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPhone, laptop and my ancient, but beloved, Korg Trinity.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Facebook, mostly.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Only with headphones! Lately, I’ve been listening to Dream Theater’s prog metal opera, “The Astonishing.” I think my coworkers would tear their earballs out if I played it at any kind of audible volume.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I wish I had a more interesting answer than this, but I find my clients enjoyable, not stress inducing. When I worked at agencies, the dynamic was different and I definitely felt under the gun on a daily basis. At Nutmeg, it’s different. The clients really want our perspective and guidance, and, in most cases, we’re very much partners with the same goals.

SuperExploder’s Jody Nazzaro creates sounds of love for Popeyes, Comedy Central

Sound designer/mixer Jody Nazzaro of New York audio house SuperExploder teamed up with Comedy Central to help tell the story of a boy who needs to be more “Southern Fair” if he wants to land the girl of his dreams in Southern Crossed Lovers, a new :60 parody movie trailer for Popeyes.

Poor Chester!

In the faux trailer, a young couple meets and falls in love at a country fair, but the girl’s parents disapprove, saying he’s “not Fair enough” for her. They are pushing her toward Chester, the red suspenders-wearing corn dog dipper. In the end, our love-struck hero shows up in a traditional southern suit holding a box of Popeyes Southern Fair tenders and Cajun fries, quickly winning the heart of the girl’s father.

The direction that Nazzaro got from the client was what every artist wants to hear: “We trust your instincts, go for it. Make it feel like a trailer.”

According to Nazzaro, “This project aligned with what is essentially the new standard of sound design and mixing for broadcast networks. The money isn’t there for ISDNs and phone patches anymore, and most talent records at home with the producer on the phone and sends the VO files via file sharing.”

He received the picture reference as a 1920×1080 ProRes QuickTime and an AAF from Adobe Premiere. “Clean production dialogue was sent that I conformed as they cut with the camera mix. Once I prepped the session in Pro Tools, I began to clean up the dialogue in iZotope RX5 Advanced and build the ambience tracks,” he explains. “I added some Foley, edited the music and enhanced the dramatic music swell a bit with Omnisphere.”

He mixed the spot in stereo and 5.1, in case they needed it for cinema release — which he says is standard workflow for him now — and sent it off for approval. It was approved on his first mix pass.

“It was a lot of fun working on a non-standard project with a twist — making it feel like a real trailer,” says Nazzaro. “With the audio, I felt like less was more. I wanted to let the voiceover and the dialogue carry it into a comedic misdirection.”

AES Paris: A look into immersive audio, cinematic sound design

By Mel Lambert

The Audio Engineering Society (AES) came to the City of Light in early June with a technical program and companion exhibition that attracted close to 2,600 pre-registrants, including some 700 full-pass attendees. “The Paris International Convention surpassed all of our expectations,” AES executive director Bob Moses told postPerspective. “The research community continues to thrive — there was great interest in spatial sound and networked audio — while the business community once again embraced the show, with a 30 percent increase in exhibitors over last year’s show in Warsaw.” Moses confirmed that next year’s European convention will be held in Berlin, “probably in May.”

Tom Downes

Getting Immersed
There were plenty of new techniques and technologies targeting the post community. One presentation, in particular, caught my eye, since it posed some relevant questions about how we perceive immersive sound. In the session, “Immersive Audio Techniques in Cinematic Sound Design: Context and Spatialization,” co-authors Tom Downes and Malachy Ronan — both of whom are AES student members currently studying at the University of Limerick’s Digital Media and Arts Research Center, Ireland — questioned the role of increased spatial resolution in cinematic sound design. “Our paper considered the context that prompted the use of elevated loudspeakers, and examined the relevance of electro-acoustic spatialization techniques to 3D cinematic formats,” offered Downes. The duo brought with them a scene from writer/director Wolfgang Petersen’s submarine classic, Das Boot, to illustrate their thesis.

Using the university’s Spatialization and Auditory Display Environment (SpADE) linked to an Apple Logic Pro 9 digital audio workstation and a 7.1.4 playback configuration — with four overhead speakers — the researchers correlated visual stimuli with audio playback. (A 7.1-channel horizontal playback format was determined by the DAW’s I/O capabilities.) Different dynamic and static timbre spatializations were achieved by using separate EQ plug-ins assigned to horizontal and elevated loudspeaker channels.

“Sources were band-passed and a 3dB boost applied at 7kHz to enhance the perception of elevation,” Downes continued. “A static approach was used on atmospheric sounds to layer the soundscape using their dominant frequencies, whereas bubble sounds were also subjected to static timbre spatialization; the dynamic approach was applied when attempting to bridge the gap between elevated and horizontal loudspeakers. Sound sources were split, with high frequencies applied to the elevated layer, and low frequencies to the horizontal layer. By automating the parameters within both sets of equalization, a top-to-bottom trajectory was perceived. However, although the movement was evident, it was not perceived as immersive.”
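The paper’s exact plug-in settings aren’t given, but the two operations Downes describes, a band split between the horizontal and elevated loudspeaker layers and a 3dB peaking boost at 7kHz as an elevation cue, can be sketched in a few lines. The crossover frequency, filter order and Q below are assumptions; only the 7kHz/+3dB figures come from the paper:

```python
import numpy as np
from scipy import signal

def split_layers(x, fs, crossover_hz=2000.0, order=4):
    """Split a source into a low band (horizontal loudspeaker layer) and a
    high band (elevated layer). The 2 kHz crossover is an assumed value."""
    sos_lo = signal.butter(order, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hi = signal.butter(order, crossover_hz, btype="high", fs=fs, output="sos")
    return signal.sosfiltfilt(sos_lo, x), signal.sosfiltfilt(sos_hi, x)

def elevation_boost(x, fs, center_hz=7000.0, gain_db=3.0, q=1.5):
    """RBJ-cookbook peaking EQ: the +3 dB boost at 7 kHz used to enhance
    the perceived elevation of the upper layer."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * center_hz / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return signal.lfilter(b, a, x)  # lfilter normalizes by a[0]
```

Automating the crossover point or the EQ gains over time is what would produce the top-to-bottom trajectory the authors describe.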

The paper concluded that although multi-channel electro-acoustic spatialization techniques are seen as a rich source of ideas for sound designers, without sufficient visual context they are limited in the types of techniques that can be applied. “Screenwriters and movie directors must begin to conceptualize new ways of utilizing this enhanced spatial resolution,” said Downes.

Rich Nevens

Tools
Merging Technologies demonstrated immersive-sound applications for the v.10 release of its Pyramix DAW software, with up to 30.2-channel routing and panning, including compatibility with Barco Auro, Dolby Atmos and other surround formats, without the need for additional plug-ins or apps. Avid, meanwhile, showcased additions to the modular S6 Assignable Digital Console, including a Joystick Panning Module and a new Master Film Module with PEC/DIR switching.

“The S6 offers improved ergonomics,” explained Avid’s Rich Nevens, director of worldwide pro audio solutions, “including enhanced visibility across the control surface, and full Ethernet connectivity between eight-fader channel modules and the Pro Tools DSP engines.” Reportedly, more than 1,000 S6 systems have been sold worldwide since its introduction in December 2013, including two recent installations at Sony Pictures Studios in Culver City, California.

Finally, Eventide came to the Paris AES Convention with a remarkable new multichannel/multi-element processing system that was demonstrated by invitation only to selected customers and distributors; it will be formally introduced during the upcoming AES Convention in Los Angeles in October. Targeted at film/TV post production, the rackmount device features 32 inputs and 32 discrete outputs per DSP module, thereby allowing four multichannel effects paths to be implemented simultaneously. A quartet of high-speed ARM processors mounted on plug-in boards can be swapped out when more powerful DSP chips become available.

Joe Bamberg and Ray Maxwell

“Initially, effects will be drawn from our current H8000 and H9 processors — with other EQ, dynamics plus reverb effects in development — and can be run in parallel or in series, to effectively create a fully-programmable, four-element channel strip per processing engine,” explained Eventide software engineer Joe Bamberg.

“Remote control plug-ins for Avid Pro Tools and other DAWs are in development,” said Eventide’s VP of sales and marketing, Ray Maxwell. The device can also be used via a stand-alone application for Apple iPad tablets or Windows/Macintosh PCs.

Multi-channel I/O and processing options will enable object-based EQ, dynamic and ambience processing for immersive-sound production. End user price for the codenamed product, which will also feature Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking, has yet to be announced.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Keeping score for ‘Better Call Saul’

Breaking Bad composer Dave Porter returns for this prequel

By Jennifer Walden

When AMC’s Breaking Bad ended, many went through withdrawal from the multi-Emmy Award-winning show. Thanks to its prequel, Better Call Saul, the world that Vince Gilligan created in the New Mexico desert lives on. But while the landscapes might seem familiar, don’t expect the show to look or sound the same as Breaking Bad.

Dave Porter

“For me, it all starts with the black and white keys,” says Los Angeles-based composer Dave Porter, whose score for AMC’s Better Call Saul is anything but black and white emotionally. At the piano he works out melodies and harmonies that communicate the complicated blend of emotions that move the show. “The characters are complex. The challenge is in trying to find the right balance between the different emotions that are at play in any given scene.”

While Better Call Saul and Breaking Bad feature some of the same characters, Better Call Saul show runners/creators Vince Gilligan and Peter Gould were adamant that this should be a very different show, says Porter, who won an ASCAP Award for Best Television Composer of 2013 for his work on Breaking Bad. “That meant everything from how they write it, to how they shoot it, to how it sounds. We went back to the drawing board to create a whole new musical vocabulary for Better Call Saul, particularly for Jimmy (Bob Odenkirk) who, eventually, becomes Saul.”

Porter defines the show’s score with words like intimate, human and relatable. “Breaking Bad [feels] very worldly,” he says. “The scope is much larger, whereas in Better Call Saul, Jimmy’s fight is a smaller fight. Although it is no less important, it is the challenge of one man.”

Getting Real
In terms of instrumentation, Porter gravitated toward real instruments, relying less on the computer and synths he used on Breaking Bad. “I use instruments that I can actually sit down and play, like organs, electric piano, lots of bass and guitar, vibraphone and different mallet percussion, such as vibes and different little xylophones,” he explains.

While Porter performed the piano/keyboards and percussion parts, he hired studio musicians to play the bass and guitar parts. He works with recording engineer James Saez, owner/president of Glendale, California’s The Audio Labs, when the session requires more than one musician. Otherwise, Porter handles recording, editing and mixing at his home studio using an Avid Pro Tools 12 rig and a collection of outboard effects.

For Better Call Saul, Porter likes the 1980s-era Korg GR-1 Gated Spring Reverb and a TC Electronic guitar-oriented rackmount effects processor from the 1990s called Fireworx. “I usually play and record everything into Pro Tools first, as unprocessed as possible, and then I go back and process the sound. That gives me the flexibility to play around with the effects later.”

Turnaround time for Porter’s score is seven to 10 days per episode. After reviewing the episode, Porter meets with show runners Gilligan and Gould, the episode’s picture editor and supervising sound editor Nick Forshager for a spotting session. They determine where original music is needed and what it needs to express emotionally. “We try not to use music as a placeholder or filler. If it’s going to be in there then it needs to have a purpose,” says Porter, who notes that Better Call Saul, like Breaking Bad before it, is not edited with temp music tracks. “I have ingrained in all of those folks not to use temp music. This way, when I get the episode there is no preconceived notion about what the music should sound like. It’s a great and very rare thing, and I am blessed to have had that on these two shows.”

Letting the Music Do the Talking
Porter’s favorite track of Season 2 was for the opening sequence of Episode 8 — a five-minute scene featuring a US-Mexico border crossing in which a previously unknown character goes through a customs inspection of his transport truck. “There is very little dialogue, so the music was front and center and required a kind of confidence and swagger, which is something I don’t always get to do on the show,” he explains. “The character plays his part so well, so calm and cool and collected, that I took my inspiration from him.” One fun feature of the track is a rock ‘n’ roll horn section, which is something Porter had never done for any of the Better Call Saul episodes before.

Knowing that the opening sequence was a long, fluid shot, Porter began thinking about how to make a track that rhythmically was able to sustain itself over a long period of time without getting boring. “I had to find ways to change it up and divide it into different sections,” he says. “I attacked it that way. Often, when you’re scoring to picture you are building up to a certain moment, but this piece was like a big arc that lasts five or six minutes. It was about keeping the music fresh and interesting and evolving for that length of time.”

Better Call Saul requires music that is emotionally complex, but it also offers another challenge. As the prequel of Breaking Bad, the two shows are related even though they’re very different. Better Call Saul’s score needs to gradually evolve as the timelines of the two shows converge. “The challenge,” says Porter, “is to be present and honest with where these characters are, but at the same time be able to look ahead and map out the path musically, to evolve the score as the characters evolve into the characters we know they will become eventually on Breaking Bad.”

Porter says he’s happy exploring this new world of Better Call Saul, especially Jimmy before he becomes Saul Goodman. “I am in no hurry to get to where the stories have to overlap. Personally, I hope that takes many years because I’m having a fantastic time watching these characters evolve.”

Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Behind the Title: Sonic Union studio director Justine Cortale

NAME: Justine Cortale

COMPANY: New York City-based Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
We’re an audio post production company. We mix sound for commercials, with a dash of documentary and feature work.

WHAT’S YOUR JOB TITLE?
Studio Director

WHAT DOES THAT ENTAIL?
I assist with all scheduling needs before, during and after sessions. I try and listen to our clients’ needs and help them problem-solve whenever I can. Within the studio, I act as a sounding board, troubleshooter and cheerleader to every one of our team members. Sometimes, I’m also the comic relief!

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have two seats near my desk and, unsurprisingly, they are always filled. They are in many ways like a therapy couch. No one hesitates before sitting down and telling me his or her life stories, dramas and troubles. And I love it. Fortunately, we have some of the most interesting people I’ve ever met who have come to work here so it’s a real treat for me. I enjoy hearing their stories or just being an ear when they need someone to listen.

It’s something I’ve enjoyed doing since I was a kid, and now that I’m an old fart, I mean, more experienced, people actually want to hear my advice. So for me the most surprising part of my job is the amount of time I spend talking to people about anything but projects/work. I’m considering a future in psychology.

WHAT TOOLS DO YOU USE?
Farmerswife. Facebook. Email. Instant Messaging. Most of all… my big, loud mouth.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Juggling! My favorite part of the job is scheduling on days that seem almost impossible to sort or turn around — when there are a ton of scheduling conflicts and there doesn’t seem to be a way to fit everyone in. And then, with the help of a flexible client (we love them!), and some brainstorming with the staff, it all comes together and everyone is happy. That makes me happy. And it’s during those moments that I feel a real sense of accomplishment.

WHAT’S YOUR LEAST FAVORITE?
When Cheres, our chef, or Jen, our client service person, makes something so incredibly irresistible — fattening — in the kitchen and I’m here and just cannot resist. Let this article serve as a documented record. “This will be the cause of my heart attack one day.”

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The moment I have my first sip of coffee in the morning.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I assume by now I would have accomplished my lifelong dream of winning the lottery or one of the many sweepstakes I like to enter. This job thing is slowing down my entry/play time.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I actually thought I would be a celebrity/movie star for most of my life. I had a brief career as a hand model. No joke! It wasn’t until college that I even considered not being in the limelight… and least of all working in audio post. Audio!? No one can see me! But when I found Sonic Union, I was instantly sure this is where I was meant to be. It just fits.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sonic Union recently worked on some really visually intricate spots for Game of War, mixed by Brian Goodheart and directed by MJZ's Fredrik Bond, out of Untitled Worldwide; an athletically and aesthetically inspiring spot for Under Armour, mixed by David Papa with music by Philip Glass, out of Droga5; and a PSA for Above the Influence, also mixed by David Papa, out of Hill Holliday, that was directed by Vine star Maris Jones.

We are also really proud of the 2015 AICE Award that mixer Steve Rosen earned for his work on the Adidas Superstar spot. This is the third time that Sonic Union has won for “Best Mix” in the four years that it has been a category at the AICE Awards.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are so lucky to be able to work on so many fun projects, that it’s hard to pick just one! But, over the years, we have had a lot of fun working on a number of Super Bowl spots and hearing the buzz about them once they’re released — both positive and negative!

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone/Internet/iTunes. My Television/Netflix. My Keurig.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
It’s always playing in the office, so yes. At the end of the day, after Pat my co-scheduler has left, the office has pretty much gotten used to me playing Zumba music until I leave. I need to get pumped and ready for my class after work. It’s not easy motivating to go exercise at 7:30/8pm at night — ever — so they’re all pretty forgiving in letting me “Zumba out” as they like to call it. I work with some pretty tolerant people.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ll let you know when I figure it out. I’m certain it will involve vodka.

Main Image: Juan Patino Photography

Slamdance, Sundance: Why it’s important to audio post pros

By Cory Choy

Why are we, as audio post professionals, in Park City right now? The most immediate reason is that Silver Sound has some skin in the game this year: we are both executive producers and the post sound team for Driftwood, a feature narrative in competition at Slamdance that was shot completely MOS. We also provided production and post sound on Resonance and World Tour, two of Google’s featured VR Google Cardboard demos at Sundance’s New Frontier.

Sundance’s footprint is everywhere here. During the festival, the entirety of Park City is transformed — schools, libraries, cafes, restaurants, hotels and office buildings become venues for screenings, panel discussions and workshops. A complex and comprehensive network of shuttle buses allows festival goers to get around without having to rely on their own vehicles.

Tech companies, such as Samsung and Canon, set up public areas for people to rest, talk, demo their wares and mingle. You can’t take three steps in any direction without bumping into a director, producer or someone who provides services to filmmakers. In addition to being chock full of industry folk — and this is a very important ingredient — Park City is charming, beautiful and very different from the American film hubs, New York and Los Angeles. So people are in a relaxed and friendly mood.

Films in competition at Sundance often feature big-name actors, receive critical acclaim and, more and more often, receive distribution. In short, this is the place to make personal connections with “indie” filmmaking professionals who are either directly, or through friends, connected to the studio system.

As a partner and engineer at a boutique sound studio in Manhattan, I see this as a fantastic opportunity to cut through the noise and hopefully put myself, and my company, on the radar of folks with whom I might not otherwise get a chance to meet or collaborate. It’s a chance for me, a post professional in the indie world, to elevate my game.

Slamdance
Slamdance sets up shop in one very specific location, the Treasure Mountain Inn on Main Street in Park City. It happens at the same time as Sundance — and is located right in the eye of the storm — but has built a reputation for celebrating the most indie of the indies. Films in competition at Slamdance must have budgets under one million dollars (and many have budgets far below that). Where Sundance is a sprawling behemoth — long lines, hard-to-get tickets, dozens of venues, the inability to see all that is offered — Slamdance feels like a friend’s very nice living room.

Slamdance logo

Many folks see most of, or even all of, the line-up of films. There’s no rushing about to different locations. Slamdance embraces the DIY ethos, and is about empowering people outside of the industry establishment. Tech companies such as Blackmagic and Digital Bolex hold workshops geared toward helping filmmakers with smaller budgets make films unencumbered by technical limits. This is a place where daring and often new or first-time filmmakers showcase their work. Often it is the first time, or one of the first times, they’ve gone through the post and finishing process. It is the perfect place for an audio professional to shine.

In my experience, the films that screen best at Slamdance — the ones that are the most immersive and get the most attention — are the ones with a solid sound mix and a creative sound design. This is because some of the films in competition have had minimal or no post sound. They are enjoyable, but the audience finds itself sporadically taken out of the story for technical reasons. The directors and producers of these films are going to keep creating, and after being exposed to and competing against films with very good sound, are probably going to be looking to forge a creative partnership — one that could quite possibly grow and last the entirety or majority of their future careers — with a post sound person or team. Like Silver Sound!

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City.

A closer look at Southpaw’s audio

Director Antoine Fuqua and the film’s sound team talked about their process during a panel at Sony Pictures.

By Mel Lambert

With Oscar buzz swirling around the film Southpaw, director Antoine Fuqua paid tribute to his sound crew on The Weinstein Company’s drama during a screening and Q&A session on the Cary Grant Stage at Sony Pictures in Culver City — the same venue where the film’s soundtrack was re-recorded earlier this year.

The event was co-moderated by Cinema Audio Society president Mark Ulano and Motion Picture Sound Editors president Frank Morrone; it was introduced by MPSE president-elect Tom McCarthy, Sony Pictures Studio’s EVP of post production facilities.

The film depicts the decline and rise of former World Light Heavyweight boxer Billy Hope (Jake Gyllenhaal), who turns to trainer Tick Wills (Forest Whitaker) for help getting his life back on track after losing his wife (Rachel McAdams) in a tragic accident and his daughter Leila (Oona Laurence) to child protection services. Once the custody of his daughter falls into question, Hope decides to regain his former life by returning to the ring for a grudge match in Las Vegas with Miguel “Magic” Escobar (Miguel Gomez).

“Boxing is a violent sport,” Fuqua told the large audience of industry pros and guests. “It’s always best to be ready to train or you’re going to get hurt! I spent a lot of time with the actors preparing them for their roles, and on Jake’s pivotal relationship with his daughter, but I had to make sure that Jake’s character wasn’t too consumed by anger. If you don’t control your anger [in the boxing ring] you cannot control your performance.”

Fuqua is best known for his work on Training Day, as well as The Replacement Killers, King Arthur, Shooter, Olympus Has Fallen and The Equalizer. He has also directed a number of music videos for artists such as Prince, Stevie Wonder and Coolio. The latter’s Gangsta’s Paradise video won a Young Generators Award.

Director Antoine Fuqua with his Southpaw sound crew.

Director Antoine Fuqua (center, leather jacket) with the panel.

Fuqua revealed that he has worked with most of the crew since Training Day (2001), his major directorial debut. “I like to give them a copy of the script as early as possible so that they can prepare” for the editorial and post process. “The script shows me the ‘nuts and bolts’ of the film,” stated production mixer Ed Novick. “It shows the planned environments and gives me an idea of how I can capture the sound. Most of the key boxing matches were staged as a TV event, like an audience watching an HBO Production, for example. I placed mics in the corners of the boxing ring, on the referee and around the audience areas.”

“I drove Ed crazy,” Fuqua said. “I gave the actors the freedom to improvise; Jake is that type of actor and he just went with it! But often we had no idea where we were heading — we were just riffing a lot of the time to get the fire going — but Ed did an amazing job of securing what we were looking for.”

“The actors were very cooperative and very accommodating to my needs,” said Novick. “They wore mics while fighting, and Jake and Rachel helped me get great tracks.”

“Sound secured from the set is always the best,” added the film’s dialog/music re-recording mixer, Steve Pederson. “There was very little ADR on this film — most of it is production.”

“We developed a wide range of crowd sounds, which became our medium shots,” explained supervising sound editor Mandell Winter, MPSE.

Sound designer David Esparza and supervising sound editor Mandell Winter

“We made a number of ambience recordings during HBO boxing matches in Las Vegas using microphones located around the perimeter of the boxing ring and under the balcony, as well as mounting a DPA 5100 surround mic below the press box and camera platforms,” added sound designer David Esparza, MPSE. “We covered every angle we could to place the action into the middle of the ring using the sound of real crowds, and not effects libraries.”

As sound effects re-recording mixer Dan Leahy stated: “We used a combination of close-up and distant sounds to accurately locate the audience in the center of the fighting action.”

“It’s all about using sound to reinforce the feeling and emotion of a scene,” stressed Fuqua.

Picture editor John Refoua, ACE, added that “the sound also drove the cut. We had an initial mix with pre-cut effects — the final mix evolved with effects being cut at different audio frequencies to heighten the crowd’s excitement. It was an amazing process to witness, to have the soundtrack evolve during that period.”

“You could feel the heart beat rising,” Fuqua added.

For the major fight at the end of the film, Refoua noted that there were 12 cameras running simultaneously, including a handful of Canon EOS-5D DSLRs assigned to the press. “That was a lot of footage,” he recalled. “We looked at it all a shot at a time, and made decisions about which one worked better than another.”

Originally, the final boxing match was choreographed for six rounds, “but we then cut it into 12,” continued Refoua. “We stretched and took alternate takes to build the other rounds.”

Regarding the use of a haunting score by the late James Horner, music editor Joe E. Rand said that the composer was drawn to the film because of the intimate father/daughter relationship, “and looked to different harmonic structures and balances” to reinforce that core element.

But the sound for one pivotal scene didn’t run as expected. “For the graveyard scene [between Gyllenhaal and Laurence, at the grave of the lead character’s wife] we lost most of the radio mics,” reported Winter. “We had a lot of RF hits and [because of camera angles] the boom mic wasn’t close to the actors. The only viable track was Oona [Laurence]’s lavaliere, which still had RF dropouts on it — iZotope RX saved the day.”

“We needed to use iZotope to extract the signal from the RF noise,” recalled re-recording mixer Pederson. “Mandell [Winter] and I were surprised it worked out so well.”

“No director can make a movie by themselves,” concluded Fuqua. “The sound crew all came up with creative ideas that I needed to hear. After all, moviemaking is a highly collaborative effort.”

Mel Lambert is principal of Content Creators, an LA-based editorial service. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Sound Design for ‘The Hunger Games: Mockingjay — Part 2’

Warner Bros. Sound re-teams with director Francis Lawrence for the final chapter

By Jennifer Walden

It’s the final installment of The Hunger Games, and all the cards are on the table. Katniss Everdeen encourages all the districts to band together and turn against the Capitol, but President Snow is ready for their attack.

In true Hunger Games style, he decides to broadcast the invasion and has rigged the city to be a maze full of traps, called pods, which unleash deadly terrors on the rebel attackers. The pods trigger things like flamethrowers, a giant wave of toxic oil, a horde of subhuman creatures called “mutts,” heat-lasers and massive machine guns, all of which are brought to life on-screen thanks to the work of the visual effects team led by VFX supervisor Charles Gibson.

Warner Bros. Sound supervising sound editor/re-recording mixer Jeremy Peirson, who began working with director Francis Lawrence on The Hunger Games franchise during Catching Fire, knew what to expect in terms of VFX. He and Lawrence developed a workflow where Peirson was involved very early on, with a studio space set up in the cutting room.

Picture and Sound Working Together
Without much in the way of VFX early in the post process, Lawrence relied on Peirson’s sound design to help sell the idea of what was happening on-screen — like a rough pencil sketch of how the scene might sound. As the visuals started coming in, Peirson redesigned, refined and recorded more elements to better fit the scene. “As we move through the process, sometimes the ideas change,” he explains. “Unfortunately, sound is usually the last step before we finish the film. The visual effects were coming in pretty late in the game and sometimes we got surprised, and they’re completely different. All the work we did in trying to prepare ourselves for the final version changed. You just have to roll with it basically.”

Despite having to rework a scene four or five times, there were advantages to this workflow. One was having constant input from director Lawrence. He was able to hear the sound take shape from a very rough point, and guide Peirson’s design. “Francis popped in a couple times a day to listen to what I was doing. He’d say, ‘Yes this is the right direction’ or ‘No, I was thinking more purple or more bold.’ It allowed for this unique situation where we could fine-tune how the movie is going to sound starting very early in the process,” he says.

Jeremy Peirson

Another advantage to being embedded with the picture department is that sound is able to inform how the picture is cut. “Sometimes they will give me a scene and ask me to quickly create the sound for it so they can re-cut the scene to make it better. That’s always a fun collaboration, when the picture department and sound department can work so closely together,” Peirson states.

The Gun Pod
One of Peirson’s most challenging “pods” to design sound for was the gun pod, where two .50 caliber machine guns were blasting away a concrete archway, causing it to collapse. Peirson needed to build detail and clarity into a scene that had bullets and rubble spraying everywhere. To do this, he spent hours recording specific, individual impacts. “I bought a bunch of brick and tile of various different kinds, and I took a 12-pound shot-put, raised it up about 10 feet and dropped it onto these things to get individual impacts, as well as clatter and debris.”

In the edit, he finessed the rhythm of the impacts, spacing them out so there was a distinguishable variety of sounds and it wasn’t just a wash. “It’s not a single note of sound,” he says. “It was a wide palette of impacts. Each individual impact was hand placed throughout the whole sequence. I tried to differentiate the sound of the wall from the pavement and the grass, the stairs and the metal pole which happened to be in that particular area.”

For Mockingjay — Part 1, Peirson, sound recordist John Fasal and sound designer Bryan O. Watkins did a bullet-by and bullet-ricochet recording session. All of that material came into play for Mockingjay — Part 2, in addition to new material, such as the gun sounds captured by Peirson, Fasal, Watkins and sound designer Mitch Osias.

For one of their gun recording sessions, Peirson notes they headed to an industrial park where they were able to capture the gun sounds in a mock-urban environment that would match the acoustics of the city streets on-screen. “We wanted to know how the guns would echo off the buildings and down the alleys — how that would sound from various distances.”

They took it one step further by recording gun sounds inside a warehouse that simulated the underground subway environment in the film. “We were able to record them in different ways, putting the guns in certain spots in the warehouse so we could get a tighter, closer feel that sounded very different from an outside perspective,” he says.

With four recordists, they were able to capture 26 individual sets of recordings for each gunshot — some mono, some stereo and some quad recordings. “We used a large range of mics, everything from Neumann to Schoeps to Sennheiser to AKG. You name it and we probably used it.”


When building a gun sound in the edit, Peirson started by selecting a close-up gunshot, then he added an acoustic flavor to that gun. “We didn’t always pick the same type of gun for the acoustic response,” he explains. “It was a lot of hand-cutting to make sure everything was in sync since certain guns fire at different rates; some fire faster and some are slower, but they had to be in the same range as the initial close-up sound.”

Another challenge was designing the mutts — the subhuman lizard-like creatures that inhabit the underground area. Peirson says, “Anytime you have creatures — and we had a lot of creatures — you can design the perfect sound for each one, but how do you sell the difference between all of these creatures when you’re surrounded by 30 or 40 of them?”

Even though there may have been a large group of mutts, within that the characters were only fighting a few of them at any given time. They needed to sound the same, yet different. Peirson’s design also had to factor in how the sound would work against the music, and it had to evolve with the VFX as well.

As the re-recording mixer on the effects, Peirson was able to mix a sound as he was designing it. If something wasn’t working, he could get rid of it right away. “I didn’t need to carry it around and then pick and choose later. By the time we got to the stage, we had the opportunity to refine the whole sonic palette so we only had what we wanted.”

He found that moving to the larger space of the dub stage, and hearing how the sound design plays with the music, generated new ideas for sound. “We added a bit of a different flavor to help the sound cut through, or we added a little bit of detail that was getting lost in the music.”

Since composer James Newton Howard scored all four films in The Hunger Games series, Peirson had a wealth of demos and themes to reference when designing the sound. They were a good indication of what frequency range he could work within and still have the effects cut through the music. “We had an idea of how it would sound, but when you get that fully recorded score, it’s a totally different ballgame in terms of scope. It kicks that demo up a huge notch.”

The Mix
Peirson and re-recording mixer Skip Lievsay — who worked on dialogue and music — crafted the final mix first in Dolby Atmos on Warner Bros. Stage 6 in Burbank, using three Avid ICONs. “This was a completely in-the-box virtual mix,” says Peirson. “We had sound effects on one Pro Tools system, dialogue on another and music on a third system. My sound effects session, which had close to 730 tracks, was a completely virtual mix, meaning there were no physically recorded pre-dubs.”

Using the final Atmos mix as their guide, Peirson and Lievsay then mixed the film in Barco Auro-3D, DTS:X, IMAX and IMAX 12.0, plus 7.1, 5.1 and two-track. “That’s every single format that I know of right now for film,” he concludes. “It was an interesting exercise in seeing the difference between all those formats.”

With Oscar season getting into full swing, we wouldn’t be surprised if the sound team on Mockingjay — Part 2 gets a nod.

Review: Nugen Audio’s Halo Upmixer

By Robin Shore

Upmixing is nothing new. The basic goal is to take stereo audio and convert it to higher channel count formats (5.1, 7.1, etc.) that can meet surround sound delivery requirements. The most common use case for this is when needing to use stereo music tracks in a surround sound mix for film or television.

Various plug-ins exist for this task, and the results run the gamut from excellent to lackluster. In terms of sonic quality, Nugen Audio’s new Halo Upmixer plug-in falls firmly on the excellent side of this range. It creates a nice enveloping surround field, while staying true to the original stereo mix, and it doesn’t seem to rely on any weird reverb or delay effects that you sometimes find in other upmix plug-ins.
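For readers new to the concept, the basic stereo-to-surround idea can be sketched in a few lines. To be clear, this is a deliberately naive mid/side split for illustration only: Halo’s actual analysis is far more sophisticated and unpublished, and the function name, channel layout and gain value here are assumptions made for the sketch.

```python
# A naive stereo-to-5.1 upmix sketch (illustration only; this is NOT
# how Halo or any commercial upmixer actually works internally).
# Assumed channel order: L, R, C, LFE, Ls, Rs.

def naive_upmix(left, right, rear_gain=0.5):
    """Derive 5.1 channels from one pair of stereo samples.

    Mid (correlated) content feeds the center speaker; side
    (decorrelated) content feeds the rear surrounds, scaled by
    rear_gain. Real upmixers do this adaptively per frequency band.
    """
    mid = 0.5 * (left + right)    # what L and R have in common
    side = 0.5 * (left - right)   # what they don't
    return {
        "L": left,
        "R": right,
        "C": mid,                  # center carries the shared signal
        "LFE": 0.0,                # no bass management in this sketch
        "Ls": rear_gain * side,    # surrounds get ambience-like content
        "Rs": rear_gain * -side,
    }

# A mono-compatible signal (identical L/R) produces silent surrounds:
print(naive_upmix(1.0, 1.0)["Ls"])   # 0.0
# Fully decorrelated content contributes nothing to the center:
print(naive_upmix(1.0, -1.0)["C"])   # 0.0
```

The appeal of even this crude scheme is that correlated material stays anchored up front while ambience drifts rearward, which is the same intuition behind Halo’s far more refined steering controls.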

NUGEN Audio Halo Upmix - IO panel

What really sets Halo apart is its well-thought-out design, and the high level of control it offers in sculpting the surround environment.

Digging In
At the top of the plug-in window is a dropdown for selecting the channel configuration of the upmix — you can select any standard format from LCR up to 7.1. The centerpiece of Halo is a large circular scope that gives a visual representation of the location and intensity of the upmixed sound. Icons representing each speaker surround the scope, and can be clicked on to solo and mute individual channels.

Several arcs around the perimeter of the scope provide controls for steering the upmix. The Fade arcs adjust how much signal is sent to the rear surround channels, while the Divergence arc at the top of the scope adjusts the spread between the mono center and front stereo speakers. On the left side of the scope is a grid representing diffusion. Increasing the amount of diffusion spreads the sound more evenly throughout the surround field, creating a less directionally focused upmix. Lower values of diffusion give a more detailed sound, with greater definition between the front and rear.

The LFE channel in the upmix can be handled in two ways. The “normal” LFE mode in Halo adds additional content to the LFE channel based on low frequencies in the original source. This is nice for adding a little extra oomph to the mix, and it also preserves the LFE information when downmixing back to stereo.

For those who are worried about adding too much additional bass to the upmix, the “Split” LFE mode works more like a traditional crossover, siphoning low frequencies off into the LFE channel rather than leaving them in the full-range channels.
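A crossover-style split like the one “Split” mode describes can be sketched with a simple complementary filter pair. Halo’s actual filter design isn’t published, so the one-pole low-pass below, its alpha parameter and the function name are all illustrative assumptions; the point is only that the lows routed to the LFE are removed from, rather than duplicated in, the full-range channels.

```python
# Sketch of a "split"-style LFE crossover (illustrative; Halo's real
# filter topology and slopes are not published). Low frequencies go
# to the LFE feed; the complementary remainder stays in the mains.

def split_lfe(samples, alpha=0.1):
    """Split a signal into (lows_for_lfe, rest_for_mains).

    alpha sets the one-pole low-pass cutoff: smaller alpha means a
    lower cutoff frequency at a given sample rate.
    """
    lows, rest = [], []
    state = 0.0
    for x in samples:
        state += alpha * (x - state)  # one-pole low-pass
        lows.append(state)
        rest.append(x - state)        # complementary high-pass
    return lows, rest

sig = [0.0, 1.0, 1.0, 1.0]
lows, rest = split_lfe(sig)
# The two halves always sum back to the original signal, which is why
# a split approach avoids doubling up bass energy in a fold-down:
print(all(abs((l + r) - s) < 1e-12 for l, r, s in zip(lows, rest, sig)))  # True
```

Because the split is complementary, summing the LFE feed back with the mains reconstructs the source, in contrast to the “normal” mode, which adds low-frequency content on top of what is already in the full-range channels.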

NUGEN Audio Halo Upmix - 5_1 main view - using colour to determine energy source

An Easy And Nuanced UI
The layout and controls in Halo are probably the best I’ve ever seen in this sort of plug-in. Moving the Fade and Divergence arcs around the circle feels very smooth and intuitive, almost like gesturing on a touchscreen, and the position of the arcs along the edge of the scope seems to correspond really well with what I hear through the speakers.

New users should have no problem quickly wrapping their heads around the basic controls. The diffusion is an especially nice touch as it allows you to very quickly alter the character of the upmix without drastically changing the overall balance between front, rear and center. Typically, I’ve found that leaving the diffusion somewhere on the higher end gives a nice even feel, but for times when I want the upmix to have a little more punch, dragging the diffusion down can really add a lot.

Of course, digging a little deeper reveals some more nuanced controls that may take some more time to master. Below the scope are controls for a shelf filter which, combined with higher levels of diffusion, can be used to dull the surround speakers without decreasing their overall level. This ensures that sharp transients in the rear don’t pop out too much and distract the audience’s attention from the screen in front of them.

The Center window focuses only on the front speakers and gives you fine control over how the mono center channel is derived and played back. An I/O window acts like a mixer, allowing you to adjust the input level of the stereo source, as well as levels for each individual channel in the upmix. The settings window provides a high level of customization for the appearance and behavior of the plug-in. One of my favorite things here is the ability to assign different colors to each channel in the surround scope, which, aside from creating a really neat looking display, gives a nice clear visual representation of what’s happening with the upmix.

NUGEN Audio Halo Upmix - IO panel       NUGEN Audio Halo Upmix - 7_1 main view

Playback
One of the most important considerations in an upmix tool is how it will all sound once everything is folded down for playback from televisions and portable devices, and Halo really shines here. Less savvy upmixing can cause phasing and other issues when converted back to stereo, so it’s important to be able to compare as you are working.

A monitoring section at the bottom of the plug-in allows you to switch between listening to the original source audio, the upmixed version and a stereo downmix, so you can be certain that your mix is folding down correctly. If that’s not enough, hitting the “Exact” button will guarantee that the downmixed version matches the stereo version completely, by disabling certain parameters that might affect the downmix. All of this can be done as you’re listening in realtime, allowing for fast and easy A-B comparisons.
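The fold-down being monitored here follows a well-known recipe: in the standard ITU-style stereo downmix, the center and surround channels are folded into left and right at roughly -3 dB. The sketch below illustrates that arithmetic; it is a generic textbook fold-down, not Halo’s internal math (which isn’t published), and the function name and channel dictionary are assumptions for the example.

```python
# Generic ITU-style 5.1 -> stereo fold-down (illustrative; not Halo's
# internal implementation). Center and surrounds fold into L/R at
# -3 dB, i.e. a gain of 1/sqrt(2).

import math

G = 1.0 / math.sqrt(2.0)  # -3 dB fold-down coefficient

def downmix_to_stereo(ch):
    """Fold a 5.1 channel dict (keys L, R, C, LFE, Ls, Rs) to stereo."""
    lo_l = ch["L"] + G * ch["C"] + G * ch["Ls"]
    lo_r = ch["R"] + G * ch["C"] + G * ch["Rs"]
    return lo_l, lo_r  # the LFE is typically discarded in a stereo fold-down

# With identical L/R, a centered source and silent surrounds,
# the fold-down stays perfectly symmetrical:
ch = {"L": 0.5, "R": 0.5, "C": 0.2, "LFE": 0.1, "Ls": 0.0, "Rs": 0.0}
l, r = downmix_to_stereo(ch)
print(abs(l - r) < 1e-12)  # True
```

Phase problems in a sloppy upmix show up exactly here: if the derived center or surrounds are out of phase with the fronts, those G-weighted terms partially cancel in the sum, which is why realtime A-B comparison against the original stereo is so valuable.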

Summing Up
Nugen has really put out a fine, well-thought-out product with the Halo upmixer. It’s at once simple to operate and incredibly tweakable, giving lots of attention to important technical considerations. Above all, it sounds great. For mixers who often find themselves having to fit two-channel music into a multi-channel mix, you’ll be hard-pressed to find a nicer solution than this.

Halo Upmixer retails for $499 and is available in AAX, AU, VST2 and VST3 formats.

Robin Shore is a co-owner and audio post pro at Silver Sound Studios in New York City.

Checking in with Tattersall Sound & Picture’s Jane Tattersall

By Randi Altman

Toronto-based Tattersall Sound & Picture has been a fixture in audio post production since 2003, even though the origins of the studio go back further than that. Its work spans films, documentaries, television series, spots, games and more.

Now part of the SIM Group of companies, the studio is run by president/supervising sound editor Jane Tattersall and her partners Lou Solakofski, Peter Gibson and David McCallum. Tattersall is an industry veteran who found her way to audio post in a very interesting way. Let’s find out more…

(back row, L-R) David McCallum, Rob Sim, and Peter Gibson (front row) Jane Tattersall and Lou Solakofski.

How did you get your start in this business?
My start was an accident, but serendipitous. I had just graduated from university with a degree in philosophy and had begun to think about what options I might have — law and journalism were the only fields that came to mind. But then I got a call from my boyfriend’s sister, who was an art director. She had just met a producer at a party who was looking for a philosopher to do research on a documentary series. I got the job, did all the research and ended up working with the picture editor. I found that his use of sound brought the scenes to life, so I decided to try to learn that job. After that I apprenticed with an editor and learned on the job. I’m still learning!

When did you open Tattersall Sound & Picture?
I started the original Tattersall Sound, which offered just sound editing, in 1992, but sold it in 1999 to run a larger full-service post facility. I opened Tattersall Sound & Picture in 2003, along with my partners.

Why did you open it?
After three years running a big post facility I missed the close involvement with projects that comes with being an editor. I was ready for a change and keen to be more hands on.

How has it evolved over the years?
When we started the company it was just sound editing. The first year we shared warehouse space with a framing factory. We had a big open workplace and we all worked with headphones. After a year we moved to where we are today. We had space for picture editing suites as well as sound editing. Over time we expanded our services and facilities. Now we have five mix stages including a Dolby Atmos stage, ADR, as well as offline and sound editorial.

How have you continued to be successful in what can be a tough business?
We focus simultaneously on good creative work and on ensuring we have enough resources to continue to operate. Without good, detailed work we would lose our clients, but without earning enough money we couldn’t pay people properly, pay the rent or upgrade the stages and edit rooms. I like to think we attract good talent because we care about doing great work, and the great work keeps the clients coming to us.

Does working on diverse types of projects play a role in that success?
Yes, that’s true as well. We have a diversity of projects — TV series, documentaries, independent feature films, some animation and some children’s TV series. Some years ago we were doing mostly indie features and a small amount of television, but our clients moved into television and brought us along with them. Now we are doing some wonderful higher-end series like Vikings, Penny Dreadful and Fargo (pictured below). We continue to do features and love doing them, but it is a smaller part of the business.

If you had one tip about keeping staff happy and having them stay for the long-term, what would it be?
Listen to them, and keep them involved and make them feel like an appreciated part of the business.

What is the biggest change in audio post that you’ve seen since your time in the business?
The biggest change would be the change in technology — from Moviolas to Pro Tools and all the digital plug-ins that have become the regular way of editing and mixing. Related to that would be the time allotted to post sound. Our schedules are shorter because we can and do work faster.

The other change is that we work in smaller teams or even alone. This means fewer opportunities for more junior people and assistants to learn by doing their job in the same room. This applies to picture editing as well, of course.

There is no denying that our industry is filled with more males than females, and having one own an audio post house like yours is rare. Can you talk about that?
I certainly didn’t set out to own or run anything! Just to work on interesting projects for directors and producers who wanted to work with me. The company you see today has grown organically. I attracted like-minded co-workers and complementary team members, and went after films and directors that I wanted to work with.

We would never have built any mix stages if we didn’t have re-recording mixer Lou Solakofski on board as a partner. And he, in turn, would never have got involved if he didn’t trust us to keep the values of good work and a respectful working environment that were essential to him. We all trusted one another to retain and respect our shared values.

It has not always been easy though! There were many projects that I just couldn’t get, which was immensely frustrating. Some of these projects were of the action/violent style. Possibly the producers thought a man might be able to provide the right sounds rather than a woman. No one ever said that, so there may have been other reasons.

However, not getting certain shows served to make me more determined to do great work for those producers and directors who did want me/us. So it seems that having customers with the same values is crucial. If there weren’t enough clients who wanted our quality and detail we wouldn’t have got to where we are today.

What type of gear do you have installed? How often do you update the tech?
Our facility is all Avid and Pro Tools, including the mix stages. We have chosen an all-Pro Tools workflow because we feel it provides the most flexibility and the easiest way to stay current with new service options. Staying current can be costly, but being up to date with equipment is advantageous for both our clients and our creative team.

Hyena Road had a Dolby Atmos mix

We update frequently, usually driven by the requirements of a specific project. For example, in July 2015 we were scheduled to mix the Canadian war film Hyena Road, and the producer, distributor and director all wanted to work in Dolby Atmos. So our head tech engineer Ed Segeren and Lou investigated how feasible it would be to upgrade one of the stages to accommodate the Dolby requirements. It took some careful research and some time, but that stage was updated in time for that film.

Another example is when we began the Vikings series and knew the composer was going to deliver very wide — all separate stems as 5.0 — so we needed a dedicated Pro Tools rig for music. This meant we had to expand the console.

As a rule, when we update one mix stage we know we will soon update the others, in order to be able to move sessions between rooms transparently. This is an expense, but it also provides us flexibility — essential in post production, as project schedules inevitably shift from their original bookings.

David McCallum, fellow sound supervisor and partner, has a special interest in acoustic listening spaces and in providing editors with the best environment to make good decisions. His focus on editorial upgrades helps ensure we can send good tracks to the stage.

Our head tech engineer Ed Segeren attends NAB and AES every year to see new developments, and the staff is very interested in learning about what’s out there and how we might apply new technology. We try to be smart about our upgrades, and it’s always about improving workflow and work quality.

What are some recent projects completed at Tattersall?
We recently completed the series Fargo (mixing) and the feature films Beeba Boys (directed by Deepa Mehta) and Hyena Road (directed by Paul Gross), and we are in the midst of the TV series Sensitive Skin for HBO Canada. We are also doing Saving Hope and Vikings (pictured below) Season 4, and will start Season 3 of Penny Dreadful in early 2016.

Are you still directing?
I’m surprised you even know about that! I’m trying to! Last spring I directed a very short film, a three-minute thriller called Wildlife. This month I am co-directing a short film about a young woman indirectly involved in a police shooting and her investigation into what really happened. I have an advantage: I know when a story point can be made using sound rather than needing a shot to convey something, and I have a good idea of how ADR can be employed, so there’s no need to worry about the production recording.

The wonderful thing about these non-work film projects is that I learn a huge amount every time, including just how hard producers must work to get something made, and just how vulnerable a director is when putting something of themselves out there for anyone to watch.

Skywalker’s Randy Thom helps keep it authentic for ‘Peanuts’

By Jennifer Walden

Snoopy, Woodstock, Charlie Brown, Lucy… all the classic Peanuts characters hit the big screen earlier this month thanks to the Blue Sky Studios production The Peanuts Movie (20th Century Fox).

For those of you who might have worried that the Peanuts gang would “go Hollywood,” there is no need for concern. These beloved characters look and sound like they did in the Charles M. Schulz TV specials — which started airing in the 1960s — but they have been updated to fit the theatrical expectations of 2015.

While the latest technology has given depth and texture to these 2D characters, director Steve Martino and the Schulz family made sure the film didn’t stray far from Charles Schulz’s original creations.

Randy Thom

According to Skywalker Sound supervising sound editor/sound designer/re-recording mixer Randy Thom, “Steve Martino (from Blue Sky) spent most of the year hanging out in Santa Rosa, California, which is where the Schulz family still lives. He worked with them very closely to make sure that this film had the same feel and look as not only the cartoon strip, but also the TV specials. They did a wonderful job of staying true to all those visual and sonic tropes that we so much associate with Peanuts.”

Thom and the Skywalker sound team, based at the Skywalker Ranch in Marin County, California, studied the style of sound effects used in the original Peanuts TV specials and aimed to evoke those sounds as closely as they could for The Peanuts Movie, while also adding a modern vibe. “Often, on animated films, the first thing the director tells us is that it shouldn’t sound like a cartoon — they don’t want it to be cartoony with sound effects,” explains Thom, who holds an Oscar for his sound design on the animated feature The Incredibles, and has two Oscar nominations for his sound editing on The Polar Express and Ratatouille. “In The Peanuts Movie, we were liberated to play around with boings and other classic cartoon type sounds. We even tried to invent some of our own.”

The Red Baron and Subtle Sounds
The sound design is a mix of Foley effects, performed at Skywalker by Foley artists Sean England and Ronni Pittman, and cartoon classics like zips, boinks and zings. One challenge was creating a kid-friendly machine gun sound for Snoopy’s Red Baron air battles. “It couldn’t be scary, but it had to suggest the kinds of guns that were used on those planes in that era,” says Thom. The solution? Thom vocalized “ett-ett-ett-ett-ett” sounds, which they processed and combined with a “rat-tat-tat-tat-tat” rhythm that they banged out on pots and pans. The result is a faux machine gun that’s easy on little ears.

Another key element in the Red Baron sequences was the sound of the planes. Charles Schulz’s son, Craig, who was very involved with the film, owns a vintage WWI plane that, amazingly, still flies. “Craig [Schulz] flew the plane and a couple of people on our sound team rode in it. They were very brave and kept the recorder running the whole time,” says Thom, who completed the sound edit and premix in Avid Pro Tools 12.

They captured recordings on the plane, as well as from the ground as the plane performed a few acrobatic aerial maneuvers. During the final 7.1 mix in Mix G at Skywalker Sound, via the Neve DFC console, Thom says the challenge was to make the film sound exciting without being too dynamic. The final plane sounds were very mellow, without any harsh upper frequencies or growly tones. “We had to be careful of the nature of the sounds,” he says. “If you make the airplanes too scary or intimidating, or sound too animalistic, little kids are going to be scared and cover their ears. We wanted to make sure it was fun without being scary.”

Many of the scenes in The Peanuts Movie have subtle sound design, with Foley being a big part of the track. There are a few places where sound gets to deliver the joke. One of Thom’s favorite scenes was when Charlie Brown visits the library to find the book “Leo’s Toy Store.”

“The library is supposed to be quiet and we had to be very playful with the sound of Charlie’s feet squeaking on the floor and making too much noise,” says Thom. “After he leaves the library, he slides down the hillside in the snow and ice and ends up running right through a house. That was a fun sequence also.”

One surprising piece of the soundtrack was the music. The name Vince Guaraldi is practically synonymous with Peanuts. His jazzy compositions are part of the Peanuts cultural lexicon. If someone says Peanuts, it instantly calls to mind the melody of Guaraldi’s “Linus and Lucy” tune. And while “Linus and Lucy” is part of the film’s soundtrack, the majority of the score is orchestral compositions by Christophe Beck. “The music is mostly orchestral but even that has a Peanuts feel somehow,” concludes Thom.

Setting the audio tone of ‘Everest’

Glenn Freemantle sounds off on making this film’s audio authentic

By Jennifer Walden

Immovable, but not insurmountable, Mount Everest has always loomed large in the minds of ambitious adventurers who seek to test their mettle against nature’s most imposing obstacle course, with its unpredictable weather.

Reaching the summit takes more than just determination; it requires training, teamwork and a bit of stubborn resolve not to die. Even then, there’s no guarantee that what, or who, goes up will come down. Director Baltasar Kormákur’s film Everest, from Universal Studios, is based on the tragic true story of two separate expeditions that sought to reach the summit on the same day, May 10, 1996, only to be bested by a frigid tempest.

Glenn Freemantle

Supervising sound editor/sound designer Glenn Freemantle at Sound24, based at Pinewood Studios in Iver Heath, Buckinghamshire, UK, was in charge of building Everest’s blustery sound personality. All the wind, snow and ice sounds that lash the film’s characters were carefully crafted in post and designed to take the viewer on a journey up the mountain.

“Starting at the bottom and going right to the top, you feel like you are moving through the different camps,” explains Freemantle. “We tried to make each location as interesting as possible. The film is all about nature; it’s all about how the viewer would feel on that mountain. We always wanted the viewer to feel that journey that they were on.”

In addition to Freemantle, Sound24’s crew includes sound design editors Eilam Hoffman, Niv Adiri, Ben Barker, Tom Sayers and sound effects editors Danny Freemantle and Dillon Bennett.

Capturing Wind
Glenn Freemantle and his sound team collected thousands of wind sounds, like strong winter winds from along the shores of western England, Ireland and Scotland. They recorded wide canyon winds and sand storms in the deserts of Israel, and on Santorini, they recorded strong tonal mountain winds. At the base camp on Mount Everest, they set out recorders day and night to capture what it sounded like there at different times. “At the base camp on Everest, even if we didn’t use all the recordings from there, we got the sense of the real environment, exactly what it was like. From a cinematic point of view, we used that as a basis, but obviously we were also trying to tell a story with the sound,” he says.

To capture ambience from various altitudes on Everest, Freemantle sent two small recording set-ups with the camera crew who filmed at the top of Everest. “The equipment had to be small, portable and resistant to the extreme conditions,” he explains. For these set-ups, owner of Telinga Microphones, Klas Strandberg, created a small, custom-made omnidirectional mic for an A/B set-up, as well as a pair of cardioid mics in XY configuration that were connected to two Sony D100 recorders.

The best way to record wind is to have it sing through something, so on their wind capturing outings, Freemantle and crew brought along an assortment of items — sieves, coat hangers, bits of metal, pans, all sorts of oddities that would produce different tones as the wind moved through and around them. They also set up tents, like those used in the film, to capture the tent movements in the wind. “We used a multi-mic set-up to record the sound so you felt like you were in the middle of all of these situations. We put the mics in the corners and in the center of the tent, and then we shook it. We also left them up for the night,” he says.

They used Sennheiser MKH8020s, MKH8050s and MKH8040s paired with multiple Sound Devices 744T and 722 recorders set at 192kHz/24-bit. For high-frequency winds, they chose the Sanken CO-100K, which can capture sounds up to 100kHz. “This allowed us to pitch down the inaudible wind to audible frequencies (between 20Hz and 20kHz) and create the bass for powerful tonal winds.”
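
That pitch-down trick is essentially a playback-rate change: audio captured at a very high sample rate is played back (or simply relabeled) at a lower rate, which divides every frequency by the same factor, dragging ultrasonic content down into the audible band. Here is a minimal sketch of the idea in Python, assuming only NumPy and using a synthetic 40kHz tone as a stand-in for an ultrasonic wind recording — the rates and the factor are illustrative, not the production team’s actual settings:

```python
import numpy as np

def pitch_down_by_relabel(samples, orig_rate, factor):
    """Pitch everything down by playing the same samples at a lower rate.

    The sample data is untouched; only the declared playback rate changes,
    so every frequency component is divided by `factor` (and the clip
    becomes `factor` times longer in real time).
    """
    return samples, orig_rate / factor

# Hypothetical stand-in: a 40 kHz tone captured at 192 kHz (1 second)
rate = 192_000
t = np.arange(rate) / rate
ultrasonic = np.sin(2 * np.pi * 40_000 * t)

# Relabel to a quarter of the rate: 192 kHz material played at 48 kHz
audible, new_rate = pitch_down_by_relabel(ultrasonic, rate, 4)

# The tone now sits at 40 kHz / 4 = 10 kHz, comfortably audible
spectrum = np.abs(np.fft.rfft(audible))
peak_hz = np.fft.rfftfreq(len(audible), 1 / new_rate)[np.argmax(spectrum)]
```

Dividing the rate by four shifts a 100kHz component down to 25kHz; dividing by eight brings it to 12.5kHz, well inside the audible range — which is why capturing at 192kHz with a 100kHz-capable mic leaves so much room to play.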

With wind being a main player in the sound, Freemantle’s design focused on its dynamics. Changing the speed of the wind, the harshness of the wind and also the weight of the wind kept it interesting. “We were moving the sound all the time, and that was really effective. There was a 20-minute section of storm in there, which wasn’t easy to build,” explains Freemantle. “We would mix a scene for a day and then walk away. You can exhaust your ears mixing a film like this.”

Having the opportunity to revisit the stormy sequences allowed the sound team to compare the different storms and wind-swept scenes, and make adjustments. One of their biggest challenges was making sure each storm didn’t feel too big, or lack dynamics. “We wanted to have something different happening for each storm or camp so the audience could feel the journey of these people. It had to build up to the big storm at the end. We’d have to look at the whole film to make sure we weren’t going wrong. The sound needed to progress.”

In addition to wind, Freemantle and his team recorded sounds of snow and ice. They purchased a few square meters of snow and froze big chunks of ice for their recording sessions. “We got all the gear the actors were wearing and we put the jackets and things into the freezer overnight, so they would have that feeling, that frozen texture, that they would have out there in the weather,” he says. “We tried to do everything we could to make it sound as real as possible. It’s exhausting how that weather makes you feel, and it was all from a human point of view that we tried to create the weather that was around them.”

ADR
The weather sounds weren’t the only thing to be recreated for Everest. The soundtrack also hosts a sizable amount of ADR thanks to massive wind machines that were constantly blowing on set, and the actors having to wear masks didn’t help the dialogue intelligibility either. “That’s why the film is 90 percent re-recorded dialogue,” shares Freemantle. “Sound mixer Adrian Bell did a hell of a job in those conditions, but they are wearing all of these masks so you can hardly hear them. Everything had to be redone.”

The dialogue was so muffled at times that it was difficult for the picture-editing department to cut Everest. Director Kormákur asked for a quick ADR track of the whole film, using sound-alike actors when the real ones weren’t available. In addition, he also asked for a rough sound design and Foley pass, giving Freemantle about a week to mock it up. “You couldn’t follow the film. They couldn’t run it for the producers to get a sense of the story because you couldn’t hear what the actors were saying,” he says. “So we recreated the whole dialogue sequence for the film, and we quickly cut — from our sound libraries — all the footsteps and we did a quick cloth pass so they had a complete soundtrack in a very short period of time.”

During the ADR session for the final tracks, Freemantle notes the actors wore weight vests and straps around their chests to make it difficult for them to breathe and talk, all in an effort to recreate the experience of what is happening to them on screen. As CG was being added to the picture, with more sprays of snow and ice, the actors could react to the environment even more.

“Having to re-create their performances was a curse in one way, but it was a blessing because then we had control over every single sound in the soundtrack. We had control of every part of their breathing, every noise from their gear and outfits. We have everything so we could pull the perspective in the sound at any given moment and not bring along a lot of muck with it.”

Everest was mixed in three immersive formats: Dolby Atmos, Barco Auro-3D and IMAX 12.0. “Each one of the formats works really well and you really feel like you are in the film,” reports Freemantle. “The weight of the sound hits you in the theater. There is a lot of bass in there. With sound, you are moving the air around, so you are feeling it when the storm hits. The presence of the bass hits you in the chest.”

But it’s not a continuous aural onslaught — there are highs and lows, with rumbly wind fighting against the side of the mountain on Hillary Step and hissing wind higher up towards the summit. “You have to have detail and the sounds should be helping to tell the story,” he says. “It’s not about how much you put in — in the end, it’s about what you take out when you finish. That’s very important. You don’t want the film to be just a massive noise.”

The Mix
Everest was mixed natively in the Dolby Atmos theatre at Pinewood Studios by Freemantle and re-recording mixers Niv Adiri, CAS, and Ian Tapp, CAS. Sound24’s tried-and-tested Avid set-up helped bring the sounds of Everest to life, working on the powerful Avid System 5 large-format console, using Pro Tools 11 with EUCON control. Their goal was to put the audience on the mountain with the climbers without overwhelming them with a constant barrage of sound. “The journey the characters are going through is both mental and physical, and mixing in Atmos helped us bring these emotions to the audience,” says Adiri.

Since director Kormákur’s focus was on the human tragedy, the dialogue scenes were intimately shot. This enabled the mixers to shift the balance towards dialogue in these sequences and maintain the emotional contact with the characters. In the Atmos format they could position sounds around the audience to immerse them in the scene without having the sounds sit on top of the dialogue. “The sheer weight and power of the sound that the Atmos system produces was perfect for this film, particularly in the storm sequence, where we were able to make the sound an almost physical experience for the audience, yet still maintain the clarity of the dialogue and not make the whole thing unbearable to watch,” says Tapp.

Once the final Atmos mix was approved by director Kormákur, the tracks were taken to Galaxy Studios in Mol, Belgium, for the Barco Auro-3D mix, and then on to Toronto’s Technicolor for the 12.0 IMAX mix. Despite the change in formats, the integrity of the film was kept the same. The mix they defined in Atmos was the blueprint for the other formats.

For Freemantle, the best part of making Everest was being able to capture the journey: to make the audience feel like they are moving up the mountain, to make them feel cold and distressed. “You want to feel that contact, that physical contact like you are in it, like the snow is hitting your face and the jacket around you. When people watch it you want them to experience it because it’s a true story and you want them to feel it. If they are feeling it, then they are feeling the emotion of it.”

For more on Everest, read our interview with editor Mick Audsley.

Jennifer Walden is a New Jersey-based audio engineer and writer.

Joe Dzuban joins Formosa Features

Hollywood’s Formosa Group has hired supervising sound editor/sound designer/re-recording mixer Joe Dzuban as part of its Formosa Features team.

Dzuban, a multiple Golden Reel Award-nominee, has provided sound services for film projects of all types, including recent releases such as Guillermo del Toro’s Crimson Peak and Breck Eisner’s The Last Witch Hunter.

In addition to Crimson Peak (Legendary/Universal) and The Last Witch Hunter (Lionsgate), past releases on his resume include James Wan’s Furious 7 (Universal), John R. Leonetti’s Annabelle (New Line/Warner Bros.), Dean Israelite’s Project Almanac (Paramount), Christopher Landon’s Paranormal Activity: The Marked Ones (Paramount), as well as Wan’s Insidious and Insidious: Chapter 2 (Blumhouse/FilmDistrict).

Having been in the business of sound for over 15 years, Dzuban has an MFA in Film Production from USC’s School of Cinematic Arts.

Creating the score for IFC’s ‘Gigi Does It’

Composer Jason Moss channels his inner grandmother for new series.

By Jennifer Walden

Picture a sweet, blue-haired grandma, who bakes pies, knits and reminisces about the good old days. Then meet Gigi Rotblum, the lead character in the IFC series Gigi Does It. She is a foul-mouthed bubbe who likes to kibitz about… well, just about anything. But even though she’s got the mouth of a sailor, is neurotic and often inappropriate, there’s still something charming about this Jewish grandmother, played by actor David Krumholtz. The show follows the 77-year-old Boca Raton widow as she finds odd ways to spend the remaining days of her twilight years.

It wasn’t too difficult for New Jersey-born composer Jason Moss, owner of LA’s Super Sonic Noise, to relate to this character. In fact, he says he felt a connection to Gigi immediately. “My wife’s grandmother is 94 years old, and my grandmother is 94, so that uber-Jewish-grandmother neurosis is very strong in my life. Even though Gigi is a little more vulgar, with all her vagina and penis talk, the character is there.”

Composer Jason Moss

He also shares a connection to the character via actor Krumholtz, who lives in the part of New Jersey where Moss grew up. “It felt like everything aligned — it was kind of hysterical, and it was a nice, comforting connection. They were pretty much speaking my language.”

Finding the Score
Moss was introduced to the brothers Ben and Dan Newmark, founders of production company Grandma’s House Entertainment, through director of development/producer Michael Lopez. During their meeting to discuss the series’ score, Moss presented several options from his production music catalog, ranging from ultra-contemporary hip-hop to kitschy Herb Alpert-inspired tunes. He also included his favorite track from Super Sonic Noise’s catalog, a quirky organ-based track laced with whimsical la-la-las sung by Moss himself. “When I got to that track they were like, ‘Holy shit, that’s the theme song,’” says Moss. “It was one of my favorite tracks and I feel like it could not have a more perfect home than Gigi Does It.”

Moss, a long-time user of Apple’s Logic Pro, was able to open the original session for that track, which was created in an earlier version of Logic Pro. He did a re-edit and remix to tailor it specifically to Gigi Does It. “Everything was there and that was great because I was able to spruce it up a little bit,” says Moss. “They started cutting with the track and, in the end, they got it approved to be the theme.”

While Moss always uses real instruments to perform his guitar and bass parts, the score for Gigi Does It runs more in the elevator vein, featuring organs, horns and small percussion. His go-to virtual instruments on the series are Arturia’s Vox Continental V for quirky-sounding organs; Spectrasonics’ Trilian for upright bass; Addictive Drums by XLN Audio; and a small library called The Trumpet 3 by SampleModeling. “The Trumpet 3 is an amazing trumpet sample library that has a very authentic trumpet sound, with all the nuances of trumpet playing, like the way the tongue is used, and the different sorts of riffs,” says Moss. “With Addictive Drums, you can tweak the microphone distance to give it a bit more of a warm feel. The sounds for this show I want to be really warm, round and organic.”

No matter what type of music Moss is composing, he feels there are three very important considerations to working creatively. “I always say it’s your ass, your ears and your eyes. You have to have something comfortable to sit on, something great to listen to and something good to look at. So I have a killer seat, and killer monitors to listen through, and a killer monitor to look at.”

The show’s score is a combination of Moss’s custom-composed music, tracks pulled from the Super Sonic Noise catalog, and quirky, campy, lounge-style tracks from composers who have written for Moss in the past. “There are a lot of funny organs, quirky drums and some Latin samba stuff that really works for Gigi,” says Moss. All the tracks and stems are delivered in stereo as 24-bit/48kHz files uploaded to the Super Sonic Noise catalog site. “I use a platform called Source Audio and it is basically my search/play/delivery system. They are absolutely the most amazing and current company when it comes to music catalogs.”

Delivering files via the Source Audio platform allows Moss to add artwork, metadata and publisher information. “When the post production facility downloads the files, they are all watermarked and contain all the metadata embedded in the file. It’s very organized.”

So, is Gigi Does It a series for everyone? Maybe not, but, says Moss, “She’s a funny grandma, and maybe you can relate to it because you have a grandma, or an uncle, or an aunt that is just incredibly inappropriate. Still, there’s something very sweet about Gigi. David [Krumholtz] does a brilliant job being Gigi, and you forget that there is a man in that outfit playing this old Jewish grandmother. He does such a wonderful job.”

Sound effects and dialog design for ‘The Martian’

By Mel Lambert

In The Martian, astronaut and botanist Mark Watney is presumed dead after a fierce storm during a manned mission to Mars and is left behind by his crew. With only meager supplies, he is forced to draw upon his scientific ingenuity to signal NASA that he is alive and awaits rescue. Based on the book by Andy Weir, with a screenplay by Drew Goddard, Twentieth Century Fox’s film adaptation of The Martian was directed by Ridley Scott.

“I had read both the book and the script months before we started post sound on The Martian,” recalls supervising sound editor and sound designer Oliver Tarney, who was nominated for an Oscar for his sound editing work on Captain Phillips. “This meant that I could start thinking about the design of the sound long before I received the first turnover. I’d also spoken to picture editor Pietro Scalia, ACE, about how he and director Ridley Scott wanted to approach the soundtrack. The number one priority was keeping the audience connected to what the character Mark Watney (Matt Damon) was experiencing throughout his journey, so I knew we had to have a palette of sounds that described the isolation and jeopardy of his situation, right from the time we started on the director’s cut.”

A month before he started on the film, Tarney took a road trip around the southwestern part of the US and brought along his recording equipment (shown right). “I wanted to build up a library of desert winds and footsteps in remote areas such as the salt flats in Death Valley and the Mesquite Flat Sand Dunes. I’d ended up in Los Angeles for a few days before returning home to London and the opportunity came up to visit the Jet Propulsion Lab in Pasadena to record the Mars rover.”

He says the JPL engineers at the Mars Yard were incredibly helpful, not just in giving him access to record the Rover, but also in giving him insight into the NASA approach.

The soundtrack was edited and re-recorded at Twickenham Studios/TW1 in West London, with Paul Massey handling dialog/music and mixer/editor Mark Taylor overseeing sound effects. Michael Fentum was co-sound designer, Rachael Tate was dialog/ADR supervisor, James Harrison was sound effects editor, Hugo Adams was the Foley supervisor and Tony Lewis was music editor.

“While recording the Rover,” Tarney recalls, “what became immediately apparent was that although the engineering is absolutely state of the art, there is also this raw, buzzing, whirring and — surprisingly — unsophisticated element to it. The cost of sending anything into space is so extreme that these machines have to be purely functional, stripped down to the bare minimum… aesthetics and ergonomics are secondary to function.”

L-R: Sound mix technician Dafydd Archard, Rachael Tate, Oliver Tarney, Mark Taylor and Michael Fentum. Not pictured: Paul Massey.

That realization became the basis for Tarney’s sound design. “We needed to convey the austere rawness of the technology used in keeping Mark Watney alive,” he says. “Mike Fentum and I recorded a huge library of sounds with Schertler contact mics, building up a palette of electrical buzzes, clicks and whirs that would be the basis for the equipment Watney uses in the film. It helped to describe that, although there may be billions of dollars of technology up there, there’s also a certain fragility and therefore constant threat to life. The raw sounds of the technology also played along with the fact that Watney himself is an engineer and could access, repair and re-imagine uses for it, which he does throughout the film.”

This was the fourth film on which Tarney had worked with director Scott — including Exodus: Gods and Kings and The Counselor. “You start to get to know some of the regulars he uses in other departments,” he says. “The costume department was very generous in giving us both a Mars surface suit and an EVA space suit right from day one of sound. Janty Yates’ detailed work on the suits is absolutely beautiful, but the best bit for us was that they sounded great!

“The very first scenes sent to Ridley and Pietro [Scalia] to review were of Watney outside on the Mars surface. The footstep recordings I’d made in the desert, combined with the suit Foley and temp helmet breaths, worked really effectively. They made sure that even during the director’s cut, viewers were always connected to Watney’s plight – experiencing the isolation, the discomfort of the heavy suit and the claustrophobic in-helmet breaths. Although the visuals are truly beautiful, we wanted to remind the audience that survival on the inhospitable surface of Mars is near-impossible for any lone human.”

Tarney says Scott wanted Watney’s Habitat on Mars to sound like he was living inside a life-support system. “We still wanted to have the same sense of the raw technology at play here, but with an almost womb-like protection from the dangers of Mars,” states Tarney. “We recorded very low frequencies oscillating extremely loudly through a subwoofer loudspeaker in a range of rooms, and also inside spaces such as filing cabinets. Those tracks made up the foundations of that environment. The rhythmic nature of the sounds adds an almost comforting feeling. But because Ridley wanted us to sell the idea that the Habitat wasn’t designed for use over such a long period of time, as the film progressed we also introduced various BPM-matched squeaks and creaks to the pulses. Again, the technology is there, but it is not pretty, just purely functional. With the Habitat designed to last for only 31 days, it degrades slowly as the narrative unfolds.”

(from left) Matt Damon, Jessica Chastain, Sebastian Stan, Kate Mara, and Aksel Hennie portray the crewmembers of the fateful mission to Mars.

Tarney considers that the Habitat sounds are particularly effective in Dolby Atmos immersive format. “Mark Taylor was part of the sound editorial team before he switched to mixing the FX,” he says. “He had a 9.1 set-up in his cutting room, which allowed Mike Fentum and me to review our ambiences and discuss with Mark which elements we should be looking to bleed into the overheads. The extended low frequencies in Atmos were incredibly useful in giving us that ‘enveloped’ sound we were looking for when mixing those Habitat scenes.”

“I pre-mixed all the sound effects and Foley virtually in Pro Tools,” Taylor confirms. “This [approach] gives me ultimate flexibility if something needs to be removed or altered on an elemental level. I then routed the separate buss outputs from Pro Tools into the Neve DFC console as pre-dub inputs. I love what the DFC does EQ- and dynamics-wise; it makes material blend nicely, with some gentle compression and a final EQ shaping on each pre-dub. I also love the console’s overheads pan feature, which I used extensively for the Atmos mix, with all the Hab interiors being sent in varying measure to these overhead loudspeaker channels.”

Designing Dialog and ADR
At first glance, it might appear that for the dialog department The Martian soundtrack was pretty straightforward, since much of the screen time features a single character alone on Mars. But dialog/ADR supervisor Rachael Tate quickly realized the film was actually going to be very multi-layered and technically demanding. “Our biggest challenge was the opening scene of the film,” she explains, “with a dust storm so fierce that it forces the crew to abort their mission. Ridley is always thinking about the story, so dialog clarity is paramount.”

“Most of the dialog in the storm is played as if we are overhearing radio comms between the characters,” she continues. “We had to find a way of getting the lines to cut through the immersive FX of the storm without it becoming painfully sharp or distorted. We did this by using a blend of three different helmet-‘worldized’ treatments [created by replaying dialog lines through the actual costumes], altered slightly depending on the tone, projection and pitch of each line. Often a great-sounding futz for a low male voice will be too harsh and crisp for a female shouting at the top of her voice. Despite this, we retained a consistency, with dialog re-recording mixer Paul Massey skillfully blending within the boundaries of the overall effect we wanted to create.”

Early on, the editorial team experimented with a multitude of DAW plug-ins but found that the most effective results always came from worldizing. “We fed all in-helmet dialog through an Avantone speaker placed inside a helmet we’d been given by production, with a Sanken COS-11 lavalier mic clipped inside to record the results,” explains Tate. “Right from the beginning of sound post, I had this setup as an Aux Send from my Pro Tools session. This arrangement really helped instill that sense of claustrophobia Ridley was seeking to emphasize.”

Helmet mic.

Another example came when the team needed to create a sense of distance while NASA Mission Control is monitoring the communications between Watney and the Hermes space vehicle holding the Ares III crew as it returned to Earth, some 140 million miles away. “We set up a tube shortwave transmitter in one room and broadcast the lines to an old radio in another, using a [Placid Audio] Copperphone mic,” she explains. “The naturally lo-fi result gave a far more believable sense that these lines had travelled through the ether, rather than having been processed.”

NASA was particularly cooperative, Tate reports, by providing help in re-creating the world of Houston’s Mission Control. “We were put in touch with a great group of NASA employees who work regularly in Mission Control. They were amazing, immediately giving those scenes the textures we needed, with those unique timbres in the way they communicate. The way they deliver lines is incredibly flat; it’s just about delivering information to each other in the most efficient way, rather than performing a line. They provided not only general comms for background color, but we also got them to react to specific story points and launches throughout the film so that we could create 360-degree NASA activity, specific to on-screen events.”

In the studio’s EPK, Scott said, “This is the ultimate survival story. Mark Watney is placed under unimaginable duress and isolation; the movie is about how he responds. Mark’s fate was determined by whether he succumbed to panic and despair and accepted death as inevitable — or chose to rely on his training, resourcefulness and sense of humor to stay calm and solve problems.”

The Martian, which has been garnering Oscar buzz, is in theaters now.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Mark Mangini keynotes The Art of Sound Design at Sony Studios

Panels focus on specifics of music, effects and dialog sound design, and immersive soundtracks

By Mel Lambert

Defining a sound designer as somebody “who uses sound to tell stories,” Mark Mangini, MPSE, was adamant that “sound editors and re-recording mixers should be authors of a film’s content, and take creative risks. Art doesn’t get made without risk.”

A sound designer/re-recording mixer at Hollywood’s Formosa Group Features, Mangini outlined his sound design philosophy during a keynote speech at the recent The Art of Sound Design: Music, Effects and Dialog in an Immersive World conference, which took place at Sony Pictures Studios in Culver City.

Mangini is the recipient of three Academy Award nominations, for The Fifth Element (1997), Aladdin (1992) and Star Trek IV: The Voyage Home (1986).

Acknowledging that an immersive soundtrack should fully engage the audience, Mangini outlined two ways to achieve that goal. “Physically, we can place sound around an audience, but we also need to engage them emotionally with the narrative, using sound to tell the story,” he explained to the 500-member audience. “We all need to better understand the role that sound plays in the filmmaking process. For me, sound design is storytelling — that may sound obvious, but it’s worth reminding ourselves on a regular basis.”

While an understanding of the tools available to a sound designer is important, Mangini readily concedes, “Too much emphasis on technology keeps us out of the conversation; we are just seen as technicians. Sadly, we are all too often referred to as ‘The Sound Guy.’ How much better would it be for us if the director asked to speak with the ‘Audiographer,’ the ‘Director of Sound’ or the ‘Sound Artist’ — terms that better describe what we actually do? After all, we don’t refer to a cinematographer as ‘The Image Guy.’”

Mangini explained that he always tries to emphasize the why and not the how, and is not tempted to imitate somebody else’s work. “After all, when you imitate you ensure that you will only be ‘almost’ as good as the person or thing you imitate. To understand the ‘why,’ I break down the script into story arcs and develop a sound script so I can reference the dramatic beats rather than the visual cues, and articulate the language of storytelling using sound.”

Past Work
Offering up examples of his favorite work as a soundtrack designer, Mangini provided two clips during his keynote. “While working on Star Trek [in 2009] with supervising sound editor Mark Stoeckinger, director J. J. Abrams gave me two days to prepare — with co-designer Mark Binder — a new soundtrack for the two-minute mind meld sequence. J. J. wanted something totally different from what he already had. We scrapped the design work we did on the first day, because it was only different, not better. On day two we rethought how sound could tell the story that J. J. wanted to tell. Having worked on three previous Star Trek projects [different directors], I was familiar with the narrative. We used a complex combination of orchestral music and sound effects that turned the sequence on its head; I’m glad to say that J. J. liked what we did for his film.”

The two collaborators received the following credit: “Mind Meld Soundscape by Mark Mangini and Mark Binder.”

Turning to his second soundtrack example, Mangini recalled receiving a call from Australia about the in-progress soundtrack for George Miller’s Mad Max: Fury Road, the director’s fourth outing with the franchise. “The mix they had prepared in Sydney just wasn’t working for George. I was asked to come down and help re-invigorate the track. One of the obstacles to getting this mix off the ground was the sheer abundance of material to choose from. When you have so many choices on a soundtrack, the mix can be an agonizing process of ‘Sound Design by Elimination.’ We needed to tell him, ‘Abandon what you have and start over.’ It was up to me, as an artist, to tell George that his V8 needed an overhaul and not just a tune-up!”

“We had 12 weeks, working at Formosa with co-supervising sound editor Scott Hecker — and at Warner Bros Studios with re-recording mixers Chris Jenkins and Gregg Rudloff — to come up with what George Miller was looking for. We gave each vehicle [during the extended car-chase sequence that opens the film] a unique character with sound, and carefully defined [the lead protagonist Max Rockatansky’s] changing mental state during the film. The desert chase became ‘Moby Dick,’ with the war rig as the white whale. We focused on narrative decisions as we reconstructed the soundtrack, always referencing ‘the why’ for our design choices in order to provide a meaningful sonic immersion. Miller has been quoted as saying, ‘Mad Max is a film where we see with our ears.’ This from a director who has been making films for 40 years!”

His advice to fledgling sound designers? Mangini kept it succinct: “Ask yourself why, not how. Be the author of content, take risks, tell stories.”

Creating a Sonic Immersive Experience
Subsequent panels during the all-day conference addressed how to design immersive music, sound effects and dialog elements used on film and TV soundtracks. For many audiences, a 5.1-channel format is sufficient for carrying music, effects and dialog in an immersive, surround experience, but 7.1-channel — with added side speakers, in addition to the new Dolby Atmos, Barco/Auro 3D and DTS:X/MDA formats — can extend that immersive experience.

“During editorial for Guardians of the Galaxy we had so many picture changes that the re-recording mixers needed all of the music stems and breakouts we could give them,” said music editor Will Kaplan, MPSE, from Warner Bros. Studio Facilities, during the “Music: Composing, Editing and Mixing Beyond 5.1” panel. It was presented by Formosa Group and moderated by scoring mixer Dennis Sands, CAS. “In a quieter movie we can deliver an entire orchestral track that carries the emotion of a scene.”

‘Music: Composing, Editing and Mixing Beyond 5.1’ panel (L-R): Andy Koyama, Bill Abbott, Joseph Magee, moderator Dennis Sands, Steven Saltzman and Will Kaplan.

Describing his collaboration with Tim Burton, music editor Bill Abbott, MPSE, from Formosa reported that the director “liked to hear an entire orchestral track for its energy, and then we recorded it section by section with the players remaining on the stage, which can get expensive!”

Joseph Magee, CAS, (supervising music mixer on such films as Pitch Perfect 2, The Wedding Ringer, Saving Mr. Banks and The Muppets) likes to collaborate closely with the effects editor to decide who handles which elements from each song. “Who gets the snaps and dance shoes? How do we divide up the synchronous ambience and the design ambience? The synchronous ambience from the set might carry tails from the sing-offs, and needs careful matching. What if they pitch shift the recorded music in post? We then need to change the pitch of the music captured in the audience mics using DAW plug-ins.”

“I like to invite the sound designer to the music spotting session,” advised Abbott, “and discuss who handles what — is it a music cue or a sound effect?”

“We need to immerse audiences with sound and use the surrounds for musical elements,” explained Formosa’s re-recording mixer, Andy Koyama, CAS. “That way we have more real estate in the front channels for sound effects.”

“We should get the sound right on the set because it can save a lot of processing time on the dub stage,” advised production mixer Lee Orloff, CAS, during the “A Dialog on Dialog: From Set to Screen” panel moderated by Jeff Wexler, CAS.

‘A Dialog on Dialog: From Set to Screen’ panel (L-R): Lee Orloff, Teri Dorman, CAS president Mark Ulano, moderator Jeff Wexler, Gary Bourgeois, Marla McGuire and Steve Tibbo.

“I recall working on The Patriot, where the director [Roland Emmerich] chose to create ground mist using smoke machines known as Smoker Boats,” recalled Orloff, who received Oscar and BAFTA Awards for Terminator 2: Judgment Day (1991). “The trouble was that they contained noisy lawnmower engines, whose sound could be heard under all of the dialog tracks. We couldn’t do anything about it! But, as it turned out, that low-level noise added to the sense of being there.”

“I do all of my best work in pre-production,” added Wexler, “by working out the noise problems we will face on location. It is more than just the words that we capture; a properly recorded performance tells you so much about the character.”

“I love it when the production track is full of dynamics,” added dialog/music re-recording mixer Gary Bourgeois, CAS. “The voice is an instrument; if I mask out everything that is not needed I lose the ‘essence’ of the character’s performance. The clarity of dialog is crucial.”

“We have tools that can clean up dialog,” conceded supervising sound editor Marla McGuire, MPSE, “but if we apply them too often and too deeply it takes the life out of the track.”

“Sound design can make an important scene more impactful, but you need to remember that you’re working in the service of the film,” advised sound designer/supervising sound editor Richard King, MPSE, during the “Sound Effects: How Far Can You Go?” panel, moderated by David Bondelevitch, MPSE, CAS.

‘Sound Effects: How Far Can You Go?’ panel L-R: Mandell Winter, Scott Gershin, moderator David Bondelevitch, Greg Hedgpath, Richard King and Will Files.

In terms of music co-existing with sound effects, Formosa’s Scott Gershin, MPSE, advised, “During a plane crash sequence, I pitch shifted the sound effect to match the music.”

“I like to go to the music spotting session and ask if the director wants the music to serve as a rhythmic or thematic/tonal part of the soundtrack,” added sound effects re-recording mixer Will Files from Fox Post Production Services. “I just take the other one. Or if it’s all rhythm — a train ride, for example — we’ll agree to split [the elements].”

“On the stage, I’m constantly shifting sync and pitch shifting the sound effects to match the music track,” stated Gershin. “For Pacific Rim we had many visual effects arriving late with picture changes. Director Guillermo del Toro received so many new eight-frame VFX cues he wanted to use that the music track ended up looking like a bar code” in the final Pro Tools sessions.

In terms of working with new directors, “I like to let them see some good movies with good sound design to start the conversation,” offered Files. “I front load the process by giving the director and picture editors a great-sounding temp track using dialog predubs that they can load into Avid Media Composer, to get them used to our sound ideas. It also helps the producers dazzle the studio!”

“Successful soundtrack design is a collaborative effort from production sound onwards,” advised re-recording mixer Mike Minkler, CAS, during “The Mix: Immersive Sound, Film and Television” panel, presented by DTS and moderated by Mix editor Tom Kenny. “It’s about storytelling. Somebody has to be the story’s guardian during the mix,” stated Minkler, who received Academy Awards for Dreamgirls (2006), Chicago (2002) and Black Hawk Down (2001). “Filmmaking is the ultimate collaboration. We need to be aware of what the director wants and what the picture needs. To establish your authority you need to gain their confidence.”

“For immersive mixes, you should start in Dolby Atmos as your head mix,” advised Jeremy Pearson, CAS, who is currently re-recording The Hunger Games: Mockingjay – Part 2 at Warner Bros. Studio. He also worked in that format on Mockingjay – Part 1 and Catching Fire. “Atmos is definitely the way to go; it’s what everyone can sign off on. In terms of creative decisions during an Atmos mix, I always ask myself, ‘Am I helping the story by moving a sound, or distracting the audience?’ After all, the story is up on the screen. We can enhance sound depth to put people into the scene, or during calmer, gentler scenes you can pinpoint sounds that engage the audience with the narrative.”

Kim Novak Theater at Sony Pictures Studios.

Minkler reported that he is currently working on director Quentin Tarantino’s The Hateful Eight, “which will be released initially for two weeks in a three-hour version on 70mm film to 100 screens, with an immersive 5.1-channel soundtrack mastered to 35mm analog mag.”

Subsequently, the film will be released next year in a slightly different version via a conventional digital DCP.

“Our biggest challenge,” reported Matt Waters, CAS, sound effects re-recording mixer for HBO’s award-winning Game of Thrones, “is getting everything completed in time. Changes are critical and we might spend half a day on a sequence and then have only 10 minutes to update the mix when we receive picture changes.”

“When we receive new visuals,” added Onnalee Blank, CAS, who handles music and dialog re-recording on the show, “[the showrunners] tell us, ‘it will not change the sound.’ But if the boats become dragons…”

Photos by Mel Lambert.

Mel Lambert is principal of Content Creators, an LA-based editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Tattersall adds re-recording mixer Matthew Chan

Tattersall Sound and Picture in Toronto has brought on veteran re-recording mixer and sound supervisor Matthew Chan, whose diverse credits include features, episodic television and documentaries. His recent work includes the upcoming Lionsgate release Operation Avalanche and director Atom Egoyan’s Remember, which screened at this year’s Venice Film Festival and the Toronto International Film Festival (TIFF).

Chan spent 10 years at Theatre D Digital, Toronto, serving as head of sound from 2008 to 2014. He began his career with Trail Mix Editing, also in Toronto.

He has worked with some of Toronto’s top directors, including Egoyan, Bruce McDonald and Kari Skogland. He was the mixer on Agnieszka Holland’s 2011 Academy Award-nominated film In Darkness.

Other notable feature credits include Avi Lewis’ This Changes Everything (TIFF 2015), Andrew Cividino’s Sleeping Giant (Cannes/TIFF 2015) and Igor Drljaca’s The Waiting Room (Locarno/TIFF 2015).  His work in television includes The Expanse for Syfy/Alcon Entertainment and Saving Hope for CTV/EOne Entertainment.

“Matthew adds a new dimension to our mixing team,” says Tattersall Sound and Picture founder Jane Tattersall. “He brings creativity, passion and technical excellence to his work. He will be a great asset to both established and emerging filmmakers.”

Tattersall is a SIM Group company.

‘Aqua Teen Hunger Force’: A love story

Michael and Ashley Kohler on work, romance and the show that’s to blame

Business partnerships are like a marriage, but what if you combine the two? And what if a crazy and beloved show like Adult Swim’s Aqua Teen Hunger Force was at the center of it all? Well, for one couple that’s exactly what happened.

Atlanta-based Ashley Kohler (Awesome Inc. and Bluetube) and Michael Kohler (Bluetube) might never have fallen in love if it wasn’t for this quirky cartoon, which had its series finale this summer after 15 years on the air. Recently, this happy couple sat down to reflect on romance, meatballs and the perfect marriage of sound and picture. We just let them chat, share and reminisce.

The happy couple and collaborators.

Ashley: Michael and I met prior to my becoming director of on-air promotion at Cartoon Network, and we worked together on a number of projects over the course of about seven years. When I left the network, we discussed starting our own studio. One lunch became two, and two lunches became dinner, and dinner became a date. And Michael’s cool motorcycle didn’t hurt. Michael was just getting started on the Aqua Teen Hunger Force movie and he was so busy that in order to see each other I would spend time in the studio. He’d put me to work, jingling keys and making background noises, and finding casserole dishes for just the right clang. It was a unique courtship.

Michael: No one has ever been advised to start a company with someone they’re dating, but we took it one step further and purchased a building to house both our companies: Bluetube for music and sound, and the animation studio Awesome Inc. In the middle of working on the Aqua Teen feature film we started construction on a new studio space, and Ashley took over the logistics of producing for my company Bluetube. Then we had the opportunity to go to Skywalker Sound, a place I had dreamed of visiting since I first knew I wanted to be a composer and engineer, for the final mix stage of the film.

Ashley: We are both type A, and are passionate about what we do individually and together. Sure, we fuss and fight through it sometimes – I come in and tell him to pick up his metaphoric business socks, then we go home and he asks me to pick up my actual ones.

Michael: It takes every bit of effort, day and night, to nurture our businesses, and we never take it for granted. We work for ourselves, making crazy animated shows and commercials.

Ashley: For many seasons Michael created the audio for Aqua Teen Hunger Force (ATHF), then years later Awesome became the animation studio for the show. It has proved to be such an efficient and creatively inspiring way to work – there’s no sending the animation off into the ether to wait for the unknown… being able to collaborate throughout the process with ideas for how to bring out the best in the work is extremely helpful with tight deadlines. Awesome and Bluetube don’t work on every project together, but it is great when we can.

Michael: I like to be involved from the beginning and to get visual elements as inspiration for the music and sound. When I work on the same projects as Awesome, I can run upstairs and talk to the animators while they are in progress. I can provide a click track for timing, or demo music early in production so that it evolves together. We really benefit by being so close in proximity and, in the case of ATHF, by being part of a small team that creates the show.


Ashley: We are all together, as a family — in our case, literally.

Michael: Ashley is an exceptional producer; she knows how to bring out the best in people – me included.

Ashley: This is going to sound really corny, but I am genuinely in awe of Michael’s talent. I sometimes feel sad that I am the only one who gets to sneak in the middle of the night and watch this interesting and amazing process unfold. There is so much that can be conveyed through music and sound. In the case of the Cartoon Network mnemonic that he created… just five powerful notes. It inspires wonderment. And if he stays at work all night, I don’t wonder where he’s gone. I know he’s in the studio making magic — or sounds for a meatball.

Michael: Clearly our lives are wrapped up in work. From the friends we’ve made over the years, to the conversations at the dinner table. We talk about the day to day, our plans for the future, our dreams.

Ashley: Although the work plug gets pulled when our four-year-old says, “You can talk later.” That’s our cue to play a game or go for a walk.

Michael: Aqua Teen has followed this arc of our lives for a decade. For me, it’s been 10 years of making music, smashing stuff and building things like a giant potato gun all in the name of getting that perfect sound. Most of all, throughout every damn episode we laughed. Matt and Dave are so uncommonly funny that – well, let’s just say there are a lot of memories.

Ashley: Like being animated into the shows?

Michael: We’ve been immortalized in animation. For better or pinheads.


Ashley and Michael’s cartoon selves.

Ashley: To my surprise, I found out that I was being included as a waitress in the finale. At first, I had this horrendous man body.

Michael: I cautioned her against saying anything — I mean, these are animators so you never know what they’ll do if you complain.

Ashley: They basically swung a bit too far in the opposite direction. I suddenly wanted my bad man body back. Luckily, we landed in a somewhat comfortable place. It’s embarrassing but also pretty fun — you can’t take it too seriously. Michael is in the final episode, as are the show’s creators. There are a number of people in the last two episodes who have meant something to the show over the years. We’re all hanging out in the restaurant together or at a magic show.

Michael: It’s a nod to the fact that the show is like a little hometown project that kept going and going. We’re all sad it is over, because we are also big fans of the show.

Ashley: You could throw a dart at the last decade and never hit a dull moment. I know when I look back I will remember how amazing the movie experience was, being at Skywalker and falling in love. The many episodes animated at Awesome Inc. — especially the final episodes — were a crazy, emotional time of unrivaled unity.

——
The pair is currently working on Your Pretty Face Is Going To Hell, Squidbillies and Carl’s Lock, along with a variety of commercials and promo projects.

Jake Kluge’s Tips: How to be a successful audio engineer

Audio editor/mixer Jake Kluge has worked at Dallas’ Charlieuniformtango for over 14 years. This audio vet knows a thing or two about how to succeed in this business. His recent work includes spots for Fiat and Home Depot out of The Richards Group, as well as a project for Universal Orlando via TM Advertising.

This busy pro was kind enough to share his wisdom with postPerspective.

Collaborate
If you’re the type of engineer who works with a client sitting behind you, as many of us are, your middle name should be Collaboration. You’re working for your clients. Their word is final. But there is a reason they come to you — you’re good at this “sound thing.” So it’s ok to ask, “What if we tried something like this?” It’s even more ok to ask, “What were you hoping to hear for this area?”

Kluge recently collaborated with The Richards Group for Fiat and Home Depot.

Your Ears: Take Care of Those Money Makers Inside the Studio
Monitor at a reasonable level. You’re going to be using your ears to make a living. You’ll probably even use them when you get home from work. Do not monitor at 90dB. Do not monitor consistently at 85dB. Use your judgment, but keep it down to a reasonable level. The rule is, if your ears are ringing after a session, that’s bad. Don’t do that.

Your Ears: Take Care of Those Money Makers Outside the Studio
Carrying over from the last tip. Those ears of yours — moneymakers — are pretty darn important to your career. Wear earplugs at concerts. Wear earplugs at band practice. Wear earplugs during fireworks. Just wear earplugs. Buy some good ones and keep them with you at all times. You won’t regret it.

Change Your Mouse/Trackball/Tablet Every So Often
I’ve been trackballing for 15 years solid. Recently, I have experienced what I assume is carpal tunnel syndrome in my wrist. It’s not bad, and if I switch from my trackball (old faithful) to a mouse, my wrist feels better. So my conclusion is, switch it up every once in a while. Oh, and stop slouching, you slob.

Other People Have Good Ideas Too
If you’re lucky enough to have other audio people working with you, pick their brains about everything. “What’s another good search word for ‘whoosh’?” “Why is my master fader clipping so hard?” “Do these pants make me look fat?” That kind of stuff.

Fortune Favors the Bold
It’s true. Go out and get the big job. Try out crazy ideas in your sound design or mix. Ask out that girl/guy you’ve been crushing on… and send me the wedding picture.

Jake Kluge is an audio editor and mixer at Charlieuniformtango (@CUTango) in Dallas. You can reach him at jake@charlietango.com.