Category Archives: Audio Mixing

Behind the Title: Grey Ghost Music mix engineer Greg Geitzenauer

NAME: Greg Geitzenauer

COMPANY: Minneapolis-based Grey Ghost Music

CAN YOU DESCRIBE YOUR COMPANY?
Side A: Music production, creative direction and licensing for the advertising and marketing industries. Side B: Audio post production for the advertising and marketing industries.

WHAT’S YOUR JOB TITLE?
Senior Mix Engineer

WHAT DOES THAT ENTAIL?
All the hands-on audio post work our clients need — from VO recording, editing and forensic/cleanup work to sound design and final mixing.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The number of times my voice has ended up in a final spot when the script calls for “recording engineer.”

WHAT’S YOUR FAVORITE PART OF THE JOB?
There are some really funny people in this industry. I laugh a lot.

WHAT’S YOUR LEAST FAVORITE?
Working on a particular project so long that I lose perspective on whether the changes being made are still helping.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I get to work early — the time I get to spend confirming all my shit is together.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cutting together music for my daughter’s dance team.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was 14 when I found out what a recording engineer did, and I just knew. Audio and technology… it just pushes all my buttons.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Essentia Water, Best Buy, Comcast, Invisalign, 3M and Xcel Energy.

Invisalign

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
An anti-smoking radio campaign that won Radio Mercury and One Show awards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools HD, Kensington Expert Mouse trackball and Pentel Quicker-Clicker mechanical pencils.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Reddit and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Go home.

JoJo Whilden/Hulu

Color and audio post for Hulu’s The Looming Tower

Hulu’s limited series, The Looming Tower, explores the rivalries and missed opportunities that beset US law enforcement and intelligence communities in the lead-up to the 9/11 attacks. Based on the Pulitzer Prize-winning book by Lawrence Wright, who also shares credit as executive producer with Dan Futterman and Alex Gibney, the show’s 10 episodes paint an absorbing, if troubling, portrait of the rise of Osama bin Laden and al-Qaida, and offer fresh insight into the complex people who were at the center of the fight against terrorism.

For The Looming Tower’s sound and picture post team, the show’s sensitive subject matter and blend of dramatizations and archival media posed significant technical and creative challenges. Colorist Jack Lewars and online editor Jeff Cornell of Technicolor PostWorks New York were tasked with integrating grainy, run-and-gun news footage dating back to 1998 with crisply shot, high-resolution original cinematography. Supervising sound designer/effects mixer Ruy García and re-recording mixer Martin Czembor from PostWorks, along with a Foley team from Alchemy Post Sound, were charged with helping to bring disparate environments and action to life without sensationalizing or straying from historical accuracy.

L-R: colorist Jack Lewars and editor Jeff Cornell

Lewars and Cornell mastered the series in Dolby Vision HDR, working from the production’s camera original 2K and 3.4K ArriRaw files. Most of the color grading and conforming work was done with a light touch, according to Lewars, as the objective was to adhere to a look that appeared real and unadulterated. The goal was for viewers to feel they are behind the scenes, watching events as they happened.

Where more specific grades were applied, it was done to support the narrative. “We developed different look sets for the FBI and CIA headquarters, so people weren’t confused about where we were,” Lewars explains. “The CIA was working out of the basement floors of a building, so it’s dark and cool — the light is generated by fluorescent fixtures in the room. The FBI is in an older office building — its drop ceiling also has fluorescent lighting, but there is a lot of exterior light, so it’s greener, warmer.”

The show adds to the sense of realism by mixing actual news footage and other archival media with dramatic recreations of those same events. Lewars and Cornell help to cement the effect by manipulating imagery to cut together seamlessly. “In one episode, we matched an interview with Osama bin Laden from the late ‘90s with new material shot with an Arri Alexa,” recalls Lewars. “We used color correction and editorial effects to blend the two worlds.”

Cornell degraded some scenes to make them match older, real-world media. “I took the Alexa material and ‘muddied’ it up by exporting it to compressed SD files and then cutting it back into the master timeline,” he notes. “We also added little digital hits to make it feel like the archival footage.”

While the color grade was subtle and adhered closely to reality, it still packed an emotional punch. That is most apparent in a later episode that includes the attack on the Twin Towers. “The episode starts off in New York early in the morning,” says Lewars. “We have a series of beauty shots of the city and it’s a glorious day. It’s a big contrast to what follows — archival footage after the towers have fallen where everything is a white haze of dust and debris.”

Audio Post
The sound team also strove to remain faithful to real events. García recalls his first conversations about the show’s sound needs during pre-production spotting sessions with executive producer Futterman and editor Daniel A. Valverde. “It was clear that we didn’t want to glamorize anything,” he says. “Still, we wanted to create an impact. We wanted people to feel like they were right in the middle of it, experiencing things as they happened.”

García says that his sound team approached the project as if it were a documentary, protecting the performances and relying on sound effects that were authentic in terms of time and place. “With the news footage, we stuck with archival sounds matching the original production footage and accentuating whatever sounds were in there that would connect emotionally to the characters,” he explains. “When we moved to the narrative side with the actors, we’d take more creative liberties and add detail and texture to draw you into the space and focus on the story.”

He notes that the drive for authenticity extended to crowd scenes, where native speakers were used as voice actors. Crowd sounds for scenes set in the Middle East, for example, were drawn from original recordings made in those regions to ensure local accents were correct.

Much like Lewars’ approach to color, García and his crew used sound to underscore environmental and psychological differences between CIA and FBI headquarters. “We did subtle things,” he notes. “The CIA has more advanced technology, so everything there sounds sharper and newer versus the FBI where you hear older phones and computers.”

The Foley provided by artists and mixers from Alchemy Post Sound further enhanced differences between the two environments. “It’s all about the story, and sound played a very important role in adding tension between characters,” says Leslie Bloome, Alchemy’s lead Foley artist. “A good example is the scene where CIA station chief Diane Marsh is berating an FBI agent while casually applying her makeup. Her vicious attitude toward the FBI agent combined with the subtle sounds of her makeup created a very interesting juxtaposition that added to the story.”

In addition to footsteps, the Foley team created incidental sounds used to enhance or add dimension to explosions, action and environments. For a scene where FBI agents are inspecting a warehouse filled with debris from the embassy bombings in Africa, artists recorded brick and metal sounds on a Foley stage designed to capture natural ambience. “Normally, a post mixer will apply reverb to place Foley in an environment,” says Foley artist Joanna Fang. “But we recorded the effects in our live room to get the perspective just right as people are walking around the warehouse. You can hear the mayhem as the FBI agents are documenting evidence.”

“Much of the story is about what went wrong, about the miscommunication between the CIA and FBI,” adds Foley mixer Ryan Collison, “and we wanted to help get that point across.”

The soundtrack to the series assumed its final form on a mix stage at PostWorks. Czembor spent weeks mixing dialogue, sound and music elements into what he described as a cinematic soundtrack.

L-R: Martin Czembor and Ruy García

Czembor notes that the sound team provided a wealth of material, but for certain emotionally charged scenes, such as the attack on the USS Cole, the producers felt that less was more. “Danny Futterman’s conceptual approach was to go with almost no sound and let the music and the story speak for themselves,” he says. “That was super challenging, because while you want to build tension, you are stripping it down so there’s less and less and less.”

Czembor adds that music, from composer Will Bates, is used with great effect throughout the series, even though it might go by unnoticed by viewers. “There is actually a lot more music in the series than you might realize,” he says. “That’s because it’s not so ‘musical;’ there aren’t a lot of melodies or harmonies. It’s more textural…soundscapes in a way. It blends in.”

Czembor says that as a longtime New Yorker, working on the show held special resonance for him, and he was impressed with the powerful, yet measured way it brings history back to life. “The performances by the cast are so strong,” he says. “That made it a pleasure to work on. It inspires you to add to the texture and do your job really well.”


Pace Pictures opens large audio post and finishing studio in Hollywood

Pace Pictures has opened a new sound and picture finishing facility in Hollywood. The 20,000-square-foot site offers editorial finishing, color grading, visual effects, titling, sound editorial and sound mixing services. Key resources include a 20-seat 4K color grading theater, two additional HDR color grading suites and 10 editorial finishing suites. It also features a Dolby Atmos mix stage designed by three-time Academy Award-winning re-recording mixer Michael Minkler, who is a partner in the company’s sound division.

The new independently owned facility is located within IgnitedSpaces, a co-working site whose 45,000 square feet span three floors along Hollywood Boulevard. IgnitedSpaces targets media and entertainment professionals and creatives with executive offices, editorial suites, conference rooms and hospitality-driven office services. Pace Pictures has formed a strategic partnership with IgnitedSpaces to provide film and television productions with service packages encompassing the entire production lifecycle.

“We’re offering a turnkey solution where everything is on-demand,” says Pace Pictures founder Heath Ryan. “A producer can start out at IgnitedSpaces with a single desk and add offices as the production grows. When they move into post production, they can use our facilities to manage their media and finish their projects. When the production is over, their footprint shrinks, overnight.”

Pace Pictures is currently providing sound services for the upcoming Universal Pictures release Mamma Mia! Here We Go Again. It is also handling post work for a VR concert film from this year’s Coachella Valley Music and Arts Festival.

Completed projects include the independent features Silver Lake, Flower and The Resurrection of Gavin Stone, the TV series iZombie, VR concerts for the band Coldplay, Austin City Limits and Lollapalooza, and a Mariah Carey music video related to Sony Pictures’ animated feature The Star.

Technical features of the new facility include three DaVinci Resolve Studio color grading suites with professional color consoles, a Barco 4K HDR digital cinema projector in the finishing theater, and dual Avid Pro Tools S6 consoles in the Dolby Atmos mix stage, which also includes four Pro Tools HDX systems. The site features facilities for sound design, ADR and voiceover recording, title design and insert shooting. Onsite media management includes a robust SAN network, as well as LTO7 archiving and dailies services, and cold storage.

Ryan is an editor who has operated Pace Pictures as an editorial service for more than 15 years. His many credits include the films Woody Woodpecker, Veronica Mars, The Little Rascals, Lawless Range and The Lookalike, as well as numerous concert films, music clips, television specials and virtual reality productions. He has also served as a producer on projects for Hallmark, Mariah Carey, Queen Latifah and others. Originally from Australia, he began his career with the Australian Broadcasting Corporation.

Ryan notes that the goal of the new venture is to break from the traditional facility model and provide producers with flexible solutions tailored to their budgets and creative needs. “Clients do not have to use our talent; they can bring in their own colorists, editors and mixers,” he says. “We can be a small part of the production, or we can be the backbone.”


Sound editor/re-recording mixer Will Files joins Sony Pictures Post

Sony Pictures Post Production Services has added supervising sound editor/re-recording mixer Will Files, who spent a decade at Skywalker Sound. He brings with him credits on more than 80 feature films, including Passengers, Deadpool, Star Wars: The Force Awakens and Fantastic Four.

Files won a 2018 MPSE Golden Reel Award for his work on War for the Planet of the Apes. His current project is the upcoming Columbia Pictures release Venom, out in US theaters this October.

Files says he was also attracted by Sony Pictures’ ability to support his work both as a sound editor/sound designer and as a re-recording mixer. “I tend to wear a lot of hats. I often supervise sound, create sound design and mix my projects,” he says. “Sony Pictures has embraced modern workflows by creating technically-advanced rooms that allow sound artists to begin mixing as soon as they begin editing. It makes the process more efficient and improves creative storytelling.”

Files will work in a new pre-dub mixing stage and sound design studio on the Sony Pictures lot in Culver City. The stage has Dolby Atmos mixing capabilities and features two Avid S6 mixing consoles, four Pro Tools systems, a Sony 4K digital cinema projector and a variety of other support gear.

Files describes the stage as a sound designer/mixer’s dream come true. “It’s a medium-size space, big enough to mix a movie, but also intimate. You don’t feel swallowed up when it’s just you and the filmmaker,” he says. “It’s very conducive to the creative process.”

Files began his career with Skywalker Sound in 2002, shortly after graduating from the University of North Carolina School of the Arts. He earned his first credit as supervising sound editor on the 2008 sci-fi hit Cloverfield. His many other credits include Star Trek: Into Darkness, Dawn of the Planet of the Apes and Loving.


AlphaDogs’ Terence Curren is on a quest: to prove why pros matter

By Randi Altman

Many of you might already know Terence Curren, owner of Burbank’s AlphaDogs, from his hosting of the monthly Editor’s Lounge, or his podcast The Terence and Philip Show, which he co-hosts with Philip Hodgetts. He’s also taken to producing fun, educational videos that break down the importance of color or ADR, for example.

He has a knack for offering simple explanations for necessary parts of the post workflow while hammering home what post pros bring to the table.

I reached out to Terry to find out more.

How do you pick the topics you are going to tackle? Is it based on questions you get from clients? Those just starting in the industry?
Good question. It isn’t about clients as they already know most of this stuff. It’s actually a much deeper project surrounding a much deeper subject. As you well know, the media creation tools that used to be so expensive, and acted as a barrier to entry, are now ubiquitous and inexpensive. So the question becomes, “When everyone has editing software, why should someone pay a lot for an editor, colorist, audio mixer, etc.?”

ADR engineer Juan-Lucas Benavidez

Most folks realize there is a value to knowledge accrued from experience. How do you get the viewers to recognize and appreciate the difference in craftsmanship between a polished show or movie and a typical YouTube video? What I realized is there are very few people on the planet who can’t afford a pencil and some paper, and yet how many great writers are there? How many folks make a decent living writing, and why are readers willing to pay for good writing?

The answer I came up with is that almost anyone can recognize the difference between a paper written by a 5th grader and one written by a college graduate. Why? Well, from the time we are very little, adults start reading to us. Then we spend every school day learning more about writing. When you realize the hard work that goes into developing as a good writer, you are more inclined to pay a master at that craft. So how do we get folks to realize the value we bring to our craft?

Our biggest problem comes from the “magician” aspect of what we do. For most of the history of Hollywood, the tricks of the trade were kept hidden to help sell the illusion. Why should we get paid when the average viewer has a 4K camera phone with editing software on it?

That is what has spurred my mission. Educating the average viewer to the value we bring to the table. Making them aware of bad sound, poor lighting, a lack of color correction, etc. If they are aware of poorer quality, maybe they will begin to reject it, and we can continue to be gainfully employed exercising our hard-earned skills.

Boom operator Sam Vargas.

How often is your studio brought in to fix a project done by someone with access to the tools, but not the experience?
This actually happens a lot, and it is usually harder to fix something that has been done incorrectly than it is to just do it right from the beginning. However, at least they tried, and that is the point of my quest: to get folks to recognize and want a better product. I would rather see that they tried to make it better and failed than just accepted poor quality as “good enough.”

Your most recent video tackles ADR. So let’s talk about that for a bit. How complicated a task is ADR, specifically matching new audio to the existing video?
We do a fair amount of ADR recording, which isn’t that hard for the experienced audio mixer. That said, I found out how hard it is being the talent doing ADR. It sounds a lot easier than it actually is when you are trying to match your delivery from the original recording.

What do you use for ADR?
We use Avid Pro Tools as our primary audio tool, but there are some additional tools in Fairlight (now included free in Blackmagic’s Resolve) that make ADR even easier for the mixer and the talent. Our mic is a Sennheiser long shotgun, but for ADR we try to match the field mic when possible.

I suppose Resolve proves your point — professional tools accessible for free to the masses?
Yeah. I can afford to buy a paint brush and some paint. It would take me a lot of years of practice to be a Michelangelo. Maybe Malcolm Gladwell, who posits that it takes 10,000 hours of practice to master something, is not too far off target.

What about those clients who don’t think you need ADR and believe a noise reduction tool can remove the offending noise instead?
We showed some noise reduction tools in another video in the series, but they are better at removing consistent sounds like air conditioner hum. We chose the freeway location as the background noise would be much harder to remove. In this case, ADR was the best choice.

It’s also good for replacing fumbled dialogue or something that was rewritten after production was completed. Often you can get away with cheating a new line of dialogue over a cutaway of another actor. To make the new line match perfectly, you would rerecord all the dialogue.

What did you shoot the video with? What about editing and color?
We shot with a Blackmagic Cinema Camera in RAW so we could fix more in post. Editing was done in Avid Media Composer with final color in Blackmagic’s Resolve. All the audio was handled in Avid’s Pro Tools.

What other topics have you covered in this series?
So far we’ve covered some audio issues and the need for color correction. We are in the planning stages for more videos, but we’re always looking for suggestions. Hint, hint.

Ok, letting you go, but is there anything I haven’t asked that’s important?
I am hoping that others who are more talented than I am pick up the mantle and continue the quest to educate the viewers. The goal is to prevent us all from becoming “starving artists” in a world of mediocre media content.


Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode numbers. The total number of episodes in a show’s season can be 13 or 10 or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven and a half hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned a 2013 Oscar nomination for sound, as well as MPSE and BAFTA nominations) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, along with a nomination for dialogue and ADR editing. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds: (L-R) Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialog and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through a cloud-based film management/collaboration tool called Pix. Pix let the showrunners, VFX supervisor, composer, sound team and picture team share content and share notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialog and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music, playing with repeated rhythms. Breaking the anticipated rhythm helped to catch the audience off guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.


London’s LipSync upgrades studio, adds Dolby Atmos

LipSync Post, located in London’s Soho, has upgraded its studio with Dolby Atmos and installed a new control system. To accomplish this, LipSync teamed up with HHB Communications’ Scrub division to create a hybrid dual Avid S6 and AMS Neve DFC3D desk, while also upgrading the room with a new mastering unit to create Dolby Atmos mixes. Now that the upgrade to Theatre 2 is complete, LipSync plans to upgrade Theatre 1 this summer.

The setup offers the best of both worlds, with full access to the classic Neve DFC sound while also bringing more hands-on control of their Avid Pro Tools automation via the S6 desks. In order to streamline their workflow as more projects are mixed exclusively “in the box,” LipSync installed the S6s within the same frame as the DFC, with custom furniture created by Frozen Fish Design. This dual-operator configuration frees the mix engineers to work on separate Pro Tools systems simultaneously for fast, efficient turnaround to meet crucial project deadlines.

“The move into extended surround formats like Dolby Atmos is very exciting,” explains LipSync senior re-recording mixer Rob Hughes. “We have now completed our first feature mix in the refitted theater (Vita & Virginia directed by Chanya Button). It has a very detailed, involved soundtrack and the new system handled it with ease.”


Behind the Title: Spacewalk Sound’s Matthew Bobb

NAME: Matthew Bobb

COMPANY: Pasadena, California’s SpaceWalk Sound 

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service audio post facility specializing in commercials, trailers and spatial sound for virtual reality (VR). We have a heavy focus on branded content with clients such as Panda Express and Biore and studios like Warner Bros., Universal and Netflix.

WHAT’S YOUR JOB TITLE?
Partner/Sound Supervisor/Composer

WHAT DOES THAT ENTAIL?
I’ve transitioned more into the sound supervisor role. We have a fantastic group of sound designers and mixers that work here, plus a support staff to keep us on track and on budget. Putting my faith in them has allowed me to step away from the small details and look at the bigger picture on every project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
We’re still a small company, so while I mix and compose a little less than before, I find my days being filled with keeping the team moving forward. Most of what falls under my role is approving mixes, prepping for in-house clients the next day, sending out proposals and following up on new leads. A lot of our work is short form, so projects are in and out the door pretty fast — sometimes it’s all in one day. That means I always have to keep one eye on what’s coming around the corner.

The Greatest Showman 360

WHAT’S YOUR FAVORITE PART OF THE JOB?
Lately, it has been showing VR to people who have never tried it or have had a bad first experience, which is very unfortunate since it is a great medium. However, that all changes when you see someone come out of a headset exclaiming, “Wow, that is a game changer!”

We have been very fortunate to work on some well-known and loved properties, and having people get a whole new experience out of something familiar is exciting.

WHAT’S YOUR LEAST FAVORITE?
Dealing with sloppy edits. We have been pushing our clients to bring us into the fold as early as v1 to make suggestions on the flow of each project. I’ll keep my eye tuned to the timing of the dialog in relation to the music and effects, while making sure attention has been paid to the pacing of the edit to the music. I understand that the editor and director will have their attention elsewhere, so I’m trying to bring up potential issues they may miss early enough that they can be addressed.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I would say 3pm is pretty great most days. I should have accomplished something major by this point, and I’m moments away from that afternoon iced coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be crafting the ultimate sandwich, trying different combinations of meats, cheeses, spreads and veggies. I’d have a small shop, preferably somewhere tropical. We’d be open for breakfast and lunch, close around 4pm, and then I’d head to the beach to sip on Russell’s Reserve Small Batch Bourbon as the sun sets. Yes, I’ve given this some thought.

WHY DID YOU CHOOSE THIS PROFESSION?
I came from music but quickly burned out on the road. Studio life suited me much more, except all the music studios I worked at seemed to lack focus, or at least the clientele lacked focus. I fell into a few sound design gigs on the side and really enjoyed the creativity and reward of seeing my work out in the world.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We had a great year working alongside SunnyBoy Entertainment on VR content for the Hollywood studios including IT: Float, The Greatest Showman 360, Annabelle Creation: Bee’s Room and Pacific Rim: Inside the Uprising 360. We also released our first piece of interactive content, IT: Escape from Pennywise, for Gear VR and iOS.

Most recently, I worked on Scoring The Last Jedi: A 360 VR Experience for Star Wars: The Last Jedi. It takes Star Wars fans on a VIP behind-the-scenes intergalactic expedition, giving them a virtual tour of The Last Jedi’s production and soundstages and dropping them face-to-face with Academy Award-winning film composer John Williams and director Rian Johnson.

Personally, I got to compose two Panda Express commercials, which was a real treat considering I sustained myself through college on a healthy diet of orange chicken.

It: Float

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It: Float was very special. It was exciting to take an existing property that was not only created by Stephen King but was also already loved by millions of people, and expand on it. The experience brought the viewer under the streets and into the sewers with Pennywise the clown. We were able to get very creative with spatial sound, using his voice to guide you through the experience without being able to see him. You never knew where he was lurking. The 360 audio really ramped up the terror! Plus, we had a great live activation at San Diego Comic Con where thousands of people came through and left pumped to see a glimpse of the film’s remake.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
It’s hard to imagine my life without these three: Spotify Premium, no ads! Philips Hue lights for those vibes. Lastly, Slack keeps our office running. It’s our not-so-secret weapon.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I treat social media as an escape. I’ll follow The Onion for a good laugh, or Anthony Bourdain to see some far-flung corner of the earth I didn’t know about.

DO YOU LISTEN TO MUSIC WHEN NOT MIXING OR EDITING?
If I’m doing busy work, I prefer something instrumental like Eric Prydz, Tycho, Bonobo — something with a melody and a groove that won’t make me fall asleep, but isn’t too distracting either.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
The best part about Los Angeles is how easy it is to escape Los Angeles. My family will hit the road for long weekends to Palm Springs, Big Bear or San Diego. We find a good mix of active (hiking) and inactive (2pm naps) things to do to recharge.


Pacific Rim: Uprising’s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on such a grand scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu on June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that in the original film, director Guillermo del Toro (a fan of the Portal game series) cast actress Ellen McLain as the voice of Gipsy Avenger, since she did the GLaDOS computer voice from the Portal video games. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with a sub-harmonic synthesis plug-in called Waves MaxxBass to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”

That was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja, and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic-machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw bird shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”

 

Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their whole library when looking for Pacific Rim: Uprising sounds. It has its own library of specially designed, signature sounds that are all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says, “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, such as when the Kaiju come together to form the Mega-Kaiju. “As the action escalates and goes off-camera, it was more of a shadow, and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second-to-none but the people there are even better than that. “We have been very fortunate to work with great people, from Steven DeKnight our director to Dylan Highsmith our picture editor to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Behind the Title: Sim’s supervising sound editor David McCallum

Name: David McCallum

Company: Sim International — Sim Post (Sound) in Toronto

Can you describe your company?
Sim provides equipment and creative services for projects in film and television. We have offices in Toronto, Los Angeles, New York City, Atlanta and Vancouver. I work as part of our Sim Post team in Toronto’s King St. East post facility where our emphasis is post sound and picture. We’re a small division, but we’ve been together as a team for nearly 15 years, the last three of which have been as part of Sim.

What’s your job title?
Supervising Sound Editor

What does that entail? 
My work is 90% project and client focused. I work directly on the sound design and sound edit for television and film projects, collaborating with directors and producers to shape the sound for their show. I also manage a team of people at Sim Post (Sound) Toronto that make up our sound crew(s). Part of my job also involves studio time, working closely with actors and directors to help shape the final performances that end up on the screen.

What would surprise people the most about what falls under that title?
I don’t work extreme hours. The screen industry, and post production in particular, has a well-deserved reputation for working its people hard, with long hours and tight demands as the norm rather than the exception. I don’t believe in overworking either my crew or myself. I strongly believe that people work best under predictable conditions.

Individuals need to be placed in positions to succeed, not merely survive. So, I put a lot of effort into managing my workload, getting on top of things well in advance of deadlines. I try to keep my days and weeks structured and organized so that I’m at my best as much as possible.

Sim’s ADR room.

What’s your favorite part of the job?
Finding a unique way to solve a sound problem. I love discovering a new trick, like using parts of two different words to make a character say a new word. You never know when or where you can find these kinds of solutions — hearing the possibilities requires patience and a keen ear. Sometimes the things I put together sound ridiculous, but because I mostly work alone nobody gets to hear my mistakes. Every now and then something unexpected works, and it’s golden.

What’s your least favorite?
There can be a lot of politics that permeate the film and television world. I prefer direct communication and collaboration, even if what you hear from someone isn’t what you want to hear.

What is your favorite time of the day?
The start. I like getting in a bit early, relaxing with a good coffee while I map out my goals for the day. Every day something good needs to be accomplished, and if the day gets off to a positive start then there is a better chance that all my objectives for that day will be met.

If you didn’t have this job, what would you be doing instead?
I would probably still be working in audio, but perhaps on the consumer side, selling high-end tube audio electronics and turntables. Either that, or I would be a tennis instructor.

Why did you choose this profession? 
That is actually a long story. I didn’t find this profession or career path on my own. I was put on it by a very thoughtful university professor named Clarke Mackay at Queen’s University in Kingston, Ontario, who saw a skill set in me that I did not recognize in myself. The path started with Clarke, went through the Academy of Canadian Cinema and Television and on to Jane Tattersall, who is senior VP of Sim Toronto.

Jane’s been the strongest influence in my career by far, teaching and steering me along the way. Not all lessons were intended, and sometimes we found ourselves on the same path. Sim Post (Sound) went through so many changes, and we managed a lot of them together. I don’t know if I would have found or stayed in this profession without Clarke or Jane, so in a way they have helped choose it for me.

Can you name some recent projects you have worked on?
The Handmaid’s Tale, Vikings, Alias Grace, Cardinal, Molly’s Game, Kin and The Man Who Invented Christmas.

What is the project that you are most proud of?
The one I’m working on now! More seriously, that does feel like an impossible question to answer, as I’ve felt pride at numerous times in my career. But most recently I would say our work on The Handmaid’s Tale has been tremendously rewarding.

I’d also mention a small Canadian documentary I was a part of in 2016 called Unarmed Verses. It’s a National Film Board of Canada documentary by director Charles Officer and producer Lea Marin. It touched my heart.

I’m also very proud of some of my colleagues that I’ve been overseeing for a few years now, in particular Claire Dobson and Krystin Hunter. Claire and Krystin are two young editors who are both doing extremely impressive work with me. I’m very proud of them.

Name three pieces of technology that you can’t live without.
Avid Pro Tools, iZotope RX and NOS Amperex 6922 vacuum tubes.

What social media channels do you follow?
I’ve only ever participated in Facebook, but the global political climate has me off of social media right now. I do my best to stay away from the “comments section of life.”

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
I try to reduce stress within the workplace. I have a few rituals that help… and good coffee. Nothing beats stress in the morning like a delicious coffee. But more practically, I try my best to stay on top of my work and make sure I thoroughly understand my client’s expectations. I then actively manage my work so I’m not pushed up against deadlines.

But really the best tool is my team. I have an amazing team of people around me and I would be nothing without them.