
Netflix’s Lost in Space: New sounds for a classic series

By Jennifer Walden

Netflix’s Lost in Space series, a remake of the 1965 television show, is a playground for sound. In the first two episodes alone, the series introduces at least five unique environments, including an alien planet, a whole world of new tech — from wristband communication systems to medical analysis devices — new modes of transportation, an organic-based robot lifeform and its correlating technologies, a massive explosion in space and so much more.

It was a mission not easily undertaken, but if anyone could manage it, it was four-time Emmy Award-winning supervising sound editor Benjamin Cook of 424 Post in Culver City. He’s led the sound teams on series like Starz’s Black Sails, Counterpart and Magic City, as well as HBO’s The Pacific, Rome and Deadwood, to name a few.

Benjamin Cook

Lost in Space was a reunion of sorts for members of the Black Sails post sound team. Making the jump from pirate ships to spaceships were sound effects editors Jeffrey Pitts, Shaughnessy Hare, Charles Maynes, Hector Gika and Trevor Metz; Foley artists Jeffrey Wilhoit and Dylan Tuomy-Wilhoit; Foley mixer Brett Voss; and re-recording mixers Onnalee Blank and Mathew Waters.

“I really enjoyed the crew on Lost in Space. I had great editors and mixers — really super-creative, top-notch people,” says Cook, who also had help from co-supervising sound editor Branden Spencer. “Sound effects-wise there was an enormous amount of elements to create and record. Everyone involved contributed. You’re establishing a lot of sounds in those first two episodes that are carried on throughout the rest of the season.”

Soundscapes
So where does one begin on such a sound-intensive show? The initial focus was on the soundscapes, such as the sound of the alien planet’s different biomes, and the sound of different areas on the ships. “Before I saw any visuals, the showrunners wanted me to send them some ‘alien planet sounds,’ but there is a huge difference between Mars and Dagobah,” explains Cook. “After talking with them for a bit, we narrowed down some areas to focus on, like the glacier, the badlands and the forest area.”

For the forest area, Cook began by finding interesting snippets of animal, bird and insect recordings, like a single chirp or little song phrase that he could treat with pitching or other processing to create something new. Then he took those new sounds and positioned them in the sound field to build up beds of creatures to populate the alien forest. In that initial creation phase, Cook designed several tracks, which he could use for the rest of the season. “The show itself was shot in Canada, so that was one of the things they were fighting against — the showrunners were pretty conscious of not making the crash planet sound too Earthly. They really wanted it to sound alien.”
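
As a toy illustration of that repitch-and-layer approach (a minimal sketch, not Cook's actual toolchain, using the open-source librosa library and a hypothetical chirp.wav snippet), a single chirp can be pitched into several new "species" and scattered into a mono bed; the spatial positioning Cook describes would happen later in the DAW:

import numpy as np
import librosa
import soundfile as sf

# Load a short, isolated chirp (hypothetical source recording).
chirp, sr = librosa.load("chirp.wav", sr=None)

bed = np.zeros(sr * 30)  # 30-second ambience bed
rng = np.random.default_rng(7)

# Repitch the chirp into several new "creatures," then scatter a few
# calls from each across the bed at random positions and levels.
for steps in (-12, -7, 5, 9):  # semitones; sub-octave growls to alien-high trills
    creature = librosa.effects.pitch_shift(chirp, sr=sr, n_steps=steps)
    for _ in range(6):
        start = rng.integers(0, len(bed) - len(creature))
        bed[start:start + len(creature)] += creature * rng.uniform(0.2, 0.6)

sf.write("alien_forest_bed.wav", bed / np.max(np.abs(bed)), sr)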

Another huge aspect of the series’ sound is the communication systems. The characters talk to each other through the headsets in their spacesuit helmets, and through wristband communications. Each family has their own personal ship, called a Jupiter, which can contact other Jupiter ships through shortwave radios. They use the same radios to communicate with their all-terrain vehicles, called rovers. Cook notes these radios had an intentional retro, ham-radio feel. The Jupiters can send/receive long-distance transmissions from the planet’s surface to the main ship, called Resolute, in space. The families can also communicate with their Jupiter’s onboard systems.

Each mode of communication sounds different and was handled differently in post. Some processing was handled by the re-recording mixers, and some was created by the sound editorial team. For example, in Episode 1 Judy Robinson (Taylor Russell) is frozen underwater in a glacial lake. Whenever the shot cuts to Judy’s face inside her helmet, the sound is very close and claustrophobic.

Judy’s voice bounces off the helmet’s face-shield. She hears her sister through the headset and it’s a small, slightly futzed speaker sound. The processing on both Judy’s voice and her sister’s voice sounds very distinct, yet natural. “That was all Onnalee Blank and Mathew Waters,” says Cook. “They mixed this show, and they both bring so much to the table creatively. They’ll do additional futzing and treatments, like on the helmets. That was something that Onna wanted to do, to make it really sound like an ‘inside a helmet’ sound. It has that special quality to it.”

On the flipside, the ship’s voice was a process that Cook created. Co-supervisor Spencer recorded the voice actor’s lines in ADR and then Cook added vocoding, EQ futz and reverb to sell the idea that the voice was coming through the ship’s speakers. “Sometimes we worldized the lines by playing them through a speaker and recording them. I really tried to avoid too much reverb or heavy futzing knowing that on the stage the mixers may do additional processing,” he says.
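
A crude stand-in for the EQ-futz stage of such a chain (a sketch only, leaving out the vocoding and worldizing entirely, and assuming a hypothetical adr_line.wav) is to band-limit the line the way a small ship's speaker would:

import soundfile as sf
from scipy.signal import butter, sosfilt

voice, sr = sf.read("adr_line.wav")

# Small-speaker "futz": keep roughly the 300 Hz to 3.4 kHz band of a comms channel.
sos = butter(4, [300, 3400], btype="bandpass", fs=sr, output="sos")
futzed = sosfilt(sos, voice, axis=0)

# A touch of clipping adds the overdriven-speaker edge; reverb and any
# further treatment would be left for the mix stage, as Cook describes.
futzed = futzed.clip(-0.5, 0.5) / 0.5

sf.write("ship_voice.wav", futzed, sr)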

In Episode 1, Will Robinson (Maxwell Jenkins) finds himself alone in the forest. He tries to call his father, John Robinson (Toby Stephens — a Black Sails alumnus as well), via his wristband comm system, but the transmission is interrupted by a strange, undulating, vocal-like sound. It’s interference from an alien ship that crashed nearby. Cook notes that the interference sound required thorough experimentation. “That was a difficult one. The showrunners wanted something organic and very eerie, but it also needed to be jarring. We did quite a few versions of that.”

For the main element in that sound, Cook chose whale sounds for their innate pitchy quality. He manipulated and processed the whale recordings using Symbolic Sound’s Kyma sound design workstation.

The Robot
Another challenging set of sounds was the one created for Will Robinson’s Robot (Brian Steele). The Robot makes dying sounds, movement sounds and face-light sounds when it’s processing information. It can transform its body to look more human. It can use its hands to fire energy blasts or as a tool to create heat. It says, “Danger, Will Robinson,” and “Danger, Dr. Smith.” The Robot is sometimes a good guy and sometimes a bad guy, and the sound needed to cover all of that. “The Robot was a job in itself,” says Cook. “One thing we had to do was to sell emotion, especially for his dying sounds and his interactions with Will and the family.”

One of Cook’s trickiest feats was to create the proper sense of weight and movement for the Robot, and to portray the idea that the Robot was alive and organic but still metallic. “It couldn’t be earthly technology. Traditionally for robot movement you will hear people use servo sounds, but I didn’t want to use any kind of servos. So, we had to create a sound with a similar aesthetic to a servo,” says Cook. He turned to the Robot’s Foley sounds, and devised a processing chain to heavily treat those movement tracks. “That generated the basic body movement for the Robot and then we sweetened its feet with heavier sound effects, like heavy metal clanking and deeper impact booms. We had a lot of textures for the different surfaces like rock and foliage that we used for its feet.”

The Robot’s face lights change color to let everyone know if it’s in good-mode or bad-mode. But there isn’t any overt sound to emphasize the lights as they move and change. If the camera is extremely close-up on the lights, then there’s a faint chiming or tinkling sound that accentuates their movement. Overall though, there is a “presence” sound for the Robot, an undulating tone that’s reminiscent of purring when it’s in good-mode. “The showrunners wanted a kind of purring sound, so I used my cat purring as one of the building block elements for that,” says Cook. When the Robot is in bad-mode, the sound is anxious, like a pulsing heartbeat, to set the audience on edge.

It wouldn’t be Lost in Space without the Robot’s iconic line, “Danger, Will Robinson.” Initially, the showrunners wanted that line to sound as close to the original 1960s delivery as possible. “But then they wanted it to sound unique too,” says Cook. “One comment was that they wanted it to sound like the Robot had metallic vocal cords. So we had to figure out ways to incorporate that into the treatment.” The vocal processing chain used several tools, from EQ, pitching and filtering to modulation plug-ins like Waves Morphoder and Dehumaniser by Krotos. “It was an extensive chain. It wasn’t just one particular tool; there were several of them,” he notes.

There are other sound elements that tie into the original 1960s series. For example, when Maureen Robinson (Molly Parker) and husband John are exploring the wreckage of the alien ship, they discover a virtual map room that lets them see into the solar system where they’ve crashed and into the galaxy beyond. The sound design during that sequence features sound material from the original show. “We treated and processed those original elements until they’re virtually unrecognizable, but they’re in there. We tried to pay tribute to the original when we could, when it was possible,” says Cook.

Other sound highlights include the Resolute exploding in space, which caused massive sections of the ship to break apart and collide. For that, Cook says contact microphones were used to capture the sound of tin cans being ripped apart. “There were so many fun things in the show for sound. From the first episode with the ship crash and it sinking into the glacier to the black hole sequence and the Robot fight in the season finale. The show had a lot of different challenges and a lot of opportunities for sound.”

Lost in Space was mixed in the Anthony Quinn Theater at Sony Pictures in 7.1 surround. Interestingly, the show was delivered in Dolby’s Home Atmos format. Cook explains, “When they booked the stage, the producers weren’t sure if we were going to do the show in Atmos or not. That was something they decided to do later, so we had to figure out a way to do it.”

They mixed the show in Atmos while referencing the 7.1 mix and then played those mixes back in a Dolby Home Atmos room to check them, making any necessary adjustments and creating the Atmos deliverables. “Between updates for visual effects and music as well as the Atmos mixes, we spent roughly 80 days on the dub stage for the 10 episodes,” concludes Cook.

Behind the Title: Grey Ghost Music mix engineer Greg Geitzenauer

NAME: Greg Geitzenauer

COMPANY: Minneapolis-based Grey Ghost Music

CAN YOU DESCRIBE YOUR COMPANY?
Side A: Music production, creative direction and licensing for the advertising and marketing industries. Side B: Audio post production for the advertising and marketing industries.

WHAT’S YOUR JOB TITLE?
Senior Mix Engineer

WHAT DOES THAT ENTAIL?
All the hands-on audio post work our clients need — from VO recording, editing, forensic/cleanup work to sound design and final mixing.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The number of times my voice has ended up in a final spot when the script calls for “recording engineer.”

WHAT’S YOUR FAVORITE PART OF THE JOB?
There are some really funny people in this industry. I laugh a lot.

WHAT’S YOUR LEAST FAVORITE?
Working on a particular project so long that I lose perspective on whether the changes being made are still helping.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I get to work early — the time I get to spend confirming all my shit is together.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cutting together music for my daughter’s dance team.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was 14 when I found out what a recording engineer did, and I just knew. Audio and technology… it just pushes all my buttons.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Essentia Water, Best Buy, Comcast, Invisalign, 3M and Xcel Energy.

Invisalign

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
An anti-smoking radio campaign that won Radio Mercury and One Show awards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools HD, Kensington Expert Mouse trackball and Pentel Quicker-Clicker mechanical pencils.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Reddit and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Go home.

JoJo Whilden/Hulu

Color and audio post for Hulu’s The Looming Tower

Hulu’s limited series, The Looming Tower, explores the rivalries and missed opportunities that beset US law enforcement and intelligence communities in the lead-up to the 9/11 attacks. Based on the Pulitzer Prize-winning book by Lawrence Wright, who also shares credit as executive producer with Dan Futterman and Alex Gibney, the show’s 10 episodes paint an absorbing, if troubling, portrait of the rise of Osama bin Laden and al-Qaida, and offer fresh insight into the complex people who were at the center of the fight against terrorism.

For The Looming Tower’s sound and picture post team, the show’s sensitive subject matter and blend of dramatizations and archival media posed significant technical and creative challenges. Colorist Jack Lewars and online editor Jeff Cornell of Technicolor PostWorks New York were tasked with integrating grainy, run-and-gun news footage dating back to 1998 with crisply shot, high-resolution original cinematography. Supervising sound designer/effects mixer Ruy García and re-recording mixer Martin Czembor from PostWorks, along with a Foley team from Alchemy Post Sound, were charged with helping to bring disparate environments and action to life, but without sensationalizing or straying from historical accuracy.

L-R: colorist Jack Lewars and editor Jeff Cornell

Lewars and Cornell mastered the series in Dolby Vision HDR, working from the production’s camera original 2K and 3.4K ArriRaw files. Most of the color grading and conforming work was done with a light touch, according to Lewars, as the objective was to adhere to a look that appeared real and unadulterated. The goal was for viewers to feel they are behind the scenes, watching events as they happened.

Where more specific grades were applied, it was done to support the narrative. “We developed different look sets for the FBI and CIA headquarters, so people weren’t confused about where we were,” Lewars explains. “The CIA was working out of the basement floors of a building, so it’s dark and cool — the light is generated by fluorescent fixtures in the room. The FBI is in an older office building — its drop ceiling also has fluorescent lighting, but there is a lot of exterior light, so it’s greener, warmer.”

The show adds to the sense of realism by mixing actual news footage and other archival media with dramatic recreations of those same events. Lewars and Cornell help to cement the effect by manipulating imagery to cut together seamlessly. “In one episode, we matched an interview with Osama bin Laden from the late ‘90s with new material shot with an Arri Alexa,” recalls Lewars. “We used color correction and editorial effects to blend the two worlds.”

Cornell degraded some scenes to make them match older, real-world media. “I took the Alexa material and ‘muddied’ it up by exporting it to compressed SD files and then cutting it back into the master timeline,” he notes. “We also added little digital hits to make it feel like the archival footage.”
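
The same degrade-and-reimport idea can be sketched with the open-source ffmpeg tool. This is an illustration of the general technique only, not Cornell's actual Avid workflow, and the filenames are hypothetical:

import subprocess

# Round-trip pristine footage through a compressed SD codec; the result
# is then cut back into the master timeline so it sits convincingly
# next to archival news video.
subprocess.run([
    "ffmpeg", "-i", "alexa_scene.mov",
    "-vf", "scale=720:480,setsar=8/9",      # NTSC SD raster
    "-c:v", "mpeg2video", "-b:v", "1500k",  # low-bitrate MPEG-2 "mud"
    "-an",                                  # picture-only degrade
    "degraded_sd.mpg",
], check=True)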

While the color grade was subtle and adhered closely to reality, it still packed an emotional punch. That is most apparent in a later episode that includes the attack on the Twin Towers. “The episode starts off in New York early in the morning,” says Lewars. “We have a series of beauty shots of the city and it’s a glorious day. It’s a big contrast to what follows — archival footage after the towers have fallen where everything is a white haze of dust and debris.”

Audio Post
The sound team also strove to remain faithful to real events. García recalls his first conversations about the show’s sound needs during pre-production spotting sessions with executive producer Futterman and editor Daniel A. Valverde. “It was clear that we didn’t want to glamorize anything,” he says. “Still, we wanted to create an impact. We wanted people to feel like they were right in the middle of it, experiencing things as they happened.”

García says that his sound team approached the project as if it were a documentary, protecting the performances and relying on sound effects that were authentic in terms of time and place. “With the news footage, we stuck with archival sounds matching the original production footage and accentuating whatever sounds were in there that would connect emotionally to the characters,” he explains. “When we moved to the narrative side with the actors, we’d take more creative liberties and add detail and texture to draw you into the space and focus on the story.”

He notes that the drive for authenticity extended to crowd scenes, where native speakers were used as voice actors. Crowd sounds set in the Middle East, for example, were from original recordings from those regions to ensure local accents were correct.

Much like Lewars’ approach to color, García and his crew used sound to underscore environmental and psychological differences between CIA and FBI headquarters. “We did subtle things,” he notes. “The CIA has more advanced technology, so everything there sounds sharper and newer versus the FBI where you hear older phones and computers.”

The Foley provided by artists and mixers from Alchemy Post Sound further enhanced differences between the two environments. “It’s all about the story, and sound played a very important role in adding tension between characters,” says Leslie Bloome, Alchemy’s lead Foley artist. “A good example is the scene where CIA station chief Diane Marsh is berating an FBI agent while casually applying her makeup. Her vicious attitude toward the FBI agent combined with the subtle sounds of her makeup created a very interesting juxtaposition that added to the story.”

In addition to footsteps, the Foley team created incidental sounds used to enhance or add dimension to explosions, action and environments. For a scene where FBI agents are inspecting a warehouse filled with debris from the embassy bombings in Africa, artists recorded brick and metal sounds on a Foley stage designed to capture natural ambience. “Normally, a post mixer will apply reverb to place Foley in an environment,” says Foley artist Joanna Fang. “But we recorded the effects in our live room to get the perspective just right as people are walking around the warehouse. You can hear the mayhem as the FBI agents are documenting evidence.”

“Much of the story is about what went wrong, about the miscommunication between the CIA and FBI,” adds Foley mixer Ryan Collison, “and we wanted to help get that point across.”

The soundtrack to the series assumed its final form on a mix stage at PostWorks. Czembor spent weeks mixing dialogue, sound and music elements into what he described as a cinematic soundtrack.

L-R: Martin Czembor and Ruy García

Czembor notes that the sound team provided a wealth of material, but for certain emotionally charged scenes, such as the attack on the USS Cole, the producers felt that less was more. “Danny Futterman’s conceptual approach was to go with almost no sound and let the music and the story speak for themselves,” he says. “That was super challenging, because while you want to build tension, you are stripping it down so there’s less and less and less.”

Czembor adds that music, from composer Will Bates, is used with great effect throughout the series, even though it might go by unnoticed by viewers. “There is actually a lot more music in the series than you might realize,” he says. “That’s because it’s not so ‘musical;’ there aren’t a lot of melodies or harmonies. It’s more textural…soundscapes in a way. It blends in.”

Czembor says that as a longtime New Yorker, working on the show held special resonance for him, and he was impressed with the powerful, yet measured way it brings history back to life. “The performances by the cast are so strong,” he says. “That made it a pleasure to work on. It inspires you to add to the texture and do your job really well.”

Pace Pictures opens large audio post and finishing studio in Hollywood

Pace Pictures has opened a new sound and picture finishing facility in Hollywood. The 20,000-square-foot site offers editorial finishing, color grading, visual effects, titling, sound editorial and sound mixing services. Key resources include a 20-seat 4K color grading theater, two additional HDR color grading suites and 10 editorial finishing suites. It also features a Dolby Atmos mix stage designed by three-time Academy Award-winning re-recording mixer Michael Minkler, who is a partner in the company’s sound division.

The new independently-owned facility is located within IgnitedSpaces, a co-working site whose 45,000 square feet span three floors along Hollywood Boulevard. IgnitedSpaces targets media and entertainment professionals and creatives with executive offices, editorial suites, conference rooms and hospitality-driven office services. Pace Pictures has formed a strategic partnership with IgnitedSpaces to provide film and television productions service packages encompassing the entire production lifecycle.

“We’re offering a turnkey solution where everything is on-demand,” says Pace Pictures founder Heath Ryan. “A producer can start out at IgnitedSpaces with a single desk and add offices as the production grows. When they move into post production, they can use our facilities to manage their media and finish their projects. When the production is over, their footprint shrinks overnight.”

Pace Pictures is currently providing sound services for the upcoming Universal Pictures release Mamma Mia! Here We Go Again. It is also handling post work for a VR concert film from this year’s Coachella Valley Music and Arts Festival.

Completed projects include the independent features Silver Lake, Flower and The Resurrection of Gavin Stone, the TV series iZombie, VR concerts for the band Coldplay, Austin City Limits and Lollapalooza, and a Mariah Carey music video related to Sony Pictures’ animated feature The Star.

Technical features of the new facility include three DaVinci Resolve Studio color grading suites with professional color consoles, a Barco 4K HDR digital cinema projector in the finishing theater, and dual Avid Pro Tools S6 consoles in the Dolby Atmos mix stage, which also includes four Pro Tools HDX systems. The site features facilities for sound design, ADR and voiceover recording, title design and insert shooting. Onsite media management includes a robust SAN, as well as LTO7 archiving, dailies services and cold storage.

Ryan is an editor who has operated Pace Pictures as an editorial service for more than 15 years. His many credits include the films Woody Woodpecker, Veronica Mars, The Little Rascals, Lawless Range and The Lookalike, as well as numerous concert films, music clips, television specials and virtual reality productions. He has also served as a producer on projects for Hallmark, Mariah Carey, Queen Latifah and others. Originally from Australia, he began his career with the Australian Broadcasting Corporation.

Ryan notes that the goal of the new venture is to break from the traditional facility model and provide producers with flexible solutions tailored to their budgets and creative needs. “Clients do not have to use our talent; they can bring in their own colorists, editors and mixers,” he says. “We can be a small part of the production, or we can be the backbone.”

Sound editor/re-recording mixer Will Files joins Sony Pictures Post

Sony Pictures Post Production Services has added supervising sound editor/re-recording mixer Will Files, who spent more than a decade at Skywalker Sound. He brings with him credits on more than 80 feature films, including Passengers, Deadpool, Star Wars: The Force Awakens and Fantastic Four.

Files won a 2018 MPSE Golden Reel Award for his work on War for the Planet of the Apes. His current project is the upcoming Columbia Pictures release Venom, out in US theaters this October.

He adds that he was also attracted by Sony Pictures’ ability to support his work both as a sound editor/sound designer and as a re-recording mixer. “I tend to wear a lot of hats. I often supervise sound, create sound design and mix my projects,” he says. “Sony Pictures has embraced modern workflows by creating technically-advanced rooms that allow sound artists to begin mixing as soon as they begin editing. It makes the process more efficient and improves creative storytelling.”

Files will work in a new pre-dub mixing stage and sound design studio on the Sony Pictures lot in Culver City. The stage has Dolby Atmos mixing capabilities and features two Avid S6 mixing consoles, four Pro Tools systems, a Sony 4K digital cinema projector and a variety of other support gear.

Files describes the stage as a sound designer/mixer’s dream come true. “It’s a medium-size space, big enough to mix a movie, but also intimate. You don’t feel swallowed up when it’s just you and the filmmaker,” he says. “It’s very conducive to the creative process.”

Files began his career with Skywalker Sound in 2002, shortly after graduating from the University of North Carolina School of the Arts. He earned his first credit as supervising sound editor on the 2008 sci-fi hit Cloverfield. His many other credits include Star Trek: Into Darkness, Dawn of the Planet of the Apes and Loving.

Netflix’s Godless offers big skies and big sounds

By Jennifer Walden

One of the great storytelling advantages of non-commercial television is that content creators are not restricted by program lengths or episode numbers. The total number of episodes in a show’s season can be 13 or 10 or fewer. An episode can run 75 minutes or 33 minutes. This certainly was the case for writer/director/producer Scott Frank when creating his series Godless for Netflix.

Award-winning sound designer Wylie Stateman of Twenty Four Seven Sound explains why this worked to their advantage. “Godless at its core is a story-driven ‘big-sky’ Western. The American Western is often as environmentally beautiful as it is emotionally brutal. Scott Frank’s goal for Godless was to create a conflict between good and evil set around a town of mostly female disaster survivors and their complex and intertwined pasts. The Godless series is built like a seven-and-a-half-hour feature film.”

Without the constraints of having to squeeze everything into a two-hour film, Frank could make the most of his ensemble of characters and still include the ride-up/ride-away beauty shots that show off the landscape. “That’s where Carlos Rafael Rivera’s terrific orchestral music and elements of atmospheric sound design really came together,” explains Stateman.

Stateman has created sound for several Westerns in his prodigious career. His first was The Long Riders back in 1980. Most recently, he designed and supervised the sound on writer/director Quentin Tarantino’s Django Unchained (which earned 2013 Oscar, MPSE and BAFTA nominations for sound) and The Hateful Eight (nominated for a 2016 Association of Motion Picture Sound Award).

For Godless, Stateman, co-supervisor/re-recording mixer Eric Hoehn and their sound team have already won a 2018 MPSE Award for Sound Editing for their effects and Foley work, as well as a nomination for editing the dialogue and ADR. And don’t be surprised if you see them acknowledged with an Emmy nom this fall.

Capturing authentic sounds (L-R): Jackie Zhou, Wylie Stateman and Eric Hoehn.

Capturing Sounds On Set
Since program length wasn’t a major consideration, Godless takes time to explore the story’s setting and allows the audience to live with the characters in this space that Frank had purpose-built for the show. In New Mexico, Frank had practical sets constructed for the town of La Belle and for Alice Fletcher’s ranch. Stateman, Hoehn and sound team members Jackie Zhou and Leo Marcil camped out at the set locations for a couple weeks, capturing recordings of everything from environmental ambience to gunfire echoes to horse hooves on dirt.

To avoid the craziness that is inherent to a production, the sound team would set up camp in a location where the camera crew was not. This allowed them to capture clean, high-quality recordings at various times of the day. “We would record at sunrise, sunset and the middle of the night — each recording geared toward capturing a range of authentic and ambient sounds,” says Stateman. “Essentially, our goal was to sonically map each location. Our field recordings were wide in terms of channel count, and broad in terms of how we captured the sound of each particular environment. We had multiple independent recording setups, each capable of recording up to eight channels of high bandwidth audio.”

Near the end of the season, there is a big shootout in the town of La Belle, so Stateman and Hoehn wanted to capture the sounds of gunfire and the resulting echoes at that location. They used live rounds, shooting the same caliber of guns used in the show. “We used live rounds to achieve the projectile sounds. A live round sounds very different than a blank round. Blanks just go pop-pop. With live rounds you can literally feel the bullet slicing through the air,” says Stateman.

Eric Hoehn

Recording on location not only supplied the team with a wealth of material to draw from back in the studio, it also gave them an intensive working knowledge of the actual environments. Says Hoehn, “It was helpful to have real-world references when building the textures of the sound design for these various locations and to know firsthand what was happening acoustically, like how the wind was interacting with those structures.”

Stateman notes how quiet and lifeless the location was, particularly at Alice’s ranch. “Part of the sound design’s purpose was to support the desolate dust bowl backdrop. Living there, eating breakfast in the quiet without anybody from the production around was really a wonderful opportunity. In fact, Scott Frank encouraged us to look deep and listen for that feel.”

From Big Skies to Big City
Sound editorial for Godless took place at Light Iron in New York, which is also where the show got its picture editing — by Michelle Tesoro, who was assisted by Hilary Peabody and Charlie Greene. There, Hoehn had a Pro Tools HDX 3 system connected to the picture department’s Avid Media Composer via the Avid Nexis. They could quickly pull in the picture editorial mix, balance out the dialog and add properly leveled sound design, sending that mix back to Tesoro.

“Because there were so many scenes and so much material to get through, we really developed a creative process that centered around rapid prototype mixing,” says Hoehn. “We wanted to get scenes from Michelle and her team as soon as possible and rapidly prototype dialogue mixing and that first layer of sound design. Through the prototyping process, we could start to understand what the really important sounds were for those scenes.”

Using this prototyping audio workflow allowed the sound team to very quickly share concepts with the other creative departments, including the music and VFX teams. This workflow was enhanced through a cloud-based film management/collaboration tool called Pix. Pix let the showrunners, VFX supervisor, composer, sound team and picture team share content and share notes.

“The notes feature in Pix was so important,” explains Hoehn. “Sometimes there were conversations between the director and editor that we could intuitively glean information from, like notes on aesthetic or pace or performance. That created a breadcrumb trail for us to follow while we were prototyping. It was important for us to get as much information as we could so we could be on the same page and have our compass pointed in the right direction when we were doing our first pass prototype.”

Often their first pass prototype was simply refined throughout the post process to become the final sound. “Rarely were we faced with the situation of having to re-cut a whole scene,” he continues. “It was very much in the spirit of the rolling mix and the rolling sound design process.”

Stateman shares an example of how the process worked. “When Michelle first cut a scene, she might cut to a beauty shot that would benefit from wind gusts and/or enhanced VFX and maybe additional dust blowing. We could then rapidly prototype that scene with leveled dialog and sound design before it went to composer Carlos Rafael Rivera. Carlos could hear where/when we were possibly leveraging high-density sound. This insight could influence his musical thinking — if he needed to come in before, on or after the sound effects. Early prototyping informed what became a highly collaborative creative process.”

The Shootout
Another example of the usefulness of Pix was the shootout in La Belle in Episode 7. The people of the town position themselves in the windows and doorways of the buildings lining the street, essentially surrounding Frank Griffin (Jeff Daniels) and his gang. There is a lot of gunfire, much of it bridging action on and off camera, and that needed to be represented well through sound.

Hoehn says they found it best to approach the gun battle like a piece of music, playing with repeated rhythms; breaking the anticipated rhythm helped catch the audience off-guard. They built a sound prototype for the scene and shared it via Pix, which gave the VFX department access to it.

“A lot of what we did with sound helped the visual effects team by allowing them to understand the density of what we were doing with the ambient sounds,” says Hoehn. “If we found that rhythmically it was interesting to have a wind gust go by, we would eventually see a visual effect for that wind going by.”

It was a back-and-forth collaboration. “There are visual rhythms and sound rhythms and the fact that we could prototype scenes early led us to a very efficient way of doing long-form,” says Stateman. “It’s funny that features used to be considered long-form but now ‘long-form’ is this new, time-unrestrained storytelling. It’s like we were making a long-form feature, but one that was seven and a half hours. That’s really the beauty of Netflix. Because the shows aren’t tethered to a theatrical release timeframe, we can make stories that linger a little bit and explore the wider eccentricities of character and the time period. It’s really a wonderful time for this particular type of filmmaking.”

While program length may be less of an issue, production schedule lengths still need to be kept in line. With the help of Pix, editorial was able to post the entire show with one team. “Everyone on our small team understood and could participate in the mission,” says Stateman. Additionally, the sound design rapid prototype mixing process allowed everyone in editorial to carry all their work forward, from day one until the last day. The Pro Tools session that they started with on day one was the same Pro Tools session that they used for print mastering seven months later.

“Our sound design process was built around convenient creative approval and continuous refinement of the complete soundtrack. At the end of the day, the thing that we heard most often was that this was a wonderful and fantastic way to work, and why would we ever do it any other way,” Stateman says.

Creating a long-form feature like Godless in an efficient manner required a fluid, collaborative process. “We enjoyed a great team effort,” says Stateman. “It’s always people over devices. What we’ve come to say is, ‘It’s not the devices. It’s people left to their own devices who will discover really novel ways to solve creative problems.’”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @audiojeney.

London’s LipSync upgrades studio, adds Dolby Atmos

LipSync Post, located in London’s Soho, has upgraded its studio with Dolby Atmos and installed a new control system. To accomplish this, LipSync teamed up with HHB Communications’ Scrub division to create a hybrid dual Avid S6 and AMS Neve DFC3D desk while also upgrading the room to create Dolby Atmos mixes with a new mastering unit. Now that the upgrade to Theatre 2 is complete, LipSync plans to upgrade Theatre 1 this summer.

The setup offers the best of both worlds: full access to the classic Neve DFC sound along with more hands-on control of Avid Pro Tools automation via the S6 desks. In order to streamline their workflow as more projects are mixed exclusively “in the box,” LipSync installed the S6s within the same frame as the DFC, with custom furniture created by Frozen Fish Design. This dual-operator configuration frees the mix engineers to work on separate Pro Tools systems simultaneously for fast and efficient turnaround in order to meet crucial project deadlines.

“The move into extended surround formats like Dolby Atmos is very exciting,” explains LipSync senior re-recording mixer Rob Hughes. “We have now completed our first feature mix in the refitted theater (Vita & Virginia directed by Chanya Button). It has a very detailed, involved soundtrack and the new system handled it with ease.”

Behind the Title: Spacewalk Sound’s Matthew Bobb

NAME: Matthew Bobb

COMPANY: Pasadena, California’s Spacewalk Sound

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service audio post facility specializing in commercials, trailers and spatial sound for virtual reality (VR). We have a heavy focus on branded content with clients such as Panda Express and Biore and studios like Warner Bros., Universal and Netflix.

WHAT’S YOUR JOB TITLE?
Partner/Sound Supervisor/Composer

WHAT DOES THAT ENTAIL?
I’ve transitioned more into the sound supervisor role. We have a fantastic group of sound designers and mixers who work here, plus a support staff to keep us on track and on budget. Putting my faith in them has allowed me to step away from the small details and look at the bigger picture on every project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
We’re still a small company, so while I mix and compose a little less than before, I find my days being filled with keeping the team moving forward. Most of what falls under my role is approving mixes, prepping for in-house clients the next day, sending out proposals and following up on new leads. A lot of our work is short form, so projects are in and out the door pretty fast — sometimes it’s all in one day. That means I always have to keep one eye on what’s coming around the corner.

The Greatest Showman 360

WHAT’S YOUR FAVORITE PART OF THE JOB?
Lately, it has been showing VR to people who have never tried it or have had a bad first experience, which is very unfortunate since it is a great medium. However, that all changes when you see someone come out of a headset exclaiming, “Wow, that is a game changer!”

We have been very fortunate to work on some well-known and loved properties, and it’s exciting to see people get a whole new experience out of something familiar.

WHAT’S YOUR LEAST FAVORITE?
Dealing with sloppy edits. We have been pushing our clients to bring us into the fold as early as v1 to make suggestions on the flow of each project. I’ll keep my eye tuned to the timing of the dialog in relation to the music and effects, while making sure attention has been paid to the pacing of the edit to the music. I understand that the editor and director will have their attention elsewhere, so I’m trying to bring up potential issues they may miss early enough that they can be addressed.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I would say 3pm is pretty great most days. I should have accomplished something major by this point, and I’m moments away from that afternoon iced coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be crafting the ultimate sandwich, trying different combinations of meats, cheeses, spreads and veggies. I’d have a small shop, preferably somewhere tropical. We’d be open for breakfast and lunch, close around 4pm, and then I’d head to the beach to sip on Russell’s Reserve Small Batch Bourbon as the sun sets. Yes, I’ve given this some thought.

WHY DID YOU CHOOSE THIS PROFESSION?
I came from music but quickly burned out on the road. Studio life suited me much more, except all the music studios I worked at seemed to lack focus, or at least the clientele lacked focus. I fell into a few sound design gigs on the side and really enjoyed the creativity and reward of seeing my work out in the world.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We had a great year working alongside SunnyBoy Entertainment on VR content for the Hollywood studios including IT: Float, The Greatest Showman 360, Annabelle Creation: Bee’s Room and Pacific Rim: Inside the Uprising 360. We also released our first piece of interactive content, IT: Escape from Pennywise, for Gear VR and iOS.

Most recently, I worked on Scoring The Last Jedi: A 360 VR Experience. This takes Star Wars fans on a VIP behind-the-scenes intergalactic expedition, giving them a virtual tour of The Last Jedi’s production and soundstages and dropping them face-to-face with Academy Award-winning film composer John Williams and film director Rian Johnson.

Personally, I got to compose two Panda Express commercials, which was a real treat considering I sustained myself through college on a healthy diet of orange chicken.

IT: Float

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
IT: Float was very special. It was exciting to take an existing property that was not only created by Stephen King but was also already loved by millions of people, and expand on it. The experience brought the viewer under the streets and into the sewers with Pennywise the clown. We were able to get very creative with spatial sound, using his voice to guide you through the experience without being able to see him. You never knew where he was lurking. The 360 audio really ramped up the terror! Plus, we had a great live activation at San Diego Comic-Con where thousands of people came through and left pumped after getting a glimpse of the film.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
It’s hard to imagine my life without these three: Spotify Premium, no ads! Philips Hue lights for those vibes. Lastly, Slack keeps our office running. It’s our not-so-secret weapon.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I treat social media as an escape. I’ll follow The Onion for a good laugh, or Anthony Bourdain to see some far-flung corner of the earth I didn’t know about.

DO YOU LISTEN TO MUSIC WHEN NOT MIXING OR EDITING?
If I’m doing busy work, I prefer something instrumental like Eric Prydz, Tycho, Bonobo — something with a melody and a groove that won’t make me fall asleep, but isn’t too distracting either.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
The best part about Los Angeles is how easy it is to escape Los Angeles. My family will hit the road for long weekends to Palm Springs, Big Bear or San Diego. We find a good mix of active (hiking) and inactive (2pm naps) things to do to recharge.

Pacific Rim: Uprising‘s big sound

By Jennifer Walden

Universal Pictures’ Pacific Rim: Uprising is a big action film, with monsters and mechs that are bigger than skyscrapers. When dealing with subject matter on so grand a scale, there’s no better way to experience it than on a 50-foot screen with a seat-shaking sound system. If you missed it in theaters, you can rent it via movie streaming services like Vudu on June 5th.

Pacific Rim: Uprising, directed by Steven DeKnight, is the follow-up to Pacific Rim (2013). In the first film, the planet and humanity were saved by a team of Jaeger (mech suit) pilots who battled the Kaiju (huge monsters) and closed the Breach — an interdimensional portal located under the Pacific Ocean that allowed the Kaiju to travel from their home planet to Earth. They did so by exploding a Jaeger on the Kaiju-side of the opening. Pacific Rim: Uprising is set 10 years after the Battle of the Breach and follows a new generation of Jaeger pilots that must confront the Kaiju.

Pacific Rim: Uprising’s audio post crew.

In terms of technological advancements, five years is a long time between films. It gave sound designers Ethan Van der Ryn and Erik Aadahl of E² Sound the opportunity to explore technology sounds for Pacific Rim: Uprising without being shackled to sounds that were created for the first film. “The nature of this film allowed us to just really go for it and get wild and abstract. We felt like we could go in our own direction and take things to another place,” says Aadahl, who quickly points out two exceptions.

First, they kept the sound of the Drift — the process in which two pilots become mentally connected with each other, as well as with the Jaeger. This was an important concept that was established in the first film.

The second sound the E² team kept was the computer A.I. voice of a Jaeger called Gipsy Avenger. Aadahl notes that for the original film, director Guillermo del Toro (a fan of the Portal game series) cast actress Ellen McLain, who voiced the GLaDOS computer in the Portal video games, as the Jaeger A.I. “We wanted to give another tip of the hat to the Pacific Rim fans by continuing that Easter egg,” says Aadahl.

Van der Ryn and Aadahl began exploring Jaeger technology sounds while working with previs art. Before the final script was even complete, they were coming up with concepts of how Gipsy Avenger’s Gravity Sling might sound, or what Guardian Bravo’s Elec-16 Arc Whip might sound like. “That early chance to work with Steven [DeKnight] really set up our collaboration for the rest of the film,” says Van der Ryn. “It was a good introduction to how the film could work creatively and how the relationship could work creatively.”

They had over a year to develop their early ideas into the film’s final sounds. “We weren’t just attaching sound at the very end of the process, which is all too common. This was something where sound could evolve with the film,” says Aadahl.

Sling Sounds
Gipsy Avenger’s Gravity Sling (an electromagnetic sling that allows anything metallic to be picked up and used as a blunt force weapon) needed to sound like a massive, powerful source of energy.

Van der Ryn and Aadahl’s design is a purely synthetic sound that features theater-rattling low end. Van der Ryn notes that the sound started with an old Ensoniq KT-76 piano that he performed into Avid Pro Tools and then enhanced with a sub-harmonic synthesis plug-in called Waves MaxxBass to get a deep, fat sound. “For a sound like that to read clearly, we almost have to take every other sound out just so that it’s the one sound that fills the entire theater. For this movie, that’s a technique that we tried to do as much as possible. We were very selective about what sounds we played when. We wanted it to be really singular and not feel like a muddy mess of many different ideas. We wanted to really tell the story moment by moment and beat by beat with these different signature sounds.”

That was an important technique to employ because when you have two Jaegers battling it out, and each one is the size of a skyscraper, the sound could get really muddy really fast. Creating signature differences between the Jaegers and keeping to the concept of “less is more” allowed Aadahl and Van der Ryn to choreograph a Jaeger battle that sounds distinct and dynamic.

“A fight is almost like a dance. You want to have contrast and dynamics between your frequencies, to have space between the hits and the rhythms that you’re creating,” says Van der Ryn. “The lack of sound in places — like before a big fist punch — is just as important as the fist punch itself. You need a valley to appreciate the peak, so to speak.”

Sounds of Jaeger
Designing Jaeger sounds that captured the unique characteristics of each one was the other key to making the massive battles sound distinct. In Pacific Rim: Uprising, a rogue Jaeger named Obsidian Fury fights Gipsy Avenger, an official PPDC (Pan-Pacific Defense Corps) Jaeger. Gipsy Avenger is based on existing human-created tech while Obsidian Fury is more sci-fi. “Steven DeKnight was often asking for us to ‘sci-fi this up a little more’ to contrast the rogue Jaeger and the human tech, even up through the final mix. He wanted to have a clear difference, sonically, between the two,” explains Van der Ryn.

For example, Obsidian Fury wields a plasma sword, which is more technologically advanced than Gipsy Avenger’s chain sword. Also, there’s a difference in mechanics. Gipsy Avenger has standard servos and motors, but Obsidian Fury doesn’t. “It’s a mystery who is piloting Obsidian Fury and so we wanted to plant some of that mystery in its sound,” says Aadahl.

Instead of using real-life mechanical motors and servos for Obsidian Fury, they used vocal sounds that they processed using Soundtoys’ PhaseMistress plug-in.

“Running the vocals through certain processing chains in PhaseMistress gave us a sound that was synthetic and sounded like a giant servo but still had the personality of the vocal performance,” Aadahl says.
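
The general trick can be sketched with Spotify's open-source pedalboard library. This is not the actual PhaseMistress chain, and robot_vocal.wav is a hypothetical recording; it simply shows how a slow, deep phaser sweep pushes a vocal toward that synthetic servo territory:

from pedalboard import Pedalboard, Phaser
from pedalboard.io import AudioFile

with AudioFile("robot_vocal.wav") as f:
    vocal = f.read(f.frames)
    sr = f.samplerate

# A deep, slow phaser sweep turns the vocal grain into a servo-like whir
# while the original performance still drives the sense of movement.
board = Pedalboard([
    Phaser(rate_hz=0.7, depth=0.9, centre_frequency_hz=900.0, feedback=0.6, mix=1.0),
])
servo = board(vocal, sr)

with AudioFile("servo_movement.wav", "w", sr, servo.shape[0]) as f:
    f.write(servo)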

One way the film helps to communicate the scale of the combatants is by cutting from shots outside the Jaegers to shots of the pilots inside the Jaegers. The sound team was able to contrast the big metallic impacts and large-scale destruction with smaller, human sounds.

“These gigantic battles between the Jaegers and the Kaiju are rooted in the human pilots of the Jaegers. I love that juxtaposition of the ludicrousness of the pilots flipping around in space and then being able to see that manifest in these giant robot suits as they’re battling the Kaiju,” explains Van der Ryn.

Dialogue/ADR lead David Bach was an integral part of building the Jaeger pilots’ dialogue. “He wrangled all the last-minute Jaeger pilot radio communications and late flying ADR coming into the track. He was, for the most part, a one-man team who just blew it out of the water,” says Aadahl.

Kaiju Sounds
There are three main Kaiju introduced in Pacific Rim: Uprising — Raijin, Hakuja, and Shrikethorn. Each one has a unique voice reflective of its personality. Raijin, the alpha, is distinguished by a roar. Hakuja is a scaly, burrowing-type creature whose vocals have a tremolo quality. Shrikethorn, which can launch its spikes, has a screechy sound.

Aadahl notes that finding each Kaiju’s voice required independent exploration and then collaboration. “We actually had a ‘bake-off’ between our sound effects editors and sound designers. Our key guys were Brandon Jones, Tim Walston, Jason Jennings and Justin Davey. Everyone started coming up with different vocals and Ethan [Van der Ryn] and I would come in and revise them. It started to become clear what palette of sounds were working for each of the different Kaiju.”

The three Kaiju come together to form Mega-Kaiju. This happens via the Rippers, which are organic-machine hybrids that fuse the bodies of Raijin, Hakuja and Shrikethorn together. The Rippers’ sounds were made from primate screams and macaw bird shrieks. And the voice of Mega-Kaiju is a combination of the three Kaiju roars.

VFX and The Mix
Bringing all these sounds together in the mix was a bit of a challenge because of the continuously evolving VFX. Even as re-recording mixers Frank A. Montaño and Jon Taylor were finalizing the mix in the Hitchcock Theater at Universal Studios in Los Angeles, the VFX updates were rolling in. “There were several hundred VFX shots for which we didn’t see the final image until the movie was released. We were working with temporary VFX on the final dub,” says Taylor.

“Our moniker on this film was given to us by picture editorial, and it normally started with, ‘Imagine if you will,’” jokes Montaño. Fortunately though, the VFX updates weren’t extreme. “The VFX were about 90% complete. We’re used to this happening on large-scale films. It’s kind of par for the course. We know it’s going to be an 11th-hour turnover visually and sonically. We get 90% done and then we have that last 10% to push through before we run out of time.”

During the mix, they called on the E² Sound team for last-second designs to cover the crystallizing VFX. For example, the hologram sequences required additional sounds. Montaño says, “There’s a lot of hologram material in this film because the Jaeger pilots are dealing with a virtual space. Those holograms would have more detail that we’d need to cover with sound if the visuals were very specific.”

 

Aadahl says the updates were relatively easy to do because they have remote access to all of their effects via the Soundminer Server. While on the dub stage, they can log into their libraries over the high-speed network and pop a new sound into the mixers’ Pro Tools session. Within Soundminer they build a library for every project, so they aren’t searching through their whole library when looking for Pacific Rim: Uprising sounds. It has its own library of specially designed, signature sounds that are all tagged with metadata and carefully organized. If a sequence required more complex design work, they could edit the sequence back at their studio and then share that with the dub stage.

“I want to give props to our lead sound designers Brandon Jones and Tim Walston, who really did a lot of the heavy lifting, especially near the end when all of the VFX were flooding in very late. There was a lot of late-breaking work to deal with,” says Aadahl.

For Montaño and Taylor, the most challenging section of the film to mix was reel six, when all three Kaiju and the Jaegers are battling in downtown Tokyo. Massive footsteps and fight impacts, roaring and destruction are all layered on top of electronic-fused orchestral music. “It’s pretty much non-stop full dynamic range, level and frequency-wise,” says Montaño. It’s a 20-minute sequence that could have easily become a thick wall of indistinct sound, but thanks to the skillful guidance of Montaño and Taylor that was not the case. Montaño, who handled the effects, says “E² did a great job of getting delineation on the creature voices and getting the nuances of each Jaeger to come across sound-wise.”

Another thing that helped was being able to use the Dolby Atmos surround field to separate the sounds. Taylor says the key to big action films is to not make them so loud that the audience wants to leave. If you can give the sounds their own space, then they don’t need to compete level-wise. For example, putting the Jaeger’s A.I. voice into the overheads kept it out of the way of the pilots’ dialogue in the center channel. “You hear it nice and clear and it doesn’t have to be loud. It’s just a perfect placement. Using the Atmos speaker arrays is brilliant. It just makes everything sound so much better and open,” Taylor says.

He handled the music and dialogue in the mix. During the reel-six battle, Taylor’s goal with music was to duck and dive it around the effects using the Atmos field. “I could use the back part of the room for music and stay out of the front so that the effects could have that space.”

When it came to placing specific sounds in the Atmos surround field, Montaño says they didn’t want to overuse the effect “so that when it did happen, it really meant something.”

He notes that there were several scenes where the Atmos setup was very effective, for instance as the Kaiju come together to form the Mega-Kaiju. “As the action escalates, it goes off-camera; it was more of a shadow, and we swung the sound into the overheads, which makes it feel really big and high-up. The sound was singular, a multiple-sound piece that we were able to showcase in the overheads. We could make it feel bigger than everything else both sonically and spatially.”

Another effective Atmos moment was during the autopsy of the rogue Jaeger. Montaño placed water drips and gooey sounds in the overhead speakers. “We were really able to encapsulate the audience as the actors were crawling through the inner workings of this big, beast-machine Jaeger,” he says. “Hearing the overheads is a lot of fun when it’s called for so we had a very specific and very clean idea of what we were doing immersively.”

Montaño and Taylor use a hybrid console design that combines a Harrison MPC with two 32-channel Avid S6 consoles. The advantage of this hybrid design is that the mixers can use both plug-in processing such as FabFilter’s tools for EQ and reverbs via the S6 and Pro Tools, as well as the Harrison’s built-in dynamics processing. Another advantage is that they’re able to carry all the automation from the first temp dub through to the final mix. “We never go backwards, and that is the goal. That’s one advantage to working in the box — you can keep everything from the very beginning. We find it very useful,” says Taylor.

Montaño adds that all the audio goes through the Harrison console before it gets to the recorder. “We find the Harrison has a warmer, more delicate sound, especially in the dynamic areas of the film. It just has a rounder, calmer sound to it.”

Montaño and Taylor feel their stage at Universal Studios is second to none, but the people there are even better. “We have been very fortunate to work with great people, from Steven DeKnight, our director, to Dylan Highsmith, our picture editor, to Mary Parent, our executive producer. They are really supportive and enthusiastic. It’s all about the people, and we have been really fortunate to work with some great people,” concludes Montaño.


Jennifer Walden is a New Jersey-based audio engineer and writer. 

Review: RTW’s Masterclass Mastering Tools

By David Hurd

RTW, based in Cologne, Germany, has been making broadcast-quality metering tools for audio professionals since 1965. Today, we will be looking at its Masterclass Mastering Tools and Loudness Tools plug-ins, which are awesome to have in your arsenal if you are mastering music or audio for broadcast.

These tools operate both as DAW plug-ins and in standalone mode. I tested them in Magix Sound Forge.

To start, I simply opened Sound Forge and added the RTW plug-in to the Plug-in Chain. RTW’s Masterclass Mastering Tools handle all of the loudness standards for broadcast so that your mix doesn’t get squished, while also giving you a detailed picture of your mix’s dynamics for delivery on the Web.

The Masterclass Mastering bundle includes a lot of loudness presets that will conform your audio levels to the standards of other countries. Since the listeners of most of my projects reside in the USA, I used one of the US standard presets.

The CALM Act preset uses a K-weighted metering scale with “True Peak,” “Momentary,” “Short” and “Integrated Total Level” views, as well as a meter that displays your loudness range. I was mostly concerned with the Integrated Level and True Peak displays. The integrated level shows you an average of the perceived loudness over the entire length of the program. Because the measurement gates out the quietest stretches, brief loud moments don’t drag the whole average around, which leaves room for dynamics in your mix.

This comes in handy on projects like a home improvement show that I work on, where I have mostly dialog except for a loud power tool like an air nailer or chop saw.

As long as the whole program conforms to the average for US standards for Integrated Level, my dialog can be heard while still allowing the power tools to be loud. This allows me to have a robust mix and still keep it legal.
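
For a quick offline sanity check of that integrated figure, here is a sketch using the open-source pyloudnorm BS.1770 meter rather than RTW's plug-in, with a hypothetical mix file, against the ATSC A/85 (CALM Act) target of -24 LKFS:

import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("show_mix.wav")

meter = pyln.Meter(rate)                      # ITU-R BS.1770 K-weighted meter
integrated = meter.integrated_loudness(data)  # gated average, in LKFS/LUFS

# ATSC A/85 (CALM Act) target is -24 LKFS, commonly with a +/-2 dB tolerance.
status = "OK" if abs(integrated + 24.0) <= 2.0 else "out of spec"
print(f"Integrated loudness: {integrated:.1f} LKFS ({status})")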

If you have ever tested the difference between Peak and RMS settings on a loudness plug-in, you know that your settings can make a huge difference in the perceived loudness of your audio signal. Usually, loud is good, but it depends on the hardware path that your program will have to take on its way to the listeners.

If your audio is going to be broadcast, your loud mix may be degraded when it is processed for broadcast by the station. If the broadcast output processing limiters think that your mix is too loud they will add compression or limiting of their own. Suddenly, you’ll learn too late that the station’s hardware has squished your wonderful loud and punchy mix into mush.

If your listeners are on the Web, rather than watching a TV broadcast, you will have less of a problem. Most of the Internet broadcast venues, like YouTube and iTunes, are using an automatic volume control that just adjusts the file volume instead of applying any compression or limiting to your audio. The net result is that your listeners will hear your mix as it was intended to be heard.

Digital clipping is an ugly thing, which no one wants any part of. To make sure that my program never clips, I also keep an eye on the True Peak meter, and here’s the cool part: rather than just reading sample values, it estimates inter-sample peaks, the levels the reconstructed waveform would actually reach between samples, and uses those. This allows me to easily set an overall level for the whole mix that doesn’t include any clipping distortion.
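
A comparable true-peak reading can be approximated outside the plug-in by shelling out to ffmpeg's loudnorm filter in measurement-only mode (a sketch, not RTW's implementation):

import subprocess

# Measurement-only pass of ffmpeg's EBU R128 loudnorm filter: it oversamples
# to estimate inter-sample ("true") peaks and prints a summary that includes
# "Input True Peak" alongside the integrated loudness.
subprocess.run([
    "ffmpeg", "-i", "show_mix.wav",
    "-af", "loudnorm=I=-24:TP=-2:print_format=summary",
    "-f", "null", "-",
], check=True)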
As you probably know, the phase relationship between your audio channels is very important, so Masterclass Mastering Tools includes tools for monitoring that as well.

You get a Stereo Correlation Meter, a Surround Sound Analyzer and a RealTime Frequency Analyzer. To top it off, you also get a Vectorscope for monitoring the phase relationship between any pair of audio channels.

It’s not like you couldn’t add a bunch of metering plug-ins to your present system and get roughly the same results. But, why would you want to? The Masterclass Mastering Tools from RTW puts everything that you need together in one easy-to-use package.

Summing Up
If you are on a budget, you may want to look into the Loudness Tools package, which is only $239. It contains everything the Mastering Tools package offers, except for the Surround Sound Analyzer, RealTime Analyzer and the Vectorscope. The full-blown Mastering Tools package is $578.91, which gives you everything you need to comply with loudness standards all over the world.

For conforming world-class professional audio, you need to use professional tools, and Masterclass Mastering Tools will easily enable you to get the job done.


David Hurd owns David Hurd Productions in Tampa, Florida. He has been reviewing products for over 20 years.