Creating a sonic world for The Zookeeper’s Wife

By Jennifer Walden

Warsaw, Poland, 1939. The end of summer brings the beginning of war as 140 German planes, Junkers Ju-87 Stukas, dive-bomb the city. At the Warsaw Zoo, Dr. Jan Żabiński (Johan Heldenbergh) and his wife Antonina Żabiński (Jessica Chastain) watch as their peaceful sanctuary crumbles: their zoo, their home and their lives are invaded by the Nazis. Powerless to fight back openly, the zookeeper and his wife join the Polish resistance. They transform the zoo from an animal sanctuary into a place of sanctuary for the people they rescue from the Warsaw Ghetto.

L-R: Anna Behlmer, Terry Porter and Becky Sullivan.

Director Niki Caro’s film The Zookeeper’s Wife — based on Antonina Żabińska’s true account written by Diane Ackerman — presents a tale of horror and humanity. It’s a study of contrasts, and the soundtrack matches that, never losing the thread of emotion among the jarring sounds of bombs and planes.

Supervising sound editor Becky Sullivan, at the Technicolor at Paramount sound facility in Los Angeles, worked closely with re-recording mixers Anna Behlmer and Terry Porter to create immersive soundscapes of war and love. “You have this contrast between a love story of the zookeeper and his wife and their love for their own people and this horrific war that is happening outside,” explains Porter. “It was a real challenge in the mix to keep the war alive and frightening and then settle down into this love story of a couple who want to save the people in the ghettos. You have to play the contrast between the fear of war and the love of the people.”

According to Behlmer, the film’s aerial assault on Warsaw was entirely fabricated in post sound. “We never see those planes, but we hear those planes. We created the environment of this war sonically. There are no battle sequence visual effects in the movie.”

“You are listening to the German army overtake the city even though you don’t really see it happening,” adds Sullivan. “The feeling of fear for the zookeeper and his wife, and those they’re trying to protect, is heightened just by the sound that we are adding.”

Sullivan, who earned an Oscar nom for sound editing director Angelina Jolie’s WWII film Unbroken, had captured recordings of actual German Stukas and B24 bomber planes, as well as 70mm and 50mm guns. She found library recordings of the Stuka’s signature Jericho siren. “It’s a siren that Germans put on these planes so that when they dive-bombed, the siren would go off and add to the terror of those below,” explains Sullivan. Pulling from her own collection of WWII plane recordings, and using library effects, she was able to design a convincing off-screen war.

One example of how Caro used sound and clever camera work to effectively create an unseen war was during the bombing of the train station. Behlmer explains that the train station is packed with people crying and sobbing. There’s an abundance of activity as they hustle to get on the arriving trains. The silhouette of a plane darkens the station. Everyone there is looking up. Then there’s a massive explosion. “These actors are amazing because there is fear on their faces and they lurch or fall over as if some huge concussive bomb has gone off just outside the building. The people’s reactions are how we spotted explosions and how we knew where the sound should be coming from because this is all happening offstage. Those were our cues, what we were mixing to.”

“Kudos to Niki for the way she shot it, and the way she coordinated these crowd reactions,” adds Porter. “Once we got the soundscape in there, you really believe what is happening on-screen.”

The film was mixed in 5.1 surround on Stage 2 at the Technicolor at Paramount facility. Behlmer (who mixed effects/Foley/backgrounds) used the Lexicon 960 reverb during the train station scene to put the plane sounds into that space. Using the LFE channel, she gave the explosions an appropriate impact — punchy, but not overly rumbly. “We have a lot of music as well, so I tried really hard to keep the sound tight, to be as accurate as possible with that,” she says.

ADR
Another feature of the train station’s soundscape is the amassed crowd. Since the scene wasn’t filmed in Poland, the crowd’s verbalizations weren’t in Polish. Caro wanted the sound to feel authentic to the time and place, so Sullivan recorded group ADR in both Polish and German to use throughout the film. For the train station scene, Sullivan built a base of ambient crowd sounds and layered in the Polish loop group recordings for specificity. She was also able to use non-verbal elements from the production tracks, such as gasps and groans.

Additionally, the group ADR played a big part in the scenes at the zookeeper’s house. The Nazis have taken over the zoo and are using it for their own purposes. Each day their trucks arrive early in the morning. German soldiers shout to one another. Sullivan had the German ADR group perform with a lot of authority in their voices, to add to the feeling of fear. During the mix, Porter (who handled the dialogue and music) fit the clean ADR into the scenes. “When we’re outside, the German group ADR plays upfront, as though it’s really their recorded voices,” he explains. “Then it cuts to the house, and there is a secondary perspective where we use a bit of processing to create a sense of distance and delay. Then when it cuts to downstairs in the basement, it’s a totally different perspective on the voices, which sounds more muffled and delayed and slightly reverberant.”

One challenge of the mix and design was to make sure the audience knew the location of a sound by the texture of it. For example, the off-stage German group ADR used to create a commotion outside each morning had a distinct sonic treatment. Porter used EQ on the Euphonix System 5 console, and reverb and delay processing via Avid’s ReVibe and Digidesign’s TL Space plug-ins to give the sounds an appropriate quality. He used panning to articulate a sound’s position off-screen. “If we are in the basement, and the music and dialogue is happening above, I gave the sounds a certain texture. I could sweep sounds around in the theater so that the audience was positive of the sound’s location. They knew where the sound is coming from. Everything we did helped the picture show location.”
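
For readers who want to see what that kind of perspective treatment boils down to, here is a minimal sketch in Python. It is not Porter's actual ReVibe/TL Space chain; it only fakes a "heard through the floor" feel with a low-pass filter, attenuation and a short pre-delay. The libraries (numpy, scipy, soundfile) and the file name adr_group.wav are assumptions for illustration.

```python
# Minimal sketch, not the actual ReVibe/TL Space chain: fake a "heard through
# the floor" perspective by low-passing, attenuating and pre-delaying a dry track.
# Assumes numpy, scipy and soundfile are installed; "adr_group.wav" is hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

dry, sr = sf.read("adr_group.wav")              # mono ADR group recording
b, a = butter(4, 800 / (sr / 2), btype="low")   # muffle: keep energy below ~800 Hz
muffled = lfilter(b, a, dry) * 0.4              # attenuate to push it into the background

pre_delay = np.zeros(int(0.03 * sr))            # ~30 ms of delay suggests distance
basement_perspective = np.concatenate([pre_delay, muffled])

sf.write("adr_group_basement.wav", basement_perspective, sr)
```

A real mix would add reverb and panning on top of this, but the idea is the same: the texture of the processing, not just the level, tells the audience where the sound lives.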

Porter’s treatment also applied to diegetic music. In the film, the zookeeper’s wife Antonina would play the piano as a cue to those below that it was safe to come upstairs, or as a warning to make no sound at all. “When we’re below, the piano sounds like it’s coming through the floor, but when we cut to the piano it had to be live.”

Sound Design
On the design side, Sullivan helped to establish the basement location by adding specific floor creaks, footsteps on wood, door slams and other sounds to tell the story of what’s happening overhead. She layered her effects with Foley provided by artist Geordy Sincavage at Sinc Productions in Los Angeles. “We gave the lead German commander Lutz Heck (Daniel Brühl) a specific heavy boot on wood floor sound. His authority is present in his heavy footsteps. During one scene he bursts in, and he’s angry. You can feel it in every footstep he takes. He’s throwing doors open and we have a little sound of a glass falling off of the shelf. These little tiny touches put you in the scene,” says Sullivan.

While the film often feels realistic, there are stylized, emotional moments. Picture editor David Coulson and director Caro juxtapose images of horror and humanity in a sequence that shows the Warsaw Ghetto burning while those lodged at the zookeeper’s house hold a Seder. Edits between the two locations are laced together with sounds of the Seder chanting and singing. “The editing sounds silky smooth. When we transition out of the chanting on-camera, then that goes across the cut with reverb and dissolves into the effects of the ghetto burning. It sounds continuous and flowing,” says Porter. The result is hypnotic, agree Behlmer and Sullivan.

The film isn’t always full of tension and destruction. There is beauty too. In the film’s opening, the audience meets the animals in the Warsaw Zoo, and has time to form an attachment. Caro filmed real animals, and there’s a bond between them and actress Chastain. Sullivan reveals that while they did capture a few animal sounds in production, she pulled many of the animal sounds from her own vast collection of recordings. She chose sounds that had personality, but weren’t cartoony. She also recorded a baby camel, sea lions and several elephants at an elephant sanctuary in northern California.

In the film, a female elephant is having trouble giving birth. The male elephant is close by, trumpeting with emotion. Sullivan says, “The birth of the baby elephant was very tricky to get correct sonically. It was challenging for sound effects. I recorded a baby sea lion in San Francisco that had a cough and it wasn’t feeling well the day we recorded. That sick sea lion sound worked out well for the baby elephant, who is struggling to breathe after it’s born.”

From the effects and Foley to the music and dialogue, Porter feels that nothing in the film sounds heavy-handed. The sounds aren’t competing for space. There are moments of near silence. “You don’t feel the hand of the filmmaker. Everything is extremely specific. Anna and I worked very closely together to define a scene as a music moment — featuring the beautiful storytelling of Harry Gregson-Williams’ score, or a sound effects moment, or a blend between the two. There is no clutter in the soundtrack and I’m very proud of that.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

“Girls do not do rewrites,” says Jim Belushi’s character, Wick McFadden, in Amazon Studios’ series Good Girls Revolt. It’s 1969, and he’s the national editor at News of the Week, a fictional news magazine based in New York City. He’s confronting the new researcher Nora Ephron (Grace Gummer) who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is it’s 1969 and women aren’t writers; they’re only “researchers” following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom they constructed had an open floor plan with a bi-level design. The girls are located in “the pit” area downstairs from the male writers. The newsroom production set was hollow, which caused an issue with the actors’ footsteps that were recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. “We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage,” says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work (sound editorial, Foley, ADR, loop group and final mix) was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialogue editor Sean Massey made every effort to save production dialogue when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, “the set mixer Joe Foglia was terrific,” says Austin. “He captured some great tracks despite all these issues, and for that we’re very thankful!”

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds, like typewriters clacking and dinging, the sound of rotary phones with whirring dials and bell-style ringers, the sound of papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library, from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and helped to blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this and so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years, and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialogue/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.

Quick Chat: Monkeyland Audio’s Trip Brock

By Dayna McCallum

Monkeyland Audio recently expanded its facility, including a new Dolby Atmos equipped mixing stage. The Glendale-based Monkeyland Audio, where fluorescent lights are not allowed and creative expression is always encouraged, now offers three mixing stages, an ADR/Foley stage and six editorial suites.

Trip Brock, the owner of Monkeyland, opened the facility over 10 years ago, but the MPSE Golden Reel Award-winning supervising sound editor and mixer (All the Wilderness) started out in the business more than 23 years ago. We reached out to Brock to find out more about the expansion and where the name Monkeyland came from in the first place…

One of your two new stages is Dolby Atmos certified. Why was that important for your business?
We really believe in the Dolby Atmos format and feel it has a lot of growth potential in both the theatrical and television markets. We purpose-built our Atmos stage looking towards the future, giving our independent and studio clients a less expensive, yet completely state-of-the-art alternative to the Atmos stages found on the studio lots.

Can you talk specifically about the gear you are using on the new stages?
All of our stages are running the latest Avid Pro Tools HD 12 software across multiple Mac Pros with Avid HDX hardware. Our 7.1 mixing stage, Reposado, is based around an Avid Icon D-Control console, and Anejo, our Atmos stage, is equipped with dual 24-fader Avid S6 M40 consoles. Monitoring on Anejo is based on a 3-way JBL theatrical system, with 30 channels of discrete Crown DCi amplification, BSS processing and the DAD AX32 front end.

You’ve been in this business for over 23 years. How does that experience color the way you run your shop?
I stumbled into the post sound business coming from a music background, and immediately fell in love with the entire process. After all these years, having worked with and learned so much from so many talented clients and colleagues, I still love what I do and look forward to every day at the office. That’s what I look for and try to cultivate in my creative team — the passion for what we do. There are so many aspects and nuances in the audio post world, and I try to express that to my team — explore all the different areas of our profession, find which role really speaks to you and then embrace it!

You’ve got 10 artists on staff. Why is it important to you to employ a full team of talent, and how do you see that benefiting your clients?
I started Monkeyland as primarily a sound editorial company. Back in the day, this was much more common than the all-inclusive, independent post sound outfits offering ADR, Foley and mixing, which are more common today. The sound editorial crew always worked together in house as a team, which is a theme I’ve always felt was important to maintain as our company made the switch into full service. To us, keeping the team intact and working together at the same location allows for a lot more creative collaboration and synergy than, say, a set of editors all working by themselves remotely. Having staff in house also allows us flexibility when last-minute changes are thrown our way. We are better able to work and communicate as a team, which leads to a superior end product for our clients.

Can you name some of the projects you are working on and what you are doing for them?
We are currently mixing a film called The King’s Daughter, starring Pierce Brosnan and William Hurt. We also recently completed full sound design and editorial, as well as the native Atmos mix, on a new post-apocalyptic feature we are really proud of called The Worthy. Other recent editorial and mixing projects include the latest feature from director Alan Rudolph, Ray Meets Helen, the 10-episode series Junior for director Zoe Cassavetes, and Three Days To Live, a new eight-episode true-crime series for NBC/Universal.

Most of your stage names are related to tequila… Why is that?
Haha — this is kind of a take-off from the naming of the company itself. When I was looking for a company name, I knew I didn’t want it to include the word “digital” or have any hint toward technology, which seemed to be the norm at the time. A friend in college used to tease me about my “unique” major in audio production, saying stuff like, “What kind of a degree is that? A monkey could be trained to do that.” Thus Monkeyland was born!

Same theory applied to our stage names. When we built the new stages and needed to name them, I knew I didn’t want to go with the traditional stage “A, B, C” or “1, 2, 3,” so we decided on tequila types — Anejo, Reposado, Plata, even Mezcal. It seems to fit our personality better, and who doesn’t like a good margarita after a great mix!

The sounds of Brooklyn play lead role in HBO’s High Maintenance

By Jennifer Walden

New Yorkers are jaded, and one of the many reasons is that just about anything they want can be delivered right to their door: Chinese food, prescriptions, craft beer, dry cleaning and weed. Yes, weed. This particular item is delivered by “The Guy,” the protagonist of HBO’s new series, High Maintenance.

The Guy (played by series co-creator Ben Sinclair) bikes around Brooklyn delivering pot to a cast of quintessentially quirky New York characters. Series creators Sinclair and Katja Blichfeld string together vignettes — using The Guy as the common thread — to paint a realistic picture of Brooklynites.

Nutmeg’s Andrew Guastella. Photo credit: Carl Vasile

“The Guy delivers weed to people, often going into their homes and becoming part of their lives,” explains sound editor/re-recording mixer Andrew Guastella at Nutmeg, a creative marketing and post studio based in New York. “I think that what a lot of viewers like about the show is how quickly you come to know complete strangers in a sort of intimate way.”

Blichfeld and Sinclair find inspiration for their stories from their own experiences, says Guastella, who follows suit in terms of sound. “We focus on the realism of the sound, and that’s what makes this show unique.” The sound of New York City is ever-present, just as it is in real life. “Audio post was essential for texturizing our universe,” says Sinclair. “There’s a loud and vibrant city outside of those apartment walls. It was important to us to feel the presence of a city where people live on top of each other.”

Big City Sounds
That edict for realism drives all sound-related decisions on High Maintenance. On a typical series, Guastella would strive to clean up every noise on the production dialogue, but for High Maintenance, the sounds of sirens, horns, traffic and even car alarms are left in the tracks, as long as they’re not drowning out the dialogue. “It’s okay to leave sounds in that aren’t obtrusive and that sell the fact that they are in New York City,” he says.

For example, a car alarm went off during a take. It wasn’t in the way of the dialogue but it did drop out on a cut, making it stand out. “Instead of trying to remove the alarm from the dialogue, I decided to let it roll and I added a chirp from a car alarm, as if the owner turned off the alarm [or locked the car], to help incorporate it into the track. A car alarm is a sound you hear all the time in New York.”

Exterior scenes are acceptably lively, and if an interior scene is feeling too quiet, Guastella can raise a neighborly ruckus. “In New York, there’s always that noisy neighbor. Some show creators might be a little hesitant to use that because it could be distracting, but for this show, as long as it’s real, Ben and Katja are cool with it,” he says. During a particularly quiet interior scene, he tried adding the sounds of cars pulling away and other light traffic to fill up the space, but it wasn’t enough, so Guastella asked the creators, “’How do you feel about the neighbors next door arguing?’ And they said, ‘That’s real. That’s New York. Let’s try it out.’”

Guastella crafted a commotion based on his own experience of living in an apartment in Queens. Every night he and his wife would hear the downstairs neighbors fighting. “One night they were yelling and then all we heard was this loud, enormous slam. Hopefully, it was a door,” jokes Guastella. “Ben and Katja are always pulling from their own experiences, so I tried to do that myself with the soundtrack.”

Despite the skill of production sound mixer Dimitri Kouri, and a high tolerance for the ever-present sound of New York City, Guastella still finds himself cleaning dialogue tracks using iZotope’s RX 5 Advanced. One of his favorite features is RX Connect. With this plug-in feature, he can select a region of dialogue in his Avid Pro Tools session and send that region directly to iZotope’s standalone RX application where he can edit, clean and process the dialogue. Once he’s satisfied, he can return that cleaned up dialogue right back in sync on the timeline of his Pro Tools session where he originally sent it from.

“I no longer have to deal with exporting and importing audio files, which was not an efficient way to work,” he says. “And for me, it’s important that I work within the standalone application. There are plug-in versions of some RX tools, but for me, the standalone version offers more flexibility and the opportunity to use the highly detailed visual feedback of its audio-spectrum analyzer. The spectrogram makes using tools like Spectral Repair and De-click that much more effective and efficient. There are more ways to use and combine the tools in general.”

Guastella has been with the series since 2012, during its webisode days on Vimeo. Back then, it was a passion project, something he’d work on at home on his own time. From the beginning, he’s handled everything audio: the dialogue cleaning and editing, the ambience builds, the Foley and the final mix. “Andrew [Guastella] brought his professional ear and was always such a pleasure to work with. He always delivered and was always on time,” says Blichfeld.

The only aspect that Guastella doesn’t handle is the music. “That’s a combination of licensed music (secured by music supervisor Liz Fulton) and original composition by Chris Bear. The music is well-established by the time the episode gets to me,” he says.

On the Vimeo webisodes, Guastella would work an episode’s soundtrack into shape, and then send it to Blichfeld and Sinclair for notes. “They would email me or we would talk over the phone. The collaborative process wasn’t immediate,” he says. Now that HBO has picked up the series and renewed it for Season 2, Guastella is able to work on High Maintenance in his studio at Nutmeg, where he has access to all the amenities of a full-service post facility, such as sound effects libraries, an ADR booth, a 5.1 surround system and room to accommodate the series creators who like to hang around and work on the sound with Guastella. “They are very particular about sound and very specific. It’s great to have instant access to them. They were here more than I would’ve expected them to be and it was great spending all that time with them personally and professionally.”

In addition to being a series co-creator, co-writer and co-director with Blichfeld, Sinclair is also one of the show’s two editors. This meant he and Blichfeld were being pulled in several directions, which eventually prevented them from spending as much time in the studio with Guastella. “By the last three episodes of this season, I had absorbed all of their creative intentions. I was able to get an episode to the point of a full mix and they would come in just for a few hours to review and make tweaks.”

With a bigger budget from HBO, Guastella is also able to record ADR when necessary, record loop group and perform Foley for the show at Nutmeg. “Now that we have a budget and the space to record actual Foley, we’re faced with the question of how much Foley do we want to do? When you Foley sound for every movement and footstep, it doesn’t always sound realistic, and the creators are very aware of that,” says Guastella.

In addition to a minimalist approach, another way he keeps the Foley sounding real is by recording it in the real world. In Episode 3, the story is told from a dog’s POV. Using a TASCAM DR 680 digital recorder and a Sennheiser 416 shotgun mic, Guastella recorded an “enormous amount of Foley at home with my Beagle, Bailey, and my father-in-law’s Yorkie and Doberman. I did a lot of Foley recording at the dog park, too, to capture Foley for the dog outside.”

5.1 Surround Mix
Another difference between the Vimeo episodes and the HBO series is the final mix format. “HBO requires a surround sound 5.1 mix and that’s something that demands the infrastructure of a professional studio, not my living room,” says Guastella. He takes advantage of the surround field by working with ambiences, creating a richer environment during exterior shots which he can then contrast with a closer, confined sound for the interior shots.

“This is a very dialogue-driven show so I’m not putting too much information in the surrounds. But there is so much sound in New York City, and you are really able to play with perspective of the interior and exterior sounds,” he explains. For example, the opening of Episode 3, “Grandpa,” follows Gatsby the dog as he enters the front of his house and eventually exits out of the back. Guastella says he was “able to bring the exterior surrounds in with the characters, then gradually pan them from surround to a heavier LCR once he began approaching the back door and the backyard was in front of him.”

The series may have made the jump from Vimeo to HBO but the soul of the show has changed very little, and that’s by design. “Ben, Katja, and Russell Gregory [the third executive producer] are just so loyal to the people who helped get this series off the ground with them. On top of that, they wanted to keep the show feeling how it did on the web, even though it’s now on HBO. They didn’t want to disappoint any fans that were wondering if the series was going to turn into something else… something that it wasn’t. It was really important to the show creators that the series stayed the same, for their fans and for them. Part of that was keeping on a lot of the people who helped make it what it was,” concludes Guastella.

Check out High Maintenance on HBO, Fridays at 11pm.


Jennifer Walden is a NJ-based audio engineer and writer. Follow her at @audiojeney.

The sound of sensory overload for Cinemax’s ‘Outcast’

By Jennifer Walden

As a cockroach crawls along the wall, each move is watched intensely by a boy whose white knuckles grip the headboard of his bed. His shallow breaths stop just before he head-butts the cockroach and sucks its bloody remains off the wall.

That is the fantastic opening scene of Robert Kirkman’s latest series, Outcast, airing now on Cinemax. Kirkman, writer/executive producer on The Walking Dead, sets his new horror series in the small town of Rome, West Virginia, where a plague of demonic-like possessions is infecting the residents.

Ben Cook

Outcast supervising sound editor Benjamin Cook, of 424 Post in Culver City, says the opening of the pilot episode featured some of his favorite moments in terms of sound design. Each scrape of the cockroach’s feet, every twitch of its antenna, and the juicy crunch of its demise were carefully crafted. Then, following the cockroach consumption, the boy heads to the pantry and snags a bag of chips. He mindlessly crunches away as his mother and sister argue in the kitchen. When the mother yells at the boy for eating chips after supper, he doesn’t seem to notice. He just keeps crunching away. The mother gets closer as the boy turns toward her and she sees that it’s not chips he’s crunching on but his own finger. This is not your typical child.

“The idea is that you want it to seem like he’s eating potato chips, but somewhere in there you need a crossover between the chips and the flesh and bone of his finger,” says Cook. Ultimately, the finger crunching was a combination of Foley — provided by Jeff Wilhoit, Brett Voss, and Dylan Tuomy-Wilhoit at Happy Feet Foley — and 424 Post’s sound design, created by Cook and his sound designers Javier Bennassar and Charles Maynes. “We love doing all of those little details that hopefully make our soundtracks stand out. I try to work a lot of detail into my shows as a general rule.”

Sensory Overload
While hitting the details is Cook’s m.o. anyway — as evidenced by his Emmy-nominated sound editing on Black Sails — it serves a double purpose in Outcast. When people are possessed in the world of Outcast, we imagine that they are more in tune with the micro details of the human experience. Every touch and every movement makes a sound.

“Whenever we are with a possessed person we try to play up the sense that they are overwhelmed by what they are experiencing because their body has been taken over,” says Cook. “Wherever this entity comes from it doesn’t have a physical body and so what the entity is experiencing inside the human body is kind of a sensory overload. All of the Foley and sound effects are really heightened when in that experience.”

Cook says he’s very fortunate to find shows where he and his team have a lot of creative freedom, as they do on Outcast. “As a sound person that is the best; when you really are a collaborator in the storytelling.”

His initial direction for sound came from Adam Wingard, the director on the pilot episode. Wingard asked for drones and distortion, for hard-edged sounds derived from organic sources. “There are definitely more processed kinds of sounds than I would typically use. We worked with the composer Atticus Ross, so there was a handoff between the music and the sound design in the show.”

Working with a stereo music track from composer Ross, Cook and his team could figure out their palette for the sound design well before they hit the dub stage. They tailored the sound design to the music so that both worked together without stepping on each other’s toes.

He explains that Outcast was similar to Black Sails in that they were building the episodes well before they mixed them. The 424 Post team had time to experiment with the design of key sounds, like the hissing, steaming sound that happens when series protagonist Kyle Barnes (Patrick Fugit) touches a possessed person, and the sound of the entity as it is ejected from a body in a jet of black, tar-like fluid, which then evaporates into thin air. For that sound, Cook reveals that they used everything from ocean waves to elephant sounds to bubbling goo. “The entity was tough because we had to find that balance between its physical presence and its spiritual presence because it dissipates back into its original plane, wherever it came from.”

Sound Design and More
When defining the sound design for possessed people, one important consideration was what to do with their voice. Or, in this case, what not to do with their voice. Series creator Kirkman, who gave Cook carte blanche on the majority of the show’s sound work, did have one specific directive: “He didn’t want any changes to happen with their voice. He didn’t want any radical pitch shifting or any weird processing. He wanted it to sound very natural,” explains Cook, who shared the ADR workload with supervising dialogue editor Erin Oakley-Sanchez.

There was no processing to the voices at all. What you hear is what the actors were able to perform, the only exception being Joshua (Gabriel Bateman), an eight-year-old boy who is possessed. For him, the show runners wanted to hear a slight bit of difference to drive home the fact that his body had indeed been taken over. “We have Kyle beating up this kid and so we wanted to make sure that the viewers really got a sense that this wasn’t a kid he was beating up, but that he was beating up a monster,” explains Cook.

To pull off Joshua’s possessed voice, Oakley-Sanchez and Wingard had actor Bateman change his voice in different ways during their ADR session. Then, Cook doubled certain lines in the mix. “The approach was very minimalistic. We never layered in other animal sounds or anything like that. All of the change came from the actor’s performance,” Cook says.

Cook is a big proponent of using fresh sounds in his work. He used field recordings captured in Tennessee, Virginia, and Florida to build the backgrounds. He recorded hard effects like doors, body hits and furniture crashing and breaking. There were other elements used as part of the sound design, like wind and water recordings. In Sound Particles — CGI-like software for sound design created by Nuno Fonseca — he was able to manipulate and warp sound elements to create unique sounds.

“Sound Particles has really great UI to it, like virtual mics you can place and move to record things in a virtual 3D environment. It lets you create multiple instances of sound very easily. You can randomize things like pitch and timing. You can also automate the movements and create little vignettes that can be rendered out as a piece of audio that you can bring into Pro Tools or Nuendo or other audio workstations. It’s a very fascinating concept and I’ve been using it a lot.”
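
To make that idea concrete, here is a minimal conceptual sketch in Python. It is not the Sound Particles software or its API; it only illustrates what "many instances with randomized pitch and timing" means, assuming numpy and soundfile are installed and that debris.wav is a hypothetical mono recording shorter than the 10-second render.

```python
# Conceptual sketch only (not Sound Particles): scatter randomized copies of one
# mono recording across a timeline, then render the mix to a single file.
import numpy as np
import soundfile as sf

rng = np.random.default_rng(42)
source, sr = sf.read("debris.wav")                    # hypothetical mono source
out_len = int(10 * sr)                                # 10-second render
mix = np.zeros(out_len)

for _ in range(40):                                   # 40 randomized instances
    ratio = rng.uniform(0.8, 1.25)                    # random pitch shift via resampling
    idx = np.arange(0, len(source) - 1, ratio)
    inst = np.interp(idx, np.arange(len(source)), source)[:out_len]
    start = rng.integers(0, out_len - len(inst) + 1)  # random start time
    mix[start:start + len(inst)] += rng.uniform(0.2, 0.8) * inst  # random level

mix /= max(1.0, np.abs(mix).max())                    # simple peak normalization
sf.write("debris_swarm.wav", mix, sr)
```

The actual software adds what this sketch leaves out: placing each instance in a virtual 3D space and capturing it with the movable virtual microphones Cook describes, then rendering that out for Pro Tools or Nuendo.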

Cook enjoys building rich backgrounds in shows, which he uses to help further the storyline. For example, in Episode 2 the police chief and his deputy take a trek through the woods and find an abandoned trailer. Cook used busier tracks with numerous layers of sounds at first, but as the chief and deputy get farther into the woods and closer to the abandoned trailer, the backgrounds become sparser and eerily quiet. Another good example happens in Episode 9, where there is a growing storm that builds throughout the whole episode. “It’s not a big player, just more of a subtext to the story. We do really simple things that hopefully translate and come across to people as little subtleties they can’t put their finger on,” says Cook.

Outcast is mixed in 5.1 by re-recording mixers Steve Pederson (dialogue/music) and Dan Leahy (effects/Foley/backgrounds) via Sony Pictures Post at Deluxe in Hollywood. Cook says, “They are super talented mixers who mostly do a lot of feature films and so they bring a theatrical vibe to the series.”

New episodes of Outcast air Fridays at 10pm on Cinemax, with the season finale on August 12th. Outcast has been renewed for Season 2, and while Cook doesn’t have any inside info on where the show will go next season, he says, “at the end of Season 1, we’re not sure if the entity is alien or demonic, and they don’t really give it away one way or another. I’m really excited to see what they do in Season 2. There is lots of room to go either way. I really like the characters, like the Reverend and Kyle — both have really great back stories. They’re both so troubled and flawed and there is a lot to build on there.”

Jennifer Walden is a New Jersey-based audio engineer and writer.

Silver Sound opens audio-focused virtual reality division

By Randi Altman

New York City’s Silver Sound has been specializing in audio post and production recording since 2003, but that’s not all they are. Through the years, along with some Emmy wins, they have added services that include animation and color grading.

When they see something that interests them, they investigate and decide whether or not to dive in. Well, virtual reality interests them, and they recently dove in by opening a VR division specializing in audio for 360 video, called SilVR. Recent clients include Google, 8112 Studios/National Geographic and AT&T.

Stories From The Network: 360° Race Car Experience for AT&T

I reached out to Silver Sound sound editor/re-recording mixer Claudio Santos to find out why now was the time to invest in VR.

Why did you open a VR division? Is it an audio-for-VR entity or are you guys shooting VR as well?
The truth is we are all a bunch of curious tinkerers. We just love to try different things and to be part of different projects. So as soon as 360 videos started appearing in different platforms, we found ourselves individually researching and testing how sound could be used in the medium. It really all comes down to being passionate about sound and wanting to be part of this exciting moment in which the standards and rules are yet to be discovered.

We primarily work with sound recording and post production audio for VR projects, but we can also produce VR projects that are brought to us by creators. We have been making small in-house shoots, so we are familiar with the logistics and technologies involved in a VR production and are more than happy to assist our clients with the knowledge we have gained.

What types of VR projects do you expect to be working on?
Right now we want to work on every kind of project. The industry as a whole is still learning what kind of content works best in VR and every project is a chance to try a new facet of the technology. With time we imagine producers and post production houses will naturally specialize in whichever genre fits them best, but for us at least this is something we are not hurrying to do.

What tools do you call on?
For recording we make use of a variety of ambisonic microphones that allow us to record true 360 sound on location. We set up our rig wirelessly so it can be untethered from cables, which are a big problem in a VR shoot where you can see in every direction. Besides the ambisonics we also record every character ISO with wireless lavs so that we have as much control as possible over the dialogue during post production.

Robin Shore using a phone to control the 360 video on screen; the tracker on his head simulates the effect of moving around without a full headset.

For editing and mixing we do most of our work in Reaper, a DAW that has very flexible channel routing and non-standard multichannel processing. This allows us to comfortably work with ambisonics as well as mix formats and source material with different channel layouts.

To design and mix our sounds we use a variety of specialized plug-ins that give us control over the positioning, focus and movement of sources in the 360 sound field. Reverberation is also extremely important for believable spatialization, and traditional fixed channel reverbs are usually unconvincing once you are in a 360 field. Because of that, we usually make use of convolution reverbs using ambisonic impulse responses.
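
As a concrete illustration of what positioning a source in a 360 sound field involves, here is a short sketch of first-order ambisonic encoding in the AmbiX (ACN/SN3D) convention. It is not tied to Reaper or to any particular plug-in Silver Sound uses; numpy is the only assumed dependency, and the test tone is just a stand-in for a real recording.

```python
# Minimal sketch of first-order ambisonic (AmbiX / ACN-SN3D) encoding: derive the
# four B-format channels that place a mono source at a given azimuth and elevation.
import numpy as np

def encode_foa(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Encode a mono signal to first-order ambisonics; returns shape (samples, 4)."""
    az = np.radians(azimuth_deg)      # 0 = straight ahead, positive = to the left
    el = np.radians(elevation_deg)    # 0 = horizon, positive = up
    gains = np.array([
        1.0,                          # W: omnidirectional component
        np.sin(az) * np.cos(el),      # Y: left/right figure-eight
        np.sin(el),                   # Z: up/down figure-eight
        np.cos(az) * np.cos(el),      # X: front/back figure-eight
    ])
    return mono[:, None] * gains[None, :]

# Hypothetical usage: a one-second 440 Hz tone placed 90 degrees to the left and
# 10 degrees above the horizon, at a 48 kHz sample rate.
sr = 48000
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
bformat = encode_foa(tone, azimuth_deg=90, elevation_deg=10)   # shape (48000, 4)
```

An ambisonic panner plug-in computes gains like these for every source (plus distance cues and automation), and a decoder or binaural renderer then folds the B-format signal down to whatever speakers or headphones the viewer is using.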

When it comes to monitoring the video, especially with multiple clients in the room, everyone in the room is wearing headphones. At first this seemed very weird, but it’s important since that’s the best way to reproduce what the end viewer will be experiencing. We have also devised a way for clients to use a separate controller to move the view around in the video during playback and editing. This gives a lot more freedom and makes the reviewing process much quicker and more dynamic.

How different is working in VR from traditional work? Do you wear different hats for different jobs?
That depends. While technically it is very different, with a whole different set of tools, technologies and limitations, the craft of designing good sound that aids in the storytelling and that immerses the audience in the experience is not very different from traditional media.

The goal is to affect the viewer emotionally and to transmit pieces of the story without making the craft itself apparent, but the approaches necessary to achieve this in each medium are very different because the final product is experienced differently. When watching a flat screen, you don’t need any cues to know where the next piece of essential action is going to happen because it is all contained by a frame that is completely in your field of view. That is absolutely not true in VR.

The user can be looking in any direction at any given time, so the sound often fills in the role of guiding the viewer to the next area of interest, and this reflects on how we manipulate the sounds in the mix. There is also a bigger expectation that sounds will be more realistic in a VR environment because the viewer is immersed in an experience that is trying to fool them into believing it is actually real. Because of that, many exaggerations and shorthands that are appropriate in traditional media become too apparent in VR projects.

So instead of saying we need to put on different hats when tackling traditional media or VR, I would say we just need a bigger hat that carries all we know about sound, traditional and VR, because neither exists in isolation anymore.

I am assuming that getting involved in VR projects as early as possible is hugely helpful to the audio. Can you explain?
VR shoots are still in their infancy. There’s a whole new set of rules and standards, and a whole lot of experimentation that we are all still figuring out as an industry. Often a particular VR filming challenge is not only new to the crew but completely new in the sense that it might not have ever been done before.

In order to figure out the best creative and technical approaches to all these different situations it is extremely helpful to have someone on the team thinking about sound, otherwise it risks being forgotten and then the project is doomed to a quick fix in post, which might not explore the full potential of the medium.

This doesn’t even take into consideration that the tools still often need to be adapted and tailored to fit the needs of a particular project, simply because new use cases are being discovered daily. This tailoring and exploration takes time and knowledge, so only by bringing a sound team early on into the project can they fully prepare to record and mix the sound without cutting corners.

Another important point to take into consideration is that the delivery requirements are still largely dependent on the specific platform selected for distribution. Technical standards are only now starting to be created and every project’s workflows must be adapted slightly to match these specific delivery requirements. It is much easier and more effective to plan the whole workflow with these specific requirements in mind than it is to change formats when the project is already in an advanced state.

What do clients need to know about VR that they might take for granted?
If we had to choose one thing to mention it would be that placing and localizing sounds in post takes a lot of time and care because each sound needs to be placed individually. It is easy to forget how much longer this takes than the traditional stereo or even surround panning because every single diegetic sound added needs to be panned. The difference might be negligible when dealing with a few sound effects, but depending on the action and the number of moving elements in the experience, it can add up very quickly.

Working with sound for VR is still largely an area of experimentation and discovery, and we like to collaborate with our clients to ensure that we all push the limits of the medium. We are very open about our techniques and are always happy to explain what we do to our clients because we believe that communication is the best way to ensure all elements of a project work together to deliver a memorable experience.

Main image: Red Velvet for production company Station Film.

Larson Studios pulls off an audio post slam dunk for FX’s ‘Baskets’

By Jennifer Walden

Turnarounds for TV series are notoriously fast, but imagine a three-day sound post schedule for a single-camera half-hour episodic series? Does your head hurt yet? Thankfully, Larson Studios in Los Angeles has its workflow on FX’s Baskets down to a science. In the show, Zach Galifianakis stars as Chip Baskets, who works as a California rodeo clown after failing out of a prestigious French clown school.

So how do you crunch a week and a half’s worth of work into three days without sacrificing quality or creativity? Larson’s VP, Rich Ellis, admits they had to create a very aggressive workflow, which was made easier thanks to their experience working with Baskets post supervisor Kaitlin Menear on a few other shows.

Ellis says having a supervising sound editor — Cary Stacy — was key in setting up the workflow. “There are others competing for space in this market of single-camera half-hours, and they treat post sound differently — they don’t necessarily bring a sound supervisor to it. The mixer might be cutting and mixing and wrangling all of the other elements, but we felt that it was important to continue to maintain that traditional sound supervisor role because it actually helps the process to be more efficient when it comes to the stage.”

John Chamberlin and Cary Stacy

This allows re-recording mixer John Chamberlin to stay focused on the mix while sound supervisor Stacy handles any requests that pop up on stage, such as alternate lines or options for door creaks. “I think director Jonathan Krisel gave Cary at least seven honorary Emmy awards for door creaks over the course of our mix time,” jokes Menear. “Cary can pull up a sound effect so quickly, and it is always exactly perfect.”

Every second counts when there are only seven hours to mix an episode from top to bottom before post producer Menear, director Krisel and the episode’s picture editor join the stage for the two-hour final fixes and mix session. Having complete confidence in Stacy’s alternate selections, Chamberlin says he puts them into the session, grabs the fader and just lets it roll. “I know that Cary is going to nail it and I go with it.”

Even before the episode gets to the stage, Chamberlin knows that Stacy won’t overload the session with unnecessary elements, which are time consuming. Even still, Chamberlin says the mix is challenging in that it’s a lot for one person to do. “Although there is care taken to not overload what is put on my plate when I sit down to mix, there are still 8 to 10 tracks of Foley, 24 or more tracks of backgrounds and, depending on the show, the mono and stereo sound effects can be 20 tracks. Dialogue is around 10 and music can be another 10 or 12, plus futz stuff, so it’s a lot. You have to have a workflow that’s efficient and you have to feel confident about what you’re doing. It’s about making decisions quickly.”

Chamberlin mixed Baskets in 5.1 — using a Pro Tools 11 system with an Avid ICON D-Command — on Stage 4 at Larson Studios, where he’s mixed many other shows, such as Portlandia, Documentary Now, Man Seeking Woman, Dice, the upcoming Netflix series Easy, Comedy Bang Bang, Meltdown With Jonah and Kumail and Kroll Show. “I’m so used to how Stage 4 sounds that I know when the mix is in a good place.”

Another factor in the three-day turnaround is the choice to forgo loop group and to minimize ADR to only when it’s absolutely necessary. The post sound team relied on location sound mixer Russell White to capture all the lines as clearly as possible on set, which was a bit of a challenge with the non-principal characters.

Baskets

Tricky On-Set Audio
According to Menear, director Krisel loves to cast non-actors in the majority of the parts. “In Baskets, outside of our three main roles, the other people are kind of random folk that Jonathan has collected throughout his different directing experiences,” she says. While that adds a nice flavor creatively, the inexperienced cast members tend to step on each other’s lines, or not project properly — problems you typically won’t have with experienced actors.

For example, Louie Anderson plays Chip’s mom Christine. “Louie has an amazing voice and it’s really full and resonant,” explains Chamberlin. “There was never a problem with Louie or the pro actors on the show. The principals were very well represented sonically, but the show has a lot of local extras, and that poses a challenge in the recording of them. Whether they were not talking loud enough or there was too much talking.”

A good example is the Easter brunch scene in Episode 104. Chip, his mother and grandmother encounter Martha (Chip’s insurance agent/pseudo-friend played by Martha Kelly) and her parents having brunch in the casino. They decide to join their tables together. “There were so many characters talking at the same time, and a lot of the side characters were just having their own conversations while we were trying to pay attention to the main characters,” says Stacy. “I had to duck those side conversations as much as possible when necessary. There was a lot of that finagling going on.”

Stacy used iZotope RX 5 features like De-crackle and De-noise to clean up the tracks, as well as the Spectral Repair feature for fixing small noises.

Multiple Locations
Another challenge for sound mixer White was that he had to quickly record in numerous locations for any given episode. That Easter brunch episode alone had at least eight different locations, including the casino floor, the casino’s buffet, inside and outside of a church, inside the car, and inside and outside of Christine’s house. “Russell mentioned how he used two rigs for recording because he would always have to just get up and go. He would have someone else collect all of the gear from one location while he went off to a new location,” explains Chamberlin. “They didn’t skimp on locations. When they wanted to go to a place they would go. They went to Paris. They went to a rodeo. So that has challenges for the whole team — you have to get out there and record it and capture it. Russell did a pretty fantastic job considering where he was pushed and pulled at any moment of the day or night.”

Sound Effects
White’s tracks also provided a wealth of production effects, which were a main staple of the sound design. The whole basis for the show, for picture and sound, was to have really funny, slapstick things happen, but have them play really straight. “We were cutting the show to feel as real and as normal as possible, regardless of what was actually happening,” says Menear. “Like when Chip was walking across a room full of clown toys and there were all of these strange noises, or he was falling down, or doing amazing gags. We played it as if that could happen in the real world.”

Stacy worked with sound effects editor TC Spriggs to cut in effects that supported the production effects, never sounding too slapstick or over the top, even if the action was. “There is an episode where Chip knocks over a table full of champagne glasses and trips and falls. He gets back up only to start dancing, breaking even more glasses,” describes Chamberlin.

That scene was a combination of effects and Foley provided by Larson’s Foley team of Adam De Coster (artist) and Tom Kilzer (recordist). “Foley sync had to be perfect or it fell apart. Foley and production effects had to be joined seamlessly,” notes Chamberlin. “The Foley is impeccably performed and is really used to bring the show to life.”

Spriggs also designed the numerous backgrounds. Whether it was the streets of Paris, the rodeo arena or the doldrums of Bakersfield, all the locations needed to sound realistic and simple yet distinct. On the mix side, Chamberlin used processing on the dialogue to help sell the different environments – basic interiors and exteriors, the rodeo arena and backstage dressing room, Paris nightclubs, Bakersfield dive bars, an outdoor rave concert, a volleyball tournament, hospital rooms and dream-like sequences and a flashback.

“I spent more time on the dialogue than any other element. Each place had to have its own appropriate sounding environments, typically built with reverbs and delays. This was no simple show,” says Chamberlin. For reverbs, Chamberlin used Avid’s ReVibe and Reverb One, and for futzing, he likes McDSP’s FutzBox and Audio Ease’s Speakerphone plug-ins.

One of Chamberlin’s favorite scenes to mix was Chip’s performance at the rodeo, where he does his last act as his French clown alter ego Renoir. Chip walks into the announcer booth with a gramophone and asks for a special song to be played. Chamberlin processed the music to account for the variable pitch of the gramophone, and also processed the track to sound like it was coming over the PA system. In the center of the ring you can hear the crowds and the announcer, and off-screen a bull snorts and grinds its hooves into the dirt before rushing at Chip.
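Chamberlin doesn't detail the plug-in chain for that gramophone cue, but the "variable pitch" idea is essentially a slow playback-rate wobble. Purely as an illustration (the file name, depth and wobble rate are invented, and this is not his actual processing), a minimal Python sketch might look like this:

```python
# Illustrative only: impose a slow "wow"-style pitch wobble on a mono track
# by reading it back along a gently modulated time axis.
import numpy as np
from scipy.io import wavfile

def gramophone_wobble(x, sr, depth=0.01, rate_hz=0.8):
    """x: mono float array; depth is the +/- playback-rate drift (1% here),
    rate_hz is how fast the drift cycles, like an off-center record."""
    n = len(x)
    t = np.arange(n) / sr
    rate = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)   # instantaneous speed
    positions = np.cumsum(rate)                            # warped read positions
    positions = positions / positions[-1] * (n - 1)        # keep the overall length
    return np.interp(positions, np.arange(n), x)           # resample along them

sr, x = wavfile.read("music.wav")                # hypothetical mono 16-bit source
x = x.astype(np.float64) / 32768.0
out = gramophone_wobble(x, sr)
wavfile.write("music_wobble.wav", sr, (out * 32767).astype(np.int16))
```

Running the result through a narrow band-pass afterward would push it further toward the over-the-PA sound described in the scene.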

Another great sequence happens in the Easter brunch episode where we see Chip walking around the casino listening to a “Learn French” lesson through earbuds while smoking a broken cigarette and dreaming of being Renoir the clown on the streets of Paris. This scene summarizes Chip’s sad clown situation in life. It’s thoughtful, charming and lonely.

“We experimented with elaborate sound design for the voice of the narrator; however, we landed on keeping things relatively simple with just an iPhone futz,” says Stacy. “I feel this worked out for the best, as nothing in this show was overdone. We brought in some very light backgrounds for Paris and tried to keep the transitions as smooth as possible. We actually had a very large build for the casino effects, but played them very subtly.”
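Stacy doesn't share the futz settings, so the sketch below is only a generic stand-in for an "iPhone futz": a narrow band-pass plus a touch of soft clipping. The file name and corner frequencies are assumptions, not what was used on the show.

```python
# Generic small-speaker futz sketch: band-limit the narration, then add grit.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, x = wavfile.read("narration.wav")            # hypothetical mono 16-bit source
x = x.astype(np.float64) / 32768.0

# Keep roughly the 300 Hz to 3.4 kHz band that a tiny speaker or phone codec passes
sos = butter(4, [300, 3400], btype="bandpass", fs=sr, output="sos")
futzed = sosfilt(sos, x)

# Mild soft clipping stands in for the speaker starting to break up
futzed = np.tanh(3.0 * futzed) / np.tanh(3.0)

wavfile.write("narration_futz.wav", sr, (futzed * 32767).astype(np.int16))
```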

Adds Chamberlin, “We really wanted to enhance the inner workings of Chip and to focus in on him there. It takes a while in the show to get to the point where you understand Chip, but I think that is great. A lot of that has to do with the great writing and acting, but our support on the sound side, in particular on that Easter episode, was not to reinvent the wheel. Picture editors Micah Gardner and Michael Giambra often developed ideas for sound, and those had a great influence on the final track. We took what they did in picture editorial and just made it more polished.”

The post sound process on Baskets may be down and dirty, but the final product is amazing, says Menear. “I think our Larson Studios team on the show is awesome!”

Review: Avid Pro Tools 12

By Ron DiCesare

In 1990, I was working at a music studio where I did a lot of cut downs of 60s, 30s, 15s and 10s for TV and radio commercials. Back then we used ¼-inch analog tape with a razor blade to physically cut the tape. Since I did so many ¼-inch tape edits, the studio manager was forward thinking enough to introduce a new 2-track digital editing system by Digidesign called Sound Tools. I took to it like a fish takes to water since I was already using computers, MIDI sequencers and drum machines —  even replacing chips in drum machines — which is fitting since that is how Peter Gotcher and Evan Brooks started Digidesign back in 1984. (See my History of Audio Post here.)

A short time later, Pro Tools was introduced and everyone at the studio thought it was simply an upgrade to Sound Tools with a different name. We purchased the first available version of Pro Tools and launched it to discover that there were now 4 audio tracks instead of 2. My first thought was, “Oh no, what am I going to do with the 2 extra tracks?!” Fearing the worst, my second thought was, “Oh shit, I bet this thing no longer does crossfades and I will have to use those two extra tracks to ‘ping pong’ from one set of tracks to the other for fades.” Thankfully, I quickly realized that not only could Pro Tools 1.0 do crossfades, but it could do a lot more, including revolutionizing the entire audio industry.

During my long history of working on Sound Tools and Pro Tools, I have seen all of the advancements in the software firsthand. I am pleased to say that Avid’s latest version of Pro Tools, 12.3, includes some of the most helpful improvements yet.

Offerings and Pricing Options
Avid now offers its most flexible pricing ever for Pro Tools 12 — there are three different ways to purchase or upgrade. Just like before, Pro Tools can be purchased or upgraded outright, which is called a perpetual license. Don’t let the word license scare you; it still is a one-time purchase. In addition to the perpetual license, there are two new ways to lease Pro Tools either on a monthly basis or an annual subscription basis. This is an interesting step for Avid. The advantage to both types of subscriptions is that the user is eligible for all of the upgrades and tech support included with their subscription. This is an excellent way to ensure your program is always up to date while bug fixes are made along the way.

Offering such pricing flexibility does create a bit of confusion about which options are available, since there are three versions of Pro Tools, plus the difference between first-time purchases and upgrades for existing users.

The first option is Pro Tools First, a free version and an ideal way to get on board with Pro Tools for the first time. However, to take full advantage of Pro Tools 12, which is what I am reviewing here, you would need to purchase one of the two main versions, Pro Tools 12 or Pro Tools|HD 12.

Here is how the pricing breaks down: the Pro Tools 12 perpetual license (AKA purchase outright) is $599, the monthly subscription with upgrade plan is $29.99 per month, and the annual subscription with upgrade plan is $24.92 per month (or $299 annually).
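Doing the quick math, a year on the monthly plan comes to roughly $360, so committing to the annual plan saves about $60 per year over paying month to month.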

Pricing can vary according to your situation if you own previous versions or have let too much time lapse between upgrades. Suffice it to say that whatever your unique situation is, there is a purchase plan for you.

What’s Not New
The one thing product reviews rarely, if ever, cover is what has not changed. To me, what hasn’t changed is the first thing I want to know when I am working with any new version of existing software. I cannot stress enough the importance of being able to quickly and easily pick up exactly where I left off from my old version after upgrading. Unfortunately, I know how often new features can make my old way of working obsolete.

I can’t help but think of a notable recent example: the upgrade to FCP X no longer supported OMF for audio exports. What were they thinking? Keeping previous workflows intact is an extremely important issue to me. Immediately after my upgrade from Pro Tools 10 to Pro Tools|HD 12, I launched a session and it worked exactly as it did in version 10, eliminating any downtime for me.

One thing that is not new, but is extremely important to mention is the switch from the original Digidesign Audio Engine to the Avid Audio Engine. This happened on Pro Tools 11. Even with the change to the Avid Audio Engine, I was not forced to abandon my old workflow. The advantage of the Avid Audio Engine is key — among other things, this is what allows for the long overdue offline bounce, or faster-than-realtime bounce. And for anyone who is still on Pro Tools 10 and below, the offline bounce is a major reason to move to Pro Tools 12.

Because everyone uses Pro Tools in so many different and complex ways, I encourage you to visit Avid’s website, www.avid.com, for a list of all of the new and improved functions. There are too many new features and improvements to cover each one in this review. That is why I came up with a list of my 12 favorite new features of Pro Tools|HD 12.

My 12 Favorite New Features of Pro Tools 12
1. Avid Application Manager. There is a new icon at the top of your screen called the Avid Application Manager. Clicking on it will launch a window allowing you to log into your account, keep up with any updates and view a list of any uninstalled plug-ins available, along with your support options. You can also verify what type of license you have and when it was activated. This is helpful if you have the month-to-month or annual subscription so you can see when your next renewal is. Even with the perpetual license, you can still see what upgrades and bug fixes are available at any time.

2. Buy or Rent Plug-ins. One very cool new feature is the option to buy or rent any plug-in from a new menu option directly in Pro Tools called The Marketplace. This is particularly useful if you are opening another person’s session that has used a plug-in you do not own or if you are opening your session at a studio where they do not own a particular plug-in that you have at your studio. The rent option is a great way to access any missing plug-ins without having to commit to them fully.

3. Pitch Shift Legacy. Call me crazy, but I am thrilled that Avid has included the original version of Pitch Shift in AudioSuite. In Pro Tools 11, Pitch Shift was changed to a piano keyboard-based plug-in called Pitch 2. As cool as it is to base your work on the piano keyboard in Pitch 2, I missed some of the basic features found only in the original version. I am pleased to say that Avid now offers both versions of Pitch Shift in AudioSuite — the new piano keyboard version and the original, now called Pitch Shift Legacy.

4. Track Commit. Track Commit converts virtual instrument tracks to audio files, but even if you do not use virtual instruments it is still a very useful function, offering you the option to “print” your plug-ins to the audio track. You can also render your automation, including panning. All of this saves processing and plug-in power, and it avoids any possible confusion if someone else works on your session down the line.
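To picture what a commit does, here is a conceptual sketch only, not Avid's implementation: it renders a gain ramp and an equal-power pan move into plain stereo audio, with the clip audio and automation arrays invented for the example.

```python
# Conceptual sketch: "bake" gain and pan automation into a stereo audio array.
import numpy as np

def commit_gain_and_pan(x, gain_db, pan):
    """x: mono float array; gain_db and pan (-1 = hard left, +1 = hard right)
    are per-sample automation arrays of the same length as x."""
    gain = 10.0 ** (gain_db / 20.0)
    theta = (pan + 1.0) * np.pi / 4.0            # equal-power pan law
    left = x * gain * np.cos(theta)
    right = x * gain * np.sin(theta)
    return np.stack([left, right], axis=1)       # shape (n, 2): the committed clip

n = 48000                                        # one second at 48 kHz
clip = np.random.randn(n) * 0.1                  # stand-in for the clip's audio
gain_db = np.linspace(0.0, -6.0, n)              # fade from 0 dB down to -6 dB
pan = np.linspace(-1.0, 1.0, n)                  # sweep from left to right
committed = commit_gain_and_pan(clip, gain_db, pan)
```

Once the moves are rendered like this, the plug-ins and automation that produced them no longer have to run in real time, which is where the processing savings come from.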

5. Clip Transparency. Some people may remember the days of ¼-inch tape editing that I mentioned at the start of this article. Back then, audio editing had to be done solely with your ears. When Sound Tools and Pro Tools came along, editing became a visual skill, too. Clip Transparency takes visual editing one step further. It allows you to see two clips superimposed over each other while moving them on the same audio track. This is ideal for anyone who needs to line up a new clip with the old clip like when doing ADR.

The best part is it’s not only for seeing two different clips overlaid at the same time; it can be used when you are moving a single region or clip along your audio track. Clip Transparency allows you to see the old position superimposed with the new position of the same clip while you are shifting it for comparison.

It is perfect for those countless times when I have zoomed in past the start of the clip and can’t see how far I am moving it relative to the old position. Clip Transparency now allows me to see how much I am shifting the audio, no matter what my zoom setting is. I never knew how much I needed this feature until I saw it in action. Clip Transparency is by far my favorite new feature of Pro Tools 12.

6. Batch Fade and Fade Presets. When you are working with multiple audio clips on your timeline, fading each of the clips can be time consuming, especially if each fade needs to be treated differently. Now with Batch Fade, you can create presets for fade-ins, fade-outs and crossfades. When multiple audio clips are selected, a much larger dialog window pops up with many more options to choose from. Of course, fading between two clips can still be done the old way, and the fade dialog box works the same as in previous versions. The new Batch Fade is an additional function that allows you to be more selective and gives you more options for your fades. Batch Fade is a great example of how your old workflow is preserved while new features are added.

7. The Dashboard. Launching a session now includes the Dashboard window at the start, which is an updated version of the Quick Start menu. You can quickly and easily see all of the available templates and your recent sessions. And, of course, you can create a new blank session. I like the new look and feel of Dashboard compared to Quick Start.

8. iPad Control. Pro Tools | Control is a free app now available in the App Store. iPad control is made possible by EuControl v3.3, the driver needed for your workstation. EuControl is a free download using your Avid account after you complete the registration in the Pro Tools | Control iOS app. Even though I do not own an iPad, I can see the advantage of controlling Pro Tools via the iPad when I am monitoring a mix at a distance from my DAW.

Mixing a film, for example, would be a great use of the iPad control since that would allow me to sit back farther away from the speakers, thus simulating the distance of the listener in a movie theater. Today, the line between phones and tablets is blurred with the introduction of the “phablet.” As it stands now, the app is only available for iPad. I suspect that will change in the future, but I have no confirmation of that.

9. Included virtual musical instruments. The latest versions of Xpand II and the First AIR Instruments Bundle are included with Pro Tools 12. Quite simply, I am blown away by how amazing these instruments sound. I have been a musician all of my life, but surprisingly I have never used virtual instruments with MIDI in Pro Tools. I have always opted for a dedicated composing program for MIDI, dating all the way back to Studio Vision Pro (for those of you old enough to remember how cool that program was).

I know there are plenty of third-party virtual instruments available for Pro Tools, but these two bundles included with Pro Tools 12 have really opened my eyes. Before Pro Tools 12, I found myself sharing and swapping files between a MIDI program (for me it’s Apple Logic) and Pro Tools. I have always preferred using a dedicated program for MIDI outside of Pro Tools, but with the addition of these versions of Xpand II and the First AIR Instruments Bundle, I am now an instant convert to using Pro Tools alone for MIDI.

Please visit Avid’s website for a list of the specifics, but some of my favorite virtual instruments are the acoustic pianos, synth basses and of course anything drums or percussion related.

10. Updated I/O and flexibility. I work mostly on TV commercials and media specifically for the web, so I am rarely asked to do surround sound mixing, especially anything in 7.1. Therefore I am not able to explore any of the new surround features, including the new templates for 7.1 mixing.

Even so, I can still mention the addition of the Default Monitor path in Pro Tools 12. Pro Tools will automatically downmix or upmix your session’s monitor path to the studio’s monitor path. For example, if an HD session is saved with a 5.1 monitor path and then opened on a system that only has a stereo monitor path available, the session’s 5.1 monitor path is automatically downmixed to the system’s stereo monitor outputs. This makes for even more flexibility when swapping sessions from one studio to another, regardless of whether or not there are surround sound monitoring capabilities.
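Avid doesn't spell out its fold-down coefficients here, so take the following as a generic ITU-style 5.1-to-stereo downmix rather than what Pro Tools actually does internally. It assumes channel order L, R, C, LFE, Ls, Rs in a floating-point NumPy array:

```python
# Generic 5.1-to-stereo fold-down: center and surrounds are attenuated by
# about 3 dB (0.707) and summed into the left/right pair; the LFE is dropped.
import numpy as np

def downmix_51_to_stereo(x, center_gain=0.707, surround_gain=0.707, lfe_gain=0.0):
    """x: array of shape (num_samples, 6) in the order L, R, C, LFE, Ls, Rs."""
    L, R, C, LFE, Ls, Rs = (x[:, i] for i in range(6))
    left = L + center_gain * C + surround_gain * Ls + lfe_gain * LFE
    right = R + center_gain * C + surround_gain * Rs + lfe_gain * LFE
    stereo = np.stack([left, right], axis=1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 1.0 else stereo   # guard against clipping
```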

Another improvement relating to the I/O and surround capabilities is the addition of virtually unlimited busses. This will help anyone who has used up or exceeded previously allowed bus limitations when mixing in surround. The new Commit feature supports multichannel set-ups, which can improve your surround workflow.

And for the larger audio post facilities that may use Pro Tools in a much more complex way, such as getting several edit rooms to integrate, sync and play together, there are improvements in the Satellite Link workflow. These include a reset network button, plus transmit and receive play selection buttons in the transport window.

11. Track Bounce. Track Bounce is another feature I didn’t know I needed that much until I started using it. It is not to be confused with Track Commit. Track Bounce gives you the ability to select and bounce tracks or auxes as audio files when exporting. This can be one track, all the tracks or any combination of the tracks done in one single bounce.

For example, if you select a music track, a VO track and an FX track, you will get all three tracks as three discrete individual audio files in one single bounce using Track Bounce. This is essential for anyone who has to make splits or stems, especially in long format.

Imagine you have an hour program where you have a music track, a VO track and a sound effect track. In the past, you had to bounce each element as one realtime bounce three separate times. That meant it would take over three hours to complete. With Track Bounce in the offline bounce mode, you can output your stems in one single step in just minutes.

One friendly reminder: if you are using Track Bounce with layered tracks, such as sound effects or music tracks, it will bounce each track as its own separate file rather than a mix of the layers. For example, selecting 10 tracks will result in 10 discrete audio files from one bounce, so it is important to know when Track Bounce is useful for you and when it is not.
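To make that distinction concrete, here is a toy NumPy illustration with invented arrays standing in for ten effects layers: Track Bounce hands you each layer as its own file, while a single FX stem is what you would get by routing those layers to an aux and bouncing the aux.

```python
# Toy illustration: ten per-track bounces versus one summed FX stem.
import numpy as np

layers = [np.random.randn(48000) * 0.05 for _ in range(10)]   # ten FX tracks

per_track_bounces = layers                # Track Bounce: ten discrete audio files
fx_stem = np.sum(layers, axis=0)          # routed to one aux and bounced: one file
```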

12. Included Plug-ins. Of course, Pro Tools 12 is all about the plug-ins, and there are more included than ever, among them the First AIR Effects Bundle, Eleven Effects and Space. I rarely use third-party plug-ins since I am often going from studio to studio on a single project. Outside of noise reduction and LKFS metering, I seldom need anything other than the Avid plug-ins that come with Pro Tools 12.

Cloud Collaboration and Avid Everywhere
In the near future, Avid will be offering Cloud Collaboration and Avid Everywhere. Avid will finally offer the ability to work on Pro Tools remotely using media located on a central cloud server accessible anywhere there is Internet access. When introduced, Cloud Collaboration will allow people in separate locations to access the same Pro Tools 12 session to share and update files instantly. This is perfectly suited for musicians collaborating on a song who do not live near each other.

More exciting to me is the potential of Cloud Collaboration to change the way we work in audio post by allowing access to all of your media remotely. This could benefit any audio facility that has multiple rooms with multiple engineers switching from room to room. Using Cloud Collaboration, there will be one central location for all your media accessible from any audio room. For engineers who need to switch rooms when working on a project, this will eliminate any file transfers or media dumps.

But I think the biggest benefit will be for audio engineers like myself who work on a single project at multiple locations over its duration. I am often working from my home studio, my client’s studio and a large audio post facility on the same project spread over several days, weeks or months. Each time I change studios, I have to transfer all of my sessions from one place to another using a flash drive, WeTransfer, Google Drive, etc. I have tried them all and they are all time consuming. And with multiple versions and constant audio revisions, it is very easy to lose track of which version is the most current and where it lives.

Cloud Collaboration will solve this issue with one central location where I can access my session from anywhere that has Internet access. This is a giant leap forward and I am looking forward to exploring it in-depth in a future review here on postPerspective.

Ron DiCesare is an audio pro whose spot work includes TV campaigns for Purina, NJ Lotto and Beggin’ Strips. His indie film work includes Con Artist, BAM 150 and Fishing without Nets. He is also involved with audio post for Vice Media on their news reports and web series, including Vice on HBO. You can contact him at rononizer@gmail.com.

London’s Halo adds dubbing suite

Last month, London’s Halo launched a dubbing suite, Studio 5, at its Noel Street facility. The studio is suited for TV mix work across all genres, as well as for DCP 5.1 and 7.1 theatrical projects, or as a pre-mix room for Halo’s Dolby Features licensed Studios 1 and 3. The new room is also pre-wired for Dolby Atmos.

The new studio features an HDX2 Pro Tools 12|HD system, a 24-fader Avid S6 M40 and a custom Dynaudio 7.1 speaker system. This is all routed via a Colin Broad TMC-1 Penta-controlled DAD AX32 digital audio matrix for maximum versatility and future scalability. Picture playback from Pro Tools is provided by an AJA Kona LHi card via a Barco 2K digital projector.

In addition, Halo has built a dedicated 5.1 audio editing room for their recently arrived head of sound editorial, Jay Price, to work from. Situated directly adjacent to the new studio, the room features a Pro Tools 12|HD Native system and 5.1 Dynaudio Air 6 speakers.

Jigsaw24 and CB Electronics supplied the hardware and the installation know-how. Level Acoustic designed the room, and Munro Acoustics provided the custom speaker system.

The Revenant’s sound team takes home BAFTA

The Revenant sound team has won the Best Sound award at the British Academy of Film and Television Arts (BAFTA) Awards ceremony. Winning the award was supervising sound editor and Formosa Group talent Lon Bender, along with supervising sound editor Martin Hernandez, supervising sound editor/re-recording mixer Randy Thom, production sound mixer Chris Duesterdiek and re-recording mixers Frank A. Montano and Jon Taylor.

Other nominees in the category included Bridge of Spies, Mad Max: Fury Road, The Martian, and Star Wars Episode VII: The Force Awakens.

“I am very pleased that our crew was recognized at the BAFTA Awards for their hard work and artistry,” said BAFTA winner and Formosa’s Lon Bender.  “It is an honor to have had our film included among all the other nominees this year.” 

Bender is also nominated for an Oscar for Best Sound Editing for The Revenant. His 30-plus-year career in sound editing includes BAFTA nominations for Shrek and The Last of the Mohicans, Oscar nominations for Drive and Blood Diamond, and BAFTA and Oscar wins for Braveheart, which he shared with Formosa’s Per Hallberg.

The British Academy of Film and Television Arts (BAFTA) is an independent charity that supports, develops and promotes the art forms of the moving image by identifying and rewarding excellence, inspiring practitioners and benefiting the public.