
Shindig upgrades offerings, adds staff, online music library

On the heels of its second anniversary, Playa Del Rey’s Shindig Music + Sound is expanding its offerings and its artist roster. Shindig, which offers original compositions, sound design, music licensing, voiceover sessions and final audio mixes, features an ocean view balcony, a beachfront patio and spaces that convert for overnight stays.

L-R: Susan Dolan, Austin Shupe, Scott Glenn, Caroline O’Sullivan, Debbi Landon and Daniel Hart.

As part of the expansion, the company’s mixing capabilities have been amped up with a newly constructed 5.1 audio mix room and vocal booth that enable sound designer/mixer Daniel Hart to accommodate VO sessions and execute final mixes for clients in stereo and/or 5.1. Shindig also recently completed the build-out of a new production/green room, which also offers an ocean view. This Mac-based studio uses Avid Pro Tools 12 Ultimate.

Adding to its crew, Shindig has brought in on-site composer Austin Shupe, a former colleague from Hum. Along with Shindig’s in-house composers, the team draws on a large pool of freelance talent, matching the genre and/or style best suited to each project.

Shindig’s licensing arm has launched a searchable boutique online music library. Upgrading its existing catalogue of high-quality compositions, the studio has tagged all of its tracks so they are simple to search on its website, providing new direct access for producers, creatives and editors.

Shindig’s executive team includes creative director Scott Glenn, executive producer Debbi Landon, head of production Caroline O’Sullivan and sound designer/mixer Dan Hart.

Glenn explains, “This natural growth has allowed us to offer end-to-end audio services and the ability to work creatively within the parameters of any size budget. In an ever-changing marketplace, our goal is to passionately support the vision of our clients, in a refreshing environment that is free of conventional restraints. Nothing beats getting creative in an inspiring, fun, relaxing space, so for us, the best collaboration is done beachside. Plus, it’s a recipe for a good time.”

Recent work ranges from recording five mariachi pieces for El Pollo Loco with Vitro, to working with multiple composers to craft five decades of music for Honda’s Evolution commercial via Muse, to orchestrating a virtuoso piano/violin duo cover of Twisted Sister’s “I Wanna Rock” for a Mitsubishi spot out of BSSP.

Quick Chat: Digital Arts’ Josh Heilbronner on Audi, Chase spots

New York City’s Digital Arts provided audio post on a couple of 30-second commercial spots that presented sound designer/mixer Josh Heilbronner with some unique audio challenges. They are Audi’s Night Watchman via agency Venables Bell & Partners in New York and Chase’s Mama Said Knock You Out, featuring Serena Williams from agency Droga5 in New York.

Josh Heilbronner

Heilbronner, who has been sound designing and mixing for broadcast and film for almost 10 years, has worked with clients ranging from fashion brands like Nike and J.Crew to Fortune 500 companies like General Electric, Bank of America and Estée Lauder. He has also mixed promos and primetime broadcast specials for USA Network, CBS and ABC Television. In addition to commercial VO recording, editing and mixing, Heilbronner has a growing credit list of long-form documentaries and feature films, including The Broken Ones, Romance (In the Digital Age), Generation Iron 2, The Hurt Business and Giving Birth in America (a CNN special series).

We recently reached out to Heilbronner to find out more about these two very different commercial projects and how he tackled each.

Both Audi and Chase are very different assignments from an audio perspective. How did these projects come your way?
On Audi, we were asked to be part of their new 2019 A7 campaign, which follows a security guard patrolling the Audi factory in the middle of the night. It’s sort of James Bond meets Night at the Museum. The factory is full of otherworldly rooms built to put the cars through their paces (extreme cold, isolation, etc.). Q Department did a great job crafting the sounds of those worlds and really bringing the viewer into the factory. The agency, Venables Bell & Partners, was looking to pull everything together tightly and have the dialogue land up-front, while still maintaining the wonderfully lush and dynamic music and sound design that had already been laid down.

The Chase Serena campaign is an impact-driven series of spots. Droga5 has a great reputation for putting together cinematic spots and this is no exception. Drazen Bosnjak from Q Department originally reached out to see if I would be interested in mixing this one because one of the final deliverables was the Jumbotron at the US Open in Arthur Ashe Stadium.

Digital Arts has a wonderful 7.1 Dolby-approved 4K theater, so we were able to really get a sense of what the finals would sound and look like up on the big screen.

Did you have any concerns going into the project about what would be required creatively or technically?
For Audi our biggest challenge was the tight deadline. We mixed in New York but we had three different time zones in play, so getting approvals could sometimes be difficult. With Chase, the amount of content for this campaign was large. We needed to deliver finals for broadcast, social media (Snapchat, Instagram, Facebook, Twitter), Jumbotron and cinema. Making sure they played back as loud and crisp as they could on all those platforms was a major focus.
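
For readers who want to experiment with that multi-platform loudness step, here is a minimal sketch (not Heilbronner’s actual workflow) that measures a mix’s integrated loudness and renders copies at a couple of illustrative targets using the open-source pyloudnorm library. The file name and the exact target values are assumptions; real delivery specs vary by broadcaster and platform.

```python
import soundfile as sf
import pyloudnorm as pyln

# Illustrative targets only; check each platform's actual delivery spec.
TARGETS_LUFS = {
    "broadcast": -24.0,  # in the neighborhood of US broadcast practice
    "social": -14.0,     # in the neighborhood of streaming/social normalization
}

data, rate = sf.read("chase_serena_mix.wav")  # hypothetical file name
meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
measured = meter.integrated_loudness(data)

for name, target in TARGETS_LUFS.items():
    # Gain the whole mix so its integrated loudness lands on the target.
    normalized = pyln.normalize.loudness(data, measured, target)
    sf.write(f"mix_{name}.wav", normalized, rate)
```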

What was the most challenging aspect for you on the project?
As with a lot of production audio, the noise on set was pretty extreme. For Audi they had to film the night watchman walking in different spaces, delivering the copy at a variety of volumes. It all needed to gel together as if he was in one smaller room talking directly to the camera, as if he were a narrator. We didn’t have access to re-record him, so we had to use a few different denoise tools, such as iZotope RX6, Brusfri and Waves WNS to clear out the clashing room tones.

The biggest challenge on Chase was the dynamic range and power of these spots. Serena’s beautifully hushed whisper narration is surrounded by impactful bass drops, cinematic hits and lush ambiences. Reining all that in, building to a climax and still having her narration be the focus was a game of cat and mouse. Also, broadcast standards are a bit restrictive when it comes to large impacts, so finding the right balance was key.

Any interesting technology or techniques that you used on the project?
I mainly use Avid Pro Tools Ultimate 2018. They have made some incredible advancements — you can now do everything on one machine, all in the box. I can have 180 tracks running in a surround session and still print every deliverable (5.1, stereo, stems etc.) without a hiccup.

I’ve been using Penteo 7 Pro for stereo-to-5.1 upmixing. It does a fantastic job filling in the surrounds, but also folds down to stereo nicely (and passes QC). Spanner is another useful tool when working with all sorts of channel counts. It allows me to down-mix, rearrange channels and route audio to the correct buses easily.
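
For context on what a fold-down involves, here is a toy example of the conventional ITU-style 5.1-to-stereo fold-down math. It is not how Penteo or Spanner work internally, and it assumes a channel order of L, R, C, LFE, Ls, Rs.

```python
import numpy as np

def fold_down_5_1_to_stereo(surround: np.ndarray) -> np.ndarray:
    """Fold a (num_samples, 6) 5.1 array, ordered L, R, C, LFE, Ls, Rs, to stereo."""
    L, R, C, LFE, Ls, Rs = surround.T
    g = 0.7071  # -3dB contribution for center and surrounds; LFE is dropped here
    left = L + g * C + g * Ls
    right = R + g * C + g * Rs
    stereo = np.stack([left, right], axis=1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 1.0 else stereo  # crude peak protection
```

The 0.7071 (-3dB) coefficients are the conventional way to keep the center and surround contributions from overloading the front pair when everything collapses to two channels; dedicated tools obviously do far more than this.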


Behind the Title: Mr. Bronx sound designer/mixer Dave Wolfe

NAME: Dave Wolfe

COMPANY: NYC’s Mr. Bronx Audio Post

CAN YOU DESCRIBE YOUR COMPANY?
Mr. Bronx is an audio post and sound design studio that works on everything from TV and film to commercials and installations.

WHAT’S YOUR JOB TITLE AND WHAT DOES THAT ENTAIL?
I am a partner and mixer. I do mostly sound design, dialogue editing and re-recording mixing. But I also have to manage the Bronx team, help create bids and get involved on the financial side.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER YOUR TITLE?
They would be surprised how often I change out old toilet paper rolls.

WHAT TOOLS DO YOU USE IN YOUR WORK?
Avid Pro Tools, and a ton of our sound design is created with Native Instruments Komplete, specifically, Reaktor and Kontakt.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love helping to push the story further. Also, I like how fast the turnover is on sound jobs. We’re always getting to tackle new challenges — we come in toward the end of a project, do our job and move on.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Lunchtime. We’re blessed with a full-time in-house chef named Gen Sato. He’s been here maybe six or seven years. He makes great cold soba noodles in the summer and David Chang’s Bo Ssam in the winter. David Chang has a well-known NYC restaurant called Momofuku Ssäm Bar. Bo Ssam is a slow-roasted pork shoulder with a sugary crust, placed in a lettuce wrap with rice and a ginger scallion sauce.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I was going to be a lawyer before I had this job. Now it’s hard to imagine what I would do without this gig, but if I had to choose, I would open a Jewish deli in Rhinebeck, New York. I could sell pastrami and lox.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I came to it late. I didn’t get my first apprenticeship until I was 25. A lot of kids tend to go to school for audio now.

I have a business degree, and I wanted to work for a record label. The first opening I found was in business affairs, so I started moving down that path. After the first two to three years there, however, I realized I was unhappy because I was creatively unfulfilled.

One day I went to MetLife Stadium for a football game and a girl asked what I would rather be doing instead. I said, “I’d rather be a mixer.” She said, “I know someone who is hiring.” Two weeks later, I left my job and took an apprenticeship at a mix house.

Random Acts of Flyness

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This summer we finished a TV show for HBO called Random Acts of Flyness, which aired at the end of August. It was a super creative challenge. It’s a variety show with live-action shorts, some sketch work, animated pieces and stop-motion animation. We would turn around an episode a week: sound design, dialogue edit, ADR and music edit, taking the project from soup to nuts, from an audio perspective.

The creator, Terence Nance, had a very specific vision for the project. HBO called it “a fluid, stream-of-consciousness response to the contemporary American mediascape.” Originally, I didn’t know what that meant, but after a couple minutes of watching, it made perfect sense.

We’ve also completed the first season of the comedy show 2 Dope Queens on HBO, with the second season coming up, and we did another as-yet-untitled project for Hulu. There are many more exciting works to come.

2 Dope Queens

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
This would also be Random Acts of Flyness. We were so proud to help bring this to life by supplying some heavy sound design. We love to lend a hand in order to tell really necessary stories.

It was also big for our company. We hired a new mixer, Geoff Strasser, who led the charge for us on this project. We knew that he was going to be a great fit, personality and skill set-wise.

One of our other mixers, Eric Hoffman, mixed and sound designed Lemonade almost single-handedly. Speaking as someone who helped start the company, I couldn’t be prouder of the people I get to work with.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Like every other person who works in audio post, there’s something I heavily use called the iZotope RX Post Production Suite. It’s a set of audio restoration plugins, and you can’t live without it if you do our type of work.

When someone is making a movie, TV show or commercial, they tend to leave audio to the end. They don’t usually spend a lot of time on it in production — as the saying goes, “we’ll fix it in post,” and these tools are how we fix it.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently bought a 1966 Ford pickup truck, so right now I’m meditatively polishing the hubcaps. That and playing my PS4.


Review: Audionamix IDC for cleaning dialogue

By Claudio Santos

Sound editing has many different faces. It is part of big-budget blockbuster movies and also an integral part of small hobby podcasting projects. Every project has its own sound needs. Some edit thousands upon thousands of sound effects. Others have to edit hundreds of hours of interviews. What most projects have in common, though, is that they circle around dialogue, whether in the form of character lines, interviews, narrators or any other format by which the spoken word guides the experience.

Now let’s be honest, dialogue is not always well recorded. Archival footage needs to be understood, even if the original recording was made with a microphone that was 20 feet away from the speaker in a basement full of machines. Interviews are often quickly recorded in the five minutes an artist has between two events while driving from point A to point B. And until electric cars are the norm, the engine sound will always be married to that recording.

The fact is, recordings are sometimes a little bit noisier than ideal, and it falls upon the sound editor to make it a little bit clearer.

To help with that endeavor, Audionamix has come out with the newest version of its IDC (Instant Dialogue Cleaner). I have been testing it on different kinds of material and must say that overall I’m very impressed with it.

Let’s get the awkward part of this conversation out of the way first and establish what IDC is not:

– It is not a full-featured restoration workstation, such as iZotope RX.
– It does not depend on the cloud like other Audionamix plugins.
– It is not magic.

Honestly, all that is fine because what it does do, it does very well and in a very straightforward manner.

IDC aims to keep it simple. You get three controls plus output level and bypass. This makes trying out the plugin on different samples of audio a very quick task, which means you don’t waste time on clips that are beyond salvation.
The three controls you get are:
– Strength: The aggressiveness of the algorithm
– Background: Level of the separated background noise
– Speech: Level of the separated speech

As with all digital processing tools, things sound a bit glitchy toward the extremes of the scales, but within reasonable parameters the plugin does a very good job of reducing background levels without garbling the speech too noticeably. I personally had fairly good results with strengths around 40% to 60%, and background reductions of up to -24 dB. Anything more radical than that sounded heavily processed.
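
To make those three controls concrete, here is a crude spectral-subtraction toy, emphatically not Audionamix’s algorithm, showing what “strength,” “speech” and “background” might conceptually govern. It assumes a mono clip whose first half second is background only.

```python
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

def toy_dialogue_cleaner(x, fs, strength=0.5, speech_db=0.0, background_db=-24.0):
    # STFT with a 1024-sample window (hop of 512 samples).
    f, t, X = stft(x, fs=fs, nperseg=1024)
    mag, phase = np.abs(X), np.angle(X)

    # Estimate the background spectrum from the first 0.5 s (assumed noise-only).
    noise_frames = max(1, int(0.5 * fs / 512))
    noise_profile = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # "Strength" controls how aggressively the background estimate is subtracted.
    speech_mag = np.maximum(mag - strength * noise_profile, 0.0)
    background_mag = mag - speech_mag  # whatever was taken away

    # "Speech" and "background" set the output level of each separated layer.
    out_mag = ((10 ** (speech_db / 20)) * speech_mag
               + (10 ** (background_db / 20)) * background_mag)
    _, y = istft(out_mag * np.exp(1j * phase), fs=fs, nperseg=1024)
    return y

audio, rate = sf.read("noisy_interview.wav")  # hypothetical mono clip
cleaned = toy_dialogue_cleaner(audio, rate, strength=0.5, background_db=-24.0)
sf.write("cleaned.wav", cleaned, rate)
```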

Now, it’s important to note that not all noise is the same. In fact, there are entirely different kinds of audio muck that obscure dialogue, and the IDC is more effective against some than others.

Noise reduction comparison between the original clip (1), Cedar DNS Two VST (2), Audionamix IDC (3) and iZotope RX 7 Voice De-noise (4). The clip features loud air conditioner noise behind close-mic’d dialogue. All plugins had their levels boosted by 4dB after processing.

– Constant broadband background noise (air conditioners, waterfalls, freezers): Here the IDC does fairly well. I didn’t notice much pumping at the beginning and end of phrases, and the background didn’t sound crippled either.

– Varying broadband background noise (distant cars passing, engines from inside cars): Here again, the IDC does a good job of increasing the dialogue/background ratio. It does introduce artifacts when the background noise spikes or varies very abruptly, but if the goal is to increase intelligibility then it is definitely a success in that area.

– Wind: On this kind of noise the IDC needs a helping hand from other processes. I tried to clean up some dialogue with heavy wind noise, and even though the wind was indeed lowered significantly, so was the speech under it, resulting in a pumping clip that went up and down following the shadow of the removed wind. I believe that with some pre-processing using high-pass filters and a little bit of limiting the results could have been better, but if you are emergency-buying this to clean up bad wind audio, I’d definitely keep that in mind. It does work well on light wind reduction, but in those cases too it seems to benefit from some pre-processing.

Summing Up
I am happily impressed by the plugin. It does not work miracles, but no one should really expect any tool to do so. It is great at improving the signal-to-noise ratio of your sound and does so with a very easy-to-use interface, which allows you to quickly decide whether you like the results or not. That alone is a plus worth taking into consideration.


Claudio Santos is a sound mixer and tech aficionado who works at Silver Sound in NYC. He has worked on a wide range of sound projects, from traditional shows like I Was Prey for Animal Planet to VR experiences like The Mile-Long Opera.


Cutters New York adds spot editor Alison Grasso

Cutters Studios in New York has added commercial editor Alison Grasso to its staff. Previously a staff editor for Crew Cuts in New York, Grasso started her commercial career with that company immediately upon graduation from NYU (BFA, Film and Television Production).

She has experience in documentary-style visual storytelling, beauty and fashion, and has collaborated with brands such as Garnier, Gatorade, L’Oreal, Pantene, Target and Verizon.

She cuts with Adobe Premiere on a Mac and uses After Effects when extra work is needed. Grasso also edits audio, such as the entire second season of the podcast Limetown and promotional material for the audio documentary The Wilderness, hosted by Pod Save America’s Jon Favreau.

When asked about editing audio, in particular Limetown, she says, “Premiere is obviously my ‘first language,’ so that made it much easier and faster to work with, versus something like Audition or Pro Tools, and I actually did use the video track to create visual slates and markers to help me through the edits. Since the episodes were often 30 to 60 minutes, it was incredibly helpful in jumping to certain scenes or sections, determining where mid-roll should be, how long certain scenes were playing out to be, etc. And when sharing with other people in the workflow (producers, directors, sound designers, etc.), I would export a QuickTime with a video track that made working remotely on comments and changes much quicker and easier, versus just referencing timecode and listening for contextual cues to get to a certain point in the edit.”

Her talents don’t end with editing. Grasso is also a director, shooter, writer and on-camera talent. Many New York stories — and in particular, those involving craft beer — have taken the spotlight in her latest projects.

“I aspire to do work that isn’t confined by boundaries,” says Grasso. “After seeing the breadth of work from Cutters Studios that supports global clients with projects that reach beyond the traditional, I’m confident the relationship will be a great fit. I’m really looking forward to contributing my sensibilities to the Cutters Studios culture, and being a positive, collaborative voice amongst my new peers, clients and colleagues.”


Making audio pop for Disney’s Mary Poppins Returns

By Jennifer Walden

As the song says, “It’s a jolly holiday with Mary.” And just in time for the holidays, there’s a new Mary Poppins musical to make the season bright. In theaters now, Disney’s Mary Poppins Returns is directed by Rob Marshall, who, with Chicago, Nine and Into the Woods on his resume, has become the master of modern musicals.

Renée Tondelli

In this sequel, Mary Poppins (Emily Blunt) comes back to help the now-grown-up Michael (Ben Whishaw) and Jane Banks (Emily Mortimer) by attending to Michael’s three children: Annabel (Pixie Davies), John (Nathanael Saleh) and Georgie (Joel Dawson). It’s a much-needed reunion for the family, as Michael is struggling with the loss of his wife.

Mary Poppins Returns is another family reunion of sorts. According to Renée Tondelli, who, along with Eugene Gearty, supervised and co-designed the sound, director Marshall likes to use the same crews on all his films. “Rob creates families in each phase of the film, so we all have a shorthand with each other. It’s really the most wonderful experience you can have in a filmmaking process,” says Tondelli, who has worked with Marshall on five films, three of which were his musicals. “In the many years of working in this business, I have never worked with a more collaborative, wonderful, creative team than I have on Mary Poppins Returns. That goes for everyone involved, from the picture editor down to all of our assistants.”

Sound editorial took place in New York at Sixteen 19, the facility where the picture was being edited. Sound mixing was also done in New York, at Warner Bros. Sound.

In his musicals, Marshall weaves songs into scenes in a way that feels organic. The songs are coaxed from the emotional quotient of the story. That’s not only true for how the dialogue transitions into the singing, but also for how the music is derived from what’s happening in the scene. “Everything with Rob is incredibly rhythmic,” she says. “He has an impeccable sense of timing. Every breath, every footstep, every movement has a rhythmic cadence to it that relates to and works within the song. He does this with every artform in the production — with choreography, production design and sound design.”

From a sound perspective, Tondelli and her team worked to integrate the songs by blending the pre-recorded vocals with the production dialogue and the ADR. “We combined all of those in a micro editing process, often syllable by syllable, to create a very seamless approach so that you can’t really tell where they stop talking and start singing,” she says.

The Conversation
For example, near the beginning of the film, Michael is looking through the attic of their home on Cherry Tree Lane as he speaks to the spirit of his deceased wife, telling her how much he misses her in a song called “The Conversation.” Tondelli explains, “It’s a very delicate scene, and it’s a song that Michael was speaking/singing. We constantly cut between his pre-records and his production dialogue. It was an amazing collaboration between me, the supervising music editor Jennifer Dunnington and re-recording mixer Mike Prestwood Smith. We all worked together to create this delicate balance so you really feel that he is singing his song in that scene in that moment.”

Since Michael is moving around the attic as he’s performing the song, the environment affects the quality of the production sound. As he gets closer to the window, the sound bounces off the glass. “Mike [Prestwood Smith] really had his work cut out for him on that song. We were taking impulse responses from the end of the slates and feeding them into Audio Ease’s Altiverb to get the right room reverb on the pre-records. We did a lot of impulse responses, reverbs and EQs to make that scene all flow, but it was worth it. It was so beautiful.”
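
Conceptually, what Altiverb does with those captured impulse responses is convolution. The sketch below shows the same idea with generic tools and hypothetical file names; it assumes mono files at matching sample rates and is only a stand-in for the plugin, not the team’s actual session.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Hypothetical mono files: a dry pre-record line and an IR captured on set.
dry, rate = sf.read("prerecord_line.wav")
ir, ir_rate = sf.read("attic_by_window_ir.wav")
assert rate == ir_rate, "resample the IR to the vocal's sample rate first"

wet = fftconvolve(dry, ir)[: len(dry)]   # convolve with the room, trim the tail
wet /= np.max(np.abs(wet))               # normalize the wet signal
blend = 0.75 * dry + 0.25 * wet          # wet/dry balance to taste
sf.write("prerecord_line_in_room.wav", blend, rate)
```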

The Bowl
They also captured impulse responses for another sequence, which takes place inside a ceramic bowl. The sequence begins with the three Banks children arguing over their mother’s bowl. They accidentally drop it and it breaks. Mary and Jack (Lin-Manuel Miranda) notice the bowl’s painted scenery has changed. The horse-drawn carriage now has a broken wheel that must be fixed. Mary spins the bowl and a gust of wind pulls them into the ceramic bowl’s world, which is presented in 2D animation. According to Tondelli, the sequence was hand-drawn, frame by frame, as an homage to the original Mary Poppins. “They actually brought some animators out of retirement to work on this film,” she says.

Tondelli and co-supervising sound editor/co-sound designer Eugene Gearty placed mics inside porcelain bowls, in a porcelain sink, and near marble tiles, which they thumped with rubber mallets, broken pieces of ceramic and other materials. The resulting ring-out was used to create reverbs that were applied to every element in the ceramic bowl sequence, from the dialogue to the Foley. “Everything they said, every step they took had to have this ceramic feel to it, so as they are speaking and walking it sounds like it’s all happening inside a bowl,” Tondelli says.

She first started working on this hand-drawn animation sequence when it showed little more than the actors against a greenscreen with a few pencil drawings. “The fastest and easiest way to make a scene like that come alive is through sound. The horse, which was possibly the first thing that was drawn, is pulling the carriage. It dances in this syncopated rhythm with the music, so it provides a rhythmic base. That was the first thing that we tackled.”

After the carriage is fixed, Mary and her troupe walk to the Royal Doulton Music Hall where, ultimately, Jack and Mary are going to perform. Traditionally, a music hall in London is very rowdy and boisterous. The audience is involved in the show and there’s an air of playfulness. “Rob said to me, ‘I want this to be an English music hall, Renée. You really have to make that happen.’ So I researched what music halls were like and how they sounded.”

Since the animation wasn’t complete, Tondelli consulted with the animators to find out who — or rather what — was going to be in the audience. “There were going to be giraffes dressed up in suits with hats and Indian elephants in beautiful saris, penguins on the stage dancing with Jack and Mary, flamingos, giant moose and rabbits, baby hippos and other animals. The only way I thought I could do this was to go to London and hire actors of all ages who could do animal voices.”

But there were some specific parameters that had to be met. Tondelli defines the world of Mary Poppins Returns as being “magical realism,” so the animals couldn’t sound too cartoony. They had to sound believably like animal versions of British citizens. Also, the actors had to be able to sing in their animal voices.

According to Tondelli, they recorded 15 actors at a time for a period of five days. “I would call out, ‘Who can do an orangutan?’ And then the actors would all do voices and we’d choose one. Then they would do the whole song and sing out and call out. We had all different accents — Cockney, Welsh and Scottish,” she says. “All the British Isles came together on this and, of course, they all loved Mary and knew all the songs so they sang along with her.”

On the Dolby Atmos mix, the music hall scene really comes alive. The audience’s voices are coming from the rafters and all around the walls and the music is reverberating into the space — which, by the way, no longer sounds like it’s in a ceramic bowl even though the music hall is in the ceramic bowl world. In addition to the animal voices, there are hooves and paws for the animals’ clapping. “We had to create the clapping in Foley because it wasn’t normal clapping,” explains Tondelli. “The music hall was possibly the most challenging, but also the funnest scene to do. We just loved it. All of us had a great time on it.”

The Foley
The Foley elements in Mary Poppins Returns often had to be performed in perfect sync with the music. On the big dance numbers, like “Trip the Light Fantastic,” the Foley was an essential musical element since the dances were reconstructed sonically in post. “Everything for this scene was wiped away, even the vocals. We ended up using a lot of the pre-records for this one and a lot less production sound,” says Tondelli.

In “Trip the Light Fantastic,” Jack is bringing the kids back home through the park, and they emerge from a tunnel to see nearly 50 lamplighters on lampposts. Marshall and John DeLuca (choreographer/producer/screen story writer) arranged the dance to happen in multiple layers, with each layer doing something different. “The background dancers were doing hand slaps and leg swipes, and another layer was stepping on and off of these slate surfaces. Every time the dancers would jump up on the lampposts, they’d hit them, and each would ring out at a different pitch,” explains Tondelli.

All those complex rhythms were performed in Foley in time to the music. It’s a pretty tall order to ask of any Foley artist but Tondelli has the perfect solution for that dilemma. “I hire the co-choreographers (for this film, Joey Pizzi and Tara Hughes) or dancers that actually worked on the film to do the Foley. It’s something that I always do for Rob’s films. There’s such a difference in the performance,” she says.

Tondelli worked with the Foley team of Marko Costanzo and George Lara at c5 Sound in New York, who helped to build custom surfaces — like a slate-on-sand surface for the lamplighter dance — and arrange multi-surface layouts to optimally suit the Foley performer’s needs.

For instance, in the music hall sequence, the dance on stage incorporates books, so they needed three different surfaces: wood, leather and a papery-sounding surface set up in a logical, easily accessible way. “I wanted the dancer performing the Foley to go through the entire number while jumping off and on these different surfaces so you felt like it was a complete dance and not pieced together,” she says.

For the lamplighter dance, they had a big, thick pig iron pipe next to the slate floor so that the dancer performing the Foley could hit it every time the dancers on-screen jumped up on the lampposts. “So the performer would dance on the slate floor, then hit the pipe and then jump over to the wood floor. It was an amazingly syncopated rhythmic soundtrack,” says Tondelli.

“It was an orchestration, a beautiful sound orchestra, a Foley orchestra that we created and it had to be impeccably in sync. If there was a step out of place you’d hear it,” she continues. “It was really a process to keep it in sync through all the edit conforms and the changes in the movie. We had to be very careful doing the conforms and making the adjustments because even one small mistake and you would hear it.”

The Wind
Wind plays a prominent role in the story. Mary Poppins descends into London on a gust of wind. Later, they’re transported into the ceramic bowl world via a whirlwind. “It’s everywhere, from a tiny leaf blowing across the sidewalk to the huge gale in the park,” attests Tondelli. “Each one of those winds has a personality that Eugene [Gearty] spent a lot of time working on. He did amazing work.”

As far as the on-set fans and wind machines wreaking havoc on the production dialogue, Tondelli says there were two huge saving graces. First was production sound mixer Simon Hayes, who did a great job of capturing the dialogue despite the practical effects obstacles. Second was dialogue editor Alexa Zimmerman, who was a master at iZotope RX. All told, about 85% of the production dialogue made it into the film.

“My goal — and my unspoken order from Rob — was to not replace anything that we didn’t have to. He’s so performance-oriented. He arduously goes over every single take to make sure it’s perfect,” says Tondelli, who also points out that Marshall isn’t afraid of using ADR. “He will pick words from a take and he doesn’t care if it’s coming from a pre-record and then back to ADR and then back to production. Whichever has the best performance is what wins. Our job then is to make all of that happen for him.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.


Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.

Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko


First Man: Historical fiction meets authentic sound

By Jennifer Walden

Historical fiction is not a rigidly factual account, but rather an interpretation. Fact and fiction mix to tell a story in a way that helps people connect with the past. In director Damien Chazelle’s film First Man, audiences experience his vision of how the early days of space exploration may have been for astronaut Neil Armstrong.

Frank A. Montaño

The uncertainty of reaching the outer limits of Earth’s atmosphere, the near disasters and mistakes that led to the loss of several lives, and the ultimate success of landing on the moon are all presented so viscerally that the audience feels as though it is riding along with Armstrong.

While First Man is not a documentary, there are factual elements in the film, particularly in the sound. “The concept was to try to be true to the astronauts’ sonic experience. What would they hear?” says effects re-recording mixer Frank A. Montaño, who mixed the film alongside re-recording mixer Jon Taylor (on dialogue/music) in the Alfred Hitchcock Theater at Universal Studios in Los Angeles.

Supervising sound editors Ai-Ling Lee (who also did re-recording mixing on the film) and Milly Iatrou were in charge of designing a soundtrack that was both authentic and visceral — a mix of reality and emotionality. When Armstrong (Ryan Gosling) and Dave Scott (Christopher Abbott) are being shot into space on a Gemini mission, everything the audience hears may not be completely accurate, but it’s meant to produce the accurate emotional response — i.e., fear, uncertainty, excitement, anxiety. The sound helps the audience to connect with the astronauts strapped into that handcrafted space capsule as it rattles and clatters its way into space.

As for the authentic sounds related to the astronauts’ experience — from the switches and toggles to the air inside the spacesuits — those were collected by several members of the post sound team, including Montaño, who by coincidence is an avid fan of the US space program and full of interesting facts on the subject. Their mission was to find and record era-appropriate NASA equipment and gear.

Recording
Starting at ILC Dover in Frederica, Delaware — original manufacturer of spacesuits for the Apollo missions — Montaño and sound effects recordist Alex Knickerbocker recorded a real A7L-B, which, says Montaño, is the second revision of the Apollo suit. It was actually worn by astronaut Paul Weitz, although it wasn’t the one he wore in space. “ILC Dover completely opened up to us, and were excited for this to happen,” says Montaño.

They spent eight hours recording every detail of the suit, like the umbilicals snapping in and out of place, and gloves and helmet (actually John Young’s from Apollo 10) locking into the rings. “In the film, when you see them plug in the umbilical for water or air, that’s the real sound. When they are locking the bubble helmet on to Neil’s suit in the clean room, that’s the real sound,” explains Montaño.

They also captured the internal environment of the spacesuit, which had never been officially documented before. “We could get hours of communications — that was easy — but there was no record of what those astronauts [felt like in those] spacesuits for that many hours, and how those things kept them alive,” says Montaño.

Back at Universal on the Hitchcock stage, Taylor and mix tech Bill Meadows were receiving all the recorded sounds from Montaño and Knickerbocker, who were still at ILC Dover. “We weren’t exactly in the right environment to get these recordings, so JT [Jon Taylor] and Bill let us know if it was a little too live or a little too sharp, and we’d move the microphones or try different microphones or try to get into a quieter area,” says Montaño.

Next, Montaño and Knickerbocker traveled to the US Space and Rocket Center in Huntsville, Alabama, where the Saturn V rocket was developed. “This is where Wernher von Braun (chief architect of the Saturn V rocket) was based out of, so they have a huge Apollo footprint,” says Montaño. There they got to work inside a Lunar Excursion Module (LEM) simulator, which according to Montaño was one of only two that were made for training. “All Apollo astronauts trained in these simulators including Neil and Buzz, so it was under plexiglass as it was only for observation. But, they opened it up to us. We got to go inside the LEM and flip all the switches, dials, and knobs and record them. It was historic. This has never been done before and we were so excited to be there,” says Montaño.

Additionally, they recorded a DSKY (display and keyboard) flight guidance computer used by the crew to communicate with the LEM computer. This can be seen during the sequence of Buzz (Corey Stoll) and Neil landing on the moon. “It has this big numeric keypad, and when Buzz is hitting those switches it’s the real sound. When they flip all those switch banks, all those sounds are the real deal,” reports Montaño.

Other interesting recording adventures include the Cosmosphere in Hutchinson, Kansas, where they recorded all the switches and buttons of the original control flight consoles from Mission Control at the Johnson Space Center (JSC). At Edwards Airforce Base in Southern California, they recorded Joe Walker’s X-15 suit, capturing the movement and helmet sounds.

The team also recorded Beta cloth at the Space Station Museum in Novato, California, which is the white-colored, fireproof silica fiber cloth used for the Apollo spacesuits. Gene Cernan’s (Apollo 17) connector cover was used, which reportedly sounds like a plastic bag or hula skirt.

Researching
They also recreated sounds based on research. For example, they recorded an approximation of lunar boots on the moon’s surface, not from an exterior perspective but as the astronauts would have heard it: what would boots on the lunar surface sound like from inside the spacesuit? First, they did the research to find the right silicone used during that era. Then Frank Cuomo, who is a post supervisor at Universal, created a unique pair of lunar boots based on Montaño’s idea of having ports above the soles, into which they could insert lav mics. “Frank happens to do this as a hobby, so I bounced this idea for the boots off of him and he actually made them for us,” says Montaño.

Next, they researched what the lunar surface was made of. Their path led to NASA’s Ames Research Center where they have an eight-ton sandbox filled with JSC-1A lunar regolith simulant. “It’s the closest thing to the lunar surface that we have on earth,” he explains.

He strapped on the custom-made boots and walked on this “lunar surface” while Knickerbocker and sound effects recordist Peter Brown captured it with numerous different mics, including a hydrophone placed on the surface, “which gave us a thuddy, non-pitched/non-fidelity-altered sound that was the real deal,” says Montaño. “But what worked best, to get that interior sound, were the lav mics inside those ports on the soles.”

While the boots on the lunar surface sound ultimately didn’t make it into the film, the boots did come in handy for creating a “boots on LEM floor” sound. “We did a facsimile session. JT (Taylor) brought in some aluminum and we rigged it up and got the silicone soles on the aluminum surface for the interior of the LEM,” says Montaño.

Jon Taylor

Another interesting sound they recreated was the low-fuel alarm sound inside the LEM. According to Montaño, their research uncovered a document that shows the alarm’s specific frequencies, that it was a square wave, and that it was 750 cycles to 2,000 cycles. “The sound got a bit tweaked out just for excitement purposes. You hear it on their powered descent, when they’re coming in for a landing on the moon, and they’re low on fuel and 20 seconds from a mandatory abort.”
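
Out of curiosity, that spec is easy to audition. The sketch below generates a square-wave alarm from those two frequencies; treating it as an alternating two-tone warble, along with the segment length and level, is an assumption, since the article only gives the wave shape and the 750- and 2,000-cycle figures.

```python
import numpy as np
import soundfile as sf
from scipy.signal import square

fs = 48000
segment = 0.25                             # seconds per tone (arbitrary choice)
repeats = 8                                # number of low/high alternations
t = np.arange(int(segment * fs)) / fs

low = 0.3 * square(2 * np.pi * 750 * t)    # 750-cycle square wave
high = 0.3 * square(2 * np.pi * 2000 * t)  # 2,000-cycle square wave
alarm = np.tile(np.concatenate([low, high]), repeats)

sf.write("lem_low_fuel_alarm_sketch.wav", alarm.astype(np.float32), fs)
```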

Altogether, the recording process was spread over nearly a year, with about 98% of their recorded sounds making it into the final soundtrack. Taylor says, “The locking of the gloves, and the locking and handling of the helmet that belonged to John Young, will live forever. It was an honor to work with that material.”

Montaño adds, “It was good to get every angle that we could, for all the sounds. We spent hours and hours trying to come up with these intangible pieces that only a handful of people have ever heard, and they’re in the movie.”

Helmet Comms
To recreate the comms sound of the transmissions back and forth between NASA and the astronauts, Montaño and Taylor took a practical approach. Instead of relying on plug-ins for futz and reverb, they built a 4-foot-by-3-foot isolated enclosure on wheels, deadened with acoustical foam and featuring custom fit brackets inside to hold either a high-altitude helmet (to replicate dialogue for the X-15 and the Gemini missions) or a bubble helmet (for the Apollo missions).

Each helmet was recorded independently using its own two-way coaxial car speaker and a set of microphones strapped to mini tripods that were set inside each helmet in the enclosure. The dialogue was played through the speaker in the helmet and sent back to the console through the mics. Taylor says, “It would come back really close to being perfectly in sync. So I could do whatever balance was necessary and it wouldn’t flange or sound strange.”

By adjusting the amount of helmet feed in relation to the dry dialogue, Taylor was able to change the amount of “futz.” If a scene was sonically dense, or dialogue clarity wasn’t an issue (such as the tech talk exchanges between Houston and the astronauts), then Taylor could push the futz further. “We were constantly changing the balance depending on what the effects and music were doing. Sometimes we could really feel the helmet and other times we’d have to back off for clarity’s sake. But it was always used, just sometimes more than others.”

Density and Dynamics
The challenge of the mix on First Man was to keep the track dynamic and not let the sound get too loud until it absolutely needed to. This made the launches feel powerful and intense. “If everything were loud up to that point, it just wouldn’t have the same pop,” says Taylor. “The director wanted to make sure that when we hit those rockets they felt huge.”

One way to support the dynamics was choosing how to make the track appropriately less dense. For example, during the Gemini launch there are the sounds of the rocket’s different stages as it blasts off and breaks through the atmosphere, and there’s the sound of the space capsule rattling and metal groaning. On top of that, there’s Neil’s voice reading off various specs.

“When it comes to that kind of density sound-wise, you have to decide should we hear the actors? Are we with them? Do we have to understand what they are saying? In some cases, we just blew through that dialogue because ‘RCS Breakers’ doesn’t mean anything to anybody, but the intensity of the rocket does. We wanted to keep that energy alive, so we drove through the dialogue,” says Montaño. “You can feel that Neil’s calm, but you don’t need to understand what he’s saying. So that was a trick in the balance; deciding what should be heard and what we can gloss over.”

Another helpful factor was that the film’s score, by composer Justin Hurwitz, wasn’t bombastic. During the rocket launches, it wasn’t fighting for space in the mix. “The direction of the music is super supportive and it never had to play loud. It just sits in the pocket,” says Taylor. “The Gemini launch didn’t have music, which really allowed us to take advantage of the sonic structure that was built into the layers of sound effects and design for the take off.”

Without competition from the music and dialogue, the effects could really take the lead and tell the story of the Gemini launch. The camera stays close-up on Neil in the cockpit and doesn’t show an exterior perspective (as it does during the Apollo launch sequence). The audiences’ understanding of what’s happening comes from the sound. You hear the “bbbbbwhoop” of the Titan II missile during ignition, and hear the liftoff of the rocket. You hear the point at which they go through maximum dynamic pressure, characterized by the metal rattling and groaning inside the capsule as it’s subjected to extreme buffeting and stress.

Next you hear the first stage cut-off and the initial boosters break away followed by the ignition of the second stage engine as it takes over. Then, finally, it’s just the calmness of space with a few small metal pings and groans as the capsule settles into orbit.

Even though it’s an intense sequence, all the details come through in the mix. “Once we got the final effects tracks, as usual, we started to add more layers and more detail work. That kind of shaping is normal. The Gemini launch builds to that moment when it comes to an abrupt stop sonically. We built it up layer-wise with more groan, more thrust, more explosive/low-end material to give it some rhythm and beats,” says Montaño.

Although the rocket sounds like it’s going to pieces, Neil doesn’t sound like he’s going to pieces. He remains buttoned-up and composed. “The great thing about that scene was hearing the contrast between this intense rocket and the calmness of Neil’s voice. The most important part of the dialogue there was that Neil sounded calm,” says Taylor.

Apollo
Visually, the Apollo launch was handled differently in the film. There are exterior perspectives, but even though the camera shows the launch from various distances, the sound maintains its perspective — close as hell. “We really filled the room up with it the whole time, so it always sounds large, even when we are seeing it from a distance. You really feel the weight and size of it,” says Montaño.

The rocket that launched the Apollo missions was the most powerful ever created: the Saturn V. Recreating that sound was a big job and came with a bit of added pressure from director Chazelle. “Damien [Chazelle] had spoken with one of the Armstrong sons, Mark, who said he’s never really felt or heard a Saturn V liftoff correctly in a film. So Damien threw it our way. He threw down the gauntlet and challenged us to make the Armstrong family happy,” says Montaño.

Field recordists John Fasal and Skip Longfellow were sent to record the launch of the world’s second largest rocket — SpaceX’s Falcon Heavy. They got as close as they could to the rocket, which generated 5.5 million pounds of thrust. They also recorded it at various distances farther away. This was the biggest component of their Apollo launch sound for the film. It’s also bolstered by recordings that Lee captured of various rocket liftoffs at Vandenberg Air Force Base in California.

But recreating the world’s most powerful rocket required some mega recordings that regular mics just couldn’t produce. So they headed over to the Acoustic Test Chamber at JPL in Pasadena, which is where NASA sonically bombards and acoustically excites hardware before it’s sent into space. “They simulate the conditions of liftoff to see if the hardware fails under that kind of sound pressure,” says Montaño. They do this by “forcing nitrogen gas through this six-inch hose that goes into a diaphragm that turns that gas into some sort of soundwave, like pink noise. There are four loudspeakers bolted to the walls of this hard-shelled room, and the speakers are probably about 4 feet by 4 feet. It goes up to 153dB in there; that’s max.” (Fun fact: The sound team wasn’t able to physically be in the room to hear the sound since the gas would have killed them. They could only hear the sound via their recordings.)

The low-end energy of that sound was a key element in their Apollo launch. So how do you capture the most low-end possible from a high-SPL source? Taylor had an interesting solution of using a 10-inch bass speaker as a microphone. “Years ago, while reading a music magazine, I discovered this method of recording low-end using a subwoofer or any bass speaker. If you have a 10-inch speaker as a mic, you’re going to be able to capture much more low-end. You may even be able to get as low as 7Hz,” Taylor says.

Montaño adds, “We were able to capture another octave lower than we’d normally get. The sounds we captured really shook the room, really got your chest cavity going.”

For the rocket sequences — the X-15 flight, the Gemini mission and the Apollo mission — their goal was to craft an experience the audience could feel. It was about energy and intensity, but also clarity.

Taylor concludes, “Damien’s big thing — which I love — is that he is not greedy when it comes to sound. Sometimes you get a movie where everything has to be big. Often, Damien’s notes were for things to be lower, to lower sounds that weren’t rocket affiliated. He was constantly making sure that we did what we could to get those rocket scenes to punch, so that you really felt it.”


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney


Capturing realistic dialogue for The Front Runner

By Mel Lambert

Early on in his process, The Front Runner director Jason Reitman asked frequent collaborator and production sound mixer Steve Morrow, CAS, to join the production. “It was maybe inevitable that Jason would ask me to join the crew,” says Morrow, who has worked with the director on Labor Day, Up in the Air and Thank You for Smoking. “I have been part of Jason’s extended family for at least 10 years — having worked with his father Ivan Reitman on Draft Day — and know how he likes to work.”

Steve Morrow

This Sony Pictures film was co-written by Reitman, Matt Bai and Jay Carson, and based on Bai’s book, “All the Truth Is Out.” The Front Runner follows the rise and fall of Senator Gary Hart, set during his unsuccessful presidential campaign in 1988 when he was famously caught having an affair with the much younger Donna Rice. Despite capturing the imagination of young voters, and being considered the overwhelming front runner for the Democratic nomination, Hart’s campaign was sidelined by the affair.

It stars Hugh Jackman as Gary Hart, Vera Farmiga as his wife Lee, J.K. Simmons as campaign manager Bill Dixon and Alfred Molina as the Washington Post’s managing editor, Ben Bradlee.

“From the first read-through of the script, I knew that we would be faced with some production challenges,” recalls Morrow, a 20-year industry veteran. “There were a lot of ensemble scenes with the cast talking over one another, and I knew from previous experience that Jason doesn’t like to rely on ADR. Not only is he really concerned about the quality of the sound we secure from the set — and gives the actors space to prepare — but Jason’s scripts are always so well-written that they shouldn’t need replacement lines in post.”

Ear Candy Post’s Perry Robertson and Scott Sanders, MPSE, served as co-supervising sound editors on the project, which was re-recorded on Deluxe Stage 2 — the former Glen Glenn Sound facility — by Chris Jenkins handling dialogue and music and Jeremy Peirson, CAS, overseeing sound effects. Sebastian Sheehan Visconti was sound effects editor.

With as many as two dozen actors in a busy scene, Morrow soon realized that he would have to mic all of the key campaign team members. “I knew that we were shooting a political film like [Alan J. Pakula’s] All the President’s Men or [Michael Ritchie’s] The Candidate, so I referred back to the multichannel techniques pioneered by Jim Webb and his high-quality dialogue recordings. I elected to use up to 18 radio mics for those ensemble scenes,” including Reitman’s long opening sequence in which the audience learns who the key participants are on the campaign trail. He did this “while recording each actor on a separate track, together with a guide mono mix of the key participants for picture editor Stefan Grube.”

Reitman is well known for his films’ elaborate opening title sequences and often highly subjective narration from a main character. His motion pictures typically revolve around characters that are brashly self-confident, but then begin to rethink their lives and responsibilities. He is also reported to be a fan of ‘70s-style cinema verite, which uses a meandering camera and overlapping dialogue to draw the audience into an immersive reality. The Front Runner’s soundtrack is layered with dialogue, together with a constant hum of conversation — from the principals to the press and campaign staff. Since Bai and Carson have written political speeches, Reitman had them on set to ensure that conversations sounded authentic.

Even though there might be four or so key participants speaking in a scene, “Jason wants to capture all of the background dialogue between working press and campaign staff, for example,” Morrow continues.

“He briefed all of the other actors on what the scene was about so they could develop appropriate conversations and background dialogue while the camera roamed around the room. In other words, if somebody was on set they got a mic — one track per actor. In addition to capturing everything, Jason wanted me to have fun with the scene; he likes a solid mix for the crew, dailies and picture editorial, so I gave him the best I could get. And we always had the ability to modify it later in post production from the iso mic channels.”

Morrow recorded the pre-fader individual tracks at between 10dB and 15dB lower than the main mix, “which I rode hot, knowing that we could go back and correct it in post. Levels on that main mix were within ±5 dB most of the time,” he says. Assisting Morrow during the 40-day shoot, which took place in and around Atlanta and Savannah, were Collin Heath and Craig Dollinger, who also served as the boom operator on a handful of scenes.

The mono production mix was also useful for the camera crew, says Morrow. “They sometimes had problems understanding the dramatic focus of a particular scene. In other words, ‘Where does my eye go?’ When I fed my mix to their headphones they came to understand which actors we were spotlighting from the script. This allowed them to follow that overview.”

Production Tools
Morrow used a Behringer Midas Model M32R digital console that features 16 rear-channel inputs and 16 more inputs via a stage box that connects to the M32R via a Cat-5 cable. The console provided pre-fader and mixed outputs to Morrow’s pair of 64-track Sound Devices 970 hard-disk recorders — a main and a parallel backup — via Audinate Dante digital ports. “I also carried my second M32R mixer as a spare,” Morrow says. “I turned over the Compact Flash media at the end of each day’s shooting and retained the contents of the 970’s internal 1TB SSDs and external back-up drives until the end of post, just in case. We created maybe 30GB of data per recorder per day.”

Color coding helps Morrow mix dialogue more accurately.

For easy level checking, the two recorders with front-panel displays were mounted on Morrow’s production sound cart directly above his mixing console. “When I can, I color code the script to highlight the dialogue of key characters in specific scenes,” he says. “It helps me mix more accurately.”

RF transmitters comprised two dozen Lectrosonics SSM Micro belt-pack units — Morrow bought six or seven more for the film — linked to a bank of Lectrosonics Venue2 modular four-channel and three-channel VR receivers. “I used my collection of Sanken COS-11D miniature lavalier microphones for the belt packs. They are my go-to lavs with clean audio output and excellent performance. I also have some DPA lavaliers, if needed.”

With 20+ RF channels simultaneously in use within metropolitan centers, frequency coordination was an essential chore to ensure consistent operation for all radio systems. “The Lectrosonics Venue receivers can auto-assign radio-mic frequencies,” Morrow explains. “The best way to do this is to have everything turned off, and then one by one let the system scan the frequency spectrum. When it finds a good channel, you assign it to the first microphone and then repeat that process for the next radio transmitters. I try to keep up with FCC deliberations [on diminishing RF spectrum space], but realize that companies who manufacture this equipment also need to be more involved. So, together, I feel good that we’ll have the separation we all need for successful shoots.”

Morrow’s setup.

Morrow also made several location recordings on set. “I mounted a couple of lavaliers on bumpers to secure car-byes and other sounds for supervising sound editor Perry Robertson, as well as backgrounds in the house during a Labor Day gathering. We also recorded Vera Farmiga playing the piano during one scene — she is actually a classically-trained pianist — using a DPA Model 5099 microphone (which I also used while working on A Star is Born). But we didn’t record much room tone, because we didn’t find it necessary.”

During scenes at a campaign rally, Morrow provided a small PA system that comprised a couple of loudspeakers mounted on a balcony and a vocal microphone on the podium. “We ran the system at medium levels, simply to capture the reverb and ambiance of the auditorium,” he explains, “but not so much that it caused problems in post production.”

Summarizing his experience on The Front Runner, Morrow offers that Reitman, and his production partner Helen Estabrook, bring a team spirit to their films. “The set is a highly collaborative environment. We all hang out with one another and share birthdays together. In my experience, Jason’s films are always from the heart. We love working with him 120%. The low point of the shoot is going home!”


Mel Lambert has been involved with production and post on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Sound Lounge Film+Television adds Atmos mixing, Evan Benjamin

Sound Lounge’s Film + Television division, which provides sound editorial, ADR and mixing services for episodic television, features and documentaries, is upgrading its main mix stage to support editing and mixing in the Dolby Atmos format.

Sound Lounge Film + Television division EP Rob Browning says that the studio expects to begin mixing in Dolby Atmos by the beginning of next year and that will allow it to target more high-end studio features. Sound Lounge is also installing a Dolby Atmos Mastering Suite, a custom hardware/software solution for preparing Dolby Atmos content for Blu-ray and streaming release.

It has also added veteran supervising sound editor, designer and re-recording mixer Evan Benjamin to its team. Benjamin is best known for his work in documentaries, including the feature doc RBG, about Supreme Court Justice Ruth Bader Ginsburg, as well as documentary series for Netflix, Paramount Network, HBO and PBS.

Benjamin is a 20-year industry veteran with credits on more than 130 film, television and documentary projects, including Paramount Network’s Rest in Power: The Trayvon Martin Story and HBO’s Baltimore Rising. Additionally, his credits include Time: The Kalief Browder Story, Welcome to Leith, Joseph Pulitzer: Man of the People and Moynihan.