
First Man: Historical fiction meets authentic sound

By Jennifer Walden

Historical fiction is not a rigidly factual account, but rather an interpretation. Fact and fiction mix to tell a story in a way that helps people connect with the past. In director Damien Chazelle’s film First Man, audiences experience his vision of how the early days of space exploration may have been for astronaut Neil Armstrong.

Frank A. Montaño

The uncertainty of reaching the outer limits of Earth’s atmosphere, the near disasters and mistakes that led to the loss of several lives, and the ultimate success of landing on the moon — these are presented so viscerally that the audience feels as though they are riding along with Armstrong.

While First Man is not a documentary, there are factual elements in the film, particularly in the sound. “The concept was to try to be true to the astronauts’ sonic experience. What would they hear?” says effects re-recording mixer Frank A. Montaño, who mixed the film alongside re-recording mixer Jon Taylor (on dialogue/music) in the Alfred Hitchcock Theater at Universal Studios in Los Angeles.

Supervising sound editors Ai-Ling Lee (who also did re-recording mixing on the film) and Milly Iatrou were in charge of designing a soundtrack that was both authentic and visceral — a mix of reality and emotionality. When Armstrong (Ryan Gosling) and Dave Scott (Christopher Abbott) are being shot into space on a Gemini mission, not everything the audience hears is completely accurate, but it’s all meant to produce an accurate emotional response — fear, uncertainty, excitement, anxiety. The sound helps the audience to connect with the astronauts strapped into that handcrafted space capsule as it rattles and clatters its way into space.

As for the authentic sounds related to the astronauts’ experience — from the switches and toggles to the air inside the spacesuits — those were collected by several members of the post sound team, including Montaño, who by coincidence is an avid fan of the US space program and full of interesting facts on the subject. Their mission was to find and record era-appropriate NASA equipment and gear.

Recording
Starting at ILC Dover in Frederica, Delaware — original manufacturer of spacesuits for the Apollo missions — Montaño and sound effects recordist Alex Knickerbocker recorded a real A7L-B, which, says Montaño, is the second revision of the Apollo suit. It was actually worn by astronaut Paul Weitz, although it wasn’t the one he wore in space. “ILC Dover completely opened up to us, and were excited for this to happen,” says Montaño.

They spent eight hours recording every detail of the suit, like the umbilicals snapping in and out of place, and gloves and helmet (actually John Young’s from Apollo 10) locking into the rings. “In the film, when you see them plug in the umbilical for water or air, that’s the real sound. When they are locking the bubble helmet on to Neil’s suit in the clean room, that’s the real sound,” explains Montaño.

They also captured the internal environment of the spacesuit, which had never been officially documented before. “We could get hours of communications — that was easy — but there was no record of what those astronauts [felt like in those] spacesuits for that many hours, and how those things kept them alive,” says Montaño.

Back at Universal on the Hitchcock stage, Taylor and mix tech Bill Meadows were receiving all the recorded sounds from Montaño and Knickerbocker, who were still at ILC Dover. “We weren’t exactly in the right environment to get these recordings, so JT [Jon Taylor] and Bill let us know if it was a little too live or a little too sharp, and we’d move the microphones or try different microphones or try to get into a quieter area,” says Montaño.

Next, Montaño and Knickerbocker traveled to the US Space and Rocket Center in Huntsville, Alabama, where the Saturn V rocket was developed. “This is where Wernher von Braun (chief architect of the Saturn V rocket) was based, so they have a huge Apollo footprint,” says Montaño. There they got to work inside a Lunar Excursion Module (LEM) simulator which, according to Montaño, was one of only two made for training. “All Apollo astronauts trained in these simulators, including Neil and Buzz, so it was under plexiglass, as it was only for observation. But they opened it up to us. We got to go inside the LEM and flip all the switches, dials and knobs and record them. It was historic. This has never been done before and we were so excited to be there,” says Montaño.

Additionally, they recorded a DSKY (Display and Keyboard), the flight guidance computer interface used by the crew to communicate with the LEM’s computer. This can be seen during the sequence of Buzz (Corey Stoll) and Neil landing on the moon. “It has this big numeric keypad, and when Buzz is hitting those switches it’s the real sound. When they flip all those switch banks, all those sounds are the real deal,” reports Montaño.

Other interesting recording adventures included the Cosmosphere in Hutchinson, Kansas, where they recorded all the switches and buttons of the original flight control consoles from Mission Control at the Johnson Space Center (JSC). At Edwards Air Force Base in Southern California, they recorded Joe Walker’s X-15 suit, capturing the movement and helmet sounds.

The team also recorded Beta cloth — the white, fireproof silica-fiber cloth used for the Apollo spacesuits — at the Space Station Museum in Novato, California. They used Gene Cernan’s (Apollo 17) connector cover, which reportedly sounds like a plastic bag or hula skirt.

Researching
They also recreated sounds based on research. For example, they could record an approximation of lunar boots on the moon’s surface from an exterior perspective easily enough, but what would boots on the lunar surface sound like from inside the spacesuit? First, they did the research to find the right silicone used during that era. Then Frank Cuomo, a post supervisor at Universal, created a unique pair of lunar boots based on Montaño’s idea of having ports above the soles, into which they could insert lav mics. “Frank happens to do this as a hobby, so I bounced this idea for the boots off of him and he actually made them for us,” says Montaño.

Next, they researched what the lunar surface was made of. Their path led to NASA’s Ames Research Center, where they have an eight-ton sandbox filled with JSC-1A lunar regolith simulant. “It’s the closest thing to the lunar surface that we have on Earth,” he explains.

He strapped on the custom-made boots and walked on this “lunar surface” while Knickerbocker and sound effects recordist Peter Brown captured it with numerous different mics, including a hydrophone placed on the surface, “which gave us a thuddy, non-pitched/non-fidelity-altered sound that was the real deal,” says Montaño. “But what worked best, to get that interior sound, were the lav mics inside those ports on the soles.”

While the boots-on-the-lunar-surface sound ultimately didn’t make it into the film, the boots did come in handy for creating a “boots on LEM floor” sound. “We did a facsimile session. JT [Taylor] brought in some aluminum and we rigged it up and got the silicone soles on the aluminum surface for the interior of the LEM,” says Montaño.

Jon Taylor

Another interesting sound they recreated was the low-fuel alarm inside the LEM. According to Montaño, their research uncovered a document specifying the alarm’s characteristics: a square wave, running from 750 cycles to 2,000 cycles. “The sound got a bit tweaked out just for excitement purposes. You hear it on their powered descent, when they’re coming in for a landing on the moon, and they’re low on fuel and 20 seconds from a mandatory abort.”
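For readers curious what that spec describes, here is a minimal Python sketch that synthesizes a square-wave alarm built on those two documented frequencies. The alternation rate, level and duration are illustrative assumptions, not details from the film’s research:

```python
import numpy as np
from scipy.io import wavfile

SR = 48000            # sample rate (Hz)
DUR = 3.0             # clip length in seconds
ALT_RATE = 2.5        # assumed rate of alternation between the two tones (Hz)

t = np.arange(int(SR * DUR)) / SR
# Square waves at the two documented frequencies: 750 and 2,000 cycles
sq_750 = np.sign(np.sin(2 * np.pi * 750 * t))
sq_2000 = np.sign(np.sin(2 * np.pi * 2000 * t))
# Gate between the two tones to produce a warbling alarm character
gate = np.sign(np.sin(2 * np.pi * ALT_RATE * t)) > 0
alarm = np.where(gate, sq_750, sq_2000) * 0.3   # scaled down for headroom
wavfile.write("lem_low_fuel_alarm.wav", SR, alarm.astype(np.float32))
```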

Altogether, the recording process was spread over nearly a year, with about 98% of their recorded sounds making it into the final soundtrack. Taylor says, “The locking of the gloves, and the locking and handling of the helmet that belonged to John Young, will live forever. It was an honor to work with that material.”

Montaño adds, “It was good to get every angle that we could, for all the sounds. We spent hours and hours trying to come up with these intangible pieces that only a handful of people have ever heard, and they’re in the movie.”

Helmet Comms
To recreate the comms sound of the transmissions back and forth between NASA and the astronauts, Montaño and Taylor took a practical approach. Instead of relying on plug-ins for futz and reverb, they built a 4-foot-by-3-foot isolated enclosure on wheels, deadened with acoustical foam and featuring custom-fit brackets inside to hold either a high-altitude helmet (to replicate dialogue for the X-15 and the Gemini missions) or a bubble helmet (for the Apollo missions).

Each helmet was recorded independently using its own two-way coaxial car speaker and a set of microphones strapped to mini tripods that were set inside each helmet in the enclosure. The dialogue was played through the speaker in the helmet and sent back to the console through the mics. Taylor says, “It would come back really close to being perfectly in sync. So I could do whatever balance was necessary and it wouldn’t flange or sound strange.”

By adjusting the amount of helmet feed in relation to the dry dialogue, Taylor was able to change the amount of “futz.” If a scene was sonically dense, or dialogue clarity wasn’t an issue (such as the tech talk exchanges between Houston and the astronauts), then Taylor could push the futz further. “We were constantly changing the balance depending on what the effects and music were doing. Sometimes we could really feel the helmet and other times we’d have to back off for clarity’s sake. But it was always used, just sometimes more than others.”
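In signal terms, the futz Taylor describes is a variable blend of the clean dialogue with the re-miked helmet return. Below is a minimal sketch of that balance, assuming time-aligned mono tracks and an equal-power crossfade — both assumptions ours, not a description of the actual console setup:

```python
import numpy as np

def futz_balance(dry: np.ndarray, helmet: np.ndarray, amount: float) -> np.ndarray:
    """Blend the clean dialogue with the re-miked helmet return.

    amount=0.0 -> all dry (maximum clarity)
    amount=1.0 -> all helmet (maximum futz)
    Assumes both tracks are mono and time-aligned, as the helmet feed
    reportedly came back close to perfectly in sync.
    """
    amount = float(np.clip(amount, 0.0, 1.0))
    # Equal-power crossfade keeps perceived loudness steady across the blend
    return np.cos(amount * np.pi / 2) * dry + np.sin(amount * np.pi / 2) * helmet

# e.g., push the futz harder during dense tech-talk exchanges:
# mixed = futz_balance(dry_dialogue, helmet_return, amount=0.8)
```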

Density and Dynamics
The challenge of the mix on First Man was to keep the track dynamic and not let the sound get too loud until it absolutely needed to be. This made the launches feel powerful and intense. “If everything were loud up to that point, it just wouldn’t have the same pop,” says Taylor. “The director wanted to make sure that when we hit those rockets they felt huge.”

One way to support the dynamics was deciding where the track could be appropriately less dense. For example, during the Gemini launch there are the sounds of the rocket’s different stages as it blasts off and breaks through the atmosphere, and there’s the sound of the space capsule rattling and metal groaning. On top of that, there’s Neil’s voice reading off various specs.

“When it comes to that kind of density sound-wise, you have to decide should we hear the actors? Are we with them? Do we have to understand what they are saying? In some cases, we just blew through that dialogue because ‘RCS Breakers’ doesn’t mean anything to anybody, but the intensity of the rocket does. We wanted to keep that energy alive, so we drove through the dialogue,” says Montaño. “You can feel that Neil’s calm, but you don’t need to understand what he’s saying. So that was a trick in the balance; deciding what should be heard and what we can gloss over.”

Another helpful factor was that the film’s score, by composer Justin Hurwitz, wasn’t bombastic. During the rocket launches, it wasn’t fighting for space in the mix. “The direction of the music is super supportive and it never had to play loud. It just sits in the pocket,” says Taylor. “The Gemini launch didn’t have music, which really allowed us to take advantage of the sonic structure that was built into the layers of sound effects and design for the takeoff.”

Without competition from the music and dialogue, the effects could really take the lead and tell the story of the Gemini launch. The camera stays close-up on Neil in the cockpit and doesn’t show an exterior perspective (as it does during the Apollo launch sequence). The audience’s understanding of what’s happening comes from the sound. You hear the “bbbbbwhoop” of the Titan II missile during ignition, and hear the liftoff of the rocket. You hear the point at which they go through maximum dynamic pressure, characterized by the metal rattling and groaning inside the capsule as it’s subjected to extreme buffeting and stress.

Next you hear the first stage cut-off and the initial boosters break away followed by the ignition of the second stage engine as it takes over. Then, finally, it’s just the calmness of space with a few small metal pings and groans as the capsule settles into orbit.

Even though it’s an intense sequence, all the details come through in the mix. “Once we got the final effects tracks, as usual, we started to add more layers and more detail work. That kind of shaping is normal. The Gemini launch builds to that moment when it comes to an abrupt stop sonically. We built it up layer-wise with more groan, more thrust, more explosive/low-end material to give it some rhythm and beats,” says Montaño.

Although the rocket sounds like it’s going to pieces, Neil doesn’t sound like he’s going to pieces. He remains buttoned-up and composed. “The great thing about that scene was hearing the contrast between this intense rocket and the calmness of Neil’s voice. The most important part of the dialogue there was that Neil sounded calm,” says Taylor.

Apollo
Visually, the Apollo launch was handled differently in the film. There are exterior perspectives, but even though the camera shows the launch from various distances, the sound maintains its perspective — close as hell. “We really filled the room up with it the whole time, so it always sounds large, even when we are seeing it from a distance. You really feel the weight and size of it,” says Montaño.

The rocket that launched the Apollo missions was the most powerful ever created: the Saturn V. Recreating that sound was a big job and came with a bit of added pressure from director Chazelle. “Damien [Chazelle] had spoken with one of the Armstrong sons, Mark, who said he’s never really felt or heard a Saturn V liftoff correctly in a film. So Damien threw it our way. He threw down the gauntlet and challenged us to make the Armstrong family happy,” says Montaño.

Field recordists John Fasal and Skip Longfellow were sent to record the launch of the world’s second largest rocket — SpaceX’s Falcon Heavy. They got as close as they could to the rocket, which generated 5.5 million pounds of thrust. They also recorded it at various distances farther away. This was the biggest component of their Apollo launch sound for the film. It’s also bolstered by recordings that Lee captured of various rocket liftoffs at Vandenberg Air Force Base in California.

But recreating the world’s most powerful rocket required some mega recordings that regular mics just couldn’t capture. So they headed over to the Acoustic Test Chamber at JPL in Pasadena, which is where NASA sonically bombards and acoustically excites hardware before it’s sent into space. “They simulate the conditions of liftoff to see if the hardware fails under that kind of sound pressure,” says Montaño. They do this by “forcing nitrogen gas through this six-inch hose that goes into a diaphragm that turns that gas into some sort of soundwave, like pink noise. There are four loudspeakers bolted to the walls of this hard-shelled room, and the speakers are probably about 4 feet by 4 feet. It goes up to 153dB in there; that’s max.” (Fun Fact: The sound team wasn’t able to physically be in the room to hear the sound since the gas would have killed them. They could only hear the sound via their recordings.)
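For a sense of scale, sound pressure level converts to pressure as p = 20 µPa × 10^(SPL/20), so the chamber’s stated 153dB maximum corresponds to a pressure swing of roughly 900 pascals. A quick check:

```python
p_ref = 20e-6                      # reference pressure: 20 micropascals
spl_max = 153.0                    # the chamber's stated maximum (dB SPL)
pressure = p_ref * 10 ** (spl_max / 20)
print(f"{pressure:.0f} Pa")        # ~894 Pa (ambient air sits at ~101,325 Pa)
```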

The low-end energy of that sound was a key element in their Apollo launch. So how do you capture the most low-end possible from a high-SPL source? Taylor had an interesting solution: using a 10-inch bass speaker as a microphone. “Years ago, while reading a music magazine, I discovered this method of recording low-end using a subwoofer or any bass speaker. If you have a 10-inch speaker as a mic, you’re going to be able to capture much more low-end. You may even be able to get as low as 7Hz,” Taylor says.

Montaño adds, “We were able to capture another octave lower than we’d normally get. The sounds we captured really shook the room, really got your chest cavity going.”
For the rocket sequences — the X-15 flight, the Gemini mission and the Apollo mission — their goal was to craft an experience the audience could feel. It was about energy and intensity, but also clarity.

Taylor concludes, “Damien’s big thing — which I love — is that he is not greedy when it comes to sound. Sometimes you get a movie where everything has to be big. Often, Damien’s notes were for things to be lower, to lower sounds that weren’t rocket affiliated. He was constantly making sure that we did what we could to get those rocket scenes to punch, so that you really felt it.”


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

Capturing realistic dialogue for The Front Runner

By Mel Lambert

Early on in his process, The Front Runner director Jason Reitman asked frequent collaborator and production sound mixer Steve Morrow, CAS, to join the production. “It was maybe inevitable that Jason would ask me to join the crew,” says Morrow, who has worked with the director on Labor Day, Up in the Air and Thank You for Smoking. “I have been part of Jason’s extended family for at least 10 years — having worked with his father Ivan Reitman on Draft Day — and know how he likes to work.”

Steve Morrow

This Sony Pictures film was co-written by Reitman, Matt Bai and Jay Carson, and based on Bai’s book, “All the Truth Is Out.” The Front Runner follows the rise and fall of Senator Gary Hart, set during his unsuccessful presidential campaign in 1988 when he was famously caught having an affair with the much younger Donna Rice. Despite capturing the imagination of young voters, and being considered the overwhelming front runner for the Democratic nomination, Hart’s campaign was sidelined by the affair.

It stars Hugh Jackman as Gary Hart, Vera Farmiga as his wife Lee, J.K. Simmons as campaign manager Bill Dixon and Alfred Molina as the Washington Post’s managing editor, Ben Bradlee.

“From the first read-through of the script, I knew that we would be faced with some production challenges,” recalls Morrow, a 20-year industry veteran. “There were a lot of ensemble scenes with the cast talking over one another, and I knew from previous experience that Jason doesn’t like to rely on ADR. Not only is he really concerned about the quality of the sound we secure from the set — and gives the actors space to prepare — but Jason’s scripts are always so well-written that they shouldn’t need replacement lines in post.”

Ear Candy Post’s Perry Robertson and Scott Sanders, MPSE, served as co-supervising sound editors on the project, which was re-recorded on Deluxe Stage 2 — the former Glen Glenn Sound facility — by Chris Jenkins handling dialogue and music and Jeremy Peirson, CAS, overseeing sound effects. Sebastian Sheehan Visconti was sound effects editor.

With as many as two dozen actors in a busy scene, Morrow soon realized that he would have to mic all of the key campaign team members. “I knew that we were shooting a political film like [Alan J. Pakula’s] All the President’s Men or [Michael Ritchie’s] The Candidate, so I referred back to the multichannel techniques pioneered by Jim Webb and his high-quality dialogue recordings. I elected to use up to 18 radio mics for those ensemble scenes,” including Reitman’s long opening sequence, in which the audience learns who the key participants are on the campaign trail, “while recording each actor on a separate track, together with a guide mono mix of the key participants for picture editor Stefan Grube.”

Reitman is well known for his films’ elaborate opening title sequences and often highly subjective narration from a main character. His motion pictures typically revolve around characters that are brashly self-confident, but then begin to rethink their lives and responsibilities. He is also reported to be a fan of ‘70s-style cinema verite, which uses a meandering camera and overlapping dialogue to draw the audience into an immersive reality. The Front Runner’s soundtrack is layered with dialogue, together with a constant hum of conversation — from the principals to the press and campaign staff. Since Bai and Carson have written political speeches, Reitman had them on set to ensure that conversations sounded authentic.

Even though there might be four or so key participants speaking in a scene, “Jason wants to capture all of the background dialogue between working press and campaign staff, for example,” Morrow continues.

“He briefed all of the other actors on what the scene was about so they could develop appropriate conversations and background dialogue while the camera roamed around the room. In other words, if somebody was on set they got a mic — one track per actor. In addition to capturing everything, Jason wanted me to have fun with the scene; he likes a solid mix for the crew, dailies and picture editorial, so I gave him the best I could get. And we always had the ability to modify it later in post production from the iso mic channels.”

Morrow recorded the pre-fader individual tracks 10dB to 15dB lower than the main mix, “which I rode hot, knowing that we could go back and correct it in post. Levels on that main mix were within ±5dB most of the time,” he says. Assisting Morrow during the 40-day shoot, which took place in and around Atlanta and Savannah, were Collin Heath and Craig Dollinger, who also served as the boom operator on a handful of scenes.

The mono production mix was also useful for the camera crew, says Morrow. “They sometimes had problems understanding the dramatic focus of a particular scene. In other words, ‘Where does my eye go?’ When I fed my mix to their headphones they came to understand which actors we were spotlighting from the script. This allowed them to follow that overview.”

Production Tools
Morrow used a Midas M32R digital console, which features 16 rear-panel inputs and 16 more inputs via a stage box that connects to the M32R over a Cat-5 cable. The console provided pre-fader and mixed outputs to Morrow’s pair of 64-track Sound Devices 970 hard-disk recorders — a main and a parallel backup — via Audinate Dante digital ports. “I also carried my second M32R mixer as a spare,” Morrow says. “I turned over the Compact Flash media at the end of each day’s shooting and retained the contents of the 970’s internal 1TB SSDs and external back-up drives until the end of post, just in case. We created maybe 30GB of data per recorder per day.”

Color coding helps Morrow mix dialogue more accurately.

For easy level checking, the two recorders with front-panel displays were mounted on Morrow’s production sound cart directly above his mixing console. “When I can, I color code the script to highlight the dialogue of key characters in specific scenes,” he says. “It helps me mix more accurately.”

RF transmitters comprised two dozen Lectrosonics SSM Micro belt-pack units — Morrow bought six or seven more for the film — linked to a bank of Lectrosonics Venue2 modular four-channel and three-channel VR receivers. “I used my collection of Sanken COS-11D miniature lavalier microphones for the belt packs. They are my go-to lavs with clean audio output and excellent performance. I also have some DPA lavaliers, if needed.”

With 20+ RF channels simultaneously in use within metropolitan centers, frequency coordination was an essential chore to ensure consistent operation for all radio systems. “The Lectrosonics Venue receivers can auto-assign radio-mic frequencies,” Morrow explains. “The best way to do this is to have everything turned off, and then one by one let the system scan the frequency spectrum. When it finds a good channel, you assign it to the first microphone and then repeat that process for the next radio transmitters. I try to keep up with FCC deliberations [on diminishing RF spectrum space], but realize that companies who manufacture this equipment also need to be more involved. So, together, I feel good that we’ll have the separation we all need for successful shoots.”
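The scan-and-assign routine Morrow describes is essentially a greedy search that must also dodge the third-order intermodulation products (2f1 − f2) thrown off by transmitters already on air. Here is a simplified Python sketch of that logic — the spacing and guard values are invented for illustration, not Lectrosonics’ actual algorithm:

```python
def third_order_intermods(assigned):
    """Third-order products (2*f1 - f2) of every transmitter pair."""
    return {2 * f1 - f2 for f1 in assigned for f2 in assigned if f1 != f2}

def assign_frequencies(candidates, needed, min_spacing=0.4, guard=0.1):
    """Greedily pick channels one by one, as a scanning receiver would.

    candidates  -- frequencies the scan found clean (MHz)
    needed      -- how many radio mics must be fitted
    min_spacing -- minimum separation between any two mics (MHz, assumed)
    guard       -- keep-out zone around intermod products (MHz, assumed)
    """
    assigned = []
    for f in sorted(candidates):
        if len(assigned) == needed:
            break
        if any(abs(f - a) < min_spacing for a in assigned):
            continue
        if any(abs(f - im) < guard for im in third_order_intermods(assigned)):
            continue
        assigned.append(f)
    return assigned

# e.g., fit 18 belt packs into whatever the scan reported as clean:
# channels = assign_frequencies(clean_scan_results, needed=18)
```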

Morrow’s setup.

Morrow also made several location recordings on set. “I mounted a couple of lavaliers on bumpers to capture car-bys and other sounds for supervising sound editor Perry Robertson, as well as backgrounds in the house during a Labor Day gathering. We also recorded Vera Farmiga playing the piano during one scene — she is actually a classically-trained pianist — using a DPA Model 5099 microphone (which I also used while working on A Star is Born). But we didn’t record much room tone, because we didn’t find it necessary.”

During scenes at a campaign rally, Morrow provided a small PA system comprising a couple of loudspeakers mounted on a balcony and a vocal microphone on the podium. “We ran the system at medium levels, simply to capture the reverb and ambiance of the auditorium,” he explains, “but not so much that it caused problems in post production.”

Summarizing his experience on The Front Runner, Morrow offers that Reitman, and his production partner Helen Estabrook, bring a team spirit to their films. “The set is a highly collaborative environment. We all hang out with one another and share birthdays together. In my experience, Jason’s films are always from the heart. We love working with him 120%. The low point of the shoot is going home!”


Mel Lambert has been involved with production and post on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.


Sound Lounge Film+Television adds Atmos mixing, Evan Benjamin

Sound Lounge’s Film + Television division, which provides sound editorial, ADR and mixing services for episodic television, features and documentaries, is upgrading its main mix stage to support editing and mixing in the Dolby Atmos format.

Sound Lounge Film + Television division EP Rob Browning says that the studio expects to begin mixing in Dolby Atmos by the beginning of next year, which will allow it to target more high-end studio features. Sound Lounge is also installing a Dolby Atmos Mastering Suite, a custom hardware/software solution for preparing Dolby Atmos content for Blu-ray and streaming release.

It has also added veteran supervising sound editor, designer and re-recording mixer Evan Benjamin to its team. Benjamin is best known for his work in documentaries, including the feature doc RBG, about Supreme Court Justice Ruth Bader Ginsburg, as well as documentary series for Netflix, Paramount Network, HBO and PBS.

Benjamin is a 20-year industry veteran with credits on more than 130 film, television and documentary projects, including Paramount Network’s Rest in Power: The Trayvon Martin Story and HBO’s Baltimore Rising. Additionally, his credits include Time: The Kalief Browder Story, Welcome to Leith, Joseph Pulitzer: Man of the People and Moynihan.


The Girl in the Spider’s Web: immersive audio and picture editing

By Mel Lambert

Key members of the post crew responsible for the fast-paced look and feel of director Fede Alvarez’s new film, The Girl in the Spider’s Web, came to the project via a series of right time/right place situations. First, co-supervising sound editor Julian Slater (who played a big role in Baby Driver’s audio post) met picture editor Tatiana Riegel at last year’s ACE Awards.

During early 2018, Slater was approached to work on the latest adaptation of the crime novels by the Swedish author Stieg Larsson. Alvarez was impressed with Slater’s contribution to both Baby Driver and the Oscar-winning Mad Max: Fury Road (2015). “Fede told me that he uses the soundtrack to Mad Max to show off his home Atmos playback system,” says Slater, who served as sound designer on that film. “I was happy to learn that Tatiana had also been tagged to work on The Girl in the Spider’s Web.”

Back row (L-R): Micah Loken, Sang Kim, Mandell Winter, Dan Boccoli, Tatiana Riegel, Kevin O’Connell, Fede Alvarez, Julian Slater, Hamilton Sterling, Kyle Arzt, Del Spiva and Maarten Hofmeijer. Front row (L-R): Pablo Prietto, Lola Gutierrez, Mathew McGivney and Ben Sherman.

Slater, who would also be working on the crime drama Bad Times at the El Royale for director Drew Goddard, wanted Mandell Winter as his co-supervising sound editor. “I very much liked his work on The Equalizer 2, Death Wish and The Magnificent Seven, and I knew that we could co-supervise well together. I came on full time after completing El Royale.”

Editor Riegel (Gringo, I, Tonya, Million Dollar Arm, Bad Words) was a fan of the original Stieg Larsson Millennium series films — The Girl With the Dragon Tattoo, The Girl Who Kicked the Hornet’s Nest and The Girl Who Played With Fire — as well as David Fincher’s 2011 remake of The Girl With the Dragon Tattoo. She was already a fan of Alvarez, admiring his previous suspense film, Don’t Breathe, and told him she enjoyed working on different types of films to avoid being typecast. “We hit it off immediately,” says Riegel, who then got together with Slater and Winter to discuss specifics.

The latest outing in the Stieg Larsson franchise, The Girl in the Spider’s Web: A New Dragon Tattoo Story, stars English actress Claire Foy (The Crown) in the eponymous role of young computer hacker Lisbeth Salander who, along with journalist Mikael Blomkvist, gets caught up in a web of spies, cybercriminals and corrupt government officials. The screenplay was co-written by Jay Basu and Alvarez from the novel by David Lagercrantz. The cast also includes Sylvia Hoeks, Stephen Merchant and Lakeith Stanfield.

Having worked previously with Niels Arden Oplev, the Danish director of 2009’s The Girl With the Dragon Tattoo, Winter knew the franchise and was interested in working on the newest offering. He was also excited about working with director Fede Alvarez. “I loved the use of color and lighting choices that Fede selected for Don’t Breathe, so when Julian Slater called, I jumped at the opportunity. None of us had worked together before, and it was Fede’s first large-budget film; he had previously specialized in independent offerings. I was eager to help shepherd the film’s immersive soundtrack through the intricate process from location to the dub stage.”

From the very outset, Slater argued for a native Dolby Atmos soundtrack, with a 7.1-channel Avid Pro Tools bed that evolved through editorial, with appropriate objects being assigned during re-recording to surround and overhead locations. “We knew that the film would be very atmospheric,” Slater recalls, “so we decided to use spaces and ambiences to develop a moody, noir thriller.”

The film was dubbed on the William Holden Stage at Sony Pictures Studios, with Kevin O’Connell handling dialog and music, and Slater overseeing sound effects elements.

Cutting Picture on Location
Editor Riegel and two assistants joined the project at its Berlin location last January. “It was a 10-month journey until final print mastering in mid-October,” she says. “We knew CGI elements would be added later. Fede didn’t do any previz, instead focusing on VFX during post production. We set up Avid Media Composers and assemble-edited the dailies as we went,” working against early storyboards. “Fede wanted to play up the film’s rogue theme; he had a very, very clear focus of the film as spectacle. He wanted us to stay true to the Lisbeth Salander character from the original films, yet retain that dark, Scandinavian feel from the previous outings. The film is a fun ride!”

The team returned to Los Angeles in April and turned the VFX over to Pixomondo, which was brought on to handle the greenscreen CGI sequences. “We adjourned to Pivotal Post in Burbank for the Director’s Cut and then to the Sony lot in Culver City for the first temp mix,” explains Riegel. “My editing decisions were based on the innate DNA of the shot material, and honoring the script. I asked Fede a lot of questions to ensure that the story and the pacing were crystal clear. Our first assembly was around two hours and 15 minutes, which we trimmed to just under two hours during a series of refinements. We then removed 15 minutes to reach our final 1:45 running time, which worked for all of us. The cut was better without the dropped section.”

Daniel Boccoli served as first assistant picture editor, Patrick Clancey was post finishing editor, Matthew McGivney was VFX editor and Andrew McGivney was VFX assistant editor.

Because Riegel likes to cut against an evolving soundtrack, she developed a temporary dialog track in her Avid workstation, adding sound effects taken from commercial libraries. “But there is a complex fight and chase sequence in the middle of the film that I turned over to Mandell and Julian early on so I could secure realistic effects elements to help inform the cut,” she explains. “Those early tracks were wonderful and gave me a better idea of what the final film would sound like. That way I can get to know the film better — I can also open up the cut to make space for a sound if it works within the film’s creative arcs.”

“Our overall direction from Fede Alvarez was to make the soundtrack feel cold when we were outside and to grab the audience with the action… while focusing on the story,” Winter explains. “We were also working against a very tight schedule and had little time for distractions. After the first temp, Julian and I got notes from Fede and Tatiana and set off using that feedback, which continued through three more temp mixes.”

Having completed supervising duties on The Equalizer 2, Winter came aboard full time in mid-June, with temp mixes running through the beginning of September. “We were finaling by the last week of September, ahead of the film’s world premiere on October 19 at the International Rome Film Festival.”

Since there was no spotting session, the team was working against a tight post schedule from day one, according to Slater. “There were a number of high-action scenes that needed intricate sound design, including the eight-minute sequence that begins with explosions in Lisbeth Salander’s apartment and the subsequent high-speed motorbike chase.”

Sound designer Hamilton Sterling crafted major sections of the film’s key fight and chase sequences.

Intricate Sound Design
“We liked Hamilton’s outstanding work on Independence Day: Resurgence and Logan and relied upon him to develop truly unique sounds for the industrial heating towers, motorbikes and fights,” says Winter. “Sound effects editor Ryan Collins cut the gas mask fight sequence, as well as a couple of reels, while Karen Vassar Triest handled another couple of reels, and David Esparza worked on several of the early sequences.”

Other sound effects editors included Ando Johnson and Robert Stambler, together with dialog editor Micah Loken and supervising Foley editor Sang Jun Kim.

Sterling is particularly proud of several sequences he designed for the film. “During a scene in which the lead character Lisbeth Salander is drugged, I used the Whoosh plug-in [from the German company, Tonsturm] inside Native Instruments’ Reaktor [modular music software] to create a variable, live-performable heartbeat. I used muffled explosion samples that were Doppler-shifted at different speeds against the picture to mimic the pulse-changing effects of various drugs. I also used Whoosh to create different turbo sounds for the Ducati motorcycle driven by Lisbeth, together with air-release sounds. They were subtle effects, because we didn’t want the result to sound like a ‘sci-fi bike’ — just a souped-up twin-cylinder Ducati.”
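A crude stand-in for that Whoosh/Reaktor patch is tape-style varispeed: resample a muffled sample at different speeds so its pitch shifts with the pulse rate. The Python sketch below is our approximation of the idea, not Sterling’s actual setup; the pulse rates and curve are invented:

```python
import numpy as np
from scipy.signal import resample

SR = 48000

def varispeed(sample, speed):
    """Tape-style varispeed: faster playback raises pitch, like a Doppler shift."""
    n_out = max(1, int(len(sample) / speed))
    return resample(sample, n_out)

def heartbeat(thump, bpm_curve, sr=SR):
    """Lay down one re-pitched thump per beat, following a changing heart rate."""
    out = []
    for bpm in bpm_curve:
        period = int(sr * 60.0 / bpm)              # samples per beat at this rate
        beat = varispeed(thump, speed=bpm / 60.0)  # faster pulse -> higher pitch
        seg = np.zeros(max(period, len(beat)))
        seg[: len(beat)] = beat
        out.append(seg)
    return np.concatenate(out)

# e.g., drift from a resting 60 BPM up to a panicked 140 BPM:
# track = heartbeat(muffled_explosion, np.linspace(60, 140, 40))
```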

For the car chases, Sterling used whale-spout blasts to mimic the sound of a car driving through deep puddles with water striking the inside of the wheel wells. For frightening laughs in another sequence, the sound designer turned to Tonsturm’s Doppler program, which he used in an unorthodox way. “The program can be set to break up a sound sample using, for example, a 5.1-channel star pattern with small Doppler shifts to produce very disturbing laughter,” he says. “For the heating towers I used several sound components, including slowed-down toaster noises to add depth and resonance — a hum from the heating elements, plus ticks and clangs as they warmed up. Julian suggested that we use ‘chittery’ effects for the computer user interfaces, so I used The Cargo Cult’s Envy plug-in to create unusual sounds, and to avoid the conventional ‘bips’ and ‘boops.’ Envy is a spectral-shift, pitch- and amplitude-change application that is very pitch manipulatable. I also turned to the Sound Particles app to generate complex wind sounds that I delivered as immersive 7.1.2 Pro Tools tracks.”

“We also had a lot of Foley, which was recorded on Stage B at Sony Studios by Nerses Gezalyan with Foley artists Sara Monat and Robin Harlen,” Winter adds. “Unfortunately, the production dialog had a number of compromised tracks from the Berlin locations. As a result, we had a lot of ADR to shoot. Scheduling the ADR was complicated by the time difference, as most of our actors were in London, Berlin, Oslo or Stockholm. We used Foley to support the cleaned-up dialog tracks and backfilled tracks. Our dialog editor, Micah Loken, was very knowledgeable with iZotope RX 7 Advanced software — he really understood how to use it, and how not to use it. He can dig deep into a track without affecting the quality of the voice, and without overdoing the processing.”

The music from composer Roque Baños — who also worked with Alvarez on Don’t Breathe and Evil Dead — arrived very late in the project, “and remained something of a mystery,” Riegel recalls. “Being a musician himself, Fede knew what he wanted and how to achieve that result. He would disappear into an edit suite close to the stage with the music editors Maarten Hofmeijer and Del Spiva, where they cut together the score against the locked picture — or as locked as it ever was! After that we could balance the music against the dialog and sound effects.”

Regarding sound effects elements, Winter acknowledges that his small editorial team needed to work against a tight schedule. “We had a 7.1.2 template that allowed Tony [Lamberti] and later Julian to use the automated panning data. For the final mix in Atmos, we used objects minimally for the music and dialog. However, we used overhead objects strategically for effects and design. In an early sequence we put the sound of the rope — used to suspend an abusive husband — above the audience.” Re-recording mixer Tony Lamberti handled some of the early temp mixes in Slater’s absence.

Collaborative Re-Recording Process
When the project reached the William Holden Stage, “we could see the overall shape of the film with the VFX elements and decide what sounds would now be needed to match the visuals, since we had a lot of new technology to cover, including computer screens,” Riegel says.

Winter agrees: “Yes, we could now see where Fede Alvarez wanted to take the film and make suggestions about new material. We started asking: ‘What do you think about this and that option?’ Or, ‘What’s missing?’ It was an ongoing series of conversations through the temp mixes, re-mixes and then the final.”

Having handled the first temp mix at Sony Studios, Slater returned full-time for the final Atmos mixes. “After so many temp mixes using the same templates, I knew that we would not be re-inventing the wheel on the William Holden Stage. We simply focused on changing the spatiality of what we had. Having worked with Kevin O’Connell on both Jumanji: Welcome to the Jungle and The Public, I knew that I had to do my homework and deliver what he needed from my side of the console. Kevin is very involved. He’ll make suggestions, but always based on what is best for the film. I learned a lot by seeing how he works; he is very experienced. It’s easy to find what works with Kevin, since he has experience with a wide range of technologies and keeps up with new advances.”

Describing the re-recording process as highly collaborative, Winter remained objective about creative options. “You can get too close to the soundtrack. With a number of German and English actors, we constantly had to ask ourselves: ‘Do we have clarity?’ If not, can we fix it in the track or turn to ADR? We maintained a continuing conversation with Tatiana and Fede, with ideas that we would circulate backwards and forwards. Since we had a lot of new people working on the crew, trust became a major factor. Everybody was incredibly professional.”

“It was a very rewarding experience working with so many talented new people,” Slater concludes. “I quickly tuned into Fede Alvarez’s specific needs and sensibilities. It was a successful liaison.”

Riegel says that her biggest challenge was “trying to figure out what the film is supposed to be — from the script and pre-production through the shoot and first assembly. It’s a gradual process and one that involves regular conversations with my assistant editors and the director as we develop characters and clarify the information being shown. But I didn’t want to hit the audience over the head with too much information. We needed to decide: ‘What is important?’ and retain as much realism as possible. It’s a complex, creative process … and one that I totally love being a part of!”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.


A Star is Born: Live vocals, real crowds and venues

By Jennifer Walden

Warner Bros. Pictures’ remake of A Star is Born stars Bradley Cooper as Jackson Maine, a famous musician with a serious drinking habit who stumbles onto singer/songwriter Ally (Lady Gaga) at a drag bar where she’s giving a performance. Jackson is taken by her raw talent, and their chance meeting turns into something more. With Jackson’s help, Ally becomes a star, but her fame is ultimately bittersweet.

Jason Ruder

Aside from Lady Gaga and Bradley Cooper (who also directed and co-wrote the screenplay), the other big star of this film is the music. Songwriting started over two years ago. Cooper and Gaga collaborated with several other songwriters along the way, like Lukas Nelson (son of Willie Nelson), Mark Ronson, Hillary Lindsey and DJ White Shadow.

According to supervising music editor/re-recording mixer Jason Ruder of 2 Pop Music — who was involved with the film from pre-production through post — the lyrics, tempo and key signatures were changing right up to the day of the shoot. “The songwriting went to the 11th hour. Gaga sort of works in that fashion,” says Ruder, who witnessed her process first-hand during a sound check at Coachella. (2 Pop Music is located on the Warner Bros. lot in Burbank.)

Before each shoot, Ruder would split out the pre-recorded instrumental tracks and reference vocals and have them ready for playback, but there were days when he would get a call from Gaga’s manager as he was driving to the set, saying that she had gone into the studio in the middle of the night and made changes — meaning all-new pre-records for the day. A bit of a perfectionist, she was always trying to make it better.

“On the final number, for instance, it was only a couple hours before the shoot and I got a message from her saying that the song wasn’t final yet and that she wanted to try it in three different keys and three different tempos just to make sure,” shares Ruder. “So there were a lot of moving parts going into each day. Everyone that she works with has to be able to adapt very quickly.”

Since the music is so important to the story, here’s what Cooper and Gaga didn’t want: for the singing to start and the music to suddenly switch over to a slick, studio-produced track. That concern was the driving force behind the production and post teams’ approach to the on-camera performances.

Recording Live Vocals
All the vocals in A Star is Born were recorded live on-set, for every performance, and those live vocals are the ones used in the film’s final mix. To pull this off, Ruder and the production sound team did a stage test at Warner Bros. to see if it was possible. They had a pre-recorded track of the band, which they played back on the stage. First, Cooper and Gaga did live vocals. Then they tried the song again, with Cooper and Gaga miming along to pre-recorded vocals. Ruder took the material back to his cutting room and built a quick version of both. The comparison solidified their decision. “Once we got through that test, everyone was more confident about doing the live vocals. We felt good about it,” he says.

Their first shoot for the film was at Coachella, on a weekday since there were no performances. They were shooting a big, important concert scene for the film and only had one day to get it done. “We knew that it all had to go right,” says Ruder. It was their first shot at live vocals on-set.

Neither the music nor the vocals were amplified through the stage’s speaker system since song security was a concern — they didn’t want the songs leaked before the film’s release. So everything was done through headphone mixes. This way, even those in the crowd closest to the stage couldn’t hear the melodies or lyrics. Gaga is a seasoned concert performer, comfortable with performing at concert volume. She wasn’t used to having the band muted and the vocals live (though not amplified), so some adjustments needed to be made. “We ended up bringing her in-ear monitor mixer in to help consult,” explains Ruder. “We had to bring some of her touring people into our world to help get her perfectly comfortable so she could focus on acting and singing. It worked really well, especially later for Arizona Sky, where she had to play the piano and sing. Getting the right balance in her ear was important.”

As for Jackson Maine’s band on-screen, those were all real musicians and not actors — it was Lukas Nelson’s band. “They’re used to touring together. They’re very tight and they’re seasoned musicians,” says Ruder. “Everyone was playing and we were recording their direct feeds. So we had all the material that the musicians were playing. For the drums, those had to be muted because we didn’t want them bleeding into the live vocals. We were on-set making sure we were getting clean vocals on every take.”

Real Venues, Real Reverbs
Since the goal from the beginning was to create realistic-sounding concerts, Ruder decided to capture impulse responses at every performance location — from big stages like Coachella to much smaller venues — and use those to create reverbs in Audio Ease’s Altiverb.

The challenge wasn’t capturing the IRs, but rather, trying to convince the assistant director on-set that they needed to be captured. “We needed to quiet the whole set for five or 10 minutes so we could put up some mics and shoot these tones through the spaces. This all had to be done on the production clock, and they’re just not used to that. They didn’t understand what it was for and why it was important — it’s not cheap to do that during production,” explains Ruder.

Those IRs were like gold during post. They allowed the team to recreate spaces like the main stage at Coachella, the Greek Theatre and the Shrine Auditorium. “We were able to manufacture our own reverbs that were pretty much exactly what you would hear if you were standing there. For Coachella, because it’s so massive, we weren’t sure if they were going to come out, but it worked. All the reverbs you hear in the film are completely authentic to the space.”
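At its core, what a convolution reverb like Altiverb does with a captured IR is exactly that: the dry signal convolved with the venue’s impulse response takes on that space’s reflections and decay. Here is a bare-bones Python illustration of the principle — file names are placeholders, mono files are assumed, and this is not Altiverb’s actual implementation:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Dry on-set vocal plus the IR captured at the venue (placeholder file names)
sr, dry = wavfile.read("live_vocal_dry.wav")
sr_ir, ir = wavfile.read("coachella_main_stage_ir.wav")
assert sr == sr_ir, "resample one file so the rates match"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

# Convolving the dry signal with the IR stamps the venue's reflections
# and decay onto it -- the core operation behind convolution reverb
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

# The wet/dry balance sets how far back in the space the voice sits
mix = 0.35
dry_padded = np.pad(dry / np.max(np.abs(dry)), (0, len(wet) - len(dry)))
out = (1 - mix) * dry_padded + mix * wet
wavfile.write("live_vocal_in_venue.wav", sr, out.astype(np.float32))
```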

Live Crowds
Oscar-winning supervising sound editor Alan Murray at Warner Bros. Sound was also capturing sound at the concert performances, but his attention was away from the stage and into the crowd. “We had about 300 to 500 people at the concerts, and I was able to get clean reactions from them since I wasn’t picking up any music. So that approach of not amplifying the music worked for the crowd sounds too,” he says.

Production sound mixer Steven Morrow had set up mics in and around the crowd and recorded those to a multitrack recorder while Murray had his own mic and recorder that he could walk around with, even capturing the crowds from backstage. They did multiple recordings for the crowds and then layered those in Avid Pro Tools in post.

Alan Murray

“For Coachella and Glastonbury, we ended up enhancing those with stadium crowds just to get the appropriate size and excitement we needed,” explains Murray. They also got crowd recordings from one of Gaga’s concerts. “There was a point in the Arizona Sky scene where we needed the crowd to yell, ‘Ally!’ Gaga was performing at Fenway Park in Boston and so Bradley’s assistant called there and asked Gaga’s people to have the crowd do an ‘Ally’ chant for us.”

Ruder adds, “That’s not something you can get on an ADR stage. It needed to have that stadium feel to it. So we were lucky to get that from Boston that night and we were able to incorporate it into the mix.”

Building Blocks
According to Ruder, they wanted to make sure the right building blocks were in place when they went into post. Those blocks — the custom recorded impulse responses, the custom crowds, the live vocals, the band’s on-set performances, and the band’s unprocessed studio tracks that were recorded at The Village — gave Ruder and the re-recording mixers ultimate flexibility during the edit and mix to craft on-scene performances that felt like big, live concerts or intimate songwriting sessions.

Even with all those bases covered, Ruder was still worried about it working. “I’ve seen it go wrong before. You get tracks that just aren’t usable, vocals that are distorted or noisy. Or you get shots that don’t work with the music. There were those guitar playing shots…”

A few weeks after filming, while Ruder was piecing all the music together in post, he realized that they got it all. “Fortunately, it all worked. We had a great DP on the film and it was clear that he was capturing the right shots. Once we got to that point in post, once we knew we had the right pieces, it was a huge relief.”

Relief gave way to excitement when Ruder reached the dub stage — Warner Bros. Stage 10. “It was amazing to walk into the final mix knowing that we had the material and the flexibility to pull this off,” he says.

In addition to using Altiverb for the reverbs, Ruder used Waves plug-ins, such as the Waves API Collection, to give the vocals and instrumental tracks a live concert sound. “I tend to use plug-ins that emulate more of a tube sound to get punchier drums and that sort of thing. We used different 5.1 spreaders to put the music in a 5.1 environment. We changed the sound to match the picture, so we dried up the vocals on close-ups so they felt more intimate. We had tons and tons of flexibility because we had clean vocals and raw guitars and drum tracks.”

All the hard work paid off. In the film, Ally joins Jackson Maine on stage to sing a song she wrote called “Shallow.” For Murray and Ruder, this scene portrays everything they wanted to achieve for the performances in A Star is Born. The scene begins outside the concert, as Ally and her friend get out of the car and head toward the stage. The distant crowd and music reverberate through the stairwell as they’re led up to the backstage area. As they get closer, the sound subtly changes to match their proximity to the band. On stage, the music and crowd are deafening. Jackson begins to play guitar and sing solo before Ally finds the courage to join in. They sing “Shallow” together and the crowd goes crazy.

“The whole sequence was timed out perfectly, and the emotion we got out of them was great. The mix there was great. You felt like you were there with them. From a mix perspective, that was probably the most successful moment in the film,” concludes Ruder.


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney


Quick Chat: Westwind Media president Doug Kent

By Dayna McCallum

Doug Kent has joined Westwind Media as president. The move is a homecoming of sorts for the audio post vet, who worked as a sound editor and supervisor at the facility when it opened its doors in 1997 (with Miles O’ Fun). He comes to Westwind after a long-tenured position at Technicolor.

While primarily known as an audio post facility, Burbank-based Westwind has grown into a three-acre campus comprising 10 buildings, which also house outposts for NBCUniversal and Technicolor, as well as media-focused companies Keywords Headquarters and Film Solutions.

We reached out to Kent to find out a little bit more about what is happening over at Westwind, why he made the move and changes he has seen in the industry.

Why was now the right time to make this change, especially after being at one place for so long?
Well, 17 years is a really long time to stay at one place in this day and age! I worked with an amazing team, but Westwind presented a very unique opportunity for me. John Bidasio (managing partner) and Sunder Ramani (president of Westwind Properties) approached me with the role of heading up Westwind and teaming with them in shaping the growth of their media campus. It was literally an offer I couldn’t refuse. Because of the campus size and versatility of the buildings, I have always considered Westwind to have amazing potential to be one of the premier post production boutique destinations in the LA area. I’m very excited to be part of that growth.

You’ve worked at studios and facilities of all sizes in your career. What do you see as the benefit of a boutique facility like Westwind?
After 30 years in the post audio business — which seems crazy to say out loud — moving to a boutique facility allows me more flexibility. It also lets me be personally involved with the delivery of all work to our customers. Because of our relationships with other facilities, we are able to offer services to our customers all over the Los Angeles area. It’s all about drive time on Waze!

What does your new position at Westwind involve?
The size of our business allows me to actively participate in every service we offer, from business development to capital expenditures, while also working with our management team’s growth strategy for the campus. Our value proposition, as a nimble post audio provider, focuses on our high-quality brick-and-mortar facility, while we continue to expand our editorial and mix talent, working with many of the best mix facilities and sound designers in the LA area. Luckily, I now get to have a hand in all of it.

Westwind recently renovated two stages. Did Dolby Atmos certification drive that decision?
Netflix, Apple and Amazon all use Atmos materials for their original programming. It was time to move forward. These immersive technologies have changed the way filmmakers shape the overall experience for the consumer. These new object-based technologies enhance our ability to embellish and manipulate the soundscape of each production, creating a visceral experience for the audience that is more exciting and dynamic.

How to Get Away With Murder

Can you talk specifically about the gear you are using on the stages?
Currently, Westwind runs entirely on a Dante network design. We have four dub stages, including both of the Atmos stages, outfitted with Dante interfaces. The signal path from our Avid Pro Tools source machines — all the way to the speakers — is entirely in Dante and the BSS BLU link network. The monitor switching and stage are controlled through custom-made panels designed in Harman’s Audio Architect. The Dante network allows us to route signals with complete flexibility across our network.

What about some of the projects you are currently working on?
We provide post sound services to the team at ShondaLand for all their productions, including Grey’s Anatomy, which is now in its 15th year, Station 19, How to Get Away With Murder and For the People. We are also involved in the streaming content market, working on titles for Amazon, YouTube Red and Netflix.

Looking forward, what changes in technology and the industry do you see having the most impact on audio post?
The role of post production sound has greatly increased as technology has advanced. We have become an active part of the filmmaking process and have developed closer partnerships with the executive producers, showrunners and creative executives. Delivering great soundscapes to these filmmakers has become more critical as technology advances and audiences become more sophisticated.

The Atmos system creates an immersive audio experience for the listener and has become a foundation for future technology. The Atmos master contains all of the uncompressed audio and panning metadata, and can be updated by re-encoding whenever a new process is released. With streaming speeds becoming faster and storage becoming more easily available, home viewers will most likely soon be experiencing Atmos technology in their living room.
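A rough sketch of the object-based idea behind that master, in Python; the structure and field names are illustrative assumptions, not Dolby's actual file format.

    from dataclasses import dataclass, field

    @dataclass
    class AudioObject:
        name: str
        samples: list                                  # uncompressed PCM audio
        positions: list = field(default_factory=list)  # (time, x, y, z) pan keyframes

    @dataclass
    class ImmersiveMaster:
        beds: dict     # fixed channel beds, e.g. {"L": [...], "R": [...], "C": [...]}
        objects: list  # AudioObject instances with time-varying positions

    # Because panning lives as metadata rather than baked-in channels, the
    # same master can be re-rendered for a theater, a 7.1.4 home system or
    # a soundbar, and re-encoded when a new delivery process is released.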

What haven’t I asked that is important?
Relationships are the most important part of any business and my favorite part of being in post production sound. I truly value my connections and deep friendships with film executives and studio owners all over the Los Angeles area, not to mention the incredible artists I’ve had the great pleasure of working with and claiming as friends. The technology is amazing, but the people are what make being in this business fulfilling and engaging.

We are in a remarkable time in film, but really an amazing time in what we still call “television.” There is growth and expansion and foundational change in every aspect of this industry. Being at Westwind gives me the flexibility and opportunity to be part of that change and to keep growing.


Report: Sound for Film & TV conference focuses on collaboration

By Mel Lambert

The 5th annual Sound for Film & TV conference was once again held at Sony Pictures Studios in Culver City, in cooperation with Motion Picture Sound Editors, Cinema Audio Society and Mix magazine. The one-day event featured a keynote address from veteran sound designer Scott Gershin, together with a broad cross section of panel discussions on virtually all aspects of contemporary sound and post production, and attracted some 650 attendees. Co-sponsors included Audionamix, Sound Particles, Tonsturm, Avid, Yamaha-Steinberg, iZotope, Meyer Sound, Dolby Labs, RSPE, Formosa Group and Westlake Audio.

With film credits that include Pacific Rim and The Book of Life, keynote speaker Gershin focused on advances in immersive sound and virtual reality experiences. Having recently joined the Sound Lab at Keywords Studios, the sound designer and supervisor emphasized that "a single sound can set a scene," ranging from a subtle footstep to an echo-laden yell of terror. "I like to use audio to create a foreign landscape, and produce immersive experiences," he said, stressing that "dialog forms the center of attention, with music that shapes a scene emotionally and sound effects that glue the viewer into the scene." He concluded: "It is our role to develop a credible world with sound."

The Sound of Streaming Content — The Cloverfield Paradox
Avid-sponsored panels within the Cary Grant Theater included an overview of OTT techniques titled "The Sound of Streaming Content," moderated by Ozzie Sutherland, a production sound technology specialist with Netflix. Focusing on the sound design and re-recording of the recent Netflix/Paramount Pictures sci-fi mystery The Cloverfield Paradox from director Julius Onah, the panel included supervising sound editor/re-recording mixer Will Files, co-supervising sound editor/sound designer Robert Stambler and supervising dialog editor/re-recording mixer Lindsey Alvarez. Files and Stambler have collaborated on several projects with director J.J. Abrams through Abrams' Bad Robot production company, including Star Trek Into Darkness (2013), Star Wars: The Force Awakens (2015) and 10 Cloverfield Lane (2016), as well as Venom (2018).

The Sound of Streaming Content panel: (L-R) Ozzie Sutherland, Will Files, Robert Stambler and Lindsey Alvarez

"Our biggest challenge," Files acknowledged, "was the small crew we had on the project; initially, it was just Robby [Stambler] and me for six months. Then Star Wars: The Force Awakens came along, and we got busy!" "Yes," confirmed Stambler, "we spent between 16 and 18 months on post production for The Cloverfield Paradox, which gave us plenty of time to think about sound; it was an enlightening experience, since everything happens off-screen." The film, starring Gugu Mbatha-Raw, David Oyelowo and Daniel Brühl, follows a team of scientists orbiting a planet on the brink of war as they try to solve an energy crisis, an experiment that culminates in a dark alternate reality.

Having screened a pivotal scene from the film in which the spaceship's crew discovers the effects of interdimensional travel while hearing strange sounds in a corridor, Alvarez explained how the complex dialog elements came into play: "That 'Woman in the Wall' scene involved a lot of Mandarin-language lines, 50% of which were rewritten to modify the storylines and then added in ADR." "We also used deep, layered sounds to emphasize the screams," Stambler said, referring to the cries of an astronaut from another dimension who had become fused with the ship's hull. He continued: "We wanted to emphasize the mystery as the crew removes a cover panel: What is behind the wall? Is there really a woman behind the wall?" "We also designed happy parts of the ship and angry parts," Files added. "Depending on where we were on the ship, we emphasized that dominant flavor."

Files explained that the theatrical mix for The Cloverfield Paradox in Dolby Atmos immersive surround took place at producer Abrams' Bad Robot screening theater, with a temporary Avid S6 M40 console. Files also mixed the first Atmos film, Brave, back in 2012. "J.J. [Abrams] was busy at the time," Files said, "but wanted to be around and involved" as the soundtrack took shape. "We also had a sound-editorial suite close by," Stambler noted. "We used several futz elements from the Mission Control scenes as Atmos objects," added Alvarez.

"But then we received a request from Netflix for a near-field Atmos mix" that could be used for over-the-top streaming, Files recalled. "So we lowered the overall speaker levels and monitored on smaller speakers to ensure that we could hear the dialog elements clearly. Our Atmos balance also translated seamlessly to 5.1- and 7.1-channel delivery formats."
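For illustration, here is a minimal Python/NumPy sketch of a conventional 7.1-to-5.1 fold-down of the kind Files describes. The channel order and the -3 dB (0.707) gain on the summed surrounds are common conventions, assumed here, and not necessarily what was used on this film.

    import numpy as np

    def fold_71_to_51(x: np.ndarray, gain: float = 0.707) -> np.ndarray:
        """x: (samples, 8) in L, R, C, LFE, Lss, Rss, Lrs, Rrs order -> (samples, 6) 5.1."""
        L, R, C, LFE, Lss, Rss, Lrs, Rrs = x.T
        Ls = gain * (Lss + Lrs)  # side and rear surrounds fold into the left surround
        Rs = gain * (Rss + Rrs)
        return np.stack([L, R, C, LFE, Ls, Rs], axis=1)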

“I like mixing in Native Atmos because you can make final decisions with creative talent in the room,” Files concluded. “You then know that everything will work in 5.1 and 7.1. If you upmix to Atmos from 7.1, for example, the creatives have often left by the time you get to the Atmos mix.”

The Sound and Music of Director Damien Chazelle’s First Man
The series of "Composers Lounge" presentations held in the Anthony Quinn Theater, sponsored by SoundWorks Collection and moderated by Glenn Kiser from The Dolby Institute, included "The Sound and Music of First Man" with sound designer/supervising sound editor/SFX re-recording mixer Ai-Ling Lee, supervising sound editor Mildred Iatrou Morgan, SFX re-recording mixer Frank Montaño, dialog/music re-recording mixer Jon Taylor, composer Justin Hurwitz and picture editor Tom Cross. First Man takes a close look at the life of astronaut Neil Armstrong and the space mission that led him to become the first man to walk on the Moon in July 1969. It stars Ryan Gosling, Claire Foy and Jason Clarke.

Having worked with the film's director, Damien Chazelle, on two previous outings — La La Land (2016) and Whiplash (2014) — Cross advised that he likes to have sound available on his Avid workstation as soon as possible. "I had some rough music for the big action scenes," he said, "together with effects recordings from Ai-Ling [Lee]." The latter included some of the SpaceX rockets, plus recordings of spacesuits and other NASA artifacts. "This gave me a sound bed for my first cut," the picture editor continued. "I sent that temp track to Ai-Ling for her sound design and SFX, and to Milly [Iatrou Morgan] for dialog editorial."

A key theme for the film was its documentary style, Taylor recalled: "That guided the shape of the soundtrack and the dialog pre-dubs. They had a cutting room next to the Hitchcock Theater [at Universal Studios, used for pre-dub mixes and finals] so that we could monitor progress." There were no temp mixes on this project.

“We had a lot of close-up scenes to support Damien’s emotional feel, and used sound to build out the film,” Cross noted. “Damien watched a lot of NASA footage shot on 16 mm film, and wanted to make our film [immersive] and personal, using Neil Armstrong as a popular icon. In essence, we were telling the story as if we had taken a 16 mm camera into a capsule and shot the astronauts into space. And with an Atmos soundtrack!”

“We pre-scored the soundtrack against animatics in March 2017,” commented Hurwitz. “Damien [Chazelle] wanted to storyboard to music and use that as a basis for the first cut. I developed some themes on a piano and then full orchestral mock-ups for picture editorial. We then re-scored the film after we had a locked picture.” “We developed a grounded, gritty feel to support the documentary style that was not too polished,” Lee continued. “For the scenes on Earth we went for real-sounding backgrounds, Foley and effects. We also narrowed the mix field to complement the narrow image but, in contrast, opened it up for the set pieces to surround the audience.”

“The dialog had to sound how the film looked,” Morgan stressed. “To create that real-world environment I often used the mix channel for dialog in busy scenes like mission control, instead of the [individual] lavalier mics with their cleaner output. We also miked everybody in Mission Control – maybe 24 tracks in all.” “And we secured as many authentic sound recordings as we could,” Lee added. “In order to emphasize the emotional feel of being inside Neil Armstrong’s head space, we added surreal and surprising sounds like an elephant roar, lion growl or animal stampede to these cockpit sequences. We also used distortion and over-modulation to add ‘grit’ and realism.”

“It was a Native Atmos mix,” advised Montaño. “We used Atmos to reflect what the picture showed us, but not in a gimmicky way.” “During the rocket launch scenes,” Lee offered, “we also used the Atmos full-range surround channels to place many of the full-bodied, bombastic rocket roars and explosions around the audience.” “But we wanted to honor the documentary style,” Taylor added, “by keeping the music within the front LCR loudspeakers, and not coming too far out into the surrounds.”

“A Star Is Born” panel: (L-R) Steve Morrow, Dean Zupancic and Nick Baxter

The Sound of Director Bradley Cooper’s A Star Is Born
A subsequent panel discussion in the "Composers Lounge" series, again moderated by Kiser, focused on "The Sound of A Star Is Born," with production sound mixer Steve Morrow, music production mixer Nick Baxter and re-recording mixer Dean Zupancic. The film is a retelling of the classic tale of a musician – Jackson Maine, played by Cooper – who helps a struggling singer find fame, even as age and alcoholism send his own career into a downward spiral. Morrow recounted that the director's co-star, Lady Gaga, insisted that all vocals be recorded live.

“We arranged to record scenes during concerts at the Stagecoach 2017 Festival,” the production mixer explained. “But because these were new songs that would not be heard in the film until 18 months later, [to prevent unauthorized bootlegs] we had to keep the sound out of the PA system, and feed a pre-recorded band mix to on-stage wedges or in-ear monitors.” “We had just a handful of minutes before Willie Nelson was scheduled to take the stage,” Baxter added, “and so we had to work quickly” in front of an audience of 45,000 fans. “We rolled on the equipment, hooked up the microphones, connected the monitors and went for it!”

To recreate the sound of real-world concerts, Baxter made impulse-response recordings of each venue – in stereo as well as 5.1- and 7.1-channel formats. "To make the soundtrack sound totally live," Morrow continued, "at the Coachella Festival we also captured the IR sound echoing off nearby mountains." Other scenes were shot during an August 2017 Los Angeles stop on Lady Gaga's "Joanne" tour, and others in the Palm Springs Convention Center, where Cooper's character is seen performing at a pharmaceutical convention.
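The impulse-response technique works by convolving a dry recording with the measured response of the venue, which places the dry signal "in" that space. A minimal sketch, assuming mono WAV files and the soundfile and SciPy libraries; the file names and dry/wet blend are hypothetical.

    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, sr = sf.read("dry_vocal.wav")        # dry close-mic recording (mono)
    ir, sr_ir = sf.read("venue_ir.wav")       # measured venue impulse response
    assert sr == sr_ir, "sample rates must match"

    wet = fftconvolve(dry, ir)[: len(dry)]    # convolve, trim tail to original length
    mix = 0.7 * dry + 0.3 * wet               # blend direct and reverberant sound
    sf.write("vocal_in_venue.wav", mix, sr)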

“For scenes filmed at the Glastonbury Festival in the UK in front of 110,000 people,” Morrow recalled, “we had been allocated just 10 minutes to record parts for two original songs — ‘Maybe It’s Time’ and ‘Black Eyes’ — ahead of Kris Kristofferson’s set. But then we were told that, because the concert was running late, we only had three minutes. So we focused on securing 30 seconds of guitar and vocals for each song.”

During a scene shot in a parking lot outside a food market, where Lady Gaga's character sings a cappella, Morrow advised that he had four microphones on the actors: "Two booms, top and bottom, for Bradley Cooper's voice, and lavalier mics; we used the boom track when Lady Gaga (as Ally) belted out. I always had my hand on the gain knob! That was a key scene because it established for the audience that Ally can sing."

Zupancic noted that first-time director Cooper was intimately involved in all aspects of post production, just as he was in production. “Bradley Cooper is a student of film,” he said. “He worked closely with supervising sound editor Alan Robert Murray on the music and SFX collaboration.” The high-energy Atmos soundtrack was realized at Warner Bros Studio Facilities’ post production facility in Burbank; additional re-recording mixers included Michael Minkler, Matthew Iadarola and Jason King, who also handled SFX editing.

An Avid session called "Monitoring and Control Solutions for Post Production with Immersive Audio" featured the company's senior product specialist, Jeff Komar, explaining how Pro Tools with an S6 controller and an MTRX interface can manage complex immersive audio projects. A Mix panel titled "Mixing Dialog: The Audio Pipeline," moderated by Karol Urban from Cinema Audio Society, brought together re-recording mixers Gary Bourgeois and Mathew Waters with production mixer Phil Palmer and sound supervisor Andrew DeCristofaro. "The Business of Immersive," moderated by Gadget Hopkins, EVP with Westlake Pro, addressed immersive audio technologies, including Dolby Atmos, DTS and Auro-3D; other key topics included outfitting a post facility, new distribution paradigms and ROI while future-proofing a stage.

A companion “Parade of Carts & Bags,” presented by Cinema Audio Society in the Barbra Streisand Scoring Stage, enabled production sound mixers to show off their highly customized methods of managing the tools of their trade, from large soundstage productions to reality TV and documentaries.

Finally, within the Atmos-equipped William Holden Theater, the regular "Sound Reel Showcase," sponsored by Formosa Group, presented eight-minute reels from films likely to be in consideration for a Best Sound Oscar, MPSE Golden Reel and CAS Awards: A Quiet Place (Paramount), introduced by Erik Aadahl; Black Panther, introduced by Steve Boeddeker; Deadpool 2, introduced by Martyn Zub; Mile 22, introduced by Dror Mohar; Venom, introduced by Will Files; Goosebumps 2, introduced by Sean McCormack; Operation Finale, introduced by Scott Hecker; and Jane, introduced by Josh Johnson.

Main image: The Sound of First Man panel — Ai-Ling Lee (left), Mildred Iatrou Morgan & Tom Cross.

All photos copyright of Mel Lambert


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.



Behind the Title: Sim re-recording mixer Sue Pelino

This audio post vet, who specializes in performance-based projects, is also an accomplished musician.

Name: Sue Pelino

Company: Sim Post New York

Can you describe your company?
I work within Sim Post New York, a division of Sim located in North Tribeca. This is a post production studio that specializes in offline editing, sound and picture finishing, color timing and VFX/Flame. We offer our clients end-to-end solutions for content creation and certified project delivery. Sim also has locations in Los Angeles, Atlanta, Toronto and Vancouver, as well as three other divisions: Studio, Camera, and Lighting and Grip.

What’s your job title?
Senior Re-Recording Mixer

What does that entail?
At Sim Post New York, the job of re-recording mixer entails all aspects of sound for picture. We are not only responsible for the final 5.1 and stereo mixes, but also act as supervising sound editors and sound designers. Our team of re-recording mixers mainly concentrates on long-form television, including documentaries, scripted series, reality programs and game shows. Of course, we mix commercials and promos as well. I specialize in music performance and comedy specials.

What would surprise people the most about what falls under that title?
The amount of mouth clicks and spit that we need to remove from dialogue!

What’s your favorite part of the job?
The lasting friendships that I have made with my clients and colleagues. Also, I’ve had the great opportunity to work with so many interesting artists and actors, which makes the job exciting.

What’s your least favorite?
The unpredictable hours.

What is your most productive time of the day?
I am a night owl, so I’m most creative between 7pm and 1am… actually, make that 2am.

If you didn’t have this job, what would you be doing instead?
I would most likely be a full-time musician/songwriter. I would also absolutely love to design guitars.

How early on did you know this would be your path?
The first time I was in a recording studio was when I was 10 years old. My dad was friends with a great jingle writer who offered to have me come in for a recording session. He thought that I was going to play “Mary Had a Little Lamb” on ukulele, but instead I showed up with a seven-piece band and played two originals and a Carpenters tune. I think I played a Gibson Dove that day at Electric Lady Studios. I was mesmerized, and it was at that moment when I got the bug!

Rock & Roll Hall of Fame Induction Ceremony: Bon Jovi.

Can you name some recent projects you have worked on?
Roy Wood Jr. — No One Loves You (Comedy Central stand-up special), MTV Video Music Awards, Kings of Leon – Landmarks Live in Concert, Special Olympics: 50 Years of Changing the Game (ESPN on ABC documentary), Rock & Roll Hall of Fame Induction Ceremony (HBO Special).

What is the project that you are most proud of?
Tony Bennett: An American Classic (NBC special, directed by Rob Marshall).

Name three pieces of technology that you can’t live without.
iZotope RX Post Production Suite, Penteo 7 Pro and Pro Tools.

What social media channels do you follow?
For work, I follow LinkedIn. It’s a great research and marketing tool and was extremely helpful in getting the word out when the team and I joined Sim. I heard from connections and clients that I hadn’t heard from in a while. They were excited to come by for a tour.

Some groups/companies that I follow on LinkedIn include: Audio Engineering Society, Sim, Panavision, Waves Audio, Apogee Electronics, Avid, Technicolor, Viacom, Sony Music Entertainment, HBO, Hulu, Netflix, Media and Entertainment Professionals, New York Film Academy, New York Women in Film and Television, Producers Guild of America.

For pleasure (and a little business) I love Instagram. I have always been into photography and love to get my message across in a photo. I definitely do follow quite a few production companies and many of my clients who are also close friends.

Care to share some music to listen to?
In my car, I mainly listen to Jam On. At home I’ve been constantly playing the LP of my new favorite band, Roadcase Royale (on my turntable)!

What do you do to de-stress from it all?
I play guitar and sing almost every night. Many times I even hold my guitar while chilling out watching TV, and I’ve found myself playing along with the Game of Thrones theme!


Quick Chat: AI-based audio mastering

Antoine Rotondo is an audio engineer by trade who has been in the business for the past 17 years. Throughout his career he’s worked in audio across music, film and broadcast, focusing on sound reproduction. After completing college studies in sound design, undergraduate studies in music and music technology, as well as graduate studies in sound recording at McGill University in Montreal, Rotondo went on to work in recording, mixing, producing and mastering.

He is currently an audio engineer at Landr.com, which has released Landr Audio Mastering for Video, a tool that provides professional video editors with AI-based audio mastering capabilities in Adobe Premiere Pro CC.

As an audio engineer, how do you feel about AI tools that shortcut the mastering process?
Well, first, there's a myth that AI and machines can't possibly make valid decisions in the creative process in a consistent way. There's actually a huge intersection between artistic intentions and technical solutions, where we find many patterns: people tend to agree and go about things very similarly, often unknowingly. We've been building technology around that.

Truth be told, there are many tasks in audio mastering that are repetitive and that people don't necessarily like spending a lot of time on, tasks such as leveling dialogue, music and background elements across multiple segments, or dealing with noise. Everyone's job gets easier when those tasks become automated.
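As a concrete example of one such automatable task, here is a minimal sketch of loudness leveling using the open-source pyloudnorm (ITU-R BS.1770) and soundfile libraries. The target level and file names are illustrative assumptions, not Landr's actual pipeline.

    import soundfile as sf
    import pyloudnorm as pyln

    TARGET_LUFS = -24.0  # a common dialog-level target for broadcast; assumed here

    for name in ["dialog_seg1.wav", "dialog_seg2.wav", "music_bed.wav"]:
        data, rate = sf.read(name)
        meter = pyln.Meter(rate)                     # BS.1770 loudness meter
        loudness = meter.integrated_loudness(data)   # measured integrated LUFS
        leveled = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
        sf.write(f"leveled_{name}", leveled, rate)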

I see innovation in AI-driven audio mastering as a way to make creators more productive and efficient — not to replace them. It’s now more accessible than ever for amateur and aspiring producers and musicians to learn about mastering and have the resources to professionally polish their work. I think the same will apply to videographers.

What’s the key to making video content sound great?
Great sound quality is effortless and sounds as natural as possible. It’s about creating an experience that keeps the viewer engaged and entertained. It’s also about great communication — delivering a message to your audience and even conveying your artistic vision — all this to impact your audience in the way you intended.

More specifically, audio shouldn’t unintentionally sound muffled, distorted, noisy or erratic. Dialogue and music should shine through. Viewers should never need to change the volume or rewind the content to play something back during the program.

When would you want to hire an audio mastering engineer, and when could a project rely solely on an AI engine for audio mastering?
Mastering engineers are especially important for extremely intricate artistic projects that require direct communication with a producer or artist, including long-form narrative, feature films, television series and also TV commercials. Any project with conceptual sound design will almost always require an engineer to perfect the final master.

Users can truly benefit from AI-driven mastering on short-form, nonfiction projects that require clean dialog, reduced background noise and overall leveling. Quick-turnaround projects can also use AI mastering to elevate the audio to a more professional level, even when deadlines are tight. AI mastering can now insert itself into the offline creation process, where multiple revisions of a project are sent back and forth, making great sound accessible throughout the entire production cycle.

The other thing to consider is that AI mastering is a great option for video editors who don’t have technical audio expertise themselves, and where lower budgets translate into them having to work on their own. These editors could purchase purpose-built mastering plugins, but they don’t necessarily have the time to learn how to really take advantage of these tools. And even if they did have the time, some would prefer to focus more on all the other aspects of the work that they have to juggle.

Rex Recker’s mix and sound design for new Sunoco spot

By Randi Altman

Rex Recker

Digital Arts audio post mixer/sound designer Rex Recker recently completed work on a 30-second Sunoco spot for Allen & Gerritsen/Boston and Cosmo Street Edit/NYC. In the commercial, a man is seen pumping his own gas at a Sunoco station and checking his phone. You can hear birds chirping and traffic moving in the background when suddenly a robotic female voice comes from the pump itself, asking what app he's looking at.

He explains it’s the Sunoco mobile app and that he can pay for the gas directly from his phone, saving time while earning rewards. The voice takes on an offended tone since he will no longer need her help when paying for his gas. The spot ends with a voiceover about the new app.

To find out more about the process, we reached out to New York-based Recker, who recorded the VO and performed the mix and sound design.

How early did you get involved, and how did you work with the agency and the edit house?
I was contacted before the mix by producer Billy Near about the nature of the spot. Specifically, about the filtering of the music coming out of the speakers at the gas station. I was sent all the elements from the edit house before the actual mix, so I had a chance to basically do a premix before the agency showed up.

Can you talk about the sound design you provided?
The biggest hurdle was settling on the sound texture of the woman coming out of the speaker of the gas pump. We tried about five different filtering profiles before choosing the one in the spot. I used McDSP FutzBox for the effect. The ambience was your basic run-of-the-mill birds and distant highway sound effects from my SoundMiner server. I added some Foley sound effects of the man handling the gas pump too.
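For readers curious what a futz profile does under the hood, here is a rough Python sketch of the general technique: band-limit the voice to a small-speaker range, then add mild saturation. The corner frequencies, drive and file names are illustrative guesses, not FutzBox's actual settings.

    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    def futz(x: np.ndarray, sr: int, lo: float = 300.0, hi: float = 3400.0,
             drive: float = 4.0) -> np.ndarray:
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, x)                         # small-speaker bandwidth
        return np.tanh(drive * band) / np.tanh(drive)  # soft clip adds "grit"

    voice, sr = sf.read("pump_voice.wav")  # hypothetical dry VO file (mono)
    sf.write("pump_voice_futzed.wav", futz(voice, sr), sr)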

Any challenges on this spot?
Besides designing the sound processing on the music and the woman's voice, the biggest hurdle was cleaning up the dialogue, which was very noisy and didn't match from shot to shot. I used iZotope RX 6 to clean up the dialogue and also used Ambience Match to create a seamless background ambience. RX 6 is the biggest mix-saver in my audio toolbox. I love how it smoothed out the dialogue.