First Man: Historical fiction meets authentic sound

By Jennifer Walden

Historical fiction is not a rigidly factual account, but rather an interpretation. Fact and fiction mix to tell a story in a way that helps people connect with the past. In director Damien Chazelle’s film First Man, audiences experience his vision of how the early days of space exploration may have been for astronaut Neil Armstrong.

Frank A. Montaño

The uncertainty of reaching the outer limits of Earth's atmosphere, the near disasters and mistakes that led to the loss of several lives, and the ultimate success of landing on the moon — these are presented so viscerally that the audience feels as though they are riding along with Armstrong.

While First Man is not a documentary, there are factual elements in the film, particularly in the sound. “The concept was to try to be true to the astronauts’ sonic experience. What would they hear?” says effects re-recording mixer Frank A. Montaño, who mixed the film alongside re-recording mixer Jon Taylor (on dialogue/music) in the Alfred Hitchcock Theater at Universal Studios in Los Angeles.

Supervising sound editors Ai-Ling Lee (who also did re-recording mixing on the film) and Milly Iatrou were in charge of designing a soundtrack that was both authentic and visceral — a mix of reality and emotionality. When Armstrong (Ryan Gosling) and Dave Scott (Christopher Abbott) are being shot into space on a Gemini mission, everything the audience hears may not be completely accurate, but it’s meant to produce the accurate emotional response — i.e., fear, uncertainty, excitement, anxiety. The sound helps the audience to connect with the astronauts strapped into that handcrafted space capsule as it rattles and clatters its way into space.

As for the authentic sounds related to the astronauts’ experience — from the switches and toggles to the air inside the spacesuits — those were collected by several members of the post sound team, including Montaño, who by coincidence is an avid fan of the US space program and full of interesting facts on the subject. Their mission was to find and record era-appropriate NASA equipment and gear.

Recording
Starting at ILC Dover in Frederica, Delaware — original manufacturers of spacesuits for the Apollo missions — Montaño and sound effects recordist Alex Knickerbocker recorded a real A7L-B, which, says Montaño, is the second revision of the Apollo suit. It was actually worn by astronaut Paul Weitz, although it wasn't the one he wore in space. "ILC Dover completely opened up to us, and were excited for this to happen," says Montaño.

They spent eight hours recording every detail of the suit, like the umbilicals snapping in and out of place, and gloves and helmet (actually John Young’s from Apollo 10) locking into the rings. “In the film, when you see them plug in the umbilical for water or air, that’s the real sound. When they are locking the bubble helmet on to Neil’s suit in the clean room, that’s the real sound,” explains Montaño.

They also captured the internal environment of the spacesuit, which had never been officially documented before. “We could get hours of communications — that was easy — but there was no record of what those astronauts [felt like in those] spacesuits for that many hours, and how those things kept them alive,” says Montaño.

Back at Universal on the Hitchcock stage, Taylor and mix tech Bill Meadows were receiving all the recorded sounds from Montaño and Knickerbocker, who were still at ILC Dover. “We weren’t exactly in the right environment to get these recordings, so JT [Jon Taylor] and Bill let us know if it was a little too live or a little too sharp, and we’d move the microphones or try different microphones or try to get into a quieter area,” says Montaño.

Next, Montaño and Knickerbocker traveled to the US Space and Rocket Center in Huntsville, Alabama, where the Saturn V rocket was developed. “This is where Wernher von Braun (chief architect of the Saturn V rocket) was based out of, so they have a huge Apollo footprint,” says Montaño. There they got to work inside a Lunar Excursion Module (LEM) simulator, which according to Montaño was one of only two that were made for training. “All Apollo astronauts trained in these simulators including Neil and Buzz, so it was under plexiglass as it was only for observation. But, they opened it up to us. We got to go inside the LEM and flip all the switches, dials, and knobs and record them. It was historic. This has never been done before and we were so excited to be there,” says Montaño.

Additionally, they recorded a DSKY (Display and Keypad) flight guidance computer used by the crew to communicate with the LEM computer. This can be seen during the sequence of Buzz (Corey Stoll) and Neil landing on the moon. “It has this big numeric keypad, and when Buzz is hitting those switches it’s the real sound. When they flip all those switch banks, all those sounds are the real deal,” reports Montaño.

Other interesting recording adventures include the Cosmosphere in Hutchinson, Kansas, where they recorded all the switches and buttons of the original control flight consoles from Mission Control at the Johnson Space Center (JSC). At Edwards Air Force Base in Southern California, they recorded Joe Walker's X-15 suit, capturing the movement and helmet sounds.

The team also recorded Beta cloth — the white, fireproof silica-fiber cloth used for the Apollo spacesuits — at the Space Station Museum in Novato, California. They used Gene Cernan's (Apollo 17) connector cover, which reportedly sounds like a plastic bag or hula skirt.

Researching
They also recreated sounds based on research. For example, they had recorded an approximation of lunar boots on the moon's surface, but only from an exterior perspective. What would boots on the lunar surface sound like from inside the spacesuit? First, they did the research to find the right silicone used during that era. Then Frank Cuomo, a post supervisor at Universal, created a unique pair of lunar boots based on Montaño's idea of having ports above the soles, into which they could insert lav mics. "Frank happens to do this as a hobby, so I bounced this idea for the boots off of him and he actually made them for us," says Montaño.

Next, they researched what the lunar surface was made of. Their path led to NASA’s Ames Research Center where they have an eight-ton sandbox filled with JSC-1A lunar regolith simulant. “It’s the closest thing to the lunar surface that we have on earth,” he explains.

He strapped on the custom-made boots and walked on this "lunar surface" while Knickerbocker and sound effects recordist Peter Brown captured it with numerous different mics, including a hydrophone placed on the surface, "which gave us a thuddy, non-pitched/non-fidelity-altered sound that was the real deal," says Montaño. "But what worked best, to get that interior sound, were the lav mics inside those ports on the soles."

While the boots on the lunar surface sound ultimately didn’t make it into the film, the boots did come in handy for creating a “boots on LEM floor” sound. “We did a facsimile session. JT (Taylor) brought in some aluminum and we rigged it up and got the silicone soles on the aluminum surface for the interior of the LEM,” says Montaño.

Jon Taylor

Another interesting sound they recreated was the low-fuel alarm inside the LEM. According to Montaño, their research uncovered a document specifying the alarm's exact character: a square wave spanning 750 cycles to 2,000 cycles. "The sound got a bit tweaked out just for excitement purposes. You hear it on their powered descent, when they're coming in for a landing on the moon, and they're low on fuel and 20 seconds from a mandatory abort."
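As a rough illustration of that documented spec — a hypothetical sketch only, since the exact sweep shape and the film team's "tweaks" aren't described here — a square wave moving between those two frequencies can be synthesized in a few lines:

```python
import numpy as np

def lem_fuel_alarm(f_start=750.0, f_end=2000.0, duration=1.0, sr=44100):
    """Synthesize a square wave sweeping between two frequencies.

    A hypothetical approximation of the LEM low-fuel alarm described
    above: a square wave spanning 750 to 2,000 cycles per second.
    """
    t = np.arange(int(duration * sr)) / sr
    # Phase is the integral of the linearly swept instantaneous frequency.
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sign(np.sin(phase))  # hard-clip the sine into a square wave

tone = lem_fuel_alarm()
```

Scaling `tone` to 16-bit integers and writing it out with the stdlib `wave` module lets you audition the result.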

Altogether, the recording process was spread over nearly a year, with about 98% of their recorded sounds making it into the final soundtrack. Taylor says, "The locking of the gloves, and the locking and handling of the helmet that belonged to John Young will live forever. It was an honor to work with that material."

Montaño adds, “It was good to get every angle that we could, for all the sounds. We spent hours and hours trying to come up with these intangible pieces that only a handful of people have ever heard, and they’re in the movie.”

Helmet Comms
To recreate the comms sound of the transmissions back and forth between NASA and the astronauts, Montaño and Taylor took a practical approach. Instead of relying on plug-ins for futz and reverb, they built a 4-foot-by-3-foot isolated enclosure on wheels, deadened with acoustical foam and featuring custom-fit brackets inside to hold either a high-altitude helmet (to replicate dialogue for the X-15 and the Gemini missions) or a bubble helmet (for the Apollo missions).

Each helmet was recorded independently using its own two-way coaxial car speaker and a set of microphones strapped to mini tripods that were set inside each helmet in the enclosure. The dialogue was played through the speaker in the helmet and sent back to the console through the mics. Taylor says, “It would come back really close to being perfectly in sync. So I could do whatever balance was necessary and it wouldn’t flange or sound strange.”

By adjusting the amount of helmet feed in relation to the dry dialogue, Taylor was able to change the amount of “futz.” If a scene was sonically dense, or dialogue clarity wasn’t an issue (such as the tech talk exchanges between Houston and the astronauts), then Taylor could push the futz further. “We were constantly changing the balance depending on what the effects and music were doing. Sometimes we could really feel the helmet and other times we’d have to back off for clarity’s sake. But it was always used, just sometimes more than others.”
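The balance Taylor describes is essentially a wet/dry crossfade between the clean dialogue and the helmet-speaker re-record. A minimal sketch of that idea (function and parameter names are mine, not anything from the actual mix):

```python
import numpy as np

def futz_mix(dry, helmet, amount):
    """Blend clean dialogue with the helmet-speaker re-record.

    amount runs from 0.0 (all clean dialogue, maximum clarity) to 1.0
    (all helmet feed, maximum "futz"). An equal-power crossfade keeps
    the perceived level roughly constant as the balance is ridden.
    """
    theta = np.clip(amount, 0.0, 1.0) * np.pi / 2
    return np.cos(theta) * np.asarray(dry) + np.sin(theta) * np.asarray(helmet)

# A dense tech-talk scene might ride the futz higher;
# a clarity-critical line would pull it back toward dry.
dry = np.ones(4)
helmet = np.zeros(4)
mostly_futzed = futz_mix(dry, helmet, 0.8)
```

Because the helmet feed comes back nearly in sync with the dry track, a simple sum like this works without flanging — which is the property Taylor points out above.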

Density and Dynamics
The challenge of the mix on First Man was to keep the track dynamic and not let the sound get too loud until it absolutely needed to. This made the launches feel powerful and intense. "If everything were loud up to that point, it just wouldn't have the same pop," says Taylor. "The director wanted to make sure that when we hit those rockets they felt huge."

One way to support the dynamics was choosing how to make the track appropriately less dense. For example, during the Gemini launch there are the sounds of the rocket’s different stages as it blasts off and breaks through the atmosphere, and there’s the sound of the space capsule rattling and metal groaning. On top of that, there’s Neil’s voice reading off various specs.

“When it comes to that kind of density sound-wise, you have to decide should we hear the actors? Are we with them? Do we have to understand what they are saying? In some cases, we just blew through that dialogue because ‘RCS Breakers’ doesn’t mean anything to anybody, but the intensity of the rocket does. We wanted to keep that energy alive, so we drove through the dialogue,” says Montaño. “You can feel that Neil’s calm, but you don’t need to understand what he’s saying. So that was a trick in the balance; deciding what should be heard and what we can gloss over.”

Another helpful factor was that the film's score, by composer Justin Hurwitz, wasn't bombastic. During the rocket launches, it wasn't fighting for space in the mix. "The direction of the music is super supportive and it never had to play loud. It just sits in the pocket," says Taylor. "The Gemini launch didn't have music, which really allowed us to take advantage of the sonic structure that was built into the layers of sound effects and design for the takeoff."

Without competition from the music and dialogue, the effects could really take the lead and tell the story of the Gemini launch. The camera stays close-up on Neil in the cockpit and doesn't show an exterior perspective (as it does during the Apollo launch sequence). The audience's understanding of what's happening comes from the sound. You hear the "bbbbbwhoop" of the Titan II missile during ignition, and hear the liftoff of the rocket. You hear the point at which they go through maximum dynamic pressure, characterized by the metal rattling and groaning inside the capsule as it's subjected to extreme buffeting and stress.

Next you hear the first stage cut-off and the initial boosters break away followed by the ignition of the second stage engine as it takes over. Then, finally, it’s just the calmness of space with a few small metal pings and groans as the capsule settles into orbit.

Even though it’s an intense sequence, all the details come through in the mix. “Once we got the final effects tracks, as usual, we started to add more layers and more detail work. That kind of shaping is normal. The Gemini launch builds to that moment when it comes to an abrupt stop sonically. We built it up layer-wise with more groan, more thrust, more explosive/low-end material to give it some rhythm and beats,” says Montaño.

Although the rocket sounds like it’s going to pieces, Neil doesn’t sound like he’s going to pieces. He remains buttoned-up and composed. “The great thing about that scene was hearing the contrast between this intense rocket and the calmness of Neil’s voice. The most important part of the dialogue there was that Neil sounded calm,” says Taylor.

Apollo
Visually, the Apollo launch was handled differently in the film. There are exterior perspectives, but even though the camera shows the launch from various distances, the sound maintains its perspective — close as hell. “We really filled the room up with it the whole time, so it always sounds large, even when we are seeing it from a distance. You really feel the weight and size of it,” says Montaño.

The rocket that launched the Apollo missions was the most powerful ever created: the Saturn V. Recreating that sound was a big job and came with a bit of added pressure from director Chazelle. “Damien [Chazelle] had spoken with one of the Armstrong sons, Mark, who said he’s never really felt or heard a Saturn V liftoff correctly in a film. So Damien threw it our way. He threw down the gauntlet and challenged us to make the Armstrong family happy,” says Montaño.

Field recordists John Fasal and Skip Longfellow were sent to record the launch of the world’s second largest rocket — SpaceX’s Falcon Heavy. They got as close as they could to the rocket, which generated 5.5 million pounds of thrust. They also recorded it at various distances farther away. This was the biggest component of their Apollo launch sound for the film. It’s also bolstered by recordings that Lee captured of various rocket liftoffs at Vandenberg Air Force Base in California.

But recreating the world's most powerful rocket required some mega recordings that regular mics just couldn't produce. So they headed over to the Acoustic Test Chamber at JPL in Pasadena, which is where NASA sonically bombards and acoustically excites hardware before it's sent into space. "They simulate the conditions of liftoff to see if the hardware fails under that kind of sound pressure," says Montaño. They do this by "forcing nitrogen gas through this six-inch hose that goes into a diaphragm that turns that gas into some sort of soundwave, like pink noise. There are four loudspeakers bolted to the walls of this hard-shelled room, and the speakers are probably about 4 feet by 4 feet. It goes up to 153dB in there; that's max." (Fun fact: The sound team wasn't able to physically be in the room to hear the sound since the gas would have killed them. They could only hear the sound via their recordings.)

The low-end energy of that sound was a key element in their Apollo launch. So how do you capture the most low-end possible from a high-SPL source? Taylor had an interesting solution: using a 10-inch bass speaker as a microphone. "Years ago, while reading a music magazine, I discovered this method of recording low-end using a subwoofer or any bass speaker. If you have a 10-inch speaker as a mic, you're going to be able to capture much more low-end. You may even be able to get as low as 7Hz," Taylor says.

Montaño adds, “We were able to capture another octave lower than we’d normally get. The sounds we captured really shook the room, really got your chest cavity going.”

For the rocket sequences — the X-15 flight, the Gemini mission and the Apollo mission — their goal was to craft an experience the audience could feel. It was about energy and intensity, but also clarity.

Taylor concludes, “Damien’s big thing — which I love — is that he is not greedy when it comes to sound. Sometimes you get a movie where everything has to be big. Often, Damien’s notes were for things to be lower, to lower sounds that weren’t rocket affiliated. He was constantly making sure that we did what we could to get those rocket scenes to punch, so that you really felt it.”


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

A Star is Born: Live vocals, real crowds and venues

By Jennifer Walden

Warner Bros. Pictures' remake of A Star is Born stars Bradley Cooper as Jackson Maine, a famous musician with a serious drinking problem who stumbles onto singer/songwriter Ally (Lady Gaga) at a drag bar where she's giving a performance. Jackson is taken by her raw talent and their chance meeting turns into something more. With Jackson's help, Ally becomes a star but her fame is ultimately bittersweet.

Jason Ruder

Aside from Lady Gaga and Bradley Cooper (who also directed and co-wrote the screenplay), the other big star of this film is the music. Songwriting started over two years ago. Cooper and Gaga collaborated with several other songwriters along the way, like Lukas Nelson (son of Willie Nelson), Mark Ronson, Hillary Lindsey and DJ White Shadow.

According to supervising music editor/re-recording mixer Jason Ruder from 2 Pop Music — who was involved with the film from pre-production through post — the lyrics, tempo and key signatures were even changing right up to the day of the shoot. "The songwriting went to the 11th hour. Gaga sort of works in that fashion," says Ruder, who witnessed her process first-hand during a sound check at Coachella. (2 Pop Music is located on the Warner Bros. lot in Burbank.)

Before each shoot, Ruder would split out the pre-recorded instrumental tracks and reference vocals, and have them ready for playback. But there were days when he would get a call from Gaga's manager as he was driving to the set, saying that she had gone into the studio in the middle of the night and made changes, so there were all-new pre-records for the day. I guess she could be called a bit of a perfectionist, always trying to make it better.

“On the final number, for instance, it was only a couple hours before the shoot and I got a message from her saying that the song wasn’t final yet and that she wanted to try it in three different keys and three different tempos just to make sure,” shares Ruder. “So there were a lot of moving parts going into each day. Everyone that she works with has to be able to adapt very quickly.”

Since the music is so important to the story, here’s what Cooper and Gaga didn’t want — they start singing and the music suddenly switches over to a slick, studio-produced track. That concern was the driving force behind the production and post teams’ approach to the on-camera performances.

Recording Live Vocals
All the vocals in A Star is Born were recorded live on-set, and those live vocals are the ones used in the film's final mix. To pull this off, Ruder and the production sound team did a stage test at Warner Bros. to see if this was possible. They had a pre-recorded track of the band, which they played back on the stage. First, Cooper and Gaga did live vocals. Then they tried the song again, with Cooper and Gaga miming along to pre-recorded vocals. Ruder took the material back to his cutting room and built a quick version of both. The comparison solidified their decision. "Once we got through that test, everyone was more confident about doing the live vocals. We felt good about it," he says.

Their first shoot for the film was at Coachella, on a weekday since there were no performances. They were shooting a big, important concert scene for the film and only had one day to get it done. “We knew that it all had to go right,” says Ruder. It was their first shot at live vocals on-set.

Neither the music nor the vocals were amplified through the stage’s speaker system since song security was a concern — they didn’t want the songs leaked before the film’s release. So everything was done through headphone mixes. This way, even those in the crowd closest to the stage couldn’t hear the melodies or lyrics. Gaga is a seasoned concert performer, comfortable with performing at concert volume. She wasn’t used to having the band muted and the vocals live (though not amplified), so some adjustments needed to be made. “We ended up bringing her in-ear monitor mixer in to help consult,” explains Ruder. “We had to bring some of her touring people into our world to help get her perfectly comfortable so she could focus on acting and singing. It worked really well, especially later for Arizona Sky, where she had to play the piano and sing. Getting the right balance in her ear was important.”

As for Jackson Maine’s band on-screen, those were all real musicians and not actors — it was Lukas Nelson’s band. “They’re used to touring together. They’re very tight and they’re seasoned musicians,” says Ruder. “Everyone was playing and we were recording their direct feeds. So we had all the material that the musicians were playing. For the drums, those had to be muted because we didn’t want them bleeding into the live vocals. We were on-set making sure we were getting clean vocals on every take.”

Real Venues, Real Reverbs
Since the goal from the beginning was to create realistic-sounding concerts, Ruder decided to capture impulse responses at every performance location — from big stages like Coachella to much smaller venues — and use those to create reverbs in Audio Ease’s Altiverb.

The challenge wasn’t capturing the IRs, but rather, trying to convince the assistant director on-set that they needed to be captured. “We needed to quiet the whole set for five or 10 minutes so we could put up some mics and shoot these tones through the spaces. This all had to be done on the production clock, and they’re just not used to that. They didn’t understand what it was for and why it was important — it’s not cheap to do that during production,” explains Ruder.

Those IRs were like gold during post. They allowed the team to recreate spaces like the main stage at Coachella, the Greek Theatre and the Shrine Auditorium. “We were able to manufacture our own reverbs that were pretty much exactly what you would hear if you were standing there. For Coachella, because it’s so massive, we weren’t sure if they were going to come out, but it worked. All the reverbs you hear in the film are completely authentic to the space.”
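Underneath a convolver like Altiverb, the principle is straightforward: convolving a dry signal with a room's measured impulse response imprints that room's reverb onto the signal. A simplified numpy-only sketch of the idea — the sizes, gains and synthetic "room" below are illustrative assumptions, not anything from the film's sessions:

```python
import numpy as np

def convolution_reverb(dry, ir, wet_gain=0.5):
    """Apply a measured impulse response to a dry signal.

    Convolving with the IR effectively 'plays' the dry sound through
    the captured space; wet_gain sets the dry/wet balance.
    """
    wet = np.convolve(dry, ir)[: len(dry)]
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak  # normalize the wet path so it can't clip
    return (1.0 - wet_gain) * dry + wet_gain * wet

# Toy stand-in for a captured IR: an exponentially decaying noise tail.
sr = 8000
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0.0, 8.0, sr // 2))
dry = np.zeros(sr)
dry[0] = 1.0  # a single impulse (a "clap") as the dry source
out = convolution_reverb(dry, ir)
```

In practice a real IR is captured by playing a known test signal (a sweep or a starter-pistol-like impulse) in the space and recording the response — which is why the team needed those five to ten minutes of quiet on set.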

Live Crowds
Oscar-winning supervising sound editor Alan Murray at Warner Bros. Sound was also capturing sound at the concert performances, but his attention was turned away from the stage and toward the crowd. "We had about 300 to 500 people at the concerts, and I was able to get clean reactions from them since I wasn't picking up any music. So that approach of not amplifying the music worked for the crowd sounds too," he says.

Production sound mixer Steven Morrow had set up mics in and around the crowd and recorded those to a multitrack recorder while Murray had his own mic and recorder that he could walk around with, even capturing the crowds from backstage. They did multiple recordings for the crowds and then layered those in Avid Pro Tools in post.

Alan Murray

“For Coachella and Glastonbury, we ended up enhancing those with stadium crowds just to get the appropriate size and excitement we needed,” explains Murray. They also got crowd recordings from one of Gaga’s concerts. “There was a point in the Arizona Sky scene where we needed the crowd to yell, ‘Ally!’ Gaga was performing at Fenway Park in Boston and so Bradley’s assistant called there and asked Gaga’s people to have the crowd do an ‘Ally’ chant for us.”

Ruder adds, “That’s not something you can get on an ADR stage. It needed to have that stadium feel to it. So we were lucky to get that from Boston that night and we were able to incorporate it into the mix.”

Building Blocks
According to Ruder, they wanted to make sure the right building blocks were in place when they went into post. Those blocks — the custom recorded impulse responses, the custom crowds, the live vocals, the band’s on-set performances, and the band’s unprocessed studio tracks that were recorded at The Village — gave Ruder and the re-recording mixers ultimate flexibility during the edit and mix to craft on-scene performances that felt like big, live concerts or intimate songwriting sessions.

Even with all those bases covered, Ruder was still worried about it working. “I’ve seen it go wrong before. You get tracks that just aren’t usable, vocals that are distorted or noisy. Or you get shots that don’t work with the music. There were those guitar playing shots…”

A few weeks after filming, while Ruder was piecing all the music together in post, he realized that they got it all. “Fortunately, it all worked. We had a great DP on the film and it was clear that he was capturing the right shots. Once we got to that point in post, once we knew we had the right pieces, it was a huge relief.”

Relief gave way to excitement when Ruder reached the dub stage — Warner Bros. Stage 10. “It was amazing to walk into the final mix knowing that we had the material and the flexibility to pull this off,” he says.

In addition to using Altiverb for the reverbs, Ruder used Waves plug-ins, such as the Waves API Collection, to give the vocals and instrumental tracks a live concert sound. “I tend to use plug-ins that emulate more of a tube sound to get punchier drums and that sort of thing. We used different 5.1 spreaders to put the music in a 5.1 environment. We changed the sound to match the picture, so we dried up the vocals on close-ups so they felt more intimate. We had tons and tons of flexibility because we had clean vocals and raw guitars and drum tracks.”

All the hard work paid off. In the film, Ally joins Jackson Maine on stage to sing a song she wrote called “Shallow.” For Murray and Ruder, this scene portrays everything they wanted to achieve for the performances in A Star is Born. The scene begins outside the concert, as Ally and her friend get out of the car and head toward the stage. The distant crowd and music reverberate through the stairwell as they’re led up to the backstage area. As they get closer, the sound subtly changes to match their proximity to the band. On stage, the music and crowd are deafening. Jackson begins to play guitar and sing solo before Ally finds the courage to join in. They sing “Shallow” together and the crowd goes crazy.

“The whole sequence was timed out perfectly, and the emotion we got out of them was great. The mix there was great. You felt like you were there with them. From a mix perspective, that was probably the most successful moment in the film,” concludes Ruder.



Report: Sound for Film & TV conference focuses on collaboration

By Mel Lambert

The 5th annual Sound for Film & TV conference was once again held at Sony Pictures Studios in Culver City, in cooperation with the Motion Picture Sound Editors, the Cinema Audio Society and Mix magazine. The one-day event, which featured a keynote address from veteran sound designer Scott Gershin together with a broad cross-section of panel discussions on virtually all aspects of contemporary sound and post production, attracted some 650 attendees. Co-sponsors included Audionamix, Sound Particles, Tonsturm, Avid, Yamaha-Steinberg, iZotope, Meyer Sound, Dolby Labs, RSPE, Formosa Group and Westlake Audio.

With film credits that include Pacific Rim and The Book of Life, keynote speaker Gershin focused on advances in immersive sound and virtual reality experiences. Having recently joined Sound Lab at Keywords Studios, the sound designer and supervisor emphasized that "a single sound can set a scene," ranging from a subtle footstep to an echo-laden yell of terror. "I like to use audio to create a foreign landscape, and produce immersive experiences," he says, stressing that "dialog forms the center of attention, with music that shapes a scene emotionally and sound effects that glue the viewer into the scene." In summary, he concluded, "It is our role to develop a credible world with sound."

The Sound of Streaming Content — The Cloverfield Paradox
Avid-sponsored panels within the Cary Grant Theater included an overview of OTT techniques titled "The Sound of Streaming Content," which was moderated by Ozzie Sutherland, a production sound technology specialist with Netflix. Focusing on sound design and re-recording of the recent Netflix/Paramount Pictures sci-fi mystery The Cloverfield Paradox from director Julius Onah, the panel included supervising sound editor/re-recording mixer Will Files, co-supervising sound editor/sound designer Robert Stambler and supervising dialog editor/re-recording mixer Lindsey Alvarez. Files and Stambler have collaborated on several projects with director J.J. Abrams through Abrams' Bad Robot production company, including Star Trek: Into Darkness (2013), Star Wars: The Force Awakens (2015) and 10 Cloverfield Lane (2016), as well as Venom (2018).

The Sound of Streaming Content panel: (L-R) Ozzie Sutherland, Will Files, Robert Stambler and Lindsey Alvarez

"Our biggest challenge," Files readily acknowledged, "was the small crew we had on the project; initially, it was just Robby [Stambler] and me for six months. Then Star Wars: The Force Awakens came along, and we got busy!" "Yes," confirmed Stambler, "we spent between 16 and 18 months on post production for The Cloverfield Paradox, which gave us plenty of time to think about sound; it was an enlightening experience, since everything happens off-screen." While orbiting a planet on the brink of war, the film, starring Gugu Mbatha-Raw, David Oyelowo and Daniel Brühl, follows a team of scientists trying to solve an energy crisis that culminates in a dark alternate reality.

Having screened a pivotal scene from the film in which the spaceship’s crew discovers the effects of interdimensional travel while hearing strange sounds in a corridor, Alvarez explained how the complex dialog elements came into play, “That ‘Woman in The Wall’ scene involved a lot of Mandarin-language lines, 50% of which were re-written to modify the story lines and then added in ADR.” “We also used deep, layered sounds,” Stambler said, “to emphasize the screams,” produced by an astronaut from another dimension that had become fused with the ship’s hull. Continued Stambler, “We wanted to emphasize the mystery as the crew removes a cover panel: What is behind the wall? Is there really a woman behind the wall?” “We also designed happy parts of the ship and angry parts,” Files added. “Dependent on where we were on the ship, we emphasized that dominant flavor.”

Files explained that the theatrical mix for The Cloverfield Paradox in Dolby Atmos immersive surround took place at producer Abrams' Bad Robot screening theater, with a temporary Avid S6 M40 console. Files also mixed the first Atmos film, Brave, back in 2012. "J.J. [Abrams] was busy at the time," Files said, "but wanted to be around and involved," as the soundtrack took shape. "We also had a sound-editorial suite close by," Stambler noted. "We used several futz elements from the Mission Control scenes as Atmos Objects," added Alvarez.

“But then we received a request from Netflix for a near-field Atmos mix” that could be used for over-the-top streaming, recalled Files. “So we lowered the overall speaker levels, and monitored on smaller speakers to ensure that we could hear the dialog elements clearly. Our Atmos balance also translated seamlessly to 5.1- and 7.1-channel delivery formats.”

“I like mixing in Native Atmos because you can make final decisions with creative talent in the room,” Files concluded. “You then know that everything will work in 5.1 and 7.1. If you upmix to Atmos from 7.1, for example, the creatives have often left by the time you get to the Atmos mix.”

The Sound and Music of Director Damien Chazelle’s First Man
The series of “Composers Lounge” presentations held in the Anthony Quinn Theater, sponsored by SoundWorks Collection and moderated by Glenn Kiser from The Dolby Institute, included “The Sound and Music of First Man” with sound designer/supervising sound editor/SFX re-recording mixer Ai-Ling Lee, supervising sound editor Mildred Iatrou Morgan, SFX re-recording mixer Frank Montaño, dialog/music re-recording mixer Jon Taylor, composer Justin Hurwitz and picture editor Tom Cross. First Man takes a close look at the life of astronaut Neil Armstrong and the space mission that led him to become the first man to walk on the Moon in July 1969. It stars Ryan Gosling, Claire Foy and Jason Clarke.

Having worked with the film’s director, Damien Chazelle, on two previous outings — La La Land (2016) and Whiplash (2014) — Cross advised that he likes to have sound available on his Avid workstation as soon as possible. “I had some rough music for the big action scenes,” he said, “together with effects recordings from Ai-Ling [Lee].” The latter included some of the SpaceX rockets, plus recordings of space suits and other NASA artifacts. “This gave me a sound bed for my first cut,” the picture editor continued. “I sent that temp track to Ai-Ling for her sound design and SFX, and to Milly [Iatrou Morgan] for dialog editorial.”

A key theme for the film was its documentary style, Taylor recalled. “That guided the shape of the soundtrack and the dialog pre-dubs. They had a cutting room next to the Hitchcock Theater [at Universal Studios, used for pre-dub mixes and finals] so that we could monitor progress.” There were no temp mixes on this project.

“We had a lot of close-up scenes to support Damien’s emotional feel, and used sound to build out the film,” Cross noted. “Damien watched a lot of NASA footage shot on 16 mm film, and wanted to make our film [immersive] and personal, using Neil Armstrong as a popular icon. In essence, we were telling the story as if we had taken a 16 mm camera into a capsule and shot the astronauts into space. And with an Atmos soundtrack!”

“We pre-scored the soundtrack against animatics in March 2017,” commented Hurwitz. “Damien [Chazelle] wanted to storyboard to music and use that as a basis for the first cut. I developed some themes on a piano and then full orchestral mock-ups for picture editorial. We then re-scored the film after we had a locked picture.” “We developed a grounded, gritty feel to support the documentary style that was not too polished,” Lee continued. “For the scenes on Earth we went for real-sounding backgrounds, Foley and effects. We also narrowed the mix field to complement the narrow image but, in contrast, opened it up for the set pieces to surround the audience.”

“The dialog had to sound how the film looked,” Morgan stressed. “To create that real-world environment I often used the mix channel for dialog in busy scenes like mission control, instead of the [individual] lavalier mics with their cleaner output. We also miked everybody in Mission Control – maybe 24 tracks in all.” “And we secured as many authentic sound recordings as we could,” Lee added. “In order to emphasize the emotional feel of being inside Neil Armstrong’s head space, we added surreal and surprising sounds like an elephant roar, lion growl or animal stampede to these cockpit sequences. We also used distortion and over-modulation to add ‘grit’ and realism.”

“It was a Native Atmos mix,” advised Montaño. “We used Atmos to reflect what the picture showed us, but not in a gimmicky way.” “During the rocket launch scenes,” Lee offered, “we also used the Atmos full-range surround channels to place many of the full-bodied, bombastic rocket roars and explosions around the audience.” “But we wanted to honor the documentary style,” Taylor added, “by keeping the music within the front LCR loudspeakers, and not coming too far out into the surrounds.”

“A Star Is Born” panel: (L-R) Steve Morrow, Dean Zupancic and Nick Baxter

The Sound of Director Bradley Cooper’s A Star Is Born
A subsequent panel discussion in the “Composers Lounge” series, again moderated by Kiser, focused on “The Sound of A Star Is Born,” with production sound mixer Steve Morrow, music production mixer Nick Baxter and re-recording mixer Dean Zupancic. The film is a retelling of the classic tale of a musician – Jackson Maine, played by Cooper – who helps a struggling singer find fame, even as age and alcoholism send his own career into a downward spiral. Morrow recounted that the director’s costar, Lady Gaga, insisted that all vocals be recorded live.

“We arranged to record scenes during concerts at the Stagecoach 2017 Festival,” the production mixer explained. “But because these were new songs that would not be heard in the film until 18 months later, [to prevent unauthorized bootlegs] we had to keep the sound out of the PA system, and feed a pre-recorded band mix to on-stage wedges or in-ear monitors.” “We had just a handful of minutes before Willie Nelson was scheduled to take the stage,” Baxter added, “and so we had to work quickly” in front of an audience of 45,000 fans. “We rolled on the equipment, hooked up the microphones, connected the monitors and went for it!”

To recreate the sound of real-world concerts, Baxter made impulse-response recordings of each venue – in stereo as well as 5.1- and 7.1-channel formats. “To make the soundtrack sound totally live,” Morrow continued, “at the Coachella Festival we also captured the IR sound echoing off nearby mountains.” Other scenes were shot during an August 2017 Los Angeles stop on Lady Gaga’s “Joanne” Tour, and still others at the Palm Springs Convention Center, where Cooper’s character is seen performing at a pharmaceutical convention.
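An impulse response captured in a venue is typically applied to a dry studio recording by convolution, which imprints the room’s echo and decay onto the clean signal. This is only an illustrative sketch of the general technique, not the production’s actual toolchain; it assumes NumPy/SciPy and substitutes a synthetic decaying-noise IR for a real venue measurement:

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_ir(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with a measured impulse response,
    then peak-normalize so the wet result does not clip."""
    wet = fftconvolve(dry, ir, mode="full")
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy stand-in for a measured IR: direct sound plus a decaying noise tail.
sr = 48_000
dry = np.zeros(sr // 10)
dry[0] = 1.0                                  # a single click as the dry source
t = np.arange(sr // 2) / sr
rng = np.random.default_rng(0)
ir = np.exp(-6.0 * t) * rng.standard_normal(t.size) * 0.1
ir[0] = 1.0                                   # direct-path impulse
wet = apply_ir(dry, ir)                       # the click now carries the room's decay
```

FFT-based convolution is the practical choice here: a multi-second IR at 48 kHz is hundreds of thousands of samples, and direct time-domain convolution would be orders of magnitude slower. For a 5.1 or 7.1 IR, the same operation runs once per channel.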

“For scenes filmed at the Glastonbury Festival in the UK in front of 110,000 people,” Morrow recalled, “we had been allocated just 10 minutes to record parts for two original songs — ‘Maybe It’s Time’ and ‘Black Eyes’ — ahead of Kris Kristofferson’s set. But then we were told that, because the concert was running late, we only had three minutes. So we focused on securing 30 seconds of guitar and vocals for each song.”

During a scene shot in a parking lot outside a food market where Lady Gaga’s character sings a cappella, Morrow advised that he had four microphones on the actors: “Two booms, top and bottom, for Bradley Cooper’s voice, and lavalier mics; we used the boom track when Lady Gaga (as Ally) belted out. I always had my hand on the gain knob! That was a key scene because it established for the audience that Ally can sing.”

Zupancic noted that first-time director Cooper was intimately involved in all aspects of post production, just as he was in production. “Bradley Cooper is a student of film,” he said. “He worked closely with supervising sound editor Alan Robert Murray on the music and SFX collaboration.” The high-energy Atmos soundtrack was realized at Warner Bros Studio Facilities’ post production facility in Burbank; additional re-recording mixers included Michael Minkler, Matthew Iadarola and Jason King, who also handled SFX editing.

An Avid session called “Monitoring and Control Solutions for Post Production with Immersive Audio” featured the company’s senior product specialist, Jeff Komar, explaining how Pro Tools with an S6 Controller and an MTRX interface can manage complex immersive audio projects. A MIX Panel entitled “Mixing Dialog: The Audio Pipeline,” moderated by Karol Urban from the Cinema Audio Society, brought together re-recording mixers Gary Bourgeois and Mathew Waters with production mixer Phil Palmer and sound supervisor Andrew DeCristofaro. “The Business of Immersive,” moderated by Gadget Hopkins, EVP with Westlake Pro, addressed immersive audio technologies, including Dolby Atmos, DTS and Auro 3D; other key topics included outfitting a post facility, new distribution paradigms and ROI while future-proofing a stage.

A companion “Parade of Carts & Bags,” presented by Cinema Audio Society in the Barbra Streisand Scoring Stage, enabled production sound mixers to show off their highly customized methods of managing the tools of their trade, from large soundstage productions to reality TV and documentaries.

Finally, within the Atmos-equipped William Holden Theater, the regular “Sound Reel Showcase,” sponsored by Formosa Group, presented eight-minute reels from films likely to be in consideration for a Best Sound Oscar, MPSE Golden Reel and CAS Awards, including A Quiet Place (Paramount) introduced by Erik Aadahl, Black Panther introduced by Steve Boeddecker, Deadpool 2 introduced by Martyn Zub, Mile 22 introduced by Dror Mohar, Venom introduced by Will Files, Goosebumps 2 introduced by Sean McCormack, Operation Finale introduced by Scott Hecker, and Jane introduced by Josh Johnson.

Main image: The Sound of First Man panel — Ai-Ling Lee (left), Mildred Iatrou Morgan & Tom Cross.

All photos copyright of Mel Lambert


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

 

Sony Pictures Post adds three theater-style studios

Sony Pictures Post Production Services has added three theater-style studios inside the Stage 6 facility on the Sony Pictures Studios lot in Culver City. All studios feature mid-size theater environments and include digital projectors and projection screens.

Theater 1 is set up for sound design and mixing with two Avid S6 consoles and immersive Dolby Atmos capabilities, while Theater 3 is geared toward sound design with a single S6. Theater 2 is designed for remote visual effects and color grading review, allowing filmmakers to monitor ongoing post work at other sites without leaving the lot. Additionally, centralized reception and client services facilities have been established to better serve studio sound clients.

Mix Stage 6 and Mix Stage 7 within the sound facility have been upgraded, each featuring two S6 mixing consoles, six Pro Tools digital audio workstations, Christie digital cinema projectors, 24 x 13 projection screens and a variety of support gear. The stages will be used to mix features and high-end television projects. The new resources add capacity and versatility to the studio’s sound operations.

Sony Pictures Post Production Services now has 11 traditional mix stages, the largest being the Cary Grant Theater, which seats 344. It also has mix stages dedicated to IMAX and home entertainment formats. The department features four sound design suites, 60 sound editorial rooms, three ADR recording studios and three Foley stages. Its Barbra Streisand Scoring Stage is among the largest in the world and can accommodate a full orchestra and choir.

Behind the Title: Sonic Union’s executive creative producer Halle Petro

This creative producer bounces between Sonic Union’s two New York locations, working with engineers and staff.

NAME: Halle Petro

COMPANY: New York City’s Sonic Union (@SonicUnionNYC)

CAN YOU DESCRIBE YOUR COMPANY?
Sonic Union works with agencies, brands, editors, producers and directors for creative development in all aspects of sound for advertising and film. Sound design, production sound, immersive and VR projects, original music, broadcast and Dolby Atmos mixes. If there is audio involved, we can help.

WHAT’S YOUR JOB TITLE?
Executive Creative Producer

WHAT DOES THAT ENTAIL?
My background is producing original music and sound design, so the position was created with my strengths in mind — to act as a creative liaison between our engineers and our clients. Basically, that means speaking to clients and fleshing out a project before their session. Our scheduling producers love to call me and say, “So we have this really strange request…”

Sound is an asset to every edit, and our goal is to be involved in projects at earlier points in production. Along with our partners, I also recruit and meet new talent for adjunct and permanent projects.

I also recently launched a sonic speaker series at Sonic Union’s Bryant Park location, which has so far featured female VR directors Lily Baldwin and Jessica Brillhart, a producer from RadioLab and a career initiative event with more to come for fall 2018. My job allows me to wear multiple hats, which I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I have no desk! I work between both our Bryant Park and Union Square studios to be in and out of sessions with engineers and speaking to staff at both locations. You can find me sitting in random places around the studio if I am not at client meetings. I love the freedom in that, and how it allows me to interact with folks at the studios.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Recently, I was asked to participate on the AICP Curatorial Committee, which was an amazing chance to discuss and honor the work in our industry. I love how much there always is to learn from the way folks from different disciplines approach and participate in a project’s creative process. Being on that committee taught me so much.

WHAT’S YOUR LEAST FAVORITE?
There are too many tempting snacks around the studios ALL the time. As a sucker for chocolate, my waistline hates my job.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like mornings before I head to the studio — walking clears my mind and allows ideas to percolate.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a land baroness hosting bands in her barn! (True story: my dad calls me “The Land Baroness.”)

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Well, I sort of fell into it. Early on I was a singer and performer who also worked a hundred jobs. I worked for an investment bank, as a travel concierge and celebrity assistant, all while playing with my band and auditioning. Eventually after a tour, I was tired of doing work that had nothing to do with what I loved, so I began working for a music company. The path unveiled itself from there!

Evelyn

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Sprint’s 2018 Super Bowl commercial Evelyn. I worked with the sound engineer to discuss creative ideas with the agency ahead of and during sound design sessions.

A film for Ogilvy: I helped source and record live drummers and created/produced a fluid composition for the edit with our composer.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We are about to start working on a cool project with MIT and the NY Times.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Probably podcasts and GPS, but I’d like to have the ability to say if the world lost power tomorrow, I’d be okay in the woods. I’d just be lost.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Usually there is a selection of playlists going at the studios — I literally just requested Dolly Parton. Someone turned it off.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Cooking, gardening and horseback riding. I’m basically 75 years old.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music on brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for the Ad Council, as well as on Resistance Radio for Amazon Studios and The Man in the High Castle, which collectively won multiple Cannes Lion, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.

Sim Post NY expands audio offerings, adds five new staffers

Sim Post in New York is in growth mode. They recently expanded their audio for TV and film services and boosted their post team with five new hires. Following the recent addition of a DI theater to its New York location, Sim is building three audio suites, a voiceover room and support space for the expanded audio capabilities.

Primetime Emmy award-winner Sue Pelino joins Sim as a senior re-recording mixer. Over her career, Pelino has been nominated for 10 Primetime Emmy Awards, most recently winning her third Emmy in 2017 for Outstanding Sound Mixing for her work on the 2017 Rock & Roll Hall of Fame Induction Ceremony (HBO). Project highlights include performance series such as VH1 Sessions at West 54th, Tony Bennett: An American Classic, Alicia Keys — Unplugged, Tupac: Resurrection and Elton John: The Red Piano.

Dan Ricci also joins the Sim audio department as a re-recording mixer. A graduate of the Berklee College of Music, his prior work experience includes time at Sony Music, and his credits include Comedians in Cars Getting Coffee and the Grammy-nominated Jerry Before Seinfeld Netflix special. Ricci has worked extensively with Dolby Atmos and the immersive technologies involved in VR content creation.

Ryan Schumer rounds out Sim New York’s audio department as an assistant audio engineer. Schumer has a bachelor’s degree in jazz commercial music, with a concentration in audio recording technology, from Five Towns College on Long Island.

Stephanie Pacchiano joins Sim as a finishing producer, following a 10-year stint at Broadway Video where she provided finishing and delivery services for a robust roster of clients. Highlights include Jerry Seinfeld’s Comedians in Cars Getting Coffee, Atlanta, Portlandia, Documentary Now! and delivering Saturday Night Live to over 25 domestic and international platforms.

Kassie Caffiero joins Sim as VP, business development, east coast sales. She brings with her over 25 years of post experience. A graduate of Queens College with a degree in communication arts, Caffiero began her post career in the mid-1980s and found herself working on CBS TV series. Caffiero’s experience managing the scheduling, operations and sales departments at major post facilities led her to the role of VP of post production at Sony Music Studios in New York City for 10 years. This was followed by a stint at Creative Group in New York for five years and most recently Broadway Video, also in New York, for six years.

Sim Post, a division of Sim, provides end-to-end solutions for TV and feature film production and post production in LA, Vancouver, Toronto, New York and Atlanta.

Netflix’s Lost in Space: New sounds for a classic series

By Jennifer Walden

Netflix’s Lost in Space series, a remake of the 1965 television show, is a playground for sound. In the first two episodes alone, the series introduces at least five unique environments, including an alien planet, a whole world of new tech — from wristband communication systems to medical analysis devices — new modes of transportation, an organic-based robot lifeform and its correlating technologies, a massive explosion in space and so much more.

It was a mission not easily undertaken, but if anyone could manage it, it was four-time Emmy Award-winning supervising sound editor Benjamin Cook of 424 Post in Culver City. He’s led the sound teams on series like Starz’s Black Sails, Counterpart and Magic City, as well as HBO’s The Pacific, Rome and Deadwood, to name a few.

Benjamin Cook

Lost in Space was a reunion of sorts for members of the Black Sails post sound team. Making the jump from pirate ships to spaceships were sound effects editors Jeffrey Pitts, Shaughnessy Hare, Charles Maynes, Hector Gika and Trevor Metz; Foley artists Jeffrey Wilhoit and Dylan Tuomy-Wilhoit; Foley mixer Brett Voss; and re-recording mixers Onnalee Blank and Mathew Waters.

“I really enjoyed the crew on Lost in Space. I had great editors and mixers — really super-creative, top-notch people,” says Cook, who also had help from co-supervising sound editor Branden Spencer. “Sound effects-wise there was an enormous amount of elements to create and record. Everyone involved contributed. You’re establishing a lot of sounds in those first two episodes that are carried on throughout the rest of the season.”

Soundscapes
So where does one begin on such a sound-intensive show? The initial focus was on the soundscapes, such as the sound of the alien planet’s different biomes, and the sound of different areas on the ships. “Before I saw any visuals, the showrunners wanted me to send them some ‘alien planet sounds,’ but there is a huge difference between Mars and Dagobah,” explains Cook. “After talking with them for a bit, we narrowed down some areas to focus on, like the glacier, the badlands and the forest area.”

For the forest area, Cook began by finding interesting snippets of animal, bird and insect recordings, like a single chirp or little song phrase that he could treat with pitching or other processing to create something new. Then he took those new sounds and positioned them in the sound field to build up beds of creatures to populate the alien forest. In that initial creation phase, Cook designed several tracks, which he could use for the rest of the season. “The show itself was shot in Canada, so that was one of the things they were fighting against — the showrunners were pretty conscious of not making the crash planet sound too Earthly. They really wanted it to sound alien.”
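The repitching Cook describes — taking a single chirp and shifting it until it becomes something new — works like varispeed tape: resampling a recording raises or lowers its pitch while shortening or stretching it. As a hypothetical sketch of that idea (real editorial work would use DAW plug-ins, not hand-rolled code), assuming NumPy:

```python
import numpy as np

def pitch_shift_semitones(x: np.ndarray, semitones: float) -> np.ndarray:
    """Naive varispeed pitch shift by resampling: reading the signal
    faster raises pitch but also shortens it (and vice versa)."""
    ratio = 2.0 ** (semitones / 12.0)        # frequency ratio per semitone
    idx = np.arange(0.0, len(x) - 1, ratio)  # fractional read positions
    return np.interp(idx, np.arange(len(x)), x)

# A 220 Hz tone pitched up an octave becomes 440 Hz at half the length.
sr = 48_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
up = pitch_shift_semitones(tone, 12)
```

Layering several such shifted copies, panned to different positions in the sound field, is one way to turn a handful of bird or insect chirps into a bed of unfamiliar creatures; more sophisticated pitch shifters decouple pitch from duration, which this varispeed approach deliberately does not.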

Another huge aspect of the series’ sound is the communication systems. The characters talk to each other through the headsets in their spacesuit helmets, and through wristband communications. Each family has their own personal ship, called a Jupiter, which can contact other Jupiter ships through shortwave radios. They use the same radios to communicate with their all-terrain vehicles, called rovers. Cook notes these ham radios had an intentional retro feel. The Jupiters can send/receive long-distance transmissions from the planet’s surface to the main ship, called Resolute, in space. The families can also communicate with their Jupiter’s onboard systems.

Each mode of communication sounds different and was handled differently in post. Some processing was handled by the re-recording mixers, and some was created by the sound editorial team. For example, in Episode 1 Judy Robinson (Taylor Russell) is frozen underwater in a glacial lake. Whenever the shot cuts to Judy’s face inside her helmet, the sound is very close and claustrophobic.

Judy’s voice bounces off the helmet’s face-shield. She hears her sister through the headset and it’s a small, slightly futzed speaker sound. The processing on both Judy’s voice and her sister’s voice sounds very distinct, yet natural. “That was all Onnalee Blank and Mathew Waters,” says Cook. “They mixed this show, and they both bring so much to the table creatively. They’ll do additional futzing and treatments, like on the helmets. That was something that Onna wanted to do, to make it really sound like an ‘inside a helmet’ sound. It has that special quality to it.”

On the flipside, the ship’s voice was a process that Cook created. Co-supervisor Spencer recorded the voice actor’s lines in ADR and then Cook added vocoding, EQ futz and reverb to sell the idea that the voice was coming through the ship’s speakers. “Sometimes we worldized the lines by playing them through a speaker and recording them. I really tried to avoid too much reverb or heavy futzing knowing that on the stage the mixers may do additional processing,” he says.
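The “futzing” mentioned throughout these treatments boils down to band-limiting a clean voice the way a small speaker would, often with a touch of distortion for grit. The sketch below is a minimal illustration of that idea, not any mixer’s actual chain; it assumes SciPy, and the corner frequencies and drive amount are invented for the example:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def futz(x: np.ndarray, sr: int,
         lo: float = 300.0, hi: float = 3400.0, drive: float = 4.0) -> np.ndarray:
    """Crude small-speaker treatment: band-limit the voice like a tiny
    speaker driver, then soft-clip with tanh for a little 'grit'."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, x)
    return np.tanh(drive * band) / np.tanh(drive)  # normalized soft clip

# Two-tone stand-in for a voice: low "body" (100 Hz) plus presence (1 kHz).
sr = 48_000
t = np.arange(sr) / sr
voice = 0.5 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
futzed = futz(voice, sr)   # the 100 Hz body is largely filtered out
```

The band-pass strips the low body and airy top end that a helmet headset or ship speaker could never reproduce, and the soft clip adds the over-modulated edge; as Cook notes, editorial kept such processing light, since the mixers often applied their own on the stage.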

In Episode 1, Will Robinson (Maxwell Jenkins) finds himself alone in the forest. He tries to call his father, John Robinson (Toby Stephens — a Black Sails alumnus as well) via his wristband comm system but the transmission is interrupted by a strange, undulating, vocal-like sound. It’s interference from an alien ship that had crashed nearby. Cook notes that the interference sound required thorough experimentation. “That was a difficult one. The showrunners wanted something organic and very eerie, but it also needed to be jarring. We did quite a few versions of that.”

For the main element in that sound, Cook chose whale sounds for their innate pitchy quality. He manipulated and processed the whale recordings using Symbolic Sound’s Kyma sound design workstation.

The Robot
Another challenging set of sounds were those created for Will Robinson’s Robot (Brian Steele). The Robot makes dying sounds, movement sounds and face-light sounds when it’s processing information. It can transform its body to look more human. It can use its hands to fire energy blasts or as a tool to create heat. It says, “Danger, Will Robinson,” and “Danger, Dr. Smith.” The Robot is sometimes a good guy and sometimes a bad guy, and the sound needed to cover all of that. “The Robot was a job in itself,” says Cook. “One thing we had to do was to sell emotion, especially for his dying sounds and his interactions with Will and the family.”

One of Cook’s trickiest feats was to create the proper sense of weight and movement for the Robot, and to portray the idea that the Robot was alive and organic but still metallic. “It couldn’t be earthly technology. Traditionally for robot movement you will hear people use servo sounds, but I didn’t want to use any kind of servos. So, we had to create a sound with a similar aesthetic to a servo,” says Cook. He turned to the Robot’s Foley sounds, and devised a processing chain to heavily treat those movement tracks. “That generated the basic body movement for the Robot and then we sweetened its feet with heavier sound effects, like heavy metal clanking and deeper impact booms. We had a lot of textures for the different surfaces like rock and foliage that we used for its feet.”

The Robot’s face lights change color to let everyone know if it’s in good-mode or bad-mode. But there isn’t any overt sound to emphasize the lights as they move and change. If the camera is extremely close-up on the lights, then there’s a faint chiming or tinkling sound that accentuates their movement. Overall though, there is a “presence” sound for the Robot, an undulating tone that’s reminiscent of purring when it’s in good-mode. “The showrunners wanted a kind of purring sound, so I used my cat purring as one of the building block elements for that,” says Cook. When the Robot is in bad-mode, the sound is anxious, like a pulsing heartbeat, to set the audience on edge.

It wouldn’t be Lost in Space without the Robot’s iconic line, “Danger, Will Robinson.” Initially, the showrunners wanted that line to sound as close to the original 1960s delivery as possible. “But then they wanted it to sound unique too,” says Cook. “One comment was that they wanted it to sound like the Robot had metallic vocal cords. So we had to figure out ways to incorporate that into the treatment.” The vocal processing chain used several tools, from EQ, pitching and filtering to modulation plug-ins like Waves Morphoder and Dehumaniser by Krotos. “It was an extensive chain. It wasn’t just one particular tool; there were several of them,” he notes.

There are other sound elements that tie into the original 1960s series. For example, when Maureen Robinson (Molly Parker) and husband John are exploring the wreckage of the alien ship they discover a virtual map room that lets them see into the solar system where they’ve crashed and into the galaxy beyond. The sound design during that sequence features sound material from the original show. “We treated and processed those original elements until they’re virtually unrecognizable, but they’re in there. We tried to pay tribute to the original when we could, when it was possible,” says Cook.

Other sound highlights include the Resolute exploding in space, which caused massive sections of the ship to break apart and collide. For that, Cook says contact microphones were used to capture the sound of tin cans being ripped apart. “There were so many fun things in the show for sound. From the first episode with the ship crash and it sinking into the glacier to the black hole sequence and the Robot fight in the season finale. The show had a lot of different challenges and a lot of opportunities for sound.”

Lost in Space was mixed in the Anthony Quinn Theater at Sony Pictures in 7.1 surround. Interestingly, the show was delivered in Dolby’s Home Atmos format. Cook explains, “When they booked the stage, the producers weren’t sure if we were going to do the show in Atmos or not. That was something they decided to do later so we had to figure out a way to do it.”

They mixed the show in Atmos while referencing the 7.1 mix and then played those mixes back in a Dolby Home Atmos room to check them, making any necessary adjustments and creating the Atmos deliverables. “Between updates for visual effects and music as well as the Atmos mixes, we spent roughly 80 days on the dub stage for the 10 episodes,” concludes Cook.

Behind the Title: Grey Ghost Music mix engineer Greg Geitzenauer

NAME: Greg Geitzenauer

COMPANY: Minneapolis-based Grey Ghost Music

CAN YOU DESCRIBE YOUR COMPANY?
Side A: Music production, creative direction and licensing for the advertising and marketing industries. Side B: Audio post production for the advertising and marketing industries.

WHAT’S YOUR JOB TITLE?
Senior Mix Engineer

WHAT DOES THAT ENTAIL?
All the hands-on audio post work our clients need — from VO recording, editing, forensic/cleanup work to sound design and final mixing.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The number of times my voice has ended up in a final spot when the script calls for “recording engineer.”

WHAT’S YOUR FAVORITE PART OF THE JOB?
There are some really funny people in this industry. I laugh a lot.

WHAT’S YOUR LEAST FAVORITE?
Working on a particular project so long that I lose perspective on whether the changes being made are helping any more.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I get to work early — the time I get to spend confirming all my shit is together.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Cutting together music for my daughter’s dance team.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was 14 when I found out what a recording engineer did, and I just knew. Audio and technology… it just pushes all my buttons.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Essentia Water, Best Buy, Comcast, Invisalign, 3M and Xcel Energy.

Invisalign

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
An anti-smoking radio campaign that won Radio Mercury and One Show awards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Avid Pro Tools HD, Kensington Expert Mouse trackball and Pentel Quicker-Clicker mechanical pencils.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Reddit and LinkedIn.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Go home.

JoJo Whilden/Hulu

Color and audio post for Hulu’s The Looming Tower

Hulu’s limited series, The Looming Tower, explores the rivalries and missed opportunities that beset US law enforcement and intelligence communities in the lead-up to the 9/11 attacks. Based on the Pulitzer Prize-winning book by Lawrence Wright, who also shares credit as executive producer with Dan Futterman and Alex Gibney, the show’s 10 episodes paint an absorbing, if troubling, portrait of the rise of Osama bin Laden and al-Qaida, and offer fresh insight into the complex people who were at the center of the fight against terrorism.

For The Looming Tower’s sound and picture post team, the show’s sensitive subject matter and blend of dramatizations and archival media posed significant technical and creative challenges. Colorist Jack Lewars and online editor Jeff Cornell of Technicolor PostWorks New York were tasked with integrating grainy, run-and-gun news footage dating back to 1998 with crisply shot, high-resolution original cinematography. Supervising sound designer/effects mixer Ruy García and re-recording mixer Martin Czembor from PostWorks, along with a Foley team from Alchemy Post Sound, were charged with helping to bring disparate environments and action to life, but without sensationalizing or straying from historical accuracy.

L-R: colorist Jack Lewars and editor Jeff Cornell

Lewars and Cornell mastered the series in Dolby Vision HDR, working from the production’s camera original 2K and 3.4K ArriRaw files. Most of the color grading and conforming work was done with a light touch, according to Lewars, as the objective was to adhere to a look that appeared real and unadulterated. The goal was for viewers to feel they are behind the scenes, watching events as they happened.

Where more specific grades were applied, it was done to support the narrative. “We developed different look sets for the FBI and CIA headquarters, so people weren’t confused about where we were,” Lewars explains. “The CIA was working out of the basement floors of a building, so it’s dark and cool — the light is generated by fluorescent fixtures in the room. The FBI is in an older office building — its drop ceiling also has fluorescent lighting, but there is a lot of exterior light, so it’s greener, warmer.”

The show adds to the sense of realism by mixing actual news footage and other archival media with dramatic recreations of those same events. Lewars and Cornell help to cement the effect by manipulating imagery to cut together seamlessly. “In one episode, we matched an interview with Osama bin Laden from the late ‘90s with new material shot with an Arri Alexa,” recalls Lewars. “We used color correction and editorial effects to blend the two worlds.”

Cornell degraded some scenes to make them match older, real-world media. “I took the Alexa material and ‘muddied’ it up by exporting it to compressed SD files and then cutting it back into the master timeline,” he notes. “We also added little digital hits to make it feel like the archival footage.”

While the color grade was subtle and adhered closely to reality, it still packed an emotional punch. That is most apparent in a later episode that includes the attack on the Twin Towers. “The episode starts off in New York early in the morning,” says Lewars. “We have a series of beauty shots of the city and it’s a glorious day. It’s a big contrast to what follows — archival footage after the towers have fallen where everything is a white haze of dust and debris.”

Audio Post
The sound team also strove to remain faithful to real events. García recalls his first conversations about the show’s sound needs during pre-production spotting sessions with executive producer Futterman and editor Daniel A. Valverde. “It was clear that we didn’t want to glamorize anything,” he says. “Still, we wanted to create an impact. We wanted people to feel like they were right in the middle of it, experiencing things as they happened.”

García says that his sound team approached the project as if it were a documentary, protecting the performances and relying on sound effects that were authentic in terms of time and place. “With the news footage, we stuck with archival sounds matching the original production footage and accentuating whatever sounds were in there that would connect emotionally to the characters,” he explains. “When we moved to the narrative side with the actors, we’d take more creative liberties and add detail and texture to draw you into the space and focus on the story.”

He notes that the drive for authenticity extended to crowd scenes, where native speakers were used as voice actors. Crowd sounds for scenes set in the Middle East, for example, were drawn from original recordings made in those regions to ensure local accents were correct.

Much like Lewars’ approach to color, García and his crew used sound to underscore environmental and psychological differences between CIA and FBI headquarters. “We did subtle things,” he notes. “The CIA has more advanced technology, so everything there sounds sharper and newer versus the FBI, where you hear older phones and computers.”

The Foley provided by artists and mixers from Alchemy Post Sound further enhanced differences between the two environments. “It’s all about the story, and sound played a very important role in adding tension between characters,” says Leslie Bloome, Alchemy’s lead Foley artist. “A good example is the scene where CIA station chief Diane Marsh is berating an FBI agent while casually applying her makeup. Her vicious attitude toward the FBI agent combined with the subtle sounds of her makeup created a very interesting juxtaposition that added to the story.”

In addition to footsteps, the Foley team created incidental sounds used to enhance or add dimension to explosions, action and environments. For a scene where FBI agents are inspecting a warehouse filled with debris from the embassy bombings in Africa, artists recorded brick and metal sounds on a Foley stage designed to capture natural ambience. “Normally, a post mixer will apply reverb to place Foley in an environment,” says Foley artist Joanna Fang. “But we recorded the effects in our live room to get the perspective just right as people are walking around the warehouse. You can hear the mayhem as the FBI agents are documenting evidence.”

“Much of the story is about what went wrong, about the miscommunication between the CIA and FBI,” adds Foley mixer Ryan Collison, “and we wanted to help get that point across.”

The soundtrack to the series assumed its final form on a mix stage at PostWorks. Czembor spent weeks mixing dialogue, sound and music elements into what he described as a cinematic soundtrack.

L-R: Martin Czembor and Ruy García

Czembor notes that the sound team provided a wealth of material, but for certain emotionally charged scenes, such as the attack on the USS Cole, the producers felt that less was more. “Danny Futterman’s conceptual approach was to go with almost no sound and let the music and the story speak for themselves,” he says. “That was super challenging, because while you want to build tension, you are stripping it down so there’s less and less and less.”

Czembor adds that music, from composer Will Bates, is used with great effect throughout the series, even though it might go by unnoticed by viewers. “There is actually a lot more music in the series than you might realize,” he says. “That’s because it’s not so ‘musical;’ there aren’t a lot of melodies or harmonies. It’s more textural…soundscapes in a way. It blends in.”

Czembor says that as a longtime New Yorker, working on the show held special resonance for him, and he was impressed with the powerful, yet measured way it brings history back to life. “The performances by the cast are so strong,” he says. “That made it a pleasure to work on. It inspires you to add to the texture and do your job really well.”