Category Archives: Audio

Behind the Title: Butter Music and Sound’s Chip Herter

NAME: Chip Herter

COMPANY: NYC’s Butter Music+Sound/Haystack Music

CAN YOU DESCRIBE YOUR COMPANY?
Butter creates custom music compositions for advertising/film/TV. Haystack Music is the internal music catalog from Butter, featuring works from our composers, emerging artists and indie labels.

WHAT’S YOUR JOB TITLE?
Director of Creative Sync Services

WHAT DOES THAT ENTAIL?
The role was designed to be a catch-all for all things creative music licensing. This includes music supervision (curating music for projects from the music industry at large, by way of record labels and publishers) and creative direction from our own Haystack Music library.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Rights management is an understated aspect of the role. It requires the ability to immediately know who the key players are in the ownership of a song, so that we can estimate costs for using it on behalf of our clients and license a track with ease.

WHAT TOOLS DO YOU USE?
The best tool in my toolbox is the team that supports me every day.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I have a keen interest in putting the spotlight on new and emerging music. Be it a new piece written by one of our composers or an emerging act that I want to introduce to a larger audience.

WHAT’S YOUR LEAST FAVORITE?
Losing work to anyone else. It is a natural part of the job, but I can’t help getting personally invested in every project I work on. So the loss feels real, but in turn I always learn something from it.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Morning, for sure. Coffee and music? Yes, please!

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Most likely working for a PR agency. I love to write, and I am good at it (so I’m told).

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was a late bloomer. I was 26 when I took my first internship as a music producer at Crispin Porter+Bogusky. From my first day on the job, I knew this was my higher calling. Anyone who geeks out to the language in a music license like me is destined to do this for a living.

Lexus Innovations

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently worked on a campaign for Lexus with Team One USA called Innovations that was particularly great, and the response to the music was very positive. We also worked on projects for Levi’s, Nescafé, Starbucks and Keurig… coffee likes us, I guess!

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I was fortunate to work with Wieden+Kennedy on their Coca-Cola Super Bowl ad in 2015. I placed a song from the band Hundred Waters, who have gone on to do remarkable things since. The spot carried a very positive message about anti-bullying, and it was great to work on something with such social awareness.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
WiFi, Bluetooth and Spotify.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I don’t take for granted that my favorite pastime — going to concerts — is a fringe benefit of the job. When I am not listening to music, I am almost always listening to a podcast or a standup comedian. I also enjoy acting like a child with my two-year-old son as much as I can. I learn a lot from him about not taking myself too seriously.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting | Recorded Resolution (Per Lens) | Processed Resolution (Equirectangular)
5Kp30   | 2704×2624                      | 4992×2496
3Kp60   | 1568×1504                      | 2880×1440
Stills  | 3104×3000                      | 5760×2880
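For anyone new to the format: equirectangular is the longitude/latitude projection used for 360 video, which is why each processed output above is exactly twice as wide as it is tall. Here is a minimal Python sketch of that mapping, assuming a conventional axis layout (+z forward, +y up); it illustrates the projection itself, not GoPro's internal implementation:

```python
import numpy as np

def direction_to_equirect_pixel(x, y, z, width, height):
    """Map a unit 3D view direction to (column, row) in an
    equirectangular frame. Assumes +z forward, +x right, +y up,
    and a 2:1 frame such as 4992x2496."""
    lon = np.arctan2(x, z)               # -pi..pi, 0 = straight ahead
    lat = np.arcsin(np.clip(y, -1, 1))   # -pi/2..pi/2, 0 = horizon
    col = (lon / (2 * np.pi) + 0.5) * width
    row = (0.5 - lat / np.pi) * height
    return col, row

# Looking straight ahead lands in the center of a 4992x2496 frame.
print(direction_to_equirect_pixel(0, 0, 1, 4992, 2496))  # (2496.0, 1248.0)
```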

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo Thinkpad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.
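Fusion Studio's stabilizer is closed-source, but the "remove rotation" idea is easy to picture in the yaw-only case: rotating an equirectangular frame about the vertical axis is just a horizontal pixel shift with wrap-around. Below is a toy sketch of that one axis, assuming per-frame yaw angles are available from the camera's motion metadata (pitch and roll would need a true spherical remap):

```python
import numpy as np

def remove_yaw(frame, yaw_radians):
    """Counter-rotate one equirectangular frame about the vertical
    axis so the viewer's heading stays fixed. The roll wraps at the
    frame edge, which is the seam you can see rotating in footage."""
    height, width = frame.shape[:2]
    shift = int(round(-yaw_radians / (2 * np.pi) * width))
    return np.roll(frame, shift, axis=1)

# If the camera drifted 10 degrees, shift the image back the other way.
stabilized = remove_yaw(np.zeros((2496, 4992, 3), np.uint8), np.radians(10))
```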

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, setting the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
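Under the hood, the reason the Pan/Tilt/Roll values can simply be matched is that first-order ambisonic audio encodes direction in its X/Y/Z channels, so re-pointing the sound field is a per-sample rotation. A minimal sketch of the yaw case, assuming the common B-format convention of X = front, Y = left, Z = up (pitch and roll work the same way about the other axes):

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw):
    """Rotate a first-order B-format sound field about the vertical
    axis. W (omni) and Z (up) are untouched by a pure yaw turn;
    only the horizontal components mix with each other."""
    x_rot = x * np.cos(yaw) - y * np.sin(yaw)
    y_rot = y * np.cos(yaw) + x * np.sin(yaw)
    return w, x_rot, y_rot, z
```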

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Mixing the sounds of history for Marshall

By Jennifer Walden

Director Reginald Hudlin’s courtroom drama Marshall tells the story of Thurgood Marshall (Chadwick Boseman) during his early career as a lawyer. The film centers on a case Marshall took in Connecticut in the early 1940s. He defended a black chauffeur named Joseph Spell (Sterling K. Brown) who was charged with attempted murder and sexual assault of his rich, white employer Eleanor Strubing (Kate Hudson).

At that time, racial discrimination and segregation were widespread even in the North, and Marshall helped to shed light on racial inequality by taking on Spell’s case and making sure he got a fair trial. It’s a landmark court case that is not only of huge historical consequence but is still relevant today.

Mixers Anna Behlmer and Craig Mann

“Marshall is so significant right now with what’s happening in the world,” says Oscar-nominated re-recording mixer Anna Behlmer, who handled the effects on the film. “It’s not often that you get to work on a biographical film of someone who lived and breathed and did amazing things as far as freedom for minorities. Marshall began the NAACP [Legal Defense Fund] and argued Brown v. Board of Education, stopping the segregation of the schools. So, in that respect, I felt the weight and the significance of this film.”

Oscar-winning supervising sound editor/re-recording mixer Craig Mann handled the dialogue and music. Behlmer and Mann mixed Marshall in 5.1 surround on a Euphonix System 5 console on Stage 2 at Technicolor at Paramount in Hollywood.

In the film, crowds gather on the steps outside the courthouse — a mixture of supporters and opponents shouting their opinions on the case. When dealing with shouting crowds in a film, Mann likes to record the loop group for those scenes outside. “We recorded in Technicolor’s backlot, which gives a nice slap off all the buildings,” says Mann, who miked the group from two different perspectives to capture the feeling that they’re actually outside. For the close-mic rig, Mann used an L-C-R setup with two Schoeps CMC641s for left and right and a CMIT 5U for center, feeding into a TASCAM HSP-82 8-channel recorder.

“We used the CMIT 5U mic because that was the production boom mic and we knew we’d be intermingling our recordings with the production sound, because they recorded some sound on the courthouse stairs,” says Mann. “We matched that up so that it would anchor everything in the center.”

For the distant rig, Mann went with a Sanken CSS-5 set to record in stereo, feeding a Sound Devices 722. Since they were running two setups simultaneously, Mann says they beeped everyone with a bullhorn to get slate sync for the two rigs. Then to match the timing of the chanting with production sound, they had a playback rig with eight headphone feeds out to chosen leaders from the 20-person loop group. “The people wearing headphones could sync up to the production chanting and those without headphones followed along with the people who had them on.”
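A bullhorn beep works exactly like a slate clap: one loud transient common to both rigs. Finding the offset between the two recordings afterward comes down to locating the lag that maximizes their cross-correlation. A rough sketch of that alignment step, assuming both takes have been loaded as mono NumPy arrays at the same sample rate:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def find_sync_offset(ref, other, sample_rate):
    """Return the offset in seconds by which `other` should be
    delayed so its slate beep lines up with the one in `ref`.
    Works best on a short window around the transient rather
    than the full takes."""
    corr = correlate(ref, other, mode="full")
    lags = correlation_lags(len(ref), len(other), mode="full")
    return lags[np.argmax(corr)] / sample_rate
```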

Inside the courtroom, the atmosphere is quiet and tense. Mann recorded the loop group (inside the studio this time) reacting as non-verbally as possible. “We wanted to use the people in the gallery as a tool for tension. We do all of that without being too heavy handed, or too hammy,” he says.

Sound Effects
On the effects side, the Foley — provided by Foley artist John Sievert and his team at JRS Productions in Toronto — was a key element in the courtroom scenes. Each chair creak and paper shuffle plays to help emphasize the drama. Behlmer references a quiet scene in which Thurgood is arguing with the other attorney defending the case, Sam Friedman (Josh Gad). “They weren’t arguing with their voices. Instead, they were shuffling papers and shoving things back and forth. The defendant even asks if everything is ok with them. Those sounds helped to convey what was going on without them speaking,” she says.

You can hear the chair creak as Judge Foster (James Cromwell) leans forward and raises an eyebrow and hear people in the gallery shifting in their seats as they listen to difficult testimony or shocking revelations. “Something as simple as people shifting on the bench to underscore how uncomfortable the moment was, those sounds go a long way when you do a film like this,” says Behlmer.

During the testimony, there are flashback sequences that illustrate each person’s perception of what happened during the events in question. The flashback effect is partially created through the picture (the flashbacks are colored differently) and partially through sound. Mann notes that early on, they made the decision to omit most of the sounds during the flashbacks so that the testimony wouldn’t be overshadowed.

“The spoken word was so important,” adds Behlmer. “It was all about clarity, and it was about silence and tension. There were revelations in the courtroom that made people gasp and then there were uncomfortable pauses. There was a delicacy with which this mix had to be done, especially with regards to Foley. When a film is really quiet and delicate and tense, then every little nuance is important.”

Away from the courthouse, the film has a bit of fun. There’s a jazz club scene in which Thurgood and his friends cut loose for the evening. A band and a singer perform on stage to a packed club. The crowd is lively. Men and women are talking and laughing and there’s the sound of glasses clinking. Behlmer mixed the crowds by following the camera movement to reinforce what’s on-screen.

Music
On the music side, Mann’s challenge was to get the brass — the trumpet and trombone — to sit in a space that didn’t interfere too much with the dialogue. On the other hand, Mann still wanted the music to feel exciting. “We had to get the track all jazz-clubbed up. It was about finding a reverb that was believable for the space. It was about putting the vocals and brass upfront and having the drums and bass be accompaniment.”

Having the stems helped Mann to not only mix the music against the dialogue but to also fit the music to the image on-screen. During the performance, the camera is close-up and sweeping along the band. Mann used the music stems to pan the instruments to match the scene. The shot cuts away from the performance to Thurgood and his friends at a table in the back of the club. Using the stems, Mann could duck out of the singer’s vocals and other louder elements to make way for the dialogue. “The music was very dynamic. We had to be careful that it didn’t interfere too much with the dialogue, but at the same time we wanted it to play.”

On the score, Mann used Exponential Audio’s R4 reverb to set the music back into the mix. “I set it back a bit farther than I normally would have just to give it some space, so that I didn’t have to turn it down for dialogue clarity. It still got to shine, but it was a little distant compared to what it was intended to be.”

Behlmer and Mann feel the mix was pretty straightforward. Their biggest obstacle was the schedule. The film had to be mixed in just ten days. “I didn’t even have pre-dubs. It was just hang and go. I was hearing everything for the first time when I sat down to mix it — final mix it,” explains Behlmer.

With Mann working the music and dialogue faders, co-supervising sound editor Bruce Tanis was supplying Behlmer with elements she needed during the final mix. “I would say Bruce was my most valuable asset. He’s the MVP of Marshall for the effects side of the board,” she says.

On the dialogue side, Mann says his gear MVP was iZotope RX 6. With so many quiet moments, the dialogue was exposed. It played prominently, without music or busy backgrounds to help hide any flaws. And the director wanted to preserve the on-camera performances so ADR was not an option.

“We tried to use alts to work our way out of a few problems, and we were successful. But there were a few shots in the courtroom that began as tight shots on boom and then cut wide, so the boom had to pull back and we had to jump onto the lavs there,” concludes Mann. “Having iZotope to help tie those together, so that the cut was imperceptible, was key.”


Jennifer Walden is a NJ-based audio engineer and writer. Follow her on Twitter @audiojeney.


Blade Runner 2049’s dynamic and emotional mix

By Jennifer Walden

“This film has more dynamic range than any movie we’ve ever mixed,” explains re-recording mixer Doug Hemphill of the Blade Runner 2049 soundtrack. He and re-recording mixer Ron Bartlett, from Formosa Group, worked with director Denis Villeneuve to make sure the audio matched the visual look of the film. From the pounding sound waves of Hans Zimmer and Benjamin Wallfisch’s score to the overwhelming wash of Los Angeles’s street-level soundscape, there’s massive energy in the film’s sonic peaks.

L-R: Ron Bartlett, Denis Villeneuve, Joe Walker, Ben Wallfisch and Doug Hemphill. Credit: Clint Bennett

The first time K (Ryan Gosling) arrives in Los Angeles in the film, the audience is blasted with a Vangelis-esque score that is reminiscent of the original Blade Runner, and that was ultimately the goal there — to envelop the audience in the Blade Runner experience. “That was our benchmark for the biggest, most enveloping sound sequence — without being harsh or loud. We wanted the audience to soak it in. It was about filling out the score, using all the elements in Hans Zimmer’s and Ben Wallfisch’s arsenal there,” says Bartlett, who handled the dialogue and music in the mix.

He and Villeneuve went through a wealth of musical elements — all of which were separated so Villeneuve could pick the ones he liked. His preference gravitated toward the analog synth sounds, like the Yamaha CS-80, which composer Vangelis used in his 1982 Blade Runner score. “We featured those synth sounds throughout the movie,” says Bartlett. “I played with the spatial aspects, spreading certain elements into the room to envelop you in the score. It was very immersive that way.”

Bartlett notes that initially there were sounds from the original Blade Runner in their mix, like huge drum hits from the original score that were converted into 7.1 versions by supervising sound editor Mark Mangini at Formosa Group. Bartlett used those drum hits as punctuation throughout the film, for scene changes and transitions. “Those hits were everywhere. Actually, they’re the first sound in the movie. Then you can hear those big drum hits in the Vegas walk. That Vegas walk had another score with it, but we kept stripping it away until we were down to just those drum hits. It’s so dramatic.”

But halfway into the final mix for Blade Runner 2049, Mangini phoned Bartlett to tell him that the legal department said they couldn’t use any of those sounds from the original film. They’d need to replace them immediately. “Since I’m a percussionist, Mark asked if I could remake the drum hits. I stayed up until 3am and redid them all in my studio in 7.1, and then brought them in and replaced them throughout the movie. Mark had to make all these new spinner sounds and replace those in the film. That was an interesting moment,” reveals Bartlett.

Sounds of the City
Los Angeles 2049 is a multi-tiered city. Each level offers a different sonic experience. The zen-like prayer that’s broadcast at the top level gradually transforms into a cacophony the closer one gets to street-level. Advertisements, announcements, vehicles, music from storefronts and vending machine sounds mix with multi-language crowds — there’s Russian, Vietnamese, Korean, Japanese, and the list goes on. The city is bursting with sound, and Hemphill enhanced that experience by using Cargo Cult’s Spanner on the crowd effects during the scene where K is sitting outside of Bibi’s Bar to put the crowds around the theater and “give the audience a sense of this crush of humanity,” he says.

The city experience could easily be chaotic, but Hemphill and Bartlett made careful choices on the stage to “rack the focus” — determining for the audience what they should be listening to. “We needed to create the sense that you’re in this overpopulated city environment, but it still had to make sense. The flow of the sound is like ‘musique concrète.’ The sounds have a rhythm and movement that’s musical. It’s not random. There’s a flow,” explains Hemphill, who has an Oscar for his work on The Last of the Mohicans.

Bartlett adds that their goal was to keep a sense of clarity as the camera traveled through the street scene. If there was a big, holographic ad in the forefront, they’d focus on that, and as the scene panned away another sound would drive the mix. “We had to delete some of the elements and then move sounds around. It was a difficult scene and we took a long time on it but we’re happy with the clarity.”

On the quiet end of the spectrum, the film’s soundtrack shines. Spaces are defined with textural ambiences and handcrafted reverbs. Bartlett worked with a new reverb called DSpatial created by Rafael Duyos. “Mark Mangini and I helped to develop DSpatial. It’s a very unique reverb,” says Bartlett.

According to the website, DSpatial Reverb is a space modeler and renderer that offers 48 decorrelated outputs. It doesn’t use recorded impulse responses; instead it uses modeled IRs. This allows the user to select and tweak a series of parameters, like surface texture and space size, to model the acoustic and physical characteristics of any room. “It’s a decorrelated reverb, meaning you can add as many channels as you like and pan them into every Dolby Atmos speaker that is in the room. That wasn’t the only reverb we used, but it was the main one we used in specific environments in the film,” says Bartlett.
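DSpatial's room modeling is proprietary, but the decorrelation idea itself is simple to illustrate: give every output channel an impulse response with the same broad decay envelope but independent fine structure, so no two speakers receive copies of the same signal. A toy stand-in using exponentially decaying noise (a real modeled IR would carry the room's geometry and surface detail):

```python
import numpy as np

def decorrelated_irs(n_channels, seconds, sample_rate=48000, rt60=1.2):
    """Generate impulse responses that share one decay envelope but
    use independent noise, so their outputs stay mutually
    decorrelated. Convolving a dry source with each IR gives one
    feed per speaker."""
    n = int(seconds * sample_rate)
    t = np.arange(n) / sample_rate
    envelope = 10 ** (-3 * t / rt60)  # reaches -60 dB at t = rt60
    rng = np.random.default_rng(0)
    return [rng.standard_normal(n) * envelope for _ in range(n_channels)]

irs = decorrelated_irs(n_channels=48, seconds=2.0)  # one per output
```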

In combination with DSpatial, Bartlett used Audio Ease’s Altiverb, FabFilter reverbs and Cargo Cult’s Slapper delay to help create the multifaceted reflections that define the spaces on-screen so well. “We tried to make each space different,” says Bartlett. “We tried to evoke an emotion through the choices of reverbs and delays. It was never just one reverb or delay. I used two or three. It was very interesting creating those textures and creating those rooms.”

For example, in the Wallace Corporation building, the private office of Niander Wallace (Jared Leto) is a cold, lonely space. Water surrounds a central platform; reflections play on the imposing stone walls. “The way that Roger Deakins lit it was just stunning,” says Bartlett. “It really evoked a cool emotion. That’s what is so intangible about what we do, creating those emotions out of sound.” In addition to DSpatial, Altiverb and FabFilter reverbs, he used Cargo Cult’s Slapper delay, which “added a soft rolling, slight echo to Jared Leto’s voice that made him feel a little more God-like. It gave his voice a unique presence without being distracting.”

Another stunning example of Bartlett’s reverb work was K’s entrance into Rick Deckard’s (Harrison Ford) casino hideout. The space is dead quiet; then K opens the door and the sound rings out and slowly dissipates. It conveys the feeling that this is a vast, isolated, and empty space. “It was a combination of three reverbs and a delay that made that happen, so the tail had a really nice shine to it,” says Bartlett.

One of the most difficult rooms to find artistically, says Bartlett, was that of the memory maker, Dr. Ana Stelline (Carla Juri). “Everyone had a different idea of what that dome might sound like. We experimented with four or five different approaches to find a good place with that.”

The reverbs that Bartlett creates are never static. They change to fit the camera perspective. Bartlett needed several different reverb and delay processing chains to define how Dr. Stelline’s voice would react in the environment. For example, “There are some long shots, and I had a longer, more distant reverb. I bled her into the ceiling a little bit in certain shots so that in the dome it felt like the sound was bouncing off the ceiling and coming down at you. When she gets really close to the glass, I wanted to get that resonance of her voice bouncing off of the glass. Then when she’s further in the dome, creating that birthday memory, there is a bit broader reverb without that glass reflection in it,” he says.

On K’s side of the glass, the reverb is tighter to match the smaller dimensions and less reflective characteristics of that space. “The key to that scene was to not be distracting while going in and out of the dome, from one side of the glass to the other,” says Bartlett. “I had to treat her voice a little bit so that it felt like she was behind the glass, but if she was way too muffled it would be too distracting from the story. You have to stay with those characters in the story, otherwise you’re doing a disservice by trying to be clever with your mixing.

“The idea is to create an environment so you don’t feel like someone mixed it. You don’t want to smell the mixing,” he continues. “You want to make it feel natural and cool. If we can tell when we’ve made a move, then we’ll go back and smooth that out. We try to make it so you can’t tell someone’s mixing the sound. Instead, you should just feel like you’re there. The last thing you want to do is to make something distracting. You want to stay in the story. We are all about the story.”

Mixing Tools
Bartlett and Hemphill mixed Blade Runner 2049 at Sony Pictures Post in the William Holden Theater using two Avid S6 consoles running Avid Pro Tools 12.8.2, which features complete Dolby Atmos integration. “It’s nice to have Atmos panners on each channel in Pro Tools. You just click on the channel and the panner pops up. You don’t want to go to just one panner with one joystick all the time so it was nice to have it on each channel,” says Bartlett.

Hemphill feels the main benefit of having the latest gear — the S6 consoles and the latest version of Pro Tools — is that it gives them the ability to carry their work forward. “In times past, before we had this equipment and this level of Pro Tools, we would do temp dubs and then we would scrap a lot of that work. Now, we are working with main sessions all the way from the temp mix through to the final. That’s very important to how this soundtrack was created.”

For instance, the dialogue required significant attention due to the use of practical effects on set, like weather machines for rain and snow. All the dialogue work they did during the temp dubs was carried forward into the final mix. “Production sound mixer Mac Ruth did an amazing job while working in those environments,” explains Bartlett. “He gave us enough to work with and we were able to use iZotope RX 6 to take out noise that was distracting. We were careful not to dig into the dialogue too much because when you start pulling out too many frequencies, you ruin the timbre and quality of the dialogue — the humanness.”

One dialogue-driven scene that made a substantial transformation from temp dub to final mix was the underground sequence in which Freysa (Hiam Abbass) makes a revelation about the replicant child. “The actress was talking in this crazy accent and it was noisy and hard to understand what was happening. It’s a very strong expositional moment in the movie. It’s a very pivotal point,” says Bartlett. They looped the actress for that entire scene and worked to get her ADR performance to sound natural in context to the other sounds. “That scene came such a long way, and it really made the movie for me. Sometimes you have to dig a little deeper to tell the story properly but we got it. When K sits down in the chair, you feel the weight. You feel that he’s crushed by that news. You really feel it because the setup was there.”

Blade Runner 2049 is ultimately a story that questions the essence of human existence. While equipment and technique were an important part of the post process, in the end it was all about conveying the emotion of the story through the soundtrack.

“With Denis [Villeneuve], it’s very much feel-based. When you hear a sound, it brings to mind memories immediately. Denis is the type of director that is plugged into the emotionality of sound usage. The idea more than anything else is to tell the story, and the story of this film is what it means to be a human being. That was the fuel that drove me to do the best possible work that I could,” concludes Hemphill.


Jennifer Walden is a NJ-based writer and audio engineer. Follow her on Twitter @audiojeney.


Review: Blackmagic Resolve 14

By David Cox

Blackmagic has released Version 14 of its popular DaVinci Resolve “color grading” suite, following a period of open public beta development. I put color grading in quotes, because one of the most interesting aspects about the V14 release is how far-reaching Resolve’s ambitions have become, beyond simply color grading.

Fairlight audio within Resolve.

Prior to being purchased by Blackmagic, DaVinci Resolve was one of a small group of high-end color grading systems being offered in the industry. Blackmagic then extended the product to include editing, and Version 14 offers several updates in this area, particularly around speed and fluidity of use. A surprise addition is the incorporation of Fairlight Audio — a full-featured audio mixing platform capable of producing feature film quality 3D soundscapes. It is not just an external plugin, but an integrated part of the software.

This review concentrates on the color finishing aspects of Resolve 14, and on first view the core color tools remain largely unchanged save for a handful of ergonomic improvements. This is not surprising given that Resolve is already a mature grading product. However, Blackmagic has added some very interesting tools and features clearly aimed at enabling colorists to broaden their creative control. I have been a long-time advocate of the idea that a colorist doesn’t change the color of a sequence, but changes the mood of it. Manipulating the color is just one path to that result, so I am happy to see more creatively expansive facilities being added.

Face Refinement
One new feature that epitomizes Blackmagic’s development direction is the Face Refinement tool. It provides features to “beautify” a face and underlines two interesting development points. Firstly, it shows an intention by the developers to create a platform that allows users to extend their creative control across the traditional borders of “color” and “VFX.”

Secondly, such a feature incorporates more advanced programming techniques that seek to recognize objects in the scene. Traditional color and keying tools simply replace one color for another, without “understanding” what objects those colors are attached to. This next step toward a more intelligent diagnosis of scene content will lead to some exciting tools and Blackmagic has started off with face-feature tracking.

Face Refinement

The Face Refinement function works extremely well where it recognizes a face. There is no manual intervention — the tool simply finds a face in the shot and tracks all the constituent parts (eyes, lips, etc). Where there is more than one face detected, the system offers a simple box selector for the user to specify which face to track. Once the analysis is complete, the user has a variety of simple sliders to control the smoothness, color and detail of the face overall, but also specific controls for the forehead, cheeks, chin, lips, eyes and the areas around and below the eyes.
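Blackmagic has not published how the tracker works, but the overall shape of the feature (find the face automatically, then confine adjustments to it) can be sketched with stock OpenCV parts. A simplified, hypothetical stand-in follows; the real tool tracks individual facial features and exposes per-region controls rather than one blended blur:

```python
import cv2

def soften_faces(frame, strength=0.5):
    """Detect faces and blend an edge-preserving blur back into each
    detected region. strength=0 leaves the frame untouched; 1 uses
    the fully smoothed version."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = frame[y:y + h, x:x + w]
        smoothed = cv2.bilateralFilter(face, 9, 75, 75)
        frame[y:y + h, x:x + w] = cv2.addWeighted(
            smoothed, strength, face, 1 - strength, 0)
    return frame
```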

I found the face de-shine function particularly successful. A light touch with the controls yields pleasing results very quickly. A heavy touch is what you need if you want to make someone look like an android. I liked the fact that you can go negative with some controls and make a face look more haggard!

In my tests, the facial tracking was very effective for properly framed faces, even those with exaggerated expressions, headshakes and so on. But it would fail where the face became partially obscured, such as when the camera panned off the face. This led to all the added improvements popping off mid shot. While the fully automatic operation makes it quick and simple to use, it affords no opportunity for the user to intervene and assist the facial tracking if it fails. All things considered though, this will be a big help and time saver for the majority of beauty work shots.

Resolve FX
New for Resolve 14 are a myriad of built-in effects called Resolve FX, all GPU-accelerated and available to be added in the edit “page” directly to clips, or in the color page attached to nodes. They are categorized into Blurs, Light, Color, Refine, Repair, Stylize, Texture and Warp. A few particularly caught my eye. For example, in “Color,” the color compressor brings together nearby colors to a central hue. This is handy for unifying the colors of an unevenly lit client logo into their precise brand reference, or for dealing with blotchy skin. There is also a color space transform tool that enables LUT-less conversion between all the major color “spaces.”
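To make the color compressor idea concrete: hues that fall within some tolerance of a chosen center are pulled toward it, with a strength control. A hypothetical sketch of that behavior in OpenCV's HSV space (the parameter names and ranges here are my own, not Resolve's):

```python
import cv2
import numpy as np

def compress_hues(bgr, target_hue=15, tolerance=30, amount=0.7):
    """Pull hues within `tolerance` of `target_hue` toward it.
    OpenCV hue runs 0-179, so the difference is computed circularly.
    amount=1 snaps matching hues exactly onto the target."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    diff = (hsv[..., 0] - target_hue + 90) % 180 - 90  # signed, -90..90
    mask = np.abs(diff) < tolerance
    hsv[..., 0] = np.where(mask, hsv[..., 0] - amount * diff, hsv[..., 0]) % 180
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```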

Color

The dehaze function derives a depth map by some mysterious magic to help improve contrast over distance. The “Light” collection includes a decent lens flare that allows plenty of customizing. “Stylize” creates watercolor and outline looks, while “Texture” includes a film grain effect with several film-gauge presets. I liked the implementation of the new Warp function. Rather than using grids or splines, the user simply places “pins” in the image to drag certain areas around. Shift-adding a pin defines a locked position immune from dragging. All simple, intuitive and realtime, or close to it.
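Pin-based warping is a classic scattered-data interpolation problem: dragged pins define displacements, locked pins contribute zero displacement, and everything in between is interpolated smoothly. A rough sketch of the idea using SciPy's thin-plate-spline interpolator (not Resolve's implementation, and evaluated per pixel, so it is slow at full resolution):

```python
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def pin_warp(image, pins_from, pins_to, locked):
    """Warp `image` so each pin in `pins_from` lands on `pins_to`,
    while `locked` pins stay put. All pin arrays are float arrays
    of shape (N, 2) in (x, y) pixel coordinates."""
    src = np.vstack([pins_from, locked])
    disp = np.vstack([pins_to - pins_from, np.zeros_like(locked)])
    field = RBFInterpolator(src, disp, kernel="thin_plate_spline")
    h, w = image.shape[:2]
    grid = np.stack(np.meshgrid(np.arange(w), np.arange(h)), -1)
    # cv2.remap samples source coordinates, so subtract the displacement
    coords = grid - field(grid.reshape(-1, 2)).reshape(h, w, 2)
    coords = coords.astype(np.float32)
    return cv2.remap(image, coords[..., 0], coords[..., 1], cv2.INTER_LINEAR)
```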

Multi-Skilled and Collaborative Workflows
A dilemma for the Resolve developers is likely to be where to draw the line between editing, color and VFX. Blackmagic also develops Fusion, so they have the advanced side of VFX covered. But in the middle, there are editors who want to make funky transitions and title sequences, and colorists who use more effects, mattes and tracking. Resolve runs out of ability in these areas quite quickly, and this forces the more adventurous editor or colorist into the alien environment of Fusion. The new features of Resolve help in this area, but a few additions, such as better keyframing of effects and an easier ability to reference other timeline layers in the node panel, could help to extend Resolve’s ability to handle many common VFX-ish demands.

Some have criticized Blackmagic for turning Resolve into a multi-discipline platform, suggesting that this will create an industry of “jack of all trades and masters of none.” I disagree with this view for several reasons. Firstly, if an artist wants to major in a specific discipline, having a platform that can do more does not impede them. Secondly, I think the majority of content (if you include YouTube, etc.) is created by a single person or small teams, so the growth of multi-skilled post production people is simply an inevitable and logical progression which Blackmagic is sensibly addressing.

Edit

But for professional users within larger organisations, the cross-discipline features of Resolve take on a different meaning when viewed in the context of “collaboration.” Resolve 14 permits editors to edit, colorists to color and sound mixers to mix, all using different installations of the same platform, sharing the same media and contributing to the same project, even the same timeline. On the face of it, this promises to remove “conforms” and eradicate wasteful import/export processes and frustrating compatibility issues, while enabling parallel workflows across editing, color grading and audio.

For fast-turnaround projects, or projects where client approval cannot be sought until the project progresses beyond a “rough” stage, the potential advantages are compelling. Of course, the minor hurdle to get over will be to persuade editors and audio mixers to adopt Resolve as their chosen weapon. If they do, Blackmagic might well be on the way to providing collaborative utopia.

Summing Up
Resolve 14 is a massive upgrade from Resolve 12 (there wasn’t a Resolve 13 — who would have thought that a company called Blackmagic might be superstitious?). It provides a substantial broadening of ability that will suit multi-skilled smaller outfits and also fit as a grading/finishing platform and collaborative backbone in larger installations.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


MPSE to present John Paul Fasal with Career Achievement Award

The Motion Picture Sound Editors (MPSE) will present sound designer and sound recordist John Paul Fasal with its 2018 MPSE Career Achievement Award. A 30-year veteran of the sound industry, Fasal has contributed to more than 150 motion pictures and is best known for his work in field recording.

Among his many credits are Top Gun, Master and Commander: The Far Side of the World, Interstellar, The Dark Knight, American Sniper and this year’s Dunkirk. Fasal will receive his award at the MPSE Golden Reel Awards ceremony, February 18, 2018 in Los Angeles.

“John is a master of his craft, an innovator who has pioneered many new recording techniques, and a restless, creative spirit who will stop at nothing to capture the next great sound,” says MPSE president Tom McCarthy.

The MPSE Career Achievement Award recognizes “sound artists who have distinguished themselves by meritorious works as both an individual and fellow contributor to the art of sound for feature film, television and gaming and for setting an example of excellence for others to follow.”

Fasal joins a distinguished list of sound innovators, including 2017 Career Achievement recipient Harry Cohen, Richard King, John Roesch, Skip Lievsay, Randy Thom, Larry Singer, Walter Murch and George Watters II.

“Sound artists typically work behind the scenes, out of the limelight, and so to be recognized in this way by my peers is humbling,” says Fasal. “It is an honor to join the past recipients of this award, many of whom are both colleagues and friends.”

Fasal began his career as a musician and songwriter, but gravitated toward post production sound in the 1980s. Among his first big successes was Top Gun for which he recorded and designed many of the memorable jet aircraft sound effects. He has been a member of the sound teams on several films that have won Academy Awards in sound categories, including Inception, The Dark Knight, Letters From Iwo Jima, Master and Commander: The Far Side of the World, The Hunt for Red October and Pearl Harbor.

Fasal has worked as a sound designer and recordist throughout his career, but in recent years has increasingly focused on field recording. He enjoys especially high regard for his ability to capture the sounds of planes, ships, automobiles and military weaponry. “The equipment has changed dramatically over the course of my career, but the philosophy behind the craft remains the same,” he says. “It still involves the layering of sounds to create a sonic picture and help tell the story.”



Creating sounds for Battle of the Sexes

By Jennifer Walden

Fox Searchlight’s biographical sports drama Battle of the Sexes delves into the personal lives of tennis players Bobby Riggs (Steve Carell) and Billie Jean King (Emma Stone) during the time surrounding their famous televised tennis match in 1973, known as the Battle of the Sexes. Directors Jonathan Dayton and Valerie Faris faithfully recreated the sports event using real-life tennis players Vince Spadea and Kaitlyn Christian as body doubles for Carell and Stone, and they used the original event commentary by announcer Howard Cosell to add an air of authenticity.

Oscar-nominated supervising sound editors Ai-Ling Lee (also sound designer/re-recording mixer) and Mildred Iatrou, from Fox Studios Post Production in LA, began their work during the director’s cut. Lee was on-site at Hula Post providing early sound support to film editor Pamela Martin, feeding her era-appropriate effects, like telephones, cars and cameras, and working on scenes that the directors wanted to tackle right away.

For director Dayton, the first priority scene was Billie Jean’s trip to a hair salon where she meets Marilyn Barnett (Andrea Riseborough). It’s the beginning of a romantic relationship, and Dayton wanted to explore the idea of ASMR (autonomous sensory meridian response, mainly an aural experience that causes the skin on the scalp and neck to tingle in a pleasing way) to make the haircut feel close and sensual. Lee explains that ASMR videos are popular on YouTube, and topping the list of experience triggers are hair dryers blowing, cutting hair and running fingers through hair. After studying numerous examples, Lee discovered “the main trick to ASMR is to have the sound source be very close to the mic and to use slow movements,” she says. “If it’s cutting hair, the scissors move very slow and deliberate, and they’re really close to the mic and you have close-up breathing.”

Lee applied those techniques to the recordings she made for the hair salon scene. Using a Sennheiser MKH 8040 and MKH 30 in an MS setup, Lee recorded the up-close sound of slowly cutting a wig’s hair. She also recorded several hair dryers slowly panning back and forth to find the right sound and speed that would trigger an ASMR feeling. “For the hairdryers, you don’t want an intense sound or something that’s too loud. The right sound is one that’s soothing. A lot of it comes down to just having quiet, close-up, sensual movement,” she says.

Ai-Ling Lee capturing the sound of hair being cut.

Recording the sounds was the easy part. Getting that experience to translate in a theater environment was the challenge because most ASMR videos are heard through headphones as a binaural, close experience. “In the end, I just took the mid-side recording and mixed it by slowly panning the sound across the front speakers and a little bit into the surrounds,” explains Lee. “Another trick to making that scene work was to slowly melt away the background sounds of the busy salon, so that it felt like it was just the two of them there.”
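Mid-side lends itself to exactly this kind of after-the-fact placement, because the decode is nothing more than sum and difference, with a width control falling out for free. The standard decode, as a quick sketch:

```python
def ms_decode(mid, side, width=1.0):
    """Standard mid-side decode: L = M + w*S, R = M - w*S.
    width=0 collapses to mono (mid only); width=1 is the full
    stereo image; values in between narrow the spread."""
    left = mid + width * side
    right = mid - width * side
    return left, right
```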

Updating the Commentary
As Lee was working on the ASMR sound experience, Iatrou was back at Fox Studios working on another important sequence — the final match. The directors wanted to have Howard Cosell’s original commentary play in the film, but the only recording available was a mixed mono track of the broadcast, complete with cheering crowds and a marching band playing underneath.

“At first, the directors sent us the pieces that they wanted to use and we brightened it a little because it was very dull sounding. They also asked us if we could get rid of the music, which we were not able to do,” says Iatrou.

As a work-around, the directors asked Iatrou to record Cosell’s lines using a soundalike. “We did a huge search. Our ADR/group leader Johnny Gidcomb at Loop De Loop held auditions of people who could do Howard Cosell. We did around 50 auditions and sent those to the directors. Finally, we got one guy they really liked.”

L-R: Mildred Iatrou and Ai-Ling Lee.

They spent a day recording the Cosell soundalike, using the same make and model mic that was used by Cosell and nearly all newscasters of that period — the Electro-Voice 635A Apple. Even with the “new” Cosell and the proper mic, the directors felt it still wasn’t right. “They really wanted to use Howard Cosell,” says Iatrou. “We ended up using all Howard Cosell in the film except for a word or a few syllables here and there, which we cut in from the Cosell soundalike. During the mix, re-recording mixer Ron Bartlett (dialogue/music) had to do very severe noise reduction in the segments with the music underneath. Then we put other music on top to help mask the degree of noise reduction that we did.”

Another challenge to the Howard Cosell commentary was that he wasn’t alone. Rosie Casals was also a commentator at the event. In the film, Rosie is played by actress Natalie Morales. Iatrou recorded Morales performing Casals’ commentary using the Electro-Voice 635A Apple mic. She then used iZotope RX 6’s EQ Match feature to help her lines sound similar to Cosell’s. “For the final mix, Ron Bartlett put more time and energy into getting the EQ to match. It’s interesting because we didn’t want Rosie’s lines to be as distressed as Cosell’s. We had to find this balance between making it work with Howard Cosell’s material but also make it a tiny bit better.”
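iZotope has not published EQ Match's internals, but the core idea is measurable: compare the long-term average spectrum of the reference against the target and apply the difference as a correction filter. A toy version of that idea, assuming mono arrays at a shared sample rate (real tools smooth the curve perceptually and limit extreme gains far more carefully):

```python
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def eq_match(reference, target, sample_rate, n_taps=513):
    """Build an FIR filter that nudges the long-term average
    spectrum of `target` toward that of `reference`, then apply it."""
    freqs, ref_psd = welch(reference, sample_rate, nperseg=2048)
    _, tgt_psd = welch(target, sample_rate, nperseg=2048)
    gain = np.sqrt(ref_psd / (tgt_psd + 1e-12))
    gain = np.clip(gain, 0.1, 10.0)  # cap correction at +/-20 dB
    fir = firwin2(n_taps, freqs / (sample_rate / 2), gain)
    return lfilter(fir, [1.0], target)
```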

After cutting Rosie’s new lines with Cosell’s original commentary, Iatrou turned her attention to the ambience. She played through the original match’s 90-minute mixed mono track to find clear sections of crowds, murmuring and cheering to cut under Rosie’s lines, so they would have a natural transition into Cosell’s lines. “For example, if there was a swell of the cheer on Howard Cosell’s line then I’d have to find a similar cheer to extend the sound under the actress’s line to fill it in.”

Crowd Sounds
To build up authentic crowd sounds for the recreated Battle of the Sexes match, Iatrou had the loop group perform call-outs that she and Lee heard in the original broadcast, like a woman yelling, “Come on Billie!” and a man shouting, “Come on Bobby baby!”

“The crowd is another big character in the match,” says Lee. “As the game went on, it felt like more of the women were cheering for Billie Jean and more of the men were cheering for Bobby Riggs. In the real broadcast, you hear one guy cheer for Bobby Riggs and then a woman would immediately cheer on Billie Jean. The guy would try to out-cheer her and she would cheer back. It’s this whole secondary situation going on, and we have that in the film because we wanted to make sure we were as authentic as possible.”

Lee also wanted the tennis rackets to sound authentic. She tracked down a wooden racket and an aluminum racket and had them restrung with a gut material at a local tennis store. She also had them strung with less tension than a modern racket. Then Lee and an assistant headed to an outdoor tennis court and recorded serves, bounces, net impacts, ball-bys and shoe squeaks using two mic setups — both with a Schoeps MK 41 and an MK 8 in an MS setup, paired with Sound Devices 702 and 722 recorders. “We miked it close and far so that it has some natural outdoor sound.”

Lee edited her recordings of tennis sounds and sporting event crowds with the production effects captured by sound mixer Lisa Pinero. “Lisa did a really good job of miking everything, and we were able to use some of the production crowd sounds, especially for the Margaret Court vs. Bobby Riggs match that happens before the final Battle of the Sexes match. In the final match, some of the tennis ball hits were layers of what I recorded and the production hits.”

Foley
Another key sonic element in the recreated Battle of the Sexes match was the Foley work by Dan O’Connell and John Cucci of One Step Up, located on the Fox Studios lot. During the match, Billie Jean’s strategy was to wear out the older and out-of-shape Bobby Riggs by making him run all over the court. “As the game went on, I wanted Bobby’s footsteps to feel heavier, with more thumps, as though he’s running out of steam trying to get the ball,” explains Lee. “Dan O’Connell did a good job of creating that heavy stomping foot, but with a slight wood resonance too. We topped that with shoe squeaks — some that Dan did and some that I recorded.”

The final Battle of the Sexes match was by far the most challenging scene to mix, says Lee. Re-recording mixers Bartlett and Doug Hemphill, as well as Lee, mixed the film in 7.1 surround at Formosa Group’s Hollywood location on Stage A using Avid S6 consoles. In the final match, they had Cosell’s original commentary blended with Natalie Morales’ commentary as Rosie Casals. There was music and layered crowds with call-outs. Production sound, field recordings, and Foley meshed to create the diegetic effects. “There were so many layers involved. Deciding how the sounds build and choosing what to play when, with the crowds being tied to Howard Cosell, made it challenging to balance that sequence,” concludes Lee.


Jennifer Walden is a New Jersey-based audio engineer and writer.


Jeff Haboush and Chris Newman join Cinema Audio Society board

The Cinema Audio Society has added re-recording mixer Jeffrey J. Haboush, CAS, and production sound mixer Chris Newman, CAS, to its board. They will be filling the vacancies left by the recent passing of production mixer Ed Greene, CAS, and the retirement of re-recording mixer Mary Jo Lang, CAS.

“Adding new board members at this time is bittersweet, but we are proud and inspired by the fact that we can welcome two dynamic and valued members of the sound community to fill shoes that we thought might be impossible to fill,” says CAS president Mark Ulano.

With over 200 feature and television mixing credits, Haboush has four Oscar nominations along with CAS, BAFTA and Emmy nominations. One of those Emmy nominations led to a win. His career began in 1978 at B&B Sound Studios in Burbank. In 1989 he moved to Warner Bros./Goldwyn Sound, and in 1999 he moved to Sony Studios. Currently, Haboush can be found bouncing between Technicolor and Smart Post Sound mixing stages.

In a career that spans more than 40 years, Newman has been the production sound mixer on more than 85 feature films and garnered eight Oscar nominations with three wins for The English Patient, Amadeus and The Exorcist.

Newman was honored in 2013 with the CAS Career Achievement Award. He also won a CAS Award for Outstanding Sound Mixing for The English Patient and has BAFTA wins for Fame and Amadeus. Prior to working on feature films, he spent a decade working on documentaries, including working for Ted Yates’s NBC unit in Southeast Asia in 1966. Having taught sound and filmmaking in Europe, Brazil, Mexico and at NYU and Columbia University, Newman currently teaches both sound and production at the School of Visual Arts in New York.

Main Image: (L-R) Chris Newman and Jeff Haboush.


Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, now partnered with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen and their staff, who will work out of both its Union Square and Bryant Park locations. O’Connell co-founded and helmed the sound company Blast before joining Sonic Union.

In other staffing news, mix engineer Owen Shearer advances to also serve as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.


Tackling VR storytelling challenges with spatial audio

By Matthew Bobb

From virtual reality experiences for brands to top film franchises, VR is making a big splash in entertainment and evolving the way creators tell stories. But, as with any medium and its production, bringing a narrative to life is no easy feat, especially when it’s immersive. VR comes with its own set of challenges unique to the platform’s capacity to completely transport viewers into another world and replicate reality.

Making high-quality immersive experiences, especially for a film franchise, is extremely challenging. Creators must place the viewer into a storyline crafted by the studios and properly guide them through the experience in a way that allows them to fully grasp the narrative. One emerging strategy is to emphasize audio — specifically, 360 spatial audio. VR offers a sense of presence no other medium today can offer. Spatial audio offers an auditory presence that augments a VR experience, amplifying its emotional effects.

My background as audio director for VR experiences includes top film franchises such as Warner Bros. and New Line Cinema’s IT: Float — A Cinematic VR Experience, The Conjuring 2 — Experience Enfield VR 360, Annabelle: Creation VR — Bee’s Room, and the upcoming Greatest Showman VR experience for 20th Century Fox. In the emerging world of VR, I have seen production teams encounter numerous challenges that call for creative solutions. For some of the most critical storytelling moments, it’s crucial for creators to understand the power of spatial audio and its potential to solve some of the most prevalent challenges that arise in VR production.

Most content creators — even some of those involved in VR filmmaking — don’t fully know what 360 spatial audio is or how its implementation within VR can elevate an experience. With any new medium, there are early adopters who are passionate about the process. As the next wave of VR filmmakers emerge, they will need to be informed about the benefits of spatial audio.

Guiding Viewers
Spatial audio is an incredible tool that helps make a VR experience feel believable. It can present sound from several locations, which allows viewers to identify their position within a virtual space in relation to the surrounding environment. With the ability to provide location-based sound from any direction and distance, spatial audio can then be used to produce directional auditory cues that grab the viewer’s attention and guide them to look in a certain direction.

VR is still unfamiliar territory for a lot of people, and the viewing process isn’t as straightforward as a 2D film or game, so dropping viewers into an experience can leave them feeling lost and overwhelmed. Inexperienced viewers are also more apprehensive and rarely move around or turn their heads while in a headset. Spatial audio cues prompting them to move or look in a specific direction are critical, steering them to instinctively react and move naturally. On Annabelle: Creation VR — Bee’s Room, viewers go into the experience knowing it’s from the horror genre and may be hesitant to look around. We strategically used audio cues, such as footsteps, slamming doors and a record player that mysteriously turns on and off, to encourage viewers to turn their head toward the sound and the chilling visuals that await.
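Mechanically, a directional cue is compact: a mono sound plus an azimuth and elevation is enough to encode it into first-order ambisonics, which the playback engine then counter-rotates against the listener's head orientation. A minimal encoding sketch, assuming the traditional B-format convention (X front, Y left, Z up):

```python
import numpy as np

def encode_bformat(mono, azimuth, elevation):
    """Encode a mono cue (footsteps, a slamming door) at a given
    direction into first-order B-format. Azimuth is counterclockwise
    from front, elevation up from the horizon, both in radians."""
    w = mono / np.sqrt(2)  # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return w, x, y, z

# Place a record player 90 degrees to the listener's left.
w, x, y, z = encode_bformat(np.zeros(48000), np.radians(90), 0.0)
```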

Lacking Footage
Spatial audio can also be a solution for challenging scene transitions, or when there is a dearth of visuals to work with in a sequence. Well-crafted aural cues can paint a picture in a viewer’s mind without bombarding the experience with visuals that are often unnecessary.

A big challenge when creating VR experiences for beloved film franchises is the need for the VR production team to work in tandem with the film’s production team, making recording time extremely limited. When working on IT: Float, we were faced with the challenge of having a time constraint for shooting Pennywise the Clown. Consequently, there was not an abundance of footage of him to place in the promotional VR experience. Beyond a lack of footage, they also didn’t want to give away the notorious clown’s much-anticipated appearance before the film’s theatrical release. The solution to that production challenge was spatial audio. Pennywise’s voice was strategically used to lead the experience and guide viewers throughout the sewer tunnels, heightening the suspense while also providing the illusion that he was surrounding the viewer.

Avoiding Visual Overkill
Similar to film and video games, sound is half of the experience in VR. With the unique perspective the medium offers, creators no longer have to fully rely on a visually heavy narrative, which can overwhelm the viewer. Instead, audio can take on a bigger role in the production process and make the project a well-rounded sensory experience. In VR, it’s important for creators to leverage sensory stimulation beyond visuals to guide viewers through a story and authentically replicate reality.

As VR storytellers, we are reimagining ways to immerse viewers in new worlds. It is crucial for us to leverage the power of audio to smooth out bumps in the road and deliver a vivid sense of physical presence unique to this medium.


Matthew Bobb is the CEO of the full-service audio company Spacewalk Sound. He is a spatial audio expert whose work can be seen in top VR experiences for major film franchises.