Margarita Mix’s Pat Stoltz gives us the low-down on VR audio

By Randi Altman

Margarita Mix, one of Los Angeles’ long-standing audio and video post facilities, has taken on virtual reality with the addition of 360-degree sound rooms at its facilities in Santa Monica and Hollywood. The Fotokem company now offers sound design, mixing and final print masters for VR video, as well as the remixing of current spots for a full-surround environment.

Workflows for VR are new and developing every day — there is no real standard yet. So creatives are figuring it out as they go, but they can also learn from those who were early to the party, like Margarita Mix. The studio recently worked on a full-length VR concert film with the band Eagles of Death Metal and director/producer Art Haynie of Big Monkey Films. The band’s 2015 tour came to an abrupt end after its show at the Bataclan concert hall was cut short by last year’s terrorist attacks in Paris. The film is expected to be available online and via apps shortly.

Eagles of Death Metal film.

We reached out to Margarita Mix’s senior technical engineer, Pat Stoltz, to talk about his experience and see how the studio is tackling this growing segment of the industry.

Why was now the right time to open VR-dedicated suites?
VR/AR is an exciting emerging market and online streaming is a perfect delivery format, but VR pre-production, production and post is in its infancy. We are bringing sound design, editorial and mixing expertise to the next level based on our long history of industry-recognized work, and elevating audio for VR from a gaming platform to one suitable for the cinematic and advertising realms where VR content production is exploding.

What is the biggest difference between traditional audio post and audio post for VR?
Traditional cinematic audio has always played a very important part in support of the visuals. Sound effects, Foley, background ambiance, dialog and music clarity to set the mood have all helped pull the viewer into the story. With VR and AR you are not just pulled into the story, you are in the story! Having the ability to accurately recreate the audio of the filmed environment through higher-order ambisonics, or object-based mixing, is crucial. Audio not only plays an important part in support of the visuals, but is now also a director’s tool to help draw the viewer’s gaze to what he or she wants the audience to experience. Audio for VR is a critical component of storytelling that needs to be considered early in the production process.
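Higher-order ambisonics can get mathematically involved, but the first-order case is enough to illustrate the idea Stoltz describes: a mono source is encoded into four channels whose weights depend only on the source’s direction, so a decoder can later re-render it for any listener orientation. This is a minimal sketch, not any particular vendor’s tool; it assumes the AmbiX convention (ACN channel order, SN3D normalization):

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order ambisonics (AmbiX: ACN/SN3D).

    azimuth_deg: 0 = straight ahead, positive = toward the listener's left.
    elevation_deg: 0 = horizon, positive = up.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                 # omnidirectional pressure
    y = sample * math.sin(az) * math.cos(el)   # left-right figure-eight
    z = sample * math.sin(el)                  # up-down figure-eight
    x = sample * math.cos(az) * math.cos(el)   # front-back figure-eight
    return (w, y, z, x)                        # ACN order: W, Y, Z, X

# A source dead ahead drives W and X only; one hard left drives W and Y.
front = encode_first_order(1.0, 0.0, 0.0)
left = encode_first_order(1.0, 90.0, 0.0)
```

A decoder then weights these channels per loudspeaker (or per ear, via HRTFs), which is what lets the source appear to stay put as the viewer turns their head.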

What question do clients ask you the most in terms of sound for VR?
Surprisingly, none! VR/AR is so new that directors and producers are just figuring things out as they go. On a traditional production set, you have audio mixers and boom operators capturing sound. On a VR/AR set, there is nowhere to hide: no boom operators or audio mixers can be visible while capturing high-quality audio of the performance.

Some productions have relied on the onboard camera microphones. Unfortunately, in most cases, this turns out to be completely unusable. When the client gets all the way to the audio post, there is a realization that hidden wireless mics on all the actors would have yielded a better result. In VR especially, we recommend starting the sound consultation in pre-production, so that we can offer advice and guide decisions for the best quality product.

What question should clients ask before embarking on VR?
They should ask what they want the viewer to get out of the experience. In VR, no two people are going to walk away with the same viewing experience. We recommend staying focused on the major points that they would like the viewer to walk away with. They should then expand that to answer: What do I have to do in VR to drive that point home, not only mentally, but drawing their gaze for visual support? Based on the genre of the project, considerations should be made to “physically” pull the audience in the direction to tell the story best. It could be through visual stepping stones, narration or audio pre-cues, etc.

What tools are you using on VR projects?
Because this is a nascent field, new tools are becoming available by the day, and we assess and use the best option for achieving the highest quality. To properly address this question, we ask: Where is your project going to be viewed? If the content is going to be distributed via a general Web streaming site, then it will need to be delivered in the audio format that site supports.

There are numerous companies writing quite good plug-ins for delivering these formats. If you will be delivering to a site that supports Dolby VR (an object-based proprietary format), such as Jaunt, then you will need to generate the proper audio file for that platform. Facebook (higher-order ambisonics) requires yet another format. We are currently working in all of these formats, as well as working closely with leaders in VR sound to create and test new workflows and guide developments in this new frontier.

What’s the one thing you think everyone should know about working and viewing VR?
As we go through life, we each have our own experiences or what we choose to experience. Our frame of reference directs our focus on things that are most interesting to us. Putting on VR goggles, the individual becomes the director. The wonderful thing about VR is now you can take that individual anywhere they want to go… both in this world and out of it. Directors and producers should think about how much can be packed into a story to draw people into the endless ways they perceive their world.

IBC: Thoughts on Dolby and Nokia

By Zak Tucker

Strolling the halls of IBC in Amsterdam this past week, I found a lot of interesting tools and tech. Here are just a few thoughts about a couple of companies I visited.

On Picture: Dolby is presenting its PQ workflow, which enables HDR and SDR deliverables seamlessly. Recognizing that there will be a real transition period as consumers adopt HDR home viewing environments, Dolby has written algorithms that detect the native specs of each Dolby-enabled monitor, so that the intent of the PQ color can be interpreted and translated to that specific display. In demos, the HDR media is optically more vibrant, and true-to-life colors are represented more accurately than in traditional SDR. Even the SDR that Dolby is able to derive from the HDR is more vibrant and sharp than traditional SDR.

On Sound: Dolby is pressing forward with its home immersive sound experience. Through its soundbar and associated subwoofer, Dolby is producing a home Atmos experience that is quite compelling. Dolby can also work with additional speakers installed by home users, and its home Atmos is able to adjust dynamically to various home speaker installations.

Nokia OZO
Nokia has developed and delivered a purpose-built VR camera that records both picture and sound. The form factor, no bigger than a person’s head, is clean and small, addressing a common complaint with most VR rigs, which are large and overly obtrusive — often an issue with talent, for example, when capturing a live event such as a concert. The camera is capable of north of 4K resolution, and the current stitched deliverable is a 4K, 3D VR file. The accompanying software can perform both a fast auto-stitch and a higher-quality stitch. The software is also capable of taking a live stream from the VR camera and transmitting it, stitched, to a platform such as YouTube in real time. In the demo, the stitching was quite seamless.

Zak Tucker is president and co-founder of Harbor Picture Company in New York.

Creating VR audio workflows for ‘Mars 2030’ and beyond

Source Sound is collaborating with others and capturing 360 sound for VR environments.

By Jennifer Walden

Everyone wants it, but not everyone can make it. No, I’m not talking about money. I’m talking about virtual reality content.

Let’s say you want to shoot a short VR film. You’ve got a solid script, a cast of known actors, you’ve got a 360-degree camera and a pretty good idea of how to use it, but what about the sound? The camera has a built-in mic, but will that be enough coverage? Should the cast be mic’d as they would be for a traditional production? How will the production sound be handled in post?

Tim Gedemer, owner/sound supervisor at Source Sound in Woodland Hills, California, can help answer these questions. “In VR, we are audio directors,” he says. “Our services include advising clients at the script level on how they should be shooting their visuals to be optimal for sound.”

Tim Gedemer

As audio directors, Source Sound walks their clients through every step of the process, from production to distribution. Starting with the recording on set, they manage all of the technical aspects of sound file management through production, and then guide their clients through the post sound process, both creatively and technically.

They recommend what technology should be used, how clients should be using it and what deals they need to make to sort out their distribution. “It really is a point-to-point service,” says Gedemer. “We decided early on that we needed to influence the entire process, so that is what we do.”

Two years ago, Dolby Labs referred Jaunt Studio to Source Sound for their first VR film gig. Gedemer explains that because of Source Sound’s experience with games and feature films, Dolby felt they would be a good match to handle Jaunt’s creative sound needs while Dolby worked with Jaunt on the technical challenges.

Jaunt’s Kaiju Fury! premiered at the 2015 Sundance Film Festival. The experience puts the viewer in the middle of an epic Godzilla-like monster battle. “They realized their film needed cinematic sound, so Dolby called us up and asked if we’d like to get involved. We said, ‘We’re really busy with projects, but show us the tech and maybe we’ll help.’ We were disinterested at first, figuring it was going to be gimmicky, but I went to San Francisco and I looked at their first test, and I was just shocked. I had never seen anything like that before in my life. I realized, in that first moment of putting on those goggles, that we needed to do this.”

Paul McCartney on the “Out There” tour 2014.

Kaiju Fury! was just the start. Source Sound completed three more VR projects for Jaunt, all within a week. There was the horror VR short film called Black Mass, a battle sequence called The Mission and the Atmos VR mastering of Paul McCartney’s Live and Let Die in concert.

Gedemer admits, “It was just insane. No one had ever done anything like this and no one knew how to do it. We just said, ‘Okay, we’ll just stay up for a week, figure all of that out and get it done.’”

Adjusting The Workflow
At first, their Pro Tools-based post sound workflow was similar to a traditional production, says Gedemer, “because we didn’t know what we didn’t know. It was only when we got into creating the final mix that we realized we didn’t have the tools to do this.”

Specifically, how could they experience the full immersion of the 360-degree video and concurrently make adjustments to the mix? On that first project, there was no way to slave the VR picture playing back through the Oculus headgear to the sound playing back via Pro Tools. “We had to manually synchronize,” explains Gedemer. “Literally, I would watch the equirectangular video that we were working with in Pro Tools, and at the precise moment I would just press play on the laptop, playing back the VR video through the Oculus HMD to try and synchronize it that way. I admit I got pretty good at that, but it’s not really the way you want to be working!”

Since that time, Dolby has implemented timecode synchronization and a video player that will play back the VR video through the Oculus headset. Now the Source Sound team can pick up the Oculus and it will be synchronized to the Pro Tools session.
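The mechanics of timecode-slaved playback are simple in principle: both players convert a shared SMPTE position into an absolute media offset and start from there. A rough illustration of the arithmetic (the frame rate and sample rate below are illustrative defaults, not details of Dolby’s implementation):

```python
def smpte_to_samples(hh, mm, ss, ff, fps=24, sample_rate=48000):
    """Convert an HH:MM:SS:FF timecode to an absolute audio sample offset."""
    frames = ((hh * 60 + mm) * 60 + ss) * fps + ff
    return frames * sample_rate // fps

# Both the Pro Tools session and the VR video player seek to the same
# absolute position, so starting either transport keeps the two in sync.
offset = smpte_to_samples(0, 1, 30, 12)  # 1 min, 30 s, 12 frames in
```

Once both ends agree on this mapping, chasing timecode replaces the press-play-and-hope method Gedemer describes above.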

Working Together For VR
Over the last few years, Source Sound has been collaborating with tech companies like Dolby, Avid, Oculus, Google, YouTube and Nokia on developing audio-related VR tools, workflow solutions and spec standards that will eventually become available to the wider audio post industry.

“We have this holistic approach to how we want to work, both in virtual and augmented reality audio,” says Gedemer. “We’re working with many different companies, beta testing technology and advising on what they should be thinking about regarding VR sound — with a keen eye toward new product development.”

Kaiju Fury!

Since Kaiju Fury, Source Sound has continued to create VR experiences with Jaunt. They have also worked with other VR content creators, including the Emblematic Group (founded by “the godmother of VR,” Nonny de la Peña), 30 Ninjas (founded by director Doug Liman of The Bourne Identity and Edge of Tomorrow), Fusion Media, Mirada, Disney, Google, YouTube and many others.

Mars 2030
Currently, Source Sound is working with Fusion Media on a project with NASA called Mars 2030, which takes a player to Mars as an astronaut and allows him or her to experience what life might be like in a Mars habitat. NASA believes human exploration of Mars may be possible in the year 2030, so why not let people see and feel what it’s like?

The project has given Source Sound unprecedented access to the NASA facilities and engineers. One directive for Mars 2030 is to be as accurate as possible, with information on Mars coming directly from NASA’s Mars missions. For example, NASA collected information about the surface of Mars, such as the layout of all the rocks and the type of sand covering the surface. All of that data was loaded into the Unreal Engine, so when a player steps out of the habitat in the Mars 2030 experience and walks around, that surface is going to be the exact surface that is on Mars. “It’s not a facsimile,” says Gedemer. “That rock is actually there on Mars. So in order for us to be accurate from an audio perspective, there’s a lot that we have to do.”

In the experience the player gets to drive the Mars Rover. At NASA in Houston, there are multiple iterations of the rover that are being developed for this mission. They also have a special area that is set up like the Mars surface with a few craters and rocks.

For audio capture, Gedemer and sound effects recordist John Fasal headed to Houston with Sound Devices recorders and a slew of mic options. While the rover is too slow to do burnouts and donuts, Gedemer and Fasal were able to direct a certified astronaut driver and record the rover from every relevant angle. They captured sounds and ambiences from the various habitats on site. “There is a new prototype space suit that is designed for operation on Mars, and as such we will need to capture all the relevant sound associated with it,” says Gedemer. “We’ll be looking into helmet shape and size, communication systems, life support air flow, etc. when recreating this in the Unreal Engine.”

Another question the sound team needs to address is, “What does it sound like out on the surface of Mars?” It has an atmosphere, but the tricky thing is that a human can never actually walk around on the surface of Mars without wearing a suit. Sounds traveling through the Mars atmosphere will sound different than sounds traveling through Earth’s atmosphere, and additional special considerations need to be made for how the suit will impact sound getting to the astronaut’s ears.

“Only certain sounds and/or frequencies will penetrate the suit, and if it is loud enough to penetrate the suit, what is it going to sound like to the astronaut?” asks Gedemer. “So we are trying to figure out some of these technical things along the way. We hope to present a paper on this at the upcoming AES Conference on Audio for Virtual and Augmented Reality.”
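One concrete, well-understood difference is the speed of sound. Treating each atmosphere as an ideal gas gives c = sqrt(γRT/M); the figures below (a cold, CO2-dominated Martian atmosphere versus Earth air) are rough textbook values, used here only to show the scale of the difference, not anything from the Source Sound or NASA work:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def speed_of_sound(gamma, temp_k, molar_mass_kg):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

# Earth: diatomic air at ~20 C -> about 343 m/s
earth = speed_of_sound(1.40, 293.0, 0.029)
# Mars: CO2 at roughly -63 C average surface temperature -> about 230 m/s
mars = speed_of_sound(1.29, 210.0, 0.044)
```

On top of the slower speed, Mars’s very thin atmosphere (under one percent of Earth’s surface pressure) attenuates sound strongly, CO2 especially at high frequencies, so everything would arrive quieter and duller even before the suit filters it further.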

Going Live
Another interesting project at Source Sound is the work they’re doing with Nokia to develop specialized audio technology for live broadcasts in VR. “We are currently the sole creative provider of spatial audio for Nokia’s VR broadcasting initiative,” reveals Gedemer. Source Sound has been embedded with the Nokia Ozo Live team at events where they have been demonstrating their technology. They were part of the official Ozo Camera Launches in Los Angeles and London. They captured and spatialized a Los Angeles Lakers basketball game at the Staples Center. And once again they teamed up with Nokia at their NAB event this past spring.

“We’ve been working with them very closely on the technology that they are developing for live capture and distribution of stereoscopic visual and spatial audio in VR. I can’t elaborate on any details, but we have some very cool things going on there.”

However, Gedemer does break down one of the different requirements of live VR broadcast versus a cinematic VR experience — an example being the multi-episode VR series called Invisible, which Source Sound and Doug Liman of 30 Ninjas are currently collaborating on.

For a live broadcast you want an accurate representation of the event, but for a cinematic experience the opposite is true. Accuracy is not the objective. A cinematic experience needs a highly curated soundtrack in order to tell the story.

Gedemer elaborates, “The basic premise is that, for VR broadcasts you need to have an accurate audio representation of camera location. There is the matter of proper perspective to attend to. If you have a multi-camera shoot, every time you change camera angles to the viewer, you change perspective, and the sound needs to follow. Unlike a traditional live environment, which has a stereo or 5.1 mix that stays the same no matter the camera angle, our opinion is that approach is not adequate for true VR. We think Nokia is on the right track, and we are helping them perfect the finer points. To us that is truly exciting.”
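Following a camera cut with the audio, as Gedemer describes, amounts to rotating the captured sound field to the new camera’s orientation. In first-order ambisonics that is just a 2D rotation of the horizontal components. This sketch assumes the AmbiX convention (X pointing front, Y left); it is an illustration of the principle, not Nokia’s implementation:

```python
import math

def rotate_sound_field_yaw(w, y, z, x, yaw_deg):
    """Rotate a first-order ambisonic frame about the vertical axis.

    Positive yaw_deg rotates the scene counter-clockwise seen from above,
    e.g. cutting to a camera angle 90 degrees clockwise of the original
    moves a formerly front-facing source to the listener's left.
    """
    th = math.radians(yaw_deg)
    x_rot = math.cos(th) * x - math.sin(th) * y
    y_rot = math.sin(th) * x + math.cos(th) * y
    return (w, y_rot, z, x_rot)  # W and Z are unaffected by yaw

# A source straight ahead, (W, Y, Z, X) = (1, 0, 0, 1)...
w, y, z, x = rotate_sound_field_yaw(1.0, 0.0, 0.0, 1.0, 90.0)
# ...ends up hard left: Y now carries the signal and X falls to zero.
```

Applying a rotation like this at every camera change is what keeps the perspective of the sound matched to the perspective of the picture, rather than carrying one fixed stereo or 5.1 mix across all angles.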

Jennifer Walden is a New Jersey-based writer and audio engineer.

IBC 2015 Blog: HDR displays

By Simon Ray

It was an interesting couple of days in Amsterdam. I was hoping to get some more clarity on where things were going with the High Dynamic Range concept in both professional and consumer panels, as well as delivery mechanisms to get it to the consumers. I am leaving IBC knowing more, but no nearer a coherent idea as to exactly where this is heading.

I initially visited Dolby to get an update on Dolby Vision (our main image), see where they were with the technology and, most importantly, get my reserved tickets for the screening of Fantastic Four in the Auditorium (laser projection and Dolby Atmos). It all sounded very positive, with news that a number of consumer panel manufacturers are close to releasing Dolby Vision-capable TVs (Vizio with its Reference Series panel, for example) and that streaming services like VUDU are streaming Dolby Vision HDR content, although just in the USA to begin with. I also had my first look at a Dolby “Quantum Dot” HDR display panel, which did look good and surely has the best name of any tech here.

There are other HDR offerings out there. Amazon Prime announced in August that it will stream HDR content in the UK, though not initially in the Dolby Vision format (HDR video is available via the Amazon Instant Video app for Samsung SUHD TVs such as the JS9000, JS9100 and JS9500 series, and for selected LG TVs in the G9600 and G9700 series), and the “big” TV manufacturers have launched, or are about to launch, HDR panels. So far so good.

Pro HDR Monitors
Things got a bit more vague again when I started looking into HDR-equipped professional panels for color correction. There were only two I could find at the show: Sony had an impressive HDR-ready panel connected to a FilmLight Baselight tucked away on its large stand in Hall 12, and Canon had an equally impressive prototype display tucked away in Hall 11, connected to an SGO Mistika. The two displays had different brightness specs and gamma options.


When I asked some other manufacturers about their HDR panels, the response was the same: “We are going to wait until the specifications are finalized before committing to an HDR monitor.” This leads me to think this is a bad time to be buying a monitor: you are either going to buy an HDR monitor now that may not match the final specifications, or a non-HDR monitor that is likely to be superseded in the near future.

Another thing I noticed was that the professional HDR panels were all being shown off in a carefully (or as carefully as a trade show allows) lit environment to give them the best opportunity to make an impact. Any ambient light in the viewing environment will detract from the benefits of the increased dynamic range and brightness of an HDR display, which I imagine might be a problem in the average living room. I hope this does not reduce the chance of this technology making an impact, because it is great to see images that seem to have more depth and quality to them. As a representative on the Sony stand said, “It feels more immersive — I am so much more engaged in the picture.”


The problem of ambient light also came up in an interesting talk in the Auditorium, part of the “HDR: From zero to infinity” series, in which speakers from IMAX, Dolby, Barco and Sony discussed the challenges of bringing HDR to the cinema. I had first come across the idea of HDR in cinema through Dolby’s “Dolby Cinema” project, which brings together HDR picture and immersive sound with Dolby Atmos.

I am in the process of building a theatre to mix theatrical soundtracks in Dolby Atmos, but despite the exciting opportunities Atmos offers sound teams, in the UK at least the take-up by cinemas is slow. One of the best things about Dolby Atmos, for me, is that if you go to see a film in Atmos, you know the speaker system is going to be of a certain standard; otherwise Dolby would not have given it Atmos status. For too long, cinemas have been allowed to let their speaker systems wear down to the point where they become unlistenable. If these new initiatives give cinemas an opportunity to reinvest in their equipment (the various financial implications and challenges, and who would meet these costs, were discussed) and get a return on that investment, it could be a chance to stop the rot and improve the cinemagoing experience. And, importantly for us in post, it gives us an exciting, high benchmark to aim for when working on films.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

‘Inside Out’: Skywalker helps hug the audience with sound

Pixar’s latest gets a Dolby Atmos mix

By Jennifer Walden

Ever ask yourself what goes through a child’s mind? Well, Pixar did, and the result was their latest film, Inside Out, which has left audiences laughing and crying. The film focuses on 11-year-old Riley, whose emotions are sent reeling as her family moves from Minnesota to San Francisco.

The story, by directors Pete Docter and Ronaldo Del Carmen, portrays five main emotions: Joy, Sadness, Anger, Disgust and Fear — which hang out in the control room of people’s minds. The audience gets to experience Riley’s tumultuous transition through the actions of those five core emotions as they interact inside her mind. They get to see a bit of how her mom and dad’s minds work too. It’s a refreshingly creative animated feature like no other.

Inside Out has two main environments: inside the mind where everything is hyper-real, and out in the world, where everything seems dull by comparison. “We wanted to have the sound mimic that and to follow the actions they took with the picture,” says re-recording mixer Michael Semanick, who handled the sound effects, backgrounds and music for Inside Out.

Michael Semanick

Since the film’s sound — created at Skywalker Sound in Marin County, California — was designed and mixed natively in Dolby Atmos, Semanick and fellow re-recording mixer Tom Johnson, on dialogue/Foley, were able to heighten that difference further by only using the upfront speakers during scenes in the outside world, and the full array of speakers in the Atmos set-up during scenes inside the mind. “We made a conscious decision to have the outside world sound flat, with nothing in the surrounds or the top speakers,” says Semanick.

For inside the mind, sound designer Ren Klyce designed rich backgrounds and elements that could be used in the surrounds and the overhead speakers to fill out the space without being gimmicky or distracting. “For example, Ren had designed these really great water sounds that are, I believe, babies in the womb. They are these cool, inside-the-body-type sounds. I got to move those back and forth and over the top when we’re in the head. It’s very subtle. It’s not meant to be distracting but it’s supposed to give you this feeling like you are inside the mind, and that it’s alive and moving.”

All-Around Sound
With the full-range speakers in the Atmos set-up, Semanick could fluidly move sounds around the theater without having to account for the level dips and EQ differences typical of the surrounds used in 5.1/7.1 set-ups. So when Joy and Sadness get sucked up a memory tube, Semanick was able to fly Klyce’s sound design elements past the viewer without losing low-end detail. “With the Atmos, I can move the sound anywhere and I don’t have to push the level to get the sound to read in the back,” says Semanick.

Additionally, the full-range overhead speakers in the Atmos set-up allowed Semanick to bring sounds in from above, and seemingly move them down the screen. For example, there are memory balls (small, clear balls containing Riley’s memories) that come down from over the top and project light, almost as if they are playing a movie. Since the sound was designed from the ground up in Atmos, Semanick was able to take individual sound elements for that scene and assign them to object panners on the AMS Neve DFC mixing console used in the Kurosawa Studio.

Another advantage to the Atmos set-up was it allowed re-recording mixer Johnson and director Docter to experiment with how they could treat the voices coming from inside Riley’s head. “We didn’t want it to be a standard voiceover. We wanted it to feel like we are inside of this girl’s mind,” says Semanick. “So in the Atmos mix, the first time Joy speaks, it really fills the room up all around you. Then eventually, as she keeps speaking, her voice starts to pull forward and it gets set in a place that is very comfortable, so you realize that this is Joy speaking.”

There are different areas inside the mind, such as the control room where the five emotions interact and decide Riley’s course of action, long-term memory, abstract thought, the subconscious, the memory dump of forgotten memories and the dream studio, which resembles a film stage. Semanick used a combination of stereo reverbs, such as the Lexicon 960 and the TC 6000, to help define those spaces. The control room, with its large windows, has a slight room reverb, while the halls of long-term memory are vaster. The reverbs in the subconscious are dark to match the mood of the environment. “We match the reflections to the space,” says Semanick. “When we’re in the canyon of the memory dump area, it’s like an infinite abyss, so the sound has an echo. It’s like looking into the Grand Canyon, but you can’t see the bottom. Sometimes I would hit the echo and then fade the reflections quickly, as if they just disappeared into that abyss and then there is no sound. You don’t know if an object is still falling or not.”

Semanick prefers to use several stereo reverbs together to build out the spaces for the Atmos set-up, as opposed to using pre-built multichannel reverbs. “With the stereo reverb or mono reverb, I know how I can place them. I can side-chain them. I can have the reflections build,” he explains. “I can use multiple stereo reverbs and have something different on the top, in the front and in the back. I can manipulate each one separately. I can push the rears louder than I push the fronts, so the reflection comes off a little quicker.”
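The hit-the-echo-then-fade-the-reflections effect Semanick describes above can be approximated with a basic feedback delay, where each repeat is the previous one scaled down by a feedback factor. This is a toy sketch of that idea, not the Lexicon or TC algorithms, which are far more sophisticated:

```python
def feedback_echo(dry, delay_samples, feedback, tail_samples):
    """Recirculating echo: y[n] = x[n] + feedback * y[n - delay]."""
    n = len(dry) + tail_samples
    wet = [0.0] * n
    for i in range(n):
        direct = dry[i] if i < len(dry) else 0.0
        recirc = feedback * wet[i - delay_samples] if i >= delay_samples else 0.0
        wet[i] = direct + recirc
    return wet

# A single impulse produces repeats that decay by the feedback factor,
# like a shout fading into an abyss: 1.0, 0.5, 0.25, 0.125, ...
tail = feedback_echo([1.0], delay_samples=4, feedback=0.5, tail_samples=12)
```

Fading the reflections quickly, in these terms, would mean ramping the feedback toward zero mid-tail, cutting the repeats short so the listener cannot tell whether the object is still falling.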


Semanick really enjoyed mixing the emotional scenes in Inside Out, particularly in the memory dump where Joy and Riley’s old imaginary friend, Bing Bong, are sitting among disintegrating memory balls. “There isn’t music or any other supporting sound, just the voices from the fading memory balls. Each sound that’s placed in there is so important — from the rewind sound of the memory to Joy turning the ball over and changing hands to the balls in the background that are just disintegrating. They are so lightly touched with a little bit of musical enhancement,” he says.

“There were really great sounds for that which I got to blend in, as each ball breaks and falls into this ash. I agonized over every little flake of those balls. That scene is just so delicate and we spent a lot of time on it. The sound can just help draw the audience in even more, and wrap up their hearts, then rip them out. Those are some of the hardest things to mix, those quiet emotional scenes where every little sound is like a pin drop. When you nail it, you can see the audience’s reaction,” concludes Semanick.

HPA Tech Retreat Blog: A display made for humans

By Tom Coughlin

Indian Wells, California — At the 2014 Hollywood Post Alliance Retreat (http://hollywoodpostalliance.org), the session on “Better Pixels: Best Bang for the Buck” gave some interesting insights on how we can make better displays — displays made for humans.

IBC Blog: Audio Day


A look at Avid’s S6, Dolby Atmos and more

By Simon Ray

Head of Operations & Engineering

Goldcrest London


Sound day started a bit later than picture day, but it was the day we got to look at the new S6 console from Avid. I booked an afternoon appointment to make sure my judgment was not clouded by an HHB-sponsored hangover. We were met by the usual suspects and given an excellent demo by Dave Tyler, which was nowhere near long enough, but there was still time to get an overview of the hardware and its integration with Pro Tools.