Category Archives: Audio

Cory Melious

Behind the Title: Heard City senior sound designer/mixer Cory Melious

NAME: Cory Melious

COMPANY: Heard City (@heardcity)

CAN YOU DESCRIBE YOUR COMPANY?
We are an audio post production company.

WHAT’S YOUR JOB TITLE?
Senior Sound Designer/Mixer

WHAT DOES THAT ENTAIL?
I provide the final mastering of the audio soundtrack for commercials, TV shows and movies. I combine the production audio recorded on set (typically dialog), narration, music (whether an original composition or an artist's track) and sound effects (often created by me) into one 5.1 surround soundtrack that plays on both TV and the internet.
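At its most schematic, the combining step described above (dialog, narration, music and effects summed into one master) is a gain-weighted sum of stems. A minimal Python sketch, purely illustrative; a real mix involves per-track processing, automation and proper limiting, none of which is modeled here:

```python
# Illustrative only: sum equal-length mono stems into one master track,
# applying a gain to each, with naive peak normalization to avoid clipping.

def mix_stems(stems, gains):
    """Sum equal-length mono stems sample by sample, applying a per-stem gain."""
    n = len(next(iter(stems.values())))
    master = [0.0] * n
    for name, samples in stems.items():
        g = gains.get(name, 1.0)
        for i, s in enumerate(samples):
            master[i] += g * s
    # If the summed signal exceeds full scale, pull it back under 1.0.
    peak = max(abs(s) for s in master)
    if peak > 1.0:
        master = [s / peak for s in master]
    return master

# Hypothetical four-sample stems, just to show the shape of the operation:
stems = {
    "dialog":  [0.5, 0.5, 0.0, 0.0],
    "music":   [0.2, 0.2, 0.2, 0.2],
    "effects": [0.0, 0.0, 0.8, 0.8],
}
gains = {"dialog": 1.0, "music": 0.5, "effects": 0.75}
master = mix_stems(stems, gains)   # -> [0.6, 0.6, 0.7, 0.7]
```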

Heard City

WHAT WOULD SURPRISE PEOPLE ABOUT WHAT FALLS UNDER THAT TITLE?
I think most people without a production background think the sound of a spot just “is.” They don’t really think about how or why it happens. Once I start explaining the sonic layers we combine to make up the final mix they are really surprised.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The part that really excites me is the fact that each spot offers its own unique challenge. I take raw audio elements and tweak and mold them into a mix. Working with the agency creatives, we’re able to develop a mix that helps tell the story being presented in the spot. In that respect I feel like my job changes day in and day out and feels fresh every day.

WHAT’S YOUR LEAST FAVORITE?
Working late! There are a lot of late hours in creative jobs.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I really like finishing a job. It’s that feeling of accomplishment when, after a few hours, I’m able to take some pretty rough-sounding dialog and manipulate that into a smooth-sounding final mix. It’s also when the clients we work with are happy during the final stages of their project.

WHAT TOOLS DO YOU USE ON A DAY-TO-DAY BASIS?
Avid Pro Tools, iZotope RX, Waves Mercury, Altiverb and ReVibe.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
One of my many hobbies is making furniture. My dad is a carpenter and taught me how to build at a very young age. If I never had the opportunity to come to New York and make a career here, I’d probably be building and making furniture near my hometown of Seneca Castle, New York.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I think this profession chose me. When I was a kid I was really into electronics and sound. I was both the drummer and the front of house sound mixer for my high school band. Mixing from behind the speakers definitely presents some challenges! I went on to college to pursue a career in music recording, but when I got an internship in New York at a premier post studio, I truly fell in love with creating sound for picture.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, I’ve worked on Chobani, Google, Microsoft, and Budweiser. I also did a film called The Discovery for Netflix.

The Discovery for Netflix.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I’d probably have to say Chobani. That was a challenging campaign because the athletes featured in it were very busy. In order to capture the voiceover properly I was sent to Orlando and Los Angeles to supervise the narration recording and make sure it was suitable for broadcast. The spots ran during the Olympics, so they had to be top notch.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
iPhone, iPad and depth finder. I love boating and can’t imagine navigating these waters without knowing the depth!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on the basics — Facebook, LinkedIn and Instagram. I dabble with Snapchat occasionally and will even open up Twitter once in a while to see what’s trending. I’m a fan of photography and nature, so I follow a bunch of outdoor Instagrammers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I joke with my friends that all of my hobbies are those of retired folks — sailing, golfing, fly fishing, masterful dog training, skiing, biking, etc. — and that I’m practicing for retirement. I think hobbies that force me to relax and get out of NYC are really good for me.

Warner/Chappell intros Color TV, Elbroar music catalogs from Germany

For those of you working in film and television with a need for production music, Warner/Chappell Production Music has added to its offerings with the Color TV and Elbroar catalogs. Color TV is German composer Curt Cress’ nearly 14,000-track collection from Curt Cress Publishing and its sister company F.A.M.E. Recordings Publishing. Color TV and the Elbroar catalog, which is also from Germany, are available for licensing now.

Color TV brings to life a wide range of TV production styles with an initial release that includes nine albums: Panoramic Landscapes; Simply Happy; Quirky & Eccentric; Piano Moods; Chase & Surveillance; Secret Service; Actionism; Drama Cuts; and Crime Scene.

Following the initial release, Warner/Chappell Production Music plans to offer two new compilations from the catalog every two weeks. Color TV is available for licensing worldwide, excluding Italy and France.

“Composers have that unique talent and ability to translate what they’re feeling,” explains Warner/Chappell Production Music president Randy Wachtler. “You can hear emotion in different compositions, and it’s always interesting to hear how creators from countries around the world capture it. Adding to our mix only adds more perspective and more choice for our clients.”

Cress began his musical career in the 1960s, performing in acts such as Klaus Doldinger’s Passport and his own band Snowball, as well as in Falco and Udo Lindenberg’s band. His solo projects involved work with local and international artists including Freddie Mercury, Tina Turner, Rick Springfield, SAGA, Meat Loaf and Scorpions, as well as releasing his own solo material. He made a name for himself as a composer for popular German films and TV series such as SK Kölsch, HeliCops and The Red Mile.

Elbroar, out of Hamburg, Germany, is a collection ranging from epic to minimal, jazz to techno and drama to fun. The catalog serves creatives in the fields of television, film and advertising, with a strong focus on trailers and daytime TV.

The catalog’s first release, “Epic Fairy Tales,” is an album of orchestral arrangements that set the scene for fantastic stories and epic emotions. Elbroar is available for licensing immediately, worldwide.

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

“Girls do not do rewrites,” says Jim Belushi’s character, Wick McFadden, in Amazon Studios’ series Good Girls Revolt. It’s 1969, and he’s the national editor at News of the Week, a fictional news magazine based in New York City. He’s confronting the new researcher Nora Ephron (Grace Gummer), who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is that in 1969 women aren’t writers; they’re only “researchers,” following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender-bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom they constructed had an open floor plan with a bi-level design. The girls are located in “the pit” area downstairs from the male writers. The newsroom production set was hollow, which caused an issue with the actors’ footsteps that were recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. “We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage,” says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All of the post sound work (sound editorial, Foley, ADR, loop group and final mix) was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialog editor Sean Massey made every effort to save production dialog when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, “the set mixer Joe Foglia was terrific,” says Austin. “He captured some great tracks despite all these issues, and for that we’re very thankful!”

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds: typewriters clacking and dinging, rotary phones with whirring dials and bell-style ringers, papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library and from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and helped to blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this, so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her lead and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic but it couldn’t swallow up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. Austin reports the teletype sound they used was captured from a working teletype machine they actually had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5 “The Year-Ender” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”
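The "slowed down and pitched down" treatment Austin describes can be approximated generically with varispeed resampling, in which slowing playback also lowers pitch, tape-style. A rough Python sketch of that underlying idea (this is not Revoice Pro, just a generic illustration on a stand-in signal):

```python
# Varispeed resampling via linear interpolation: a rate below 1.0 plays
# the clip back slower AND pitched down together, like slowing a tape.

def varispeed(samples, rate):
    """Resample a mono clip; rate < 1.0 slows it down and lowers its pitch."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between neighboring samples at the fractional position.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out

tone = [float(i % 4) for i in range(8)]   # stand-in for a loop-group clip
slowed = varispeed(tone, 0.5)             # half speed: longer and an octave down
```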

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialog/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.


Behind the Title: Composer Michael Carey

NAME: Michael Carey (@MichaelCarey007)

COMPANY: Resonation Music

WHAT’S YOUR JOB TITLE?
Creative director/composer (film/commercials/TV) and songwriter/producer/mixer (album work).

WHAT DOES THAT ENTAIL?
For commercials, film and TV projects, I work closely with the director, producer and agency to come up with something that meets their needs and the needs of the project. I develop an understanding of their overall vision, and then I conceptualize, compose and produce original music to capture the essence of this vision in a complementary way.

Michael Carey was composer of the main title theme and the opening scenes for ‘I Want to Say.’

This includes themes, underscore, source, main titles, end titles, etc. When it comes to album projects and soundtrack songs, I often write for (or with) the featured artist or band and produce the track from end to end. This means that I am also the engineer, programmer, session player and often mixer for a project.

On large projects that require fast turnaround, I wear the “creative director” hat, and I assemble and manage a specific team of colleagues to collaborate with me — those I know can get the job done at the highest level. I keep things focused and cohesive, and strive to maintain a consistent musical voice.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Whichever medium I’m working in, be it music-for-picture or album work, the underlying fundamentals are surprisingly similar. In both instances, it’s ultimately about storytelling: conveying maximum emotional impact in a compelling way, using dynamics, melody, tension, release, density and space to create memorable moments and exciting transitions that keep the viewer or listener engaged.

I’m always striving to support the “main event.” In film, it’s visuals and dialog. In album work it’s the singer’s performance. I see my job as building a metaphorical “frame” around the picture. Enhance, reinforce, complement, but never distract.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Two parts, really. First, the satisfaction of achieving a collective goal. Helping a filmmaker/artist realize their vision, while finding a way to authentically express my own musical vision and make a deeper connection with the audience experiencing the work.

There are moments in the course of a project when you hit on something that’s undeniable. Everyone involved immediately feels it. Human connections are made. Those are great moments, and ultimately you want the whole piece to feel like that.

The second part is the inspiration that comes from working collaboratively (usually with people at the top of their game) with those talented peers who challenge and push you in directions you might not have taken otherwise.

WHAT IS YOUR PROCESS FOR SCORING? HOW DO YOU BEGIN?
1) Watch film/read script. 2) Discuss with director, get a sense of their vision. 3) Create musical sketches and build a sonic palette. If there’s already some picture available to work with, then I’ll tackle a scene that feels representative of the rest of the project and refine it with input from the director. My goal is to create a musical/sonic “voice” or “sound” for the film that becomes an inextricable part of its personality.

CAN YOU WALK US THROUGH YOUR WORKFLOW?
Once overall direction has been established and scenes have been spotted, my first step with a scene is to map things out tempo/timing-wise, making note of any significant cuts, events or moments that need to be hit (or avoided) musically.

By defining this structure first, it frees me up to explore musically and texturally with a clear understanding of where “ins” and “outs” are. By then, I usually have a pretty clear sense of what I want to hear as it pertains to realizing the vision of the director, and from that point it is about execution — programming, recording live instrumentation, processing/manipulation and mixing — whatever is required to make the scene “feel” the way it does in my head.
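The tempo/timing mapping step described above can be sketched numerically: choose a tempo so that a musical downbeat lands exactly on a picture event that must be hit. A minimal illustration (the event time and beat numbers are hypothetical):

```python
# Tempo-map arithmetic: at a given BPM, beat i falls at i * 60 / bpm seconds.
# Inverting that picks the tempo that puts a chosen beat on a picture event.

def beat_times(bpm, beats):
    """Times, in seconds, of the first `beats` beats at a given tempo."""
    return [i * 60.0 / bpm for i in range(beats)]

def tempo_to_hit(event_s, beat_index):
    """Tempo at which beat `beat_index` lands exactly at `event_s` seconds."""
    return beat_index * 60.0 / event_s

# Suppose a cut at 7.5 seconds should fall on beat 16 (the bar 5 downbeat in 4/4):
bpm = tempo_to_hit(7.5, 16)    # -> 128.0 BPM
grid = beat_times(bpm, 17)     # grid[16] lands back on 7.5 seconds
```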

DOES YOUR PROCESS CHANGE DEPENDING ON THE TYPE OF PROJECT? FILM VS. SPOT, ETC?
There are certain nuances that have to be considered when approaching these different types of projects. Nailing the details in short form (commercials) is often more crucial because you have an entire world of information to convey in 30 seconds or less. There can be no missed moment or opportunity. It needs to feel cohesive with a cinematic story arc, and a compelling payoff at the end, all in an incredibly compressed window of time.

This is less evident in long-form projects. With feature films or TV, you often have the luxury to build musical movements more naturally as a scene progresses.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
That’s a tough one. As a kid I wanted to be an anthropologist. At 21, I went to a cooking school in Paris for a month thinking that that might be cool. More recently, I’ve been dabbling with building websites for friends using template-based platforms like Squarespace.

I think the common themes with these other interests are curiosity, experimentation, creativity and storytelling. Bringing an idea to life, making the abstract tangible. At the end of the day, music still allows me to do these things with a greater degree of satisfaction.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I knew music would be my path by age 14. I was playing guitar in local bands at the time, and then moved into steady club gigs. By the time I was 18, I was in a signed band, recording and touring. I couldn’t have imagined doing anything else. When I hit my 20s, I knew that writing and composing was the path ahead (vs. being a “gun for hire” guitarist).

I still played in bands and did lots of session work, but I focused more on songwriting and learning about recording and production. During that time, I had the opportunity to work with some legendary British engineer-producers. At one point, a well-known video director who had shot some videos with one of my bands had started doing commercials, and he was unhappy with the music that an ad agency had put in one of his spots. So he recruited me to take a shot at composing a new score. It all clicked, and that opened the door to a couple of decades of high-profile commercial spots, as well as consistent work from major ad agencies and brands.

Eventually, this journey led me down the road of TV and film. All the while, I kept a foot in the album world, writing for and producing artists in the US and internationally.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
• I Want to Say – Composer: Main Title and opening scenes (Healdsburg International Film Festival, Best Documentary)
• LBS – Songwriter/Producer: End Title track feat. J.R. Richards of Dishwalla (Sundance Official Selection, Independent Spirit Awards nominee)
• Andy Vargas/The Beat – Producer/Songwriter (Winner, 2016 Hollywood Music in Media Awards, “R&B/Soul”)
• Escape the Fate/Alive – Songwriter (hit single, #26 Active Rock; album #2 on Billboard Hard Rock chart)

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’s hard to pick one. Some of the projects listed above are contenders. There’s a young band I’m developing and producing right now called Bentley. I will be very proud when that is released. They’re fantastic.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Pro Tools. It’s my “instrument” as much as any guitar or keyboard. It’s allowed me to be incredibly productive and make anything I hear in my head a reality. Steven Slate, Soundtoys and PSP plug-ins. Vibe, warmth, color, saturation, detail. My extensive collection of vintage gear (amps, mics, mic pres, compressors, guitars, boutique pedals, etc.). Not sure if these qualify as “technology,” but they all have buttons and knobs and make great noises!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter and Facebook (to a lesser extent lately).

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have an amazing family who helps keep me centered with my eyes on the big picture. Running and exercise (not enough, but feels great when I do) and, increasingly, I try to meditate each morning. A friend and colleague whose studio demeanor I’ve always admired turned me onto it. He’s consistently calm and focused even in the midst of total drama and chaos. I’d like to think I’m getting there.

Main Image: Patricia Maureen Photography-P.M.P


Quick Chat: Monkeyland Audio’s Trip Brock

By Dayna McCallum

Monkeyland Audio recently expanded its facility, including a new Dolby Atmos equipped mixing stage. The Glendale-based Monkeyland Audio, where fluorescent lights are not allowed and creative expression is always encouraged, now offers three mixing stages, an ADR/Foley stage and six editorial suites.

Trip Brock, the owner of Monkeyland, opened the facility over 10 years ago, but the MPSE Golden Reel Award-winning supervising sound editor and mixer (All the Wilderness) started out in the business more than 23 years ago. We reached out to Brock to find out more about the expansion and where the name Monkeyland came from in the first place…

One of your two new stages is Dolby Atmos certified. Why was that important for your business?
We really believe in the Dolby Atmos format and feel it has a lot of growth potential in both the theatrical and television markets. We purpose-built our Atmos stage looking towards the future, giving our independent and studio clients a less expensive, yet completely state-of-the-art alternative to the Atmos stages found on the studio lots.

Can you talk specifically about the gear you are using on the new stages?
All of our stages are running the latest Avid Pro Tools HD 12 software across multiple Mac Pros with Avid HDX hardware. Our 7.1 mixing stage, Reposado, is based around an Avid Icon D-Control console, and Anejo, our Atmos stage, is equipped with dual 24-fader Avid S6 M40 consoles. Monitoring on Anejo is based on a 3-way JBL theatrical system, with 30 channels of discrete Crown DCi amplification, BSS processing and the DAD AX32 front end.

You’ve been in this business for over 23 years. How does that experience color the way you run your shop?
I stumbled into the post sound business coming from a music background, and immediately fell in love with the entire process. After all these years, having worked with and learned so much from so many talented clients and colleagues, I still love what I do and look forward to every day at the office. That’s what I look for and try to cultivate in my creative team — the passion for what we do. There are so many aspects and nuances in the audio post world, and I try to express that to my team — explore all the different areas of our profession, find which role really speaks to you and then embrace it!

You’ve got 10 artists on staff. Why is it important to you to employ a full team of talent, and how do you see that benefiting your clients?
I started Monkeyland as primarily a sound editorial company. Back in the day, this was much more common than the all-inclusive, independent post sound outfits offering ADR, Foley and mixing, which are more common today. The sound editorial crew always worked together in house as a team, which is a theme I’ve always felt was important to maintain as our company made the switch into full service. To us, keeping the team intact and working together at the same location allows for a lot more creative collaboration and synergy than, say, a set of editors all working by themselves remotely. Having staff in house also allows us flexibility when last-minute changes are thrown our way. We are better able to work and communicate as a team, which leads to a superior end product for our clients.

Can you name some of the projects you are working on and what you are doing for them?
We are currently mixing a film called The King’s Daughter, starring Pierce Brosnan and William Hurt. We also recently completed full sound design and editorial, as well as the native Atmos mix, on a new post-apocalyptic feature we are really proud of called The Worthy. Other recent editorial and mixing projects include the latest feature from director Alan Rudolph, Ray Meets Helen; the 10-episode series Junior for director Zoe Cassavetes; and Three Days to Live, a new eight-episode true-crime series for NBC/Universal.

Most of your stage names are related to tequila… Why is that?
Haha — this is kind of a take-off from the naming of the company itself. When I was looking for a company name, I knew I didn’t want it to include the word “digital” or have any hint toward technology, which seemed to be the norm at the time. A friend in college used to tease me about my “unique” major in audio production, saying stuff like, “What kind of a degree is that? A monkey could be trained to do that.” Thus Monkeyland was born!

Same theory applied to our stage names. When we built the new stages and needed to name them, I knew I didn’t want to go with the traditional stage “A, B, C” or “1, 2, 3,” so we decided on tequila types — Anejo, Reposado, Plata, even Mezcal. It seems to fit our personality better, and who doesn’t like a good margarita after a great mix!


Virtual Reality Roundtable

By Randi Altman

Virtual reality is seemingly everywhere, especially this holiday season. Just one look at your favorite electronics store’s website and you will find VR headsets from the inexpensive, to the affordable, to the “if I win the lottery” ones.

While there are many companies popping up to service all aspects of VR/AR/360 production, for the most part traditional post and production companies are starting to add these services to their menu, learning best practices as they go.

We reached out to a sampling of pros who are working in this area to talk about the problems and evolution of this burgeoning segment of the industry.

Nice Shoes Creative Studio: Creative director Tom Westerlin

What is the biggest issue with VR productions at the moment? Is it lack of standards?
A big misconception is that a VR production is like a standard 2D video/animation commercial production. There are some similarities, but it gets more complicated when we add interaction, different hardware options, realtime data and multiple distribution platforms. It actually takes a lot more time and man-hours to create a 360 video or VR experience relative to a 2D video production.

Tom Westerlin

More development time needs to be scheduled for research, user experience and testing. We’re adding more stages to the overall production. None of this should discourage anyone from exploring a concept in virtual reality, but there is a lot of consideration and research that should be done in the early stages of a project. The lack of standards presents some creative challenges for brands and agencies considering a VR project. The hardware and software choices made for distribution can have an impact on the size of the audience you want to reach, as well as on the approach to building the experience.

The current landscape provides the following options:
YouTube and Facebook can reach a ton of people with a 360 video, but they offer limited VR functionality. A WebVR experience works within certain browsers, like Chrome or Firefox, but not others, which limits your audience. A custom app or experimental installation using the Oculus or HTC Vive allows for experiences with full interactivity, but presents the issue of audience limitations. There is currently no one best way to create a VR experience. It’s still very much a time of discovery and experimentation.

What should clients ask of their production and post teams when embarking on their VR project?
We shouldn’t just apply what we’ve all learned from 2D filmmaking to the creation of a VR experience, so it is crucial to include the production, post and development teams in the design phase of a project.

The majority of clients are coming from the world of traditional production, where standard constructs (quick camera moves, fast cuts, extreme close-ups) are routine; in VR, those same choices can have negative physiological implications, from disorientation to outright nausea. The impact of seemingly simple creative or design decisions can have huge repercussions on complexity, time, cost and the user experience. It’s important for clients to be open to telling a story in a different manner than they’re used to.

What is the biggest misconception about VR — content, process or anything relating to VR?
The biggest misconception is clients thinking that 360 video and VR are the same. As we’ve started to introduce this technology to our clients, we’ve worked to explain the core differences between these extremely different experiences: VR is interactive and most of the time a full CG environment, while 360 is video and, although immersive, a more passive experience. Each has its own unique challenges and rewards, so as we think about the end user’s experience, we can determine what will work best.

There’s also the misconception that VR will make you sick. If executed poorly, VR can make a user sick, but the right creative ideas executed with the right equipment can result in an experience that’s quite enjoyable and nausea free.

Nice Shoes’ ‘Mio Garden’ 360 experience.

Another misconception is that VR is capable of anything. While many may confuse VR and 360 and think an experience is limited to passively looking around, there are others who have bought into the hype and inflated promises of a new storytelling medium. That’s why it’s so important to understand the limitations of different devices at the early stages of a concept, so that creative, production and post can all work together to deliver an experience that takes advantage of VR storytelling, rather than falling victim to the limitations of a specific device.

The advent of affordable systems that are capable of interactivity, like the Google Daydream, should lead to more popular apps that show off a higher level of interactivity. Even sharing video of people experiencing VR while interacting with their virtual worlds could have a huge impact on the understanding of the difference between passively watching and truly reaching out and touching.

How do we convince people this isn’t stereo 3D?
In one word: Interactivity. By definition VR is interactive and giving the user the ability to manipulate the world and actually affect it is the magic of virtual reality.

Assimilate: CEO Jeff Edson

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue in VR is establishing straightforward workflows — from camera to delivery — and then, of course, delivery to what? Compared to a year ago, shooting 360/VR video today has made big steps in ease of use because more people have experience doing it. But it is a LONG way from point and shoot. As integrated 360/VR video cameras come to market more and more, VR storytelling will become much more straightforward and the creators can focus more on the story.

Jeff Edson

And then delivery to what? There are many online platforms for 360/VR video playback today: Facebook, YouTube 360 and others for mobile headset viewing, and then there is delivery to a PC for non-mobile headset viewing. The viewing perspective is different for all of these, which means extra work to ensure continuity on all the platforms. To cover all possible viewers one needs to publish to all. This is not an optimal business model, which is really the crux of this issue.

Can standards help in this? Standards as we have known them in the video world, yes and no. The standards for 360/VR video are happening by default, such as equirectangular and cubic formats, and delivery formats like H.264, MOV and more. Standards would help, but they are not the limiting factor for growth. The market is not waiting on a defined set of formats because demand for VR is quickly moving forward. People are busy creating.
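As a concrete illustration of the equirectangular format mentioned above, here is a minimal sketch of how a 3D view direction maps to a pixel in an equirectangular frame. The function name and axis conventions are assumptions for illustration, not part of any standard cited here:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit view direction (x, y, z) to pixel coordinates in an
    equirectangular frame. Convention (assumed for this sketch):
    +z forward, +x right, +y up; longitude 0 at the image center."""
    lon = math.atan2(x, z)                        # -pi..pi, left/right
    lat = math.asin(max(-1.0, min(1.0, y)))       # -pi/2..pi/2, down/up
    u = (lon / (2 * math.pi) + 0.5) * width       # column
    v = (0.5 - lat / math.pi) * height            # row (top = up)
    return u, v

# Looking straight ahead lands at the center of a 4096x2048 frame:
print(direction_to_equirect(0, 0, 1, 4096, 2048))  # → (2048.0, 1024.0)
```

Every pixel of the flat frame corresponds to one such direction on the sphere, which is why stitching and playback tools can agree on the format without a formal standards body.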

What should clients ask of their production and post teams when embarking on their VR project?
We hear from our customers that the best results come when the director, DP and post supervisor collaborate on the expectations for look and feel, as well as the possible creative challenges and resolutions. Experience and budget are big contributors. A key issue is: what camera/rig requirements are needed for your targeted platform(s)? For example, how many cameras and what type of cameras (4K, 6K, GoPro, etc.), as well as lighting? And what about sound, which plays a key role in the viewer’s VR experience?


This Yael Naim mini-concert was posted in Scratch VR by Alex Regeffe at Neotopy.

What is the biggest misconception about VR — content, process or anything relating to VR?
I see two. One: The perception that VR is a flash in the pan, just a fad. What we see today is just the launch pad. The applications for VR are vast within entertainment alone, and then there is the extensive list of other markets like training and learning in such fields as medical, military, online universities, flight, manufacturing and so forth. Two: That VR post production is a difficult process. There are too many steps and tools. This definitely doesn’t need to be the case. Our Scratch VR customers are getting high-quality results within a single, simplified VR workflow.

How do we convince people this isn’t stereo 3D?
The main issue with stereo 3D is that it has really never scaled beyond a theater experience. Whereas with VR, it may end up being just the opposite. It’s unclear if VR can be a true theater experience other than classical technologies like domes and simulators. 360/VR video in the near term is, in general, a short-form media play. It’s clear that sooner than later smartphones will be able to shoot 360/VR video as a standard feature and usage will skyrocket overnight. And when that happens, the younger demographic will never shoot anything that is not 360. So the Snapchat/Instagram kinds of platforms will be filled with 360 snippets. VR headsets based upon mobile devices make the sheer number of displays significant. The initial tethered devices are not insignificant in numbers, but with the next generation of higher-resolution and untethered devices, maybe most significantly at a much lower price point, we will see the numbers become massive. None of this was ever the case with stereo 3D film/video.

Pixvana: Executive producer Aaron Rhodes

What is the biggest issue with VR productions at the moment? Is it lack of standards?
There are many issues with VR productions; many of them are just growing pains: not being able to see a live stitch, how to direct without being in the shot, what to do about lighting — but these are all part of the learning curve and evolution of VR as a craft. Resolution and management around big data are the biggest issues I see on the set. Pixvana is all about resolution — it plays a key role in better immersion. Many of the cameras out there only master at 4K and that just doesn’t cut it. But when they do shoot 8K and above, the data management is extreme. Don’t underestimate the responsibility you are giving to your DIT!


Aaron Rhodes

The biggest issue is this is early days for VR capture. We’re used to a century of 2D filmmaking and decade of high-definition capture with an assortment of camera gear. All current VR camera rigs have compromises, and will, until technology catches up. It’s too early for standards since we’re still learning and this space is changing rapidly. VR production and post also require different approaches. In some cases we have to unlearn what worked in standard 2D filmmaking.

What should clients ask of their production and post teams when embarking on their VR project?
Give me a schedule, and make it realistic. Stitching takes time, and unless you have a fleet of render nodes at your disposal, rendering your shot locally is going to take time — and everything you need to update or change it will take more time. VR post has lots in common with a non-VR spot, but the magnitude of data and rendering is much greater — make sure you plan for it.

Other questions to ask, because you really can’t ask enough:
• Why is this project being done as VR?
• Does the client have team members who understand the VR medium?
• If not, will they be willing to work with a production team to design and execute with VR in mind?
• Has this project been designed for VR rather than just a 2D project in VR?
• Where will this be distributed? (Headsets? Which ones? YouTube? Facebook? Etc.)
• Will this require an app or will it be distributed to headsets through other channels?
• If it is an app, who will build the app and submit it to the VR stores?
• Do they want to future proof it by finishing greater than 4K?
• Is this to be mono or stereo? (If it’s stereo it better be very good stereo)
• What quality level are they aiming for? (Seamless stitches? Good stereo?)
• Is there time and budget to accomplish the quality they want?
• Is this to have spatialized audio?

What is the biggest misconception about VR — content, process or anything relating to VR?
VR is a narrative component, just like any actor or plot line. It’s not something that should just be done to do it. It should be purposeful to shoot VR. It’s the same with stereo. Don’t shoot stereo just because you can — sure, you can experiment and play (we need to do that always), but don’t without purpose. The medium of VR is not for every situation.
Other misconceptions, because there are a lot out there:
• It’s as easy as shooting normal 2D.
• You need to have action going on constantly in 360 degrees.
• Everything has to be in stereo.
• There are fixed rules.
• You can simply shoot with a VR camera and it will be interesting, without any idea of specific placement, story or design.
How do we convince people this isn’t stereo 3D?
Education. There are tiers of immersion with VR, and stereo 3D is one of them. I see these tiers starting with the desktop experience and going up in immersion from there, and it’s important to understand the strengths and weaknesses of each:
• YouTube/Facebook on the desktop [low immersion]
• Cardboard, GearVR, Daydream 2D/3D low-resolution
• Headset Rift and Vive 2D/3D 6 degrees of freedom [high immersion]
• Computer generated experiences [high immersion]

Maxon US: President/CEO Paul Babb


Paul Babb

What is the biggest issue with VR productions at the moment? Is it lack of standards?
Project file size. Huge files. Lots of pixels. Telling a story. How do you get the viewer to look where you want them to look? How do you tell and drive a story in a 360 environment?

What should clients ask of their production and post teams when embarking on their VR project?
I think it’s more that production teams are going to have to ask the questions to focus what clients want out of their VR. Too many companies just want to get into VR (buzz!) without knowing what they want to do, what they should do and what the goal of the piece is.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
Oh boy. Let me tell you, that’s a tough one. People don’t even know that “3D” is really “stereography.”

Experience 360°: CEO Ryan Moore

What is the biggest issue with VR productions at the moment? Is it lack of standards?
One of the biggest issues plaguing the current VR production landscape is the lack of true professionals in the field. While a vast majority of independent filmmakers are doing their best at adapting their current techniques, they have been unsuccessful in perceiving how films and VR experiences genuinely differ. This apparent lack of virtual understanding generally leads to poor UX creation within finalized VR products.

Given the novelty of virtual reality and 360 video, standards are only just being determined in terms of minimum quality and image specifications. These, however, are constantly changing. In order to keep a finger on the pulse, it is encouraged for VR companies to be plugged into 360 video communities through social media platforms. It is through this essential interaction that VR production technology can continually be reintroduced.

What should clients ask of their production and post teams when embarking on their VR project?
When first embarking on a VR project, it is highly beneficial to walk prospective clients through the entirety of the process, before production actually begins. This allows the client a full understanding of how the workflow is used, while also ensuring client satisfaction with the eventual partnership. It’s vital that production partners convey an ultimate understanding of VR and its use, and explain their tactics in “cutting” VR scenes in post — this can affect the user’s experience in a pronounced way.

‘The Backwoods Tennessee VR Experience’ via Experience 360.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people that this isn’t stereo 3D?
The biggest misconception about VR and 360 video is that it is an offshoot of traditional storytelling, and can be used in ways similar to both cinematic and documentary worlds. The mistake in the VR producer equating this connection is that it can often limit the potential of the user’s experience to that of a voyeur only. Content producers need to think much farther out of this box, and begin to embrace having images paired with interaction and interactivity. It helps to keep in mind that the intended user will feel as if these VR experiences are very personal to them, because they are usually isolated in a HMD when viewing the final product.

VR is being met with appropriate skepticism, and is widely still considered a “fad” within the media landscape. This is often because the critic has not actually had a chance to try a virtual reality experience firsthand, and does not understand the wide-reaching potential of immersive media. At three years in, a majority of adults in the United States have never had a chance to try VR themselves, relying on what they understand from TV commercials and online reviews. One of the best ways to convince a doubtful viewer is to give them a chance to try a VR headset themselves.

Radeon Technologies Group at AMD: Head of VR James Knight

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue for us is (or was) probably stitching and the excessive amount of time it takes, but we’re tackling that head-on with Project Loom. We have realtime stitching with Loom. You can already download an early version of it on GPUopen.com. But you’re correct, there is a lack of standards in VR/360 production. It’s mainly because there are no really established common practices. That’s to be expected, though, when you’re shooting for a new medium. Hollywood and entertainment professionals are showing up to the space in a big way, so I suspect we’ll all be working out lots of the common practices on sets in 2017.

James Knight

What should clients ask of their production and post teams when embarking on their VR project?
Double-check that they have experience shooting 360 and ask them for a detailed post production pipeline outline. Occasionally, we hear horror stories of people awarding projects to companies that think they can shoot 360 without having personally explored 360 shooting themselves and made mistakes. You want to use an experienced crew that has made the mistakes and is cognizant of what works and what doesn’t. The caveat there, though, is that, again, there are no established rules necessarily, so people should be willing to try new things… sometimes it takes someone not knowing they shouldn’t do something to discover something great, if that makes sense.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
That’s a fun question. The overarching misconception for me, honestly, is that people oftentimes assume VR is for kids, or for 16-year-old boys at home in their boxer shorts, much as a cliché politician might make a fleeting judgment that video games are bad for society. It isn’t. This young industry is really starting to build up a decent library of content, and the payoff is huge when you see well-produced content! It’s transformative and you can genuinely envision the potential when you first put on a VR headset.

The biggest way to convince them this isn’t 3D is to convince a naysayer to put the headset on… let’s agree we all look rather silly with a VR headset on, and once you get over that, you’ll find out what’s inside. It’s magical. I had the CEO of BAFTA LA, Chantal Rickards, tell me upon seeing VR for the first time, “I remember when my father arrived home on Christmas Eve with a color TV set in the 1960s and the excitement that brought to me and my siblings. The thrill of seeing virtual reality for the first time was like seeing color TV for the first time, but times 100!”

Missing Pieces: Head of AR/VR/360 Catherine Day

Catherine Day

What is the biggest issue with VR productions at the moment?
The biggest issue with VR production today is the fact that everything keeps changing so quickly. Every day there’s a new camera, a new set of tools, a new proprietary technology and new formats to work with. It’s difficult to understand how all of these things work, and even harder to make them work together seamlessly in a deadline-driven production setting. So much of what is happening on the technology side of VR production is evolving very rapidly. Teams often reinvent the wheel from one project to the next as there are endless ways to tell stories in VR, and the workflows can differ wildly depending on the creative vision.

The lack of funding for creative content is also a huge issue. There’s ample funding to create in other mediums, and we need more great VR content to drive consumer adoption.

Is it lack of standards?
In any new medium and any pioneering phase of an industry, it’s dangerous to create standards too early. You don’t want to stifle people from trying new things. As an example, with our recent NBA VR project, we broke all of the conventional rules that exist around VR — there was a linear narrative, fast cut edits, it was over 25 minutes long — yet it was still very well received. So it’s not a lack of standards, just a lack of bravery.

What should clients ask of their production and post teams when embarking on their VR project?
Ask to see what kind of work that team has done in the past. They should also delve in and find out exactly who completed the work and how much, if any, of it was outsourced. There is a curtain that often closes between the client and the production/post company and it closes once the work is awarded. Clients need to know who exactly is working on their project, as much of the legwork involved in creating a VR project — stitching, compositing etc. — is outsourced.

It’s also important to work with a very experienced post supervisor — one with a very discerning eye. You want someone who really knows VR that can evaluate every aspect of what a facility will assemble. Everything from stitching, compositing to editorial and color — the level of attention to detail and quality control for VR is paramount. This is key not only for current releases, but as technology evolves — and as new standards and formats are applied — you want your produced content to be as future-proofed as possible so that if it requires a re-render to accommodate a new, higher-res format in the future, it will still hold up and look fantastic.

What is the biggest misconception about VR — content, process or anything relating to VR?
On the consumer level, the biggest misconception is that people think that 360 video on YouTube or Facebook is VR. Another misconception is that regular filmmakers are the creative talents best suited to create VR content. Many of them are great at it, but traditional filmmakers have the luxury of being in control of everything, and in a VR production setting you have no box to work in and you have to think about a billion moving parts at once. So it either requires a creative that is good with improvisation, or a complete control freak with eyes in the back of their head. It’s been said before, but film and theater are as different as film and VR. Another misconception is that you can take any story and tell it in VR — you actually should only embark on telling stories in VR if they can, in some way, be elevated through the medium.

How do we convince people this isn’t stereo 3D?
With stereo 3D, there was no simple, affordable path for consumer adoption. We’re still getting there with VR, but today there are a number of options for consumers and soon enough there will be a demand for room-scale VR and more advanced immersive technologies in the home.


VR Audio: Virtual and spatial soundscapes

By Beth Marchant

The first things most people think of when starting out in VR is which 360-degree camera rig they need and what software is best for stitching. But virtual reality is not just a Gordian knot for production and post. Audio is as important — and complex — a component as the rest. In fact, audio designers, engineers and composers have been fascinated and challenged by VR’s potential for some time and, working alongside future-looking production facilities, are equally engaged in forging its future path. We talked to several industry pros on the front lines.

Howard Bowler

Music industry veteran and Hobo Audio founder Howard Bowler traces his interest in VR back to the groundbreaking film Avatar. “When that movie came out, I saw it three times in the same week,” he says. “I was floored by the technology. It was the first time I felt like you weren’t just watching a film, but actually in the film.” As close to virtual reality as 3D films had gotten to that point, it was the blockbuster’s evolved process of motion capture and virtual cinematography that ultimately delivered its breathtaking result.

“Sonically it was extraordinary, but visually it was stunning as well,” he says. “As a result, I pressed everyone here at the studio to start buying 3D televisions, and you can see where that has gotten us — nowhere.” But a stepping stone in technology is more often a sturdy bridge, and Bowler was not discouraged. “I love my 3D TVs, and I truly believe my interest in that led me and the studio directly into VR-related projects.”

When discussing the kind of immersive technology Hobo Audio is involved with today, Bowler — like others interviewed for this series — clearly defines VR’s parallel deliverables. “First, there’s 360 video, which is passive viewing, but still puts you in the center of the action. You just don’t interact with it. The second type, more truly immersive VR, lets you interact with the virtual environment as in a video game. The third area is augmented reality,” like the Pokemon Go phenomenon of projecting virtual objects and views onto your actual, natural environment. “It’s really important to know what you’re talking about when discussing these types of VR with clients, because there are big differences.”

With each segment comes related headsets, lenses and players. “Microsoft’s HoloLens, for example, operates solely in AR space,” says Hobo producer Jon Mackey. “It’s a headset, but will project anything that is digitally generated, either on the wall or to the space in front of you. True VR separates you from all that, and really good VR separates all your senses: your sight, your hearing and even touch and feeling, like some of those 4D rides at Disney World.” Which technology will triumph? “Some think VR will take it, and others think AR will have wider mass adoption,” says Mackey. “But we think it’s too early to decide between either one.”

Boxed Out

‘Boxed Out’ is a Hobo indie project about how gentrification is affecting artists’ studios in the Gowanus section of Brooklyn.

Those kinds of end-game obstacles are beside the point, says Bowler. “The main reason why we’re interested in VR right now is that the experiences, beyond the limitations of whatever headset you watch it on, are still mind-blowing. It gives you enough of a glimpse of the future that it’s incredible. There are all kinds of obstacles it presents just because it’s new technology, but from our point of view, we’ve honed it to make it pretty seamless. We’re digging past a lot of these problem areas, so at least from the user standpoint, it seems very easy. That’s our goal. Down the road, people from medical, education and training are going to need to understand VR for very productive reasons. And we’re positioning ourselves to be there on behalf of our clients.”

Hobo’s all-in commitment to VR has brought changes to its services as well. “Because VR is an emerging technology, we’re investing in it globally,” says Bowler. “Our company is expanding into complete production, from concepting — if the client needs it — to shooting, editing and doing all of the audio post. We have the longest experience in audio post, but we find that this is just such an exciting area that we wanted to embrace it completely. We believe in it and we believe this is where the future is going to be. Everybody here is completely on board to move this forward and sees its potential.”

To ramp up on the technology, Hobo teamed up with several local students who were studying at specialty schools. “As we expanded out, we got asked to work with a few production companies, including East Coast Digital and End of Era Productions, that are doing the video side of it. We’re bundling our services with them to provide a comprehensive set of services.” Hobo is also collaborating with Hidden Content, a VR production and post production company, to provide 360 audio for premium virtual reality content. Hidden Content’s clients include Samsung, 451 Media, Giant Step, PMK-BNC, Nokia and Popsugar.

There is still plenty of magic sauce in VR audio that continues to make it a very tricky part of the immersive experience, but Bowler and his team are engineering their way through it. “We’ve been developing a mixing technique that allows you to tie the audio to the actual object,” he says. “What that does is disrupt the normal stereo mix. Say you have a public speaker in the center of the room; normally that voice would turn with you in your headphones if you turn away from him. What we’re able to do is to tie the audio of the speaker to the actual object, so when you turn your head, it will pan to the right earphone. That also allows you to use audio as signaling devices in the storyline. If you want the viewer to look in a certain direction in the environment, you can use an audio cue to do that.”
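The object-tied panning Bowler describes can be sketched roughly as follows. This is a simplified constant-power stereo pan, not Hobo’s actual mixing technique; the function name and angle conventions are assumptions for illustration:

```python
import math

def head_relative_gains(source_azimuth_deg, head_yaw_deg):
    """Constant-power stereo pan for a sound source fixed in the world.
    Azimuths in degrees: 0 = straight ahead, positive = to the right.
    Returns (left_gain, right_gain). A sketch of the head-tracked
    panning idea, not a production spatializer."""
    # Source direction relative to where the listener is facing,
    # wrapped into -180..180 degrees.
    rel = math.radians((source_azimuth_deg - head_yaw_deg + 180) % 360 - 180)
    # Fold the relative azimuth into a pan position in [-1, 1]
    # (full left .. full right); rear angles fold toward the sides.
    pan = math.sin(rel)
    theta = (pan + 1) * math.pi / 4   # 0..pi/2 along the pan arc
    return math.cos(theta), math.sin(theta)

# Speaker straight ahead, listener facing it: equal energy in both ears.
left, right = head_relative_gains(0, 0)
# Turn the head 90 degrees to the left: the speaker pans hard right.
left2, right2 = head_relative_gains(0, -90)
```

Re-evaluating the gains every frame against live head-tracking data keeps the voice anchored to the speaker rather than to the headphones, which is the effect described above; the same relative azimuth can also drive an attention cue, since a sound placed off-axis pulls the viewer’s gaze toward it.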

Hobo engineer Diego Jimenez drove a lot of that innovation, says Mackey. “He’s a real VR aficionado and just explored a lot of the software and mixing techniques required to do audio in VR. We started out just doing a ton of tests and they all proved successful.” Jimenez was always driven by new inspiration, notes Bowler. “He’s certainly been leading our sound design efforts on a lot of fronts, from creating instruments to creating all sorts of unusual and original sounds. VR was just the natural next step for him, and for us. For example, one of the spots that we did recently was to create a music video and we had to create an otherworldly environment. And because we could use our VR mixing technology, we could also push the viewer right into the experience. It was otherworldly, but you were in that world. It’s an amazing feeling.”


‘Boxed Out’

What advice do Bowler and Mackey have for those interested in VR production and post? “360 video is to me the entry point to all other versions of immersive content,” says Bowler. “It’s the most basic, and it’s passive, like what we’re used to — television and film. But it’s also a completely undefined territory when it comes to production technique.” So what’s the way in? “You can draw on some of the older ways of doing productions,” he says, “but how do you storyboard in 360? Where does the director sit? How do you hide the crew? How do you light this stuff? All of these things have to be considered when creating 360 video. That also includes everyone on camera: all the viewer has to do is look around the virtual space to see what’s going on. You don’t want anything that takes the viewer out of that experience.”

Bowler thinks 360 video is also the perfect entry point to VR for marketers and advertisers creating branded VR content, and Hobo’s clients agree. “When we’ve suggested 360 video on certain projects and clients want to try it out, what that does is it allows the technology to breathe a little while it’s underwritten at the same time. It’s a good way to get the technology off the ground and also to let clients get their feet wet in it.”

Any studio or client contemplating VR, adds Mackey, should first find what works for them and develop an efficient workflow. “This is not really a solidified industry yet,” he says. “Nothing is standard, and everyone’s waiting to see who comes out on top and who falls by the wayside. What’s the file standard going to be? Or the export standard? Will it be custom-made apps on (Google) YouTube or Facebook? We’ll see Facebook and Google battle it out in the near term. Facebook has recently acquired an audio company to help them produce audio in 360 for their video app and Google has the Daydream platform,” though neither platform’s codec is compatible with the other, he points out. “If you mix your audio to Facebook audio specs, you can actually have your audio come out in 360. For us, it’s been trial and error, where we’ve experimented with these different mixing techniques to see what fits and what works.”

Still, Bowler concedes, there is no true business yet in VR. “There are things happening and people getting things out there, but it’s still so early in the game. Sure, our clients are intrigued by it, but they are still a little mystified by what the return will be. I think this is just part of what happens when you deal with new technology. I still think it’s a very exciting area to be working in, and it wouldn’t surprise me if it doesn’t touch across many, many different subjects, from history to the arts to original content. Think about applications for geriatrics, with an aging population that gets less mobile but still wants to experience the Caribbean or our National Parks. The possibilities are endless.”

At some point, he admits, it may even become difficult to distinguish one’s real memory from one’s virtual memory. But is that really such a bad thing? “I’m already having this problem. I was watching an immersive video of Cuban music that was pretty beautifully done, and by the end of the five-minute spot, I had the visceral experience that I was actually there. It’s just a very powerful way of experiencing content. Let me put it another way: 3D TVs were at the rabbit hole, and immersive video will take you down the rabbit hole into the other world.”

Source Sound
LA-based Source Sound has provided supervision and sound design on a number of Jaunt-produced cinematic VR experiences, including a virtual fashion show, a horror short and a Godzilla short film written and directed by Oscar-winning VFX artist Ian Hunter, as well as final Atmos audio mastering for the early immersive release Sir Paul McCartney Live. The studio is ready for the spatial mixes to come. That wasn’t initially the case.


Tim Gedemer

“When Jaunt first got into this space three years ago, they went to Dolby to try to figure out the audio component,” says Source Sound owner/supervising sound designer/editor Tim Gedemer. “I got a call from Dolby, who told me about what Jaunt was doing, and the first thing I said was, ‘I have no idea what you are talking about!’ Whatever it is, I thought, there’s really no budget and I was dragging my feet. But I asked them to show me exactly what they were doing. I was getting curious at that point.”

After meeting the team at Jaunt, who strapped some VR goggles on him and showed him some footage, Gedemer was hooked. “It couldn’t have been more than 30 seconds in and I was just blown away. I took off the headset and said, ‘What the hell is this?! We have to do this right now.’ They could have reached out to a lot of people, but I was thrilled that we were able to help them by seizing the moment.”

Gedemer says Source Sound’s business has expanded in multiple directions in the past few years, and VR is still a significant part of the studio’s revenue. “People are often surprised when I tell them VR accounts for about 15-20 percent of our business today,” he says. “It could be a lot more, but we’d have to allocate the studios differently first.”

With a background in mixing and designing sound for film and gaming and theatrical trailers, Gedemer and his studio have a very focused definition of immersive experiences, and it all includes spatial audio. “Stereo 360 video with mono audio is not VR. For us, there’s cinematic, live-action VR, then straight-up game development that can easily migrate into a virtual reality world and, finally, VR for live broadcast.” Mass adoption of VR won’t happen, he believes, until enterprise and job training applications jump on the bandwagon with entertainment. “I think virtual reality may also be a stopover before we get to a world where augmented reality is commonplace. It makes more sense to me that we’ll just overlay all this content onto our regular days, instead of escaping from one isolated experience to the next.”

On set for the European launch of the Nokia Ozo VR camera in London, which featured a live musical performance captured in 360 VR.

For now, Source Sound’s VR work is completed in dedicated studios configured with gear for that purpose. “It doesn’t mean that we can’t migrate more into other studios, and we’re certainly evolving our systems to be dual-purpose,” he says. “About a year ago we were finally able to get a grip on the kinds of hardware and software we needed to really start coagulating this workflow. It was also clear from the beginning of our foray into VR that we needed to partner with manufacturers, like Dolby and Nokia. Both of those companies’ R&D divisions are on the front lines of VR in the cinematic and live broadcast space, with Dolby’s Atmos for VR and Nokia’s Ozo camera.”

What missing tools and technology have to be developed to achieve VR audio nirvana? “We delivered a wish list to Dolby, and I think we got about a quarter of the list,” he says. “But those guys have been awesome in helping us out. Still, it seems like just about every VR project that we do, we have to invent something to get us to the end. You definitely have to have an adventurous spirit if you want to play in this space.”

The work has already influenced his approach to more traditional audio projects, he says, and he now notices the lack of spatial sound everywhere. “Everything out there is a boring rectangle of sound. It’s on my phone, on my TV, in the movie theater. I didn’t notice it as much before, but it really pops out at me now. The actual creative work of designing and mixing immersive sound has realigned the way I perceive it.”

Main Image: One of Hobo’s audio rooms, where the VR magic happens.


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

 


VR Audio: What you need to know about Ambisonics

By Claudio Santos

The explosion of virtual reality as a new entertainment medium has been widely discussed in the filmmaking community over the past year, and there is still no consensus about what the future holds for the technology. But regardless of the predictions, more and more virtual reality content is being created, and producers are experimenting to find just how the technology fits into the current market.

Out of the vast possibilities of virtual reality, there is one segment that is particularly close to us filmmakers, and that is 360 video. These videos are becoming more and more popular on platforms such as YouTube and Facebook, and they present a distinct advantage: besides playing in VR headsets such as the GearVR or the DayDream, they can also be played on ordinary mobile phones, tablets and desktops. This considerably expands the potential audience compared to the relatively small group of people who own virtual reality headsets.

But simply making the image immerse the viewer in a 360 environment is not enough. Without accompanying spatial audio the illusion is very easily broken, and it becomes very difficult to cue the audience to look in the direction in which the main action of each moment is happening. While there are technically a few ways to design and implement spatial audio for a 360 video, I will share some thoughts and tips on working with Ambisonics, the spatial audio format chosen as the standard by platforms such as YouTube.

VR shoot in Bryce Canyon with Google for the Hidden Worlds of the National Parks project. Credit: Hunt Beaty

First, what is Ambisonics and why are we talking about it?
Ambisonics is a sound format that differs slightly from the usual stereo/surround paradigm in that its channels are not attached to speakers. Instead, an Ambisonics recording represents the whole spherical soundfield around a point. In practice, this means you can represent sound coming from all directions around a listening position and, using an appropriate decoder, play back the same recording on any set of speakers, with any number of channels, arranged around the listener horizontally or vertically. That is exactly why it is so interesting when we are working with spatial sound for VR.

The biggest challenge of VR audio is that you can’t predict which direction the viewer will be looking at any given time. Using Ambisonics, we can design the whole sound sphere, and the VR player decodes the sound to match the direction of the video in realtime, rendering it to binaural for accurate headphone playback. The best part is that the decoding process is relatively light on processing power, which makes this a suitable option for platforms with limited resources, such as smartphones.
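As a rough sketch of what that decoding step does, here is a minimal first-order decode to a ring of virtual speakers. The channel order and normalization assume the AmbiX (ACN/SN3D) convention used by YouTube; the function name and the simple "sampling" decode are illustrative, not any particular player's implementation:

```python
import math

def decode_to_ring(bformat_frame, speaker_azimuths_deg):
    """Decode one first-order AmbiX frame (W, Y, Z, X) to a horizontal
    ring of virtual speakers. A VR player does essentially this after
    rotating the soundfield to match the viewer's head direction, then
    folds the speaker feeds down to binaural for headphone playback."""
    w, y, z, x = bformat_frame
    feeds = []
    for az_deg in speaker_azimuths_deg:
        az = math.radians(az_deg)
        # Basic "sampling" decode: project the soundfield onto each
        # speaker direction (elevation ignored for a horizontal ring).
        feeds.append(0.5 * (w + x * math.cos(az) + y * math.sin(az)))
    return feeds
```

A source encoded hard front (W = 1, X = 1) comes out of the front virtual speaker at full level and out of the rear speaker at nothing, which is exactly the direction-dependence the player recomputes as the viewer turns.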

To work with Ambisonics, we have two options. We can record the sound on location with an Ambisonics microphone, which gives a very realistic representation of the sound in the space and is very well suited to ambience recordings, for example. Or we can encode other sound formats, such as mono and stereo, into Ambisonics and then manipulate the sound in the sphere from there, which gives us great flexibility in post production to use sound libraries and to create interesting effects by carefully adjusting the positioning and width of a sound in the sphere.

Example: Mono “voice of God” placement. The left shows the soundfield completely filled, which gives the “in-head” illusion.

There are plenty of resources online explaining the technical nature of Ambisonics, and I definitely recommend reading them so you can better understand how to work with it and how the spatiality is achieved. But there aren’t many discussions yet about the creative decisions and techniques used in sound for 360 videos with Ambisonics, so that’s what we will be focusing on from now on.

What to do with mono “in-head” sources such as VO?
That was one of the first tricky challenges we found with Ambisonics. It is not exactly intuitive to place a sound source equally in all directions of the soundfield. The easiest solution comes more naturally once you understand how the four channels of the Ambisonics audio track interact with each other.

The first channel of the Ambisonics audio, named W, is omnidirectional and contains the level information of the sound. The other three channels describe the position of the sound in the soundfield through phase relationships. Each of these channels represents one dimension, which enables the positioning of sounds in three dimensions.

Now, if we want the sound to play at the same level and centered from every direction, what we want is for the sound source to be at the center of the soundfield “sphere,” where the listener’s head is. In practice, that means that if you play the sound out of the first channel only, with no information in any of the other three channels, the sound will play “in-head.”
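In code, directional placement and the in-head trick look like this. This is a minimal sketch assuming the AmbiX (ACN/SN3D) channel order W, Y, Z, X that YouTube expects; the function names are illustrative:

```python
import math

def encode_ambix(sample, azimuth_deg=0.0, elevation_deg=0.0):
    """Encode a mono sample into first-order AmbiX (W, Y, Z, X).
    Azimuth is counter-clockwise from front; elevation is up from
    the horizon. W carries the level; Y, Z, X carry the direction."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample
    y = sample * math.sin(az) * math.cos(el)  # left(+)/right(-)
    z = sample * math.sin(el)                 # up(+)/down(-)
    x = sample * math.cos(az) * math.cos(el)  # front(+)/back(-)
    return [w, y, z, x]

def encode_in_head(sample):
    """'Voice of God' placement: signal in W only, nothing in the
    directional channels, so it decodes equally from every direction."""
    return [sample, 0.0, 0.0, 0.0]
```

For example, `encode_ambix(1.0, 90.0)` places the sound hard left (all of the signal in W and Y), while `encode_in_head` leaves the three directional channels silent, which is exactly the first-channel-only trick described above.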

What to do with stereo non-diegetic music?
This is the natural question that follows knowing what to do with mono sources, and the answer is a bit trickier. The mono first-channel trick doesn’t work perfectly with stereo sources, because for it to work you would have to first sum the stereo to mono, which might be undesirable depending on your track.

If you want to maintain the stereo width of the source, one good option we found is to mirror the sound in two directions. Some plug-in suites, such as the Ambix VST, offer the functionality to mirror hemispheres of the soundfield. The same result could be accomplished with careful positioning of a copy of the source, but a mirroring plug-in makes things easier.

Example of sound placed in the “left” of the soundfield in Ambisonics.

Generally, what you want is to place the center of the stereo source in the focus of the action your audience will be looking at and mirror the top-bottom and the front-back. This keeps the music playing at the same level regardless of the direction the viewer looks, while preserving the spatiality of the source. The downside is that the sound is not anchored to the viewer, so changes in the direction of the sources will be noticeable as the viewer turns around, notably inverting the sides when looking toward the back. I usually find this to be an interesting effect nonetheless, and it doesn’t distract the audience too much. If the directionality is too noticeable, you can always mix a bit of the mono sum of the music into both channels to reduce the perceived width of the track.
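The mirroring described above has a neat consequence in the B-format math: averaging a source with its front-back and top-bottom mirror images cancels the X and Z components entirely, leaving only W (the mono sum) and Y (the left-right difference). A minimal sketch, again assuming AmbiX (W, Y, Z, X) channel order, with an illustrative `width` control corresponding to the mono-sum trick mentioned above:

```python
def encode_stereo_headlocked(left, right, width=1.0):
    """Non-diegetic stereo via front-back and top-bottom mirroring:
    only W (mid) and Y (side) survive the mirroring, so the music
    plays at a constant level from any viewing direction while
    keeping its stereo width. width=0.0 collapses to mono in-head."""
    mid = 0.5 * (left + right)    # -> W (omnidirectional level)
    side = 0.5 * (left - right)   # -> Y (left/right axis)
    return [mid, width * side, 0.0, 0.0]
```

Because only the Y channel carries direction, turning your head still swaps the sides behind you, which is the side-inversion effect noted above.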

How to creatively use reverberation in Ambisonics?
There is a lot you can do with reverberation in Ambisonics; this is just one trick I find very useful when dealing with scenes in which you have one big obstacle, such as a wall, in one direction and no obstacles in the opposite direction.

In this situation, the sound reflects off the barrier and returns to the listener from one direction, while on the opposite side there are no significant reflections because of the open field. You can simulate this by placing a slightly delayed reverb coming only from the direction of the barrier. You can adjust the width of the reflection sound to match the perceived size of the barrier, and the delay based on the distance between the barrier and the viewer. The effect usually works better with drier reverbs that have defined early reflections but not a lot of late reflections.
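As a back-of-the-envelope sketch of the delay-and-direction part of this trick (AmbiX W, Y, Z, X channel order assumed; the function and its parameters are illustrative — in practice you would do this with a reverb plug-in and an Ambisonics panner):

```python
import math

def barrier_reflection(dry, barrier_azimuth_deg, distance_m,
                       sr=48000, speed_of_sound=343.0, gain=0.3):
    """Simulate a single reflection off a barrier: delay the dry signal
    by the round-trip travel time, attenuate it, and encode it into
    first-order AmbiX (W, Y, Z, X) from the barrier's direction only."""
    delay = int(round(2.0 * distance_m / speed_of_sound * sr))
    az = math.radians(barrier_azimuth_deg)
    out = [[0.0, 0.0, 0.0, 0.0] for _ in range(len(dry) + delay)]
    for n, s in enumerate(dry):
        r = s * gain
        out[n + delay][0] += r                   # W: reflection level
        out[n + delay][1] += r * math.sin(az)    # Y: left/right
        out[n + delay][3] += r * math.cos(az)    # X: front/back
    return out
```

A barrier 3.43 meters away at 48kHz works out to a 960-sample (20ms) round-trip delay; running the reflection through a dry early-reflections reverb before encoding gets closer to the effect described above.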

Once you experiment with this technique, you can use variations of it to simulate a variety of spaces and achieve even more realistic mixes that will fool anyone into believing the sounds you placed in post production were recorded on location.

Main Caption: VR shoot in Hawaii with Google for the Hidden Worlds of the National Parks project. Credit: Hunt Beaty.


Claudio Santos is a sound editor at Silver Sound/SilVR in New York.


Sony Pictures Post adds home theater dub stage

By Mel Lambert

Reacting to the increasing popularity of home theater systems that offer immersive sound playback, Sony Pictures Post Production has added a new mix stage to accommodate next-generation consumer audio formats.

Located in the landmark Thalberg Building on the Sony Pictures lot in Culver City, the new Home Theater Immersive Mix Stage features a flexible array of loudspeakers that can accommodate not only Dolby Atmos and Barco Auro-3D immersive consumer formats, but also other configurations as they become available, including DTS:X, as well as conventional 5.1- and 7.1-channel legacy formats.

The new room has already seen action on an Auro-3D consumer mix for director Paul Feig’s Ghostbusters and director Antoine Fuqua’s Magnificent Seven in both Atmos and Auro-3D. It is scheduled to handle home theater mixes for director Morten Tyldum’s new sci-fi drama Passengers, which will be overseen by Kevin O’Connell and Will Files, the re-recording mixers who worked on the theatrical release.

L-R: Nathan Oishi; Diana Gamboa, director of Sony Pictures Post Sound; Kevin O’Connell, re-recording mixer on ‘Passengers’; and Tom McCarthy.

“This new stage keeps us at the forefront in immersive sound, providing an ideal workflow and mastering environment for home theaters,” says Tom McCarthy, EVP of Sony Pictures Post Production Services. “We are empowering mixers to maximize the creative potential of these new sound formats, and deliver rich, enveloping soundtracks that consumers can enjoy in the home.”

Reportedly, Sony is one of the few major post facilities that currently can handle both Atmos and Auro-3D immersive formats. “We intend to remain ahead of the game,” McCarthy says.

The consumer mastering process involves repurposing original theatrical release soundtrack elements for a smaller domestic environment at reduced playback levels suitable for Blu-ray, 4K Ultra HD disc and digital delivery. The Home Atmos format involves a 7.4.1 configuration, with a horizontal array of seven loudspeakers — three up-front, two side channels and two rear surrounds — in addition to four overhead/height and a subwoofer/LFE channel. The consumer Auro-3D format, in essence, involves a pair of 5.1-channel loudspeaker arrays — left, center, right plus two rear surround channels — located one above the other, with all speakers approximately six feet from the listening position.

Formerly an executive screening room, the new 600-square-foot stage is designed to replicate the dimensions and acoustics of a typical home-theater environment. According to the facility’s director of engineering, Nathan Oishi, “The room features a 24-fader Avid S6 control surface console with Pan/Post modules. The four in-room Avid Pro Tools HDX 3 systems provide playback and record duties via Apple 12-Core Mac Pro CPUs with MADI interfaces and an 8TB Promise Pegasus hard disk RAID array, plus a wide array of plug-ins. Picture playback is from a Mac Mini and Blackmagic HD Extreme video card with a Brainstorm DCD8 Clock for digital sync.”

An Avid/DAD AX32 Matrix controller handles monitor assignments, which then route to a BSS BLU 806 programmable EQ that handles all of the standard B-chain duties for distribution to the room’s loudspeaker array. These comprise a total of 13 JBL LSR-708i two-way loudspeakers and two JBL 4642A dual-15 subwoofers powered by Crown DCI Series networked amplifiers. Atmos panning within Pro Tools is accommodated by the familiar Dolby Rendering and Mastering Unit (RMU).

During September’s “Sound for Film and Television Conference,” Dolby’s Gary Epstein demo’d Atmos. ©2016 Mel Lambert.

“A Delicate Audio custom truss system, coupled with Adaptive Technologies speaker mounts, enables the near-field monitor loudspeakers to be re-arranged and customized as necessary,” adds Oishi. “Flexibility is essential, since we designed the room to seamlessly and fully support both Dolby Atmos and Auro formats, while building in sufficient routing, monitoring and speaker flexibility to accommodate future immersive formats. Streaming and VR deliverables are upon us, and we will need to stay nimble enough to quickly adapt to new specifications.”

Regarding the choice of a mixing controller for the new room, McCarthy says he is committed to integrating more Avid S6 control surfaces into the facility’s workflow, as witnessed by their current use within several theatrical stages on the Sony lot. “Our talent is demanding it,” he states. “Mixing in the box lets our editors and mixers keep their options open until print mastering. It’s a more efficient process, both creatively and technically.”

The new Immersive Mix Stage will also be used as a “Flex Room” for Atmos pre-dubs when other stages on the lot are occupied. “We are also planning to complete a dedicated IMAX re-recording stage early next year,” reports McCarthy.

“As home theaters grow in sophistication, consumers are demanding immersive sound, ultra HD resolution and high-dynamic range,” says Rich Berger, SVP of digital strategy at Sony Pictures Home Entertainment. “This new stage allows our technicians to more closely replicate a home theater set-up.”

“The Sony mix stage adds to the growing footprint of Atmos-enabled post facilities and gives the Hollywood creative community the tools they need to deliver an immersive experience to consumers,” states Curt Behlmer, Dolby’s SVP of content solutions and industry relations.

Adds Auro Technologies CEO Wilfried Van Baelen, “Having major releases from Sony Pictures Home Entertainment incorporate Auro-3D helps bring this immersive experience to consumers, ensuring they are able to enjoy films as the creator intended.”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Industry vet Tim Claman to lead Avid Audio business

Tim Claman, VP, platform and solutions at Avid, is now responsible for all of the company’s audio solutions. This move is intended to further unite the development of Avid’s tools and workflow solutions for media creation, distribution and optimization.


Tim Claman

As part of the company’s platform strategy, Avid will further integrate its audio solutions — including Pro Tools, Sibelius and Avid Venue products — into MediaCentral, its open, integrated platform for media.

Claman comes from the artist side of the business, having worked as an editor, sound designer and mixer. He first joined Avid as a senior product manager for Pro Tools in 1998, designing many of the features that led to Pro Tools’ 2004 Academy Award. Over 14 years, he played a major role in shaping the company’s technology and product strategies as he rose through the ranks to become CTO. After serving as CTO at Snell Advanced Media (SAM) and Quantel from 2013 through 2015, Claman rejoined Avid in February 2016 as VP, platform and solutions.

“I’m looking forward to working closely with our customer community to shape our collective future in the audio post production, music creation and live sound industries,” says Claman. “Through tight integration with MediaCentral, we’re dedicated to innovating our audio solutions to meet our customers’ emerging needs.”