
Storage for Interactive VR

By Karen Moltenbrey

Every vendor in the visual effects and post production industries relies on data storage. However, studios working on new media or hybrid projects, which generate far more content in general, not only need a reliable solution, they need one that can handle terabytes upon terabytes of data.

Here, two companies in the VR space discuss their need for a storage solution that serves their business requirements.

Lap Van Luu

Magnopus
Located in downtown Los Angeles, Magnopus creates VR and AR experiences. While a fairly new company — it was founded in 2013 — its staff has an extensive history in the VFX and games industries, with Academy Award winners among its founders. So, there is no doubt that the group knows what it takes to create amazing content.

It also knows the necessity of a reliable storage solution, one that can handle the large amounts of data generated by an AR or VR project. At Magnopus, the crew uses a custom-built solution leveraging Supermicro architecture. As Magnopus CTO Lap Van Luu points out, they are using an SSG-6048R-E1CR60N 4U chassis that the studio populates with two storage tiers: a read-and-write cache layer on NVMe and a second tier on SAS. Both are in a RAID-10 configuration, with 1TB of NVMe and 500TB of SAS raw storage.

“This setup allows us to scale to a larger workforce and meet the demands of our artists,” says Luu. “We leverage faster NVMe Flash and larger SAS for the bulk of our storage requirements.”

Before Magnopus, Luu worked at companies with all kinds of storage systems over the past 20 years, including those from NetApp, BlueArc and Isilon, as well as custom builds of ZFS, FreeNAS, Microsoft Windows Storage Spaces and Hadoop configurations. However, since Magnopus opened, it has only switched to a bigger and faster version of its original setup, starting with a custom Supermicro system with 400GB of SSD and 250TB of SAS in the same configuration.

“We went with this configuration because as we were moving more into realtime production than traditional VFX, the need for larger renderfarms and storage IO demands dropped dramatically,” says Luu. “We also knew that we wanted to leverage smart caching due to the cost of Flash storage dropping to a reasonable price point. It was the ideal situation to be in. We were starting a new company with a less-demanding infrastructure with newer technology that was cheaper, faster and better overall.”

Nevertheless, choosing a specific solution was not a decision that was made lightly. “When you move away from your premier storage solution providers, there is always a concern for scalability and reliability. When working in realtime production, the concern to re-render elements wasn’t a factor of hours or days, but rather seconds and minutes. It was important for us to have redundant backups. But with the cost savings on storage, we could easily get mirrored servers and still save a significant amount of money.”

Luu knew the studio wanted to leverage Flash caching, so the big question was: How much Flash was necessary to meet the demands of the artists and the processing farm? The processing farm was mainly used to generate textures and environments that were imported into a realtime engine, such as Unity or Unreal Engine. To this end, Magnopus had to find out who offered a caching solution that was as hands-off as possible and invisible to all the users. “LSI, now Avago, had a solution with the RAID controller called CacheCade, which dealt with all the caching,” he says. “All you had to do was set up some preferences and the RAID controller would take care of the rest.”

However, CacheCade had a 512GB size limit on the caching layer, so the studio had to do some testing to see if it would ever exceed that, and in a rare situation it did, says Luu. “But it was never a worry because behind the Flash cache was a 60-drive SAS RAID-10 configuration.”

As Luu explains, when working with VFX, IOPS (IO operations per second) is always the biggest issue due to the heavy demand from certain types of applications. “VFX work and compositing can typically drive any storage solution to a grinding halt when you have a renderfarm taxing the production storage from your artists,” he explains. However, realtime development IO demands are significantly less since the assets are created in a DCC application but imported into a game engine, where processing occurs in realtime and locally. So, storing all those traditional VFX elements is not necessary, and the overall storage capacity dropped to one-tenth of what was required for VFX, Luu points out.

And since Magnopus has a Flash-based cache layer that is large enough to meet the company’s IO demands, it does not have to leverage localization to reduce the IO demand off the main production server; as a result, the user gets immediate server response. And, it means that all data within the pipeline resides on the company’s main production server — where the company starts and ends any project.

“Magnopus is a content-focused technology company,” Luu says. “All our assets and projects that we create are digital. Storage is extremely important because it is the lifeblood of everything we create. The storage server can be the difference between a user focusing on creative content creation, with the infrastructure invisible, and the frustration of constantly being blocked and delayed by hardware. Enabling everyone to work as efficiently as possible allows for the best results and products for our clients and customers.”

Light Sail VR
Light Sail VR is a Hollywood-based VR boutique that is a pioneer in cinematic virtual reality storytelling. Since its founding three years ago, the studio has been producing a range of interactive, 360- and 180-degree VR content, including original work and branded pieces for Google, ABC, GoPro and Paramount.

Matt Celia on set for Speak of the Devil.

Because Light Sail VR is a unique but small company, employees often have to wear a number of hats. For instance, co-founder Robert Watts is executive producer and handles many of the logistical issues. His partner, Matthew Celia, is creative director and handles more of the technical aspects of the business. So when it comes to managing the company’s storage needs, Celia is the guy. And, having a reliable system that keeps things running smoothly is paramount, as he is also juggling shoots and post-production work. No one can afford delays in production and post, but for a small company, they can be especially disastrous.

Light Sail VR does not simply dabble in VR; it is what the company does exclusively. Most of the projects thus far have been live action, though the group started its first game engine work this year. When the studio produced a piece with GoPro in the first year of its founding, it was on a sneakernet of G-Drives from G-Technology, “and I was going crazy!” says Celia. “VR is fantastic, but it’s very data-intensive. You can max out a computer’s processing very easily, and the render times are extraordinarily long. There’s a lot of shots to get through because every shot becomes a visual effects shot with either stitching, rotoscoping or compositing needed.”

He continues: “I told Robert [Watts] we needed to get a shared storage server so if I max out one computer while I’m working, I can just go to another computer and keep working, rather than wait eight to 10 hours for a render to finish.”

The Speak of the Devil shoot.

Celia had been dialed into the post world for some time. “Before diving into the world of VR, I was a Final Cut guy, and the LumaForge guys and [founder] Sam Mestman were people I always respected in the industry,” he says. So, Celia reached out to them with a cold call and explained that Light Sail VR was doing virtual reality, an uncharted, pioneering new thing, and was going to need a lot of storage — and needed it fast. “I told them, ‘We want to be hooked up to many computers, both Macs and PCs, and don’t want to deal with file structures and those types of things.’”

Celia points out that they are an independent and small boutique, so finding something that was cost effective and reliable was important. LumaForge responded with a solution called Jellyfish Mobile, geared for small teams and on-set work or portable office environments. “I think we got the 30TB NAS server that has four 10Gb Ethernet connections.” That enabled Light Sail VR to hook up the system to all its computers, “and it worked,” he adds. “I could work on one shot, hit render, and go to another computer and continue working on the next shot and hit render, then kind of ping-pong back and forth. It made our lives a lot easier.”

Light Sail VR has since graduated to the larger-capacity Jellyfish Rack system, which is a 160TB solution (expandable up to 1 petabyte).

The storage is located in Light Sail VR’s main office and is hooked up to its computers. The filmmakers shoot in the field and, if on location, download the data to drives, which they transport back to the office and load onto the server. Then, they transcode all the media to DNX. (VR is captured in H.264 format, which is not user friendly for editing due to the high-res frame size.)
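As a rough sketch of that transcode step — assuming ffmpeg is available and that DNxHR HQ in a QuickTime container is an acceptable mezzanine format (the exact flavor Light Sail VR uses isn’t specified) — batch-converting the H.264 camera files might look like this:

```python
# Hypothetical batch transcode of H.264 camera originals to DNxHR HQ media.
# Assumes ffmpeg is on the PATH; folder names are illustrative only.
import subprocess
from pathlib import Path

def transcode_to_dnxhr(src: Path, dst_dir: Path) -> Path:
    """Transcode one H.264 clip to a DNxHR HQ .mov with uncompressed audio."""
    dst = dst_dir / (src.stem + "_dnxhr.mov")
    cmd = [
        "ffmpeg", "-i", str(src),
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR HQ profile
        "-pix_fmt", "yuv422p",                      # 8-bit 4:2:2, valid for HQ
        "-c:a", "pcm_s16le",                        # keep audio uncompressed
        str(dst),
    ]
    subprocess.run(cmd, check=True)
    return dst

if __name__ == "__main__":
    out_dir = Path("dnx_media")
    out_dir.mkdir(exist_ok=True)
    for clip in sorted(Path("card_offload").glob("*.mp4")):
        print("Transcoding", clip.name)
        transcode_to_dnxhr(clip, out_dir)
```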

Currently, Celia is in New York, having just wrapped the 20th episode of original content for Refinery29, a media company focused on young women that produces editorial and video programming, live events and social, shareable content delivered across major social media platforms, and covers a variety of categories from style to politics and more. Eight of the episodes are currently in various stages of the post pipeline, due to come out later this year. “And having a solid storage server has been a godsend,” Celia says.

The studio backs up locally onto Seagate drives for archival purposes and sometimes employs G-Technology drives for on-set work. “We just got this new G-Tech SSD that’s 2TB. It’s been great for use on set because having an SSD and downloading all the cards while on set makes your wrap process so much faster,” Celia points out.

Lately, Light Sail VR is shooting a lot of VR-180, requiring two 64GB cards per camera — one for the right eye and one for the left eye. But when they are shooting with the Yi Halo next-gen 3D 360-degree Google Jump camera, they use 17 64GB cards. “That’s a lot of data,” says Celia. “You can have a really bad day if you have really bad drives.”

The studio’s previous solution operated via Thunderbolt 1 in a RAID-5. It only worked on a single machine and was not cross-platform. As the studio made the transition over to PC from Mac to take advantage of better hardware capable of supporting VR playback, that solution was just not practical. They also needed a solution that was plug and play, so they could just pop it into a 10Gb Ethernet connection — they did not want fiber, “which can get expensive.”

The Light Sail team.

“I just wanted something very simple that was cross-platform and could handle what we were doing, which is, by the way, 6K or 8K stereo at 60 frames per second – these workloads are larger than most feature films,” Celia says. “So, we needed a lot of storage. We needed it fast. We needed it to be shared.”

However, while Celia searched for a system, one thing became clear to him: The solutions were technical. “It seemed like I would have to be my own IT department.” And, that was just one more hat he did not want to have to wear. “At LumaForge, they are independent filmmakers. They understood what I was trying to do immediately, and were willing to go on that journey with us.”

Says Celia, “I always call hard drives or storage the underwear of the post production world because it’s the thing you hate spending a lot of money on, but you really need it to perform and work.”

Main Image: Magnopus


Karen Moltenbrey is a long-time VFX and post writer.

Satore Tech tackles post for Philharmonia Orchestra’s latest VR film

The Philharmonia Orchestra in London debuted its latest VR experience at Royal Festival Hall alongside the opening two concerts of the Philharmonia’s new season. Satore Tech completed VR stitching for the Mahler 3: Live From London film. This is the first project completed by Satore Tech since it was launched in June of this year.

The VR experience placed users at the heart of the Orchestra during the final 10 minutes of Mahler’s Third Symphony, which was filmed live in October 2017. The stitching project was completed by creative technologist/SFX/VR expert Sergio Ochoa, who leads Satore Tech. The company used SGO Mistika technology to post the project, which Ochoa helped to develop during his time at that company — he was creative technologist and CEO of SGO’s French division.

Luke Ritchie, head of innovation and partnerships at the Philharmonia Orchestra, says, “We’ve been working with VR since 2015; it’s a fantastic technology to connect new audiences with the Orchestra in an entirely new way. VR allows you to sit at the heart of the Orchestra, and our VR experiences can transform audiences’ preconceptions of orchestral performance — whether they’re new to classical music or are a die-hard fan.”

It was a technically demanding project for Satore Tech to stitch together, as the concert was filmed live, in 360 degrees, with no retakes, using Google’s latest Jump Odyssey VR camera. This meant that Ochoa was working with four to five different depth layers at any one time. The amount of fast movement also meant the resolution of the footage needed to be up-scaled from 4K to 8K to ensure it was suitable for the VR platform.

“The guiding principle for Satore Tech is we aspire to constantly push the boundaries, both in terms of what we produce and the technologies we develop to achieve that vision,” explains Ochoa. “It was challenging given the issues that arise with any live recording, but the ambition and complexity are what make it such a suitable initial project for us.”

Satore Tech’s next project is currently in development in Mexico, using experimental volumetric capture techniques with some of the world’s most famous dancers. It is slated for release early next year.

VR at NAB 2018: A Parisian’s perspective

By Alexandre Regeffe

Even though my cab driver from the airport to my hotel offered these words of wisdom — “What happens in Vegas, stays in Vegas” — I’ve decided not to listen to him and instead share with you the things that impressed me in the VR world at NAB 2018.

Back in September of 2017, I shared with you my thoughts on the VR offerings at the IBC show in Amsterdam. In case you don’t remember my story, I’m a French guy who jumped into the VR stuff three years ago and started a cinematic VR production company called Neotopy with a friend. Three years is like a century in VR. Indeed, this medium is constantly evolving, both technically and financially.

So what has become of VR today? Lots of different things. VR is a big bag where people throw AR, MR, 360, LBE, 180 and 3D. And from all of that, XR (Extended Reality) was born, which means everything.

Insta360 Titan

But if this blurred concept leads to some misunderstanding, is it really good for consumers? Even we pros find it difficult to explain what exactly VR is at the moment.

While at NAB, I saw a presentation from Nick Bicanic during which he used the term “frameless media.” And, thank you, Nick, because I think that is exactly what’s in this big bag called VR… or XR. Today, we consume a lot of content through a frame, which is our TV, computer, smartphone or cinema screen. VR allows us to go beyond the frame, and this is a very important shift for cinematographers and content creators.

But enough concepts and ideas, let us start this journey on the NAB show floor! My first stop was the VR pavilion, also called the “immersive storytelling pavilion” this year.

My next stop was to see SGO Mistika. For over a year, the SGO team has been delivering incredible stitching software with its Mistika VR. In my opinion, there is a “before” and an “after” this tool. Thanks to its optical flow capacities, you can achieve a seamless stitch 99% of the time, even in very difficult shooting situations. The latest version of the software provides additional features like stabilization, keyframe capabilities, more camera presets and easy integration with Kandao and Insta360 camera profiles. VR pros used the Mistika booth as a sort of base camp, meeting the development team directly.

A few steps from Mistika was Insta360, with a large, yellow booth. This Chinese company is a success story with the consumer product Insta360 One, a small 360 camera for the masses. But I was more interested in the Insta360 Pro, their 8K stereoscopic 3D360 flagship camera used by many content creators.

At the show, Insta360’s big announcement was Titan, a premium version of the Insta360 Pro offering better lenses and sensors. It will be available later this year. Oh, and there was the lightfield camera prototype, the company’s first step into the volumetric capture world.

Another interesting camera manufacturer at the show was HumanEyes Technologies, presenting its Vuze+. With this affordable 3D360 camera you can dive into stereoscopic 360 content and learn the basics of this technology. Side note: The Vuze+ was chosen by National Geographic to shoot some stunning sequences in the International Space Station.

Kandao Obsidian

My favorite VR camera company, Kandao, was at NAB showing new features for its Obsidian R and S cameras. One of the best is its 6DoF capability. With this technology, you can generate a depth map from the camera directly in Kandao Studio, the stitching software, which comes free when you buy an Obsidian. With the combination of a 360 stitched image and a depth map, you can “walk” into your movie. It’s an awesome technique for better immersion. For me, this was by far the best innovation in VR technology presented on the show floor.

The live capabilities of Obsidian cameras have been improved with dedicated Kandao Live software, which allows you to live stream 4K stereoscopic 360 with optical flow stitching on the fly! And, of course, do not forget their new Qoocam camera. With its three-lens-equipped little stick, you can either do VR 180 stereoscopic or 360 monoscopic, while using depth map technology to refocus or replace the background in post — all with a simple click. Thanks to all these innovations, Kandao is now a top player in the cinematic VR industry.

One Kandao competitor is ZCam. They were there with a couple of new products. The first was the ZCam V1, a 3D360 camera with a tiny form factor. It’s very interesting for shooting scenes where things are very close to the camera: it keeps good stereoscopy even on nearby objects, which is a major issue with most VR cameras and rigs. The second was the small E2 — while it’s not really a VR camera, it can be used as an underwater rig, for example.

ZCam K1 Pro

The ZCam product range is really impressive and squarely targets professionals, from the ZCam S1 to the ZCam V1 Pro. Important note: take a look at their K1 Pro, a VR 180 camera, if you want to produce high-end content for the Google VR180 ecosystem.

Another VR camera at NAB was Samsung’s Round, offering stereoscopic capabilities. This relatively compact device comes with a proprietary software suite for stitching and viewing 360 shots. Thanks to its IP65 rating, you can use this camera outdoors in difficult weather conditions, like rain, dust or snow. It was great to see live streaming of 4K 3D360 operating on the show floor, using several Round cameras combined with powerful NextComputing hardware.

VR Post
Adobe Creative Cloud 2018 remains the must-have tool to achieve VR post production without losing your mind. Numerous 360-specific functionalities have been added during the last year, after Adobe bought the Mettle Skybox suite. The most impressive feature is that you can now stay in your 360 environment for editing. You just put your Oculus Rift headset on, manipulate your Premiere timeline with the Touch controllers and proceed to edit your shots. Think of it as a Minority Report-style editing interface! I am sure we can expect more amazing VR tools from Adobe this year.

Google’s Lightfield technology

Mettle was at the Dell booth showing their new Adobe CC 360 plugin, called Flux. After an impressive Mantra release last year, Flux is now available for VR artists, allowing them to create 3D volumetric fractals and build entire futuristic worlds. It was awesome to see the results in a headset!

Distributing VR
So once you have produced your cinematic VR content, how can you distribute it? One option is to use the Liquid Cinema platform. They were at NAB with a major update and some new features, including seamless transitions between a “flat” video and a 360 video. As a content creator you can also manage your 360 movies in a very smart CMS linked to your app and instantly add language versions, thumbnails, geoblocking, etc. Another exciting thing is built-in 6DoF capability right in the editor with a compatible headset — allowing you to walk through your titles, graphics and more!

I can’t leave without mentioning Voysys for live-streaming VR; Kodak PixPro and its new cameras; Google’s next move into lightfield technology; Bonsai’s launch of a new version of the Excalibur rig; and many other great manufacturers, software editors and partners.

See you next time, Sin City.

Behind the Title: Light Sail VR’s Matthew Celia

NAME: Matthew Celia

COMPANY: LA’s Light Sail VR (@lightsailvr)

CAN YOU DESCRIBE YOUR COMPANY?
Light Sail VR is a virtual reality production company specializing in telling immersive narrative stories. We’ve built a strong branded content business over the last two years working with clients such as Google and GoPro, and studios like Paramount and ABC.

Whether it’s 360 video, cinematic VR or interactive media, we’ve built an end-to-end pipeline to go from script to final delivery. We’re now excited to be moving into creating original IP and more interactive content that fuses cinematic live-action film footage with game engine mechanics.

WHAT’S YOUR JOB TITLE?
Creative Director and Managing Partner

WHAT DOES THAT ENTAIL?
A lot! We’re a small boutique shop so we all wear many hats. First and foremost, I am a director and work hard to deliver a compelling story and emotional connection to the audience for each one of our pieces. Story first is our motto, and I try to approach every technical problem with a creative solution. Figuring out execution is a large part of that.

In addition to the production side, I also carry a lot of the technical responsibilities in post production, such as keeping our post pipeline humming and inventing new workflows. Most recently, I have been dabbling in programming interactive cinema using the Unity game engine.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I am in charge of washing the lettuce when we do our famous “Light Sail VR Sandwich Club” during lunch. Yes, you get fed for free if you work with us, and I make an amazing Italian sandwich.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Hard to say. I really like what I do. I like being on set and working with actors because VR is such a great medium for them to play in, and it’s exciting to collaborate with such creative and talented people.

National Park Service

WHAT’S YOUR LEAST FAVORITE?
Render times and computer crashes. My tech life is in constant beta. Price we pay for being on the bleeding edge, I guess!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like the early morning because it is quiet, my brain is fresh, and I haven’t yet had 20 people asking something of me.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably the same, but at a large company. If I left the film business I’d probably teach. I love working with kids.

WHY DID YOU CHOOSE THIS PROFESSION?
I feel like I’ve wanted to be a filmmaker since I could walk. My parents like to drag out the home movies of me asking to look in my dad’s VHS video camera when I was 4. I spent most of high school in the theater and most people assumed I would be an actor. But senior year I fell in love with film when I shot and cut my first 16mm reversal stock on an old reel-to-reel editing machine. The process was incredibly fun and rewarding and I was hooked. I only recently discovered VR, but in many ways it feels like the right path for me because I think cinematic VR is the perfect intersection of filmmaking and theater.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
On the branded side, we just finished up two tourism videos: one for the National Park Service, a 360 tour of the Channel Islands with Jordan Fisher, and the other a 360 piece for Princess Cruises. VR is really great for showing people the world. The last few months of my life have been consumed by Light Sail VR’s first original project, Speak of the Devil.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Speak of the Devil is at the top of that list. It’s the first live-action interactive project I’ve worked on and it’s massive. Crafted using the GoPro Odyssey camera in partnership with Google Jump, it features over 50 unique locations and 13 different endings, and is currently taking up about 80TB of storage (and counting). It is the largest project I’ve worked on to date, and we’ve done it all on a shoestring budget thanks to the gracious contributions of talented creative folks who believed in our vision.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My instant-read grill meat thermometer, my iPhone and my Philips Hue bulbs. Seriously, if you have a baby, it’s a life saver being able to whisper, “Hey, Siri, turn off the lights.”

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m really active on several Facebook groups related to 360 video production. You can get a lot of advice and connect directly with vendors and software engineers. It’s a great community.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I tend to pop on some music when I’m doing repetitive mindless tasks, but when I have to be creative or solve a tough tech problem, the music is off so that I can focus. My favorite music to work to tends to be Dave Matthews Band live albums. They get into 20-minute long jams and it’s great.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
De-stressing is really hard when you own your own company. I like to go walking, but if that doesn’t work, I’ll try diving into some cooking for my family, which forces me to focus on something not work related. I tend to feel better after eating a really good meal.

Rogue takes us on VR/360 tour of Supermodel Closets

Rogue is a NYC-based creative boutique that specializes in high-end production and post for film, advertising and digital. Since its founding two years ago, executive creative director Alex MacLean and his team have produced a large body of work, providing color grading, finishing and visual effects for clients such as HBO, Vogue, Google, Vice, Fader and more. For the past three years, MacLean has also been at the forefront of VR/360 content for narratives and advertising.

MacLean recently wrapped up post production on four five-minute episodes of 360-degree tours of Supermodel Closets. The series is a project of Conde Nast Entertainment and Vogue for Vogue’s 125th anniversary. If you’re into fashion, this VR tour gives you a glimpse at what supermodels wear in their daily lives. Viewers can look up, down and all around to feel immersed in the closet of each model as she shows her favorite fashions and shares the stories behind her most prized pieces.

Tours include the closets of Lily Aldridge, Cindy Crawford, Kendall Jenner and Amber Valletta.

MacLean worked with director Julina Tatlock, who is a co-founder and CEO of 30 Ninjas, a digital entertainment company that develops, writes and produces VR, multi-platform and interactive content. Rogue and 30 Ninjas worked together to determine the best workflow for the series. “I always think it’s best practice to collaborate with the directors, DPs and/or production companies in advance of a VR shoot to sort out any technical issues and pre-plan the most efficient production process from shoot to edit to stitching, through all the steps of post production,” reports MacLean. “Foresight is everything; it saves a lot of time, money and frustration for everyone, especially when working in VR, as well as 3D.”

According to MacLean, they worked with a new camera format, the YI Halo camera, which is designed for professional VR data acquisition. “I often turn to the Assimilate team to discuss the format issues because they always support the latest camera formats in their Scratch VR tools. This worked well again because I needed to define an efficient VR and 3D workflow that would accommodate the conforming, color grading, creating of visual effects and the finishing of a massive amount of data at 6.7K x 6.7K resolution.”

The Post
“The post production process began by downloading 30 Ninjas’ editorial, stitched footage from the cloud to ingest into our MacBook Pro workstations to do the conform at 6K x 6K,” explains MacLean. “Organized data management is a critical step in our workflow, and Scratch VR is a champ at that. We were simultaneously doing the post for more than one episode, as well as other projects within the studio, so data efficiency is key.”

“We then moved the conformed 6.7K x 6.7K raw footage to our HP Z840 workstations to do the color grading, visual effects, compositing and finishing. You really need powerful workstations when working at this resolution and with this much data,” reports MacLean. “Spherical VR/360 imagery requires focused concentration, and then we’re basically doing everything twice when working in 3D. For these episodes, and for all VR/360 projects, we create a lat/long that breaks out the left eye and right eye into two spherical images. We then replicate the work from one eye to the next, and color correct any variances. The result is seamless color grading.

“We’re essentially using the headset as a creative tool with Scratch VR, because we can work in realtime in an immersive environment and see the exact results of work in each step of the post process,” he continues. “This is especially useful when doing any additional compositing, such as clean-up for artifacts that may have been missed or adding or subtracting data. Working in realtime eases the stress and time of doing a new composite of 360 data for the left eye and right eye 3D.”

Playback of content in the studio is very important to MacLean and team, and he calls the choice of multiple headsets another piece of the VR/360 puzzle. “The VR/3D content can look different in each headset so we need to determine a mid-point aesthetic look that displays well in each headset. We have our own playback black box that we use to preview the color grading and visual effects, before committing to rendering. And then we do a final QC review of the content, and for these episodes we did so in Google Daydream (untethered), HTC Vive (tethered) and the Oculus Rift (tethered).”

MacLean sees rendering as one of their biggest challenges. “It’s really imperative to be diligent throughout all the internal and client reviews prior to rendering. It requires being very organized in your workflow from production through finishing, and a solid QC check. Content at 6K x 6K, VR/360 and 3D means extremely large files and numerous hours of rendering, so we want to restrict re-rendering as much as possible.”

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, pairing feature film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in coming up with a language native to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality and to see its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.

Tackling VR storytelling challenges with spatial audio

By Matthew Bobb

From virtual reality experiences for brands to top film franchises, VR is making a big splash in entertainment and evolving the way creators tell stories. But, as with any medium and its production, bringing a narrative to life is no easy feat, especially when it’s immersive. VR comes with its own set of challenges unique to the platform’s capacity to completely transport viewers into another world and replicate reality.

Making high-quality immersive experiences, especially for a film franchise, is extremely challenging. Creators must place the viewer into a storyline crafted by the studios and properly guide them through the experience in a way that allows them to fully grasp the narrative. One emerging strategy is to emphasize audio — specifically, 360 spatial audio. VR offers a sense of presence no other medium today can offer. Spatial audio offers an auditory presence that augments a VR experience, amplifying its emotional effects.

My background as audio director for VR experiences includes top film franchises such as Warner Bros. and New Line Cinema’s IT: Float — A Cinematic VR Experience, The Conjuring 2 — Experience Enfield VR 360, Annabelle: Creation VR — Bee’s Room, and the upcoming Greatest Showman VR experience for 20th Century Fox. In the emerging world of VR, I have seen production teams encounter numerous challenges that call for creative solutions. For some of the most critical storytelling moments, it’s crucial for creators to understand the power of spatial audio and its potential to solve some of the most prevalent challenges that arise in VR production.

Most content creators — even some of those involved in VR filmmaking — don’t fully know what 360 spatial audio is or how its implementation within VR can elevate an experience. With any new medium, there are early adopters who are passionate about the process. As the next wave of VR filmmakers emerge, they will need to be informed about the benefits of spatial audio.

Guiding Viewers
Spatial audio is an incredible tool that helps make a VR experience feel believable. It can present sound from several locations, which allows viewers to identify their position within a virtual space in relation to the surrounding environment. With the ability to provide location-based sound from any direction and distance, spatial audio can then be used to produce directional auditory cues that grab the viewer’s attention and prompt them to look in a certain direction.
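To make the idea of location-based sound concrete, here is a minimal sketch — not the pipeline used on these projects — of encoding a mono source into first-order ambisonics, the B-format representation many spatial audio tools use to carry direction. The channel order and normalization shown (AmbiX: ACN/SN3D) are assumptions for illustration:

```python
# Minimal first-order ambisonics encoder (AmbiX: ACN channel order, SN3D
# normalization) for a mono source at a given direction. Illustrative only.
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Return a (4, N) B-format signal with channels W, Y, Z, X."""
    az = np.radians(azimuth_deg)    # 0 = straight ahead, positive = to the left
    el = np.radians(elevation_deg)  # 0 = horizon, positive = up
    gains = np.array([
        1.0,                       # W: omnidirectional component
        np.sin(az) * np.cos(el),   # Y: left/right
        np.sin(el),                # Z: up/down
        np.cos(az) * np.cos(el),   # X: front/back
    ])
    return gains[:, None] * mono[None, :]

# Example: place a 1kHz tone 90 degrees to the listener's left.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
bformat = encode_foa(tone, azimuth_deg=90.0, elevation_deg=0.0)
print(bformat.shape)  # (4, 48000)
```

A playback engine rotates this B-format signal against the viewer’s head orientation before decoding to the headphones, which is how a cue stays anchored to its place in the scene as the viewer turns.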

VR is still unfamiliar territory for a lot of people, and the viewing process isn’t as straightforward as a 2D film or game, so dropping viewers into an experience can leave them feeling lost and overwhelmed. Inexperienced viewers are also more apprehensive and rarely move around or turn their heads while in a headset. Spatial audio cues prompting them to move or look in a specific direction are critical, steering them to instinctively react and move naturally. On Annabelle: Creation VR — Bee’s Room, viewers go into the experience knowing it’s from the horror genre and may be hesitant to look around. We strategically used audio cues, such as footsteps, slamming doors and a record player that mysteriously turns on and off, to encourage viewers to turn their head toward the sound and the chilling visuals that await.

Lacking Footage
Spatial audio can also be a solution for challenging scene transitions, or when there is a dearth of visuals to work with in a sequence. Well-crafted aural cues can paint a picture in a viewer’s mind without bombarding the experience with visuals that are often unnecessary.

A big challenge when creating VR experiences for beloved film franchises is the need for the VR production team to work in tandem with the film’s production team, making recording time extremely limited. When working on IT: Float, we were faced with a tight time constraint for shooting Pennywise the Clown. Consequently, there was not an abundance of footage of him to place in the promotional VR experience. Beyond the lack of footage, the filmmakers also didn’t want to give away the notorious clown’s much-anticipated appearance before the film’s theatrical release. The solution to that production challenge was spatial audio. Pennywise’s voice was strategically used to lead the experience and guide viewers throughout the sewer tunnels, heightening the suspense while also providing the illusion that he was surrounding the viewer.

Avoiding Visual Overkill
Similar to film and video games, sound is half of the experience in VR. With the unique perspective the medium offers, creators no longer have to fully rely on a visually-heavy narrative, which can overwhelm the viewer. Instead, audio can take on a bigger role in the production process and make the project a well-rounded sensory experience. In VR, it’s important for creators to leverage sensory stimulation beyond visuals to guide viewers through a story and authentically replicate reality.

As VR storytellers, we are reimagining ways to immerse viewers in new worlds. It is crucial for us to leverage the power of audio to smooth out bumps in the road and deliver a vivid sense of physical presence unique to this medium.


Matthew Bobb is the CEO of the full-service audio company Spacewalk Sound. He is a spatial audio expert whose work can be seen in top VR experiences for major film franchises.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content, in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, which is called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience and arguably better visuals, since they aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission, and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format fairly unique to that camera that will need to be processed before it can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in files with a 2:1 aspect ratio. These are encoded into JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb/s and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, there are a variety of other functions available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
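As a rough illustration of that side-by-side layout (a sketch only; the file name is hypothetical, and a real stitcher also needs the lens calibration, not just a crop), splitting a Gear 360 frame into its two fisheye halves is a simple slice down the middle:

```python
# Split a Gear 360 dual-fisheye frame (2:1, two image circles side by side)
# into its front and rear halves. Illustrative only; real stitching also
# requires lens calibration data, not just a crop.
from PIL import Image  # pip install Pillow

frame = Image.open("gear360_still.jpg")   # hypothetical 7776x3888 still
w, h = frame.size
assert w == 2 * h, "expected a 2:1 dual-fisheye frame"

front = frame.crop((0, 0, w // 2, h))     # left image circle
rear = frame.crop((w // 2, 0, w, h))      # right image circle
front.save("gear360_front_fisheye.png")
rear.save("gear360_rear_fisheye.png")
```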

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with that are pictured below. The rigging is much less intrusive in the resulting images. Obviously, besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
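As a minimal sketch of what the equirectangular mapping actually does (the frame size and axis conventions here are illustrative, not taken from any particular tool), a viewing direction is converted to longitude and latitude and then scaled into the flat 2:1 frame:

```python
# Map a 3D viewing direction onto pixel coordinates in an equirectangular
# frame. Illustrative sketch; a 3840x1920 (2:1) monoscopic frame is assumed.
import math

WIDTH, HEIGHT = 3840, 1920  # equirectangular frames use a 2:1 aspect ratio

def direction_to_pixel(x, y, z):
    """x = right, y = up, z = forward; returns (column, row) in the flat frame."""
    lon = math.atan2(x, z)                            # longitude, -pi..pi
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))   # latitude, -pi/2..pi/2
    u = (lon / (2 * math.pi) + 0.5) * WIDTH           # straight ahead -> center column
    v = (0.5 - lat / math.pi) * HEIGHT                # row 0 = straight up
    return u, v

print(direction_to_pixel(0, 0, 1))   # straight ahead -> center of the frame
print(direction_to_pixel(0, 1, 0))   # straight up    -> top edge of the frame
```

Every pixel of the flat frame corresponds to one such direction, which is why the poles look stretched when the image is viewed outside a headset.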

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio, to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.
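The “few changes to the metadata” typically means tagging the encoded file as spherical so players and platforms treat it as 360 video, for example with Google’s open-source Spatial Media Metadata Injector. The sketch below assumes that tool is installed as the spatialmedia Python package; the flag names follow the project’s README and may differ between versions, so check the documentation of your copy:

```python
# Tag an encoded H.264/H.265 file as spherical (360) video using Google's
# open-source spatial-media injector (github.com/google/spatial-media).
# Assumption: the "spatialmedia" package is importable as a module and
# accepts these flags; verify against your installed version.
import subprocess

def inject_spherical(src, dst, stereo_layout=None):
    cmd = ["python", "-m", "spatialmedia", "-i"]   # -i = inject spherical metadata
    if stereo_layout:                              # e.g. "top-bottom" for over/under 3D
        cmd.append("--stereo=" + stereo_layout)
    cmd += [src, dst]
    subprocess.run(cmd, check=True)

inject_spherical("final_edit_h264.mp4", "final_edit_h264_360.mp4")
```

Without this tag, YouTube, Facebook and most players will treat the file as an ordinary flat video.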

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Assimilate and Z Cam offer second integrated VR workflow bundle

Z Cam and Assimilate are offering their second VR integrated workflow bundle, which features the Z Cam S1 Pro VR camera and the Assimilate Scratch VR Z post tools. The new Z Cam S1 Pro offers a higher level of image quality, including better handling of low light and dynamic range with detailed, well-saturated, noise-free video. In addition to the new camera, this streamlined pro workflow combines Z Cam’s WonderStitch optical-flow stitch feature and the end-to-end Scratch VR Z tools.

Z Cam and Assimilate have designed their combined technologies to ensure as simple a workflow as possible, including making it easy to switch back and forth between the S1 Pro functions and the Scratch VR Z tools. Users can also employ Scratch VR Z to do live camera preview prior to shooting with the S1 Pro. Once the shoot begins with the S1 Pro, Scratch VR Z is then used for dailies and data management, including metadata. You don’t have to remove the SD cards and copy files; the PC connects directly to the camera via a high-speed Ethernet port. Stitching of the imagery is then done in Z Cam’s WonderStitch — now integrated into Scratch VR Z — as well as traditional editing, color grading, compositing, support for multichannel audio from the S1 or external ambisonic sound, finishing and publishing (to all final online or standalone 360 platforms).

Z Cam S1 Pro/Scratch VR Z bundle highlights include:
• Low-light sensitivity and dynamic range – 4/3-inch CMOS image sensor
• Premium 220 degree MFT fisheye lens, f/2.8~11
• Coordinated AE (automatic exposure) and AWB (automatic white balance)
• Full integration with built-in Z Cam Sync
• 6K 30fps resolution (post stitching) output
• Gig-E port (video stream & setting control)
• WonderStitch optical-flow-based stitching
• Live Streaming to Facebook, YouTube or a private server, including text overlays and green/composite layers for a virtual set
• A Scratch VR Z license — a streamlined, end-to-end, integrated VR post workflow

“We’ve already developed a few VR projects with the S1 Pro VR camera and the entire Neotopy team is awed by its image quality and performance,” says Alex Regeffe, VR post production manager at Neotopy Studio in Paris. “Together with the Scratch VR Z tools, we see this integrated workflow as a game changer in creating VR experiences, because our focus is now all on the creativity and storytelling rather than configuring multiple, costly tools and workflows.”

The Z Cam S1 Pro/Scratch VR Z bundle is available within 30 days of ordering. Priced at $11,999 (US), the bundle includes the following:
– Z Cam S1 Pro camera main unit, Z Cam S1 Pro battery unit (w/o battery cells), AC/DC power adapter unit and power connection cables (US, UK, EU).
– A Z Cam WonderStitch license, which is an optical flow-based stitching feature that performs offline stitching of files from Z Cam S1 Pro. Z Cam WonderStitch requires a valid software license associated with a designated Z Cam S1 Pro, and is nontransferable.
– A Scratch VR Z permanent license: a pro VR end-to-end post workflow with an all-inclusive, realtime toolset for data management, dailies, conform, color grading, compositing, multichannel and ambisonic sound, and finishing, all integrated with the Z Cam S1 Pro camera. Includes one year of support/updates.

The companies are offering a tutorial about the bundle.

Red’s Hydrogen One: new 3D-enabled smartphone

In its always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360-video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

Dell partners with Sony on Spider-Man film, showcases VR experience

By Jay Choi

Sony Pictures Imageworks used Dell technology during the creation of Spider-Man: Homecoming. To celebrate, Dell and Sony held a press junket in New York City that included tech demos and details on the film, as well as the Spider-Man: Homecoming Virtual Reality Experience. While I’m a huge Spider-Man fan, I am not biased in saying it was spectacular.

To begin the VR demo, users are given the same suit Tony Stark designs for Peter Parker in Captain America: Civil War and Spider-Man: Homecoming. The first action you perform is grabbing the mask and putting on the costume. You then jump into a tutorial that teaches you how to use your web-shooter mechanics (which map intuitively to your VR controllers).

Users are then tasked with stopping the villainous Vulture from attacking them and the city of New York. Admittedly, I didn’t get too far into the demo. I was a bit confused as to where to progress, but also absolutely stunned by the mechanics and details. Along with pulling triggers to fire webs, each button accessed a different type of web cartridge in your web shooter. So, like Spidey, I had to be both strategic and adaptive to each changing scenario. I actually felt like I was shooting webs and pulling large crates around… I honestly spent most of my time seeing how far the webs could go and what they could stick to — it was amazing!

The Tech
With the power of thousands of workstations, servers and over a petabyte of storage from Dell, Sony Pictures Imageworks and other studios, such as MPC and Method, were able to create the visual effects for the Spider-Man: Homecoming film. The Virtual Reality Experience actually pulled the same models, assets and details used in the film, giving users a truly awesome and immersive experience.

When I asked what this particular VR experience would cost your typical consumer, I was told that when developing the game, Dell researched major VR consoles and workstations and set a benchmark to strive for, so most consumers should be able to experience the game without too much of a difference.

Along with the VR game, Dell also showcased its new gaming laptop: the Inspiron 15 7000. With a quad-core H-Class 7th-Gen Intel Core and Nvidia GeForce GTX 1050/1050 Ti, the laptop is marketed for hardcore gaming. It has a tough-yet-sleek design that’s appealing to the eye. However, I was more impressed with its power and potential. The junket had one of these new Inspiron laptops running the recently rebooted Killer Instinct fighting game (which ironically was my very first video game on the Super Nintendo… I guess violent video games did an okay job raising me). As a fighting game fanatic and occasional competitor, I have to say the game ran very smoothly. I couldn’t spot latency between inputs from the USB-connected Xbox One controllers or any frame skipping. It does what it says it can do!

The Inspiron 15 7000 was also featured in the Spider-Man: Homecoming film, where it was used by Jacob Batalon's character, Ned, to aid Peter Parker in his web-tastic mission.

I was also lucky enough to try out Sony Future Lab Program's projector-based interactive Find Spider-Man game, where the game's "screen" is projected onto a table from a depth-perceiving projector lamp. A blank board was used as a scroll to maneuver a map of New York City, while movable piles of blocks were recognized as buildings and individual floors. Sometimes Spidey was found sitting on the roof, while other times he was hiding inside on one of the floors.

All in all, Dell and Sony Pictures Imageworks' partnership provided some sensational insight into what being Spider-Man is like with their technology and innovation, and I hope to see it evolve even further alongside more Spider-Man: Homecoming films.

The Spider-Man: Homecoming Virtual Reality Experience arrives on June 30th for all major VR platforms. Marvel’s Spider-Man: Homecoming releases in theaters on July 7th.


Jay Choi is a Korean-American screenwriter, who has an odd fascination with Lego minifigures, a big heart for his cat Sula, and an obsession with all things Spider-Man. He is currently developing an animated television pitch he sold to Nickelodeon and resides in Brooklyn.

SGO’s Mistika VR is now available


SGO’s Mistika VR software app is now available. This solution has been developed using the company’s established Mistika technology and offers advanced realtime stitching capabilities combined with a new intuitive interface and raw format support with incredible speed.

Using Mistika Optical Flow Technology (our main image), the new VR solution takes camera position information and image sequences, then stitches the images together using extensive and intelligent presets. Its unique stitching algorithms help with the many challenges facing post teams and allow for the highest image quality.

Mistika VR was developed to encompass and work with as many existing VR camera formats as possible, and SGO is creating custom pre-sets for productions where teams are building the rigs themselves.

The Mistika VR solution is part of SGO’s new natively integrated workflow concept. SGO has been dissecting its current turnkey offering “Mistika Ultima” to develop advanced workflow applications aimed at specific tasks.

Mistika VR runs on Mac and Windows and is available as a Personal or Professional (with SGO customer support) edition license. License costs are:

– 30-day license (with no automatic renewals): Evaluation Version free; Personal Edition $78; Professional Edition $110

– Monthly subscription: Personal Edition $55 per month; Professional Edition $78 per month

– Annual subscription: Personal Edition $556 per year; Professional Edition $779 per year

VR audio terms: Gaze Activation v. Focus

By Claudio Santos

Virtual reality brings a lot of new terminology to the post process, and we’re all having a hard time agreeing on the meaning of everything. It’s tricky because clients and technicians sometimes have different understandings of the same term, which is a guaranteed recipe for headaches in post.

Two terms that I've seen being confused a few times in the spatial audio realm are Gaze Activation and Focus. They are similar enough to be put in the same category, but at the same time different enough that most of the time you have to choose completely different tools and distribution platforms depending on which technology you want to use.

Field of view

Focus
Focus is what the Facebook Spatial Workstation calls this technology, but it is a tricky one to name. As you may know, ambisonics represents a full sphere of audio around the listener. Players like YouTube and Facebook (which uses ambisonics inside its own proprietary .tbe format) can dynamically rotate this sphere so the relative positions of the audio elements are accurate to the direction the audience is looking at. But the sounds don’t change noticeably in level depending on where you are looking.

If we take a step back and think about "surround sound" in the real world, it actually makes perfect sense. A hair clipper isn't particularly louder when it's in front of our eyes as opposed to when it's trimming the back of our head. Nor can we ignore the annoying person who is loudly talking on their phone on the bus by simply looking away.

But for narrative construction, it can be very effective to emphasize what your audience is looking at. That opens up possibilities, such as presenting the viewer with simultaneous yet completely unrelated situations and letting them choose which one to pay attention to simply by looking in the direction of the chosen event. Keep in mind that in this case, all events are happening simultaneously and will carry on even if the viewer never looks at them.

This technology is not currently supported by YouTube, but it is possible in the Facebook Spatial Workstation with the use of high Focus Values.
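
To make the distinction concrete, here is a minimal sketch in Python (purely illustrative, not the Facebook Spatial Workstation's actual math) of how a focus-style emphasis could work: every sound keeps playing, but its gain is weighted by how closely it lines up with the viewer's gaze, scaled by a focus value.

```python
def focus_gain(gaze_dir, source_dir, focus_value):
    """Illustrative emphasis curve: boost sources near the gaze direction
    and attenuate the rest. gaze_dir and source_dir are unit 3D vectors;
    focus_value in [0, 1] blends between 'no emphasis' and 'full emphasis'."""
    # Cosine of the angle between where the viewer looks and where the sound sits.
    alignment = sum(g * s for g, s in zip(gaze_dir, source_dir))  # dot product
    alignment = max(0.0, alignment)                               # ignore sources behind the gaze
    # With focus_value = 0 every source keeps unity gain (plain ambisonic behavior);
    # with focus_value = 1 off-axis sources are pulled down toward roughly -12dB.
    min_gain = 10 ** (-12 / 20)                                   # about 0.25
    return (1.0 - focus_value) + focus_value * (min_gain + (1.0 - min_gain) * alignment)

print(focus_gain((0, 0, 1), (0, 0, 1), 0.8))  # source in front of the gaze: ~1.0
print(focus_gain((0, 0, 1), (1, 0, 0), 0.8))  # source 90 degrees off-axis: noticeably lower
```

With the focus value at zero, the behavior collapses back to plain ambisonic playback, where level does not depend on gaze at all, which is exactly the distinction drawn above.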

Gaze Activation
When we talk about Focus, the key thing to keep in mind is that all the events happen regardless of whether the viewer is looking at them. If instead you want a certain sound to happen only when the viewer looks at a certain prop, regardless of the time, then you are looking for Gaze Activation.

This concept is much more akin to game audio than to film sound because of the interactivity element it presents. Essentially, you are using the direction of the gaze, and potentially the length of the gaze (if you want your viewer to look in a direction for x amount of seconds before something happens), as a trigger for sound/video playback.
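
As a rough illustration of that trigger logic, the sketch below (hypothetical names, not tied to InstaVR or any particular engine) accumulates dwell time while the gaze stays within a small cone around the prop and fires a one-shot sound once the threshold is reached.

```python
import math

DWELL_SECONDS = 2.0          # how long the viewer must look before the sound fires
TRIGGER_CONE_DEGREES = 15.0  # how close the gaze must be to the prop's direction


class GazeTrigger:
    """Fires a one-shot sound once the gaze has rested on a prop long enough."""

    def __init__(self, prop_direction, play_sound):
        self.prop_direction = prop_direction  # unit vector from viewer to prop
        self.play_sound = play_sound          # callback that starts playback
        self.dwell = 0.0
        self.fired = False

    def update(self, gaze_direction, dt):
        # Angle between the gaze and the prop, in degrees.
        dot = sum(g * p for g, p in zip(gaze_direction, self.prop_direction))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        if angle <= TRIGGER_CONE_DEGREES:
            self.dwell += dt   # still looking: accumulate dwell time
        else:
            self.dwell = 0.0   # looked away: reset the timer
        if not self.fired and self.dwell >= DWELL_SECONDS:
            self.fired = True
            self.play_sound()
```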

This is very useful if you want to make it impossible for your audience to miss something because they were looking in the "wrong" direction. Think of a jump scare in a horror experience. It's not very scary if you're looking in the opposite direction, is it?

This is currently only supported if you build your experience in a game engine or as an independent app with tools such as InstaVR.

Both concepts are very closely related and I expect many implementations will make use of both. We should all keep an eye on the VR content distribution platforms to see how these tools will be supported and make the best use of them in order to make 360 videos even more immersive.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

VR Workflows: The Studio | B&H panel during NAB

At this year’s NAB Show in Las Vegas, The Studio B&H hosted a series of panels at their booth. One of those panels addressed workflows for virtual reality, including shooting, posting, best practices, hiccups and trends.

The panel, moderated by postPerspective editor-in-chief Randi Altman, was made up of SuperSphere’s Lucas Wilson, ReDesign’s Greg Ciaccio, Local Hero Post’s Steve Bannerman and Jaunt’s Koji Gardner.

While the panel was streamed live, it also lives on YouTube. Enjoy…

New AMD Radeon Pro Duo graphics card for pro workflows

AMD was at NAB this year with its dual-GPU graphics card designed for pros — the Polaris-architecture-based Radeon Pro Duo. Built on the capabilities of the Radeon Pro WX 7100, the Radeon Pro Duo graphics card is designed for media and entertainment, broadcast and design workflows.

The Radeon Pro Duo is equipped with 32GB of ultra-fast GDDR5 memory to handle larger data sets, more intricate 3D models, higher-resolution videos and complex assemblies. Operating at a max power of 250W, the Radeon Pro Duo uses a total of 72 compute units (4,608 stream processors) for a combined performance of up to 11.45 TFLOPS of single-precision compute performance on one board, and twice the geometry throughput of the Radeon Pro WX 7100.
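
As a back-of-the-envelope check on those figures, assuming the usual two floating-point operations per stream processor per clock (a fused multiply-add, which is an assumption rather than an AMD spec), the quoted TFLOPS number implies the board's approximate peak clock:

```python
compute_units = 72
stream_processors = compute_units * 64  # 4,608 across both GPUs
peak_tflops = 11.45

# Peak single precision = stream processors x 2 FLOPs per clock x clock rate.
implied_clock_ghz = peak_tflops * 1e12 / (stream_processors * 2) / 1e9
print(f"~{implied_clock_ghz:.2f} GHz peak clock")  # roughly 1.24 GHz
```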

The Radeon Pro Duo enables pros to work on up to four 4K monitors at 60Hz, drive the latest 8K single monitor display at 30Hz using a single cable or drive an 8K display at 60Hz using a dual cable solution.

The Radeon Pro Duo’s distinct dual-GPU design allows pros the flexibility to divide their workloads, enabling smooth multi-tasking between applications by committing GPU resources to each. This will allow users to focus on their creativity and get more done faster, allowing for a greater number of design iterations in the same time.

On select pro apps (including DaVinci Resolve, Nuke/Cara VR, Blender Cycles and VRed), the Radeon Pro Duo offers up to two times faster performance compared with the Radeon Pro WX 7100.

For those working in VR, the Radeon Pro Duo graphics card uses the power of two GPUs to render out separate images for each eye, increasing VR performance over single GPU solutions by up to 50% in the SteamVR test. AMD’s LiquidVR technologies are also supported by the industry’s leading realtime engines, including Unity and Unreal, to help ensure smooth, comfortable and responsive VR experiences on Radeon Pro Duo.

The Radeon Pro Duo’s planned availability is the end of May at an expected price of US $999.

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have just offered audio-for-VR services only, but instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but determining which objects will lead the viewer, with their sound cues, into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn't much different than traditional spot work. It's the testing and review that is a game changer. Things generally can't be reviewed live unless you have a custom rig that runs its own headset. It's a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to take into account the extra time and instruction for your clients to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It's different than talking about conventional film or advertising; there's a new spark, and it's fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid ProTools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey

The importance of audio in VR

By Anne Jimkes

While some might not be aware, sound is 50 percent of the experience in VR, as well as in film, television and games. Because we can’t physically see the audio, it might not get as much attention as the visual side of the medium. But the balance and collaboration between visual and aural is what creates the most effective, immersive and successful experience.

More specifically, sound in VR can be used to ease people into the experience, what we also call "onboarding." It can be used subtly and subconsciously to guide viewers by motivating them to look in a specific direction of the virtual world, which completely surrounds them.

In every production process, it is important to discuss how sound can be used to benefit the storytelling and the overall experience of the final project. In VR, especially the many low-budget independent projects, it is crucial to keep the importance and use of audio in mind from the start to save time and money in the end. Oftentimes, there are no real opportunities or means to record ADR after a live-action VR shoot, so it is important to give the production mixer ample opportunity to capture the best production sound possible.

Anne Jimkes at work.

This involves capturing wild lines, making sure there is time to plant and check the mics, and recording room tone. These things are already required, albeit not always granted, on regular shoots, but they are even more important on a set where a boom operator cannot be used due to the camera's 360-degree view. The post process is also very similar to that for TV or film up to the point of actual spatialization. We come across similar issues of having to clean up dialogue and fill in the world through sound. What producers must be aware of, however, is that after all the necessary elements of the soundtrack have been prepared, we have to manually and meticulously place and move around all the "audio objects" and various audio sources throughout the space. Whenever people decide to re-orient the video — meaning when they change what is considered the initial point of facing forward or "north" — we have to rewrite all the information that established the location and movement of the sound, which takes time.
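
A simplified sketch of what that re-orientation means for positioned sounds: every placed azimuth has to be offset by the new "north." The data model below is hypothetical (real spatial audio sessions carry far more than one azimuth per object, plus keyframes over time), but the yaw offset itself is the core of the rework.

```python
def reorient_objects(objects, new_north_degrees):
    """Apply a yaw offset to every positioned audio object when the video's
    'forward' direction changes. Azimuths are in degrees, 0 = original forward,
    increasing clockwise."""
    reoriented = []
    for obj in objects:
        azimuth = (obj["azimuth"] - new_north_degrees) % 360.0
        reoriented.append({**obj, "azimuth": azimuth})
    return reoriented

# If the client decides that what was 90 degrees to the right is the new forward
# direction, every positioned object shifts with it.
objects = [{"name": "dialogue", "azimuth": 0.0}, {"name": "car_pass", "azimuth": 90.0}]
print(reorient_objects(objects, 90.0))
# [{'name': 'dialogue', 'azimuth': 270.0}, {'name': 'car_pass', 'azimuth': 0.0}]
```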

Capturing Audio for VR
To capture audio for virtual reality we have learned a lot about planting and hiding mics as efficiently as possible. Unlike regular productions, it is not possible to use a boom mic, which tends to be the primary and most natural-sounding microphone. Aside from the more common lavalier mics, we also use ambisonic mics, which capture a full sphere of audio and match the 360 picture — if the mic is placed correctly on axis with the camera. Most of the time we work with Sennheiser and use their Ambeo microphone to capture 360 audio on set, after which we add the rest of the spatialized audio during post production. Playing back the spatialized audio has become easier lately, because more and more platforms and VR apps accept some form of 360 audio playback. There is still a difference between the file formats to which we can encode our audio outputs, meaning that some are more precise and others are a little more blurry regarding spatialization. With VR, there is not yet a standard for deliverables and specs, unlike the film/television workflow.
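
For readers unfamiliar with how an ambisonic track stores direction, here is a minimal first-order encode of a mono source, assuming the AmbiX convention (ACN channel order, SN3D normalization). Other formats use different channel orders and weights, which is part of why spatialization can come out more precise on some platforms and blurrier on others.

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order ambisonics (AmbiX: channels
    W, Y, Z, X with SN3D normalization assumed). Azimuth is measured
    counterclockwise from front; elevation upward from the horizon."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                # omnidirectional component
    y = sample * math.sin(az) * math.cos(el)  # left/right
    z = sample * math.sin(el)                 # up/down
    x = sample * math.cos(az) * math.cos(el)  # front/back
    return w, y, z, x

# A source hard left on the horizon puts its energy in W and Y only.
print(encode_first_order(1.0, 90.0, 0.0))  # (1.0, 1.0, 0.0, ~0.0)
```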

What matters most in the end is that people are aware of how the creative use of sound can enhance their experience, and how important it is to spend time on capturing good dialogue on set.


Anne Jimkes is a composer, sound designer, scholar and visual artist from the Netherlands. Her work includes VR sound design at EccoVR and work with the IMAX VR Centre. With a Master’s Degree from Chapman University, Jimkes previously served as a sound intern for the Academy of Television Arts & Sciences.

Assimilate’s Scratch VR Suite 8.6 now available

Back in February, Assimilate announced the beta version of its Scratch VR Suite 8.6. Well, now the company is back with a final version of the product, including user requests for features and functions.

Scratch VR Suite 8.6 is a realtime post solution and workflow for VR/360 content. With added GPU stitching of 360-video and ambisonic audio support, as well as live streaming, the Scratch VR Suite 8.6 allows VR content creators — DPs, DITs, post artists — a streamlined, end-to-end workflow for VR/360 content.

The Scratch VR Suite 8.6 workflow automatically includes all the basic post tools: dailies, color grading, compositing, playback, cloud-based reviews, finishing and mastering.

New features and updates include:
• 360 stitching functionality: Load the source media of multiple shots from your 360 cameras into Scratch VR and easily wrap them into a stitch node that combines the sources into an equirectangular image.
• Support for various stitch template formats, such as AutoPano, Hugin, PTGui and PTStitch scripts.
• Either render out the equirectangular format first, or just continue to edit, grade and composite on top of the stitched nodes and render the final result.
• Ambisonic audio: Load, set and play back ambisonic audio files to complete the 360 immersive experience.
• Video with 360 sound can be published directly to YouTube 360.
• Additional overlay handles to the existing 2D-equirectangular feature for more easily positioning 2D elements in a 360 scene.
• Support for Oculus Rift, Samsung Gear VR, HTC Vive and Google Cardboard.
• Several new features and functions make working in HDR just as easy as SDR.
• Increased format support: Added support for all the latest formats for even greater efficiency in the DIT and post production processes.
• Simplified DIT reporting function: Added features and functions enable even greater efficiencies in a single, streamlined workflow.
• User interface: Numerous updates have been made to enhance and simplify the UI for content creators, such as the log-in screen, matrix layout, swipe sensitivity, player stack, tool bar and tool tips.

Lenovo intros VR-ready ThinkStation P320

Lenovo launched its VR-ready ThinkStation P320 at Develop3D Live, a UK-based conference that puts special focus on virtual reality as a productivity tool in design workflows. The ThinkStation P320 is the latest addition to the Lenovo portfolio of VR-ready certified workstations and is designed for power users looking to balance both performance and their budgets.

The workstation's pro VR certification allows ThinkStation P320 users to more easily add virtual reality into their workflow without requiring an initial high-end hardware and software investment.

The refreshed workstation will be available in both full-size tower and small form factor (SFF) and comes equipped with Intel’s newest Xeon processors and Core i7 processors — offering speeds of up to 4.5GHz with Turbo Boost (on the tower). Both form factors will also support the latest Nvidia Quadro graphics cards, including support for dual Nvidia Quadro P1000 GPUs in the small form factor.

The ISV-certified ThinkStation P320 supports up to 64GB of DDR4 memory and customization via the Flex Module. In terms of environmental sustainability, the P320 is Energy Star-qualified, as well as EPEAT Gold and Greenguard-certified.

The Lenovo ThinkStation P320 full-size tower and SFF will be available at the end of April.

Timecode’s new firmware paves the way for VR

Timecode Systems, which makes wireless technologies for sharing timecode and metadata, has launched a firmware upgrade that enhances the accuracy of its wireless genlock.

Promising sub-line-accurate synchronization, the system allows Timecode Systems products to stay locked in sync more accurately, setting the scene for development of a wireless sensor sync solution able to meet the requirements of VR/AR and motion capture.

“The industry benchmark for synchronization has always been ‘frame-accurate’, but as we started exploring the absolutely mission-critical sync requirements of virtual reality, augmented reality and motion capture, we realized sync had to be even tighter,” said Ashok Savdharia, chief technical officer at Timecode Systems. “With the new firmware and FPGA algorithms released in our latest update, we’ve created a system offering wireless genlock to sub-line accuracy. We now have a solid foundation on which to build a robust and immensely accurate genlock, HSYNC and VSYNC solution that will meet the demands of VR and motion capture.”

A veteran in camera and image sensor technology, Savdharia joined Timecode Systems last year. In addition to building up the company's multi-camera range of solutions, he is leading a development team pioneering a wireless sync system for the VR and motion capture market.

HPA Tech Retreat takes on realities of virtual reality

By Tom Coughlin

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar— Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things and how they perceive reality, and using that understanding to help tell a story. Some things that may help with this are reinforcement of the viewer's gaze with color and sound that may vary with the viewer — e.g., these may be different for the "good guy" and the "bad guy."

VR workflows are quite different from traditional ones, with many elements changing with multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras.
Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also some places that allow one or more people to experience VR, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR is used for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on their approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be areas between camera images that need to be stitched together using common reference points, as well as blind spots near the cameras where they are not capturing images.

If there is a single axis for all of the cameras, then there are effectively no blind spots and no stitching required, as shown in the image below. Covering the full 360-degree space, however, requires additional cameras located on that axis.

The Fraunhofer Institute, in Germany, has been showing a 360-degree video camera with an effective single axis for several cameras for several years, as shown below. They do this using mirrors to reflect images to the individual cameras.

As the number of cameras is increased, the mathematical work to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used in VR stitching of multiple camera videos. The easiest to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources but results in unavoidable ghosting artifacts at seams from the separate images.

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.
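
A toy sketch of the geometric side of that trade-off: for each output yaw, per-camera blend weights are feathered across the overlap between neighboring cameras. This is cheap to compute, but because parallax makes a nearby object land at slightly different yaws in adjacent cameras, blending by angle alone is exactly what produces ghosting at the seams. Optical flow avoids that by warping pixels along per-pixel correspondences before blending, at far greater cost. The ring-of-cameras layout and field of view below are assumptions for illustration.

```python
def geometric_blend_weights(yaw_deg, camera_centers_deg, fov_deg):
    """Per-camera blend weight for one output yaw in a simple geometric stitch.
    Cameras sit on a ring, each covering fov_deg; weights feather linearly
    across the overlaps and are normalized to sum to 1."""
    raw = []
    for center in camera_centers_deg:
        # Smallest angular distance from this camera's optical axis.
        offset = abs((yaw_deg - center + 180.0) % 360.0 - 180.0)
        # Full weight on-axis, tapering to zero at the edge of the camera's FOV.
        raw.append(max(0.0, 1.0 - offset / (fov_deg / 2.0)))
    total = sum(raw) or 1.0
    return [w / total for w in raw]

# Four cameras with 120-degree coverage each: a yaw of 45 degrees falls in the
# overlap between the 0-degree and 90-degree cameras and is blended 50/50.
print(geometric_blend_weights(45.0, [0.0, 90.0, 180.0, 270.0], 120.0))
```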

Files from the Jaunt One camera system are first downloaded and organized on a laptop computer and then uploaded to Jaunt’s cloud server to be processed and create the stitching to make a 360 video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has a 1-inch 20MP rolling-shutter sensor with frame rates up to 60fps, a 3200 ISO maximum and 29dB SNR at ISO 800. It offers 10 stops of dynamic range per camera module, a 130-degree diagonal FOV, f/2.9 optics and up to 16K resolution (8K per eye). Jaunt One at 60fps produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes. They are forced to use compression to be able to use currently affordable storage devices. This compression creates 11GB per minute, which fills a 1TB SSD in 90 minutes.
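
Those fill times follow directly from the quoted data rates; a quick check, using 1TB = 1,000GB in round numbers:

```python
ssd_capacity_gb = 1000          # a 1TB SSD, in round numbers

uncompressed_gb_per_min = 200   # Jaunt One at 60fps, uncompressed
compressed_gb_per_min = 11      # after compression

print(ssd_capacity_gb / uncompressed_gb_per_min)  # 5.0 minutes to fill the drive
print(ssd_capacity_gb / compressed_gb_per_min)    # ~91 minutes to fill the drive
```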

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears to be a natural image of the world around the viewer, giving an immersive experience. At any point in time the viewer does not see all of the image, only the restricted region they are looking at directly, as shown in the red box in the figure below.
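
The distortion comes from the equirectangular layout itself: yaw maps to the horizontal axis and pitch to the vertical axis, so regions near the poles get stretched across the full width of the frame. A small sketch of that mapping, using the generic equirectangular formula rather than any vendor's specific implementation:

```python
import math

def direction_to_equirect_pixel(x, y, z, width, height):
    """Map a unit view direction to pixel coordinates in an equirectangular
    frame: longitude (yaw) spans the full width, latitude (pitch) the height."""
    yaw = math.atan2(x, z)                     # -pi..pi, 0 = straight ahead
    pitch = math.asin(max(-1.0, min(1.0, y)))  # -pi/2..pi/2, 0 = horizon
    u = (yaw / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - pitch / math.pi) * (height - 1)
    return u, v

# Straight ahead lands in the center of a 3840x1920 frame; straight up maps to
# the top row, where that single point is smeared across every column.
print(direction_to_equirect_pixel(0, 0, 1, 3840, 1920))  # (~1919.5, ~959.5)
print(direction_to_equirect_pixel(0, 1, 0, 3840, 1920))  # (~1919.5, 0.0)
```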

The full 360-degree image can be pretty high resolution, but the resolution inside the region being viewed at any point in time will be much less than the resolution of the overall scene unless special steps are taken.

The image below shows that for a 4K 360-degree video the resolution in the field of view (FOV) may be only 1K, which is much less resolution and quite perceptible to the human eye.
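
The arithmetic behind that figure is straightforward: only the slice of the equator covered by the headset's field of view is on screen at any moment. Assuming a horizontal FOV of roughly 100 degrees (headsets vary), the usable horizontal resolution works out as follows:

```python
def pixels_in_fov(equirect_width, fov_degrees=100):
    """Approximate horizontal pixels covering the headset's field of view,
    assuming even sampling along the equator of an equirectangular frame."""
    return equirect_width * fov_degrees / 360.0

print(pixels_in_fov(3840))    # ~1067 px in view from a "4K" master
print(pixels_in_fov(15360))   # ~4267 px in view from a 16K master (8K per eye)
```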

In order to provide a better viewing experience in the FOV, either the resolution of the entire view must be better (e.g. the Jaunt One high-resolution version has 8K per eye and thus 16K total displayed resolution) or there must be a way to increase the resolution in the most significant FOV in a video, so at least in that FOV, the resolution leads to a greater feeling of reality.

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of this cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of a director is as important as ever as new methods are needed to tell stories and guide the viewer to engage in this story.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.


Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.

Last Chance to Enter to Win an Amazon Echo… Take our Storage Survey Now!

If you’re working in post production, animation, VFX and/or VR/AR/360, please take our short survey and tell us what works (and what doesn’t work) for your day-to-day needs.

What do you need from a storage solution? Your opinion is important to us, so please complete the survey by Wednesday, March 8th.

We want to hear your thoughts… so click here to get started now!


VR Post: Hybrid workflows are key

By Beth Marchant

Shooting immersive content is one thing, but posting it for an ever-changing set of players and headsets is a whole other multidimensional can of beans.

With early help from software companies that have developed off-the-shelf ways to tackle VR post — and global improvements to their storage and networking infrastructures — some facilities are diving into immersive content by adapting their existing post suites with a hybrid set of new tools. As with everything else in this business, it’s an ongoing challenge to stay one step ahead.

Chris Healer

The Molecule
New York- and Los Angeles-based motion graphics and VFX post house The Molecule leapt into the VR space more than a year and a half ago when it fused The Foundry’s Nuke with the open-sourced panoramic photo stitching software Hugin. Then, CEO Chris Healer took the workflow one step further. He developed an algorithm that rendered stereoscopic motion graphics spherically in Nuke.

Today, those developments have evolved into a robust pipeline that fuels The Molecule’s work for Conan O’Brien’s eponymous TBS talk show, The New York Times’s VR division and commercial work. “It’s basically eight or ten individual nodes inside Nuke that complete one step or another of the process,” says Healer. “Some of them overlap with Cara VR,” The Foundry’s recently launched VR plug-in for Nuke, “but all of it works really well for our artists. I talk to The Foundry from time to time and show them the tools, so there’s definitely an open conversation there about what we all need to move VR post forward.”

Collaborating with VR production companies like SuperSphere, Jaunt and Pixvana in Seattle, The Molecule is heading first where mass VR adoption seems likeliest. “The New York Times, for example, wants to have a presence at film festivals and new technology venues, and is trying to get out of the news-only business and into the entertainment-provider business. And the job for Conan was pretty wild — we had to create a one-off gag for Comic-Con that people would watch once and go away laughing to the next thing. It’s kind of a cool format.”

Healer’s team spent six weeks on the three-minute spot. “We had to shoot plates, model characters, animate them, composite it, build a game engine around it, compile it, get approval and iterate through that until we finished. We delivered 20 or so precise clips that fit into a game engine design, and I think it looks great.”

Healer says the VR content The Molecule is posting now is, like the Conan job, a slight variation on more typical recent VR productions. “I think that’s also what makes VR so exciting and challenging right now,” he says. “Everyone’s got a different idea about how to take it to the next level. And a lot of that is in anticipation of AR (augmented reality) and next-generation players/apps and headsets.

‘Conan’

The Steam store," the premier place online to find virtual content, "has content that supports multiple headsets, but not all of them." He believes that will soon gel into a more unified device driver structure, "so that it's just VR, not Oculus VR or Vive VR. Once you get basic head tracking together, then there's the whole next thing: Do you have a controller of some kind, are you tracking in positional space, do you need to do room set up? Do we want wands or joysticks or hand gestures, or will keyboards do fine? What is the thing that wins? Those hurdles should solidify in the next year or two. The key factor in any of that is killer content."

The biggest challenge facing his facility, and anyone doing VR post right now, he says, is keeping pace with changing resolutions and standards. "It used to be that 4K or 4K stereo was a good deliverable and that would work," says Healer. "Now everything is 8K or 10K, because there's this idea that we also have to future-proof content and prepare for next-gen headsets. You end up with a lot of new variables, like frame rate and resolution. We're working on a stereo commercial right now, and just getting the footage of one shot converted from only six cameras takes almost 3TB of disk space, and that's just the raw footage."

When every client suddenly wants to dip their toes into VR, how does a post facility respond? Healer thinks the onus is on production and post services to provide as many options as possible while using their expertise to blaze new paths. "It's great that everyone wants to experiment in the space, and that puts a certain creative question in our field," he says. "You have to seriously ask of every project now, does it really just need to be plain-old video? Or is there a game component or interactive component that involves video? We have to explore that. But that means you have to allocate more time in Unity (https://unity3d.com/) building out different concepts for how to present these stories."

As the client projects get more creative, The Molecule is relying on traditional VFX processes like greenscreen, 3D tracking and shooting plates to solve VR-related problems. “These VFX techniques help us get around a lot of the production issues VR presents. If you’re shooting on a greenscreen, you don’t need a 360 lens, and that helps. You can shoot one person walking around on a stage and then just pan to follow them. That’s one piece of footage that you then composite into some other frame, as opposed to getting that person out there on the day, trying to get their performance right and then worrying about hiding all the other camera junk. Our expertise in VFX definitely gives us an advantage in VR post.”

From a post perspective, Healer still hopes most for new camera technology that would radically simplify the stitching process, allowing more time for concepting and innovative project development. “I just saw a prototype of a toric lens,” shaped like the donut-like torus that results from revolving a circle in three-dimensional space, “that films 360 minus a little patch, where the tripod is, in a single frame,” he says. “That would be huge for us. That would really change the workflow around, and while we’re doing a lot of CG stuff that has to be added to VR, stitching takes the most time. Obviously, I care most about post, but there are also lots of production issues around a new lens like that. You’d need a lot of light to make it work well.”

Local Hero Post
For longtime Scratch users Local Hero Post, in Santa Monica, the move to begin grading and compositing in Assimilate Scratch VR was a no-brainer. “We were one of the very first American companies to own a Scratch when it was $75,000 a license,” says founder and head of imaging Leandro Marini. “That was about 10 years ago and we’ve since done about 175 feature film DIs entirely in Scratch, and although we also now use a variety of tools, we still use it.”

Leandro Marini

Marini says he started seeing client demand for VR projects about two years ago, and he turned to Scratch VR. He says it allows users to do traditional post the way editors and colorists are used to — "with all the same DI tools that let you do complicated paint outs, visual effects and 50-layer-deep color corrections, Power Windows, in realtime on a VR sphere."

New Deal Studios' 2015 Sundance film Kaiju Fury was an early project, "when Scratch VR was first really user-friendly and working in realtime." Now, Marini says, their VR workflow is "pretty robust. [It's] currently the only system that I know of that can work in VR in realtime in multiple ways," which include an equirectangular projection, which gives you a YouTube 360 type of feel, and an Oculus headset view.

“You can attach the headset, put the Oculus on and grade and do visual effects in the headset,” he says. “To me, that’s the crux: you really have to be able to work inside the headset if you are going to grade and do VR for real. The difference between seeing a 360 video on a computer screen and seeing it from within a headset and being able to move your head around is huge. Those headsets have wildly different colors than a computer screen.”

The facility’s — and likely the industry’s — highest profile and biggest budget project to date is Invisible, a new VR scripted miniseries directed by Doug Liman and created by 30 Ninjas, the VR company he founded with Julina Tatlock. Invisible premiered in October on Samsung VR and the Jaunt app and will roll out in coming months in VR theaters nationwide. Written by Dallas Buyers Club screenwriter Melisa Wallack and produced by Jaunt and Condé Nast Entertainment, it is billed as the first virtual reality action-adventure series of its kind.

‘Invisible’

“Working on that was a pretty magical experience,” says Marini. “Even the producers and Liman himself had never seen anything like being able to do the grade, do VFX and do composite and stereo fixes in 3D virtual reality all with the headset on. That was our initial dilemma for this project, until we figured it out: do you make it look good for the headset, for the computer screen or for iPhones or Samsung phones? Everyone who worked on this understood that every VR project we do now is in anticipation of the future wave of VR headsets. All we knew was that about a third would probably see it on a Samsung Gear VR, another third would see it on a platform like YouTube 360 and the final third would see it on some other headset like Oculus Rift, HTC or Google’s new Daydream.”

How do you develop a grading workflow that fits all of the above? “This was a real tricky one,” admits Marini. “It’s a very dark and moody film and he wanted to make a family drama thriller within that context. A lot of it is dark hallways and shadows and people in silhouette, and we had to sort of learn the language a bit.”

Marini and his team began exclusively grading in the headset, but that was way too dark on computer monitors. "At the end of the day, we learned to dial it back a bit and make pretty conservative grades that worked on every platform so that it looked good everywhere. The effect of the headset is that it's a light shining right into your eyeball, so everything just looks a lot brighter. It had to still look moody inside the headset in a dark room, but not so moody that it vanishes on a laptop in a bright room. It was a balancing act."

Local Hero

Local Hero also had to figure out how to juggle the new VR work with its regular DI workload. “We had to break off the VR services into a separate bay and room that is completely dedicated to it,” he explains. “We had to slice it off from the main pipeline because it needs around-the-clock custom attention. Very quickly we realized we needed to quarantine this workflow. One of our colorists here has become a VR expert, and he’s now the only one allowed to grade those projects.” The facility upgraded to a Silverdraft Demon workstation with specialized storage to meet the exponential demand for processing power and disk space.

Marini says Invisible, like the other VR work Local Hero has done before, is, in essence, a research project in these early days of immersive content. "There is no standard color space or headset or camera. And we're still in the prototype phase of this. While we are in this phase, everything is an experiment. The experience of being in 3D space is interesting, but the quality of what you're watching is still very, very low resolution. The color fidelity relative to what we're used to in the theater and on 4K HDR televisions is like 1980s VHS quality. We're still very far away from truly excellent VR."

Scratch VR workflows on Invisible included a variety of complicated processes. "We did things like dimensionalizing 2D shots," says Marini. "That's complicated stuff. In 3D, with the headset on, we would take a shot that was in 2D, draw a rough roto mask around the person, create a 3D field, pull their nose forward, push their eyes back, push the sky back — all in a matter of seconds. That is next-level stuff for VR post."

Local Hero also used Scratch Web for reviews. “Moments after we finished a shot or sequence it was online and someone could put on a headset and watch it. That was hugely helpful. Doug was in London, Condé Nast in New York. Lexus was a sponsor of this, so their agency in New York was also involved. Jaunt is down the street from us here in Santa Monica. And there were three clients in the bay with us at all times.”

‘Invisible’

As such, there is no way to standardize a VR DI workflow, he says. "For Invisible, it was definitely all hands on deck and every day was a new challenge. It was 4K 60p stereo, so the amount of data we had to push — 4K 60p to both eyes — was unprecedented." Strange stereo artifacts would appear for no apparent reason. "A bulge would suddenly show up on a wall and we'd have to go in there and figure out why and fix it. Do we warp it? Try something else? It was like that throughout the entire project: invent the workflow every day and fudge your way through. But that's the nature of experimental technology."

Will there be a watershed VR moment in the year ahead? “I think it all depends on the headsets, which are going to be like mobile phones,” he says. “Every six months there will be a new group of them that will be better and more powerful with higher resolution. I don’t think there will be a point in the future when everyone has a self-contained high-end headset. I think the more affordable headsets that you put your phone into, like Gear VR and Daydream, are the way most people will begin to experience VR. And we’re only 20 percent of the way there now. The whole idea of VR narrative content is completely unknown and it remains to be seen if audiences care and want it and will clamor for it. When they do, then we’ll develop a healthy VR content industry in Hollywood.”


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

Virtual Reality Roundtable

By Randi Altman

Virtual reality is seemingly everywhere, especially this holiday season. Just one look at your favorite electronics store’s website and you will find VR headsets from the inexpensive, to the affordable, to the “if I win the lottery” ones.

While there are many companies popping up to service all aspects of VR/AR/360 production, for the most part traditional post and production companies are starting to add these services to their menu, learning best practices as they go.

We reached out to a sampling of pros who are working in this area to talk about the problems and evolution of this burgeoning segment of the industry.

Nice Shoes Creative Studio: Creative director Tom Westerlin

What is the biggest issue with VR productions at the moment? Is it lack of standards?
A big misconception is that a VR production is like a standard 2D video/animation commercial production. There are some similarities, but it gets more complicated when we add interaction, different hardware options, realtime data and multiple distribution platforms. It actually takes a lot more time and man hours to create a 360 video or VR experience relative to a 2D video production.

Tom Westerlin

More development time needs to be scheduled for research, user experience and testing. We’re adding more stages to the overall production. None of this should discourage anyone from exploring a concept in virtual reality, but there is a lot of consideration and research that should be done in the early stages of a project. The lack of standards presents some creative challenges for brands and agencies considering a VR project. The hardware and software choices made for distribution can have an impact on the size of the audience you want to reach as well as the approach to build it.

The current landscape provides the following options:
YouTube and Facebook can hit a ton of people with a 360 video, but offer limited VR functionality; a WebVR experience works within certain browsers, like Chrome or Firefox, but not others, limiting your audience; a custom app or experimental installation using the Oculus or HTC Vive allows for experiences with full interactivity, but presents the issue of audience limitations. There is currently no one best way to create a VR experience. It's still very much a time of discovery and experimentation.

What should clients ask of their production and post teams when embarking on their VR project?
We shouldn’t just apply what we’ve all learned from 2D filmmaking to the creation of a VR experience, so it is crucial to include the production, post and development teams in the design phase of a project.

The current majority of clients are coming from a point of view where many standard constructs within the world of traditional production (quick camera moves or cuts, extreme close-ups) have negative physiological implications (nausea, disorientation, extreme nausea). Seemingly simple creative or design decisions can have huge repercussions on complexity, time, cost and the user experience. It's important for clients to be open to telling a story in a different manner than they're used to.

What is the biggest misconception about VR — content, process or anything relating to VR?
The biggest misconception is clients thinking that 360 video and VR are the same. As we've started to introduce this technology to our clients, we've worked to explain the core differences between these extremely different experiences: VR is interactive and most of the time a full CG environment, while 360 is video and, although immersive, a more passive experience. Each has its own unique challenges and rewards, so as we think about the end user's experience, we can determine what will work best.

There’s also the misconception that VR will make you sick. If executed poorly, VR can make a user sick, but the right creative ideas executed with the right equipment can result in an experience that’s quite enjoyable and nausea free.

Nice Shoes’ ‘Mio Garden’ 360 experience.

Another misconception is that VR is capable of anything. While many may confuse VR and 360 and think an experience is limited to passively looking around, there are others who have bought into the hype and inflated promises of a new storytelling medium. That's why it's so important to understand the limitations of different devices at the early stages of a concept, so that creative, production and post can all work together to deliver an experience that takes advantage of VR storytelling, rather than falling victim to the limitations of a specific device.

The advent of affordable systems that are capable of interactivity, like the Google Daydream, should lead to more popular apps that show off a higher level of interactivity. Even sharing video of people experiencing VR while interacting with their virtual worlds could have a huge impact on the understanding of the difference between passively watching and truly reaching out and touching.

How do we convince people this isn’t stereo 3D?
In one word: Interactivity. By definition VR is interactive and giving the user the ability to manipulate the world and actually affect it is the magic of virtual reality.

Assimilate: CEO Jeff Edson

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue in VR is straightforward workflows — from camera to delivery — and then, of course, delivery to what? Compared to a year ago, shooting 360/VR video today has made big steps in ease of use because more people have experience doing it. But it is a LONG way from point and shoot. As integrated 360/VR video cameras come to market more and more, VR storytelling will become much more straightforward and the creators can focus more on the story.

Jeff Edson

And then delivery to what? There are many online platforms for 360/VR video playback today: Facebook, YouTube 360 and others for mobile headset viewing, and then there is delivery to a PC for non-mobile headset viewing. The viewing perspective is different for all of these, which means extra work to ensure continuity on all the platforms. To cover all possible viewers one needs to publish to all. This is not an optimal business model, which is really the crux of this issue.

Can standards help in this? Standards as we have known in the video world, yes and no. The standards for 360/VR video are happening by default, such as equirectangular and cubic formats, and delivery formats like H.264, Mov and more. Standards would help, but they are not the limiting factor for growth. The market is not waiting on a defined set of formats because demand for VR is quickly moving forward. People are busy creating.

What should clients ask of their production and post teams when embarking on their VR project?
We hear from our customers that the best results come when the director, DP and post supervisor collaborate on the expectations for look and feel, as well as the possible creative challenges and resolutions. Experience and budget are big contributors, too. A key issue is, what camera/rig requirements are needed for your targeted platform(s)? For example, how many cameras and what type of cameras (4K, 6K, GoPro, etc.), as well as lighting? And what about sound, which plays a key role in the viewer's VR experience?

This Yael Naim mini-concert was posted in Scratch VR by Alex Regeffe at Neotopy.

What is the biggest misconception about VR — content, process or anything relating to VR?
I see two. One: the perception that VR is a flash in the pan, just a fad. What we see today is just the launch pad. The applications for VR are vast within entertainment alone, and then there is the extensive list of other markets like training and learning in such fields as medical, military, online universities, flight, manufacturing and so forth. Two: that VR post production is a difficult process with too many steps and tools. This definitely doesn't need to be the case. Our Scratch VR customers are getting high-quality results within a single, simplified VR workflow.

How do we convince people this isn’t stereo 3D?
The main issue with stereo 3D is that it never really scaled beyond a theater experience. With VR, it may end up being just the opposite. It's unclear if VR can be a true theater experience other than through classical technologies like domes and simulators. 360/VR video in the near term is, in general, a short-form media play. It's clear that sooner rather than later smartphones will be able to shoot 360/VR video as a standard feature, and usage will skyrocket overnight. And when that happens, the younger demographic will never shoot anything that is not 360, so the Snapchat/Instagram kinds of platforms will be filled with 360 snippets. VR headsets based upon mobile devices make the pure number of displays significant. The initial tethered devices are not insignificant in numbers, but with the next generation of higher-resolution and untethered devices, maybe most significantly at a much lower price point, we will see the numbers become massive. None of this was ever the case with stereo 3D film/video.

Pixvana: Executive producer Aaron Rhodes

What is the biggest issue with VR productions at the moment? Is it lack of standards?
There are many issues with VR productions, and many of them are just growing pains: not being able to see a live stitch, how to direct without being in the shot, what to do about lighting — but these are all part of the learning curve and evolution of VR as a craft. Resolution and management of big data are the biggest issues I see on the set. Pixvana is all about resolution — it plays a key role in better immersion. Many of the cameras out there only master at 4K, and that just doesn't cut it. But when they do shoot 8K and above, the data management is extreme. Don't underestimate the responsibility you are giving to your DIT!

Aaron Rhodes

The biggest issue is this is early days for VR capture. We’re used to a century of 2D filmmaking and decade of high-definition capture with an assortment of camera gear. All current VR camera rigs have compromises, and will, until technology catches up. It’s too early for standards since we’re still learning and this space is changing rapidly. VR production and post also require different approaches. In some cases we have to unlearn what worked in standard 2D filmmaking.

What should clients ask of their production and post teams when embarking on their VR project?
Give me a schedule, and make it realistic. Stitching takes time, and unless you have a fleet of render nodes at your disposal, rendering your shot locally is going to take time — and everything you need to update or change it will take more time. VR post has lots in common with a non-VR spot, but the magnitude of data and rendering is much greater — make sure you plan for it.

Other questions to ask, because you really can’t ask enough:
• Why is this project being done as VR?
• Does the client have team members who understand the VR medium?
• If not will they be willing to work with a production team to design and execute with VR in mind?
• Has this project been designed for VR rather than just a 2D project in VR?
• Where will this be distributed? (Headsets? Which ones? YouTube? Facebook? Etc.)
• Will this require an app or will it be distributed to headsets through other channels?
• If it is an app, who will build the app and submit it to the VR stores?
• Do they want to future proof it by finishing greater than 4K?
• Is this to be mono or stereo? (If it’s stereo it better be very good stereo)
• What quality level are they aiming for? (Seamless stitches? Good stereo?)
• Is there time and budget to accomplish the quality they want?
• Is this to have spatialized audio?

What is the biggest misconception about VR — content, process or anything relating to VR?
VR is a narrative component, just like any actor or plot line. It’s not something that should just be done to do it. It should be purposeful to shoot VR. It’s the same with stereo. Don’t shoot stereo just because you can — sure, you can experiment and play (we need to do that always), but don’t without purpose. The medium of VR is not for every situation.
Other misconceptions because there are a lot out there:
• it’s as easy as shooting normal 2D.
• you need to have action going on constantly in 360 degrees.
• everything has to be in stereo.
• there are fixed rules.
• you can simply shoot with a VR camera and it will be interesting, without any idea of specific placement, story or design.
How do we convince people this isn’t stereo 3D?
Education. There are tiers of immersion with VR, and stereo 3D is one of them. I see these tiers starting with the desktop experience and going up in immersion from there, and it's important to understand the strengths and weaknesses of each:
• YouTube/Facebook on the desktop [low immersion]
• Cardboard, GearVR, Daydream 2D/3D low-resolution
• Headset Rift and Vive 2D/3D 6 degrees of freedom [high immersion]
• Computer generated experiences [high immersion]

Maxon US: President/CEO Paul Babb


Paul Babb

What is the biggest issue with VR productions at the moment? Is it lack of standards?
Project file size. Huge files. Lots of pixels. Telling a story. How do you get the viewer to look where you want them to look? How do you tell and drive a story in a 360 environment?

What should clients ask of their production and post teams when embarking on their VR project?
I think it’s more that production teams are going to have to ask the questions to focus what clients want out of their VR. Too many companies just want to get into VR (buzz!) without knowing what they want to do, what they should do and what the goal of the piece is.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
Oh boy. Let me tell you, that’s a tough one. People don’t even know that “3D” is really “stereography.”

Experience 360°: CEO Ryan Moore

What is the biggest issue with VR productions at the moment? Is it lack of standards?
One of the biggest issues plaguing the current VR production landscape is the lack of true professionals in the field. While the vast majority of independent filmmakers are doing their best to adapt their current techniques, they have been unsuccessful in perceiving how films and VR experiences genuinely differ. This apparent lack of virtual understanding generally leads to poor UX creation within finalized VR products.

Given the novelty of virtual reality and 360 video, standards are only just being determined in terms of minimum quality and image specifications, and these are constantly changing. In order to keep a finger on the pulse, VR companies are encouraged to stay plugged into 360 video communities through social media platforms. It is through this essential interaction that VR production technology can continually be re-evaluated.

What should clients ask of their production and post teams when embarking on their VR project?
When first embarking on a VR project, it is highly beneficial to walk prospective clients through the entirety of the process, before production actually begins. This allows the client a full understanding of how the workflow is used, while also ensuring client satisfaction with the eventual partnership. It’s vital that production partners convey an ultimate understanding of VR and its use, and explain their tactics in “cutting” VR scenes in post — this can affect the user’s experience in a pronounced way.

‘The Backwoods Tennessee VR Experience’ via Experience 360.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people that this isn’t stereo 3D?
The biggest misconception about VR and 360 video is that it is an offshoot of traditional storytelling and can be used in ways similar to both the cinematic and documentary worlds. The mistake a VR producer makes in equating the two is that it can often limit the potential of the user’s experience to that of a voyeur only. Content producers need to think much further outside this box and begin to embrace pairing images with interaction and interactivity. It helps to keep in mind that the intended user will feel as if these VR experiences are very personal to them, because they are usually isolated in an HMD when viewing the final product.

VR is being met with appropriate skepticism and is still widely considered a “fad” within the media landscape. This is often because the critic has not actually had a chance to try a virtual reality experience firsthand and does not understand the wide-reaching potential of immersive media. Three years in, a majority of adults in the United States have never had a chance to try VR themselves, relying on what they understand from TV commercials and online reviews. One of the best ways to convince a doubtful viewer is to give them a chance to try a VR headset themselves.

Radeon Technologies Group at AMD: Head of VR James Knight

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue for us is (or was) probably stitching and the excessive amount of time it takes, but we’re tackling that head on with Project Loom. We have realtime stitching with Loom. You can already download an early version of it on GPUopen.com. But you’re correct, there is a lack of standards in VR/360 production, mainly because there are no really established common practices. That’s to be expected, though, when you’re shooting for a new medium. Hollywood and entertainment professionals are showing up to the space in a big way, so I suspect we’ll all be working out lots of the common practices on sets in 2017.

James Knight

What should clients ask of their production and post teams when embarking on their VR project?
Double check that they have experience shooting 360, and ask them for a detailed post production pipeline outline. Occasionally, we hear horror stories of people awarding projects to companies that think they can shoot 360 without ever having personally explored 360 shooting and made mistakes themselves. You want to use an experienced crew that’s made the mistakes and is cognizant of what works and what doesn’t. The caveat there, though, is that, again, there are no established rules necessarily, so people should be willing to try new things… sometimes it takes someone not knowing they shouldn’t do something to discover something great, if that makes sense.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
That’s a fun question. The overarching misconception for me, honestly, is that people oftentimes assume VR is for kids or 16-year-old boys at home in their boxer shorts, much as a cliché politician might make a fleeting judgment that video games are bad for society. It isn’t. This young industry is really starting to build up a decent library of content, and the payoff is huge when you see well-produced content! It’s transformative, and you can genuinely envision the potential when you first put on a VR headset.

The biggest way to convince them this isn’t 3D is to convince a naysayer to put the headset on… let’s agree we all look rather silly with a VR headset on, and once you get over that, you’ll find out what’s inside. It’s magical. I had the CEO of BAFTA LA, Chantal Rickards, tell me upon seeing VR for the first time, “I remember when my father arrived home on Christmas Eve with a color TV set in the 1960s and the excitement that brought to me and my siblings. The thrill of seeing virtual reality for the first time was like seeing color TV for the first time, but times 100!”

Missing Pieces: Head of AR/VR/360 Catherine Day

Catherine Day

What is the biggest issue with VR productions at the moment?
The biggest issue with VR production today is the fact that everything keeps changing so quickly. Every day there’s a new camera, a new set of tools, a new proprietary technology and new formats to work with. It’s difficult to understand how all of these things work, and even harder to make them work together seamlessly in a deadline-driven production setting. So much of what is happening on the technology side of VR production is evolving very rapidly. Teams often reinvent the wheel from one project to the next as there are endless ways to tell stories in VR, and the workflows can differ wildly depending on the creative vision.

The lack of funding for creative content is also a huge issue. There’s ample funding to create in other mediums, and we need more great VR content to drive consumer adoption.

Is it lack of standards?
In any new medium and any pioneering phase of an industry, it’s dangerous to create standards too early. You don’t want to stifle people from trying new things. As an example, with our recent NBA VR project, we broke all of the conventional rules that exist around VR — there was a linear narrative, fast-cut edits, it was over 25 minutes long — yet it was still very well received. So it’s not a lack of standards, just a lack of bravery.

What should clients ask of their production and post teams when embarking on their VR project?
Ask to see what kind of work that team has done in the past. They should also delve in and find out exactly who completed the work and how much, if any, of it was outsourced. There is a curtain between the client and the production/post company that often closes once the work is awarded. Clients need to know exactly who is working on their project, as much of the legwork involved in creating a VR project — stitching, compositing, etc. — is outsourced.

It’s also important to work with a very experienced post supervisor — one with a very discerning eye. You want someone who really knows VR that can evaluate every aspect of what a facility will assemble. Everything from stitching, compositing to editorial and color — the level of attention to detail and quality control for VR is paramount. This is key not only for current releases, but as technology evolves — and as new standards and formats are applied — you want your produced content to be as future-proofed as possible so that if it requires a re-render to accommodate a new, higher-res format in the future, it will still hold up and look fantastic.

What is the biggest misconception about VR — content, process or anything relating to VR?
On the consumer level, the biggest misconception is that people think that 360 video on YouTube or Facebook is VR. Another misconception is that regular filmmakers are the creative talents best suited to create VR content. Many of them are great at it, but traditional filmmakers have the luxury of being in control of everything, and in a VR production setting you have no box to work in and you have to think about a billion moving parts at once. So it either requires a creative that is good with improvisation, or a complete control freak with eyes in the back of their head. It’s been said before, but film and theater are as different as film and VR. Another misconception is that you can take any story and tell it in VR — you actually should only embark on telling stories in VR if they can, in some way, be elevated through the medium.

How do we convince people this isn’t stereo 3D?
With stereo 3D, there was no simple, affordable path for consumer adoption. We’re still getting there with VR, but today there are a number of options for consumers and soon enough there will be a demand for room-scale VR and more advanced immersive technologies in the home.

VR Audio: Virtual and spatial soundscapes

By Beth Marchant

The first things most people think of when starting out in VR is which 360-degree camera rig they need and what software is best for stitching. But virtual reality is not just a Gordian knot for production and post. Audio is as important — and complex — a component as the rest. In fact, audio designers, engineers and composers have been fascinated and challenged by VR’s potential for some time and, working alongside future-looking production facilities, are equally engaged in forging its future path. We talked to several industry pros on the front lines.

Howard Bowler

Music industry veteran and Hobo Audio founder Howard Bowler traces his interest in VR back to the groundbreaking film Avatar. “When that movie came out, I saw it three times in the same week,” he says. “I was floored by the technology. It was the first time I felt like you weren’t just watching a film, but actually in the film.” As close to virtual reality as 3D films had gotten to that point, it was the blockbuster’s evolved process of motion capture and virtual cinematography that ultimately delivered its breathtaking result.

“Sonically it was extraordinary, but visually it was stunning as well,” he says. “As a result, I pressed everyone here at the studio to start buying 3D televisions, and you can see where that has gotten us — nowhere.” But a stepping stone in technology is more often a sturdy bridge, and Bowler was not discouraged. “I love my 3D TVs, and I truly believe my interest in that led me and the studio directly into VR-related projects.”

When discussing the kind of immersive technology Hobo Audio is involved with today, Bowler — like others interviewed for this series — clearly defines VR’s parallel deliverables. “First, there’s 360 video, which is passive viewing, but still puts you in the center of the action. You just don’t interact with it. The second type, more truly immersive VR, lets you interact with the virtual environment as in a video game. The third area is augmented reality,” like the Pokemon Go phenomenon of projecting virtual objects and views onto your actual, natural environment. “It’s really important to know what you’re talking about when discussing these types of VR with clients, because there are big differences.”

With each segment comes related headsets, lenses and players. “Microsoft’s HoloLens, for example, operates solely in AR space,” says Hobo producer Jon Mackey. “It’s a headset, but will project anything that is digitally generated, either on the wall or to the space in front of you. True VR separates you from all that, and really good VR separates all your senses: your sight, your hearing and even touch and feeling, like some of those 4D rides at Disney World.” Which technology will triumph? “Some think VR will take it, and others think AR will have wider mass adoption,” says Mackey. “But we think it’s too early to decide between either one.”

Boxed Out

‘Boxed Out’ is a Hobo indie project about how gentrification is affecting artists’ studios in the Gowanus section of Brooklyn.

Those kinds of end-game obstacles are beside the point, says Bowler. “The main reason why we’re interested in VR right now is that the experiences, beyond the limitations of whatever headset you watch it on, are still mind-blowing. It gives you enough of a glimpse of the future that it’s incredible. There are all kinds of obstacles it presents just because it’s new technology, but from our point of view, we’ve honed it to make it pretty seamless. We’re digging past a lot of these problem areas, so at least from the user standpoint, it seems very easy. That’s our goal. Down the road, people from medical, education and training are going to need to understand VR for very productive reasons. And we’re positioning ourselves to be there on behalf of our clients.”

Hobo’s all-in commitment to VR has brought changes to its services as well. “Because VR is an emerging technology, we’re investing in it globally,” says Bowler. “Our company is expanding into complete production, from concepting — if the client needs it — to shooting, editing and doing all of the audio post. We have the longest experience in audio post, but we find that this is just such an exciting area that we wanted to embrace it completely. We believe in it and we believe this is where the future is going to be. Everybody here is completely on board to move this forward and sees its potential.”

To ramp up on the technology, Hobo teamed up with several local students who were studying at specialty schools. “As we expanded out, we got asked to work with a few production companies, including East Coast Digital and End of Era Productions, that are doing the video side of it. We’re bundling our services with them to provide a comprehensive set of services.” Hobo is also collaborating with Hidden Content, a VR production and post production company, to provide 360 audio for premium virtual reality content. Hidden Content’s clients include Samsung, 451 Media, Giant Step, PMK-BNC, Nokia and Popsugar.

There is still plenty of magic sauce in VR audio that continues to make it a very tricky part of the immersive experience, but Bowler and his team are engineering their way through it. “We’ve been developing a mixing technique that allows you to tie the audio to the actual object,” he says. “What that does is disrupt the normal stereo mix. Say you have a public speaker in the center of the room; normally that voice would turn with you in your headphones if you turn away from him. What we’re able to do is to tie the audio of the speaker to the actual object, so when you turn your head, it will pan to the right earphone. That also allows you to use audio as a signaling device in the storyline. If you want the viewer to look in a certain direction in the environment, you can use an audio cue to do that.”
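
As a rough illustration of that idea (a minimal sketch, not Hobo’s actual tooling, and with made-up angle conventions: 0 degrees is straight ahead, positive angles are to the listener’s left), keeping a sound locked to an object in the scene comes down to re-panning it against the listener’s head rotation on every update:

import math

def object_pan_gains(source_azimuth_world, head_yaw):
    """Return (left, right) constant-power gains for a world-locked source."""
    # Azimuth of the source relative to where the head is pointing right now.
    rel = (source_azimuth_world - head_yaw + 180.0) % 360.0 - 180.0
    # Map -90..+90 degrees onto a constant-power pan; clamp sources behind the head.
    pan = max(-1.0, min(1.0, rel / 90.0))   # -1 = hard right, +1 = hard left
    theta = (pan + 1.0) * math.pi / 4.0     # 0..pi/2
    left, right = math.sin(theta), math.cos(theta)
    return left, right

# A speaker fixed at the front of the room (0 degrees): when the viewer turns
# 90 degrees to the left, the voice moves into the right earphone.
print(object_pan_gains(0.0, 90.0))          # -> (0.0, 1.0)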

Hobo engineer Diego Jimenez drove a lot of that innovation, says Mackey. “He’s a real VR aficionado and just explored a lot of the software and mixing techniques required to do audio in VR. We started out just doing a ton of tests and they all proved successful.” Jimenez was always driven by new inspiration, notes Bowler. “He’s certainly been leading our sound design efforts on a lot of fronts, from creating instruments to creating all sorts of unusual and original sounds. VR was just the natural next step for him, and for us. For example, one of the spots that we did recently was to create a music video and we had to create an otherworldly environment. And because we could use our VR mixing technology, we could also push the viewer right into the experience. It was otherworldly, but you were in that world. It’s an amazing feeling.”


‘Boxed Out’

What advice do Bowler and Mackey have for those interested in VR production and post? “360 video is to me the entry point to all other versions of immersive content,” says Bowler. “It’s the most basic, and it’s passive, like what we’re used to — television and film. But it’s also a completely undefined territory when it comes to production technique.” So what’s the way in? “You can draw on some of the older ways of doing productions,” he says, “but how do you storyboard in 360? Where does the director sit? How do you hide the crew? How do you light this stuff? All of these things have to be considered when creating 360 video. That also includes everyone on camera: all the viewer has to do is look around the virtual space to see what’s going on. You don’t want anything that takes the viewer out of that experience.”

Bowler thinks 360 video is also the perfect entry point to VR for marketers and advertisers creating branded VR content, and Hobo’s clients agree. “When we’ve suggested 360 video on certain projects and clients want to try it out, what that does is it allows the technology to breathe a little while it’s underwritten at the same time. It’s a good way to get the technology off the ground and also to let clients get their feet wet in it.”

Any studio or client contemplating VR, adds Mackey, should first find what works for them and develop an efficient workflow. “This is not really a solidified industry yet,” he says. “Nothing is standard, and everyone’s waiting to see who comes out on top and who falls by the wayside. What’s the file standard going to be? Or the export standard?  Will it be custom-made apps on (Google) YouTube or Facebook? We’ll see Facebook and Google battle it out in the near term. Facebook has recently acquired an audio company to help them produce audio in 360 for their video app and Google has the Daydream platform,” though neither platform’s codec is compatible with the other, he points out. “If you mix your audio to Facebook audio specs, you can actually have your audio come out in 360. For us, it’s been trial and error, where we’ve experimented with these different mixing techniques to see what fits and what works.”

Still, Bowler concedes, there is no true business yet in VR. “There are things happening and people getting things out there, but it’s still so early in the game. Sure, our clients are intrigued by it, but they are still a little mystified by what the return will be. I think this is just part of what happens when you deal with new technology. I still think it’s a very exciting area to be working in, and it wouldn’t surprise me if it doesn’t touch across many, many different subjects, from history to the arts to original content. Think about applications for geriatrics, with an aging population that gets less mobile but still wants to experience the Caribbean or our National Parks. The possibilities are endless.”

At some point, he admits, it may even become difficult to distinguish one’s real memory from one’s virtual memory. But is that really such a bad thing? “I’m already having this problem. I was watching an immersive video of Cuban music that was beautifully done, and by the end of the five-minute spot, I had the visceral experience that I was actually there. It’s just a very powerful way of experiencing content. Let me put it another way: 3D TVs were at the rabbit hole, and immersive video will take you down the rabbit hole into the other world.”

Source Sound
LA-based Source Sound has provided supervision and sound design on a number of Jaunt-produced cinematic VR experiences, including a virtual fashion show, a horror short and a Godzilla short film written and directed by Oscar-winning VFX artist Ian Hunter, as well as final Atmos audio mastering for the early immersive release Sir Paul McCartney Live. The studio is ready for the spatial mixes to come. That wasn’t initially the case.


Tim Gedemer

“When Jaunt first got into this space three years ago, they went to Dolby to try to figure out the audio component,” says Source Sound owner/supervising sound designer/editor Tim Gedemer. “I got a call from Dolby, who told me about what Jaunt was doing, and the first thing I said was, ‘I have no idea what you are talking about!’ Whatever it is, I thought, there’s really no budget and I was dragging my feet. But I asked them to show me exactly what they were doing. I was getting curious at that point.”

After meeting the team at Jaunt, who strapped some VR goggles on him and showed him some footage, Gedemer was hooked. “It couldn’t have been more than 30 seconds in and I was just blown away. I took off the headset and said, ‘What the hell is this?! We have to do this right now.’ They could have reached out to a lot of people, but I was thrilled that we were able to help them by seizing the moment.”

Gedemer says Source Sound’s business has expanded in multiple directions in the past few years, and VR is still a significant part of the studio’s revenue. “People are often surprised when I tell them VR counts for about 15-20 percent of our business today,” he says. “It could be a lot more, but we’d have to allocate the studios differently first.”

With a background in mixing and designing sound for film, gaming and theatrical trailers, Gedemer and his studio have a very focused definition of immersive experiences, and it all includes spatial audio. “Stereo 360 video with mono audio is not VR. For us, there’s cinematic, live-action VR, then straight-up game development that can easily migrate into a virtual reality world and, finally, VR for live broadcast.” Mass adoption of VR won’t happen, he believes, until enterprise and job training applications jump on the bandwagon with entertainment. “I think virtual reality may also be a stopover before we get to a world where augmented reality is commonplace. It makes more sense to me that we’ll just overlay all this content onto our regular days, instead of escaping from one isolated experience to the next.”

On set for the European launch of the Nokia Ozo VR camera in London, which featured live musical performances captured in 360 VR.

For now, Source Sound’s VR work is completed in dedicated studios configured with gear for that purpose. “It doesn’t mean that we can’t migrate more into other studios, and we’re certainly evolving our systems to be dual-purpose,” he says. “About a year ago we were finally able to get a grip on the kinds of hardware and software we needed to really start coagulating this workflow. It was also clear from the beginning of our foray into VR that we needed to partner with manufacturers, like Dolby and Nokia. Both of those companies’ R&D divisions are on the front lines of VR in the cinematic and live broadcast space, with Dolby’s Atmos for VR and Nokia’s Ozo camera.”

What missing tools and technology have to be developed to achieve VR audio nirvana? “We delivered a wish list to Dolby, and I think we got about a quarter of the list,” he says. “But those guys have been awesome in helping us out. Still, it seems like just about every VR project that we do, we have to invent something to get us to the end. You definitely have to have an adventurous spirit if you want to play in this space.”

The work has already influenced his approach to more traditional audio projects, he says, and he now notices the lack of spatial sound everywhere. “Everything out there is a boring rectangle of sound. It’s on my phone, on my TV, in the movie theater. I didn’t notice it as much before, but it really pops out at me now. The actual creative work of designing and mixing immersive sound has realigned the way I perceive it.”

Main Image: One of Hobo’s audio rooms, where the VR magic happens.


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

 

New version of VideoStitch software for 360 video post

VideoStitch is offering a new version of its 360 video post software, VideoStitch Studio, including support for ProRes and the H.265 codec, rig presets and feathering.

“With the new version of VideoStitch Studio we give professional 360 video content creators a great new tool that will save them a lot of valuable time during the post production process without compromising the quality of their output,” says Nicolas Burtey, CEO of VideoStitch.

VR pros are already using VideoStitch’s interactive high-resolution live preview as well as its rapid processing. With various new features, VideoStitch Studio 2.2 promises an easier and faster workflow. Support for ProRes ensures high quality and interoperability with third parties. Support for the H.265 codec widens the range of cameras that can be used with the software. Newly added rig presets allow for quick and automatic stitching with optimal calibration results. Feathering provides improved blending of the input videos. Also, audio and motion synchronization has been enhanced so that various inputs can be integrated flawlessly. Lastly, the software supports the latest Nvidia graphics cards, the GTX 10 series.

VideoStitch Studio 2.2 is available for trial download at www.video-stitch.com. The full license costs $295.

Margarita Mix’s Pat Stoltz gives us the low-down on VR audio

By Randi Altman

Margarita Mix, one of Los Angeles’ long-standing audio and video post facilities, has taken on virtual reality with the addition of 360-degree sound rooms at their facilities in Santa Monica and Hollywood. This Fotokem company now offers sound design, mix and final print masters for VR video and remixing current spots for a full-surround environment.

Workflows for VR are new and developing every day — there is no real standard. So creatives are figuring it out as they go, but they can also learn from those who were early to the party, like Margarita Mix. They recently worked on a full-length VR concert film with the band Eagles of Death Metal and director/producer Art Haynie of Big Monkey Films. The band’s 2015 tour came to an abrupt end after playing the Bataclan concert hall during last year’s terrorist attacks in Paris. The film is expected to be available online and via apps shortly.

Eagles of Death Metal film.

We reached out to Margarita Mix’s senior technical engineer, Pat Stoltz, to talk about his experience and see how the studio is tackling this growing segment of the industry.

Why was now the right time to open VR-dedicated suites?
VR/AR is an exciting emerging market and online streaming is a perfect delivery format, but VR pre-production, production and post is in its infancy. We are bringing sound design, editorial and mixing expertise to the next level based on our long history of industry-recognized work, and elevating audio for VR from a gaming platform to one suitable for the cinematic and advertising realms where VR content production is exploding.

What is the biggest difference between traditional audio post and audio post for VR?
Traditional cinematic audio has always played a very important part in support of the visuals. Sound effects, Foley, background ambiance, dialog and music clarity to set the mood have aided in pulling the viewer into the story. With VR and AR you are not just pulled into the story, you are in the story! Having the ability to accurately recreate the audio of the filmed environment through higher order ambisonics, or object-based mixing, is crucial. Audio does not only play an important part in support of the visuals, but is now a director’s tool to help draw the viewer’s gaze to what he or she wants the audience to experience. Audio for VR is a critical component of storytelling that needs to be considered early in the production process.
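
For readers curious about what “placing” a sound in a 360 mix actually involves, here is a minimal first-order ambisonic encoding sketch in Python. The FuMa-style W/X/Y/Z math below is standard, but real VR mixes typically use higher orders and dedicated plug-ins, and the signal here is just placeholder noise:

import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal at a given direction into first-order W, X, Y, Z."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))        # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)     # front/back
    y = mono * np.sin(az) * np.cos(el)     # left/right
    z = mono * np.sin(el)                  # up/down
    return np.stack([w, x, y, z])

# Place a narrator 30 degrees to the left and slightly above the listener.
sig = np.random.randn(48000)               # one second of placeholder audio at 48kHz
bformat = encode_foa(sig, azimuth_deg=30, elevation_deg=10)
print(bformat.shape)                        # (4, 48000)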

What is the question you are asked the most by clients in terms of sound for VR?
Surprisingly none! VR/AR is so new that directors and producers are just figuring things out as they go. On a traditional production set, you have audio mixers and boom operators capturing audio on set. On a VR/AR set, there is no hiding. No boom operators or audio mixers can be visible capturing high-quality audio of the performance.

Some productions have relied on the onboard camera microphones. Unfortunately, in most cases, this turns out to be completely unusable. When the client gets all the way to the audio post, there is a realization that hidden wireless mics on all the actors would have yielded a better result. In VR especially, we recommend starting the sound consultation in pre-production, so that we can offer advice and guide decisions for the best quality product.

What question should clients ask before embarking on VR?
They should ask what they want the viewer to get out of the experience. In VR, no two people are going to walk away with the same viewing experience. We recommend staying focused on the major points that they would like the viewer to walk away with. They should then expand that to answer: What do I have to do in VR to drive that point home, not only mentally, but drawing their gaze for visual support? Based on the genre of the project, considerations should be made to “physically” pull the audience in the direction to tell the story best. It could be through visual stepping stones, narration or audio pre-cues, etc.

What tools are you using on VR projects?
Because this is a nascent field, new tools are becoming available by the day, and we assess and use the best option for achieving the highest quality. To properly address this question, we ask: Where is your project going to be viewed? If the content is going to be distributed via a general Web streaming site, then it will need to be delivered in that audio file format.

There are numerous companies writing plug-ins that are quite good at delivering these formats. If you will be delivering to a site that supports Dolby VR (an object-based proprietary format), such as Jaunt, then you will need to generate the proper audio file for that platform. Facebook (higher order ambisonics) requires yet another format. We are currently working in all these formats, as well as working closely with leaders in VR sound to create and test new workflows and guide developments in this new frontier.

What’s the one thing you think everyone should know about working and viewing VR?
As we go through life, we each have our own experiences or what we choose to experience. Our frame of reference directs our focus on things that are most interesting to us. Putting on VR goggles, the individual becomes the director. The wonderful thing about VR is now you can take that individual anywhere they want to go… both in this world and out of it. Directors and producers should think about how much can be packed into a story to draw people into the endless ways they perceive their world.

Jaunt One pro VR camera available for rent from AbelCine

Thanks to an expanding rental plan, the Jaunt One cinematic VR camera is being made available through AbelCine, a provider of products and services to the production, broadcast and new media industries. AbelCine has locations in New York, Chicago and Los Angeles.

The Jaunt One 24G model camera — which features 24 global shutter sensors, is suited for low-light and fast-moving objects, and has the ability to couple with 360-degree ambisonic audio recording — will be available to rent from AbelCine. Creators will also have access to AbelCine’s training, workshops and educational tools for shooting in VR.

The nationwide availability of the Jaunt One camera, paired with access to the company’s end-to-end VR pipeline, provides filmmakers, creators and artists with the hardware and software (through Jaunt Cloud Services) solutions for shooting, producing and distributing immersive cinematic VR experiences (creators can submit high-quality VR content for distribution directly to the Jaunt VR app through the Jaunt Publishing program).

“As we continue to open the Jaunt pipeline to the expanding community of VR creators, AbelCine is a perfect partner to not only get the Jaunt One camera in the hands of filmmakers, but also to educate them on the opportunities in VR,” says Koji Gardiner, VP of hardware engineering at Jaunt. “Whether they’re a frequent experimenter of new mediums or a proven filmmaker dabbling in VR for the first time, we want to equip creators of all backgrounds with everything needed to bring their stories to life.”

Jaunt is also expanding its existing rental program with LA-based Radiant Images to increase the number of cameras available to their customers.

 

AMD’s Radeon Pro WX series graphics cards shipping this month

AMD is getting ready to ship the Radeon Pro WX Series of graphics cards, the company’s new workstation graphics solutions targeting creative pros. The Radeon Pro WX Series is AMD’s answer to the rise of realtime game engines in professional settings, the emergence of virtual reality, the popularity of new low-overhead APIs (such as DirectX 12 and Vulkan) and the rise of open-source tools and applications.

The Radeon Pro WX Series takes advantage of the Polaris architecture-based GPUs featuring fourth-generation Graphics Core Next (GCN) technology and engineered on the 14nm FinFET process. The cards have future-proof monitor support, are able to run a 5K HDR display via DisplayPort 1.4, include state-of-the-art multimedia IP with support for HEVC encoding and decoding and TrueAudio Next for VR, and feature cool and quiet operation with an emphasis on energy efficiency. Each retail Radeon Pro WX graphics card comes with 24/7, VIP customer support, a three-year limited warranty and now features a free, optional seven-year extended limited warranty upon product and customer registration.

Available November 10 for $799, the Radeon Pro WX 7100 graphics card offers 5.7 TFLOPS of single precision floating point performance in a single slot, and is designed for professional VR content creators. Equipped with 8GB GDDR5 memory and 36 compute units (2304 Stream Processors) the Radeon Pro WX 7100 is targeting high-quality visualization workloads.

Also available on November 10, for $399, the Radeon Pro WX 4100 graphics card targets CAD professionals. The Pro WX 4100 breaks the 2 TFLOPS single precision compute performance barrier. With 4GB of GDDR5 memory and 16 compute units (1024 stream processors), users can drive four 4K monitors or a single 5K monitor at 60Hz, a feature which competing low-profile CAD-focused cards in its class can’t touch.

Available November 18 for $499, the Radeon Pro WX 5100 graphics card (pictured right) offers 3.9 TFLOPS of single precision compute performance while using just 75 watts of power. The Radeon Pro WX 5100 graphics card features 8GB of GDDR5 memory and 28 compute units (1792 stream processors) suited for high-resolution realtime visualization for industries such as automotive and architecture.

In addition, AMD recently introduced Radeon Pro Software Enterprise drivers, designed to combine AMD’s next-gen graphics with the specific needs of pro enterprise users. Radeon Pro Software Enterprise drivers offer predictable software release dates, with updates issued on the fourth Thursday of each calendar quarter, and feature prioritized support with AMD working with customers, ISVs and OEMs. The drivers are certified in numerous workstation applications covering the leading professional use cases.

AMD says it’s also committed to furthering open source software for content creators. Following news that later this year AMD plans to open source its physically-based rendering engine Radeon ProRender, the company recently announced that a future release of Maxon’s Cinema 4D application for 3D modeling, animation and rendering will support Radeon ProRender. Radeon ProRender plug-ins are available today for many popular 3D content creation apps, including Autodesk 3ds Max and Maya, and as beta plug-ins for Dassault Systèmes SolidWorks and Rhino. Radeon ProRender works across Windows, MacOS and Linux and supports AMD GPUs, CPUs and APUs as well as those of other vendors.

Ronen Tanchum brought on to run The Artery’s new AR/VR division

New York City’s The Artery has named Ronen Tanchum head of its newly launched virtual reality/augmented reality division. He will serve as creative director/technical director.

Tanchum has a rich VFX background, having produced complex effects set-ups and overseen digital tools development for feature films including Deadpool, Transformers, The Amazing Spiderman, Happy Feet 2, Teenage Mutant Ninja Turtles and The Wolverine. He is also the creator of the original VR film When We Land: Young Yosef. His work on The Future of Music — a 360-degree virtual experience from director Greg Barth and Phenomena Labs, which immerses the viewer in a surrealist musical space — won the DA&D Silver Award in the “Best Branded Content” category in 2016.

“VR today stands at just the tip of the iceberg,” says Tanchum. “Before VR came along, we were just observers and controlled our worlds through a mouse and a keyboard. Through the VR medium, humans become active participants in the virtual world — we get to step into our own imaginations with a direct link to our brains for the first time, experiencing the first impressions of a virtual world. As creators, VR offers us a very powerful tool by which to present a unique new experience.”

Tanchum says the first thing he asks a potential new VR client is, “Why VR? What is the role of VR in your story?” “Coming from our long experience in the CG world working on highly demanding creative visual projects, we at The Artery have evolved our collective knowledge and developed a strong pipeline into this new VR platform,” he explains, adding that The Artery’s new division is currently gearing up for a big VR project for a major brand. “We are using it to its fullest to tell stories. We inform our clients that VR shouldn’t be created just because it’s ‘cool.’ The new VR platform should be used to play an integral part of the storyline itself — a well-crafted VR experience should embellish and complement the story.”

 

Creating VR audio workflows for ‘Mars 2030’ and beyond

Source Sound is collaborating with others and capturing 360 sound for VR environments.

By Jennifer Walden

Everyone wants it, but not everyone can make it. No, I’m not talking about money. I’m talking about virtual reality content.

Let’s say you want to shoot a short VR film. You’ve got a solid script, a cast of known actors, you’ve got a 360-degree camera and a pretty good idea of how to use it, but what about the sound? The camera has a built-in mic, but will that be enough coverage? Should the cast be mic’d as they would be for a traditional production? How will the production sound be handled in post?

Tim Gedemer, owner/sound supervisor at Source Sound in Woodland Hills, California, can help answer these questions. “In VR, we are audio directors,” he says. “Our services include advising clients at the script level on how they should be shooting their visuals to be optimal for sound.”

Tim Gedemer

As audio directors, Source Sound walks their clients through every step of the process, from production to distribution. Starting with the recording on set, they manage all of the technical aspects of sound file management through production, and then guide their clients through the post sound process, both creatively and technically.

They recommend what technology should be used, how clients should be using it and what deals they need to make to sort out their distribution. “It really is a point-to-point service,” says Gedemer. “We decided early on that we needed to influence the entire process, so that is what we do.”

Two years ago, Dolby Labs referred Jaunt Studio to Source Sound for their first VR film gig. Gedemer explains that because of Source Sound’s experience with games and feature films, Dolby felt they would be a good match to handle Jaunt’s creative sound needs while Dolby worked with Jaunt on the technical challenges.

Jaunt’s Kaiju Fury! premiered at the 2015 Sundance Film Festival. The experience puts the viewer in the middle of an epic Godzilla-like monster battle. “They realized their film needed cinematic sound, so Dolby called us up and asked if we’d like to get involved. We said, ‘We’re really busy with projects, but show us the tech and maybe we’ll help.’ We were uninterested at first, figuring it was going to be gimmicky, but I went to San Francisco and I looked at their first test, and I was just shocked. I had never seen anything like that before in my life. I realized, in that first moment of putting on those goggles, that we needed to do this.”


Paul McCartney on the “Out There” tour 2014.

Kaiju Fury! was just the start. Source Sound completed three more VR projects for Jaunt, all within a week. There was the horror VR short film called Black Mass, a battle sequence called The Mission and the Atmos VR mastering of Paul McCartney’s Live and Let Die in concert.

Gedemer admits, “It was just insane. No one had ever done anything like this and no one knew how to do it. We just said, ‘Okay, we’ll just stay up for a week, figure all of that out and get it done.’”

Adjusting The Workflow
At first, their Pro Tools-based post sound workflow was similar to a traditional production, says Gedemer, “because we didn’t know what we didn’t know. It was only when we got into creating the final mix that we realized we didn’t have the tools to do this.”

Specifically, how could they experience the full immersion of the 360-degree video and concurrently make adjustments to the mix? On that first project, there was no way to slave the VR picture playing back through the Oculus headgear to the sound playing back via Pro Tools. “We had to manually synchronize,” explains Gedemer. “Literally, I would watch the equirectangular video that we were working with in Pro Tools, and at the precise moment I would just press play on the laptop, playing back the VR video through the Oculus HMD to try and synchronize it that way. I admit I got pretty good at that, but it’s not really the way you want to be working!”

Since that time, Dolby has implemented timecode synchronization and a video player that will playback the VR video through the Oculus headset. Now the Source Sound team can pick up the Oculus and it will be synchronized to the Pro Tools session.

Working Together For VR
Over the last few years, Source Sound has been collaborating with tech companies like Dolby, Avid, Oculus, Google, YouTube and Nokia on developing audio-related VR tools, workflow solutions and spec standards that will eventually become available to the wider audio post industry.

“We have this holistic approach to how we want to work, both in virtual and augmented reality audio,” says Gedemer. “We’re working with many different companies, beta testing technology and advising on what they should be thinking about regarding VR sound — with a keen eye toward new product development.”

Kaiju Fury

Kaiju Fury!

Since Kaiju Fury, Source Sound has continued to create VR experiences with Jaunt. They have worked with other VR content creators, including the Emblematic Group (founded by “the godmother of VR,” Nonny de la Peña), 30 Ninjas (founded by director Doug Liman, The Bourne Identity and Edge of Tomorrow), Fusion Media, Mirada, Disney, Google, YouTube and many others.

Mars 2030
Currently, Source Sound is working with Fusion Media on a project with NASA called Mars 2030, which takes a player to Mars as an astronaut and allows him/her to experience what life might be like while living in a Mars habitat. NASA feels that human exploration of Mars may be possible in the year 2030, so why not let people see and feel what it’s like.

The project has given Source Sound unprecedented access to the NASA facilities and engineers. One directive for Mars 2030 is to be as accurate as possible, with information on Mars coming directly from NASA’s Mars missions. For example, NASA collected information about the surface of Mars, such as the layout of all the rocks and the type of sand covering the surface. All of that data was loaded into the Unreal Engine, so when a player steps out of the habitat in the Mars 2030 experience and walks around, that surface is going to be the exact surface that is on Mars. “It’s not a facsimile,” says Gedemer. “That rock is actually there on Mars. So in order for us to be accurate from an audio perspective, there’s a lot that we have to do.”

In the experience the player gets to drive the Mars Rover. At NASA in Houston, there are multiple iterations of the rover that are being developed for this mission. They also have a special area that is set up like the Mars surface with a few craters and rocks.

For audio capture, Gedemer and sound effects recordist John Fasal headed to Houston with Sound Devices recorders and a slew of mic options. While the rover is too slow to do burnouts and donuts, Gedemer and Fasal were able to direct a certified astronaut driver and record the rover from every relevant angle. They captured sounds and ambiences from the various habitats on site. “There is a new prototype space suit that is designed for operation on Mars, and as such we will need to capture all the relevant sound associated with it,” says Gedemer. “We’ll be looking into helmet shape and size, communication systems, life support air flow, etc. when recreating this in the Unreal Engine.”

Another question the sound team needs to address is, “What does it sound like out on the surface of Mars?” It has an atmosphere, but the tricky thing is that a human can never actually walk around on the surface of Mars without wearing a suit. Sounds traveling through the Mars atmosphere will sound different than sounds traveling through Earth’s atmosphere, and additional special considerations need to be made for how the suit will impact sound getting to the astronaut’s ears.

“Only certain sounds and/or frequencies will penetrate the suit, and if it is loud enough to penetrate the suit, what is it going to sound like to the astronaut?” asks Gedemer. “So we are trying to figure out some of these technical things along the way. We hope to present a paper on this at the upcoming AES Conference on Audio for Virtual and Augmented Reality.”
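
To give a sense of the physics in play, here is a back-of-envelope sketch (approximate textbook figures, not NASA data or the project’s actual model) of why sound travels more slowly in Mars’ thin, cold, mostly-CO2 atmosphere than in air on Earth:

import math

def speed_of_sound(gamma, molar_mass_kg, temp_k, R=8.314):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

earth = speed_of_sound(gamma=1.40, molar_mass_kg=0.029, temp_k=288)  # roughly 340 m/s
mars = speed_of_sound(gamma=1.29, molar_mass_kg=0.044, temp_k=210)   # roughly 230 m/s
print(f"Earth ~{earth:.0f} m/s, Mars ~{mars:.0f} m/s")

The far lower surface pressure also means far less acoustic energy reaches a listener in the first place, which is part of why only certain sounds would ever make it through the suit at all.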

Going Live
Another interesting project at Source Sound is the work they’re doing with Nokia to develop specialized audio technology for live broadcasts in VR. “We are currently the sole creative provider of spatial audio for Nokia’s VR broadcasting initiative,” reveals Gedemer. Source Sound has been embedded with the Nokia Ozo Live team at events where they have been demonstrating their technology. They were part of the official Ozo Camera Launches in Los Angeles and London. They captured and spatialized a Los Angeles Lakers basketball game at the Staples Center. And once again they teamed up with Nokia at their NAB event this past spring.

“We’ve been working with them very closely on the technology that they are developing for live capture and distribution of stereoscopic visual and spatial audio in VR. I can’t elaborate on any details, but we have some very cool things going on there.”

However, Gedemer does break down one of the different requirements of live VR broadcast versus a cinematic VR experience — an example being the multi-episode VR series called Invisible, which Source Sound and Doug Liman of 30 Ninjas are currently collaborating on.

For a live broadcast you want an accurate representation of the event, but for a cinematic experience the opposite is true. Accuracy is not the objective. A cinematic experience needs a highly curated soundtrack in order to tell the story.

Gedemer elaborates, “The basic premise is that, for VR broadcasts you need to have an accurate audio representation of camera location. There is the matter of proper perspective to attend to. If you have a multi-camera shoot, every time you change camera angles to the viewer, you change perspective, and the sound needs to follow. Unlike a traditional live environment, which has a stereo or 5.1 mix that stays the same no matter the camera angle, our opinion is that approach is not adequate for true VR. We think Nokia is on the right track, and we are helping them perfect the finer points. To us that is truly exciting.”
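
A minimal sketch of what “the sound needs to follow” can mean in practice (illustrative only; conventions differ, and this is not a description of Nokia’s or Source Sound’s actual tooling): when a live broadcast cuts to a camera facing a different direction, the captured first-order ambisonic field can be rotated to match the new camera’s yaw before it is rendered to the viewer.

import numpy as np

def rotate_foa_yaw(bformat, yaw_deg):
    """Rotate W/X/Y/Z ambisonic channels around the vertical axis by yaw_deg."""
    w, x, y, z = bformat
    a = np.radians(yaw_deg)
    x2 = x * np.cos(a) + y * np.sin(a)
    y2 = -x * np.sin(a) + y * np.cos(a)
    return np.stack([w, x2, y2, z])        # W and Z are unchanged by a yaw rotation

# Cut from a camera facing the stage to one facing the crowd (180 degrees apart):
# the same captured sound field, re-oriented for the new viewpoint.
field = np.random.randn(4, 48000)          # placeholder one-second B-format signal
print(rotate_foa_yaw(field, 180.0).shape)  # (4, 48000)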

Jennifer Walden is a New Jersey-based writer and audio engineer.

Testronic opens second VR test center

Testronic has opened a dedicated virtual reality test center in its Burbank headquarters. The VR test center is the company’s second, as they also launched one in their Warsaw, Poland, location earlier this year, further expanding their full-service QA testing services.

“Consumer VR is in its infancy, and nobody knows what it will become years from now,” said Jason Gish (pictured right), Testronic’s senior VP for film and television. “As VR evolves, consumer expectations will grow, requiring more exploratory and inventive QC processes. Testing VR content has unique requirements, and the integrity of VR content is crucial to its functionality. It is critical to have an understanding of aspects like head tracking and other core VR functions in order to develop a thorough test approach. Issues in VR can not only take you out of the experience, but can cause simulator sickness. Beyond testing for the usual bugs and functionality imperfections, VR is deeply rooted in user experience, and Testronic’s test approach reflects that understanding.”

Testronic was also an early testing pioneer of user experience design (UX), developing one of the first UX labs in the US.

Content creators can now publish to Jaunt VR app

Further expanding its cinematic VR ecosystem, Jaunt Inc. has launched Jaunt Publishing, a program that allows professional VR creators to publish their VR content directly to the Jaunt VR app.

Through an online portal on the company’s website, Jaunt Publishing allows creators to submit their work for consideration and publishing across the Jaunt platform. Once content is approved by an internal review board, creators will have access to Jaunt Cloud Services (JCS), its cloud-based VR production software suite, and the publishing tools within, including transcoding, “deep-links,” support for premium spatial audio formats like Dolby Atmos and processing and preparation for distribution on all VR platforms.

Jaunt Publishing is aimed at solving a common challenge for creators in the VR industry: multichannel distribution. Jaunt is a distribution hub that is platform agnostic and therefore capable of delivering cinematic VR content to every VR platform and device.

“Working with Jaunt was simple and intuitive; I didn’t have to touch anything. I’m a very hands-on person, so it was a bit awkward at first, but when I started seeing the results it was clear that the encoding was solid,” says Quba Michalski, director of The Pull, one of 16 independently-produced pieces released on Jaunt so far.

Some highlights of Jaunt Publishing:
● There is no need to encode each file format manually. Once a source file is uploaded to JCS, it’s automatically transcoded in the cloud for all devices and platforms.
● Jaunt is platform agnostic and supports the widest array of VR headsets and platforms available. Jaunt supports iOS, Android, Gear VR, Oculus Rift, HTC Vive, PlayStation VR and desktop 360.
● Every video on Jaunt has a unique deep link — a single smart URL that brings viewers directly to a creator’s VR experience on the device they’re using. This eliminates any friction between someone learning about a creative’s content and experiencing it.
● When it comes to immersive VR content, sound quality is more than half of the experience. The Jaunt platform supports Dolby Atmos technology, which allows creators to incorporate precise spatial audio into the content they create.

For full guidelines on submitting through Jaunt Publishing, click here: www.jauntvr.com/creators/submissions/guidelines/

Archion’s new Omni Hybrid storage targets VR, VFX, animation

Archion Technologies has introduced the EditStor Omni Hybrid, a collaborative storage solution for virtual reality, visual effects, animation, motion graphics and post workflows.

In terms of performance, an Omni Hybrid with one expansion chassis offers 8000MB/second for 4K and other streaming demands, and over 600,000 IOPS for rendering and motion graphics. The product has been certified for Adobe After Effects, Autodesk’s Maya/Flame/Lustre, The Foundry’s Nuke and Modo, Assimilate Scratch and Blackmagic’s Resolve and Fusion. The Omni Hybrid is scalable up to 1.5 petabytes and can be expanded without shutdown.

“We have Omni Hybrid in post production facilities that range from high-end TV and film to massive reality productions,” reports Archion CTO James Tucci. “They are all doing graphics and editorial work on one storage system.”

Silver Sound opens audio-focused virtual reality division

By Randi Altman

New York City’s Silver Sound has been specializing in audio post and production recording since 2003, but that’s not all they are. Through the years, along with some Emmy wins, they have added services that include animation and color grading.

When they see something that interests them, they investigate and decide whether or not to dive in. Well, virtual reality interests them, and they recently dove in by opening a VR division specializing in audio for 360 video, called SilVR. Recent clients include Google, 8112 Studios/National Geographic and AT&T.


Stories From The Network: 360° Race Car Experience for AT&T

I reached out to Silver Sound sound editor/re-recording mixer Claudio Santos to find out why now was the time to invest in VR.

Why did you open a VR division? Is it an audio-for-VR entity or are you guys shooting VR as well?
The truth is we are all a bunch of curious tinkerers. We just love to try different things and to be part of different projects. So as soon as 360 videos started appearing in different platforms, we found ourselves individually researching and testing how sound could be used in the medium. It really all comes down to being passionate about sound and wanting to be part of this exciting moment in which the standards and rules are yet to be discovered.

We primarily work with sound recording and post production audio for VR projects, but we can also produce VR projects that are brought to us by creators. We have been making small in-house shoots, so we are familiar with the logistics and technologies involved in a VR production and are more than happy to assist our clients with the knowledge we have gained.

What types of VR projects do you expect to be working on?
Right now we want to work on every kind of project. The industry as a whole is still learning what kind of content works best in VR and every project is a chance to try a new facet of the technology. With time we imagine producers and post production houses will naturally specialize in whichever genre fits them best, but for us at least this is something we are not hurrying to do.

What tools do you call on?
For recording we make use of a variety of ambisonic microphones that allow us to record true 360 sound on location. We set up our rig wirelessly so it can be untethered from cables, which are a big problem in a VR shoot where you can see in every direction. Besides the ambisonics we also record every character ISO with wireless lavs so that we have as much control as possible over the dialogue during post production.

Robin Shore uses a phone to control the 360 video on screen; the tracker on his head simulates the effect of moving around without a full headset.

For editing and mixing we do most of our work in Reaper, a DAW that has very flexible channel routing and non-standard multichannel processing. This allows us to comfortably work with ambisonics as well as mix formats and source material with different channel layouts.

To design and mix our sounds we use a variety of specialized plug-ins that give us control over the positioning, focus and movement of sources in the 360 sound field. Reverberation is also extremely important for believable spatialization, and traditional fixed-channel reverbs are usually unconvincing once you are in a 360 field. Because of that we usually make use of convolution reverbs using ambisonic impulse responses.
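
For readers curious about the mechanics, below is a minimal sketch of that idea in Python, assuming a first-order B-format (four-channel) stem and a matching four-channel ambisonic impulse response. The file names are hypothetical, and this is a simplified channel-by-channel approach rather than a description of Silver Sound’s actual plug-in chain; a full ambisonic reverb convolves a matrix of responses.

```python
# Minimal sketch: convolving a first-order ambisonic (B-format) stem with a
# matching 4-channel ambisonic impulse response. File names are hypothetical.
import numpy as np
import soundfile as sf                      # pip install soundfile
from scipy.signal import fftconvolve

dry, sr = sf.read("dialogue_bformat.wav")   # shape: (samples, 4) -> W, X, Y, Z
ir, sr_ir = sf.read("room_ir_bformat.wav")  # 4-channel ambisonic impulse response
assert sr == sr_ir and dry.shape[1] == ir.shape[1] == 4

# Convolve each ambisonic channel with the corresponding IR channel so the
# reverb keeps its directional character instead of collapsing to mono.
wet = np.stack(
    [fftconvolve(dry[:, ch], ir[:, ch]) for ch in range(4)], axis=1
)
wet /= np.max(np.abs(wet))                  # crude normalization to avoid clipping

sf.write("dialogue_bformat_wet.wav", wet, sr)
```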

When it comes to monitoring the video, especially with multiple clients in the room, everyone wears headphones. At first this seemed very weird, but it’s important since that’s the best way to reproduce what the end viewer will be experiencing. We have also devised a way for clients to use a separate controller to move the view around in the video during playback and editing. This gives a lot more freedom and makes the reviewing process much quicker and more dynamic.

How different is working in VR from traditional work? Do you wear different hats for different jobs?
That depends. While technically it is very different, with a whole different set of tools, technologies and limitations, the craft of designing good sound that aids in the storytelling and that immerses the audience in the experience is not very different from traditional media.

The goal is to affect the viewer emotionally and to transmit pieces of the story without making the craft itself apparent, but the approaches necessary to achieve this in each medium are very different because the final product is experienced differently. When watching a flat screen, you don’t need any cues to know where the next piece of essential action is going to happen because it is all contained by a frame that is completely in your field of view. That is absolutely not true in VR.

The user can be looking in any direction at any given time, so the sound often fills in the role of guiding the viewer to the next area of interest, and this reflects on how we manipulate the sounds in the mix. There is also a bigger expectation that sounds will be more realistic in a VR environment because the viewer is immersed in an experience that is trying to fool them into believing it is actually real. Because of that, many exaggerations and shorthands that are appropriate in traditional media become too apparent in VR projects.

So instead of saying we need to put on different hats when tackling traditional media or VR, I would say we just need a bigger hat that carries all we know about sound, traditional and VR, because neither exists in isolation anymore.

I am assuming that getting involved in VR projects as early as possible is hugely helpful to the audio. Can you explain?
VR shoots are still in their infancy. There’s a whole new set of rules, standards and a whole lot of experimentation that we are all still figuring out as an industry. Often a particular VR filming challenge is not only new to the crew but completely new in the sense that it might not have ever been done before.

In order to figure out the best creative and technical approaches to all these different situations it is extremely helpful to have someone on the team thinking about sound, otherwise it risks being forgotten and then the project is doomed to a quick fix in post, which might not explore the full potential of the medium.

This doesn’t even take into consideration that the tools still often need to be adapted and tailored to fit the needs of a particular project, simply because new use cases are being discovered daily. This tailoring and exploration takes time and knowledge, so only by bringing a sound team onto the project early can they fully prepare to record and mix the sound without cutting corners.

Another important point to take into consideration is that the delivery requirements are still largely dependent on the specific platform selected for distribution. Technical standards are only now starting to be created and every project’s workflows must be adapted slightly to match these specific delivery requirements. It is much easier and more effective to plan the whole workflow with these specific requirements in mind than it is to change formats when the project is already in an advanced state.

What do clients need to know about VR that they might take for granted?
If we had to choose one thing to mention it would be that placing and localizing sounds in post takes a lot of time and care because each sound needs to be placed individually. It is easy to forget how much longer this takes than the traditional stereo or even surround panning because every single diegetic sound added needs to be panned. The difference might be negligible when dealing with a few sound effects, but depending on the action and the number of moving elements in the experience, it can add up very quickly.
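
To give a sense of why each placement is its own step, here is a minimal Python sketch of first-order ambisonic panning (ACN channel order, SN3D normalization). The angles and the stand-in signal are hypothetical, and this illustrates the general technique rather than Silver Sound’s toolchain.

```python
# Minimal sketch: encoding (panning) a mono sound into first-order ambisonics
# (ACN channel order, SN3D normalization). The source angles are hypothetical.
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Return a (samples, 4) array ordered W, Y, Z, X (ACN)."""
    az = np.radians(azimuth_deg)    # 0 deg = front, positive = counter-clockwise
    el = np.radians(elevation_deg)
    w = mono * 1.0
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    x = mono * np.cos(az) * np.cos(el)
    return np.stack([w, y, z, x], axis=-1)

# Example: a footstep placed 90 degrees to the viewer's left, slightly below.
footstep = np.random.randn(48000) * 0.1     # stand-in for a real mono recording
bed = encode_foa(footstep, azimuth_deg=90, elevation_deg=-10)
```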

Working with sound for VR is still largely an area of experimentation and discovery, and we like to collaborate with our clients to ensure that we all push the limits of the medium. We are very open about our techniques and are always happy to explain what we do to our clients because we believe that communication is the best way to ensure all elements of a project work together to deliver a memorable experience.

Our main image is from Red Velvet for production company Station Film.

Virgil Kastrup talks about color grading ‘Ewa’ VR project

Post pro Virgil Kastrup was the colorist for Ewa, the latest venture into the world of virtual reality and 3D from Denmark’s Makropol. It made its debut at the 2016 Cannes Film Festival. According to Kastrup, it was a whirlwind project — an eight-minute pilot, with one day for him to do the color grading, versioning and finishing.

The concept is fairly simple. The main character is Ewa, and you become her, as she becomes herself. Through the eyes of Ewa, you will access a world you have never seen before. You will be born as Ewa, you will grow up as Ewa, and, as Ewa, you will fight to free yourself. “Out Of Body” is a crucial chapter of Ewa’s life.

Virgil Kastrup

In a recent chat, the Copenhagen-based Kastrup talked about the challenges of posting the Ewa VR and 3D project, and how he handled the color grading and finishing within the one-day deadline.

What was your main challenge?
The time constraints! We had one day to try out looks, color grade in VR and 3D, create versions for reviews and then finish in VR and 3D. We then had to ensure the look and final result met the vision and satisfaction of Makropol’s director, Johan Jensen.

How was the pilot shot?
Four GoPro cameras (two stereo pairs) were mounted on a helmet-rig the actress wore on her head. This created the immersive view into Ewa’s life, so that the viewer is “entering” the scenes as Ewa. When the DP removed the helmet from her head, an out-of-body experience was created. The viewer is seeing the world through the eyes of a young girl.

What material were you working with?
Makropol sent me the final stitched imagery, ProRes at 4K x 4K pixels. Because the content was one long, eight-minute shot, there was no need to edit or conform.

What VR challenges did you face?
In viewing the Ewa pilot, the viewer is immersed in the VR experience without it being 360. Achieving a 360 aspect was a little tricky because the imagery was limited to 180 degrees, so I had to find a way to blank the rear part of the sphere on which the image was projected. I tested and tried out different solutions, then went with making a back gradient so the image fades away from the viewer.
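
As an illustration of the kind of mask Kastrup describes (not his actual Scratch setup), here is a minimal Python sketch that fades an equirectangular frame to black as the longitude approaches the rear of the sphere. The frame size and fade angles are hypothetical.

```python
# Minimal sketch: fading an equirectangular frame to black toward the rear of
# the sphere, so a 180-degree capture doesn't end in a hard edge. Frame size
# and fade angles are hypothetical.
import numpy as np

def rear_fade(frame, fade_start_deg=80, black_at_deg=150):
    """frame: (height, width, 3) array; the width spans -180..+180 deg longitude."""
    h, w, _ = frame.shape
    lon = np.linspace(-180, 180, w)                   # longitude per column
    angle_from_front = np.abs(lon)                    # 0 = front, 180 = rear
    # 1.0 in front, ramping down to 0.0 well before the true rear of the sphere.
    gain = np.clip(
        (black_at_deg - angle_from_front) / (black_at_deg - fade_start_deg), 0.0, 1.0
    )
    return frame * gain[np.newaxis, :, np.newaxis]

faded = rear_fade(np.ones((1024, 2048, 3), dtype=np.float32))
```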

Ewa

What tool suite did you use for color grading and finishing?
I had a beta copy of Assimilate’s Scratch VR Suite. I’ve been using Scratch for 2D and other projects for years, so the learning curve for the VR Suite was virtually zero. The VR Suite offers the same set of tools and workflow as Scratch, but they’re geared to work in the VR/360 space. It’s intuitive and very easy to use, which gave me a confidence boost for testing looks and achieving a quality result.

How did you handle the VR?
The biggest challenge was that the look had to work everywhere within the VR scene. For example, if you’re looking from the dining room into the living room, and the light was different, it had to be manipulated without affecting either room. The Scratch 3D tools simplify the 3D process — with a click of a button, you can set up the 3D/stereo functions.

Did you use a headset?
I did the color grading and finishing all on the monitor view. For my reviews and the client sessions, we used the Oculus Rift. Our goal was to ensure the content was viewed as a completely immersive experience, rather than just watching another video.

What impact did this project have on your view about VR?
A project like this — an eight-minute test pilot — doesn’t warrant the use of expensive professional-grade cameras, yet a filmmaker can still achieve a quality VR result on a restricted budget. By using professional color grading and finishing tools, many issues can be overcome, such as compression, lighting, hot spots and more. The colorist has the ability to add his/her creative expertise to craft the look and feel, as well as the subtle effects that go into producing a quality video or feature. This combination of expertise and the right tools opens the world of VR to a wide range of creative professionals in numerous markets.

Experiencing autism in VR via Happy Finish

While people with autism might “appear” to be like the rest of us, the way they experience the world is decidedly different. Imagine sensory overload times 10. In an effort to help the public understand autism, the UK’s National Autistic Society and agency Don’t Panic have launched a campaign called “Too Much Information” (#autismTMI) that is set to challenge myths, misconceptions and stereotypes relating to this neurobiological disorder.

In order to help tell that story, the NAS called on London’s Happy Finish to help create a 360-degree VR film that puts viewers into the shoes of a child with autism during a visit to a store. A 2D film had previously been developed based on the experience of a 10-year-old autistic boy named Alexander. Happy Finish provided visual effects for that version, which, since March of last year, has earned over 54 million views and more than 850K shares. The new 360-degree VR experience takes the viewer into Alexander’s world in a more immersive way.

After interviewing several autistic adults as part of the research, Happy Finish worked on this idea, which aims to trigger viewers’ empathy and understanding. Working with Don’t Panic and The National Autistic Society, they share Alexander’s experience in an immersive and moving way.

The piece was shot by DP Michael Hornbogen using a six-camera GoPro array in a 3D-printed housing. For stitching, Happy Finish called on Kolor’s Autopano, The Foundry’s Nuke and Adobe After Effects. Editing was done in Adobe Premiere, and color grading was via Blackmagic’s Resolve.

“It was a long process of compositing using various tools,” explains Jamie Mossahebi, who directed the VR shoot at Happy Finish. “We created 18 versions and amended and tweaked based on initial feedback from autistic adults.”

He says that most of the studio’s VR experiences aim to create something comfortable and pleasant, but this one needed to be uncomfortable while remaining engaging. “The main challenge was to be as realistic as possible. For that, we focused a lot on the sound design, as well as testing a wide variety of visual effects and selecting the key ones that contributed to making it as immersive and as close to a sensory overload as possible,” explains Mossahebi.

“This is Don’t Panic’s first experience of creating a virtual reality campaign,” says Richard Beer, creative director of Don’t Panic. “The process of creating a virtual reality film has a whole different set of rules: it’s about creating a place for people to visit and a person for them to become, rather than simply telling a story. This interactivity of virtual reality gives it a unique sense of ‘presence’ — it has the power to take us somewhere else in time and space, to help us feel, just for a while, what it’s like to be someone else – which is why it was the perfect tool to communicate exactly what a sensory overload feels like for someone with autism for the NAS.”

Sponsored by Tangle Teaser and Intu, the film will tour shopping centers around the UK and will also be available through the Autism TMI Virtual Reality Experience app.

AR/VR audio conference taking place with AES show in fall


The AES is tackling the augmented reality and virtual reality creative process, applications workflow and product development for the first time with a dedicated conference that will take place on 9/30-10/1 during the 141st AES Convention at the LA Convention Center’s West Hall.

The two-day program of technical papers, workshops, tutorials and a manufacturers’ expo will highlight the creative and technical challenges of providing immersive spatial audio to accompany virtual reality and augmented reality media.

The conference will attract content developers, researchers, manufacturers, consultants and students, in addition to audio engineers seeking to expand their knowledge about sound production for virtual and augmented reality. The companion expo will feature displays from leading-edge manufacturers and service providers looking to secure industry metrics for this emerging field.

“Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” shares conference co-chair Andres Mayo. “This conference will demonstrate that VR and AR productions, using a variety of playback devices, require audio that follows the motions of the subject, and produces a realistic immersive experience. Our program will spotlight the work of leading proponents in this exciting field of endeavor, and how realistic spatial audio can be produced from existing game console and DSP engines.”

Proposed topics include object-based audio mixing for VR/AR, immersive audio in VR/AR broadcast, live VR audio production, developing audio standards for VR/AR, cross platform audio considerations in VR and streaming immersive audio content.

Costs range from $195 for a one-day pass for AES members ($295 for a two-day pass) and $125 for accredited students to $280/$435 for non-members. Early-bird discounts are also available.

Conference registrants can also attend the 141st AES Convention’s companion exhibition, select educational sessions and special events free of charge with an exhibits-plus badge.

Assimilate Scratch 8.5, Scratch VR Suite available for open beta

Assimilate is offering an open-beta version of Scratch 8.5, its realtime post system and workflow for dailies, conform, grading, compositing and finishing. Also in open beta is the Scratch VR Suite. Both open-beta versions give users the chance to work with the full suite of Scratch 8.5 and Scratch VR tools while evaluating and submitting requests and recommendations for additional features or updates.

Scratch Web for cloud-based, realtime review and collaboration, and Scratch Play for immediate review and playback, are also included in the ecosystem updates. Current users of Scratch 8.4 can download the Scratch 8.5 open beta. Those who are new to Scratch can access the Scratch 8.5 open-beta version for a 30-day free trial. The Scratch VR open-beta version can also be accessed for a 30-day free trial.

“Thanks to open-beta programs, we get a lot of feedback from current Scratch users about the features and functions that will simplify their workflows, increase their productivity and enhance their storytelling,” explains Assimilate CEO Jeff Edson. “We have two significant Scratch releases a year for the open-beta program and then provide several incremental builds throughout the year. In this way Scratch is continually evolving to offer bleeding-edge functionality, as well as support for the latest formats; for example, Scratch was the first to support Arri’s mini-camera MXF format.”

New to Scratch 8.5
• Easy validation of availability of physical media and file references throughout a project, timeline and render
• Fast access to all external resources (media / LUT / CTL / etc.) through bookmarks
• Full set of ACES transforms as published by the Academy
• Publishing media directly to Facebook
• Option to launch Scratch from a command-line with a series of xml-script commands, which allows closer integration with post-infrastructure and third-party software and scripts

The new Scratch VR Suite includes all the features and functions of Scratch 8.5, Scratch Play and Scratch Web, plus substantial features, functions and enhancements that are specific to working in a 360 media environment.

Dell embraces VR via Precision Towers

It’s going to be hard to walk the floor at NAB this year without being invited to demo some sort of virtual reality experience. More and more companies are diving in and offering technology that optimizes the creation and viewing of VR content. Dell is one of the latest to jump in.

Dell has been working closely on this topic with its hardware and software partners, and is formalizing its commitment to the future of VR by offering solutions that are optimized for VR consumption and creation alongside the mainstream professional ISV apps used by industry pros.

Dell has introduced new recommended minimum system hardware configurations to support an optimal VR experience for pro users with HTC Vive or Oculus Rift VR solutions. The VR-ready solutions must meet three criteria, whether users are consuming or creating VR content: minimum CPU, memory and graphics requirements to support VR viewing experiences; graphics drivers that are qualified to work with these solutions; and passing performance tests conducted by the company using test criteria based on HMD (head-mounted display) suppliers, ISVs or third-party benchmarks.

Dell has also made upgrades to its Precision Tower line, including increased performance, graphics and memory for VR content creation. The refreshed Dell Precision Tower 5810, 7810 and 7910 workstations and the Rack 7910 have been upgraded with new Intel Broadwell EP processors that offer more cores and more performance for multi-threaded applications that support professional modeling, analysis and calculations.

Additional upgrades include the latest pro graphics technology from AMD and Nvidia, Dell Precision Ultra-Speed PCIe drives with up to 4x faster performance than traditional SATA SSD storage, and up to 1TB of DDR4 memory running at 2400MHz.

Reel FX beefs up VR division with GM Steve Nix

Dallas/Santa Monica’s Reel FX has added Steve Nix as general manager of its VR division. Nix will oversee all aspects of development, strategy and technology for the division, which has been working in the VR and AR content space. He joins David Bates, who was recently named GM of the studio’s commercial division. Nix has spent nearly two decades embracing new technology.

Prior to joining Reel FX, Nix was CEO/co-founder of Yvolver, a mobile gaming technology developer, acquired by Opera Mediaworks in 2015. His extensive experience in the gaming industry spans digital distribution and game development for companies such as id Software, Ritual Entertainment and GameStop.

“Reel FX has shown foresight establishing itself as an early leader in the rapidly emerging VR content space,” says Nix. “This foundation, combined with the studio’s resources, will allow us to aggressively expand our VR content offering — combining storytelling and visual and technical expertise in a medium that I firmly believe will change the human experience.”

Nix graduated summa cum laude from Texas Tech University with a BBA in finance, later earning his MBA at SMU, where he was an Armentrout Scholar. He got his start in the gaming industry as CEO of independent game developer Ritual Entertainment, an early pioneer in action games and digitally distributed PC games. Ritual developed and co-developed many titles, including Counter-Strike (Xbox), Counter-Strike: Condition Zero, Star Trek: Elite Force II, Delta Force: Black Hawk Down, Team Sabre and 007: Agent Under Fire.

He moved on to join id Software as director of business development prior to its acquisition by ZeniMax Media, transitioning to the position of director of digital platforms at the company. There he led digital distribution and mobile game development for some of the biggest brands in gaming, including Doom, Rage, Quake and Wolfenstein.

In 2011, Nix joined GameStop as GM of digital distribution.

Quick Chat: East Coast Digital’s Stina Hamlin on VR ‘Cardboard City’

New York City-based East Coast Digital believes in VR and has set up its studio and staff to handle virtual reality projects. In fact, the company recently provided editorial, 3D animation, color correction and audio post on the 60-second VR short Cardboard City, co-winner of the Samsung Gear Indie VR Filmmaker Contest. The short premiered at the 2016 Sundance Film Festival.

Cardboard City, directed by Double Eye Productions’ Kiira Benzing, takes viewers inside the studio of Brooklyn-based stop-motion animator Danielle Ash, who has built a cardboard world inside her studio. There is a pickle vendor, a bakery and a neighborhood bar, all of which can be seen while riding a cardboard roller coaster.

East Coast Digital‘s Stina Hamlin was post producer on the project. We reached out to her to find out more about this project and how the VR workflow differs from the traditional production and post workflow.

Stina Hamlin

How did this project come about?
The project came about organically after being introduced to director Kiira Benzing by narrative designer Eulani Labay. We were all looking to get our first VR project under our belts. In order to understand the post process involved, I thought it was vital to be involved in a project from the inception, through the production stage and throughout post. I was seeking projects and people to team up with, and after I met Kiira this amazing team came together.

What direction did you get?
We were given an understanding of the viewer experience the film should evoke and were asked to be responsible for the technical side of things on set and in editorial.

So you were on set?
Yes, we were definitely on set. That was an important piece of the puzzle. We were able to consult on what we could do in color and we were able to determine file management and labeling of takes to make it easier to deal with when back in the edit room. Also, we were able to do a couple of stitches at the beginning of the day to determine best camera positioning, etc.

How does your workflow differ from a traditional project to a VR project?
A VR project is different because we are syncing and concerned with seven-plus cameras at a time. The file management has to be very detailed and the stitching process is tedious and uses new software that all editors are getting up to speed with.

Monitoring the cameras on set is tricky, so being able to stitch on set to make sure the look is true to the vision was huge. That is something that doesn’t happen in the traditional workflow… the post team is definitely not on set.

Cardboard City

Can you elaborate on some of the challenges of VR in general and those you encountered on this project?
The challenges are dealing with multiple cameras and cards, battery or power, and media for every shot from every camera. Syncing the cameras properly in the field and in post can be problematic, and the file management has to be uber-detailed. Then there’s the stitching… there are different software options, and no one has mastered them yet. It is tedious work, and all of this has to get done before you can even edit the clips together in a sequence.
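
One common approach to the syncing problem, offered here only as a general sketch rather than East Coast Digital’s actual workflow, is to cross-correlate the cameras’ scratch audio tracks and read off the offset. The file names below are hypothetical.

```python
# Minimal sketch: estimating the offset between two cameras by cross-correlating
# their scratch audio tracks, a common way to sync multi-camera VR rigs in post.
import numpy as np
import soundfile as sf
from scipy.signal import correlate, correlation_lags

cam_a, sr = sf.read("camA_scratch.wav")
cam_b, sr_b = sf.read("camB_scratch.wav")
assert sr == sr_b

# Mix to mono if the scratch tracks are stereo.
if cam_a.ndim > 1:
    cam_a = cam_a.mean(axis=1)
if cam_b.ndim > 1:
    cam_b = cam_b.mean(axis=1)

corr = correlate(cam_a, cam_b, mode="full")
lags = correlation_lags(len(cam_a), len(cam_b), mode="full")
best_lag = lags[np.argmax(corr)]          # sample offset that best aligns the tracks
print(f"Best alignment offset: {best_lag} samples ({best_lag / sr:.3f} s)")
```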

Our project also used stop-motion animation, so we had the artist featured in our film experimenting with us on how to pull that off. That was really fun and it turned out great! I heard someone say recently at the Realscreen conference that you have to unlearn everything you have learned about making a film. It is a completely different way to tell a story in production and post.

What was your workflow like?
As I mentioned before, I thought that it was vital to be on set to help with media management and “shot looks” using only natural light and organically placed light in preparation for color. We were also able to stitch on set to get a sense of each set-up, which really helped the director and artist see their story and creatively do their job. We then had a better sense of managing the media and understanding how the takes were marked.

Once back in the edit room, we used Adobe Premiere to clean up each take and sync each clip for each camera. We then brought only those clips into the stitching software — Kolor’s Autopano Video and Autopano Giga — to stitch and clean up each scene. We rendered out each scene as a self-contained QuickTime for color. We colored in DaVinci Resolve and edited the scenes together in Premiere.

What about the audio? 
We recorded nothing on location. All of the sound was designed in post using the mix from the animated short film Pickles for Nickels that was playing on the wall, in addition to the subway and roller coaster sound effects.

What tools were used on set?
We used GoPro Hero 4s with firmware 3.0 and shot in log at 2.7K/30fps. iPads and iPhones were used to wirelessly monitor the rig, which was challenging. We used a laptop with Kolor’s Autopano Video and Autopano Giga to stitch on set. This is the same software we used in the edit bay.

What’s next?
We are collaborating once more with Kiira Benzing on the follow-up to Cardboard City. It’s a full-fledged 360 VR short film. The sequel will be even more technically advanced and create additional possibilities for interaction with the user.

Talking to Assimilate about new VR dailies/review tool

CEO Jeff Edson and VP of biz dev Lucas Wilson answer our questions

By Randi Altman

As you can tell from our recent Sundance coverage, postPerspective has a little crush on VR. While we know that today’s VR is young and creatives are still figuring out how it will be used — narrative storytelling, gaming, immersive concerts (looking at you Paul McCartney), job training, therapy, etc. — we cannot ignore how established film fests and trade shows are welcoming it, or the tools that are coming out for its production and post.

One of those tools comes from Assimilate, which is expanding its Scratch Web cloud-platform capabilities to offer a professional, web-based dailies/review tool for reviewing headset-based 360-degree VR content, regardless of location.

How does it work? Kind of simply: Users launch the link vr360.sweb.media on an Android phone (Samsung S6 or other) via Chrome, click the goggles in the lower right corner, put the phone in their Google Cardboard and view immediate headset-based VR. Once users launch the Scratch Web review link for the VR content, they can play back VR imagery, pan around the imagery or create a “magic window” so they can move their smartphone around, similar to looking through a window to see the 360-degree content behind it.

The VR content, including metadata, is automatically formatted for 360-degree video headsets, such as Google Cardboard. The reviewer can then make notes and comments on their mobile device to send back to the sender. The company says they will be announcing support for other mobile devices, headsets and browsers in the near future.

On the heels of this news, we decided to reach out to Assimilate CEO Jeff Edson and VP of business development Lucas Wilson to find out more.

Assimilate has been offering tools for VR, but with this new dailies and reviews tool, you’ve taken it to a new level. Can you talk about the evolution of how you service VR and how this newest product came to be?
Jeff Edson: Professional imagery needs professional tools and workflows to succeed. Much like imaging evolutions to date (digital cinema), this is a new way to capture and tell stories and provide experiences. VR gives people a whole new way to tell stories, among other experiences.

So regarding the evolution of tools, Scratch has supported the 360 format for a while now. It has allowed people to play back their footage as well as do basic DI — basic functionality to help produce the best output. As the production side of VR continues to evolve, the workflow aligns itself with a more standard process. This means the same toolset for VR as exists for non-VR. Scratch Web-VR is the natural progression to provide VR productions with the ability to review dailies worldwide.

Lucas Wilson: When VR first started appearing as a real deliverable for creative professionals, Assimilate jumped in. Scratch has supported 360 video live to an Oculus Rift for more than a year now. But with the new Scratch Web toolset and the additional tools added in Scratch to make 360 work more easily and be more accessible, it is no longer just a feature added to a product. It is a workflow and process — review and approval for Cardboard via a web link, or via the free Scratch Play tool, along with color and finishing with Scratch.

It seems pretty simple to use, how are you able to do this via the cloud and through a standard browser?
Jeff: The product is very straightforward to use, as there is a very wide range of people who will have access to it, most of whom do not want the technology to get in the way of the solution. We work very hard at the core of all we have developed — interactive performance.

Lucas: Good programmers (smiles)! Seriously though, we looked at what was needed and what was missing in the VR delivery chain and tried to serve those needs. Scratch Web allows users to upload a clip and generate a link that will work in Cardboard. Review and approval is now just clicking a link and putting your phone into a headset.

What’s the price?
Jeff: The same price as Scratch Web — free trial; Basic, $79/month; Extended, $249/month; and Enterprise for special requirements.

Prior to this product, how were those working on VR production going about dailies and reviews?
Jeff: In most cases they were doing it by looking at output from several cameras for review. The main process for viewing was to edit and publish. There really was no tool targeted at dailies/review of VR.

Lucas: It has been really difficult. Reviews are typically done on a flat screen and by guessing, or by reverse engineering MilkVR or Oculus Videos in GearVR.

Can you talk about real-world testing of the product? VR productions that used this tool?
Lucas: We have a few large productions doing review and approval right now with Scratch Web. We can’t talk about them yet, but one of them is the first VR project directed by an A-list director. Two of the major sports leagues in the US have also employed the tool.

The future is in Park City: highlights from Sundance’s New Frontier

By Kristine Pregot

The future is here, and I caught a glimpse of it while wearing VR glasses at the New Frontier. This is Sundance’s hottest place on the mountain. The Frontier is a who’s who of VR tech, design and storytelling.

These VR products aren’t exactly ready for household consumption yet, but the New Frontier has become a spot for developers to show off their latest and greatest in this ever-growing arena.

On the second and third floors of the Frontier’s dark hallway, you’ll find Oculus Rift and HTC Vive stations lining the studio walls, along with masked viewers sitting side by side on comfy couches, reaching for nothing, each in their own dimension of (virtual) reality.

A very impressive exhibit was Holo-Cinema, a new technology being developed by Walt Disney Co.’s Lucasfilm to expand the Star Wars universe to your very own home. Users, wearing augmented glasses, journey through the Jakku desert and walk around a 3D C3PO while he paces and complains around you, like a hologram. If you were to walk into the room without the glasses, you would see an unfocused projection against the wall and under your feet.

Music-meets-storytelling was a big trend in the lab as well, with the Kendrick Lamar-scored installation Double Conscience from artist Kahlil Joseph featuring scenes from the inner city of LA rhythmically projected onto two walls and set to Kendrick’s new album.

Another fun and interactive piece that blended music with new technology was 3 Dreams of Black, a film by Chris Milk with music from the album “Rome” by Danger Mouse and Daniele Luppi, featuring Norah Jones.

While Sundance is one of the top festivals for filmmakers, I’m impressed with the breadth of new storytelling tools and technology that were on display. I look forward to seeing how the programmers further integrate this type of experience in the years to come.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


The sound of VR at Sundance and Slamdance

By Luke Allen

If last year’s annual Park City film and cultural meet-up was where VR filmmaking first dipped its toes in the proverbial water, count 2016’s edition as its full-on coming-out party. With over 30 VR pieces as official selections at Sundance’s New Frontier sub-festival, and even more content debuting at Slamdance and elsewhere, festival goers this year can barely take two steps down Main Street without being reminded of the format’s ubiquitous presence.

When I first stepped onto the main demonstration floor of New Frontier (which could be described this year as a de-facto VR mini-festival), the first thing that struck me was, why was it so loud in there? I admit I’m biased since I’m a sound designer with a couple of VR films being exhibited around town, but I am definitely backed up by a consensus among content creators regarding sound’s importance to creating the immersive environment central to VR’s promise as a format (I know, please forgive the buzzwords). In seemingly direct defiance of this principle, Sundance’s two main public exhibition areas for all the latest and greatest content were inundated with the rhythmic bass lines of booming electronic music and noisy crowds.

I suppose you can’t blame the programmers for some of this — the crowds were unavoidable — but I can’t help contrasting the New Frontier experience with the way Slamdance handled its more limited VR offering. Both festivals required visitors to sign up for a viewing time, but while the majority of Sundance’s screenings involved strapping on a headset while seated on a crowded bench in the middle of the demonstration floor, Slamdance reserved a quiet room for the screening experience. Visitors were advised to keep their voices to a murmur while in the viewing chamber, and the screenings took place in an isolated corner with viewers seated on — crucially — a chair with full range of motion.

Why is this important? Consider the nature of VR: the viewer has the freedom to look around the environment at their own discretion, and the best content creators make full use of the 360 degrees at their disposal to craft the experience. A well-designed VR piece will use directional sound mixing to cue the viewer to look in different directions in order to further the story. It will also incorporate deep soundscapes that shift as one looks around the environment in order to immerse the viewer. Full range of motion, including horizontal rotation, is critical to allowing this exploration to take place.
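
For the technically minded, a soundscape that shifts as you look around comes down to rotating the ambisonic field against the listener’s head movement before it is decoded to headphones. Below is a minimal first-order sketch in Python, assuming a W, Y, Z, X (ACN) channel order; it illustrates the principle rather than any particular player’s implementation.

```python
# Minimal sketch: rotating a first-order ambisonic bed about the vertical axis,
# which is roughly what a VR player does to the soundfield as the listener turns
# their head. Channel order is assumed to be W, Y, Z, X (ACN).
import numpy as np

def rotate_yaw(bed, angle_deg):
    """Rotate the horizontal soundfield counter-clockwise by angle_deg."""
    a = np.radians(angle_deg)
    w, y, z, x = bed[:, 0], bed[:, 1], bed[:, 2], bed[:, 3]
    x_rot = x * np.cos(a) - y * np.sin(a)
    y_rot = x * np.sin(a) + y * np.cos(a)
    return np.stack([w, y_rot, z, x_rot], axis=-1)   # W and Z are unaffected by yaw

# A listener turning 30 degrees to the left is compensated by rotating the
# soundfield 30 degrees the other way before decoding to headphones:
# rotated = rotate_yaw(bed, -30)
```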

The Visitor, which I had the pleasure of experiencing in Slamdance’s VR sanctuary, put this concept to use nicely by placing the two lead characters 90 degrees apart from one another, forcing the viewer to look around the beautifully-staged set in order to follow the story. Director James Kaelan and the post sound team at WEVR used subtly shifting backgrounds and eerie footsteps to put the viewer right in the middle of their abstract world.

Sundance’s New Frontier VR Bar.

Resonance, an experience directed by Jessica Brillhart that I sound designed and engineered, features violinist Tim Fain performing in a variety of different locations, mostly abandoned, selected both for their visual beauty and their unique sonic character. We used an Ambisonic microphone on set in order to capture the full range of acoustic reflections and, with a lot of love in the mix room at Silver Sound, were able to recreate these incredible sonic landscapes while enhancing the directionality of Fain’s playing in order to help the viewer follow him through the piece. (Unfortunately, when Resonance was screening at Sundance’s New Frontier VR Bar, there was a loudspeaker playing Top 40 hits located about three feet above the viewer’s head.)

In both of these live-action VR films, sound and picture serve to enhance and guide the experience of the other, much like in traditional cinema, but in a new and more enchanting way. I have had many conversations with other festival attendees here in Park City in which we recall shared VR experiences much like shared dreams, so personal and haunting is this format. We can only hope that in future exhibitions more attention is paid to ensure that viewers have the quiet they need to fully experience the artists’ work.

Luke Allen is a sound designer at Silver Sound Studios in New York City. You can reach him at luke@silversound.us

MPC Creative provides film, VR project for Faraday at CES 2016

VR was everywhere at CES earlier this month, and LA’s MPC played a role. Their content production arm, MPC Creative, produced a film and VR experience for CES 2016, highlighting Faraday Future’s technology platform and providing glimpses of the innovations consumers can expect from its products. The specific innovation shown in the CES VR film was a concept car — the FFZERO1 high-performance electric dream car — and the inspiration it provides for Faraday Future’s consumer-based cars.

“We wanted it to feel elemental. Faraday Future is a sophisticated brand that aims for a seamless connection between technology and transportation,” explains MPC Creative CD Dan Marsh, who also directed the film. “We tried to make the film personal, but natural in the landscape. The car is engineered for the racetrack, but beautiful, in the environmental showcase.”

To make the film, MPC Creative shot a stand-in vehicle to achieve realistic performance driving and camera work. “We filmed in Malibu and a performance racetrack over two days, then married those locations together with some matte painting and CG to create a unique place that feels like an aspirational Nürburgring of sorts. We match-moved/tracked the real car that was filmed and replaced it with our CG replica of the Faraday Future racecar to get realistic performance driving. Interior shots were filmed on stage. We chose to bridge those stage shots with a slightly stylized appearance so that we could tie it all back together with a full CG demo sequence at the end of the film.”

MPC Creative also produced a Faraday Future VR experience that features the FFZERO1 driving through a series of abstract environments. The experience feels architectural and sculptural, and ultimately offers a spiritual versus visceral journey. Using Samsung’s Gear VR, CES attendees sat in a position similar to the angled seating of the car for their 360-degree tour.

MPC Creative shot the pursuit vehicle with an Arri Alexa and used a Red Dragon for drone and VFX support. “We also mounted a Red with a 4.5mm lens pointed upwards on a follow vehicle, which allowed us to capture a mobile spherical environment that we used to map moving reflections of the environment back onto the CG car,” explains MPC Creative executive producer Mike Wigart.

How did working on the film versus the VR product differ? “The VR project was very different from the film in the sense that it was CG rendered,” says Wigart. “We initially considered the idea of doing a live-action VR piece, but we started to see several in-car live-action VR projects out in the world, so we decided to do something we hadn’t seen before — an aesthetically driven VR piece with design-based environments. We wanted a VR experience that was visually rich while speaking to the aspirational nature of Faraday Future.”

Adds Marsh, “Faraday Future wanted to put viewers in the driver’s seat but, more than that, they wanted to create a compelling experience that points to some of the new ideas they are focusing on. We’ve seen and made a lot of car driving experiences, but without a compelling narrative the piece can be in danger of being VR for the sake of it. We made something for Faraday Future that you couldn’t see otherwise. We conceived an architectural framework for the experience. Participants travel through a racetrack of sorts, but each stage takes you through a unique space. But we’re also traveling fast, so, like the film, we’re teasing the possibilities.”

Tools used by MPC Creative included Autodesk Maya, Side Effects Houdini, V-Ray by Chaos Group, The Foundry’s Nuke and Nuke Studio and Tweak’s RV.

Quick Chat: GoPro EP/showrunner Bill McCullough

By Randi Altman

The first time I met Bill McCullough was on a small set in Port Washington, New York, about 20 years ago. He was directing NewSport Talk With Chet Coppock, who was a popular sports radio guy from Chicago.

When our paths crossed again, Bill — who had made some other stops along the way — was owner of the multiple Emmy Award-winning Wonderland Productions in New York City. He remained there for 11 years before heading over to HBO Sports as VP of creative and operations. Bill’s drive didn’t stop there. Recently, he completed a move to the West Coast, joining GoPro as executive producer of team sports and motor sports.

Let’s find out more:

You were most recently at HBO Sports in New York. Why the jump to GoPro, and why was this the right time?
I was fortunate enough to have a great and long career with HBO, a company that has set the standard for quality storytelling, but when I had the opportunity to join the GoPro team I could not pass it up.

GoPro has literally changed the way we capture and share content. With its unique perspective and immersive style, the capture device has given filmmakers the ability to tell stories and capture visuals that have never existed before. The size of the device makes it virtually invisible to the subject and creates an atmosphere that is much more organic and authentic. GoPro is also a leader in VR capture, and we’re excited for 2016.

What will you be doing in your new role? What will it entail?
I am an executive producer in the entertainment division. I will be responsible for creating, developing and producing content for all platforms.

What do you hope to accomplish in this new role?
I am excited for my new role because I have the opportunity to make films from a completely new perspective. GoPro has done an amazing job capturing and telling stories. My goal is to raise the bar and grow the brand even more.

You have a background in post and production. Will this new job incorporate both?
Yes. I will oversee the creative and production process from concept to completion for my projects.

AWE puts people in a VR battle using mocap

What’s the best way to safely show people what it’s like to be on a battlefield? Virtual reality. Mobile content developer AWE employed motion capture to create a virtual reality experience for the Fort York national historic site in Toronto. Visitors to the nine-acre site will use Google Cardboard to experience key battles and military fortifications in a simulated and safe immersive 3D environment.

“We created content that, when viewed on a mobile phone using Google Cardboard, immerses visitors inside a 360-degree environment that recreates historical events from different eras that occurred right where visitors are standing,” explains Srinivas Krishna, CEO of Toronto-based AWE, who directed the project.

Fort York played a pivotal role during the War of 1812, when US naval ships attacked the fort, which was then under the control of the British army. To recreate that experience, along with several other noteworthy events, Krishna designed a workflow that leaned heavily on iPi Soft markerless mocap software.

The project, which began three years ago, included creating animated model rigs for the many characters that would be seen virtually. The experience was built using the Unity 3D game engine, while the character models, as well as the environments (designed to look like Fort York at the time), were animated using Autodesk Maya. That content was then composited with the mocap sequences, along with facial mocap data captured using Mixamo Face Plus.

“On the production side, we had a real problem because of the huge number of characters the client wanted represented,” Krishna says. “It became a challenge on many levels. We wondered how we were going to create our 3D models and use motion capture in a cost-effective way. Markerless mocap is perfect for broad strokes, and as a filmmaker I found working with it to be a marvelous creative experience.”

While it would seem historical sites like Fort York are perfect for these types of virtual reality experiences, Krishna notes that the project was a bit of a risk given that when they started Google Cardboard wasn’t yet on anyone’s radar.

“We started developing this in 2012, which actually turned out to be good timing because we were able to think creatively and implement ideas, while at the same time the ecosystem for VR was developing,” says Krishna. “We see this as a game-changer in this arena, and I think more historical sites around the world are going to be interested in creating these types of experiences for their visitors.”

SuperSphere and Fox team on ‘Scream Queens’ VR videos

Fox Television decided to help fans of its Scream Queens horror/comedy series visit the show’s set in a way that wasn’t previously possible, thanks to eight new virtual reality short videos. For those of you who haven’t seen the show, Scream Queens focuses on a series of murders tied to a sorority and is set at a fictional college in New Orleans. The VR videos have been produced for the Samsung Milk VR, YouTube 360° and Facebook platforms and are rolling out in the coming weeks.

Fox called on SuperSphere Productions — a consultancy that helps with virtual reality project execution and delivery — to bring their VR concepts to life. SuperSphere founder Lucas Wilson worked closely with Fox creative and marketing executives to develop the production, post and delivery process using the talent, tools and equipment already in place for Scream Queens.

“It was the first VR shoot for a major episodic that proved the ability to replicate a realistic production formula, because so much of VR is very ‘science project-y’ right now,” explains industry vet Wilson, who many of you might know from his work with Assimilate and his own company Revelens.

Lucas Wilson

Wilson reports that this project had a reasonable budget and a small crew. “This allowed us to work together to produce and deliver a wide series of experiences for the show that — judging by reaction on Facebook — are pretty successful. As of late November, the Closet Set Tour (extended) has over 660,000 views, over 31,000 likes and over 10,500 shares. In product terms, that price/performance ratio is pretty damn impressive.”

The VR content, captured over a two-day shoot on the show’s set in New Orleans, was directed by Jessica Sanders and shot by the 360Heros team. “The fact that a woman directed these videos is relevant, and it’s intentional. For virtual reality to take root and grow in every corner of the globe, it must become clear very quickly that VR is for everyone,” says Wilson. “So in addition to creating compelling content, it is critical for that content to be produced and influenced by talented people who bring a wide range of perspectives and experiences. Hiring smart, ambitious women like Jessica as directors and DPs is a no-brainer. SuperSphere’s mission is to open up a whole new kind of immersive, enriching experience to everyone on the planet. To reach everyone, you have to include everyone… from the beginning.”

In terms of post, editorial and the 5.1 sound mix were done by Fox’s internal team. SuperSphere did the conform and finish in Assimilate Scratch VR. Local Hero did the VR grading, also in Scratch VR. “The way we worked with Local Hero was actually kinda cool,” explains Wilson. “Most of the pieces are very single-location with highly controlled lighting. We sent them representative still frames, and they graded the stills and sent back a Scratch preset, which we used to then render and conform/output. SuperSphere then output the three different VR deliverables — Facebook, Milk VR and YouTube.”

Two videos have already launched — the first includes a behind-the-scenes tour, mentioned earlier, of the set and closet of Chanel Oberlin (Emma Roberts), created by Scream Queens production designer Andrew Murdock. The second shows a screaming match between the Scream Queens‘ Zayday Williams (Keke Palmer) and Grace Gardner (Skyler Samuels).

Following the Emmy-winning Comic-Con VR experience for its drama Sleepy Hollow last year, these Scream Queens videos mark the first of an ongoing Fox VR and augmented reality initiative for its shows.

“The intelligent way that Fox went about it, and how SuperSphere and Fox worked together to very specifically create a formula for replication and success, is in my opinion a model for how episodic television can leverage VR into an overall experience,” concludes Wilson.

IBC Report: Making high-resolution panoramic video

By Tom Coughlin

Higher-resolution content is becoming the norm in today’s media workflows, but pixel count is not the only element that is changing. In addition to pixel density, the depth of image, color gamut, frame rates and even the number of simultaneous streams of video will be important. The 2015 IBC in Amsterdam painted a clear picture of a future that includes UHD 4K and 8K video, as well as virtual reality, as the path to more immersive video and entertainment experiences.

NHK, a pioneer in 8K video hardware and infrastructure development, has given more details on its introduction of this higher-resolution format. It will start test broadcasts of its 8K technology in 2016, followed by significant satellite video transmission in 2018 and widespread deployment in 2020, in time for the Tokyo Olympic Games. The company is looking at using HEVC compression to put a 72Gb/s video stream with 22.2-channel audio into a 100Mb/s delivery channel.
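
To put those numbers in perspective, the required compression ratio follows from simple division of the quoted source rate by the quoted channel rate, as the short calculation below shows.

```python
# Back-of-the-envelope check of the compression NHK is targeting: fitting a
# 72Gb/s uncompressed 8K stream into a 100Mb/s delivery channel.
uncompressed_bps = 72e9   # 72 Gb/s, as quoted above
channel_bps = 100e6       # 100 Mb/s satellite delivery channel
print(f"Required compression ratio: {uncompressed_bps / channel_bps:.0f}:1")  # 720:1
```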

In the Technology Zone at IBC there were displays of virtual reality and 8K video developments (mostly by NHK), as well as multiple-camera setups for creating virtual reality video and various ways to use panoramic video. Sphericam 2 is a Kickstarter-funded product that provides 60-frames-per-second 4K video capture for creating VR content. This six-camera device is compact and can be placed on a stick and used like a selfie camera to capture a 360-degree view.

Sphericam 2

At the 2015 Google Developers Conference, GoPro demonstrated a 360-degree camera rig (our main image) using 16 GoPro cameras to capture panoramic video. At the IBC, GoPro displayed a more compact 360 Hero six-camera rig for 3D video capture.

In the Technology Zone, Al Jazeera had an eight-camera rig for 4K video capture (made using a 3D printer) and was using software to create panoramic videos. There are many such videos on YouTube, which change perspective when viewed on a smartphone: the accelerometer creates a reference around which the viewer can look at the panoramic scene. The Kolor software actually provides a number of different ways to view the captured content.
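
How does a flat equirectangular video change perspective as the phone moves? At its simplest, the player converts the device orientation into a yaw/pitch look direction and samples the panorama around the corresponding pixel. The sketch below shows that mapping with a hypothetical frame size; real players render a full reprojected viewport rather than a single pixel.

```python
# Minimal sketch: mapping a look direction (yaw/pitch derived from the phone's
# motion sensors) to a pixel in an equirectangular panorama. This is roughly
# what a 360-degree player does as the viewer turns the device.
def direction_to_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
    """yaw: -180..180 degrees (0 = panorama center); pitch: -90..90 (0 = horizon)."""
    u = (yaw_deg + 180.0) / 360.0        # 0..1 across the image width
    v = (90.0 - pitch_deg) / 180.0       # 0..1 down the image height
    return int(u * (width - 1)), int(v * (height - 1))

print(direction_to_pixel(0, 0))    # center of the frame -> (1919, 959)
```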

Eight-camera rig at the Al Jazeera stand.

While many viewing devices for VR video use special split-screen displays, or use smartphones with a split-screen image and the phone’s accelerometers to give the sense of being surrounded by the viewed image — like Google Cardboard — there are other ways to create an immersive experience. As mentioned earlier, panoramic videos with a single (or split-screen) view are available on YouTube. There are also spherical display devices on which a still or video image can be rotated by moving your hand across the sphere.

Higher-resolution content is becoming mainstream, with 4K TVs set to make up the majority of TVs sold within the next few years. 8K video production, pioneered by NHK and others in Japan, could be the next 4K by the start of the next decade, driving even more realistic content capture and requiring higher bandwidth and higher storage capacity in post.

Multi-camera content is also growing in popularity to support virtual reality games and other applications. This growth is enabled by the proliferation of low-cost, high-resolution cameras and sophisticated software that combines the video from these cameras to create a panoramic video and virtual reality experience.

The trends toward higher resolution, combined with a greater color gamut, higher frame rate and color depth will transform video experiences by the next decade, leading to new requirements for storage, networking and processing in video production and display.

Dr. Tom Coughlin, president of Coughlin Associates, has over 35 years in the data storage industry. Coughlin is also the founder and organizer of the annual Storage Visions Conference, a partner to the International Consumer Electronics Show, as well as the Creative Storage Conference.