Lenovo intros VR-ready ThinkStation P320

Lenovo launched its VR-ready ThinkStation P320 at Develop3D Live, a UK-based conference that puts special focus on virtual reality as a productivity tool in design workflows. The ThinkStation P320 is the latest addition to the Lenovo portfolio of VR-ready certified workstations and is designed for power users looking to balance both performance and their budgets.

The workstation’s pro VR certification allows ThinkStation P320 users to more easily add virtual reality into their workflows without requiring a high-end initial hardware and software investment.

The refreshed workstation will be available in both full-size tower and small form factor (SFF) versions and comes equipped with Intel’s newest Xeon and Core i7 processors — offering speeds of up to 4.5GHz with Turbo Boost (on the tower). Both form factors will also support the latest Nvidia Quadro graphics cards, including support for dual Nvidia Quadro P1000 GPUs in the small form factor.

The ISV-certified ThinkStation P320 supports up to 64GB of DDR4 memory and customization via the Flex Module. In terms of environmental sustainability, the P320 is Energy Star-qualified, as well as EPEAT Gold and Greenguard-certified.

The Lenovo ThinkStation P320 full-size tower and SFF will be available at the end of April.

Timecode’s new firmware paves the way for VR

Timecode Systems, which makes wireless technologies for sharing timecode and metadata, has launched a firmware upgrade that enhances the accuracy of its wireless genlock.

Promising sub-line-accurate synchronization, the system allows Timecode Systems products to stay locked in sync more accurately, setting the scene for development of a wireless sensor sync solution able to meet the requirements of VR/AR and motion capture.

“The industry benchmark for synchronization has always been ‘frame-accurate’, but as we started exploring the absolutely mission-critical sync requirements of virtual reality, augmented reality and motion capture, we realized sync had to be even tighter,” said Ashok Savdharia, chief technical officer at Timecode Systems. “With the new firmware and FPGA algorithms released in our latest update, we’ve created a system offering wireless genlock to sub-line accuracy. We now have a solid foundation on which to build a robust and immensely accurate genlock, HSYNC and VSYNC solution that will meet the demands of VR and motion capture.”

A veteran in camera and image sensor technology, Savdharia joined Timecode Systems last year. In addition to building up the company’s multi-camera range of solutions, he is leading a development team in pioneering a wireless sync system for the VR and motion capture market.

HPA Tech Retreat takes on realities of virtual reality

By Tom Coughlin

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar — Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things and how they perceive reality, and using that understanding to help tell a story. One technique that may help is reinforcing the viewer’s gaze with color and sound that can vary with the viewer — e.g. these may be different for the “good guy” and the “bad guy.”

VR workflows are quite different from traditional ones, with many elements changing with multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras.
Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also some venues that allow one or more people to experience VR, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR use is for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on the company’s approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be regions between camera images that need to be stitched together using common reference points, as well as blind spots close to the cameras where no image is captured.

If all of the cameras share a single axis, there are effectively no blind spots and no stitching is required, as shown in the image below. Covering the full 360 degrees, however, requires additional cameras located on that axis.

The Fraunhofer Institute in Germany has for several years been showing a 360-degree video camera that gives several cameras an effective single axis, as shown below. They do this by using mirrors to reflect the images into the individual cameras.

As the number of cameras is increased, the mathematical work to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used in VR stitching of multiple camera videos. The easiest to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources but results in unavoidable ghosting artifacts at seams from the separate images.

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.
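
To make the optical flow idea concrete, below is a minimal sketch in Python using OpenCV. It is not any vendor’s stitcher: the input file names are hypothetical, and it only handles the overlapping strip between two adjacent cameras, warping each image halfway along a dense flow field before blending so that features meet at the seam instead of ghosting.

```python
import cv2
import numpy as np

# Hypothetical inputs: the overlapping strips from two adjacent cameras,
# already projected into the same (e.g. equirectangular) coordinate space.
img_a = cv2.imread("cam_a_overlap.png")
img_b = cv2.imread("cam_b_overlap.png")

gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

# Dense optical flow: a per-pixel correspondence from image A to image B.
flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                    0.5, 4, 31, 3, 7, 1.5, 0)

h, w = gray_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))

# Warp each image halfway along the flow so corresponding features land
# in the same place, then average. This is what suppresses seam ghosting.
warped_a = cv2.remap(img_a, grid_x - 0.5 * flow[..., 0],
                     grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
warped_b = cv2.remap(img_b, grid_x + 0.5 * flow[..., 0],
                     grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)

seam = cv2.addWeighted(warped_a, 0.5, warped_b, 0.5, 0)
cv2.imwrite("seam_blend.png", seam)
```

A production optical flow stitcher does far more (multi-band blending, temporal consistency, stereo), which is one reason the heavy lifting tends to land on cloud render farms.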

Files from the Jaunt One camera system are first downloaded and organized on a laptop computer and then uploaded to Jaunt’s cloud server, where they are processed and stitched into a finished 360-degree video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has a 1-inch 20MP rolling-shutter sensor with frame rates up to 60fps, a maximum ISO of 3200 and 29dB SNR at ISO 800. It offers 10 stops of dynamic range per camera module, a 130-degree diagonal FOV, f/2.9 optics and up to 16K resolution (8K per eye). At 60fps the Jaunt One produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes. They are forced to use compression to be able to use currently affordable storage devices. The compressed output is 11GB per minute, which fills a 1TB SSD in about 90 minutes.
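
As a quick back-of-the-envelope check of those storage figures (a sketch in Python, using the decimal 1TB = 1,000GB that drive makers quote):

```python
ssd_gb = 1000                      # 1TB SSD, in decimal gigabytes
uncompressed_gb_per_min = 200      # Jaunt One at 60fps, uncompressed
compressed_gb_per_min = 11         # after on-set compression

print(ssd_gb / uncompressed_gb_per_min)  # 5.0   -> drive is full in five minutes
print(ssd_gb / compressed_gb_per_min)    # ~90.9 -> roughly 90 minutes of capture
```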

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears as a natural image of the world around the viewer, giving an immersive experience. At any one point in time the viewer does not see all of the image, only the restricted region they are looking at directly, as shown by the red box in the figure below.

The full 360-degree image can be fairly high resolution, but the resolution inside the region being viewed at any point in time will be much less than the resolution of the overall scene unless special steps are taken.

The image below shows that for a 4K 360-degree video, the resolution in the field of view (FOV) may be only about 1K, a drop that is quite perceptible to the human eye.
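
The arithmetic behind that figure is straightforward. Assuming a 3840-pixel-wide (4K) equirectangular frame and a roughly 90-degree horizontal field of view in the headset (a typical HMD value, assumed here rather than quoted in the talk):

```python
width_360 = 3840      # horizontal pixels across the full 4K equirectangular frame
fov_degrees = 90      # assumed horizontal field of view of the headset

pixels_in_fov = width_360 * fov_degrees / 360
print(pixels_in_fov)  # 960.0 -> only about "1K" across the visible window
```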

In order to provide a better viewing experience in the FOV, either the resolution of the entire view must be higher (e.g. the high-resolution Jaunt One workflow delivers 8K per eye, and thus 16K total displayed resolution) or there must be a way to increase the resolution in the most significant FOV in a video, so that at least in that FOV the resolution gives a greater feeling of reality.

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of it cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of the director is as important as ever, as new methods are needed to tell stories and guide the viewer through them.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.


Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.

Deluxe VFX

Craig Zerouni joins Deluxe VFX as head of technology

Deluxe has named Craig Zerouni as head of technology for Deluxe Visual Effects. In this role, he will focus on continuing to unify software development and systems architecture across Deluxe’s Method studios in Los Angeles, Vancouver, New York and India, and its Iloura studios in Sydney and Melbourne, as well as LA’s Deluxe VR.

Based in LA and reporting to president/GM of Deluxe VFX and VR Ed Ulbrich, Zerouni will lead VFX and VR R&D and software development teams and systems worldwide, working closely with technology teams across Deluxe’s Creative division.

Zerouni has been working in media technology and production for nearly three decades, joining Deluxe most recently from DreamWorks, where he was director of technology at its Bangalore, India-based facility, overseeing all technology. Prior to that he spent nine years at Digital Domain, where he was first head of R&D, responsible for software strategy and teams in five locations across three countries, then senior director of technology, overseeing software, systems, production technology, technical directors and media systems. He has also directed engineering, products and teams at software/tech companies Silicon Grail, Side Effects Software and Critical Path. In addition, he was co-founder of London-based computer animation company CFX.

Zerouni’s work has contributed to features including Tron: Legacy, Iron Man 3, Maleficent, X-Men: Days of Future Past, Ender’s Game and more than 400 commercials and TV IDs and titles. He is a member of BAFTA, ACM/SIGGRAPH, IEEE and the VES. He has served on the AMPAS Digital Imaging Technology Subcommittee and is the author of the technical reference book “Houdini on the Spot.”

Says Ulbrich on the new hire: “Our VFX work serves both the features world, which is increasingly global, and the advertising community, which is increasingly local. Behind the curtain at Method, Iloura, and Deluxe, in general, we have been working to integrate our studios to give clients the ability to tap into integrated global capacity, technology and talent anywhere in the world, while offering a high-quality local experience. Craig’s experience leading global technology organizations and distributed development teams, and building and integrating pipelines is right in line with our focus.”

Last Chance to Enter to Win an Amazon Echo… Take our Storage Survey Now!

If you’re working in post production, animation, VFX and/or VR/AR/360, please take our short survey and tell us what works (and what doesn’t work) for your day-to-day needs.

What do you need from a storage solution? Your opinion is important to us, so please complete the survey by Wednesday, March 8th.

We want to hear your thoughts… so click here to get started now!

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphics cards, filling out its entire Quadro line-up with models based on its newest Pascal architecture. At the absolute top end is the new Quadro GP100, a PCIe card implementation of Nvidia’s supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation) capabilities, and is intended to combine compute and visualization into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory), and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000 announced last July. The next addition to the line-up is the single-slot, VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor, the M4000, in having 8GB of RAM and four DisplayPort connectors and in running on a single six-pin power connector. The new P2000 follows with 1,024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. Nvidia also continues to push forward on high-resolution, multi-display setups, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to play back 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Aside from when using the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and they can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for footage more stylized than 300 had, so in this case my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working with massive frame sizes.

The other way I expected to be able to measure a difference between the cards was in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks the encoder was accessing, I was unable to get significantly better playback performance from the P6000 than from the P2000. This says more about the software than it does about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they are all far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using fewer power connectors. My HP Z420 workstation only has one six-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software’s capacity to use them, at least within Premiere Pro. This leads to the cards performing relatively similarly to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of the CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.

Rise Above

Sundance 2017: VR for Good’s Rise Above 

By Elise Ballard

On January 22, during the Sundance Film Festival in Park City, the Oculus House had an event for their VR for Good initiative, described as “helping non-profits and rising filmmakers bring a variety of social missions to life.” Oculus awarded 10 non-profits a $40,000 grant and matched them with VR filmmakers to make a short film related to their community and cause.

One of the films, Rise Above, highlights a young girl’s recovery from sexual abuse and the support and therapy she received from New York City’s non-profit Womankind (formerly New York Asian Women’s Center).

Rise Above is a gorgeous film — shot on the Nokia Ozo camera — and really well done, especially insofar as it guides your eye to the storytelling going on in a 360 VR environment. I had the opportunity to interview the filmmakers, Ben Ross and Brittany Neff, about their experience. I was curious why they feel VR is one of the best mediums for creating empathy and action for social impact. Check out their website.

Referencing the post process, Ross said he wore a headset the entire time as he worked with the editor in order to make sure it worked as a VR experience. All post production for VR for Good films was done at Reel FX. In terms of tools, for stitching the footage they used a combination of the Ozo Creator software from Nokia, Autopano Video from Kolor and the Cara VR plug-in for Nuke. Reel FX finished all the shots in Nuke (again making major use of Cara VR) and Autodesk’s Flame for seam fixing and rig removal. TD Ryan Hartsell did the graphics work in After Effects, using the Mettle plug-in to help him place the graphics in 360 space and in 3D.

For more on the project and Reel FX’s involvement visit here.

The Oculus VR for Good initiative will be exhibited at other major film festivals throughout the year, and the films will be distributed by Facebook after the festival circuit.

Visit VR for Good here for more information, news and updates, and to stay connected (and apply!) to this inspiring and cutting-edge project.

Elise Ballard is a Los Angeles-based writer and author of Epiphany: True Stories of Sudden Insight, and the director of development at Cognition and Arc/k Project, a non-profit dedicated to preserving cultural heritage via virtual reality and digital media.

HPA Tech Retreat takes on VR/AR at Tech Retreat Extra

The long-standing HPA Tech Retreat is always a popular destination for tech-focused post pros, and while they have touched on virtual reality and augmented reality in the past, this year they are dedicating an entire day to the topic — February 20, the day before the official Retreat begins. TR-X (Tech Retreat Extra) will feature VR experts and storytellers sharing their knowledge and experiences. The traditional HPA Tech Retreat runs from February 21-24 in Indian Wells, California.

TR-X VR/AR is co-chaired by Lucas Wilson (Founder/Executive Producer at SuperSphereVR) and Marcie Jastrow (Senior VP, Immersive Media & Head of Technicolor Experience Center), who will lead a discussion focused on the changing VR/AR landscape in the context of rapidly growing integration into entertainment and applications.

Marcie Jastrow

Experts and creative panelists will tackle questions such as: What do you need to understand to enable VR in your environment? How do you adapt? What are the workflows? Storytellers, technologists and industry leaders will provide an overview of the technology and discuss how to harness emerging technologies in the service of the artistic vision. A series of diverse case studies and creative explorations — from NASA to the NFL — will examine how to engage the audience.

The TR-X program, along with the complete HPA Tech Retreat program, is available here. Additional sessions and speakers will be announced.

TR-X VR/AR Speakers and Panel Overview
Monday, February 20

Opening and Introductions
Seth Hallen, HPA President

Technical Introduction: 360/VR/AR/MR
Lucas Wilson

Panel Discussion: The VR/AR Market
Marcie Jastrow
David Moretti, Director of Corporate Development, Jaunt
Catherine Day, Head of VR/AR, Missing Pieces
Phil Lelyveld, VR/AR Initiative Program Lead, Entertainment Technology Center at USC

Acquisition Technology
Koji Gardiner, VP, Hardware, Jaunt

Live 360 Production Case Study
Andrew McGovern, VP of VR/AR Productions, Digital Domain

Live 360 Production Case Study
Michael Mansouri, Founder, Radiant Images

Interactive VR Production Case Study
Tim Dillon, Head of VR & Immersive Content, MPC Advertising USA

Immersive Audio Production Case Study
Kyle Schember, CEO, Subtractive

Panel Discussion: The Future
Alan Lasky, Director of Studio Product Development, 8i
Ben Grossmann, CEO, Magnopus
Scott Squires, CTO, Creative Director, Pixvana
Moderator: Lucas Wilson
Jen Dennis, EP of Branded Content, RSA

Panel Discussion: New Voices: Young Professionals in VR
Anne Jimkes, Sound Designer and Composer, Ecco VR
Jyotsna Kadimi, USC Graduate
Sho Schrock, Chapman University Student
Brian Handy, USC Student

TR-X also includes an ATSC 3.0 seminar, focusing on the next-generation television broadcast standard, which is nearing completion and offers a wide range of new content delivery options to the TV production community. This session will explore the expanding possibilities that the new standard provides in video, audio, interactivity and more. Presenters and panelists will also discuss the complex next-gen television distribution ecosystem that content must traverse, and the technologies that will bring the content to life in consumers’ homes.

Early registration is highly recommended for TR-X and the HPA Tech Retreat, which is a perennially sold-out event. Attendees can sign up for TR-X VR/AR, TR-X ATSC or the HPA Tech Retreat.

Main Image: Lucas Wilson.

VR Post: Hybrid workflows are key

By Beth Marchant

Shooting immersive content is one thing, but posting it for an ever-changing set of players and headsets is a whole other multidimensional can of beans.

With early help from software companies that have developed off-the-shelf ways to tackle VR post — and global improvements to their storage and networking infrastructures — some facilities are diving into immersive content by adapting their existing post suites with a hybrid set of new tools. As with everything else in this business, it’s an ongoing challenge to stay one step ahead.

Chris Healer

The Molecule
New York- and Los Angeles-based motion graphics and VFX post house The Molecule leapt into the VR space more than a year and a half ago when it fused The Foundry’s Nuke with Hugin, the open-source panoramic photo-stitching software. Then CEO Chris Healer took the workflow one step further: he developed an algorithm that renders stereoscopic motion graphics spherically in Nuke.
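
The Molecule has not published that code, but the core math behind a spherical stereo render is the omni-directional stereo (ODS) projection: each column of the lat-long frame gets its own pair of eye positions, offset tangentially around a small circle. Here is a minimal NumPy sketch of that projection (generic ODS, not The Molecule’s actual Nuke nodes):

```python
import numpy as np

def ods_rays(width, height, ipd=0.064, eye=-1):
    """Per-pixel ray origins and directions for one eye of an
    omni-directional stereo (ODS) lat-long render.
    eye = -1 for the left eye, +1 for the right; ipd is in meters."""
    # Longitude spans -pi..pi across the width; latitude +pi/2..-pi/2 down the height.
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Viewing direction for each pixel on the unit sphere.
    dirs = np.stack([np.sin(lon) * np.cos(lat),
                     np.sin(lat),
                     -np.cos(lon) * np.cos(lat)], axis=-1)

    # Each eye sits on a circle of radius ipd/2, offset tangentially to the
    # viewing direction; this keeps the stereo effect in every direction you look.
    origins = 0.5 * ipd * eye * np.stack([np.cos(lon),
                                          np.zeros_like(lon),
                                          np.sin(lon)], axis=-1)
    return origins, dirs

# Example: ray set for a modest 2K lat-long test frame, left eye.
origins, directions = ods_rays(2048, 1024, eye=-1)
```

Rendering along those rays for each eye produces the two lat-long images that are then stacked into an over/under stereo frame.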

Today, those developments have evolved into a robust pipeline that fuels The Molecule’s work for Conan O’Brien’s eponymous TBS talk show, The New York Times’s VR division and commercial work. “It’s basically eight or ten individual nodes inside Nuke that complete one step or another of the process,” says Healer. “Some of them overlap with Cara VR,” The Foundry’s recently launched VR plug-in for Nuke, “but all of it works really well for our artists. I talk to The Foundry from time to time and show them the tools, so there’s definitely an open conversation there about what we all need to move VR post forward.”

Collaborating with VR production companies like SuperSphere, Jaunt and Pixvana in Seattle, The Molecule is heading first where mass VR adoption seems likeliest. “The New York Times, for example, wants to have a presence at film festivals and new technology venues, and is trying to get out of the news-only business and into the entertainment-provider business. And the job for Conan was pretty wild — we had to create a one-off gag for Comic-Con that people would watch once and go away laughing to the next thing. It’s kind of a cool format.”

Healer’s team spent six weeks on the three-minute spot. “We had to shoot plates, model characters, animate them, composite it, build a game engine around it, compile it, get approval and iterate through that until we finished. We delivered 20 or so precise clips that fit into a game engine design, and I think it looks great.”

Healer says the VR content The Molecule is posting now is, like the Conan job, a slight variation on more typical recent VR productions. “I think that’s also what makes VR so exciting and challenging right now,” he says. “Everyone’s got a different idea about how to take it to the next level. And a lot of that is in anticipation of AR (augmented reality) and next-generation players/apps and headsets.

‘Conan’

“The Steam store,” the premier place online to find virtual content, “has content that supports multiple headsets, but not all of them.” He believes that will soon gel into a more unified device driver structure, “so that it’s just VR, not Oculus VR or Vive VR. Once you get basic head tracking together, then there’s the whole next thing: Do you have a controller of some kind, are you tracking in positional space, do you need to do room set up? Do we want wands or joysticks or hand gestures, or will keyboards do fine? What is the thing that wins? Those hurdles should solidify in the next year or two. The key factor in any of that is killer content.”

The biggest challenge facing his facility, and anyone doing VR post right now, he says, is keeping pace with changing resolutions and standards. “It used to be that 4K or 4K stereo was a good deliverable and that would work,” says Healer. “Now everything is 8K or 10K, because there’s this idea that we also have to future-proof content and prepare for next-gen headsets. You end up with a lot of new variables, like frame rate and resolution. We’re working on a stereo commercial right now, and just getting the footage of one shot converted from only six cameras takes almost 3TB of disk space, and that’s just the raw footage.”

When every client suddenly wants to dip their toes into VR, how does a post facility respond? Healer thinks the onus is on production and post services to provide as many options as possible while using their expertise to blaze new paths. “It’s great that everyone wants to experiment in the space, and that puts a certain creative question in our field,” he says. “You have to seriously ask of every project now, does it really just need to be plain-old video? Or is there a game component or interactive component that involves video? We have to explore that. But that means you have to allocate more time in Unity (https://unity3d.com) building out different concepts for how to present these stories.”

As the client projects get more creative, The Molecule is relying on traditional VFX processes like greenscreen, 3D tracking and shooting plates to solve VR-related problems. “These VFX techniques help us get around a lot of the production issues VR presents. If you’re shooting on a greenscreen, you don’t need a 360 lens, and that helps. You can shoot one person walking around on a stage and then just pan to follow them. That’s one piece of footage that you then composite into some other frame, as opposed to getting that person out there on the day, trying to get their performance right and then worrying about hiding all the other camera junk. Our expertise in VFX definitely gives us an advantage in VR post.”

From a post perspective, Healer still hopes most for new camera technology that would radically simplify the stitching process, allowing more time for concepting and innovative project development. “I just saw a prototype of a toric lens,” shaped like the donut-like torus that results from revolving a circle in three-dimensional space, “that films 360 minus a little patch, where the tripod is, in a single frame,” he says. “That would be huge for us. That would really change the workflow around, and while we’re doing a lot of CG stuff that has to be added to VR, stitching takes the most time. Obviously, I care most about post, but there are also lots of production issues around a new lens like that. You’d need a lot of light to make it work well.”

Local Hero Post
For longtime Scratch users Local Hero Post, in Santa Monica, the move to begin grading and compositing in Assimilate Scratch VR was a no-brainer. “We were one of the very first American companies to own a Scratch when it was $75,000 a license,” says founder and head of imaging Leandro Marini. “That was about 10 years ago and we’ve since done about 175 feature film DIs entirely in Scratch, and although we also now use a variety of tools, we still use it.”

Leandro Marini

Marini says he started seeing client demand for VR projects about two years ago and turned to Scratch VR. He says it allows users to do traditional post the way editors and colorists are used to, “with all the same DI tools that let you do complicated paint outs, visual effects and 50-layer-deep color corrections, Power Windows, in realtime on a VR sphere.”

New Deal Studios’ 2015 Sundance film Kaiju Fury was an early project, “when Scratch VR was first really user-friendly and working in realtime.” Now Marini says their VR workflow is “pretty robust. [It’s] currently the only system that I know of that can work in VR in realtime in multiple ways,” which include an equirectangular projection that gives you a YouTube 360-type of feel, and an Oculus headset view.

“You can attach the headset, put the Oculus on and grade and do visual effects in the headset,” he says. “To me, that’s the crux: you really have to be able to work inside the headset if you are going to grade and do VR for real. The difference between seeing a 360 video on a computer screen and seeing it from within a headset and being able to move your head around is huge. Those headsets have wildly different colors than a computer screen.”

The facility’s — and likely the industry’s — highest profile and biggest budget project to date is Invisible, a new VR scripted miniseries directed by Doug Liman and created by 30 Ninjas, the VR company he founded with Julina Tatlock. Invisible premiered in October on Samsung VR and the Jaunt app and will roll out in coming months in VR theaters nationwide. Written by Dallas Buyers Club screenwriter Melisa Wallack and produced by Jaunt and Condé Nast Entertainment, it is billed as the first virtual reality action-adventure series of its kind.

‘Invisible’

“Working on that was a pretty magical experience,” says Marini. “Even the producers and Liman himself had never seen anything like being able to do the grade, do VFX and do composite and stereo fixes in 3D virtual reality all with the headset on. That was our initial dilemma for this project, until we figured it out: do you make it look good for the headset, for the computer screen or for iPhones or Samsung phones? Everyone who worked on this understood that every VR project we do now is in anticipation of the future wave of VR headsets. All we knew was that about a third would probably see it on a Samsung Gear VR, another third would see it on a platform like YouTube 360 and the final third would see it on some other headset like Oculus Rift, HTC or Google’s new Daydream.”

How do you develop a grading workflow that fits all of the above? “This was a real tricky one,” admits Marini. “It’s a very dark and moody film and he wanted to make a family drama thriller within that context. A lot of it is dark hallways and shadows and people in silhouette, and we had to sort of learn the language a bit.”

Marini and his team began exclusively grading in the headset, but that was way too dark on computer monitors. “At the end of the day, we learned to dial it back a bit and make pretty conservative grades that worked on every platform so that it looked good everywhere. The effect of the headset is it’s a light that’s shining right into your eyeball, so it just looks a lot brighter. It had to still look moody inside the headset in a dark room, but not so moody that it vanishes on a laptop in a bright room. It was a balancing act.”

Local Hero

Local Hero also had to figure out how to juggle the new VR work with its regular DI workload. “We had to break off the VR services into a separate bay and room that is completely dedicated to it,” he explains. “We had to slice it off from the main pipeline because it needs around-the-clock custom attention. Very quickly we realized we needed to quarantine this workflow. One of our colorists here has become a VR expert, and he’s now the only one allowed to grade those projects.” The facility upgraded to a Silverdraft Demon workstation with specialized storage to meet the exponential demand for processing power and disk space.

Marini says Invisible, like the other VR work Local Hero has done before it, is, in essence, a research project in these early days of immersive content. “There is no standard color space or headset or camera. And we’re still in the prototype phase of this. While we are in this phase, everything is an experiment. The experience of being in 3D space is interesting, but the quality of what you’re watching is still very, very low resolution. The color fidelity relative to what we’re used to in the theater and on 4K HDR televisions is like 1980s VHS quality. We’re still very far away from truly excellent VR.”

Scratch VR workflows on Invisible included a variety of complicated processes. “We did things like dimensionalizing 2D shots,” says Marini. “That’s complicated stuff. In 3D with the headset on, we would take a shot that was in 2D, draw a rough roto mask around the person, create a 3D field, pull their nose forward, push their eyes back, push the sky back — all in a matter of seconds. That is next-level stuff for VR post.”

Local Hero also used Scratch Web for reviews. “Moments after we finished a shot or sequence it was online and someone could put on a headset and watch it. That was hugely helpful. Doug was in London, Condé Nast in New York. Lexus was a sponsor of this, so their agency in New York was also involved. Jaunt is down the street from us here in Santa Monica. And there were three clients in the bay with us at all times.”

‘Invisible’

As such, there is no way to standardize a VR DI workflow, he says. “For Invisible, it was definitely all hands on deck and every day was a new challenge. It was 4K 60p stereo, so the amount of data we had to push — 4K 60p to both eyes — was unprecedented.” Strange stereo artifacts would appear for no apparent reason. “A bulge would suddenly show up on a wall and we’d have to go in there and figure out why and fix it. Do we warp it? Try something else? It was like that throughout the entire project: invent the workflow every day and fudge your way through. But that’s the nature of experimental technology.”

Will there be a watershed VR moment in the year ahead? “I think it all depends on the headsets, which are going to be like mobile phones,” he says. “Every six months there will be a new group of them that will be better and more powerful with higher resolution. I don’t think there will be a point in the future when everyone has a self-contained high-end headset. I think the more affordable headsets that you put your phone into, like Gear VR and Daydream, are the way most people will begin to experience VR. And we’re only 20 percent of the way there now. The whole idea of VR narrative content is completely unknown and it remains to be seen if audiences care and want it and will clamor for it. When they do, then we’ll develop a healthy VR content industry in Hollywood.”


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

Virtual Reality Roundtable

By Randi Altman

Virtual reality is seemingly everywhere, especially this holiday season. Just one look at your favorite electronics store’s website and you will find VR headsets from the inexpensive, to the affordable, to the “if I win the lottery” ones.

While there are many companies popping up to service all aspects of VR/AR/360 production, for the most part traditional post and production companies are starting to add these services to their menu, learning best practices as they go.

We reached out to a sampling of pros who are working in this area to talk about the problems and evolution of this burgeoning segment of the industry.

Nice Shoes Creative Studio: Creative director Tom Westerlin

What is the biggest issue with VR productions at the moment? Is it lack of standards?
A big misconception is that a VR production is like a standard 2D video/animation commercial production. There are some similarities, but it gets more complicated when we add interaction, different hardware options, realtime data and multiple distribution platforms. It actually takes a lot more time and man hours to create a 360 video or VR experience relative to a 2D video production.

Tom Westerlin

More development time needs to be scheduled for research, user experience and testing. We’re adding more stages to the overall production. None of this should discourage anyone from exploring a concept in virtual reality, but there is a lot of consideration and research that should be done in the early stages of a project. The lack of standards presents some creative challenges for brands and agencies considering a VR project. The hardware and software choices made for distribution can have an impact on the size of the audience you want to reach as well as the approach to build it.

The current landscape provides the following options:
• YouTube and Facebook can reach a ton of people with a 360 video, but offer limited VR functionality.
• A WebVR experience works within certain browsers, like Chrome or Firefox, but not others, limiting your audience.
• A custom app or experimental installation using the Oculus or HTC Vive allows for experiences with full interactivity, but presents the issue of audience limitations.
There is currently no one best way to create a VR experience. It’s still very much a time of discovery and experimentation.

What should clients ask of their production and post teams when embarking on their VR project?
We shouldn’t just apply what we’ve all learned from 2D filmmaking to the creation of a VR experience, so it is crucial to include the production, post and development teams in the design phase of a project.

Most clients today are coming from a point of view where many standard constructs of traditional production (quick camera moves or cuts, extreme close-ups) have negative physiological implications in VR (nausea, disorientation). Seemingly simple creative or design decisions can have huge repercussions on complexity, time, cost and the user experience. It’s important for clients to be open to telling a story in a different manner than they’re used to.

What is the biggest misconception about VR — content, process or anything relating to VR?
The biggest misconception is clients thinking that 360 video and VR are the same. As we’ve started to introduce this technology to our clients, we’ve worked to explain the core differences between these two very different experiences: VR is interactive and most of the time a fully CG environment, while 360 is video and, although immersive, a more passive experience. Each has its own unique challenges and rewards, so as we think about the end user’s experience, we can determine what will work best.

There’s also the misconception that VR will make you sick. If executed poorly, VR can make a user sick, but the right creative ideas executed with the right equipment can result in an experience that’s quite enjoyable and nausea free.

Nice Shoes’ ‘Mio Garden’ 360 experience.

Another misconception is that VR is capable of anything. While many may confuse VR and 360 and think an experience is limited to passively looking around, there are others who have bought into the hype and inflated promises of a new storytelling medium. That’s why it’s so important to understand the limitations of different devices at the early stages of a concept, so that creative, production and post can all work together to deliver an experience that takes advantage of VR storytelling, rather than falling victim to the limitations of a specific device.

The advent of affordable systems that are capable of interactivity, like the Google Daydream, should lead to more popular apps that show off a higher level of interactivity. Even sharing video of people experiencing VR while interacting with their virtual worlds could have a huge impact on the understanding of the difference between passively watching and truly reaching out and touching.

How do we convince people this isn’t stereo 3D?
In one word: Interactivity. By definition VR is interactive and giving the user the ability to manipulate the world and actually affect it is the magic of virtual reality.

Assimilate: CEO Jeff Edson

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue in VR is establishing straightforward workflows — from camera to delivery — and then, of course, delivery to what? Compared to a year ago, shooting 360/VR video today has taken big steps in ease of use because more people have experience doing it. But it is a LONG way from point and shoot. As integrated 360/VR video cameras come to market more and more, VR storytelling will become much more straightforward and creators can focus more on the story.

Jeff Edson

And then delivery to what? There are many online platforms for 360/VR video playback today: Facebook, YouTube 360 and others for mobile headset viewing, and then there is delivery to a PC for non-mobile headset viewing. The viewing perspective is different for all of these, which means extra work to ensure continuity on all the platforms. To cover all possible viewers one needs to publish to all. This is not an optimal business model, which is really the crux of this issue.

Can standards help with this? Standards as we have known them in the video world? Yes and no. The standards for 360/VR video are happening by default, such as equirectangular and cubic formats, and delivery formats like H.264, MOV and more. Standards would help, but they are not the limiting factor for growth. The market is not waiting on a defined set of formats because demand for VR is quickly moving forward. People are busy creating.

What should clients ask of their production and post teams when embarking on their VR project?
We hear from our customers that the best results come when the director, DP and post supervisor collaborate on the expectations for look and feel, as well as the possible creative challenges and resolutions. Experience and budget are big contributors too. A key issue is: what camera/rig requirements are needed for your targeted platform(s)? For example, how many cameras and what type of cameras (4K, 6K, GoPro, etc.), as well as lighting? And what about sound, which plays a key role in the viewer’s VR experience?

This Yael Naim mini-concert was posted in Scratch VR by Alex Regeffe at Neotopy.

What is the biggest misconception about VR — content, process or anything relating to VR?
I see two. One: The perception that VR is a flash in the pan, just a fad. What we see today is just the launch pad. The applications for VR are vast within entertainment alone, and then there is the extensive list of other markets like training and learning in such fields as medical, military, online universities, flight, manufacturing and so forth. Two: That VR post production is a difficult process. There are too many steps and tools. This definitely doesn’t need to be the case. Our Scratch VR customers are getting high-quality results within a single, simplified VR workflow.

How do we convince people this isn’t stereo 3D?
The main issue with stereo 3D is that it has really never scaled beyond a theater experience, whereas with VR it may end up being just the opposite. It’s unclear if VR can be a true theater experience beyond classical technologies like domes and simulators. 360/VR video in the near term is, in general, a short-form media play. It’s clear that sooner rather than later smartphones will be able to shoot 360/VR video as a standard feature, and usage will skyrocket overnight. And when that happens, the younger demographic will never shoot anything that is not 360. So the Snapchat/Instagram kinds of platforms will be filled with 360 snippets. VR headsets based upon mobile devices make the pure number of displays significant. The initial tethered devices are not insignificant in numbers, but with the next generation of higher-resolution and untethered devices, perhaps most significantly at a much lower price point, we will see the numbers become massive. None of this was ever the case with stereo 3D film/video.

Pixvana: Executive producer Aaron Rhodes

What is the biggest issue with VR productions at the moment? Is it lack of standards?
There are many issues with VR productions, and many of them are just growing pains: not being able to see a live stitch, how to direct without being in the shot, what to do about lighting — but these are all part of the learning curve and evolution of VR as a craft. Resolution and management of big data are the biggest issues I see on the set. Pixvana is all about resolution — it plays a key role in better immersion. Many of the cameras out there only master at 4K, and that just doesn’t cut it. But when they do shoot 8K and above, the data management is extreme. Don’t underestimate the responsibility you are giving to your DIT!

Aaron Rhodes

The biggest issue is this is early days for VR capture. We’re used to a century of 2D filmmaking and decade of high-definition capture with an assortment of camera gear. All current VR camera rigs have compromises, and will, until technology catches up. It’s too early for standards since we’re still learning and this space is changing rapidly. VR production and post also require different approaches. In some cases we have to unlearn what worked in standard 2D filmmaking.

What should clients ask of their production and post teams when embarking on their VR project?
Give me a schedule, and make it realistic. Stitching takes time, and unless you have a fleet of render nodes at your disposal, rendering your shot locally is going to take time — and everything you need to update or change it will take more time. VR post has lots in common with a non-VR spot, but the magnitude of data and rendering is much greater — make sure you plan for it.

Other questions to ask, because you really can’t ask enough:
• Why is this project being done as VR?
• Does the client have team members who understand the VR medium?
• If not will they be willing to work with a production team to design and execute with VR in mind?
• Has this project been designed for VR rather than just a 2D project in VR?
• Where will this be distributed? (Headsets? Which ones? YouTube? Facebook? Etc.)
• Will this require an app or will it be distributed to headsets through other channels?
• If it is an app, who will build the app and submit it to the VR stores?
• Do they want to future proof it by finishing greater than 4K?
• Is this to be mono or stereo? (If it’s stereo it better be very good stereo)
• What quality level are they aiming for? (Seamless stitches? Good stereo?)
• Is there time and budget to accomplish the quality they want?
• Is this to have spatialized audio?

What is the biggest misconception about VR — content, process or anything relating to VR?
VR is a narrative component, just like any actor or plot line. It’s not something that should just be done to do it. It should be purposeful to shoot VR. It’s the same with stereo. Don’t shoot stereo just because you can — sure, you can experiment and play (we need to do that always), but don’t without purpose. The medium of VR is not for every situation.
Other misconceptions, because there are a lot out there:
• it’s as easy as shooting normal 2D.
• you need to have action going on constantly in 360 degrees.
• everything has to be in stereo.
• there are fixed rules.
• you can simply shoot with a VR camera and it will be interesting, without any idea of specific placement, story or design.
How do we convince people this isn’t stereo 3D?
Education. There are tiers of immersion with VR, and stereo 3D is one of them. I see these tiers starting with the desktop experience and going up in immersion from there, and it’s important to understand the strengths and weaknesses of each:
• YouTube/Facebook on the desktop [low immersion]
• Cardboard, GearVR, Daydream 2D/3D low-resolution
• Headset Rift and Vive 2D/3D 6 degrees of freedom [high immersion]
• Computer generated experiences [high immersion]

Maxon US: President/CEO Paul Babb

Paul Babb

What is the biggest issue with VR productions at the moment? Is it lack of standards?
Project file size. Huge files. Lots of pixels. Telling a story. How do you get the viewer to look where you want them to look? How do you tell and drive a story in a 360 environment?

What should clients ask of their production and post teams when embarking on their VR project?
I think it’s more that production teams are going to have to ask the questions to focus what clients want out of their VR. Too many companies just want to get into VR (buzz!) without knowing what they want to do, what they should do and what the goal of the piece is.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
Oh boy. Let me tell you, that’s a tough one. People don’t even know that “3D” is really “stereography.”

Experience 360°: CEO Ryan Moore

What is the biggest issue with VR productions at the moment? Is it lack of standards?
One of the biggest issues plaguing the current VR production landscape is the lack of true professionals in the field. While a vast majority of independent filmmakers are doing their best to adapt their current techniques, they have been unsuccessful in perceiving how films and VR experiences genuinely differ. This apparent lack of virtual understanding generally leads to poor UX creation within finalized VR products.

Given the novelty of virtual reality and 360 video, standards are only just being determined in terms of minimum quality and image specifications. These, however, are constantly changing. In order to keep a finger on the pulse, VR companies are encouraged to stay plugged into 360 video communities through social media platforms. It is through this essential interaction that VR production technology can continually be reintroduced.

What should clients ask of their production and post teams when embarking on their VR project?
When first embarking on a VR project, it is highly beneficial to walk prospective clients through the entirety of the process, before production actually begins. This allows the client a full understanding of how the workflow is used, while also ensuring client satisfaction with the eventual partnership. It’s vital that production partners convey an ultimate understanding of VR and its use, and explain their tactics in “cutting” VR scenes in post — this can affect the user’s experience in a pronounced way.

‘The Backwoods Tennessee VR Experience’ via Experience 360.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people that this isn’t stereo 3D?
The biggest misconception about VR and 360 video is that it is an offshoot of traditional storytelling and can be used in ways similar to both the cinematic and documentary worlds. The mistake in equating the two is that it can often limit the potential of the user’s experience to that of a voyeur only. Content producers need to think much farther outside this box and begin to embrace pairing images with interaction and interactivity. It helps to keep in mind that the intended user will feel as if these VR experiences are very personal to them, because they are usually isolated in an HMD when viewing the final product.

VR is being met with appropriate skepticism, and is still widely considered a “fad” within the media landscape. This is often because the critic has not actually had a chance to try a virtual reality experience firsthand, and does not understand the wide-reaching potential of immersive media. Three years in, a majority of adults in the United States have never had a chance to try VR themselves, relying on what they understand from TV commercials and online reviews. One of the best ways to convince a doubtful viewer is to give them a chance to try a VR headset themselves.

Radeon Technologies Group at AMD: Head of VR James Knight

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue for us is (or was) probably stitching and the excessive amount of time it takes, but we’re tackling that head-on with Project Loom. We have realtime stitching with Loom. You can already download an early version of it on GPUopen.com. But you’re correct, there is a lack of standards in VR/360 production. It’s mainly because there are no real established common practices. That’s to be expected, though, when you’re shooting for a new medium. Hollywood and entertainment professionals are showing up to the space in a big way, so I suspect we’ll all be working out lots of the common practices on sets in 2017.

James Knight

What should clients ask of their production and post teams when embarking on their VR project?
Double check they have experience shooting 360 and ask them for a detailed post production pipeline outline. Occasionally, we hear horror stories of people awarding projects to companies that think they can shoot 360 without having personally explored 360 shooting themselves and making mistakes. You want to use an experienced crew that’s made the mistakes, and mostly is cognizant of what works and what doesn’t. The caveat there though is, again, there’s no established rules necessarily, so people should be willing to try new things… sometimes it takes someone not knowing they shouldn’t do something to discover something great, if that makes sense.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
That’s a fun question. The overarching misconception for me, honestly, is that, just as a cliché politician might make a fleeting judgment that video games are bad for society, people oftentimes assume that VR is for kids or 16-year-old boys at home in their boxer shorts. It isn’t. This young industry is really starting to build up a decent library of content, and the payoff is huge when you see well-produced content! It’s transformative, and you can genuinely envision the potential when you first put on a VR headset.

The biggest way to convince them this isn’t 3D is to get a naysayer to put the headset on… let’s agree we all look rather silly with a VR headset on, and once you get over that, you’ll find out what’s inside. It’s magical. I had the CEO of BAFTA LA, Chantal Rickards, tell me upon seeing VR for the first time, “I remember when my father had arrived home on Christmas Eve with a color TV set in the 1960s and the excitement that brought to me and my siblings. The thrill of seeing virtual reality for the first time was like seeing color TV for the first time, but times 100!”

Missing Pieces: Head of AR/VR/360 Catherine Day

Catherine Day

What is the biggest issue with VR productions at the moment?
The biggest issue with VR production today is the fact that everything keeps changing so quickly. Every day there’s a new camera, a new set of tools, a new proprietary technology and new formats to work with. It’s difficult to understand how all of these things work, and even harder to make them work together seamlessly in a deadline-driven production setting. So much of what is happening on the technology side of VR production is evolving very rapidly. Teams often reinvent the wheel from one project to the next as there are endless ways to tell stories in VR, and the workflows can differ wildly depending on the creative vision.

The lack of funding for creative content is also a huge issue. There’s ample funding to create in other mediums, and we need more great VR content to drive consumer adoption.

Is it lack of standards?
In any new medium and any pioneering phase of an industry, it’s dangerous to create standards too early. You don’t want to stifle people from trying new things. As an example, with our recent NBA VR project, we broke all of the conventional rules that exist around VR — there was a linear narrative, fast-cut edits, and it was over 25 minutes long — yet it was still very well received. So it’s not a lack of standards, just a lack of bravery.

What should clients ask of their production and post teams when embarking on their VR project?
Ask to see what kind of work the team has done in the past. They should also delve in and find out exactly who completed the work and how much, if any, of it was outsourced. There is a curtain between the client and the production/post company that often closes once the work is awarded. Clients need to know exactly who is working on their project, as much of the legwork involved in creating a VR project — stitching, compositing, etc. — is outsourced.

It’s also important to work with a very experienced post supervisor — one with a very discerning eye. You want someone who really knows VR that can evaluate every aspect of what a facility will assemble. Everything from stitching, compositing to editorial and color — the level of attention to detail and quality control for VR is paramount. This is key not only for current releases, but as technology evolves — and as new standards and formats are applied — you want your produced content to be as future-proofed as possible so that if it requires a re-render to accommodate a new, higher-res format in the future, it will still hold up and look fantastic.

What is the biggest misconception about VR — content, process or anything relating to VR?
On the consumer level, the biggest misconception is that people think that 360 video on YouTube or Facebook is VR. Another misconception is that regular filmmakers are the creative talents best suited to create VR content. Many of them are great at it, but traditional filmmakers have the luxury of being in control of everything, and in a VR production setting you have no box to work in and you have to think about a billion moving parts at once. So it either requires a creative that is good with improvisation, or a complete control freak with eyes in the back of their head. It’s been said before, but film and theater are as different as film and VR. Another misconception is that you can take any story and tell it in VR — you actually should only embark on telling stories in VR if they can, in some way, be elevated through the medium.

How do we convince people this isn’t stereo 3D?
With stereo 3D, there was no simple, affordable path for consumer adoption. We’re still getting there with VR, but today there are a number of options for consumers and soon enough there will be a demand for room-scale VR and more advanced immersive technologies in the home.

VR Audio: Virtual and spatial soundscapes

By Beth Marchant

The first things most people think of when starting out in VR is which 360-degree camera rig they need and what software is best for stitching. But virtual reality is not just a Gordian knot for production and post. Audio is as important — and complex — a component as the rest. In fact, audio designers, engineers and composers have been fascinated and challenged by VR’s potential for some time and, working alongside future-looking production facilities, are equally engaged in forging its future path. We talked to several industry pros on the front lines.

Howard Bowler

Music industry veteran and Hobo Audio founder Howard Bowler traces his interest in VR back to the groundbreaking film Avatar. “When that movie came out, I saw it three times in the same week,” he says. “I was floored by the technology. It was the first time I felt like you weren’t just watching a film, but actually in the film.” As close to virtual reality as 3D films had gotten to that point, it was the blockbuster’s evolved process of motion capture and virtual cinematography that ultimately delivered its breathtaking result.

“Sonically it was extraordinary, but visually it was stunning as well,” he says. “As a result, I pressed everyone here at the studio to start buying 3D televisions, and you can see where that has gotten us — nowhere.” But a stepping stone in technology is more often a sturdy bridge, and Bowler was not discouraged. “I love my 3D TVs, and I truly believe my interest in that led me and the studio directly into VR-related projects.”

When discussing the kind of immersive technology Hobo Audio is involved with today, Bowler — like others interviewed for this series — clearly defines VR’s parallel deliverables. “First, there’s 360 video, which is passive viewing, but still puts you in the center of the action. You just don’t interact with it. The second type, more truly immersive VR, lets you interact with the virtual environment as in a video game. The third area is augmented reality,” like the Pokemon Go phenomenon of projecting virtual objects and views onto your actual, natural environment. “It’s really important to know what you’re talking about when discussing these types of VR with clients, because there are big differences.”

With each segment comes related headsets, lenses and players. “Microsoft’s HoloLens, for example, operates solely in AR space,” says Hobo producer Jon Mackey. “It’s a headset, but will project anything that is digitally generated, either on the wall or to the space in front of you. True VR separates you from all that, and really good VR separates all your senses: your sight, your hearing and even touch and feeling, like some of those 4D rides at Disney World.” Which technology will triumph? “Some think VR will take it, and others think AR will have wider mass adoption,” says Mackey. “But we think it’s too early to decide between either one.”

Boxed Out

‘Boxed Out’ is a Hobo indie project about how gentrification is affecting artists’ studios in the Gowanus section of Brooklyn.

Those kinds of end-game obstacles are beside the point, says Bowler. “The main reason why we’re interested in VR right now is that the experiences, beyond the limitations of whatever headset you watch it on, are still mind-blowing. It gives you enough of a glimpse of the future that it’s incredible. There are all kinds of obstacles it presents just because it’s new technology, but from our point of view, we’ve honed it to make it pretty seamless. We’re digging past a lot of these problem areas, so at least from the user standpoint, it seems very easy. That’s our goal. Down the road, people from medical, education and training are going to need to understand VR for very productive reasons. And we’re positioning ourselves to be there on behalf of our clients.”

Hobo’s all-in commitment to VR has brought changes to its services as well. “Because VR is an emerging technology, we’re investing in it globally,” says Bowler. “Our company is expanding into complete production, from concepting — if the client needs it — to shooting, editing and doing all of the audio post. We have the longest experience in audio post, but we find that this is just such an exciting area that we wanted to embrace it completely. We believe in it and we believe this is where the future is going to be. Everybody here is completely on board to move this forward and sees its potential.”

To ramp up on the technology, Hobo teamed up with several local students who were studying at specialty schools. “As we expanded out, we got asked to work with a few production companies, including East Coast Digital and End of Era Productions, that are doing the video side of it. We’re bundling our services with them to provide a comprehensive set of services.” Hobo is also collaborating with Hidden Content, a VR production and post production company, to provide 360 audio for premium virtual reality content. Hidden Content’s clients include Samsung, 451 Media, Giant Step, PMK-BNC, Nokia and Popsugar.

There is still plenty of magic sauce in VR audio that continues to make it a very tricky part of the immersive experience, but Bowler and his team are engineering their way through it. “We’ve been developing a mixing technique that allows you to tie the audio to the actual object,” he says. “What that does is disrupt the normal stereo mix. Say you have a public speaker in the center of the room; normally that voice would turn with you in your headphones if you turn away from him. What we’re able to do is to tie the audio of the speaker to the actual object, so when you turn your head, it will pan to the right earphone. That also allows you to use audio as signaling devices in the storyline. If you want the viewer to look in a certain direction in the environment, you can use an audio cue to do that.”
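The general idea Bowler describes — keeping a sound anchored to a fixed position in the scene so it pans across the ears as the head turns — can be illustrated with a minimal sketch. This is not Hobo’s actual mixing technique, just constant-power stereo panning driven by head yaw; the function name and sign conventions here are made up for illustration.

```python
import numpy as np

def head_relative_pan(mono, source_az_deg, head_yaw_deg):
    """Pan a mono signal to stereo based on where the source sits
    relative to the listener's current head orientation (illustrative only)."""
    # Azimuth of the source as the listener currently hears it
    rel_az = np.radians(source_az_deg - head_yaw_deg)
    # Constant-power pan: 0 = straight ahead (equal in both ears),
    # +90 degrees = fully right, -90 degrees = fully left
    pan = np.clip(np.sin(rel_az), -1.0, 1.0)      # -1 (left) .. +1 (right)
    theta = (pan + 1.0) * np.pi / 4.0             # 0 .. pi/2
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)

# Example: a speaker fixed 30 degrees to the listener's left; as the
# listener turns toward it, the voice moves back toward the center.
tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
frame = head_relative_pan(tone, source_az_deg=-30, head_yaw_deg=0)
```

A real VR mix would typically use binaural rendering or ambisonic rotation rather than simple stereo panning, but the head-relative geometry is the same.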

Hobo engineer Diego Jimenez drove a lot of that innovation, says Mackey. “He’s a real VR aficionado and just explored a lot of the software and mixing techniques required to do audio in VR. We started out just doing a ton of tests and they all proved successful.” Jimenez was always driven by new inspiration, notes Bowler. “He’s certainly been leading our sound design efforts on a lot of fronts, from creating instruments to creating all sorts of unusual and original sounds. VR was just the natural next step for him, and for us. For example, one of the spots that we did recently was to create a music video and we had to create an otherworldly environment. And because we could use our VR mixing technology, we could also push the viewer right into the experience. It was otherworldly, but you were in that world. It’s an amazing feeling.”

boxed-out

‘Boxed Out’

What advice do Bowler and Mackey have for those interested in VR production and post? “360 video is to me the entry point to all other versions of immersive content,” says Bowler. “It’s the most basic, and it’s passive, like what we’re used to — television and film. But it’s also a completely undefined territory when it comes to production technique.” So what’s the way in? “You can draw on some of the older ways of doing productions,” he says, “but how do you storyboard in 360? Where does the director sit? How do you hide the crew? How do you light this stuff? All of these things have to be considered when creating 360 video. That also includes everyone on camera: all the viewer has to do is look around the virtual space to see what’s going on. You don’t want anything that takes the viewer out of that experience.”

Bowler thinks 360 video is also the perfect entry point to VR for marketers and advertisers creating branded VR content, and Hobo’s clients agree. “When we’ve suggested 360 video on certain projects and clients want to try it out, what that does is it allows the technology to breathe a little while it’s underwritten at the same time. It’s a good way to get the technology off the ground and also to let clients get their feet wet in it.”

Any studio or client contemplating VR, adds Mackey, should first find what works for them and develop an efficient workflow. “This is not really a solidified industry yet,” he says. “Nothing is standard, and everyone’s waiting to see who comes out on top and who falls by the wayside. What’s the file standard going to be? Or the export standard?  Will it be custom-made apps on (Google) YouTube or Facebook? We’ll see Facebook and Google battle it out in the near term. Facebook has recently acquired an audio company to help them produce audio in 360 for their video app and Google has the Daydream platform,” though neither platform’s codec is compatible with the other, he points out. “If you mix your audio to Facebook audio specs, you can actually have your audio come out in 360. For us, it’s been trial and error, where we’ve experimented with these different mixing techniques to see what fits and what works.”

Still, Bowler concedes, there is no true business yet in VR. “There are things happening and people getting things out there, but it’s still so early in the game. Sure, our clients are intrigued by it, but they are still a little mystified by what the return will be. I think this is just part of what happens when you deal with new technology. I still think it’s a very exciting area to be working in, and it wouldn’t surprise me if it doesn’t touch across many, many different subjects, from history to the arts to original content. Think about applications for geriatrics, with an aging population that gets less mobile but still wants to experience the Caribbean or our National Parks. The possibilities are endless.”

At one point, he admits, it may even become difficult to distinguish one’s real memory from one’s virtual memory. But is that really such a bad thing? “I’m already having this problem. I was watching an immersive video of Cuban music that was pretty beautifully done, and by the end of the five-minute spot, I had the visceral experience that I was actually there. It’s just a very powerful way of experiencing content. Let me put it another way: 3D TVs were at the rabbit hole, and immersive video will take you down the rabbit hole into the other world.”

Source Sound
LA-based Source Sound has provided supervision and sound design on a number of Jaunt-produced cinematic VR experiences, including a virtual fashion show, a horror short and a Godzilla short film written and directed by Oscar-winning VFX artist Ian Hunter, as well as final Atmos audio mastering for the early immersive release Sir Paul McCartney Live. The studio is ready for the spatial mixes to come. That wasn’t initially the case.


Tim Gedemer

“When Jaunt first got into this space three years ago, they went to Dolby to try to figure out the audio component,” says Source Sound owner/supervising sound designer/editor Tim Gedemer. “I got a call from Dolby, who told me about what Jaunt was doing, and the first thing I said was, ‘I have no idea what you are talking about!’ Whatever it is, I thought, there’s really no budget and I was dragging my feet. But I asked them to show me exactly what they were doing. I was getting curious at that point.”

After meeting the team at Jaunt, who strapped some VR goggles on him and showed him some footage, Gedemer was hooked. “It couldn’t have been more than 30 seconds in and I was just blown away. I took off the headset and said, ‘What the hell is this?! We have to do this right now.’ They could have reached out to a lot of people, but I was thrilled that we were able to help them by seizing the moment.”

Gedemer says Source Sound’s business has expanded in multiple directions in the past few years, and VR is still a significant part of the studio’s revenue. “People are often surprised when I tell them VR counts for about 15-20 percent of our business today,” he says. “It could be a lot more, but we’d have to allocate the studios differently first.”

With a background in mixing and designing sound for film, gaming and theatrical trailers, Gedemer and his studio have a very focused definition of immersive experiences, and it all includes spatial audio. “Stereo 360 video with mono audio is not VR. For us, there’s cinematic, live-action VR, then straight-up game development that can easily migrate into a virtual reality world and, finally, VR for live broadcast.” Mass adoption of VR won’t happen, he believes, until enterprise and job training applications jump on the bandwagon with entertainment. “I think virtual reality may also be a stopover before we get to a world where augmented reality is commonplace. It makes more sense to me that we’ll just overlay all this content onto our regular days, instead of escaping from one isolated experience to the next.”

On set for the European launch of the Nokia Ozo VR camera in London, which featured live musical performances captured in 360 VR.

For now, Source Sound’s VR work is completed in dedicated studios configured with gear for that purpose. “It doesn’t mean that we can’t migrate more into other studios, and we’re certainly evolving our systems to be dual-purpose,” he says. “About a year ago we were finally able to get a grip on the kinds of hardware and software we needed to really start coagulating this workflow. It was also clear from the beginning of our foray into VR that we needed to partner with manufacturers, like Dolby and Nokia. Both of those companies’ R&D divisions are on the front lines of VR in the cinematic and live broadcast space, with Dolby’s Atmos for VR and Nokia’s Ozo camera.”

What missing tools and technology have to be developed to achieve VR audio nirvana? “We delivered a wish list to Dolby, and I think we got about a quarter of the list,” he says. “But those guys have been awesome in helping us out. Still, it seems like just about every VR project that we do, we have to invent something to get us to the end. You definitely have to have an adventurous spirit if you want to play in this space.”

The work has already influenced his approach to more traditional audio projects, he says, and he now notices the lack of spatial sound everywhere. “Everything out there is a boring rectangle of sound. It’s on my phone, on my TV, in the movie theater. I didn’t notice it as much before, but it really pops out at me now. The actual creative work of designing and mixing immersive sound has realigned the way I perceive it.”

Main Image: One of Hobo’s audio rooms, where the VR magic happens.


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

 

VR Production: A roadmap for stereo 360, AR, VR and beyond

By Beth Marchant

It may still be the Wild West in the emerging virtual reality market, but adapting new and existing tools to recreate production workflows is nothing new for the curious and innovative filmmakers hungry for expanding ways to tell stories.

We asked directors at a large VR studio and at a nimble startup how they are navigating the formats, gear and new pipelines that come with the territory.

Patrick Meegan

Jaunt
Patrick Meegan was the first VR-centric filmmaker hired by Jaunt, a prolific producer of immersive content based in Los Angeles. Now a creative director and director of key content for the company, he will also be helping Jaunt refine and revamp its virtual reality app in the coming months. “I came straight from my MFA at USC’s interactive media program to Jaunt, so I’ve been doing VR since day one there. The nice thing about USC is it has a very robust research lab associated with the film school. I worked with a lot of prototype VR technology while completing my degree and shooting my thesis. I pretty much had a hacker mentality in graduate school but I wanted to work with an engineering and content company that was streamlining the VR process, and I found it here.”

Meegan shot with a custom camera system built with GoPro cameras on those first Jaunt shoots. “They had developed a really nice automated VR stitching and post workflow early on,” he says, “but I’d built my own 360 camera from 16 GoPros in grad school, so it wasn’t so dissimilar from what I was used to.” He’s since been shooting with the company’s purpose-built Jaunt One camera, a ground-up, modular design that includes a set of individual modules optimized with desirable features like global shutter, genlock for frame sync and improved dynamic range.

Focusing primarily on live-action 3D spherical video but publishing across platforms, Jaunt has produced a range of VR experiences to date that include Doug Liman’s longer-form cinematic serial Invisible (see VR Post) and short documentaries like Greenpeace’s A Journey to the Arctic and Camp4 Collective’s Home Turf: Iceland. The content is stored in the cloud, mostly to take advantage of scalable cloud-based rendering. “We’re always supporting every platform that’s out there but within the last year, to an increasing degree, we’re focusing more on the more fully immersive Oculus, HTC Vive, Gear VR and Google Daydream experiences,” says Meegan. “We’re increasingly looking at the specs and capabilities of those more robust headsets and will do more of that in 2017. But right now, we’re focused on the core market, which is 360 video.”


Invisible

When out on the VR directing jobs he bids on through Jaunt’s studios, Meegan typically shoots with a Jaunt One as his primary tool and rotates in other bespoke camera arrays as needed. “We’re still in a place where there is no one camera but many terrific options,” he says. “Jaunt One is a great baseline. But if you want to shoot at night or do aerial, you’ll need to consider any number of custom rigs that blend off-the-shelf cameras and components in different types of arrays. Volumetric and light field video are also on the horizon, as the headsets enable more interaction with the audience. So we’ll continue to work with a range of camera systems here at Jaunt to achieve those things.”

Meegan recently took the Jaunt One and a GoPro drone array to the Amazon Rain Forest to shoot a 10-minute 360-degree film for Conservation International, a non-profit organization with a trifold community, corporate partnership and research approach to saving our planet’s natural resources. An early version of the film screened this November in Marrakech during the UN’s Climate Change Conference and will be in wide release through the Jaunt app in January. “I’ve been impressed that there are real budgets out there for cause-based VR documentaries,” he says. “It’s a wonderful thing to infuse in the medium early on, as many did with HD and then 4K. Escaping into a nature-based experience is not an isolating thing — it’s very therapeutic, and most people will never have the means or inclination to go to these places in the first place.”

Pitched as a six-minute documentary, the piece showcases a number of difficult VR camera moves that ended up extending its run. “When we submitted 10-minute cuts to the clients, no one complained about length,” says Meegan. “They just wanted more. Probably half the piece is in motion. We did a lot of cable cams through the jungle, as if you are walking with the indigenous people who live there and encountering wildlife, and also a number of VR firsts, like vertical ascending and descending shots up along these massive trees.”

Tree climbing veterans from shows like Planet Earth were on hand to help set the rigs on high. “These were shots that would take three days to rig into a tree so we could deliver that magical float down through the layers of the forest with the camera. Plus, everything we had to bring into the jungle for the shoot had to fit on tiny planes and canoes. Due to weight limits, we had to cut back on some of the grip equipment we’d originally planned on bringing, like custom cases and riggings to carry and protect the gear from wildlife and the elements. We had to measure everything out to the gram.” Jaunt also customized the cable cam motors to slow down the action of the rigs. “In VR you want to move a lot slower than with a traditional camera so you get a comfortable feel of movement,” says Meegan. “Because people are looking around within the environment, you want to give them time to soak it all in.”

An example of the Jaunt camera at work – Let’s Go Mets!

The isolated nature of the shoot posed an additional challenge: how to keep the cameras rolling, with charging stations, for eight hours at a time. “We did a lot on the front end to determine the best batteries and data storage systems to make that happen,” he says. “And there were many lessons learned that we will start to apply to upcoming work. The post production was more about rig removal and compositing and less about stitching, so for these kinds of documentary shoots, that helps us put more of our resources into the production.”

The future of narrative VR, on the other hand, may have an even steeper learning curve. “What ‘Invisible’ starts to delve into,” explains Meegan, “is how do we tell a more elaborate, longer-form story in VR? Flash back to a year or so ago, when all we thought people could handle in the headset at one time was five or six minutes. At least as headsets get more comfortable — and eventually become untethered — people will become more engaged.” That wire, he believes, is one of VR’s biggest current drawbacks. “Once it goes away, and viewers are no longer reminded they are actually wearing technology, we can finally start to envision longer-form stories.”

As VR production technology matures, Meegan also sees an opening for less tech-savvy filmmakers to join the party. “This field still requires healthy hybrids of creative and technical people, but I think we are starting to see a shift in priorities more toward defining the grammar of storytelling in VR, not just the workflows. These questions are every bit as challenging as the technology, but we need all kinds of filmmakers to engage with them. Coming from a game-design program where you do a lot of iterations, like in visual effects and animation, I think now we can begin to similarly iterate with content.”

The clues to the future may already be in plain sight. “In VR, you can’t cut around performances the way you do when shooting traditional cinema,” says Meegan. “But there’s a lot we can learn from ambient performances in theater, like what the folks at Punchdrunk are doing with the Sleep No More immersive live theater experience in New York.” The same goes for the students he worked with recently at USC’s new VR lab, which officially opened this semester.

“I’m really impressed by how young people are able to think around stories in new ways, especially when they come to it without any preconceived notions about the traditional structure of filmmaker-driven perspectives. If we can engage the existing community of cinematic and video game storytellers and get them talking to these new voices, we’ll get the best of both worlds. Our Amazon project reflected that; it was a true blend of veteran nature filmmakers and young kid VR hackers all coming together to tell this beautiful story. That’s when you start to get a really nice dialog of what’s possible in the space.”

Wairua
A former pro skateboarder, director of photography and post pro Jim Geduldick thrives on high-stakes obstacles on the course and on set. He combined both passions as the marketing manager of GoPro’s professional division until this summer, when he returned to his filmmaking roots and co-founded the creative production and technology company Wairua. “In the Maori tradition, the term wairua means a spirit not bound to one body or vessel,” he explains. “It fits the company perfectly because we want to pivot and shape shift. While we’re doing traditional 2D, mixed reality and full-on immersive production, we didn’t want to be called just another VR studio or just a technology studio. If we want to work on robotics and AI for a project, we’ll do that. If we’re doing VR or camera tech, it gives us leeway to do that without being pegged as a service, post or editorial house. We didn’t want to get pigeonholed into a single vertical.”

With his twinned background in camera development and post, Geduldick takes a big-picture approach to every job. “My partner and I both come from working for camera manufacturers, so we know the process that it takes to create the right builds,” he says. “A lot of times we have to build custom solutions for different jobs, whether that be high-speed Phantom set-ups or spherical multicam capture. It leaves us open to experiment with a blend of all the new technology out there, from VR to AR to mixed reality to AI to robotics. But we’re not just one piece of the puzzle; knowing capture through the post pipeline to delivery, we can scale to fit whatever the project needs. And it’s inevitable — the way people are telling stories and will want to tell them will drastically change in the next 10 years.”

Jim Geduldick with a spherical GoPro rig.

Early clients like Ford Motors are already fans of Wairua’s process. One of the new company’s first projects was to bring rallycross racer Ken Block of the Hoonigan Racing Division and his viral Gymkhana video series to VR. The series features Block driving the Ford Focus RS RX rallycross car, which he helped design and drove in the 2016 FIA World Rallycross Championship, against the clock on a racing obstacle course, explaining how he performs extreme stunts like the “insane” and the “train drift” along the way. Part one of Gymkhana Nine VR is now available via the Ford VR app for iOS and Android.

“Those brands that are focused on a younger market are a little more willing to take risks with new content like VR,” Geduldick says. “We’re doing our own projects to test our theories and our own internal pipelines, and some of those we will pitch to our partners in the future. But the clients who are already reaching out to us are doing so through word of mouth, partly because of our technical reputations but mostly because they’ve seen some of our successful VR work.” Guiding clients during the transition to VR begins with the concept, he says. “Often they are not sure what they want, and you have to consult with them and say, ‘This is what’s available. Are you going for social reach, or do you want to push the technology as far as it will go?’ Budgets, of course, determine most of that. If it’s not for a headset experience, it’s usually going to a platform or a custom app.”

Wairua’s kit, as you might expect, is a mix of custom tools and off-the-shelf camera gear and software. “We’re using GoPro cameras and the GoPro Odyssey, which is a Google Jump-ready rig, as well as the Nokia Ozo and other cameras and making different rigs,” he says. “If you’re shooting an interview, maybe you can get away with shooting it single camera on a panohead with one Red Epic with a fisheye lens or a Sony A7s ii. I choose camera systems based on what is the best for the project I’m working on at that moment.”

His advice for seasoned producers and directors — and even film students — transitioning to VR is try before you buy. “Go ahead and purchase the prosumer-level cameras, but don’t worry about buying the bigger spherical capture stuff. Go rent them or borrow them, and test, test, test. So many of the rental houses have great education nights to get you started.”

This shot of NYC was captured with a spherical array on top of the Empire State Building.

Once you know where your VR business is headed, he suggests, it’s time to invest. “Because of the level that we’re at, we’ve purchased a number of different camera systems, such as Red Epic, Phantom, tons of GoPros and even a Ricoh Theta S camera, which is the perfect small spherical camera for scouting locations. That one is with me in my backpack every time I’m out.”

Geduldick is also using The Foundry’s Cara VR plug-in with Nuke, Kolor’s Autopano Video Pro and Chris Bobotis’s Mettle plug-in for Adobe After Effects. “If you’re serious about VR post and doing your own stitching, and you already use After Effects, Mettle is a terrific thing to have,” he says. A few custom tetrahedral and ambisonic microphones made by the company’s sound design partners and used in elaborate audio arrays, as well as the more affordable Sennheiser Ambeo VR mic, are among Wairua’s go-to audio recording gear. “The latter is one of the more cost-effective tools for spatial audio capture,” says Geduldick.

The idea of always mastering to the best high-resolution archival format available to you still holds true for VR production, he adds. “Do you shoot in 4K just to future-proof it, even if it’s more expensive? That’s still the case for 360 VR and immersive today. Your baseline should always be 4K and you should avoid shooting any resolution less than that. The headsets may not be at 4K resolution per eye yet, but it’s coming soon enough.”

Geduldick does not believe any one segment of expanded reality will take the ultimate prize. “I think it’s silly to create a horse race between augmented reality and virtual reality,” he says. “It’s all going to eventually meld together into immersive storytelling and immersive technology. The headsets are a stopgap. 360 video is a stopgap. They are gateways into what will be and can come in the next five to 10 years, even two years. Yes, some companies will disappear and others will be leaders. Facebook and Google have a lot of money behind them, and the game engine companies also have an advantage. But there is no king yet. There is no one camera or single piece of software that will solve all of our problems, and in my opinion, it’s way too soon to be labeling this a movement at all.”

Jim with a GoPro Omni on the Mantis Rover for Gymkhana.

That doesn’t mean that Wairua isn’t already looking well beyond the traditional entertainment marketing and social media space at the VR apps of tomorrow. “We are very excited about industrial, education and health applications,” Geduldick says. “Those are going to be huge, but the money is in advertising and entertainment right now, and the marketing dollars are paying for these new VR experiences. We’re using that income to go right back into R&D and to build these other projects that have the potential to really help people — like cancer patients, veterans and burn victims — and not just dazzle them.”

Geduldick’s advice for early adopters? Embrace failure, absorb everything and get on with it. “The takeaway from every single production you do, whether it be for VR or SD, is that you should be learning something new and taking that lesson with you to your next project,” he says. “With VR, there’s so much to learn — how the technology can benefit you, how it can hurt you, how it can slow you down as a storyteller and a filmmaker. Don’t listen to everybody; just go out and find out for yourself what works. What works for me won’t necessarily work for someone like Ridley Scott. Just get out there and experiment, learn and collaborate.”

Main Image: A Ford project via Wairua.


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

Missing Pieces hires head of VR/AR/360, adds VR director

Production company Missing Pieces has been investing in VR recently by way of additional talent. Catherine Day has joined the studio as head of VR/AR/360. She was most recently at Jaunt VR where she was executive producer/head of unscripted. VR director Sam Smith has also joined the company as part of its VR directing team.

This bi-coastal studio has a nice body of VR work under its belt. They are responsible for Dos Equis’ VR Masquerade and for bringing a president into VR with Bill Clinton’s Inside Impact series. They also created Follow My Lead: The Story of the NBA 2016 Finals, a VR sports documentary for the NBA and Oculus.

In her new role, Day (pictured) will drive VR/AR/360 efforts from the studio’s Los Angeles office and oversee several original VR series that will be announced jointly with WME and partners in the coming months. In her previous role at Jaunt VR, Day led projects for ABC News, RYOT/Huffington Post, Camp 4 Collective, XRez, Tastemade, Outside TV, Civic Nation and Conservation International.

Smith is a creative director and VR director who previously worked with MediaMonks on projects for Expedia, Delta, Converse and YT. He also has an extensive background in commercial visual effects and a deep understanding of post and VFX, which is helpful when developing VR/360 projects. He will also act as technical advisor.

New version of VideoStitch software for 360 video post

VideoStitch is offering a new version of its 360 video post software VideoStitch Studio, including support of ProRes and the H.265 codec, rig presets and feathering.

“With the new version of VideoStitch Studio we give professional 360 video content creators a great new tool that will save them a lot of valuable time during the post production process without compromising the quality of their output,” says Nicolas Burtey, CEO of VideoStitch.

VR pros are already using VideoStitch’s interactive high-resolution live preview as well as its rapid processing. With various new features, VideoStitch Studio 2.2 promises an easier and faster workflow. Support for ProRes ensures high quality and interoperability with third parties. Support for the H.265 codec widens the range of cameras that can be used with the software. Newly added rig presets allow for quick and automatic stitching with optimal calibration results. Feathering provides improved blending of the input videos. Also, audio and motion synchronization has been enhanced so that various inputs can be integrated flawlessly. Lastly, the software supports the latest Nvidia graphics cards, the GTX 10 series.
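Feathering, as referenced above, generally means cross-fading the overlapping columns of adjacent camera views with a gradual ramp rather than cutting at a hard seam. A rough numpy sketch of the concept (not VideoStitch’s implementation; the function name and overlap width are illustrative):

```python
import numpy as np

def feather_blend(left_img, right_img, overlap):
    """Blend two horizontally adjacent views whose last/first `overlap`
    columns cover the same part of the scene, using a linear alpha ramp
    instead of a hard seam."""
    ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]   # 1 -> 0 across the seam
    blended = left_img[:, -overlap:] * ramp + right_img[:, :overlap] * (1.0 - ramp)
    return np.concatenate(
        [left_img[:, :-overlap], blended, right_img[:, overlap:]], axis=1
    )

# Example with two random "camera" tiles sharing a 64-pixel overlap
a = np.random.rand(1080, 960, 3)
b = np.random.rand(1080, 960, 3)
pano = feather_blend(a, b, overlap=64)   # 1,856 columns wide
```

Production stitchers add exposure matching, warping and seam optimization on top of this, but the cross-fade is the basic mechanism that hides joins between lenses.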

VideoStitch Studio 2.2 is available for trial download at www.video-stitch.com. The full license costs $295.

Margarita Mix’s Pat Stoltz gives us the low-down on VR audio

By Randi Altman

Margarita Mix, one of Los Angeles’ long-standing audio and video post facilities, has taken on virtual reality with the addition of 360-degree sound rooms at their facilities in Santa Monica and Hollywood. This Fotokem company now offers sound design, mix and final print masters for VR video and remixing current spots for a full-surround environment.

Workflows for VR are new and developing every day — there is no real standard. So creatives are figuring it out as they go, but they can also learn from those who were early to the party, like Margarita Mix. They recently worked on a full-length VR concert film with the band Eagles of Death Metal and director/producer Art Haynie of Big Monkey Films. The band’s 2015 tour came to an abrupt end after playing the Bataclan concert hall during last year’s terrorist attacks in Paris. The film is expected to be available online and via apps shortly.

Eagles of Death Metal film.

We reached out to Margarita Mix’s senior technical engineer, Pat Stoltz, to talk about his experience and see how the studio is tackling this growing segment of the industry.

Why was now the right time to open VR-dedicated suites?
VR/AR is an exciting emerging market and online streaming is a perfect delivery format, but VR pre-production, production and post is in its infancy. We are bringing sound design, editorial and mixing expertise to the next level based on our long history of industry-recognized work, and elevating audio for VR from a gaming platform to one suitable for the cinematic and advertising realms where VR content production is exploding.

What is the biggest difference between traditional audio post and audio post for VR?
Traditional cinematic audio has always played a very important part in support of the visuals. Sound effects, Foley, background ambiance, dialog and music clarity to set the mood have aided in pulling the viewer into the story. With VR and AR you are not just pulled into the story, you are in the story! Having the ability to accurately recreate the audio of the filmed environment through higher order ambisonics, or object-based mixing, is crucial. Audio does not only play an important part in support of the visuals, but is now a director’s tool to help draw the viewer’s gaze to what he or she wants the audience to experience. Audio for VR is a critical component of storytelling that needs to be considered early in the production process.
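For readers new to the terminology, first-order ambisonics encodes a mono source into four spherical-harmonic channels (W, X, Y, Z) based on its direction of arrival; higher orders add more channels for sharper localization. A minimal sketch using the traditional B-format (FuMa) convention follows; channel ordering and weighting vary between formats, so treat this as illustrative only.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono source into first-order ambisonic B-format (W, X, Y, Z)
    for a given direction of arrival (FuMa-style weighting)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))        # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)     # front/back
    y = mono * np.sin(az) * np.cos(el)     # left/right
    z = mono * np.sin(el)                  # up/down
    return np.stack([w, x, y, z])

# A narrator placed 45 degrees to the left and slightly above the listener
voice = np.random.randn(48000) * 0.1
bformat = encode_foa(voice, azimuth_deg=45, elevation_deg=10)
```

At playback, the B-format field can be rotated to follow the viewer’s head and then decoded to headphones or speakers, which is what lets a sound stay pinned to its on-screen source.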

What is the question you are asked the most by clients in terms of sound for VR?
Surprisingly none! VR/AR is so new that directors and producers are just figuring things out as they go. On a traditional production set, you have audio mixers and boom operators capturing audio on set. On a VR/AR set, there is no hiding. No boom operators or audio mixers can be visible capturing high-quality audio of the performance.

Some productions have relied on the onboard camera microphones. Unfortunately, in most cases, this turns out to be completely unusable. When the client gets all the way to the audio post, there is a realization that hidden wireless mics on all the actors would have yielded a better result. In VR especially, we recommend starting the sound consultation in pre-production, so that we can offer advice and guide decisions for the best quality product.

What question should clients ask before embarking on VR?
They should ask what they want the viewer to get out of the experience. In VR, no two people are going to walk away with the same viewing experience. We recommend staying focused on the major points that they would like the viewer to walk away with. They should then expand that to answer: What do I have to do in VR to drive that point home, not only mentally, but drawing their gaze for visual support? Based on the genre of the project, considerations should be made to “physically” pull the audience in the direction to tell the story best. It could be through visual stepping stones, narration or audio pre-cues, etc.

What tools are you using on VR projects?
Because this is a nascent field, new tools are becoming available by the day, and we assess and use the best option for achieving the highest quality. To properly address this question, we ask: Where is your project going to be viewed? If the content is going to be distributed via a general Web streaming site, then it will need to be delivered in that audio file format.

There are numerous companies writing plug-ins that are quite good at delivering these formats. If you will be delivering to a site that supports Dolby VR (an object-based proprietary format), such as Jaunt, then you will need to generate the proper audio file for that platform. Facebook (higher-order ambisonics) requires yet another format. We are currently working in all of these formats, as well as working closely with leaders in VR sound to create and test new workflows and guide developments in this new frontier.

What’s the one thing you think everyone should know about working and viewing VR?
As we go through life, we each have our own experiences or what we choose to experience. Our frame of reference directs our focus on things that are most interesting to us. Putting on VR goggles, the individual becomes the director. The wonderful thing about VR is now you can take that individual anywhere they want to go… both in this world and out of it. Directors and producers should think about how much can be packed into a story to draw people into the endless ways they perceive their world.

Jaunt One pro VR camera available for rent from AbelCine

Thanks to an expanding rental plan, the Jaunt One cinematic VR camera is being made available through AbelCine, a provider of products and services to the production, broadcast and new media industries. AbelCine has locations in New York, Chicago and Los Angeles.

The Jaunt One 24G model camera — which features 24 global shutter sensors, is suited for low-light and fast-moving objects, and has the ability to couple with 360-degree ambisonic audio recording — will be available to rent from AbelCine. Creators will also have access to AbelCine’s training, workshops and educational tools for shooting in VR.

The nationwide availability of the Jaunt One camera, paired with access to the company’s end-to-end VR pipeline, provides filmmakers, creators and artists with the hardware and software (through Jaunt Cloud Services) solutions for shooting, producing and distributing immersive cinematic VR experiences (creators can submit high-quality VR content for distribution directly to the Jaunt VR app through the Jaunt Publishing program).

“As we continue to open the Jaunt pipeline to the expanding community of VR creators, AbelCine is a perfect partner to not only get the Jaunt One camera in the hands of filmmakers, but also to educate them on the opportunities in VR,” says Koji Gardiner, VP of hardware engineering at Jaunt. “Whether they’re a frequent experimenter of new mediums or a proven filmmaker dabbling in VR for the first time, we want to equip creators of all backgrounds with everything needed to bring their stories to life.”

Jaunt is also expanding its existing rental program with LA-based Radiant Images to increase the number of cameras available to their customers.

 

AMD’s Radeon Pro WX series graphics cards shipping this month

AMD is getting ready to ship the Radeon Pro WX Series of graphics cards, the company’s new workstation graphics solutions targeting creative pros. The Radeon Pro WX Series is AMD’s answer to the rise of realtime game engines in professional settings, the emergence of virtual reality, the popularity of new low-overhead APIs (such as DirectX 12 and Vulkan) and the rise of open-source tools and applications.

The Radeon Pro WX Series takes advantage of the Polaris architecture-based GPUs featuring fourth-generation Graphics Core Next (GCN) technology and engineered on the 14nm FinFET process. The cards have future-proof monitor support, are able to run a 5K HDR display via DisplayPort 1.4, include state-of-the-art multimedia IP with support for HEVC encoding and decoding and TrueAudio Next for VR, and feature cool and quiet operation with an emphasis on energy efficiency. Each retail Radeon Pro WX graphics card comes with 24/7, VIP customer support, a three-year limited warranty and now features a free, optional seven-year extended limited warranty upon product and customer registration.

Available November 10 for $799, the Radeon Pro WX 7100 graphics card offers 5.7 TFLOPS of single precision floating point performance in a single slot, and is designed for professional VR content creators. Equipped with 8GB GDDR5 memory and 36 compute units (2304 Stream Processors) the Radeon Pro WX 7100 is targeting high-quality visualization workloads.

Also available on November 10, for $399, the Radeon Pro WX 4100 graphics card targets CAD professionals. The Pro WX 4100 breaks the 2 TFLOPS single precision compute performance barrier. With 4GB of GDDR5 memory and 16 compute units (1024 stream processors), users can drive four 4K monitors or a single 5K monitor at 60Hz, a feature which competing low-profile CAD-focused cards in its class can’t touch.

Available November 18 for $499, the Radeon Pro WX 5100 graphics card (pictured right) offers 3.9 TFLOPS of single precision compute performance while using just 75 watts of power. The Radeon Pro WX 5100 graphics card features 8GB of GDDR5 memory and 28 compute units (1792 stream processors) suited for high-resolution realtime visualization for industries such as automotive and architecture.
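For context, the quoted TFLOPS figures follow from the standard peak-throughput estimate of two FMA operations per clock per stream processor. A quick sanity check of the boost clocks implied by AMD’s published numbers (illustrative arithmetic only, not official clock specs; the WX 4100 is omitted because the article quotes only “over 2 TFLOPS” for it):

```python
# Peak FP32 throughput is commonly estimated as
#   TFLOPS = 2 (FMA ops per cycle per stream processor) x stream processors x clock (GHz) / 1000
# Rearranging gives the boost clock implied by each quoted figure.

CARDS = {
    # name: (stream processors, quoted peak FP32 TFLOPS)
    "Radeon Pro WX 7100": (2304, 5.7),
    "Radeon Pro WX 5100": (1792, 3.9),
}

for name, (sps, tflops) in CARDS.items():
    implied_ghz = tflops * 1e12 / (2 * sps) / 1e9
    print(f"{name}: ~{implied_ghz:.2f} GHz boost clock implied by {tflops} TFLOPS")
```

Running this gives roughly 1.24 GHz for the WX 7100 and 1.09 GHz for the WX 5100, consistent with the single-slot, 75-watt-class positioning described above.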

In addition, AMD recently introduced Radeon Pro Software Enterprise drivers, designed to combine AMD’s next-gen graphics with the specific needs of pro enterprise users. Radeon Pro Software Enterprise drivers offer predictable software release dates, with updates issued on the fourth Thursday of each calendar quarter, and feature prioritized support with AMD working with customers, ISVs and OEMs. The drivers are certified in numerous workstation applications covering the leading professional use cases.

AMD says it’s also committed to furthering open source software for content creators. Following news that later this year AMD plans to open source its physically-based rendering engine Radeon ProRender, the company recently announced that a future release of Maxon’s Cinema 4D application for 3D modeling, animation and rendering will support Radeon ProRender. Radeon ProRender plug-ins are available today for many popular 3D content creation apps, including Autodesk 3ds Max and Maya, and as beta plug-ins for Dassault Systèmes SolidWorks and Rhino. Radeon ProRender works across Windows, MacOS and Linux and supports AMD GPUs, CPUs and APUs as well as those of other vendors.

SMPTE: The convergence of toolsets for television and cinema

By Mel Lambert

While the annual SMPTE Technical Conferences normally put a strong focus on things visual, there is no denying that these gatherings offer a number of interesting sessions for sound pros from the production and post communities. According to Aimée Ricca, who oversees marketing and communications for SMPTE, pre-registration included “nearly 2,500 registered attendees hailing from all over the world.” This year’s conference, held at the Loews Hollywood Hotel and Ray Dolby Ballroom from October 24-27, also attracted more than 108 exhibitors in two exhibit halls.

Setting the stage for the 2016 celebration of SMPTE’s Centenary, opening keynotes addressed the dramatic changes that have occurred within the motion picture and TV industries during the past 100 years, particularly with the advent of multichannel immersive sound. The two co-speakers — SMPTE president Robert Seidel and filmmaker/innovator Doug Trumbull — chronicled the advances in audio playback since, respectively, the advent of TV broadcasting after WWII and the introduction of film soundtracks in 1927 with The Jazz Singer.

Robert Seidel

ATSC 3.0
Currently VP of CBS Engineering and Advanced Technology, with responsibility for TV technologies at CBS and the CW networks, Seidel headed up the team that assisted WRAL-HD, the CBS affiliate in Raleigh, North Carolina, to become the first TV station to transmit HDTV in July 1996.  The transition included adding the ability to carry 5.1-channel sound using Advanced Television Systems Committee (ATSC) standards and Dolby AC-3 encoding.

The 45th Grammy Awards Ceremony broadcast by CBS Television in February 2004 marked the first scheduled HD broadcast with a 5.1 soundtrack. The emergent ATSC 3.0 standard reportedly will provide increased bandwidth efficiency and compression performance. The drawback is the lack of backwards compatibility with current technologies, resulting in a need for new set-top boxes and TV receivers.

As Seidel explained, the upside for ATSC 3.0 will be immersive soundtracks, using either Dolby AC-4 or MPEG-H coding, together with audio objects that can carry alternate dialog and commentary tracks, plus other consumer features to be refined with companion 4K UHD, high dynamic range and high frame rate images. In June, WRAL-HD launched an experimental ATSC 3.0 channel carrying the station’s programming in 1080p with 4K segments, while in mid-summer South Korea adopted ATSC 3.0 and plans to begin broadcasts with immersive audio and object-based capabilities next February in anticipation of hosting the 2018 Winter Olympics. The 2016 World Series games between the Cleveland Indians and the Chicago Cubs marked the first live ATSC 3.0 broadcast of a major sporting event on experimental station Channel 31, with an immersive-audio simulcast on the Tribune Media-owned Fox affiliate WJW-TV.

Immersive audio will enable enhanced spatial resolution for 3D sound-source localization and therefore provide an increased sense of envelopment throughout the home listening environment, while audio “personalization” will include level control for dialog elements, alternate audio tracks, assistive services, other-language dialog and special commentaries. ATSC 3.0 also will support loudness normalization and contouring of dynamic range.
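A minimal sketch of what object-based “personalization” means in practice: the receiver sums a viewer-selected dialog or commentary object into the channel bed at a viewer-chosen level. This is conceptual only and does not reflect AC-4 or MPEG-H internals; the function name and signal shapes are made up for illustration.

```python
import numpy as np

def render_personalized_mix(bed, dialog_obj, dialog_gain_db=0.0, alt_dialog=None):
    """Illustrative object-audio personalization: pick a dialog track and a
    dialog level, then sum it into the channel bed at render time."""
    dialog = alt_dialog if alt_dialog is not None else dialog_obj
    gain = 10.0 ** (dialog_gain_db / 20.0)   # dB -> linear gain
    return bed + gain * dialog

# Example: boost an alternate-language commentary by 6 dB over a stereo bed
bed = np.zeros((2, 48000))
commentary = np.random.randn(2, 48000) * 0.05
mix = render_personalized_mix(bed, commentary, dialog_gain_db=6.0)
```

Because the dialog travels as a separate object with metadata rather than being baked into the bed, the same broadcast stream can serve listeners who want louder dialog, a different language or an assistive description track.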

Doug Trumbull

Higher Frame Rates
With a wide range of experience within the filmmaking and entertainment technologies, including visual effects supervision on 2001: A Space Odyssey, Close Encounters of the Third Kind, Star Trek: The Motion Picture and Blade Runner, Trumbull also directed Silent Running and Brainstorm, as well as special venue offerings. He won an Academy Award for his Showscan process for high-speed 70mm cinematography, helped develop IMAX technologies and now runs Trumbull Studios, which is innovating a new MAGI process to offer 4K 3D at 120fps. High production costs and a lack of playback environments meant that Trumbull’s Showscan format never really got off the ground, which was “a crushing disappointment,” he conceded to the SMPTE audience.

But meanwhile, responding to falling box office receipts during the ‘50s and ‘60s, Hollywood added more consumer features, including large-screen presentations and surround sound, although the movie industry also began to rely on income from the TV community for broadcast rights to popular cinema releases.

As Seidel added, “The convergence of toolsets for both television and cinema — including 2K, 4K and eventually 8K — will lead to reduced costs, and help create a global market around the world [with] a significant income stream.” He also said that “cord cutting” — replacing cable subscription services with Amazon.com, Hulu, iTunes, Netflix and the like — is bringing people back to over-the-air broadcasting.

Trumbull countered that TV will continue at 60fps “with a live texture that we like,” whereas film will retain its 24fps frame rate “that we have loved for years and which has a ‘movie texture.’ Higher frame rates for cinema, such as the 48fps used by Peter Jackson for the Hobbit films, have too much of a TV look. Showscan at 120fps and a 360-degree shutter avoided that TV look, which is considered objectionable.” (Early reviews of director Ang Lee’s upcoming 3D film Billy Lynn’s Long Halftime Walk, which was shot in 4K at 120fps, have been critical of its video look and feel.)

Next-Gen Audio for Film and TV
During a series of “Advances in Audio Reproduction” conference sessions, chaired by Chris Witham, director of digital cinema technology at Walt Disney Studios, three presentations covered key design criteria for next-generation audio for TV and film. During his discussion called “Building the World’s Most Complex TV Network — A Test Bed for Broadcasting Immersive & Interactive Audio,” Robert Bleidt, GM of Fraunhofer USA’s audio and multimedia division, provided an overview of a complete end-to-end broadcast plant that was built to test various operational features developed by Fraunhofer, Technicolor and Qualcomm. These tests were used to evaluate an immersive/object-based audio system based on MPEG-H for use in Korea during planned ATSC 3.0 broadcasting.

“At the NAB Convention we demonstrated The MPEG Network,” Bleidt stated. “It is perhaps the most complex combination of broadcast audio content ever made in a single plant, involving 13 different formats.” This includes mono, stereo, 5.1-channel and other sources. “The network was designed to handle immersive audio in both channel- and HOA-based formats, using audio objects for interactivity. Live mixes from a simulated sports remote were connected to a network operating center, with distribution to affiliates, and then sent to a consumer living room, all using the MPEG-H audio system.”

Bleidt presented an overview of system and equipment design, together with details of a critical AMAU (audio monitoring and authoring unit) that will be used to mix immersive audio signals using existing broadcast consoles limited to 5.1-channel assignment and panning.

Dr. Jan Skoglund, who leads a team at Google developing audio signal processing solutions, addressed the subject of “Open-source Spatial Audio Compression for VR Content,” including the importance of providing realistic immersive audio experiences to accompany VR presentations and 360-degree 3D video.

“Ambisonics have reemerged as an important technique in providing immersive audio experiences,” Skoglund stated. “As an alternative to channel-based 3D sound, Ambisonics represent full-sphere sound, independent of loudspeaker location.” His fascinating presentation considered the ways in which open-source compression technologies can transport audio for various types of next-generation immersive media. Skoglund compared the efficacy of several open-source codecs for first-order Ambisonics, as well as the progress being made toward higher-order Ambisonics (HOA) for VR content delivered via the internet, including the enhanced experience HOA provides.

Finally, Paul Peace, who oversees loudspeaker development for cinema, retail and commercial applications at JBL Professional — and designed the Model 9350, 9300 and 9310 surround units — discussed “Loudspeaker Requirements in Object-Based Cinema,” including a valuable in-depth analysis of the acoustic delivery requirements in a typical movie theater that accommodates object-based formats.

Peace is proposing the use of a new metric for surround loudspeaker placement and selection when the layout relies on venue-specific immersive rendering engines for Dolby Atmos and Barco Auro-3D soundtracks, with object-based overhead and side-wall channels. “The metric is based on three foundational elements as mapped in a theater: frequency response, directionality and timing,” he explained. “Current set-up techniques are quite poor for a majority of seats in actual theaters.”

Peace also discussed new loudspeaker requirements and layout criteria necessary to ensure more consistent sound coverage throughout such venues, so that they can more accurately replay the material being re-recorded on typical dub stages, which are often smaller and of different width/length/height dimensions than most multiplex environments.


Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

 

Ronen Tanchum brought on to run The Artery’s new AR/VR division

New York City’s The Artery has named Ronen Tanchum head of its newly launched virtual reality/augmented reality division. He will serve as creative director/technical director.

Tanchum has a rich VFX background, having produced complex effects set-ups and overseen digital tools development for feature films including Deadpool, Transformers, The Amazing Spider-Man, Happy Feet 2, Teenage Mutant Ninja Turtles and The Wolverine. He is also the creator of the original VR film When We Land: Young Yosef. His work on The Future of Music — a 360-degree virtual experience from director Greg Barth and Phenomena Labs, which immerses the viewer in a surrealist musical space — won the D&AD Silver Award in the “Best Branded Content” category in 2016.

“VR today stands at just the tip of the iceberg,” says Tanchum. “Before VR came along, we were just observers and controlled our worlds through a mouse and a keyboard. Through the VR medium, humans become active participants in the virtual world — we get to step into our own imaginations with a direct link to our brains for the first time, experiencing the first impressions of a virtual world. As creators, VR offers us a very powerful tool by which to present a unique new experience.”

Tanchum says the first thing he asks a potential new VR client is, ‘Why VR? What is the role of VR in your story?’ “Coming from our long experience in the CG world working on highly demanding creative visual projects, we at The Artery have evolved our collective knowledge and developed a strong pipeline for this new VR platform,” he explains, adding that The Artery’s new division is currently gearing up for a big VR project for a major brand. “We are using it to its fullest to tell stories. We inform our clients that VR shouldn’t be created just because it’s ‘cool.’ The new VR platform should be used to play an integral part of the storyline itself — a well-crafted VR experience should embellish and complement the story.”

 

AES Conference focuses on immersive audio for VR/AR

By Mel Lambert

The AES Convention, which was held at the Los Angeles Convention Center in early October, attracted a broad cross section of production and post professionals looking to discuss the latest technologies and creative offerings. The convention had approximately 13,000 registered attendees and more than 250 brands showing their wares in the exhibit halls and demo rooms.

Convention Committee co-chairs Valerie Tyler and Michael MacDonald, along with their team, created the comprehensive schedule of workshops, panels and special events for this year’s show. “The Los Angeles Convention Center’s West Hall was a great new location for the AES show,” said MacDonald. “We also co-located the AVAR conference, and that brought 3D audio for gaming and virtual reality into the mainstream of the AES.”

“VR seems to be the next big thing,” added AES executive director Bob Moses, “[with] the top developers at our event, mapping out the future.”

The two-day, co-located Audio for Virtual and Augmented Reality Conference was expected to attract about 290 attendees, but with aggressive marketing and outreach to the VR and AR communities, pre-registration closed at just over 400.

Aimed squarely at the fast-growing field of virtual/augmented reality audio, this conference focused on the creative process, applications workflow and product development. “Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” said conference co-chair Andres Mayo. “This conference demonstrates that convincing VR and AR productions require audio that follows the motions of the subject and produces a realistic immersive experience.”

Spatial sound that follows head orientation for headsets powered either by dedicated DSP, game engines or smartphones opens up exciting opportunities for VR and AR producers. Oculus Rift, HTC Vive, PlayStation VR and other systems are attracting added consumer interest for the coming holiday season. Many immersive-audio innovators, including DTS and Dolby, are offering variants of their cinema systems targeted at this booming consumer marketplace via binaural headphone playback.

Sennheiser’s remarkable new Ambeo VR microphone (pictured left) can be used to capture 3D sound that can then be post produced into different spatial perspectives — a perfect adjunct for AR/VR offerings. At the high end, Nokia unveiled its Ozo VR camera, equipped with eight camera sensors and eight microphones, as an alternative to a DIY assembly of GoPro cameras, for example.

Two fascinating keynotes bookended the AVAR Conference. The opening keynote, presented by Philip Lelyveld, VR/AR initiative program manager at the USC Entertainment Technology Center, Los Angeles, and called “The Journey into Virtual and Augmented Reality,” defined how virtual, augmented and mixed reality will impact entertainment, learning and social interaction. “Virtual, Augmented and Mixed Reality have the potential of delivering interactive experiences that take us to places of emotional resonance, give us agency to form our own experiential memories, and become part of the everyday lives we will live in the future,” he explained.

“Just as TV programming progressed from live broadcasts of staged performances to today’s very complex language of multithread long-form content,” Lelyveld stressed, “so such media will progress from the current early days of projecting existing media language with a few tweaks to a headset experience into a new VR/AR/MR-specific language that both the creatives and the audience understand.”

In his closing keynote, “Future Nostalgia, Here and Now: Let’s Look Back on Today from 20 Years Hence,” George Sanger, director of sonic arts at Magic Leap, attempted to predict where VR/AR/MR will be in two decades. “Two decades of progress can change how we live and think in ways that boggle the mind,” he acknowledged. “Twenty years ago, the PC had rudimentary sound cards, now the entire ‘multitrack recording studio’ lives on our computers. By 2036, we will be wearing lightweight portable devices all day. Our media experience will seamlessly merge the digital and physical worlds; how we listen to music will change dramatically. We live in the Revolution of Possibilities.”

According to conference co-chair Linda Gedemer, “It has been speculated by Wall Street [pundits] that VR/AR will be as game changing as the advent of the PC, so we’re in for an incredible journey!”

Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

IBC: Surrounded by sound

By Simon Ray

I came to the 2016 IBC Show in Amsterdam at the start of a period of consolidation at Goldcrest in London. We had just gone through three years of expansion, upgrading, building and installing. Our flagship Dolby Atmos sound mixing theatre finished its first feature, Jason Bourne, and the DI department recently upgraded to offer 4K and HDR.

I didn’t have a particular area to research at the show, but there were two things that struck me almost immediately on arrival: the lack of drones and the abundance of VR headsets.

Goldcrest’s Atmos mixing stage.

360 audio is an area I knew a little about, and we did provide a binaural DTS Headphone X mix at the end of Jason Bourne, but there was so much more to learn.

Happily, my first IBC meeting was with Fraunhofer, where I was updated on some of the developments they have made in production, delivery and playback of immersive and 360 sound. Of particular interest was their Cingo technology. This is a playback solution that lives in devices such as phones and tablets and can already be found in products from Google, Samsung and LG. This technology renders 3D audio content onto headphones and can incorporate head movements. That means a binaural render that gives spatial information to make the sound appear to be originating outside the head rather than inside, as can be the case when listening to traditionally mixed stereo material.

For feature films, for example, this might mean taking the 5.1 home theatrical mix and rendering it into a binaural signal to be played back on headphones, giving the listener the experience of always sitting in the sweet spot of a surround sound speaker set-up. Cingo can also support content with a height component, such as 9.1 and 11.1 formats, and add that into the headphone stream as well to make it truly 3D. I had a great demo of this and it worked very well.
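
As a rough illustration of what a binaural render like this involves — this is not Fraunhofer's Cingo implementation, just the general principle — the sketch below convolves each channel of a surround mix with a pair of head-related impulse responses for that speaker position and sums the results into a two-channel headphone feed. The `hrirs` dictionary and the channel names are hypothetical placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_downmix(mix_channels, hrirs):
    """Static binaural downmix of a channel-based mix (e.g. 5.1).
    mix_channels: dict of channel name -> mono samples (np.ndarray).
    hrirs: dict of channel name -> (hrir_left, hrir_right) for that speaker angle."""
    length = (max(len(x) for x in mix_channels.values())
              + max(len(h[0]) for h in hrirs.values()) - 1)
    out = np.zeros((2, length))
    for name, samples in mix_channels.items():
        h_left, h_right = hrirs[name]
        wet_left = fftconvolve(samples, h_left)
        wet_right = fftconvolve(samples, h_right)
        out[0, :len(wet_left)] += wet_left    # left ear
        out[1, :len(wet_right)] += wet_right  # right ear
    return out  # stereo headphone feed
```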

I was impressed that Fraunhofer had also created a tool for authoring immersive content, a plug-in called Cingo Composer that runs as both a VST and an AAX plug-in. It works in Pro Tools, Nuendo and other DAWs to aid the creation of 3D content. For example, content can be mixed and automated in an immersive soundscape and then rendered into an FOA (First Order Ambisonics, or B-Format) four-channel file that can be played alongside a 360 video on VR headsets with headtracking.

After Fraunhofer, I went straight to DTS to catch up with what they were doing. We had recently completed some immersive DTS:X theatrical, home theatrical and, as mentioned above, headphone mixes using the DTS tools, so I wanted to see what was new. There were some nice updates to the content creation tools, players and renderers and a great demo of the DTS decoder doing some live binaural decoding and headtracking.

With immersive and 3D audio being the exciting new things, there were other interesting products on display that related to this area. In the Future Zone, Sennheiser was showing their Ambeo VR mic (see picture, right). This is an ambisonic microphone with four capsules arranged in a tetrahedron, whose outputs make up the A-format. They also provide a proprietary A-to-B format encoder that runs as a VST or AAX plug-in on Mac and Windows to process the outputs of the four capsules into the W, X, Y and Z signals (the B-format).
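
For readers curious what the A-to-B conversion actually does, here is a minimal sketch of the textbook sum-and-difference step for a tetrahedral mic, assuming a front-left-up / front-right-down / back-left-down / back-right-up capsule layout. Sennheiser's encoder also applies proprietary frequency-dependent correction filters that are not shown here.

```python
import numpy as np

def a_to_b_format(flu, frd, bld, bru):
    """Textbook A- to B-format conversion for a tetrahedral mic.
    flu/frd/bld/bru: capsule signals (front-left-up, front-right-down,
    back-left-down, back-right-up) as equal-length numpy arrays."""
    w = flu + frd + bld + bru   # omni (pressure) component
    x = flu + frd - bld - bru   # front-back figure-of-eight
    y = flu - frd + bld - bru   # left-right figure-of-eight
    z = flu - frd - bld + bru   # up-down figure-of-eight
    return w, x, y, z
```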

From the B-Format it is possible to recreate the 3D soundfield, but you can also derive any number of first-order microphones pointing in any direction in post! The demo (with headtracking and 360 video) of a man speaking by the fireplace was recorded just using this mic and was the most convincing of all the binaural demos I saw (heard!).
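
Deriving a virtual first-order microphone from the B-format in post comes down to a weighted sum of the four signals. The following is a minimal sketch, assuming traditional (FuMa) B-format with W scaled by 1/√2; other conventions such as AmbiX use different channel ordering and scaling.

```python
import numpy as np

def virtual_mic(w, x, y, z, azimuth, elevation, p=0.5):
    """Derive a virtual first-order microphone from B-format signals.
    azimuth/elevation in radians; p sets the polar pattern
    (0 = figure-of-eight, 0.5 = cardioid, 1 = omni).
    Assumes traditional (FuMa) B-format with W attenuated by 1/sqrt(2)."""
    dir_x = np.cos(azimuth) * np.cos(elevation)
    dir_y = np.sin(azimuth) * np.cos(elevation)
    dir_z = np.sin(elevation)
    return p * np.sqrt(2) * w + (1 - p) * (dir_x * x + dir_y * y + dir_z * z)
```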

Still in the Future Zone, for creating brand-new content I visited the makers of the Spatial Audio Toolbox, which is similar to Fraunhofer’s Cingo Composer tool. B-Com’s Spatial Audio Toolbox contains VST plug-ins (soon to be AAX) that enable you to create an HOA (higher order ambisonics) encoded 3D sound scene from standard mono, stereo or surround sources (using HOA Pan) and then listen to this sound scene on headphones (using Render Spk2Bin).

The demo we saw at the stand was impressive and included headtracking. The plug-ins themselves were running on a Pyramix on the Merging Technologies stand in Hall 8. It was great to get my hands on some “live” material and play with the 3D panning and hear the effect. It was generally quite effective, particularly in the horizontal plane.
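
The encoding side of an "HOA Pan"-style tool reduces, at first order, to weighting the mono source by the spherical-harmonic gains for its direction. This is only a minimal sketch under the same FuMa-style convention used above, not B-Com's implementation; a real HOA panner adds higher-order channels for sharper localization.

```python
import numpy as np

def foa_pan(mono, azimuth, elevation):
    """Encode (pan) a mono source into a first-order ambisonic scene.
    azimuth/elevation in radians. Uses traditional (FuMa) W/X/Y/Z ordering
    with W scaled by 1/sqrt(2); higher orders would append more channels."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])
```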

I found all this binaural and VR stuff exciting. I am not sure exactly how and if it might fit into a film workflow, but it was a lot of fun playing! The idea of rendering a 3D soundfield into a binaural signal has been around for a long time (I even dedicated months of my final year at university to writing a project on that very subject quite a long time ago) but with mixed success. It is exciting to see now that today’s mobile devices contain the processing power to render the binaural signal on the fly. Combine that with VR video and headtracking, and the ability to add that information into the rendering process, and you have an offering that is very impressive when demonstrated.
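
Head-tracking is one reason the ambisonic route suits mobile playback so well: before the binaural render, the whole sound field can be counter-rotated by the head orientation with a tiny matrix operation, so sources stay anchored to the scene rather than to the listener's head. Below is a minimal first-order sketch for yaw only; pitch and roll need similar rotations, and sign conventions vary between toolsets.

```python
import numpy as np

def rotate_foa_yaw(w, x, y, z, head_yaw):
    """Counter-rotate a first-order ambisonic field by the listener's head yaw
    (radians) before binaural rendering, so sources stay fixed in the scene."""
    c, s = np.cos(-head_yaw), np.sin(-head_yaw)
    x_rot = c * x - s * y
    y_rot = s * x + c * y
    return w, x_rot, y_rot, z   # W and Z are unaffected by a yaw rotation
```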

I will be interested to see how content creators, specifically in the film area, use this (or don’t). The recreation of the 3D surround sound mix over 2-channel headphones works well, but whether headtracking gets added to this or not remains to be seen. If the sound is matched to video that’s designed for an immersive experience, then it makes sense to track the head movements with the sound. If not, then I think it would be off-putting. Exciting times ahead anyway.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

Creating VR audio workflows for ‘Mars 2030’ and beyond

Source Sound is collaborating with others and capturing 360 sound for VR environments.

By Jennifer Walden

Everyone wants it, but not everyone can make it. No, I’m not talking about money. I’m talking about virtual reality content.

Let’s say you want to shoot a short VR film. You’ve got a solid script, a cast of known actors, you’ve got a 360-degree camera and a pretty good idea of how to use it, but what about the sound? The camera has a built-in mic, but will that be enough coverage? Should the cast be mic’d as they would be for a traditional production? How will the production sound be handled in post?

Tim Gedemer, owner/sound supervisor at Source Sound in Woodland Hills, California, can help answer these questions. “In VR, we are audio directors,” he says. “Our services include advising clients at the script level on how they should be shooting their visuals to be optimal for sound.”

Tim Gedemer

As audio directors, Source Sound walks their clients through every step of the process, from production to distribution. Starting with the recording on set, they manage all of the technical aspects of sound file management through production, and then guide their clients through the post sound process, both creatively and technically.

They recommend what technology should be used, how clients should be using it and what deals they need to make to sort out their distribution. “It really is a point-to-point service,” says Gedemer. “We decided early on that we needed to influence the entire process, so that is what we do.”

Two years ago, Dolby Labs referred Jaunt Studio to Source Sound for their first VR film gig. Gedemer explains that because of Source Sound’s experience with games and feature films, Dolby felt they would be a good match to handle Jaunt’s creative sound needs while Dolby worked with Jaunt on the technical challenges.

Jaunt’s Kaiju Fury! premiered at the 2015 Sundance Film Festival. The experience puts the viewer in the middle of an epic Godzilla-like monster battle. “They realized their film needed cinematic sound, so Dolby called us up and asked if we’d like to get involved. We said, ‘We’re really busy with projects, but show us the tech and maybe we’ll help.’ We were disinterested at first, figuring it was going to be gimmicky, but I went to San Francisco and I looked at their first test, and I was just shocked. I had never seen anything like that before in my life. I realized, in that first moment of putting on those goggles, that we needed to do this.”


Paul McCartney on the “Out There” tour 2014.

Kaiju Fury! was just the start. Source Sound completed three more VR projects for Jaunt, all within a week. There was the horror VR short film called Black Mass, a battle sequence called The Mission and the Atmos VR mastering of Paul McCartney’s Live and Let Die in concert.

Gedemer admits, “It was just insane. No one had ever done anything like this and no one knew how to do it. We just said, ‘Okay, we’ll just stay up for a week, figure all of that out and get it done.’”

Adjusting The Workflow
At first, their Pro Tools-based post sound workflow was similar to a traditional production, says Gedemer, “because we didn’t know what we didn’t know. It was only when we got into creating the final mix that we realized we didn’t have the tools to do this.”

Specifically, how could they experience the full immersion of the 360-degree video and concurrently make adjustments to the mix? On that first project, there was no way to slave the VR picture playing back through the Oculus headgear to the sound playing back via Pro Tools. “We had to manually synchronize,” explains Gedemer. “Literally, I would watch the equi-rectangular video that we were working with in Pro Tools, and at the precise moment I would just press play on the laptop, playing back the VR video through the Oculus HMD to try and synchronize it that way. I admit I got pretty good at that, but it’s not really the way you want to be working!”

Since that time, Dolby has implemented timecode synchronization and a video player that will playback the VR video through the Oculus headset. Now the Source Sound team can pick up the Oculus and it will be synchronized to the Pro Tools session.

Working Together For VR
Over the last few years, Source Sound has been collaborating with tech companies like Dolby, Avid, Oculus, Google, YouTube and Nokia on developing audio-related VR tools, workflow solutions and spec standards that will eventually become available to the wider audio post industry.

“We have this holistic approach to how we want to work, both in virtual and augmented reality audio,” says Gedemer. “We’re working with many different companies, beta testing technology and advising on what they should be thinking about regarding VR sound — with a keen eye toward new product development.”


Kaiju Fury!

Since Kaiju Fury, Source Sound has continued to create VR experiences with Jaunt. They have worked with other VR content creators, including the Emblematic Group (founded by “the godmother of VR,” Nonny de la Peña), 30 Ninjas (founded by director Doug Liman, The Bourne Identity and Edge of Tomorrow), Fusion Media, Mirada, Disney, Google, YouTube and many others.

Mars 2030
Currently, Source Sound is working with Fusion Media on a project with NASA called Mars 2030, which takes a player to Mars as an astronaut and allows him/her to experience what life might be like while living in a Mars habitat. NASA feels that human exploration of Mars may be possible in the year 2030, so why not let people see and feel what it’s like.

The project has given Source Sound unprecedented access to the NASA facilities and engineers. One directive for Mars 2030 is to be as accurate as possible, with information on Mars coming directly from NASA’s Mars missions. For example, NASA collected information about the surface of Mars, such as the layout of all the rocks and the type of sand covering the surface. All of that data was loaded into the Unreal Engine, so when a player steps out of the habitat in the Mars 2030 experience and walks around, that surface is going to be the exact surface that is on Mars. “It’s not a facsimile,” says Gedemer. “That rock is actually there on Mars. So in order for us to be accurate from an audio perspective, there’s a lot that we have to do.”

In the experience the player gets to drive the Mars Rover. At NASA in Houston, there are multiple iterations of the rover that are being developed for this mission. They also have a special area that is set up like the Mars surface with a few craters and rocks.

For audio capture, Gedemer and sound effects recordist John Fasal headed to Houston with Sound Devices recorders and a slew of mic options. While the rover is too slow to do burnouts and donuts, Gedemer and Fasal were able to direct a certified astronaut driver and record the rover from every relevant angle. They captured sounds and ambiences from the various habitats on site. “There is a new prototype space suit that is designed for operation on Mars, and as such we will need to capture all the relevant sound associated with it,” says Gedemer. “We’ll be looking into helmet shape and size, communication systems, life support air flow, etc. when recreating this in the Unreal Engine.”

Another question the sound team needs to address is, “What does it sound like out on the surface of Mars?” It has an atmosphere, but the tricky thing is that a human can never actually walk around on the surface of Mars without wearing a suit. Sounds traveling through the Mars atmosphere will sound different than sounds traveling through Earth’s atmosphere, and additional special considerations need to be made for how the suit will impact sound getting to the astronaut’s ears.

“Only certain sounds and/or frequencies will penetrate the suit, and if it is loud enough to penetrate the suit, what is it going to sound like to the astronaut?” asks Gedemer. “So we are trying to figure out some of these technical things along the way. We hope to present a paper on this at the upcoming AES Conference on Audio for Virtual and Augmented Reality.”

Going Live
Another interesting project at Source Sound is the work they’re doing with Nokia to develop specialized audio technology for live broadcasts in VR. “We are currently the sole creative provider of spatial audio for Nokia’s VR broadcasting initiative,” reveals Gedemer. Source Sound has been embedded with the Nokia Ozo Live team at events where they have been demonstrating their technology. They were part of the official Ozo Camera Launches in Los Angeles and London. They captured and spatialized a Los Angeles Lakers basketball game at the Staples Center. And once again they teamed up with Nokia at their NAB event this past spring.

“We’ve been working with them very closely on the technology that they are developing for live capture and distribution of stereoscopic visual and spatial audio in VR. I can’t elaborate on any details, but we have some very cool things going on there.”

However, Gedemer does break down one of the different requirements of live VR broadcast versus a cinematic VR experience — an example being the multi-episode VR series called Invisible, which Source Sound and Doug Liman of 30 Ninjas are currently collaborating on.

For a live broadcast you want an accurate representation of the event, but for a cinematic experience the opposite is true. Accuracy is not the objective. A cinematic experience needs a highly curated soundtrack in order to tell the story.

Gedemer elaborates, “The basic premise is that, for VR broadcasts you need to have an accurate audio representation of camera location. There is the matter of proper perspective to attend to. If you have a multi-camera shoot, every time you change camera angles to the viewer, you change perspective, and the sound needs to follow. Unlike a traditional live environment, which has a stereo or 5.1 mix that stays the same no matter the camera angle, our opinion is that approach is not adequate for true VR. We think Nokia is on the right track, and we are helping them perfect the finer points. To us that is truly exciting.”

Jennifer Walden is a New Jersey-based writer and audio engineer.

Testronic opens second VR test center

Testronic has opened a dedicated virtual reality test center in its Burbank headquarters. The VR test center is the company’s second, as they also launched one in their Warsaw, Poland, location earlier this year, further expanding their full-service QA testing services.

“Consumer VR is in its infancy, and nobody knows what it will become years from now,” said Jason Gish (pictured right), Testronic’s senior VP for film and television. “As VR evolves, consumer expectations will grow, requiring more exploratory and inventive QC processes. Testing VR content has unique requirements, and the integrity of VR content is crucial to its functionality. It is critical to have an understanding of aspects like head tracking and other core VR functions in order to develop a thorough test approach. Issues in VR can not only take you out of the experience, but can cause simulator sickness. Beyond testing for the usual bugs and functionality imperfections, VR is deeply rooted in user experience, and Testronic’s test approach reflects that understanding.”

Testronic was also an early testing pioneer of user experience design (UX), developing one of the first UX labs in the US.

Archion’s new Omni Hybrid storage targets VR, VFX, animation

Archion Technologies has introduced the EditStor Omni Hybrid, a collaborative storage solution for virtual reality, visual effects, animation, motion graphics and post workflows.

In terms of performance, an Omni Hybrid with one expansion chassis offers 8000MB/second for 4K and other streaming demands, and over 600,000 IOPS for rendering and motion graphics. The product has been certified for Adobe After Effects, Autodesk’s Maya/Flame/Lustre, The Foundry’s Nuke and Modo, Assimilate Scratch and Blackmagic’s Resolve and Fusion. The Omni Hybrid is scalable up to 1.5 petabytes and can be expanded without shutdown.

“We have Omni Hybrid in post production facilities that range from high-end TV and film to massive reality productions,” reports Archion CTO James Tucci. “They are all doing graphics and editorial work on one storage system.”

Silver Sound opens audio-focused virtual reality division

By Randi Altman

New York City’s Silver Sound has been specializing in audio post and production recording since 2003, but that’s not all they are. Through the years, along with some Emmy wins, they have added services that include animation and color grading.

When they see something that interests them, they investigate and decide whether or not to dive in. Well, virtual reality interests them, and they recently dove in by opening a VR division specializing in audio for 360 video, called SilVR. Recent clients include Google, 8112 Studios/National Geographic and AT&T.


Stories From The Network: 360° Race Car Experience for AT&T

I reached out to Silver Sound sound editor/re-recording mixer Claudio Santos to find out why now was the time to invest in VR.

Why did you open a VR division? Is it an audio-for-VR entity or are you guys shooting VR as well?
The truth is we are all a bunch of curious tinkerers. We just love to try different things and to be part of different projects. So as soon as 360 videos started appearing in different platforms, we found ourselves individually researching and testing how sound could be used in the medium. It really all comes down to being passionate about sound and wanting to be part of this exciting moment in which the standards and rules are yet to be discovered.

We primarily work with sound recording and post production audio for VR projects, but we can also produce VR projects that are brought to us by creators. We have been making small in-house shoots, so we are familiar with the logistics and technologies involved in a VR production and are more than happy to assist our clients with the knowledge we have gained.

What types of VR projects do you expect to be working on?
Right now we want to work on every kind of project. The industry as a whole is still learning what kind of content works best in VR and every project is a chance to try a new facet of the technology. With time we imagine producers and post production houses will naturally specialize in whichever genre fits them best, but for us at least this is something we are not hurrying to do.

What tools do you call on?
For recording we make use of a variety of ambisonic microphones that allow us to record true 360 sound on location. We set up our rig wirelessly so it can be untethered from cables, which are a big problem in a VR shoot where you can see in every direction. Besides the ambisonics we also record every character ISO with wireless lavs so that we have as much control as possible over the dialogue during post production.

Robin Shore using a phone to control the 360 video on screen, and on his head is a tracker that simulates the effect of moving around without a full headset.

For editing and mixing we do most of our work in Reaper, a DAW that has very flexible channel routing and non-standard multichannel processing. This allows us to comfortably work with ambisonics as well as mix formats and source material with different channel layouts.

To design and mix our sounds we use a variety of specialized plug-ins that give us control over the positioning, focus and movement of sources in the 360 sound field. Reverberation is also extremely important for believable spatialization, and traditional fixed-channel reverbs are usually unconvincing once you are in a 360 field. Because of that we usually make use of convolution reverbs using ambisonic impulse responses.
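
The idea behind an ambisonic convolution reverb is simply to convolve the dry source with each channel of a B-format impulse response, so the reverb tail itself carries directional information. The sketch below is a minimal illustration assuming a four-channel first-order IR; it is not any particular plug-in's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def ambisonic_conv_reverb(dry, foa_ir, wet_gain=0.5):
    """Convolve a dry mono source with a 4-channel (B-format) impulse response
    so the reverb is spatialized in 3D. foa_ir has shape (4, ir_length);
    the result is a wet first-order ambisonic signal to sum into the scene."""
    return wet_gain * np.stack([fftconvolve(dry, foa_ir[ch]) for ch in range(4)])
```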

When it comes to monitoring the video, especially with multiple clients in the room, everyone wears headphones. At first this seemed very weird, but it’s important since that’s the best way to reproduce what the end viewer will be experiencing. We have also devised a way for clients to use a separate controller to move the view around in the video during playback and editing. This gives a lot more freedom and makes the reviewing process much quicker and more dynamic.

How different is working in VR from traditional work? Do you wear different hats for different jobs?
That depends. While technically it is very different, with a whole different set of tools, technologies and limitations, the craft of designing good sound that aids in the storytelling and that immerses the audience in the experience is not very different from traditional media.

The goal is to affect the viewer emotionally and to transmit pieces of the story without making the craft itself apparent, but the approaches necessary to achieve this in each medium are very different because the final product is experienced differently. When watching a flat screen, you don’t need any cues to know where the next piece of essential action is going to happen because it is all contained by a frame that is completely in your field of view. That is absolutely not true in VR.

The user can be looking in any direction at any given time, so the sound often fills in the role of guiding the viewer to the next area of interest, and this reflects on how we manipulate the sounds in the mix. There is also a bigger expectation that sounds will be more realistic in a VR environment because the viewer is immersed in an experience that is trying to fool them into believing it is actually real. Because of that, many exaggerations and shorthands that are appropriate in traditional media become too apparent in VR projects.

So instead of saying we need to put on different hats when tackling traditional media or VR, I would say we just need a bigger hat that carries all we know about sound, traditional and VR, because neither exists in isolation anymore.

I am assuming that getting involved in VR projects as early as possible is hugely helpful to the audio. Can you explain?
VR shoots are still in their infancy. There’s a whole new set of rules and standards, and a whole lot of experimentation that we are all still figuring out as an industry. Often a particular VR filming challenge is not only new to the crew but completely new in the sense that it might not have ever been done before.

In order to figure out the best creative and technical approaches to all these different situations it is extremely helpful to have someone on the team thinking about sound, otherwise it risks being forgotten and then the project is doomed to a quick fix in post, which might not explore the full potential of the medium.

This doesn’t even take into consideration that the tools still often need to be adapted and tailored to fit the needs of a particular project, simply because new use cases are being discovered daily. This tailoring and exploration takes time and knowledge, so only by bringing a sound team onto the project early can they fully prepare to record and mix the sound without cutting corners.

Another important point to take into consideration is that the delivery requirements are still largely dependent on the specific platform selected for distribution. Technical standards are only now starting to be created and every project’s workflows must be adapted slightly to match these specific delivery requirements. It is much easier and more effective to plan the whole workflow with these specific requirements in mind than it is to change formats when the project is already in an advanced state.

What do clients need to know about VR that they might take for granted?
If we had to choose one thing to mention it would be that placing and localizing sounds in post takes a lot of time and care because each sound needs to be placed individually. It is easy to forget how much longer this takes than the traditional stereo or even surround panning because every single diegetic sound added needs to be panned. The difference might be negligible when dealing with a few sound effects, but depending on the action and the number of moving elements in the experience, it can add up very quickly.

Working with sound for VR is still largely an area of experimentation and discovery, and we like to collaborate with our clients to ensure that we all push the limits of the medium. We are very open about our techniques and are always happy to explain what we do to our clients because we believe that communication is the best way to ensure all elements of a project work together to deliver a memorable experience.

Our main image is from Red Velvet for production company Station Film.

Virgil Kastrup talks about color grading ‘Ewa’ VR project

Post pro Virgil Kastrup was the colorist for Ewa, the latest venture into the world of virtual reality and 3D from Denmark’s Makropol. It made its debut at the 2016 Cannes Film Festival. According to Kastrup, it was a whirlwind project — an eight-minute pilot, with one day for him to do the color grading, versioning and finishing.

The concept is fairly simple. The main character is Ewa, and you become her, as she becomes herself. Through the eyes of Ewa, you will access a world you have never seen before. You will be born as Ewa, you will grow up as Ewa, and, as Ewa, you will fight to free yourself. “Out Of Body” is a crucial chapter of Ewa’s life.

Virgil Kastrup

In a recent chat, the Copenhagen-based Kastrup talked about the challenges of posting the Ewa VR and 3D project, and how he handled the color grading and finishing within the one-day deadline.

What was your main challenge?
The time constraints! We had one day to try out looks, color grade in VR and 3D, create versions for reviews and then finish in VR and 3D. We then had to ensure the look and final result met the vision and satisfaction of Makropol’s director, Johan Jensen.

How was the pilot shot?
Four GoPro cameras (two stereo pairs) were mounted on a helmet-rig the actress wore on her head. This created the immersive view into Ewa’s life, so that the viewer is “entering” the scenes as Ewa. When the DP removed the helmet from her head, an out-of-body experience was created. The viewer is seeing the world through the eyes of a young girl.

What material were you working with?
Makropol sent me the final stitched imagery, ProRes 4K x 4K pixels. Because the content was one long, eight-minute shot, there was no need to edit or conform.

What VR challenges did you face?
In viewing the Ewa pilot, the viewer is immersed in the VR experience without it being 360. Achieving a 360 aspect was a little tricky because the imagery was limited to 180 degrees, so I had to find a way to blank the rear part of the sphere on which the image was projected. I tested and tried out different solutions, then went with making a back gradient so the image fades away from the viewer.
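
Kastrup did this inside Scratch, but the underlying idea of a rear fade can be illustrated with a simple gain mask over the equirectangular frame: full level in front of the viewer, rolling off to black toward the back of the sphere. The 90-degree fade start below is a hypothetical choice, not the value used on Ewa.

```python
import numpy as np

def rear_fade_mask(width, height, fade_start_deg=90.0):
    """Illustrative gain mask for an equirectangular frame: full level in front
    of the viewer, fading to black toward the rear of the sphere.
    Longitude runs from -180 (left edge) to +180 degrees (right edge),
    with 0 degrees straight ahead."""
    lon = np.linspace(-180.0, 180.0, width)
    gain = np.clip((180.0 - np.abs(lon)) / (180.0 - fade_start_deg), 0.0, 1.0)
    return np.tile(gain, (height, 1))   # multiply into each color channel
```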

Ewa

What tool suite did you use for color grading and finishing?
I had a beta copy of Assimilate’s Scratch VR Suite. I’ve been using Scratch for 2D and other projects for years, so the learning curve for the VR Suite was virtually zero. The VR Suite offers the same set of tools and workflow as Scratch, but they’re geared to work in the VR/360 space. It’s intuitive and very easy to use, which gave me a confidence boost for testing looks and achieving a quality result.

How did you handle the VR?
The biggest challenge was that the look had to work everywhere within the VR scene. For example, if you’re looking from the dining room into the living room, and the light was different, it had to be manipulated without affecting either room. The Scratch 3D tools simplify the 3D process — with a click of a button, you can set up the 3D/stereo functions.

Did you use a headset?
I did the color grading and finishing all on the monitor view. For my reviews and the client sessions, we used the Oculus Rift. Our goal was to ensure the content was viewed as a completely immersive experience, rather than just watching another video.

What impact did this project have on your view about VR?
A project like this — an eight-minute test pilot — doesn’t warrant the use of expensive professional-grade cameras, yet a filmmaker can still achieve a quality VR result on a restricted budget. By using professional color grading and finishing tools, many issues can be overcome, such as compression, lighting, hot spots and more. The colorist has the ability to add his/her creative expertise to craft the look and feel, as well as the subtle effects that go into producing a quality video or feature. This combination of expertise and the right tools opens the world of VR to a wide range of creative professionals in numerous markets.

Experiencing autism in VR via Happy Finish

While people with autism might “appear” to be like the rest of us, the way they experience the world is decidedly different. Imagine sensory overload times 10. In an effort to help the public understand autism, the UK’s National Autistic Society and agency Don’t Panic have launched a campaign called “Too Much Information” (#autismTMI) that is set to challenge myths, misconceptions and stereotypes relating to this neurobiological disorder.

In order to help tell that story, the NAS called on London’s Happy Finish to help create a 360-degree VR film that puts viewers into the shoes of a child with autism during a visit to a store. A 2D film had previously been developed based on the experience of a 10-year-old autistic boy named Alexander. Happy Finish provided visual effects for that version, which, since March of last year, has garnered over 54 million views and over 850K shares. The new 360-degree VR experience takes the viewer into Alexander’s world in a more immersive way.

After interviewing several autistic adults as part of the research, Happy Finish developed an approach that aims to trigger viewers’ empathy and understanding. Working with Don’t Panic and The National Autistic Society, they share Alexander’s experience in an immersive and moving way.

The piece was shot by DP Michael Hornbogen using a six-camera GoPro array in a 3D-printed housing. For stitching, Happy Finish called on Autopano by Kolor, The Foundry’s Nuke and Adobe After Effects. Editing was in Adobe Premiere. Color grading was via Blackmagic’s Resolve.

“It was a long process of compositing using various tools,” explains Jamie Mossahebi, director of the VR shooting at Happy Finish. “We created 18 versions and amended and tweaked based on initial feedback from autistic adults.”

He says that most of the studio’s VR experiences aim to create something comfortable and pleasant, but this one needed to be uncomfortable while remaining engaging. “The main challenge was to be as realistic as possible. For that, we focused a lot on the sound design, as well as testing a wide variety of visual effects and selecting the key ones that contributed to making it as immersive and as close to a sensory overload as possible,” explains Mossahebi, who directed the VR film.

“This is Don’t Panic’s first experience of creating a virtual reality campaign,” says Richard Beer, creative director of Don’t Panic. “The process of creating a virtual reality film has a whole different set of rules: it’s about creating a place for people to visit and a person for them to become, rather than simply telling a story. This interactivity of virtual reality gives it a unique sense of “presence” — it has the power to take us somewhere else in time and space, to help us feel, just for a while, what it’s like to be someone else – which is why it was the perfect tool to communicate exactly what a sensory overload feels like for someone with autism for the NAS.”

Sponsored by Tangle Teezer and Intu, the film will tour shopping centers around the UK and will also be available through the Autism TMI Virtual Reality Experience app.

AR/VR audio conference taking place with AES show in fall


The AES is tackling the augmented reality and virtual reality creative process, applications workflow and product development for the first time with a dedicated conference that will take place on 9/30-10/1 during the 141st AES Convention at the LA Convention Center’s West Hall.

The two-day program of technical papers, workshops, tutorials and a manufacturers’ expo will highlight the creative and technical challenges of providing immersive spatial audio to accompany virtual reality and augmented reality media.

The conference will attract content developers, researchers, manufacturers, consultants and students, in addition to audio engineers seeking to expand their knowledge about sound production for virtual and augmented reality. The companion expo will feature displays from leading-edge manufacturers and service providers looking to secure industry metrics for this emerging field.

“Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” shares conference co-chair Andres Mayo. “This conference will demonstrate that VR and AR productions, using a variety of playback devices, require audio that follows the motions of the subject, and produces a realistic immersive experience. Our program will spotlight the work of leading proponents in this exciting field of endeavor, and how realistic spatial audio can be produced from existing game console and DSP engines.”

Proposed topics include object-based audio mixing for VR/AR, immersive audio in VR/AR broadcast, live VR audio production, developing audio standards for VR/AR, cross platform audio considerations in VR and streaming immersive audio content.

Costs range from $195 for a one-day pass for AES members ($295 for a two-day pass) and $125 for accredited students, to $280/$435 for non-members; early-bird discounts are also available.

Conference registrants can also attend the 141st AES Convention’s companion exhibition, select educational sessions and special events free of charge with an exhibits-plus badge.

Talking VR content with Phillip Moses of studio Rascali

Phillip Moses, head of VR content developer Rascali, has been working in visual effects for over 25 years. His resume boasts some big-name films, including Alice in Wonderland, Speed Racer and Spider-Man 3, just to name a few. Seven years ago he launched a small boutique visual effects studio, called The Resistance VFX, with VFX supervisor Jeff Goldman.

Two years ago, after getting a demo of an Oculus pre-release Dev Kit 2, Moses realized that “we were poised on the edge of not just a technological breakthrough, but what will ultimately be a new platform for consuming content. To me, this was a shift almost as big as the smartphone, and an exciting opportunity for content creators to begin creating in a whole new ecosystem.”


Phillip Moses

Shortly after that, his friends James Chung and Taehoon Oh launched Reload Studios, with the vision of creating the first independently-developed first-person shooter game, designed from the ground up for VR. “As one of the first companies formed around the premise of VR, they attracted quite a bit of interest in the non-gaming sector as well,” he explains. “Last year, they asked me to come aboard and direct their non-gaming division, Rascali. I saw this as a huge opportunity to do what I love best: explore, create and innovate.”

Rascali has been busy. They recently debuted trailers for their first episodic VR projects, Raven and The Storybox Project, on YouTube, Facebook/Oculus Video, Jaunt, Littlstar, Vrideo and Samsung MilkVR. Let’s find out more…

You recently directed two VR trailers. How is directing for VR different than directing for traditional platforms?
Directing for VR is a tricky beast and requires a lot of technical knowledge of the whole process that would not normally be required of directors. To be fair, today’s directors are a very savvy bunch, and most have a solid working knowledge of how visual effects are used in the process. However, for the way I have chosen to shoot the series, it requires the ability to have a pretty solid understanding of not just what can be done, but how to actually do it. To be able to previsualize the process and, ultimately, the end result in your head first is critical to being able to communicate that vision down the line.

Also, from a script and performance perspective, I think it’s important to start with a very important question of “Why VR?” And once you believe you have a compelling answer to that question, then you need to start thinking about how to use VR in your story.  Will you require interaction and participation from the viewer? Will you involve the viewer in any way? Or will you simply allow VR to serve as an additional element of presence and immersion for the viewer?

While you gain many things in VR, you also have to go into the process with a full knowledge of what you ultimately lose. The power of lenses, for example, to capture nuance and to frame an image to evoke an emotional response, is all but lost. You find yourself going back to exploring what works best in a real-world framing — almost like you are directing a play in an intimate theater.

What is the biggest challenge in the post workflow for VR?
Rendering! Everything we are producing for Raven is at 4K left eye, 4K right eye and 60fps. The rendering process alone guarantees that the process will take longer than you hoped. It also guarantees that you will need more data storage than you ever thought necessary.
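
Some hypothetical back-of-the-envelope math shows why the numbers climb so quickly. Assuming, purely for illustration, 4096x2048 equirectangular frames per eye at 60fps in 10-bit 4:2:2, the uncompressed data rate works out to roughly 2.5GB per second; actual ProRes or EXR deliverables will differ, but the order of magnitude explains the render and storage pain.

```python
# Back-of-the-envelope storage math for stereo 360 video at 60fps.
# Assumed, for illustration only: 4096x2048 per eye, 10-bit 4:2:2
# (20 bits/pixel) uncompressed; real codecs compress this substantially.
width, height, fps, eyes = 4096, 2048, 60, 2
bits_per_pixel = 20
bytes_per_second = width * height * bits_per_pixel / 8 * fps * eyes
print(f"{bytes_per_second / 1e9:.1f} GB/s uncompressed")              # ~2.5 GB/s
print(f"{bytes_per_second * 60 / 1e9:.0f} GB per minute of footage")  # ~151 GB/min
```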

But other than rendering, I find that the editorial process is also more challenging. With VR, those shots that you thought you were holding onto way too long are actually still too short, and it involves an elaborate process to conform everything for review in a headset between revisions. In many ways, it’s similar to the old process of making your edit decisions, then walking the print into the screening room. You forget how tedious the process can be.
By the way, I’m looking forward to integrating some realtime 360 review into the editorial process. Make it happen Adobe/Avid!

These trailers are meant to generate interest from production partners to green light these as full episodic series. What is the intended length of each episode, and what’s the projected length of time from concept to completion for each episode of the all-CG Storybox, and live-action Raven?
Each one of these projects is designed for completely different audiences, so the answer is a bit different for each one. For Storybox, we are looking to keep each episode under five minutes, with the intention that it is a fairly easy-to-consume piece of content that is accessible to a broad spectrum of ages. We really hope to make the experiences fun, playful and surprising for the viewer, and to create a context for telling these stories that fuels the imagination of kids.

For Storybox, I believe that we can start delivering finished episodes before the end of the third quarter — with a full season representing 12 to 15 episodes. Raven, on the other hand, is a much more complex undertaking. While the VR market is being developed, we are betting on the core VR consumers to really want stories and experiences that range closer to 12 to 15 minutes in duration. We feel this is enough time to tell more complex stories, but still make each episode feel like a fantastic experience that they could not experience anywhere else. If green-lit tomorrow, I believe we would be looking at a four-month production schedule for the pilot episode.

Rascali is a division of Reload Studios, which is developing VR games. Is there a technology transfer of workflows and pipelines and shared best practices across production for entertainment content and games within the company?
Absolutely! While VR is a new technology, there is such a rich heritage of knowledge present at Reload Studios. For example, one question that VR directors are asking themselves is: “How can I direct my audience’s attention to action in ways that are organic and natural?” While this is a new question for film directors — who typically rely on camera to do this work for them — this is a question that the gaming community has been answering for years. Having some of the top designers in the game industry at our disposal is an invaluable asset.

That being said, Reload is much different than most independent game companies. One of their first hires was senior Disney animator Nik Ranieri. Our producing team is composed of top animation producers from Marvel and DC. We have a deep bench of people who give the whole company a very comprehensive knowledge of how content of all types is created.

What was the equipment set-up for the Raven VR shoot? Which camera was used? What tools were used in the post pipeline?
Much of the creative IP for Raven is very much in development, including designs, characters, etc. For this reason, we elected to construct a teaser that highlighted immersive VR vistas that you could expect in the world we are creating. This required us to lean very heavily on the visual effects / CG production process — the VFX pipeline included Autodesk 3ds Max, rendering in V-Ray, with some assistance from Nuke and even Softimage XSI. The entire project was edited in Adobe Premiere.

Our one live-action element was shot with a single Red camera and then projected onto geometry for accurate stereo integration.

Where do you think the prevailing future of VR content is? Narrative, training, therapy, gaming, etc.?
I think your question represents the future of VR. Games, for sure, are going to be leading the charge, as this demographic is the only one on a large scale that will be purchasing the devices required to build a viable market. But much more than games, I’m excited to see growth in all of the areas you listed above, including, most significantly, education. Education could be a huge winner in the growing VR/AR ecosystem.

The reason I elected to join Rascali is to help provide solutions, and pave the way for them, in markets that mostly don’t yet exist. It’s exciting to be a part of a new industry that has the power to improve and benefit so many aspects of the global community.

Assimilate Scratch 8.5, Scratch VR Suite available for open beta

Assimilate is offering an open-beta version of Scratch 8.5, its realtime post system and workflow for dailies, conform, grading, compositing and finishing. Also in open beta is the Scratch VR Suite. Both open-beta versions give users the chance to work with the full suite of Scratch 8.5 and Scratch VR tools while evaluating and submitting requests and recommendations for additional features or updates.

Scratch Web for cloud-based, realtime review and collaboration, and Scratch Play for immediate review and playback, are also included in the ecosystem updates. Current users of Scratch 8.4 can download the Scratch 8.5 open beta. Those who are new to Scratch can access the Scratch 8.5 open-beta version for a 30-day free trial. The Scratch VR open-beta version can also be accessed for a 30-day free trial.

“Thanks to open-beta programs, we get a lot of feedback from current Scratch users about the features and functions that will simplify their workflows, increase their productivity and enhance their storytelling,” explains Assimilate CEO Jeff Edson. “We have two significant Scratch releases a year for the open-beta program and then provide several incremental builds throughout the year. In this way Scratch is continually evolving to offer bleeding-edge functionality, as well as support for the latest formats — for example, Scratch was the first to support Arri’s mini-camera MXF format.”

New to Scratch 8.5
• Easy validation of availability of physical media and file references throughout a project, timeline and render
• Fast access to all external resources (media / LUT / CTL / etc.) through bookmarks
• Full set of ACES transforms as published by the Academy
• Publishing media directly to Facebook
• Option to launch Scratch from a command-line with a series of xml-script commands, which allows closer integration with post-infrastructure and third-party software and scripts

The new Scratch VR Suite includes all the features and functions of Scratch 8.5, Scratch Play and Scratch Web, plus substantial features, functions and enhancements that are specific to working in a 360 media environment.

What does Fraunhofer Digital Media Alliance do? A lot!

By Jonathan Abrams

While the vast majority of the companies with exhibit space at NAB are for-profit, there is one non-profit that stands out. With a history of providing ubiquitous technology to the masses since 1949, Fraunhofer focuses on applied research and developments that end up — at some point in the near future — as practical products or ready-for-market technology.

In terms of revenue, one-third of Fraunhofer’s funding supports basic research, with the remaining two-thirds coming directly from private companies for applied industry projects. Their business model is focused on contract research and licensing of technologies. They have sold first prototypes and work with distributors, though Fraunhofer always keeps the rights to continue development.

What projects were they showcasing at NAB 2016 that have real-world applications in the near future? You may have heard about the Lytro camera. Fraunhofer Digital Media Alliance member Fraunhofer IIS has been taking a camera-agnostic approach to their work with light-field technology. Their goal is to make this technology available for many different camera set-ups, and they were proving it with a demo of their multi-cam light-field plug-in for The Foundry’s Nuke. After capturing a light field, users can perform framing correction and relighting, including changes to angles, depth and the creation of point clouds.

The Nuke plug-in (see our main image) allows the user to create virtual lighting (relighting) and interactive lighting. Light-field data also allows for depth estimation (called depth maps) and is useful for mattes and secondary color correction. Similar to Lytro, focus pulling can be performed with this light-field plug-in. Why Nuke? That is what their users requested. Even though Nuke is an OFX host, the Fraunhofer IIS light field plug-in only works within Nuke. As for using this light-field plug-in outside of Nuke, I was told that “porting to Mac should be an easy task.” Hopefully that is an accurate statement, though we will have to wait to find out.
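For readers curious what “useful for mattes” means in practice, here is a minimal sketch of how a per-pixel depth map can be turned into a soft matte for a secondary grade. It is generic numpy, not the Fraunhofer IIS plug-in’s actual API, and the array names and depth ranges are assumptions made for illustration.

```python
# Minimal sketch: turning a per-pixel depth map into a soft matte for
# secondary color correction. Generic illustration only; not the
# Fraunhofer IIS plug-in API. Depth values are assumed to be in meters.
import numpy as np

def depth_matte(depth, near, far, feather=0.1):
    """Soft 0..1 matte selecting pixels whose depth lies between near and far."""
    rise = (depth - (near - feather)) / feather      # 0 -> 1 approaching `near`
    fall = ((far + feather) - depth) / feather       # 1 -> 0 leaving `far`
    return np.clip(np.minimum(rise, fall), 0.0, 1.0)

# Example: isolate everything between 2 m and 5 m for a secondary grade.
depth = np.random.uniform(0.5, 10.0, size=(1080, 1920)).astype(np.float32)
matte = depth_matte(depth, near=2.0, far=5.0, feather=0.25)
graded_gain = 1.0 + 0.2 * matte    # e.g. lift exposure only inside the matte
```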

DCP
Fraunhofer IIS has its hand in other parts of production and post as well. The last two steps of most projects are the creation of deliverables and their delivery. If you need to create and deliver a DCP (Digital Cinema Package), then easyDCP may be for you.

This project began in 2008, when creating a DCP was far less familiar to most users than it is today, and correctly making one required deep expertise in the specifications. Small- to medium-sized post companies, in particular, benefit from the easy-to-use easyDCP suite. The engineers of Fraunhofer IIS also worked on the DCI specifications for digital cinema, so they are experienced in integrating all the important DCP features into this software.

The demo I saw indicated that the JPEG2000 encode was as fast as 108fps! In 2013, Fraunhofer partnered with both Blackmagic and Quantel to make this software available to the users of those respective finishing suites. The demo I saw used a Final Cut Pro X project file with the Creator+ version, since that version supports encryption. Avid Media Composer users will have to export their sequence and import it into Resolve to use easyDCP Creator. Amazingly, this software works as far back as Mac OS X Leopard. IMF creation and playback can also be done with the easyDCP software suite.
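For context on that 108fps figure, a quick back-of-the-envelope calculation shows what it means for total encode time. The 90-minute, 24fps feature here is my own assumption rather than anything from the demo:

```python
# Rough arithmetic behind a 108 fps JPEG 2000 encode rate: how long a
# feature-length DCP would take. Illustrative numbers only; actual
# throughput depends on resolution and hardware.
runtime_min = 90
fps = 24
encode_fps = 108

total_frames = runtime_min * 60 * fps            # 129,600 frames
encode_minutes = total_frames / encode_fps / 60  # about 20 minutes
print(f"{total_frames} frames -> about {encode_minutes:.0f} minutes to encode")
```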

VR/360
VR and 360-degree video were prominent at NAB, and the institutes of the Fraunhofer Digital Media Alliance are involved in this as well, having worked on live streaming and surround sound as part of a project with the Berlin Symphony Orchestra.

Fraunhofer had a VR demo pod at the ATSC 3.0 Consumer Experience (in South Hall Upper) — I tried it and the sound did track with my head movement. Speaking of ATSC 3.0, it calls for an immersive audio codec. Each country or geographic region that adopts ATSC 3.0 can choose to implement either Dolby AC-4 or MPEG-H, the latter of which is the result of research and development by Fraunhofer, Technicolor and Qualcomm. South Korea announced earlier this year that they will begin ATSC 3.0 (UHDTV) broadcasting in February 2017 using the MPEG-H audio codec.

From what you see to what you hear, from post to delivery, the Fraunhofer Digital Media Alliance has been involved in the process.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

NAB 2016: VR/AR/MR and light field technology impressed

By Greg Ciaccio

The NAB 2016 schedule included its usual share of evolutionary developments, which are truly exciting (HDR, cloud hosting/rendering, etc.). One, however, was a game changer with reach far beyond media and entertainment.

This year’s NAB floor plan featured a Virtual Reality Pavilion in the North Hall. In addition, the ETC (USC’s Entertainment Technology Center) held a Virtual Reality Summit that featured many great panel discussions and opened quite a few minds. At least that’s what I gathered from the standing-room-only crowds that filled the suite. The ETC’s Ken Williams and Erik Weaver, among others, should be credited for delivering quite a program. While VR itself is not a new development, the availability of relatively inexpensive viewers (with Google Cardboard the most accessible) will put VR in the hands of practically everyone.

Programs included discussions on where VR/AR (Augmented Reality) and now MR (Mixed Reality) are heading, business cases and, not to be forgotten, audio. Keep in mind that with headset VR experiences, multi-channel directional sound must be perceivable with just our two ears.

The panels included experts in the field, including Dolby, DTS, Nokia, NextVR, Fox and CNN. In fact, Juan Santillian from Vantage.tv mentioned that Coachella is streaming live in VR. Often, concerts and other live events have a fixed audience size, and many can’t attend due to financial or sell-out situations. VR can allow a much more intimate and immersive experience than being almost anywhere but onstage.

One example, from Fox Sports’ Michael Davies, involved two friends in different cities virtually attending a football game in a third city. They sat next to each other and chatted during the game, with their audio correctly mapped to their seats. There are no limits to applications for VR/AR/MR, and, by all accounts, once you experience it, there is no doubt that this tech is here to stay.

I’ve heard many times this year that mobile will be the monetary driver for wide adoption of VR. Halsey Minor of Voxelus estimates that 85 percent of VR usage will be via a mobile device. Given that far more photos and videos are shot on our phones than on dedicated cameras, this is not surprising. Some of the latest mobile phones are not only fast, with high dynamic range and wide color gamut, but also feature high-end audio processing from Dolby and others. Plus, our reliance on our mobiles ensures that you’ll never forget to bring one with you.

Light Field Imaging
On both Sunday and Tuesday of NAB 2016, programs were devoted to light field imaging. I was already familiar with this truly revolutionary tech, and learned about Lytro, Inc. a few years ago from Internet ads for an early consumer camera. I was intrigued with the idea of controlling focus after shooting. I visited www.lytro.com and was impressed, but the resolution was low, so, for me, this was mainly a proof of concept. Fast forward three years, and Lytro now has a cinema camera!

Jon Karafin (pictured right), Lytro’s head of Light Field Imaging, not only unveiled the camera onstage, but debuted their short Life, produced in association with The Virtual Reality Company (VRC). Life takes us through a man’s life and is told with no dialog, letting us take in the moving images without distraction. Jon then took us through all the picture aspects using Nuke plug-ins, and minds started blowing. The short is directed by Academy Award-winner Robert Stromberg, and shot by veteran cinematographer David Stump, who is chief imaging scientist at VRC.

Many of us are familiar with camera raw capture and know that ISO, color temperature and other picture aspects can be changed post-shooting. This has proven to be very valuable. However, things like focus, f-stop, shutter angle and many other parameters can now be changed, thanks to light field technology — think of it as an X-ray compared to an MRI. In the interests of trying to keep a complicated technology relatively simple, sensors in the camera capture light fields not only in X and Y space, but in two more “angular” directions, forming what Lytro calls 4D space. The result is accurate depth mapping, which opens up so many options for filmmakers.
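To make the “4D space” idea concrete: light fields are commonly described with two spatial and two angular coordinates, and refocusing after the fact amounts to shifting each angular view and summing. The sketch below is a generic shift-and-sum illustration of that principle, not Lytro’s actual renderer; the alpha refocus parameter and the array layout are assumptions.

```python
# Sketch of synthetic refocusing on a 4D light field L[u, v, y, x]
# (two angular axes u,v and two spatial axes y,x). Generic shift-and-sum
# illustration; not Lytro's implementation.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    """Average all sub-aperture views after shifting each by alpha*(u,v)."""
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            out += nd_shift(lightfield[u, v], (dy, dx), order=1, mode='nearest')
    return out / (U * V)

# Example: a synthetic 9x9 angular grid of 64x64 views; alpha moves the focal plane.
lf = np.random.rand(9, 9, 64, 64)
near_focus = refocus(lf, alpha=+0.5)
far_focus = refocus(lf, alpha=-0.5)
```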

Lytro Cinema Camera

For those who may think that this opens up too many options in post, all parameters can be locked so only those who are granted access can make edits. Some of the parameters that can be changed in post include: Focus, F-Stop, Depth of Field, Shutter Speed, Camera Position, Shutter Angle, Shutter Blade Count, Aperture Aspect Ratio and Fine Control of Depth (for mattes/comps).

Yes, this camera generates a lot of data. The good news is that you can make changes anywhere with an Internet connection, thanks to proxy mode in Nuke and processing rendered in the cloud. Jon demoed this, and images were quickly processed using Google’s cloud.

The camera itself is very large, but Lytro knows that they’ll need to reduce the size (from around seven feet long) to a more maneuverable form factor. However, this is a huge step in proving that a light field cinema camera and a powerful, manageable workflow is not only possible, but will no doubt prove valuable to filmmakers wanting the power and control offered by light field cinematography.

Greg Ciaccio is a technologist focused primarily on finding new technology and workflow solutions for motion picture and television clients. Ciaccio served in technical management roles in the Creative Services divisions of both Deluxe and Technicolor.

NAB: Las Vegas SuperMeet adds VR/360 to event coverage

The Rio Hotel will be hopping on April 19 when it hosts this year’s Las Vegas SuperMeet. The annual Creative Pro User Group (CPUG) Network event draws Final Cut Pro, Adobe, Avid and DaVinci Resolve editors, gurus, digital filmmakers and content creators during NAB.

The second half of this year’s event is focusing on VR and 360 video, the hot topics at this year’s show. We wanted to know what attendees can expect, so we threw some questions at Daniel Bérubé and Michael Horton, the architects of this event, to find out more.

Some compare VR and 360 video to stereo 3D. Why do you feel this is different?
VR/360 video is more accessible to the indie filmmaker than 3D ever was. The camera rigs can be inexpensive and still be professional, or you can rent the expensive ones. The feeling we are getting from everyone is one of revolution, and we have not seen that since the year 2000. This is a new way to tell stories. There are no rules yet, and we are making a lot of this stuff up as we go along, but that’s what is fun. We are actually seeing people giggle again. We never saw this level of excitement with 3D. All we really saw was skepticism.

In what ways are you going to be highlighting VR/360 video?
The second half of the SuperMeet will be devoted to VR and 360 video. We are titling it, “Can I Tell a Compelling Story in VR and 360 Video?” Futurist Ted Schilowitz is going to act as a sort of ringmaster and introduce us to what we need to know. He will then bring on Csillia Kozma Andersen from Nokia to show off the new Ozo camera and how to use it. Next will be John Hendicott of Aurelia Soundworks, who will explain how spatial audio works. And, finally, we will introduce Alex Gollner, who will show how we edit all this stuff.

So the idea here is to try and give you a bit of what you need to know, and then hope it will help you get started on your way to creating your own compelling VR masterpiece.

What can attendees expect?
Expect to have a crazy fun time. Even if you have zero interest in 360 video, SuperMeets are a place to hang out with each other and network. Honestly, you just might meet someone who will change your life. You also can hang out at one of the 25 sponsor tables, where folks will be showing off the latest and greatest software and hardware solutions. VR camera rigs will be running around this area as well. And there will be free food, cash bars and close to $100,000 worth of raffle prizes to give away. It’s going to be a great show and, more importantly, a great time.

To enjoy $5 off your ticket price for the Las Vegas SuperMeet, courtesy of postPerspective, click here.

Daniel Bérubé, of the Boston Creative Pro User Group (BOSCPUG), is co-producer of these SuperMeets with Michael Horton, the founder of the Los Angeles Creative Pro User Group (LACPUG).

Nvidia’s GTC 2016: VR, A.I. and self driving cars, oh my!

By Mike McCarthy

Last week, I had the opportunity to attend Nvidia’s GPU Technology Conference, GTC 2016. Five thousand people filled the San Jose Convention Center for nearly a week to learn about GPU technology and how to use it to change our world. GPUs were originally designed to process graphics (hence the name), but are now used to accelerate all sorts of other computational tasks.

The current focus of GPU computing is in three areas:

Virtual reality is a logical extension of the original graphics processing design. VR requires high frame rates with low latency to keep up with the user’s head movements; otherwise the lag results in motion sickness. This requires lots of processing power, and the imminent release of the Oculus Rift and HTC Vive head-mounted displays is sure to sell many high-end graphics cards. The new Quadro M6000 24GB PCIe card and M5500 mobile GPU have been released to meet this need.
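As a rough sense of scale, assuming the commonly quoted 2160x1200, 90Hz panel figures for these headsets and ignoring the extra supersampling real VR pipelines add for lens distortion, the raw pixel rate works out to roughly twice that of ordinary 1080p60:

```python
# Back-of-the-envelope pixel throughput: why 90 fps stereo HMDs need big GPUs.
# Panel figures are the commonly quoted Rift/Vive specs; real engines render
# even more pixels than this to compensate for lens distortion.
hmd_pixels_per_sec = 2160 * 1200 * 90   # both eyes, 90 Hz
tv_pixels_per_sec = 1920 * 1080 * 60    # ordinary 1080p60

print(f"HMD: {hmd_pixels_per_sec / 1e6:.0f} Mpix/s")        # ~233 Mpix/s
print(f"1080p60: {tv_pixels_per_sec / 1e6:.0f} Mpix/s")     # ~124 Mpix/s
print(f"ratio: {hmd_pixels_per_sec / tv_pixels_per_sec:.1f}x raw, before distortion overhead")
```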

Autonomous vehicles are being developed that will slowly replace many or all of the driver’s current roles in operating a vehicle. This requires processing lots of sensor input data and making decisions in realtime based on inferences made from that information. Nvidia has developed a number of hardware solutions to meet these needs, with the Drive PX and Drive PX2 expected to be the hardware platform that many car manufacturers rely on to meet those processing needs.

This author calls the Tesla P100 “a monster of a chip.”

Artificial Intelligence has made significant leaps recently, and the need to process large data sets has grown exponentially. To that end, Nvidia has focused their newest chip development — not on graphics, at least initially — on a deep-learning supercomputer chip. The first Pascal-generation GPU, the Tesla P100, is a monster of a chip, with 15 billion 16nm transistors on a 600mm2 die. It should be twice as fast as current options for most tasks, and even more for double-precision work and/or large data sets. The chip is initially available in the new DGX-1 supercomputer for $129K, which includes eight of the new GPUs connected via NVLink. I am looking forward to seeing the same graphics processing technology on a PCIe-based Quadro card at some point in the future.

While those three applications for GPU computing all had dedicated hardware released for them, Nvidia has also been working to make sure that software will be developed that uses the level of processing power they can now offer users. To that end, there are all sorts of SDKs and libraries they have been releasing to help developers harness the power of the hardware that is now available. For VR, they have Iray VR, which is a raytracing toolset for creating photorealistic VR experiences, and Iray VR Lite, which allows users to create still renderings to be previewed with HMD displays. They also have a broader VRWorks collection of tools for helping software developers adapt their work for VR experiences. For Autonomous vehicles they have developed libraries of tools for mapping, sensor image analysis, and a deep-learning decision-making neural net for driving called DaveNet. For A.I. computing, cuDNN is for accelerating emerging deep-learning neural networks, running on GPU clusters and supercomputing systems like the new DGX-1.

What Does This Mean for Post Production?
So from a post perspective (ha!), what does this all mean for the future of post production? First, newer and faster GPUs are coming, even if they are not here yet. Much farther off, deep-learning networks may someday log and index all of your footage for you. But the biggest change coming down the pipeline is virtual reality, led by the upcoming commercially available head-mounted displays (HMDs). Gaming will drive HMDs into the hands of consumers, and HMDs in the hands of consumers will drive demand for a new type of experience for storytelling, advertising and expression.

As I see it, VR can be created in a variety of continually more immersive steps. The starting point is the HMD, placing the viewer into an isolated and large-feeling environment. Existing flat video or stereoscopic content can be viewed without large screens, requiring only minimal processing to format the image for the HMD. The next step is a big jump — when we begin to support head tracking — to allow the viewer to control the direction they are viewing. This is where we begin to see changes required at all stages of the content production and post pipeline. Scenes need to be created and filmed in 360 degrees.

At the conference, a high-fidelity VR simulation that uses scientifically accurate satellite imagery and data from NASA was shown.

The cameras required to capture 360 degrees of imagery produce a series of video streams that need to be stitched together into a single image, and that image needs to be edited and processed. Then the entire image is made available to the viewer, who then chooses which angle they want to view as it is played. This can be done as a flattened image sphere or, with more source data and processing, as a stereoscopic experience. The user can control the angle they view the scene from, but not the location they are viewing from, which was dictated by the physical placement of the 360-camera system. Video-Stitch just released a new all-in-one package for capturing, recording and streaming 360 video called the Orah 4i, which may make that format more accessible to consumers.
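For the curious, the “flattened image sphere” most pipelines use is an equirectangular (lat/long) frame, and a player pans around it by mapping pixel coordinates back to viewing directions. The sketch below shows that standard mapping in generic form; it is an illustration of the math, not any particular product’s code, and axis conventions vary between tools.

```python
# Sketch: mapping equirectangular (lat/long) pixel coordinates to a 3D view
# direction, the basic math a 360 player uses to pan around a stitched frame.
# Generic illustration; axis conventions differ between tools.
import numpy as np

def equirect_to_direction(px, py, width, height):
    """Return a unit view vector for pixel (px, py) in a width x height equirectangular frame."""
    lon = (px / width) * 2.0 * np.pi - np.pi      # -pi .. +pi, left to right
    lat = np.pi / 2.0 - (py / height) * np.pi     # +pi/2 at top, -pi/2 at bottom
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# Example: the frame center maps to the "forward" direction (0, 0, 1).
print(equirect_to_direction(1920, 960, 3840, 1920))
```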

Allowing the user to fully control their perspective and move around within a scene is what makes true VR so unique, but is also much more challenging to create content for. All viewed images must be rendered on the fly, based on input from the user’s motion and position. These renders require all content to exist in 3D space, for the perspective to be generated correctly. While this is nearly impossible for traditional camera footage, it is purely a render challenge for animated content — rendering that used to take weeks must be done in realtime, and at much higher frame rates to keep up with user movement.

For any camera image, depth information is required, which is possible to estimate with calculations based on motion, but not with the level of accuracy required. Instead, if many angles are recorded simultaneously, a 3D analysis of the combination can generate a 3D version of the scene. This is already being done in limited cases for advanced VFX work, but it would require taking it to a whole new level. For static content, a 3D model can be created by processing lots of still images, but storytelling will require 3D motion within this environment. This all seems pretty far out there for a traditional post workflow, but there is one case that will lend itself to this format.

Motion capture-based productions already have the 3D data required to render VR perspectives, because VR is the same basic concept as motion tracking cinematography, except that the viewer controls the “camera” instead of the director. We are already seeing photorealistic motion capture movies showing up in theaters, so these are probably the first types of productions that will make the shift to producing full VR content.

The Maxwell Kepler family of cards.

Viewing this content is still a challenge, where again Nvidia GPUs are used on the consumer end. Any VR viewing requires sensor input to track the viewer, which must be processed, and the resulting image must be rendered, usually twice for stereo viewing. This requires a significant level of processing power, so Nvidia has created two tiers of hardware recommendations to ensure that users can get a quality VR experience. For consumers, the VR-Ready program includes complete systems based on the GeForce 970 or higher GPUs, which meet the requirements for comfortable VR viewing. VR-Ready for Professionals is a similar program for the Quadro line, including the M5000 and higher GPUs, included in complete systems from partner ISVs. Currently, MSI’s new WT72 laptop with the new M5500 GPU is the only mobile platform certified VR Ready for Pros. The new mobile Quadro M5500 has the same system architecture as the desktop workstation Quadro M5000, with all 2048 CUDA cores and 8GB RAM.

While the new top-end Maxwell-based Quadro GPUs are exciting, I am really looking forward to seeing Nvidia’s Pascal technology used for graphics processing in the near future. In the meantime, we have enough performance with existing systems to start processing 360-degree videos and VR experiences.

Mike McCarthy is a freelance post engineer and media workflow consultant based in Northern California. He shares his 10 years of technology experience on www.hd4pc.com, and he can be reached at mike@hd4pc.com.

Lytro camera allows capture of massive light field data on all frames

Imagine if your camera could capture the entire light field of a scene in 3D, turning every frame into a three-dimensional model. That is the idea behind the Lytro Cinema system, which uses Light Field technology to capture massive amounts of information per frame, allowing you to control the depth of field and creating more flexibility in post. Oh, and it captures 300 frames per second, adding a level of speed control, including adjustable motion blur, that was previously limited to the live-action process.
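One way to picture how a 300fps capture yields adjustable motion blur in post is to average the right number of consecutive high-speed frames for each delivered frame, which is equivalent to choosing a shutter angle after the fact. The sketch below is a generic illustration of that idea, not Lytro’s renderer; the frame rates and window sizes are assumptions.

```python
# Sketch: synthesizing shutter angle / motion blur in post from a 300 fps
# capture by averaging consecutive high-speed frames per output frame.
# Generic illustration of the concept only.
import numpy as np

def synth_motion_blur(frames, out_fps=24.0, shutter_deg=180.0, capture_fps=300.0):
    """Average high-speed frames to simulate a given shutter angle at out_fps."""
    frames_per_out = capture_fps / out_fps                                   # 12.5 at 24 fps
    blur_window = max(1, int(round(frames_per_out * shutter_deg / 360.0)))   # ~6 frames for 180 degrees
    out, t = [], 0.0
    while int(t) + blur_window <= len(frames):
        start = int(t)
        out.append(np.mean(frames[start:start + blur_window], axis=0))
        t += frames_per_out
    return out

# Example: one second of 300 fps frames in, ~24 blurred frames out.
clip = [np.random.rand(270, 480) for _ in range(300)]
blurred_24 = synth_motion_blur(clip)
print(len(blurred_24))
```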

In a video released by the company, Brendan Bevensee, lead engineer for Light Field Video/Lytro, said Light Field cinematography allows “the ability to capture everything about a scene — from different perspectives, different focal planes and different apertures. Every pixel now has color properties and directional properties, as well as exact placement in 3D space. Essentially we have a virtual camera that can be controlled in post production.”

Lytro says their capture system enables “the complete virtualization of the live-action camera — transforming creative camera controls from fixed, on-set decisions to computational post processes.”

In the aforementioned video, the head of Light Field Video/Lytro Jon Karafin said, “Lytro Cinema offers an infinite ability to focus anywhere in your scene. You have the infinite ability to focus and create any aperture or any depth of field. You can shift your camera to the left or to the right, as if you made that exact decision on set. It can even move your camera in and out. Automated camera tracking removes that tedious task of integration and matching. It has all of the volume, all of that depth information that easily allows you to composite and matte your CG objects. With Depth Screen it’s as if you have a greenscreen for every object, but it’s not limited to any one object, it’s anywhere in space.”

The rich dataset captured by the system produces a Light Field Master that can be rendered in any format in post, allowing for a range of creative possibilities. The Light Field Master enables creators to render content in multiple formats — including IMAX, RealD and traditional cinema and broadcast — at variable frame rates and shutter angles.

“Lytro has always been a company thinking about what the future of imaging will be,” said Ted Schilowitz, futurist at Fox Studios. “There are a lot of companies that have been applying new technologies and finding better ways to create cinematic content, and they are all looking for better ways and better tools to achieve live-action, highly immersive content. Lytro is focusing on getting a much bigger, better and more sophisticated cinematography-level dataset that can then flow through the VFX pipeline and modernize that world.”

Lytro Cinema offers:
— A sensor that offers 755 RAW megapixels at up to 300fps.
— Up to 16 stops of dynamic range and wide color gamut.
— Integrated high-resolution active scanning.

The Lytro Cinema package includes a camera, a server array for storage and processing — which can also be done in the cloud — and software to edit Light Field data. The entire system integrates into existing production and post workflows, working in tandem with popular industry standard tools.

Life, the first short produced with Lytro Cinema in association with The Virtual Reality Company (VRC), will premiere at NAB on April 19 at 4pm PST in Room S222. Life was directed by Academy Award-winner Robert Stromberg, CCO at VRC, and shot by David Stump, ASC, chief imaging scientist at VRC.

Katie Hinsen, senior finishing artist at Light Iron New York, sees the possibilities. “The coolest thing about Lytro’s tech is that it captures the whole light field coming in to it, rather than a flat representation of the scene. So you can change focus in post, where you could pull stuff out that isn’t there. Basically, once you take a picture it’s still alive. Imagine you take a photo (or a video, now), and it’s got issues. With Lytro you’re capturing all the light information of the scene, not the image. So it’s all there and you can change it.”

Lytro Cinema will be available for production in Q3 of 2016 to exclusive partners on a subscription basis.

Lucas Wilson on Scratch’s new end-to-end VR workflow

With NAB looming, and chatter pointing to virtual reality being ubiquitous on the show floor, Assimilate has launched its new Scratch VR Suite, an end-to-end virtual reality workflow with an all-inclusive realtime toolset for working within a 360 environment. The Scratch VR Suite includes features from Scratch V8.4 (soon to be Scratch 8.5). Scratch VR also includes Scratch Web and creative tools that are specific to the VR Suite. Scratch Web enables realtime, online collaboration and review via Google Cardboard and Samsung GearVR headsets.

We reached out to Lucas Wilson, VR producer at Assimilate, to find out more about the product and workflow.

Lucas Wilson on set.

Can you walk us through the workflow of someone shooting VR content and using Scratch VR from on-set to post?
In many ways, Assimilate has kind of just removed “VR” as an issue in a lot of post production, bringing it back to “just production.” Scratch does not do any stitching. So, once material is stitched you take these steps…

1) Publish to Scratch Web and generate review links for clients.
2) Review links can be opened up and played in either “Magic Window” or VR-Cardboard mode, allowing for an effective, real, headset-based review workflow for dailies.
3) VR goes through to editorial.
4) Conform in Scratch.
5) Scratch can then grade in a true VR mode — with 360 viewer options on the desktop, to an external monitor, or live to an Oculus Rift DK2, in mono or stereo.
6) In addition, grading “respects” 360 mode. Shapes will wrap around in 360 mode, respecting the edges in a lat/long frame, etc. It is real grading and finishing in 360/VR. (A rough sketch of the wrap-around idea follows after this list.)
7) Publish to YouTube 360 with correctly inserted metadata, or as a normal equi-rectangular video for publishing elsewhere.
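On point 6, the reason wrap-around matters is that the left and right edges of a lat/long frame are the same meridian. Below is a rough, generic numpy sketch of the difference between a naive mask and a 360-aware one; it illustrates the principle and is not Scratch’s actual implementation.

```python
# Sketch of why a grading shape must "wrap" in an equirectangular frame: the
# left and right edges are the same meridian, so a mask pushed past one edge
# should reappear on the other. Generic numpy illustration only.
import numpy as np

H, W = 960, 1920
yy, xx = np.mgrid[0:H, 0:W]
cx, cy, radius = W - 50, H // 2, 200     # circle centered near the right seam

# Naive mask: the part of the circle past x = W-1 is simply lost.
naive = ((xx - cx) ** 2 + (yy - cy) ** 2) <= radius ** 2

# 360-aware mask: measure the horizontal distance modulo the frame width,
# so the shape continues on the left edge instead of being clipped.
dx = (xx - cx + W / 2) % W - W / 2
wrapped = (dx ** 2 + (yy - cy) ** 2) <= radius ** 2

print(naive.sum(), wrapped.sum())   # the wrapped mask keeps the missing sliver
```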

What are some common areas of focus that people who are just jumping into VR need to know from the outset?
Best advice I can give is to think through your workflow carefully from the beginning. Planning and pre-production are so important in any VR project, because a VR project can get you into trouble more quickly than many traditional projects can.

You’ve been out on the road shooting VR in real-world situations. What has surprised you the most about this process, and can you talk about some of the tools that were built into the VR suite based on your input?
The biggest surprise was two-fold: the necessity of dealing quickly and effectively with stitching, and then the complete lack of good review and finish tools in VR. Getting anything reviewed by a client was a painful process before Scratch Web. My real-world experience (I think) had a big influence on the VR suite, because in that sense I was kind of a “customer with an inside track.” I was able to feed back my pain quickly and easily to the team, and they listened. Using Scratch and Scratch Web, I can review, conform, color, finish and deliver in VR.

——–

Assimilate and Wilson will be at NAB offering demos.

Dell embraces VR via Precision Towers

It’s going to be hard to walk the floor at NAB this year without being invited to demo some sort of virtual reality experience. More and more companies are diving in and offering technology that optimizes the creation and viewing of VR content. Dell is one of the latest to jump in.

Dell has been working closely on this topic with its hardware and software partners, and is formalizing its commitment to the future of VR by offering solutions that are optimized for VR consumption and creation alongside the mainstream professional ISV apps used by industry pros.

Dell has introduced new, recommended minimum system hardware configurations to support an optimal VR experience for pro users with HTC Vive or Oculus Rift VR solutions. The VR-ready solutions meet a set of three criteria, whether users are consuming or creating VR content: minimum CPU, memory and graphics requirements to support VR viewing experiences; graphics drivers that are qualified to work with these solutions; and passing performance tests conducted by the company using test criteria based on HMD (head-mounted display) suppliers, ISVs or third-party benchmarks.

Dell has also made upgrades to their Dell Precision Tower, including increased performance, graphics and memory for VR content creation. The refreshed Dell Precision Tower 5810, 7810 and 7910 workstations and rack 7910 have been upgraded with new Intel Broadwell EP processors that have more cores and performance for multi-threaded applications that support professional modeling, analysis and calculations.

Additional upgrades include the latest pro graphics technology from AMD and Nvidia, Dell Precision Ultra-Speed PCIe drives with up to 4x faster performance than traditional SATA SSD storage, and up to 1TB of DDR4 memory running at 2400MHz.

Quick Chat: SGO CEO Miguel Angel Doncel

By Randi Altman

When I first happened upon Spanish company SGO, they were giving demos of their Mistika system on a small stand in the back of the post production hall at IBC. That was about eight years ago. Since then, the company has grown its Mistika DI finishing system, added a new product called Mamba FX, and brought them both to the US and beyond.

With NAB fast approaching, I thought I would check in with SGO CEO Miguel Angel Doncel to find out how the company began, where they are now and where they are going. I also checked in about some industry trends.

Can you talk about the genesis of your company and the Mistika product?
SGO was born out of a technically oriented mentality to find the best ways to use open architectures and systems to improve media content creation processes. That is not a challenging concept today, but it was an innovative view in 1993 when most of the equipment used in the industry was proprietary hardware. The idea of using computers to replace proprietary solutions was the reason SGO was founded.

It seems you guys were ahead of the curve in terms of one product that could do many things. Was that your goal from the outset?
Ten years ago, most of the manufacturers approached the industry with a set of different solutions to address different parts of the workflow; this gave us an opportunity to capitalize on improving the workflow, as disjointed solutions imply inefficient workflows due to their strictly linear, sequential nature.

We always thought that by improving the workflow, our technology would be able to play in all those arenas without having to change the tools. Making the workflow parallel saves time when a problem is detected, because it avoids going backwards in the pipeline, and we can focus on moving forward.

I think after so many years, the industry is saying we were right, and all are going in that direction.

How is SGO addressing HDR?
We are excited about HDR, as it really improves the visual experience, but at the same time it is a big challenge to define a workflow that can work in both HDR and SDR in a smooth way. Our solution to that challenge is the four-dimensional grading that is implemented with our 4th ball. This allows the colorist to work not only in the three traditional dimensions — R, G and B — but also in the highlights as a parallel dimension.

What about VR?
VR pieces together all the requirements of the most demanding stereo 3D with the requirements of 360 video. Considering what SGO already offers in stereo 3D production, we feel we are well positioned to provide a 360/VR solution. For that reason, we want to introduce a specific workflow for VR that helps customers work on VR projects, addressing the most difficult requirements, such as discontinuities at the poles or dealing with shapes.

The new VR mode we are preparing for Mistika 8.7 will be much more than a VR visualization tool. It will allow users to work in VR environments the same way they would work in a normal production, without having to worry about circles ending up as highly distorted ellipses and so forth.

What do you see as the most important trends happening in post and production currently?
The industry is evolving in many different directions at the moment — 8K realtime, 4K/UHD, HDR, HFR, dual-stream stereo/VR. These innovations improve and enhance the audience’s experience in many different ways. They are all interesting individually, but the most vital aspect for us is that they all have something in common — they all require a very smart way of dealing with increasing bandwidths. We believe that a variety of content will use different types of innovation relevant to the genre.

Where do you see things moving in the future?
I personally envision a lot more UHD, HDR and VR material in the near future. The technology is evolving in a direction that can really make the entertainment experience very special for audiences, leaving a lot of room to still evolve. An example is the Quantum Break game from Remedy Studios/Microsoft, where the actual users’ experience is part of the story. This is where things are headed.

I think the immersive aspect is the challenge and goal. The reason why we all exist in this industry is to make people enjoy what they see, and all these tools and formulas combined together form a great foundation on which to build realistic experiences.

ILM’s Rob Bredow named CTO of Lucasfilm

Rob Bredow has been promoted to CTO of ILM parent company Lucasfilm. Bredow joined ILM as a VFX supervisor in 2014, but at the end of that year was named VP of new media and head of Lucasfilm’s Advanced Development Group (ADG), which develops tools and techniques for realtime immersive entertainment. It was during this time that Bredow helped launch ILMxLAB, a division that combines the offerings of Lucasfilm, ILM and Skywalker Sound to create interactive storytelling and immersive cinema experiences.

“Rob is truly the perfect fit for this role. His passion for technology and innovation, and his experience in filmmaking, make him the ideal candidate to lead our technology efforts for both Lucasfilm and ILM,” said Lucasfilm GM Lynwen Brennan. “Rob’s many years of experience as a visual effects supervisor combined with his expertise in technology and new media enable us to continue our longstanding tradition of innovation.”

Prior to ILM, Bredow was the CTO and VFX supervisor at Sony Pictures Imageworks. He has worked on films such as Men in Black 3, The Amazing Spider-Man, Green Lantern, Cloudy With A Chance of Meatballs, Surf’s Up, Castaway and Godzilla.

Bredow is a member of the Academy of Motion Pictures Arts & Sciences (VFX Branch) and the AMPAS Scientific and Technical Council, and a Visual Effects Society Technology committee chair.

Reel FX beefs up VR division with GM Steve Nix

Dallas/Santa Monica’s Reel FX has added Steve Nix as general manager of its VR division. Nix will oversee all aspects of development, strategy and technology for the division, which has been working in the VR and AR content space. He joins David Bates, who was recently named GM of the studio’s commercial division. Nix has spent nearly two decades embracing new technology.

Prior to joining Reel FX, Nix was CEO/co-founder of Yvolver, a mobile gaming technology developer, acquired by Opera Mediaworks in 2015. His extensive experience in the gaming industry spans digital distribution and game development for companies such as id Software, Ritual Entertainment and GameStop.

“Reel FX has shown foresight establishing itself as an early leader in the rapidly emerging VR content space,” says Nix. “This foundation, combined with the studio’s resources, will allow us to aggressively expand our VR content offering — combining storytelling and visual and technical expertise in a medium that I firmly believe will change the human experience.”

Nix graduated summa cum laude from Texas Tech University with a BBA in finance, later earning his MBA at SMU, where he was an Armentrout Scholar. He got his start in the gaming industry as CEO of independent game developer Ritual Entertainment, an early pioneer in action games and digitally distributed PC games. Ritual developed and co-developed many titles, including Counter-Strike (Xbox), Counter-Strike: Condition Zero, Star Trek: Elite Force II, Delta Force Black Hawk Down, Team Sabre and 007: Agent Under Fire.

He moved on to join id Software as director of business development, prior to its acquisition by ZeniMax Media, transitioning to the position of director of digital platforms at the company. There he led digital distribution and mobile game development for some of the biggest brands in gaming, including Doom, Rage, Quake and Wolfenstein.

In 2011, Nix joined GameStop as GM of digital distribution.

Quick Chat: East Coast Digital’s Stina Hamlin on VR ‘Cardboard City’

New York City-based East Coast Digital believes in VR and has set up its studio and staff to be able to handle virtual reality projects. In fact, they recently provided editorial, 3D animation, color correction and audio post on the 60-second VR short Cardboard City, co-winner of the Samsung Gear Indie VR Filmmaker Contest. The short premiered at the 2016 Sundance Film Festival. You can check it out here.

Cardboard City, directed by Double Eye Productions’ Kiira Benzing, takes viewers inside the studio of Brooklyn-based stop-motion animator Danielle Ash, who has built a cardboard world inside her studio. There is a pickle vendor, a bakery and a neighborhood bar, all of which can be seen while riding a cardboard roller coaster.

East Coast Digital‘s Stina Hamlin was post producer on the project. We reached out to her to find out more about this project and how the VR workflow differs from the traditional production and post workflow.

Stina Hamlin

How did this project come about?
The project came about organically after being introduced to director Kiira Benzing by narrative designer Eulani Labay. We were all looking to get our first VR project under our belt.  In order to understand the post process involved, I thought it was vital to be involved in a project from the inception, through the production stage and throughout post.  I was seeking projects and people to team up with, and after I met Kiira this amazing team came together.

What direction did you get?
We were given the understanding of the viewer experience that the film should evoke and were asked to be responsible for the technical side of things on set and in editorial.

So you were on set?
Yes, we were definitely on set. That was an important piece of the puzzle. We were able to consult on what we could do in color and we were able to determine file management and labeling of takes to make it easier to deal with when back in the edit room. Also, we were able to do a couple of stitches at the beginning of the day to determine best camera positioning, etc.

How does your workflow differ from a traditional project to a VR project?
A VR project is different because we are syncing and concerned with seven-plus cameras at a time. The file management has to be very detailed and the stitching process is tedious and uses new software that all editors are getting up to speed with.

Monitoring the cameras on set is tricky, so being able to stitch on set to make sure the look is true to the vision was huge.  That is something that doesn’t happen in the traditional workflow… the post team is definitely not on set.

Cardboard City

Can you elaborate on some of the challenges of VR in general and those you encountered on this project?
The challenges are dealing with multiple cameras and cards, battery or power, and media for every shot from every camera. Syncing the cameras properly in the field and in post can be problematic, and the file management has to be uber-detailed. Then there’s the stitching… there are different software options, and no one has mastered them yet. It is tedious work, and all of this has to get done before you can even edit the clips together in a sequence.

Our project also used stop-motion animation, so we had the artist featured in our film experimenting with us on how to pull that off.  That was really fun and it turned out great!  I heard someone say recently at the Real Screen conference that you have to unlearn everything that you have learned about making a film.  It is a completely different way to tell a story in production and post.

What was your workflow like?
As I mentioned before, I thought that it was vital to be on set to help with media management and “shot looks” using only natural light and organically placed light in preparation for color. We were also able to stitch on set to get a sense of each set-up, which really helped the director and artist see their story and creatively do their job. We then had a better sense of managing the media and understanding how the takes were marked.

Once back in the edit room we used Adobe Premiere to clean up each take and sync each clip for each camera. We then brought only those clips into the stitching software — Autopano Video and Autopano Giga from Kolor — to stitch and clean up each scene. We rendered out each scene into a self-contained QuickTime for color. We colored in DaVinci Resolve and edited the scenes together using Premiere.

What about the audio? 
We recorded nothing on location. All of the sound was designed in post using the mix from the animated short film Pickles for Nickels that was playing on the wall, in addition to the subway and roller coaster sound effects.

What tools were used on set?
We used GoPro Hero 4s with firmware 3.0 and shot in log, 2.7K/30fps. iPads and iPhones were used to wirelessly monitor the rig, which was challenging. We used a laptop with the Autopano Video and Autopano Giga software to stitch on set. This is the same software we used in the edit bay.

What’s next?
We are collaborating once more with Kiira Benzing on the follow-up to Cardboard City. It’s a full-fledged 360 VR short film. The sequel will be even more technically advanced and create additional possibilities for interaction with the user.

Talking to Assimilate about new VR dailies/review tool

CEO Jeff Edson and VP of biz dev Lucas Wilson answer our questions

By Randi Altman

As you can tell from our recent Sundance coverage, postPerspective has a little crush on VR. While we know that today’s VR is young and creatives are still figuring out how it will be used — narrative storytelling, gaming, immersive concerts (looking at you Paul McCartney), job training, therapy, etc. — we cannot ignore how established film fests and trade shows are welcoming it, or the tools that are coming out for its production and post.

One of those tools comes from Assimilate, which is expanding its Scratch Web cloud-platform capabilities to offer a professional, web-based dailies/review tool for reviewing headset-based 360-degree VR content, regardless of location.

How does it work? Kind of simply: Users launch the link vr360.sweb.media on an Android phone (Samsung S6 or other) via Chrome, click the goggles in the lower right corner, put the phone in their Google Cardboard and view immediate headset-based VR. Once users launch the Scratch Web review link for the VR content, they can play back VR imagery, pan around the imagery or create a “magic window” so they can move their smartphone around, similar to looking through a window to see the 360-degree content behind it.

The VR content, including metadata, is automatically formatted for 360-degree video headsets, such as Google Cardboard. The reviewer can then make notes and comments on their mobile device to send back to the sender. The company says they will be announcing support for other mobile devices, headsets and browsers in the near future.

On the heels of this news, we decided to reach out to Assimilate CEO Jeff Edson and VP of business development Lucas Wilson to find out more.

Assimilate has been offering tools for VR, but with this new dailies and reviews tool, you’ve taken it to a new level. Can you talk about the evolution of how you service VR and how this newest product came to be?
Jeff Edson: Professional imagery needs professional tools and workflows to succeed. Much like imagery evolutions to date (digital cinema), this is a new way to capture and tell stories and provide experiences. VR provides a whole new way for people to tell stories, among other experiences.

So regarding the evolution of tools, Scratch has supported the 360 format for a while now. It has allowed people to play back their footage as well as do basic DI — basic functionality to help produce the best output. As the production side of VR continues to evolve, the workflow aligns itself with a more standard process. This means the same toolset for VR as exists for non-VR. Scratch Web-VR is the natural progression to provide VR productions with the ability to review dailies worldwide.

Lucas Wilson: When VR first started appearing as a real deliverable for creative professionals, Assimilate jumped in. Scratch has supported 360 video live to an Oculus Rift for more than a year now. But with the new Scratch Web toolset and the additional tools added in Scratch to make 360 work more easily and be more accessible, it is no longer just a feature added to a product. It is a workflow and process — review and approval for Cardboard via a web link, or via the free Scratch Play tool, along with color and finishing with Scratch.

It seems pretty simple to use, how are you able to do this via the cloud and through a standard browser?
Jeff: The product is very straightforward to use, as there is a very wide range of people who will have access to it, most of whom do not want the technology to get in the way of the solution. We work very hard at the core of all we have developed — interactive performance.

Lucas: Good programmers (smiles)! Seriously though, we looked at what was needed and what was missing in the VR delivery chain and tried to serve those needs. Scratch Web allows users to upload a clip and generate a link that will work in Cardboard. Review and approval is now just clicking a link and putting your phone into a headset.

What’s the price?
Jeff: The same price as Scratch Web — free trial, Basic at $79/month, Extended at $249/month, and Enterprise for special requirements.

Prior to this product, how were those working on VR production going about dailies and reviews?
Jeff: In most cases they were doing it by looking at output from several cameras for review. The main process for viewing was to edit and publish. There really was no tool targeted at dailies/review of VR.

Lucas: It has been really difficult. Reviews are typically done on a flat screen and by guessing, or by reverse engineering MilkVR or Oculus Videos in GearVR.

Can you talk about real-world testing of the product? VR productions that used this tool?
Lucas: We have a few large productions doing review and approval right now with Scratch Web. We can’t talk about them yet, but one of them is the first VR project directed by an A-list director. Two of the major sports leagues in the US have also employed the tool.

The future is in Park City: highlights from Sundance’s New Frontier

By Kristine Pregot

The future is here, and I caught a glimpse of it while wearing VR glasses at the New Frontier. This is Sundance’s hottest place on the mountain. The Frontier is a who’s who of VR tech, design and storytelling.

These VR products aren’t exactly ready for household consumption yet, but the New Frontier has become a spot for developers to show off their latest and greatest in this ever-growing arena.

On the 2nd and 3rd floors of the Frontier’s dark hallway, you’ll find Oculus Rift and HTC Vive stations lining the studio walls, along with masked viewers on comfy couches reaching for nothing, sitting side by side but each in their own dimension of (virtual) reality.

A very impressive exhibit was Holo-Cinema, a new technology being developed by Walt Disney Co.’s Lucasfilm to expand the Star Wars universe to your very own home. Users, wearing augmented glasses, journey through the Jakku desert and walk around a 3D C3PO while he paces and complains around you, like a hologram. If you were to walk into the room without the glasses, you would see an unfocused projection against the wall and under your feet.

Music meets storytelling was a big trend in the lab as well, with the Kendrick Lamar-scored installation Double Conscience from artist Kahlil Joseph featuring scenes from the inner city of LA rhythmically projected onto two walls and set to Kendrick’s new album.

Another fun and interactive piece that blended music with new technology was 3 Dreams of Black, a film by Chris Milk, with music from the album “Rome” by Danger Mouse and Daniele Luppi, featuring Norah Jones. Check it out here.

While Sundance is one of the top festivals for filmmakers, I’m impressed with the breadth of new storytelling tools and technology that were on display. I look forward to seeing how the programmers further integrate this type of experience in the years to come.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


The sound of VR at Sundance and Slamdance

By Luke Allen

If last year’s annual Park City film and cultural meet-up was where VR filmmaking first dipped its toes in the proverbial water, count 2016’s edition as its full-on coming-out party. With over 30 VR pieces as official selections at Sundance’s New Frontier sub-festival, and even more content debuting at Slamdance and elsewhere, festivalgoers this year can barely take two steps down Main Street without being reminded of the format’s ubiquitous presence.

When I first stepped onto the main demonstration floor of New Frontier (which could be described this year as a de-facto VR mini-festival), the first thing that struck me was, why was it so loud in there? I admit I’m biased since I’m a sound designer with a couple of VR films being exhibited around town, but I am definitely backed up by a consensus among content creators regarding sound’s importance to creating the immersive environment central to VR’s promise as a format (I know, please forgive the buzzwords). In seemingly direct defiance of this principle, Sundance’s two main public exhibition areas for all the latest and greatest content were inundated with the rhythmic bass lines of booming electronic music and noisy crowds.

I suppose you can’t blame the programmers for some of this — the crowds were unavoidable — but I can’t help contrasting the New Frontier experience with the way Slamdance handled its more limited VR offering. Both festivals required visitors to sign up for a viewing time, but while the majority of Sundance’s screenings involved strapping on a headset while seated on a crowded bench in the middle of the demonstration floor, Slamdance reserved a quiet room for the screening experience. Visitors were advised to keep their voices to a murmur while in the viewing chamber, and the screenings took place in an isolated corner seated on — crucially — a chair with full range of motion.

Why is this important? Consider the nature of VR: the viewer has the freedom to look around the environment at their own discretion, and the best content creators make full use of the 360 degrees at their disposal to craft the experience. A well-designed VR piece will use directional sound mixing to cue the viewer to look in different directions in order to further the story. It will also incorporate deep soundscapes that shift as one looks around the environment in order to immerse the viewer. Full range of motion, including horizontal rotation, is critical to allowing this exploration to take place.
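For a sense of how that directional shifting can work under the hood, many VR audio pipelines deliver first-order Ambisonics (B-format) and counter-rotate the sound field by the listener’s head yaw before decoding. The sketch below is a generic illustration of that rotation; channel ordering and sign conventions vary between tools, and this is not the specific pipeline used on the films discussed here.

```python
# Sketch: rotating a first-order Ambisonic (B-format) sound field to follow
# listener head yaw, so sources stay fixed in the world as the head turns.
# Generic illustration; channel ordering and sign conventions vary by tool.
import numpy as np

def rotate_bformat_yaw(w, x, y, z, head_yaw_rad):
    """Counter-rotate the sound field by the listener's yaw before decoding."""
    theta = -head_yaw_rad                     # rotate the field opposite the head
    c, s = np.cos(theta), np.sin(theta)
    x_rot = c * x - s * y
    y_rot = s * x + c * y
    return w, x_rot, y_rot, z                 # W (omni) and Z (height) are unchanged

# Example: one second of a source straight ahead; the listener turns 90 degrees.
sr = 48000
t = np.arange(sr) / sr
mono = np.sin(2 * np.pi * 440 * t)
w, x, y, z = mono * 0.707, mono, np.zeros_like(mono), np.zeros_like(mono)
w2, x2, y2, z2 = rotate_bformat_yaw(w, x, y, z, np.pi / 2)
# After rotation the energy moves from the X (front) channel into Y (side),
# which is what a decoder needs to keep the source planted in space.
```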

The Visitor, which I had the pleasure of experiencing in Slamdance’s VR sanctuary, put this concept to use nicely by placing the two lead characters 90 degrees apart from one another, forcing the viewer to look around the beautifully-staged set in order to follow the story. Director James Kaelan and the post sound team at WEVR used subtly shifting backgrounds and eerie footsteps to put the viewer right in the middle of their abstract world.

Sundance’s New Frontier VR Bar.

Resonance, an experience directed by Jessica Brillhart that I sound designed and engineered, features violinist Tim Fain performing in a variety of different locations, mostly abandoned, selected both for their visual beauty and their unique sonic character. We used an Ambisonic microphone on set in order to capture the full range of acoustic reflections and, with a lot of love in the mix room at Silver Sound, were able to recreate these incredible sonic landscapes while enhancing the directionality of Fain’s playing in order to help the viewer follow him through the piece (Unfortunately, when Resonance was screening at Sundance’s New Frontier VR Bar, there was a loudspeaker playing Top 40 hits located about three feet above the viewer’s head).

In both of these live-action VR films, sound and picture serve to enhance and guide the experience of the other, much like in traditional cinema, but in a new and more enchanting way. I have had many conversations with other festival attendees here in Park City in which we recall shared VR experiences much like shared dreams, so personal and haunting is this format. We can only hope that in future exhibitions more attention is paid to ensure that viewers have the quiet they need to fully experience the artists’ work.

Luke Allen is a sound designer at Silver Sound Studios in New York City. You can reach him at luke@silversound.us

MPC Creative provides film, VR project for Faraday at CES 2016

VR was everywhere at CES earlier this month, and LA’s MPC played a role. Their content production arm, MPC Creative, produced a film and VR experience for CES 2016, highlighting Faraday Future’s technology platform and providing glimpses of the innovations consumers can expect from their product. The specific innovation shown in the CES VR film was a concept car — the FFZERO1 high-performance electric dream car — and the inspiration behind Faraday Future’s consumer-based cars.

“We wanted it to feel elemental. Faraday Future is a sophisticated brand that aims for a seamless connection between technology and transportation,” explains MPC Creative CD Dan Marsh, who also directed the film. “We tried to make the film personal, but natural in the landscape. The car is engineered for the racetrack, but beautiful, in the environmental showcase.”


To make the film, MPC Creative shot a stand-in vehicle to achieve realistic performance driving and camera work. “We filmed in Malibu and a performance racetrack over two days, then married those locations together with some matte painting and CG to create a unique place that feels like an aspirational Nürburgring of sorts. We match-moved/tracked the real car that was filmed and replaced it with our CG replica of the Faraday Future racecar to get realistic performance driving. Interior shots were filmed on stage. We chose to bridge those stage shots with a slightly stylized appearance so that we could tie it all back together with a full CG demo sequence at the end of the film.”

MPC Creative also produced a Faraday Future VR experience that features the FFZERO1 driving through a series of abstract environments. The experience feels architectural and sculptural, ultimately offering a spiritual rather than a visceral journey. Using Samsung’s Gear VR, CES attendees sat in a position similar to the angled seating of the car for their 360-degree tour.

MPC Creative shot the pursuit vehicle with an Arri Alexa and used a Red Dragon for drone and VFX support. “We also mounted a Red with a 4.5mm lens, pointed upwards, on a follow vehicle, which allowed us to capture a mobile spherical environment that we used to map moving reflections of the environment back onto the CG car,” explains MPC Creative executive producer Mike Wigart.
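The technique Wigart describes is essentially image-based reflection mapping: each frame from the upward-facing camera becomes an equirectangular (latlong) environment image, which is then sampled along reflection vectors on the CG car. A minimal sketch of that lookup, assuming a hypothetical env array that has already been stitched into a latlong frame:

```python
import numpy as np

def sample_equirect(env, direction):
    """Look up a color in an equirectangular (latlong) environment image
    for a given 3D direction. env is an HxWx3 array; direction is (x, y, z), z up."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    u = np.arctan2(d[1], d[0]) / (2 * np.pi) + 0.5    # longitude -> 0..1
    v = np.arccos(np.clip(d[2], -1.0, 1.0)) / np.pi   # latitude  -> 0..1
    h, w, _ = env.shape
    return env[min(int(v * h), h - 1), min(int(u * w), w - 1)]

def reflect(view_dir, normal):
    """Mirror the view direction about the surface normal (both unit vectors)."""
    v = np.asarray(view_dir, dtype=float)
    n = np.asarray(normal, dtype=float)
    return v - 2.0 * np.dot(v, n) * n

# For each shaded point on the CG body: r = reflect(view_dir, surface_normal),
# then color = sample_equirect(latlong_frame, r), using the latlong frame that
# matches the shot's timecode so the reflections move with the plate.
```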

How did working on the film differ from working on the VR project? “The VR project was very different from the film in the sense that it was CG rendered,” says Wigart. “We initially considered the idea of doing a live-action VR piece, but we started to see several in-car live-action VR projects out in the world, so we decided to do something we hadn’t seen before — an aesthetically driven VR piece with design-based environments. We wanted a VR experience that was visually rich while speaking to the aspirational nature of Faraday Future.”


Adds Marsh, “Faraday Future wanted to put viewers in the driver’s seat but, more than that, they wanted to create a compelling experience that points to some of the new ideas they are focusing on. We’ve seen and made a lot of car driving experiences, but without a compelling narrative the piece can be in danger of being VR for the sake of it. We made something for Faraday Future that you couldn’t see otherwise. We conceived an architectural framework for the experience. Participants travel through a racetrack of sorts, but each stage takes you through a unique space. But we’re also traveling fast, so, like the film, we’re teasing the possibilities.”

Tools used by MPC Creative included Autodesk Maya, Side Effects Houdini, V-Ray by Chaos Group, The Foundry’s Nuke and Nuke Studio and Tweak’s RV.

Sundance 2016: My festival to-do list

By Kristine Pregot

As a first-time Sundancer, I don’t have many expectations to manage. I am simply thrilled at the opportunity to watch some great films and spend time with friends out west, but I do have a few things that are certainly high on my agenda for the week.


LoveSong, directed by So Yong Kim.

1. Promote our film in the festival — LoveSong
I am very excited for the premiere of LoveSong (competing in the dramatic competition). It was a pleasure to work with the film’s director, So Yong Kim. Nice Shoes’ Sal Malfitano graded the film in Baselight, working very closely with So and the film’s two DPs to create a natural and wonderful tone for the film through color. The movie was also edited by So, and she established a beautiful rhythm in the cut. The acting is just so natural — the characters and performances truly stay with you. I can’t wait to hear the reactions from festival-goers.

2. Check Out Sundance’s Brand’s Digital Storytelling Conference
This year, advertising agencies will have a chance to shine and compete in the festival! I am proud to admit that I am an “ad nerd.” I have a fascination with advertising and how brands reach their audiences. In our digital age, commercials are clearly not what they used to be and have expanded with the potential of new technologies.

Sundance has grown into one of the most important gatherings for independent storytelling, and the festival attracts creative thought leaders from around the world. Increasingly, brands and agencies are partnering with storytellers and journalists to create engaging content, so the opportunity to screen and network with the most talented storytellers sounds like a lot of fun. I really admire what brands are doing with short-form storytelling and am thrilled to see this competition at the festival.

3. Experience the New Frontier (in the Wild West)
The New Frontier exhibit at Sundance is now in its 10th year! I have heard from festival-goers in the past that this is where cutting-edge technology is experienced and tested by creative and thought leaders. The New Frontier showcases cinematic works and virtual reality installations, including an extensive line-up of documentary and narrative mobile VR experiences.

I can’t wait to explore the future of our industry and have a sneak peek at what is being developed by these media research labs.

4. Keeping My Options Open
I am the type of festival-goer who keeps my options open. Yes, there are films I want to see and old friends I will connect with, but there is a magic that happens at festivals when you catch wind of a hot buzz and discover something unexpected.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


Duck Grossberg joins Local Hero as CTO, will grow dailies, VR biz

Santa Monica-based Local Hero, a boutique post facility working on feature and independent films, has hired Duck Grossberg as chief technology officer.

Grossberg, who was most recently at Modern Videofilm, will drive the overall technology vision for Local Hero, as well as expand the dailies part of the studio’s end-to-end workflow services. In addition, Grossberg’s significant virtual reality production and DI experience will also help fuel Local Hero’s rapidly growing VR business.

Grossberg has held a variety of technical roles over the past 15 years, working with facilities such as The Creative Cartel, Deluxe Labs, The Post Group, Modern, Cameron/Pace, Tyler Perry Studios and 20th Century Fox.

As a DIT, digital lab supervisor and colorist (dailies and on-set), Grossberg’s credits include Real Steel, Life of Pi and Dawn of the Planet of the Apes, as well as TV shows such as Dig, Tyrant and Sleepy Hollow.

“Local Hero experienced exponential growth in our core dailies, DI, VFX and finishing business in 2015,” says Leandro Marini, founder/president of Local Hero. “We also saw rapid growth in our VR dailies and finishing business, delivering nearly 20 projects for clients such as Fox, Jaunt Studios and the NFL. The addition of Duck is a crucial component to our expansion at Local Hero. The combination of his technical prowess, creative skills and client experience make him uniquely positioned to help drive our aggressive growth.”

Quick Chat: GoPro EP/showrunner Bill McCullough

By Randi Altman

The first time I met Bill McCullough was on a small set in Port Washington, New York, about 20 years ago. He was directing NewSport Talk With Chet Coppock, who was a popular sports radio guy from Chicago.

When our paths crossed again, Bill — who had made some other stops along the way — was owner of the multiple Emmy Award-winning Wonderland Productions in New York City. He remained there for 11 years before heading over to HBO Sports as VP of creative and operations. Bill’s drive didn’t stop there. Recently, he completed a move to the West Coast, joining GoPro as executive producer of team sports and motor sports.

Let’s find out more:

You were most recently at HBO Sports in New York. Why the jump to GoPro, and why was this the right time?
I was fortunate enough to have a great and long career with HBO, a company that has set the standard for quality storytelling, but when I had the opportunity to join the GoPro team I could not pass it up.

GoPro has literally changed the way we capture and share content. With its unique perspective and immersive style, the capture device has given filmmakers the ability to tell stories and capture visuals that have never existed before. The size of the device makes it virtually invisible to the subject and creates an atmosphere that is much more organic and authentic. GoPro is also a leader in VR capture, and we’re excited for 2016.

What will you be doing in your new role? What will it entail?
I am an executive producer in the entertainment division. I will be responsible for creating, developing and producing content for all platforms.

What do you hope to accomplish in this new role?
I am excited for my new role because I have the opportunity to make films from a completely new perspective. GoPro has done an amazing job capturing and telling stories. My goal is to raise the bar and grow the brand even more.

You have a background in post and production. Will this new job incorporate both?
Yes. I will oversee the creative and production process from concept to completion for my projects.

SuperSphere and Fox team on ‘Scream Queens’ VR videos

Fox Television decided to help fans of its Scream Queens horror/comedy series visit the show’s set in a way that wasn’t previously possible, thanks to eight new virtual reality short videos. For those of you who haven’t seen the show, Scream Queens focuses on a series of murders tied to a sorority and is set at a fictional college in New Orleans. The VR videos have been produced for the Samsung Milk VR, YouTube 360° and Facebook platforms and are rolling out in the coming weeks.

Fox called on SuperSphere Productions — a consultancy that helps with virtual reality project execution and delivery — to bring their VR concepts to life. SuperSphere founder Lucas Wilson worked closely with Fox creative and marketing executives to develop the production, post and delivery process using the talent, tools and equipment already in place for Scream Queens.

“It was the first VR shoot for a major episodic that proved the ability to replicate a realistic production formula, because so much of VR is very ‘science project-y’ right now,” explains industry vet Wilson, who many of you might know from his work with Assimilate and his own company Revelens.


Lucas Wilson

Wilson reports that this project had a reasonable budget and a small crew. “This allowed us to work together to produce and deliver a wide series of experiences for the show that — judging by reaction on Facebook — are pretty successful. As of late November, the Closet Set Tour (extended) has over 660,000 views, over 31,000 likes and over 10,500 shares. In product terms, that price/performance ratio is pretty damn impressive.”

The VR content, captured over a two-day shoot on the show’s set in New Orleans, was directed by Jessica Sanders and shot by the 360Heros team. “The fact that a woman directed these videos is relevant, and it’s intentional. For virtual reality to take root and grow in every corner of the globe, it must become clear very quickly that VR is for everyone,” says Wilson. “So in addition to creating compelling content, it is critical for that content to be produced and influenced by talented people who bring a wide range of perspectives and experiences. Hiring smart, ambitious women like Jessica as directors and DPs is a no-brainer. SuperSphere’s mission is to open up a whole new kind of immersive, enriching experience to everyone on the planet. To reach everyone, you have to include everyone… from the beginning.”

In terms of post, editorial and the 5.1 sound mix were handled by Fox’s internal team. SuperSphere did the conform and finish on Assimilate Scratch VR, and Local Hero did the VR grading, also on Scratch VR. “The way we worked with Local Hero was actually kinda cool,” explains Wilson. “Most of the pieces are very single-location with highly controlled lighting. We sent them representative still frames, and they graded the stills and sent back a Scratch preset, which we used to then render and conform/output.” SuperSphere then output the three different VR deliverables: Facebook, Milk VR and YouTube.
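As a rough sketch of that last delivery step, the snippet below transcodes one finished equirectangular master into per-platform files with ffmpeg. The resolutions and bitrates are illustrative assumptions rather than Fox’s or SuperSphere’s actual specs, and the 360-degree metadata that players need is injected in a separate step not shown here:

```python
import subprocess

# Hypothetical encode settings; real platform specs differ and change over time.
DELIVERABLES = {
    "facebook": {"size": "3840:1920", "vbitrate": "30M"},
    "milk_vr":  {"size": "3840:1920", "vbitrate": "40M"},
    "youtube":  {"size": "4096:2048", "vbitrate": "45M"},
}

def encode_deliverables(master_path: str):
    """Transcode one finished 2:1 equirectangular master into per-platform MP4s."""
    for name, spec in DELIVERABLES.items():
        subprocess.run([
            "ffmpeg", "-y", "-i", master_path,
            "-vf", "scale=" + spec["size"],
            "-c:v", "libx264", "-b:v", spec["vbitrate"],
            "-pix_fmt", "yuv420p",
            "-c:a", "aac", "-b:a", "192k",
            name + ".mp4",
        ], check=True)

# encode_deliverables("closet_tour_master.mov")
```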

Two videos have already launched — the first includes the behind-the-scenes tour, mentioned earlier, of the set and closet of Chanel Oberlin (Emma Roberts), created by Scream Queens production designer Andrew Murdock. The second shows a screaming match between Scream Queens’ Zayday Williams (Keke Palmer) and Grace Gardner (Skyler Samuels).

Following the Emmy-winning Comic-Con VR experience for its drama Sleepy Hollow last year, these Scream Queens videos mark the first of an ongoing Fox VR and augmented reality initiative for its shows.

“The intelligent way that Fox went about it, and how SuperSphere and Fox worked together to very specifically create a formula for replication and success, is in my opinion a model for how episodic television can leverage VR into an overall experience,” concludes Wilson.

IKinema at SIGGRAPH with tech preview of natural language interface

IKinema, a provider of realtime animation software for motion capture, games and virtual reality using inverse kinematics, has launched a new natural language interface designed to enable users to produce animation using descriptive commands based on everyday language. The technology, code-named Intimate, is currently in prototype as part of a two-year project backed by the UK government’s Innovate UK program.

The new interface complements virtual and augmented reality technology such as Magic Leap and Microsoft HoloLens, offering new methods for creating animation that are suitable for professionals yet simple enough for a mass audience. The user can bring in a character and then animate it from an extensive library of cloud animation, simply by describing what the character is supposed to do.
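To picture the interaction model, here is a deliberately toy sketch of matching a descriptive command against a tagged clip library. IKinema has not published how Intimate parses language, so the library, tags and matching below are entirely hypothetical:

```python
# A toy illustration of the interaction model; not IKinema's implementation.
# A hypothetical cloud library maps tags to animation clip IDs, and a command
# is matched against those tags to pick clips to apply to the character.

CLIP_LIBRARY = {
    "walk": "clip_walk_cycle_01",
    "run":  "clip_run_cycle_02",
    "jump": "clip_jump_standing_01",
    "wave": "clip_wave_right_hand_01",
    "sit":  "clip_sit_down_chair_01",
}

def clips_for_command(command: str) -> list[str]:
    """Return library clips whose tag appears in the descriptive command."""
    words = command.lower().split()
    return [clip for tag, clip in CLIP_LIBRARY.items() if tag in words]

print(clips_for_command("make the character run to the door and wave"))
# ['clip_run_cycle_02', 'clip_wave_right_hand_01']
```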

Intimate is targeted at many applications, including pre-production, games, virtual production, virtual and augmented reality and more. The technology is expected to become commercially available in 2016, and the aim is to make an SDK available for any animation package. Currently, the company has a working prototype and has engaged with top studios for technology validation and development.